id | url | title | text | topic | section | sublist
|---|---|---|---|---|---|---|
773292 | https://en.wikipedia.org/wiki/Measurement%20problem | Measurement problem | In quantum mechanics, the measurement problem is the problem of definite outcomes: quantum systems have superpositions but quantum measurements only give one definite result.
The wave function in quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition of different states. However, actual measurements always find the physical system in a definite state. Any future evolution of the wave function is based on the state the system was discovered to be in when the measurement was made, meaning that the measurement "did something" to the system that is not obviously a consequence of Schrödinger evolution. The measurement problem is to describe what that "something" is: how a superposition of many possible values becomes a single measured value.
To express matters differently (paraphrasing Steven Weinberg), the Schrödinger equation determines the wave function at any later time. If observers and their measuring apparatus are themselves described by a deterministic wave function, why can we not predict precise results for measurements, but only probabilities? As a general question: How can one establish a correspondence between quantum reality and classical reality?
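As a concrete, if toy, illustration of this tension, the following Python sketch evolves a two-level state deterministically under the Schrödinger equation and then computes Born-rule probabilities for the two possible outcomes. The Hamiltonian, evolution time, and units are arbitrary assumptions chosen for illustration, not anything from the article.

```python
# A minimal sketch (not from the article): a two-level system evolving
# deterministically under the Schrodinger equation, with outcome probabilities
# given by the Born rule.  The Hamiltonian, time, and units are arbitrary choices.
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                   # natural units
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                   # toy Hamiltonian (a sigma_x coupling)
psi0 = np.array([1.0, 0.0], dtype=complex)   # system prepared definitely in |0>

t = 0.4
U = expm(-1j * H * t / hbar)                 # unitary, deterministic time evolution
psi_t = U @ psi0                             # the wave function at time t is fixed...

probs = np.abs(psi_t) ** 2                   # ...yet measurement outcomes are only probabilistic
print("amplitudes:", psi_t)
print("P(0), P(1):", probs, "  sum =", probs.sum())
```

The evolution itself is entirely deterministic; only the final readout is probabilistic, which is exactly the gap the measurement problem names.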
Schrödinger's cat
A thought experiment called Schrödinger's cat illustrates the measurement problem. A mechanism is arranged to kill a cat if a quantum event, such as the decay of a radioactive atom, occurs.
The mechanism and the cat are enclosed in a chamber so the fate of the cat is unknown until the chamber is opened. Prior to observation, according to quantum mechanics, the atom is in a quantum superposition, a linear combination of decayed and intact states. Also according to quantum mechanics, the atom-mechanism-cat composite system is described by superpositions of compound states. Therefore, the cat would be described as in a superposition, a linear combination of two states an "intact atom-alive cat" and a "decayed atom-dead cat". However, when the chamber is opened the cat is either alive or it is dead: there is no superposition observed. After the measurement the cat is definitively alive or dead.
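A minimal numerical sketch of such a compound superposition, under the simplifying assumption that the atom and the cat are each two-state systems (the mechanism is omitted): the composite state is built with a Kronecker product, and the Born rule assigns probability 1/2 to each branch.

```python
# Illustrative sketch only: two-state "atom" and two-state "cat" (the mechanism
# is omitted for brevity).  The composite state is an entangled superposition.
import numpy as np

intact, decayed = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# (|intact atom, alive cat> + |decayed atom, dead cat>) / sqrt(2)
psi = (np.kron(intact, alive) + np.kron(decayed, dead)) / np.sqrt(2)

labels = ["intact & alive", "intact & dead", "decayed & alive", "decayed & dead"]
for label, p in zip(labels, np.abs(psi) ** 2):   # Born-rule probabilities
    print(f"{label}: {p:.2f}")
```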
The cat scenario illustrates the measurement problem: how can an indefinite superposition yield a single definite outcome? It also illustrates other issues in quantum measurement, including when does a measurement occur? Was it when the cat was observed? How is a measurement apparatus defined? The mechanism for detecting radioactive decay? The cat? The chamber? What is the role of the observer?
Interpretations
The views often grouped together as the Copenhagen interpretation are the oldest and, collectively, probably still the most widely held attitude about quantum mechanics. N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced.
Generally, views in the Copenhagen tradition posit something in the act of observation which results in the collapse of the wave function. This concept, though often attributed to Niels Bohr, was due to Werner Heisenberg, whose later writings obscured many disagreements he and Bohr had during their collaboration and that the two never resolved. In these schools of thought, wave functions may be regarded as statistical information about a quantum system, and wave function collapse is the updating of that information in response to new data. Exactly how to understand this process remains a topic of dispute.
Bohr discussed his views in a 1947 letter to Pauli. Bohr points out that measurement processes, such as cloud chambers or photographic plates, involve enormous amplification, requiring energies far in excess of the quantum effects being studied, and he notes that these processes are irreversible. He considered a consistent account of this issue to be an unsolved problem.
Hugh Everett's many-worlds interpretation attempts to solve the problem by suggesting that there is only one wave function, the superposition of the entire universe, and it never collapses—so there is no measurement problem. Instead, the act of measurement is simply an interaction between quantum entities, e.g. observer, measuring instrument, electron/positron etc., which entangle to form a single larger entity, for instance living cat/happy scientist. Everett also attempted to demonstrate how the probabilistic nature of quantum mechanics would appear in measurements, a work later extended by Bryce DeWitt. However, proponents of the Everettian program have not yet reached a consensus regarding the correct way to justify the use of the Born rule to calculate probabilities.
The de Broglie–Bohm theory tries to solve the measurement problem very differently: the information describing the system contains not only the wave function, but also supplementary data (a trajectory) giving the position of the particle(s). The role of the wave function is to generate the velocity field for the particles. These velocities are such that the probability distribution for the particle remains consistent with the predictions of the orthodox quantum mechanics. According to the de Broglie–Bohm theory, interaction with the environment during a measurement procedure separates the wave packets in configuration space, which is where apparent wave function collapse comes from, even though there is no actual collapse.
A fourth approach is given by objective-collapse models. In such models, the Schrödinger equation is modified and obtains nonlinear terms. These nonlinear modifications are of stochastic nature and lead to behaviour that for microscopic quantum objects, e.g. electrons or atoms, is unmeasurably close to that given by the usual Schrödinger equation. For macroscopic objects, however, the nonlinear modification becomes important and induces the collapse of the wave function. Objective-collapse models are effective theories. The stochastic modification is thought to stem from some external non-quantum field, but the nature of this field is unknown. One possible candidate is the gravitational interaction as in the models of Diósi and Penrose. The main difference of objective-collapse models compared to the other approaches is that they make falsifiable predictions that differ from standard quantum mechanics. Experiments are already getting close to the parameter regime where these predictions can be tested.
The Ghirardi–Rimini–Weber (GRW) theory proposes that wave function collapse happens spontaneously as part of the dynamics. Particles have a non-zero probability of undergoing a "hit", or spontaneous collapse of the wave function, on the order of once every hundred million years. Though collapse is extremely rare, the sheer number of particles in a measurement system means that the probability of a collapse occurring somewhere in the system is high. Since the entire measurement system is entangled (by quantum entanglement), the collapse of a single particle initiates the collapse of the entire measurement apparatus. Because the GRW theory makes different predictions from orthodox quantum mechanics in some conditions, it is not an interpretation of quantum mechanics in a strict sense.
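A rough arithmetic sketch of that argument, taking the per-particle rate directly from the "once every hundred million years" figure above and assuming an apparatus of about 10^23 particles (an order-of-magnitude guess, not a number from the article):

```python
# GRW back-of-the-envelope: rare single-particle "hits" still collapse a
# macroscopic, entangled apparatus almost immediately.
SECONDS_PER_YEAR = 3.15e7

rate_per_particle = 1.0 / (1e8 * SECONDS_PER_YEAR)   # ~3e-16 hits per second
n_particles = 1e23                                    # assumed size of the apparatus

expected_wait = 1.0 / (rate_per_particle * n_particles)
print(f"per-particle hit rate:      {rate_per_particle:.1e} per second")
print(f"first hit in the apparatus: {expected_wait:.1e} seconds on average")
```

With these assumed numbers the first hit arrives in a small fraction of a second, which is why the entangled apparatus as a whole collapses effectively at once.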
Role of decoherence
Erich Joos and Heinz-Dieter Zeh claim that the phenomenon of quantum decoherence, which was put on firm ground in the 1980s, resolves the problem. The idea is that the environment causes the classical appearance of macroscopic objects. Zeh further claims that decoherence makes it possible to identify the fuzzy boundary between the quantum microworld and the world where the classical intuition is applicable. Quantum decoherence becomes an important part of some modern updates of the Copenhagen interpretation based on consistent histories. Quantum decoherence does not describe the actual collapse of the wave function, but it explains the conversion of the quantum probabilities (that exhibit interference effects) to the ordinary classical probabilities. See, for example, Zurek, Zeh and Schlosshauer.
The present situation is slowly clarifying, described in a 2006 article by Schlosshauer as follows:
| Physical sciences | Quantum mechanics | Physics |
774640 | https://en.wikipedia.org/wiki/Muntjac | Muntjac | Muntjacs ( ), also known as the barking deer or rib-faced deer, are small deer of the genus Muntiacus native to South Asia and Southeast Asia. Muntjacs are thought to have begun appearing 15–35 million years ago, with remains found in Miocene deposits in France, Germany and Poland. Most are listed as least-concern species or Data Deficient by the International Union for Conservation of Nature (IUCN), although others such as the black muntjac, Bornean yellow muntjac, and giant muntjac are vulnerable, near threatened, and critically endangered, respectively.
Name
The present name is a borrowing of the Latinized form of the Dutch name, which was borrowed from the Sundanese mencek. The Latin form first appeared in Zimmermann's work of 1780. The alternative name "Mastreani deer" originates in a mischievous Wikipedia entry from 2011 and is incorrect.
Distribution
The present-day species are native to Asia and can be found in India, Sri Lanka, Myanmar, Vietnam, the Indonesian islands, Taiwan and Southern China. Their habitat includes areas of dense vegetation, rainforests, monsoon forests and they like to be close to a water source. They are also found in the lower Himalayas (Terai regions of Nepal and Bhutan).
An invasive population of Reeves's muntjac exists in the United Kingdom and in some areas of Japan. In the United Kingdom, wild muntjac descended from escapees from the Woburn Abbey estate around 1925. Muntjac have expanded rapidly, and are present in most English counties and also in Wales, although they are less common in the north-west. The British Deer Society in 2007 found that muntjac deer had noticeably expanded their range in the UK since 2000. Specimens appeared in Northern Ireland in 2009, and in the Republic of Ireland in 2010.
Inhabiting tropical regions, the deer have no seasonal rut, and mating can take place at any time of year; this behaviour is retained by populations introduced to temperate countries.
Description
Tusks
Males have short antlers, which can regrow, but they tend to fight for territory with their "tusks" (downward-pointing canine teeth). The presence of these "tusks" is otherwise unknown in native British wild deer and can be an identifying feature to differentiate a muntjac from an immature native deer. Water deer also have visible tusks but they are much less widespread.
Although these tusks resemble those of both water deer and the musk deer, the muntjac is not closely related to either of these (and they are not closely related to each other). The tusks are of a quite different shape in each.
Glands
Muntjacs possess various scent glands that have crucial functions in communication and territorial marking. They use their facial glands primarily to mark the ground and occasionally other individuals, and the glands are opened during defecation and urination, as well as sometimes during social displays. While the frontal glands are typically opened involuntarily as a result of facial muscle contractions, the preorbital glands near the eyes can be voluntarily opened much wider and even everted to push out the underlying glandular tissue. Even young fawns are capable of fully everting their preorbital glands.
Genetics
Muntjac are of great interest in evolutionary studies because of their dramatic chromosome variations and the recent discovery of several new species. The Southern red muntjac (M. muntjak) is the mammal with the lowest recorded chromosome number: The male has a diploid number of 7, the female only 6 chromosomes. Reeves's muntjac (M. reevesi), in comparison, has a diploid number of 46 chromosomes.
Species
The genus Muntiacus has 14 recognized species:
Bornean yellow muntjac, Muntiacus atherodes
Hairy-fronted muntjac or black muntjac, Muntiacus crinifrons
Fea's muntjac, Muntiacus feae
Gongshan muntjac, Muntiacus gongshanensis
Malabar red muntjak, Muntiacus malabaricus
Sumatran muntjac, Muntiacus montanus
Southern red muntjac, Muntiacus muntjak
Leaf muntjac, Muntiacus putaoensis
Pu Hoat muntjac, Muntiacus puhoatensis
Reeves's muntjac or Chinese muntjac, Muntiacus reevesi
Roosevelt's muntjac, Muntiacus rooseveltorum
Truong Son muntjac, Muntiacus truongsonensis
Northern red muntjac, Muntiacus vaginalis
Giant muntjac, Muntiacus vuquangensis
| Biology and health sciences | Deer | Animals |
776591 | https://en.wikipedia.org/wiki/Cumulus%20humilis%20cloud | Cumulus humilis cloud | Cumulus humilis are cumuliform clouds with little vertical extent, common in the summer, that are often referred to as "fair weather cumulus". If they develop into cumulus mediocris or cumulus congestus, thunderstorms could form later in the day.
They generally form at lower altitudes (500–3000 m (1,500–10,000 ft)), but in hot countries or over mountainous terrain these clouds can occur at an altitude of up to . They show no significant vertical development, indicating that the temperature in the atmosphere above them either drops off very slowly or not at all with altitude; that is, the environmental lapse rate is small or negative. Cumulus humilis clouds often have little variance in their depths due to their constrained vertical development. Cumulus humilis may be accompanied by other cloud types.
Air below the cloud base can be quite turbulent due to the thermals that formed the clouds, giving occupants of light aircraft an uncomfortable ride. To avoid turbulence where such clouds are present, pilots may climb above the cloud tops. However, glider pilots actively seek out the rising air to gain altitude.
These clouds may later metamorphose into cumulus mediocris and eventually cumulus congestus clouds when convection is intense enough, though cumulus humilis themselves usually indicate fair weather.
Forecasting
Morning cumulus humilis clouds are signs of an unstable atmosphere. Larger clouds or possibly thunderstorms could form throughout the day to cause bad or severe weather in the afternoon or evening. Cumulus humilis clouds are not rain clouds but could precede a storm.
Cumulus humilis are sometimes seen beneath cirrostratus clouds, which block some of the heat from the sun and thus create an inversion, causing any cumuliform clouds to flatten and become cumulus humilis. In this case, a warm front could be approaching and rain is possible for the next 12 to 24 hours.
When cumulus humilis appear in a clear sky, they are an indicator of pleasant weather for the next several hours.
Formation
Cumulus humilis clouds are formed by rising warm air or thermals with ascending air currents of 2–5 m/s (7–17 ft/s). These clouds are usually very small convective clouds and usually form after a thermal reaches the condensation level. They can develop into cumulus mediocris clouds but most often dissipate a few minutes after formation.
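A back-of-the-envelope sketch of where that condensation level sits, using the common rule of thumb that the cloud base rises roughly 125 m for every degree Celsius of spread between surface temperature and dew point; the surface values below are assumed for illustration.

```python
# Rough estimate of the cumulus humilis cloud base (lifting condensation level)
# from the ~125 m per degree C rule of thumb.  Input values are assumed.
def cloud_base_m(temp_c: float, dew_point_c: float) -> float:
    """Approximate height of the lifting condensation level above ground, in metres."""
    return 125.0 * (temp_c - dew_point_c)

surface_temp = 26.0    # degrees C, assumed
dew_point = 16.0       # degrees C, assumed
print(f"estimated cloud base: {cloud_base_m(surface_temp, dew_point):.0f} m")
# a 10 degree spread gives roughly 1250 m, within the altitude range quoted above
```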
| Physical sciences | Clouds | Earth science |
776713 | https://en.wikipedia.org/wiki/Unified%20field%20theory | Unified field theory | In physics, a unified field theory (UFT) is a type of field theory that allows all fundamental forces and elementary particles to be written in terms of a single type of field. According to modern discoveries in physics, forces are not transmitted directly between interacting objects but instead are described and interpreted by intermediary entities called fields. Furthermore, according to quantum field theory, particles are themselves the quanta of fields. Examples of different fields in physics include vector fields such as the electromagnetic field, spinor fields whose quanta are fermionic particles such as electrons, and tensor fields such as the metric tensor field that describes the shape of spacetime and gives rise to gravitation in general relativity. Unified field theory attempts to organize these fields into a single mathematical structure.
For over a century, unified field theory has remained an open line of research. The term was coined by Albert Einstein, who attempted to unify his general theory of relativity with electromagnetism. Einstein attempted to create a classical unified field theory, rejecting quantum mechanics. Among other difficulties, this required a new explanation of particles as singularities or solitons instead of field quanta. Later attempts to unify general relativity with other forces incorporate quantum mechanics. The concepts of a "Theory of Everything" and a Grand Unified Theory are closely related to unified field theory, but differ by not requiring the basis of nature to be fields, and often by attempting to explain the physical constants of nature. Additionally, Grand Unified Theories do not attempt to include the gravitational force and can therefore operate entirely within quantum field theory.
The goal of a unified field theory has led to a great deal of progress in theoretical physics.
Introduction
Unified field theory attempts to give a single elegant description of the following fields:
Forces
All four of the known fundamental forces are mediated by fields. In the Standard Model of particle physics, three of these result from the exchange of gauge bosons. These are:
Strong interaction: the interaction responsible for holding quarks together to form hadrons, and holding neutrons and also protons together to form atomic nuclei. The exchange particle that mediates this force is the gluon.
Electromagnetic interaction: the familiar interaction that acts on electrically charged particles. The photon is the exchange particle for this force.
Weak interaction: a short-range interaction responsible for some forms of radioactivity, that acts on electrons, neutrinos, and quarks. It is mediated by the W and Z bosons.
General relativity likewise describes gravitation as the result of the metric tensor field, which describes the shape of spacetime:
Gravitational interaction: a long-range attractive interaction that acts on all particles. In hypothetical quantum versions of GR, the postulated exchange particle has been named the graviton.
Matter
In the Standard Model, the "matter" particles (electrons, quarks, neutrinos, etc) are described as the quanta of spinor fields. Gauge boson fields also have quanta, such as photons for the electromagnetic field.
Higgs
The Standard Model has a unique fundamental scalar field, the Higgs field, the quanta of which are called Higgs bosons.
History
Classic theory
The first successful classical unified field theory was developed by James Clerk Maxwell. In 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currents. Until then, electricity and magnetism had been thought of as unrelated phenomena. In 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic field. This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetism. By 1905, Albert Einstein had used the constancy of the speed-of-light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime. In 1915, he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional (4D) spacetime.
In the years following the creation of the general theory, a large number of physicists and mathematicians enthusiastically participated in the attempt to unify the then-known fundamental interactions. Given later developments in this domain, of particular interest are the theories of Hermann Weyl of 1919, who introduced the concept of an (electromagnetic) gauge field in a classical field theory and, two years later, that of Theodor Kaluza, who extended general relativity to five dimensions. Continuing in this latter direction, Oskar Klein proposed in 1926 that the fourth spatial dimension be curled up into a small, unobserved circle. In Kaluza–Klein theory, the gravitational curvature of the extra spatial direction behaves as an additional force similar to electromagnetism. These and other models of electromagnetism and gravity were pursued by Albert Einstein in his attempts at a classical unified field theory. By 1930 Einstein had already considered the Einstein–Maxwell–Dirac system [Dongen]. This system is (heuristically) the super-classical [Varadarajan] limit of (the not mathematically well-defined) quantum electrodynamics. One can extend this system to include the weak and strong nuclear forces to get the Einstein–Yang–Mills–Dirac system. The French physicist Marie-Antoinette Tonnelat published a paper in the early 1940s on the standard commutation relations for the quantized spin-2 field. She continued this work in collaboration with Erwin Schrödinger after World War II. In the 1960s Mendel Sachs proposed a generally covariant field theory that did not require recourse to renormalization or perturbation theory. In 1965, Tonnelat published a book on the state of research on unified field theories.
Modern progress
In 1963, American physicist Sheldon Glashow proposed that the weak nuclear force, electricity, and magnetism could arise from a partially unified electroweak theory. In 1967, Pakistani Abdus Salam and American Steven Weinberg independently revised Glashow's theory by having the masses for the W particle and Z particle arise through spontaneous symmetry breaking with the Higgs mechanism. This unified theory modelled the electroweak interaction as a force mediated by four particles: the photon for the electromagnetic aspect, a neutral Z particle, and two charged W particles for the weak aspect. As a result of the spontaneous symmetry breaking, the weak force becomes short-range and the W and Z bosons acquire masses of 80.4 and , respectively. Their theory was first given experimental support by the discovery of weak neutral currents in 1973. In 1983, the Z and W bosons were first produced at CERN by Carlo Rubbia's team. For their insights, Glashow, Salam, and Weinberg were awarded the Nobel Prize in Physics in 1979. Carlo Rubbia and Simon van der Meer received the Prize in 1984.
After Gerardus 't Hooft showed the Glashow–Weinberg–Salam electroweak interactions to be mathematically consistent, the electroweak theory became a template for further attempts at unifying forces. In 1974, Sheldon Glashow and Howard Georgi proposed unifying the strong and electroweak interactions into the Georgi–Glashow model, the first Grand Unified Theory, which would have observable effects for energies much above 100 GeV.
Since then there have been several proposals for Grand Unified Theories, e.g. the Pati–Salam model, although none is currently universally accepted. A major problem for experimental tests of such theories is the energy scale involved, which is well beyond the reach of current accelerators. Grand Unified Theories make predictions for the relative strengths of the strong, weak, and electromagnetic forces, and in 1991 LEP determined that supersymmetric theories have the correct ratio of couplings for a Georgi–Glashow Grand Unified Theory.
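The coupling-unification argument can be illustrated with a short sketch of textbook one-loop running of the three Standard Model gauge couplings. The beta coefficients and the inputs at the Z mass below are standard reference values quoted from memory, not figures taken from this article.

```python
# One-loop running of the three Standard Model gauge couplings (illustrative).
# alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2*pi) * ln(mu/M_Z)
import numpy as np

M_Z = 91.19                                     # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.45])     # 1/alpha_1, 1/alpha_2, 1/alpha_3 at M_Z (approx.)
b_SM = np.array([41 / 10, -19 / 6, -7.0])       # one-loop Standard Model beta coefficients

def alpha_inv(mu_gev: float) -> np.ndarray:
    return alpha_inv_MZ - b_SM / (2 * np.pi) * np.log(mu_gev / M_Z)

for mu in (1e3, 1e10, 1e13, 1e16):
    print(f"mu = {mu:.0e} GeV -> 1/alpha_1,2,3 =", np.round(alpha_inv(mu), 1))
# In the Standard Model the three lines fail to meet at a single point; with the
# supersymmetric coefficients (33/5, 1, -3) above ~1 TeV they very nearly do,
# which is the LEP result referred to in the text.
```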
Many Grand Unified Theories (but not Pati–Salam) predict that the proton can decay, and if this were to be seen, details of the decay products could give hints at more aspects of the Grand Unified Theory. It is at present unknown if the proton can decay, although experiments have determined a lower bound of 10³⁵ years for its lifetime.
Current status
Theoretical physicists have not yet formulated a widely accepted, consistent theory that combines general relativity and quantum mechanics to form a theory of everything. Trying to combine the graviton with the strong and electroweak interactions leads to fundamental difficulties and the resulting theory is not renormalizable. The incompatibility of the two theories remains an outstanding problem in the field of physics.
| Physical sciences | Particle physics: General | Physics |
777860 | https://en.wikipedia.org/wiki/Mauve | Mauve | Mauve (, ; , ) is a pale purple color named after the mallow flower (French: mauve). The first use of the word mauve as a color was in 1796–98 according to the Oxford English Dictionary, but its use seems to have been rare before 1859. Another name for the color is mallow, with the first recorded use of mallow as a color name in English in 1611.
Mauve contains more gray and more blue than a pale tint of magenta. Many pale wildflowers called "blue" are more accurately classified as mauve. Mauve is also sometimes described as pale violet.
Mauveine, the first commercial aniline dye
The synthetic dye mauve was first so named in 1859. Chemist William Henry Perkin, then 18, was attempting to synthesize quinine in 1856; quinine was used to treat malaria. He noticed an unexpected residue, which turned out to be the first aniline dye. Perkin originally named the dye Tyrian purple after the historical dye, but the product was renamed mauve after it was marketed in 1859. It is now usually called Perkin's mauve, mauveine, or aniline purple.
Earlier references to a mauve dye in 1856–1858 referred to a color produced using the semi-synthetic dye murexide or a mixture of natural dyes. Perkin was so successful in marketing his discovery to the dye industry that his 2000 biography by Simon Garfield is simply entitled Mauve. Between 1859 and 1861, mauve became a fashion must-have. The weekly journal All the Year Round described women wearing the colour as "all flying countryward, like so many migrating birds of purple paradise". Punch magazine published cartoons poking fun at the huge popularity of the colour: "The Mauve Measles are spreading to so serious an extent that it is high time to consider by what means [they] may be checked."
But, because it faded easily, the success of mauve dye was short-lived; by 1873, it was replaced by other synthetic dyes. As the memory of the original dye soon receded, the contemporary understanding of mauve is as a lighter, less-saturated color than it was originally known.
The 1890s are sometimes referred to in retrospect as the "Mauve Decade" because of the popularity of the subtle color among progressive artistic types, both in Europe and the US.
Variations
Rich mauve
The color displayed at right is the rich tone of mauve called mauve by Crayola.
French mauve (deep mauve)
The color displayed at right is the deep tone of mauve that is called mauve by Pourpre.com, a color list widely popular in France.
Opera mauve
The color displayed at right is opera mauve.
The first recorded use of opera mauve as a color name in English was in 1927.
Mauve taupe
The color displayed at right is mauve taupe.
The first recorded use of mauve taupe as a color name in English was in 1925.
Old mauve
The color displayed at right is old mauve.
The first recorded use of old mauve as a color name in English was in 1925.
The normalized color coordinates for old mauve are identical to wine dregs, which was first recorded as a color name in English in 1924.
| Physical sciences | Colors | Physics |
4741089 | https://en.wikipedia.org/wiki/Jeans%20instability | Jeans instability | The Jeans instability is a concept in astrophysics that describes an instability that leads to the gravitational collapse of a cloud of gas or dust. It causes the collapse of interstellar gas clouds and subsequent star formation. It occurs when the internal gas pressure is not strong enough to prevent the gravitational collapse of a region filled with matter. It is named after James Jeans.
For stability, the cloud must be in hydrostatic equilibrium, which in the case of a spherical cloud translates to

$$\frac{dp}{dr} = -\frac{G\,M_{\text{enc}}(r)\,\rho(r)}{r^{2}} ,$$

where $M_{\text{enc}}(r)$ is the enclosed mass, $p$ is the pressure, $\rho(r)$ is the density of the gas (at radius $r$), $G$ is the gravitational constant, and $r$ is the radius. The equilibrium is stable if small perturbations are damped and unstable if they are amplified. In general, the cloud is unstable if it is either very massive at a given temperature or very cool at a given mass; under these circumstances, the gas pressure gradient cannot overcome gravitational force, and the cloud will collapse. This is called the "Jeans Collapse Criterion".
The Jeans instability likely determines when star formation occurs in molecular clouds.
History
In 1720, Edmund Halley considered a universe without edges and pondered what would happen if the "system of the world", which exists within the universe, were finite or infinite. In the finite case, stars would gravitate towards the center, and if infinite, all the stars would be nearly in equilibrium and the stars would eventually reach a resting place.
Contrary to the writing of Halley, Isaac Newton, in a 1692/3 letter to Richard Bentley, wrote that it's hard to imagine that particles in an infinite space should be able to stand in such a configuration to result in a perfect equilibrium.
James Jeans extended the issue of gravitational stability to include pressure. In 1902, Jeans wrote, similarly to Halley, that a finite distribution of matter, assuming pressure does not prevent it, will collapse gravitationally towards its center. For an infinite distribution of matter, there are two possible scenarios. An exactly homogeneous distribution has no clear center of mass and no clear way to define a gravitational acceleration direction. For the other case, Jeans extends what Newton wrote about: Jeans demonstrated that small deviations from exact homogeneity lead to instabilities.
Jeans mass
The Jeans mass is named after the British physicist Sir James Jeans, who considered the process of gravitational collapse within a gaseous cloud. He was able to show that, under appropriate conditions, a cloud, or part of one, would become unstable and begin to collapse when it lacked sufficient gaseous pressure support to balance the force of gravity. The cloud is stable for sufficiently small mass (at a given temperature and radius), but once this critical mass is exceeded, it will begin a process of runaway contraction until some other force can impede the collapse. He derived a formula for calculating this critical mass as a function of its density and temperature. The greater the mass of the cloud, the bigger its size, and the colder its temperature, the less stable it will be against gravitational collapse.
The approximate value of the Jeans mass may be derived through a simple physical argument. One begins with a spherical gaseous region of radius $R$, mass $M$, and with a gaseous sound speed $c_s$. The gas is compressed slightly and it takes a time

$$t_{\text{sound}} = \frac{R}{c_s}$$

for sound waves to cross the region and attempt to push back and re-establish the system in pressure balance. At the same time, gravity will attempt to contract the system even further, and will do so on a free-fall time

$$t_{\text{ff}} \approx \frac{1}{\sqrt{G\rho}} ,$$

where $G$ is the universal gravitational constant, $\rho$ is the gas density within the region, and $n = \rho/\mu$ is the gas number density for mean mass per particle $\mu$ (a value appropriate for molecular hydrogen with 20% helium by number). When the sound-crossing time is less than the free-fall time, pressure forces temporarily overcome gravity, and the system returns to a stable equilibrium. However, when the free-fall time is less than the sound-crossing time, gravity overcomes pressure forces, and the region undergoes gravitational collapse. The condition for gravitational collapse is therefore

$$t_{\text{ff}} < t_{\text{sound}} .$$

The resultant Jeans length $\lambda_J$ is approximately

$$\lambda_J \approx \frac{c_s}{\sqrt{G\rho}} .$$

This length scale is known as the Jeans length. All scales larger than the Jeans length are unstable to gravitational collapse, whereas smaller scales are stable. The Jeans mass $M_J$ is just the mass contained in a sphere of radius $R_J$ ($R_J$ is half the Jeans length):

$$M_J = \frac{4\pi}{3}\rho R_J^{3} = \frac{4\pi}{3}\rho\left(\frac{\lambda_J}{2}\right)^{3} .$$
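A short numerical sketch of these estimates for an assumed cold molecular-cloud core (the temperature, density, and mean particle mass are illustrative values, and order-unity prefactors are dropped, as in the argument above):

```python
# Order-of-magnitude Jeans length and Jeans mass: lambda_J ~ c_s / sqrt(G*rho),
# M_J ~ (4*pi/3) * rho * (lambda_J/2)**3.  Input values are assumed, not from the article.
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
k_B = 1.381e-23        # J/K
m_H = 1.673e-27        # kg

T = 10.0               # K, assumed cloud temperature
n = 1e10               # particles per m^3 (10^4 per cm^3), assumed
mu = 2.3               # mean molecular weight, assumed (H2 plus helium)

rho = mu * m_H * n                     # mass density
c_s = np.sqrt(k_B * T / (mu * m_H))    # isothermal sound speed
lambda_J = c_s / np.sqrt(G * rho)      # Jeans length, order of magnitude
M_J = (4 / 3) * np.pi * rho * (lambda_J / 2) ** 3

pc, M_sun = 3.086e16, 1.989e30
print(f"sound speed : {c_s:.0f} m/s")
print(f"Jeans length: {lambda_J / pc:.2f} pc")
print(f"Jeans mass  : {M_J / M_sun:.1f} solar masses")
```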
"Jeans swindle"
It was later pointed out by other astrophysicists including Binney and Tremaine that the original analysis used by Jeans was flawed: in his formal analysis, although Jeans assumed that the collapsing region of the cloud was surrounded by an infinite, static medium, the surrounding medium should in reality also be collapsing, since all larger scales are also gravitationally unstable by the same analysis. The influence of this medium was completely ignored in Jeans' analysis. This flaw has come to be known as the "Jeans swindle".
Remarkably, a more careful analysis shows that other factors, such as the expansion of the Universe, fortuitously cancel out the apparent error in Jeans' analysis, so Jeans' equation is correct even if its derivation might have been dubious.
Energy-based derivation
An alternative, arguably even simpler, derivation can be found using energy considerations. In the interstellar cloud, two opposing forces are at work. The gas pressure, caused by the thermal movement of the atoms or molecules comprising the cloud, tries to make the cloud expand, whereas gravitation tries to make the cloud collapse. The Jeans mass is the critical mass where both forces are in equilibrium with each other. In the following derivation numerical constants and constants of nature (such as the gravitational constant) will be ignored. They will be reintroduced in the result.
Consider a homogeneous spherical gas cloud with radius $R$. In order to compress this sphere to a radius $R - dR$, work must be done against the gas pressure. During the compression, gravitational energy is released. When this energy equals the amount of work to be done on the gas, the critical mass is attained. Let $M$ be the mass of the cloud, $T$ the (absolute) temperature, $n$ the particle density, and $p$ the gas pressure. The work to be done equals $p\,dV$. Using the ideal gas law, according to which $p = n k_B T$, one arrives at the following expression for the work:

$$dW \propto n k_B T R^{2}\,dR .$$

The gravitational potential energy of a sphere with mass $M$ and radius $R$ is, apart from constants, given by the following expression:

$$U \propto -\frac{G M^{2}}{R} .$$

The amount of energy released when the sphere contracts from radius $R$ to radius $R - dR$ is obtained by differentiating this expression with respect to $R$, so

$$dU \propto \frac{G M^{2}}{R^{2}}\,dR .$$

The critical mass is attained as soon as the released gravitational energy is equal to the work done on the gas:

$$\frac{G M^{2}}{R^{2}} \propto n k_B T R^{2} .$$

Next, the radius $R$ must be expressed in terms of the particle density $n$ and the mass $M$. This can be done using the relation

$$M \propto n\,m\,R^{3} ,$$

where $m$ is the mass of a particle comprising the gas. A little algebra leads to the following expression for the critical mass:

$$M_J \propto \left(\frac{k_B T}{G m}\right)^{3/2} \frac{1}{\sqrt{n\,m}} .$$
If during the derivation all constants are taken along, the resulting expression is
where $k_B$ is the Boltzmann constant, $G$ the gravitational constant, and $m$ the mass of a particle comprising the gas. Assuming the cloud to consist of atomic hydrogen, the prefactor can be calculated. If we take the solar mass as the unit of mass, and use units of for the particle density, the result is
Jeans' length
Jeans' length is the critical radius of a cloud (typically a cloud of interstellar molecular gas and dust) where thermal energy, which causes the cloud to expand, is counteracted by gravity, which causes the cloud to collapse. It is named after the British astronomer Sir James Jeans, who concerned himself with the stability of spherical nebulae in the early 1900s.
The formula for the Jeans length is:

$$\lambda_J = \sqrt{\frac{15\,k_B T}{4\pi G \mu \rho}} ,$$

where $k_B$ is the Boltzmann constant, $T$ is the temperature of the cloud, $\mu$ is the mean mass per particle, $G$ is the gravitational constant, and $\rho$ is the cloud's mass density (i.e. the cloud's mass divided by the cloud's volume).
Perhaps the easiest way to conceptualize Jeans' length is in terms of a close approximation, in which we discard the numerical factors and in which we rephrase $\rho$ as $M/r^{3}$. The formula for Jeans' length then becomes:

$$\lambda_J \approx \sqrt{\frac{k_B T\,r^{3}}{G M \mu}} ,$$

where $r$ is the radius of the cloud.
It follows immediately that $\lambda_J = r$ when $k_B T = G M \mu / r$; i.e., the cloud's radius is the Jeans' length when thermal energy per particle equals gravitational work per particle. At this critical length the cloud neither expands nor contracts. It is only when thermal energy is not equal to gravitational work that the cloud either expands and cools or contracts and warms, a process that continues until equilibrium is reached.
Jeans' length as oscillation wavelength
The Jeans' length is the oscillation wavelength (respectively, the Jeans' wavenumber, $k_J$) below which stable oscillations rather than gravitational collapse will occur:

$$k_J = \frac{\sqrt{4\pi G \rho}}{c_s} ,$$

where $G$ is the gravitational constant, $c_s$ is the sound speed, and $\rho$ is the enclosed mass density.
It is also the distance a sound wave would travel in the collapse time.
Fragmentation
Jeans instability can also give rise to fragmentation in certain conditions. To derive the condition for fragmentation, an adiabatic process is assumed in an ideal gas and a polytropic equation of state is taken. The derivation proceeds through a dimensional analysis: for a polytropic equation of state $p \propto \rho^{\gamma}$, the sound speed scales as $c_s \propto \rho^{(\gamma-1)/2}$, and since $M_J \propto c_s^{3}\rho^{-1/2}$, the Jeans mass scales as $M_J \propto \rho^{(3\gamma-4)/2}$.
If the adiabatic index $\gamma > 4/3$, the Jeans mass increases with increasing density, while if $\gamma < 4/3$ the Jeans mass decreases with increasing density. During gravitational collapse density always increases, thus in the second case the Jeans mass will decrease during collapse, allowing smaller overdense regions to collapse, leading to fragmentation of the giant molecular cloud. For an ideal monatomic gas, the adiabatic index is 5/3. However, in astrophysical objects this value is usually close to 1 (for example, in partially ionized gas at temperatures low compared to the ionization energy). More generally, the process is not really adiabatic but involves cooling by radiation that is much faster than the contraction, so that the process can be modeled by an adiabatic index as low as 1 (which corresponds to the polytropic index of an isothermal gas). So the second case is the rule rather than the exception in stars. This is the reason why stars usually form in clusters.
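A tiny sketch of that scaling, $M_J \propto \rho^{(3\gamma-4)/2}$, showing only the sign of the trend for a few values of the adiabatic index; the 100-fold density increase is an arbitrary illustration, not a figure from the article.

```python
# Jeans-mass scaling with density for a polytropic equation of state:
# M_J proportional to rho**((3*gamma - 4) / 2).  Numbers are illustrative only.
def jeans_mass_factor(gamma: float, density_ratio: float) -> float:
    """Relative change of the Jeans mass when the density grows by density_ratio."""
    return density_ratio ** ((3 * gamma - 4) / 2)

for gamma in (5 / 3, 4 / 3, 1.0):
    print(f"gamma = {gamma:.2f}: Jeans mass changes by x{jeans_mass_factor(gamma, 100.0):.2f} "
          f"for a 100-fold density increase")
# gamma > 4/3: the Jeans mass grows as the cloud contracts (collapse stabilises);
# gamma < 4/3: it shrinks, so sub-regions keep collapsing -- fragmentation.
```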
| Physical sciences | Stellar astronomy | Astronomy |
4741452 | https://en.wikipedia.org/wiki/Field%20galaxy | Field galaxy | A field galaxy is a galaxy that does not belong to a larger galaxy group or cluster and hence is gravitationally alone.
Roughly 80% of all galaxies located within of the Milky Way are in groups or clusters of galaxies. Most low-surface-brightness galaxies are field galaxies. The median Hubble-type of field galaxies is Sb, a type of spiral galaxy.
List of field galaxies
A list of nearby relatively bright field galaxies within the Local Volume, about
| Physical sciences | Basics_2 | Astronomy |
4741614 | https://en.wikipedia.org/wiki/IC%201101 | IC 1101 | IC 1101 is a class S0 supergiant (cD) lenticular galaxy at the center of the Abell 2029 galaxy cluster. It has an isophotal diameter at about . It possesses a diffuse core which is the largest core of any galaxy known to date, and contains a supermassive black hole, one of the largest discovered.
IC 1101 is located at from Earth. It was discovered on 19 June 1790, by the German-British astronomer William Herschel.
Observation history
IC 1101 was catalogued in the Index Catalogue of galaxies in the late 1800s to the early 1900s, which is where the galaxy got its most-used designation.
In a 1964 study of galaxies accompanied by radio sources, IC 1101 was listed as being among the diffuse elliptical galaxies chosen for the study. The study noted that IC 1101 had not been considered as a radio source, but radio emissions similar to galaxies with such emissions were detected.
Almost a decade and a half later, in 1978, astronomer Alan Dressler analyzed 12 very rich clusters of galaxies, among them Abell 2029, where IC 1101 is located. The following year, he released a paper dedicated solely to IC 1101 and its related dynamics and properties, revealing a rising velocity dispersion profile. After this, he would publish a paper overviewing his recent studies on the cluster and the galaxy.
During 1985, a team of astronomers obtained the spectra of the gas inside several galaxy clusters known to be luminous at X-ray wavelengths, including Abell 2029. Soon after, an investigation into the dynamics of IC 1101 and the galaxies within a few hundred kiloparsecs of it was conducted.
The centers of galaxy clusters are considered to be among the best laboratories for the study of galaxy and cluster evolution, and so during the late 1980s to early 1990s, numerous papers were released, surveying many brightest cluster galaxies. Among them was IC 1101.
R-band (red light) luminosity profiles were soon obtained for IC 1101, revealing a very vast halo of light that could be traced out for more than several hundred kiloparsecs from the galaxy's center.
In 2002, an analysis of Chandra X-ray surveys of the cluster was performed.
In 2011, a survey of over 430 brightest cluster galaxies was conducted, among them IC 1101.
In 2017, a redshift survey of the cluster was conducted, allowing a list of velocity dispersions to be created. This constrains the dynamics of the cluster. That same year, an analysis of the galaxy's inner regions using Hubble Space Telescope images found a huge yet diffuse galactic core, accompanied with mass estimates of the central supermassive black hole.
During 2019-2020, 170 local galaxy clusters were surveyed for a study of brightest cluster galaxies, their structures and the intracluster light around them.
Characteristics
Morphology
The galaxy is classified as a supergiant elliptical (E) to lenticular (S0) and is the brightest galaxy in A2029 (hence its other designation A2029-BCG; BCG meaning brightest cluster galaxy). The galaxy's morphological type is debated due to it possibly being shaped like a flat disc but only visible from Earth at its broadest dimensions. A morphology of S0- (Hubble stage -2; see Hubble stage for details) has been given by the Third Reference Catalogue of Bright Galaxies (RC3) in 1991.
Components and Structure
Like most large galaxies, IC 1101 is populated by a number of metal-rich stars, some of which are as much as seven billion years older than the Sun, making it appear golden yellow in color. It has a very bright radio source at the center, which is likely associated with an ultramassive black hole in the mass range of measured using core dynamical models, or alternatively at using gas accretion rate and growth modelling, which would make IC 1101's black hole one of the most massive known to date. The estimates of the mass of IC 1101's black hole are near the upper bound of cosmological limits, and it is referred to as an "overmassive" black hole.
IC 1101's mass-to-light ratio has been described as being anomalously high. The galaxy also has a unique velocity dispersion profile, which indicates a massive dark matter halo. It accretes roughly 450 solar masses per year. The galaxy lacks nuclear emission in visible light at its center as well as signs of recent star formation. There is also no evidence of dust lanes in the core.
For many years it was suggested that IC 1101 was at the center of a massive cooling flow within the Abell 2029 cluster, but later observations dismissed this.
A 2017 paper suggests that IC 1101 has the largest core size of any galaxy with a core radius of around by fitting a model to a Hubble Space Telescope (HST) image of the galaxy. This makes its core larger than the one observed in A2261-BCG, which is . The core is also roughly an order of magnitude larger than the cores of other large elliptical galaxies, such as NGC 4889 and NGC 1600. Estimates of the absolute magnitude of IC 1101's spheroid are very faint for such a large core, indicating a large stellar mass deficit estimated at and a large luminosity deficit estimated at . A hypothesis for the observed properties and peculiarities of the core is that the merger of the central black holes from the formation of the galaxy flung stars out of the core. However, when examining large and diffuse galactic cores, caution must be taken, as various estimates may differ between the computer models used. As an example, Holmberg 15A was originally claimed to have the largest galactic core of any galaxy but other studies proved otherwise, either not finding a core or estimating a smaller size for it. It should also be noted that the satellite galaxies might have had an effect on the estimated properties of the diffuse core.
IC 1101's major axis is oriented in the northeast to southwest direction. The major axis is even aligned on the axis by which Abell 2029 accretes from Abell 2033.
Its components such as the core and main body are well-aligned, but the halo is twisted by 20 degrees from the galaxy's other components. Its isophotes (the shapes connecting areas with the same surface brightness) are predominantly boxy. Closer to the core, IC 1101's isophotes become elongated, suggesting a nuclear disc. This feature might be due to an unresolved double nucleus, produced by a low-intensity active galactic nucleus (AGN) or a disrupted satellite galaxy disturbed by the central black hole. Several elliptical galaxies like NGC 4438-B, NGC 5419, and VCC 128 contain two point-sources, producing high ellipticities. The NRAO VLA Sky Survey detected a radio source near IC 1101, corroborating a possible AGN. Another, weaker radio source has also been detected nearby, so the possibility of a double AGN cannot be ruled out.
Like most BCGs, IC 1101 has a massive and diffuse stellar halo and has some excessive halo light. The halo is twisted by 20 degrees from the main body and core of IC 1101. This feature, among others, seems to be the reason why IC 1101 is classified as a lenticular galaxy in RC3.
Size
IC 1101 is considered a large galaxy characterized by an extensive, diffuse halo. This is the intracluster light, or ICL: free-flying stars that are not bound to any individual galaxy. This ubiquitous mass of stars within galaxy clusters is, however, usually more concentrated around the brightest cluster galaxies, such as IC 1101. Photometrically, the ICL is indistinguishable from the brightest cluster galaxy, but it can be distinguished kinematically.
During the early 2000s, weak-lensing estimates for Abell 2029 were taken, indicating the distribution of the mass throughout the cluster and galaxy.
Defining the size of a galaxy varies according to the method used in the astronomical literature. Photographic plates of blue light from the galaxy (sampling stars excluding the diffuse halo) yield an effective radius (the radius within which half the light is emitted) of based on an earlier distance measurement. The galaxy has a very large halo of much lower intensity "diffuse light" extending to a radius of . The authors of the study identifying the halo conclude that IC 1101 is "possibly one of the largest and most luminous galaxies in the universe". This view has been stated in several other papers as well, but this figure was based on an earlier assumed distance of .
A more recent measurement, using the 25.0 magnitude/arcsec² standard (commonly known as D25, a method recommended by R.O. Redman in 1936), was made by the RC3 in the B-band, with a measured major axis (log 2a+1) of 1.08 (equivalent to 72.10 arcseconds), translating to a diameter of . Another calculation by the Two Micron All-Sky Survey using the "total" aperture at the K-band yields a much larger size of . Both measurements are based on the currently accepted distance to IC 1101. This would make it one of the largest and most luminous galaxies known, though there are other galaxies with larger isophotal diameter measurements (such as NGC 623, Abell 1413 BCG, and ESO 306-17).
Distance
The distance to IC 1101 has also been uncertain, with different methods across different wavelengths producing varying results. An earlier distance calculation from 1980, using the galaxy's photometric properties, yielded a distance of and a redshift of z = 0.077, based on a Hubble constant value H0 of 60 km/s/Mpc. The RC3 catalogue gave a similar value of z = 0.078, based on optical emission lines, a value confirmed as recently as 2017 based on luminosity, stellar mass, and velocity dispersion functions, all yielding distances of based on the modern value of the Hubble constant H0 = 67.8 km/s/Mpc; these are the currently accepted values. Lower redshifts have been calculated at other wavelengths, such as the photometric redshift measurement by the Two Micron All-Sky Survey (2MASS) in 2014, which gave a value of z = 0.045, translating to a distance of . A measurement made in 2005 by the Arecibo Observatory using the 21-cm hydrogen emission line yields a redshift of z = 0.021, and hence a distance of .
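For the redshifts and Hubble-constant values quoted above, the simple low-redshift Hubble-law relation d ≈ cz/H0 gives the rough scale of these distances. This linear approximation is only a sketch; the article's own (elided) figures may rest on fuller cosmological calculations.

```python
# Low-redshift Hubble-law distances, d ~ c*z / H0, for the values quoted in the text.
C_KM_S = 299792.458        # speed of light, km/s
MPC_TO_MLY = 3.262         # million light-years per megaparsec

def hubble_distance_mpc(z: float, h0: float) -> float:
    """Linear (small-z) Hubble-law distance in megaparsecs; h0 in km/s/Mpc."""
    return C_KM_S * z / h0

for z, h0 in [(0.077, 60.0), (0.078, 67.8), (0.045, 67.8), (0.021, 67.8)]:
    d = hubble_distance_mpc(z, h0)
    print(f"z = {z:.3f}, H0 = {h0:4.1f} -> {d:5.0f} Mpc (~{d * MPC_TO_MLY:4.0f} million light-years)")
```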
Formation
The lack of bright and luminous galaxies other than IC 1101 at the center of the Abell 2029 galaxy cluster suggests that they were absorbed and consumed ("chewed-up") by the nascent IC 1101. Since the halo is somewhat flattened, the halo most likely retained the distribution of the bright luminous galaxies as they were consumed.
The depleted core and other characteristics of IC 1101 such as the halo component and its structure at moderate distances from the center suggest that the galaxy underwent numerous galactic mergers and interactions, perhaps as much as 10 or even more. The smoothness of the halo suggests that it formed early in the history of the cluster.
| Physical sciences | Notable galaxies | Astronomy |
4746723 | https://en.wikipedia.org/wiki/Tabasco%20pepper | Tabasco pepper | The tabasco pepper is a variety of the chili pepper species Capsicum frutescens originating in Mexico. It is best known through its use in Tabasco sauce, followed by peppered vinegar.
Like all C. frutescens cultivars, the tabasco plant has a typical bushy growth, which commercial cultivation makes stronger by trimming the plants. The tapered fruits, around 4 cm long, are initially pale yellowish-green and turn yellow and orange before ripening to bright red. Tabascos rate from 30,000 to 50,000 on the Scoville scale. Tabasco fruits, like all other members of the C. frutescens species, remain erect when mature, rather than hanging down from their stems.
A large part of the tabasco pepper stock fell victim to the tobacco mosaic virus in the 1960s; the first resistant variety (Greenleaf tabasco) was not cultivated until around 1970.
Naming
The peppers are named after the Mexican state of Tabasco. The initial letter of tabasco is rendered in lowercase when referring to the botanical variety but capitalized when referring to the Mexican state or to the brand of hot sauce, Tabasco sauce.
Cultivation
Tabasco peppers start out green and ripen to orange and then red. It takes approximately 80 days after germination for them to fully mature. The tabasco plant can grow to tall, with a cream or light yellow flower that will develop into upward-oriented fruits later in the growing season. As they are native to the Mexican state of Tabasco, seeds require much warmth to germinate and grow best when the temperature is between . If grown outside their natural habitat, the peppers are planted two to three weeks after the last frost when soil temperatures exceed and the weather has settled. Peppers are temperamental when it comes to setting fruit; if temperatures are too hot or too cool, or if nighttime temperatures fall below , it can reduce fruit set. A location that receives plenty of light and heat, with soil that is fertile, lightweight, slightly acidic (pH 5.5–7.0), and well-drained, is ideal for growing the plants. Peppers need a steady supply of water for best performance. Growers are careful to make sure that fertilizers and soil are rich in phosphorus, potassium, and calcium and low in nitrogen, which can deter fruit growth.
| Biology and health sciences | Botanical fruits used as culinary vegetables | Plants |
8160460 | https://en.wikipedia.org/wiki/Moonrat | Moonrat | The moonrat (Echinosorex gymnura) is a southeast Asian species of mammal in the family Erinaceidae (the hedgehogs and gymnures). It is the only species in the genus Echinosorex. The moonrat is a fairly small, primarily carnivorous animal which, despite its name, is not closely related to rats or other rodents. The scientific name is sometimes given as Echinosorex gymnurus, but this is incorrect.
Description
The moonrat has a distinct pungent odor with strong ammonia content, different from the musky smell of carnivorans. There are two subspecies: E. g. gymnura is found in Sumatra and the Thai-Malay Peninsula; E. g. alba is found in Borneo. In the former, the head and frontal half of the body are white or grey-white; the remainder is mainly black. The latter subspecies is generally white (alba means white in Latin), with a sparse scattering of black hairs; it appears totally white from a distance. Those from western Borneo tend to have a greater proportion of black hairs than those from the east, but animals from Brunei appear intermediate. Largely white E. g. gymnura also occur, but they are rare.
Head and body length is , tail length is , hind foot length is and weight is . The dental formula is . It is possibly the largest member of the order Erinaceomorpha, although the European hedgehog likely weighs a bit more at and up to .
Ecology and habitat
Moonrats are nocturnal and terrestrial, lying up under logs, roots or in abandoned burrows during the day. They inhabit moist forests including mangrove and swamp forests and often enter water. In Borneo, they occur mainly in forests, but in peninsular Malaysia they are also found in gardens and plantations. They feed on earthworms and various small animals, mostly arthropods. The moonrat is a host of the acanthocephalan intestinal parasite Moniliformis echinosorexi.
Lifespan
The lifespan of the moonrat is up to five years.
Conservation status
The moonrat is not considered a threatened species. The main threat to the moonrat is deforestation driven by human development for agriculture, plantations, and commercial logging. In addition, demand from the Penan in Borneo for food and traditional medicine contributes to decreasing numbers of moonrats in Borneo. The species is also found in protected areas, including Matang National Park and Kuching Wetlands National Park. Its IUCN status is Least Concern.
Economic importance
The Penan in Borneo used to trade moonrat meat for other foods and goods among themselves and for money.
| Biology and health sciences | Eulipotyphla | Animals |
8166296 | https://en.wikipedia.org/wiki/Synonym%20%28taxonomy%29 | Synonym (taxonomy) | The Botanical and Zoological Codes of nomenclature treat the concept of synonymy differently.
In botanical nomenclature, a synonym is a scientific name that applies to a taxon that now goes by a different scientific name. For example, Linnaeus was the first to give a scientific name (under the currently used system of scientific nomenclature) to the Norway spruce, which he called Pinus abies. This name is no longer in use, so it is now a synonym of the current scientific name, Picea abies.
In zoology, moving a species from one genus to another results in a different binomen, but the name is considered an alternative combination rather than a synonym. The concept of synonymy in zoology is reserved for two names at the same rank that refer to a taxon at that rank – for example, the name Papilio prorsa Linnaeus, 1758 is a junior synonym of Papilio levana Linnaeus, 1758, being names for different seasonal forms of the species now referred to as Araschnia levana (Linnaeus, 1758), the map butterfly. However, Araschnia levana is not a synonym of Papilio levana in the taxonomic sense employed by the Zoological code.
Unlike synonyms in other contexts, in taxonomy a synonym is not interchangeable with the name of which it is a synonym. In taxonomy, synonyms are not equals, but have a different status. For any taxon with a particular circumscription, position, and rank, only one scientific name is considered to be the correct one at any given time (this correct name is to be determined by applying the relevant code of nomenclature). A synonym cannot exist in isolation: it is always an alternative to a different scientific name. Given that the correct name of a taxon depends on the taxonomic viewpoint used (resulting in a particular circumscription, position and rank) a name that is one taxonomist's synonym may be another taxonomist's correct name (and vice versa).
Synonyms may arise whenever the same taxon is described and named more than once, independently. They may also arise when existing taxa are changed, as when two taxa are joined to become one, a species is moved to a different genus, a variety is moved to a different species, etc. Synonyms also come about when the codes of nomenclature change, so that older names are no longer acceptable; for example, Erica herbacea L. has been rejected in favour of the conserved name of Erica carnea L. and is thus its synonym.
General usage
To the general user of scientific names, in fields such as agriculture, horticulture, ecology, general science, etc., a synonym is a name that was previously used as the correct scientific name (in handbooks and similar sources) but which has been displaced by another scientific name, which is now regarded as correct. Thus Oxford Dictionaries Online defines the term as "a taxonomic name which has the same application as another, especially one which has been superseded and is no longer valid". In handbooks and general texts, it is useful to have synonyms mentioned as such after the current scientific name, so as to avoid confusion. For example, if the much-advertised name change should go through and the scientific name of the fruit fly were changed to Sophophora melanogaster, it would be very helpful if any mention of this name was accompanied by "(syn. Drosophila melanogaster)". Synonyms used in this way may not always meet the strict definitions of the term "synonym" in the formal rules of nomenclature which govern scientific names (see below).
Changes of scientific name have two causes: they may be taxonomic or nomenclatural. A name change may be caused by changes in the circumscription, position or rank of a taxon, representing a change in taxonomic, scientific insight (as would be the case for the fruit fly, mentioned above). A name change may be due to purely nomenclatural reasons, that is, based on the rules of nomenclature; as for example when an older name is (re)discovered which has priority over the current name. Speaking in general, name changes for nomenclatural reasons have become less frequent over time as the rules of nomenclature allow for names to be conserved, so as to promote stability of scientific names.
Zoology
In zoological nomenclature, codified in the International Code of Zoological Nomenclature, synonyms are different scientific names of the same taxonomic rank that pertain to that same taxon. For example, a particular species could, over time, have had two or more species-rank names published for it, while the same is applicable at higher ranks such as genera, families, orders, etc. In each case, the earliest published name is called the senior synonym, while the later name is the junior synonym. In the case where two names for the same taxon have been published simultaneously, the valid name is selected according to the principle of the first reviser such that, for example, of the names Strix scandiaca and Strix noctua (Aves), both published by Linnaeus in the same work at the same date for the taxon now determined to be the snowy owl, the epithet scandiaca has been selected as the valid name, with noctua becoming the junior synonym. (Incidentally, this species has since been reclassified and currently resides in the genus Bubo, as Bubo scandiacus).
One basic principle of zoological nomenclature is that the earliest correctly published (and thus available) name, the senior synonym, by default takes precedence in naming rights and therefore, unless other restrictions interfere, must be used for the taxon. However, junior synonyms are still important to document, because if the earliest name cannot be used (for example, because the same spelling had previously been used for a name established for another taxon), then the next available junior synonym must be used for the taxon. For other purposes, if a researcher is interested in consulting or compiling all currently known information regarding a taxon, some of this (including species descriptions, distribution, ecology and more) may well have been published under names now regarded as outdated (i.e., synonyms) and so it is again useful to know a list of historic synonyms which may have been used for a given current (valid) taxon name.
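The precedence rule just described is essentially algorithmic. The following is a minimal sketch in Python of how one might model it, using made-up records and ignoring the many special cases of the actual Code (first-reviser decisions, nomina oblita, and so on):

# Hypothetical synonym records: (name, year of publication, available?)
synonyms = [
    ("Xus novus",  1905, True),
    ("Xus primus", 1898, False),  # e.g. the spelling is preoccupied, so the name is unavailable
    ("Xus vetus",  1901, True),
]

def valid_name(records):
    """Return the earliest available name: the senior synonym, unless it cannot be used."""
    usable = [r for r in records if r[2]]
    return min(usable, key=lambda r: r[1])[0]

print(valid_name(synonyms))  # Xus vetus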
Objective synonyms refer to taxa with the same type and same rank (more or less the same taxon, although circumscription may vary, even widely). This may be species-group taxa of the same rank with the same type specimen, genus-group taxa of the same rank with the same type species or if their type species are themselves objective synonyms, of family-group taxa with the same type genus, etc.
In the case of subjective synonyms, there is no such shared type, so the synonymy is open to taxonomic judgement, meaning that there is room for debate: one researcher might consider the two (or more) types to refer to one and the same taxon, another might consider them to belong to different taxa. For example, John Edward Gray published the name Antilocapra anteflexa in 1855 for a species of pronghorn, based on a pair of horns. However, it is now commonly accepted that his specimen was an unusual individual of the species Antilocapra americana published by George Ord in 1815. Ord's name thus takes precedence, with Antilocapra anteflexa being a junior subjective synonym.
Objective synonyms are common at the rank of genera, because for various reasons two genera may contain the same type species. In many cases researchers established new generic names because they thought this was necessary or did not know that others had previously established another genus for the same group of species. An example is the genus Pomatia Beck, 1837, which was established for a group of terrestrial snails containing as its type species the Burgundy or Roman snail Helix pomatia—since Helix pomatia was already the type species of the genus Helix Linnaeus, 1758, the genus Pomatia was an objective synonym (and superfluous). Conversely, Helix is also a synonym of Pomatia, but it is older and so it has precedence.
At the species level, subjective synonyms are common because an unexpectedly large range of variation in a species, or simple ignorance of an earlier description, may lead a biologist to describe a newly discovered specimen as a new species. A common reason for objective synonyms at this level is the creation of a replacement name.
A junior synonym can be given precedence over a senior synonym, primarily when the senior name has not been used since 1899, and the junior name is in common use. The older name may be declared to be a nomen oblitum, and the junior name declared a nomen protectum. This rule exists primarily to prevent the confusion that would result if a well-known name, with a large accompanying body of literature, were to be replaced by a completely unfamiliar name. An example is the European land snail Petasina edentula (Draparnaud, 1805). In 2002, researchers found that an older name Helix depilata Draparnaud, 1801 referred to the same species, but this name had never been used after 1899 and was fixed as a nomen oblitum under this rule by Falkner et al. 2002.
Such a reversal of precedence is also possible if the senior synonym was established after 1900, but only if the International Commission on Zoological Nomenclature (ICZN) approves an application. (Here the C in ICZN stands for Commission, not Code as it does at the beginning of § Zoology. The two are related, with only one word difference between their names.) For example, the scientific name of the red imported fire ant, Solenopsis invicta was published by Buren in 1972, who did not know that this species was first named Solenopsis saevissima wagneri by Santschi in 1916; as there were thousands of publications using the name invicta before anyone discovered the synonymy, the ICZN, in 2001, ruled that invicta would be given precedence over wagneri.
To qualify as a synonym in zoology, a name must be properly published in accordance with the rules. Manuscript names and names that were mentioned without any description (nomina nuda) are not considered as synonyms in zoological nomenclature.
Botany
In botanical nomenclature, a synonym is a name that is not correct for the circumscription, position, and rank of the taxon as considered in the particular botanical publication. It is always "a synonym of the correct scientific name", but which name is correct depends on the taxonomic opinion of the author. In botany the various kinds of synonyms are:
Homotypic, or nomenclatural, synonyms (sometimes indicated by ≡) have the same type (specimen) and the same taxonomic rank. The Linnaean name Pinus abies L. has the same type as Picea abies (L.) H.Karst. When Picea is taken to be the correct genus for this species (there is almost complete consensus on that), Pinus abies is a homotypic synonym of Picea abies. However, if the species were considered to belong to Pinus (now unlikely) the relationship would be reversed and Picea abies would become a homotypic synonym of Pinus abies. A homotypic synonym need not share an epithet or name with the correct name; what matters is that it shares the type. For example, the name Taraxacum officinale for a species of dandelion has the same type as Leontodon taraxacum L. The latter is a homotypic synonym of Taraxacum officinale F.H.Wigg.
Heterotypic, or taxonomic, synonyms (sometimes indicated by =) have different types. Some botanists split the common dandelion into many, quite restricted species. The name of each such species has its own type. When the common dandelion is regarded as including all those small species, the names of all those species are heterotypic synonyms of Taraxacum officinale F.H.Wigg. Reducing a taxon to a heterotypic synonym is termed "to sink in synonymy" or "as synonym".
In botany, although a synonym must be a formally accepted scientific name (a validly published name), a listing of "synonyms", a "synonymy", often contains designations that for some reason did not make it as a formal name, such as manuscript names, or even misidentifications (although it is now the usual practice to list misidentifications separately).
Comparison between zoology and botany
Although the basic principles are fairly similar, the treatment of synonyms in botanical nomenclature differs in detail and terminology from zoological nomenclature, where the correct name is included among synonyms, although as first among equals it is the "senior synonym":
Synonyms in botany are equivalent to "junior synonyms" in zoology.
The homotypic or nomenclatural synonyms in botany are equivalent to "objective synonyms" in zoology.
The heterotypic or taxonomic synonyms in botany are equivalent to "subjective synonyms" in zoology.
If the name of a species changes solely on account of its allocation to a new genus ("new combinations"), in botany this is regarded as creating a synonym in the case of the original or previous combination, but not in zoology (where the fundamental nomenclatural unit is regarded as the species epithet, not the binomen, and this has generally not changed). Nevertheless, in popular usage, previous or alternative (non-current) combinations are frequently listed as synonyms in zoology as well as in botany.
Practical applications
Scientific papers may include lists of taxa, synonymizing existing taxa and (in some cases) listing references to them.
The status of a synonym may be indicated by symbols, as for instance in a system proposed for use in paleontology by Rudolf Richter. In that system, a "v" before the year indicates that the authors have inspected the original material, and a "." that they take responsibility for the act of synonymizing the taxa.
The accurate use of scientific names, including synonyms, is crucial in biomedical and pharmacological research involving plants. Failure to use correct botanical nomenclature can lead to ambiguity, hinder reproducibility of results, and potentially cause errors in medicine. Best practices for publication suggest that researchers should provide the currently accepted binomial with author citation, relevant synonyms, and the accepted family name according to the Angiosperm Phylogeny Group III classification. This practice ensures clear communication, allows proper linking of research to existing literature, and provides insight into phylogenetic relationships that may be relevant to shared chemical constituents or physiological effects. Online databases now make it easy for researchers to access correct nomenclature and synonymy information for plant species.
Other usage
The traditional concept of synonymy is often expanded in taxonomic literature to include pro parte (or "for part") synonyms. These are caused by splits and circumscriptional changes. They are usually indicated by the abbreviation "p.p." For example:
When Dandy described Galium tricornutum, he cited G. tricorne Stokes (1787) pro parte as a synonym, but explicitly excluded the type (specimen) of G. tricorne from the new species G. tricornutum. Thus G. tricorne was subdivided.
The Angiosperm Phylogeny Group's summary of plant classification states that family Verbenaceae "are much reduced compared to a decade or so ago, and many genera have been placed in Lamiaceae", but Avicennia, which was once included in Verbenaceae has been moved to Acanthaceae. Thus, it could be said that Verbenaceae pro parte is a synonym of Acanthaceae, and Verbenaceae pro parte is also a synonym of Lamiaceae. However, this terminology is rarely used because it is clearer to reserve the term "pro parte" for situations that divide a taxon that includes the type from one that does not.
| Biology and health sciences | Phylogenetics and taxonomy | Biology |
23624339 | https://en.wikipedia.org/wiki/Indexed%20family | Indexed family | In mathematics, a family, or indexed family, is informally a collection of objects, each associated with an index from some index set. For example, a family of real numbers indexed by the set of integers is a collection of real numbers in which a function assigns to each integer (the index) one real number, possibly the same number for different indices.
More formally, an indexed family is a mathematical function together with its domain and image (that is, indexed families and mathematical functions are technically identical; they differ only in point of view). Often the elements of the set are referred to as making up the family. In this view, indexed families are interpreted as collections of indexed elements instead of functions. The set of indices is called the index set of the family, and the set of elements is the indexed set.
Sequences are one type of family, indexed by the natural numbers. In general, the index set is not restricted to be countable. For example, one could consider an uncountable family of subsets of the natural numbers indexed by the real numbers.
Formal definition
Let I and X be sets and let f : I → X be a function,
where i is an element of I and the image of i under f is denoted by x_i; for example, f(3) is denoted by x_3. The symbol x_i is used to indicate that x_i is the element of X indexed by i. The function f thus establishes a family of elements in X indexed by I, which is denoted by (x_i)_{i∈I}, or simply (x_i) if the index set is assumed to be known. Sometimes angle brackets or braces are used instead of parentheses, although the use of braces risks confusing indexed families with sets.
Functions and indexed families are formally equivalent, since any function f with domain I induces a family (f(i))_{i∈I}, and conversely. Being an element of a family is equivalent to being in the range of the corresponding function. In practice, however, a family is viewed as a collection, rather than a function.
Any set X gives rise to a family (x)_{x∈X}, in which X is indexed by itself (meaning that f is the identity function). However, families differ from sets in that the same object can appear multiple times with different indices in a family, whereas a set is a collection of distinct objects. A family contains any element exactly once if and only if the corresponding function is injective.
An indexed family (x_i)_{i∈I} defines a set {x_i : i ∈ I}, that is, the image of I under f. Since the mapping f is not required to be injective, there may exist i, j ∈ I with i ≠ j such that x_i = x_j. Thus |{x_i : i ∈ I}| ≤ |I|, where |A| denotes the cardinality of the set A. For example, the constant sequence (1, 1, 1, …) indexed by the natural numbers has the one-element image set {1}. In addition, the set {x_i : i ∈ I} does not carry information about any structures on I. Hence, by using a set instead of the family, some information might be lost. For example, an ordering on the index set of a family induces an ordering on the family, but no ordering on the corresponding image set.
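The distinction between a family and its image set can be made concrete with a small sketch (Python is used here purely for illustration; the dictionary plays the role of the indexing function):

# A family of numbers indexed by I = {0, 1, 2, 3}: the dict maps each index to an element.
family = {0: 1, 1: -1, 2: 1, 3: -1}

image_set = set(family.values())     # the image set {x_i : i in I}
print(len(family), len(image_set))   # 4 2 -- the set forgets repetitions
print(list(family.items()))          # the family keeps the index of every occurrence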
Indexed subfamily
An indexed family (y_j)_{j∈J} is a subfamily of an indexed family (x_i)_{i∈I} if and only if J is a subset of I and y_j = x_j holds for all j ∈ J.
Examples
Indexed vectors
For example, consider the following sentence: "The vectors v_1, …, v_n are linearly independent."
Here (v_i)_{i=1,…,n} denotes a family of vectors. The i-th vector v_i only makes sense with respect to this family, as sets are unordered and so there is no i-th vector of a set. Furthermore, linear independence is defined as a property of a collection; it therefore matters whether those vectors are linearly independent as a set or as a family. For example, if we consider v_1 and v_2 with v_1 = v_2 as the same vector, then the set of them consists of only one element (as a set is a collection of unordered distinct elements) and is linearly independent (provided that vector is nonzero), but the family contains the same element twice under different indices and is linearly dependent (repeated vectors are always linearly dependent).
Matrices
Suppose a text states the following: "A matrix M is invertible if and only if the rows of M are linearly independent."
As in the previous example, it is important that the rows of M are linearly independent as a family, not as a set. For example, consider the 2 × 2 matrix M whose two rows are both equal to (1, 1).
The set of the rows of M consists of a single element, (1, 1), as a set is made of unique elements, and a one-element set of a nonzero vector is linearly independent; but the matrix is not invertible, as its determinant is 0. On the other hand, the family of the rows contains two elements indexed differently, the 1st row and the 2nd row, and so it is linearly dependent. The statement is therefore correct if it refers to the family of rows, but wrong if it refers to the set of rows. (The statement is also correct when "the rows" is interpreted as referring to a multiset, in which the elements are also kept distinct but which lacks some of the structure of an indexed family.)
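A quick numerical check of this example (a minimal sketch using NumPy; the matrix is the one with two identical rows described above):

import numpy as np

M = np.array([[1, 1],
              [1, 1]])          # both rows are the same vector (1, 1)

rows_as_set = {tuple(row) for row in M}
print(len(rows_as_set))          # 1 -- as a set, the rows collapse to one element
print(np.linalg.det(M))          # 0.0 -- but the matrix is singular, i.e. not invertible
print(np.linalg.matrix_rank(M))  # 1 -- the family of two rows is linearly dependent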
Other examples
Let {1, 2, …, n} denote the finite set of the first n positive integers, where n is a positive integer.
An ordered pair (a 2-tuple) is a family indexed by the two-element set {1, 2}; each element of the ordered pair is indexed by an element of that set.
An n-tuple is a family indexed by the set {1, 2, …, n}.
An infinite sequence is a family indexed by the natural numbers.
A list is an n-tuple for an unspecified n, or an infinite sequence.
An m × n matrix is a family indexed by the Cartesian product {1, …, m} × {1, …, n}, whose elements are ordered pairs; for example, the pair (2, 5) indexes the matrix element at the 2nd row and the 5th column.
A net is a family indexed by a directed set.
Operations on indexed families
Index sets are often used in sums and other similar operations. For example, if (a_i)_{i∈I} is an indexed family of numbers, the sum of all those numbers is denoted by ∑_{i∈I} a_i.
When (A_i)_{i∈I} is a family of sets, the union of all those sets is denoted by ⋃_{i∈I} A_i.
Likewise for intersections and Cartesian products.
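A small sketch of these operations (Python; the dictionaries stand in for indexed families, and the names are purely illustrative):

I = {1, 2, 3}                                  # index set
a = {1: 10, 2: 20, 3: 30}                      # an indexed family of numbers
A = {1: {1, 2}, 2: {2, 3}, 3: {5}}             # an indexed family of sets

total = sum(a[i] for i in I)                   # the sum over i in I of a_i
union = set().union(*(A[i] for i in I))        # the union over i in I of A_i
intersection = set.intersection(*(A[i] for i in I))

print(total, union, intersection)              # 60 {1, 2, 3, 5} set()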
Usage in category theory
The analogous concept in category theory is called a diagram. A diagram is a functor giving rise to an indexed family of objects in a category C, indexed by another category J, and related by morphisms depending on two indices.
| Mathematics | Set theory | null |
6197450 | https://en.wikipedia.org/wiki/Reaction%20intermediate | Reaction intermediate | In chemistry, a reaction intermediate, or intermediate, is a molecular entity arising within the sequence of a stepwise chemical reaction. It is formed as the reaction product of an elementary step, from the reactants and/or preceding intermediates, but is consumed in a later step. It does not appear in the chemical equation for the overall reaction.
For example, consider this hypothetical reaction:
A + B → C + D
If this overall reaction comprises two elementary steps thus:
A + B → X
X → C + D
then X is a reaction intermediate.
The phrase reaction intermediate is often abbreviated to the single word intermediate, and this is IUPAC's preferred form of the term. But this shorter form has other uses. It often refers to reactive intermediates. It is also used more widely for chemicals such as cumene which are traded within the chemical industry but are not generally of value outside it.
IUPAC definition
The IUPAC Gold Book defines an intermediate as a compound that has a lifetime greater than a molecular vibration, is formed (directly or indirectly) from the reactants, and reacts further to give (either directly or indirectly) the products of a chemical reaction. The lifetime condition distinguishes true, chemically distinct intermediates, both from vibrational states and from transition states (which, by definition, have lifetimes close to that of molecular vibration).
The different steps of a multi-step reaction often differ widely in their reaction rates. Where the difference is significant, an intermediate consumed more quickly than another may be described as a relative intermediate. A reactive intermediate is one which, due to its short lifetime, does not remain in the product mixture. Reactive intermediates are usually high-energy and unstable, and are seldom isolated.
Common reaction intermediates
Carbocations
Cations, often carbocations, serve as intermediates in various types of reactions to synthesize new compounds.
Carbocation intermediates in alkene addition
Carbocations are formed in two major alkene addition reactions. In an HX addition reaction, the pi bond of an alkene acts as a nucleophile and bonds with the proton of an HX molecule, where the X is a halogen atom. This forms a carbocation intermediate, and the X then bonds to the positive carbon that is available, as in the following two-step reaction.
Similarly, in a hydration (H3O+ addition) reaction, the pi bond of an alkene acts as a nucleophile and bonds with the proton of an H3O+ ion. This forms a carbocation intermediate (and an H2O molecule); the oxygen atom of another H2O molecule then bonds with the positive carbon of the intermediate. The oxygen finally deprotonates to form the final alcohol product, as follows.
Carbocation intermediates in nucleophilic substitution
Nucleophilic substitution reactions occur when a nucleophilic molecule attacks a positive or partially positive electrophilic center by breaking and creating a new bond. SN1 and SN2 are two different mechanisms for nucleophilic substitution, and SN1 involves a carbocation intermediate. In SN1, a leaving group is broken off to create a carbocation reaction intermediate. Then, a nucleophile attacks and forms a new bond with the carbocation intermediate to form the final, substituted product, as shown in the reaction of 2-bromo-2-methylpropane to form 2-methyl-2-propanol.
In this reaction, the tert-butyl cation, (CH3)3C+, is the carbocation intermediate formed on the way to the alcohol product.
Carbocation intermediates in elimination reactions
β-elimination or elimination reactions occur through the loss of a substituent leaving group and the loss of a proton to form a pi bond. E1 and E2 are two different mechanisms for elimination reactions, and E1 involves a carbocation intermediate. In E1, a leaving group detaches from a carbon to form a carbocation reaction intermediate. Then, a base (often the solvent) removes a proton, and the electrons that had bonded that proton form a pi bond, as shown in the pictured reaction on the right.
Carbanions
A carbanion is an organic species in which a carbon atom is not electron deficient but instead carries an overall negative charge. Carbanions are strong nucleophiles, which can be used to extend an alkene's carbon backbone in the synthesis reaction shown below.
The alkyne carbanion (an acetylide ion) is the reaction intermediate in this reaction.
Radicals
Radicals are highly reactive and short-lived, as they have an unpaired electron which makes them extremely unstable. Radicals often react with hydrogen atoms attached to carbon, effectively making the carbon a radical while stabilizing the former radical, in a process called propagation. The product, a carbon radical, can react with a non-radical molecule to continue propagation, or react with another radical to form a new stable molecule such as a longer carbon chain or an alkyl halide.
The example below of methane chlorination shows a multi-step reaction involving radicals.
Methane chlorination
Methane chlorination is a chain reaction. If only the products and reactants are analyzed, the overall result is: CH4 + 4 Cl2 → CCl4 + 4 HCl.
However, this reaction involves three intermediates, which are formed during a sequence of four irreversible second-order reactions before the final product is reached. This is why it is called a chain reaction. Following only the carbon-containing species in series:
Reactants: CH4
Products: CCl4
The other species are reaction intermediates: CH3Cl, CH2Cl2 and CHCl3.
These are the set of irreversible second-order reactions:
CH4 + Cl2 → CH3Cl + HCl
CH3Cl + Cl2 → CH2Cl2 + HCl
CH2Cl2 + Cl2 → CHCl3 + HCl
CHCl3 + Cl2 → CCl4 + HCl
These intermediate species' concentrations can be calculated by integrating the system of kinetic equations. The full reaction is a free radical propagation reaction which is filled out in detail below.
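As a sketch of what integrating the system of kinetic equations means in practice, the four consecutive second-order steps can be solved numerically. The rate constants below are made-up illustrative values, not measured ones, and SciPy is assumed to be available:

from scipy.integrate import solve_ivp

# Assumed (illustrative) second-order rate constants for the four consecutive steps
k1, k2, k3, k4 = 1.0, 0.8, 0.6, 0.4

def rates(t, y):
    ch4, ch3cl, ch2cl2, chcl3, ccl4, cl2 = y
    r1 = k1 * ch4 * cl2
    r2 = k2 * ch3cl * cl2
    r3 = k3 * ch2cl2 * cl2
    r4 = k4 * chcl3 * cl2
    return [-r1,                    # CH4 consumed
            r1 - r2,                # CH3Cl formed, then consumed
            r2 - r3,                # CH2Cl2 formed, then consumed
            r3 - r4,                # CHCl3 formed, then consumed
            r4,                     # CCl4 accumulates
            -(r1 + r2 + r3 + r4)]   # Cl2 consumed in every step

y0 = [1.0, 0.0, 0.0, 0.0, 0.0, 4.0]   # start with CH4 and the stoichiometric amount of Cl2
sol = solve_ivp(rates, (0.0, 10.0), y0)
print(sol.y[:, -1])   # final concentrations: the intermediates rise and then fall away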
Initiation: This reaction can occur by thermolysis (heating) or photolysis (absorption of light), leading to the breakage of a molecular chlorine bond.
When the bond is broken, it produces two highly reactive chlorine atoms (chlorine radicals): Cl2 → 2 Cl•.
Propagation: This stage has two distinct reaction classes. The first is the stripping of a hydrogen from the carbon species by the chlorine radicals. This occurs because lone chlorine atoms are unstable, and they react with one of the carbon species' hydrogens. The result is the formation of hydrogen chloride and a new methyl radical: Cl• + CH4 → HCl + CH3•.
These new carbon-containing radicals now react with a second Cl2 molecule. This regenerates the chlorine radical and the cycle continues: CH3• + Cl2 → CH3Cl + Cl•. This reaction occurs because, while the methyl radical is more stable than the chlorine radical, the overall stability of the newly formed chloromethane more than makes up for the energy difference.
During the propagation of the reaction, there are several highly reactive species that will be removed and stabilized at the termination step.
Termination: This kind of reaction takes place when the radical species interact directly. The products of the termination reactions are typically very low yield in comparison to the main products or intermediates as the highly reactive radical species are in relatively low concentration in relation to the rest of the mixture. This kind of reaction produces stable side products, reactants, or intermediates and slows the propagation reaction by lowering the number of radicals available to propagate the chain reaction.
There are many different termination combinations, some examples are:
Union of two methyl radicals, forming a C–C bond and yielding ethane (a side product): 2 CH3• → C2H6.
Union of a methyl radical and a Cl radical, forming chloromethane (another reaction forming an intermediate): CH3• + Cl• → CH3Cl.
Union of two Cl radicals to reform chlorine gas (a reaction reforming a reactant): 2 Cl• → Cl2.
Applications
Biological intermediates
Reaction intermediates serve purposes in a variety of biological settings. An example is the enzyme reaction intermediate of metallo-β-lactamase, which bacteria can use to acquire resistance to commonly used antibiotics such as penicillin. Metallo-β-lactamases hydrolyze β-lactams, a family of common antibiotics. Spectroscopic techniques have shown that the reaction intermediate of metallo-β-lactamase uses zinc in the resistance pathway.
Another example of the importance of reaction intermediates is seen with AAA-ATPase p97, a protein involved in a variety of cellular metabolic processes. p97 is also linked to degenerative disease and cancer. A study examining the reaction intermediates of AAA-ATPase p97 found that an ADP·Pi nucleotide intermediate is important in p97's molecular operation.
An additional example of biologically relevant reaction intermediates is found with the RCL enzyme, which cleaves glycosidic bonds. When the enzyme was studied using methanolysis, it was found that the reaction requires the formation of a reaction intermediate.
Chemical processing industry
In the chemical industry, the term intermediate may also refer to the (stable) product of a reaction that is valuable only as a precursor chemical for other industries. A common example is cumene, which is made from benzene and propylene and used to make acetone and phenol in the cumene process. Cumene itself is of relatively little value, and is typically only bought and sold by chemical companies.
| Physical sciences | Basics_3 | Chemistry |
6198052 | https://en.wikipedia.org/wiki/Light%20tube | Light tube | Light tubes (also known as solar pipes, tubular skylights or sun tunnels) are structures that transmit or distribute natural or artificial light for the purpose of illumination and are examples of optical waveguides.
In their application to daylighting, they are also often called tubular daylighting devices, sun pipes, sun scopes, or daylight pipes. They can be divided into two broad categories: hollow structures that contain the light with reflective surfaces; and transparent solids that contain the light by total internal reflection. Principles of nonimaging optics govern the flow of light through them.
Types
IR light tubes
Manufacturing custom-designed infrared light pipes, hollow waveguides and homogenizers is non-trivial because these are tubes lined with a highly polished, infrared-reflective gold coating, which can be applied thickly enough to permit the tubes to be used in highly corrosive atmospheres. Carbon black can be applied to certain parts of light pipes to absorb IR light (see photonics); this is done to limit IR light to only certain areas of the pipe.
While most light pipes are produced with a round cross-section, light pipes are not limited to this geometry. Square and hexagonal cross-sections are used in special applications. Hexagonal pipes tend to produce the most homogenized IR light. The pipes do not need to be straight: bends in the pipe have little effect on efficiency.
Light tube with reflective material
The first commercial reflector systems were patented and marketed in the 1850s by Paul Emile Chappuis in London, utilizing various forms of angled mirror designs. Chappuis Ltd's reflectors were in continuous production until the factory was destroyed in 1943. The concept was rediscovered and patented in 1986 by Solatube International of Australia. This system has been marketed for widespread residential and commercial use. Other daylighting products are on the market under various generic names, such as "SunScope", "solar pipe", "light pipe", "light tube", and "tubular skylight".
A tube lined with highly reflective material leads the light rays through a building, starting from an entrance-point located on its roof or one of its outer walls. A light tube is not intended for imaging (in contrast to a periscope, for example); thus image distortions pose no problem and are in many ways encouraged due to the reduction of "directional" light.
The entrance point usually comprises a dome (cupola), which has the function of collecting and reflecting as much sunlight as possible into the tube. Many units also have directional "collectors", "reflectors", or even Fresnel lens devices that assist in collecting additional directional light down the tube.
In 1994, the Windows and Daylighting Group at Lawrence Berkeley National Laboratory (LBNL) developed a series of horizontal light pipe prototypes to increase daylight illuminance at distances of 4.6-9.1 m, to improve the uniformity of daylight distribution and luminance gradient across the room under variable sun and sky conditions throughout the year. The light pipes were designed to passively transport daylighting through relatively small inlet glazing areas by reflecting sunlight to depths greater than conventional sidelight windows or skylights.
A set-up in which a laser cut acrylic panel is arranged to redirect sunlight into a horizontally or vertically orientated mirrored pipe, combined with a light spreading system with a triangular arrangement of laser cut panels that spread the light into the room, was developed at the Queensland University of Technology in Brisbane. In 2003, Veronica Garcia Hansen, Ken Yeang, and Ian Edmonds were awarded the Far East Economic Review Innovation Award in bronze for this development.
Light transmission efficiency is greatest if the tube is short and straight. In longer, angled, or flexible tubes, part of the light intensity is lost. To minimize losses, a high reflectivity of the tube lining is crucial; manufacturers claim reflectivities of their materials, in the visible range, of up to almost 99.5 percent.
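The effect of lining reflectivity can be illustrated with a simple geometric estimate: in a straight tube of length L and diameter d, a ray entering at an angle θ to the tube axis bounces roughly L·tan(θ)/d times, and each bounce multiplies its intensity by the reflectivity R. This is only a rough single-ray model with assumed dimensions, not a manufacturer's design method:

import math

def transmission(reflectivity, length_m, diameter_m, angle_deg):
    """Rough single-ray estimate of the fraction of light surviving a straight tube."""
    bounces = length_m * math.tan(math.radians(angle_deg)) / diameter_m
    return reflectivity ** bounces

# A 2 m long, 0.35 m wide tube and a ray entering 30 degrees off-axis:
for r in (0.95, 0.98, 0.995):
    print(r, round(transmission(r, 2.0, 0.35, 30.0), 3))  # roughly 0.84, 0.94 and 0.98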
At the end point (the point of use), a diffuser spreads the light into the room.
The first full-scale passive horizontal light pipes were built at the Daylight Lab at Texas A&M University, where the annual daylight performance was thoroughly evaluated in a 360-degree rotating room, 6 m wide by 10 m deep. The pipe is coated with a 99.3% specular reflective film, and the distribution element at the end of the light pipe consists of a 4.6 m long diffusing radial film with an 87% visible transmittance. The light pipe consistently delivers illuminance levels ranging between 300 and 2,500 lux throughout the year at distances of 7.6 m to 10 m.
To further optimize the use of solar light, a heliostat can be installed which tracks the movement of the sun, thereby directing sunlight into the light tube at all times of the day as far as the surroundings' limitations allow, possibly with additional mirrors or other reflective elements that influence the light path. The heliostat can be set to capture moonlight at night.
Optical fiber
Optical fibers can also be used for daylighting. A solar lighting system based on plastic optical fibers was in development at Oak Ridge National Laboratory in 2004. The system was installed at the American Museum of Science and Energy, Tennessee, USA, in 2005, and brought to market the same year by the company Sunlight Direct. However, this system was taken off the market in 2009.
In view of the usually small diameter of the fibers, an efficient daylighting set-up requires a parabolic collector to track the sun and concentrate its light.
Optical fibers intended for light transport need to propagate as much light as possible within the core; in contrast, optical fibers intended for light distribution are designed to let part of the light leak through their cladding.
Optical fibers are also used in the Bjork system sold by Parans Solar Lighting AB. The optic fibers in this system are made of PMMA (PolyMethyl MethAcrylate) and sheathed with Megolon, a halogen-free thermoplastic resin. A system such as this, however, is quite expensive.
The Parans system consists of three parts. A collector, fiber optic cables, and luminaires spreading the light indoors. One or more collectors are placed on or near the building in a place where they will have good access to direct sunlight. The collector consists of lenses mounted in aluminum profiles with a covering glass as protection. These lenses concentrate sunlight down in the fiber optic cables.
The collectors are modular, which means they come with either 4, 6, 8, 12 or 20 cables depending on the need. Every cable can have an individual length. The fiber optic cables transport the natural light 100 meters (about 30 floors) into and through the property while retaining both a high level of light quality and light intensity. Examples of implementations are Kastrup Airport, the University of Arizona and Stockholm University.
A similar system, but using optical fibers of glass, had earlier been under study in Japan.
Corning Inc. makes Fibrance Light-Diffusing Fiber. Fibrance works by shining a laser through a light-diffusing fiber optic cable. The cable gives off a lighted glow.
Optical fibers are used in fiberscopes for imaging applications.
Transparent hollow light guides
A prism light guide was developed in 1981 by Lorne Whitehead, a physics professor at the University of British Columbia, and has been used in solar lighting for both the transport and distribution of light. A large solar pipe based on the same principle was set up in the narrow courtyard of a 14-floor building of a Washington, D.C. law firm in 2001, and a similar proposal has been made for London. A further system has been installed in Berlin.
The 3M company developed a system based on optical lighting film and developed the 3M light pipe, which is a light guide designed to distribute light uniformly over its length, with a thin film incorporating microscopic prisms, which has been marketed in connection with artificial light sources, e.g. sulfur lamps.
In contrast to an optical fiber which has a solid core, a prism light guide leads the light through air and is therefore referred to as a hollow light guide.
The project ARTHELIO, partially funded by the European Commission, was an investigation in years 1998 to 2000 into a system for adaptive mixing of solar and artificial light, and which includes a sulfur lamp, a heliostat, and hollow light guides for light transport and distribution.
Disney has experimented with using 3D printing to print internal light guides for illuminated toys.
Fluorescence based system
In a system developed by Fluorosolar and the University of Technology, Sydney, two fluorescent polymer layers in a flat panel capture short wave sunlight, particularly ultraviolet light, generating red and green light, respectively, which is guided into the interior of a building. There, the red and green light is mixed with artificial blue light to yield white light, without infrared or ultraviolet. This system, which collects light without requiring mobile parts such as a heliostat or a parabolic collector, is intended to transfer light to any place within a building. By capturing ultraviolet, the system can be especially effective on bright but overcast days; this is since ultraviolet is diminished less by cloud cover than are the visible components of sunlight.
Properties and applications
Solar and hybrid lighting systems
Solar light pipes, compared to conventional skylights and other windows, offer better heat insulation properties and more flexibility for use in inner rooms, but less visual contact with the external environment.
In the context of seasonal affective disorder, it may be worth considering that an additional installation of light tubes increases the amount of natural daily light exposure. It could thus possibly contribute to residents' or employees' well-being while avoiding over-illumination effects.
Compared to artificial lights, light tubes have the advantage of providing natural light and of saving energy. The transmitted light varies over the day; should this not be desired, light tubes can be combined with artificial light in a hybrid set-up.
Some artificial light sources are marketed which have a spectrum similar to that of sunlight, at least in the human visible spectrum range, as well as low flicker. Their spectrum can be made to vary dynamically such as to mimic changes in natural light over the day. Manufacturers and vendors of such light sources claim that their products can provide the same or similar health effects as natural light. When considered as alternatives to solar light pipes, such products may have lower installation costs but do consume energy during use; therefore they may well be more wasteful in terms of overall energy resources and costs.
On a more practical note, light tubes do not require electric installations or insulation and are thus especially useful for indoor wet areas such as bathrooms and pools. From a more artistic point of view, recent developments, especially those pertaining to transparent light tubes, open new and interesting possibilities for architectural lighting design.
Security applications
Due to their relatively small size and high light output, sun pipes are well suited to security-oriented situations, such as prisons, police cells, and other locations where restricted access is required. Because the pipes are of narrow diameter and are not greatly affected by internal security grilles, they provide daylight to such areas without requiring electrical connections or creating escape access, and without allowing objects to be passed into a secure area.
In electronic devices
Moulded plastic light tubes are commonly used in the electronics industry to direct illumination from LEDs on a circuit board to indicator symbols or buttons. These light tubes typically take on a highly complex shape that uses either gentle curving bends as in an optic fiber or has sharp prismatic folds which reflect off the angled corners. Multiple light tubes are often moulded from a single piece of plastic, permitting easy device assembly since the long thin light tubes are all part of a single rigid component that snaps into place.
Light tube indicators make electronics cheaper to manufacture: previously, a tiny lamp had to be mounted in a small socket directly behind each spot to be illuminated, which often required extensive hand labor for installation and wiring. Light tubes permit all lights to be mounted on a single flat circuit board, while the illumination is directed up and away from the board wherever it is required.
| Technology | Lighting | null |
19593040 | https://en.wikipedia.org/wiki/Celsius | Celsius | The degree Celsius is the unit of temperature on the Celsius temperature scale (originally known as the centigrade scale outside Sweden), one of two temperature scales used in the International System of Units (SI), the other being the closely related Kelvin scale. The degree Celsius (symbol: °C) can refer to a specific point on the Celsius temperature scale or to a difference or range between two temperatures. It is named after the Swedish astronomer Anders Celsius (1701–1744), who proposed the first version of it in 1742. The unit was called centigrade in several languages (from the Latin centum, which means 100, and gradus, which means steps) for many years. In 1948, the International Committee for Weights and Measures renamed it to honor Celsius and also to remove confusion with the term for one hundredth of a gradian in some languages. Most countries use this scale (the Fahrenheit scale is still used in the United States, some island territories, and Liberia).
Throughout the 19th century, the scale was based on 0 °C for the freezing point of water and 100 °C for the boiling point of water at 1 atm pressure. (In Celsius's initial proposal, the values were reversed: the boiling point was 0 degrees and the freezing point was 100 degrees.)
Between 1954 and 2019, the precise definitions of the unit and the Celsius temperature scale used absolute zero and the triple point of water. Since 2007, the Celsius temperature scale has been defined in terms of the kelvin, the SI base unit of thermodynamic temperature (symbol: K). Absolute zero, the lowest temperature, is now defined as being exactly 0 K and −273.15 °C.
History
In 1742, Swedish astronomer Anders Celsius (1701–1744) created a temperature scale that was the reverse of the scale now known as "Celsius": 0 represented the boiling point of water, while 100 represented the freezing point of water. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that the melting point of ice is essentially unaffected by pressure. He also determined with remarkable precision how the boiling point of water varied as a function of atmospheric pressure. He proposed that the zero point of his temperature scale, being the boiling point, would be calibrated at the mean barometric pressure at mean sea level. This pressure is known as one standard atmosphere. The BIPM's 10th General Conference on Weights and Measures (CGPM) in 1954 defined one standard atmosphere to equal precisely 1,013,250 dynes per square centimeter (101.325 kPa).
In 1743, the French physicist Jean-Pierre Christin, permanent secretary of the Academy of Lyon, inverted the Celsius temperature scale so that 0 represented the freezing point of water and 100 represented the boiling point of water. Some credit Christin for independently inventing the reverse of Celsius's original scale, while others believe Christin merely reversed Celsius's scale. On 19 May 1743 he published the design of a mercury thermometer, the "Thermometer of Lyon" built by the craftsman Pierre Casati that used this scale.
In 1744, coincident with the death of Anders Celsius, the Swedish botanist Carl Linnaeus (1707–1778) reversed Celsius's scale. His custom-made "Linnaeus-thermometer", for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time, whose workshop was located in the basement of the Stockholm observatory. As often happened in this age before modern communications, numerous physicists, scientists, and instrument makers are credited with having independently developed this same scale; among them were Pehr Elvius, the secretary of the Royal Swedish Academy of Sciences (which had an instrument workshop) and with whom Linnaeus had been corresponding; Daniel Ekström, the instrument maker; and Mårten Strömer (1707–1770) who had studied astronomy under Anders Celsius.
The first known Swedish document reporting temperatures in this modern "forward" Celsius temperature scale is the paper Hortus Upsaliensis dated 16 December 1745 that Linnaeus wrote to a student of his, Samuel Nauclér. In it, Linnaeus recounted the temperatures inside the orangery at the University of Uppsala Botanical Garden:
"Centigrade" versus "Celsius"
Since the 19th century, the scientific and thermometry communities worldwide have used the phrase "centigrade scale" and temperatures were often reported simply as "degrees" or, when greater specificity was desired, as "degrees centigrade", with the symbol °C.
In the French language, the term centigrade also means one hundredth of a gradian, when used for angular measurement. The term centesimal degree was later introduced for temperatures but was also problematic, as it means gradian (one hundredth of a right angle) in the French and Spanish languages. The risk of confusion between temperature and angular measurement was eliminated in 1948 when the 9th meeting of the General Conference on Weights and Measures and the Comité International des Poids et Mesures (CIPM) formally adopted "degree Celsius" for temperature.
While "Celsius" is commonly used in scientific work, "centigrade" is still used in French and English-speaking countries, especially in informal contexts. The frequency of the usage of "centigrade" has declined over time.
Due to metrication in Australia, after 1 September 1972 weather reports in the country were exclusively given in Celsius. In the United Kingdom, it was not until February 1985 that forecasts by BBC Weather switched from "centigrade" to "Celsius".
Common temperatures
All phase transitions are at standard atmosphere. Figures are either by definition, or approximated from empirical measurements.
Name and symbol typesetting
The "degree Celsius" has been the only SI unit whose full unit name contains an uppercase letter since 1967, when the SI base unit for temperature became the kelvin, replacing the capitalized term degrees Kelvin. The plural form is "degrees Celsius".
The general rule of the International Bureau of Weights and Measures (BIPM) is that the numerical value always precedes the unit, and a space is always used to separate the unit from the number, e.g. "23 °C" (not "23°C" or "23 ° C"). The only exceptions to this rule are the unit symbols for degree, minute, and second for plane angle (°, ′, and ″, respectively), for which no space is left between the numerical value and the unit symbol. Other languages, and various publishing houses, may follow different typographical rules.
Unicode character
Unicode provides the Celsius symbol at code point U+2103 ℃ DEGREE CELSIUS. However, this is a compatibility character provided for roundtrip compatibility with legacy encodings. It easily allows correct rendering for vertically written East Asian scripts, such as Chinese. The Unicode standard explicitly discourages the use of this character: "In normal use, it is better to represent degrees Celsius '°C' with a sequence of U+00B0 DEGREE SIGN + U+0043 LATIN CAPITAL LETTER C, rather than U+2103 DEGREE CELSIUS. For searching, treat these two sequences as identical."
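A short illustration of this compatibility mapping (Python's standard unicodedata module; NFKC normalization applies the compatibility decomposition of U+2103):

import unicodedata

single = "\u2103"        # '℃'  DEGREE CELSIUS (single compatibility character)
sequence = "\u00b0C"     # '°C' DEGREE SIGN followed by LATIN CAPITAL LETTER C

print(unicodedata.normalize("NFKC", single) == sequence)  # True: they normalize to the same text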
Temperatures and intervals
The degree Celsius is subject to the same rules as the kelvin with regard to the use of its unit name and symbol. Thus, besides expressing specific temperatures along its scale (e.g. "Gallium melts at 29.7646 °C" and "The temperature outside is 23 degrees Celsius"), the degree Celsius is also suitable for expressing temperature intervals: differences between temperatures or their uncertainties (e.g. "The output of the heat exchanger is hotter by 40 degrees Celsius", and "Our standard uncertainty is ±3 °C"). Because of this dual usage, one must not rely upon the unit name or its symbol to denote that a quantity is a temperature interval; it must be unambiguous through context or explicit statement that the quantity is an interval. This is sometimes solved by using the symbol °C (pronounced "degrees Celsius") for a temperature, and C° (pronounced "Celsius degrees") for a temperature interval, although this usage is non-standard. Another way to express the same distinction, commonly found in the literature, is to state the interval in kelvins (e.g. "±3 K").
The Celsius scale is an interval scale, not a ratio scale, and it is a relative scale rather than an absolute one. For example, an object at 20 °C does not have twice the energy it has at 10 °C, and 0 °C is not the lowest possible Celsius value. Thus, degrees Celsius are a useful interval measurement but do not possess the characteristics of ratio measures like weight or distance.
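A small sketch of the distinction between converting an absolute temperature and converting an interval (plain Python; the function names are purely illustrative):

def celsius_to_kelvin(temp_c):
    """Convert an absolute temperature: the zero point shifts by 273.15."""
    return temp_c + 273.15

def celsius_interval_to_kelvin(delta_c):
    """Convert a temperature interval: the magnitude is unchanged."""
    return delta_c

print(celsius_to_kelvin(20.0) / celsius_to_kelvin(10.0))  # about 1.035, not 2
print(celsius_interval_to_kelvin(40.0))                   # a 40 Celsius-degree interval is 40 K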
Coexistence with Kelvin
In science and in engineering, the Celsius and Kelvin scales are often used in combination in close contexts, e.g. "a measured value was 0.01023 °C with an uncertainty of 70 μK". This practice is permissible because the magnitude of the degree Celsius is equal to that of the kelvin. Notwithstanding the official endorsement provided by decision no. 3 of Resolution 3 of the 13th CGPM, which stated "a temperature interval may also be expressed in degrees Celsius", the practice of simultaneously using both °C and K remains widespread throughout the scientific world as the use of SI-prefixed forms of the degree Celsius (such as "μ°C" or "microdegrees Celsius") to express a temperature interval has not been widely adopted.
Melting and boiling points of water
The melting and boiling points of water are no longer part of the definition of the Celsius temperature scale. In 1948, the definition was changed to use the triple point of water. In 2005, the definition was further refined to use water with precisely defined isotopic composition (VSMOW) for the triple point. In 2019, the definition was changed to use the Boltzmann constant, completely decoupling the definition of the kelvin from the properties of water. Each of these formal definitions left the numerical values of the Celsius temperature scale identical to the prior definition to within the limits of accuracy of the metrology of the time.
When the melting and boiling points of water ceased being part of the definition, they became measured quantities instead. This is also true of the triple point.
In 1948 when the 9th General Conference on Weights and Measures (CGPM) in Resolution 3 first considered using the triple point of water as a defining point, the triple point was so close to being 0.01 °C greater than water's known melting point, it was simply defined as precisely 0.01 °C. However, later measurements showed that the difference between the triple and melting points of VSMOW is actually very slightly (< 0.001 °C) greater than 0.01 °C. Thus, the actual melting point of ice is very slightly (less than a thousandth of a degree) below 0 °C. Also, defining water's triple point at 273.16 K precisely defined the magnitude of each 1 °C increment in terms of the absolute thermodynamic temperature scale (referencing absolute zero). Now decoupled from the actual boiling point of water, the value "100 °C" is hotter than 0 °C – in absolute terms – by a factor of exactly 373.15/273.15 (approximately 36.61% thermodynamically hotter). When adhering strictly to the two-point definition for calibration, the boiling point of VSMOW under one standard atmosphere of pressure was actually 373.1339 K (99.9839 °C). When calibrated to ITS-90 (a calibration standard comprising many definition points and commonly used for high-precision instrumentation), the boiling point of VSMOW was slightly less, about 99.974 °C.
This boiling-point difference of 16.1 millikelvins between the Celsius temperature scale's original definition and the previous one (based on absolute zero and the triple point) has little practical meaning in common daily applications because water's boiling point is very sensitive to variations in barometric pressure. For example, an altitude change of only 28 cm causes the boiling point to change by one millikelvin.
| Physical sciences | Temperature | null |
19593121 | https://en.wikipedia.org/wiki/Kelvin | Kelvin | The kelvin (symbol: K) is the base unit for temperature in the International System of Units (SI). The Kelvin scale is an absolute temperature scale that starts at the lowest possible temperature (absolute zero), taken to be 0 K. By definition, the Celsius scale (symbol °C) and the Kelvin scale have the exact same magnitude; that is, a rise of 1 K is equal to a rise of 1 °C and vice versa, and any temperature in degrees Celsius can be converted to kelvin by adding 273.15.
The 19th century British scientist Lord Kelvin first developed and proposed the scale. It was often called the "absolute Celsius" scale in the early 20th century. The kelvin was formally added to the International System of Units in 1954, defining 273.16 K to be the triple point of water. The Celsius, Fahrenheit, and Rankine scales were redefined in terms of the Kelvin scale using this definition. The 2019 revision of the SI now defines the kelvin in terms of energy by setting the Boltzmann constant to exactly 1.380649×10⁻²³ J/K; every 1 K change of thermodynamic temperature corresponds to a thermal energy change of exactly 1.380649×10⁻²³ J.
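A one-line illustration of the 2019 definition (Python; the constant is the exact value fixed by the SI, and the helper name is purely illustrative):

BOLTZMANN_CONSTANT = 1.380649e-23  # J/K, exact by definition since 2019

def thermal_energy_change(delta_t_kelvin):
    """Thermal energy corresponding to a temperature change: E = k_B * delta T."""
    return BOLTZMANN_CONSTANT * delta_t_kelvin

print(thermal_energy_change(1.0))    # 1.380649e-23 J for a 1 K change
print(thermal_energy_change(300.0))  # about 4.14e-21 J, the characteristic room-temperature scale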
History
Precursors
During the 18th century, multiple temperature scales were developed, notably Fahrenheit and centigrade (later Celsius). These scales predated much of the modern science of thermodynamics, including atomic theory and the kinetic theory of gases which underpin the concept of absolute zero. Instead, they chose defining points within the range of human experience that could be reproduced easily and with reasonable accuracy, but lacked any deep significance in thermal physics. In the case of the Celsius scale (and the long since defunct Newton scale and Réaumur scale) the melting point of ice served as such a starting point, with Celsius being defined (from the 1740s to the 1940s) by calibrating a thermometer such that:
Water's freezing point is 0 °C.
Water's boiling point is 100 °C.
This definition assumes pure water at a specific pressure chosen to approximate the natural air pressure at sea level. Thus, an increment of 1 °C equals 1/100 of the temperature difference between the melting and boiling points. The same temperature interval was later used for the Kelvin scale.
Charles's law
From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted their volume linearly (Charles's law) by about 1/273 of their volume at 0 °C per degree Celsius of temperature change up or down, between 0 °C and 100 °C. Extrapolation of this law suggested that a gas cooled to about −273 °C would occupy zero volume.
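The extrapolation can be written as a one-line calculation (a schematic form, using the historical coefficient of about 1/273 per degree Celsius):

V(t) \approx V_0\left(1 + \frac{t}{273\ ^{\circ}\mathrm{C}}\right)
\qquad\Longrightarrow\qquad
V(t) = 0 \ \text{at}\ t \approx -273\ ^{\circ}\mathrm{C}.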
Lord Kelvin
First absolute scale
In 1848, William Thomson, who was later ennobled as Lord Kelvin, published a paper On an Absolute Thermometric Scale. The scale proposed in the paper turned out to be unsatisfactory, but the principles and formulas upon which the scale was based were correct. For example, in a footnote, Thomson derived the value of −273 °C for absolute zero by calculating the negative reciprocal of 0.00366—the coefficient of thermal expansion of an ideal gas per degree Celsius relative to the ice point. This derived value agrees with the currently accepted value of −273.15 °C, allowing for the precision and uncertainty involved in the calculation.
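Written out, the footnote's calculation amounts to the following (using the coefficient quoted above):

t_0 = -\frac{1}{0.00366\ (^{\circ}\mathrm{C})^{-1}} \approx -273.2\ ^{\circ}\mathrm{C}.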
The scale was designed on the principle that "a unit of heat descending from a body at the temperature T° of this scale, to a body at the temperature (T − 1)°, would give out the same mechanical effect, whatever be the number T." Specifically, Thomson expressed the amount of work necessary to produce a unit of heat (the thermal efficiency) as μ(t)·(1 + Et)/E, where t is the temperature in Celsius, E is the coefficient of thermal expansion, and μ was "Carnot's function", a substance-independent quantity depending on temperature, motivated by an obsolete version of Carnot's theorem. The scale is derived by finding a change of variables T of the temperature t such that the increment dT is proportional to μ(t) dt.
When Thomson published his paper in 1848, he only considered Regnault's experimental measurements of Carnot's function μ. That same year, James Prescott Joule suggested to Thomson that the true formula for Carnot's function was
μ = J / (273 + t),
where J is "the mechanical equivalent of a unit of heat", now referred to as the specific heat capacity of water, approximately 4184 J⋅kg⁻¹⋅K⁻¹. Thomson was initially skeptical of the deviations of Joule's formula from experiment, stating "I think it will be generally admitted that there can be no such inaccuracy in Regnault's part of the data, and there remains only the uncertainty regarding the density of saturated steam". Thomson referred to the correctness of Joule's formula as "Mayer's hypothesis", on account of it having been first assumed by Mayer. Thomson arranged numerous experiments in coordination with Joule, eventually concluding by 1854 that Joule's formula was correct and the effect of temperature on the density of saturated steam accounted for all discrepancies with Regnault's data. Therefore, in terms of the modern Kelvin scale T, the first scale could be expressed as follows:
θ = 100 × ln(T / 273.15 K) / ln(373.15 K / 273.15 K)
The parameters of the scale were arbitrarily chosen to coincide with the Celsius scale at 0 °C and 100 °C, i.e. at about 273 K and 373 K (the melting and boiling points of water). On this scale, an increase of approximately 222 degrees corresponds to a doubling of Kelvin temperature, regardless of the starting temperature, and "infinite cold" (absolute zero) has a numerical value of negative infinity.
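Both of these features follow from the logarithmic form given above (a quick editorial check): doubling T adds 100 × ln 2 / ln(373.15/273.15) ≈ 69.3/0.312 ≈ 222 degrees on the 1848 scale whatever the starting value, and θ → −∞ as T → 0.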
Modern absolute scale
Thomson understood that with Joule's proposed formula for μ, the relationship between work and heat for a perfect thermodynamic engine was simply the constant J. In 1854, Thomson and Joule thus formulated a second absolute scale that was more practical and convenient, agreeing with air thermometers for most purposes. Specifically, "the numerical measure of temperature shall be simply the mechanical equivalent of the thermal unit divided by Carnot's function."
To explain this definition, consider a reversible Carnot cycle engine, where Q_H is the amount of heat energy transferred into the system, Q_C is the heat leaving the system, W is the work done by the system (W = Q_H − Q_C), t_H is the temperature of the hot reservoir in Celsius, and t_C is the temperature of the cold reservoir in Celsius. Carnot's function μ is defined from the work obtained per unit of heat per degree of temperature difference, and the absolute temperature is taken as T = J/μ. One then finds that the heats exchanged with the two reservoirs stand in the same ratio as their absolute temperatures, which is the general principle of an absolute thermodynamic temperature scale for the Carnot engine. The definition can be shown to correspond to the thermometric temperature of the ideal gas laws.
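Written out in modern notation (a standard textbook summary using the symbols introduced above rather than Thomson's original equations, with T_H and T_C the absolute temperatures of the hot and cold reservoirs):
Q_H / T_H = Q_C / T_C, so that W = Q_H − Q_C = Q_H (1 − T_C / T_H).
The two properties Thomson lists below follow from the first of these relations together with the choice of a 100-degree interval between the freezing and boiling points of water.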
This definition by itself is not sufficient. Thomson specified that the scale should have two properties:
The absolute values of two temperatures are to one another in the proportion of the heat taken in to the heat rejected in a perfect thermodynamic engine working with a source and refrigerator at the higher and lower of the temperatures respectively.
The difference of temperatures between the freezing- and boiling-points of water under standard atmospheric pressure shall be called 100 degrees (the same increment as the Celsius scale). Thomson's best estimates at the time were that the temperature of freezing water was 273.7 K and the temperature of boiling water was 373.7 K.
These two properties would be featured in all future versions of the Kelvin scale, although it was not yet known by that name. In the early decades of the 20th century, the Kelvin scale was often called the "absolute Celsius" scale, indicating Celsius degrees counted from absolute zero rather than the freezing point of water, and using the same symbol for regular Celsius degrees, °C.
Triple point standard
In 1873, William Thomson's older brother James coined the term triple point to describe the combination of temperature and pressure at which the solid, liquid, and gas phases of a substance were capable of coexisting in thermodynamic equilibrium. While any two phases could coexist along a range of temperature-pressure combinations (e.g. the boiling point of water can be affected quite dramatically by raising or lowering the pressure), the triple point condition for a given substance can occur only at a single pressure and only at a single temperature. By the 1940s, the triple point of water had been experimentally measured to be about 0.6% of standard atmospheric pressure and very close to 0.01 °C per the historical definition of Celsius then in use.
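For reference, the modern figures behind that statement (added here for concreteness): the triple point of water lies at a pressure of about 611.7 Pa, and 611.7 / 101325 ≈ 0.6% of standard atmospheric pressure, at a temperature of 273.16 K (0.01 °C).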
In 1948, the Celsius scale was recalibrated by assigning the triple point temperature of water the value of 0.01 °C exactly and allowing the melting point at standard atmospheric pressure to have an empirically determined value (and the actual melting point at ambient pressure to have a fluctuating value) close to 0 °C. This was justified on the grounds that the triple point was judged to give a more accurately reproducible reference temperature than the melting point. The triple point could be measured with ±0.0001 °C accuracy, while the melting point just to ±0.001 °C.
In 1954, with absolute zero having been experimentally determined to be about −273.15 °C per the definition of °C then in use, Resolution 3 of the 10th General Conference on Weights and Measures (CGPM) introduced a new internationally standardized Kelvin scale which defined the triple point as exactly 273.15 + 0.01 = 273.16 degrees Kelvin.
In 1967/1968, Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature "kelvin", symbol K, replacing "degree Kelvin", symbol °K. The 13th CGPM also held in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is equal to the fraction 1/273.16 of the thermodynamic temperature of the triple point of water."
After the 1983 redefinition of the metre, this left the kelvin, the second, and the kilogram as the only SI units not defined with reference to any other unit.
In 2005, noting that the triple point could be influenced by the isotopic ratio of the hydrogen and oxygen making up a water sample and that this was "now one of the major sources of the observed variability between different realizations of the water triple point", the International Committee for Weights and Measures (CIPM), a committee of the CGPM, affirmed that for the purposes of delineating the temperature of the triple point of water, the definition of the kelvin would refer to water having the isotopic composition specified for Vienna Standard Mean Ocean Water.
2019 redefinition
In 2005, the CIPM began a programme to redefine the kelvin (along with other SI base units) using a more experimentally rigorous method. In particular, the committee proposed redefining the kelvin such that the Boltzmann constant (k) would take a fixed, exact value. The committee hoped the programme would be completed in time for its adoption by the CGPM at its 2011 meeting, but at the 2011 meeting the decision was postponed to the 2014 meeting when it would be considered part of a larger programme. A challenge was to avoid degrading the accuracy of measurements close to the triple point. The redefinition was further postponed in 2014, pending more accurate measurements of the Boltzmann constant in terms of the current definition, but was finally adopted at the 26th CGPM in late 2018, with a value of k = 1.380649 × 10⁻²³ J/K.
For scientific purposes, the redefinition's main advantage is in allowing more accurate measurements at very low and very high temperatures, as the techniques used depend on the Boltzmann constant. Independence from any particular substance or measurement is also a philosophical advantage. The kelvin now only depends on the Boltzmann constant and universal constants (see 2019 SI unit dependencies diagram), allowing the kelvin to be expressed exactly as:
1 kelvin = 1.380649 × 10⁻²³ J / k, i.e. the change of thermodynamic temperature corresponding to a change of exactly 1.380649 × 10⁻²³ J in the thermal energy kT.
For practical purposes, the redefinition was unnoticed; enough digits were used for the Boltzmann constant to ensure that 273.16 K has enough significant digits to contain the uncertainty of water's triple point, and water still normally freezes at 0 °C to a high degree of precision. But before the redefinition, the triple point of water was exact and the Boltzmann constant had a measured value of 1.38064903 × 10⁻²³ J/K, with a relative standard uncertainty of 3.7 × 10⁻⁷. Afterward, the Boltzmann constant is exact and the uncertainty is transferred to the triple point of water, which is now 273.1600 ± 0.0001 K.
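The size of the transferred uncertainty can be checked directly (editorial arithmetic based on the figures above): a relative standard uncertainty of 3.7 × 10⁻⁷ applied to 273.16 K gives 3.7 × 10⁻⁷ × 273.16 K ≈ 1.0 × 10⁻⁴ K, i.e. about 0.1 mK.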
The new definition officially came into force on 20 May 2019, the 144th anniversary of the Metre Convention.
Practical uses
Colour temperature
The kelvin is often used as a measure of the colour temperature of light sources. Colour temperature is based upon the principle that a black body radiator emits light with a frequency distribution characteristic of its temperature. Black bodies at temperatures below about 4000 K appear reddish, whereas those above about 7500 K appear bluish. Colour temperature is important in the fields of image projection and photography, where a colour temperature of approximately 5600 K is required to match "daylight" film emulsions.
In astronomy, the stellar classification of stars and their place on the Hertzsprung–Russell diagram are based, in part, upon their surface temperature, known as effective temperature. The photosphere of the Sun, for instance, has an effective temperature of 5772 K, as adopted by IAU 2015 Resolution B3.
Digital cameras and photographic software often use colour temperature in K in edit and setup menus. The simple guide is that higher colour temperature produces an image with enhanced white and blue hues. The reduction in colour temperature produces an image more dominated by reddish, "warmer" colours.
Kelvin as a unit of noise temperature
For electronics, the kelvin is used as an indicator of how noisy a circuit is in relation to an ultimate noise floor, i.e. the noise temperature. The Johnson–Nyquist noise of resistors (which produces an associated kTC noise when combined with capacitors) is a type of thermal noise derived from the Boltzmann constant and can be used to determine the noise temperature of a circuit using the Friis formulas for noise.
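The underlying relation is the standard Johnson–Nyquist result (stated here for concreteness, not taken from this article): a resistor R at temperature T produces a mean-square open-circuit noise voltage of 4·k·T·R·Δf over a bandwidth Δf, and sampling that noise onto a capacitor C gives the kTC result of a mean-square voltage k·T/C, so a measured noise power can be converted into an equivalent noise temperature in kelvins.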
Derived units and SI multiples
The only SI derived unit with a special name derived from the kelvin is the degree Celsius. Like other SI units, the kelvin can also be modified by adding a metric prefix that multiplies it by a power of 10; for example, the millikelvin (mK) is one thousandth of a kelvin.
Orthography
According to SI convention, the kelvin is never referred to nor written as a degree. The word "kelvin" is not capitalized when used as a unit. It may be used in plural form as appropriate (for example, "it is 283 kelvins outside", just as "it is 50 degrees Fahrenheit" or "10 degrees Celsius"). The unit's symbol K is a capital letter, per the SI convention to capitalize symbols of units derived from the name of a person. It is common convention to capitalize Kelvin when referring to Lord Kelvin or the Kelvin scale.
The unit symbol K is encoded in Unicode at code point U+212A KELVIN SIGN. However, this is a compatibility character provided for compatibility with legacy encodings. The Unicode standard recommends using U+004B LATIN CAPITAL LETTER K instead; that is, a normal capital K. "Three letterlike symbols have been given canonical equivalence to regular letters: U+2126 OHM SIGN, U+212A KELVIN SIGN, and U+212B ANGSTROM SIGN. In all three instances, the regular letter should be used."
| Physical sciences | Temperature | null |
19593167 | https://en.wikipedia.org/wiki/Heat | Heat | In thermodynamics, heat is energy in transfer between a thermodynamic system and its surroundings by modes other than thermodynamic work and transfer of matter. Such modes are microscopic, mainly thermal conduction, radiation, and friction, as distinct from the macroscopic modes, thermodynamic work and transfer of matter. For a closed system (transfer of matter excluded), the heat involved in a process is the difference in internal energy between the final and initial states of the system, plus the work done by the system on its surroundings during the process. For a closed system, this is the formulation of the first law of thermodynamics.
Calorimetry is measurement of quantity of energy transferred as heat by its effect on the states of interacting bodies, for example, by the amount of ice melted or by change in temperature of a body.
In the International System of Units (SI), the unit of measurement for heat, as a form of energy, is the joule (J).
With various other meanings, the word 'heat' is also used in engineering, and it occurs also in ordinary language, but such are not the topic of the present article.
Notation and units
As a form of energy, heat has the unit joule (J) in the International System of Units (SI). In addition, many applied branches of engineering use other, traditional units, such as the British thermal unit (BTU) and the calorie. The standard unit for the rate of heating is the watt (W), defined as one joule per second.
The symbol Q for heat was introduced by Rudolf Clausius and Macquorn Rankine in c. 1850.
Heat released by a system into its surroundings is, by convention (as a contribution to internal energy), a negative quantity (Q < 0); when a system absorbs heat from its surroundings, it is positive (Q > 0). The heat transfer rate, or heat flow per unit time, is denoted by Q̇, but it is not a time derivative of a function of state (which can also be written with the dot notation) since heat is not a function of state. Heat flux is defined as the rate of heat transfer per unit cross-sectional area (watts per square metre).
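Expressed symbolically (standard definitions, added for clarity): if heat crosses a surface of area A at a rate Q̇, the heat flux is q = Q̇ / A, with SI units of W·m⁻².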
History
In common language, English 'heat' or 'warmth', just as French chaleur, German Hitze or Wärme, Latin calor, Greek θάλπος, etc. refers to either thermal energy or temperature, or the human perception of these. Later, chaleur (as used by Sadi Carnot), 'heat', and Wärme became equivalents also as specific scientific terms at an early stage of thermodynamics. Speculation on 'heat' as a separate form of matter has a long history, involving the phlogiston theory, the caloric theory, and fire. Many careful and accurate historical experiments practically excluded friction, mechanical and thermodynamic work, and matter transfer, investigating transfer of energy only by thermal conduction and radiation; such experiments gave impressive rational support to the caloric theory of heat. To account also for changes of internal energy due to friction, and mechanical and thermodynamic work, the caloric theory was, in the course of the nineteenth century, replaced by the "mechanical" theory of heat, which is accepted today.
17th century–early 18th century
"Heat is motion"
As scientists of the early modern age began to adopt the view that matter consists of particles, a close relationship between heat and the motion of those particles was widely surmised, or even the equivalency of the concepts, boldly expressed by the English philosopher Francis Bacon in 1620. "It must not be thought that heat generates motion, or motion heat (though in some respects this be true), but that the very essence of heat ... is motion and nothing else." "not a ... motion of the whole, but of the small particles of the body." In The Assayer (published 1623) Galileo Galilei, in turn, described heat as an artifact of our minds.
Galileo wrote that heat and pressure are apparent properties only, caused by the movement of particles, which is a real phenomenon. In 1665, and again in 1681, English polymath Robert Hooke reiterated that heat is nothing but the motion of the constituent particles of objects, and in 1675, his colleague, Anglo-Irish scientist Robert Boyle repeated that this motion is what heat consists of.
Heat has been discussed in ordinary language by philosophers. An example is this 1720 quote from the English philosopher John Locke: "Heat is a very brisk agitation of the insensible parts of the object, which produces in us that sensation from whence we denominate the object hot; so what in our sensation is heat, in the object is nothing but motion."
When Bacon, Galileo, Hooke, Boyle and Locke wrote “heat”, they may well have been referring to what we would now call “temperature”. No clear distinction was made between heat and temperature until the mid-18th century, nor between the internal energy of a body and the transfer of energy as heat until the mid-19th century.
Locke's description of heat was repeatedly quoted by English physicist James Prescott Joule. Also the transfer of heat was explained by the motion of particles. Scottish physicist and chemist Joseph Black wrote: "Many have supposed that heat is a tremulous ... motion of the particles of matter, which ... motion they imagined to be communicated from one body to another." John Tyndall's Heat Considered as a Mode of Motion (1863) was instrumental in popularizing the idea of heat as motion to the English-speaking public. The theory was developed in academic publications in French, English and German.
18th century
Heat vs. temperature
Unstated distinctions between heat and “hotness” may be very old, with heat seen as something dependent on the quantity of a hot substance and only vaguely distinct from the quality of “hotness”. In 1723, the English mathematician Brook Taylor measured the temperature—the expansion of the liquid in a thermometer—of mixtures of various amounts of hot water in cold water. As expected, the increase in temperature was in proportion to the proportion of hot water in the mixture. The distinction between heat and temperature is implicitly expressed in the last sentence of his report.
Evaporative cooling
In 1748, an account was published in The Edinburgh Physical and Literary Essays of an experiment by the Scottish physician and chemist William Cullen. Cullen had used an air pump to lower the pressure in a container with diethyl ether. The ether boiled, while no heat was withdrawn from it, and its temperature decreased. And in 1758 on a warm day in Cambridge, England, Benjamin Franklin and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually reaching 7 °F (−14 °C).
Discovery of specific heat
In 1756 or soon thereafter, Joseph Black, Cullen’s friend and former assistant, began an extensive study of heat. In 1760 Black realized that when two different substances of equal mass but different temperatures are mixed, the changes in number of degrees in the two substances differ, though the heat gained by the cooler substance and lost by the hotter is the same. Black related an experiment conducted by Daniel Gabriel Fahrenheit on behalf of Dutch physician Herman Boerhaave. For clarity, he then described a hypothetical but realistic variant of the experiment: If equal masses of 100 °F water and 150 °F mercury are mixed, the water temperature increases by 20 ° and the mercury temperature decreases by 30 ° (both arriving at 120 °F), even though the heat gained by the water and lost by the mercury is the same. This clarified the distinction between heat and temperature. It also introduced the concept of specific heat capacity, being different for different substances. Black wrote: "Quicksilver [mercury] ... has less capacity for the matter of heat than water."
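In modern terms, the bookkeeping of Black's hypothetical mixing experiment runs as follows (an illustrative calculation using his idealized numbers, not measured values): with equal masses m, the heat balance m·c_water·(120 − 100) = m·c_mercury·(150 − 120) gives c_mercury ≈ (20/30)·c_water, i.e. on these numbers mercury would have about two thirds the specific heat capacity of water, which is the sense in which it "has less capacity for the matter of heat".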
Degrees of heat
In his investigations of specific heat, Black used a unit of heat he called "degrees of heat"—as opposed to just "degrees" [of temperature]. This unit was context-dependent and could only be used when circumstances were identical. It was based on change in temperature multiplied by the mass of the substance involved.
Discovery of latent heat
It was known that when the air temperature rises above freezing—air then becoming the obvious heat source—snow melts very slowly and the temperature of the melted snow is close to its freezing point. In 1757, Black started to investigate if heat, therefore, was required for the melting of a solid, independent of any rise in temperature. As far as Black knew, the general view at that time was that melting was inevitably accompanied by a small increase in temperature, and that no more heat was required than what the increase in temperature would require in itself. Soon, however, Black was able to show that much more heat was required during melting than could be explained by the increase in temperature alone. He was also able to show that heat is released by a liquid during its freezing; again, much more than could be explained by the decrease of its temperature alone.
In 1762, Black announced the following research and results to a society of professors at the University of Glasgow. Black had placed equal masses of ice at 32 °F (0 °C) and water at 33 °F (0.6 °C) respectively in two identical, well separated containers. The water and the ice were both evenly heated to 40 °F by the air in the room, which was at a constant 47 °F (8 °C). The water had therefore received 40 – 33 = 7 “degrees of heat”. The ice had been heated for 21 times longer and had therefore received 7 × 21 = 147 “degrees of heat”. The temperature of the ice had increased by 8 °F. The ice had now absorbed an additional 8 “degrees of heat”, which Black called sensible heat, manifest as temperature change, which could be felt and measured. 147 – 8 = 139 “degrees of heat” were also absorbed as latent heat, manifest as phase change rather than as temperature change.
Black next showed that a water temperature of 176 °F was needed to melt an equal mass of ice until it was all 32 °F. So now 176 – 32 = 144 “degrees of heat” seemed to be needed to melt the ice. The modern value for the heat of fusion of ice would be 143 “degrees of heat” on the same scale (79.5 “degrees of heat Celsius”).
Finally Black increased the temperature of and vaporized respectively two equal masses of water through even heating. He showed that 830 “degrees of heat” was needed for the vaporization; again based on the time required. The modern value for the heat of vaporization of water would be 967 “degrees of heat” on the same scale.
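A rough modern cross-check of those figures (editorial arithmetic, not Black's): one "degree of heat" on this scale is the heat that warms a unit mass of water by 1 °F, about 4184 J·kg⁻¹·K⁻¹ × 5/9 ≈ 2324 J/kg. The modern latent heats of fusion (≈ 334 kJ/kg) and vaporization (≈ 2257 kJ/kg) of water then correspond to roughly 334000/2324 ≈ 144 and 2257000/2324 ≈ 970 "degrees of heat", in line with the values of 143 and 967 quoted above.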
First calorimeter
A calorimeter is a device used for measuring heat capacity, as well as the heat absorbed or released in chemical reactions or physical changes. In 1780, French chemist Antoine Lavoisier used such an apparatus—which he named 'calorimeter'—to investigate the heat released by respiration, by observing how this heat melted snow surrounding his apparatus. A so-called ice calorimeter was used in 1782–83 by Lavoisier and his colleague Pierre-Simon Laplace to measure the heat released in various chemical reactions. The heat so released melted a specific amount of ice, and the heat required for the melting of a certain amount of ice was known beforehand.
Classical thermodynamics
The modern understanding of heat is often partly attributed to Benjamin Thompson's 1798 mechanical theory of heat (An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction), postulating a mechanical equivalent of heat.
A collaboration between Nicolas Clément and Sadi Carnot (Reflections on the Motive Power of Fire) in the 1820s developed related ideas along similar lines. In 1842, Julius Robert Mayer frictionally generated heat in paper pulp and measured the temperature rise. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on heat production by friction in the passage of electricity through a resistor and in the rotation of a paddle in a vat of water. The theory of classical thermodynamics matured in the 1850s to 1860s.
Clausius (1850)
In 1850, Clausius, responding to Joule's experimental demonstrations of heat production by friction, rejected the caloric doctrine of conservation of heat, writing:
If we assume that heat, like matter, cannot be lessened in quantity, we must also assume that it cannot be increased; but it is almost impossible to explain the ascension of temperature brought about by friction otherwise than by assuming an actual increase of heat. The careful experiments of Joule, who developed heat in various ways by the application of mechanical force, establish almost to a certainty, not only the possibility of increasing the quantity of heat, but also the fact that the newly-produced heat is proportional to the work expended in its production. It may be remarked further, that many facts have lately transpired which tend to overthrow the hypothesis that heat is itself a body, and to prove that it consists in a motion of the ultimate particles of bodies.
The process function Q was introduced by Rudolf Clausius in 1850.
Clausius described it with the German compound Wärmemenge, translated as "amount of heat".
James Clerk Maxwell (1871)
James Clerk Maxwell in his 1871 Theory of Heat outlines four stipulations for the definition of heat:
It is something which may be transferred from one body to another, according to the second law of thermodynamics.
It is a measurable quantity, and so can be treated mathematically.
It cannot be treated as a material substance, because it may be transformed into something that is not a material substance, e.g., mechanical work.
Heat is one of the forms of energy.
Bryan (1907)
In 1907, G.H. Bryan published an investigation of the foundations of thermodynamics, Thermodynamics: an Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig.
Bryan was writing when thermodynamics had been established empirically, but people were still interested to specify its logical structure. The 1909 work of Carathéodory also belongs to this historical era. Bryan was a physicist while Carathéodory was a mathematician.
Bryan started his treatise with an introductory chapter on the notions of heat and of temperature. He gives an example of where the notion of heating as raising a body's temperature contradicts the notion of heating as imparting a quantity of heat to that body.
He defined an adiabatic transformation as one in which the body neither gains nor loses heat. This is not quite the same as defining an adiabatic transformation as one that occurs to a body enclosed by walls impermeable to radiation and conduction.
He recognized calorimetry as a way of measuring quantity of heat. He recognized water as having a temperature of maximum density. This makes water unsuitable as a thermometric substance around that temperature. He intended to remind readers of why thermodynamicists preferred an absolute scale of temperature, independent of the properties of a particular thermometric substance.
His second chapter started with the recognition of friction as a source of heat, by Benjamin Thompson, by Humphry Davy, by Robert Mayer, and by James Prescott Joule.
He stated the First Law of Thermodynamics, or Mayer–Joule Principle as follows:
When heat is transformed into work or conversely work is transformed into heat, the quantity of heat gained or lost is proportional to the quantity of work lost or gained.
He wrote:
If heat be measured in dynamical units the mechanical equivalent becomes equal to unity, and the equations of thermodynamics assume a simpler and more symmetrical form.
He explained how the caloric theory of Lavoisier and Laplace made sense in terms of pure calorimetry, though it failed to account for conversion of work into heat by such mechanisms as friction and conduction of electricity.
Having rationally defined quantity of heat, he went on to consider the second law, including the Kelvin definition of absolute thermodynamic temperature.
In section 41, he wrote:
§41. Physical unreality of reversible processes. In Nature all phenomena are irreversible in a greater or less degree. The motions of celestial bodies afford the closest approximations to reversible motions, but motions which occur on this earth are largely retarded by friction, viscosity, electric and other resistances, and if the relative velocities of moving bodies were reversed, these resistances would still retard the relative motions and would not accelerate them as they should if the motions were perfectly reversible.
He then stated the principle of conservation of energy.
He then wrote:
In connection with irreversible phenomena the following axioms have to be assumed.
(1) If a system can undergo an irreversible change it will do so.
(2) A perfectly reversible change cannot take place of itself; such a change can only be regarded as the limiting form of an irreversible change.
On page 46, thinking of closed systems in thermal connection, he wrote:
We are thus led to postulate a system in which energy can pass from one element to another otherwise than by the performance of mechanical work.
On page 47, still thinking of closed systems in thermal connection, he wrote:
§58. Quantity of Heat. Definition. When energy flows from one system or part of a system to another otherwise than by the performance of work, the energy so transferred i[s] called heat.
On page 48, he wrote:
§ 59. When two bodies act thermically on one another the quantities of heat gained by one and lost by the other are not necessarily equal.
In the case of bodies at a distance, heat may be taken from or given to the intervening medium.
The quantity of heat received by any portion of the ether may be defined in the same way as that received by a material body. [He was thinking of thermal radiation.]
Another important exception occurs when sliding takes place between two rough bodies in contact. The algebraic sum of the works done is different from zero, because, although the action and reaction are equal and opposite the velocities of the parts of the bodies in contact are different. Moreover, the work lost in the process does not increase the mutual potential energy of the system and there is no intervening medium between the bodies. Unless the lost energy can be accounted for in other ways, (as when friction produces electrification), it follows from the Principle of Conservation of Energy that the algebraic sum of the quantities of heat gained by the two systems is equal to the quantity of work lost by friction. [This thought was echoed by Bridgman, as above.]
Carathéodory (1909)
A celebrated and frequent definition of heat in thermodynamics is based on the work of Carathéodory (1909), referring to processes in a closed system. Carathéodory was responding to a suggestion by Max Born that he examine the logical structure of thermodynamics.
The internal energy of a body in an arbitrary state can be determined by amounts of work adiabatically performed by the body on its surroundings when it starts from a reference state . Such work is assessed through quantities defined in the surroundings of the body. It is supposed that such work can be assessed accurately, without error due to friction in the surroundings; friction in the body is not excluded by this definition. The adiabatic performance of work is defined in terms of adiabatic walls, which allow transfer of energy as work, but no other transfer, of energy or matter. In particular they do not allow the passage of energy as heat. According to this definition, work performed adiabatically is in general accompanied by friction within the thermodynamic system or body. On the other hand, according to Carathéodory (1909), there also exist non-adiabatic, diathermal walls, which are postulated to be permeable only to heat.
For the definition of quantity of energy transferred as heat, it is customarily envisaged that an arbitrary state of interest is reached from the reference state by a process with two components, one adiabatic and the other not adiabatic. For convenience one may say that the adiabatic component was the sum of work done by the body through volume change by movement of the walls while the non-adiabatic wall was temporarily rendered adiabatic, and of isochoric adiabatic work. Then the non-adiabatic component is a process of energy transfer through the wall that passes only heat, newly made accessible for the purpose of this transfer, from the surroundings to the body. The change in internal energy in reaching the state of interest from the reference state is the difference of the two amounts of energy transferred.
Although Carathéodory himself did not state such a definition, following his work it is customary in theoretical studies to define heat, Q, to the body from its surroundings, in the combined process of change from the reference state to the state of interest, as the change in internal energy, ΔU, plus the amount of work, W, done by the body on its surroundings in the adiabatic component of that process, so that Q = ΔU + W.
In this definition, for the sake of conceptual rigour, the quantity of energy transferred as heat is not specified directly in terms of the non-adiabatic process. It is defined through knowledge of precisely two variables, the change of internal energy and the amount of adiabatic work done, for the combined process of change from the reference state to the state of interest. It is important that this does not explicitly involve the amount of energy transferred in the non-adiabatic component of the combined process. It is assumed here that the amount of energy required to pass from the reference state to the state of interest, the change of internal energy, is known, independently of the combined process, by a determination through a purely adiabatic process, like that for the determination of the internal energy described above. The rigour that is prized in this definition is that there is one and only one kind of energy transfer admitted as fundamental: energy transferred as work. Energy transfer as heat is considered as a derived quantity. The uniqueness of work in this scheme is considered to guarantee rigor and purity of conception. The conceptual purity of this definition, based on the concept of energy transferred as work as an ideal notion, relies on the idea that some frictionless and otherwise non-dissipative processes of energy transfer can be realized in physical actuality. The second law of thermodynamics, on the other hand, assures us that such processes are not found in nature.
Before the rigorous mathematical definition of heat based on Carathéodory's 1909 paper,
historically, heat, temperature, and thermal equilibrium were presented in thermodynamics textbooks as jointly primitive notions. Carathéodory introduced his 1909 paper thus: "The proposition that the discipline of thermodynamics can be justified without recourse to any hypothesis that cannot be verified experimentally must be regarded as one of the most noteworthy results of the research in thermodynamics that was accomplished during the last century." Referring to the "point of view adopted by most authors who were active in the last fifty years", Carathéodory wrote: "There exists a physical quantity called heat that is not identical with the mechanical quantities (mass, force, pressure, etc.) and whose variations can be determined by calorimetric measurements." James Serrin introduces an account of the theory of thermodynamics thus: "In the following section, we shall use the classical notions of heat, work, and hotness as primitive elements, ... That heat is an appropriate and natural primitive for thermodynamics was already accepted by Carnot. Its continued validity as a primitive element of thermodynamical structure is due to the fact that it synthesizes an essential physical concept, as well as to its successful use in recent work to unify different constitutive theories." This traditional kind of presentation of the basis of thermodynamics includes ideas that may be summarized by the statement that heat transfer is purely due to spatial non-uniformity of temperature, and is by conduction and radiation, from hotter to colder bodies. It is sometimes proposed that this traditional kind of presentation necessarily rests on "circular reasoning".
This alternative approach to the definition of quantity of energy transferred as heat differs in logical structure from that of Carathéodory, recounted just above.
This alternative approach admits calorimetry as a primary or direct way to measure quantity of energy transferred as heat. It relies on temperature as one of its primitive concepts, used in calorimetry. It is presupposed that enough processes exist physically to allow measurement of differences in internal energies. Such processes are not restricted to adiabatic transfers of energy as work. They include calorimetry, which is the commonest practical way of finding internal energy differences. The needed temperature can be either empirical or absolute thermodynamic.
In contrast, the Carathéodory way recounted just above does not use calorimetry or temperature in its primary definition of quantity of energy transferred as heat. The Carathéodory way regards calorimetry only as a secondary or indirect way of measuring quantity of energy transferred as heat. As recounted in more detail just above, the Carathéodory way regards quantity of energy transferred as heat in a process as primarily or directly defined as a residual quantity. It is calculated from the difference of the internal energies of the initial and final states of the system, and from the actual work done by the system during the process. That internal energy difference is supposed to have been measured in advance through processes of purely adiabatic transfer of energy as work, processes that take the system between the initial and final states. By the Carathéodory way it is presupposed as known from experiment that there actually physically exist enough such adiabatic processes, so that there need be no recourse to calorimetry for measurement of quantity of energy transferred as heat. This presupposition is essential but is explicitly labeled neither as a law of thermodynamics nor as an axiom of the Carathéodory way. In fact, the actual physical existence of such adiabatic processes is indeed mostly supposition, and those supposed processes have in most cases not been actually verified empirically to exist.
Planck (1926)
Over the years, for example in his 1879 thesis, but particularly in 1926, Planck advocated regarding the generation of heat by rubbing as the most specific way to define heat. Planck criticised Carathéodory for not attending to this. Carathéodory was a mathematician who liked to think in terms of adiabatic processes, and perhaps found friction too tricky to think about, while Planck was a physicist.
Heat transfer
Heat transfer between two bodies
Referring to conduction, Partington writes: "If a hot body is brought in conducting contact with a cold body, the temperature of the hot body falls and that of the cold body rises, and it is said that a quantity of heat has passed from the hot body to the cold body."
Referring to radiation, Maxwell writes: "In Radiation, the hotter body loses heat, and the colder body receives heat by means of a process occurring in some intervening medium which does not itself thereby become hot."
Maxwell writes that convection as such "is not a purely thermal phenomenon". In thermodynamics, convection in general is regarded as transport of internal energy. If, however, the convection is enclosed and circulatory, then it may be regarded as an intermediary that transfers energy as heat between source and destination bodies, because it transfers only energy and not matter from the source to the destination body.
In accordance with the first law for closed systems, energy transferred solely as heat leaves one body and enters another, changing the internal energies of each. Transfer, between bodies, of energy as work is a complementary way of changing internal energies. Though it is not logically rigorous from the viewpoint of strict physical concepts, a common form of words that expresses this is to say that heat and work are interconvertible.
Cyclically operating engines that use only heat and work transfers have two thermal reservoirs, a hot and a cold one. They may be classified by the range of operating temperatures of the working body, relative to those reservoirs. In a heat engine, the working body is at all times colder than the hot reservoir and hotter than the cold reservoir. In a sense, it uses heat transfer to produce work. In a heat pump, the working body, at stages of the cycle, goes both hotter than the hot reservoir, and colder than the cold reservoir. In a sense, it uses work to produce heat transfer.
Heat engine
In classical thermodynamics, a commonly considered model is the heat engine. It consists of four bodies: the working body, the hot reservoir, the cold reservoir, and the work reservoir. A cyclic process leaves the working body in an unchanged state, and is envisaged as being repeated indefinitely often. Work transfers between the working body and the work reservoir are envisaged as reversible, and thus only one work reservoir is needed. But two thermal reservoirs are needed, because transfer of energy as heat is irreversible. A single cycle sees energy taken by the working body from the hot reservoir and sent to the two other reservoirs, the work reservoir and the cold reservoir. The hot reservoir always and only supplies energy, and the cold reservoir always and only receives energy. The second law of thermodynamics requires that no cycle can occur in which no energy is received by the cold reservoir. Heat engines achieve higher efficiency when the ratio of the hot-reservoir temperature to the cold-reservoir temperature is greater.
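For a reversible (Carnot) engine the limit is quantitative (a standard result, added as a worked example): the maximum efficiency is η = 1 − T_C/T_H, where T_H and T_C are the absolute temperatures of the hot and cold reservoirs; for instance, with T_H = 600 K and T_C = 300 K, at most half of the heat drawn from the hot reservoir can be delivered as work.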
Heat pump or refrigerator
Another commonly considered model is the heat pump or refrigerator. Again there are four bodies: the working body, the hot reservoir, the cold reservoir, and the work reservoir. A single cycle starts with the working body colder than the cold reservoir, and then energy is taken in as heat by the working body from the cold reservoir. Then the work reservoir does work on the working body, adding more to its internal energy, making it hotter than the hot reservoir. The hot working body passes heat to the hot reservoir, but still remains hotter than the cold reservoir. Then, by allowing it to expand without passing heat to another body, the working body is made colder than the cold reservoir. It can now accept heat transfer from the cold reservoir to start another cycle.
The device has transported energy from a colder to a hotter reservoir, but this is not regarded as by an inanimate agency; rather, it is regarded as by the harnessing of work. This is because work is supplied from the work reservoir, not just by a simple thermodynamic process, but by a cycle of thermodynamic operations and processes, which may be regarded as directed by an animate or harnessing agency. Accordingly, the cycle is still in accord with the second law of thermodynamics. The 'efficiency' of a heat pump (which exceeds unity) is best when the temperature difference between the hot and cold reservoirs is least.
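In the reversible limit this can again be quantified (a standard result, given here as a worked example): the coefficient of performance of an ideal heat pump is COP = T_H/(T_H − T_C), so for T_H = 294 K (about 21 °C indoors) and T_C = 273 K outdoors the ideal figure is 294/21 = 14, and it falls as the temperature difference grows.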
Functionally, such engines are used in two ways, distinguishing a target reservoir and a resource or surrounding reservoir. A heat pump transfers heat to the hot reservoir as the target from the resource or surrounding reservoir. A refrigerator transfers heat, from the cold reservoir as the target, to the resource or surrounding reservoir. The target reservoir may be regarded as leaking: when the target leaks heat to the surroundings, heat pumping is used; when the target leaks coldness to the surroundings, refrigeration is used. The engines harness work to overcome the leaks.
Macroscopic view
According to Planck, there are three main conceptual approaches to heat. One is the microscopic or kinetic theory approach. The other two are macroscopic approaches. One of the macroscopic approaches is through the law of conservation of energy taken as prior to thermodynamics, with a mechanical analysis of processes, for example in the work of Helmholtz. This mechanical view is taken in this article as currently customary for thermodynamic theory. The other macroscopic approach is the thermodynamic one, which admits heat as a primitive concept, which contributes, by scientific induction, to knowledge of the law of conservation of energy. This view is widely taken as the practical one, quantity of heat being measured by calorimetry.
Bailyn also distinguishes the two macroscopic approaches as the mechanical and the thermodynamic. The thermodynamic view was taken by the founders of thermodynamics in the nineteenth century. It regards quantity of energy transferred as heat as a primitive concept coherent with a primitive concept of temperature, measured primarily by calorimetry. A calorimeter is a body in the surroundings of the system, with its own temperature and internal energy; when it is connected to the system by a path for heat transfer, changes in it measure heat transfer. The mechanical view was pioneered by Helmholtz and developed and used in the twentieth century, largely through the influence of Max Born. It regards quantity of energy transferred as heat as a derived concept, defined for closed systems as the quantity of energy transferred by mechanisms other than work transfer, the latter being regarded as primitive for thermodynamics, defined by macroscopic mechanics. According to Born, the transfer of internal energy between open systems that accompanies transfer of matter "cannot be reduced to mechanics". It follows that there is no well-founded definition of quantities of energy transferred as heat or as work associated with transfer of matter.
Nevertheless, for the thermodynamical description of non-equilibrium processes, it is desired to consider the effect of a temperature gradient established by the surroundings across the system of interest when there is no physical barrier or wall between system and surroundings, that is to say, when they are open with respect to one another. The impossibility of a mechanical definition in terms of work for this circumstance does not alter the physical fact that a temperature gradient causes a diffusive flux of internal energy, a process that, in the thermodynamic view, might be proposed as a candidate concept for transfer of energy as heat.
In this circumstance, it may be expected that there may also be other active drivers of diffusive flux of internal energy, such as gradient of chemical potential which drives transfer of matter, and gradient of electric potential which drives electric current and iontophoresis; such effects usually interact with diffusive flux of internal energy driven by temperature gradient, and such interactions are known as cross-effects.
If cross-effects that result in diffusive transfer of internal energy were also labeled as heat transfers, they would sometimes violate the rule that pure heat transfer occurs only down a temperature gradient, never up one. They would also contradict the principle that all heat transfer is of one and the same kind, a principle founded on the idea of heat conduction between closed systems. One might try to think narrowly of heat flux driven purely by temperature gradient as a conceptual component of diffusive internal energy flux, in the thermodynamic view, the concept resting specifically on careful calculations based on detailed knowledge of the processes and being indirectly assessed. In these circumstances, if perchance it happens that no transfer of matter is actualized, and there are no cross-effects, then the thermodynamic concept and the mechanical concept coincide, as if one were dealing with closed systems. But when there is transfer of matter, the exact laws by which temperature gradient drives diffusive flux of internal energy, rather than being exactly knowable, mostly need to be assumed, and in many cases are practically unverifiable. Consequently, when there is transfer of matter, the calculation of the pure 'heat flux' component of the diffusive flux of internal energy rests on practically unverifiable assumptions. This is a reason to think of heat as a specialized concept that relates primarily and precisely to closed systems, and applicable only in a very restricted way to open systems.
In many writings in this context, the term "heat flux" is used when what is meant is therefore more accurately called diffusive flux of internal energy; such usage of the term "heat flux" is a residue of older and now obsolete language usage that allowed that a body may have a "heat content".
Microscopic view
In the kinetic theory, heat is explained in terms of the microscopic motions and interactions of constituent particles, such as electrons, atoms, and molecules. The immediate meaning of the kinetic energy of the constituent particles is not as heat. It is as a component of internal energy.
In microscopic terms, heat is a transfer quantity, and is described by a transport theory, not as steadily localized kinetic energy of particles. Heat transfer arises from temperature gradients or differences, through the diffuse exchange of microscopic kinetic and potential particle energy, by particle collisions and other interactions. An early and vague expression of this was made by Francis Bacon. Precise and detailed versions of it were developed in the nineteenth century.
In statistical mechanics, for a closed system (no transfer of matter), heat is the energy transfer associated with a disordered, microscopic action on the system, associated with jumps in occupation numbers of the energy levels of the system, without change in the values of the energy levels themselves. It is possible for macroscopic thermodynamic work to alter the occupation numbers without change in the values of the system energy levels themselves, but what distinguishes transfer as heat is that the transfer is entirely due to disordered, microscopic action, including radiative transfer. A mathematical definition can be formulated for small increments of quasi-static adiabatic work in terms of the statistical distribution of an ensemble of microstates.
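A minimal sketch of that statistical decomposition (standard notation, not taken from this article): writing the mean energy as U = Σᵢ pᵢ Eᵢ over energy levels with occupation probabilities pᵢ and energies Eᵢ, a small change is dU = Σᵢ Eᵢ dpᵢ + Σᵢ pᵢ dEᵢ; the first term, changes of occupation at fixed levels, is identified with the heat δQ, while the second, shifts of the levels themselves (for example by a change of volume) at fixed occupations, is the work done on the system, i.e. the negative of the work done by it.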
Calorimetry
Quantity of heat transferred can be measured by calorimetry, or determined through calculations based on other quantities.
Calorimetry is the empirical basis of the idea of quantity of heat transferred in a process. The transferred heat is measured by changes in a body of known properties, for example, temperature rise, change in volume or length, or phase change, such as melting of ice.
A calculation of quantity of heat transferred can rely on a hypothetical quantity of energy transferred as adiabatic work and on the first law of thermodynamics. Such calculation is the primary approach of many theoretical studies of quantity of heat transferred.
Engineering
The discipline of heat transfer, typically considered an aspect of mechanical engineering and chemical engineering, deals with specific applied methods by which thermal energy in a system is generated, or converted, or transferred to another system. Although the definition of heat implicitly means the transfer of energy, the term heat transfer encompasses this traditional usage in many engineering disciplines and in lay language.
Heat transfer is generally described as including the mechanisms of heat conduction, heat convection, and thermal radiation, but may also include mass transfer and heat in processes of phase changes.
Convection may be described as the combined effects of conduction and fluid flow. From the thermodynamic point of view, heat flows into a fluid by diffusion to increase its energy, the fluid then transfers (advects) this increased internal energy (not heat) from one location to another, and this is then followed by a second thermal interaction which transfers heat to a second body or system, again by diffusion. This entire process is often regarded as an additional mechanism of heat transfer, although technically, "heat transfer" and thus heating and cooling occurs only on either end of such a convective flow, but not as a result of the flow itself. Thus, convection can be said to "transfer" heat only as a net result of the process, but may not do so at every time within the complicated convective process.
Latent and sensible heat
In an 1847 lecture entitled On Matter, Living Force, and Heat, James Prescott Joule characterized the terms latent heat and sensible heat as components of heat each affecting distinct physical phenomena, namely the potential and kinetic energy of particles, respectively.
He described latent energy as the energy possessed via a distancing of particles where attraction was over a greater distance, i.e. a form of potential energy, and the sensible heat as an energy involving the motion of particles, i.e. kinetic energy.
Latent heat is the heat released or absorbed by a chemical substance or a thermodynamic system during a change of state that occurs without a change in temperature. Such a process may be a phase transition, such as the melting of ice or the boiling of water.
Heat capacity
Heat capacity is a measurable physical quantity equal to the ratio of the heat added to an object to the resulting temperature change. The molar heat capacity is the heat capacity per unit amount (SI unit: mole) of a pure substance, and the specific heat capacity, often called simply specific heat, is the heat capacity per unit mass of a material. Heat capacity is a physical property of a substance, which means that it depends on the state and properties of the substance under consideration.
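In symbols (a standard relation, added as a worked example): the heat needed to change the temperature of a mass m of a substance with specific heat capacity c by ΔT is Q = m·c·ΔT; for water, with c ≈ 4184 J·kg⁻¹·K⁻¹, warming 1 kg by 10 K requires about 41.8 kJ.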
The specific heats of monatomic gases, such as helium, are nearly constant with temperature. Diatomic gases such as hydrogen display some temperature dependence, and triatomic gases (e.g., carbon dioxide) still more.
Before the development of the laws of thermodynamics, heat was measured by changes in the states of the participating bodies.
Some general rules, with important exceptions, can be stated as follows.
In general, most bodies expand on heating. In this circumstance, heating a body at a constant volume increases the pressure it exerts on its constraining walls, while heating at a constant pressure increases its volume.
Beyond this, most substances have three ordinarily recognized states of matter, solid, liquid, and gas. Some can also exist in a plasma. Many have further, more finely differentiated, states of matter, such as glass and liquid crystal. In many cases, at fixed temperature and pressure, a substance can exist in several distinct states of matter in what might be viewed as the same 'body'. For example, ice may float in a glass of water. Then the ice and the water are said to constitute two phases within the 'body'. Definite rules are known, telling how distinct phases may coexist in a 'body'. Mostly, at a fixed pressure, there is a definite temperature at which heating causes a solid to melt or evaporate, and a definite temperature at which heating causes a liquid to evaporate. In such cases, cooling has the reverse effects.
All of these, the commonest cases, fit with a rule that heating can be measured by changes of state of a body. Such cases supply what are called thermometric bodies, that allow the definition of empirical temperatures. Before 1848, all temperatures were defined in this way. There was thus a tight link, apparently logically determined, between heat and temperature, though they were recognized as conceptually thoroughly distinct, especially by Joseph Black in the later eighteenth century.
There are important exceptions. They break the obviously apparent link between heat and temperature. They make it clear that empirical definitions of temperature are contingent on the peculiar properties of particular thermometric substances, and are thus precluded from the title 'absolute'. For example, water contracts on being heated near 277 K. It cannot be used as a thermometric substance near that temperature. Also, over a certain temperature range, ice contracts on heating. Moreover, many substances can exist in metastable states, such as with negative pressure, that survive only transiently and in very special conditions. Such facts, sometimes called 'anomalous', are some of the reasons for the thermodynamic definition of absolute temperature.
In the early days of measurement of high temperatures, another factor was important, and used by Josiah Wedgwood in his pyrometer. The temperature reached in a process was estimated by the shrinkage of a sample of clay. The higher the temperature, the more the shrinkage. This was the only available more or less reliable method of measurement of temperatures above 1000 °C (1,832 °F). But such shrinkage is irreversible. The clay does not expand again on cooling. That is why it could be used for the measurement. But only once. It is not a thermometric material in the usual sense of the word.
Nevertheless, the thermodynamic definition of absolute temperature does make essential use of the concept of heat, with proper circumspection.
"Hotness"
The property of hotness is a concern of thermodynamics that should be defined without reference to the concept of heat. Consideration of hotness leads to the concept of empirical temperature. All physical systems are capable of heating or cooling others. With reference to hotness, the comparative terms hotter and colder are defined by the rule that heat flows from the hotter body to the colder.
If a physical system is inhomogeneous or very rapidly or irregularly changing, for example by turbulence, it may be impossible to characterize it by a temperature, but still there can be transfer of energy as heat between it and another system. If a system has a physical state that is regular enough, and persists long enough to allow it to reach thermal equilibrium with a specified thermometer, then it has a temperature according to that thermometer. An empirical thermometer registers degree of hotness for such a system. Such a temperature is called empirical. For example, Truesdell writes about classical thermodynamics: "At each time, the body is assigned a real number called the temperature. This number is a measure of how hot the body is."
Physical systems that are too turbulent to have temperatures may still differ in hotness. A physical system that passes heat to another physical system is said to be the hotter of the two. More is required for the system to have a thermodynamic temperature. Its behavior must be so regular that its empirical temperature is the same for all suitably calibrated and scaled thermometers, and then its hotness is said to lie on the one-dimensional hotness manifold. This is part of the reason why heat is defined following Carathéodory and Born, solely as occurring other than by work or transfer of matter; temperature is advisedly and deliberately not mentioned in this now widely accepted definition.
This is also the reason that the zeroth law of thermodynamics is stated explicitly. If three physical systems, A, B, and C are each not in their own states of internal thermodynamic equilibrium, it is possible that, with suitable physical connections being made between them, A can heat B and B can heat C and C can heat A. In non-equilibrium situations, cycles of flow are possible. It is the special and uniquely distinguishing characteristic of internal thermodynamic equilibrium that this possibility is not open to thermodynamic systems (as distinguished amongst physical systems) which are in their own states of internal thermodynamic equilibrium; this is the reason why the zeroth law of thermodynamics needs explicit statement. That is to say, the relation 'is not colder than' between general non-equilibrium physical systems is not transitive, whereas, in contrast, the relation 'has no lower a temperature than' between thermodynamic systems in their own states of internal thermodynamic equilibrium is transitive. It follows from this that the relation 'is in thermal equilibrium with' is transitive, which is one way of stating the zeroth law.
Just as temperature may be undefined for a sufficiently inhomogeneous system, so also may entropy be undefined for a system not in its own state of internal thermodynamic equilibrium. For example, 'the temperature of the Solar System' is not a defined quantity. Likewise, 'the entropy of the Solar System' is not defined in classical thermodynamics. It has not been possible to define non-equilibrium entropy, as a simple number for a whole system, in a clearly satisfactory way.
Classical thermodynamics
Heat and enthalpy
For a closed system (a system from which no matter can enter or exit), one version of the first law of thermodynamics states that the change in internal energy ΔU of the system is equal to the amount of heat Q supplied to the system minus the amount of thermodynamic work W done by the system on its surroundings:
ΔU = Q − W.
The foregoing sign convention for work is used in the present article, but an alternate sign convention, followed by IUPAC, for work, is to consider the work performed on the system by its surroundings as positive. This is the convention adopted by many modern textbooks of physical chemistry, such as those by Peter Atkins and Ira Levine, but many textbooks on physics define work as work done by the system.
This formula can be re-written so as to express a definition of quantity of energy transferred as heat, based purely on the concept of adiabatic work, if it is supposed that ΔU is defined and measured solely by processes of adiabatic work:
Q = ΔU + W.
The thermodynamic work done by the system is through mechanisms defined by its thermodynamic state variables, for example, its volume V, not through variables that necessarily involve mechanisms in the surroundings. The latter are such as shaft work, and include isochoric work.
The internal energy, U, is a state function. In cyclical processes, such as the operation of a heat engine, state functions of the working substance return to their initial values upon completion of a cycle.
The differential, or infinitesimal increment, for the internal energy in an infinitesimal process is an exact differential dU. The symbol for exact differentials is the lowercase letter d.
In contrast, neither of the infinitesimal increments δQ nor δW in an infinitesimal process represents the change in a state function of the system. Thus, infinitesimal increments of heat and work are inexact differentials. The lowercase Greek letter delta, δ, is the symbol for inexact differentials. The integral of any inexact differential in a process where the system leaves and then returns to the same thermodynamic state does not necessarily equal zero.
As recounted above, in the section headed heat and entropy, the second law of thermodynamics observes that if heat is supplied to a system in a reversible process, the increment of heat δQ and the temperature T form the exact differential
dS = δQ/T,
and that S, the entropy of the working body, is a state function. Likewise, with a well-defined pressure, P, behind a slowly moving (quasistatic) boundary, the work differential, δW, and the pressure, P, combine to form the exact differential
dV = δW/P,
with V the volume of the system, which is a state variable. In general, for systems of uniform pressure and temperature without composition change,
dU = T dS − P dV.
Associated with this differential equation is the concept that the internal energy may be considered to be a function U(S, V) of its natural variables S and V. The internal energy representation of the fundamental thermodynamic relation is written as
U = U(S, V).
If V is constant,
T dS = dU    (V constant),
and if P is constant,
T dS = dH    (P constant),
with the enthalpy H defined by
H = U + P V.
The enthalpy may be considered to be a function H(S, P) of its natural variables S and P. The enthalpy representation of the fundamental thermodynamic relation is written
H = H(S, P).
The internal energy representation and the enthalpy representation are partial Legendre transforms of one another. They contain the same physical information, written in different ways. Like the internal energy, the enthalpy stated as a function of its natural variables is a thermodynamic potential and contains all thermodynamic information about a body.
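To make the role of the natural variables concrete, the following sketch (an illustration added here, not part of the original text) checks numerically that, for a monatomic ideal gas with the internal energy written as a function of S and V, the partial derivatives reproduce (∂U/∂S)_V = T and (∂U/∂V)_S = −P. The reference state T0, V0, S0, the amount n, and the evaluation point are arbitrary example values.

```python
# Finite-difference check of dU = T dS - P dV for a monatomic ideal gas,
# with the internal energy written in its natural variables U(S, V).
# Reference state (T0, V0, S0) and the amount n are arbitrary example values.
import numpy as np

R = 8.314462618                     # molar gas constant, J/(mol K)
n = 1.0                             # moles
T0, V0, S0 = 300.0, 0.024, 0.0      # reference temperature (K), volume (m^3), entropy (J/K)

def T_of_SV(S, V):
    # Invert S - S0 = n R [ (3/2) ln(T/T0) + ln(V/V0) ]  (monatomic ideal gas)
    return T0 * np.exp(2.0 * (S - S0) / (3.0 * n * R)) * (V0 / V) ** (2.0 / 3.0)

def U_of_SV(S, V):
    return 1.5 * n * R * T_of_SV(S, V)

S, V = 5.0, 0.030                   # an arbitrary state
h = 1e-6
dU_dS = (U_of_SV(S + h, V) - U_of_SV(S - h, V)) / (2 * h)
dU_dV = (U_of_SV(S, V + h) - U_of_SV(S, V - h)) / (2 * h)

T = T_of_SV(S, V)
P = n * R * T / V
print(dU_dS, T)                     # (∂U/∂S)_V should equal T
print(dU_dV, -P)                    # (∂U/∂V)_S should equal -P
```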
If a quantity Q of heat is added to a body while it does only expansion work W on its surroundings, one has
ΔH = ΔU + Δ(P V).
If this is constrained to happen at constant pressure, i.e. with ΔP = 0, the expansion work W done by the body is given by W = P ΔV; recalling the first law of thermodynamics, one has
ΔU = Q − W = Q − P ΔV.
Consequently, by substitution one has
ΔH = Q − P ΔV + Δ(P V) = Q − P ΔV + P ΔV = Q    (at constant pressure).
In this scenario, the increase in enthalpy is equal to the quantity of heat added to the system. This is the basis of the determination of enthalpy changes in chemical reactions by calorimetry. Since many processes do take place at constant atmospheric pressure, the enthalpy is sometimes given the misleading name of 'heat content' or heat function, while it actually depends strongly on the energies of covalent bonds and intermolecular forces.
In terms of the natural variables S and P of the state function H, this process of change of state from state 1 to state 2 can be expressed as
ΔH = ∫ from S1 to S2 of (∂H/∂S)_P dS    (at constant pressure).
It is known that the temperature T(S, P) is identically stated by
(∂H/∂S)_P = T(S, P).
Consequently,
ΔH = ∫ from S1 to S2 of T(S, P) dS    (at constant pressure).
In this case, the integral specifies a quantity of heat transferred at constant pressure.
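As a worked illustration of the constant-pressure case (an added example, not from the original text), consider heating one mole of a monatomic ideal gas at fixed pressure; the numbers below show that the heat supplied equals the enthalpy change. The amount, pressure, and temperatures are arbitrary choices for the example.

```python
# Heat a monatomic ideal gas at constant pressure and compare Q with the enthalpy change.
# The amount, pressure, and temperatures are arbitrary example values.
R = 8.314462618          # molar gas constant, J/(mol K)
n, P = 1.0, 101325.0     # moles, Pa
T1, T2 = 300.0, 350.0    # K

V1, V2 = n * R * T1 / P, n * R * T2 / P
dU = 1.5 * n * R * (T2 - T1)          # monatomic ideal gas: U = (3/2) n R T
W = P * (V2 - V1)                     # expansion work done by the gas at constant P
Q = dU + W                            # first law, with W counted as work done BY the system

dH = 2.5 * n * R * (T2 - T1)          # H = U + P V = (5/2) n R T for this gas
print(Q, dH)                          # equal: heat supplied at constant pressure equals ΔH
```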
Heat and entropy
In 1856, Rudolf Clausius, referring to closed systems, in which transfers of matter do not occur, defined the second fundamental theorem (the second law of thermodynamics) in the mechanical theory of heat (thermodynamics): "if two transformations which, without necessitating any other permanent change, can mutually replace one another, be called equivalent, then the generation of the quantity of heat Q from work at the temperature T, has the equivalence-value:"
Q/T.
In 1865, he came to define the entropy symbolized by S, such that, due to the supply of the amount of heat Q at temperature T the entropy of the system is increased by
ΔS = Q/T.
In a transfer of energy as heat without work being done, there are changes of entropy in both the surroundings which lose heat and the system which gains it. The increase, ΔS, of entropy in the system may be considered to consist of two parts, an increment, ΔS′, that matches, or 'compensates', the change, −ΔS′, of entropy in the surroundings, and a further increment, ΔS″, that may be considered to be 'generated' or 'produced' in the system, and is said therefore to be 'uncompensated'. Thus
ΔS = ΔS′ + ΔS″.
This may also be written
ΔS_system = ΔS_compensated + ΔS_uncompensated,   with ΔS_compensated = −ΔS_surroundings.
The total change of entropy in the system and surroundings is thus
ΔS_overall = ΔS′ + ΔS″ − ΔS′ = ΔS″.
This may also be written
ΔS_overall = ΔS_compensated + ΔS_uncompensated + ΔS_surroundings = ΔS_uncompensated.
It is then said that an amount of entropy has been transferred from the surroundings to the system. Because entropy is not a conserved quantity, this is an exception to the general way of speaking, in which an amount transferred is of a conserved quantity.
From the second law of thermodynamics it follows that in a spontaneous transfer of heat, in which the temperature of the system is different from that of the surroundings:
ΔS_overall > 0.
For purposes of mathematical analysis of transfers, one thinks of fictive processes that are called reversible, with the temperature of the system being hardly less than that of the surroundings, and the transfer taking place at an imperceptibly slow rate.
Following the definition of entropy given above, for such a fictive reversible process, a quantity of transferred heat δQ (an inexact differential) is analyzed as a quantity T dS, with dS an exact differential:
T dS = δQ.
This equality is only valid for a fictive transfer in which there is no production of entropy, that is to say, in which there is no uncompensated entropy.
If, in contrast, the process is natural, and can really occur, with irreversibility, then there is entropy production, with dS_uncompensated > 0. The quantity T dS_uncompensated was termed by Clausius the "uncompensated heat", though that does not accord with present-day terminology. Then one has
T dS = δQ + T dS_uncompensated > δQ.
This leads to the statement
T dS ≥ δQ    (closed system),
which is the second law of thermodynamics for closed systems.
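The inequality can be illustrated numerically (an added sketch, not part of the original article): when a quantity of heat flows directly from a hotter to a colder reservoir, the compensated parts cancel and the uncompensated, 'produced' entropy is positive. The heat quantity and temperatures below are arbitrary example values.

```python
# Entropy bookkeeping for a finite heat transfer between two reservoirs.
# Q, T_hot and T_cold are arbitrary example values.
Q = 1000.0                     # J transferred from hot to cold
T_hot, T_cold = 400.0, 300.0   # K

dS_hot = -Q / T_hot            # entropy change of the hotter reservoir
dS_cold = +Q / T_cold          # entropy change of the colder reservoir
dS_total = dS_hot + dS_cold    # the 'uncompensated' entropy production

print(dS_total)                # > 0 whenever T_hot > T_cold (irreversible transfer)
print(-Q / 350.0 + Q / 350.0)  # -> 0 in the limit of equal temperatures (reversible transfer)
```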
In non-equilibrium thermodynamics that makes the approximation of assuming the hypothesis of local thermodynamic equilibrium, there is a special notation for this. The transfer of energy as heat is assumed to take place across an infinitesimal temperature difference, so that the system element and its surroundings have near enough the same temperature T. Then one writes
dS = d_eS + d_iS,
where by definition
d_eS = δQ/T,   and   d_iS denotes the entropy produced within the system by irreversible processes.
The second law for a natural process asserts that
d_iS > 0.
| Physical sciences | Physics | null |
19593829 | https://en.wikipedia.org/wiki/Spin%20%28physics%29 | Spin (physics) | Spin is an intrinsic form of angular momentum carried by elementary particles, and thus by composite particles such as hadrons, atomic nuclei, and atoms. Spin is quantized, and accurate models for the interaction with spin require relativistic quantum mechanics or quantum field theory.
The existence of electron spin angular momentum is inferred from experiments, such as the Stern–Gerlach experiment, in which silver atoms were observed to possess two possible discrete angular momenta despite having no orbital angular momentum. The relativistic spin–statistics theorem connects electron spin quantization to the Pauli exclusion principle: observations of exclusion imply half-integer spin, and observations of half-integer spin imply exclusion.
Spin is described mathematically as a vector for some particles such as photons, and as a spinor or bispinor for other particles such as electrons. Spinors and bispinors behave similarly to vectors: they have definite magnitudes and change under rotations; however, they use an unconventional "direction". All elementary particles of a given kind have the same magnitude of spin angular momentum, though its direction may change. These are indicated by assigning the particle a spin quantum number.
The SI units of spin are the same as classical angular momentum (i.e., N·m·s, J·s, or kg·m2·s−1). In quantum mechanics, angular momentum and spin angular momentum take discrete values proportional to the Planck constant. In practice, spin is usually given as a dimensionless spin quantum number by dividing the spin angular momentum by the reduced Planck constant ħ. Often, the "spin quantum number" is simply called "spin".
Models
Rotating charged mass
The earliest models for electron spin imagined a rotating charged mass, but this model fails when examined in detail: the required spatial distribution of charge does not match limits on the electron radius, and the required rotation speed would exceed the speed of light. In the Standard Model, the fundamental particles are all considered "point-like": they have their effects through the field that surrounds them. Any model for spin based on mass rotation would need to be consistent with that model.
Pauli's "classically non-describable two-valuedness"
Wolfgang Pauli, a central figure in the history of quantum spin, initially rejected any idea that the "degree of freedom" he introduced to explain experimental observations was related to rotation. He called it "classically non-describable two-valuedness". Later, he allowed that it is related to angular momentum, but insisted on considering spin an abstract property. This approach allowed Pauli to develop a proof of his fundamental Pauli exclusion principle, a proof now called the spin-statistics theorem. In retrospect, this insistence and the style of his proof initiated the modern particle-physics era, where abstract quantum properties derived from symmetry properties dominate. Concrete interpretation became secondary and optional.
Circulation of classical fields
The first classical model for spin proposed a small rigid particle rotating about an axis, as ordinary use of the word may suggest. Angular momentum can be computed from a classical field as well. By applying Frederik Belinfante's approach to calculating the angular momentum of a field, Hans C. Ohanian showed that "spin is essentially a wave property ... generated by a circulating flow of charge in the wave field of the electron". This same concept of spin can be applied to gravity waves in water: "spin is generated by subwavelength circular motion of water particles".
Unlike classical wavefield circulation, which allows continuous values of angular momentum, quantum wavefields allow only discrete values. Consequently, energy transfer to or from spin states always occurs in fixed quantum steps. Only a few steps are allowed: for many qualitative purposes, the complexity of the spin quantum wavefields can be ignored and the system properties can be discussed in terms of "integer" or "half-integer" spin models as discussed in quantum numbers below.
Dirac's relativistic electron
Quantitative calculations of spin properties for electrons require the Dirac relativistic wave equation.
Relation to orbital angular momentum
As the name suggests, spin was originally conceived as the rotation of a particle around some axis. Historically orbital angular momentum related to particle orbits. While the names based on mechanical models have survived, the physical explanation has not. Quantization fundamentally alters the character of both spin and orbital angular momentum.
Since elementary particles are point-like, self-rotation is not well-defined for them. However, spin implies that the phase of the particle depends on the angle as e^(iSθ/ħ), for rotation of angle θ around the axis parallel to the spin S. This is equivalent to the quantum-mechanical interpretation of momentum as phase dependence in the position, and of orbital angular momentum as phase dependence in the angular position.
For fermions, the picture is less clear: from the Ehrenfest theorem, the angular velocity is equal to the derivative of the Hamiltonian with respect to its conjugate momentum, which is the total angular momentum operator J = L + S. Therefore, if the Hamiltonian H has any dependence on the spin S, then ∂H/∂S must be non-zero; consequently, for classical mechanics, the existence of spin in the Hamiltonian will produce an actual angular velocity, and hence an actual physical rotation – that is, a change in the phase-angle, θ, over time. However, whether this holds true for free electrons is ambiguous, since for an electron, S² is a constant and one might decide that since it cannot change, no partial derivative ∂H/∂S can exist. Therefore it is a matter of interpretation whether the Hamiltonian must include such a term, and whether this aspect of classical mechanics extends into quantum mechanics (any particle's intrinsic spin angular momentum, S, is a quantum number arising from a "spinor" in the mathematical solution to the Dirac equation, rather than being a more nearly physical quantity, like orbital angular momentum L). Nevertheless, spin appears in the Dirac equation, and thus the relativistic Hamiltonian of the electron, treated as a Dirac field, can be interpreted as including a dependence on the spin S.
Quantum number
Spin obeys the mathematical laws of angular momentum quantization. The specific properties of spin angular momenta include:
Spin quantum numbers may take either half-integer or integer values.
Although the direction of its spin can be changed, the magnitude of the spin of an elementary particle cannot be changed.
The spin of a charged particle is associated with a magnetic dipole moment with a g-factor that differs from 1. (In the classical context, this would imply the internal charge and mass distributions differing for a rotating object.)
The conventional definition of the spin quantum number is s = n/2, where n can be any non-negative integer. Hence the allowed values of s are 0, 1/2, 1, 3/2, 2, etc. The value of s for an elementary particle depends only on the type of particle and cannot be altered in any known way (in contrast to the spin direction described below). The spin angular momentum of any physical system is quantized. The allowed values of S are
S = (h/2π) √(s(s + 1)) = ħ √(s(s + 1)),
where h is the Planck constant, and ħ = h/2π is the reduced Planck constant. In contrast, orbital angular momentum can only take on integer values of s; i.e., even-numbered values of n.
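A small numerical sketch (added here for illustration) evaluates |S| = ħ√(s(s+1)) for the first few allowed spin quantum numbers; the hard-coded value of ħ is the standard SI figure.

```python
# Magnitude of the spin angular momentum, |S| = ħ * sqrt(s(s+1)),
# for the first few allowed values of s.
from fractions import Fraction
import math

hbar = 1.054571817e-34        # reduced Planck constant, J s
for n in range(5):            # s = n/2 = 0, 1/2, 1, 3/2, 2
    s = Fraction(n, 2)
    S = hbar * math.sqrt(float(s) * (float(s) + 1))
    print(f"s = {s}:  |S| = {S:.3e} J s")
```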
Fermions and bosons
Those particles with half-integer spins, such as 1/2, 3/2, 5/2, are known as fermions, while those particles with integer spins, such as 0, 1, 2, are known as bosons. The two families of particles obey different rules and broadly have different roles in the world around us. A key distinction between the two families is that fermions obey the Pauli exclusion principle: that is, there cannot be two identical fermions simultaneously having the same quantum numbers (meaning, roughly, having the same position, velocity and spin direction). Fermions obey the rules of Fermi–Dirac statistics. In contrast, bosons obey the rules of Bose–Einstein statistics and have no such restriction, so they may "bunch together" in identical states. Also, composite particles can have spins different from their component particles. For example, a helium-4 atom in the ground state has spin 0 and behaves like a boson, even though the quarks and electrons which make it up are all fermions.
This has some profound consequences:
Quarks and leptons (including electrons and neutrinos), which make up what is classically known as matter, are all fermions with spin 1/2. The common idea that "matter takes up space" actually comes from the Pauli exclusion principle acting on these particles to prevent the fermions from being in the same quantum state. Further compaction would require electrons to occupy the same energy states, and therefore a kind of pressure (sometimes known as degeneracy pressure of electrons) acts to resist the fermions being overly close. Elementary fermions with other spins (3/2, 5/2, etc.) are not known to exist.
Elementary particles which are thought of as carrying forces are all bosons with spin 1. They include the photon, which carries the electromagnetic force, the gluon (strong force), and the W and Z bosons (weak force). The ability of bosons to occupy the same quantum state is used in the laser, which aligns many photons having the same quantum number (the same direction and frequency), superfluid liquid helium resulting from helium-4 atoms being bosons, and superconductivity, where pairs of electrons (which individually are fermions) act as single composite bosons. Elementary bosons with other spins (0, 2, 3, etc.) were not historically known to exist, although they have received considerable theoretical treatment and are well established within their respective mainstream theories. In particular, theoreticians have proposed the graviton (predicted to exist by some quantum gravity theories) with spin 2, and the Higgs boson (explaining electroweak symmetry breaking) with spin 0. Since 2013, the Higgs boson with spin 0 has been considered proven to exist. It is the first scalar elementary particle (spin 0) known to exist in nature.
Atomic nuclei have nuclear spin which may be either half-integer or integer, so that the nuclei may be either fermions or bosons.
Spin–statistics theorem
The spin–statistics theorem splits particles into two groups: bosons and fermions, where bosons obey Bose–Einstein statistics, and fermions obey Fermi–Dirac statistics (and therefore the Pauli exclusion principle). Specifically, the theorem requires that particles with half-integer spins obey the Pauli exclusion principle while particles with integer spin do not. As an example, electrons have half-integer spin and are fermions that obey the Pauli exclusion principle, while photons have integer spin and do not. The theorem was derived by Wolfgang Pauli in 1940; it relies on both quantum mechanics and the theory of special relativity. Pauli described this connection between spin and statistics as "one of the most important applications of the special relativity theory".
Magnetic moments
Particles with spin can possess a magnetic dipole moment, just like a rotating electrically charged body in classical electrodynamics. These magnetic moments can be experimentally observed in several ways, e.g. by the deflection of particles by inhomogeneous magnetic fields in a Stern–Gerlach experiment, or by measuring the magnetic fields generated by the particles themselves.
The intrinsic magnetic moment μ of a spin-1/2 particle with charge q, mass m, and spin angular momentum S is
μ = (g_s q / 2m) S,
where the dimensionless quantity g_s is called the spin g-factor. For exclusively orbital rotations, it would be 1 (assuming that the mass and the charge occupy spheres of equal radius).
The electron, being a charged elementary particle, possesses a nonzero magnetic moment. One of the triumphs of the theory of quantum electrodynamics is its accurate prediction of the electron g-factor, which has been experimentally determined to have the value , with the digits in parentheses denoting measurement uncertainty in the last two digits at one standard deviation. The value of 2 arises from the Dirac equation, a fundamental equation connecting the electron's spin with its electromagnetic properties; and the deviation from 2 arises from the electron's interaction with the surrounding quantum fields, including its own electromagnetic field and virtual particles.
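For a rough numerical feel (an added sketch; the g value used is the familiar approximate figure, not quoted from this article), the magnitude of the electron's spin magnetic moment along a quantization axis is about (g/2) μ_B, where μ_B = eħ/2m_e is the Bohr magneton:

```python
# Approximate z-component of the electron spin magnetic moment, |mu_z| ≈ (g/2) * mu_B.
# g ≈ 2.0023193 is used here as an approximate value.
from scipy import constants as c

mu_B = c.e * c.hbar / (2 * c.m_e)   # Bohr magneton, J/T
g = 2.0023193                        # approximate magnitude of the electron g-factor
mu_z = (g / 2) * mu_B

print(mu_B)                          # ≈ 9.274e-24 J/T
print(mu_z)                          # slightly larger than mu_B because |g| exceeds 2
```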
Composite particles also possess magnetic moments associated with their spin. In particular, the neutron possesses a non-zero magnetic moment despite being electrically neutral. This fact was an early indication that the neutron is not an elementary particle. In fact, it is made up of quarks, which are electrically charged particles. The magnetic moment of the neutron comes from the spins of the individual quarks and their orbital motions.
Neutrinos are both elementary and electrically neutral. The minimally extended Standard Model that takes into account non-zero neutrino masses predicts neutrino magnetic moments of:
μ_ν ≈ 3.2 × 10⁻¹⁹ μ_B (m_ν / 1 eV),
where the μ_ν are the neutrino magnetic moments, m_ν are the neutrino masses, and μ_B is the Bohr magneton. New physics above the electroweak scale could, however, lead to significantly higher neutrino magnetic moments. It can be shown in a model-independent way that neutrino magnetic moments larger than about 10⁻¹⁴ μ_B are "unnatural" because they would also lead to large radiative contributions to the neutrino mass. Since the neutrino masses are known to be at most about 1 eV, fine-tuning would be necessary in order to prevent large contributions to the neutrino mass via radiative corrections. The measurement of neutrino magnetic moments is an active area of research. Experimental results have put the neutrino magnetic moment at less than about 10⁻¹⁰ times the electron's magnetic moment.
On the other hand, elementary particles with spin but without electric charge, such as the photon and Z boson, do not have a magnetic moment.
Curie temperature and loss of alignment
In ordinary materials, the magnetic dipole moments of individual atoms produce magnetic fields that cancel one another, because each dipole points in a random direction, with the overall average being very near zero. Ferromagnetic materials below their Curie temperature, however, exhibit magnetic domains in which the atomic dipole moments spontaneously align locally, producing a macroscopic, non-zero magnetic field from the domain. These are the ordinary "magnets" with which we are all familiar.
In paramagnetic materials, the magnetic dipole moments of individual atoms will partially align with an externally applied magnetic field. In diamagnetic materials, on the other hand, the magnetic dipole moments of individual atoms align oppositely to any externally applied magnetic field, even if it requires energy to do so.
The study of the behavior of such "spin models" is a thriving area of research in condensed matter physics. For instance, the Ising model describes spins (dipoles) that have only two possible states, up and down, whereas in the Heisenberg model the spin vector is allowed to point in any direction. These models have many interesting properties, which have led to interesting results in the theory of phase transitions.
Direction
Spin projection quantum number and multiplicity
In classical mechanics, the angular momentum of a particle possesses not only a magnitude (how fast the body is rotating), but also a direction (either up or down on the axis of rotation of the particle). Quantum-mechanical spin also contains information about direction, but in a more subtle form. Quantum mechanics states that the component of angular momentum for a spin-s particle measured along any direction can only take on the values
S_i = ħ s_i,    with s_i ∈ {−s, −(s − 1), …, s − 1, s},
where S_i is the spin component along the i-th axis (either x, y, or z), s_i is the spin projection quantum number along the i-th axis, and s is the principal spin quantum number (discussed in the previous section). Conventionally the direction chosen is the z axis:
S_z = ħ s_z,    with s_z ∈ {−s, −(s − 1), …, s − 1, s},
where S_z is the spin component along the z axis, s_z is the spin projection quantum number along the z axis.
One can see that there are 2s + 1 possible values of s_z. The number "2s + 1" is the multiplicity of the spin system. For example, there are only two possible values for a spin-1/2 particle: s_z = +1/2 and s_z = −1/2. These correspond to quantum states in which the spin component is pointing in the +z or −z directions respectively, and are often referred to as "spin up" and "spin down". For a spin-3/2 particle, like a delta baryon, the possible values are +3/2, +1/2, −1/2, −3/2.
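A short sketch (added for illustration) lists the 2s + 1 allowed projection quantum numbers for a few example values of s:

```python
# Allowed spin projection quantum numbers s_z = -s, -s+1, ..., +s
# and the multiplicity 2s + 1, for a few example spins.
from fractions import Fraction

for s in [Fraction(1, 2), Fraction(1), Fraction(3, 2), Fraction(2)]:
    projections = [-s + k for k in range(int(2 * s) + 1)]
    print(f"s = {s}: multiplicity {2 * s + 1}, s_z in {projections}")
```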
Vector
For a given quantum state, one could think of a spin vector ⟨S⟩ whose components are the expectation values of the spin components along each axis, i.e., ⟨S⟩ = [⟨S_x⟩, ⟨S_y⟩, ⟨S_z⟩]. This vector then would describe the "direction" in which the spin is pointing, corresponding to the classical concept of the axis of rotation. It turns out that the spin vector is not very useful in actual quantum-mechanical calculations, because it cannot be measured directly: S_x, S_y and S_z cannot possess simultaneous definite values, because of a quantum uncertainty relation between them. However, for statistically large collections of particles that have been placed in the same pure quantum state, such as through the use of a Stern–Gerlach apparatus, the spin vector does have a well-defined experimental meaning: It specifies the direction in ordinary space in which a subsequent detector must be oriented in order to achieve the maximum possible probability (100%) of detecting every particle in the collection. For spin-1/2 particles, this probability drops off smoothly as the angle between the spin vector and the detector increases, until at an angle of 180°—that is, for detectors oriented in the opposite direction to the spin vector—the expectation of detecting particles from the collection reaches a minimum of 0%.
As a qualitative concept, the spin vector is often handy because it is easy to picture classically. For instance, quantum-mechanical spin can exhibit phenomena analogous to classical gyroscopic effects. For example, one can exert a kind of "torque" on an electron by putting it in a magnetic field (the field acts upon the electron's intrinsic magnetic dipole moment—see the following section). The result is that the spin vector undergoes precession, just like a classical gyroscope. This phenomenon is known as electron spin resonance (ESR). The equivalent behaviour of protons in atomic nuclei is used in nuclear magnetic resonance (NMR) spectroscopy and imaging.
Mathematically, quantum-mechanical spin states are described by vector-like objects known as spinors. There are subtle differences between the behavior of spinors and vectors under coordinate rotations. For example, rotating a spin- particle by 360° does not bring it back to the same quantum state, but to the state with the opposite quantum phase; this is detectable, in principle, with interference experiments. To return the particle to its exact original state, one needs a 720° rotation. (The plate trick and Möbius strip give non-quantum analogies.) A spin-zero particle can only have a single quantum state, even after torque is applied. Rotating a spin-2 particle 180° can bring it back to the same quantum state, and a spin-4 particle should be rotated 90° to bring it back to the same quantum state. The spin-2 particle can be analogous to a straight stick that looks the same even after it is rotated 180°, and a spin-0 particle can be imagined as sphere, which looks the same after whatever angle it is turned through.
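The sign change under a 360° rotation can be checked directly with matrices (an added sketch; it uses the spin-1/2 rotation operator exp(−iθS_z/ħ) and its spin-1 analogue, with ħ = 1):

```python
# Rotation operators about z: a 2π turn gives -1 for spin-1/2 but +1 for spin-1.
import numpy as np
from scipy.linalg import expm

two_pi = 2 * np.pi
Sz_half = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)   # spin-1/2 S_z, units of ħ
Sz_one = np.diag([1.0, 0.0, -1.0]).astype(complex)           # spin-1 S_z, units of ħ

U_half = expm(-1j * two_pi * Sz_half)     # 360° rotation of a spin-1/2 state
U_one = expm(-1j * two_pi * Sz_one)       # 360° rotation of a spin-1 state

print(np.allclose(U_half, -np.eye(2)))    # True: the spinor picks up a minus sign
print(np.allclose(expm(-1j * 2 * two_pi * Sz_half), np.eye(2)))  # True: 720° restores it
print(np.allclose(U_one, np.eye(3)))      # True: an integer-spin state returns after 360°
```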
Mathematical formulation
Operator
Spin obeys commutation relations analogous to those of the orbital angular momentum:
[Ŝ_j, Ŝ_k] = i ħ ε_jkl Ŝ_l,
where ε_jkl is the Levi-Civita symbol. It follows (as with angular momentum) that the eigenvectors of Ŝ² and Ŝ_z (expressed as kets in the total S basis) are
Ŝ² |s, m_s⟩ = ħ² s(s + 1) |s, m_s⟩,
Ŝ_z |s, m_s⟩ = ħ m_s |s, m_s⟩.
The spin raising and lowering operators acting on these eigenvectors give
Ŝ_± |s, m_s⟩ = ħ √(s(s + 1) − m_s(m_s ± 1)) |s, m_s ± 1⟩,
where Ŝ_± = Ŝ_x ± i Ŝ_y.
But unlike orbital angular momentum, the eigenvectors are not spherical harmonics. They are not functions of θ and φ. There is also no reason to exclude half-integer values of s and m_s.
All quantum-mechanical particles possess an intrinsic spin s (though this value may be equal to zero). The projection of the spin on any axis is quantized in units of the reduced Planck constant, such that the state function of the particle is, say, not ψ = ψ(r), but ψ = ψ(r, s_z), where s_z can take only the values of the following discrete set:
s_z ∈ {−sħ, −(s − 1)ħ, …, +(s − 1)ħ, +sħ}.
One distinguishes bosons (integer spin) and fermions (half-integer spin). The total angular momentum conserved in interaction processes is then the sum of the orbital angular momentum and the spin.
Pauli matrices
The quantum-mechanical operators associated with spin-1/2 observables are
Ŝ = (ħ/2) σ,
where in Cartesian components
S_x = (ħ/2) σ_x,   S_y = (ħ/2) σ_y,   S_z = (ħ/2) σ_z.
For the special case of spin-1/2 particles, σ_x, σ_y and σ_z are the three Pauli matrices:
σ_x = [[0, 1], [1, 0]],   σ_y = [[0, −i], [i, 0]],   σ_z = [[1, 0], [0, −1]]   (rows listed in order).
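A quick check (an added sketch, with ħ set to 1) that the operators S = σ/2 built from these matrices satisfy the angular-momentum commutation relations:

```python
# Verify [S_x, S_y] = i S_z (and cyclic permutations) for S = sigma/2, with ħ = 1.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx, Sy, Sz = sx / 2, sy / 2, sz / 2

def comm(a, b):
    return a @ b - b @ a

print(np.allclose(comm(Sx, Sy), 1j * Sz))   # True
print(np.allclose(comm(Sy, Sz), 1j * Sx))   # True
print(np.allclose(comm(Sz, Sx), 1j * Sy))   # True
print(np.allclose(Sx @ Sx + Sy @ Sy + Sz @ Sz, 0.75 * np.eye(2)))  # S² = s(s+1) = 3/4
```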
Pauli exclusion principle
The Pauli exclusion principle states that the wavefunction
ψ(r_1, σ_1, …, r_N, σ_N)
for a system of N identical particles having spin s must change upon interchanges of any two of the N particles as
ψ(…, r_i, σ_i, …, r_j, σ_j, …) = (−1)^(2s) ψ(…, r_j, σ_j, …, r_i, σ_i, …).
Thus, for bosons the prefactor (−1)^(2s) will reduce to +1, for fermions to −1.
This permutation postulate for -particle state functions has most important consequences in daily life, e.g. the periodic table of the chemical elements.
Rotations
As described above, quantum mechanics states that components of angular momentum measured along any direction can only take a number of discrete values. The most convenient quantum-mechanical description of a particle's spin is therefore with a set of complex numbers corresponding to amplitudes of finding a given value of projection of its intrinsic angular momentum on a given axis. For instance, for a spin-1/2 particle, we would need two numbers a_{+1/2} and a_{−1/2}, giving amplitudes of finding it with projection of angular momentum equal to +ħ/2 and −ħ/2, satisfying the requirement
|a_{+1/2}|² + |a_{−1/2}|² = 1.
For a generic particle with spin s, we would need 2s + 1 such parameters. Since these numbers depend on the choice of the axis, they transform into each other non-trivially when this axis is rotated. It is clear that the transformation law must be linear, so we can represent it by associating a matrix with each rotation, and the product of two transformation matrices corresponding to rotations A and B must be equal (up to phase) to the matrix representing rotation AB. Further, rotations preserve the quantum-mechanical inner product, and so should our transformation matrices: they must be unitary (up to an overall phase).
Mathematically speaking, these matrices furnish a unitary projective representation of the rotation group SO(3). Each such representation corresponds to a representation of the covering group of SO(3), which is SU(2). There is one n-dimensional irreducible representation of SU(2) for each dimension, though this representation is n-dimensional real for odd n and n-dimensional complex for even n (hence of real dimension 2n). For a rotation by angle θ in the plane with normal vector θ̂,
U = exp(−(i/ħ) θ · S),
where θ = θ θ̂, and S is the vector of spin operators.
A generic rotation in 3-dimensional space can be built by compounding operators of this type using Euler angles:
R(α, β, γ) = e^(−iα S_z/ħ) e^(−iβ S_y/ħ) e^(−iγ S_z/ħ).
An irreducible representation of this group of operators is furnished by the Wigner D-matrix:
D^s_{m′m}(α, β, γ) ≡ ⟨s, m′| R(α, β, γ) |s, m⟩ = e^(−i m′ α) d^s_{m′m}(β) e^(−i m γ),
where
d^s_{m′m}(β) = ⟨s, m′| e^(−iβ S_y/ħ) |s, m⟩
is Wigner's small d-matrix. Note that for γ = 2π and α = β = 0; i.e., a full rotation about the z axis, the Wigner D-matrix elements become
D^s_{m′m}(0, 0, 2π) = d^s_{m′m}(0) e^(−i 2π m) = δ_{m′m} e^(−i 2π m).
Recalling that a generic spin state can be written as a superposition of states with definite m, we see that if s is an integer, the values of m are all integers, and this matrix corresponds to the identity operator. However, if s is a half-integer, the values of m are also all half-integers, giving e^(−i 2π m) = −1 for all m, and hence upon rotation by 2π the state picks up a minus sign. This fact is a crucial element of the proof of the spin–statistics theorem.
Lorentz transformations
We could try the same approach to determine the behavior of spin under general Lorentz transformations, but we would immediately discover a major obstacle. Unlike SO(3), the group of Lorentz transformations SO(3,1) is non-compact and therefore does not have any faithful, unitary, finite-dimensional representations.
In case of spin-1/2 particles, it is possible to find a construction that includes both a finite-dimensional representation and a scalar product that is preserved by this representation. We associate a 4-component Dirac spinor ψ with each particle. These spinors transform under Lorentz transformations according to the law
ψ′ = exp((1/8) ω_μν [γ_μ, γ_ν]) ψ,
where γ_ν are gamma matrices, and ω_μν is an antisymmetric 4 × 4 matrix parametrizing the transformation. It can be shown that the scalar product
⟨ψ|φ⟩ = ψ̄ φ = ψ† γ_0 φ
is preserved. It is not, however, positive-definite, so the representation is not unitary.
Measurement of spin along the x, y, or z axes
Each of the (Hermitian) Pauli matrices of spin-1/2 particles has two eigenvalues, +1 and −1. The corresponding normalized eigenvectors are, for example, ψ_z+ = (1, 0)ᵀ and ψ_z− = (0, 1)ᵀ for σ_z, with analogous pairs ψ_x± and ψ_y± for σ_x and σ_y.
(Because any eigenvector multiplied by a constant is still an eigenvector, there is ambiguity about the overall sign. In this article, the convention is chosen to make the first element imaginary and negative if there is a sign ambiguity. The present convention is used by software such as SymPy; while many physics textbooks, such as Sakurai and Griffiths, prefer to make it real and positive.)
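Because the eigenvectors are easy to recompute, the following sketch (added here; numpy's phase and sign choices may differ from the conventions described above) obtains the eigenvalues and normalized eigenvectors of each Pauli matrix numerically:

```python
# Eigenvalues and normalized eigenvectors of the three Pauli matrices.
# Note: numpy may return eigenvectors with a different overall phase/sign
# than any particular textbook or symbolic-algebra convention.
import numpy as np

paulis = {
    "sigma_x": np.array([[0, 1], [1, 0]], dtype=complex),
    "sigma_y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "sigma_z": np.array([[1, 0], [0, -1]], dtype=complex),
}

for name, sigma in paulis.items():
    vals, vecs = np.linalg.eigh(sigma)   # Hermitian matrix: eigenvalues come out real
    print(name, "eigenvalues:", np.round(vals, 6))
    print("  eigenvectors (columns):")
    print(np.round(vecs, 6))
```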
By the postulates of quantum mechanics, an experiment designed to measure the electron spin on the x, y, or z axis can only yield an eigenvalue of the corresponding spin operator (S_x, S_y or S_z) on that axis, i.e. +ħ/2 or −ħ/2. The quantum state of a particle (with respect to spin) can be represented by a two-component spinor:
ψ = (a, b)ᵀ,   with complex a and b.
When the spin of this particle is measured with respect to a given axis (in this example, the z axis), the probability that its spin will be measured as +ħ/2 is just |a|². Correspondingly, the probability that its spin will be measured as −ħ/2 is just |b|². Following the measurement, the spin state of the particle collapses into the corresponding eigenstate. As a result, if the particle's spin along a given axis has been measured to have a given eigenvalue, all measurements will yield the same eigenvalue (since |ψ_z+† ψ_z+|² = 1, etc.), provided that no measurements of the spin are made along other axes.
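A minimal sketch of this rule (added here; the spinor components are arbitrary example numbers): the probabilities are the squared moduli of the components, and projecting onto the measured eigenstate reproduces the same outcome on repetition.

```python
# Born-rule probabilities for a z-axis spin measurement on a two-component spinor,
# followed by collapse onto the measured eigenstate.
import numpy as np

psi = np.array([1.0 + 0.5j, 0.3 - 0.2j])     # arbitrary example amplitudes (a, b)
psi = psi / np.linalg.norm(psi)              # normalize

p_up = abs(psi[0]) ** 2                      # probability of measuring +ħ/2 along z
p_down = abs(psi[1]) ** 2                    # probability of measuring -ħ/2 along z
print(p_up, p_down, p_up + p_down)           # probabilities sum to 1

up = np.array([1.0, 0.0])                    # eigenstate for +ħ/2 along z
collapsed = up * (up.conj() @ psi)           # projection onto the measured eigenstate
collapsed = collapsed / np.linalg.norm(collapsed)
print(abs(collapsed[0]) ** 2)                # 1.0: a repeated z measurement gives +ħ/2 again
```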
Measurement of spin along an arbitrary axis
The operator to measure spin along an arbitrary axis direction is easily obtained from the Pauli spin matrices. Let u = (u_x, u_y, u_z) be an arbitrary unit vector. Then the operator for spin in this direction is simply
S_u = (ħ/2)(u_x σ_x + u_y σ_y + u_z σ_z).
The operator S_u has eigenvalues of ±ħ/2, just like the usual spin matrices. This method of finding the operator for spin in an arbitrary direction generalizes to higher spin states: one takes the dot product of the direction with a vector of the three operators for the three x-, y-, z-axis directions.
A normalized spinor for spin-1/2 in the (u_x, u_y, u_z) direction (which works for all spin states except spin down, where it will give 0/0) is
ψ_u+ = (1 + u_z, u_x + i u_y)ᵀ / √(2(1 + u_z)).
The above spinor is obtained in the usual way by diagonalizing the matrix and finding the eigenstates corresponding to the eigenvalues. In quantum mechanics, vectors are termed "normalized" when multiplied by a normalizing factor, which results in the vector having a length of unity.
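The construction can be verified numerically (an added sketch; the direction angles are arbitrary example values): build S_u from the Pauli matrices, with ħ = 1, and check that the spinor above is its eigenvector with eigenvalue +1/2.

```python
# Spin operator along an arbitrary unit vector u and its +1/2 eigen-spinor (ħ = 1).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta, phi = 0.7, 1.9                          # arbitrary example direction
u = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])

S_u = 0.5 * (u[0] * sx + u[1] * sy + u[2] * sz)

# Normalized spinor from the text: (1 + u_z, u_x + i u_y) / sqrt(2 (1 + u_z))
psi = np.array([1 + u[2], u[0] + 1j * u[1]]) / np.sqrt(2 * (1 + u[2]))

print(np.allclose(S_u @ psi, 0.5 * psi))       # True: eigenvalue +1/2
print(np.isclose(np.linalg.norm(psi), 1.0))    # True: normalized
```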
Compatibility of spin measurements
Since the Pauli matrices do not commute, measurements of spin along the different axes are incompatible. This means that if, for example, we know the spin along the z axis, and we then measure the spin along the x axis, we have invalidated our previous knowledge of the z-axis spin. This can be seen from the property of the eigenvectors (i.e. eigenstates) of the Pauli matrices that
|⟨ψ_x± | ψ_y±⟩|² = |⟨ψ_x± | ψ_z±⟩|² = |⟨ψ_y± | ψ_z±⟩|² = 1/2.
So when physicists measure the spin of a particle along the z axis as, for example, +ħ/2, the particle's spin state collapses into the eigenstate ψ_z+. When we then subsequently measure the particle's spin along the x axis, the spin state will now collapse into either ψ_x+ or ψ_x−, each with probability 1/2. Let us say, in our example, that we measure −ħ/2, so that the state collapses into ψ_x−. When we now return to measure the particle's spin along the z axis again, the probabilities that we will measure +ħ/2 or −ħ/2 are each 1/2 (i.e. they are |⟨ψ_z+ | ψ_x−⟩|² and |⟨ψ_z− | ψ_x−⟩|² respectively). This implies that the original measurement of the spin along the z axis is no longer valid, since the spin along the z axis will now be measured to have either eigenvalue with equal probability.
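The 50/50 probabilities quoted here follow from the overlaps of the x and z eigenstates, which a short check makes explicit (an added sketch, ħ = 1):

```python
# Overlap probabilities between z-axis and x-axis spin eigenstates (each equals 1/2).
import numpy as np

z_up, z_down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_up = np.array([1.0, 1.0]) / np.sqrt(2)
x_down = np.array([1.0, -1.0]) / np.sqrt(2)

for name, x_state in [("x+", x_up), ("x-", x_down)]:
    # |<z±|x±>|² = 0.5 for every pairing
    print(name, abs(z_up @ x_state) ** 2, abs(z_down @ x_state) ** 2)
```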
Higher spins
The spin-1/2 operator S = (ħ/2)σ forms the fundamental representation of SU(2). By taking Kronecker products of this representation with itself repeatedly, one may construct all higher irreducible representations. That is, the resulting spin operators for higher-spin systems in three spatial dimensions can be calculated for arbitrarily large s using this spin operator and ladder operators. For example, taking the Kronecker product of two spin-1/2 representations yields a four-dimensional representation, which is separable into a 3-dimensional spin-1 (triplet states) and a 1-dimensional spin-0 representation (singlet state).
The resulting irreducible representations yield the spin matrices and eigenvalues in the z-basis for each value of s; for spin 1/2 they are the Pauli matrices multiplied by ħ/2, and for spin 1 they are 3 × 3 matrices acting on the three s_z eigenstates.
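The following sketch (added here, not part of the original article) constructs these z-basis spin matrices for any s from the ladder-operator matrix elements quoted earlier and checks the commutation relation; the spin value chosen is an arbitrary example.

```python
# Build spin matrices S_x, S_y, S_z for arbitrary spin s in the z-basis (ħ = 1),
# using S± matrix elements sqrt(s(s+1) - m(m±1)), then check [S_x, S_y] = i S_z.
import numpy as np

def spin_matrices(s):
    dim = int(round(2 * s)) + 1
    m = np.array([s - k for k in range(dim)])          # m = s, s-1, ..., -s
    Sz = np.diag(m).astype(complex)
    Sp = np.zeros((dim, dim), dtype=complex)           # raising operator S+
    for k in range(1, dim):                            # <m+1| S+ |m> matrix elements
        Sp[k - 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] + 1))
    Sm = Sp.conj().T                                   # lowering operator S-
    Sx, Sy = (Sp + Sm) / 2, (Sp - Sm) / (2j)
    return Sx, Sy, Sz

Sx, Sy, Sz = spin_matrices(1.5)                        # example: spin-3/2
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz))         # True
print(np.round(np.diag(Sz).real, 3))                   # eigenvalues 1.5, 0.5, -0.5, -1.5
```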
Also useful in the quantum mechanics of multiparticle systems, the general Pauli group G_n is defined to consist of all n-fold tensor products of Pauli matrices.
The analog formula of Euler's formula in terms of the Pauli matrices,
e^(iθ(n̂·σ)/2) = I cos(θ/2) + i (n̂·σ) sin(θ/2),
for higher spins is tractable, but less simple.
Parity
In tables of the spin quantum number for nuclei or particles, the spin is often followed by a "+" or "−". This refers to the parity with "+" for even parity (wave function unchanged by spatial inversion) and "−" for odd parity (wave function negated by spatial inversion). For example, see the isotopes of bismuth, in which the list of isotopes includes the column nuclear spin and parity. For Bi-209, the longest-lived isotope, the entry 9/2– means that the nuclear spin is 9/2 and the parity is odd.
Measuring spin
The nuclear spin of atoms can be determined by sophisticated improvements to the original Stern-Gerlach experiment. A single-energy (monochromatic) molecular beam of atoms in an inhomogeneous magnetic field will split into beams representing each possible spin quantum state. For an atom with electronic spin S and nuclear spin I, there are (2S + 1)(2I + 1) spin states. For example, neutral Na atoms, which have S = 1/2, were passed through a series of inhomogeneous magnetic fields that selected one of the two electronic spin states and separated the nuclear spin states, from which four beams were observed. Thus, the nuclear spin for 23Na atoms was found to be I = 3/2.
The spin of pions, a type of elementary particle, was determined by the principle of detailed balance applied to those collisions of protons that produced charged pions and deuterium.
The known spin values for protons and deuterium allow analysis of the collision cross-section to show that the charged pion has spin 0. A different approach is needed for neutral pions. In that case the decay produced two gamma-ray photons, each with spin one: π⁰ → γ + γ.
This result supplemented with additional analysis leads to the conclusion that the neutral pion also has spin zero.
Applications
Spin has important theoretical implications and practical applications. Well-established direct applications of spin include:
Nuclear magnetic resonance (NMR) spectroscopy in chemistry;
Electron spin resonance (ESR or EPR) spectroscopy in chemistry and physics;
Magnetic resonance imaging (MRI) in medicine, a type of applied NMR, which relies on proton spin density;
Giant magnetoresistive (GMR) drive-head technology in modern hard disks.
Electron spin plays an important role in magnetism, with applications for instance in computer memories. The manipulation of nuclear spin by radio-frequency waves (nuclear magnetic resonance) is important in chemical spectroscopy and medical imaging.
Spin–orbit coupling leads to the fine structure of atomic spectra, which is used in atomic clocks and in the modern definition of the second. Precise measurements of the -factor of the electron have played an important role in the development and verification of quantum electrodynamics. Photon spin is associated with the polarization of light (photon polarization).
An emerging application of spin is as a binary information carrier in spin transistors. The original concept, proposed in 1990, is known as Datta–Das spin transistor. Electronics based on spin transistors are referred to as spintronics. The manipulation of spin in dilute magnetic semiconductor materials, such as metal-doped ZnO or TiO2 imparts a further degree of freedom and has the potential to facilitate the fabrication of more efficient electronics.
There are many indirect applications and manifestations of spin and the associated Pauli exclusion principle, starting with the periodic table of chemistry.
History
Spin was first discovered in the context of the emission spectrum of alkali metals. Starting around 1910, many experiments on different atoms produced a collection of relationships involving quantum numbers for atomic energy levels, partially summarized in Bohr's model for the atom. Transitions between levels obeyed selection rules, and the rules were known to be correlated with even or odd atomic number. Additional information was known from changes to atomic spectra observed in strong magnetic fields, known as the Zeeman effect. In 1924, Wolfgang Pauli used this large collection of empirical observations to propose a new degree of freedom, introducing what he called a "two-valuedness not describable classically" associated with the electron in the outermost shell.
The physical interpretation of Pauli's "degree of freedom" was initially unknown. Ralph Kronig, one of Alfred Landé's assistants, suggested in early 1925 that it was produced by the self-rotation of the electron. When Pauli heard about the idea, he criticized it severely, noting that the electron's hypothetical surface would have to be moving faster than the speed of light in order for it to rotate quickly enough to produce the necessary angular momentum. This would violate the theory of relativity. Largely due to Pauli's criticism, Kronig decided not to publish his idea.
In the autumn of 1925, the same thought came to Dutch physicists George Uhlenbeck and Samuel Goudsmit at Leiden University. Under the advice of Paul Ehrenfest, they published their results. The young physicists immediately regretted the publication: Hendrik Lorentz and Werner Heisenberg both pointed out problems with the concept of a spinning electron.
Pauli was especially unconvinced and continued to pursue his two-valued degree of freedom. This allowed him to formulate the Pauli exclusion principle, stating that no two electrons can have the same quantum state in the same quantum system.
Fortunately, by February 1926, Llewellyn Thomas managed to resolve a factor-of-two discrepancy between experimental results for the fine structure in the hydrogen spectrum and calculations based on Uhlenbeck and Goudsmit's (and Kronig's unpublished) model. This discrepancy was due to a relativistic effect, the difference between the electron's rotating rest frame and the nuclear rest frame; the effect is now known as Thomas precession. Thomas' result convinced Pauli that electron spin was the correct interpretation of his two-valued degree of freedom, while he continued to insist that the classical rotating charge model is invalid.
In 1927, Pauli formalized the theory of spin using the theory of quantum mechanics invented by Erwin Schrödinger and Werner Heisenberg. He pioneered the use of Pauli matrices as a representation of the spin operators and introduced a two-component spinor wave-function.
Pauli's theory of spin was non-relativistic. In 1928, Paul Dirac published his relativistic electron equation, using a four-component spinor (known as a "Dirac spinor") for the electron wave-function. In 1940, Pauli proved the spin–statistics theorem, which states that fermions have half-integer spin, and bosons have integer spin.
In retrospect, the first direct experimental evidence of the electron spin was the Stern–Gerlach experiment of 1922. However, the correct explanation of this experiment was only given in 1927.
The original interpretation assumed the two spots observed in the experiment were due to quantized orbital angular momentum. However, in 1927 Ronald Fraser showed that sodium atoms are isotropic with no orbital angular momentum and suggested that the observed magnetic properties were due to electron spin. In the same year, Phipps and Taylor applied the Stern-Gerlach technique to hydrogen atoms; the ground state of hydrogen has zero orbital angular momentum but the measurements again showed two peaks.
Once the quantum theory became established, it became clear that the original interpretation could not have been correct:
the number of possible values of orbital angular momentum along one axis is always odd, unlike the observations. Hydrogen atoms have a single electron with two spin states, giving the two spots observed; silver atoms have closed shells which do not contribute to the magnetic moment, and only the unmatched outer electron's spin responds to the field.
| Physical sciences | Particle physics: General | null |
19594028 | https://en.wikipedia.org/wiki/Theoretical%20physics | Theoretical physics | Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain, and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena.
The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation.
Overview
A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms.
A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical idea that (action and) energy are not continuously variable.
Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ (semi-) empirical formulas and heuristics to agree with experimental results, often without deep physical understanding. "Modelers" (also called "model-builders") often appear much like phenomenologists, but try to model speculative theories that have certain desirable features (rather than on experimental data), or apply the techniques of mathematical modeling to physics problems. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics.
Theoretical advances may consist in setting aside old, incorrect paradigms (e.g., aether theory of light propagation, caloric theory of heat, burning consisting of evolving phlogiston, or astronomical bodies revolving around the Earth) or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions; atomic theory, first postulated millennia ago (by several thinkers in Greece and India), and the two-fluid theory of electricity are two cases in point. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle.
Physical theories become accepted if they are able to make correct predictions and no (or few) incorrect ones. The theory should have, at least as a secondary objective, a certain economy and elegance (compare to mathematical beauty), a notion sometimes called "Occam's razor" after the 13th-century English philosopher William of Occam (or Ockham), in which the simpler of two theories that describe the same matter just as adequately is preferred (but conceptual simplicity may mean mathematical complexity). They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method.
Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories.
History
Theoretical physics began at least 2,300 years ago, under the Pre-socratic philosophy, and continued by Plato and Aristotle, whose views held sway for a millennium. During the rise of medieval universities, the only acknowledged intellectual disciplines were the seven liberal arts of the Trivium like grammar, logic, and rhetoric and of the Quadrivium like arithmetic, geometry, music and astronomy. During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon. As the Scientific Revolution gathered pace, the concepts of matter, energy, space, time and causality slowly began to acquire the form we know today, and other sciences spun off from the rubric of natural philosophy. Thus began the modern era of theory with the Copernican paradigm shift in astronomy, soon followed by Johannes Kepler's expressions for planetary orbits, which summarized the meticulous observations of Tycho Brahe; the works of these men (alongside Galileo's) can perhaps be considered to constitute the Scientific Revolution.
The great push toward the modern concept of explanation started with Galileo, one of the few physicists who was both a consummate theoretician and a great experimentalist. The analytic geometry and mechanics of Descartes were incorporated into the calculus and mechanics of Isaac Newton, another theoretician/experimentalist of the highest order, writing Principia Mathematica. It contained a grand synthesis of the work of Copernicus, Galileo and Kepler, as well as Newton's theories of mechanics and gravitation, which held sway as worldviews until the early 20th century. Simultaneously, progress was also made in optics (in particular colour theory and the ancient science of geometrical optics), courtesy of Newton, Descartes and the Dutchmen Snell and Huygens. In the 18th and 19th centuries Joseph-Louis Lagrange, Leonhard Euler and William Rowan Hamilton would extend the theory of classical mechanics considerably. They picked up the interactive intertwining of mathematics and physics begun two millennia earlier by Pythagoras.
Among the great conceptual achievements of the 19th and 20th centuries were the consolidation of the idea of energy (as well as its global conservation) by the inclusion of heat, electricity and magnetism, and then light. The laws of thermodynamics, and most importantly the introduction of the singular concept of entropy began to provide a macroscopic explanation for the properties of matter. Statistical mechanics (followed by statistical physics and Quantum statistical mechanics) emerged as an offshoot of thermodynamics late in the 19th century. Another important event in the 19th century was the discovery of electromagnetic theory, unifying the previously separate phenomena of electricity, magnetism and light.
The pillars of modern physics, and perhaps the most revolutionary theories in the history of physics, have been relativity theory and quantum mechanics. Newtonian mechanics was subsumed under special relativity and Newton's gravity was given a kinematic explanation by general relativity. Quantum mechanics led to an understanding of blackbody radiation (which indeed, was an original motivation for the theory) and of anomalies in the specific heats of solids — and finally to an understanding of the internal structures of atoms and molecules. Quantum mechanics soon gave way to the formulation of quantum field theory (QFT), begun in the late 1920s. In the aftermath of World War II, more progress brought much renewed interest in QFT, which had stagnated since the early efforts. The same period also saw fresh attacks on the problems of superconductivity and phase transitions, as well as the first applications of QFT in the area of theoretical condensed matter. The 1960s and 70s saw the formulation of the Standard model of particle physics using QFT and progress in condensed matter physics (theoretical foundations of superconductivity and critical phenomena, among others), in parallel to the applications of relativity to problems in astronomy and cosmology respectively.
All of these achievements depended on the theoretical physics as a moving force both to suggest experiments and to consolidate results — often by ingenious application of existing mathematics, or, as in the case of Descartes and Newton (with Leibniz), by inventing new mathematics. Fourier's studies of heat conduction led to a new branch of mathematics: infinite, orthogonal series.
Modern theoretical physics attempts to unify theories and explain phenomena in further attempts to understand the Universe, from the cosmological to the elementary particle scale. Where experimentation cannot be done, theoretical physics still tries to advance through the use of mathematical models.
Mainstream theories
Mainstream theories (sometimes referred to as central theories) are the body of knowledge of both factual and scientific views and possess a usual scientific quality of the tests of repeatability, consistency with existing well-established science and experimentation. There do exist mainstream theories that are generally accepted theories based solely upon their effects explaining a wide variety of data, although the detection, explanation, and possible composition are subjects of debate.
Examples
Big Bang
Chaos theory
Classical mechanics
Classical field theory
Dynamo theory
Field theory
Ginzburg–Landau theory
Kinetic theory of gases
Classical electromagnetism
Perturbation theory (quantum mechanics)
Physical cosmology
Quantum chromodynamics
Quantum complexity theory
Quantum electrodynamics
Quantum field theory
Quantum field theory in curved spacetime
Quantum information theory
Quantum mechanics
Quantum thermodynamics
Relativistic quantum mechanics
Scattering theory
Standard Model
Statistical physics
Theory of relativity
Wave–particle duality
Proposed theories
Proposed theories of physics are usually relatively new theories that deal with the study of physics, including scientific approaches, means for determining the validity of models, and new types of reasoning used to arrive at the theory. However, some proposed theories have been around for decades and have eluded methods of discovery and testing. Proposed theories can include fringe theories in the process of becoming established (and, sometimes, gaining wider acceptance). Proposed theories usually have not been tested. In addition to the theories like those listed below, there are also different interpretations of quantum mechanics, which may or may not be considered different theories since it is debatable whether they yield different predictions for physical experiments, even in principle. Examples include the AdS/CFT correspondence, Chern–Simons theory, the graviton, the magnetic monopole, string theory, and the theory of everything.
Fringe theories
Fringe theories include any new area of scientific endeavour in the process of becoming established, as well as some proposed theories. They can include speculative sciences: physics fields and physical theories presented in accordance with known evidence, from which a body of associated predictions has been made according to that theory.
Some fringe theories go on to become a widely accepted part of physics. Other fringe theories end up being disproven. Some fringe theories are a form of protoscience and others are a form of pseudoscience. The falsification of the original theory sometimes leads to reformulation of the theory.
Examples
Aether (classical element)
Luminiferous aether
Digital physics
Electrogravitics
Stochastic electrodynamics
Tesla's dynamic theory of gravity
Thought experiments vs real experiments
"Thought" experiments are situations created in one's mind, asking a question akin to "suppose you are in this situation, assuming such is true, what would follow?". They are usually created to investigate phenomena that are not readily experienced in every-day situations. Famous examples of such thought experiments are Schrödinger's cat, the EPR thought experiment, simple illustrations of time dilation, and so on. These usually lead to real experiments designed to verify that the conclusion (and therefore the assumptions) of the thought experiments are correct. The EPR thought experiment led to the Bell inequalities, which were then tested to various degrees of rigor, leading to the acceptance of the current formulation of quantum mechanics and probabilism as a working hypothesis.
| Physical sciences | Basics_6 | null |
19594213 | https://en.wikipedia.org/wiki/Planck%20constant | Planck constant | The Planck constant, or Planck's constant, denoted by h, is a fundamental physical constant of foundational importance in quantum mechanics: a photon's energy is equal to its frequency multiplied by the Planck constant, and the wavelength of a matter wave equals the Planck constant divided by the associated particle momentum. The closely related reduced Planck constant, equal to h/(2π) and denoted ħ, is commonly used in quantum physics equations.
The constant was postulated by Max Planck in 1900 as a proportionality constant needed to explain experimental black-body radiation. Planck later referred to the constant as the "quantum of action". In 1905, Albert Einstein associated the "quantum" or minimal element of the energy to the electromagnetic wave itself. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".
In metrology, the Planck constant is used, together with other constants, to define the kilogram, the SI unit of mass. The SI units are defined in such a way that, when the Planck constant is expressed in SI units, it has the exact value h = 6.62607015 × 10⁻³⁴ J⋅s.
History
Origin of the constant
Planck's constant was formulated as part of Max Planck's successful effort to produce a mathematical expression that accurately predicted the observed spectral distribution of thermal radiation from a closed furnace (black-body radiation). This mathematical expression is now known as Planck's law.
In the last years of the 19th century, Max Planck was investigating the problem of black-body radiation first posed by Kirchhoff some 40 years earlier. Every physical body spontaneously and continuously emits electromagnetic radiation. There was no expression or explanation for the overall shape of the observed emission spectrum. At the time, Wien's law fit the data for short wavelengths and high temperatures, but failed for long wavelengths. Also around this time, but unknown to Planck, Lord Rayleigh had derived theoretically a formula, now known as the Rayleigh–Jeans law, that could reasonably predict long wavelengths but failed dramatically at short wavelengths.
Approaching this problem, Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum, which gave a simple empirical formula for long wavelengths.
Planck tried to find a mathematical expression that could reproduce Wien's law (for short wavelengths) and the empirical formula (for long wavelengths). This expression included a constant, h, which is thought to stand for Hilfsgrösse (auxiliary quantity), and which subsequently became known as the Planck constant. The expression formulated by Planck showed that the spectral radiance per unit frequency of a body for frequency ν at absolute temperature T is given by

B(ν, T) = (2hν³/c²) · 1/(exp(hν/(k_B T)) − 1),

where k_B is the Boltzmann constant, h is the Planck constant, and c is the speed of light in the medium, whether material or vacuum.
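As an informal illustration (not part of the original article), the short Python sketch below evaluates Planck's law as written above; the function name and the sample temperature and frequency are arbitrary illustrative choices, and the constants are the exact SI values.

import math

h = 6.62607015e-34   # Planck constant, J*s (exact by SI definition)
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact by SI definition)
c = 2.99792458e8     # speed of light in vacuum, m/s (exact by SI definition)

def spectral_radiance(nu, T):
    # Planck's law: spectral radiance per unit frequency, W * sr^-1 * m^-2 * Hz^-1
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k_B * T))

# Example: a 5800 K black body (roughly the solar surface temperature) at 540 THz
print(spectral_radiance(5.4e14, 5800.0))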
Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators. To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics, which he described as "an act of desperation". One of his new boundary conditions was to treat the total energy of the oscillators not as a continuous, infinitely divisible quantity, but as one composed of an integral number of finite, equal "energy elements".
With this new condition, Planck had imposed the quantization of the energy of the oscillators, in his own words, "a purely formal assumption ... actually I did not think much about it", but one that would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now sometimes termed the "Planck–Einstein relation": E = hf.
Planck was able to calculate the value of h from experimental data on black-body radiation: his result, 6.55 × 10⁻³⁴ J⋅s, is within 1.2% of the currently defined value. He also made the first determination of the Boltzmann constant k_B from the same data and theory.
Development and application
The black-body problem was revisited in 1905, when Lord Rayleigh and James Jeans (together) and Albert Einstein independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) in convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta".
Photoelectric effect
The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz, who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard (Lénárd Fülöp) in 1902. Einstein's 1905 paper discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921, after his predictions had been confirmed by the experimental work of Robert Andrews Millikan. The Nobel committee awarded the prize for his work on the photo-electric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and dissent amongst its members as to the actual proof that relativity was real.
Before Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterize different types of radiation. The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the ordinary bulb, even though the color of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their intensity. However, the energy account of the photoelectric effect did not seem to agree with the wave description of light.
The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect). Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy.
Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta. The size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of the Planck–Einstein relation: E = hf.
Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light and the kinetic energy of photoelectrons was shown to be equal to the Planck constant h.
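A minimal sketch (not from the article itself) of the photoelectric energy balance just described, assuming Einstein's relation E_k = h·f − W; the work function of roughly 2.3 eV is an illustrative textbook-style figure for sodium, not a value taken from this article.

h = 6.62607015e-34      # Planck constant, J*s
eV = 1.602176634e-19    # joules per electronvolt

def max_kinetic_energy_eV(frequency_hz, work_function_eV):
    # Einstein's photoelectric relation E_k = h*f - W, clamped at zero below threshold
    e_k = h * frequency_hz / eV - work_function_eV
    return max(e_k, 0.0)

# Illustrative numbers only
print(max_kinetic_energy_eV(7.5e14, 2.3))   # violet light: photoelectrons emitted (~0.8 eV)
print(max_kinetic_energy_eV(4.0e14, 2.3))   # red light: no photoelectrons (0.0)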
Atomic structure
In 1912 John William Nicholson developed an atomic model and found that the angular momentum of the electrons in the model was related to h/2π.
Nicholson's nuclear quantum atomic model influenced the development of Niels Bohr's atomic model, and Bohr quoted him in his 1913 paper on the Bohr model of the atom. Bohr's model went beyond Planck's abstract harmonic oscillator concept: an electron in a Bohr atom could only have certain defined energies E_n, defined by

E_n = −(h c R_∞)/n²,

where c is the speed of light in vacuum, R_∞ is an experimentally determined constant (the Rydberg constant) and n = 1, 2, 3, .... This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant in terms of other fundamental constants.
In discussing the angular momentum of the electrons in his model, Bohr introduced the quantity h/2π, now known as the reduced Planck constant ħ, as the quantum of angular momentum.
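The Bohr energy formula quoted above can be checked numerically; in this hedged sketch the Rydberg constant is the rounded CODATA figure, and the function name is an arbitrary choice.

h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
R_inf = 1.0973731568e7  # Rydberg constant, 1/m (rounded CODATA value)
eV = 1.602176634e-19    # joules per electronvolt

def bohr_energy_eV(n):
    # Bohr-model energy of level n: E_n = -h*c*R_inf / n**2, converted to electronvolts
    return -h * c * R_inf / n**2 / eV

for n in (1, 2, 3):
    print(n, round(bohr_energy_eV(n), 3))   # approximately -13.6, -3.4, -1.5 eV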
Uncertainty principle
The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given numerous particles prepared in the same state, the uncertainty in their position, Δx, and the uncertainty in their momentum, Δp, obey

Δx Δp ≥ ħ/2,
where the uncertainty is given as the standard deviation of the measured value from its expected value. There are several other such pairs of physically measurable conjugate variables which obey a similar rule. One example is time vs. energy. The inverse relationship between the uncertainty of the two conjugate variables forces a tradeoff in quantum experiments, as measuring one quantity more precisely results in the other quantity becoming imprecise.
In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones to the entire theory lies in the commutator relationship between the position operator x̂_i and the momentum operator p̂_j:

[x̂_i, p̂_j] = iħ δ_ij,

where δ_ij is the Kronecker delta.
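As a rough illustration of the uncertainty relation stated above (not drawn from the article itself), the sketch below computes the smallest momentum spread compatible with a given position spread; the 0.1 nm confinement length is just an atomic-scale example.

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # joules per electronvolt

def min_momentum_spread(delta_x):
    # Smallest momentum uncertainty allowed by dx * dp >= hbar / 2
    return hbar / (2.0 * delta_x)

dp = min_momentum_spread(1e-10)   # electron confined to about 0.1 nm
print(dp)                         # ~5.3e-25 kg*m/s
print(dp**2 / (2 * m_e) / eV)     # corresponding kinetic energy, roughly 1 eV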
Photon energy
The Planck relation connects the particular photon energy E with its associated wave frequency f:

E = hf.
This energy is extremely small in terms of ordinarily perceived everyday objects.
Since the frequency f, wavelength λ, and speed of light c are related by c = fλ, the relation can also be expressed as

E = hc/λ.
de Broglie wavelength
In 1923, Louis de Broglie generalized the Planck–Einstein relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but the quantum wavelength of any particle. This was confirmed by experiments soon afterward. This holds throughout the quantum theory, including electrodynamics. The de Broglie wavelength λ of the particle is given by

λ = h/p,

where p denotes the linear momentum of a particle, such as a photon, or any other elementary particle.

The energy of a photon with angular frequency ω = 2πf is given by

E = ħω,

while its linear momentum relates to

p = ħk,

where k = 2π/λ is the angular wavenumber.
These two relations are the temporal and spatial parts of the special relativistic expression using 4-vectors.
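A short, hedged example of the de Broglie relation λ = h/p for a slow (non-relativistic) electron; the chosen speed is illustrative only.

h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg

def de_broglie_wavelength(mass, velocity):
    # Non-relativistic de Broglie wavelength: lambda = h / p, with p = m * v
    return h / (mass * velocity)

# Illustrative: an electron moving at about 1% of the speed of light
print(de_broglie_wavelength(m_e, 3.0e6))   # ~2.4e-10 m, comparable to atomic spacings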
Statistical mechanics
Classical statistical mechanics requires the existence of h (but does not define its value). Eventually, following upon Planck's discovery, it was speculated that physical action could not take on an arbitrary value, but instead was restricted to integer multiples of a very small quantity, the "[elementary] quantum of action", now called the Planck constant. This was a significant conceptual part of the so-called "old quantum theory" developed by physicists including Bohr, Sommerfeld, and Ishiwara, in which particle trajectories exist but are hidden, but quantum laws constrain them based on their action. This view has been replaced by fully modern quantum theory, in which definite trajectories of motion do not even exist; rather, the particle is represented by a wavefunction spread out in space and in time. Related to this is the concept of energy quantization which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics cannot explain quantization of energy.
Dimension and value
The Planck constant has the same dimensions as action and as angular momentum. The Planck constant is fixed at h = 6.62607015 × 10⁻³⁴ J⋅s as part of the definition of the SI units.
This value is used to define the SI unit of mass, the kilogram: "the kilogram [...] is defined by taking the fixed numerical value of h to be 6.62607015 × 10⁻³⁴ when expressed in the unit J⋅s, which is equal to kg⋅m²⋅s⁻¹, where the metre and the second are defined in terms of the speed of light and the duration of the hyperfine transition of the ground state of an unperturbed caesium-133 atom." Mass metrology technologies such as the Kibble balance realize and refine the kilogram by applying the fixed value of the Planck constant.
Significance of the value
The Planck constant is one of the smallest constants used in physics. This reflects the fact that on a scale adapted to humans, where energies are typically of the order of kilojoules and times are typically of the order of seconds or minutes, the Planck constant is very small. When the product of energy and time for a physical event approaches the Planck constant, quantum effects dominate.
Equivalently, the order of the Planck constant reflects the fact that everyday objects and systems are made of a large number of microscopic particles. For example, in green light (with a wavelength of 555 nanometres or a frequency of about 540 THz) each photon has an energy E = hf ≈ 3.58 × 10⁻¹⁹ J. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. An amount of light more typical in everyday experience (though much larger than the smallest amount perceivable by the human eye) is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, with the result of about 216 kJ, about the food energy in three apples.
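The figures quoted in the previous paragraph can be reproduced with a few lines of Python; this is only a back-of-the-envelope check using the exact SI values of h, c and the Avogadro constant.

h = 6.62607015e-34     # Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
N_A = 6.02214076e23    # Avogadro constant, 1/mol

wavelength = 555e-9                    # green light, metres
photon_energy = h * c / wavelength     # ~3.58e-19 J per photon
mole_energy = photon_energy * N_A      # ~2.16e5 J, i.e. about 216 kJ per mole of photons
print(photon_energy, mole_energy)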
Reduced Planck constant
Many equations in quantum physics are customarily written using the reduced Planck constant, equal to h/(2π) and denoted ħ (pronounced h-bar).
The fundamental equations look simpler when written using ħ as opposed to h, and it is usually ħ rather than h that gives the most reliable results when used in order-of-magnitude estimates.
For example, using dimensional analysis to estimate the ionization energy of a hydrogen atom, the relevant parameters that determine the ionization energy are the mass of the electron m_e, the electron charge e, and either the Planck constant h or the reduced Planck constant ħ:

E ∝ m_e e⁴ / ((4πε₀)² ħ²)  (in SI units).

Since both constants have the same dimensions, they will enter the dimensional analysis in the same way, but with ħ the estimate is within a factor of two, while with h the error is closer to a factor of 20.
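To make the comparison concrete, the small sketch below (my own illustration, with rounded CODATA constants and the (4πε₀)² factor needed in SI units) evaluates the dimensional-analysis estimate with each constant; the results of roughly 27 eV and 0.7 eV should be compared with the true ionization energy of 13.6 eV. The helper name is arbitrary.

import math

h = 6.62607015e-34        # Planck constant, J*s
hbar = h / (2 * math.pi)  # reduced Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
eV = 1.602176634e-19      # joules per electronvolt

def hydrogen_energy_estimate_eV(planck_like):
    # Dimensional-analysis estimate E ~ m_e * e**4 / ((4*pi*eps0)**2 * constant**2)
    return m_e * e**4 / ((4 * math.pi * eps0)**2 * planck_like**2) / eV

print(hydrogen_energy_estimate_eV(hbar))  # ~27.2 eV, within a factor of 2 of 13.6 eV
print(hydrogen_energy_estimate_eV(h))     # ~0.69 eV, off by roughly a factor of 20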
Names and symbols
The reduced Planck constant is known by many other names: reduced Planck's constant, the rationalized Planck constant (or rationalized Planck's constant), the Dirac constant (or Dirac's constant), the Dirac h (or Dirac's h), the Dirac ħ (or Dirac's ħ), and h-bar. It is also common to refer to this ħ as "Planck's constant" while retaining the relationship ħ = h/(2π).
By far the most common symbol for the reduced Planck constant is ħ. However, there are some sources that denote it by h instead, in which case they usually refer to it as the "Dirac h" (or "Dirac's h").
History
The combination h/(2π) appeared in Niels Bohr's 1913 paper, where it was denoted by M₀. For the next 15 years, the combination continued to appear in the literature, but normally without a separate symbol. Then, in 1926, in their seminal papers, Schrödinger and Dirac again introduced special symbols for it: K in the case of Schrödinger, and h in the case of Dirac. Dirac continued to use h in this way until 1930, when he introduced the symbol ħ in his book The Principles of Quantum Mechanics.
| Physical sciences | Physical constants | Physics |
19595060 | https://en.wikipedia.org/wiki/Tulip | Tulip | Tulips are spring-blooming perennial herbaceous bulbiferous geophytes in the Tulipa genus. Their flowers are usually large, showy, and brightly coloured, generally red, orange, pink, yellow, or white. They often have a different coloured blotch at the base of the tepals, internally. Because of a degree of variability within the populations and a long history of cultivation, classification has been complex and controversial. The tulip is a member of the lily family, Liliaceae, along with 14 other genera, where it is most closely related to Amana, Erythronium, and Gagea in the tribe Lilieae.
There are about 75 species, and these are divided among four subgenera. The name "tulip" is thought to be derived from a Persian word for turban, which it may have been thought to resemble by those who discovered it. Tulips were originally found in a band stretching from Southern Europe to Central Asia, but since the seventeenth century have become widely naturalised and cultivated (see map). In their natural state, they are adapted to steppes and mountainous areas with temperate climates. Flowering in the spring, they become dormant in the summer once the flowers and leaves die back, emerging above ground as a shoot from the underground bulb in early spring.
Growing wild over much of the Near East and Central Asia, tulips had probably been cultivated in Persia from the 10th century. By the 15th century, tulips were among the most prized flowers; becoming the symbol of the later Ottomans. Tulips were cultivated in Byzantine Constantinople as early as 1055 but they did not come to the attention of Northern Europeans until the sixteenth century, when Northern European diplomats to the Ottoman court observed and reported on them. They were rapidly introduced into Northern Europe and became a much-sought-after commodity during tulip mania. Tulips were frequently depicted in Dutch Golden Age paintings, and have become associated with the Netherlands, the major producer for world markets, ever since. In the seventeenth-century Netherlands, during the time of the tulip mania, an infection of tulip bulbs by the tulip breaking virus created variegated patterns in the tulip flowers that were much admired and valued. While truly broken tulips are not cultivated anymore, the closest available specimens today are part of the group known as the Rembrandts – so named because Rembrandt painted some of the most admired breaks of his time.
Breeding programmes have produced thousands of hybrid and cultivars in addition to the original species (known in horticulture as botanical tulips). They are popular throughout the world, both as ornamental garden plants and as cut flowers.
Description
Tulips are perennial herbaceous bulbiferous geophytes that bloom in spring and die back after flowering to an underground storage bulb. Depending on the species, tulip plants can be between 10 and 70 cm (4 and 28 inches) high.
Tulip stems have few leaves. Larger species tend to have multiple leaves. Plants typically have two to six leaves, while some species have up to 12. The tulip's leaf is cauline (borne on a stem), strap-shaped, with a waxy coating, and the leaves are alternate (alternately arranged on the stem), diminishing in size the further up the stem they are. These fleshy blades are often bluish-green in colour. The bulbs are truncated basally and elongated towards the apex. They are covered by a protective tunic (tunicate) which can be glabrous or hairy inside.
Flowers
The tulip's flowers are usually large and are actinomorphic (radially symmetric) and hermaphrodite (contain both male (androecium) and female (gynoecium) characteristics), generally erect, or more rarely pendulous, and are arranged more usually as a single terminal flower, or when pluriflor as two to three (e.g. Tulipa turkestanica), but up to four, flowers on the end of a floriferous stem (scape), which is single arising from amongst the basal leaf rosette. In structure, the flower is generally cup or star-shaped. As with other members of Liliaceae the perianth is undifferentiated (perigonium) and biseriate (two whorled), formed from six free (i.e. apotepalous) caducous tepals arranged into two separate whorls of three parts (trimerous) each. The two whorls represent three petals and three sepals but are termed tepals because they are nearly identical. The tepals are usually petaloid (petal-like), being brightly coloured, but each whorl may be different, or have different coloured blotches at their bases, forming darker colouration on the interior surface. The inner petals have a small, delicate cleft at the top, while the sturdier outer ones form uninterrupted ovals.
The flowers have six distinct, basifixed introrse stamens arranged in two whorls of three, which vary in length and may be glabrous or hairy. The filaments are shorter than the tepals and dilated towards their base. The style is short or absent and each stigma has three distinct lobes, and the ovaries are superior, with three chambers.
Colours
The "Semper Augustus" was the most expensive tulip during the 17th-century tulip mania. After seeing the tulip in the garden of Dr. Adriaen Pauw, a director of the new East India Company, Nicolas van Wassenaer wrote in 1624 that "The colour is white, with Carmine on a blue base, and with an unbroken flame right to the top". With limited specimens in existence at the time and most owned by Pauw, his refusal to sell any flowers, despite wildly escalating offers, is believed by some to have sparked the mania.
Tulip flowers come in a wide variety of colours, except pure blue (several tulips with "blue" in the name have a faint violet hue), and lack nectaries. Tulip flowers are generally bereft of scent and are the coolest of floral characters.
While tulips can be bred to display a wide variety of colours, black tulips have historically been difficult to achieve. The Queen of the Night tulip is as close to black as a flower gets, though it is, in fact, a dark and glossy maroonish purple. The first truly black tulip was bred in 1986 by a Dutch flower grower in Bovenkarspel, Netherlands. The specimen was created by cross-breeding two deep purple tulips, the Queen of the Night and Wienerwald tulips.
Fruit
The tulip's fruit is a globose or ellipsoid capsule with a leathery covering and an ellipsoid to globe shape. Each capsule contains numerous flat, disc-shaped seeds in two rows per chamber. These light to dark brown seeds have very thin seed coats and endosperm that do not normally fill the entire seed.
Phytochemistry
Tulipanin is an anthocyanin found in tulips. It is the 3-rutinoside of delphinidin. The chemical compounds named tuliposides and tulipalins can also be found in tulips and are responsible for allergies. Tulipalin A, or α-methylene-γ-butyrolactone, is a common allergen, generated by hydrolysis of the glucoside tuliposide A. It induces a dermatitis that is mostly occupational and affects tulip bulb sorters and florists who cut the stems and leaves. Tulipanin A and B are toxic to horses, cats and dogs. The colour of a tulip is formed from two pigments working in concert; a base colour that is always yellow or white, and a second laid-on anthocyanin colour. The mix of these two hues determines the visible unitary colour. The breaking of flowers occurs when a virus suppresses anthocyanin and the base colour is exposed as a streak.
Fragrance
The great majority of tulips, both species and cultivars, have no discernable scent, but a few of both are scented to a degree, and Anna Pavord describes T. hungarica as "strongly scented", and among cultivars, some such as "Monte Carlo" and "Brown Sugar" are "scented", and "Creme Upstar" "fragrant".
Taxonomy
The genus Tulipa was published by Carl Linnaeus in 1753 with Tulipa gesneriana as the type species. Tulipa is a genus of the lily family, Liliaceae, once one of the largest families of monocots, but which molecular phylogenetics has reduced to a monophyletic grouping with only 15 genera. Within Liliaceae, Tulipa is placed within Lilioideae, one of three subfamilies, with two tribes. Tribe Lilieae includes seven other genera in addition to Tulipa.
Subdivision
The genus, which includes about 75 species, is divided into four subgenera.
Clusianae (4 species)
Orithyia (4 species)
Tulipa (52 species)
Eriostemones (16 species)
Etymology
The word tulip, first mentioned in western Europe in or around 1554 and seemingly derived from the "Turkish Letters" of diplomat Ogier Ghiselin de Busbecq, first appeared in English as tulipa or tulipant, entering the language by way of French tulipe and its obsolete form tulipan, or by way of Modern Latin tulipa, from Ottoman Turkish tülbend ("muslin" or "gauze"), and may be ultimately derived from the Persian delband ("turban"), this name being applied because of a perceived resemblance of the shape of a tulip flower to that of a turban. This may have been due to a translation error in early times when it was fashionable in the Ottoman Empire to wear tulips on turbans. The translator possibly confused the flower for the turban.
Ogier Ghiselin de Busbecq stated that the "Turks" used the word tulipan to describe the flower. Extensive speculation has tried to understand why he would state this, given that the Turkish word for tulip is lale. It is from this speculation that tulipan being a translation error referring to turbans is derived. This etymology has been challenged and makes no assumptions about possible errors. At no point does Busbecq state this was the word used in Turkey, he simply states it was used by the "Turks". On his way to Constantinople Busbecq states he travelled through Hungary and used Hungarian guides. Until recent times "Turk" was a common term when referring to Hungarians. The word tulipan is in fact the Hungarian word for tulip. As long as one recognizes "Turk" as a reference to Hungarians, no amount of speculation is required to reconcile the word's origin or form. Busbecq may have been simply repeating the word used by his "Turk/Hungarian" guides.
The Hungarian word tulipan may be adopted from an Indo-Aryan reference to the tulip as a symbol of resurrection, tala meaning "bottom or underworld" and pAna meaning "defence". Prior to arriving in Europe the Hungarians, and other Finno-Ugrians, embraced the Indo-Iranian cult of the dead, Yima/Yama, and would have been familiar with all of its symbols including the tulip.
Distribution and habitat
Tulips are mainly distributed along a band corresponding to latitude 40° north, from southeast of Europe (Greece, Albania, North Macedonia, Kosovo, Southern Serbia, Bulgaria, most of Romania, Ukraine, Russia) and Turkey in the west, through the Levant (Syria, Israel, Palestinian Territories, Lebanon and Jordan) and the Sinai Peninsula. From there it extends eastwards through Jerevan (Armenia) and Baku (Azerbaijan), and on the eastern shore of the Caspian Sea through Turkmenistan, Bukhara, Samarkand and Tashkent (Uzbekistan), to the eastern end of the range in the Pamir-Alai and Tien-Shan mountains in Central Asia, which form the centre of diversity. Further to the east, Tulipa is found in the western Himalayas, southern Siberia, Inner Mongolia, and as far as the northwest of China. While authorities have stated that no tulips west of the Balkans are native, subsequent identification of Tulipa sylvestris subsp. australis as a native of the Iberian peninsula and adjacent North Africa shows that this may be a simplification. In addition to these regions in the west, tulips have been identified in Greece, Cyprus and the Balkans. In the south, Iran marks its furthest extent, while the northern limit is Ukraine. Although tulips are also found throughout most of the Mediterranean and Europe, these regions do not form part of the natural distribution. Tulips were brought to Europe by travellers and merchants from Anatolia and Central Asia for cultivation, from where they escaped and naturalised (see map). For instance, less than half of those species found in Turkey are actually native. These have been referred to as neo-tulipae.
Tulips are indigenous to mountainous areas with temperate climates, where they are a common element of steppe and winter-rain Mediterranean vegetation. They thrive in climates with long, cool springs and dry summers. Tulips are most commonly found in meadows, steppes and chaparral, but also introduced in fields, orchards, roadsides and abandoned gardens.
Ecology
Botrytis tulipae is a major fungal disease affecting tulips, causing cell death and eventually the rotting of the plant. Other pathogens include anthracnose, bacterial soft rot, blight caused by Sclerotium rolfsii, bulb nematodes, other rots including blue molds, black molds and mushy rot.
The fungus Trichoderma viride can infect tulips, producing dried leaf tips and reduced growth, although symptoms are usually mild and only present on bulbs growing in glasshouses.
Variegated tulips admired during the Dutch tulipomania gained their delicately feathered patterns from an infection with the tulip breaking virus, a mosaic virus that was carried by the green peach aphid, Myzus persicae. While the virus produces fantastically streaked flowers, it also weakens plants and reduces the number of offsets produced. Dutch growers would go to extraordinary lengths during tulipomania to make tulips break, borrowing alchemists’ techniques and resorting to sprinkling paint powders of the desired hue or pigeon droppings onto flower roots. Tulips affected by the mosaic virus are called "broken"; while such plants can occasionally revert to a plain or solid colouring, they will remain infected and have to be destroyed. Today the virus is almost eradicated from tulip growers' fields. The multicoloured patterns of modern varieties result from breeding; they normally have solid, un-feathered borders between the colours.
Tulip growth is also dependent on temperature conditions. Slightly germinated plants show greater growth if subjected to a period of cool dormancy, known as vernalisation. Furthermore, although flower development is induced at warmer temperatures (), elongation of the flower stalk and proper flowering is dependent on an extended period of low temperature (< ). Tulip bulbs imported to warm-winter areas are often planted in autumn to be treated as annuals. The colour of tulip flowers also varies with growing conditions.
In the American East, white-tailed deer eat tulips, with no apparent ill effects. However, tulips are poisonous to domestic animals e.g. horses, cats, and dogs.
Cultivation
History
Islamic World
Cultivation of the tulip began in Iran (Persia), probably in the 10th century. Early cultivars must have emerged from hybridisation in gardens from wild collected plants, which were then favoured, possibly due to flower size or growth vigour. The tulip is not mentioned by any writer from antiquity, therefore it seems probable that tulips were introduced into Anatolia only with the advance of the Seljuks. In the Ottoman Empire, numerous types of tulips were cultivated and bred, and today, 14 species can still be found in Turkey. Tulips are mentioned by Omar Khayyam and Jalāl ad-Dīn Rûmi. Species of tulips in Turkey typically come in red, less commonly in white or yellow. The Ottoman Turks had discovered that these wild tulips were great changelings, freely hybridizing (though it takes 7 years to show colour) but also subject to mutations that produced spontaneous changes in form and colour.
A paper by Arthur Baker reports that in 1574, Sultan Selim II ordered the Kadi of A‘azāz in Syria to send him 50,000 tulip bulbs. However, John Harvey points out several problems with this source, and there is also the possibility that tulips and hyacinth (sümbüll), originally Indian spikenard (Nardostachys jatamansi) have been confused. Sultan Selim also imported 300,000 bulbs of Kefe Lale (also known as Cafe-Lale, from the medieval name Kaffa, probably Tulipa suaveolens, syn. Tulipa schrenkii) from Kefe in Crimea, for his gardens in the Topkapı Sarayı in Istanbul.
It is also reported that shortly after arriving in Constantinople in 1554, Ogier Ghislain de Busbecq, ambassador of the Austrian Habsburgs to the court of Suleyman the Magnificent, claimed to have introduced the tulip to Europe by sending a consignment of bulbs west. The fact that the tulip's first official trip west took it from one court to the other could have contributed to its ascendency.
Sultan Ahmet III maintained famous tulip gardens in the summer highland pastures (Yayla) at Spil Dağı above the town of Manisa. They seem to have consisted of wild tulips. However, of the 14 tulip species known from Turkey, only four are considered to be of local origin, so wild tulips from Iran and Central Asia may have been brought into Turkey during the Seljuk and especially Ottoman periods. Also, Sultan Ahmet imported domestic tulip bulbs from the Netherlands.
The gardening book Revnak'ı Bostan (Beauty of the Garden) by Sahibül Reis ülhaç Ibrahim Ibn ülhaç Mehmet, written in 1660 does not mention the tulip at all, but contains advice on growing hyacinths and lilies. However, there is considerable confusion of terminology, and tulips may have been subsumed under hyacinth, a mistake several European botanists were to perpetuate. In 1515, the scholar Qasim from Herat in contrast had identified both wild and garden tulips (lale) as anemones (shaqayq al-nu'man) but described the crown imperial as laleh kakli.
In a Turkic text written before 1495, the Chagatay Husayn Bayqarah mentions tulips (lale). Babur, the founder of the Mughal Empire, also names tulips in the Baburnama. He may actually have introduced them from Afghanistan to the plains of India, as he did with other plants like melons and grapes. The tulip represents the official symbol of Turkey.
In Moorish Andalus, a "Makedonian bulb" (basal al-maqdunis) or "bucket-Narcissus" (naryis qadusi) was cultivated as an ornamental plant in gardens. It was supposed to have come from Alexandria and may have been Tulipa sylvestris, but the identification is not wholly secure.
Introduction to Western Europe
Although it is unknown who first brought the tulip to Northwestern Europe, the most widely accepted story is that it was Oghier Ghislain de Busbecq, an ambassador for Emperor Ferdinand I to Suleyman the Magnificent. According to a letter, he saw "an abundance of flowers everywhere; Narcissus, hyacinths and those in Turkish called Lale, much to our astonishment because it was almost midwinter, a season unfriendly to flowers." However, in 1559, an account by Conrad Gessner describes tulips flowering in Augsburg, Swabia in the garden of Councillor Heinrich Herwart. In Central and Northern Europe, tulip bulbs are generally removed from the ground in June and must be replanted by September for the winter. It is doubtful that Busbecq could have had the tulip bulbs harvested, shipped to Germany and replanted between March 1558 and Gessner's description the following year. Pietro Andrea Mattioli illustrated a tulip in 1565 but identified it as a narcissus.
Carolus Clusius is largely responsible for the spread of tulip bulbs in the final years of the 16th century; he planted tulips at the Vienna Imperial Botanical Gardens in 1573. He finished the first major work on tulips in 1592 and made note of the colour variations. After he was appointed the director of the Leiden University's newly established Hortus Botanicus, he planted both a teaching garden and his private garden with tulips in late 1593. Thus, 1594 is considered the date of the tulip's first flowering in the Netherlands, despite reports of the cultivation of tulips in private gardens in Antwerp and Amsterdam two or three decades earlier. These tulips at Leiden would eventually lead to both the tulip mania and the tulip industry in the Netherlands. Over two raids, in 1596 and in 1598, more than one hundred bulbs were stolen from his garden.
Tulips spread rapidly across Europe, and more opulent varieties such as double tulips were already known in Europe by the early 17th century. These curiosities fitted well in an age when natural oddities were cherished especially in the Netherlands, France, Germany and England, where the spice trade with the East Indies had made many people wealthy. Nouveaux riches seeking wealthy displays embraced the exotic plant market, especially in the Low Countries where gardens had become fashionable. A craze for bulbs soon grew in France, where in the early 17th century, entire properties were exchanged as payment for a single tulip bulb. The value of the flower gave it an aura of mystique, and numerous publications describing varieties in lavish garden manuals were published, cashing in on the value of the flower. An export business was built up in France, supplying Dutch, Flemish, German and English buyers. The trade drifted slowly from the French to the Dutch.
Between 1634 and 1637, the enthusiasm for the new flowers in Holland triggered a speculative frenzy now known as the tulip mania that eventually led to the collapse of the market three years later. Tulip bulbs had become so expensive that they were treated as a form of currency, or rather, as futures, forcing the Dutch government to introduce trading restrictions on the bulbs. Around this time, the ceramic tulipiere was devised for the display of cut flowers stem by stem. Vases and bouquets, usually including tulips, often appeared in Dutch still-life painting. To this day, tulips are associated with the Netherlands, and the cultivated forms of the tulip are often called "Dutch tulips". The Netherlands has the world's largest permanent display of tulips at the Keukenhof.
The majority of tulip cultivars are classified in the taxon Tulipa gesneriana. They have usually several species in their direct background, but most have been derived from Tulipa suaveolens. Tulipa gesneriana is in itself an early hybrid of complex origin and is probably not the same taxon as was described by Conrad Gessner in the 16th century.
The UK's National Collection of English florists' tulips and Dutch historic tulips, dating from the early 17th century to c. 1960, is held by Polly Nicholson at Blackland House, near Calne in Wiltshire.
Introduction to the United States
It is believed the first tulips in the United States were grown near Spring Pond at the Fay Estate in Lynn and Salem, Massachusetts. From 1847 to 1865, Richard Sullivan Fay, Esq., one of Lynn's wealthiest men, settled on land located partly in present-day Lynn and partly in present-day Salem. Mr. Fay imported many different trees and plants from all parts of the world and planted them among the meadows of the Fay Estate.
Propagation
The Netherlands is the world's main producer of commercial tulip plants, producing as many as 3 billion bulbs annually, the majority for export.
"Unlike many flower species, tulips do not produce nectar to entice insect pollination. Instead, tulips rely on wind and land animals to move their pollen between reproductive organs. Because they are self-pollinating, they do not need the pollen to move several feet to another plant but only within their blossoms."
Tulips can be propagated through bulb offsets, seeds or micropropagation. Offsets and tissue culture methods are means of asexual propagation for producing genetic clones of the parent plant, which maintains cultivar genetic integrity. Seeds are most often used to propagate species and subspecies or to create new hybrids. Many tulip species can cross-pollinate with each other, and when wild tulip populations overlap geographically with other tulip species or subspecies, they often hybridise and create mixed populations. Most commercial tulip cultivars are complex hybrids, and often sterile.
Offsets require a year or more of growth before plants are large enough to flower. Tulips grown from seeds often need five to eight years before plants are of flowering size. To prevent cross-pollination, increase the growth rate of bulbs and increase the vigour and size of offsets, the flower and stems of a field of commercial tulips are usually topped using large tractor-mounted mowing heads. The same goals can be achieved by a private gardener by clipping the stem and flower of an individual specimen. Commercial growers usually harvest the tulip bulbs in late summer and grade them into sizes; bulbs large enough to flower are sorted and sold, while smaller bulbs are sorted into sizes and replanted for sale in the future.
Because tulip bulbs do not reliably come back every year, tulip varieties that fall out of favour with present aesthetic values have traditionally gone extinct. Unlike other flowers that do not suffer this same limitation, the tulip's historical forms do not survive alongside their modern incarnations.
Horticultural classification
In horticulture, tulips are divided into fifteen groups (Divisions) mostly based on flower morphology and plant size.
Div. 1: Single early – with cup-shaped single flowers, no larger than across. They bloom early to mid-season. Growing tall.
Div. 2: Double early – with fully double flowers, bowl shaped to across. Plants typically grow from tall.
Div. 3: Triumph – single, cup shaped flowers up to wide. Plants grow tall and bloom mid to late season.
Div. 4: Darwin hybrid – single flowers are ovoid in shape and up to wide. Plants grow tall and bloom mid to late season. This group should not be confused with older Darwin tulips, which belong in the Single Late Group below.
Div. 5: Single late – cup or goblet-shaped flowers up to wide, some plants produce multi-flowering stems. Plants grow tall and bloom late season.
Div. 6: Lily-flowered – the flowers possess a distinct narrow 'waist' with pointed and reflexed petals. Previously included with the old Darwins, only became a group in their own right in 1958.
Div. 7: Fringed (Crispa) – cup or goblet-shaped blossoms edged with spiked or crystal-like fringes, sometimes called “tulips for touch” because of the temptation to “test” the fringes to see if they are real or made of glass. Perennials with a tendency to naturalize in woodland areas, growing tall and blooming in late season.
Div. 8: Viridiflora
Div. 9: Rembrandt
Div. 10: Parrot
Div. 11: Double late – Large, heavy blooms. They range from tall.
Div. 12: Kaufmanniana – Waterlily tulip. Medium-large creamy yellow flowers marked red on the outside and yellow at the centre. Stems tall.
Div. 13: Fosteriana (Emperor)
Div. 14: Greigii – Scarlet flowers across, on stems. Foliage mottled with brown.
Div. 15: Species or Botanical – The terms "species tulips" and "botanical tulips" refer to wild species in contrast to hybridised varieties. As a group they have been described as being less ostentatious but more reliably vigorous as they age.
Div. 16: Multiflowering – not an official division, these tulips belong in the first 15 divisions but are often listed separately because they have multiple blooms per bulb.
They may also be classified by their flowering season:
Early flowering: Single Early Tulips, Double Early Tulips, Greigii Tulips, Kaufmanniana Tulips, Fosteriana Tulips,
Mid-season flowering: Darwin Hybrid Tulips, Triumph Tulips, Parrot Tulips
Late season flowering: Single Late Tulips, Double Late Tulips, Viridiflora Tulips, Lily-flowering Tulips, Fringed (Crispa) Tulips, Rembrandt Tulips
Neo-tulipae
A number of names are based on naturalised garden tulips and are usually referred to as neo-tulipae. These are often difficult to trace back to their original cultivar, and in some cases have been occurring in the wild for many centuries. The history of naturalisation is unknown, but populations are usually associated with agricultural practices and are possibly linked to saffron cultivation. Some neo-tulipae have been brought into cultivation, and are often offered as botanical tulips. These cultivated plants can be classified into two Cultivar Groups: 'Grengiolensis Group', with picotee tepals, and the 'Didieri Group' with unicolorous tepals.
Horticulture
Tulip bulbs are typically planted around late summer and fall, in well-drained soils. Tulips should be planted apart from each other. The recommended hole depth is deep and is measured from the top of the bulb to the surface. Therefore, larger tulip bulbs would require deeper holes. Species of tulips are normally planted deeper.
Toxicity
As with other plants of the lily family, tulips are poisonous to domestic animals including horses, cats, and dogs. In cats, ingestion of small amounts of tulips can cause vomiting, depression, diarrhoea, hypersalivation, and irritation of the mouth and throat, and larger amounts can cause abdominal pain, tremors, tachycardia, convulsions, tachypnea, difficulty breathing, cardiac arrhythmia, and coma. All parts of the tulip plant are poisonous to cats, while the bulb is especially dangerous. A veterinarian should be contacted immediately if a cat has ingested tulip.
Tulip bulbs look similar to onions, but should not generally be considered food. The toxicity of bulbs is not well understood, nor is there an agreed-upon method of safely preparing them for human consumption. There have been reports of illness when eaten, depending on quantity. During the Dutch famine of 1944–45, tulip bulbs were eaten out of desperation, and Dutch doctors provided recipes.
Uses
Tulip petals are edible to humans. The taste varies by variety and season and is roughly similar to lettuce or other salad greens. Some people are allergic to tulips.
In culture
Iran
The celebration of Persian New Year, or Nowruz, dating back over 3,000 years, marks the advent of spring, and tulips are used as a decorative feature during the festivities.
The 12th century Persian tragic romance, Khosrow and Shirin, similar to the tale of Romeo and Juliet, tells of tulips sprouting where the blood of the young prince Farhad spilt after he killed himself upon hearing the (deliberately false) story that his true love had died.
The tulip was a topic for Persian poets from the thirteenth century. The poem Gulistan by Musharrifu'd-din Saadi, described a visionary garden paradise with "The murmur of a cool stream / bird song, ripe fruit in plenty / bright multicoloured tulips and fragrant roses...". In recent times, tulips have featured in the poems of Simin Behbahani.
The tulip is the national symbol for martyrdom in Iran (and Shi'ite Islam generally), and has been used on postage stamps and coins. It was common as a symbol used in the 1979 Islamic Revolution, and a red tulip adorns the flag redesigned in 1980. The sword in the centre, with four crescent-shaped petals around it, create the word "Allah" as well as symbolising the five pillars of Islam. The tomb of Ayatollah Ruhollah Khomeini is decorated with 72 stained glass tulips, representing 72 martyrs who died at the Battle of Karbala in 680CE. It was also used as a symbol on billboards celebrating casualties of the 1980–1988 war with Iraq.
The tulip also became a symbol of protest against the Iranian government after the presidential election in June 2009, when millions turned out on the streets to protest the re-election of Mahmoud Ahmadinejad. After the protests were harshly suppressed, the Iranian Green Movement adopted the tulip as a symbol of their struggle.
The word for tulip in Persian is "laleh" (لاله), and this has become popular as a girl's name. The name has been used for commercial enterprises, such as the Laleh International Hotel, as well as public facilities, such as Laleh Park and Laleh Hospital, and the tulip motif remains common in Iranian culture.
Other cultures
Tulips are called lale in Turkish (from Persian lāleh, from lal 'red'). When written in Arabic letters, lale has the same letters as Allah, which is why the flower became a holy symbol. It was also associated with the House of Osman, resulting in tulips being widely used in decorative motifs on tiles, mosques, fabrics, crockery, etc. in the Ottoman Empire. The tulip was seen as a symbol of abundance and indulgence. The era during which the Ottoman Empire was wealthiest is often called the Tulip era, or Lale Devri in Turkish.
Tulips became popular garden plants in the east and west, but, whereas the tulip in Turkish culture was a symbol of paradise on earth and had almost a divine status, in the Netherlands it represented the briefness of life.
In Christianity, tulips symbolise passion, belief and love. White tulips represent forgiveness while purple tulips represent royalty, both important aspects of Easter. In Calvinism, the five points of the doctrines of grace have been summarized under the acrostic TULIP.
By contrast to other flowers such as the coneflower or lotus flower, tulips have historically been capable of genetically reinventing themselves to suit changes in aesthetic values. In his 1597 herbal, John Gerard says of the tulip that "nature seems to play more with this flower than with any other that I do know". When in the Netherlands, beauty was defined by marbled swirls of vivid contrasting colours, the petals of tulips were able to become "feathered" and "flamed". However, in the 19th century, when the English desired tulips for carpet bedding and massing, the tulips were able to once again accommodate this by evolving into "paint-filled boxes with the brightest, fattest dabs of pure pigment". This inherent mutability of the tulip even led the Ottoman Turks to believe that nature cherished this flower above all others.
The Dutch regarded the flower's lack of scent as a virtue, representing chasteness. The Black Tulip (1850) is a historical romance by Alexandre Dumas, père. The story takes place in the Dutch city of Haarlem, where a reward is offered to the first grower who can produce a truly black tulip.
The tulip occurs on a number of the Major Arcana cards of occultist Oswald Wirth's deck of Tarot cards, specifically the Magician, Emperor, Temperance and the Fool, described in his 1927 work .
Tulip festivals
Tulip festivals are held around the world, for example in the Netherlands and Spalding, England. There is also a popular festival in Morges, Switzerland. Every spring, there are tulip festivals in North America, including the Tulip Time Festival in Holland, Michigan, the Skagit Valley Tulip Festival in Skagit Valley, Washington, the Tulip Time Festival in Orange City and Pella, Iowa, and the Canadian Tulip Festival in Ottawa, Ontario, Canada. Tulips are also popular in Australia and several festivals are held in September and October, during the Southern Hemisphere's spring. The Indira Gandhi Memorial Tulip Garden hosts an annual tulip festival which draws huge attention and has an attendance of over 200,000.
| Biology and health sciences | Monocots | null |
19595436 | https://en.wikipedia.org/wiki/Elbow | Elbow | The elbow is the region between the upper arm and the forearm that surrounds the elbow joint. The elbow includes prominent landmarks such as the olecranon, the cubital fossa (also called the chelidon, or the elbow pit), and the lateral and the medial epicondyles of the humerus. The elbow joint is a hinge joint between the arm and the forearm; more specifically between the humerus in the upper arm and the radius and ulna in the forearm which allows the forearm and hand to be moved towards and away from the body.
The term elbow is specifically used for humans and other primates; it is not used for other vertebrates, in which the joint is instead described as a joint of the forelimb.
The name for the elbow in Latin is cubitus, and so the word cubital is used in some elbow-related terms, as in cubital nodes for example.
Structure
Joint
The elbow joint has three different portions surrounded by a common joint capsule. These are joints between the three bones of the elbow, the humerus of the upper arm, and the radius and the ulna of the forearm.
When in anatomical position there are three main bony landmarks of the elbow. At the lower part of the humerus are the medial and lateral epicondyles, on the side closest to the body (medial) and on the side away from the body (lateral). The third landmark is the olecranon, found at the proximal end of the ulna. These lie on a horizontal line called the Hueter line. When the elbow is flexed, they form a triangle called the Hueter triangle, which resembles an equilateral triangle.
At the surface of the humerus where it faces the joint is the trochlea. In most people, the groove running across the trochlea is vertical on the anterior side but it spirals off on the posterior side. This results in the forearm being aligned to the upper arm during flexion, but forming an angle to the upper arm during extension — an angle known as the carrying angle.
The superior radioulnar joint shares the joint capsule with the elbow joint but plays no functional role at the elbow.
Joint capsule
The elbow joint and the superior radioulnar joint are enclosed by a single fibrous capsule. The capsule is strengthened by ligaments at the sides but is relatively weak in front and behind.
On the anterior side, the capsule consists mainly of longitudinal fibres. However, some bundles among these fibers run obliquely or transversely, thickening and strengthening the capsule. These bundles are referred to as the capsular ligament. Deep fibres of the brachialis muscle insert anteriorly into the capsule and act to pull it and the underlying membrane during flexion in order to prevent them from being pinched.
On the posterior side, the capsule is thin and mainly composed of transverse fibres. A few of these fibres stretch across the olecranon fossa without attaching to it and form a transverse band with a free upper border. On the ulnar side, the capsule reaches down to the posterior part of the annular ligament. The posterior capsule is attached to the triceps tendon which prevents the capsule from being pinched during extension.
Synovial membrane
The synovial membrane of the elbow joint is very extensive. On the humerus, it extends up from the articular margins and covers the coronoid and radial fossae anteriorly and the olecranon fossa posteriorly. Distally, it is prolonged down to the neck of the radius and the superior radioulnar joint. It is supported by the quadrate ligament below the annular ligament where it also forms a fold which gives the head of the radius freedom of movement.
Several synovial folds project into the recesses of the joint.
These folds or plicae are remnants of normal embryonic development and can be categorized as either anterior (anterior humeral recess) or posterior (olecranon recess).
A crescent-shaped fold is commonly present between the head of the radius and the capitulum of the humerus.
On the humerus there are extrasynovial fat pads adjacent to the three articular fossae. These pads fill the radial and coronoid fossa anteriorly during extension, and the olecranon fossa posteriorly during flexion. They are displaced when the fossae are occupied by the bony projections of the ulna and radius.
Ligaments
The elbow, like other joints, has ligaments on either side. These are triangular bands which blend with the joint capsule. They are positioned so that they always lie across the transverse joint axis and are, therefore, always relatively tense and impose strict limitations on abduction, adduction, and axial rotation at the elbow.
The ulnar collateral ligament has its apex on the medial epicondyle. Its anterior band stretches from the anterior side of the medial epicondyle to the medial edge of the coronoid process, while the posterior band stretches from posterior side of the medial epicondyle to the medial side of the olecranon. These two bands are separated by a thinner intermediate part and their distal attachments are united by a transverse band below which the synovial membrane protrudes during joint movements. The anterior band is closely associated with the tendon of the superficial flexor muscles of the forearm, even being the origin of flexor digitorum superficialis. The ulnar nerve crosses the intermediate part as it enters the forearm.
The radial collateral ligament is attached to the lateral epicondyle below the common extensor tendon. Less distinct than the ulnar collateral ligament, this ligament blends with the annular ligament of the radius and its margins are attached near the radial notch of the ulna.
Muscles
Flexion
There are three main flexor muscles at the elbow:
Brachialis acts exclusively as an elbow flexor and is one of the few muscles in the human body with a single function. It originates low on the anterior side of the humerus and is inserted into the tuberosity of the ulna.
Brachioradialis acts essentially as an elbow flexor but also supinates during extreme pronation and pronates during extreme supination. It originates at the lateral supracondylar ridge distally on the humerus and is inserted distally on the radius at the styloid process.
Biceps brachii is the main elbow flexor but, as a biarticular muscle, also plays important secondary roles as a stabiliser at the shoulder and as a supinator. It originates on the scapula with two tendons: That of the long head on the supraglenoid tubercle just above the shoulder joint and that of the short head on the coracoid process at the top of the scapula. Its main insertion is at the radial tuberosity on the radius.
Brachialis is the main muscle used when the elbow is flexed slowly. During rapid and forceful flexion all three muscles are brought into action assisted by the superficial forearm flexors originating at the medial side of the elbow.
The efficiency of the flexor muscles increases dramatically as the elbow is brought into midflexion (flexed 90°) — biceps reaches its angle of maximum efficiency at 80–90° and brachialis at 100–110°.
Active flexion is limited to 145° by the contact between the anterior muscles of the upper arm and forearm, more so because they are hardened by contraction during flexion. Passive flexion (forearm is pushed against the upper arm with flexors relaxed) is limited to 160° by the bony projections on the radius and ulna as they reach to shallow depressions on the humerus; i.e. the head of radius being pressed against the radial fossa and the coronoid process being pressed against the coronoid fossa. Passive flexion is further limited by tension in the posterior capsular ligament and in triceps brachii.
A small accessory muscle, so called epitrochleoanconeus muscle, may be found on the medial aspect of the elbow running from the medial epicondyle to the olecranon.
Extension
Elbow extension is simply bringing the forearm back to anatomical position. This action is performed by triceps brachii with a negligible assistance from anconeus. Triceps originates with two heads posteriorly on the humerus and with its long head on the scapula just below the shoulder joint. It is inserted posteriorly on the olecranon.
Triceps is maximally efficient with the elbow flexed 20–30°. As the angle of flexion increases, the position of the olecranon approaches the main axis of the humerus which decreases muscle efficiency. In full flexion, however, the triceps tendon is "rolled up" on the olecranon as on a pulley which compensates for the loss of efficiency. Because triceps' long head is biarticular (acts on two joints), its efficiency is also dependent on the position of the shoulder.
Extension is limited by the olecranon reaching the olecranon fossa, tension in the anterior ligament, and resistance in flexor muscles. Forced extension results in a rupture in one of the limiting structures: olecranon fracture, torn capsule and ligaments, and, though the muscles are normally left unaffected, a bruised brachial artery.
Blood supply
The arteries supplying the joint are derived from an extensive circulatory anastomosis between the brachial artery and its terminal branches. The superior and inferior ulnar collateral branches of the brachial artery and the radial and middle collateral branches of the profunda brachii artery descend from above to reconnect on the joint capsule, where they also connect with the anterior and posterior ulnar recurrent branches of the ulnar artery; the radial recurrent branch of the radial artery; and the interosseous recurrent branch of the common interosseous artery.
The blood is brought back by vessels from the radial, ulnar, and brachial veins.
There are two sets of lymphatic nodes at the elbow, normally located above the medial epicondyle — the deep and superficial cubital nodes (also called epitrochlear nodes). The lymphatic drainage at the elbow is through the deep nodes at the bifurcation of the brachial artery; the superficial nodes drain the forearm and the ulnar side of the hand. The efferent lymph vessels from the elbow proceed to the lateral group of axillary lymph nodes.
Nerve supply
The elbow is innervated anteriorly by branches from the musculocutaneous, median, and radial nerve, and posteriorly from the ulnar nerve and the branch of the radial nerve to anconeus.
Development
The elbow undergoes dynamic development of ossification centers through infancy and adolescence, and the order of both the appearance and the fusion of the apophyseal growth centers is crucial when assessing the pediatric elbow on a radiograph, in order to distinguish a traumatic fracture or apophyseal separation from normal development. The order of appearance can be remembered by the mnemonic CRITOE, referring to the capitellum, radial head, internal (medial) epicondyle, trochlea, olecranon, and external (lateral) epicondyle, which appear at ages 1, 3, 5, 7, 9 and 11 years, respectively. These apophyseal centers then fuse during adolescence, with the internal epicondyle and olecranon fusing last. The ages of fusion are more variable than the ages of appearance, but fusion normally occurs at 13, 15, 17, 13, 16 and 13 years, respectively. In addition, the presence of a joint effusion can be inferred from the fat pad sign, a structure that is normally physiologically present but pathologic when elevated by fluid, and always pathologic when posterior.
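The CRITOE sequence amounts to a simple lookup table; the following is a minimal illustrative sketch in Python (the ages are the approximate typical figures quoted above, not clinical thresholds, and the helper name is hypothetical):

```python
# Approximate ages (in years) at which the elbow ossification centers appear,
# in CRITOE order; values are the typical figures quoted above and vary in practice.
CRITOE_APPEARANCE_AGES = {
    "capitellum": 1,
    "radial head": 3,
    "internal (medial) epicondyle": 5,
    "trochlea": 7,
    "olecranon": 9,
    "external (lateral) epicondyle": 11,
}

def expected_centers(age_years):
    """Return the ossification centers expected to be visible at a given age."""
    return [name for name, age in CRITOE_APPEARANCE_AGES.items() if age_years >= age]

# Example: an 8-year-old would typically show the first four centers.
print(expected_centers(8))
```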
Function
The function of the elbow joint is to extend and flex the arm. The range of movement in the elbow is from 0 degrees of elbow extension to 150 degrees of elbow flexion. The muscles contributing to this function are the flexion muscles (biceps brachii, brachialis, and brachioradialis) and the extension muscles (triceps brachii and anconeus).
In humans, the main task of the elbow is to properly place the hand in space by shortening and lengthening the upper limb. While the superior radioulnar joint shares joint capsule with the elbow joint, it plays no functional role at the elbow.
With the elbow extended, the long axis of the humerus and that of the ulna coincide. At the same time, the articular surfaces on both bones are located in front of those axes and deviate from them at an angle of 45°. Additionally, the forearm muscles that originate at the elbow are grouped at the sides of the joint in order not to interfere with its movement. The wide angle of flexion at the elbow made possible by this arrangement — almost 180° — allows the bones to be brought almost in parallel to each other.
Carrying angle
When the arm is extended, with the palm facing forward or up, the bones of the upper arm (humerus) and forearm (radius and ulna) are not perfectly aligned. The deviation from a straight line occurs in the direction of the thumb, and is referred to as the "carrying angle".
The carrying angle permits the arm to be swung without contacting the hips. Women on average have smaller shoulders and wider hips than men, which tends to produce a larger carrying angle (i.e., larger deviation from a straight line than that in men). There is, however, extensive overlap in the carrying angle between individual men and women, and a sex-bias has not been consistently observed in scientific studies.
The angle is greater in the dominant limb than the non-dominant limb of both sexes, suggesting that natural forces acting on the elbow modify the carrying angle. Developmental, aging and possibly racial influences add further to the variability of this parameter.
Pathology
The types of disease most commonly seen at the elbow are due to injury.
Tendonitis
Two of the most common injuries at the elbow are overuse injuries: tennis elbow and golfer's elbow. Golfer's elbow involves the tendon of the common flexor origin which originates at the medial epicondyle of the humerus (the "inside" of the elbow). Tennis elbow is the equivalent injury, but at the common extensor origin (the lateral epicondyle of the humerus).
Fractures
There are three bones at the elbow joint, and any combination of these bones may be involved in a fracture of the elbow. Patients who are able to fully extend their arm at the elbow are unlikely to have a fracture (98% certainty) and an X-ray is not required as long as an olecranon fracture is ruled out. Acute fractures may not be easily visible on X-ray.
Dislocation
Elbow dislocations constitute 10% to 25% of all injuries to the elbow. The elbow is one of the most commonly dislocated joints in the body, with an average annual incidence of acute dislocation of 6 per 100,000 persons. Among injuries to the upper extremity, dislocation of the elbow is second only to a dislocated shoulder.
A full dislocation of the elbow will require expert medical attention to re-align, and recovery can take approximately 6 weeks.
Infection
Infection of the elbow joint (septic arthritis) is uncommon. It may occur spontaneously, but may also occur in relation to surgery or infection elsewhere in the body (for example, endocarditis).
Arthritis
Elbow arthritis is usually seen in individuals with rheumatoid arthritis or after fractures that involve the joint itself. When the damage to the joint is severe, fascial arthroplasty or elbow joint replacement may be considered.
Bursitis
Olecranon bursitis presents with tenderness, warmth and swelling, and with pain on both flexion and extension; in chronic cases, extreme flexion is especially painful.
Elbow pain
Elbow pain occurs when the tissues in the elbow become inflamed. Frequent exercise of the inflamed elbow can assist with healing.
Clinical significance
Elbow pain can occur for a multitude of reasons, including injury, disease, and other conditions. Common conditions include tennis elbow, golfer's elbow, distal radioulnar joint rheumatoid arthritis, and cubital tunnel syndrome.
Tennis elbow
Tennis elbow is a very common type of overuse injury. It can occur both from chronic repetitive motions of the hand and forearm, and from trauma to the same areas. These repetitions can injure the tendons that connect the extensor and supinator muscles (which extend and rotate the forearm) to the lateral epicondyle of the humerus. Pain occurs, often radiating along the lateral forearm. Weakness, numbness, and stiffness are also very common, along with tenderness upon touch.
A non-invasive treatment for pain management is rest. If achieving rest is an issue, a wrist brace can also be worn. This keeps the wrist in flexion, thereby relieving the extensor muscles and allowing rest. Ice, heat, ultrasound, steroid injections, and compression can also help alleviate pain. After the pain has been reduced, exercise therapy is important to prevent injury in the future. Exercises should be low velocity, and weight should increase progressively. Stretching the flexors and extensors is helpful, as are strengthening exercises. Massage can also be useful, focusing on the extensor trigger points.
Golfer's elbow
Golfer's elbow is very similar to tennis elbow, but less common. It is caused by overuse and repetitive motions like a golf swing. It can also be caused by trauma. Wrist flexion and pronation (rotating of the forearm) causes irritation to the tendons near the medial epicondyle of the elbow. It can cause pain, stiffness, loss of sensation, and weakness radiating from the inside of the elbow to the fingers.
Rest is the primary intervention for this injury. Ice, pain medication, steroid injections, strengthening exercises, and avoiding any aggravating activities can also help. Surgery is a last resort, and rarely used. Exercises should focus on strengthening and stretching the forearm, and utilizing proper form when performing movements.
Rheumatoid arthritis
Rheumatoid arthritis is a chronic disease that affects joints. It is very common in the wrist, and is most common at the radioulnar joint. It results in pain, stiffness, and deformities.
There are many different treatments for rheumatoid arthritis, and there is no one consensus for which methods are best. Most common treatments include wrist splints, surgery, physical and occupational therapy, and antirheumatic medication.
Cubital tunnel syndrome
Cubital tunnel syndrome, more commonly known as ulnar neuropathy, occurs when the ulnar nerve is irritated and becomes inflamed. This can often happen where the ulnar nerve is most superficial, at the elbow. The ulnar nerve passes over the elbow, at the area known as the "funny bone". Irritation can occur due to constant, repeated stress and pressure at this area, or from a trauma. It can also occur due to bone deformities, and oftentimes from sports. Symptoms include tingling, numbness, and weakness, along with pain.
First line pain management techniques include the use of nonsteroidal anti-inflammatory oral medicines. These help to reduce inflammation, pressure, and irritation of the nerve and the tissue around it. Other simple fixes include learning more ergonomically friendly habits that can help prevent nerve impingement and irritation in the future. Protective equipment can also be very helpful; examples include a protective elbow pad and an arm splint. More serious cases often involve surgery, in which the nerve or the surrounding tissue is moved to relieve the pressure. Recovery from surgery can take a while, but the prognosis is often good. Recovery often includes movement restrictions and range-of-motion activities, and can last a few months.
Society and culture
The now obsolete length unit ell relates closely to the elbow. This becomes especially visible when considering the Germanic origins of both words, Elle (ell, defined as the length of a male forearm from elbow to fingertips) and Ellbogen (elbow). It is unknown when or why the second "l" was dropped from English usage of the word. The ell as in the English measure could also be taken to come from the letter L, being bent at right angles, as an elbow. The ell as a measure was taken as six handbreadths; three to the elbow and three from the elbow to the shoulder. Another measure was the cubit (from cubital). This was taken to be the length of a man's arm from the elbow to the end of the middle finger.
The words wenis and wagina are humorously used to describe the posterior and anterior regions of the elbow, respectively. The terms entered the slang lexicon in the 1990s and proliferated as an Internet meme. Specifically, wenis refers to the loose flap of skin under the elbow (olecranal skin), while wagina refers to the skin crease of the cubital fossa.
Other primates
Though the elbow is similarly adapted for stability through a wide range of pronation-supination and flexion-extension in all apes, there are some minor differences. In arboreal apes such as orangutans, the large forearm muscles originating on the epicondyles of the humerus generate significant transverse forces on the elbow joint. The structure to resist these forces is a pronounced keel on the trochlear notch on the ulna, which is more flattened in, for example, humans and gorillas. In knuckle-walkers, on the other hand, the elbow has to deal with large vertical loads passing through extended forearms and the joint is therefore more expanded to provide larger articular surfaces perpendicular to those forces.
Derived traits in catarrhine (apes and Old World monkeys) elbows include the loss of the entepicondylar foramen (a hole in the distal humerus), a non-translatory (rotation-only) humeroulnar joint, and a more robust ulna with a shortened trochlear notch.
The proximal radioulnar joint is similarly derived in higher primates in the location and shape of the radial notch on the ulna; the primitive form being represented by New World monkeys, such as the howler monkey, and by fossil catarrhines, such as Aegyptopithecus. In these taxa, the oval head of the radius lies in front of the ulnar shaft so that the former overlaps the latter by half its width. With this forearm configuration, the ulna supports the radius and maximum stability is achieved when the forearm is fully pronated.
| Biology and health sciences | Skeletal system | Biology |
19595664 | https://en.wikipedia.org/wiki/Time%20in%20physics | Time in physics | In physics, time is defined by its measurement: time is what a clock reads. In classical, non-relativistic physics, it is a scalar quantity (often denoted by the symbol t) and, like length, mass, and charge, is usually described as a fundamental quantity. Time can be combined mathematically with other physical quantities to derive other concepts such as motion, kinetic energy and time-dependent fields. Timekeeping is a complex of technological and scientific issues, and part of the foundation of recordkeeping.
Markers of time
Before there were clocks, time was measured by those physical processes which were understandable to each epoch of civilization:
the first appearance (see: heliacal rising) of Sirius to mark the flooding of the Nile each year
the periodic succession of night and day, seemingly eternally
the position on the horizon of the first appearance of the sun at dawn
the position of the sun in the sky
the marking of the moment of noontime during the day
the length of the shadow cast by a gnomon
Eventually, it became possible to characterize the passage of time with instrumentation, using operational definitions. Simultaneously, our conception of time has evolved, as shown below.
Unit of measurement of time
In the International System of Units (SI), the unit of time is the second (symbol: s). It has been defined since 1967 as "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom", and is an SI base unit. This definition is based on the operation of a caesium atomic clock. These clocks became practical for use as primary reference standards after about 1955, and have been in use ever since.
State of the art in timekeeping
The UTC timestamp in use worldwide is an atomic time standard. The relative accuracy of such a time standard is currently on the order of 10−15 (corresponding to 1 second in approximately 30 million years). The smallest time step considered theoretically observable is called the Planck time, which is approximately 5.391×10−44 seconds – many orders of magnitude below the resolution of current time standards.
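A back-of-the-envelope check of these figures (a sketch only; the numbers are the ones quoted above):

```python
# Rough check of the figures quoted above.
seconds_per_year = 365.25 * 24 * 3600      # about 3.16e7 s
relative_accuracy = 1e-15                  # order of magnitude of current time standards
planck_time = 5.391e-44                    # seconds

# A fractional error of 1e-15 accumulates to 1 second after 1e15 seconds:
years_to_drift_one_second = (1 / relative_accuracy) / seconds_per_year
print(years_to_drift_one_second / 1e6)     # roughly 32 million years

# The Planck time is many orders of magnitude below that resolution:
print(planck_time / 1e-15)                 # roughly 5.4e-29
```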
The caesium atomic clock became practical after 1950, when advances in electronics enabled reliable measurement of the microwave frequencies it generates. As further advances occurred, atomic clock research has progressed to ever-higher frequencies, which can provide higher accuracy and higher precision. Clocks based on these techniques have been developed, but are not yet in use as primary reference standards.
Conceptions of time
Galileo, Newton, and most people up until the 20th century thought that time was the same for everyone everywhere. This is the basis for timelines, where time is a parameter. The modern understanding of time is based on Einstein's theory of relativity, in which rates of time run differently depending on relative motion, and space and time are merged into spacetime, where we live on a world line rather than a timeline. In this view time is a coordinate. According to the prevailing cosmological model of the Big Bang theory, time itself began as part of the entire Universe about 13.8 billion years ago.
Regularities in nature
In order to measure time, one can record the number of occurrences (events) of some periodic phenomenon. The regular recurrences of the seasons, the motions of the sun, moon and stars were noted and tabulated for millennia, before the laws of physics were formulated. The sun was the arbiter of the flow of time, but time was known only to the hour for millennia; hence, the use of the gnomon was known across most of the world, especially Eurasia, and at least as far southward as the jungles of Southeast Asia.
In particular, the astronomical observatories maintained for religious purposes became accurate enough to ascertain the regular motions of the stars, and even some of the planets.
At first, timekeeping was done by hand by priests, and then for commerce, with watchmen to note time as part of their duties.
The tabulation of the equinoxes, the sandglass, and the water clock became more and more accurate, and finally reliable. For ships at sea, marine sandglasses were used. These devices allowed sailors to call the hours, and to calculate sailing velocity.
Mechanical clocks
Richard of Wallingford (1292–1336), abbot of St. Albans Abbey, famously built a mechanical clock as an astronomical orrery about 1330.
By the time of Richard of Wallingford, the use of ratchets and gears allowed the towns of Europe to create mechanisms to display the time on their respective town clocks; by the time of the scientific revolution, the clocks became miniaturized enough for families to share a personal clock, or perhaps a pocket watch. At first, only kings could afford them. Pendulum clocks were widely used in the 18th and 19th century. They have largely been replaced in general use by quartz and digital clocks. Atomic clocks can theoretically keep accurate time for millions of years. They are appropriate for standards and scientific use.
Galileo: the flow of time
In 1583, Galileo Galilei (1564–1642) discovered that a pendulum's harmonic motion has a constant period, which he learned by using his pulse to time the motion of a swaying lamp during Mass at the cathedral of Pisa.
In his Two New Sciences (1638), Galileo used a water clock to measure the time taken for a bronze ball to roll a known distance down an inclined plane; this clock was:
...a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length; the water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave us the differences and ratios of the times, and this with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results.
Galileo's experimental setup to measure the literal flow of time, in order to describe the motion of a ball, preceded Isaac Newton's statement in his Principia, "I do not define time, space, place and motion, as being well known to all."
The Galilean transformations assume that time is the same for all reference frames.
Newtonian physics: linear time
In or around 1665, when Isaac Newton (1643–1727) derived the motion of objects falling under gravity, the first clear formulation for mathematical physics of a treatment of time began: linear time, conceived as a universal clock.
Absolute, true, and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration: relative, apparent, and common time, is some sensible and external (whether accurate or unequable) measure of duration by the means of motion, which is commonly used instead of true time; such as an hour, a day, a month, a year.
The water clock mechanism described by Galileo was engineered to provide laminar flow of the water during the experiments, thus providing a constant flow of water for the durations of the experiments, and embodying what Newton called duration.
In this section, the relationships listed below treat time as a parameter which serves as an index to the behavior of the physical system under consideration. Because Newton's fluents treat a linear flow of time (what he called mathematical time), time could be considered to be a linearly varying parameter, an abstraction of the march of the hours on the face of a clock. Calendars and ship's logs could then be mapped to the march of the hours, days, months, years and centuries.
Thermodynamics and the paradox of irreversibility
By 1798, Benjamin Thompson (1753–1814) had discovered that work could be transformed to heat without limit – a precursor of the conservation of energy or
1st law of thermodynamics
In 1824 Sadi Carnot (1796–1832) scientifically analyzed the steam engine with his Carnot cycle, an abstract engine. Rudolf Clausius (1822–1888) noted a measure of disorder, or entropy, which affects the continually decreasing amount of free energy which is available to a Carnot engine in the:
2nd law of thermodynamics
Thus the continual march of a thermodynamic system, from lesser to greater entropy, at any given temperature, defines an arrow of time. In particular, Stephen Hawking identifies three arrows of time:
Psychological arrow of time – our perception of an inexorable flow.
Thermodynamic arrow of time – distinguished by the growth of entropy.
Cosmological arrow of time – distinguished by the expansion of the universe.
With time, entropy increases in an isolated thermodynamic system. In contrast, Erwin Schrödinger (1887–1961) pointed out that life depends on a "negative entropy flow". Ilya Prigogine (1917–2003) stated that other thermodynamic systems which, like life, are also far from equilibrium, can also exhibit stable spatio-temporal structures that are reminiscent of life. Soon afterward, the Belousov–Zhabotinsky reactions were reported, which demonstrate oscillating colors in a chemical solution. These nonequilibrium thermodynamic branches reach a bifurcation point, which is unstable, and another thermodynamic branch becomes stable in its stead.
Electromagnetism and the speed of light
In 1864, James Clerk Maxwell (1831–1879) presented a combined theory of electricity and magnetism. He combined all the laws then known relating to those two phenomena into four equations. These equations are known as Maxwell's equations for electromagnetism; they allow for solutions in the form of electromagnetic waves that propagate at a fixed speed, c, regardless of the velocity of the electric charge that generated them.
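The fixed wave speed follows from the two vacuum constants that appear in the equations; a minimal numerical sketch (the constant values are approximate):

```python
import math

# Vacuum permeability and permittivity (SI, approximate values).
mu_0 = 4 * math.pi * 1e-7      # H/m
eps_0 = 8.8541878128e-12       # F/m

# Maxwell's equations give electromagnetic waves travelling at c = 1/sqrt(mu_0 * eps_0).
c = 1 / math.sqrt(mu_0 * eps_0)
print(c)                        # about 2.998e8 m/s
```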
The fact that light is predicted to always travel at speed c would be incompatible with Galilean relativity if Maxwell's equations were assumed to hold in any inertial frame (reference frame with constant velocity), because the Galilean transformations predict the speed to decrease (or increase) in the reference frame of an observer traveling parallel (or antiparallel) to the light.
It was expected that there was one absolute reference frame, that of the luminiferous aether, in which Maxwell's equations held unmodified in the known form.
The Michelson–Morley experiment failed to detect any difference in the relative speed of light due to the motion of the Earth relative to the luminiferous aether, suggesting that Maxwell's equations did, in fact, hold in all frames. Hendrik Lorentz (1853–1928) subsequently developed the Lorentz transformations, which left Maxwell's equations unchanged, allowing Michelson and Morley's negative result to be explained. Henri Poincaré (1854–1912) noted the importance of Lorentz's transformation and popularized it. In particular, the railroad car description can be found in Science and Hypothesis, which was published before Einstein's articles of 1905.
The Lorentz transformation predicted space contraction and time dilation; until 1905, the former was interpreted as a physical contraction of objects moving with respect to the aether, due to the modification of the intermolecular forces (of electric nature), while the latter was thought to be just a mathematical stipulation.
Relativistic physics: spacetime
Albert Einstein's 1905 special relativity challenged the notion of absolute time, and could only formulate a definition of synchronization for clocks that mark a linear flow of time.
Einstein showed that if the speed of light is not changing between reference frames, space and time must change together so that the moving observer will measure the same speed of light as the stationary one, because velocity is defined by space and time:

$$\mathbf{v} = \frac{d\mathbf{r}}{dt}$$

where r is position and t is time.
Indeed, the Lorentz transformation (for two reference frames in relative motion, whose x axis is directed in the direction of the relative velocity)

$$t' = \frac{t - vx/c^2}{\sqrt{1 - v^2/c^2}}, \qquad x' = \frac{x - vt}{\sqrt{1 - v^2/c^2}}, \qquad y' = y, \qquad z' = z$$

can be said to "mix" space and time in a way similar to the way a Euclidean rotation around the z axis mixes x and y coordinates. Consequences of this include relativity of simultaneity. More specifically, the Lorentz transformation is a hyperbolic rotation

$$ct' = ct\cosh\phi - x\sinh\phi, \qquad x' = -ct\sinh\phi + x\cosh\phi, \qquad \tanh\phi = \frac{v}{c},$$

which is a change of coordinates in the four-dimensional Minkowski space, a dimension of which is ct. (In Euclidean space an ordinary rotation

$$x' = x\cos\theta + y\sin\theta, \qquad y' = -x\sin\theta + y\cos\theta$$

is the corresponding change of coordinates.) The speed of light c can be seen as just a conversion factor needed because we measure the dimensions of spacetime in different units; since the metre is currently defined in terms of the second, it has the exact value of 299,792,458 m/s. We would need a similar factor in Euclidean space if, for example, we measured width in nautical miles and depth in feet. In physics, sometimes units of measurement in which c = 1 are used to simplify equations.
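The analogy with rotation can be checked numerically: a boost leaves the combination (ct)^2 - x^2 unchanged, just as a Euclidean rotation leaves x^2 + y^2 unchanged. A minimal sketch (the event coordinates and the boost speed are arbitrary example values):

```python
import math

c = 299_792_458.0  # m/s

def boost(t, x, v):
    """Lorentz boost along the x axis for relative velocity v."""
    gamma = 1 / math.sqrt(1 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

# An arbitrary event, boosted at 60% of the speed of light.
t, x = 2.0, 1.0e8
t2, x2 = boost(t, x, 0.6 * c)

# The interval (ct)^2 - x^2 is the same in both frames (up to rounding),
# just as x^2 + y^2 is unchanged by an ordinary rotation.
print((c * t) ** 2 - x ** 2)
print((c * t2) ** 2 - x2 ** 2)
```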
Time in a "moving" reference frame is shown to run more slowly than in a "stationary" one by the following relation (which can be derived by the Lorentz transformation by putting ∆x′ = 0, ∆τ = ∆t′):

$$\Delta t = \frac{\Delta\tau}{\sqrt{1 - v^2/c^2}}$$

where:
$\Delta\tau$ is the time between two events as measured in the moving reference frame in which they occur at the same place (e.g. two ticks on a moving clock); it is called the proper time between the two events;
$\Delta t$ is the time between these same two events, but as measured in the stationary reference frame;
v is the speed of the moving reference frame relative to the stationary one;
c is the speed of light.
Moving objects therefore are said to show a slower passage of time. This is known as time dilation.
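As a numerical illustration of the relation above (the 0.99c speed is just an example value):

```python
import math

c = 299_792_458.0  # m/s

def dilation_factor(v):
    """Return dt / dtau = 1 / sqrt(1 - v^2/c^2) for a clock moving at speed v."""
    return 1 / math.sqrt(1 - (v / c) ** 2)

# A clock moving at 99% of the speed of light ticks about 7 times more slowly
# as seen from the stationary frame.
print(dilation_factor(0.99 * c))   # about 7.09
```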
These transformations are only valid for two frames at constant relative velocity. Naively applying them to other situations gives rise to such paradoxes as the twin paradox.
That paradox can be resolved using for instance Einstein's General theory of relativity, which uses Riemannian geometry, geometry in accelerated, noninertial reference frames. Employing the metric tensor which describes Minkowski space:

$$\eta_{\mu\nu} = \operatorname{diag}(-1,\, 1,\, 1,\, 1), \qquad \text{in coordinates } (ct, x, y, z),$$
Einstein developed a geometric solution to Lorentz's transformation that preserves Maxwell's equations. His field equations give an exact relationship between the measurements of space and time in a given region of spacetime and the energy density of that region.
Einstein's equations predict that time should be altered by the presence of gravitational fields (see the Schwarzschild metric):

$$\Delta\tau = \Delta t \, \sqrt{1 - \frac{2GM}{r c^2}}$$

where:
$\sqrt{1 - \tfrac{2GM}{r c^2}}$ is the gravitational time dilation of an object at a distance of $r$ from the centre of the mass;
$\Delta t$ is the change in coordinate time, or the interval of coordinate time;
$G$ is the gravitational constant;
$M$ is the mass generating the field;
$\Delta\tau$ is the change in proper time $\tau$, or the interval of proper time.
Or one could use the following simpler first-order approximation:

$$\Delta\tau \approx \Delta t \left(1 - \frac{GM}{r c^2}\right)$$
That is, the stronger the gravitational field (and, thus, the larger the acceleration), the more slowly time runs. The predictions of time dilation are confirmed by particle acceleration experiments and cosmic ray evidence, where moving particles decay more slowly than their less energetic counterparts. Gravitational time dilation gives rise to the phenomenon of gravitational redshift and Shapiro signal travel time delays near massive objects such as the sun. The Global Positioning System must also adjust signals to account for this effect.
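The GPS adjustment mentioned above can be estimated from the two competing effects: satellite clocks run fast because they sit higher in the Earth's gravitational potential, and slow because of their orbital speed. A rough sketch, assuming a circular orbit of radius roughly 26,600 km and ignoring the Earth's rotation and orbital eccentricity:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24        # mass of the Earth, kg
c = 299_792_458.0   # speed of light, m/s
r_earth = 6.371e6   # mean radius of the Earth, m
r_orbit = 2.66e7    # approximate GPS orbital radius, m
day = 86_400.0      # seconds

# Gravitational effect: the orbiting clock runs fast relative to a ground clock.
grav = G * M * (1 / r_earth - 1 / r_orbit) / c**2 * day

# Velocity effect: the orbiting clock runs slow by roughly v^2 / (2 c^2).
v = math.sqrt(G * M / r_orbit)
vel = -(v**2) / (2 * c**2) * day

print(grav * 1e6)          # about +46 microseconds per day
print(vel * 1e6)           # about -7 microseconds per day
print((grav + vel) * 1e6)  # net: about +38 microseconds per day
```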
According to Einstein's general theory of relativity, a freely moving particle traces a history in spacetime that maximises its proper time. This phenomenon is also referred to as the principle of maximal aging, and was described by Taylor and Wheeler as:
"Principle of Extremal Aging: The path a free object takes between two events in spacetime is the path for which the time lapse between these events, recorded on the object's wristwatch, is an extremum."
Einstein's theory was motivated by the assumption that every point in the universe can be treated as a 'center', and that correspondingly, physics must act the same in all reference frames. His simple and elegant theory shows that time is relative to an inertial frame. In an inertial frame, Newton's first law holds; it has its own local geometry, and therefore its own measurements of space and time; there is no 'universal clock. An act of synchronization must be performed between two systems, at the least.
Time in quantum mechanics
There is a time parameter in the equations of quantum mechanics. The Schrödinger equation is

$$i\hbar \frac{\partial}{\partial t} \psi(t) = H\,\psi(t).$$

One solution can be

$$\psi(t) = e^{-iHt/\hbar}\,\psi(0),$$

where

$$U(t) = e^{-iHt/\hbar}$$

is called the time evolution operator, and H is the Hamiltonian.
But the Schrödinger picture shown above is equivalent to the Heisenberg picture, which enjoys a similarity to the Poisson brackets of classical mechanics. The Poisson brackets are superseded by a nonzero commutator, say [H, A] for observable A, and Hamiltonian H:

$$\frac{dA}{dt} = \frac{i}{\hbar}[H, A] + \frac{\partial A}{\partial t}.$$
This equation denotes an uncertainty relation in quantum physics. For example, with time (the observable A), the energy E (from the Hamiltonian H) gives:

$$\Delta E \, \Delta t \geq \frac{\hbar}{2}$$

where
$\Delta E$ is the uncertainty in energy,
$\Delta t$ is the uncertainty in time, and
$\hbar$ is the reduced Planck constant.
The more precisely one measures the duration of a sequence of events, the less precisely one can measure the energy associated with that sequence, and vice versa. This equation is different from the standard uncertainty principle, because time is not an operator in quantum mechanics.
Corresponding commutator relations also hold for momentum p and position q, which are conjugate variables of each other, along with a corresponding uncertainty principle in momentum and position, similar to the energy and time relation above.
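A small numerical sketch of the time evolution operator introduced above, using an arbitrary two-level Hamiltonian purely for illustration; it checks that U(t) = exp(-iHt/ħ) is unitary, so the norm of the state is preserved as time flows:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                        # natural units for simplicity
H = np.array([[1.0, 0.5],         # arbitrary 2x2 Hermitian Hamiltonian (illustration only)
              [0.5, -1.0]])

def U(t):
    """Time evolution operator U(t) = exp(-i H t / hbar)."""
    return expm(-1j * H * t / hbar)

psi0 = np.array([1.0, 0.0])       # initial state
psi_t = U(2.0) @ psi0             # state at t = 2

print(np.allclose(U(2.0).conj().T @ U(2.0), np.eye(2)))  # True: U is unitary
print(np.linalg.norm(psi_t))                             # 1.0: probability is conserved
```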
Quantum mechanics explains the properties of the periodic table of the elements. Starting with Otto Stern's and Walter Gerlach's experiment with molecular beams in a magnetic field, Isidor Rabi (1898–1988), was able to modulate the magnetic resonance of the beam. In 1945 Rabi then suggested that this technique be the basis of a clock using the resonant frequency of an atomic beam.
In 2021, Jun Ye of JILA in Boulder, Colorado, observed gravitational time dilation as a difference in the tick rate of an optical lattice clock between the top and the bottom of a cloud of strontium atoms, a column one millimeter tall, under the influence of gravity.
Dynamical systems
One could say that time is a parameterization of a dynamical system that allows the geometry of the system to be manifested and operated on. It has been asserted that time is an implicit consequence of chaos (i.e. nonlinearity/irreversibility): the characteristic time, or rate of information entropy production, of a system. Mandelbrot introduces intrinsic time in his book Multifractals and 1/f noise.
Time crystals
Khemani, Moessner, and Sondhi define a time crystal as a "stable, conservative, macroscopic clock".
Signalling
Signalling is one application of the electromagnetic waves described above. In general, a signal is part of communication between parties and places. One example might be a yellow ribbon tied to a tree, or the ringing of a church bell. A signal can be part of a conversation, which involves a protocol. Another signal might be the position of the hour hand on a town clock or a railway station. An interested party might wish to view that clock, to learn the time. See: Time ball, an early form of Time signal.
We as observers can still signal different parties and places as long as we live within their past light cone. But we cannot receive signals from those parties and places outside our past light cone.
Along with the formulation of the equations for the electromagnetic wave, the field of telecommunication could be founded.
In 19th century telegraphy, electrical circuits, some spanning continents and oceans, could transmit codes - simple dots, dashes and spaces. From this, a series of technical issues have emerged; see :Category:Synchronization. But it is safe to say that our signalling systems can be only approximately synchronized, a plesiochronous condition, from which jitter must be eliminated.
That said, systems can be synchronized (at an engineering approximation), using technologies like GPS. The GPS satellites must account for the effects of gravitation and other relativistic factors in their circuitry. See: Self-clocking signal.
Technology for timekeeping standards
The primary time standard in the U.S. is currently NIST-F1, a laser-cooled Cs fountain, the latest in a series of time and frequency standards, from the ammonia-based atomic clock (1949) to the caesium-based NBS-1 (1952) to NIST-7 (1993). The respective clock uncertainty declined from 10,000 nanoseconds per day to 0.5 nanoseconds per day in 5 decades. In 2001 the clock uncertainty for NIST-F1 was 0.1 nanoseconds/day. Development of increasingly accurate frequency standards is underway.
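The per-day figures can be converted into the dimensionless fractional uncertainties used earlier; a quick sketch of the arithmetic:

```python
# Convert "nanoseconds of clock error per day" to a fractional (dimensionless) uncertainty.
def fractional_uncertainty(ns_per_day):
    return ns_per_day * 1e-9 / 86_400.0

for ns in (10_000, 0.5, 0.1):
    print(ns, fractional_uncertainty(ns))
# 10,000 ns/day is about 1.2e-10; 0.5 ns/day about 5.8e-15; 0.1 ns/day about 1.2e-15
```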
In this time and frequency standard, a population of caesium atoms is laser-cooled to temperatures of one microkelvin. The atoms collect in a ball shaped by six lasers, two for each spatial dimension, vertical (up/down), horizontal (left/right), and back/forth. The vertical lasers push the caesium ball through a microwave cavity. As the ball is cooled, the caesium population cools to its ground state and emits light at its natural frequency, stated in the definition of second above. Eleven physical effects are accounted for in the emissions from the caesium population, which are then controlled for in the NIST-F1 clock. These results are reported to BIPM.
Additionally, a reference hydrogen maser is also reported to BIPM as a frequency standard for TAI (international atomic time).
The measurement of time is overseen by BIPM (Bureau International des Poids et Mesures), located in Sèvres, France, which ensures uniformity of measurements and their traceability to the International System of Units (SI) worldwide. BIPM operates under authority of the Metre Convention, a diplomatic treaty between fifty-one nations, the Member States of the Convention, through a series of Consultative Committees, whose members are the respective national metrology laboratories.
Time in cosmology
The equations of general relativity predict a non-static universe. However, Einstein accepted only a static universe, and modified the Einstein field equation to reflect this by adding the cosmological constant, which he later described as his "biggest blunder". But in 1927, Georges Lemaître (1894–1966) argued, on the basis of general relativity, that the universe originated in a primordial explosion. At the fifth Solvay conference, that year, Einstein brushed him off with "Your math is correct, but your physics is abominable." In 1929, Edwin Hubble (1889–1953) announced his discovery of the expanding universe. The current generally accepted cosmological model, the Lambda-CDM model, has a positive cosmological constant and thus not only an expanding universe but an accelerating expanding universe.
If the universe were expanding, then it must have been much smaller and therefore hotter and denser in the past. George Gamow (1904–1968) hypothesized that the abundance of the elements in the Periodic Table of the Elements, might be accounted for by nuclear reactions in a hot dense universe. He was disputed by Fred Hoyle (1915–2001), who invented the term 'Big Bang' to disparage it. Fermi and others noted that this process would have stopped after only the light elements were created, and thus did not account for the abundance of heavier elements.
Gamow's prediction was a 5–10-kelvin black-body radiation temperature for the universe, after it cooled during the expansion. This was corroborated by Penzias and Wilson in 1965. Subsequent experiments arrived at a temperature of 2.7 kelvins, consistent with an age of the universe of about 13.8 billion years since the Big Bang.
This dramatic result has raised issues: what happened between the singularity of the Big Bang and the Planck time, which, after all, is the smallest observable time? When might time have separated out from the spacetime foam? There are only hints based on broken symmetries (see Spontaneous symmetry breaking, Timeline of the Big Bang, and the articles in :Category:Physical cosmology).
General relativity gave us our modern notion of the expanding universe that started in the Big Bang. Using relativity and quantum theory we have been able to roughly reconstruct the history of the universe. In our epoch, during which electromagnetic waves can propagate without being disturbed by conductors or charges, we can see the stars, at great distances from us, in the night sky. (Before this epoch, there was a time, before the universe cooled enough for electrons and nuclei to combine into atoms about 377,000 years after the Big Bang, during which starlight would not have been visible over large distances.)
Reprise
Ilya Prigogine's reprise is "Time precedes existence". In contrast to the views of Newton, of Einstein, and of quantum physics, which offer a symmetric view of time (as discussed above), Prigogine points out that statistical and thermodynamic physics can explain irreversible phenomena, as well as the arrow of time and the Big Bang.
| Physical sciences | Physics basics: General | Physics |
19595676 | https://en.wikipedia.org/wiki/Superstring%20theory | Superstring theory | Superstring theory is an attempt to explain all of the particles and fundamental forces of nature in one theory by modeling them as vibrations of tiny supersymmetric strings.
'Superstring theory' is a shorthand for supersymmetric string theory because unlike bosonic string theory, it is the version of string theory that accounts for both fermions and bosons and incorporates supersymmetry to model gravity.
Since the second superstring revolution, the five superstring theories (Type I, Type IIA, Type IIB, HO and HE) are regarded as different limits of a single theory tentatively called M-theory.
Background
One of the deepest open problems in theoretical physics is formulating a theory of quantum gravity. Such a theory incorporates both the theory of general relativity, which describes gravitation and applies to large-scale structures, and quantum mechanics or more specifically quantum field theory, which describes the other three fundamental forces that act on the atomic scale.
Quantum field theory, in particular the Standard model, is currently the most successful theory to describe fundamental forces, but while computing physical quantities of interest, naïvely one obtains infinite values. Physicists developed the technique of renormalization to 'eliminate these infinities' to obtain finite values which can be experimentally tested. This technique works for three of the four fundamental forces: Electromagnetism, the strong force and the weak force, but does not work for gravity, which is non-renormalizable. Development of a quantum theory of gravity therefore requires different means than those used for the other forces.
According to superstring theory, or more generally string theory, the fundamental constituents of reality are strings with radius on the order of the Planck length (about 10−33 cm). An appealing feature of string theory is that fundamental particles can be viewed as excitations of the string. The tension in a string is on the order of the Planck force (1044 newtons). The graviton (the proposed messenger particle of the gravitational force) is predicted by the theory to be a string with wave amplitude zero.
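The Planck-scale numbers quoted above are combinations of fundamental constants; a minimal sketch of the arithmetic:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)   # about 1.6e-35 m, i.e. about 1.6e-33 cm
planck_force = c**4 / G                      # about 1.2e44 N

print(planck_length)
print(planck_force)
```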
History
Investigating how a string theory may include fermions in its spectrum led to the invention of supersymmetry (in the West) in 1971, a mathematical transformation between bosons and fermions. String theories that include fermionic vibrations are now known as "superstring theories".
Since its beginnings in the seventies and through the combined efforts of many different researchers, superstring theory has developed into a broad and varied subject with connections to quantum gravity, particle and condensed matter physics, cosmology, and pure mathematics.
Absence of physical evidence
Superstring theory is based on supersymmetry. No supersymmetric particles have been discovered, and initial investigations, carried out in 2011 at the Large Hadron Collider (LHC) and in 2006 at the Tevatron, have excluded some of the ranges. For instance, the mass constraint on the squarks of the Minimal Supersymmetric Standard Model has been pushed up to 1.1 TeV, and on gluinos up to 500 GeV. No report suggesting large extra dimensions has been delivered from the LHC. There have so far been no principles to limit the number of vacua in the concept of a landscape of vacua.
Some particle physicists became disappointed by the lack of experimental verification of supersymmetry, and some have already discarded it. Jon Butterworth at University College London said that we had seen no sign of supersymmetry, even in higher-energy regions, excluding the superpartners of the top quark up to a few TeV. Ben Allanach at the University of Cambridge states that if we do not discover any new particles in the next run of the LHC, then we can say it is unlikely to discover supersymmetry at CERN in the foreseeable future.
Extra dimensions
Our physical space is observed to have three large spatial dimensions and, along with time, is a boundless 4-dimensional continuum known as spacetime. However, nothing prevents a theory from including more than 4 dimensions. In the case of string theory, consistency requires spacetime to have 10 dimensions (3D regular space + 1 time + 6D hyperspace). The fact that we see only 3 dimensions of space can be explained by one of two mechanisms: either the extra dimensions are compactified on a very small scale, or else our world may live on a 3-dimensional submanifold corresponding to a brane, to which all known particles apart from gravity would be confined.
If the extra dimensions are compactified, then the extra 6 dimensions must be in the form of a Calabi–Yau manifold. Within the more complete framework of M-theory, they would have to take the form of a G2 manifold. A particular exact symmetry of string/M-theory called T-duality (which exchanges momentum modes for winding number and sends compact dimensions of radius R to radius 1/R) has led to the discovery of equivalences between different Calabi–Yau manifolds called mirror symmetry.
Superstring theory is not the first theory to propose extra spatial dimensions. It can be seen as building upon the Kaluza–Klein theory, which proposed a 4+1 dimensional (5D) theory of gravity. When compactified on a circle, the gravity in the extra dimension precisely describes electromagnetism from the perspective of the 3 remaining large space dimensions. Thus the original Kaluza–Klein theory is a prototype for the unification of gauge and gravity interactions, at least at the classical level, however it is known to be insufficient to describe nature for a variety of reasons (missing weak and strong forces, lack of parity violation, etc.) A more complex compact geometry is needed to reproduce the known gauge forces. Also, to obtain a consistent, fundamental, quantum theory requires the upgrade to string theory, not just the extra dimensions.
Number of superstring theories
Theoretical physicists were troubled by the existence of five separate superstring theories. A possible solution for this dilemma was suggested at the beginning of what is called the second superstring revolution in the 1990s, which suggests that the five string theories might be different limits of a single underlying theory, called M-theory. This remains a conjecture.
The five consistent superstring theories are:
The type I string has one supersymmetry in the ten-dimensional sense (16 supercharges). This theory is special in the sense that it is based on unoriented open and closed strings, while the rest are based on oriented closed strings.
The type II string theories have two supersymmetries in the ten-dimensional sense (32 supercharges). There are actually two kinds of type II strings called type IIA and type IIB. They differ mainly in the fact that the IIA theory is non-chiral (parity conserving) while the IIB theory is chiral (parity violating).
The heterotic string theories are based on a peculiar hybrid of a type I superstring and a bosonic string. There are two kinds of heterotic strings differing in their ten-dimensional gauge groups: the heterotic E8×E8 string and the heterotic SO(32) string. (The name heterotic SO(32) is slightly inaccurate since among the SO(32) Lie groups, string theory singles out a quotient Spin(32)/Z2 that is not equivalent to SO(32).)
Chiral gauge theories can be inconsistent due to anomalies. This happens when certain one-loop Feynman diagrams cause a quantum mechanical breakdown of the gauge symmetry. The anomalies were canceled out via the Green–Schwarz mechanism.
Even though there are only five superstring theories, making detailed predictions for real experiments requires information about exactly what physical configuration the theory is in. This considerably complicates efforts to test string theory because there is an astronomically high number—10500 or more—of configurations that meet some of the basic requirements to be consistent with our world. Along with the extreme remoteness of the Planck scale, this is the other major reason it is hard to test superstring theory.
Another approach to the number of superstring theories refers to the mathematical structure called composition algebra. In the findings of abstract algebra there are just seven composition algebras over the field of real numbers. In 1990 physicists R. Foot and G.C. Joshi in Australia stated that "the seven classical superstring theories are in one-to-one correspondence to the seven composition algebras".
Integrating general relativity and quantum mechanics
General relativity typically deals with situations involving large mass objects in fairly large regions of spacetime whereas quantum mechanics is generally reserved for scenarios at the atomic scale (small spacetime regions). The two are very rarely used together, and the most common case that combines them is in the study of black holes. Because black holes pack the maximum amount of matter possible into a very small region of space, the two theories must be used in synchrony to predict conditions in such places. Yet, when used together, the equations fall apart, spitting out impossible answers, such as imaginary distances and less than one dimension.
The major problem with their incongruence is that, at Planck scale (a fundamental small unit of length) lengths, general relativity predicts a smooth, flowing surface, while quantum mechanics predicts a random, warped surface, which are nowhere near compatible. Superstring theory resolves this issue, replacing the classical idea of point particles with strings. These strings have an average diameter of the Planck length, with extremely small variances, which completely ignores the quantum mechanical predictions of Planck-scale length dimensional warping. Also, these surfaces can be mapped as branes. These branes can be viewed as objects with a morphism between them. In this case, the morphism will be the state of a string that stretches between brane A and brane B.
Singularities are avoided because the observed consequences of "Big Crunches" never reach zero size. In fact, should the universe begin a "big crunch" sort of process, string theory dictates that the universe could never be smaller than the size of one string, at which point it would actually begin expanding.
Mathematics
D-branes
D-branes are membrane-like objects in 10D string theory. They can be thought of as occurring as a result of a Kaluza–Klein compactification of 11D M-theory that contains membranes. Because compactification of a geometric theory produces extra vector fields the D-branes can be included in the action by adding an extra U(1) vector field to the string action.
In type I open string theory, the ends of open strings are always attached to D-brane surfaces. A string theory with more gauge fields such as SU(2) gauge fields would then correspond to the compactification of some higher-dimensional theory above 11 dimensions, which is not thought to be possible to date. Furthermore, the tachyons attached to the D-branes show the instability of those D-branes with respect to the annihilation. The tachyon total energy is (or reflects) the total energy of the D-branes.
Why five superstring theories?
For a 10 dimensional supersymmetric theory we are allowed a 32-component Majorana spinor. This can be decomposed into a pair of 16-component Majorana-Weyl (chiral) spinors. There are then various ways to construct an invariant depending on whether these two spinors have the same or opposite chiralities:
The heterotic superstrings come in two types SO(32) and E8×E8 as indicated above and the type I superstrings include open strings.
Beyond superstring theory
It is conceivable that the five superstring theories are approximated to a theory in higher dimensions possibly involving membranes. Because the action for this involves quartic and higher-order terms, and so is not Gaussian, the functional integrals are very difficult to solve, and this has confounded even the top theoretical physicists. Edward Witten has popularised the concept of a theory in 11 dimensions, called M-theory, involving membranes interpolating from the known symmetries of superstring theory. It may turn out that there exist membrane models or other non-membrane models in higher dimensions—which may become acceptable when we find new unknown symmetries of nature, such as noncommutative geometry. It is thought, however, that 16 is probably the maximum since SO(16) is a maximal subgroup of E8, the largest exceptional Lie group, and also is more than large enough to contain the Standard Model. Quartic integrals of the non-functional kind are easier to solve, so there is hope for the future. This is the series solution, which is always convergent when a is non-zero and negative:
In the case of membranes the series would correspond to sums of various membrane interactions that are not seen in string theory.
Compactification
Investigating theories of higher dimensions often involves looking at the 10-dimensional superstring theory and interpreting some of the more obscure results in terms of compactified dimensions. For example, D-branes are seen as compactified membranes from 11D M-theory. Theories of higher dimensions, such as 12D F-theory and beyond, produce other effects, such as gauge terms higher than U(1). The components of the extra vector fields (A) in the D-brane actions can be thought of as extra coordinates (X) in disguise. However, the known symmetries, including supersymmetry, currently restrict the spinors to 32 components, which limits the number of dimensions to 11 (or 12 if two time dimensions are included). Some physicists (e.g., John Baez et al.) have speculated that the exceptional Lie groups E6, E7 and E8, whose maximal orthogonal subgroups are SO(10), SO(12) and SO(16), may be related to theories in 10, 12 and 16 dimensions; 10 dimensions would correspond to string theory, while the 12- and 16-dimensional theories, as yet undiscovered, would be based on 3-branes and 7-branes respectively. However, this is a minority view within the string community. Since E7 is in some sense F4 quaternified and E8 is F4 octonified, the 12- and 16-dimensional theories, if they did exist, may involve noncommutative geometry based on the quaternions and octonions respectively. From the above discussion, it can be seen that physicists have many ideas for extending superstring theory beyond the current 10-dimensional theory, but so far all have been unsuccessful.
Kac–Moody algebras
Since strings can have an infinite number of modes, the symmetry used to describe string theory is based on infinite dimensional Lie algebras. Some Kac–Moody algebras that have been considered as symmetries for M-theory have been E10 and E11 and their supersymmetric extensions.
Isotope
Isotopes are distinct nuclear species (or nuclides) of the same chemical element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but different nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have similar chemical properties, they have different atomic masses and physical properties.
The term isotope is derived from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in a 1913 suggestion to the British chemist Frederick Soddy, who popularized the term.
The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number.
For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively.
Isotope vs. nuclide
A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example, carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over nuclear. The neutron number greatly affects nuclear properties, but its effect on chemical properties is negligible for most elements. Even for the lightest elements, whose ratio of neutron number to atomic number varies the most between isotopes, it usually has only a small effect although it matters in some circumstances (for hydrogen, the lightest element, the isotope effect is large enough to affect biology strongly). The term isotopes (originally also isotopic elements, now sometimes isotopic nuclides) is intended to imply comparison (like synonyms or isomers). For example, the nuclides ¹²C, ¹³C, and ¹⁴C are isotopes (nuclides with the same atomic number but different mass numbers), but ⁴⁰Ar, ⁴⁰K, and ⁴⁰Ca are isobars (nuclides with the same mass number). However, isotope is the older term and so is better known than nuclide and is still sometimes used in contexts in which nuclide might be more appropriate, such as nuclear technology and nuclear medicine.
Notation
An isotope and/or nuclide is specified by the name of the particular element (this indicates the atomic number) followed by a hyphen and the mass number (e.g. helium-3, helium-4, carbon-12, carbon-14, uranium-235 and uranium-239). When a chemical symbol is used, e.g. "C" for carbon, standard notation (now known as "AZE notation" because A is the mass number, Z the atomic number, and E for element) is to indicate the mass number (number of nucleons) with a superscript at the upper left of the chemical symbol and to indicate the atomic number with a subscript at the lower left (e.g. ³₂He, ⁴₂He, ¹²₆C, ¹⁴₆C, ²³⁵₉₂U, and ²³⁹₉₂U). Because the atomic number is given by the element symbol, it is common to state only the mass number in the superscript and leave out the atomic number subscript (e.g. ³He, ⁴He, ¹²C, ¹⁴C, ²³⁵U, and ²³⁹U). The letter m (for metastable) is sometimes appended after the mass number to indicate a nuclear isomer, a metastable or energetically excited nuclear state (as opposed to the lowest-energy ground state), for example ¹⁸⁰ᵐTa (tantalum-180m).
The common pronunciation of the AZE notation is different from how it is written: ⁴₂He is commonly pronounced as helium-four instead of four-two-helium, and ²³⁵₉₂U as uranium two-thirty-five (American English) or uranium-two-three-five (British) instead of 235-92-uranium.
Radioactive, primordial, and stable isotopes
Some isotopes/nuclides are radioactive, and are therefore referred to as radioisotopes or radionuclides, whereas others have never been observed to decay radioactively and are referred to as stable isotopes or stable nuclides. For example, ¹⁴C is a radioactive form of carbon, whereas ¹²C and ¹³C are stable isotopes. There are about 339 naturally occurring nuclides on Earth, of which 286 are primordial nuclides, meaning that they have existed since the Solar System's formation.
Primordial nuclides include 35 nuclides with very long half-lives (over 100 million years) and 251 that are formally considered as "stable nuclides", because they have not been observed to decay. In most cases, for obvious reasons, if an element has stable isotopes, those isotopes predominate in the elemental abundance found on Earth and in the Solar System. However, in the cases of three elements (tellurium, indium, and rhenium) the most abundant isotope found in nature is actually one (or two) extremely long-lived radioisotope(s) of the element, despite these elements having one or more stable isotopes.
Theory predicts that many apparently "stable" nuclides are radioactive, with extremely long half-lives (discounting the possibility of proton decay, which would make all nuclides ultimately unstable). Some stable nuclides are in theory energetically susceptible to other known forms of decay, such as alpha decay or double beta decay, but no decay products have yet been observed, and so these isotopes are said to be "observationally stable". The predicted half-lives for these nuclides often greatly exceed the estimated age of the universe, and in fact, there are also 31 known radionuclides (see primordial nuclide) with half-lives longer than the age of the universe.
Adding in the radioactive nuclides that have been created artificially, there are 3,339 currently known nuclides. These include 905 nuclides that are either stable or have half-lives longer than 60 minutes. See list of nuclides for details.
History
Radioactive isotopes
The existence of isotopes was first suggested in 1913 by the radiochemist Frederick Soddy, based on studies of radioactive decay chains that indicated about 40 different species referred to as radioelements (i.e. radioactive elements) between uranium and lead, although the periodic table only allowed for 11 elements between lead and uranium inclusive.
Several attempts to separate these new radioelements chemically had failed. For example, Soddy had shown in 1910 that mesothorium (later shown to be 228Ra), radium (226Ra, the longest-lived isotope), and thorium X (224Ra) are impossible to separate. Attempts to place the radioelements in the periodic table led Soddy and Kazimierz Fajans independently to propose their radioactive displacement law in 1913, to the effect that alpha decay produced an element two places to the left in the periodic table, whereas beta decay emission produced an element one place to the right. Soddy recognized that emission of an alpha particle followed by two beta particles led to the formation of an element chemically identical to the initial element but with a mass four units lighter and with different radioactive properties.
Soddy proposed that several types of atoms (differing in radioactive properties) could occupy the same place in the table. For example, the alpha-decay of uranium-235 forms thorium-231, whereas the beta decay of actinium-230 forms thorium-230. The term "isotope", Greek for "at the same place", was suggested to Soddy by Margaret Todd, a Scottish physician and family friend, during a conversation in which he explained his ideas to her. He received the 1921 Nobel Prize in Chemistry in part for his work on isotopes.
In 1914 T. W. Richards found variations between the atomic weight of lead from different mineral sources, attributable to variations in isotopic composition due to different radioactive origins.
Stable isotopes
The first evidence for multiple isotopes of a stable (non-radioactive) element was found by J. J. Thomson in 1912 as part of his exploration into the composition of canal rays (positive ions). Thomson channelled streams of neon ions through parallel magnetic and electric fields, measured their deflection by placing a photographic plate in their path, and computed their mass to charge ratio using a method that became known as the Thomson's parabola method. Each stream created a glowing patch on the plate at the point it struck. Thomson observed two separate parabolic patches of light on the photographic plate (see image), which suggested two species of nuclei with different mass-to-charge ratios. He wrote "There can, therefore, I think, be little doubt that what has been called neon is not a simple gas but a mixture of two gases, one of which has an atomic weight about 20 and the other about 22. The parabola due to the heavier gas is always much fainter than that due to the lighter, so that probably the heavier gas forms only a small percentage of the mixture."
F. W. Aston subsequently discovered multiple stable isotopes for numerous elements using a mass spectrograph. In 1919 Aston studied neon with sufficient resolution to show that the two isotopic masses are very close to the integers 20 and 22 and that neither is equal to the known molar mass (20.2) of neon gas. This is an example of Aston's whole number rule for isotopic masses, which states that large deviations of elemental molar masses from integers are primarily due to the fact that the element is a mixture of isotopes. Aston similarly showed in 1920 that the molar mass of chlorine (35.45) is a weighted average of the almost integral masses for the two isotopes 35Cl and 37Cl.
Neutrons
After the discovery of the neutron by James Chadwick in 1932, the ultimate root cause for the existence of isotopes was clarified, that is, the nuclei of different isotopes for a given element have different numbers of neutrons, albeit having the same number of protons.
Variation in properties between isotopes
Chemical and molecular properties
A neutral atom has the same number of electrons as protons. Thus different isotopes of a given element all have the same number of electrons and share a similar electronic structure. Because the chemical behaviour of an atom is largely determined by its electronic structure, different isotopes exhibit nearly identical chemical behaviour.
The main exception to this is the kinetic isotope effect: due to their larger masses, heavier isotopes tend to react somewhat more slowly than lighter isotopes of the same element. This is most pronounced by far for protium (¹H), deuterium (²H), and tritium (³H), because deuterium has twice the mass of protium and tritium has three times the mass of protium. These mass differences also affect the behavior of their respective chemical bonds, by changing the center of gravity (reduced mass) of the atomic systems. However, for heavier elements, the relative mass difference between isotopes is much less so that the mass-difference effects on chemistry are usually negligible. (Heavy elements also have relatively more neutrons than lighter elements, so the ratio of the nuclear mass to the collective electronic mass is slightly greater.) There is also an equilibrium isotope effect.
Similarly, two molecules that differ only in the isotopes of their atoms (isotopologues) have identical electronic structures, and therefore almost indistinguishable physical and chemical properties (again with deuterium and tritium being the primary exceptions). The vibrational modes of a molecule are determined by its shape and by the masses of its constituent atoms; so different isotopologues have different sets of vibrational modes. Because vibrational modes allow a molecule to absorb photons of corresponding energies, isotopologues have different optical properties in the infrared range.
Nuclear properties and stability
Atomic nuclei consist of protons and neutrons bound together by the residual strong force. Because protons are positively charged, they repel each other. Neutrons, which are electrically neutral, stabilize the nucleus in two ways. Their copresence pushes protons slightly apart, reducing the electrostatic repulsion between the protons, and they exert an attractive nuclear force on each other and on protons. For this reason, one or more neutrons are necessary for two or more protons to bind into a nucleus. As the number of protons increases, so does the ratio of neutrons to protons necessary to ensure a stable nucleus. For example, although the neutron:proton ratio of ³He is 1:2, the neutron:proton ratio of ²³⁸U is greater than 3:2. A number of lighter elements have stable nuclides with the ratio 1:1 (Z = N). The nuclide ⁴⁰Ca (calcium-40) is observationally the heaviest stable nuclide with the same number of neutrons and protons. All stable nuclides heavier than calcium-40 contain more neutrons than protons.
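As a quick check of these ratios, the following sketch (illustrative only; nuclide data as quoted above) computes the neutron number N = A − Z and the N:Z ratio:

# Compute neutron number N = A - Z and the N:Z ratio for a few nuclides
# mentioned above (mass number A, atomic number Z).
nuclides = {
    "helium-3":    (3, 2),
    "calcium-40":  (40, 20),
    "uranium-238": (238, 92),
}

for name, (A, Z) in nuclides.items():
    N = A - Z
    print(f"{name}: N = {N}, N/Z = {N / Z:.3f}")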
Numbers of isotopes per element
Of the 80 elements with a stable isotope, the largest number of stable isotopes observed for any element is ten (for the element tin). No element has nine or eight stable isotopes. Five elements have seven stable isotopes, eight have six stable isotopes, ten have five stable isotopes, nine have four stable isotopes, five have three stable isotopes, 16 have two stable isotopes (counting ¹⁸⁰ᵐTa as stable), and 26 elements have only a single stable isotope (of these, 19 are so-called mononuclidic elements, having a single primordial stable isotope that dominates and fixes the atomic weight of the natural element to high precision; 3 radioactive mononuclidic elements occur as well). In total, there are 251 nuclides that have not been observed to decay. For the 80 elements that have one or more stable isotopes, the average number of stable isotopes is 251/80 ≈ 3.14 isotopes per element.
Even and odd nucleon numbers
The proton:neutron ratio is not the only factor affecting nuclear stability. It depends also on evenness or oddness of its atomic number Z, neutron number N and, consequently, of their sum, the mass number A. Oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei, generally, less stable. This remarkable difference of nuclear binding energy between neighbouring nuclei, especially of odd-A isobars, has important consequences: unstable isotopes with a nonoptimal number of neutrons or protons decay by beta decay (including positron emission), electron capture, or other less common decay modes such as spontaneous fission and cluster decay.
Most stable nuclides are even-proton-even-neutron, where all numbers Z, N, and A are even. The odd-A stable nuclides are divided (roughly evenly) into odd-proton-even-neutron, and even-proton-odd-neutron nuclides. Stable odd-proton-odd-neutron nuclides are the least common.
Even atomic number
The 146 even-proton, even-neutron (EE) nuclides comprise ~58% of all stable nuclides and all have spin 0 because of pairing. There are also 24 primordial long-lived even-even nuclides. As a result, each of the 41 even-numbered elements from 2 to 82 has at least one stable isotope, and most of these elements have several primordial isotopes. Half of these even-numbered elements have six or more stable isotopes. The extreme stability of helium-4, due to a double pairing of 2 protons and 2 neutrons, prevents any nuclides containing five (⁵He, ⁵Li) or eight (⁸Be) nucleons from existing long enough to serve as platforms for the buildup of heavier elements via nuclear fusion in stars (see triple alpha process).
Only five stable nuclides contain both an odd number of protons and an odd number of neutrons. The first four "odd-odd" nuclides occur in low mass nuclides, for which changing a proton to a neutron or vice versa would lead to a very lopsided proton-neutron ratio (²H, ⁶Li, ¹⁰B, and ¹⁴N; spins 1, 1, 3, 1). The only other entirely "stable" odd-odd nuclide, ¹⁸⁰ᵐTa (spin 9), is thought to be the rarest of the 251 stable nuclides, and is the only primordial nuclear isomer, which has not yet been observed to decay despite experimental attempts.
Many odd-odd radionuclides (such as the ground state of tantalum-180) with comparatively short half-lives are known. Usually, they beta-decay to their nearby even-even isobars that have paired protons and paired neutrons. Of the nine primordial odd-odd nuclides (five stable and four radioactive with long half-lives), only ¹⁴N is the most common isotope of a common element. This is the case because it is a part of the CNO cycle. The nuclides ⁶Li and ¹⁰B are minority isotopes of elements that are themselves rare compared to other light elements, whereas the other six isotopes make up only a tiny percentage of the natural abundance of their elements.
Odd atomic number
53 stable nuclides have an even number of protons and an odd number of neutrons. They are a minority in comparison to the even-even isotopes, which are about 3 times as numerous. Among the 41 even-Z elements that have a stable nuclide, only two elements (argon and cerium) have no even-odd stable nuclides. One element (tin) has three. There are 24 elements that have one even-odd nuclide and 13 that have two even-odd nuclides. Of 35 primordial radionuclides there exist four even-odd nuclides, including the fissile ²³⁵U. Because of their odd neutron numbers, the even-odd nuclides tend to have large neutron capture cross-sections, due to the energy that results from neutron-pairing effects. These stable even-proton odd-neutron nuclides tend to be uncommon by abundance in nature, generally because, to form and enter into primordial abundance, they must have escaped capturing neutrons to form yet other stable even-even isotopes, during both the s-process and r-process of neutron capture, during nucleosynthesis in stars. For this reason, only ⁹Be and ¹⁹⁵Pt are the most naturally abundant isotopes of their element.
48 stable odd-proton-even-neutron nuclides, stabilized by their paired neutrons, form most of the stable isotopes of the odd-numbered elements; the very few odd-proton-odd-neutron nuclides comprise the others. There are 41 odd-numbered elements with Z = 1 through 81, of which 39 have stable isotopes (technetium (Z = 43) and promethium (Z = 61) have no stable isotopes). Of these 39 odd-Z elements, 30 elements (including hydrogen-1, where 0 neutrons is even) have one stable odd-even isotope, and nine elements:
chlorine (³⁵Cl and ³⁷Cl),
potassium (³⁹K and ⁴¹K),
copper (⁶³Cu and ⁶⁵Cu),
gallium (⁶⁹Ga and ⁷¹Ga),
bromine (⁷⁹Br and ⁸¹Br),
silver (¹⁰⁷Ag and ¹⁰⁹Ag),
antimony (¹²¹Sb and ¹²³Sb),
iridium (¹⁹¹Ir and ¹⁹³Ir), and
thallium (²⁰³Tl and ²⁰⁵Tl), have two odd-even stable isotopes each. This makes a total of 30 + 2 × 9 = 48 stable odd-even isotopes.
There are also five primordial long-lived radioactive odd-even isotopes: ⁸⁷Rb, ¹¹⁵In, ¹⁸⁷Re, ¹⁵¹Eu, and ²⁰⁹Bi. The last two were only recently found to decay, with half-lives greater than 10¹⁸ years.
Odd neutron number
Actinides with odd neutron number are generally fissile (with thermal neutrons), whereas those with even neutron number are generally not, though they are fissionable with fast neutrons. All observationally stable odd-odd nuclides have nonzero integer spin. This is because the single unpaired neutron and unpaired proton have a larger nuclear force attraction to each other if their spins are aligned (producing a total spin of at least 1 unit), instead of anti-aligned. See deuterium for the simplest case of this nuclear behavior.
Only ⁹Be, ¹⁴N, and ¹⁹⁵Pt have odd neutron number and are the most naturally abundant isotope of their element.
Occurrence in nature
Elements are composed either of one nuclide (mononuclidic elements), or of more than one naturally occurring isotopes. The unstable (radioactive) isotopes are either primordial or postprimordial. Primordial isotopes were a product of stellar nucleosynthesis or another type of nucleosynthesis such as cosmic ray spallation, and have persisted down to the present because their rate of decay is very slow (e.g. uranium-238 and potassium-40). Post-primordial isotopes were created by cosmic ray bombardment as cosmogenic nuclides (e.g., tritium, carbon-14), or by the decay of a radioactive primordial isotope to a radioactive radiogenic nuclide daughter (e.g. uranium to radium). A few isotopes are naturally synthesized as nucleogenic nuclides, by some other natural nuclear reaction, such as when neutrons from natural nuclear fission are absorbed by another atom.
As discussed above, only 80 elements have any stable isotopes, and 26 of these have only one stable isotope. Thus, about two-thirds of stable elements occur naturally on Earth in multiple stable isotopes, with the largest number of stable isotopes for an element being ten, for tin (). There are about 94 elements found naturally on Earth (up to plutonium inclusive), though some are detected only in very tiny amounts, such as plutonium-244. Scientists estimate that the elements that occur naturally on Earth (some only as radioisotopes) occur as 339 isotopes (nuclides) in total. Only 251 of these naturally occurring nuclides are stable, in the sense of never having been observed to decay as of the present time. An additional 35 primordial nuclides (to a total of 286 primordial nuclides), are radioactive with known half-lives, but have half-lives longer than 100 million years, allowing them to exist from the beginning of the Solar System. See list of nuclides for details.
All the known stable nuclides occur naturally on Earth; the other naturally occurring nuclides are radioactive but occur on Earth due to their relatively long half-lives, or else due to other means of ongoing natural production. These include the afore-mentioned cosmogenic nuclides, the nucleogenic nuclides, and any radiogenic nuclides formed by ongoing decay of a primordial radioactive nuclide, such as radon and radium from uranium.
An additional ~3000 radioactive nuclides not found in nature have been created in nuclear reactors and in particle accelerators. Many short-lived nuclides not found naturally on Earth have also been observed by spectroscopic analysis, being naturally created in stars or supernovae. An example is aluminium-26, which is not naturally found on Earth but is found in abundance on an astronomical scale.
The tabulated atomic masses of elements are averages that account for the presence of multiple isotopes with different masses. Before the discovery of isotopes, empirically determined noninteger values of atomic mass confounded scientists. For example, a sample of chlorine contains 75.8% chlorine-35 and 24.2% chlorine-37, giving an average atomic mass of 35.5 atomic mass units.
According to generally accepted cosmology theory, only isotopes of hydrogen and helium, traces of some isotopes of lithium and beryllium, and perhaps some boron, were created at the Big Bang, while all other nuclides were synthesized later, in stars and supernovae, and in interactions between energetic particles such as cosmic rays, and previously produced nuclides. (See nucleosynthesis for details of the various processes thought responsible for isotope production.) The respective abundances of isotopes on Earth result from the quantities formed by these processes, their spread through the galaxy, and the rates of decay for isotopes that are unstable. After the initial coalescence of the Solar System, isotopes were redistributed according to mass, and the isotopic composition of elements varies slightly from planet to planet. This sometimes makes it possible to trace the origin of meteorites.
Atomic mass of isotopes
The atomic mass (mr) of an isotope (nuclide) is determined mainly by its mass number (i.e. number of nucleons in its nucleus). Small corrections are due to the binding energy of the nucleus (see mass defect), the slight difference in mass between proton and neutron, and the mass of the electrons associated with the atom, the latter because the electron:nucleon ratio differs among isotopes.
The mass number is a dimensionless quantity. The atomic mass, on the other hand, is measured using the atomic mass unit based on the mass of the carbon-12 atom. It is denoted with symbols "u" (for unified atomic mass unit) or "Da" (for dalton).
The atomic masses of naturally occurring isotopes of an element determine the standard atomic weight of the element. When the element contains N isotopes, the expression below is applied for the average atomic mass:

average atomic mass = m1 x1 + m2 x2 + ... + mN xN
where m1, m2, ..., mN are the atomic masses of each individual isotope, and x1, ..., xN are the relative abundances of these isotopes.
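As a worked example (with isotopic masses in daltons supplied here for illustration), the chlorine abundances quoted later in this article reproduce the familiar average of about 35.45 Da:

# Abundance-weighted average atomic mass of natural chlorine from its two
# stable isotopes (isotopic mass in daltons, fractional abundance).
isotopes = [
    (34.96885, 0.758),  # chlorine-35
    (36.96590, 0.242),  # chlorine-37
]

average_mass = sum(m * x for m, x in isotopes)
print(f"average atomic mass of Cl ~ {average_mass:.2f} Da")  # ~35.45 Da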
Applications of isotopes
Purification of isotopes
Several applications exist that capitalize on the properties of the various isotopes of a given element. Isotope separation is a significant technological challenge, particularly with heavy elements such as uranium or plutonium. Lighter elements such as lithium, carbon, nitrogen, and oxygen are commonly separated by gas diffusion of their compounds such as CO and NO. The separation of hydrogen and deuterium is unusual because it is based on chemical rather than physical properties, for example in the Girdler sulfide process. Uranium isotopes have been separated in bulk by gas diffusion, gas centrifugation, laser ionization separation, and (in the Manhattan Project) by a type of production mass spectrometry.
Use of chemical and biological properties
Isotope analysis is the determination of isotopic signature, the relative abundances of isotopes of a given element in a particular sample. Isotope analysis is frequently done by isotope ratio mass spectrometry. For biogenic substances in particular, significant variations of isotopes of C, N, and O can occur. Analysis of such variations has a wide range of applications, such as the detection of adulteration in food products or the geographic origins of products using isoscapes. The identification of certain meteorites as having originated on Mars is based in part upon the isotopic signature of trace gases contained in them.
Isotopic substitution can be used to determine the mechanism of a chemical reaction via the kinetic isotope effect.
Another common application is isotopic labeling, the use of unusual isotopes as tracers or markers in chemical reactions. Normally, atoms of a given element are indistinguishable from each other. However, by using isotopes of different masses, even different nonradioactive stable isotopes can be distinguished by mass spectrometry or infrared spectroscopy. For example, in 'stable isotope labeling with amino acids in cell culture (SILAC)' stable isotopes are used to quantify proteins. If radioactive isotopes are used, they can be detected by the radiation they emit (this is called radioisotopic labeling).
Isotopes are commonly used to determine the concentration of various elements or substances using the isotope dilution method, whereby known amounts of isotopically substituted compounds are mixed with the samples and the isotopic signatures of the resulting mixtures are determined with mass spectrometry.
Use of nuclear properties
A technique similar to radioisotopic labeling is radiometric dating: using the known half-life of an unstable element, one can calculate the amount of time that has elapsed since a known concentration of isotope existed. The most widely known example is radiocarbon dating used to determine the age of carbonaceous materials.
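As a brief illustration (assuming the well-known carbon-14 half-life of about 5,730 years and a hypothetical measured fraction), the elapsed time follows by inverting the exponential decay law:

import math

# Radiometric dating: N(t) = N0 * (1/2)**(t / half_life), so
# t = half_life * log2(N0 / N).
half_life_c14 = 5730.0       # years, carbon-14
remaining_fraction = 0.25    # hypothetical measured N / N0

elapsed = half_life_c14 * math.log2(1.0 / remaining_fraction)
print(f"elapsed time ~ {elapsed:.0f} years")  # two half-lives ~ 11,460 years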
Several forms of spectroscopy rely on the unique nuclear properties of specific isotopes, both radioactive and stable. For example, nuclear magnetic resonance (NMR) spectroscopy can be used only for isotopes with a nonzero nuclear spin. The most common nuclides used with NMR spectroscopy are 1H, 2D, 15N, 13C, and 31P.
Mössbauer spectroscopy also relies on the nuclear transitions of specific isotopes, such as 57Fe.
Radionuclides also have important uses. Nuclear power and nuclear weapons development require relatively large quantities of specific isotopes. Nuclear medicine and radiation oncology utilize radioisotopes respectively for medical diagnosis and treatment.
Atomic mass
Atomic mass (ma or m) is the mass of a single atom. The atomic mass mostly comes from the combined mass of the protons and neutrons in the nucleus, with minor contributions from the electrons and nuclear binding energy. The atomic mass of atoms, ions, or atomic nuclei is slightly less than the sum of the masses of their constituent protons, neutrons, and electrons, due to binding energy mass loss (per E = mc²).
Atomic mass is often measured in daltons (Da) or unified atomic mass units (u). One dalton is equal to one twelfth of the mass of a free carbon-12 atom at rest in its ground state. Thus, the numeric value of the atomic mass when expressed in daltons has nearly the same value as the mass number. The value of 1 unified atomic mass unit in kilograms is approximately 1.66054 × 10⁻²⁷ kg.
Conversion between mass in kilograms and mass in daltons can be done using the atomic mass constant m_u.
The formula used for conversion is:

1 Da = m_u = M_u / N_A = M(¹²C) / (12 N_A)

where M_u is the molar mass constant, N_A is the Avogadro constant, and M(¹²C) is the experimentally determined molar mass of carbon-12.
The relative isotopic mass (see section below) can be obtained by dividing the atomic mass ma of an isotope by the atomic mass constant mu yielding a dimensionless value. Thus, the atomic mass of a carbon-12 atom is by definition, but the relative isotopic mass of a carbon-12 atom is simply 12. The sum of relative isotopic masses of all atoms in a molecule is the relative molecular mass.
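A minimal sketch of these conversions, using carbon-12 as the example (the numerical value of the atomic mass constant is supplied here and should be treated as approximate):

# Convert between daltons and kilograms, and form the dimensionless
# relative isotopic mass m_a / m_u.
m_u = 1.66053907e-27       # atomic mass constant in kg (approximate CODATA value)

m_c12_da = 12.0                          # atomic mass of carbon-12 in daltons
m_c12_kg = m_c12_da * m_u                # the same mass in kilograms
relative_isotopic_mass = m_c12_kg / m_u  # dimensionless, exactly 12

print(f"carbon-12 mass: {m_c12_kg:.6e} kg")
print(f"relative isotopic mass: {relative_isotopic_mass:.2f}")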
The atomic mass of an isotope and the relative isotopic mass refers to a certain specific isotope of an element. Because substances are usually not isotopically pure, it is convenient to use the elemental atomic mass which is the average (mean) atomic mass of an element, weighted by the abundance of the isotopes. The dimensionless (standard) atomic weight is the weighted mean relative isotopic mass of a (typical naturally occurring) mixture of isotopes.
The 2019 revision of the SI redefined the kilogram using the Planck constant (h), improving the precision of the atomic mass constant by anchoring it to fixed physical constants. Although the dalton remains defined via carbon-12, the revision enhances traceability and accuracy in atomic mass measurements.
Relative isotopic mass
Relative isotopic mass (a property of a single atom) is not to be confused with the averaged quantity atomic weight (see above), that is an average of values for many atoms in a given sample of a chemical element.
While atomic mass is an absolute mass, relative isotopic mass is a dimensionless number with no units. This loss of units results from the use of a scaling ratio with respect to a carbon-12 standard, and the word "relative" in the term "relative isotopic mass" refers to this scaling relative to carbon-12.
The relative isotopic mass, then, is the mass of a given isotope (specifically, any single nuclide), when this value is scaled by the mass of carbon-12, where the latter has to be determined experimentally. Equivalently, the relative isotopic mass of an isotope or nuclide is the mass of the isotope relative to 1/12 of the mass of a carbon-12 atom.
For example, the relative isotopic mass of a carbon-12 atom is exactly 12. For comparison, the atomic mass of a carbon-12 atom is exactly 12 daltons. Alternately, the atomic mass of a carbon-12 atom may be expressed in any other mass units: for example, the atomic mass of a carbon-12 atom is approximately 1.9926 × 10⁻²⁶ kg.
As is the case for the related atomic mass when expressed in daltons, the relative isotopic mass numbers of nuclides other than carbon-12 are not whole numbers, but are always close to whole numbers. This is discussed fully below.
Similar terms for different quantities
The atomic mass or relative isotopic mass are sometimes confused, or incorrectly used, as synonyms of relative atomic mass (also known as atomic weight) or the standard atomic weight (a particular variety of atomic weight, in the sense that it is standardized). However, as noted in the introduction, atomic mass is an absolute mass while all other terms are dimensionless. Relative atomic mass and standard atomic weight represent terms for (abundance-weighted) averages of relative atomic masses in elemental samples, not for single nuclides. As such, relative atomic mass and standard atomic weight often differ numerically from the relative isotopic mass.
The atomic mass (relative isotopic mass) is defined as the mass of a single atom, which can only be one isotope (nuclide) at a time, and is not an abundance-weighted average, as in the case of relative atomic mass/atomic weight. The atomic mass or relative isotopic mass of each isotope and nuclide of a chemical element is therefore a number that can in principle be measured to high precision, since all atoms of a given nuclide in the same energy state are expected to be exactly identical in mass. For example, every atom of oxygen-16 is expected to have exactly the same atomic mass (relative isotopic mass) as every other atom of oxygen-16.
In the case of many elements that have one naturally occurring isotope (mononuclidic elements) or one dominant isotope, the difference between the atomic mass of the most common isotope, and the (standard) relative atomic mass or (standard) atomic weight can be small or even nil, and does not affect most bulk calculations. However, such an error can exist and even be important when considering individual atoms for elements that are not mononuclidic.
For non-mononuclidic elements that have more than one common isotope, the numerical difference in relative atomic mass (atomic weight) from even the most common relative isotopic mass, can be half a mass unit or more (e.g. see the case of chlorine where atomic weight and standard atomic weight are about 35.45). The atomic mass (relative isotopic mass) of an uncommon isotope can differ from the relative atomic mass, atomic weight, or standard atomic weight, by several mass units.
Relative isotopic masses are always close to whole-number values, but never (except in the case of carbon-12) exactly a whole number, for two reasons:
protons and neutrons have different masses, and different nuclides have different ratios of protons and neutrons.
atomic masses are reduced, to different extents, by their binding energies.
The ratio of atomic mass to mass number (number of nucleons) varies from about 0.9988 for ⁵⁶Fe to about 1.0078 for ¹H.
Any mass defect due to nuclear binding energy is experimentally a small fraction (less than 1%) of the mass of an equal number of free nucleons. When compared to the average mass per nucleon in carbon-12, which is moderately strongly bound compared with other atoms, the mass defect of binding for most atoms is an even smaller fraction of a dalton (unified atomic mass unit, based on carbon-12). Since free protons and neutrons differ from each other in mass by a small fraction of a dalton (about 0.0014 Da), rounding the relative isotopic mass, or the atomic mass of any given nuclide given in daltons, to the nearest whole number always gives the nucleon count, or mass number. Additionally, the neutron count (neutron number) may then be derived by subtracting the number of protons (atomic number) from the mass number (nucleon count).
Mass defect
The amount that the ratio of atomic masses to mass number deviates from 1 is as follows: the deviation starts positive at hydrogen-1, then decreases until it reaches a local minimum at helium-4. Isotopes of lithium, beryllium, and boron are less strongly bound than helium, as shown by their increasing mass-to-mass number ratios.
At carbon, the ratio of mass (in daltons) to mass number is defined as 1, and after carbon it becomes less than one until a minimum is reached at iron-56 (with only slightly higher values for iron-58 and nickel-62), then increases to positive values in the heavy isotopes, with increasing atomic number. This corresponds to the fact that nuclear fission in an element heavier than zirconium produces energy, and fission in any element lighter than niobium requires energy. On the other hand, nuclear fusion of two atoms of an element lighter than scandium (except for helium) produces energy, whereas fusion in elements heavier than calcium requires energy. The fusion of two atoms of 4He yielding beryllium-8 would require energy, and the beryllium would quickly fall apart again. 4He can fuse with tritium (3H) or with 3He; these processes occurred during Big Bang nucleosynthesis. The formation of elements with more than seven nucleons requires the fusion of three atoms of 4He in the triple-alpha process, skipping over lithium, beryllium, and boron to produce carbon-12.
Here are some values of the ratio of atomic mass to mass number:
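As an illustration (the nuclides and standard isotopic masses below are chosen here as examples), the ratio can be computed directly:

# Ratio of atomic mass (in daltons) to mass number A for selected nuclides,
# using standard isotopic masses supplied for illustration.
nuclides = {
    "hydrogen-1":  (1.007825, 1),
    "helium-4":    (4.002602, 4),
    "carbon-12":   (12.000000, 12),
    "iron-56":     (55.934936, 56),
    "uranium-238": (238.050788, 238),
}

for name, (mass, A) in nuclides.items():
    print(f"{name}: {mass / A:.6f}")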
Measurement of atomic masses
Direct comparison and measurement of the masses of atoms is achieved with mass spectrometry.
Relationship between atomic and molecular masses
Similar definitions apply to molecules. One can calculate the molecular mass of a compound by adding the atomic masses (not the standard atomic weights) of its constituent atoms. Conversely, the molar mass is usually computed from the standard atomic weights (not the atomic or nuclide masses). Thus, molecular mass and molar mass differ slightly in numerical value and represent different concepts. Molecular mass is the mass of a molecule, which is the sum of its constituent atomic masses. Molar mass is an average of the masses of the constituent molecules in a chemically pure but isotopically heterogeneous ensemble. In both cases, the multiplicity of the atoms (the number of times it occurs) must be taken into account, usually by multiplication of each unique mass by its multiplicity.
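A short sketch of the distinction, using water as the example (the isotopic masses, standard atomic weights, and the choice of the ¹H₂¹⁶O isotopologue are supplied here for illustration):

# Molecular mass of a specific isotopologue (1H2 16O) versus the molar mass
# of ordinary water computed from standard atomic weights.
m_H1, m_O16 = 1.007825, 15.994915   # isotopic masses, Da
A_H, A_O = 1.008, 15.999            # standard atomic weights

molecular_mass = 2 * m_H1 + m_O16   # mass of one 1H2 16O molecule, in Da
molar_mass = 2 * A_H + A_O          # grams per mole of natural water

print(f"molecular mass of 1H2 16O: {molecular_mass:.6f} Da")  # ~18.0106
print(f"molar mass of natural H2O: {molar_mass:.3f} g/mol")   # ~18.015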
History
The first scientists to determine relative atomic masses were John Dalton and Thomas Thomson between 1803 and 1805 and Jöns Jakob Berzelius between 1808 and 1826. Relative atomic mass (Atomic weight) was originally defined relative to that of the lightest element, hydrogen, which was taken as 1.00, and in the 1820s, Prout's hypothesis stated that atomic masses of all elements would prove to be exact multiples of that of hydrogen. Berzelius, however, soon proved that this was not even approximately true, and for some elements, such as chlorine, relative atomic mass, at about 35.5, falls almost exactly halfway between two integral multiples of that of hydrogen. Still later, this was shown to be largely due to a mix of isotopes, and that the atomic masses of pure isotopes, or nuclides, are multiples of the hydrogen mass, to within about 1%.
In the 1860s, Stanislao Cannizzaro refined relative atomic masses by applying Avogadro's law (notably at the Karlsruhe Congress of 1860). He formulated a law to determine relative atomic masses of elements: the different quantities of the same element contained in different molecules are all whole multiples of the atomic weight and determined relative atomic masses and molecular masses by comparing the vapor density of a collection of gases with molecules containing one or more of the chemical element in question.
In the 20th century, until the 1960s, chemists and physicists used two different atomic-mass scales. The chemists used an "atomic mass unit" (amu) scale such that the natural mixture of oxygen isotopes had an atomic mass 16, while the physicists assigned the same number 16 to only the atomic mass of the most common oxygen isotope (16O, containing eight protons and eight neutrons). However, because oxygen-17 and oxygen-18 are also present in natural oxygen this led to two different tables of atomic mass. The unified scale based on carbon-12, 12C, met the physicists' need to base the scale on a pure isotope, while being numerically close to the chemists' scale. This was adopted as the 'unified atomic mass unit'. The current International System of Units (SI) primary recommendation for the name of this unit is the dalton and symbol 'Da'. The name 'unified atomic mass unit' and symbol 'u' are recognized names and symbols for the same unit.
The term atomic weight is slowly being phased out and replaced by relative atomic mass in most current usage. This shift in nomenclature, which reaches back to the 1960s, has been the source of much debate in the scientific community; it was triggered by the adoption of the unified atomic mass unit and the realization that weight was in some ways an inappropriate term. The argument for keeping the term "atomic weight" was primarily that it was a well understood term to those in the field, that the term "atomic mass" was already in use (as it is currently defined) and that the term "relative atomic mass" might be easily confused with relative isotopic mass (the mass of a single atom of a given nuclide, expressed dimensionlessly relative to 1/12 of the mass of carbon-12; see section above).
In 1979, as a compromise, the term "relative atomic mass" was introduced as a secondary synonym for atomic weight. Twenty years later the primacy of these synonyms was reversed, and the term "relative atomic mass" is now the preferred term.
However, the term "standard atomic weights" (referring to the standardized expectation atomic weights of differing samples) has not been changed, because simple replacement of "atomic weight" with "relative atomic mass" would have resulted in the term "standard relative atomic mass."
Dark energy
In physical cosmology and astronomy, dark energy is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe. Assuming that the lambda-CDM model of cosmology is correct, dark energy dominates the universe, contributing 68% of the total energy in the present-day observable universe, while dark matter and ordinary (baryonic) matter contribute 26% and 5%, respectively, and other components such as neutrinos and photons are nearly negligible. Dark energy's density is very low, about 6 × 10⁻¹⁰ J/m³ (roughly 7 × 10⁻²⁷ kg/m³ in mass-energy), much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the universe's mass–energy content because it is uniform across space.
The first observational evidence for dark energy's existence came from measurements of supernovae. Type Ia supernovae have nearly constant peak luminosity, which means that they can be used as accurate distance measures. Comparing this distance to the redshift (which measures the speed at which the supernova is receding) shows that the universe's expansion is accelerating. Prior to this observation, scientists thought that the gravitational attraction of matter and energy in the universe would cause the universe's expansion to slow over time. Since the discovery of accelerating expansion, several independent lines of evidence have been discovered that support the existence of dark energy.
The exact nature of dark energy remains a mystery, and many possible explanations have been theorized. The main candidates are a cosmological constant (representing a constant energy density filling space homogeneously) and scalar fields (dynamic quantities having energy densities that vary in time and space) such as quintessence or moduli. A cosmological constant would remain constant across time and space, while scalar fields can vary. Yet other possibilities are interacting dark energy, an observational effect, and cosmological coupling (see the section ).
History of discovery and previous speculation
Einstein's cosmological constant
The "cosmological constant" is a constant term that can be added to Einstein field equations of general relativity. If considered as a "source term" in the field equation, it can be viewed as equivalent to the mass of empty space (which conceptually could be either positive or negative), or "vacuum energy".
The cosmological constant was first proposed by Einstein as a mechanism to obtain a solution to the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity. Einstein gave the cosmological constant the symbol Λ (capital lambda). Einstein stated that the cosmological constant required that 'empty space takes the role of gravitating negative masses which are distributed all over the interstellar space'.
The mechanism was an example of fine-tuning, and it was later realized that Einstein's static universe would not be stable: local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe which contracts slightly will continue contracting. According to Einstein, "empty space" can possess its own energy. Because this energy is a property of space itself, it would not be diluted as space expands. As more space comes into existence, more of this energy-of-space would appear, thereby causing accelerated expansion. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. Further, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding and is not static. Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder.
Inflationary dark energy
Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher energy density than the dark energy we observe today, and it is thought to have ended completely when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe.
Nearly all inflation models predict that the total (matter+energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter (CDM) and 5% ordinary matter (baryons). These models were found to be successful at forming realistic galaxies and clusters, but some problems appeared in the late 1980s: in particular, the model required a value for the Hubble constant lower than preferred by observations, and the model under-predicted observations of large-scale galaxy clustering. These difficulties became stronger after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s: these included the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from supernova observations in 1998 of accelerated expansion in Riess et al. and in Perlmutter et al., and the Lambda-CDM model then became the leading model. Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background experiments observed the first acoustic peak in the cosmic microwave background, showing that the total (matter+energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and give more accurate measurements of the key parameters.
The term "dark energy", echoing Fritz Zwicky's "dark matter" from the 1930s, was coined by Michael S. Turner in 1998.
Change in expansion over time
High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. In general relativity, the evolution of the expansion rate is estimated from the curvature of the universe and the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today. Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model of cosmology" because of its precise agreement with observations.
As of 2013, the Lambda-CDM model is consistent with a series of increasingly rigorous cosmological observations, including the Planck spacecraft and the Supernova Legacy Survey. First results from the SNLS reveal that the average behavior (i.e., equation of state) of dark energy behaves like Einstein's cosmological constant to a precision of 10%. Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration.
Nature
The nature of dark energy is more hypothetical than that of dark matter, and many things about it remain in the realm of speculation. Dark energy is thought to be very homogeneous and not dense, and is not known to interact through any of the fundamental forces other than gravity. Since it is rarefied and un-massive (roughly 10⁻²⁷ kg/m³), it is unlikely to be detectable in laboratory experiments. The reason dark energy can have such a profound effect on the universe, making up 68% of universal density in spite of being so dilute, is that it is believed to uniformly fill otherwise empty space.
The vacuum energy, that is, the particle-antiparticle pairs generated and mutually annihilated within a time frame in accord with Heisenberg's uncertainty principle in the energy-time formulation, has been often invoked as the main contribution to dark energy. The mass–energy equivalence postulated by general relativity implies that the vacuum energy should exert a gravitational force. Hence, the vacuum energy is expected to contribute to the cosmological constant, which in turn impinges on the accelerated expansion of the universe. However, the cosmological constant problem asserts that there is a huge disagreement between the observed values of vacuum energy density and the theoretical large value of zero-point energy obtained by quantum field theory; the problem remains unresolved.
Independently of its actual nature, dark energy would need to have a strong negative pressure to explain the observed acceleration of the expansion of the universe. According to general relativity, the pressure within a substance contributes to its gravitational attraction for other objects just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the stress–energy tensor, which contains both the energy (or matter) density of a substance and its pressure. In the Friedmann–Lemaître–Robertson–Walker metric, it can be shown that a strong constant negative pressure (i.e., tension) in all the universe causes an acceleration in the expansion if the universe is already expanding, or a deceleration in contraction if the universe is already contracting. This accelerating expansion effect is sometimes labeled "gravitational repulsion".
Technical definition
In standard cosmology, there are three components of the universe: matter, radiation, and dark energy. Matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a⁻³, while radiation is anything whose energy density scales as the inverse fourth power of the scale factor (ρ ∝ a⁻⁴). This can be understood intuitively: for an ordinary particle in a cube-shaped box, doubling the length of an edge of the box decreases the density (and hence energy density) by a factor of eight (2³). For radiation, the decrease in energy density is greater, because an increase in spatial distance also causes a redshift.
The final component is dark energy: it is an intrinsic property of space and has a constant energy density, regardless of the dimensions of the volume under consideration (ρ ∝ a⁰). Thus, unlike ordinary matter, it is not diluted by the expansion of space.
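A minimal sketch of the scaling behaviour just described, assuming round present-day density parameters (roughly 0.3 for matter, 0.7 for dark energy, and about 9 × 10⁻⁵ for radiation; these numbers are supplied only for illustration):

# How the energy densities of matter, radiation, and dark energy scale with
# the cosmic scale factor a (a = 1 today), relative to today's critical density.
omega_m, omega_r, omega_de = 0.3, 9e-5, 0.7   # illustrative present-day values

for a in (0.001, 0.1, 0.5, 1.0, 2.0):
    rho_m = omega_m * a**-3    # matter dilutes as a^-3
    rho_r = omega_r * a**-4    # radiation dilutes as a^-4 (extra redshift factor)
    rho_de = omega_de          # dark energy density stays constant
    total = rho_m + rho_r + rho_de
    print(f"a={a:5.3f}: matter {rho_m/total:.3f}, radiation {rho_r/total:.3f}, "
          f"dark energy {rho_de/total:.3f}")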
Evidence of existence
The evidence for dark energy is indirect but comes from three independent sources:
Distance measurements and their relation to redshift, which suggest the universe has expanded more in the latter half of its life than in the former half of its life.
The theoretical need for a type of additional energy that is not matter or dark matter to form the observationally flat universe (absence of any detectable global curvature).
Measurements of large-scale wave patterns of mass density in the universe.
Supernovae
In 1998, the High-Z Supernova Search Team published observations of Type Ia ("one-A") supernovae. In 1999, the Supernova Cosmology Project followed, suggesting that the expansion of the universe is accelerating. The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess for their leadership in the discovery.
Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model. Some people argue that the only indications for the existence of dark energy are observations of distance measurements and their associated redshifts. Cosmic microwave background anisotropies and baryon acoustic oscillations serve only to demonstrate that distances to a given redshift are larger than would be expected from a "dusty" Friedmann–Lemaître universe and the local measured Hubble constant.
Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow researchers to measure the expansion history of the universe by looking at the relationship between the distance to an object and its redshift, which gives how fast it is receding from us. The relationship is roughly linear, according to Hubble's law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, or absolute magnitude, is known. This allows the object's distance to be measured from its actual observed brightness, or apparent magnitude. Type Ia supernovae are the most accurate known standard candles across cosmological distances because of their extreme and consistent luminosity.
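A short sketch of how a standard candle yields a distance, using the distance-modulus relation between apparent magnitude m and absolute magnitude M (the apparent magnitude below is hypothetical; an absolute magnitude of about −19.3 is a typical value quoted for Type Ia supernovae):

import math

# Distance from the distance modulus: m - M = 5 * log10(d / 10 pc).
M_abs = -19.3    # typical peak absolute magnitude of a Type Ia supernova
m_app = 22.0     # hypothetical observed (apparent) magnitude

distance_pc = 10 ** ((m_app - M_abs + 5) / 5)
print(f"distance ~ {distance_pc / 1e6:.0f} Mpc")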
Recent observations of supernovae are consistent with a universe made up 71.3% of dark energy and 27.4% of a combination of dark matter and baryonic matter.
Large-scale structure
The theory of large-scale structure, which governs the formation of structures in the universe (stars, quasars, galaxies and galaxy groups and clusters), also suggests that the density of matter in the universe is only 30% of the critical density.
A 2011 survey, the WiggleZ galaxy survey of more than 200,000 galaxies, provided further evidence towards the existence of dark energy, although the exact physics behind it remains unknown. The WiggleZ survey from the Australian Astronomical Observatory scanned the galaxies to determine their redshift. Then, by exploiting the fact that baryon acoustic oscillations have left regularly spaced voids roughly 150 Mpc in diameter, surrounded by galaxies, the voids were used as standard rulers to estimate distances to galaxies as far as 2,000 Mpc (redshift 0.6), allowing for an accurate estimate of the speeds of galaxies from their redshifts and distances. The data confirmed cosmic acceleration up to half of the age of the universe (7 billion years) and constrained its inhomogeneity to 1 part in 10. This provides a confirmation of cosmic acceleration independent of supernovae.
Cosmic microwave background
The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass–energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the cosmic microwave background spectrum, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%. The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft seven-year analysis estimated a universe made up of 72.8% dark energy, 22.7% dark matter, and 4.5% ordinary matter. Work done in 2013 based on the Planck spacecraft observations of the cosmic microwave background gave a more accurate estimate of 68.3% dark energy, 26.8% dark matter, and 4.9% ordinary matter.
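The budget argument here is simple arithmetic: in a flat universe the density fractions sum to one, so whatever matter does not supply must come from another component. A minimal sketch, assuming a matter fraction of 30% as quoted above:

    # Flatness argument sketch: density parameters sum to 1 in a flat universe.
    omega_total = 1.0       # flat geometry, from CMB measurements
    omega_matter = 0.30     # baryons + dark matter (rounded value from the text)
    omega_dark_energy = omega_total - omega_matter
    print(f"implied dark-energy fraction: {omega_dark_energy:.0%}")  # ~70%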
Late-time integrated Sachs–Wolfe effect
Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the cosmic microwave background aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs–Wolfe effect (ISW) is a direct signal of dark energy in a flat universe. It was reported at high significance in 2008 by Ho et al. and Giannantonio et al.
Observational Hubble constant data
A new approach to test evidence of dark energy through observational Hubble constant data (OHD), also known as cosmic chronometers, has gained significant attention in recent years.
The Hubble constant, H(z), is measured as a function of cosmological redshift. OHD directly tracks the expansion history of the universe by taking passively evolving early-type galaxies as "cosmic chronometers". This approach provides standard clocks in the universe. The core of this idea is the measurement of the differential age evolution as a function of redshift of these cosmic chronometers. Thus, it provides a direct estimate of the Hubble parameter, H(z) ≈ −(1/(1 + z)) Δz/Δt.
The reliance on a differential quantity, Δz/Δt, brings more information and is appealing for computation: it can minimize many common issues and systematic effects. Analyses of supernovae and baryon acoustic oscillations (BAO) are based on integrals of the Hubble parameter, whereas OHD measures it directly. For these reasons, this method has been widely used to examine the accelerated cosmic expansion and study properties of dark energy.
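A minimal sketch of the cosmic-chronometer estimate, assuming the differential-age relation H(z) ≈ −(1/(1 + z)) Δz/Δt; the redshifts and stellar-population ages below are invented illustrative numbers, not survey data:

    # Cosmic-chronometer ("differential age") sketch: H(z) ~ -1/(1+z) * dz/dt.
    # The two redshift/age pairs are invented illustrative values.
    SEC_PER_GYR = 3.156e16   # seconds in a gigayear
    KM_PER_MPC = 3.086e19    # kilometres in a megaparsec

    def hubble_from_chronometers(z1, age1_gyr, z2, age2_gyr):
        """Estimate H at the mean redshift from two passively evolving galaxies."""
        dz = z2 - z1
        dt = (age2_gyr - age1_gyr) * SEC_PER_GYR   # galaxies at higher z are younger
        z_mean = 0.5 * (z1 + z2)
        h_per_sec = -(1.0 / (1.0 + z_mean)) * dz / dt
        return h_per_sec * KM_PER_MPC              # convert 1/s to km/s/Mpc

    print(f"H(z~0.45) = {hubble_from_chronometers(0.40, 9.5, 0.50, 8.5):.0f} km/s/Mpc")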
Theories of dark energy
Dark energy's status as a hypothetical force with unknown properties makes it an active target of research. The problem is attacked from a variety of angles, such as modifying the prevailing theory of gravity (general relativity), attempting to pin down the properties of dark energy, and finding alternative ways to explain the observational data.
Cosmological constant
The simplest explanation for dark energy is that it is an intrinsic, fundamental energy of space. This is the cosmological constant, usually represented by the Greek letter Λ (Lambda, hence the name Lambda-CDM model). Since energy and mass are related according to the equation E = mc², Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called vacuum energy because it is the energy density of empty space – of vacuum.
A major outstanding problem is that quantum field theories predict a huge cosmological constant from the energy of the quantum vacuum, about 120 orders of magnitude too large. This would need to be almost, but not exactly, cancelled by an equally large term of the opposite sign.
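To make the size of this discrepancy concrete, a rough order-of-magnitude sketch (using assumed, commonly quoted ballpark values rather than figures from this article) compares a naive Planck-scale vacuum energy density with the observed dark-energy density:

    # Order-of-magnitude sketch of the cosmological constant problem.
    # Both densities are assumed ballpark values in GeV^4 (natural units).
    import math

    naive_qft_vacuum_density = 1e76        # roughly (Planck scale ~1e19 GeV)**4
    observed_dark_energy_density = 1e-47   # roughly (2e-3 eV)**4 expressed in GeV^4

    discrepancy = naive_qft_vacuum_density / observed_dark_energy_density
    print(f"mismatch: about 10^{math.log10(discrepancy):.0f}")  # ~10^123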
Some supersymmetric theories require a cosmological constant that is exactly zero. Also, it is unknown whether there is a metastable vacuum state in string theory with a positive cosmological constant, and it has been conjectured by Ulf Danielsson et al. that no such state exists. This conjecture would not rule out other models of dark energy, such as quintessence, that could be compatible with string theory.
Quintessence
In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength. In the simplest scenarios, the quintessence field has a canonical kinetic term, is minimally coupled to gravity, and does not feature higher-order operators in its Lagrangian.
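A minimal sketch of what distinguishes such a field from a cosmological constant (the potential and field velocities below are purely illustrative assumptions): for a canonical scalar field the equation-of-state parameter is w = (½φ̇² − V) / (½φ̇² + V), which approaches −1 when the field rolls slowly and potential energy dominates.

    # Equation of state of a canonical quintessence field:
    # w = (kinetic - potential) / (kinetic + potential).
    # The field velocities and potential value are made-up illustrative numbers.
    def quintessence_w(phi_dot, potential):
        kinetic = 0.5 * phi_dot**2
        return (kinetic - potential) / (kinetic + potential)

    V = 1.0  # arbitrary units
    for phi_dot in (0.0, 0.3, 1.0, 3.0):
        print(f"phi_dot = {phi_dot}: w = {quintessence_w(phi_dot, V):+.2f}")
    # Slow roll (phi_dot -> 0) gives w -> -1, mimicking a cosmological constant;
    # faster evolution pushes w above -1, giving slightly weaker acceleration.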
No evidence of quintessence is yet available, nor has it been ruled out. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses.
The coincidence problem asks why the acceleration of the Universe began when it did. If acceleration began earlier in the universe, structures such as galaxies would never have had time to form, and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called "tracker" behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter–radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy.
In 2004, when scientists fit the evolution of dark energy with the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proved showing that this scenario requires models with at least two types of quintessence. This scenario is the so-called Quintom scenario.
Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy such as a negative kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip.
A group of researchers argued in 2021 that observations of the Hubble tension may imply that only quintessence models with a nonzero coupling constant are viable.
Interacting dark energy
This class of theories attempts to come up with an all-encompassing theory of both dark matter and dark energy as a single phenomenon that modifies the laws of gravity at various scales. This could, for example, treat dark energy and dark matter as different facets of the same unknown substance, or postulate that cold dark matter decays into dark energy. Another class of theories that unifies dark matter and dark energy is covariant theories of modified gravity. These theories alter the dynamics of spacetime such that the modified dynamics accounts for what has been attributed to the presence of dark energy and dark matter. Dark energy could in principle interact not only with the rest of the dark sector, but also with ordinary matter. However, cosmology alone is not sufficient to effectively constrain the strength of the coupling between dark energy and baryons, so other indirect techniques or laboratory searches have to be adopted. It was briefly theorized in the early 2020s that an excess observed in the XENON1T detector in Italy may have been caused by a chameleon model of dark energy, but further experiments disproved this possibility.
Variable dark energy models
The density of dark energy might have varied in time during the history of the universe. Modern observational data allows us to estimate the present density of dark energy. Using baryon acoustic oscillations, it is possible to investigate the effect of dark energy in the history of the universe, and constrain parameters of the equation of state of dark energy. To that end, several models have been proposed. One of the most popular models is the Chevallier–Polarski–Linder model (CPL). Some other common models are Barboza & Alcaniz (2008), Jassal et al. (2005), Wetterich (2004), and Oztas et al. (2018).
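As a minimal sketch of the CPL parametrization mentioned above, which writes the equation of state as w(a) = w₀ + wₐ(1 − a) with a = 1/(1 + z); the parameter values chosen here are assumptions for illustration, not fitted results:

    # CPL parametrization sketch: w(a) = w0 + wa * (1 - a), with a = 1/(1+z).
    # The parameter values below are illustrative assumptions, not fits.
    def w_cpl(z, w0=-1.0, wa=0.3):
        a = 1.0 / (1.0 + z)
        return w0 + wa * (1.0 - a)

    for z in (0.0, 0.5, 1.0, 3.0):
        print(f"z = {z}: w = {w_cpl(z):+.3f}")
    # A cosmological constant corresponds to w0 = -1, wa = 0 (w = -1 at all z).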
Possibly decreasing levels
Researchers using the Dark Energy Spectroscopic Instrument (DESI) to make the largest 3-D map of the universe as of 2024 have obtained an expansion history measured with better than 1% precision. From this level of detail, DESI Director Michael Levi stated: "We're also seeing some potentially interesting differences that could indicate that dark energy is evolving over time. Those may or may not go away with more data, so we're excited to start analyzing our three-year dataset soon."
Observational skepticism
Some alternatives to dark energy, such as inhomogeneous cosmology, aim to explain the observational data by a more refined use of established theories. In this scenario, dark energy does not actually exist, and is merely a measurement artifact. For example, if we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration. A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration, and making it appear as if we live in a Hubble bubble. Yet other possibilities are that the accelerated expansion of the universe is an illusion caused by our motion relative to the rest of the universe, or that the statistical methods employed were flawed. A laboratory direct detection attempt failed to detect any force associated with dark energy.
Observational skepticism explanations of dark energy have generally not gained much traction among cosmologists. For example, a paper that suggested the anisotropy of the local Universe has been misrepresented as dark energy was quickly countered by another paper claiming errors in the original paper. Another study questioning the essential assumption that the luminosity of Type Ia supernovae does not vary with stellar population age was also swiftly rebutted by other cosmologists.
As a general relativistic effect due to black holes
This theory was formulated by researchers of the University of Hawaiʻi at Mānoa in February 2023. The idea is that if one requires the Kerr metric (which describes rotating black holes) to asymptote to the Friedmann–Robertson–Walker metric (which describes the isotropic and homogeneous universe that is the basic assumption of modern cosmology), then one finds that black holes gain mass as the universe expands. The rate is measured to be M ∝ a³, where a is the scale factor. This particular rate means that the energy density of black holes remains constant over time, mimicking dark energy (see the Technical definition section above). The theory is called "cosmological coupling" because the black holes couple to a cosmological requirement. Other astrophysicists are skeptical, with a variety of papers claiming that the theory fails to explain other observations.
Other mechanism driving acceleration
Modified gravity
The evidence for dark energy is heavily dependent on the theory of general relativity. Therefore, it is conceivable that a modification to general relativity also eliminates the need for dark energy. There are many such theories, and research is ongoing. The measurement of the speed of gravity in the first gravitational wave measured by non-gravitational means (GW170817) ruled out many modified gravity theories as explanations to dark energy.
Astrophysicist Ethan Siegel states that, while such alternatives gain mainstream press coverage, almost all professional astrophysicists are confident that dark energy exists and that none of the competing theories successfully explain observations to the same level of precision as standard dark energy.
Implications for the fate of the universe
Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of matter. The density of dark matter in an expanding universe decreases more quickly than dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant).
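The dilution argument can be checked with a few lines of arithmetic; the sketch below (assuming a pure cosmological constant for dark energy, with made-up starting densities) shows dark energy overtaking matter after a few volume doublings:

    # Dilution sketch: each time the volume doubles, the matter density halves
    # while a cosmological constant stays fixed.  Starting densities are
    # assumed round values in arbitrary units.
    rho_matter, rho_lambda = 1.0, 0.25
    for doubling in range(5):
        dominant = "matter" if rho_matter > rho_lambda else "dark energy"
        print(f"after {doubling} volume doublings: matter = {rho_matter:.3f}, "
              f"dark energy = {rho_lambda:.3f} ({dominant} dominates)")
        rho_matter /= 2.0   # matter dilutes with volume; rho_lambda is unchanged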
Projections into the future can differ radically for different models of dark energy. For a cosmological constant, or any other model that predicts that the acceleration will continue indefinitely, the ultimate result will be that galaxies outside the Local Group will have a line-of-sight velocity that continually increases with time, eventually far exceeding the speed of light. This is not a violation of special relativity because the notion of "velocity" used here is different from that of velocity in a local inertial frame of reference, which is still constrained to be less than the speed of light for any massive object (see Uses of the proper distance for a discussion of the subtleties of defining any notion of relative velocity in cosmology). Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually.
However, because of the accelerating expansion, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future because the light never reaches a point where its "peculiar velocity" toward us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Uses of the proper distance). Assuming the dark energy is constant (a cosmological constant), the current distance to this cosmological event horizon is about 16 billion light years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event were less than 16 billion light years away, but the signal would never reach us if the event were more than 16 billion light years away.
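As a hedged numerical check of the "about 16 billion light years" figure (assuming a flat Lambda-CDM universe with round parameter values, not values stated in this article), the proper distance to the cosmological event horizon today can be obtained by integrating c da / (a² H(a)) from a = 1 to the infinite future:

    # Numerical sketch of the cosmological event horizon distance for flat
    # Lambda-CDM.  H0, Omega_m and Omega_Lambda are assumed round values.
    import math

    H0 = 70.0                     # km/s/Mpc (assumed)
    omega_m, omega_l = 0.3, 0.7   # assumed flat-universe density fractions
    c = 299_792.458               # speed of light, km/s
    MPC_TO_GLY = 3.262e-3         # 1 Mpc is about 3.262 million light years

    def integrand(a):
        hubble = H0 * math.sqrt(omega_m * a**-3 + omega_l)   # H(a) in km/s/Mpc
        return c / (a * a * hubble)                          # Mpc per unit of a

    # Integrate from a = 1 (today) toward a -> infinity, truncated at a = 1000
    # where the remaining contribution is negligible (midpoint rule).
    n, a_max = 200_000, 1000.0
    da = (a_max - 1.0) / n
    d_mpc = sum(integrand(1.0 + (i + 0.5) * da) for i in range(n)) * da
    print(f"event horizon distance: {d_mpc * MPC_TO_GLY:.1f} billion light years")  # ~16-17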
As galaxies approach the point of crossing this cosmological event horizon, the light from them will become more and more redshifted, to the point where the wavelength becomes too large to detect in practice and the galaxies appear to vanish completely (see Future of an expanding universe). Planet Earth, the Milky Way, and the Local Group of galaxies of which the Milky Way is a part, would all remain virtually undisturbed as the rest of the universe recedes and disappears from view. In this scenario, the Local Group would ultimately suffer heat death, just as was hypothesized for the flat, matter-dominated universe before measurements of cosmic acceleration.
There are other, more speculative ideas about the future of the universe. The phantom energy model of dark energy results in divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". On the other hand, dark energy might dissipate with time or even become attractive. Such uncertainties leave open the possibility of gravity eventually prevailing and lead to a universe that contracts in on itself in a "Big Crunch", or that there may even be a dark energy cycle, which implies a cyclic model of the universe in which every iteration (Big Bang then eventually a Big Crunch) takes about a trillion (1012) years. While none of these are supported by observations, they are not ruled out.
In philosophy of science
The astrophysicist David Merritt identifies dark energy as an example of an "auxiliary hypothesis", an ad hoc postulate that is added to a theory in response to observations that falsify it. He argues that the dark energy hypothesis is a conventionalist hypothesis, that is, a hypothesis that adds no empirical content and hence is unfalsifiable in the sense defined by Karl Popper. However, his opinion is not shared by all scientists.
Achromatopsia
Achromatopsia, also known as rod monochromacy, is a medical syndrome that exhibits symptoms relating to five conditions, most notably monochromacy. Historically, the name referred to monochromacy in general, but now typically refers only to an autosomal recessive congenital color vision condition. The term is also used to describe cerebral achromatopsia, though monochromacy is usually the only common symptom. The conditions include: monochromatic color blindness, poor visual acuity, and day-blindness. The syndrome is also present in an incomplete form that exhibits milder symptoms, including residual color vision. Achromatopsia is estimated to affect 1 in 30,000 live births worldwide.
Signs and symptoms
The five symptoms associated with achromatopsia are:
Color blindness – usually monochromacy
Reduced visual acuity – uncorrectable with lenses
Hemeralopia – with the subject exhibiting photophobia
Nystagmus
Iris operating abnormalities
The syndrome is typically first noticed in children around six months of age due to their photophobia or their nystagmus. The nystagmus becomes less noticeable with age but the other symptoms of the syndrome become more relevant as school age approaches. Visual acuity and stability of the eye motions generally improve during the first six to seven years of life – but remain near 20/200. Otherwise the syndrome is considered stationary and does not worsen with age.
If the light level during testing is optimized, achromats may achieve corrected visual acuity of 20/100 to 20/150 at lower light levels, regardless of the absence of color. The fundus of the eye appears completely normal.
Achromatopsia can be classified as complete or incomplete. In general, symptoms of incomplete achromatopsia are attenuated versions of those of complete achromatopsia. Individuals with incomplete achromatopsia have reduced visual acuity with or without nystagmus or photophobia. Incomplete achromats show only partial impairment of cone cell function.
Cause
Achromatopsia is sometimes called rod monochromacy (as opposed to blue cone monochromacy), as achromats exhibit a complete absence of cone cell activity via electroretinography in photopic lighting. There are at least four genetic causes of achromatopsia, two of which involve cyclic nucleotide-gated ion channels (ACHM2, ACHM3), a third involves the cone photoreceptor transducin (GNAT2, ACHM4), and the last remains unknown.
Known genetic causes of this include mutations in the cone cell cyclic nucleotide-gated ion channels CNGA3 (ACHM2) and CNGB3 (ACHM3), the cone cell transducin, GNAT2 (ACHM4), subunits of cone phosphodiesterase PDE6C (ACHM5, OMIM 613093) and PDE6H (ACHM6, OMIM 610024), and ATF6 (ACHM7, OMIM 616517).
Pathophysiology
The hemeralopic aspect of achromatopsia can be diagnosed non-invasively using electroretinography. The response at low (scotopic) and median (mesopic) light levels will be normal, but the response under high light level (photopic) conditions will be absent. The mesopic level is approximately a hundred times lower than the clinical level used for the typical high level electroretinogram. When the condition is as described, it is due to a saturation in the neural portion of the retina and not due to the absence of the photoreceptors per se.
In general, the molecular pathomechanism of achromatopsia is either the inability to properly control or respond to altered levels of cGMP, which is particularly important in visual perception as its level controls the opening of cyclic nucleotide-gated ion channels (CNGs). Decreasing the concentration of cGMP results in closure of CNGs and resulting hyperpolarization and cessation of glutamate release. Native retinal CNGs are composed of 2 α- and 2 β-subunits, which are CNGA3 and CNGB3, respectively, in cone cells. When expressed alone, CNGB3 cannot produce functional channels, whereas this is not the case for CNGA3. Coassembly of CNGA3 and CNGB3 produces channels with altered membrane expression, ion permeability (Na+ vs. K+ and Ca2+), relative efficacy of cAMP/cGMP activation, decreased outward rectification, current flickering, and sensitivity to block by L-cis-diltiazem.
Mutations tend to result in the loss of CNGB3 function or gain of function—often increased affinity for cGMP—of CNGA3. cGMP levels are controlled by the activity of the cone cell transducin, GNAT2. Mutations in GNAT2 tend to result in a truncated and, presumably, non-functional protein, thereby preventing alteration of cGMP levels by photons. There is a positive correlation between the severity of mutations in these proteins and the completeness of the achromatopsia phenotype.
Molecular diagnosis can be established by identification of biallelic variants in the causative genes. Molecular genetic testing approaches used in achromatopsia can include targeted analysis for the common CNGB3 variant c.1148delC (p.Thr383IlefsTer13), use of a multigene panel, or comprehensive genomic testing.
ACHM2
While some mutations in CNGA3 result in truncated and, presumably, non-functional channels, this is largely not the case. While few mutations have received in-depth study, at least one mutation does result in functional channels. Curiously, this mutation, T369S, produces profound alterations when expressed without CNGB3. One such alteration is decreased affinity for cyclic guanosine monophosphate. Others include the introduction of a sub-conductance, altered single-channel gating kinetics, and increased calcium permeability.
When mutant T369S channels coassemble with CNGB3, however, the only remaining aberration is increased calcium permeability. While it is not immediately clear how this increase in Ca2+ leads to achromatopsia, one hypothesis is that this increased current decreases the signal-to-noise ratio. Other characterized mutations, such as Y181C and the other S1 region mutations, result in decreased current density due to an inability of the channel to traffic to the surface. Such loss of function will undoubtedly negate the cone cell's ability to respond to visual input and produce achromatopsia. At least one other missense mutation outside of the S1 region, T224R, also leads to loss of function.
ACHM3
While very few mutations in CNGB3 have been characterized, the vast majority of them result in truncated channels that are presumably non-functional. This will largely result in haploinsufficiency, though in some cases the truncated proteins may be able to coassemble with wild-type channels in a dominant negative fashion. The most prevalent ACHM3 mutation, T383IfsX12, results in a non-functional truncated protein that does not properly traffic to the cell membrane.
The three missense mutations that have received further study show a number of aberrant properties, with one underlying theme. The R403Q mutation, which lies in the pore region of the channel, results in an increase in outward current rectification, versus the largely linear current-voltage relationship of wild-type channels, concomitant with an increase in cGMP affinity. The other mutations show either increased (S435F) or decreased (F525N) surface expression but also with increased affinity for cAMP and cGMP. It is the increased affinity for cGMP and cAMP in these mutants that is likely the disorder-causing change. Such increased affinity will result in channels that are insensitive to the slight concentration changes of cGMP due to light input into the retina.
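One way to see why increased cGMP affinity is disruptive is a simple Hill-equation sketch of channel open probability (an illustrative model with assumed parameter values, not data from the studies cited here): when the half-activating cGMP concentration falls well below physiological levels, the channel sits near saturation and barely responds to light-driven changes in cGMP.

    # Illustrative Hill-equation sketch of CNG channel opening versus cGMP.
    # K_half and the Hill coefficient are assumed illustrative values,
    # not measured parameters for any particular mutant.
    def open_probability(cgmp, k_half, hill_n=2.0):
        return cgmp**hill_n / (cgmp**hill_n + k_half**hill_n)

    dark, light = 4.0, 2.0   # assumed cGMP levels in darkness and light (a.u.)
    for label, k_half in (("normal affinity", 4.0), ("greatly increased affinity", 0.5)):
        p_dark = open_probability(dark, k_half)
        p_light = open_probability(light, k_half)
        print(f"{label}: open prob. dark = {p_dark:.2f}, light = {p_light:.2f}, "
              f"change = {p_dark - p_light:.2f}")
    # With greatly increased affinity the channel is nearly always open, so a
    # light-driven drop in cGMP produces almost no change in channel current.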
ACHM4
Upon activation by light, cone opsin causes the exchange of GDP for GTP in the guanine nucleotide binding protein (G-protein) α-transducing activity polypeptide 2 (GNAT2). This causes the release of the activated α-subunit from the inhibitory β/γ-subunits. This α-subunit then activates a phosphodiesterase that catalyzes the conversion of cGMP to GMP, thereby reducing current through CNG3 channels. As this process is absolutely vital for proper color processing, it is not surprising that mutations in GNAT2 lead to achromatopsia. The known mutations in this gene all result in truncated proteins. Presumably, then, these proteins are non-functional and, consequently, cone opsin that has been activated by light does not lead to altered cGMP levels or photoreceptor membrane hyperpolarization.
Management
Gene therapy
As achromatopsia is linked to only a few single-gene mutations, it is a good candidate for gene therapy. Gene therapy is a technique for injecting functional genes into the cells that need them, replacing or overruling the original alleles linked to achromatopsia, thereby curing it – at least in part. Achromatopsia has been a focus of gene therapy since 2010, when achromatopsia in dogs was partially cured. Several clinical trials on humans are ongoing with mixed results. In July 2023, a study found positive but limited improvements on congenital CNGA3 achromatopsia.
Eyeborg
Since 2003, a cybernetic device called the eyeborg has allowed people to perceive color through sound waves. This form of sensory substitution maps the hue perceived by a camera worn on the head to a pitch experienced through bone conduction according to a sonochromatic scale. This allows achromats (or even the totally blind) to perceive – or estimate – the color of an object. Achromat and artist Neil Harbisson was the first to use the eyeborg in early 2004, which allowed him to start painting in color. He has since acted as a spokesperson for the technology, namely in a 2012 TED Talk. A 2015 study suggests that achromats who use the eyeborg for several years exhibit neural plasticity, which indicates the sensory substitution has become intuitive for them.
Other accommodations
While gene therapy and the Eyeborg may currently have low uptake with achromats, there are several more practical ways for achromats to manage their condition:
Some colors can be estimated through the use of colored filters. By comparing the luminosity of a color with and without a filter (or between two different filters), the color can be estimated. This is the premise of monocular lenses and the SeeKey. In some US states, achromats can use a red filter while driving to determine the color of a traffic light.
To alleviate photophobia stemming from hemeralopia, dark red or plum colored filters as either sunglasses or tinted contacts are very helpful at decreasing light sensitivity.
To manage the low visual acuity that is typical of achromatopsia, achromats may use telescopic systems, specifically when driving, to increase the resolution of an object of interest.
Epidemiology
Achromatopsia is a relatively uncommon disorder, with a prevalence of 1 in 30,000 people.
However, on the small Micronesian atoll of Pingelap, approximately five percent of the atoll's 3,000 inhabitants are affected. This is the result of a population bottleneck caused by a typhoon and ensuing famine in the 1770s, which killed all but about twenty islanders, including one who was heterozygous for achromatopsia.
The people of this region have termed achromatopsia "maskun", which literally means "not see" in Pingelapese. This unusual population drew neurologist Oliver Sacks to the island, about which he wrote his 1997 book, The Island of the Colorblind.
Blue cone monochromacy
Blue cone monochromacy (BCM) is another genetic condition causing monochromacy. It mimics many of the symptoms of incomplete achromatopsia and before the discovery of its molecular biological basis was commonly referred to as x-linked achromatopsia, sex-linked achromatopsia or atypical achromatopsia. BCM stems from mutations or deletions of the OPN1LW and OPN1MW genes, both on the X chromosome. As a recessive x-linked condition, BCM disproportionately affects males, unlike typical achromatopsia.
Cerebral achromatopsia
Cerebral achromatopsia is a form of acquired color blindness that is caused by damage to the cerebral cortex. Damage is most commonly localized to visual area V4 of the visual cortex (the major part of the color center), which receives information from the parvocellular pathway involved in color processing. It is most frequently caused by physical trauma, hemorrhage or tumor tissue growth. If there is unilateral damage, a loss of color perception in only half of the visual field may result; this is known as hemiachromatopsia. Cerebral achromats usually do not experience the other major symptoms of congenital achromatopsia, since photopic vision still functions.
Color agnosia involves having difficulty recognizing colors, while still being able to perceive them as measured by a color matching or categorizing task.
LaserDisc
The LaserDisc (LD) is a home video format and the first commercial optical disc storage medium, initially licensed, sold and marketed as MCA DiscoVision (also known simply as "DiscoVision") in the United States in 1978. Its diameter typically spans 30 cm (12 in). Unlike most optical-disc standards, LaserDisc is not fully digital, and instead requires the use of analog video signals.
Although the format was capable of offering higher-quality video and audio than its consumer rivals, VHS and Betamax videotape, LaserDisc never managed to gain widespread use in North America. This was largely due to the high cost of the players and their inability to record TV programs. It eventually did gain some traction in that region and became mildly popular in the 1990s. It also saw a modest share of adoption in Australia and several European countries.
By contrast, the format was much more popular in Japan and in the more affluent regions of Southeast Asia, such as Hong Kong, Singapore, and Malaysia, and was the prevalent rental video medium in Hong Kong during the 1990s. Its superior video and audio quality made it a popular choice among videophiles and film enthusiasts during its lifespan. The technologies and concepts behind LaserDisc were the foundation for later optical disc formats, including Compact Disc (CD), DVD, and Blu-ray (BD). LaserDisc players continued to be produced until July 2009, when Pioneer stopped making them.
History
Optical video recording technology, using a transparent disc, was invented by David Paul Gregg and James Russell in 1963 (and patented in 1970 and 1990). The Gregg patents were purchased by MCA in 1968. By 1969, Philips had developed a videodisc in reflective mode, which has advantages over the transparent mode. MCA and Philips then decided to combine their efforts and first publicly demonstrated the videodisc in 1972.
LaserDisc was first available on the market in Atlanta, Georgia, on December 11, 1978, two years after the introduction of the VHS VCR, and four years before the introduction of the CD (which is based on laser disc technology). Initially licensed, sold, and marketed as MCA DiscoVision (also known as simply DiscoVision) in 1978, the technology was previously referred to internally as Optical Videodisc System, Reflective Optical Videodisc, Laser Optical Videodisc, and Disco-Vision (with a hyphen), with the first players referring to the format as Video Long Play.
Pioneer Electronics later purchased the majority stake in the format and marketed it as both LaserVision (format name) and LaserDisc (brand name) in 1980, with some releases unofficially referring to the medium as "Laser Videodisc". Philips produced the players while MCA produced the discs. The Philips–MCA collaboration was unsuccessful and was discontinued after a few years. Several of the scientists responsible for the early research (Richard Wilkinson, Ray Dakin and John Winslow) founded Optical Disc Corporation (now ODC Nimbus).
LaserDisc was launched in Japan in October 1981, and a total of approximately 3.6 million LaserDisc players had been sold before its discontinuation in 2009.
In 1984, Sony offered a LaserDisc format that could store any form of digital data, as a data storage device similar to CD-ROM, with a large 3.28 GB storage capacity, comparable to the later DVD-ROM format.
The first LaserDisc title marketed in North America was the MCA DiscoVision release of Jaws on December 15, 1978. The last title released in North America was Paramount's Bringing Out the Dead on October 3, 2000. Film titles continued to be released in Japan until September 21, 2001, with the last Japanese movie released being the Hong Kong film Tokyo Raiders from Golden Harvest. The last known LD title is Onta Station vol. 1018, a karaoke disc released on March 21, 2007. Production of LaserDisc players ended in July 2009, when Pioneer stopped making them. Pioneer continued to repair and service players until September 30, 2020, when the remaining parts inventory was exhausted.
It was estimated that in 1998, LaserDisc players were in approximately 2% of U.S. households (roughly two million). By comparison, in 1999, players were in 10% of Japanese households. A total of 16.8 million LaserDisc players were sold worldwide, of which 9.5 million were sold by Pioneer.
By 2001, LaserDisc had been completely replaced by DVD in the North American retail marketplace, as media were no longer being produced. Players were still exported to North America from Japan until the end of 2001. The format retains some popularity among "thousands" of American collectors, and to a greater degree in Japan, where the format was better supported and more prevalent during its lifespan. In Europe, LaserDisc always remained an obscure format. It was chosen by the British Broadcasting Corporation (BBC) for the BBC Domesday Project in the mid-1980s, a school-based project to commemorate the 900 years since the original Domesday Book in England. From 1991 until the late 1990s, the BBC also used LaserDisc technology (specifically Sony CRVdisc) to play out their channel idents.
Design
A standard home video LaserDisc is 30 cm (12 in) in diameter and made up of two single-sided aluminum discs layered in plastic. Although similar in appearance to compact discs or DVDs, early LaserDiscs used analog video stored in the composite domain (having a video bandwidth and resolution approximately equivalent to the Type C videotape format) with analog frequency modulation (FM) stereo sound and pulse-code modulation (PCM) digital audio. Later discs used D-2 instead of Type C videotape for mastering.
The LaserDisc at its most fundamental level was still recorded as a series of pits and lands much like CDs, DVDs, and even Blu-ray discs are today. In true digital media, the pits (or their edges) directly represent 1s and 0s of a binary digital information stream. On a LaserDisc, the information is encoded as analog frequency modulation and is contained in the lengths and spacing of the pits. A carrier frequency is modulated by the baseband video signal (and analog soundtracks). In a simplified view, positive parts of this variable frequency signal can produce lands and negative parts can be pits, which results in a projection of the FM signal along the track on the disc. When reading, the FM carrier can be reconstructed from the succession of pit edges, and demodulated to extract the original video signal (in practice, selection between pit and land parts uses intersection of the FM carrier with a horizontal line having an offset from the zero axis, for noise considerations). If PCM sound is present, its waveform, considered as an analog signal, can be added to the FM carrier, which modulates the width of the intersection with the horizontal threshold. As a result, space between pit centers essentially represent video (as frequency), and pit length code for PCM sound information. Early LaserDiscs featured in 1978 were entirely analog but the format evolved to incorporate digital stereo sound in CD format (sometimes with a TOSlink or coax output to feed an external digital-to-analog converter or DAC), and later multi-channel formats such as Dolby Digital and DTS.
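The encoding described above can be sketched in a few lines of Python (a simplified illustration; the carrier frequency, deviation, and sample rate are assumptions, not the actual mastering parameters): a carrier is frequency-modulated by the baseband signal, and slicing the carrier against a threshold decides where pits and lands are cut.

    # Simplified sketch of analog FM encoding onto a pit/land track.
    # All frequencies and the sample rate are illustrative assumptions.
    import math

    SAMPLE_RATE = 80e6     # samples per second along the track (assumed)
    CARRIER_HZ = 8.5e6     # assumed FM carrier frequency
    DEVIATION_HZ = 1.7e6   # assumed peak frequency deviation

    def encode_to_pits(baseband, threshold=0.0):
        """Return a list of booleans: True = land, False = pit (simplified)."""
        phase, track = 0.0, []
        for sample in baseband:                        # each sample in [-1, 1]
            freq = CARRIER_HZ + DEVIATION_HZ * sample  # instantaneous frequency
            phase += 2.0 * math.pi * freq / SAMPLE_RATE
            track.append(math.sin(phase) > threshold)  # slice the FM carrier
        return track

    # Example: a slow ramp standing in for a stretch of baseband video.
    baseband = [-1.0 + 2.0 * i / 799 for i in range(800)]
    track = encode_to_pits(baseband)
    print(f"{sum(track)} land samples, {len(track) - sum(track)} pit samples")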
Since digital encoding and compression schemes were either unavailable or impractical in 1978, three encoding formats based upon the rotation speed were used:
CAV Constant angular velocity or Standard Play discs supported several unique features such as freeze frame, variable slow motion, and reverse. CAV discs were spun at a constant rotational speed (1800 rpm for 525 line and Hi-Vision, and 1500 rpm for 625 line discs) during playback, with one video frame read per revolution. In this mode, 54,000 individual frames (30 minutes of audio/video for NTSC and Hi-Vision, 36 minutes for PAL) could be stored on a single side of a CAV disc; the arithmetic behind these figures is checked in the sketch following this list. Another unique attribute of CAV was to reduce the visibility of crosstalk from adjacent tracks, since on CAV discs any crosstalk at a specific point in a frame is simply from the same point in the next or previous frame. CAV was used less frequently than CLV, and reserved for special editions of feature films to highlight bonus material and special effects. One of the most intriguing advantages of this format was the ability to reference every frame of a film directly by number, a feature of particular interest to film buffs, students and others intrigued by the study of errors in staging, continuity and so on.
CLV Constant linear velocity or Extended Play discs did not have the "trick play" features of CAV, offering only simple playback on all but the high-end LaserDisc players incorporating a digital frame store. These high-end LaserDisc players could add features not normally available to CLV discs, such as variable forward and reverse, and a VCR-like "pause". By gradually slowing down their rotational speed (1800–600 rpm for NTSC and 2470–935 rpm for Hi-Vision) CLV encoded discs could store 60 minutes of audio/video per side for NTSC and Hi-Vision (64 minutes for PAL), or two hours per disc. For films with a run-time less than 120 minutes, this meant they could fit on one disc, lowering the cost of the title and eliminating the distracting exercise of "getting up to change the disc", at least for those who owned a dual-sided player. The majority of titles were only available in CLV (a few titles were released partly CLV, partly CAV. For example, a 140-minute movie could fit on two CLV sides and one CAV side, thus allowing for the CAV-only features during the climax of the film).
CAA In the early 1980s, due to problems with crosstalk distortion on CLV extended play LaserDiscs, Pioneer Video introduced constant angular acceleration (CAA) formatting for extended play discs. CAA is very similar to CLV, save for the fact that CAA varies the angular rotation of the disc in controlled steps instead of gradually slowing down in a steady linear pace as a CLV disc is read. With the exception of 3M/Imation, all LaserDisc manufacturers adopted the CAA encoding scheme, although the term was rarely (if ever) used on any consumer packaging. CAA encoding noticeably improved picture quality and greatly reduced crosstalk and other tracking problems while being fully compatible with existing players.
As Pioneer introduced digital audio to LaserDisc in 1985, it further refined the CAA format. CAA55 was introduced in 1985 with a total playback capacity per side of 55 minutes 5 seconds, reducing the video capacity to resolve bandwidth issues with the inclusion of digital audio. Several titles released between 1985 and 1987 were analog audio only due to the length of the title and the desire to keep the film on one disc (e.g., Back to the Future). By 1987, Pioneer had overcome the technical challenges and was able to once again encode in CAA60, allowing a total of 60 minutes 5 seconds. Pioneer further refined CAA, offering CAA45, encoding 45 minutes of material, but filling the entire playback surface of the side. Used on only a handful of titles, CAA65 offered 65 minutes 5 seconds of playback time per side. There were a handful of titles pressed by Technidisc that used CAA50. The final variant of CAA was CAA70, which could accommodate 70 minutes of playback time per side. There are no known uses of this format on the consumer market.
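The CAV capacity figures quoted in the list above follow from simple arithmetic, since exactly one video frame is stored per revolution; a minimal sketch using the rotation speeds and frame rates given there:

    # Check of the CAV capacity figures: one video frame per disc revolution.
    def cav_side(rpm, fps, frames_per_side=54_000):
        minutes_of_spinning = frames_per_side / rpm   # one frame per revolution
        minutes_of_video = frames_per_side / fps / 60
        return minutes_of_spinning, minutes_of_video

    for label, rpm, fps in (("NTSC", 1800, 30), ("PAL", 1500, 25)):
        spin_min, video_min = cav_side(rpm, fps)
        print(f"{label}: {spin_min:.0f} min of spinning = {video_min:.0f} min of video")
    # As expected: 30 minutes per side for NTSC, 36 minutes per side for PAL.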
Audio
Sound could be stored in either analog or digital format and in a variety of surround sound formats; NTSC discs could carry a stereo analog audio track, plus a stereo CD-quality uncompressed PCM digital audio track (EFM, CIRC, 16-bit, 44.1 kHz sample rate). PAL discs could carry one pair of audio tracks, either analog or digital, and the digital tracks on a PAL disc were 16-bit, 44.1 kHz as on a CD; in the UK, the term "LaserVision" is used to refer to discs with analog sound, while "LaserDisc" is used for those with digital audio. The digital sound signal in both formats is EFM-encoded, as in CD.
Dolby Digital (also called AC-3) and DTS, which are now common on DVD releases, first became available on LaserDisc, and Star Wars: Episode I – The Phantom Menace (1999) which was released on LaserDisc in Japan, was among the first home video releases ever to include 6.1 channel Dolby Digital EX Surround (along with a few other late-life releases from 1999 to 2001). Unlike DVDs, which carry Dolby Digital audio in digital form, LaserDiscs stored Dolby Digital in a frequency modulated form within a track normally used for analog audio. Extracting Dolby Digital from a LaserDisc required a player equipped with a special "AC-3 RF" output and an external demodulator in addition to an AC-3 decoder. The demodulator was necessary to convert the 2.88 MHz modulated AC-3 information on the disc into a 384 kbit/s signal that the decoder could handle.
In the mid to late 1990s, many higher-end AV receivers included the demodulator circuit specifically for the LaserDisc player's RF-modulated Dolby Digital AC-3 signal. By the late 1990s, with LaserDisc players and disc sales declining due to DVD's growing popularity, the AV receiver manufacturers removed the demodulator circuit. Although DVD players were capable of playing Dolby Digital tracks, the signals out of DVD players were not in a modulated form and were not compatible with the inputs designed for LaserDisc AC-3. Outboard demodulators were available for a period that converted the AC-3 signal to the standard Dolby Digital signal that was compatible with the standard Dolby Digital/PCM inputs on capable AV receivers. Another type marketed by Onkyo and Marantz converted the RF AC-3 signal to 6-channel analog audio.
The two FM audio channels occupied the disc spectrum at 2.3 and 2.8 MHz on NTSC formatted discs and each channel had a 100 kHz FM deviation. The FM audio carrier frequencies were chosen to minimize their visibility in the video image, so that even with a poorly mastered disc, audio carrier beats in the video would be at least ‑35 dB down, and thus, invisible. Due to the frequencies chosen, the 2.8 MHz audio carrier (Right Channel) and the lower edge of the chroma signal were very close together, and if filters were not carefully set during mastering, there could be interference between the two. In addition, high audio levels combined with high chroma levels could cause mutual interference, leading to beats becoming visible in highly saturated areas of the image. To help deal with this, Pioneer decided to implement the CX Noise Reduction System on the analog tracks. By reducing the dynamic range and peak levels of the audio signals stored on the disc, filtering requirements were relaxed and visible beats greatly reduced or eliminated. The CX system gives a total NR effect of 20 dB, but in the interest of better compatibility for non-decoded playback, Pioneer reduced this to only 14 dB of noise reduction (the RCA CED system used the "original" 20 dB CX system). This also relaxed calibration tolerances in players and helped reduce audible pumping if the CX decoder was not calibrated correctly.
At least where the digital audio tracks were concerned, the sound quality was unsurpassed at the time compared to consumer videotape. However, the quality of the analog soundtracks could vary greatly depending upon the disc and, sometimes, the player. Many early and lower-end LaserDisc players had poor analog audio components, and in turn, many early discs had poorly mastered analog audio tracks, making digital soundtracks in any form more desirable to serious enthusiasts. Early DiscoVision and LaserDisc titles lacked the digital audio option, but many of those movies received digital sound in later re-issues by Universal, and the quality of analog audio tracks generally improved greatly as time went on. Many discs that had originally carried old analog stereo tracks received new Dolby Stereo and Dolby Surround tracks instead often in addition to digital tracks, which helped boost sound quality. Later analog discs also applied CX noise reduction, which improved the signal-to-noise ratio of the audio.
DTS audio, when available on a disc, replaced the digital audio tracks; hearing DTS-encoded audio required only an S/PDIF compliant digital connection to a DTS decoder.
On a DTS disc, digital PCM audio was not available, so if a DTS decoder was also not available, the only option was to fall back to the analog Dolby Surround or stereo audio tracks. In some cases, the analog audio tracks were further made unavailable through replacement with supplementary audio such as isolated scores or audio commentary. This effectively reduced playback of a DTS disc on a non-DTS equipped system to mono audio, or in a handful of cases, no film soundtrack at all.
Only one 5.1 surround sound option existed on a given LaserDisc (either Dolby Digital or DTS). As such, if surround sound was desired, the disc must be matched to the capabilities of the playback equipment (LaserDisc player and receiver/decoder) by the purchaser. A fully capable LaserDisc playback system included a newer LaserDisc player that was capable of playing digital tracks; had a digital optical output for digital PCM and DTS encoded audio; was aware of AC-3 audio tracks; and had an AC-3 coaxial output, an external or internal AC-3 RF demodulator and AC-3 decoder, and a DTS decoder. Many 1990s A/V receivers combined the AC-3 decoder and DTS decoder logic, but an integrated AC-3 demodulator was rare both in LaserDisc players and in later A/V receivers.
PAL LaserDiscs have a slightly longer playing time than NTSC discs, but have fewer audio options. PAL discs only have two audio tracks, consisting of either two analog-only tracks on older PAL LaserDiscs, or two digital-only tracks on newer discs. In comparison, later NTSC LaserDiscs are capable of carrying four tracks (two analog and two digital). On certain releases, one of the analog tracks is used to carry a modulated AC-3 signal for 5.1 channel audio (for decoding and playback by newer LaserDisc players with an "AC-3 RF" output). Older NTSC LaserDiscs made before 1984 (such as the original DiscoVision discs) only have two analog audio tracks.
LaserDisc players
The earliest players employed gas helium–neon laser tubes to read discs and had a red-orange light with a wavelength of 632.8 nm, while later solid-state players used infrared semiconductor laser diodes with a wavelength of 780 nm.
In March 1984, Pioneer introduced the first consumer player with a solid-state laser, the LD-700. It was also the first LaserDisc player to load from the front and not the top. One year earlier, Hitachi introduced an expensive industrial player with a laser diode, but the player had poor picture quality (due to an inadequate dropout compensator), and was made only in limited quantities. After Pioneer released the LD-700, gas lasers were no longer used in consumer players, despite their advantages, although Philips continued to use gas lasers in their industrial units until 1985.
Most LaserDisc players required the user to manually turn the disc over to play the other side. A number of players (all diode laser based) were made that were capable of playing both sides of the disc automatically, using a mechanism to physically flip a single laser pickup.
Pioneer produced some multi-disc models which held more than 50 LaserDiscs. For a short time in 1984, one company offered a "LaserStack" unit that added multi-disc capability to existing players: the Pioneer LD-600, LD-1100, or the Sylvania/Magnavox clones. It required the user to physically remove the player lid for installation, where it then attached to the top of the player. LaserStack held up to 10 discs and could automatically load or remove them from the player or change sides in around 15 seconds.
The first mass-produced industrial LaserDisc player was the MCA DiscoVision PR-7820, later rebranded the Pioneer PR7820. In North America, this unit was used in many General Motors dealerships as a source of training videos and presentation of GM's new line of cars and trucks in the late 1970s and early 1980s.
Most players made after the mid-1980s were capable of also playing Compact Discs. These players included a 4.7 in (12 cm) indentation in the loading tray, where the CD was placed for playback. At least two Pioneer models (the CLD-M301 and the CLD-M90) also operated as a CD changer, with several 4.7 in indentations around the circumference of the main tray.
The Pioneer DVL-9, introduced in 1996, was both Pioneer's first consumer DVD player and the first combination DVD/LD player.
The first high-definition video player was the Pioneer HLD-X0. A later model, the HLD-X9, featured a superior comb filter, and laser diodes on both sides of the disc.
Notable players
Pioneer PR7820, first industrial LaserDisc player, capable of being controlled by an external computer, was used in the first US LaserDisc arcade game Dragon's Lair.
Pioneer CLD-900, first combination player capable of reading Compact Discs. Released in 1985.
Pioneer CLD-1010, first player capable of playing CD-Video discs. Released in 1987.
Pioneer LaserActive players: The Pioneer CLD-A100 and NEC PCE-LD1 provided the ability to play Sega Genesis (Mega Drive) and TurboGrafx16 (PC Engine) video games when used in conjunction with additional components.
Pioneer DVL series, capable of playing both LaserDiscs and DVDs
Branding
During its development, MCA (which co-owned the technology), referred to it as the Optical Videodisc System, "Reflective Optical Videodisc" or "Laser Optical Videodisc", depending on the document. They changed the name once in 1969 to Disco-Vision and then again in 1978 to DiscoVision (without the hyphen), which became the official spelling. Technical documents and brochures produced by MCA Disco-Vision during the early and mid-'70s also used the term "Disco-Vision Records" to refer to the pressed discs. MCA owned the rights to the largest catalog of films in the world during this time, and they manufactured and distributed the DiscoVision releases of those films under the "MCA DiscoVision" software and manufacturing label; consumer sale of those titles began on December 11, 1978, with the aforementioned Jaws.
Philips' preferred name for the format was "VLP", after the Dutch words Video Langspeel-Plaat ("Video long-play disc"), which in English-speaking countries stood for Video Long-Play. The first consumer player, the Magnavox VH-8000, even had the VLP logo on the player. For a while in the early and mid-1970s, Philips also discussed a compatible audio-only format they called "ALP", but that was soon dropped as the Compact Disc system became a non-compatible project in the Philips corporation. Until early 1980, the format had no "official" name. The LaserVision Association, made up of MCA, Universal-Pioneer, IBM, and Philips/Magnavox, was formed to standardize the technical specifications of the format (which had been causing problems for the consumer market) and finally named the system officially as "LaserVision".
After its introduction in Japan in 1981, the format was introduced in Europe in 1983 with the LaserVision name, although Philips used "VLP" in model designations, such as VLP-600. Following lackluster sales there (around 12–15,000 units Europe-wide), Philips tried relaunching the entire format as "CD-Video" in 1987, with the name appearing not just on the new hybrid 12 cm discs, but also on standard 20 and 30 cm LaserDiscs with digital audio. While this name and logo appeared on players and labels for years, the "official" name of the format remained LaserVision. In the early 1990s, the format's name was changed again to LaserDisc.
Pioneer
Pioneer Electronics also entered the optical disc market in 1977 as a 50/50 joint venture with MCA called Universal-Pioneer and manufacturing MCA-designed industrial players under the MCA DiscoVision name (the PR-7800 and PR-7820). For the 1980 launch of the first Universal-Pioneer player, the VP-1000 was noted as a "laser disc player", although the "LaserDisc" logo was displayed clearly on the device. In 1981, "LaserDisc" was used exclusively for the medium itself, although the official name was "LaserVision" (as seen at the beginning of many LaserDisc releases, just before the start of the film). Pioneer reminded numerous video magazines and stores in 1984 that LaserDisc was a trademarked word, standing only for LaserVision products manufactured for sale by Pioneer Video or Pioneer Electronics. A 1984 Ray Charles ad for the LD-700 player bore the term "Pioneer LaserDisc brand videodisc player". From 1981 until the early 1990s, all properly licensed discs carried the LaserVision name and logo, even Pioneer Artists titles.
On single-sided LaserDiscs mastered by Pioneer, playing the wrong side would cause a still screen to appear with a happy, upside-down turtle that has a LaserDisc for a belly (nicknamed the "LaserDisc Turtle"). The words "Program material is recorded on the other side of this disc" are below the turtle.
MCA
During the early years, MCA also manufactured discs for other companies including Paramount, Disney and Warner Bros. Some of them added their own names to the disc jacket to signify that the movie was not owned by MCA. After DiscoVision Associates shut down in early 1982, Universal Studio's videodisc software label (called MCA Videodisc until 1984), began reissuing many DiscoVision titles. Unfortunately, quite a few, such as Battlestar Galactica and Jaws, were time-compressed versions of their CAV or CLV DiscoVision originals. The time-compressed CLV re-issue of Jaws no longer had the original soundtrack, having had incidental background music replaced for the videodisc version due to high licensing costs (the original music would not be available until the THX LaserDisc box set was released in 1995). One Universal/Columbia co-production issued by MCA Disco Vision in both CAV and CLV versions, The Electric Horseman, is still not available in any other home video format with its original score intact; even the most recent DVD release has had substantial music replacement of both instrumental score and Willie Nelson's songs. An MCA release of Universal's Howard the Duck shows only the start credits shown in widescreen before changing to 4:3 for the rest of the film. For many years, this was the only disc-based release of the film, until widescreen DVD formats were released with extras. Also, the 1989 and 1996 LaserDisc releases of E.T. the Extra-Terrestrial are the only formats to include the cut scene of Harrison Ford, in the role of the school principal, telling off Elliott for letting the frogs free in the biology class.
Comparison with other formats
VHS
LaserDisc had several advantages over VHS. It featured a far sharper picture with a horizontal resolution of 425 television lines (TVL) for NTSC and 440 TVL for PAL discs, while VHS featured only 240 TVL with NTSC. Super VHS, released in 1987, reduced the quality gap, having horizontal luma resolution comparable to LaserDisc. But horizontal chroma resolution of Super VHS remained as low as that of standard VHS, about 40 TVL, while LaserDisc offered about 70 TVL of chroma resolution.
LaserDisc could handle analog and digital audio where VHS was mostly analog only (VHS could have PCM audio in professional applications but it was uncommon), and the NTSC discs could store multiple audio tracks. This allowed for extras such as director's commentary tracks and other features to be added onto a film, creating "Special Edition" releases that would not have been possible with VHS. Disc access was random and chapter-based, like the DVD format, meaning that one could jump to any point on a given disc very quickly. By comparison, VHS would require tedious rewinding and fast-forwarding to get to specific points.
Initially, LaserDiscs were cheaper than videocassettes to manufacture, because they lacked the moving parts and plastic outer shell which were necessary for VHS tapes to work, and the duplication process was much simpler. A VHS cassette had at least 14 parts (including the actual tape) while LaserDisc had one part with five or six layers. A disc could be stamped out in a matter of seconds, whereas duplicating videotape required a complex bulk tape duplication mechanism and was a time-consuming process. By the end of the 1980s, average disc-pressing prices were over $5.00 per two-sided disc, due to the large amount of plastic material and the costly glass-mastering process needed to make the metal stamper mechanisms. Due to the larger volume of demand, videocassettes quickly became much cheaper to duplicate, costing as little as $1.00 by the beginning of the 1990s.
LaserDiscs potentially had a much longer lifespan than videocassettes. Because the discs were read optically instead of magnetically, no physical contact needed to be made between the player and the disc, except for the player's clamp that holds the disc at its center as it is spun and read. As a result, playback would not wear the information-bearing part of the discs, and properly manufactured LaserDiscs could theoretically last beyond a lifetime. By contrast, a VHS tape held all of its picture and sound information on the tape in a magnetic coating which was in contact with the spinning heads on the head drum, causing progressive wear with each use (though later in VHS's lifespan, engineering improvements allowed tapes to be made and played back without contact). The tape was also thin and delicate, and it was easy for a player mechanism, especially on a low quality or malfunctioning model, to mishandle the tape and damage it by creasing it, frilling (stretching) its edges, or even breaking it.
DVD
By the advent of DVD, LaserDisc had declined considerably in popularity, so the two formats never directly competed with each other.
LaserDisc was a composite video format: the luminance (black and white) and chrominance (color) information were transmitted in one signal, separated by the receiver. While good comb filters could separate the signals adequately, the two signals could not be completely separated. On DVD-Video, images are stored in the YCbCr format, with the chroma information being entirely discrete, which results in far higher fidelity, particularly at strong color borders or regions of high detail (especially if there is moderate movement in the picture) and low-contrast details such as skin tones, where comb filters almost inevitably smudge some detail.
In contrast to the entirely digital DVD, LaserDiscs used only analog video. As the LaserDisc format was not digitally encoded and did not make use of compression techniques, it was immune to video macroblocking (most visible as blockiness during high motion sequences) or contrast banding (subtle visible lines in gradient areas, such as out-of-focus backgrounds, skies, or light casts from spotlights) which could be caused by the MPEG-2 encoding process as video is prepared for DVD. Early DVD releases held the potential to surpass their LaserDisc counterparts, but often managed only to match them for image quality, and in some cases, the LaserDisc version was preferred. Proprietary human-assisted encoders manually operated by specialists could vastly reduce the incidence of artifacts, depending on playing time and image complexity. By the end of LaserDisc's run, DVDs were living up to their potential as a superior format.
DVDs use compressed audio formats such as Dolby Digital and DTS for multichannel sound. Most LaserDiscs were encoded with stereo (often Dolby Surround) CD-quality 16-bit/44.1 kHz audio tracks as well as analog audio tracks.
DTS-encoded LaserDiscs have DTS soundtracks of 1,235 kbit/s instead of the reduced bitrate of 768 kbit/s commonly employed on DVDs with optional DTS audio.
Advantages
LaserDisc players could provide a greater degree of control over the playback process. Unlike many DVD players, the transport mechanism always obeyed commands from the user: pause, fast-forward, and fast-reverse commands were always accepted (barring malfunctions). There were no "User Prohibited Options" where content protection code instructed the player to refuse commands to skip a specific part (such as fast forwarding through copyright warnings). (Some DVD players, particularly higher-end units, do have the ability to ignore the blocking code and play the video without restrictions, but this feature is not common in the usual consumer market.)
With CAV LaserDiscs, the user could jump directly to any individual frame of a video simply by entering the frame number on the remote keypad, a feature not common among DVD players. Some DVD players have a cache feature, which stores a certain amount of the video in RAM, which allows the player to index a DVD as quickly as an LD, even down to the frame in some players.
Damaged spots on a LaserDisc could be played through or skipped over, while a DVD will often become unplayable past the damage. Some newer DVD players feature a repair+skip algorithm, which alleviates this problem by continuing to play the disc, filling in unreadable areas of the picture with blank space or a frozen frame of the last readable image and sound. The success of this feature depends upon the amount of damage. LaserDisc players, when working in full analog, recover from such errors faster than DVD players.
Similar to the CD versus LP sound quality debates common in the audiophile community, some videophiles argue that LaserDisc maintains a "smoother", more "film-like", natural image while DVD still looks slightly more artificial. Early DVD demo discs often had compression or encoding problems, lending additional support to such claims at the time. The video signal-to-noise ratio and bandwidth of LaserDisc are substantially less than those of DVDs, making DVDs appear sharper and clearer to most viewers.
Another advantage, at least to some consumers, was the fact that any sort of anti-piracy technology was purely optional. It was claimed that Macrovision's Copyguard protection could not be applied to LaserDisc, due to the format's design. The vertical blanking interval, where the Macrovision signal would be implemented, was used for timecode and frame coding as well as player control codes on LaserDisc players. Due to the format's relatively small market share, there was never a push to redesign it despite the obvious potential for piracy; the industry instead engineered copy protection into the DVD specification.
LaserDisc's support for multiple audio tracks allowed for vast supplemental materials to be included on-disc and made it the first available format for "Special Edition" releases; the 1984 Criterion Collection edition of Citizen Kane is generally credited as being the first "Special Edition" release to home video (King Kong being the first release to have an audio commentary track included), and for setting the standard by which future "Special Edition" discs were measured. The disc provided interviews, commentary tracks, documentaries, still photographs, and other features for historians and collectors.
Disadvantages
Despite the advantages over competing technology at the time (namely VHS and Betamax), the discs were heavy and cumbersome, were more prone than a VHS tape to damage if mishandled, and manufacturers did not market LaserDisc units with recording capabilities to consumers. Also, because of their size, greater mechanical effort was required to spin the discs at the proper speed, resulting in much more noise than with other media.
The space-consuming analog video signal of a LaserDisc limited playback duration to 30/36 minutes (CAV NTSC/PAL) or 60/64 minutes (CLV NTSC/PAL) per side, because hardware manufacturers refused to reduce line count and bandwidth for increased playtime (as was done with VHS; VHS tapes had a 3 MHz video bandwidth, while LaserDisc preserved the full 6 MHz bandwidth and resolution used in NTSC broadcasts). After one side finished playing, a disc had to be flipped over to continue watching a movie, and some titles filled two or more discs, depending on the film's runtime and whether or not special features were included. Many players, especially units built after the mid-1980s, could "flip" discs automatically (by rotating the optical pickup to the other side of the disc), but this was accompanied by a pause in the movie during the side change.
In the event the movie was longer than what could be stored on two sides of a single disc, manually swapping to a second disc was required at some point during the film (one exception to this rule was the Pioneer LD-W1, which featured the ability to load two discs and to play each side of one disc and then to switch to playing each side of the other disc). In addition, perfect still frames and random access to individual still frames was limited only to the more expensive CAV discs, which only had a playing time of approximately 30 minutes per side. In later years, Pioneer and other manufacturers overcame this limitation by incorporating a digital memory buffer, which "grabbed" a single field or frame from a CLV disc.
The analog information encoded onto LaserDiscs also did not include any form of built-in checksum or error correction. Because of this, slight dust and scratches on the disc surface could result in read errors which caused various video quality problems: glitches, streaks, bursts of static, or momentary picture interruptions. In contrast, the digital MPEG-2 format information used on DVDs has built-in error correction which ensures that the signal from a damaged disc will remain identical to that from a perfect disc right up until the damage to the disc surface prevents the laser from being able to identify usable data.
In addition, LaserDisc videos sometimes exhibited a problem known as "crosstalk". The issue could arise when the laser optical pickup assembly within the player was out of alignment or because the disc was damaged or excessively warped. But it could also occur even with a properly functioning player and a factory-new disc, depending on electrical and mechanical alignment problems. In these instances, the issue arose due to the fact that CLV discs required subtle changes in rotating speed at various points during playback. During a change in speed, the optical pickup inside the player might read video information from a track adjacent to the intended one, causing data from the two tracks to "cross"; the extra video information picked up from that second track shows up as distortion in the picture which looks reminiscent of swirling "barber poles" or rolling lines of static.
Assuming the player's optical pickup was in proper working order, crosstalk distortion normally did not occur during playback of CAV-format LaserDiscs, as the rotational speed never varied. If the player calibration was out of order, or if the CAV disc was faulty or damaged, other problems affecting tracking accuracy could occur. One such problem was "laser lock", where the player read the same two fields for a given frame over and over, causing the picture to look frozen as if the movie were paused.
Another significant issue unique to LaserDisc involved the inconsistency of playback quality between different makers and models of player. On the majority of televisions, a given DVD player will produce a picture that is visually indistinguishable from other units; differences in image quality between players only become easily apparent on larger televisions, and substantial leaps in image quality are generally only obtained with expensive, high-end players that allow for post-processing of the MPEG-2 stream during playback.
In contrast, LaserDisc playback quality was highly dependent on hardware quality, and major variances in picture quality appeared between different makers and models of LaserDisc players, even when tested on low- to mid-range televisions. The obvious benefits of using high-quality equipment helped keep demand for some players high, while also keeping pricing for those units comparably high: in the 1990s, notable players sold for anywhere from US$200 to well over $1,000, while older and less desirable players could be purchased in working condition for as little as $25.
Laser rot
Many early LaserDiscs were not manufactured properly. The adhesive that was used contained impurities which were able to penetrate the lacquer seal layer and chemically attack the metalized reflective aluminum layer, altering its reflective characteristics. This, in turn, deteriorated the recorded signal. This was a problem that was termed "laser rot" among LaserDisc enthusiasts (also called "color flash" internally by LaserDisc pressing plants). Some forms of laser rot could appear as black spots that looked like mold or burned plastic which caused the disc to skip and the video to exhibit excessive speckling noise. But, for the most part, rotted discs could actually appear perfectly fine to the naked eye.
Later optical standards have also been known to suffer similar problems, including a notorious batch of defective CDs manufactured by Philips-DuPont Optical at their Blackburn, Lancashire facility in England during the late 1980s/early 1990s.
Impact and decline
LaserDisc did not have high market penetration in North America due to the high cost of the players and discs (which were far more expensive than VHS players and tapes), and due to marketplace confusion with the technologically inferior CED, which also went by the name Videodisc. While the format was not widely adopted by North American consumers, it was received well among videophiles due to the superior audio and video quality compared to VHS and Betamax tapes, thus finding a place in nearly one million American homes by the end of 1990. The format was more popular in Japan than in North America because prices were kept low to ensure adoption, resulting in minimal price differences between VHS tapes and the higher quality LaserDiscs, which helped ensure that it quickly became the dominant consumer video format in Japan. Anime collectors in every country in which the LaserDisc format was released (which included both North America and Japan) also quickly became familiar with this format, and sought the higher video and sound quality of LaserDisc and the availability of numerous titles not available on VHS. (They were also encouraged by Pioneer's in-house production of anime which made titles specifically with the format in mind.) LaserDiscs were also popular alternatives to videocassettes among movie enthusiasts in the more affluent regions of South East Asia, such as Singapore, due to their high integration with the Japanese export market and the disc-based media's superior longevity compared to videocassette, especially in the humid conditions endemic to that area of the world.
The format also became quite popular in Hong Kong during the 1990s before the introduction of VCDs and DVD. While people rarely bought the discs (because each LaserDisc was priced around US$100), high rental activity helped the video rental business in the city grow larger than it had ever been previously. Due to integration with the Japanese export market, NTSC LaserDiscs were used in the Hong Kong market, in contrast to the PAL standard used for broadcast (this anomaly also exists for DVD). This created a market for multi-system TVs and multi-system VCRs which could display or play both PAL and NTSC materials in addition to SECAM materials (which were never popular in Hong Kong). Some LaserDisc players could convert NTSC signals to PAL during playback so that TVs used in Hong Kong could display the LaserDisc materials.
Despite the relative popularity, manufacturers refused to market recordable LaserDisc devices on the consumer market, even though the competing VCR devices could record onto cassette. This had a negative impact on sales worldwide. The inconvenient disc size, the high cost of both the players and the media and the inability to record onto the discs combined to take a serious toll on sales, and contributed to the format's poor adoption figures.
Although the LaserDisc format was supplanted by DVD by the late 1990s, many LaserDisc titles are still highly coveted by movie enthusiasts (for example, Disney's Song of the South which is unavailable in the US in any format, but was issued in Japan on LaserDisc.) This is largely because there are many films that are still only available on LaserDisc and many other LaserDisc releases contain supplementary material not available on subsequent DVD versions of those films. Until the end of 2001, many titles were released on VHS, LaserDisc, and DVD in Japan.
Further developments and applications
Computer control
In the early 1980s, Philips produced a LaserDisc player model adapted for a computer interface, dubbed "professional." In 1985, Jasmine Multimedia created LaserDisc jukeboxes featuring music videos from Michael Jackson, Duran Duran, and Cyndi Lauper. When connected to a PC this combination could be used to display images or information for educational or archival purposes, for example, thousands of scanned medieval manuscripts. This strange device could be considered a very early equivalent of a CD-ROM.
In the mid-1980s Lucasfilm pioneered the EditDroid non-linear editing system for film and television based on computer-controlled LaserDisc players. Instead of printing dailies out on film, processed negatives from the day's shoot would be sent to a mastering plant to be assembled from their 10-minute camera elements into 20-minute film segments. These were then mastered onto single-sided blank LaserDiscs, just as a DVD would be burnt at home today, allowing for much easier selection and preparation of an edit decision list (EDL). In the days before video assist was available in cinematography, this was the only other way a film crew could see their work. The EDL went to the negative cutter who then cut the camera negative accordingly and assembled the finished film. Only 24 EditDroid systems were ever built, even though the ideas and technology are still in use today. Later EditDroid experiments borrowed from hard-drive technology of having multiple discs on the same spindle and added numerous playback heads and numerous electronics to the basic jukebox design so that any point on each of the discs would be accessible within seconds. This eliminated the need for racks and racks of industrial LaserDisc players since EditDroid discs were only single-sided.
In 1986, a SCSI-equipped LaserDisc player attached to a BBC Master computer was used for the BBC Domesday Project. The player was referred to as an LV-ROM (LaserVision Read Only Memory), as the discs contained the driving software as well as the video frames. The discs used the CAV format, and encoded data as a binary signal represented by the analog audio recording. Each CAV frame could contain either video/audio or video/binary data, but not both. "Data" frames would appear blank when played as video. It was typical for each disc to start with the disc catalog (a few blank frames) then the video introduction before the rest of the data. Because the format (based on the ADFS hard disc format) used a starting sector for each file, the data layout effectively skipped over any video frames. If all 54,000 frames are used for data storage, an LV-ROM disc can contain 324 MB of data per side. The Domesday Project systems also included a genlock, allowing video frames, clips and audio to be mixed with graphics originated from the BBC Master; this was used to great effect for displaying high-resolution photographs and maps, which could then be zoomed into.
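As a rough sanity check of the figures quoted above (an illustration, not part of the original specification), dividing the stated per-side capacity by the frame count implies a payload of about 6 kB per CAV frame:

```python
# Back-of-envelope check of the LV-ROM capacity figures quoted above.
# Assumes the stated 54,000 CAV frames per side and 324 MB per side
# (decimal megabytes); the ADFS-based sector layout is not modelled here.
frames_per_side = 54_000
capacity_bytes = 324 * 1_000_000

bytes_per_frame = capacity_bytes / frames_per_side
print(f"Implied payload per CAV frame: {bytes_per_frame:.0f} bytes")  # ~6000
```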
During the 1980s in the United States, Digital Equipment Corporation developed the standalone PC control IVIS (Interactive VideoDisc Information System) for training and education. One of the most influential programs developed at DEC was Decision Point, a management gaming simulation, which won the Nebraska Video Disc Award for Best of Show in 1985.
Apple's HyperCard scripting language provided Macintosh computer users with a means to design databases of slides, animation, video and sounds from LaserDiscs and then to create interfaces for users to play specific content from the disc through software called LaserStacks. User-created "stacks" were shared and were especially popular in education where teacher-generated stacks were used to access discs ranging from art collections to basic biological processes. Commercially available stacks were also popular with the Voyager company being possibly the most successful distributor.
Commodore International's 1992 multimedia presentation system for the Amiga, AmigaVision, included device drivers for controlling a number of LaserDisc players through a serial port. Coupled with the Amiga's ability to use a Genlock, this allowed for the LaserDisc video to be overlaid with computer graphics and integrated into presentations and multimedia displays, years before such practice was commonplace.
Pioneer also made computer-controlled units such as the LD-V2000. It had a back-panel RS-232 serial connection through a five-pin DIN connector, and no front-panel controls except Open/Close. (The disc would be played automatically upon insertion.)
Under contract from the U.S. military, Matrox produced a combination computer/LaserDisc player for instructional purposes. The computer was a 286; the LaserDisc player was capable of reading only the analog audio tracks. The combined unit was heavy enough that sturdy handles were provided in case two people were required to lift it. The computer controlled the player via a 25-pin serial port at the back of the player and a ribbon cable connected to a proprietary port on the motherboard. Many of these were sold as surplus by the military during the 1990s, often without the controller software. Nevertheless, it is possible to control the unit by removing the ribbon cable and connecting a serial cable directly from the computer's serial port to the port on the LaserDisc player.
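As a loose illustration of this style of serial control, a modern host could drive such a player with pySerial. The port name, baud rate, and the two-letter command mnemonic below are assumptions for the sketch, not the documented protocol of the Matrox unit or of any specific Pioneer model:

```python
# Hedged sketch of RS-232 control of an industrial LaserDisc player.
# The port, baud rate, and command string ("PL" for play) are illustrative
# assumptions; consult the player's manual for the real protocol.
import serial  # pySerial

with serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1) as player:
    player.write(b"PL\r")     # hypothetical "play" command
    ack = player.readline()   # many industrial players return an acknowledgement
    print(ack)
```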
Video games
The format's instant-access capability made possible a new breed of LaserDisc-based arcade video games. Several companies saw potential in using LaserDiscs for video games in the 1980s and 1990s, beginning in 1983 with Sega's Astron Belt. Cinematronics and American Laser Games produced elaborate arcade games that used the random-access features to create interactive movies such as Dragon's Lair and Space Ace. Similarly, the Pioneer LaserActive and Halcyon were introduced as home video game consoles that used LaserDisc media for their software.
Hi-Vision LD
In 1991, several manufacturers announced specifications for what would become known as Hi-Vision LD; almost 15 years would pass before the feats of this HD analog optical disc system were duplicated digitally by HD DVD and Blu-ray Disc. Encoded using NHK's MUSE "Hi-Vision" analog HDTV system, MUSE discs would operate like standard LaserDiscs but would contain high-definition 1,125-line (1,035 visible lines; Sony HDVS) video with a 16:9 aspect ratio. The MUSE players were also capable of playing standard NTSC format discs and were superior in performance to non-MUSE players even with these NTSC discs. The MUSE-capable players had several noteworthy advantages over standard LaserDisc players, including a red laser with a much shorter wavelength than the lasers found in standard players. The red laser was capable of reading through disc defects such as scratches and even mild disc rot that would cause most other players to stop, stutter or drop out. Crosstalk was not an issue with MUSE discs, and the shorter wavelength of the laser allowed for the virtual elimination of crosstalk with normal discs.
To view MUSE-encoded discs, it was necessary to have a MUSE decoder in addition to a compatible player. There were televisions with MUSE decoding built in and set-top tuners with decoders that could provide the proper MUSE input. Equipment prices were high, especially for early HDTVs, which generally exceeded US$10,000, and even in Japan the market for MUSE was tiny. Players and discs were never officially sold in North America, although several distributors imported MUSE discs along with other import titles. Terminator 2: Judgment Day, Lawrence of Arabia, A League of Their Own, Bugsy, Close Encounters of the Third Kind, Bram Stoker's Dracula and Chaplin were among the theatrical releases available on MUSE LDs. Several documentaries, including one about Formula One at Japan's Suzuka Circuit, were also released.
LaserDisc players and LaserDiscs that worked with the competing European HD-MAC HDTV standard were also made.
Picture discs
Picture discs have artistic etching on one side of the disc to make the disc more visually attractive than the standard shiny silver surface. This etching might look like a movie character, logo, or other promotional material. Sometimes that side of the LD would be made with colored plastic, rather than the clear material used for the data side. Picture disc LDs only had video material on one side as the "picture" side could not contain any data. Picture discs are rare in North America.
LD-G
Pioneer Electronics—one of the format's largest supporters/investors—was also deeply involved in the karaoke business in Japan, and used LaserDiscs as the storage medium for music and additional content such as graphics. This format was generally called LD-G. While several other karaoke labels manufactured LaserDiscs, there was nothing like the breadth of competition in that industry that exists now, as almost all manufacturers have transitioned to CD+G discs.
Anamorphic LaserDiscs
With the release of 16:9 televisions in the early 1990s, Pioneer and Toshiba decided that it was time to take advantage of this aspect ratio. Squeeze LDs were enhanced 16:9-ratio widescreen LaserDiscs. During the video transfer stage, the movie was stored in an anamorphic "squeezed" format. The widescreen movie image was stretched to fill the entire video frame with less or none of the video resolution wasted to create letterbox bars. The advantage was a 33% greater vertical resolution compared to letterboxed widescreen LaserDisc. This same procedure was used for anamorphic DVDs, but unlike all DVD players, very few LD players had the ability to unsqueeze the image for 4:3 sets; if the discs were played on a standard 4:3 television, the image would be distorted. Some 4:3 sets (such as the Sony WEGA series) could be set to unsqueeze the image. Since very few people outside of Japan owned 16:9 displays, the marketability of these special discs was very limited.
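The 33% figure follows from simple geometry; the sketch below assumes 480 active NTSC lines and a 16:9 picture letterboxed inside a 4:3 frame (both round-number assumptions for illustration):

```python
# Back-of-envelope check of the anamorphic vertical-resolution gain.
active_lines = 480                                     # assumed active NTSC lines
letterboxed_lines = active_lines * (4 / 3) / (16 / 9)  # lines carrying picture when letterboxed
gain = active_lines / letterboxed_lines - 1

print(f"Letterboxed picture lines: {letterboxed_lines:.0f}")  # 360
print(f"Anamorphic gain: {gain:.0%}")                         # 33%
```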
There were no anamorphic LaserDisc titles available in the US except for promotional purposes. Upon purchase of a Toshiba 16:9 television viewers had the option of selecting a number of Warner Bros. 16:9 films. Titles include Unforgiven, Grumpy Old Men, The Fugitive, and Free Willy. The Japanese lineup of titles was different. A series of releases under the banner "Squeeze LD" from Pioneer of mostly Carolco titles included Basic Instinct, Stargate, Terminator 2: Judgment Day, Showgirls, Cutthroat Island, and Cliffhanger. Terminator 2 was released twice in Squeeze LD, the second release being THX certified and a notable improvement over the first.
Recordable formats
Another type of video media, the CRVdisc, or "Component Recordable Video Disc", was available for a short time, mostly to professionals. Developed by Sony, CRVdiscs resembled early PC CD-ROM caddies, with a disc inside resembling a full-sized LD. CRVdiscs were blank, write-once, read-many media that could be recorded once on each side. CRVdiscs were used largely for backup storage in professional and commercial applications.
Another form of recordable LaserDisc that is completely playback-compatible with the LaserDisc format (unlike CRVdisc with its caddy enclosure) is the RLV, or Recordable Laser Videodisc. It was developed and first marketed by the Optical Disc Corporation (ODC, now ODC Nimbus) in 1984. RLV discs, like CRVdisc, are also a WORM technology, and function exactly like a CD-R disc. RLV discs look almost exactly like standard LaserDiscs, and can play in any standard LaserDisc player after they have been recorded.
The only cosmetic difference between an RLV disc and a regular factory-pressed LaserDisc is its reflective red color (appearing purple-violet or blue in photos of some RLV discs), which results from the dye embedded in the reflective layer of the disc to make it recordable, as opposed to the silver mirror appearance of regular LDs. The reddish color of RLVs is very similar to that of DVD-R and DVD+R discs. RLVs were popular for making short-run quantities of LaserDiscs for specialized applications such as interactive kiosks and flight simulators. Another, single-sided, form of RLV exists with the silver side covered in small bumps. Blank RLV discs show a standard test card when played in a LaserDisc player.
Pioneer also produced a rewritable LaserDisc system, the VDR-V1000 "LaserRecorder" for which the discs had a claimed erase/record potential of 1,000,000 cycles.
These recordable LD systems were never marketed toward the general public, and are so unknown as to create the misconception that home recording for LaserDiscs was impossible and thus a perceived "weakness" of the LaserDisc format.
LaserDisc sizes
30 cm (Full-size)
The most common size of LaserDisc was 30 cm, approximately the size of LP vinyl records. These discs allowed for 30/36 minutes per side (CAV NTSC/PAL) or 60/64 minutes per side (CLV NTSC/PAL). The vast majority of programming for the LaserDisc format was produced on these discs.
20 cm ("EP"-size)
A number of 20 cm LaserDiscs were also published. These smaller "EP"-sized LDs allowed for 20 minutes per side (CLV). They are much rarer than the full-size LDs, especially in North America, and roughly approximate the size of 45 rpm vinyl singles. These discs were often used for music video compilations (e.g. Bon Jovi's "Breakout", Bananarama's "Video Singles" or T'Pau's "View from a Bridge"), as well as Japanese karaoke machines.
12 cm (CD Video and Video Single Disc)
There were also 12 cm (CD-size) "single"-style discs produced that were playable on LaserDisc players. These were referred to as CD Video (CD-V) discs, and Video Single Discs (VSD).
CD-V was a hybrid format launched in the late 1980s, and carried up to five minutes of analog LaserDisc-type video content with a digital soundtrack (usually a music video), plus up to 20 minutes of digital audio CD tracks. The original 1989 release of David Bowie's retrospective Sound + Vision CD box set prominently featured a CD-V video of "Ashes to Ashes", and standalone promo CD-Vs featured the video, plus three audio tracks: "John, I'm Only Dancing", "Changes", and "The Supermen".
Despite the similar name, CD Video is entirely incompatible with the later all-digital Video CD (VCD) format, and can only be played back on LaserDisc players with CD-V capability or one of the players dedicated to the smaller discs. CD-Vs were somewhat popular for a brief time worldwide but soon faded from view.
In Europe, Philips also used the "CD Video" name as part of a short-lived attempt in the late 1980s to relaunch and rebrand the entire LaserDisc system. Some 20 and 30 cm discs were also branded "CD Video", but unlike the 12 cm discs, these were essentially just standard LaserDiscs with digital soundtracks and no audio-only CD content.
The VSD format was announced in 1990, and was essentially the same as the CD-V, but without the audio CD tracks, and intended to sell at a lower price. VSDs were popular only in Japan and other parts of Asia and were never fully introduced to the rest of the world.
| Technology | Non-volatile memory | null |
255954 | https://en.wikipedia.org/wiki/DNA%20microarray | DNA microarray | A DNA microarray (also commonly known as DNA chip or biochip) is a collection of microscopic DNA spots attached to a solid surface. Scientists use DNA microarrays to measure the expression levels of large numbers of genes simultaneously or to genotype multiple regions of a genome. Each DNA spot contains picomoles (10⁻¹² moles) of a specific DNA sequence, known as probes (or reporters or oligos). These can be a short section of a gene or other DNA element that are used to hybridize a cDNA or cRNA (also called anti-sense RNA) sample (called target) under high-stringency conditions. Probe-target hybridization is usually detected and quantified by detection of fluorophore-, silver-, or chemiluminescence-labeled targets to determine relative abundance of nucleic acid sequences in the target. The original nucleic acid arrays were macro arrays approximately 9 cm × 12 cm, and the first computerized image-based analysis was published in 1981. The DNA microarray was invented by Patrick O. Brown. An example of its application is in SNP arrays for polymorphisms in cardiovascular diseases, cancer, pathogens and GWAS analysis. It is also used for the identification of structural variations and the measurement of gene expression.
Principle
The core principle behind microarrays is hybridization between two DNA strands, the property of complementary nucleic acid sequences to specifically pair with each other by forming hydrogen bonds between complementary nucleotide base pairs. A high number of complementary base pairs in a nucleotide sequence means tighter non-covalent bonding between the two strands. After washing off non-specific bonding sequences, only strongly paired strands will remain hybridized. Fluorescently labeled target sequences that bind to a probe sequence generate a signal that depends on the hybridization conditions (such as temperature), and washing after hybridization. Total strength of the signal, from a spot (feature), depends upon the amount of target sample binding to the probes present on that spot. Microarrays use relative quantitation in which the intensity of a feature is compared to the intensity of the same feature under a different condition, and the identity of the feature is known by its position.
Uses and types
Many types of arrays exist and the broadest distinction is whether they are spatially arranged on a surface or on coded beads:
The traditional solid-phase array is a collection of orderly microscopic "spots", called features, each with thousands of identical and specific probes attached to a solid surface, such as glass, plastic or silicon biochip (commonly known as a genome chip, DNA chip or gene array). Thousands of these features can be placed in known locations on a single DNA microarray.
The alternative bead array is a collection of microscopic polystyrene beads, each with a specific probe and a ratio of two or more dyes, which do not interfere with the fluorescent dyes used on the target sequence.
DNA microarrays can be used to detect DNA (as in comparative genomic hybridization), or detect RNA (most commonly as cDNA after reverse transcription) that may or may not be translated into proteins. The process of measuring gene expression via cDNA is called expression analysis or expression profiling.
Applications include gene expression profiling, comparative genomic hybridization, SNP detection (including for genome-wide association studies), and chromatin immunoprecipitation studies (ChIP-on-chip).
Specialised arrays tailored to particular crops are becoming increasingly popular in molecular breeding applications. In the future they could be used to screen seedlings at early stages to lower the number of unneeded seedlings tried out in breeding operations.
Fabrication
Microarrays can be manufactured in different ways, depending on the number of probes under examination, costs, customization requirements, and the type of scientific question being asked. Arrays from commercial vendors may have as few as 10 probes or as many as 5 million or more micrometre-scale probes.
Spotted vs. in situ synthesised arrays
Microarrays can be fabricated using a variety of technologies, including printing with fine-pointed pins onto glass slides, photolithography using pre-made masks, photolithography using dynamic micromirror devices, ink-jet printing, or electrochemistry on microelectrode arrays.
In spotted microarrays, the probes are oligonucleotides, cDNA or small fragments of PCR products that correspond to mRNAs. The probes are synthesized prior to deposition on the array surface and are then "spotted" onto glass. A common approach utilizes an array of fine pins or needles controlled by a robotic arm that is dipped into wells containing DNA probes and then depositing each probe at designated locations on the array surface. The resulting "grid" of probes represents the nucleic acid profiles of the prepared probes and is ready to receive complementary cDNA or cRNA "targets" derived from experimental or clinical samples.
This technique is used by research scientists around the world to produce "in-house" printed microarrays in their own labs. These arrays may be easily customized for each experiment, because researchers can choose the probes and printing locations on the arrays, synthesize the probes in their own lab (or collaborating facility), and spot the arrays. They can then generate their own labeled samples for hybridization, hybridize the samples to the array, and finally scan the arrays with their own equipment. This provides a relatively low-cost microarray that may be customized for each study, and avoids the costs of purchasing often more expensive commercial arrays that may represent vast numbers of genes that are not of interest to the investigator.
Publications exist which indicate that in-house spotted microarrays may not provide the same level of sensitivity compared to commercial oligonucleotide arrays, possibly owing to the small batch sizes and reduced printing efficiencies when compared to industrial manufacture of oligo arrays.
In oligonucleotide microarrays, the probes are short sequences designed to match parts of the sequence of known or predicted open reading frames. Although oligonucleotide probes are often used in "spotted" microarrays, the term "oligonucleotide array" most often refers to a specific technique of manufacturing. Oligonucleotide arrays are produced by printing short oligonucleotide sequences designed to represent a single gene or family of gene splice-variants by synthesizing this sequence directly onto the array surface instead of depositing intact sequences. Sequences may be longer (60-mer probes such as the Agilent design) or shorter (25-mer probes produced by Affymetrix) depending on the desired purpose; longer probes are more specific to individual target genes, shorter probes may be spotted in higher density across the array and are cheaper to manufacture.
One technique used to produce oligonucleotide arrays is photolithographic synthesis (Affymetrix) on a silica substrate, where light and light-sensitive masking agents are used to "build" a sequence one nucleotide at a time across the entire array. Each applicable probe is selectively "unmasked" prior to bathing the array in a solution of a single nucleotide; the exposed probes couple with that nucleotide, and the next set of probes is unmasked in preparation for a different nucleotide exposure. After many repetitions, the sequence of every probe becomes fully constructed. More recently, Maskless Array Synthesis from NimbleGen Systems has combined flexibility with large numbers of probes.
Two-channel vs. one-channel detection
Two-color microarrays or two-channel microarrays are typically hybridized with cDNA prepared from two samples to be compared (e.g. diseased tissue versus healthy tissue) and that are labeled with two different fluorophores. Fluorescent dyes commonly used for cDNA labeling include Cy3, which has a fluorescence emission wavelength of 570 nm (corresponding to the green part of the light spectrum), and Cy5 with a fluorescence emission wavelength of 670 nm (corresponding to the red part of the light spectrum). The two Cy-labeled cDNA samples are mixed and hybridized to a single microarray that is then scanned in a microarray scanner to visualize fluorescence of the two fluorophores after excitation with a laser beam of a defined wavelength. Relative intensities of each fluorophore may then be used in ratio-based analysis to identify up-regulated and down-regulated genes.
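As a minimal sketch of such ratio-based analysis (illustrative values, with Cy3 treated as the reference channel and a conventional two-fold cutoff), the per-spot log2 ratio can be computed directly from background-corrected intensities:

```python
# Minimal two-channel ratio analysis; intensity values are illustrative.
import numpy as np

cy3 = np.array([1200.0, 800.0, 150.0, 4000.0])   # reference sample (green channel)
cy5 = np.array([2600.0, 750.0, 600.0, 1000.0])   # test sample (red channel)

log_ratio = np.log2(cy5 / cy3)                   # per-spot M value
up_regulated = log_ratio > 1                     # more than two-fold increase
down_regulated = log_ratio < -1                  # more than two-fold decrease
print(log_ratio.round(2), up_regulated, down_regulated)
```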
Oligonucleotide microarrays often carry control probes designed to hybridize with RNA spike-ins. The degree of hybridization between the spike-ins and the control probes is used to normalize the hybridization measurements for the target probes. Although absolute levels of gene expression may be determined in the two-color array in rare instances, the relative differences in expression among different spots within a sample and between samples is the preferred method of data analysis for the two-color system. Examples of providers for such microarrays include Agilent with their Dual-Mode platform, Eppendorf with their DualChip platform for colorimetric Silverquant labeling, and TeleChem International with Arrayit.
In single-channel microarrays or one-color microarrays, the arrays provide intensity data for each probe or probe set indicating a relative level of hybridization with the labeled target. However, they do not truly indicate abundance levels of a gene but rather relative abundance when compared to other samples or conditions when processed in the same experiment. Each RNA molecule encounters protocol and batch-specific bias during amplification, labeling, and hybridization phases of the experiment making comparisons between genes for the same microarray uninformative. The comparison of two conditions for the same gene requires two separate single-dye hybridizations. Several popular single-channel systems are the Affymetrix "Gene Chip", Illumina "Bead Chip", Agilent single-channel arrays, the Applied Microarrays "CodeLink" arrays, and the Eppendorf "DualChip & Silverquant". One strength of the single-dye system lies in the fact that an aberrant sample cannot affect the raw data derived from other samples, because each array chip is exposed to only one sample (as opposed to a two-color system in which a single low-quality sample may drastically impinge on overall data precision even if the other sample was of high quality). Another benefit is that data are more easily compared to arrays from different experiments as long as batch effects have been accounted for.
One-channel microarrays may be the only choice in some situations. Suppose that many samples need to be compared against one another: the number of two-channel hybridizations required for all pairwise comparisons quickly becomes unfeasible, unless one sample is used as a common reference.
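A small illustration of the combinatorics (sample count chosen arbitrarily): direct pairwise comparisons grow quadratically with the number of samples, while a common-reference design grows linearly.

```python
# Pairwise two-channel hybridizations versus a common-reference design.
n_samples = 12
all_pairs = n_samples * (n_samples - 1) // 2   # every sample against every other
with_reference = n_samples                     # every sample against one reference
print(all_pairs, with_reference)               # 66 vs 12
```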
A typical protocol
This is an example of a DNA microarray experiment which includes details for a particular case to better explain DNA microarray experiments, while listing modifications for RNA or other alternative experiments.
The two samples to be compared (pairwise comparison) are grown/acquired. In this example treated sample (case) and untreated sample (control).
The nucleic acid of interest is purified: this can be RNA for expression profiling, DNA for comparative hybridization, or DNA/RNA bound to a particular protein which is immunoprecipitated (ChIP-on-chip) for epigenetic or regulation studies. In this example total RNA is isolated (both nuclear and cytoplasmic) by guanidinium thiocyanate-phenol-chloroform extraction (e.g. Trizol) which isolates most RNA (whereas column methods have a cut off of 200 nucleotides) and if done correctly has a better purity.
The purified RNA is analysed for quality (by capillary electrophoresis) and quantity (for example, by using a NanoDrop or NanoPhotometer spectrometer). If the material is of acceptable quality and sufficient quantity is present (e.g., >1μg, although the required amount varies by microarray platform), the experiment can proceed.
The labeled product is generated via reverse transcription, optionally followed by PCR amplification. The RNA is reverse transcribed with either polyT primers (which amplify only mRNA) or random primers (which amplify all RNA, most of which is rRNA). miRNA microarrays ligate an oligonucleotide to the purified small RNA (isolated with a fractionator), which is then reverse transcribed and amplified.
The label is added either during the reverse transcription step, or following amplification if it is performed. The sense labeling is dependent on the microarray; e.g. if the label is added with the RT mix, the cDNA is antisense and the microarray probe is sense, except in the case of negative controls.
The label is typically fluorescent; only one machine uses radiolabels.
The labeling can be direct (not used) or indirect (requires a coupling stage). For two-channel arrays, the coupling stage occurs before hybridization, using aminoallyl uridine triphosphate (aminoallyl-UTP, or aaUTP) and NHS amino-reactive dyes (such as cyanine dyes); for single-channel arrays, the coupling stage occurs after hybridization, using biotin and labeled streptavidin. The modified nucleotides (usually in a ratio of 1 aaUTP: 4 TTP (thymidine triphosphate)) are added enzymatically in a low ratio to normal nucleotides, typically resulting in 1 every 60 bases. The aaDNA is then purified with a column (using a phosphate buffer solution, as Tris contains amine groups). The aminoallyl group is an amine group on a long linker attached to the nucleobase, which reacts with a reactive dye.
A form of replicate known as a dye flip can be performed to control for dye artifacts in two-channel experiments; for a dye flip, a second slide is used, with the labels swapped (the sample that was labeled with Cy3 in the first slide is labeled with Cy5, and vice versa). In this example, aminoallyl-UTP is present in the reverse-transcribed mixture.
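A minimal sketch of how a dye-flip pair can be combined downstream, assuming M1 and M2 are the per-spot log2(Cy5/Cy3) ratios from the original and label-swapped slides; averaging M1 with -M2 cancels dye bias that is consistent across the pair:

```python
# Combining a dye-swap (dye-flip) pair of slides; values are illustrative.
import numpy as np

m1 = np.array([1.10, -0.40, 0.25])   # slide 1: sample A in Cy5, sample B in Cy3
m2 = np.array([-0.90, 0.55, -0.05])  # slide 2: labels swapped
m_corrected = (m1 - m2) / 2          # dye-bias-corrected log ratios
print(m_corrected)
```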
The labeled samples are then mixed with a proprietary hybridization solution which can consist of SDS, SSC, dextran sulfate, a blocking agent (such as Cot-1 DNA, salmon sperm DNA, calf thymus DNA, PolyA, or PolyT), Denhardt's solution, or formamide.
The mixture is denatured and added to the pinholes of the microarray. The holes are sealed and the microarray hybridized, either in a hyb oven, where the microarray is mixed by rotation, or in a mixer, where the microarray is mixed by alternating pressure at the pinholes.
After an overnight hybridization, all nonspecific binding is washed off (SDS and SSC).
The microarray is dried and scanned by a machine that uses a laser to excite the dye and measures the emission levels with a detector.
The image is gridded with a template and the intensities of each feature (composed of several pixels) is quantified.
The raw data is normalized; the simplest normalization method is to subtract background intensity and scale so that the total intensities of the features of the two channels are equal, or to use the intensity of a reference gene to calculate the t-value for all of the intensities. More sophisticated methods include z-ratio, loess and lowess regression and RMA (robust multichip analysis) for Affymetrix chips (single-channel, silicon chip, in situ synthesized short oligonucleotides).
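A minimal sketch of the simplest normalization described above, assuming per-channel background estimates and illustrative intensities; the two channels are scaled to equal total intensity before log ratios are taken:

```python
# Background subtraction followed by global (total-intensity) scaling.
import numpy as np

red = np.array([2600.0, 750.0, 600.0, 1000.0])     # illustrative Cy5 feature intensities
green = np.array([1200.0, 800.0, 150.0, 4000.0])   # illustrative Cy3 feature intensities
red_bg, green_bg = 100.0, 80.0                     # estimated background per channel

red_c = np.clip(red - red_bg, 1, None)             # background-subtracted, floored at 1
green_c = np.clip(green - green_bg, 1, None)
green_n = green_c * (red_c.sum() / green_c.sum())  # scale so channel totals match
print(np.log2(red_c / green_n).round(2))           # normalized log ratios
```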
Microarrays and bioinformatics
The advent of inexpensive microarray experiments created several specific bioinformatics challenges: the multiple levels of replication in experimental design (Experimental design); the number of platforms and independent groups and data format (Standardization); the statistical treatment of the data (Data analysis); mapping each probe to the mRNA transcript that it measures (Annotation); the sheer volume of data and the ability to share it (Data warehousing).
Experimental design
Due to the biological complexity of gene expression, the considerations of experimental design that are discussed in the expression profiling article are of critical importance if statistically and biologically valid conclusions are to be drawn from the data.
There are three main elements to consider when designing a microarray experiment. First, replication of the biological samples is essential for drawing conclusions from the experiment. Second, technical replicates (e.g. two RNA samples obtained from each experimental unit) may help to quantitate precision. The biological replicates include independent RNA extractions. Technical replicates may be two aliquots of the same extraction. Third, spots of each cDNA clone or oligonucleotide are present as replicates (at least duplicates) on the microarray slide, to provide a measure of technical precision in each hybridization. It is critical that information about the sample preparation and handling is discussed, in order to help identify the independent units in the experiment and to avoid inflated estimates of statistical significance.
Standardization
Microarray data is difficult to exchange due to the lack of standardization in platform fabrication, assay protocols, and analysis methods. This presents an interoperability problem in bioinformatics. Various grass-roots open-source projects are trying to ease the exchange and analysis of data produced with non-proprietary chips:
For example, the "Minimum Information About a Microarray Experiment" (MIAME) checklist helps define the level of detail that should exist and is being adopted by many journals as a requirement for the submission of papers incorporating microarray results. But MIAME does not describe the format for the information, so while many formats can support the MIAME requirements, no format permits verification of complete semantic compliance. The "MicroArray Quality Control (MAQC) Project" is being conducted by the US Food and Drug Administration (FDA) to develop standards and quality control metrics which will eventually allow the use of MicroArray data in drug discovery, clinical practice and regulatory decision-making. The MGED Society has developed standards for the representation of gene expression experiment results and relevant annotations.
Data analysis
Microarray data sets are commonly very large, and analytical precision is influenced by a number of variables. Statistical challenges include taking into account effects of background noise and appropriate normalization of the data. Normalization methods may be suited to specific platforms and, in the case of commercial platforms, the analysis may be proprietary. Algorithms that affect statistical analysis include:
Image analysis: gridding, spot recognition of the scanned image (segmentation algorithm), removal or marking of poor-quality and low-intensity features (called flagging).
Data processing: background subtraction (based on global or local background), determination of spot intensities and intensity ratios, visualisation of data (e.g. see MA plot), and log-transformation of ratios, global or local normalization of intensity ratios, and segmentation into different copy number regions using step detection algorithms.
Class discovery analysis: This analytic approach, sometimes called unsupervised classification or knowledge discovery, tries to identify whether microarrays (objects, patients, mice, etc.) or genes cluster together in groups. Identifying naturally existing groups of objects (microarrays or genes) which cluster together can enable the discovery of new groups that otherwise were not previously known to exist. During knowledge discovery analysis, various unsupervised classification techniques can be employed with DNA microarray data to identify novel clusters (classes) of arrays. This type of approach is not hypothesis-driven, but rather is based on iterative pattern recognition or statistical learning methods to find an "optimal" number of clusters in the data. Examples of unsupervised analyses methods include self-organizing maps, neural gas, k-means cluster analyses, hierarchical cluster analysis, Genomic Signal Processing based clustering and model-based cluster analysis. For some of these methods the user also has to define a distance measure between pairs of objects. Although the Pearson correlation coefficient is usually employed, several other measures have been proposed and evaluated in the literature. The input data used in class discovery analyses are commonly based on lists of genes having high informativeness (low noise) based on low values of the coefficient of variation or high values of Shannon entropy, etc. The determination of the most likely or optimal number of clusters obtained from an unsupervised analysis is called cluster validity. Some commonly used metrics for cluster validity are the silhouette index, Davies-Bouldin index, Dunn's index, or Hubert's statistic.
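A minimal sketch of such unsupervised class discovery, using hierarchical clustering and k-means on a small random expression matrix (rows are arrays, columns are genes; all values are stand-ins):

```python
# Unsupervised clustering of arrays; the data are random placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expr = rng.normal(size=(10, 50))                       # 10 arrays x 50 genes

hier_labels = fcluster(linkage(expr, method="average"), t=2, criterion="maxclust")
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expr)
print(hier_labels, kmeans_labels)
```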
Class prediction analysis: This approach, called supervised classification, establishes the basis for developing a predictive model into which future unknown test objects can be input in order to predict the most likely class membership of the test objects. Supervised analysis for class prediction involves use of techniques such as linear regression, k-nearest neighbor, learning vector quantization, decision tree analysis, random forests, naive Bayes, logistic regression, kernel regression, artificial neural networks, support vector machines, mixture of experts, and supervised neural gas. In addition, various metaheuristic methods are employed, such as genetic algorithms, covariance matrix self-adaptation, particle swarm optimization, and ant colony optimization. Input data for class prediction are usually based on filtered lists of genes which are predictive of class, determined using classical hypothesis tests (next section), Gini diversity index, or information gain (entropy).
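A minimal sketch of supervised class prediction with k-nearest neighbours and cross-validation; the expression matrix and class labels are random stand-ins, and in practice the gene list would first be filtered as described above:

```python
# Supervised classification of arrays; the data are random placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
expr = rng.normal(size=(40, 200))        # 40 arrays x 200 filtered genes
labels = np.array([0] * 20 + [1] * 20)   # known class membership of the training arrays

clf = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(clf, expr, labels, cv=5)   # cross-validated accuracy
print(scores.mean())
```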
Hypothesis-driven statistical analysis: Statistically significant changes in gene expression are commonly identified using the t-test, ANOVA, Bayesian methods, or Mann–Whitney test methods tailored to microarray data sets, which take into account multiple comparisons or cluster analysis. These methods assess statistical power based on the variation present in the data and the number of experimental replicates, and can help minimize type I and type II errors in the analyses.
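A minimal sketch of per-gene two-group testing with a multiple-comparison correction (Benjamini-Hochberg false discovery rate); the replicate counts and effect size are arbitrary:

```python
# Per-gene t-tests with FDR control; the data are simulated placeholders.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
group_a = rng.normal(size=(5, 1000))    # 5 replicates x 1000 genes
group_b = rng.normal(size=(5, 1000))
group_b[:, :50] += 1.5                  # simulate 50 genuinely changed genes

_, pvals = ttest_ind(group_a, group_b, axis=0)
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(int(reject.sum()), "genes significant after FDR control")
```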
Dimensional reduction: Analysts often reduce the number of dimensions (genes) prior to data analysis. This may involve linear approaches such as principal components analysis (PCA), or non-linear manifold learning (distance metric learning) using kernel PCA, diffusion maps, Laplacian eigenmaps, local linear embedding, locally preserving projections, and Sammon's mapping.
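A minimal sketch of linear dimensional reduction with principal components analysis; the expression matrix is a random stand-in:

```python
# PCA on an arrays x genes matrix; the data are random placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
expr = rng.normal(size=(30, 5000))       # 30 arrays x 5000 genes

pca = PCA(n_components=10)
reduced = pca.fit_transform(expr)        # 30 arrays x 10 components
print(reduced.shape, pca.explained_variance_ratio_[:3].round(3))
```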
Network-based methods: Statistical methods that take the underlying structure of gene networks into account, representing either associative or causative interactions or dependencies among gene products. Weighted gene co-expression network analysis is widely used for identifying co-expression modules and intramodular hub genes. Modules may correspond to cell types or pathways. Highly connected intramodular hubs best represent their respective modules.
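A heavily simplified sketch in the spirit of weighted co-expression analysis (real WGCNA involves module detection and much more); the soft-thresholding power and the data are assumptions for illustration:

```python
# Toy co-expression network: correlation adjacency and connectivity ranking.
import numpy as np

rng = np.random.default_rng(4)
expr = rng.normal(size=(20, 100))            # 20 samples x 100 genes

corr = np.corrcoef(expr, rowvar=False)       # gene x gene correlation matrix
adjacency = np.abs(corr) ** 6                # soft threshold (power of 6 assumed)
np.fill_diagonal(adjacency, 0)
connectivity = adjacency.sum(axis=0)         # connectivity of each gene
print(np.argsort(connectivity)[-5:])         # indices of the most hub-like genes
```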
Microarray data may require further processing aimed at reducing the dimensionality of the data to aid comprehension and more focused analysis. Other methods permit analysis of data consisting of a low number of biological or technical replicates; for example, the Local Pooled Error (LPE) test pools standard deviations of genes with similar expression levels in an effort to compensate for insufficient replication.
Annotation
The relation between a probe and the mRNA that it is expected to detect is not trivial. Some mRNAs may cross-hybridize probes in the array that are supposed to detect another mRNA. In addition, mRNAs may experience amplification bias that is sequence or molecule-specific. Thirdly, probes that are designed to detect the mRNA of a particular gene may be relying on genomic EST information that is incorrectly associated with that gene.
Data warehousing
Microarray data are most useful when they can be compared with other similar datasets. The sheer volume of data, specialized formats (such as MIAME), and curation efforts associated with the datasets require specialized databases to store the data. A number of open-source data warehousing solutions, such as InterMine and BioMart, have been created for the specific purpose of integrating diverse biological datasets, and also support analysis.
Alternative technologies
Advances in massively parallel sequencing have led to the development of RNA-Seq technology, which enables a whole-transcriptome shotgun approach to characterizing and quantifying gene expression. Unlike microarrays, which need a reference genome and transcriptome to be available before the microarray itself can be designed, RNA-Seq can also be used for new model organisms whose genome has not been sequenced yet.
Glossary
An array or slide is a collection of features spatially arranged in a two-dimensional grid of columns and rows.
Block or subarray: a group of spots, typically made in one print round; several subarrays/blocks form an array.
Case/control: an experimental design paradigm especially suited to the two-colour array system, in which a condition chosen as control (such as healthy tissue or state) is compared to an altered condition (such as a diseased tissue or state).
Channel: the fluorescence output recorded in the scanner for an individual fluorophore; the recorded wavelength can even be ultraviolet.
Dye flip or dye swap or fluor reversal: reciprocal labelling of DNA targets with the two dyes to account for dye bias in experiments.
Scanner: an instrument used to detect and quantify the intensity of fluorescence of spots on a microarray slide, by selectively exciting fluorophores with a laser and measuring the fluorescence with an optical filter and photomultiplier system.
Spot or feature: a small area on an array slide that contains picomoles of specific DNA samples.
For other relevant terms see:
Glossary of gene expression terms
Protocol (natural sciences)
| Technology | Biotechnology | null |
255960 | https://en.wikipedia.org/wiki/Ground%20roller | Ground roller | The ground rollers, Brachypteraciidae, are a small family of non-migratory birds restricted to Madagascar.
They are members of the order Coraciiformes and are most closely related to the rollers in the family Coraciidae.
Description
Ground rollers share the generally crow-like size and build of the true rollers, ranging from in length, and also hunt reptiles and large insects. They are more terrestrial than Coraciidae species, and this is reflected in their longer legs and shorter, more rounded wings.
They lack the highly colourful appearance of the true rollers, and are duller in appearance, with striped or flecked plumage. They are much more elusive and shy than their relatives, and are normally difficult to find in the Malagasy forests. Often the hooting breeding call is all that betrays their presence.
These birds nest as solitary pairs in holes in the ground which they excavate themselves, unlike the true rollers, which rarely nest in ground holes and even then do not dig their own nests.
Taxonomy
The phylogenetic relationship between the six families that make up the order Coraciiformes is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
mtDNA analyses confirmed the systematics of this group but indicated that merging Geobiastes into Brachypteracias, as was usually done since the 1960s, should be reversed at least until a more comprehensive review (e.g. supported by fossils) is possible (Kirchman et al., 2001). Also, 2000-year-old subfossil remains of ground rollers are known from the Holocene of Ampoza (Goodman, 2000); Eocene remains from Europe at first tentatively assigned to this family were later recognized as quite distinct (Mayr & Mourer-Chauviré 2000). Presently, there is no indication that ground rollers ever occurred anywhere outside Madagascar (Mayr & Mourer-Chauviré, 2001).
Species
There are five extant species in four genera in the family Brachypteraciidae:
An extinct species, the Ampoza ground roller (Brachypteracias langrandi), of presumed Holocene age, was described in 2000 based on a single humerus.
| Biology and health sciences | Coraciiformes | Animals |
255968 | https://en.wikipedia.org/wiki/High-explosive%20anti-tank | High-explosive anti-tank | High-explosive anti-tank (HEAT) is the effect of a shaped charge explosive that uses the Munroe effect to penetrate heavy armor. The warhead functions by having an explosive charge collapse a metal liner inside the warhead into a high-velocity shaped charge jet; this is capable of penetrating armor steel to a depth of seven or more times the diameter of the charge (charge diameters, CD). The shaped charge jet armor penetration effect is purely kinetic in nature; the round has no explosive or incendiary effect on the armor.
Unlike standard armor-piercing rounds, a HEAT warhead's penetration performance is unaffected by the projectile's velocity, allowing it to be fired by lower-powered weapons that generate less recoil.
The performance of HEAT weapons has nothing to do with thermal effects, with HEAT being simply an acronym.
History
HEAT warheads were developed during World War II, from extensive research and development into shaped charge warheads. Shaped charge warheads were promoted internationally by the Swiss inventor Henry Mohaupt, who exhibited the weapon before World War II. Before 1939, Mohaupt demonstrated his invention to British and French ordnance authorities. Concurrent development by the German inventors’ group of Cranz, Schardin, and Thomanek led to the first documented use of shaped charges in warfare, during the successful assault on the fortress of Eben Emael on 10 May 1940.
Claims for priority of invention are difficult to resolve due to subsequent historic interpretations, secrecy, espionage, and international commercial interest.
The first British HEAT weapon to be developed and issued was a rifle grenade using a cup launcher on the end of the rifle barrel; the Grenade, Rifle No. 68 /AT which was first issued to the British Armed Forces in 1940. This has some claim to have been the first HEAT warhead and launcher in use. The design of the warhead was simple and was capable of penetrating of armor. The fuze of the grenade was armed by removing a pin in the tail which prevented the firing pin from flying forward. Simple fins gave it stability in the air and, provided the grenade hit the target at the proper angle of 90 degrees, the charge would be effective. Detonation occurred on impact, when a striker in the tail of the grenade overcame the resistance of a creep spring and was thrown forward into a stab detonator.
By mid-1940, Germany introduced the first HEAT round to be fired by a gun, the 7.5 cm Gr.38 Hl/A (later editions B and C), fired by the KwK.37 L/24 of the Panzer IV tank and the StuG III self-propelled gun. In mid-1941, Germany started the production of HEAT rifle-grenades, first issued to paratroopers and, by 1942, to the regular army units (Gewehr-Panzergranate 40, 46 and 61), but, just as did the British, soon turned to integrated warhead-delivery systems: in 1943, the Püppchen, Panzerschreck and Panzerfaust were introduced.
The Panzerfaust and Panzerschreck (tank fist and tank terror, respectively) gave the German infantryman the ability to destroy any tank on the battlefield from 50 to 150 meters with relative ease of use and training (unlike the British PIAT). The Germans made use of large quantities of HEAT ammunition in converted 7.5 cm Pak 97/38 guns from 1942, also fabricating HEAT warheads for the Mistel weapon. These so-called Schwere Hohlladung (heavy shaped charge) warheads were intended for use against heavily armored battleships. Operational versions weighed nearly two tons and were perhaps the largest HEAT warheads ever deployed. A five-ton version code-named Beethoven was also developed.
Meanwhile, the British No. 68 AT rifle grenade was proving to be too light to deal significant damage, resulting in it rarely being used in action. Due to these limits, a new infantry anti-tank weapon was needed, and this ultimately came in the form of the "projector, infantry, anti-tank" or PIAT. By 1942, the PIAT had been developed by Major Millis Jefferis. It was a combination of a HEAT warhead with a spigot mortar delivery system. While cumbersome, the weapon allowed British infantry to engage armor at range for the first time; the earlier magnetic hand-mines and grenades had required them to approach dangerously close. During World War II the British referred to the Munroe effect as the "cavity effect on explosives".
During the war, the French communicated Mohaupt's technology to the U.S. Ordnance Department, and he was invited to the US, where he worked as a consultant on the bazooka project.
The need for a large bore made HEAT rounds relatively ineffective in existing small-caliber anti-tank guns of the era. Germany worked around this with the Stielgranate 41, introducing a round that was placed over the end on the outside of otherwise obsolete anti-tank guns to produce a medium-range low-velocity weapon.
Adaptations to existing tank guns were somewhat more difficult, although all major forces had done so by the end of the war. Since velocity has little effect on the armor-piercing ability of the round, which is defined by explosive power, HEAT rounds were particularly useful in long-range combat where slower terminal velocity was not an issue. The Germans were again the ones to produce the most capable gun-fired HEAT rounds, using a driving band on bearings to allow it to fly unspun from their existing rifled tank guns. The HEAT round was particularly useful to them because it allowed the low-velocity large-bore guns used on their many assault guns to also become useful anti-tank weapons.
Likewise, the Germans, Italians, and Japanese had in service many obsolescent infantry guns, short-barreled, low-velocity artillery pieces capable of direct and indirect fire and intended for infantry support, similar in tactical role to mortars; generally an infantry battalion had a battery of four or six. High-explosive anti-tank rounds for these old infantry guns made them semi-useful anti-tank guns, particularly the German guns (the Japanese 70 mm Type 92 battalion gun and Italian 65 mm mountain gun also had HEAT rounds available for them by 1944 but they were not very effective).
High-explosive anti-tank rounds caused a revolution in anti-tank warfare when they were first introduced in the later stages of World War II. One infantryman could effectively destroy any existing tank with a handheld weapon, thereby dramatically altering the nature of mobile operations. During World War II, weapons using HEAT warheads were termed hollow charge or shaped charge warheads.
Post World War II
The general public remained in the dark about shaped charge warheads, even believing that they relied on a new secret explosive, until early 1945, when the US Army cooperated with the US monthly publication Popular Science on a large and detailed article on the subject titled "It makes steel flow like mud". It was this article that revealed to the American public how the fabled bazooka actually worked against tanks and that the velocity of the rocket was irrelevant.
After the war, HEAT rounds became almost universal as the primary anti-tank weapon. Models of varying effectiveness were produced for almost all weapon types, from infantry weapons like rifle grenades and the M203 grenade launcher to larger dedicated anti-tank systems like the Carl Gustav recoilless rifle. When combined with the wire-guided missile, infantry weapons were able to operate at long range as well. Anti-tank missiles altered the nature of tank warfare from the 1960s to the 1990s; due to the tremendous penetration of HEAT munitions, many post-WWII main battle tanks, such as the Leopard 1 and AMX-30, were deliberately designed to carry modest armour in favour of reduced weight and better mobility. Despite subsequent developments in vehicle armour, HEAT munitions remain effective to this day.
Design
Penetration performance and effects
The jet moves at hypersonic speeds in solid material and therefore erodes exclusively in the local area where it interacts with the armor material. The correct detonation point of the warhead and its spacing from the target are critical for optimal penetration, for two reasons:
If the HEAT warhead is detonated too near a target's surface, there is not enough time for the jet to fully form. That is why most modern HEAT warheads have what is called a standoff, in the form of an extended nose cap or probe in front of the warhead.
The jet stretches, breaks apart and disperses during travel, reducing effectiveness. Depending on the quality of the HEAT warhead, this happens around 6 to 8 times the charge diameter for low-quality warheads and around 12 to 20 times the charge diameter for high-quality warheads. For example, the 150 mm diameter warhead of the newer BGM-71 TOW (TOW 2) reaches a peak penetration of ~1000 mm RHA at 6.5x diameter (~1 m) and is down to ~500 mm RHA penetration at 19x diameter (~2.8 m).
An important factor in the penetration performance of a HEAT round is the diameter of the warhead. As the penetration continues through the armor, the width of the hole decreases leading to a characteristic fist to finger penetration, where the size of the eventual finger is based on the size of the original fist. In general, very early HEAT rounds could expect to penetrate armor of 150% to 250% of their diameters, and these numbers were typical of early weapons used during World War II. Since then, the penetration of HEAT rounds relative to projectile diameters has steadily increased as a result of improved liner material and metal jet performance. Some modern examples claim numbers as high as 700%.
As for any antiarmor weapon, a HEAT round achieves its effectiveness through three primary mechanisms. Most obviously, when it perforates the armor, the jet's residual can cause great damage to any interior components it strikes. And as the jet interacts with the armor, even if it does not perforate into the interior, it typically causes a cloud of irregular fragments of armor material to spall from the inside surface. This cloud of behind-armor debris too will typically damage anything that the fragments strike. Another damage mechanism is the mechanical shock that results from the jet's impact and penetration. Shock is particularly important for such sensitive components as electronics.
Stabilization and accuracy
Spinning imparts centrifugal force onto a warhead's jet, dispersing it and reducing effectiveness. This became a challenge for weapon designers: for a long time, spinning a shell was the standard method of obtaining good accuracy, as with any rifled gun. Most hollow charge projectiles are therefore fin-stabilized rather than spin-stabilized.
In recent years, it has become possible to use shaped charges in spin-stabilized projectiles by imparting an opposite spin on the jet so that the two spins cancel out and result in a non-spinning jet. This is done either using fluted copper liners, which have raised ridges, or by forming the liner in such a way that it has a crystalline structure which imparts spin to the jet.
Besides spin-stabilization, another problem with any barreled weapon (that is, a gun) is that a large-diameter shell has worse accuracy than a small-diameter shell of the same weight, and the loss of accuracy increases dramatically with range. Paradoxically, this leads to situations in which a kinetic armor-piercing projectile is more usable at long ranges than a HEAT projectile, despite the latter having higher armor penetration. To illustrate this: a stationary Soviet T-62 tank, firing its smoothbore cannon at a range of 1000 meters against a target moving at 19 km/h, was rated to have a first-round hit probability of 70% when firing a kinetic projectile; under the same conditions, it could expect only 25% when firing a HEAT round. This matters for combat on the open battlefield with long lines of sight; the same T-62 could expect a 70% first-round hit probability using HEAT rounds against a target at 500 meters.
Additionally, a warhead's diameter is restricted by a gun's caliber if it is contained within the barrel. In non-gun applications, when HEAT warheads are delivered with missiles, rockets, bombs, grenades, or spigot mortars, the warhead size is no longer a limiting factor. In these cases, HEAT warheads often seem oversized in relation to the round's body. Classic examples of this include the German Panzerfaust and Soviet RPG-7.
Variants
Many HEAT-armed missiles today have two (or more) separate warheads (termed a tandem charge) to be more effective against reactive or multi-layered armor. The first, smaller warhead initiates the reactive armor, while the second (or other), larger warhead penetrates the armor below. This approach requires highly sophisticated fuzing electronics to set off the two warheads the correct time apart, and also special barriers between the warheads to stop unwanted interactions; this makes them cost more to produce.
The latest HEAT warheads, such as 3BK-31, feature triple charges: the first penetrates the spaced armor, the second the reactive or first layers of armor, and the third one finishes the penetration. The total penetration value may reach up to .
Some anti-armor weapons incorporate a variant on the shaped charge concept that, depending on the source, can be called an explosively formed penetrator (EFP), self-forging fragment (SFF), self-forging projectile (SEFOP), plate charge, or Misznay–Schardin (MS) charge. This warhead type uses the interaction of the detonation waves, and to a lesser extent the propulsive effect of the detonation products, to deform a dish or plate of metal (iron, tantalum, etc.) into a slug-shaped projectile of low length-to-diameter ratio and project it towards the target at around two kilometers per second.
The SFF is relatively unaffected by first-generation reactive armor, and it can travel more than 1,000 cone diameters (CDs) before its velocity becomes ineffective at penetrating armor due to aerodynamic drag, or before hitting the target becomes a problem. The impact of an SFF normally causes a large-diameter but relatively shallow hole, at best a few CDs deep (shallow relative to a shaped charge jet). If the SFF perforates the armor, extensive behind-armor damage (BAD, also called behind-armor effect (BAE)) occurs. The BAD is mainly caused by the high-temperature, high-velocity armor and slug fragments injected into the interior space and by the overpressure (blast) caused by the impact.
More modern SFF warhead versions, through the use of advanced initiation modes, can also produce rods (stretched slugs), multi-slugs and finned projectiles, in addition to the standard short length-to-diameter ratio projectile. The stretched slugs are able to penetrate a much greater depth of armor, at some loss to BAD. Multi-slugs are better at defeating light or area targets, and the finned projectiles have greatly enhanced accuracy. The use of this warhead type is mainly restricted to lightly armored areas of MBTs, such as the top, belly and rear armored areas. It is well suited to attacking other, less heavily armored fighting vehicles (AFVs) and to breaching material targets (buildings, bunkers, bridge supports, etc.). The newer rod projectiles may be effective against the more heavily armored areas of MBTs.
Weapons using the SEFOP principle have already been used in combat; the smart submunitions in the CBU-97 cluster bomb used by the US Air Force and US Navy in the 2003 Iraq war used this principle, and the US Army is reportedly experimenting with precision-guided artillery shells under Project SADARM (Seek And Destroy Armor). There are also various other projectiles (BONUS, DM 642) and rocket submunitions (Motiv-3M, DM 642) and mines (MIFF, TMRP-6) that use the SFF principle.
Multipurpose
With the effectiveness of gun-fired single-charge HEAT rounds being lessened, or even negated, by increasingly sophisticated armoring techniques, a class of HEAT rounds termed high-explosive anti-tank multi-purpose, or HEAT-MP, has become more popular. These are HEAT rounds that are effective against older tanks and light armored vehicles but have improved fragmentation, blast and fuzing, giving the projectiles a reasonable effect against light armor, personnel and materiel so that they can be used in place of conventional high-explosive rounds against infantry and other battlefield targets. This reduces the total number of rounds that need to be carried for different roles, which is particularly important for modern tanks like the M1 Abrams, due to the size of their rounds. The M1A1/M1A2 tank can carry only 40 rounds for its 120 mm M256 gun, whereas the M60A3 Patton tank (the Abrams' predecessor) carried 63 rounds for its M68 gun. This disadvantage is offset by the Abrams' higher first-round hit rate, a result of its improved fire control system compared to that of the M60.
Another variant of the HEAT warhead surrounds the warhead with a conventional fragmentation casing to increase its effectiveness against unarmored targets, while remaining effective in the anti-armor role. In some cases, this is merely a side effect of the armor-piercing design, whilst other designs specifically incorporate this dual-role ability.
Defense
Improvements to the armor of main battle tanks have reduced the usefulness of HEAT warheads by making effective man-portable HEAT missiles heavier, although many of the world's armies continue to carry man-portable HEAT rocket launchers for use against vehicles and bunkers. In unusual cases, shoulder-launched HEAT rockets are believed to have shot down U.S. helicopters in Iraq.
The reduced effectiveness of HEAT munitions against modern main battle tanks can be attributed in part to the use of new types of armor. The jet created by the detonation of a HEAT round must form at a certain distance from the target surface and must not be deflected. Reactive armor attempts to defeat this with an outward-directed explosion under the impact point, causing the jet to deform and so greatly reducing its penetrating power. Alternatively, composite armor featuring ceramics erodes the liner jet faster than the rolled homogeneous armor steel preferred in constructing older armored fighting vehicles.
Spaced armor and slat armor are also designed to defend against HEAT rounds, protecting vehicles by causing premature detonation of the explosive at a relatively safe distance away from the main armor of the vehicle. Some cage defenses work by destroying the mechanism of the HEAT round.
Deployment
Helicopters have carried anti-tank guided missiles (ATGM) tipped with HEAT warheads since 1956. The first example of this was the use of the Nord SS.11 ATGM on the Aérospatiale Alouette II helicopter by the French Armed Forces. After then, such weapon systems were widely adopted by other nations.
On 13 April 1972—during the Vietnam War—Americans Major Larry McKay, Captain Bill Causey, First Lieutenant Steve Shields, and Chief Warrant Officer Barry McIntyre became the first helicopter crew to destroy enemy armor in combat. A flight of two AH-1 Cobra helicopters, dispatched from Battery F, 79th Artillery, 1st Cavalry Division, were armed with the newly developed M247 70 millimeter (2.8 in) HEAT rockets, which were yet untested in the theatre of war. The helicopters destroyed three T-54 tanks that were about to overrun a U.S. command post. McIntyre and McKay engaged first, destroying the lead tank.
| Technology | Explosive weapons | null |
256015 | https://en.wikipedia.org/wiki/Stairs | Stairs | Stairs are a structure designed to bridge a large vertical distance between lower and higher levels by dividing it into smaller vertical distances. This is achieved as a diagonal series of horizontal platforms called steps which enable passage to the other level by stepping from one to another step in turn. Steps are very typically rectangular. Stairs may be straight, round, or may consist of two or more straight pieces connected at angles.
Types of stairs include staircases (also called stairways) and escalators. Some alternatives to stairs are elevators (also called lifts), stairlifts, inclined moving walkways, ladders, and ramps. A stairwell is a vertical shaft or opening that contains a staircase. A flight (of stairs) is an inclined part of a staircase consisting of steps (and their lateral supports if supports are separate from steps).
History
The concept of stairs is believed to be 8000 years old, and stairs are among the oldest structures in architectural history. The oldest example of spiral stairs dates back to the 400s BC. Medieval architecture saw experimentation with many different shapes, and the Renaissance even more so with varied designs.
Components and terms
A stair, or a stairstep, is one step in a flight of stairs. A staircase or stairway is one or more flights of stairs leading from one floor to another, and includes landings, newel posts, handrails, balustrades, and additional parts.
In buildings, stairs is a term applied to a complete flight of steps between two floors. A stair flight is a run of stairs or steps between landings. A stairwell is a compartment extending vertically through a building in which stairs are placed. A stair hall is the stairs, landings, hallways, or other portions of the public hall through which it is necessary to pass when going from the entrance floor to the other floors of a building. Box stairs are stairs built between walls, usually with no support except the wall strings.
Stairs may be in a "straight run", leading from one floor to another without a turn or change in direction. Stairs may change direction, commonly by two straight flights connected at a 90° angle landing. Stairs may also return onto themselves with 180° angle landings at each end of straight flights forming a vertical stairway commonly used in multistory and highrise buildings. Many variations of geometrical stairs may be formed of circular, elliptical and irregular constructions.
Ascending a set of stairs to a higher floor is often referred to as going "upstairs", the opposite being "downstairs". The same words can also be used to mean the upper or lower floors of a building, respectively.
Steps
Each step is composed of a tread and a riser. Some treads may include a nosing.
Tread: The part of the stairway that is stepped on. It is constructed to the same specifications (thickness) as any other flooring. The tread "depth" is measured from the back of one tread to the back of the next. The "width" is measured from one side to the other.
Riser: The near-vertical element in a set of stairs, forming the space between one step and the next. It is sometimes slightly inclined from the vertical so that its top is closer than its base to the person climbing the stairs. If a physical riser is not present, the design is described as "open riser". This is often the case in unfinished basements.
Nosing: An edge part of the tread that protrudes over the riser beneath. If it is present, this means that, measured horizontally, the total "run" length of the stairs is not simply the sum of the tread lengths, as the treads overlap each other. Some building codes require stair nosings in specific cases where the tread depth is below a minimum value. They provide additional length to the tread without changing the pitch of the stairs.
Starting or feature tread: Where stairs are open on one or both sides, the first step above the lower floor or landing may be wider than the other steps and rounded. When the starting step is rounded, the balusters typically are arranged in a true spiral around the circumference of the rounded portion, and the handrail has a flat spiral called a "volute" that connects the tops of the balusters. Besides the cosmetic appeal, starting steps allow the balusters to form a wider, more stable base for the end of the handrail. Handrails that simply end at a post at the foot of the stairs can be less sturdy, even with a thick post. A double-ended feature tread can be used when both sides of the stairs are open. There are a number of different styles and uses of feature tread.
Stringer board, stringer, or sometimes just string: The structural member that supports the treads and risers in standard staircases. There are typically three stringers, one on either side and one in the center, with more added as necessary for wider spans. Side stringers are sometimes dadoed to receive risers and treads for increased support. Stringers on open-sided stairs are called "cut stringers".
Tread rise: The distance from the top of one tread to the top of the next tread.
Total rise: The distance the flight of stairs raises vertically between two finished floor levels.
Winders: Winders are steps that are narrower on one side than the other. They are used to change the direction of the stairs without landings. A series of winders form a circular or helical stairway. When three steps are used to turn a 90° corner, the middle step is called a kite winder because it is a kite-shaped quadrilateral.
Trim: Various moldings are used to decorate and in some instances support stairway elements. Scotia or quarter-round are typically placed beneath the nosing to support its overhang.
Curtail step: A decorative step at the bottom of a staircase that usually houses the volute and volute newel turning for a continuous handrail. The curtail tread will follow the flow of the volute.
Handrails
The balustrade is the system of railings and balusters that prevents people from falling over the edge.
Banister, railing, or handrail: The angled member for handholding, as distinguished from the vertical balusters which hold it up for stairs that are open on one side. Railings are often present on both sides of stairs, but can sometimes be only on one side or absent altogether. On wide staircases, there can be one or more railings between the two sides. The term "banister" is sometimes used to mean just the handrail, sometimes the handrail and the balusters, or sometimes just the balusters.
Volute: A handrail end element for the bullnose step that curves inward like a spiral. A volute is said to be right or left-handed depending on which side of the stairs the handrail is as one faces up the stairs.
Turnout: Instead of a complete spiral volute, a turnout deviates from the normal handrail center line away from the flight to give a wider opening as one enters the staircase. The turnout is usually set over a newel post to give added stability to the handrail.
Gooseneck: The vertical handrail that joins a sloped handrail to a higher handrail on the balcony or landing is a gooseneck.
Rosette: Where the handrail ends in the wall and a half-newel is not used, it may be trimmed by a rosette.
Easings: Wall handrails are mounted directly onto the wall with wall brackets. At the bottom of the stairs, such railings flare to a horizontal railing and this horizontal portion is called a "starting easing". At the top of the stairs, the horizontal portion of the railing is called an "over easing".
Core rail: Wood handrails often have a metal core to provide extra strength and stiffness, especially when the rail has to curve against the grain of the wood. The archaic term for the metal core is "core rail".
Baluster: A term for the vertical posts that hold up the handrail. Sometimes simply called guards or spindles. Treads often require two balusters. The second baluster is closer to the riser and is taller than the first. The extra height in the second baluster is typically in the middle between decorative elements on the baluster. That way the bottom decorative elements are aligned with the tread and the top elements are aligned with the railing angle.
Newel: A large baluster or post used to anchor the handrail. Since it is a structural element, it extends below the floor and subfloor to the bottom of the floor joists and is bolted right to the floor joist. A half-newel may be used where a railing ends in the wall. Visually, it looks like half the newel is embedded in the wall. For open landings, a newel may extend below the landing for a decorative newel drop.
Finial: A decorative cap to the top of a newel post, particularly at the end of the balustrade.
Baserail, or shoerail: For systems where the balusters do not start at the treads, they rest on a baserail. This allows for identical balusters, avoiding the second-baluster problem.
Fillet: A decorative filler piece on the floor between balusters on a balcony railing.
Handrails may be continuous (sometimes called over-the-post) or post-to-post (or more accurately newel-to-newel). For continuous handrails on long balconies, there may be multiple newels and tandem caps to cover the newels. At corners, there are quarter-turn caps. For post-to-post systems, the newels project above the handrails.
Another, more classical, form of handrailing that is still in use is the tangent method. A variant of the cylindric method of layout, it allows for continuous climbing and twisting rails and easings. It was defined from principles set down by architect Peter Nicholson in the 18th century.
Other terms
Flight: Any uninterrupted series of steps between floors or levels.
Landing, or platform: A landing is the area of a floor near the top or bottom step of a stair. An intermediate landing is a small platform that is built as part of stairs between main floor levels and is typically used to allow the stairs to change directions, or to allow the user a rest. A half landing, or half-pace, is where a 180° change in direction is made, and a quarter landing is where a 90° change in direction is made (on an intermediate landing). As intermediate landings consume floor space, they can be expensive to build. However, changing the direction of the stairs allows stairs to fit where they would not otherwise, or provides privacy to the upper level as visitors downstairs cannot simply look up the stairs to the upper level due to the change in direction. The word 'landing' is also commonly used for a general corridor in any of the floors above the ground floor of a building, even if that corridor is located well away from a staircase.
Apron: This is a wooden fascia board used to cover up trimmers and joists exposed by stairwell openings. The apron may be moulded or plain, and is intended to give the staircase a cleaner look by cloaking the side view.
Balcony: For stairs with an open concept upper floor or landing, the upper floor is functionally a balcony. For a straight flight of stairs, the balcony may be long enough to require multiple newels to support the length of railing.
Floating stairs: A flight of stairs is said to be "floating" if there is nothing underneath. The risers are typically missing as well to emphasize the open effect, and create a functional feature suspended in midair. There may be only one stringer or the stringers otherwise minimized. Where building codes allow, there may not even be handrails.
Mobile safety steps: Can be used as temporary, safe replacements for many types of stairs.
Runner: Carpeting that runs down the middle of the stairs. Runners may be directly stapled or nailed to the stairs, or may be secured by a specialized bar, known as a stair rod, that holds the carpet in place where the tread meets the riser.
Spandrel: If there is not another flight of stairs immediately underneath, the triangular space underneath the stairs is called a "spandrel". It is frequently used as a closet.
Stairwell: The spatial opening, usually a vertical shaft, containing an indoor stairway; by extension, it is often used to include the stairs it contains.
Staircase tower: A tower attached to, or incorporated into, a building that contains stairs linking the various floors.
Dimensions
The dimensions of a stair, in particular the rise height and going of the steps, should remain the same along the stairs.
The following stair dimensions are important:
The rise height or rise of each step is measured from the top of one tread to the next. It is not the physical height of the riser; the latter excludes the thickness of the tread. A person using the stairs would move this distance vertically for each step taken.
The tread depth of a step is measured from the edge of the nosing to the vertical riser; if the steps have no nosing, it is the same as the going; otherwise it is the going plus the extent of one nosing.
The going of a step is measured from the edge of the nosing to the edge of nosing in plan view. A person using the stairs would move this distance forward with each step they take.
To avoid confusion, the number of steps in a set of stairs is always the number of risers, not the number of treads.
The total run or total going of the stairs is the horizontal distance from the first riser to the last riser. It is often not simply the sum of the individual tread lengths, due to the nosing overlapping between treads. If there are N steps, the total run equals N-1 times the going: the tread of the last step is part of the upper landing and is not counted.
The total rise of the stairs is the height between floors (or landings) that the flight of stairs is spanning. If there are N steps, the total rise equals N times the rise of each step.
The slope or pitch of the stairs is the ratio between the rise and the going (not the tread depth, due to the nosing). It is sometimes called the rake of the stairs. The pitch line is the imaginary line along the tip of the nosing of the treads. In the UK, stair pitch is the angle the pitch line makes with the horizontal, measured in degrees. The value of the slope, as a ratio, is then the tangent of the pitch angle. A short worked example of these relationships follows this list of dimensions.
Headroom is the height above the nosing of a tread to the ceiling above it.
Walkline – for curved stairs, the inner radius of the curve may result in very narrow treads. The "walkline" is the imaginary line some distance away from the inner edge on which people are expected to walk. The building code will specify the distance. Building codes will then specify the minimum tread size at the walkline.
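As the worked example mentioned above, here is a minimal sketch of the run, rise and pitch relationships in Python; the storey height, going and number of risers are purely illustrative figures, not code requirements.

```python
# Illustrative figures only: rise, total run and pitch for a hypothetical flight.
import math

total_rise = 2700.0   # mm between finished floor levels (assumed)
going = 250.0         # mm per step (assumed)
n_steps = 15          # number of risers

rise = total_rise / n_steps                    # rise of each step: 180 mm
total_run = (n_steps - 1) * going              # last tread is part of the upper landing
pitch = math.degrees(math.atan(rise / going))  # slope = rise / going

print(f"rise {rise:.0f} mm, run {total_run:.0f} mm, pitch {pitch:.1f} degrees")
```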
Forms
Stairs can take a large number of forms, combining straight runs, winders, and landings.
The simplest form is the straight flight of stairs, with neither winders nor landings. These types of stairs were commonly used in traditional homes, as they are relatively easy to build and only need to be connected at the top and bottom. However, many modern architects may not choose straight flights of stairs because:
the upstairs is directly visible from the bottom of a straight flight of stairs
it is potentially more dangerous in that a fall is not interrupted until the bottom of the stairs
a straight flight requires enough straight length for the entire run of the stairs
Another form of a straight staircase is the "space saver staircase", also known as "paddle stairs" or "alternating tread staircases". These designs can be used for a steeper rise, but they can only be used in certain circumstances, and must comply with regulations.
However, a basic straight flight of stairs is easier to design and construct than one with landings or winders. The rhythm of stepping is not interrupted in a straight run, which may offset an increased fall risk by helping to prevent a misstep in the first place. However, many long straight runs of stairs will require landings or winders to comply with safety standards in building regulations.
Straight stairs can have a mid-landing incorporated, but it is probably more common to see stairs that use a landing or winder to produce a bend in the stairs. A straight flight with a mid-landing will require a lot of straight length, and may be more commonly found in large commercial buildings. L-shaped stairways have one landing and usually a change in direction by 90°. U-shaped stairs may employ a single wider landing for a change in direction of 180°, or two landings for two changes in direction of 90° each. A Z-shaped staircase incorporates two opposite 90° turns, creating a shape similar to that of the letter "Z" if seen from above. The use of landings and a possible change of direction have the following effects:
The upstairs is not directly visible from the bottom of the stairs, which can provide more privacy for the upper floor
A fall can be halted at the landing point, reducing the total distance and potential injury
Though the landings consume more total floor space, there is no requirement for a single lengthy run, allowing more flexible floorplan designs
For larger stairs, particularly in exterior applications, a landing can provide a place for pedestrians to rest their legs
Other forms include stairs with winders that curve or bend at an acute angle, three flights of stairs that join at a landing to form a T-shape, and stairs with balconies and complex designs.
A "mono string" staircase is a term used for a staircase with treads arranged along a single steel beam. A "double string" staircase has two steel beams, one on either side, and treads spanning between.
Helical ("spiral") stairs
Terminology
The term "helical stair" has many synonyms:
The term "spiral" has a more narrow definition in a mathematical context, as a curve which lies in a single plane and moves towards or away from a central point, with a continuously changing radius. The mathematical term for the 3-dimensional curve traced where the locus progresses at a fixed radius from a fixed line while moving in a circular motion around it is a "helix". Since the very purpose of a stairway is to change elevation, it is inherently a 3-dimensional path.
Loose everyday usage conflates the terms helical and spiral, but the vast majority of circular stairs are actually helical. True spiral staircases would be nonfunctional flat structures, although functional hybrid helical spiral staircases can be constructed. This article attempts to preferentially use the terms "helix" and "helical" to describe circular stairways more clearly and precisely, while reserving the term "spiral" for a curve restricted to a flat plane.
Helical stairs, sometimes referred to in architectural descriptions as vice, wind around a newel (also called the "central pole"). The presence or absence of a central pole or newel does not affect the overall terminology applied to the design of the structure. In Scottish architecture, helical stairs are commonly known as a turnpike stair.
Helical stairs typically have a handrail along the outer periphery only, and on the inner side may have just a central pole. A "squared helical" stair fills a square stairwell and expands the steps and railing to a square, resulting in unequal steps (wider and longer where they extend into a corner of the square). A "pure helix" fills a circular stairwell, and has multiple steps and handrail elements which are identical and positioned screw-symmetrically.
Helical stairs have a handedness or chirality, analogous to the handedness of screw threads, either right-handed or left-handed helical shapes. Ascending a right-handed helix rises counter-clockwise, while ascending a left-handed helix rises clockwise (both as viewed from above).
Geometry
A fundamental advantage of helical stairs is that they can be very compact, fitting into very narrow spaces and occupying a small footprint. For this reason, they can often be found in ships and submarines, industrial installations, small loft apartments, and other locations where floorspace is scarce. However, this compactness can come at the expense of requiring great craftsmanship and care to produce a safe and effective structure. By contrast, grand helical stairs occupying wide sweeps of space can also be built, to showcase luxurious funding and elegant taste. Architects have used the twisting curvilinear shape as an embellishment, either within or outside of their buildings.
Helical stairs have the disadvantage of being very steep if they are tight (small radius) or are otherwise not supported by a center column. The cylindrical spaces they occupy can have a narrow or wide diameter:
The wider the helix, the more steps can be accommodated per revolution around the central axis. Therefore, if the helix is large in diameter, due to having a central support column that is strong (and large in diameter) with a special handrail that helps to distribute the load, each step may be longer and therefore the rise between each step may be smaller (equal to that of regular steps). Otherwise, the circumference of the circle at the walk line will be so small that it will be impossible to maintain a normal tread depth and a normal rise height without compromising headroom before reaching the upper floor.
A small-diameter helix can still have wider treads near the central axis, by making the treads "dance". The narrow end of each tapering wedge-shaped tread is widened, and installed to overlap the adjacent treads above and below.
To maintain headroom, some helical stairs with a very short diameter must also have a very high rise for each step. These are typically cases where the stairwell must be a small diameter to fit a narrow space, or must not have any center support by design, or may not have any perimeter support. A tight helical stair with a central pole is very space efficient in the use of floor footprint. However, this type of stairway must be used carefully, to avoid an injurious fall.
"Open well" helical or circular stairs designed by architects often do not have a central pole, but there usually is a handrail at both sides of the treads. These designs have the advantage of a more uniform tread depth when compared to a narrow helical staircase. Such stairs may also be built around an elliptical or oval footprint, or even a triangular or pentagonal core. Lacking a central pole, an open well staircase is supported at its outer periphery, or in some cases may be a completely self-supporting and free-standing structure.
An example of perimeter support is the Vatican stairwell or the Gothic stairwell. The latter stairwell is tight because of its location where the diameter must be small. Many helices, however, have sufficient width for normal-size treads () by being supported by any combination of a center pole, perimeter supports attaching to or beneath the treads, and a helical handrail. In this manner, the treads may be wide enough to accommodate low rises. In self-supporting stairs, the helix needs to be steep to allow the weight to distribute safely down the structure in the most vertical manner possible. Helical steps with center columns or perimeter support do not have this limitation. Building codes may limit the use of helical stairs to small areas or secondary usage, if their treads are not sufficiently wide or have risers taller than .
Double helix staircases are possible, with two independent helical stairs in the same vertical space, allowing one person to ascend and another to descend without ever meeting, if they choose different helices. For examples, the Pozzo di San Patrizio allows one-way traffic so that laden and unladen mules can ascend and descend without obstruction, while Château de Chambord, Château de Blois, and the Crédit Lyonnais headquarters ensure separation for social purposes.
Emergency exit stairways, though built with landings and straight runs of stairs, are often functionally double helices, with two separate stairs intertwined and occupying the same floor footprint. This is often in compliance with legal safety requirements to have two independent fire escape paths.
Helical stairs can be characterized by the number of turns that are made. A "quarter-turn" stair deposits the person facing 90° from the starting orientation. Likewise, there are half-turn, three-quarters-turn and full-turn stairs. A continuous helix may make many turns depending on the height. Very tall multi-turn helical staircases are usually found in old stone towers within fortifications, churches, and in lighthouses. Winders may be used in combination with straight stairs to turn the direction of the stairs. This allows for a large number of permutations in designs.
Historic uses
The earliest known helical staircases appear in Temple A in the Greek colony Selinunte, Sicily, to both sides of the cella. The temple was constructed around 480–470 BCE.
When used in Roman architecture, helical stairs were generally restricted to elite luxury structures. They were then adopted into Christian ecclesiastic architecture. During the Renaissance and Baroque periods, increasingly spectacular helical stairways were devised, first deleting walled enclosures, and then deleting a central post to leave an open well. Modern designs have trended towards minimalism, culminating in helical stairs made largely of transparent glass, or consisting only of stair treads with minimal visible support.
There is a common misconception that helical staircases in castles rose in a clockwise direction, to hinder right-handed attackers. While clockwise helical staircases are more common in castles than anti-clockwise, they were even more common in medieval structures without a military role, such as religious buildings. Studies of helical stairs in castles have concluded that "the role and position of spirals in castles ... had a much stronger domestic and status role than a military function" and that "there are sufficient examples of anticlockwise stairs in Britain and France in [the 11th and 12th centuries] to indicate that the choice must have depended both on physical convenience and architectural practicalities and there was no military ideology that demanded clockwise staircases in the cause of fighting efficiency or advantage".
Developments in manufacturing and design have also led to the introduction of kit form helical stairs. Modular, standardized steps and handrails can be bolted together to form a complete unit. These stairs can be made out of steel, timber, concrete, or a combination of materials.
Alternating tread stairs
Where there is insufficient space for the full run length of normal stairs, "alternating tread stairs" may be used (other names are "paddle stairs", "zig-zag stairs", or "double-riser stairs"). Alternating tread stairs can be designed to allow for a safe forward-facing descent of very steep stairs (however, designs with recessed treads or footholds do not have this feature). The treads are designed such that they alternate between treads for each foot: one step is wide on the left side; the next step is wide on the right side. There is insufficient space on the narrow portion of the step for the other foot to stand, hence the person must always use the correct foot on the correct step. The slope of alternating tread stairs can be as high as 65° as opposed to standard stairs, which are almost always less than 45°.
An advantage of alternating tread stairs is that people can descend while facing forward, in the direction of travel. The only other alternative in such short spaces would be a ladder, which requires a backward-facing descent. Alternating tread stairs may not be safe for small children, the elderly, or the physically challenged. Building codes typically classify them as ladders, and will only allow them where ladders are allowed, usually basement or attic utility or storage areas infrequently accessed.
The block model in the image illustrates the space efficiency gained by an alternating tread stair. The alternating stairs (3) requires one unit of space per step: the same as the half-width stairs (2), and half as much as the full-width stairs (1). Thus, the horizontal distance between steps is in this case reduced by a factor of two, reducing the size of each step. The horizontal distance between steps is reduced by a factor less than two if for construction reasons there are narrow "unused" step extensions.
These stairs often (including this example) illustrate the mathematical principle of glide plane symmetry: the mirror image with respect to the vertical center plane corresponds to a shift by one step.
Alternating tread stairs are sometimes referred to as "witches stairs", in the supposed belief that they were created during an earlier era as an attempt to repel witches who were thought to be unable to climb such stairs. Such a fanciful origin of the term has since been disproved, with experts finding no mention in any historical literature of stairs that were believed to prevent access by witches.
Alternating tread stairs have been in use since at least 1888. Today, the design is used in some loft apartments to access bedrooms or storage spaces.
Emergency exit stairs
Local building codes often dictate the number of emergency exits required for a building of a given size, including specifying a minimum number of stairwells. For any building bigger than a private house, modern codes invariably specify at least two sets of stairs, completely isolated from each other so that if one becomes impassable due to smoke or flames, the other remains usable.
The traditional way to satisfy this requirement was to construct two separate stairwell stacks, each occupying its own footprint within each floorplate. Each stairwell is internally configured into an arrangement often called a "U-return" or "return" design. The two stairwells may be constructed next to each other, separated by a fireproof partition, or optionally the two stairwells may be located at some distance from each other within the floorplan. These traditional arrangements have the advantage of being easily understood by building occupants and occasional visitors.
Some architects save floor footprint space while still meeting the exit requirement, by housing two stairwells in a "double helix" or "scissors stairs" configuration whereby two stairwells occupy the same floor footprint, but are intertwined while being separated by fireproof partitions along their entire run. However, this design deposits anybody descending the stack into alternating locations on each successive floor, and this can be very disorienting. Some building codes recommend using a color-coded stripe and signage to distinguish otherwise identical-looking stairwells from each other, and to make following a quick exit path easier.
Ergonomics and building code requirements
Ergonomically and for safety reasons, stairs must have certain dimensions so that people can comfortably use them. Building codes typically specify certain clearances so that the stairs are not too steep or narrow.
Nicolas-François Blondel in the last volume of his Cours d'architecture (1675–1683) was the first known person to establish the ergonomic relationship of tread and riser dimensions. He specified that 2 × riser + tread = step length.
It is estimated that a noticeable misstep occurs once in 7,398 uses and a minor accident on a flight of stairs occurs once in 63,000 uses. Stairs can be a hazardous obstacle for some people, who may choose to live in residences without stairs so that they are protected from injury.
Stairs are not suitable for wheelchairs and other vehicles. A stairlift is a mechanical device for lifting wheelchairs up and down stairs. For sufficiently wide stairs, a rail is mounted to the treads of the stairs, or attached to the wall. A chair is attached to the rail and the person on the chair is lifted as the chair moves along the rail.
UK requirements
(overview of Approved document K – Stairs, Ladders and Ramps)
The 2013 edition "approved document K" categorises stairs as private, utility and general access
When considering stairs for private dwellings, all the specified measurements are in millimetres.
Compliance with the building regulations is required for stairs used where the difference in level is greater than 600.
Steepness of stairs – rise and going
Any rise between 150 and 220 used with any going between 220 and 300
Maximum rise 220 and minimum going 220, remembering that the maximum pitch of private stairs is 42°. The normal relationship between the dimensions of the rise and going is that twice the rise plus the going (2R + G) should be between 550 and 700
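A minimal sketch of these rules expressed as a check in Python; the candidate rise and going values are illustrative assumptions, and the thresholds are the private-stair figures quoted above (millimetres and degrees).

```python
# Illustrative check of a candidate private-stair step against the quoted rules.
import math

def private_stair_ok(rise, going):
    pitch = math.degrees(math.atan(rise / going))
    return (150 <= rise <= 220 and
            220 <= going <= 300 and
            550 <= 2 * rise + going <= 700 and
            pitch <= 42)

print(private_stair_ok(190, 250))   # 2R + G = 630, pitch ~ 37 degrees -> True
print(private_stair_ok(220, 220))   # pitch = 45 degrees -> False
```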
Construction of steps
Steps should have level treads; they may have open risers, but if so the treads should overlap by at least 16mm. Domestic private stairs are likely to be used by children under 5 years old, so the openings in risers and the spacing of handrail balusters should be constructed so that a 100mm diameter sphere cannot pass through, in order to prevent children from sticking their heads through them and potentially getting stuck.
Headroom
A headroom of 2000mm is adequate. Special considerations can be made for loft conversions.
Width of flights
No recommendations are given for stair widths.
Length of flights
The approved document refers to a maximum of 16 risers (steps) for utility stairs and 12 for general access stairs; there is no limit for private stairs. In practice there will be fewer than 16 steps, as 16 × 220 gives a total rise (storey height) of over 3,500, well above that of a typical domestic storey.
Landings
Level, unobstructed landings should be provided at the top and bottom of every flight. Their width and length should be at least the width of the stairs, and they can include part of the floor. A door may swing across the landing at the bottom of a flight, but it must leave a clear space of at least 400 across the whole landing
Tapered steps
There are special rules for stairs with tapered steps as shown in the image Example of Winder Stairs above
Alternate tread stairs can be provided in space-saving situations
Guarding
Flights and landings must be guarded at the sides where the drop is more than 600mm. As domestic private stairs are likely to be used by children under 5, the guarding must be constructed so that a 100mm diameter sphere cannot pass through any opening, or so that children will not be able to climb it. The guarding height for internal private stairs should be at least , and the guarding should be able to withstand a horizontal force of 0.36 kN/m.
US requirements
American building codes, while varying from State to State and County to County, generally specify the following parameters:
Minimum tread length, typically excluding the nosing for private residences. Some building codes also specify a minimum riser height, often .
Riser-Tread formula: Sometimes the stair parameters will be something like riser plus tread equals ; another formula is 2 times riser + tread equals , the length of a stride. Thus a rise and a tread exactly meets this code. If only a rise is used then a tread is required. This is based on the principle that a low rise is more like walking up a gentle incline and so the natural swing of the leg will be longer.
Low-rise stairs are very expensive in terms of the space consumed. Such low-rise stairs were built into the Winchester Mystery House to accommodate the infirmities of the owner, Sarah Winchester, before the invention of the elevator. These stairways, called "Easy Risers", consist of five flights wrapped into a multi-turn arrangement with a total width equal to more than four times the individual flight width and a depth roughly equal to one flight's run plus this width. The flights have varying numbers of steps.
Slope: A value for the rise-to-tread ratio of 17/29 ≈ 0.59 is considered optimal; this corresponds to a pitch angle of about 30°.
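As a quick arithmetic check of the quoted figures (the ratio is dimensionless, so the result does not depend on whether rise and tread are measured in inches or millimetres): arctan(17/29) ≈ arctan(0.586) ≈ 30.4°, consistent with the stated pitch angle of about 30°.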
Variance in riser height and tread depth between steps on the same flight should be very low. Building codes require variances no larger than between the depths of adjacent treads or the heights of adjacent risers; within a flight, the tolerance between the largest and smallest riser, or between the largest and smallest tread, cannot exceed . The reason is that on a continuous flight of stairs people get used to a regular step and may trip if there is a step that is different, especially at night. The general rule is that all steps on the same flight must be identical; hence, stairs are typically custom made to fit the particular floor-to-floor height and the horizontal space available. Special care must be taken on the first and last risers. Stairs must be supported directly by the subfloor. If thick flooring (e.g. thick hardwood planks) is added on top of the subfloor, it will cover part of the first riser, reducing the effective height of the first step. Likewise, if the top riser simply reaches the subfloor and thick flooring is added, the final step up to the finished floor will be taller than the other risers. The first and last riser heights of the rough stairs are therefore modified to adjust for the addition of the finished floor.
Maximum nosing protrusion, typically to prevent people from tripping on the nosing.
Height of the handrail. This is typically between , measured to the nose of the tread. The minimum height of the handrail for landings may be different and is typically .
Handrail diameter. The size has to be comfortable for grasping and is typically between .
Maximum space between the balusters of the handrail. This is typically .
Openings (if they exist) between the bottom rail and treads are typically no bigger than .
Headroom: At least .
Maximum vertical height between floors or landings. This allows people to rest and limits the height of a fall.
Mandate handrails if there is more than a certain number of steps (typically 2 risers).
Minimum width of the stairway, with and without handrails.
Doors may not swing over steps; the arc of a door must be completely on the landing/floor.
A stairwell may be designated as an area of refuge as well as a fire escape route, due to its fire-resistance rated design and fresh air supply.
The Americans with Disabilities Act and other accessibility standards by state, such as the Architectural Barriers Texas Accessibility Standards (TAS), do not allow open risers on accessible or egress stairs.
Stairs in art and architecture
Religious shrines and memorial structures are often approached via stairs, sometimes numbering hundreds or thousands of steps. Many Neoclassical buildings feature prominent wide stairs ascending to an elevated platform or plinth where the main entrance is located. In recent years, increasing concerns about accessibility have encouraged architects to retrofit discreet lower-level public entrances or elevators to ease wheelchair access.
Large open-well helical stairs have been used as a central feature in public buildings from the Renaissance until modern times. A 21st-century version is the helical, glass-enclosed, glass-treaded stairway found at the center of several flagship Apple Stores.
Vessel is an artistic structure and visitor attraction which was built as part of the Hudson Yards Redevelopment Project in Manhattan, New York City. Built to plans by the British designer Thomas Heatherwick, the elaborate honeycomb-like structure rises 16 stories and consists of 154 flights of stairs, 2,500 steps, and 80 landings for visitors to explore.
Stairs may also be a fanciful physical construct such as the "stairs that go nowhere" located at the Winchester Mystery House. The Penrose stairs, devised by Lionel and Roger Penrose, are a famous impossible object. The image distorts perspective in such a manner that the stairs appear to be never-ending, a physical impossibility. The image was adopted by M. C. Escher in his iconic lithograph Ascending and Descending.
Notable sets of stairs
The longest stairway is listed by the Guinness Book of Records as the service stairway for the Niesenbahn funicular railway near Spiez, Switzerland, with 11,674 steps and a height of . The stairs are usually employee-only, but once a year there is a public run up them called the Niesenlauf.
Mount Girnar, one of the holiest sacred places for Hindus, Jains, and Buddhists, and also for some Muslims, is located in Junagadh district in the Indian state of Gujarat, on the Saurashtra peninsula. At a height of 1,100 metres, with five summits each adorned with several sacred places, it is ascended on foot by close to 10,000 steps over rugged terrain and through deciduous forest that is also the last home of the Asiatic lion. It is the longest completely stone-made stairway in the world.
Alipiri, India, is one of two ways to reach the Sri Venkateswara Swami Vaari Temple, Tirumala, from Tirupati on foot, and until recently it was the only one in modern use. The temple is the richest Hindu temple in the world in terms of donations received and wealth, and it is visited by about 50,000 to 100,000 pilgrims per day (30 to 40 million people annually on average); on special occasions and festivals such as the annual Brahmotsavam, the number of pilgrims rises to 500,000, making it the most-visited holy place in the world. Srivari Mettu, about 20 km away, is the original route, renovated and brought back into use in 2008. Alipiri is the longer route with more than 3,550 steps; Srivari Mettu is shorter with 2,388 steps.
A flight of 7,200 steps (including inner temple Steps), with 6,293 Official Mountain Walkway Steps, leads up the East Peak of Mount Tai in China.
The Haiku Stairs, on the island of Oahu, Hawaii, are approximately 4,000 steps which climb nearly . Originally used to access long wire radio antennas which were strung high above the Haiku Valley, between Honolulu and Kaneohe, they are closed to hikers.
The Flørli stairs, in Lysefjorden, Norway, have 4,444 wooden steps that climb from sea level to . It is a maintenance stairway for the water pipeline to the old Flørli hydro plant. The hydro plant is now closed down, and the stairs are open to the public. The stairway is claimed to be the longest wooden stairway in the world.
The longest stone stairs in Japan are the 3,333-step stairs of the Shakain temple in Yatsushiro, Kumamoto. The second longest, the Mount Haguro stone stairs in Tsuruoka, Yamagata, has 2,446 steps.
The CN Tower's staircase reaches the main deck level after 1,776 steps and the Sky Pod above after 2,579 steps; it is the tallest metal staircase on Earth.
The Gemonian stairs were infamous as a place of execution during the early Roman Empire, especially during the period postdating Tiberius.
The World Trade Center Survivors' Staircase is the last visible structure above ground level at the World Trade Center site. It was originally two outdoor flights of granite-clad stairs and an escalator that connected Vesey Street to the World Trade Center's Austin J. Tobin Plaza. During the September 11, 2001, attacks, the stairs served as an escape route for hundreds of evacuees from 5 World Trade Center, a 9-floor building adjacent to the 110-story towers. Stairwell A was the lone stairway left intact after the second plane hit the South Tower of the World Trade Center, and it was believed to have remained intact until the South Tower collapsed at 9:59 am. Fourteen people were able to escape from the floors at the impact zone (including one man who saw the plane coming at him), and four people escaped from the floors above the impact zone. Numerous 911 operators who received calls from individuals inside the South Tower were not well informed of the situation as it rapidly unfolded; many operators told callers not to descend the tower on their own, even though it is now believed that Stairwell A was most likely passable at and above the point of impact.
In London, England, a notable staircase is that to the Monument to the Great Fire of London, more commonly known simply as "the Monument". This is a column in the City of London, near the northern end of London Bridge, which commemorates the Great Fire of London. The top of the Monument is reached by a narrow winding staircase of 311 steps. Constructed between 1671 and 1677, it is the tallest isolated stone column in the world.
The Spanish Steps in Rome are a monument of late Italian Baroque architecture connecting the Piazza di Spagna with the Trinità dei Monti up the side of the Pincian Hill. Designed by Francesco De Sanctis and constructed 1723–1725, the 135 steps form a wide vista looking down toward the Tiber. The steps are adorned with garden terraces blooming with azaleas and have been widely celebrated in cultural work.
The Loretto Chapel in Santa Fe, New Mexico, is well known for its helix-shaped staircase, which has been nicknamed "Miraculous Stair". It has been the subject of legend and rumor, and the circumstances surrounding its construction and its builder are considered miraculous by the Sisters of Loretto and many visitors.
The El Toro 20, a twenty-step set of stairs in Lake Forest, California, was well known among skateboarders, BMXers, and inline skaters as a popular and challenging skate spot. The first skateboarder to do an ollie down it was Don Nguyen. The staircase was demolished in 2019.
The Grand Staircase of the Titanic was one of the most recognizable features of the British transatlantic ocean liner that sank on her maiden voyage in 1912 after a collision with an iceberg.
Decorated stair risers were used extensively in the Greco-Buddhist art of Gandhara, to form the pedestal to small devotional stupas. They were usually adorned with friezes, fantastic animals and decorations. A flight of stairs with decorated stair risers from the Chakhil-i-Ghoundi Stupa has been almost fully restored and can now be seen at the Guimet Museum in Paris. Archaeological research by the Italian IsMEO at the Butkara Stupa suggests that small decorative stairs were adjoined to Buddhist stupas at the time of the Indo-Greek Kingdom, and that they were decorated with Buddhist scenes.
Gallery
| Technology | Architectural elements | null |
256028 | https://en.wikipedia.org/wiki/New%20World%20quail | New World quail | The New World quail are small birds that, despite their similar appearance and habits to the Old World quail, belong to a different family known as the Odontophoridae. In contrast, the Old World quail are in the family Phasianidae. The geographical range of the New World quail extends from Canada to southern Brazil, and two species, the California quail and the bobwhite quail, have been successfully introduced to New Zealand. The stone partridge and Nahan's partridge, both found in Africa, also seem to belong to the family. Species are found across a variety of habitats from tropical rainforest to deserts, although few species are capable of surviving at very low temperatures. There are 34 species divided into 10 genera.
The legs of most New World quails are short but powerful, with some species having very thick legs for digging. They lack the spurs of many Old World galliforms. Although they are capable of short bursts of strong flight, New World quails prefer to walk, and they run from danger (or hide), taking off explosively only as a last resort. Plumage varies from dull to spectacular, and many species have ornamental crests or plumes on their heads. Moderate sexual dichromatism is seen in plumage, with males having the brighter plumage.
Behaviour and ecology
The New World quails are shy diurnal birds and generally live on the ground; even the tree quails, which roost in high trees, generally feed mainly on the ground. They are generalists with regards to their diet, taking insects, seeds, vegetation, and tubers. Desert species in particular consume seeds frequently.
Most of the information about the breeding biology of New World quails comes from North American species, which have been better studied than those of the Neotropics. The family is generally thought to be monogamous, and nests are constructed on the ground. Clutch sizes are large, as is typical within the Galliformes, ranging from three to six eggs for the tree quail and wood quail, and as high as 10–15 for the northern bobwhite. Incubation takes between 16 and 30 days depending on the species. Chicks are precocial and quickly leave the nest to accompany the parents in large family groups.
Northern bobwhite and California quail are popular gamebirds, with many taken by hunters, but these species have also had their ranges increased to meet hunting demand and are not threatened. They are also artificially stocked. Some species are threatened by human activity, such as the bearded tree quail of Mexico, which is threatened by habitat loss and illegal hunting.
Species
Subspecies English names by Çınar 2015.
Fossils
Genus †Miortyx Miller 1944
†Miortyx teres Miller 1944
†Miortyx aldeni Howard 1966
Genus †Nanortyx Weigel 1963
†Nanortyx inexpectatus Weigel 1963
Genus †Neortyx Holman 1961
†Neortyx peninsularis Holman 1961
Phylogeny
Position within the Galliformes.
Living Odontophoridae based on the work by John Boyd.
| Biology and health sciences | Galliformes | Animals |
256162 | https://en.wikipedia.org/wiki/Neutrophil | Neutrophil | Neutrophils are a type of phagocytic white blood cell and part of innate immunity. More specifically, they form the most abundant type of granulocytes and make up 40% to 70% of all white blood cells in humans. Their functions vary in different animals. They are also known as neutrocytes, heterophils or polymorphonuclear leukocytes.
They are formed from stem cells in the bone marrow and differentiated into subpopulations of neutrophil-killers and neutrophil-cagers. They are short-lived (between 5 and 135 hours; see the section on life span below) and highly mobile, as they can enter parts of tissue where other cells/molecules cannot. Neutrophils may be subdivided into segmented neutrophils and banded neutrophils (or bands). They form part of the polymorphonuclear cell family (PMNs) together with basophils and eosinophils.
The name neutrophil derives from staining characteristics on hematoxylin and eosin (H&E) histological or cytological preparations. Whereas basophilic white blood cells stain dark blue and eosinophilic white blood cells stain bright red, neutrophils stain a neutral pink. Normally, neutrophils contain a nucleus divided into 2–5 lobes.
Neutrophils are a type of phagocyte and are normally found in the bloodstream. During the beginning (acute) phase of inflammation, particularly as a result of bacterial infection, environmental exposure, and some cancers, neutrophils are one of the first responders of inflammatory cells to migrate toward the site of inflammation. They migrate through the blood vessels and then through interstitial space, following chemical signals such as interleukin-8 (IL-8), C5a, fMLP, leukotriene B4, and hydrogen peroxide (H2O2) in a process called chemotaxis. They are the predominant cells in pus, accounting for its whitish/yellowish appearance.
Neutrophils are recruited to the site of injury within minutes following trauma and are the hallmark of acute inflammation. They not only play a central role in combating infection but also contribute to pain in the acute period by releasing pro-inflammatory cytokines and other mediators that sensitize nociceptors, leading to heightened pain perception. However, due to some pathogens being indigestible, they may not be able to resolve certain infections without the assistance of other types of immune cells.
Structure
When adhered to a surface, neutrophil granulocytes have an average diameter of 12–15 micrometers (μm) in peripheral blood smears. In suspension, human neutrophils have an average diameter of 8.85 μm.
With the eosinophil and the basophil, they form the class of polymorphonuclear cells, named for the nucleus' multilobulated shape (as compared to lymphocytes and monocytes, the other types of white cells). The nucleus has a characteristic lobed appearance, the separate lobes connected by chromatin. The nucleolus disappears as the neutrophil matures, which is something that happens in only a few other types of nucleated cells. Up to 17% of female human neutrophil nuclei have a drumstick-shaped appendage which contains the inactivated X chromosome. In the cytoplasm, the Golgi apparatus is small, mitochondria and ribosomes are sparse, and the rough endoplasmic reticulum is absent. The cytoplasm also contains about 200 granules, of which a third are azurophilic.
Neutrophils will show increasing segmentation (many segments of the nucleus) as they mature. A normal neutrophil should have 3–5 segments. Hypersegmentation is not normal but occurs in some disorders, most notably vitamin B12 deficiency. This is noted in a manual review of the blood smear and is positive when most or all of the neutrophils have 5 or more segments.
Neutrophils are the most abundant white blood cells in the human body (approximately 10¹¹ are produced daily); they account for approximately 50–70% of all white blood cells (leukocytes). The stated normal range for human blood counts varies between laboratories, but a neutrophil count of 2.5–7.5 × 10⁹/L is a standard normal range. People of African and Middle Eastern descent may have lower counts, which are still normal. A report may divide neutrophils into segmented neutrophils and bands.
When circulating in the bloodstream and inactivated, neutrophils are spherical. Once activated, they change shape and become more amorphous or amoeba-like and can extend pseudopods as they hunt for antigens.
The capacity of neutrophils to engulf bacteria is reduced when simple sugars such as glucose, fructose and sucrose, as well as honey and orange juice, are ingested, while the ingestion of starches has no effect. Fasting, on the other hand, strengthens the neutrophils' phagocytic capacity to engulf bacteria. It was concluded that the function, and not the number, of phagocytes engulfing bacteria was altered by the ingestion of sugars. In 2007, researchers at the Whitehead Institute for Biomedical Research found that, given a selection of sugars on microbial surfaces, neutrophils reacted to some types of sugars preferentially: they preferentially engulfed and killed beta-1,6-glucan targets compared to beta-1,3-glucan targets.
Development
Life span
The average lifespan of inactivated human neutrophils in the circulation has been reported by different approaches to be between 5 and 135 hours.
Upon activation, they marginate (position themselves adjacent to the blood vessel endothelium) and undergo selectin-dependent capture followed by integrin-dependent adhesion in most cases, after which they migrate into tissues, where they survive for 1–2 days. Neutrophils have also been demonstrated to be released into the blood from a splenic reserve following myocardial infarction.
The distribution ratio of neutrophils in bone marrow, blood and connective tissue is 28:1:25.
Neutrophils are much more numerous than the longer-lived monocyte/macrophage phagocytes. A pathogen (disease-causing microorganism or virus) is likely to first encounter a neutrophil. Some experts hypothesize that the short lifetime of neutrophils is an evolutionary adaptation. The short lifetime of neutrophils minimizes propagation of those pathogens that parasitize phagocytes (e.g. Leishmania) because the more time such parasites spend outside a host cell, the more likely they will be destroyed by some component of the body's defenses. Also, because neutrophil antimicrobial products can also damage host tissues, their short life limits damage to the host during inflammation.
After phagocytosis of pathogens, neutrophils are themselves removed by macrophages. PECAM-1 and phosphatidylserine on the cell surface are involved in this process.
Function
Chemotaxis
Neutrophils undergo a process called chemotaxis via amoeboid movement, which allows them to migrate toward sites of infection or inflammation. Cell surface receptors allow neutrophils to detect chemical gradients of molecules such as interleukin-8 (IL-8), interferon gamma (IFN-γ), C3a, C5a, and leukotriene B4, which these cells use to direct the path of their migration.
Neutrophils have a variety of specific receptors, including ones for complement, cytokines like interleukins and IFN-γ, chemokines, lectins, and other proteins. They also express receptors to detect and adhere to endothelium and Fc receptors for opsonin.
In leukocytes responding to a chemoattractant, the cellular polarity is regulated by activities of small Ras or Rho guanosine triphosphatases (Ras or Rho GTPases) and the phosphoinositide 3-kinases (PI3Ks). In neutrophils, lipid products of PI3Ks regulate the activation of Rac1, hematopoietic Rac2, and RhoG GTPases of the Rho family and are required for cell motility. Ras GTPases and Rac GTPases regulate cytoskeletal dynamics and facilitate neutrophil adhesion, migration, and spreading. They accumulate asymmetrically at the plasma membrane at the leading edge of polarized cells. By spatially regulating Rho GTPases and organizing the leading edge of the cell, PI3Ks and their lipid products could play pivotal roles in establishing leukocyte polarity, acting as compass molecules that tell the cell where to crawl.
It has been shown in mice that in certain conditions neutrophils have a specific type of migration behaviour referred to as neutrophil swarming during which they migrate in a highly coordinated manner and accumulate and cluster to sites of inflammation.
Anti-microbial function
Being highly motile, neutrophils quickly congregate at a focus of infection, attracted by cytokines expressed by activated endothelium, mast cells, and macrophages. Neutrophils express and release cytokines, which in turn amplify inflammatory reactions by several other cell types.
In addition to recruiting and activating other cells of the immune system, neutrophils play a key role in the front-line defense against invading pathogens, and contain a broad range of proteins. Neutrophils have three methods for directly attacking microorganisms: phagocytosis (ingestion), degranulation (release of soluble anti-microbials), and generation of neutrophil extracellular traps (NETs).
Phagocytosis
Neutrophils are phagocytes, capable of ingesting microorganisms or particles. For targets to be recognized, they must be coated in opsonins, a process known as antibody opsonization. They can internalize and kill many microbes, each phagocytic event resulting in the formation of a phagosome into which reactive oxygen species and hydrolytic enzymes are secreted. The consumption of oxygen during the generation of reactive oxygen species has been termed the "respiratory burst", although it is unrelated to respiration or energy production.
The respiratory burst involves the activation of the enzyme NADPH oxidase, which produces large quantities of superoxide, a reactive oxygen species. Superoxide decays spontaneously or is broken down via enzymes known as superoxide dismutases (Cu/ZnSOD and MnSOD), to hydrogen peroxide, which is then converted to hypochlorous acid (HClO), by the green heme enzyme myeloperoxidase. It is thought that the bactericidal properties of HClO are enough to kill bacteria phagocytosed by the neutrophil, but this may instead be a step necessary for the activation of proteases.
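A simplified sketch of that sequence, written as standard textbook reaction equations (these summarize well-established biochemistry and are not quoted from this article's sources):

NADPH oxidase:        NADPH + 2 O₂ → NADP⁺ + H⁺ + 2 O₂⁻ (superoxide)
Superoxide dismutase: 2 O₂⁻ + 2 H⁺ → H₂O₂ + O₂
Myeloperoxidase:      H₂O₂ + Cl⁻ + H⁺ → HClO + H₂O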
Though neutrophils can kill many microbes, the interaction of neutrophils with microbes and molecules produced by microbes often alters neutrophil turnover. The ability of microbes to alter the fate of neutrophils is highly varied, can be microbe-specific, and ranges from prolonging the neutrophil lifespan to causing rapid neutrophil lysis after phagocytosis. Chlamydia pneumoniae and Neisseria gonorrhoeae have been reported to delay neutrophil apoptosis. Thus, some bacteria (and those that are predominantly intracellular pathogens) can extend the neutrophil lifespan by disrupting the normal process of spontaneous apoptosis and/or PICD (phagocytosis-induced cell death). On the other end of the spectrum, some pathogens such as Streptococcus pyogenes are capable of altering neutrophil fate after phagocytosis by promoting rapid cell lysis and/or accelerating apoptosis to the point of secondary necrosis.
Degranulation
Neutrophils also release an assortment of proteins in three types of granules by a process called degranulation. The contents of these granules have antimicrobial properties, and help combat infection. Glitter cells are polymorphonuclear leukocyte neutrophils with granules. Degranulation is postulated to occur in a hierarchical manner, with the sequential release of secretory vesicles, tertiary granules, specific granules, and azurophilic granules in response to increasing intracellular calcium concentrations. The release of neutrophils by degranulation occurs through exocytosis, regulated by exocytotic machinery including SNARE proteins, RAC2, RAB27, and others.
Neutrophil extracellular traps
In 2004, Brinkmann and colleagues described a striking observation that activation of neutrophils causes the release of web-like structures of DNA; this represents a third mechanism for killing bacteria. These neutrophil extracellular traps (NETs) comprise a web of fibers composed of chromatin and serine proteases that trap and kill extracellular microbes. It is suggested that NETs provide a high local concentration of antimicrobial components and bind, disarm, and kill microbes independent of phagocytic uptake. In addition to their possible antimicrobial properties, NETs may serve as a physical barrier that prevents further spread of pathogens. Trapping of bacteria may be a particularly important role for NETs in sepsis, where NETs are formed within blood vessels. Finally, NET formation has been demonstrated to augment macrophage bactericidal activity during infection. Recently, NETs have been shown to play a role in inflammatory diseases, as NETs could be detected in preeclampsia, a pregnancy-related inflammatory disorder in which neutrophils are known to be activated. Neutrophil NET formation may also impact cardiovascular disease, as NETs may influence thrombus formation in coronary arteries.
NETs are now known to exhibit pro-thrombotic effects both in vitro and in vivo. More recently, in 2020 NETs were implicated in the formation of blood clots in cases of severe COVID-19.
Tumor-associated neutrophils (TANs)
TANs can exhibit an elevated extracellular acidification rate when glycolysis levels increase. A metabolic shift in TANs can lead to tumor progression in certain areas of the body, such as the lungs. TANs support the growth and progression of tumors, unlike normal neutrophils, which would inhibit tumor progression through the phagocytosis of tumor cells. Using a mouse model of lung adenocarcinoma, researchers identified that both Glut1 and glucose metabolism increased in TANs. A study showed that lung tumor cells can remotely activate osteoblasts, and these osteoblasts can worsen tumors in two ways. First, they can induce the formation of SiglecF-high-expressing neutrophils, which in turn promote lung tumor growth and progression. Second, the osteoblasts can promote bone growth, forming a favorable environment for tumor cells to grow and form bone metastases.
Clinical significance
Low neutrophil counts are termed neutropenia. This can be congenital (developed at or before birth) or it can develop later, as in the case of aplastic anemia or some kinds of leukemia. It can also be a side-effect of medication, most prominently chemotherapy. Neutropenia makes an individual highly susceptible to infections. It can also be the result of colonization by intracellular neutrophilic parasites.
In alpha 1-antitrypsin deficiency, the important neutrophil elastase is not adequately inhibited by alpha 1-antitrypsin, leading to excessive tissue damage in the presence of inflammation – the most prominent one being emphysema. Negative effects of elastase have also been shown in cases when the neutrophils are excessively activated (in otherwise healthy individuals) and release the enzyme in extracellular space. Unregulated activity of neutrophil elastase can lead to disruption of pulmonary barrier showing symptoms corresponding with acute lung injury. The enzyme also influences activity of macrophages by cleaving their toll-like receptors (TLRs) and downregulating cytokine expression by inhibiting nuclear translocation of NF-κB.
In Familial Mediterranean fever (FMF), a mutation in the pyrin (or marenostrin) gene, which is expressed mainly in neutrophil granulocytes, leads to a constitutively active acute-phase response and causes attacks of fever, arthralgia, peritonitis, and – eventually – amyloidosis.
Hyperglycemia can lead to neutrophil dysfunction. Dysfunction in the neutrophil biochemical pathway myeloperoxidase as well as reduced degranulation are associated with hyperglycemia.
The absolute neutrophil count (ANC) is also used in diagnosis and prognosis. ANC is the gold standard for determining the severity of neutropenia, and thus of neutropenic fever. Any ANC < 1500 cells/mm³ is considered neutropenia, and < 500 cells/mm³ is considered severe. There is also new research tying ANC to myocardial infarction as an aid in early diagnosis. Neutrophils promote ventricular tachycardia in acute myocardial infarction.
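As a minimal illustration of how the cutoffs quoted above are applied, the following Python sketch classifies a count against them (the function name and return labels are illustrative assumptions, and this is not clinical guidance):

```python
def classify_anc(anc_cells_per_mm3: float) -> str:
    """Classify an absolute neutrophil count using the cutoffs cited above:
    below 1500 cells/mm3 is neutropenia, below 500 cells/mm3 is severe."""
    if anc_cells_per_mm3 < 500:
        return "severe neutropenia"
    if anc_cells_per_mm3 < 1500:
        return "neutropenia"
    return "not neutropenic"

# Example: a count of 1200 cells/mm3 falls below 1500 but above 500.
print(classify_anc(1200))  # -> "neutropenia"
```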
In autopsy, the presence of neutrophils in the heart or brain is one of the first signs of infarction, and is useful in the timing and diagnosis of myocardial infarction and stroke.
Pathogen evasion and resistance
As with other phagocytes, pathogens may evade or infect neutrophils. Some bacterial pathogens have evolved various mechanisms, such as virulence molecules, to avoid being killed by neutrophils. Collectively, these molecules may alter or disrupt neutrophil recruitment, apoptosis or bactericidal activity.
Neutrophils can also serve as host cells for various parasites that infect them while avoiding phagocytosis, including:
Leishmania major – uses neutrophils as vehicle to parasitize phagocytes
M. tuberculosis
M. leprae
Yersinia pestis
Chlamydia pneumoniae
Neutrophil antigens
There are five recognized sets of neutrophil antigens (HNA 1–5). The three HNA-1 antigens (a–c) are located on the low-affinity Fc-γ receptor IIIb (FCGR3B; CD16b). The single known HNA-2a antigen is located on CD177. The HNA-3 antigen system has two antigens (3a and 3b), which are located on the seventh exon of the CTL2 gene (SLC44A2). The HNA-4 and HNA-5 antigen systems each have two known antigens (a and b) and are located on the β2 integrin: HNA-4 on the αM chain (CD11b) and HNA-5 on the αL integrin unit (CD11a).
Subpopulations
Two functionally unequal subpopulations of neutrophils have been identified on the basis of different levels of reactive oxygen metabolite generation, membrane permeability, enzyme system activity, and ability to be inactivated. The cells of one subpopulation with high membrane permeability (neutrophil-killers) intensively generate reactive oxygen metabolites and are inactivated as a consequence of interaction with the substrate, whereas cells of the other subpopulation (neutrophil-cagers) produce reactive oxygen species less intensively, do not adhere to the substrate, and preserve their activity. Additional studies have shown that lung tumors can be infiltrated by various populations of neutrophils.
Video
Neutrophils display highly directional amoeboid motility in infected footpad and phalanges. Intravital imaging was performed in the footpad path of LysM-eGFP mice 20 minutes after infection with Listeria monocytogenes.
Additional images
| Biology and health sciences | Circulatory system | Biology |
256332 | https://en.wikipedia.org/wiki/Achillea | Achillea | Achillea is a genus of flowering plants in the family Asteraceae. The plants typically have frilly leaves and are known colloquially as yarrows, although this common name usually refers to A. millefolium. The genus was named after the Greek mythological character Achilles, whose soldiers were said to have used yarrow to treat their wounds; this is reflected by common names such as allheal and bloodwort. The genus is native primarily to Eurasia and North America.
Description
These plants typically have frilly, hairy, aromatic leaves. The plants show large, flat clusters of small flowers at the top of the stem. The flowers can be white, yellow, orange, pink or red and are generally visited by many insects, and are thus characterised by a generalised pollination system.
Taxonomy
Carl Linnaeus described the genus in 1753. The common name "yarrow" is usually applied to Achillea millefolium, but may also be used for other species within the genus.
Selected species
Nearly 1,000 names have been published within the genus Achillea, at or below the level of species. Sources differ widely as to which should be recognized as species, which merit subspecies or variety status, and which ones are merely synonyms of better-established names. For convenience, the Plant List maintained by the Kew Botanic Gardens is followed.
Cultivars
The following cultivars are recipients of the Royal Horticultural Society's Award of Garden Merit:
Achillea ageratifolia
Achillea 'Coronation Gold'
Achillea 'Credo'
Achillea filipendulina 'Cloth of Gold'
Achillea filipendulina 'Gold Plate'
Achillea 'Heidi'
Achillea 'Hella Glashoff'
Achillea 'Lachsschönheit' (Galaxy Series)
Achillea × lewisii 'King Edward'
Achillea 'Lucky Break'
Achillea 'Martina'
Achillea millefolium 'Lansdorferglut'
Achillea 'Mondpagode'
Achillea 'Moonshine'
Achillea 'Summerwine'
Etymology
The genus was named after the Greek mythological character Achilles. According to legend, Achilles' soldiers used yarrow to treat their wounds, hence some of its common names such as allheal and bloodwort.
Distribution and habitat
The genus is primarily native to Europe, temperate areas of Asia, and North America.
Ecology
Achillea species are used as food plants by the larvae of some Lepidoptera species.
Uses
Achillea species and cultivars are popular garden plants.
Gallery
| Biology and health sciences | Asterales | Plants |
256604 | https://en.wikipedia.org/wiki/Motmot | Motmot | The motmots or Momotidae are a family of birds in the order Coraciiformes, which also includes the kingfishers, bee-eaters and rollers. All extant motmots are restricted to woodland or forests in the Neotropics, and the largest are in Central America. They have a colourful plumage and a relatively heavy bill. All except the tody motmot have relatively long tails that in some species have a distinctive racket-like tip.
Behaviour
Motmots eat small prey such as insects and lizards, and will also take fruit. In Nicaragua and Costa Rica, motmots have been observed feeding on poison dart frogs.
Like most of the Coraciiformes, motmots nest in tunnels in banks, laying about four white eggs. Some species form large colonies of up to 40 paired individuals. The eggs hatch after about 20 days, and the young leave the nest after another 30 days. Both parents care for the young.
Motmots often move their tails back and forth in a wag-display that commonly draws attention to an otherwise hidden bird. Research indicates that motmots perform the wag-display when they detect predators (based on studies on turquoise-browed motmot) and that the display is likely to communicate that the motmot is aware of the predator and is prepared to escape. This form of interspecific pursuit-deterrent signal provides a benefit to both the motmot and the predator: the display prevents the motmot from wasting time and energy fleeing, and the predator avoids a costly pursuit that is unlikely to result in capture.
The largest concentration of motmots resides in Honduras and Guatemala, with a total of 7 subspecies. The turquoise-browed motmot is the national bird of Nicaragua and El Salvador.
There is also evidence that the male tail, which is slightly larger than the female tail, functions as a sexual signal in the turquoise-browed motmot.
In several species of motmots, the barbs near the ends of the two longest (central) tail feathers are weak and fall off due to abrasion with substrates, or fall off during preening, leaving a length of bare shaft, thus creating the racket shape of the tail. It was, however, wrongly believed in the past that the motmot shaped its tail by plucking part of the feather web to leave the racket. This was based on inaccurate reports made by Charles William Beebe. It has since been shown that these barbs are weakly attached and fall off due to abrasion with substrates and during routine preening. There are, however, also several species where the tail is "normal", these being the tody motmot, blue-throated motmot, rufous-capped motmot, and the Amazonian populations of the rufous and broad-billed motmots.
Taxonomy
A fossil genus of Oligocene coraciiform from Switzerland has been described as Protornis; it might be a primitive motmot or a more basal lineage. A partial momotid humerus found in early Hemphilian (Late Miocene, c. 8 mya) deposits in Alachua County, USA has not been named; it might belong to an extant genus.
The phylogenetic relationship between the six families that make up the order Coraciiformes is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
| Biology and health sciences | Coraciiformes | Animals |
256611 | https://en.wikipedia.org/wiki/Cardinalidae | Cardinalidae | Cardinalidae (sometimes referred to as the "cardinal-grosbeaks" or simply the "cardinals") is a family of New World-endemic passerine birds that consists of cardinals, grosbeaks, and buntings. It also includes several other genera such as the tanager-like Piranga and the warbler-like Granatellus. Membership of this family is not easily defined by a single or even a set of physical characteristics, but instead by molecular work. Among songbirds, they range from average-sized to relatively large, and have stout features, some species with large, heavy bills.
Members of this group are beloved for their brilliant red, yellow, or blue plumages seen in many of the breeding males in this family. Most species are monogamous breeders that nest in open-cup nests, with parents taking turns incubating the nest and taking care of their young. Most are arboreal species, although the dickcissel is a ground-dwelling prairie bird.
In terms of conservation, most members of this family are considered least concern by the IUCN Red List, though a few birds, such as the Carrizal seedeater, are considered to be endangered.
Field characteristics
The grosbeaks, seedeaters, and cardinals have large bills, while Granatellus and buntings have small bills. The cardinalid tanagers have stout, near pointed bills, with some species of Piranga having serrations along the edge of their upper bills. This bill shape is not always an indicator of relationships, as the various species of blue cardinalid species, like the blue grosbeak and Cyanoloxia grosbeaks are related to the buntings. Similarly, the cardinalid tanagers are closer to the cardinals and masked grosbeaks (see more in the systematics section). The head is medium to large in size, with a medium neck length. The body of cardinalids ranges from small to medium with lengths of 4.5 to 11 in (11 to 28 cm). Legs are also short to medium in length. The wings are medium and pointed. Cardinalids have nine visible primary feathers with the tenth primary feather being short in comparison.
The plumages of cardinalids are sexually dichromatic, as the males of many species display bright reds, oranges, blues or blacks. In most temperate species, males molt between seasons, so that non-breeding males somewhat resemble the females of their species. These species, such as the indigo bunting, exhibit a complex molt cycle, passing through four different stages of plumage within their first year of life. From spring to summer, birds progress from juvenile plumage to supplemental plumage, then change to a first basic (non-breeding) plumage from fall to winter, and finally reach the first alternate (breeding) plumage. Adults typically have the basic two-molt cycle, changing to basic plumage (completely or partially) in the late summer or fall, and then back to alternate plumage in the spring. Males of tropical species have the same coloration year-round. Females of all species are drabber in coloration by comparison, often showing a paler version of the males' coloration. The molting pattern in most cardinalids exhibits delayed plumage maturation, so first-year male birds are often in non-breeding plumage or at an intermediate stage. The molting pattern in cardinalids is divided into two types. A preformative molt is a partial molt in which only the body feathers are replaced, but not the wing and tail feathers; this is seen in many temperate and neotropical species. The second type is an eccentric preformative molt, in which only the outer primaries and inner secondaries are replaced. This molt is seen in some species of Cyanoloxia and Passerina.
Systematics
Traditionally, members of this group were classified as a tribe of the finch family Fringillidae (Cardinalini), characterized by heavy, conical, seed-crushing bills. The group consisted of the genera Pheucticus, Parkerthraustes, Saltator, Spiza, Cyanocompsa, Cyanoloxia, Porphyrospiza, Passerina, Caryothraustes, Periporphyrus, and Cardinalis. The issue taxonomists faced was that there were no unifying morphological traits on which the various studies agreed. In 2007, a mitochondrial DNA study by Klicka, Burns and Spellman, sampling all of the aforementioned genera and 34 of the total 42 species, found that the genera Parkerthraustes, Saltator, and Porphyrospiza were not members of the cardinal lineage, but are instead found throughout the tanager lineage (Thraupidae). The genera classified as thraupids at the time, Piranga, Habia, Chlorothraupis, and Amaurospiza, were found to be part of the cardinalid radiation. In addition, the genus Granatellus, originally classified as a parulid warbler, was also found to be part of Cardinalidae. The study found that with this new arrangement Cardinalidae can be classified into six subgroups, which have been supported by subsequent studies. The six subclades consist of the Pheucticus lineage, the Granatellus lineage, the “blue” lineage (Spiza, Cyanoloxia, Amaurospiza, Cyanocompsa, and Passerina), the Habia lineage (Habia and Chlorothraupis), the “masked” lineage (Caryothraustes, Periporphyrus, and Cardinalis), and the Piranga lineage (Piranga and Driophlox). These subclades and the membership of their genera have been widely supported in subsequent studies. A 2021 paper by Guallar et al., based on the preformative molting pattern of cardinalids, suggested the ancestor of this group was a forest-dwelling bird that dispersed into open habitats on numerous occasions.
The cardinalids are part of a larger grouping of American endemic songbirds, Emberizoidea, which also includes the aforementioned thraupids and parulids, as well as icterids (New World blackbirds), passerellids (New World sparrows), and several families that contain only one or a few genera. Several studies have placed cardinalids as either the sister group to Thraupidae, to Mitrospingidae (a small family whose genera were formerly classified as thraupids), or to a clade containing thraupids and mitrospingids. At least one study suggested that cardinalids could be treated as a subfamily of Thraupidae.
Phylogeny
The genus level cladogram of the Cardinalidae shown below is based on molecular phylogenetic study published in 2024 that analysed DNA sequences flanking ultraconserved elements (UCEs). The number of species in each genus is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Species list
The following 53 species and 14 genera are recognized by the IOC as of July 2024:
Natural history
Habitat, distribution and migration
The cardinalids can be found from Canada to northern Argentina and Uruguay, with Central America having the greatest concentration of species. Species are found year-round in the central and eastern United States and down into the neotropics. Cardinalids found in the West Indies are non-breeding migrants, and those in the western United States and Canada are breeding migrants. The western tanager is the northernmost species in the family, with breeding ranges reaching the southern portions of the Northwest Territories. The northern cardinal has been introduced in Hawaii and Bermuda. Cardinalids occupy a variety of habitats from forests to grassland and arid scrubland. Most North American cardinalid species migrate south for the winter, whether farther south on the continent or into the neotropics, except the northern cardinal and pyrrhuloxia, which stay year-round. The neotropical species are resident year-round in their ranges.
Feeding ecology
Cardinals, the dickcissel, seedeaters, buntings, and grosbeaks have thicker, seed-crushing bills that enable them to feed heavily on fruits and seeds outside of the breeding season (especially in the winter for northern species like the aforementioned dickcissel and northern cardinal). Once their breeding season begins, members of this group supplement their diet with invertebrate prey, which is vital when raising their young and refueling the energetic costs of reproduction and other daily activities. The genera Chlorothraupis, Habia, Piranga, and Granatellus have slightly longer and less deep bills, and their diet consists mostly of insects, fruit, nectar and sap, with seeds playing a lesser role. Cardinalids typically forage alone at low levels or on the ground, though some, like Piranga and the grosbeaks, forage high in the tree canopy. Many will come to birdfeeders, especially during the winter.
Breeding and reproduction
Nearly all cardinalids are monogamous breeders and are highly territorial. The monogamy, however, lasts only for the breeding season, and each year a bird may pair with a different partner. The main exception is the dickcissel, a polygynous species which nests in dense grasses and sedges. Other non-monogamous species include the lazuli and painted buntings, which perform extra-pair copulations with multiple partners. The family is known for its intense, brilliant songs. In some species, like the lazuli bunting and indigo bunting, song learning is match-based, meaning that first-year breeding males learn by copying the songs of nearby males, as opposed to learning them while they are in the nest. Even more unusual is that the females of a few species, such as the scarlet tanager, northern cardinal, pyrrhuloxia, and black-headed grosbeak, sing as well. In temperate species the breeding season occurs annually, while in tropical species it is year-round. The breeding seasons are in sync with the abundance of insects. Most species build open-cup nests made of grasses and twigs, depending on the species. These nests are placed in trees, often high up in the crown. Nest building is done by both partners or by the female alone. The male and female take turns incubating the nest, and the male often feeds the female. A clutch contains on average 1 to 6 eggs, with tropical species laying the fewest. Cardinalids produce one to three broods per season. As with other passerines, the young are born altricial and fledge between one and two weeks after hatching.
Conservation
As of 2021, the IUCN Red List classifies nearly 82 percent of cardinalids as least concern. However, a handful of species are of conservation concern. The rose-bellied bunting is a near-threatened endemic found in a small area of Oaxaca and Chiapas, Mexico; the black-cheeked ant-tanager is an endemic species found on the Osa Peninsula in Costa Rica; and the Carrizal seedeater is a critically endangered species found in spiny bamboo thickets in the understory of deciduous forest in a remote southeastern corner of Venezuela. All of these species are threatened by habitat loss and by confinement within their small ranges. The IUCN has not yet reevaluated the other species of seedeaters in the genus Amaurospiza.
Despite the vast majority of species being classified as least concern, there has been growing concern about how the ongoing climate crisis will affect the distribution and migration of many species across the globe. One study, led by Dr. Brooke L. Bateman and published in July 2020, focused on the risk North American birds will face from climate change and the measures needed to protect them. The study assessed 604 species from the United States and found that if the planet warmed by 3.0 degrees Celsius, many species, especially arctic birds, waterbirds, and boreal and western forest birds, would be highly vulnerable to climate change, and future conservation efforts would need to be put in place. Among the species sampled, the North American species of Piranga and Pheucticus were found to be the most climate-vulnerable of the cardinalids. These species will either lose a substantial amount of their range or will move north to escape the sudden change in their habitat.
A possibly extinct species is the controversial Townsend's bunting, a supposed enigmatic species related to the dickcissel. Townsend's bunting is known only from a single type specimen collected in Chester County, Pennsylvania, by John Kirk Townsend and described by John James Audubon in 1834. The specimen is housed in the National Museum of Natural History. Genetic work has not been done on this bird, but its plumage has been examined. The controversy stems from uncertainty among authors over whether the bird is an extinct species, a rare color variant of the dickcissel, or a hybrid between a female dickcissel and a male blue grosbeak. If the bird is indeed simply a dickcissel, it lacks the field characteristics known for that species at any life stage or in either sex.
| Biology and health sciences | Passerida | null |
256641 | https://en.wikipedia.org/wiki/Plant%20hormone | Plant hormone | Plant hormones (or phytohormones) are signal molecules, produced within plants, that occur in extremely low concentrations. Plant hormones control all aspects of plant growth and development, including embryogenesis, the regulation of organ size, pathogen defense, stress tolerance and reproductive development. Unlike in animals (in which hormone production is restricted to specialized glands) each plant cell is capable of producing hormones. Went and Thimann coined the term "phytohormone" and used it in the title of their 1937 book.
Phytohormones occur across the plant kingdom, and even in algae, where they have similar functions to those seen in vascular plants ("higher plants"). Some phytohormones also occur in microorganisms, such as unicellular fungi and bacteria, however in these cases they do not play a hormonal role and can better be regarded as secondary metabolites.
Characteristics
The word hormone is derived from Greek, meaning set in motion. Plant hormones affect gene expression and transcription levels, cellular division, and growth. They are naturally produced within plants, though very similar chemicals are produced by fungi and bacteria that can also affect plant growth. A large number of related chemical compounds are synthesized by humans. They are used to regulate the growth of cultivated plants, weeds, and in vitro-grown plants and plant cells; these manmade compounds are called plant growth regulators (PGRs). Early in the study of plant hormones, "phytohormone" was the commonly used term, but its use is less widely applied now.
Plant hormones are not nutrients, but chemicals that in small amounts promote and influence the growth, development, and differentiation of cells and tissues. The biosynthesis of plant hormones within plant tissues is often diffuse and not always localized. Plants lack glands to produce and store hormones because, unlike animals (which have two circulatory systems, lymphatic and cardiovascular), plants use more passive means to move chemicals around their bodies. Plants utilize simple chemicals as hormones, which move more easily through their tissues. They are often produced and used on a local basis within the plant body. Plant cells produce hormones that affect even different regions of the cell producing the hormone.
Hormones are transported within the plant by utilizing four types of movements. For localized movement, cytoplasmic streaming within cells and slow diffusion of ions and molecules between cells are utilized. Vascular tissues are used to move hormones from one part of the plant to another; these include sieve tubes or phloem that move sugars from the leaves to the roots and flowers, and xylem that moves water and mineral solutes from the roots to the foliage.
Not all plant cells respond to hormones, but those cells that do are programmed to respond at specific points in their growth cycle. The greatest effects occur at specific stages during the cell's life, with diminished effects occurring before or after this period. Plants need hormones at very specific times during plant growth and at specific locations. They also need to disengage the effects that hormones have when they are no longer needed. The production of hormones occurs very often at sites of active growth within the meristems, before cells have fully differentiated. After production, they are sometimes moved to other parts of the plant, where they cause an immediate effect; or they can be stored in cells to be released later. Plants use different pathways to regulate internal hormone quantities and moderate their effects; they can regulate the amount of chemicals used to biosynthesize hormones. They can store them in cells, inactivate them, or cannibalise already-formed hormones by conjugating them with carbohydrates, amino acids, or peptides. Plants can also break down hormones chemically, effectively destroying them. Plant hormones frequently regulate the concentrations of other plant hormones. Plants also move hormones around the plant diluting their concentrations.
The concentration of hormones required for plant responses is very low (10⁻⁶ to 10⁻⁵ mol/L). Because of these low concentrations, it has been very difficult to study plant hormones, and only since the late 1970s have scientists been able to start piecing together their effects and relationships to plant physiology. Much of the early work on plant hormones involved studying plants that were genetically deficient in one hormone, or the use of tissue-cultured plants grown in vitro that were subjected to differing ratios of hormones, with the resultant growth compared. The earliest scientific observation and study dates to the 1880s; the determination and observation of plant hormones and their identification was spread out over the next 70 years.
Synergism in plant hormones refers to how two or more hormones acting together produce an effect greater than the sum of their individual effects. For example, auxins and cytokinins often act in cooperation during cellular division and differentiation. Both hormones are key to cell cycle regulation, but when they come together, their synergistic interactions can enhance cell proliferation and organogenesis more effectively than either could in isolation.
Classes
Different hormones can be sorted into different classes, depending on their chemical structures. Within each class of hormone, chemical structures can vary, but all members of the same class have similar physiological effects. Initial research into plant hormones identified five major classes: abscisic acid, auxins, cytokinins, ethylene and gibberellins. This list was later expanded, and brassinosteroids, jasmonates, salicylic acid, and strigolactones are now also considered major plant hormones. Additionally there are several other compounds that serve functions similar to the major hormones, but their status as bona fide hormones is still debated.
Abscisic acid
Abscisic acid (also called ABA) is one of the most important plant growth inhibitors. It was discovered and researched under two different names, dormin and abscisin II, before its chemical properties were fully known. Once it was determined that the two compounds are the same, it was named abscisic acid. The name refers to the fact that it is found in high concentrations in newly abscissed or freshly fallen leaves.
This class of PGR is composed of one chemical compound normally produced in the leaves of plants, originating from chloroplasts, especially when plants are under stress. In general, it acts as an inhibitory chemical compound that affects bud growth, and seed and bud dormancy. It mediates changes within the apical meristem, causing bud dormancy and the alteration of the last set of leaves into protective bud covers. Since it was found in freshly abscissed leaves, it was initially thought to play a role in the processes of natural leaf drop, but further research has disproven this. In plant species from temperate parts of the world, abscisic acid plays a role in leaf and seed dormancy by inhibiting growth, but, as it is dissipated from seeds or buds, growth begins. In other plants, as ABA levels decrease, growth then commences as gibberellin levels increase. Without ABA, buds and seeds would start to grow during warm periods in winter and would be killed when it froze again. Since ABA dissipates slowly from the tissues and its effects take time to be offset by other plant hormones, there is a delay in physiological pathways that provides some protection from premature growth. Abscisic acid accumulates within seeds during fruit maturation, preventing seed germination within the fruit or before winter. Abscisic acid's effects are degraded within plant tissues during cold temperatures or by its removal by water washing in and out of the tissues, releasing the seeds and buds from dormancy.
ABA exists in all parts of the plant, and its concentration within any tissue seems to mediate its effects and function as a hormone; its degradation, or more properly catabolism, within the plant affects metabolic reactions and cellular growth and production of other hormones. Plants start life as a seed with high ABA levels. Just before the seed germinates, ABA levels decrease; during germination and early growth of the seedling, ABA levels decrease even more. As plants begin to produce shoots with fully functional leaves, ABA levels begin to increase again, slowing down cellular growth in more "mature" areas of the plant. Stress from water or predation affects ABA production and catabolism rates, mediating another cascade of effects that trigger specific responses from targeted cells. Scientists are still piecing together the complex interactions and effects of this and other phytohormones.
In plants under water stress, ABA plays a role in closing the stomata. Soon after plants are water-stressed and the roots are deficient in water, a signal moves up to the leaves, causing the formation of ABA precursors there, which then move to the roots. The roots then release ABA, which is translocated to the foliage through the vascular system and modulates potassium and sodium uptake within the guard cells, which then lose turgidity, closing the stomata.
Auxins
Auxins are compounds that positively influence cell enlargement, bud formation, and root initiation. They also promote the production of other hormones and, in conjunction with cytokinins, control the growth of stems, roots, and fruits, and convert stems into flowers. Auxins were the first class of growth regulators discovered.
The Dutch biologist Frits Warmolt Went first described auxins. They affect cell elongation by altering cell wall plasticity. They stimulate the cambium, a subtype of meristem cells, to divide, and in stems cause secondary xylem to differentiate.
Auxins act to inhibit the growth of buds lower down the stems in a phenomenon known as apical dominance, and also to promote lateral and adventitious root development and growth. Leaf abscission is initiated by the growing point of a plant ceasing to produce auxins. Auxins in seeds regulate specific protein synthesis, as they develop within the flower after pollination, causing the flower to develop a fruit to contain the developing seeds.
In large concentrations, auxins are often toxic to plants; they are most toxic to dicots and less so to monocots. Because of this property, synthetic auxin herbicides including 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) have been developed and used for weed control by defoliation. Auxins, especially 1-naphthaleneacetic acid (NAA) and indole-3-butyric acid (IBA), are also commonly applied to stimulate root growth when taking cuttings of plants. The most common auxin found in plants is indole-3-acetic acid (IAA).
Brassinosteroids
Brassinosteroids (BRs) are a class of polyhydroxysteroids, the only example of steroid-based hormones in plants. Brassinosteroids control cell elongation and division, gravitropism, resistance to stress, and xylem differentiation. They inhibit root growth and leaf abscission. Brassinolide was the first brassinosteroid to be identified and was isolated from extracts of rapeseed (Brassica napus) pollen in 1979. Its discovery traces to Mitchell et al., who extracted ingredients from Brassica pollen and found that the main active component of the extract was brassinolide; this finding marked the discovery of a new class of plant hormones, the brassinosteroids. These hormones act much like animal steroid hormones in promoting growth and development.
In plants these steroidal hormones play an important role in cell elongation via BR signaling. The brassinosteroid receptor BRASSINOSTEROID INSENSITIVE 1 (BRI1) is the main receptor for this signaling pathway. The BRI1 receptor was found by Clouse et al., who made the discovery by inhibiting BR signalling and comparing the result to the wild type in Arabidopsis. The BRI1 mutant displayed several problems associated with growth and development, such as dwarfism, reduced cell elongation and other physical alterations. These findings indicate that plants with intact brassinosteroid signalling grow more than their mutant counterparts. Brassinosteroids bind to BRI1 localized at the plasma membrane, which leads to a signal cascade that further regulates cell elongation. This signal cascade is not yet entirely understood; BR binding is thought to promote association of BRI1 with the co-receptor BAK1, leading to a phosphorylation cascade. This phosphorylation cascade causes BIN2 to be deactivated, which releases transcription factors. The released transcription factors then bind to DNA, driving growth and developmental processes and allowing plants to respond to abiotic stressors.
Cytokinins
Cytokinins (CKs) are a group of chemicals that influence cell division and shoot formation. They also help delay senescence of tissues, are responsible for mediating auxin transport throughout the plant, and affect internodal length and leaf growth. They were called kinins in the past when they were first isolated from yeast cells. Cytokinins and auxins often work together, and the ratios of these two groups of plant hormones affect most major growth periods during a plant's lifetime. Cytokinins counter the apical dominance induced by auxins; in conjunction with ethylene, they promote abscission of leaves, flower parts, and fruits.
Among the plant hormones, the three best known for mediating immunological interactions are ethylene (ET), salicylates (SA), and jasmonates (JA); however, more research has recently gone into identifying the role that cytokinins play in this. Evidence suggests that cytokinins delay the interactions with pathogens, showing signs that they could induce resistance toward these pathogenic bacteria. Accordingly, there are higher CK levels in plants that have increased resistance to pathogens compared to those which are more susceptible. For example, pathogen resistance involving cytokinins was tested using the Arabidopsis species by treating them with naturally occurring CK (trans-zeatin) to see their response to the bacterium Pseudomonas syringae. Tobacco studies reveal that overexpression of CK-inducing IPT genes yields increased resistance, whereas overexpression of CK oxidase yields increased susceptibility to the pathogen, namely P. syringae.
While cytokinins have not been strongly linked to outward changes in plant behavior, there are changes that go on inside the plant in response to them. Cytokinin-mediated defense effects can include effects on the establishment and growth of microbes (such as delaying leaf senescence), reconfiguration of secondary metabolism, or even the induction of new organs such as galls or nodules. These organs and their corresponding processes are all used to protect the plants against biotic/abiotic factors.
Ethylene
Unlike the other major plant hormones, ethylene is a gas and a very simple organic compound, consisting of just six atoms. It forms through the breakdown of methionine, an amino acid which is in all cells. Ethylene has very limited solubility in water and therefore does not accumulate within the cell, typically diffusing out of the cell and escaping the plant. Its effectiveness as a plant hormone is dependent on its rate of production versus its rate of escaping into the atmosphere. Ethylene is produced at a faster rate in rapidly growing and dividing cells, especially in darkness. New growth and newly germinated seedlings produce more ethylene than can escape the plant, which leads to elevated amounts of ethylene, inhibiting leaf expansion (see hyponastic response).
As the new shoot is exposed to light, reactions mediated by phytochrome in the plant's cells produce a signal for ethylene production to decrease, allowing leaf expansion. Ethylene affects cell growth and cell shape; when a growing shoot or root hits an obstacle while underground, ethylene production greatly increases, preventing cell elongation and causing the stem to swell. The resulting thicker stem is stronger and less likely to buckle under pressure as it presses against the object impeding its path to the surface. If the shoot does not reach the surface and the ethylene stimulus becomes prolonged, it affects the stem's natural geotropic response, which is to grow upright, allowing it to grow around an object. Studies seem to indicate that ethylene affects stem diameter and height: when stems of trees are subjected to wind, causing lateral stress, greater ethylene production occurs, resulting in thicker, sturdier tree trunks and branches.
Ethylene also affects fruit ripening. Normally, when the seeds are mature, ethylene production increases and builds up within the fruit, resulting in a climacteric event just before seed dispersal. The nuclear protein Ethylene Insensitive2 (EIN2) is regulated by ethylene production, and, in turn, regulates other hormones including ABA and stress hormones. Ethylene diffusion out of plants is strongly inhibited underwater. This increases internal concentrations of the gas. In numerous aquatic and semi-aquatic species (e.g. Callitriche platycarpus, rice, and Rumex palustris), the accumulated ethylene strongly stimulates upward elongation. This response is an important mechanism for the adaptive escape from submergence that avoids asphyxiation by returning the shoot and leaves to contact with the air whilst allowing the release of entrapped ethylene. At least one species (Potamogeton pectinatus) has been found to be incapable of making ethylene while retaining a conventional morphology. This suggests ethylene is a true regulator rather than being a requirement for building a plant's basic body plan.
Gibberellins
Gibberellins (GAs) include a large range of chemicals that are produced naturally within plants and by fungi. They were first discovered when Japanese researchers, including Eiichi Kurosawa, noticed a chemical produced by a fungus called Gibberella fujikuroi that produced abnormal growth in rice plants. It was later discovered that GAs are also produced by the plants themselves and control multiple aspects of development across the life cycle. The synthesis of GA is strongly upregulated in seeds at germination and its presence is required for germination to occur. In seedlings and adults, GAs strongly promote cell elongation. GAs also promote the transition between vegetative and reproductive growth and are also required for pollen function during fertilization.
Gibberellins break dormancy in seeds and buds and help increase the height of the plant and the growth of the stem.
Jasmonates
Jasmonates (JAs) are lipid-based hormones that were originally isolated from jasmine oil. JAs are especially important in the plant response to attack from herbivores and necrotrophic pathogens. The most active JA in plants is jasmonic acid. Jasmonic acid can be further metabolized into methyl jasmonate (MeJA), which is a volatile organic compound. This unusual property means that MeJA can act as an airborne signal to communicate herbivore attack to other distant leaves within one plant and even as a signal to neighboring plants. In addition to their role in defense, JAs are also believed to play roles in seed germination, the storage of protein in seeds, and root growth.
JAs have been shown to interact in the signalling pathways of other hormones in a mechanism described as "crosstalk". The hormone classes can have both negative and positive effects on each other's signalling processes.
Jasmonic acid methyl ester (JAME) has been shown to regulate genetic expression in plants. They act in signalling pathways in response to herbivory, and upregulate expression of defense genes. Jasmonyl-isoleucine (JA-Ile) accumulates in response to herbivory, which causes an upregulation in defense gene expression by freeing up transcription factors.
Jasmonate mutants are more readily consumed by herbivores than wild type plants, indicating that JAs play an important role in the execution of plant defense. When herbivores are moved around leaves of wild type plants, they reach similar masses to herbivores that consume only mutant plants, implying the effects of JAs are localized to sites of herbivory. Studies have shown that there is significant crosstalk between defense pathways.
Salicylic acid
Salicylic acid (SA) is a hormone with a structure related to benzoic acid and phenol. It was originally isolated from an extract of white willow bark (Salix alba) and is of great interest to human medicine, as it is the precursor of the painkiller aspirin. In plants, SA plays a critical role in the defense against biotrophic pathogens. In a similar manner to JA, SA can also become methylated. Like MeJA, methyl salicylate is volatile and can act as a long-distance signal to neighboring plants to warn of pathogen attack. In addition to its role in defense, SA is also involved in the response of plants to abiotic stress, particularly from drought, extreme temperatures, heavy metals, and osmotic stress.
Salicylic acid (SA) serves as a key hormone in plant innate immunity, including resistance in both local and systemic tissue upon biotic attacks, hypersensitive responses, and cell death. Some of the SA influences on plants include seed germination, cell growth, respiration, stomatal closure, senescence-associated gene expression, responses to abiotic and biotic stresses, basal thermotolerance, and fruit yield. A possible role of salicylic acid in signaling disease resistance was first demonstrated by injecting leaves of resistant tobacco with SA: the injected SA stimulated pathogenesis-related (PR) protein accumulation and enhanced resistance to tobacco mosaic virus (TMV) infection. Exposure to pathogens causes a cascade of reactions in the plant cells. SA biosynthesis is increased via the isochorismate synthase (ICS) and phenylalanine ammonia-lyase (PAL) pathways in plastids. It was observed that during plant-microbe interactions, as part of the defense mechanisms, SA initially accumulates at the locally infected tissue and then spreads all over the plant to induce systemic acquired resistance at non-infected distal parts of the plant. Therefore, with an increased internal concentration of SA, plants are able to build resistant barriers against pathogens and other adverse environmental conditions.
Strigolactones
Strigolactones (SLs) were originally discovered through studies of the germination of the parasitic weed Striga lutea. It was found that the germination of Striga species was stimulated by the presence of a compound exuded by the roots of its host plant. It was later shown that SLs that are exuded into the soil also promote the growth of symbiotic arbuscular mycorrhizal (AM) fungi. More recently, another role of SLs was identified in the inhibition of shoot branching. This discovery of the role of SLs in shoot branching led to a dramatic increase in the interest in these hormones, and it has since been shown that SLs play important roles in leaf senescence, phosphate starvation response, salt tolerance, and light signalling.
Other known hormones
Other identified plant growth regulators include:
Plant peptide hormones – encompasses all small secreted peptides that are involved in cell-to-cell signaling. These small peptide hormones play crucial roles in plant growth and development, including defense mechanisms, the control of cell division and expansion, and pollen self-incompatibility. The small peptide CLE25 is known to act as a long-distance signal to communicate water stress sensed in the roots to the stomata in the leaves.
Polyamines – are strongly basic molecules with low molecular weight that have been found in all organisms studied thus far. They are essential for plant growth and development and affect the process of mitosis and meiosis. In plants, polyamines have been linked to the control of senescence and programmed cell death.
Nitric oxide (NO) – serves as signal in hormonal and defense responses (e.g. stomatal closure, root development, germination, nitrogen fixation, cell death, stress response). NO can be produced by a yet undefined NO synthase, a special type of nitrite reductase, nitrate reductase, mitochondrial cytochrome c oxidase or non enzymatic processes and regulate plant cell organelle functions (e.g. ATP synthesis in chloroplasts and mitochondria).
Karrikins – are not plant hormones, as they are not produced by plants themselves but are rather found in the smoke of burning plant material. Karrikins can promote seed germination in many species. The finding that plants which lack the karrikin receptor show several developmental phenotypes (enhanced biomass accumulation and increased sensitivity to drought) has led some to speculate on the existence of an as yet unidentified karrikin-like endogenous hormone in plants. The cellular karrikin signalling pathway shares many components with the strigolactone signalling pathway.
Triacontanol – a fatty alcohol that acts as a growth stimulant, especially initiating new basal breaks in the rose family. It is found in alfalfa (lucerne), bee's wax, and some waxy leaf cuticles.
Use in horticulture
Synthetic plant hormones or PGRs are used in a number of different techniques involving plant propagation from cuttings, grafting, micropropagation and tissue culture. Most commonly they are commercially available as "rooting hormone powder".
The propagation of plants by cuttings of fully developed leaves, stems, or roots is performed by gardeners utilizing auxin as a rooting compound applied to the cut surface; the auxins are taken into the plant and promote root initiation. In grafting, auxin promotes callus tissue formation, which joins the surfaces of the graft together. In micropropagation, different PGRs are used to promote multiplication and then rooting of new plantlets. In the tissue-culturing of plant cells, PGRs are used to produce callus growth, multiplication, and rooting.
When used in field conditions, plant hormones or mixtures that include them can be applied as biostimulants.
Seed dormancy
Plant hormones affect seed germination and dormancy by acting on different parts of the seed.
Embryo dormancy is characterized by a high ABA:GA ratio, whereas the seed has high abscisic acid sensitivity and low GA sensitivity. In order to release the seed from this type of dormancy and initiate seed germination, an alteration in hormone biosynthesis and degradation toward a low ABA/GA ratio, along with a decrease in ABA sensitivity and an increase in GA sensitivity, must occur.
ABA controls embryo dormancy, and GA embryo germination.
Seed coat dormancy involves the mechanical restriction of the seed coat. This, along with a low embryo growth potential, effectively produces seed dormancy. GA releases this dormancy by increasing the embryo growth potential, and/or weakening the seed coat so the radicle of the seedling can break through the seed coat.
Different types of seed coats can be made up of living or dead cells, and both types can be influenced by hormones; those composed of living cells are acted upon after seed formation, whereas the seed coats composed of dead cells can be influenced by hormones during the formation of the seed coat. ABA affects testa or seed coat growth characteristics, including thickness, and affects the GA-mediated embryo growth potential. These conditions and effects occur during the formation of the seed, often in response to environmental conditions. Hormones also mediate endosperm dormancy: Endosperm in most seeds is composed of living tissue that can actively respond to hormones generated by the embryo. The endosperm often acts as a barrier to seed germination, playing a part in seed coat dormancy or in the germination process. Living cells respond to and also affect the ABA:GA ratio, and mediate cellular sensitivity; GA thus increases the embryo growth potential and can promote endosperm weakening. GA also affects both ABA-independent and ABA-inhibiting processes within the endosperm.
Human use
Salicylic acid
Willow bark has been used for centuries as a painkiller. The active ingredient in willow bark that provides these effects is the hormone salicylic acid (SA). In 1899, the pharmaceutical company Bayer began marketing a derivative of SA as the drug aspirin. In addition to its use as a painkiller, SA is also used in topical treatments of several skin conditions, including acne, warts and psoriasis. Another derivative of SA, sodium salicylate has been found to suppress proliferation of lymphoblastic leukemia, prostate, breast, and melanoma human cancer cells.
Jasmonic acid
Jasmonic acid (JA) can induce death in lymphoblastic leukemia cells. Methyl jasmonate (a derivative of JA, also found in plants) has been shown to inhibit proliferation in a number of cancer cell lines, although there is still debate over its use as an anti-cancer drug, due to its potential negative effects on healthy cells.
| Biology and health sciences | Biochemistry and molecular biology | null |
256700 | https://en.wikipedia.org/wiki/Mathematical%20problem | Mathematical problem | A mathematical problem is a problem that can be represented, analyzed, and possibly solved, with the methods of mathematics. This can be a real-world problem, such as computing the orbits of the planets in the solar system, or a problem of a more abstract nature, such as Hilbert's problems. It can also be a problem referring to the nature of mathematics itself, such as Russell's Paradox.
Real-world problems
Informal "real-world" mathematical problems are questions related to a concrete setting, such as "Adam has five apples and gives John three. How many has he left?". Such questions are usually more difficult to solve than regular mathematical exercises like "5 − 3", even if one knows the mathematics required to solve the problem. Known as word problems, they are used in mathematics education to teach students to connect real-world situations to the abstract language of mathematics.
In general, to use mathematics for solving a real-world problem, the first step is to construct a mathematical model of the problem. This involves abstraction from the details of the problem, and the modeller has to be careful not to lose essential aspects in translating the original problem into a mathematical one. After the problem has been solved in the world of mathematics, the solution must be translated back into the context of the original problem.
Abstract problems
Abstract mathematical problems arise in all fields of mathematics. While mathematicians usually study them for their own sake, by doing so, results may be obtained that find application outside the realm of mathematics. Theoretical physics has historically been a rich source of inspiration.
Some abstract problems have been rigorously proved to be unsolvable, such as squaring the circle and trisecting the angle using only the compass and straightedge constructions of classical geometry, and solving the general quintic equation algebraically. Also provably unsolvable are so-called undecidable problems, such as the halting problem for Turing machines.
Some well-known difficult abstract problems that have been solved relatively recently are the four-colour theorem, Fermat's Last Theorem, and the Poincaré conjecture.
Computers do not need to have a sense of the motivations of mathematicians in order to do what they do. Formal definitions and computer-checkable deductions are absolutely central to mathematical science.
Degradation of problems to exercises
Mathematics educators using problem solving for evaluation have an issue phrased by Alan H. Schoenfeld:
How can one compare test scores from year to year, when very different problems are used? (If similar problems are used year after year, teachers and students will learn what they are, students will practice them: problems become exercises, and the test no longer assesses problem solving).
The same issue was faced by Sylvestre Lacroix almost two centuries earlier:
... it is necessary to vary the questions that students might communicate with each other. Though they may fail the exam, they might pass later. Thus distribution of questions, the variety of topics, or the answers, risks losing the opportunity to compare, with precision, the candidates one-to-another.
Such degradation of problems into exercises is characteristic of mathematics in history. For example, describing the preparations for the Cambridge Mathematical Tripos in the 19th century, Andrew Warwick wrote:
... many families of the then standard problems had originally taxed the abilities of the greatest mathematicians of the 18th century.
| Mathematics | Basics | null |
257243 | https://en.wikipedia.org/wiki/Pentaquark | Pentaquark | A pentaquark is a subatomic particle consisting of four quarks and one antiquark bound together; pentaquarks are not known to occur naturally and have so far been observed only in particle physics experiments.
As quarks have a baryon number of +1/3, and antiquarks of −1/3, the pentaquark would have a total baryon number of 1, and thus would be a baryon. Further, because it has five quarks instead of the usual three found in regular baryons ("triquarks"), it is classified as an exotic baryon. The name pentaquark was coined by Claude Gignoux et al. (1987) and Harry J. Lipkin in 1987; however, the possibility of five-quark particles was identified as early as 1964 when Murray Gell-Mann first postulated the existence of quarks. Although predicted for decades, pentaquarks proved surprisingly difficult to discover and some physicists were beginning to suspect that an unknown law of nature prevented their production.
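As a quick arithmetic illustration of the counting above, the short sketch below (the helper name baryon_number is ours, not from any physics library) computes the baryon number of a hadron from its quark content as (number of quarks minus number of antiquarks) divided by 3.

```python
from fractions import Fraction

def baryon_number(n_quarks: int, n_antiquarks: int) -> Fraction:
    """Baryon number: each quark contributes +1/3, each antiquark -1/3."""
    return Fraction(n_quarks, 3) - Fraction(n_antiquarks, 3)

print(baryon_number(3, 0))  # ordinary baryon ("triquark"): 1
print(baryon_number(1, 1))  # meson: 0
print(baryon_number(4, 1))  # pentaquark: 1
```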
The first claim of pentaquark discovery was recorded at LEPS in Japan in 2003, and several experiments in the mid-2000s also reported discoveries of other pentaquark states. However, other researchers were not able to replicate the LEPS results, and the other pentaquark discoveries were not accepted because of poor data and statistical analysis. On 13 July 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom Lambda baryons ().
On 26 March 2019, the LHCb collaboration announced the discovery of a new pentaquark that had not been previously observed. On 5 July 2022, the LHCb collaboration announced the discovery of the pentaquark.
Outside of particle research laboratories, pentaquarks might be produced naturally in the processes that result in the formation of neutron stars.
Background
A quark is a type of elementary particle that has mass, electric charge, and colour charge, as well as an additional property called flavour, which describes what type of quark it is (up, down, strange, charm, top, or bottom). Due to an effect known as colour confinement, quarks are never seen on their own. Instead, they form composite particles known as hadrons so that their colour charges cancel out. Hadrons made of one quark and one antiquark are known as mesons, while those made of three quarks are known as baryons. These 'regular' hadrons are well documented and characterized; however, there is nothing in theory to prevent quarks from forming 'exotic' hadrons such as tetraquarks with two quarks and two antiquarks, or pentaquarks with four quarks and one antiquark.
Structure
A wide variety of pentaquarks are possible, with different quark combinations producing different particles. To identify which quarks compose a given pentaquark, physicists use the notation qqqqq̄, where q and q̄ respectively refer to any of the six flavours of quarks and antiquarks. The symbols u, d, s, c, b, and t stand for the up, down, strange, charm, bottom, and top quarks respectively, with the symbols ū, d̄, s̄, c̄, b̄, and t̄ corresponding to the respective antiquarks. For instance a pentaquark made of two up quarks, one down quark, one charm quark, and one charm antiquark would be denoted uudcc̄.
The quarks are bound together by the strong force, which acts in such a way as to cancel the colour charges within the particle. In a meson, this means a quark is partnered with an antiquark with an opposite colour charge – blue and antiblue, for example – while in a baryon, the three quarks have between them all three colour charges – red, blue, and green. In a pentaquark, the colours also need to cancel out, and the only feasible combination is to have one quark with one colour (e.g. red), one quark with a second colour (e.g. green), two quarks with the third colour (e.g. blue), and one antiquark to counteract the surplus colour (e.g. antiblue).
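The colour bookkeeping in this simplified picture can be mimicked with a toy check. This is only label arithmetic, not actual SU(3) colour algebra, and all names in the sketch are ours: each colour is treated as a unit vector, each anticolour as its negative, and a combination is called colour-neutral when the summed components are all equal.

```python
COLOUR = {"r": (1, 0, 0), "g": (0, 1, 0), "b": (0, 0, 1)}

def is_colour_neutral(labels) -> bool:
    """Toy check: anticolours are written 'anti-x' and count as negative."""
    total = [0, 0, 0]
    for label in labels:
        sign, colour = (-1, label[5:]) if label.startswith("anti-") else (1, label)
        total = [t + sign * c for t, c in zip(total, COLOUR[colour])]
    return len(set(total)) == 1          # all three components equal (possibly zero)

print(is_colour_neutral(["r", "g", "b"]))                 # baryon: True
print(is_colour_neutral(["b", "anti-b"]))                 # meson: True
print(is_colour_neutral(["r", "g", "b", "b", "anti-b"]))  # pentaquark example: True
print(is_colour_neutral(["r", "r", "g", "b", "anti-b"]))  # surplus red: False
```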
The binding mechanism for pentaquarks is not yet clear. They may consist of five quarks tightly bound together, but it is also possible that they are more loosely bound and consist of a three-quark baryon and a two-quark meson interacting relatively weakly with each other via pion exchange (the same force that binds atomic nuclei) in a "meson-baryon molecule".
History
Mid-2000s
The requirement to include an antiquark means that many classes of pentaquark are hard to identify experimentally – if the flavour of the antiquark matches the flavour of any other quark in the quintuplet, it will cancel out and the particle will resemble its three-quark hadron cousin. For this reason, early pentaquark searches looked for particles where the antiquark did not cancel. In the mid-2000s, several experiments claimed to reveal pentaquark states. In particular, a resonance with a mass of (4.6 σ) was reported by LEPS in 2003, the . This coincided with a pentaquark state with a mass of predicted in 1997.
The proposed state was composed of two up quarks, two down quarks, and one strange antiquark (uudds̄). Following this announcement, nine other independent experiments reported seeing narrow peaks from and , with masses between and , all above 4 σ. While concerns existed about the validity of these states, the Particle Data Group gave the a 3-star rating (out of 4) in the 2004 Review of Particle Physics. Two other pentaquark states were reported, albeit with low statistical significance—the (ddss), with a mass of and the (uudd), with a mass of . Both were later found to be statistical effects rather than true resonances.
Ten experiments then looked for the , but came out empty-handed. Two in particular (one at BELLE, and the other at CLAS) had nearly the same conditions as other experiments which claimed to have detected the (DIANA and SAPHIR respectively). The 2006 Review of Particle Physics concluded:
[T]here has not been a high-statistics confirmation of any of the original experiments that claimed to see the ; there have been two high-statistics repeats from Jefferson Lab that have clearly shown the original positive claims in those two cases to be wrong; there have been a number of other high-statistics experiments, none of which have found any evidence for the ; and all attempts to confirm the two other claimed pentaquark states have led to negative results. The conclusion that pentaquarks in general, and the , in particular, do not exist, appears compelling.
The 2008 Review of Particle Physics went even further:
There are two or three recent experiments that find weak evidence for signals near the nominal masses, but there is simply no point in tabulating them in view of the overwhelming evidence that the claimed pentaquarks do not exist... The whole story—the discoveries themselves, the tidal wave of papers by theorists and phenomenologists that followed, and the eventual "undiscovery"—is a curious episode in the history of science.
Despite these null results, LEPS results continued to show the existence of a narrow state with a mass of , with a statistical significance of 5.1 σ.
However, this 'discovery' was later shown to be due to flawed methodology (https://www.osti.gov/biblio/21513283-critical-view-claimed-theta-sup-pentaquark).
2015 LHCb results
In July 2015, the LHCb collaboration at CERN identified pentaquarks in the decay channel of the bottom lambda baryon into a J/ψ meson, a kaon, and a proton (p). The results showed that sometimes, instead of decaying via intermediate lambda states, the baryon decayed via intermediate pentaquark states. The two states, named Pc(4380)+ and Pc(4450)+, had individual statistical significances of 9 σ and 12 σ, respectively, and a combined significance of 15 σ – enough to claim a formal discovery. The analysis ruled out the possibility that the effect was caused by conventional particles. The two pentaquark states were both observed decaying strongly to J/ψ p, hence they must have a valence quark content of two up quarks, a down quark, a charm quark, and a charm antiquark (uudcc̄), making them charmonium-pentaquarks.
The search for pentaquarks was not an objective of the LHCb experiment (which is primarily designed to investigate matter-antimatter asymmetry) and the apparent discovery of pentaquarks was described as an "accident" and "something we've stumbled across" by the Physics Coordinator for the experiment.
Studies of pentaquarks in other experiments
The production of pentaquarks from electroweak decays of baryons has extremely small cross-section and yields very limited information about internal structure of pentaquarks. For this reason, there are several ongoing and proposed initiatives to study pentaquark production in other channels.
It is expected that pentaquarks will be studied in electron-proton collisions in the Hall B E12-12-001A and Hall C E2-16-007 experiments at JLab. The major challenge in these studies is the heavy mass of the pentaquark, which will be produced at the tail of the photon-proton spectrum in JLab kinematics. For this reason, the currently unknown branching fractions of the pentaquark must be sufficiently large to allow pentaquark detection in JLab kinematics. The proposed Electron-Ion Collider, which has higher energies, is much better suited for this problem.
An interesting channel in which to study pentaquarks in proton-nuclear collisions was suggested by Schmidt & Siddikov (2016). This process has a large cross-section due to the lack of electroweak intermediaries and gives access to the pentaquark wave function. In fixed-target experiments, pentaquarks will be produced with small rapidities in the laboratory frame and will be easily detected.
Besides, if there are neutral pentaquarks, as suggested in several models based on flavour symmetry, these might also be produced by this mechanism. This process might be studied at future high-luminosity experiments like After@LHC and NICA.
2019 LHCb results
On 26 March 2019, the LHCb collaboration announced the discovery of a new pentaquark, based on observations that passed the 5-sigma threshold, using a dataset that was many times larger than the 2015 dataset.
Designated Pc(4312)+ (Pc+ identifies a charmonium-pentaquark, while the number in parentheses indicates a mass of about 4312 MeV), the pentaquark decays to a proton and a J/ψ meson. The analyses additionally revealed that the earlier reported observations of the Pc(4450)+ pentaquark were actually the average of two different resonances, designated Pc(4440)+ and Pc(4457)+. Understanding this will require further study.
2022 LHCb results
On 5 July 2022, the LHCb collaboration announced the discovery of a further new pentaquark, with a significance of 15 sigma. Designated PψsΛ(4338)0, its composition is described as udscc̄, representing the first confirmed pentaquark containing a strange quark.
Applications
The discovery of pentaquarks will allow physicists to study the strong force in greater detail and aid understanding of quantum chromodynamics. In addition, current theories suggest that some very large stars produce pentaquarks as they collapse. The study of pentaquarks might help shed light on the physics of neutron stars.
| Physical sciences | Fermions | Physics |
258733 | https://en.wikipedia.org/wiki/Knuth%27s%20up-arrow%20notation | Knuth's up-arrow notation | In mathematics, Knuth's up-arrow notation is a method of notation for very large integers, introduced by Donald Knuth in 1976.
In his 1947 paper, R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations. Goodstein also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation. The sequence starts with a unary operation (the successor function with n = 0), and continues with the binary operations of addition (n = 1), multiplication (n = 2), exponentiation (n = 3), tetration (n = 4), pentation (n = 5), etc.
Various notations have been used to represent hyperoperations. One such notation is $H_n(a, b)$.
Knuth's up-arrow notation $a \uparrow^{n-2} b$ is another.
For example:
the single arrow $\uparrow$ represents exponentiation (iterated multiplication), e.g. $2 \uparrow 4 = 2 \times (2 \times (2 \times 2)) = 2^{4} = 16$
the double arrow $\uparrow\uparrow$ represents tetration (iterated exponentiation), e.g. $2 \uparrow\uparrow 4 = 2 \uparrow (2 \uparrow (2 \uparrow 2)) = 2^{2^{2^{2}}} = 65536$
the triple arrow $\uparrow\uparrow\uparrow$ represents pentation (iterated tetration), e.g. $2 \uparrow\uparrow\uparrow 4 = 2 \uparrow\uparrow (2 \uparrow\uparrow (2 \uparrow\uparrow 2)) = 2 \uparrow\uparrow 65536$
The general definition of the up-arrow notation is as follows (for integers $a \ge 0$, $n \ge 1$, $b \ge 0$):
$$a \uparrow^{n} b = \begin{cases} a^{b}, & \text{if } n = 1; \\ 1, & \text{if } n > 1 \text{ and } b = 0; \\ a \uparrow^{n-1} \left( a \uparrow^{n} (b - 1) \right), & \text{otherwise.} \end{cases}$$
Here, $\uparrow^{n}$ stands for $n$ arrows, so for example $2 \uparrow\uparrow\uparrow\uparrow 3 = 2 \uparrow^{4} 3$.
The square brackets are another notation for hyperoperations.
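A minimal sketch of the recursive definition above, assuming exactly the base cases stated there (the function name up_arrow is ours); it is only practical for very small arguments, since both the values and the recursion depth grow explosively.

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a ↑^n b for n >= 1 arrows, by the recursive definition above."""
    if n == 1:
        return a ** b                                  # a ↑ b is ordinary exponentiation
    if b == 0:
        return 1                                       # a ↑^n 0 = 1 for n > 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))   # right-associative expansion

print(up_arrow(2, 1, 4))   # 2 ↑ 4    = 16
print(up_arrow(2, 2, 4))   # 2 ↑↑ 4   = 65536
print(up_arrow(3, 2, 3))   # 3 ↑↑ 3   = 7625597484987
print(up_arrow(2, 3, 3))   # 2 ↑↑↑ 3  = 65536
```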
Introduction
The hyperoperations naturally extend the arithmetic operations of addition and multiplication as follows.
Addition by a natural number is defined as iterated incrementation:
$$a + b = a + \underbrace{1 + 1 + \cdots + 1}_{b \text{ copies of } 1}$$
Multiplication by a natural number is defined as iterated addition:
$$a \times b = \underbrace{a + a + \cdots + a}_{b \text{ copies of } a}$$
For example, $4 \times 3 = 4 + 4 + 4 = 12$.
Exponentiation for a natural power is defined as iterated multiplication, which Knuth denoted by a single up-arrow:
$$a \uparrow b = a^{b} = \underbrace{a \times a \times \cdots \times a}_{b \text{ copies of } a}$$
For example, $4 \uparrow 3 = 4^{3} = 4 \times 4 \times 4 = 64$.
Tetration is defined as iterated exponentiation, which Knuth denoted by a "double arrow":
$$a \uparrow\uparrow b = \underbrace{a \uparrow (a \uparrow (\cdots \uparrow a))}_{b \text{ copies of } a}$$
For example, $4 \uparrow\uparrow 3 = 4 \uparrow (4 \uparrow 4) = 4^{4^{4}} = 4^{256} \approx 1.34 \times 10^{154}$.
Expressions are evaluated from right to left, as the operators are defined to be right-associative.
According to this definition,
$3 \uparrow\uparrow 2 = 3^{3} = 27$
$3 \uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7625597484987$
$3 \uparrow\uparrow 4 = 3^{3^{3^{3}}} = 3^{7625597484987}$
etc.
This already leads to some fairly large numbers, but the hyperoperator sequence does not stop here.
Pentation, defined as iterated tetration, is represented by the "triple arrow":
$$a \uparrow\uparrow\uparrow b = \underbrace{a \uparrow\uparrow (a \uparrow\uparrow (\cdots \uparrow\uparrow a))}_{b \text{ copies of } a}$$
Hexation, defined as iterated pentation, is represented by the "quadruple arrow":
$$a \uparrow\uparrow\uparrow\uparrow b = \underbrace{a \uparrow\uparrow\uparrow (a \uparrow\uparrow\uparrow (\cdots \uparrow\uparrow\uparrow a))}_{b \text{ copies of } a}$$
and so on. The general rule is that an $n$-arrow operator expands into a right-associative series of $(n - 1)$-arrow operators. Symbolically,
$$a \underbrace{\uparrow \cdots \uparrow}_{n} b = \underbrace{a \underbrace{\uparrow \cdots \uparrow}_{n-1} \left( a \underbrace{\uparrow \cdots \uparrow}_{n-1} \left( \cdots \underbrace{\uparrow \cdots \uparrow}_{n-1} a \right) \right)}_{b \text{ copies of } a}$$
Examples:
$3 \uparrow\uparrow\uparrow 2 = 3 \uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7625597484987$
Notation
In expressions such as $a^{b}$, the notation for exponentiation is usually to write the exponent $b$ as a superscript to the base number $a$. But many environments — such as programming languages and plain-text e-mail — do not support superscript typesetting. People have adopted the linear notation $a \uparrow b$ for such environments; the up-arrow suggests 'raising to the power of'. If the character set does not contain an up arrow, the caret (^) is used instead.
The superscript notation doesn't lend itself well to generalization, which explains why Knuth chose to work from the inline notation instead.
$a \uparrow^{n} b$ is a shorter alternative notation for $n$ up-arrows. Thus $a \uparrow^{4} b = a \uparrow\uparrow\uparrow\uparrow b$.
Writing out up-arrow notation in terms of powers
Attempting to write $a \uparrow\uparrow b$ using the familiar superscript notation gives a power tower.
For example: $a \uparrow\uparrow 4 = a \uparrow (a \uparrow (a \uparrow a)) = a^{a^{a^{a}}}$
If $b$ is a variable (or is too large), the power tower might be written using dots and a note indicating the height of the tower.
Continuing with this notation, $a \uparrow\uparrow\uparrow b$ could be written with a stack of such power towers, each describing the size of the one above it.
Again, if $b$ is a variable or is too large, the stack might be written using dots and a note indicating its height.
Furthermore, $a \uparrow\uparrow\uparrow\uparrow b$ might be written using several columns of such stacks of power towers, each column describing the number of power towers in the stack to its left.
And more generally, this might be carried out indefinitely to represent $a \uparrow^{n} b$ as iterated exponentiation of iterated exponentiation for any $a$, $n$, and $b$ (although it clearly becomes rather cumbersome).
Using tetration
The Rudy Rucker notation $^{b}a$ for tetration allows us to make these diagrams slightly simpler while still employing a geometric representation (we could call these tetration towers).
Finally, as an example, the fourth Ackermann number $4 \uparrow^{4} 4$ could be represented as a diagram of such tetration towers.
Generalizations
Some numbers are so large that multiple arrows of Knuth's up-arrow notation become too cumbersome; then an n-arrow operator is useful (and also for descriptions with a variable number of arrows), or equivalently, hyper operators.
Some numbers are so large that even that notation is not sufficient. The Conway chained arrow notation can then be used: a chain of three elements is equivalent with the other notations, but a chain of four or more is even more powerful.
Even faster-growing functions can be categorized using an ordinal analysis called the fast-growing hierarchy. The fast-growing hierarchy uses successive function iteration and diagonalization to systematically create faster-growing functions from some base function $f_0$. For the standard fast-growing hierarchy using $f_0(n) = n + 1$, $f_2(n)$ already exhibits exponential growth, and $f_3(n)$ is comparable to tetrational growth and is upper-bounded by a function involving the first four hyperoperators. Then, $f_{\omega}(n)$ is comparable to the Ackermann function, $f_{\omega+1}(n)$ is already beyond the reach of indexed arrows but can be used to approximate Graham's number, and $f_{\omega^2}(n)$ is comparable to arbitrarily-long Conway chained arrow notation.
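For the finite levels of this hierarchy, the construction can be sketched directly; a minimal illustration, assuming the common convention that f_{k+1}(n) is the n-th iterate of f_k applied to n (the function name f is ours, and only tiny inputs are feasible):

```python
def f(k: int, n: int) -> int:
    """Finite levels of the fast-growing hierarchy with f_0(n) = n + 1."""
    if k == 0:
        return n + 1
    result = n
    for _ in range(n):          # f_k(n) = n-th iterate of f_{k-1}, applied to n
        result = f(k - 1, result)
    return result

print(f(1, 5))   # f_1(n) = 2n          -> 10
print(f(2, 5))   # f_2(n) = n * 2^n     -> 160
print(f(3, 2))   # f_3(2) = f_2(f_2(2)) -> 2048
```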
These functions are all computable. Even faster-growing computable functions, such as the Goodstein sequence and the TREE sequence, require the use of large ordinals and may occur in certain combinatorial and proof-theoretic contexts. There exist functions which grow uncomputably fast, such as the Busy Beaver function, whose very nature is completely out of reach of any up-arrow, or even any ordinal-based analysis.
Definition
Without reference to hyperoperation, the up-arrow operators can be formally defined by
$$a \uparrow^{n} b = \begin{cases} a^{b}, & \text{if } n = 1; \\ 1, & \text{if } n > 1 \text{ and } b = 0; \\ a \uparrow^{n-1} \left( a \uparrow^{n} (b - 1) \right), & \text{otherwise;} \end{cases}$$
for all integers $a, b, n$ with $a \ge 0$, $n \ge 1$, $b \ge 0$.
This definition uses exponentiation as the base case, and tetration as repeated exponentiation. This is equivalent to the hyperoperation sequence except it omits the three more basic operations of succession, addition and multiplication.
One can alternatively choose multiplication as the base case and iterate from there. Then exponentiation becomes repeated multiplication. The formal definition would be
$$a \uparrow^{n} b = \begin{cases} a \times b, & \text{if } n = 0; \\ 1, & \text{if } n \ge 1 \text{ and } b = 0; \\ a \uparrow^{n-1} \left( a \uparrow^{n} (b - 1) \right), & \text{otherwise;} \end{cases}$$
for all integers $a, b, n$ with $a \ge 0$, $n \ge 0$, $b \ge 0$.
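A minimal sketch of this alternative convention, in which the index n = 0 denotes multiplication (the function name up_arrow0 is ours):

```python
def up_arrow0(a: int, n: int, b: int) -> int:
    """a ↑^n b with multiplication as the n = 0 base case."""
    if n == 0:
        return a * b
    if b == 0:
        return 1
    return up_arrow0(a, n - 1, up_arrow0(a, n, b - 1))

print(up_arrow0(3, 1, 4))  # 3 ↑ 4  = 81; exponentiation recovered as repeated multiplication
print(up_arrow0(2, 2, 3))  # 2 ↑↑ 3 = 16
```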
Note, however, that Knuth did not define the "nil-arrow" ($\uparrow^{0}$). One could extend the notation to negative indices (n ≥ −2) in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:
$a \uparrow^{0} b = a \times b$, $a \uparrow^{-1} b = a + b$, $a \uparrow^{-2} b = b + 1$.
The up-arrow operation is a right-associative operation, that is, $a \uparrow b \uparrow c$ is understood to be $a \uparrow (b \uparrow c)$, instead of $(a \uparrow b) \uparrow c$. If ambiguity is not an issue parentheses are sometimes dropped.
Tables of values
Computing 0↑n b
Computing $0 \uparrow^{n} b$ results in
0, when n = 0
1, when n = 1 and b = 0
0, when n = 1 and b > 0
1, when n > 1 and b is even (including 0)
0, when n > 1 and b is odd
Computing 2↑n b
Computing can be restated in terms of an infinite table. We place the numbers in the top row, and fill the left column with values 2. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
The table is the same as that of the Ackermann function, except for a shift in and , and an addition of 3 to all values.
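The row-by-row lookup rule just described can be sketched as follows. The sketch takes row n = 1 (plain exponentiation, 2^b) as the starting row and cuts each later row off as soon as an entry points past the end of the row above; the variable names are ours.

```python
MAX_COL = 20                                          # columns kept in the exponentiation row
rows = {1: [2 ** b for b in range(1, MAX_COL + 1)]}   # row n = 1: 2, 4, 8, 16, ...

for n in (2, 3):
    row = [2]                                   # left column is filled with 2s (b = 1)
    while row[-1] <= len(rows[n - 1]):          # stop once the lookup would run off the row above
        pos = row[-1]                           # number immediately to the left
        row.append(rows[n - 1][pos - 1])        # look it up in the previous row at that position
    rows[n] = row

print(rows[1][:5])   # [2, 4, 8, 16, 32]
print(rows[2])       # [2, 4, 16, 65536]  i.e. 2↑↑b
print(rows[3])       # [2, 4, 65536]      i.e. 2↑↑↑b
```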
Computing 3↑n b
We place the numbers in the top row, and fill the left column with values 3. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
Computing 4↑n b
We place the numbers in the top row, and fill the left column with values 4. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
Computing 10↑n b
We place the numbers in the top row, and fill the left column with values 10. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
For 2 ≤ b ≤ 9 the numerical order of the numbers is the lexicographical order with n as the most significant number, so for the numbers of these 8 columns the numerical order is simply line-by-line. The same applies for the numbers in the 97 columns with 3 ≤ b ≤ 99, and if we start from n = 1 even for 3 ≤ b ≤ 9,999,999,999.
| Mathematics | Specific functions | null |
258765 | https://en.wikipedia.org/wiki/Tadorna | Tadorna | The shelducks, most species of which are found in the genus Tadorna (except for the Radjah shelduck, which is now found in its own monotypic genus Radjah), are a group of large birds in the Tadorninae subfamily of the Anatidae, the biological family that includes the ducks and most duck-like waterfowl such as the geese and swans.
Biology
Shelducks are a group of large, often semi-terrestrial waterfowl, which can be seen as intermediate between geese (Anserinae) and ducks. They are mid-sized (some 50–60 cm) Old World waterfowl. The sexes are colored slightly differently in most species, and all have a characteristic upperwing coloration in flight: the tertiary remiges form a green speculum, the secondaries and primaries are black, and the coverts (forewing) are white. Their diet consists of small shore animals (winkles, crabs etc.) as well as grasses and other plants.
They were originally known as "sheldrakes", which remained the most common name until the late 19th century. The word is still sometimes used to refer to a male shelduck and can also occasionally refer to the canvasback (Aythya valisineria) of North America.
Systematics
The genus Tadorna was introduced by the German zoologist Friedrich Boie in 1822. The type species is the common shelduck. The genus name comes from the French name Tadorne for the common shelduck. It may originally derive from Celtic roots meaning "pied waterfowl", essentially the same as the English "shelduck". A group of them is called a "dopping," taken from the Harley Manuscript.
The namesake genus of the Tadorninae, Tadorna is very close to the Egyptian goose and its extinct relatives from the Madagascar region, Alopochen. While the classical shelducks form a group that is obviously monophyletic, the interrelationships of these, the aberrant common and especially Radjah shelducks, and the Egyptian goose were found to be poorly resolved by mtDNA cytochrome b sequence data; this genus may thus be paraphyletic.
Ruddy shelduck (Tadorna ferruginea)
South African shelduck (Tadorna cana)
Australian shelduck (Tadorna tadornoides)
Paradise shelduck (Tadorna variegata)
† Crested shelduck (Tadorna cristata) - possibly extinct (late 20th century?)
Common shelduck (Tadorna tadorna)
The Radjah shelduck, formerly placed in the genus Tadorna, is now placed in its own monotypic genus:
Radjah shelduck (Radjah radjah)
Fossil bones from Dorkovo (Bulgaria) described as Balcanas pliocaenica may actually belong to this genus. They have even been proposed to be referable to the common shelduck, but their Early Pliocene age makes this rather unlikely.
Phylogeny
Based on the Taxonomy in Flux from John Boyd's website.
Table of species
The following table is based on the HBW and BirdLife International Illustrated Checklist of the Birds of the World.
| Biology and health sciences | Anseriformes | Animals |
258959 | https://en.wikipedia.org/wiki/Phone%20connector%20%28audio%29 | Phone connector (audio) | A phone connector is a family of cylindrically-shaped electrical connectors primarily for analog audio signals. Invented in the late 19th century for telephone switchboards, the phone connector remains in use for interfacing wired audio equipment, such as headphones, speakers, microphones, mixing consoles, and electronic musical instruments (e.g. electric guitars, keyboards, and effects units). A male connector (a plug), is mated into a female connector (a socket), though other terminology is used.
Plugs have 2 to 5 electrical contacts. The tip contact is indented with a groove. The sleeve contact is nearest the (conductive or insulated) handle. Contacts are insulated from each other by a band of non-conductive material. Between the tip and sleeve are 0 to 3 ring contacts. Since phone connectors have many uses, it is common to simply name the connector according to its number of rings: a plug with no rings is called TS (tip-sleeve), one ring gives TRS, two rings TRRS, and three rings TRRRS.
The sleeve is usually a common ground reference voltage or return current for signals in the tip and any rings. Thus, the number of transmittable signals is less than the number of contacts.
The outside diameter of the sleeve is 6.35 mm (1/4 inch) for full-sized connectors, 3.5 mm for "mini" connectors, and only 2.5 mm for "sub-mini" connectors. Rings are typically the same diameter as the sleeve.
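As a rough illustration of the ring-count naming and of the signal count mentioned above (one contact reserved as the common return), here is a toy lookup, with names and structure entirely ours:

```python
def describe_plug(rings: int) -> str:
    """Toy summary of a phone plug with the given number of ring contacts (0-3)."""
    names = {0: "TS", 1: "TRS", 2: "TRRS", 3: "TRRRS"}
    contacts = rings + 2                  # tip + rings + sleeve
    signals = contacts - 1                # sleeve reserved as common ground/return
    return f"{names[rings]}: {contacts} contacts, up to {signals} signal conductor(s)"

for r in range(4):
    print(describe_plug(r))
```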
Other terms
The 1902 International Library of Technology simply uses jack for the female and plug for the male connector. The 1989 Sound Reinforcement Handbook uses phone jack for the female and phone plug for the male connector. Robert McLeish, who worked at the BBC, uses jack or jack socket for the female and jack plug for the male connector in his 2005 book Radio Production. The American Society of Mechanical Engineers, as of 2007, says the more fixed electrical connector is the jack, while the less fixed connector is the plug, without regard to the gender of the connector contacts. The Institute of Electrical and Electronics Engineers in 1975 also made a standard that was withdrawn in 1997.
The intended application for a phone connector has also resulted in names such as audio jack, headphone jack, stereo plug, microphone jack, aux input, etc. Among audio engineers, the connector may often simply be called a quarter-inch to distinguish it from XLR, another frequently-used audio connector. These naming variations are also used for the 3.5 mm connectors, which have been called mini-phone, mini-stereo, mini jack, etc.
RCA connectors are differently-shaped, but confusingly are similarly-named as phono plugs and phono jacks (or in the UK, phono sockets). 3.5 mm connectors are sometimes—counter to the connector manufacturers' nomenclature—referred to as mini phonos.
Confusion also arises because phone jack and phone plug may sometimes refer to the RJ11 and various older telephone sockets and plugs that connect wired telephones to wall outlets.
Historical development
The original version descends from as early as 1877 in Boston when the first telephone switchboard was installed or 1878, when an early switchboard was used for the first commercial manual telephone exchange in New Haven created by George W. Coy.
Charles E. Scribner filed a patent in 1878 to facilitate switchboard operation using his spring-jack switch. In it, a conductive lever pushed by a spring is normally connected to one contact. But when a cable with a conductive plug is inserted into a hole and makes contact with that lever, the lever pivots and breaks its normal connection. The receptacle was called a jack-knife because of its resemblance to a pocket clasp-knife. This is said to be the origin of calling the receptacle a jack. Scribner filed a patent in 1880 which removes the lever and resembles the modern connector and made improvements to switchboard design in subsequent patents filed in 1882.
Henry P. Clausen filed a patent in 1901 for improved construction of the telephone switchboard-plug, with today's 1/4-inch TS form still used on audio equipment.
Western Electric was the manufacturing arm of the Bell System, and thus originated or refined most of the engineering designs, including the telephone jacks and plugs which were later adopted by other industries, including the US military.
By 1907, Western Electric had designed a number of models for different purposes, including:
By 1950, the two main plug designs were:
WE-309 (compatible with -inch jacks, such as 246 jack), for use on high-density jack panels such as the 608A
WE-310 (compatible with -inch jacks, such as the 242)
Several modern designs have descended from those earlier versions:
B-Gauge standard BPO316 (not compatible with EIA RS-453)
EIA RS-453: Dimensional, Mechanical and Electrical Characteristics Defining Phone Plugs & Jacks, the standard for the 1/4-inch (6.35 mm) diameter connector, also found in IEC 60603-11:1992 Connectors for frequencies below 3 MHz for use with printed boards – Part 11: Detail specification for concentric connectors (dimensions for free connectors and fixed connectors).
Military variants
U.S. military versions of the Western Electric plugs were initially specified in Amendment No.1, MIL-P-642, and included:
M642/1-1
M642/1-2
M642/2-1
M642/2-2
M642/4-1
M642/4-2
MIL-P-642/2, also known as PJ-051. (Similar to Western Electric WE-310, and thus not compatible with EIA RS-453)
MIL-P-642/5A: Plug, Telephone (TYPE PJ-068) and Accessory Screws (1973), and MIL-DTL-642F: Plugs, Telephone, and Accessory Screws (2015), with diameter, also known by the earlier Signal Corps PL-68 designation. These are commonly used as the microphone jack for aviation radios, and on Collins S-line and many Drake amateur radios. MIL-DTL-642F states, "This specification covers telephone plugs used in telephone (including telephone switchboard consoles), telegraph, and teletype circuits, and for connecting headsets, handsets, and microphones into communications circuits."
Miniature size
The 3.5 mm or miniature size was originally designed in the 1950s as a two-conductor connector for earpieces on transistor radios, and it remains a standard still in use today. This roughly half-sized version of the original, popularized by the Sony EFM-117J radio (released in 1964), is still commonly used in portable applications and has a length of about 15 mm. The three-conductor version became very popular with its application on the Walkman in 1979; unlike earlier transistor radios, these devices had no speaker of their own, so the usual way to listen to them was to plug in headphones. There is also an EIA standard for 0.141-inch miniature phone jacks.
The 2.5 mm or sub-miniature sizes were similarly popularized on small portable electronics. They often appeared next to a 3.5 mm microphone jack for a remote-control on-off switch on early portable tape recorders; the microphone provided with such machines had the on-off switch and used a two-pronged connector with both the 3.5 mm and 2.5 mm plugs. They were also used for low-voltage DC power input from wall adapters; in that role, they were soon replaced by coaxial DC power connectors. 2.5 mm phone jacks have also been used as headset jacks on mobile telephones (see the section on mobile devices below).
The miniature and sub-miniature sizes are also known as 1/8 in and 3/32 in respectively; these correspond approximately to 3.5 mm and 2.5 mm, though those dimensions are only approximations. All sizes are now readily available in two-conductor (unbalanced mono) and three-conductor (balanced mono or unbalanced stereo) versions.
Four-conductor versions of the 3.5 mm plug and jack are used for certain applications. A four-conductor version is often used in compact camcorders and portable media players, providing stereo sound and composite analog video. It is also used for a combination of stereo audio, a microphone, and controlling media playback, calls, volume and/or a virtual assistant on some laptop computers and most mobile phones, and some handheld amateur radio transceivers from Yaesu. Some headphone amplifiers have used it to connect balanced stereo headphones, which require two conductors per audio channel as the channels do not share a common ground.
Broadcast usage
By the 1940s, broadcast radio stations were using Western Electric Code No. 103 plugs and matching jacks for patching audio throughout studios. This connector was used because of its use in AT&T's Long Line circuits for the distribution of audio programs over the radio networks' leased telephone lines. Because of the large amount of space these patch panels required, the industry began switching to 3-conductor plugs and jacks in the late 1940s, using the WE Type 291 plug with WE type 239 jacks. The type 291 plug was used instead of the standard type 110 switchboard plug because the location of the large bulb shape on this TRS plug would have resulted in both audio signal connections being shorted together for a brief moment while the plug was being inserted and removed. The Type 291 plug avoids this by having a shorter tip.
Patch bay connectors
Professional audio and the telecommunication industry use a 4.4 mm (0.173 in) diameter plug, associated with trademarked names including Bantam, TT, Tini-Telephone, and Tini-Tel. They are not compatible with standard EIA RS-453/IEC 60603-11 ¼-inch jacks. In addition to a slightly smaller diameter, they have a slightly different geometry. The three-conductor TRS versions are capable of handling balanced signals and are used in professional audio installations. Though unable to handle as much power, and less reliable than a ¼-inch jack, Bantam connectors are used for mixing console and outboard patchbays in recording studio and live sound applications, where large numbers of patch points are needed in a limited space. The slightly different shape of Bantam plugs is also less likely to cause shorting as they are plugged in.
Less common
A two-pin version, known to the telecom industry as a "310 connector", consists of two -inch phone plugs at a centre spacing of . The socket versions of these can be used with normal phone plugs provided the plug bodies are not too large, but the plug version will only mate with two sockets at inches centre spacing, or with line sockets, again with sufficiently small bodies. These connectors are still used today in telephone company central offices on "DSX" patch panels for DS1 circuits. A similar type of 3.5 mm connector is often used in the armrests of older aircraft, as part of the on-board in-flight entertainment system. Plugging a stereo plug into one of the two mono jacks typically results in the audio coming into only one ear. Adapters are available.
A short-barrelled version of the phone plug was used for 20th-century high-impedance mono headphones, and in particular those used in World War II aircraft. These have become rare. It is physically possible to use a normal plug in a short socket, but a short plug will neither lock into a normal socket nor complete the tip circuit.
Less commonly used sizes, both diameters and lengths, are also available from some manufacturers, and are used when it is desired to restrict the availability of matching connectors, such as inside diameter jacks for fire safety communication in public buildings.
Decline of phone connector sockets in consumer goods
While phone connectors remain a standard connector type in some fields, such as desktop computers, musical instrument amplification, and live audio and recording equipment, as of 2025 they are becoming less commonly found in some consumer product categories.
Digital audio is now common and may be transmitted via USB sound cards, USB headphones, Bluetooth, or display connectors with integrated sound (e.g. DisplayPort and HDMI). Digital devices may also have internal speakers and microphones. The phone connector is therefore sometimes considered redundant and a waste of space, particularly on thinner mobile devices, and while low-profile surface-mount sockets waterproofed up to 1 meter exist, removing the socket entirely makes waterproofing easier.
Chinese phone manufacturers were early adopters of omitting the phone socket: first Oppo with its Finder in July 2012 (which came packaged with micro-USB headphones and supported Bluetooth headphones), followed by Vivo's X5Max in 2014, LeEco in April 2016, and Lenovo's Moto Z in September 2016. Apple's September 2016 announcement of the iPhone 7 without the socket was initially mocked by other manufacturers such as Samsung and Google, who eventually followed suit. The socket is also absent from some tablets and thin laptops (e.g. the Lenovo Duet Chromebook and Asus ZenBook 13 in 2020).
Aviation and US military connectors
The US military uses a variety of phone connectors, including 0.281-inch (7.14 mm) and 0.25-inch (6.35 mm) diameter plugs.
Commercial and general aviation (GA) civil aircraft headsets often use a pair of phone connectors. A standard ¼-inch (6.3 mm) two- or three-conductor plug, type PJ-055, is used for headphones. For the microphone, a smaller 0.206-inch (5.23 mm) diameter three-conductor plug, type PJ-068, is used.
Military aircraft and civil helicopters have another type termed the U-174/U (Nexus TP-101), also known as U-93A/U (Nexus TP-102) and Nexus TP-120. These are also known as US NATO plugs. These have a diameter shaft with four conductors, allowing two for the headphones, and two for the microphone. Also used is the U-384/U (Nexus TP-105), which has the same diameter as the U-174/U but is slightly longer and has 5 conductors instead of 4.
There is a confusingly similar four-conductor British connector, Type 671 (10H/18575), with a slightly larger diameter, used for headsets in many UK military aircraft and often referred to as a UK NATO or European NATO connector.
General use
In the most common arrangement, consistent with the original intention of the design, the male plug is connected to a cable and the female socket is mounted in a piece of equipment. A considerable variety of line plugs and panel sockets is available, including plugs suiting various cable sizes, right-angle plugs, and both plugs and sockets in a variety of price ranges and with current capacities up to 15 amperes for certain heavy-duty ¼ in versions intended for loudspeaker connections.
Common uses of phone plugs and their matching sockets include:
Headphone and earphone jacks on a wide range of equipment. 6.35 mm (¼ in) plugs are common on home and professional audio equipment, while 3.5 mm plugs are nearly universal for portable audio equipment and headphones. 2.5 mm plugs are less common, but are used on communication equipment such as cordless phones, mobile phones, and two-way radios, especially from the earliest years of the 21st century before the 3.5 mm size became standard on mobile phones. The use of headphone jacks in smartphones is declining in favor of USB-C connectors and wireless Bluetooth solutions.
Consumer electronics devices such as digital cameras, camcorders, and portable DVD players use 3.5 mm connectors for composite video and audio output. Typically, a TRS connection is used for mono unbalanced audio plus video, and a TRRS connection for stereo unbalanced audio plus analog video. Cables designed for this use are often terminated with RCA connectors on the other end. A combined video/audio jack is also present on some computers; several generations of the Raspberry Pi have analog audio and video from the same jack, and Sony also used this style of connection as the TV-out on some models of Vaio laptop.
Hands-free sets and headsets often use 3.5 mm or 2.5 mm connectors. TRS connectors are used for mono audio out and an unbalanced microphone (with a shared ground). Four-conductor TRRS phone connectors add an additional audio channel for stereo output. TRRS connectors used for this purpose are sometimes interoperable with TRS connectors, depending on how the contacts are used.
Microphone inputs on tape and cassette recorders, sometimes with remote-control switching on the ring. Early monaural cassette recorders mostly used a dual-pin version consisting of a 3.5 mm TS for the microphone and a 2.5 mm TS for remote control, which switches the recorder's power supply.
Musical instruments, such as guitars, digital keyboards and electronic drum kits – along with associated audio equipment such as amplifiers and effects units – generally use 6.35mm TS connectors.
Patching points (insert points) on a wide range of equipment. An unusual example is the Enigma machine, which featured a plugboard as part of its encryption system.
Computer sound
Any number of 3.5 mm sockets for input and output may be found on personal computers, either from integrated sound hardware common on motherboards or from insertable sound cards. The color code from the 1999 PC System Design Guide for 3.5 mm TRS sockets is widely followed: pink for the microphone input, light blue for line in, and lime green for line out. AC'97 and its 2004 successor Intel High Definition Audio are widely adopted specifications that, while not mandating physical sockets, do provide specifications for a front-panel connector with pin assignments for two ports with jack detection. Front panels commonly have a stereo output socket for headphones and (slightly less commonly) a stereo input socket for a microphone. The back panel may have additional sockets, most commonly for line out, microphone, and line in, and less commonly for multiple surround-sound outputs. Laptops and tablets tend to have fewer sockets than desktops due to size constraints.
Microphone power
Some computers include a 3.5 mm TRS socket for mono microphone that delivers a 5 V bias voltage on the ring to power an electret microphone's integrated buffer amplifier, though details depend on the manufacturer. The Apple PlainTalk microphone socket is a historical variant that accepts either a 3.5 mm line input or an elongated 3.5 mm TRS plug whose tip carries the amplifier's power.
TRRS headset sockets
Some newer computers, especially laptops, have 3.5 mm TRRS headset sockets, which are compatible with phone headsets and may be distinguished by a headset icon instead of the usual headphones or microphone icons. These are particularly used for voice over IP.
Surround sound
Sound cards that output 5.1 surround sound have three sockets to accommodate six channels: front left and right; surround left and right; and center and subwoofer. 6.1 and 7.1 channel sound cards from Creative Labs, however, use a single three-conductor socket (for the front speakers) and two four-conductor sockets. This is to accommodate rear-center (6.1) or rear left and right (7.1) channels without the need for additional sockets on the sound card.
Combined TRS and TOSLINK
Some portable computers have a combined 3.5 mm TRS/TOSLINK jack, supporting stereo audio output using either a TRS connector or TOSLINK (stereo or 5.1 Dolby Digital/DTS) digital output using a suitable optical adapter. Most iMac computers have this digital/analog combo output feature as standard, with early MacBooks having two ports, one for analog/digital audio input and the other for output. Support for input was dropped on various later models.
Compatibility for different numbers of rings
The original application for the 6.35 mm (¼ in) phone jack was in manual telephone exchanges. Many different configurations of these phone plugs were used, some accommodating five or more conductors, with several tip profiles. Of these many varieties, only the two-conductor version with a rounded tip profile was compatible between different manufacturers, and this was the design that was at first adopted for use with microphones, electric guitars, headphones, loudspeakers, and other audio equipment.
When a three-conductor version of the 6.35 mm plug was introduced for use with stereo headphones, it was given a sharper tip profile to make it possible to manufacture jacks that would accept only stereo plugs, to avoid short-circuiting the right channel of the amplifier. This attempt has long been abandoned, and now the convention is that all plugs fit all sockets of the same size, regardless of whether they are balanced or unbalanced, mono or stereo. Most 6.35 mm plugs, mono or stereo, now have the profile of the original stereo plug, although a few rounded mono plugs are still produced. The profiles of stereo miniature and sub-miniature plugs have always been identical to the mono plugs of the same size.
The results of this physical compatibility are:
If a 2-conductor plug is inserted into a 3-conductor socket, then the socket's ring is shorted to ground, thus any signal sent from that socket's ring is lost. Equipment not designed for this short might, for instance, damage an audio amplifier channel.
If a 3-conductor plug is connected to a 2-conductor socket, normally the result is to leave the ring of the plug unconnected. This open circuit is potentially dangerous to equipment using vacuum tubes, but most solid-state devices will tolerate an open condition.
Equipment aware of this possible shorting allows, for instance:
Mono equipment receiving stereo output will simply use the left (tip) channel as the mono input signal and lose the right (ring) channel of the stereo audio.
The positive (tip) component of a balanced signal will be received, though without the full benefits of balanced audio, since the signal's negative (ring) component will be lost.
Devices with an even higher number of rings may be backwards-compatible with an opposite-gendered connector having fewer rings, or the combination may cause damage. For example, 3.5 mm TRRS sockets that accept TRRS headsets (stereo headphones with a microphone) are often compatible with standard TRS stereo headphones: the contact that expects a microphone signal is simply shorted to ground and therefore provides a zero signal. Conversely, a TRRS headset can be plugged into a TRS socket, in which case the speakers may still work even though the microphone will not (its signal contact is left disconnected).
Because of a past lack of standardization in the dimensions (length) given to the ring conductor and the insulating portions on either side of it in 6.35 mm (¼ in) phone connectors, and in the width of the contacts in different brands and generations of sockets, there are occasional compatibility issues between differing brands of plug and socket. This can result in a contact in the socket bridging (shorting) the ring and sleeve contacts of a phone connector.
Video
Equipment requiring video with stereo audio input or output sometimes uses 3.5 mm TRRS connectors. Two incompatible variants exist, 15 mm and 17 mm in length, and using the wrong variant may either simply not work or could cause physical damage.
Attempting to fully insert the longer (17 mm) plug into a receptacle designed for the shorter (15 mm) plug may damage the receptacle, and may damage any electronics located immediately behind the receptacle. However, partially inserting the plug will work as the tip/ring/ring distances are the same for both variants.
A shorter plug in a socket designed for the longer connector may not be retained firmly and may result in wrong signal routing or a short circuit inside the equipment (e.g. the plug tip may cause the contacts inside the receptacle – tip/ring 1, etc. – to short together).
The shorter 15 mm TRRS variant is more common and physically compatible with standard 3.5 mm TRS and TS connectors.
Recording equipment
Many small video cameras, laptops, recorders and other consumer devices use a 3.5 mm microphone connector for attaching a microphone to the system. These fall into three categories:
Devices that use an unpowered microphone: usually a cheap dynamic or piezoelectric microphone. The microphone generates its own voltage and needs no power.
Devices that use a self-powered microphone: usually a condenser microphone with an internal battery-powered amplifier.
Devices that use a plug-in powered microphone: an electret microphone containing an internal FET amplifier. These provide a good quality signal in a very small microphone. However, the internal FET needs a DC power supply, which is provided as a bias voltage for an internal preamp transistor. Plug-in power is supplied on the same line as the audio signal, using an RC filter. The DC bias voltage supplies the FET amplifier (at a low current), while the capacitor decouples the DC supply from the AC input to the recorder. Typically, V=1.5 V, R=1 kΩ, C=47 μF. If a recorder provides plug-in power, and the microphone does not need it, everything will usually work well. In the converse case (recorder provides no power; microphone needs power), no sound will be recorded.
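As a rough illustration of the decoupling described for plug-in power, the typical component values quoted above (V = 1.5 V, R = 1 kΩ, C = 47 μF) form a simple first-order RC high-pass filter from the recorder's point of view; its corner frequency sits far below the audio band, so the capacitor passes the audio signal while blocking the DC bias. A minimal sketch follows; the function name is illustrative, not from any recorder's documentation:

```python
import math

def rc_highpass_cutoff(r_ohms: float, c_farads: float) -> float:
    """Corner frequency of a first-order RC filter: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Typical plug-in-power values quoted above: R = 1 kOhm, C = 47 uF.
fc = rc_highpass_cutoff(1e3, 47e-6)
print(f"Cutoff frequency: {fc:.2f} Hz")  # roughly 3.4 Hz, well below audible frequencies
```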
Mobile devices
Three- or four-conductor (TRS or TRRS) 2.5 mm and 3.5 mm sockets were common on older cell phones and smartphones respectively, providing mono (three-conductor) or stereo (four-conductor) sound and a microphone input, together with signaling (e.g., push a button to answer a call). These are used both for handsfree headsets and for stereo headphones.
3.5 mm TRRS (stereo-plus-mic) sockets became particularly common on smartphones, and have been used by Nokia and others since 2006, and as mentioned in the compatibility section, they are often compatible with standard 3.5 mm stereo headphones. Many computers, especially laptops, also include a TRRS headset socket compatible with the headsets intended for smartphones.
The four conductors of a TRRS connector are assigned to different purposes by different manufacturers. Any 3.5 mm plug can be plugged mechanically into any socket, but many combinations are electrically incompatible. For example, plugging TRRS headphones into a TRS headset socket, a TRS headset into a TRRS socket, or plugging TRRS headphones from one manufacturer into a TRRS socket from another may not function correctly, or at all. Mono audio will usually work, but stereo audio or the microphone may not work, or the pause/play controls may be inactive, as is common when trying to use headphones with controls for iPhones on an Android device, or vice versa.
TRRS standards
Two different forms are frequently found. Both place left audio on the tip and right audio on the first ring, same as stereo connectors. They differ in the placement of the microphone and return contacts.
The OMTP standard places the ground return on the sleeve and the microphone on the second ring. It has been accepted as a national Chinese standard YDT 1885–2009. In the West, it is mostly used on older devices, such as older Nokia mobiles, older Samsung smartphones, and some Sony Ericsson phones. It is widely used in products meant for the Chinese market. Headsets using this wiring are sometimes indicated by black plastic separators between the rings.
The CTIA/AHJ standard reverses these contacts, putting the microphone on the sleeve. It was used by Apple's iPhone line up to the 6S and SE (1st generation). In the West, these products made it the de facto TRRS standard, and it is now used by HTC devices and recent Samsung, Nokia, and Sony phones, among others. It has the disadvantage that the microphone gets shorted to ground if the device has a metal body and the sleeve has a flange touching the body. Headsets using this wiring are sometimes indicated by white plastic separators between the rings.
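Since the two wirings differ only in which of the last two contacts carries the microphone and which carries the ground return, the difference can be summarised compactly. Below is a minimal sketch; the variable names and printed layout are illustrative, and the assignments are taken from the descriptions above:

```python
# TRRS contact assignments described above, listed from plug tip to sleeve.
# Both standards put left audio on the tip and right audio on the first ring.
TRRS_PINOUTS = {
    "OMTP": ("left audio", "right audio", "microphone", "ground"),
    "CTIA/AHJ": ("left audio", "right audio", "ground", "microphone"),
}

CONTACTS = ("tip", "ring 1", "ring 2", "sleeve")

for standard, signals in TRRS_PINOUTS.items():
    assignments = ", ".join(f"{contact}={signal}" for contact, signal in zip(CONTACTS, signals))
    print(f"{standard}: {assignments}")
```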
If a CTIA headset is connected to an OMTP device, the missing ground effectively connects the speakers in series, out-of-phase. This removes the singer's voice on typical popular music recordings, which place the singers in the center. If the main microphone button is held down, shorting across the microphone and restoring ground, the correct sound may be audible.
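A back-of-the-envelope way to see why centre-panned vocals disappear in this mismatch: with the ground return missing, the two earpieces end up in series between the left and right outputs, so the signal heard is roughly the difference of the two channels. The following is a simplified model that ignores the microphone element and impedances; it is an illustration, not a statement from any standard:

```latex
% Simplified model of a CTIA headset on an OMTP socket with no ground return:
% the series-connected earpieces see the difference of the two channels.
v_{\mathrm{heard}}(t) \;\propto\; v_L(t) - v_R(t)
% A centre-panned vocal has v_L = v_R, so its contribution cancels,
% while instruments panned to one side remain audible.
```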
The 4-pole 3.5 mm connector is defined by the Japanese standard JEITA/EIAJ RC-5325A, "4-Pole miniature concentric plugs and jacks", originally published in 1993. 3-pole 3.5 mm TRS connectors are defined in JIS C 6560.
Malnutrition
Malnutrition occurs when an organism gets too few or too many nutrients, resulting in health problems. Specifically, it is a deficiency, excess, or imbalance of energy, protein and other nutrients which adversely affects the body's tissues and form.
Malnutrition is a category of diseases that includes undernutrition and overnutrition. Undernutrition is a lack of nutrients, which can result in stunted growth, wasting, and underweight. A surplus of nutrients causes overnutrition, which can result in obesity. In some developing countries, overnutrition in the form of obesity is beginning to appear within the same communities as undernutrition.
Most clinical studies use the term 'malnutrition' to refer to undernutrition. However, the use of 'malnutrition' instead of 'undernutrition' makes it impossible to distinguish between undernutrition and overnutrition, a less acknowledged form of malnutrition. Accordingly, a 2019 report by The Lancet Commission suggested expanding the definition of malnutrition to include "all its forms, including obesity, undernutrition, and other dietary risks." The World Health Organization and The Lancet Commission have also identified "[t]he double burden of malnutrition", which occurs from "the coexistence of overnutrition (overweight and obesity) alongside undernutrition (stunted growth and wasting)."
Prevalence
It is estimated that nearly one in three persons globally has at least one form of malnutrition: wasting, stunting, vitamin or mineral deficiency, overweight, obesity, or diet-related noncommunicable diseases. Undernutrition is more common in developing countries. Stunting is more prevalent in urban slums than in rural areas. Studies on malnutrition categorise the population into different groups, including infants, under-five children, children, adolescents, pregnant women, adults and the elderly. The use of different growth references in different studies leads to variances in the reported prevalence of undernutrition. Growth references used in studies include the National Center for Health Statistics (NCHS) growth charts, the WHO reference 2007, the Centers for Disease Control and Prevention (CDC) growth charts, the National Health and Nutrition Examination Survey (NHANES), the WHO reference 1995, the International Obesity Task Force (IOTF) criteria and the Indian Academy of Pediatrics (IAP) growth charts.
In children
The prevalence of undernutrition is highest among children under five. In 2021, 148.1 million children under five years old were stunted, 45 million were wasted, and 37 million were overweight or obese. The same year, an estimated 45% of deaths in children were linked to undernutrition. The prevalence of wasting among children under five in South Asia was reported at 16% moderately or severely wasted; UNICEF later reported this prevalence as having slightly improved, but still being at 14.8%. India has one of the highest burdens of wasting in Asia, with over 20% of children wasted. However, the burden of undernutrition among under-five children in African countries is much higher. A pooled analysis of the prevalence of chronic undernutrition among under-five children in East Africa found it to be 33.3%, ranging from 21.9% in Kenya to 53% in Burundi.
In Tanzania, the prevalence of stunting among children under five varied from 41% in lowland to 64.5% in highland areas. Undernutrition measured by underweight and wasting was 11.5% and 2.5% respectively in lowland areas, and 22% and 1.4% respectively in highland areas. In South Sudan, the prevalence of undernutrition in under-five children, as measured by stunting, underweight and wasting, was 23.8%, 4.8% and 2.3% respectively. In 28 countries, at least 30% of children were still affected by stunting in 2022.
Vitamin A deficiency affects one third of children under age 5 around the world, leading to 670,000 deaths and 250,000–500,000 cases of blindness. Vitamin A supplementation has been shown to reduce all-cause mortality by 12 to 24%.
In adults
As of June 2021, 1.9 billion adults were overweight or obese, and 462 million adults were underweight. Globally, two billion people had iodine deficiency in 2017. In 2020, 900 million women and children had anemia, which is often caused by iron deficiency. More than 3.1 billion people in the world – 42% – were unable to afford a healthy diet in 2021.
Certain groups have higher rates of undernutrition, including elderly people and women (in particular while pregnant or while breastfeeding children under five years of age). Undernutrition is an increasing health problem in people aged over 65 years, even in developed countries, especially among nursing home residents and in acute care hospitals. In the elderly, undernutrition is more commonly due to physical, psychological, and social factors than to a lack of food. Age-related reduction in dietary intake due to chewing and swallowing problems, sensory decline, depression, an imbalanced gut microbiome, poverty and loneliness are major contributors to undernutrition in the elderly population. Malnutrition can also result from poorly planned weight-loss diets adopted without the advice of a medical practitioner or nutritionist.
Increase in 2020
There has been a global increase in food insecurity and hunger between 2011 and 2020. In 2015, 795 million people (about one in ten people on earth) had undernutrition. It is estimated that between 691 and 783 million people in the world faced hunger in 2022. According to UNICEF, 2.4 billion people were moderately or severely food insecure in 2022, 391 million more than in 2019.
These increases are partially related to the ongoing COVID-19 pandemic, which continues to highlight the weaknesses of current food and health systems. It has contributed to food insecurity, increasing hunger worldwide; meanwhile, lower physical activity during lockdowns has contributed to increases in overweight and obesity. In 2020, experts estimated that by the end of the year the pandemic could double the number of people at risk of suffering acute hunger. Similarly, experts estimated that the prevalence of moderate and severe wasting could increase by 14% due to COVID-19; coupled with reductions in the coverage of nutrition and health services, this could result in over 128,000 additional deaths among children under 5 in 2020 alone. Although COVID-19 is less severe in children than in adults, the risk of severe disease increases with undernutrition.
Other major causes of hunger include man-made conflicts, climate change, and economic downturns.
Type
Undernutrition
Undernutrition can occur either due to protein-energy wasting or as a result of micronutrient deficiencies. It adversely affects physical and mental functioning, and causes changes in body composition and body cell mass. Undernutrition is a major health problem, causing the highest mortality rate in children, particularly in those under 5 years, and is responsible for long-lasting physiologic effects. It is a barrier to the complete physical and mental development of children.
Undernutrition can manifest as stunting, wasting, and underweight. If undernutrition occurs during pregnancy, or before two years of age, it may result in permanent problems with physical and mental development. Extreme undernutrition can cause starvation, chronic hunger, Severe Acute Malnutrition (SAM), and/or Moderate Acute Malnutrition (MAM).
The signs and symptoms of micronutrient deficiencies depend on which micronutrient is lacking. However, undernourished people are often thin and short, with very poor energy levels; and swelling in the legs and abdomen is also common. People who are undernourished often get infections and frequently feel cold.
Micronutrient undernutrition
Micronutrient undernutrition results from insufficient intake of vitamins and minerals. Worldwide, deficiencies in iodine, Vitamin A, and iron are the most common. Children and pregnant women in low-income countries are at especially high risk for micronutrient deficiencies.
Anemia is most commonly caused by iron deficiency, but can also result from other micronutrient deficiencies and diseases. This condition can have major health consequences.
It is possible to have overnutrition simultaneously with micronutrient deficiencies; this condition is termed the double burden of malnutrition.
Protein-energy malnutrition
'Undernutrition' sometimes refers specifically to protein–energy malnutrition (PEM). This condition involves both micronutrient deficiencies and an imbalance of protein intake and energy expenditure. It differs from calorie restriction in that calorie restriction may not result in negative health effects. Hypoalimentation (underfeeding) is one cause of undernutrition.
Two forms of PEM are kwashiorkor and marasmus; both commonly coexist.
Kwashiorkor is primarily caused by inadequate protein intake. Its symptoms include edema, wasting, liver enlargement, hypoalbuminaemia, and steatosis; the condition may also cause depigmentation of skin and hair. The disorder is further identified by a characteristic swelling of the belly and extremities, which disguises the patient's undernourished condition. 'Kwashiorkor' means 'displaced child' and is derived from the Ga language of coastal Ghana in West Africa. It refers to "the sickness the baby gets when the next baby is born," as the condition often develops when an older child is deprived of breastfeeding and weaned onto a diet composed largely of carbohydrates.
Marasmus (meaning 'to waste away') can result from a sustained diet deficient in both protein and energy, which causes the body's metabolism to adapt in order to prolong survival. The primary symptoms are severe wasting with little or no edema, minimal subcutaneous fat, and abnormal serum albumin levels. It is traditionally seen in cases of famine, significant food restriction, or severe anorexia. The condition is characterized by extreme wasting of the muscles and a gaunt expression.
Overnutrition
Excessive consumption of energy-dense foods and drinks, combined with limited physical activity, causes overnutrition. It leads to overweight, defined as a body mass index (BMI) of 25 or more, and can lead to obesity (a BMI of 30 or more). Obesity has become a major health issue worldwide. Overnutrition is linked to chronic non-communicable diseases such as diabetes, certain cancers, and cardiovascular diseases, so identifying and addressing the immediate risk factors has become a major health priority. Recent evidence indicates that diet-induced obesity in fathers and mothers around the time of conception can negatively program the health outcomes of multiple generations.
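Since overweight and obesity are defined here by BMI thresholds, the classification can be expressed as a short calculation. The sketch below is illustrative only; the function names and the example figures are made up, and the cut-offs are the ones stated above:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / (height_m ** 2)

def classify(bmi_value: float) -> str:
    # Thresholds as given above: 25 or more is overweight, 30 or more is obese.
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "not overweight by this definition"

# Illustrative example: a 1.70 m adult weighing 88 kg.
value = bmi(88, 1.70)
print(f"BMI = {value:.1f} -> {classify(value)}")  # BMI = 30.4 -> obese
```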
According to UNICEF, at least 1 in every 10 children under five is overweight in 33 countries.
Classifying malnutrition
Definition by Gomez and Galvan
In 1956, Gómez and Galvan studied factors associated with death in a group of undernourished children in a hospital in Mexico City, Mexico. They defined three categories of malnutrition: first, second, and third degree. The degree of malnutrition is calculated based on a child's body size compared to the median weight for their age. The risk of death increases with increasing degrees of malnutrition.
An adaptation of Gomez's original classification is still used today. While it provides a way to compare malnutrition within and between populations, this classification system has been criticized for being "arbitrary" and for not considering overweight as a form of malnutrition. Also, height alone may not be the best indicator of malnutrition; children who are born prematurely may be considered short for their age even if they have good nutrition.
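The Gómez scheme is essentially a percentage-of-median calculation on a child's weight for age. A minimal sketch follows; the 90/75/60 percent cut-offs are the commonly cited values for the adapted Gómez classification and are an assumption here, since the text above does not state them:

```python
def gomez_degree(weight_kg: float, median_weight_for_age_kg: float) -> str:
    """Classify undernutrition as a percentage of the median weight for age.

    The 90/75/60 percent cut-offs are the commonly cited adaptation of the
    Gomez classification; they are assumed here rather than taken from the text.
    """
    percent_of_median = 100.0 * weight_kg / median_weight_for_age_kg
    if percent_of_median >= 90:
        return "normal"
    if percent_of_median >= 75:
        return "first degree (mild)"
    if percent_of_median >= 60:
        return "second degree (moderate)"
    return "third degree (severe)"

# Illustrative example: a child weighing 8.0 kg whose median weight for age is 12.0 kg.
print(gomez_degree(8.0, 12.0))  # about 66.7% of median -> second degree (moderate)
```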
Definition by Waterlow
In the 1970s, John Conrad Waterlow established a new classification system for malnutrition. Instead of using just weight for age measurements, Waterlow's system combines weight-for-height (indicating acute episodes of malnutrition) with height-for-age to show the stunting that results from chronic malnutrition. One advantage of the Waterlow classification is that weight for height can be calculated even if a child's age is unknown.
The World Health Organization frequently uses these classifications of malnutrition, with some modifications.
Effects
Undernutrition weakens every part of the immune system. Protein and energy undernutrition increases susceptibility to infection; so do deficiencies of specific micronutrients (including iron, zinc, and vitamins). In communities or areas that lack access to safe drinking water, these additional health risks present a critical problem.
Undernutrition plays a major role in the onset of active tuberculosis. It also raises the risk of HIV transmission from mother to child, and increases replication of the virus. Undernutrition can cause vitamin-deficiency-related diseases like scurvy and rickets. As undernutrition worsens, those affected have less energy and experience impairment in brain functions. This can make it difficult (or impossible) for them to perform the tasks needed to acquire food, earn an income, or gain an education.
Undernutrition can also cause acute problems, like hypoglycemia (low blood sugar). This condition can cause lethargy, limpness, seizures, and loss of consciousness. Children are particularly at risk and can become hypoglycemic after 4 to 6 hours without food. Dehydration can also occur in malnourished people, and can be life-threatening, especially in babies and small children.
Signs
There are many different signs of dehydration in undernourished people. These can include sunken eyes; a very dry mouth; decreased urine output and/or dark urine; increased heart rate with decreasing blood pressure; and altered mental status.
Cognitive development
Protein-calorie malnutrition can cause cognitive impairments. This most commonly occurs in people who were malnourished during a "critical period ... from the final third of gestation to the first 2 years of life". For example, in children under two years of age, iron deficiency anemia is likely to affect brain function acutely, and probably also chronically. Similarly, folate deficiency has been linked to neural tube defects.
Iodine deficiency is "the most common preventable cause of mental impairment worldwide." "Even moderate [iodine] deficiency, especially in pregnant women and infants, lowers intelligence by 10 to 15 I.Q. points, shaving incalculable potential off a nation's development." Among those affected, very few people experience the most visible and severe effects: disabling goiters, cretinism and dwarfism. These effects occur most commonly in mountain villages. However, 16 percent of the world's people have at least mild goiter (a swollen thyroid gland in the neck).
Causes and risk factors
Social and political
Social conditions have a significant influence on people's health. The social determinants of undernutrition mainly include poor education, poverty, disease burden and lack of women's empowerment. Identifying and addressing these determinants can eliminate undernutrition in the long term. Identifying the social conditions that cause malnutrition in children under five has received significant research attention, as it is a major public health problem.
Undernutrition most commonly results from a lack of access to high-quality, nutritious food. Household income is a socio-economic variable that influences access to nutritious food and the probability of under- and overnutrition in a community. In a study by Ghattas et al. (2020), the probability of overnutrition was significantly higher in higher-income families than in disadvantaged families. High food prices are a major factor preventing low-income households from obtaining nutritious food. For example, Khan and Kraemer (2009) found that in Bangladesh, low socioeconomic status was associated with chronic malnutrition because it inhibited the purchase of nutritious foods (such as milk, meat, poultry, and fruit).
Food shortages may also contribute to malnutrition in countries which lack technology. However, in the developing world, eighty percent of malnourished children live in countries that produce food surpluses, according to estimates from the Food and Agriculture Organization (FAO). The economist Amartya Sen observes that, in recent decades, famine has always been a problem of food distribution, purchasing power, and/or poverty, since there has always been enough food for everyone in the world.
There are also sociopolitical causes of malnutrition. For example, the population of a community might be at increased risk for malnutrition if governance is poor and the area lacks health-related services. On a smaller scale, certain households or individuals may be at an even higher risk due to differences in income levels, access to land, or levels of education. Communities play a crucial role in addressing the social causes of malnutrition: communities with high social support and knowledge sharing about social protection programs can better demand public services, and stronger public services and social protection programs minimise the risk of malnutrition.
It is argued that commodity speculators are increasing the cost of food. As the real-estate bubble in the United States was collapsing, it is said that trillions of dollars moved to invest in food and primary commodities, causing the 2007–2008 food price crisis.
The use of biofuels as a replacement for traditional fuels raises the price of food. The United Nations special rapporteur on the right to food, Jean Ziegler proposes that agricultural waste, such as corn cobs and banana leaves, should be used as fuel instead of crops.
In some developing countries, overnutrition (in the form of obesity) is beginning to appear in the same communities where malnutrition occurs. Overnutrition increases with urbanisation, food commercialisation and technological developments that encourage physical inactivity. Variations in the health status of individuals within the same society are associated with societal structure and an individual's socioeconomic status, which are reflected in income inequality, racism, educational differences and lack of opportunities.
Diseases and conditions
Infectious diseases which increase nutrient requirements, such as gastroenteritis, pneumonia, malaria, and measles, can cause malnutrition. So can some chronic illnesses, especially HIV/AIDS.
Malnutrition can also result from abnormal nutrient loss due to diarrhea or chronic small bowel illnesses, like Crohn's disease or untreated coeliac disease. "Secondary malnutrition" can result from increased energy expenditure.
In infants, a lack of breastfeeding may contribute to undernourishment. Anorexia nervosa and bariatric surgery can also cause malnutrition.
Dietary practices
Undernutrition
Undernutrition due to a lack of adequate breastfeeding is associated with the deaths of an estimated one million children annually. Illegal advertising of breast-milk substitutes contributed to malnutrition and continued three decades after its 1981 prohibition under the WHO International Code of Marketing of Breast-milk Substitutes.
Maternal malnutrition can also factor into the poor health or death of a baby. Over 800,000 neonatal deaths have occurred because of deficient growth of the fetus in the mother's womb.
Deriving too much of one's diet from a single source, such as eating almost exclusively potato, maize or rice, can cause malnutrition. This may either be from a lack of education about proper nutrition, only having access to a single food source, or from poor healthcare access and unhealthy environments.
It is not just the total amount of calories that matters: specific nutritional deficiencies, such as vitamin A, iron or zinc deficiency, can also increase the risk of death.
Overnutrition
Overnutrition caused by overeating is also a form of malnutrition. In the United States, more than half of all adults are now overweight—a condition that, like hunger, increases susceptibility to disease and disability, reduces worker productivity, and lowers life expectancy. Overeating is much more common in the United States, since most people have adequate access to food. Many parts of the world have access to a surplus of non-nutritious food. Increased sedentary lifestyles also contribute to overnutrition. Yale University psychologist Kelly Brownell calls this a "toxic food environment", where fat- and sugar-laden foods have taken precedence over healthy nutritious foods.
In these developed countries, overnutrition can be prevented by choosing the right kind of food. More fast food is consumed per capita in the United States than in any other country. This mass consumption of fast food results from its affordability and accessibility. Fast food, which is low in cost and nutrition, is high in calories. Due to increasing urbanization and automation, people are living more sedentary lifestyles. These factors combine to make weight gain difficult to avoid.
Overnutrition also occurs in developing countries. It has appeared in parts of developing countries where income is on the rise. It is also a problem in countries where hunger and poverty persist. Economic development, rapid urbanisation and shifting dietary patterns have increased the burden of overnutrition in the cities of low and middle-income countries. In China, consumption of high-fat foods has increased, while consumption of rice and other goods has decreased.
Overeating leads to many diseases, such as heart disease and diabetes, that may be fatal.
Agricultural productivity
Local food shortages can be caused by a lack of arable land, adverse weather, and/or poorer farming skills (like inadequate crop rotation). They can also occur in areas which lack the technology or resources needed for the higher yields found in modern agriculture. These resources include fertilizers, pesticides, irrigation, machinery, and storage facilities. As a result of widespread poverty, farmers and governments cannot provide enough of these resources to improve local yields.
Additionally, the World Bank and some wealthy donor countries have pressured developing countries to use free-market policies. Even as the United States and Europe extensively subsidized their own farmers, they urged developing countries to cut or eliminate subsidized agricultural inputs such as fertilizer. Without subsidies, few (if any) farmers in developing countries can afford fertilizer at market prices. This leads to low agricultural production, low wages, and high, unaffordable food prices. Fertilizer has also become harder to obtain where Western environmental groups have fought to end its use due to environmental concerns, which the Green Revolution pioneers Norman Borlaug and Keith Rosenberg cited as an obstacle to feeding Africa.
Future threats
In the future, a variety of factors could disrupt the global food supply and cause widespread malnutrition. UNICEF projects that almost 600 million people will be chronically undernourished in 2030.
Global warming is of importance to food security. Almost all malnourished people (95%) live in the tropics and subtropics, where the climate is relatively stable. According to the latest Intergovernmental Panel on Climate Change reports, temperature increases in these regions are "very likely." Even small changes in temperatures can make extreme weather conditions occur more frequently. Extreme weather events, like drought, have a major impact on agricultural production, and hence nutrition. For example, the 1998–2001 Central Asian drought killed about 80 percent of livestock in Iran and caused a 50% reduction in wheat and barley crops there. Other central Asian nations experienced similar losses. An increase in extreme weather such as drought in regions such as Sub-Saharan Africa would have even greater consequences in terms of malnutrition. Even without an increase of extreme weather events, a simple increase in temperature reduces the productivity of many crop species, and decreases food security in these regions.
Another threat is colony collapse disorder, a phenomenon where bees die in large numbers. Since many agricultural crops worldwide are pollinated by bees, colony collapse disorder represents a threat to the global food supply.
Prevention
Reducing malnutrition is a key part of the United Nations' Sustainable Development Goal 2 (SDG2), "Zero Hunger", which aims to reduce malnutrition, undernutrition, and stunted child growth. Managing severe acute undernutrition in a community setting has received significant research attention.
Food security
In the 1950s and 1960s, the Green Revolution aimed to bring modern Western agricultural techniques (such as nitrogen fertilizers and pesticides) to Asia. Investments in agriculture, such as funding for fertilizers and seeds, increased food harvests and thus food production. Consequently, food prices and malnutrition decreased (as they had earlier in Western nations).
The Green Revolution was possible in Asia because of existing infrastructure and institutions, such as a system of roads and public seed companies that made seeds available. These resources were in short supply in Africa, decreasing the Green Revolution's impact on the continent.
For example, almost five million of the 13 million people in Malawi used to need emergency food aid. However, in the early 2000s, the Malawian government changed its agricultural policies and implemented subsidies for fertilizer and seed, introduced against World Bank strictures. By 2007, farmers were producing record-breaking corn harvests: corn production leaped to 3.4 million tonnes in 2007 from 1.2 million tonnes in 2005, making Malawi a major food exporter. Consequently, food prices fell and wages for farmworkers rose. Such investments in agriculture are still needed in other African countries like the Democratic Republic of the Congo (DRC). Despite the country's great agricultural potential, the prevalence of malnutrition in the DRC is among the highest in the world. Proponents of investing in agriculture include Jeffrey Sachs, who argues that wealthy countries should invest in fertilizer and seed for Africa's farmers.
Imported Ready to Use Therapeutic Food (RUTF) has been used to treat malnutrition in northern Nigeria. Some Nigerians also use soy kunu, a locally sourced and prepared blend consisting of peanut, millet and soybeans.
New technology in agricultural production has great potential to combat undernutrition. It makes farming easier, thus improving agricultural yields. By increasing farmers' incomes, this could reduce poverty. It would also open up land which farmers could use to diversify crops for household use.
The World Bank claims to be part of the solution to malnutrition, asserting that countries can best break the cycle of poverty and malnutrition by building export-led economies, which give them the financial means to buy foodstuffs on the world market.
Economics
Many aid groups have found that giving cash assistance (or cash vouchers) is more effective than donating food. Particularly in areas where food is available but unaffordable, giving cash assistance is a cheaper, faster, and more efficient way to deliver help to the hungry. In 2008, the UN's World Food Program, the biggest non-governmental distributor of food, announced that it would begin distributing cash and vouchers instead of food in some areas, which Josette Sheeran, the WFP's executive director, described as a "revolution" in food aid. The aid agency Concern Worldwide piloted a method of giving cash assistance using a mobile phone operator, Safaricom, which runs a money transfer program that allows cash to be sent from one part of a country to another.
However, during a drought, delivering food might be the most appropriate way to help people, especially those who live far from markets and thus have limited access to them. Fred Cuny stated that "the chances of saving lives at the outset of a relief operation are greatly reduced when food is imported. By the time it arrives in the country and gets to people, many will have died." U.S. law requires food aid to be purchased at home rather than in the countries where the hungry live; this is inefficient because approximately half of the money spent goes for transport. Cuny further pointed out that "studies of every recent famine have shown that food was available in-country—though not always in the immediate food deficit area" and "even though by local standards the prices are too high for the poor to purchase it, it would usually be cheaper for a donor to buy the hoarded food at the inflated price than to import it from abroad."
Food banks and soup kitchens address malnutrition in places where people lack money to buy food. A basic income has been proposed as a way to ensure that everyone has enough money to buy food and other basic needs. This is a form of social security in which all citizens or residents of a country regularly receive an unconditional sum of money, either from a government or some other public institution, in addition to any income received from elsewhere.
Successful initiatives
Ethiopia pioneered a program that later became part of the World Bank's prescribed method for coping with a food crisis. Through the country's main food assistance program, the Productive Safety Net Program, Ethiopia provided rural residents who were chronically short of food a chance to work for food or cash. Foreign aid organizations like the World Food Program were then able to buy food locally from surplus areas to distribute in areas with a shortage of food. Aid organizations now view the Ethiopian program as a model of how to best help hungry nations.
Successful initiatives also include Brazil's recycling program for organic waste, which benefits farmers, the urban poor, and the city in general. City residents separate organic waste from their garbage, bag it, and then exchange it for fresh fruit and vegetables from local farmers. This reduces the country's waste while giving the urban poor a steady supply of nutritious food.
World population
Restricting population size is a proposed solution to malnutrition. Thomas Malthus argued that population growth could be controlled by natural disasters and by voluntary limits through "moral restraint." Robert Chapman suggests that government policies are a necessary ingredient for curtailing global population growth. The United Nations recognizes that poverty and malnutrition (as well as the environment) are interdependent and complementary with population growth. According to the World Health Organization, "Family planning is key to slowing unsustainable population growth and the resulting negative impacts on the economy, environment, and national and regional development efforts". However, more than 200 million women worldwide lack adequate access to family planning services.
There are different theories about what causes famine. Some theorists, like the Indian economist Amartya Sen, believe that the world has more than enough resources to sustain its population. In this view, malnutrition is caused by unequal distribution of resources and under- or unused arable land. For example, Sen argues that "no matter how a famine is caused, methods of breaking it call for a large supply of food in the Public Distribution System. This applies not only to organizing rationing and control, but also to undertaking work programmes and other methods of increasing purchasing power for those hit by shifts in exchange entitlements in a general inflationary situation."
Food sovereignty
Food sovereignty is one suggested policy framework to resolve access issues. In this framework, people (rather than international market forces) have the right to define their own food, agricultural, livestock, and fishery systems. Food First is one of the primary think tanks working to build support for food sovereignty. Neoliberals advocate for an increasing role of the free market.
Health facilities
Another possible long-term solution to malnutrition is to increase access to health facilities in rural parts of the world. These facilities could monitor undernourished children, act as supplemental food distribution centers, and provide education on dietary needs. Similar facilities have already proven very successful in countries such as Peru and Ghana.
Breastfeeding
In 2016, estimates suggested that more widespread breastfeeding could prevent about 823,000 deaths annually among children under age 5. In addition to reducing infant deaths, breast milk provides an important source of micronutrients, which are clinically proven to bolster children's immune systems, and provides long-term defenses against non-communicable and allergic diseases. Breastfeeding may improve cognitive abilities in children, and correlates strongly with individual educational achievement. As previously noted, lack of proper breastfeeding is a major factor in child mortality rates, and is a primary determinant of disease development for children. The medical community recommends exclusively breastfeeding infants for six months, with nutritional whole-food supplementation and continued breastfeeding up to two years or older for optimal overall health outcomes. Exclusive breastfeeding is defined as giving an infant only breast milk for six months as a source of food and nutrition, with no other liquids (including water) or semi-solid foods.
Barriers to breastfeeding
Breastfeeding is noted as one of the most cost-effective medical interventions for improving child health. While there are considerable differences between developed and developing countries, there are universal determinants of whether a mother breastfeeds or uses formula; these include income, employment, social norms, and access to healthcare. Many new mothers face financial barriers; community-based healthcare workers have helped to alleviate these barriers while also providing a viable alternative to traditional and expensive hospital-based medical care. Recent studies, based upon surveys conducted from 1995 to 2010, show that exclusive breastfeeding rates have risen globally from 33% to 39%. Despite this growth, medical professionals acknowledge the need for improvement given the importance of exclusive breastfeeding.
21st century global initiatives
Starting around 2009, there was renewed international media and political attention focused on malnutrition. This resulted in part from spikes in food prices and the 2008 financial crisis. Additionally, there was an emerging consensus that combating malnutrition is one of the most cost-effective ways to contribute to development. This led to the 2010 launch of the UN's Scaling up Nutrition movement (SUN).
In April 2012, a number of countries signed the Food Assistance Convention, the world's first legally binding international agreement on food aid. The following month, the Copenhagen Consensus recommended that politicians and private sector philanthropists should prioritize interventions against hunger and malnutrition to maximize the effectiveness of aid spending. The Consensus recommended prioritizing these interventions ahead of any others, including the fights against malaria and AIDS.
In June 2015, the European Union and the Bill & Melinda Gates Foundation launched a partnership to combat undernutrition, especially in children. The program was first implemented in Bangladesh, Burundi, Ethiopia, Kenya, Laos and Niger. It aimed to help these countries improve information and analysis about nutrition, enabling them to develop effective national nutrition policies.
Also in 2015, the UN's Food and Agriculture Organization created a partnership aimed at ending hunger in Africa by 2025. The African Union's Comprehensive Africa Agriculture Development Programme (CAADP) provided the framework for the partnership. It includes a variety of interventions, including support for improved food production, a strengthening of social protection, and integration of the right to food into national legislation.
The EndingHunger campaign is an online communication campaign whose goal is to raise awareness about hunger. The campaign has created viral videos depicting celebrities voicing their anger about the large number of hungry people in the world.
After the Millennium Development Goals expired in 2015, the Sustainable Development Goals became the main global policy focus to reduce hunger and poverty. In particular, Goal 2: Zero Hunger sets globally agreed-upon targets to wipe out hunger, end all forms of malnutrition, and make agriculture sustainable. The partnership Compact2025 develops and disseminates evidence-based advice to politicians and other decision-makers, with the goal of ending hunger and undernutrition by 2025. The International Food Policy Research Institute (IFPRI) led the partnership, with the involvement of UN organisations, non-governmental organizations (NGOs), and private foundations.
Treatment
Improving nutrition
Efforts such as infant and young child feeding practices to improve nutrition are some of the common forms of development aid. Interventions often promote breastfeeding to reduce rates of malnutrition and death in children. Some of these interventions have been successful. For example, interventions with commodities such as ready-to-use therapeutic foods, ready-to-use supplementary foods, micronutrient interventions and vitamin supplementation were found to significantly improve nutrition, reduce stunting and prevent diseases in communities with severe acute malnutrition. In young children, outcomes improve when children between six months and two years of age receive complementary food (in addition to breast milk). There is also good evidence that supports giving supplemental micronutrients to pregnant women and young children in the developing world.
The United Nations has reported on the importance of nutritional counselling and support, for example in the care of HIV-infected persons, especially in "resource-constrained settings where malnutrition and food insecurity are endemic". UNICEF provides nutritional counselling services for malnourished children in Afghanistan.
Sending food and money is a common form of development aid, aimed at feeding hungry people. Some strategies help people buy food within local markets. Simply feeding students at school is insufficient.
Longer-term measures include improving agricultural practices, reducing poverty, and improving sanitation.
Identifying malnourishment
Measuring children is crucial to identifying malnourishment. In 2000, the United States Centers for Disease Control and Prevention (CDC) established the International Micronutrient Malnutrition Prevention and Control (IMMPaCt) program. It tested children for malnutrition by conducting a three-dimensional scan using a tablet such as an iPad. Its objective was to help doctors provide more efficient treatments. There may be some chance of error when using this method. The Screening Tool for the Assessment of Malnutrition in Paediatrics (STAMP) is another method for the identification and evaluation of malnutrition in young children. The tool has fair to medium reliability in identifying children at risk of malnutrition.
A systematic review of 42 studies found that many approaches to mitigating acute malnutrition are equally effective; thus, intervention decisions can be based on cost-related factors. Overall, evidence for the effectiveness of acute malnutrition interventions is not robust. The limited evidence related to cost indicates that community and outpatient management of children with uncomplicated malnutrition may be the most cost-effective strategy.
Regularly measuring and charting children's growth, combined with activities to promote health (an intervention called growth monitoring and promotion, or GMP), is often considered by policy makers and is recommended by the World Health Organization. This program is often performed at the same time as a child's regular immunizations. Despite widespread use of this type of program, further studies are needed to understand its impact on overall child health, how to better address faltering growth, and how to improve feeding practices in low- and middle-income countries.
Medical management
It is often possible to manage severe malnutrition within a person's home, using ready-to-use therapeutic foods. In people with severe malnutrition complicated by other health problems, treatment in a hospital setting is recommended. In-hospital treatment often involves managing low blood sugar, maintaining adequate body temperature, addressing dehydration, and gradual feeding.
Routine antibiotics are usually recommended because malnutrition weakens the immune system, causing a high risk of infection. Additionally, broad spectrum antibiotics are recommended in all severely undernourished children with diarrhea requiring admission to hospital.
A severely malnourished child who appears to have dehydration, but has not had diarrhea, should be treated as if they have an infection.
Among malnourished people who are hospitalized, nutritional support improves protein intake, calorie intake, and weight.
Bangladeshi model
In response to child malnutrition, the Bangladeshi government recommends ten steps for treating severe malnutrition:
Prevent or treat dehydration
Prevent or treat low blood sugar
Prevent or treat low body temperature
Prevent or treat infection
Correct electrolyte imbalances
Correct micronutrient deficiencies
Start feeding cautiously
Achieve catch-up growth
Provide psychological support
Prepare for discharge and follow-up after recovery
Therapeutic foods
Due in part to limited research on supplementary feeding, there is little evidence that this strategy is beneficial. A 2015 systematic review of 32 studies found that there are limited benefits when children under 5 receive supplementary feeding, especially among younger, poorer, and more undernourished children.
However, specially formulated foods do appear to be useful in treating moderate acute malnutrition in the developing world. These foods may have additional benefits in humanitarian emergencies, since they can be stored for years, can be eaten directly from the packet, and do not have to be mixed with clean water or refrigerated. In young children with severe acute malnutrition, it is unclear if ready-to-use therapeutic food differs from a normal diet.
Severely malnourished individuals can experience refeeding syndrome if fed too quickly. Refeeding syndrome can result regardless of whether food is taken orally, enterally or parenterally. It can present several days after eating with potentially fatal heart failure, dysrhythmias, and confusion.
Some manufacturers have fortified everyday foods with micronutrients before selling them to consumers. For example, flour has been fortified with iron, zinc, folic acid, and other B vitamins like thiamine, riboflavin, niacin and vitamin B12. Baladi bread (Egyptian flatbread) is made with fortified wheat flour. Other fortified products include fish sauce in Vietnam and iodized salt.
Micronutrient supplementation
According to the World Bank, treating malnutrition – mostly by fortifying foods with micronutrients – improves lives more quickly than other forms of aid, and at a lower cost. The Copenhagen Consensus, a group of economists who reviewed a variety of development proposals, ranked micronutrient supplementation as its number-one treatment strategy.
In malnourished people with diarrhea, zinc supplementation is recommended following an initial four-hour rehydration period. Daily zinc supplementation can help reduce the severity and duration of the diarrhea. Additionally, continuing daily zinc supplementation for ten to fourteen days makes diarrhea less likely to recur in the next two to three months.
Malnourished children also need both potassium and magnesium. Within two to three hours of starting rehydration, children should be encouraged to take food, particularly foods rich in potassium like bananas, green coconut water, and unsweetened fresh fruit juice. Along with continued eating, many homemade products can also help restore normal electrolyte levels. For example, early during the course of a child's diarrhea, it can be beneficial to provide cereal water (salted or unsalted) or vegetable broth (salted or unsalted). If available, vitamin A, potassium, magnesium, and zinc supplements should be added, along with other vitamins and minerals.
Giving base (as in Ringer's lactate) to treat acidosis without simultaneously supplementing potassium worsens low blood potassium.
Treating diarrhea
Preventing dehydration
Food and drink can help prevent dehydration in malnourished people with diarrhea. Eating (or breastfeeding, among infants) should resume as soon as possible. Sugary beverages like soft drinks, fruit juices, and sweetened teas are not recommended as they may worsen diarrhea.
Malnourished people with diarrhea (especially children) should be encouraged to drink fluids; the best choices are fluids with modest amounts of sugar and salt, like vegetable broth or salted rice water. If clean water is available, they should be encouraged to drink that too. Malnourished people should be allowed to drink as much as they want, unless signs of swelling emerge.
Babies can be given small amounts of fluids via an eyedropper or a syringe without the needle. Children under two should receive a teaspoon of fluid every one to two minutes; older children and adults should take frequent sips of fluids directly from a cup. After the first two hours, fluids and foods should be alternated, and rehydration should continue at the same rate or more slowly, depending on how much fluid the child wants and whether diarrhea is ongoing.
If vomiting occurs, fluids can be paused for 5–10 minutes and then restarted more slowly. Vomiting rarely prevents rehydration, since fluids are still absorbed and vomiting is usually short-term.
Oral rehydration therapy
If prevention has failed and dehydration develops, the preferred treatment is rehydration through oral rehydration therapy (ORT). In severely undernourished children with diarrhea, rehydration should be done slowly, according to the World Health Organization.
Oral rehydration solutions consist of clean water mixed with small amounts of sugars and salts. These solutions help restore normal electrolyte levels, provide a source of carbohydrates, and help with fluid replacement.
Reduced-osmolarity ORS is the current standard of care for oral rehydration therapy, with reasonably wide availability. Introduced in 2003 by WHO and UNICEF, reduced-osmolarity solutions contain lower concentrations of sodium and glucose than original ORS preparations. Reduced-osmolarity ORS has the added benefit of reducing stool volume and vomiting while simultaneously preventing dehydration. Packets of reduced-osmolarity ORS include glucose, table salt, potassium chloride, and trisodium citrate. For general use, each packet should be mixed with a liter of water. However, for malnourished children, experts recommend adding a packet of ORS to two liters of water, along with an extra 50 grams of sucrose and some stock potassium solution.
People who have no access to commercially available ORS can make a homemade version using water, sugar, and table salt. Experts agree that homemade ORS preparations should include one liter (34 oz.) of clean water and 6 teaspoons of sugar; however, they disagree about whether they should contain half a teaspoon of table salt or a full teaspoon. Most sources recommend using half a teaspoon of salt per liter of water. However, people with malnutrition have an excess of body sodium. To avoid worsening this symptom, ORS for people with severe undernutrition should contain half the usual amount of sodium and more potassium.
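As a rough illustration of the dilution arithmetic described above, the sketch below compares the standard home mix with the lower-sodium preparation suggested for malnourished people. The teaspoon-to-gram conversions are illustrative assumptions, not values from any official guideline.

```python
# Illustrative sketch of the oral rehydration solution (ORS) dilutions
# described above. Teaspoon-to-gram conversions are rough assumptions.

SUGAR_G_PER_TSP = 4.0   # assumed conversion
SALT_G_PER_TSP = 6.0    # assumed conversion

def homemade_ors(water_l=1.0, sugar_tsp=6.0, salt_tsp=0.5):
    """Standard home mix: 1 L clean water, 6 tsp sugar, 1/2 tsp salt."""
    return {
        "water_l": water_l,
        "sugar_g": sugar_tsp * SUGAR_G_PER_TSP,
        "salt_g": salt_tsp * SALT_G_PER_TSP,
    }

def low_sodium_variant(standard):
    """Severe undernutrition: roughly half the usual sodium per litre."""
    adapted = dict(standard)
    adapted["salt_g"] = standard["salt_g"] / 2.0
    return adapted

standard = homemade_ors()
print(standard)                      # ~24 g sugar and ~3 g salt per litre
print(low_sodium_variant(standard))  # same sugar, ~1.5 g salt per litre
```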
Patients who do not drink may require fluids by nasogastric tube. Intravenous fluids are recommended only in those who have significant dehydration due to their potential complications, including congestive heart failure.
Low blood sugar
Hypoglycemia, whether known or suspected, can be treated with a mixture of sugar and water. If the patient is conscious, the initial dose of sugar and water can be given by mouth. Otherwise, they should receive glucose by intravenous or nasogastric tube. If seizures occur (and continue after glucose is given), rectal diazepam may be helpful. Blood sugar levels should be re-checked at two-hour intervals.
Hypothermia
Hypothermia (dangerously low core body temperature) can occur in malnutrition, particularly in children. Mild hypothermia causes confusion, trembling, and clumsiness; more severe cases can be fatal. Keeping malnourished children warm can prevent or treat hypothermia. Covering the child (including their head) in blankets is one method. Another method is to warm the child through direct skin-to-skin contact with their mother or father, then covering both parent and child.
Warming methods are usually most important at night. Prolonged bathing or prolonged medical exams can further lower body temperature and are not recommended for malnourished children at high risk of hypothermia.
Epidemiology
The figures provided in this section on epidemiology all refer to undernutrition, even where the term malnutrition is used; malnutrition, by definition, can also refer to an excess of nutrition.
The Global Hunger Index (GHI) is a multidimensional statistical tool used to describe the state of countries' hunger situation. The GHI measures progress and failures in the global fight against hunger and is updated once a year. Data from the 2015 report show that hunger levels have dropped 27% since 2000, although fifty-two countries remain at serious or alarming levels. In addition to the latest statistics on hunger and food security, the GHI features a different special topic each year; the 2015 report includes an article on conflict and food security.
People affected
The United Nations estimated that there were 821 million undernourished people in the world in 2017. This uses the UN's definition of 'undernourishment', which refers to insufficient consumption of raw calories and therefore does not necessarily include people who lack micronutrients. The undernourishment occurred despite the world's farmers producing enough food to feed around 12 billion people—almost double the current world population.
Malnutrition, as of 2010, was the cause of 1.4% of all disability-adjusted life years.
Mortality
In 2010, protein-energy malnutrition resulted in 600,000 deaths, down from 883,000 deaths in 1990. Other nutritional deficiencies, which include iodine deficiency and iron-deficiency anemia, resulted in another 84,000 deaths. In 2010, malnutrition caused about 1.5 million deaths in women and children.
According to the World Health Organization, malnutrition is the biggest contributor to child mortality, present in half of all cases. Six million children die of hunger every year. Underweight births and intrauterine growth restrictions cause 2.2 million child deaths a year. Poor or non-existent breastfeeding causes another 1.4 million. Other deficiencies, such as lack of vitamin A or zinc, for example, account for 1 million. Malnutrition in the first two years is irreversible. Malnourished children grow up with worse health and lower education achievement. Their own children tend to be smaller. Malnutrition was previously seen as something that exacerbates the problems of diseases such as measles, pneumonia and diarrhea, but malnutrition actually causes diseases, and can be fatal in its own right.
History
Hunger has been a perennial human problem. However, until the early 20th century, there was relatively little awareness of the qualitative aspects of malnutrition.
Throughout history, various peoples have known the importance of eating certain foods to prevent symptoms now associated with malnutrition. Yet such knowledge appears to have been repeatedly lost and then re-discovered. For example, the ancient Egyptians reportedly knew the symptoms of scurvy. Much later, in the 14th century, Crusaders sometimes used anti-scurvy measures – for example, ensuring that citrus fruits were planted on Mediterranean islands, for use on sea journeys. However, for several centuries, Europeans appear to have forgotten the importance of these measures. They rediscovered this knowledge in the 18th century, and by the early 19th century, the Royal Navy was issuing frequent rations of lemon juice to every crewman on their ships. This massively reduced scurvy deaths among British sailors, which in turn gave the British a significant advantage in the Napoleonic Wars. Later on in the 19th century, the Royal Navy replaced lemons with limes (unaware at the time that lemons are far more effective at preventing scurvy).
According to historian Michael Worboys, malnutrition was essentially discovered, and the science of nutrition established, between World War I and World War II. Advances built on prior work such as Casimir Funk's 1912 formulation of the concept of vitamins. Scientific study of malnutrition increased in the 1920s and 1930s, and grew even more common after World War II.
Non-governmental organizations and United Nations agencies began to devote considerable energy to alleviating malnutrition around the world. The exact methods and priorities for doing this tended to fluctuate over the years, with varying levels of focus on different types of malnutrition such as kwashiorkor or marasmus; varying levels of concern about protein deficiency compared with vitamins, minerals and lack of raw calories; and varying priorities given to the problem of malnutrition in general compared with other health and development concerns. The Green Revolution of the 1950s and 1960s saw considerable improvement in the capability to prevent malnutrition.
One of the first official global documents addressing food security and global malnutrition was the 1948 Universal Declaration of Human Rights (UDHR), which stated that access to food was part of the right to an adequate standard of living. The right to food was asserted in the International Covenant on Economic, Social and Cultural Rights, a treaty adopted by the United Nations General Assembly on December 16, 1966. The right to food is a human right for people to feed themselves in dignity and to be free from hunger, food insecurity, and malnutrition. As of 2018, the treaty had been signed by 166 countries; by signing, states agreed to take steps to the maximum of their available resources to achieve the right to adequate food.
After the 1966 International Covenant, global concern over access to sufficient food only grew, leading to the first World Food Conference, held in 1974 in Rome, Italy. The Universal Declaration on the Eradication of Hunger and Malnutrition was a UN resolution adopted on November 16, 1974, by all 135 countries that attended the conference. This non-legally binding document set forth aspirations for countries to follow in order to take sufficient action on the global food problem. Ultimately, the document outlined and provided guidance on how the international community as a whole could work towards fighting and solving the growing global problem of malnutrition and hunger.
The right to food was also included in the Additional Protocol to the American Convention on Human Rights in the area of Economic, Social, and Cultural Rights. This 1978 document was adopted by many countries in the Americas; its stated purpose is "to consolidate in this hemisphere, within the framework of democratic institutions, a system of personal liberty and social justice based on respect for the essential rights of man."
A later document in the timeline of global initiatives against malnutrition was the 1996 Rome Declaration on World Food Security, organized by the Food and Agriculture Organization. This document reaffirmed everyone's right of access to safe and nutritious food and set the goal for all nations to improve their commitment to food security by halving the number of undernourished people by 2015. In 2004 the Food and Agriculture Organization adopted the Right to Food Guidelines, which offered states a framework for implementing the right to food at the national level.
Special populations
Undernutrition is an important determinant of maternal and child health, accounting for more than a third of child deaths and more than 10 percent of the total global disease burden according to 2008 studies.
Children
Undernutrition adversely affects the cognitive development of children, contributing to poor earning capacity and poverty in adulthood. The development of childhood undernutrition coincides with the introduction of complementary weaning foods, which are usually nutrient-deficient. The World Health Organization estimates that malnutrition accounts for 54 percent of child mortality worldwide, about 1 million children. There is a strong association between undernutrition and child mortality. Another estimate, also by WHO, states that childhood underweight is the cause of about 35% of all deaths of children under the age of five years worldwide. Over 90% of the stunted children below five years of age live in sub-Saharan Africa and South Central Asia. Although access to adequate food and improved nutritional intake is an obvious solution for tackling undernutrition in children, progress in reducing child undernutrition has been disappointing.
Women
In 2022, more than 1 billion adolescent girls and women suffered from undernutrition, according to UNICEF's 2023 report "Undernourished and Overlooked: A Global Nutrition Crisis in Adolescent Girls and Women". The gender gap in food insecurity more than doubled between 2019 (49 million) and 2021 (126 million). The report shows that globally, 30% of women aged 15–49 years are living with anaemia, while 10% of women aged 20–49 years are underweight. South Asia, West and Central Africa, and Eastern and Southern Africa are home to 60% of women with anaemia and 65% of underweight women. In contrast, overweight affects more than 35% of women aged 20–49 years, of whom 13% are living with obesity. The Middle East and North Africa has the highest prevalence of overweight, with 61% affected; North America closely follows at 60%. Fewer than 1 in 3 adolescent girls and women have diets meeting the minimum dietary diversity in the Sudan (10%), Burundi (12%), Burkina Faso (17%) and Afghanistan (26%). In Niger, the percentage of women accessing a minimally diverse diet fell from 53% to 37% between 2020 and 2022.
Researchers from the Centre for World Food Studies in 2003 found that the gap between levels of undernutrition in men and women is generally small, but that the gap varies from region to region and from country to country. These small-scale studies showed that female undernutrition prevalence rates exceeded male undernutrition prevalence rates in South/Southeast Asia and Latin America and were lower in Sub-Saharan Africa. Datasets for Ethiopia and Zimbabwe reported undernutrition rates between 1.5 and 2 times higher in men than in women; however, datasets for India and Pakistan showed rates of undernutrition 1.5–2 times higher in women than in men. Intra-country variation also occurs, with frequently large gaps between regional undernutrition rates. Gender inequality in nutrition in some countries, such as India, is present in all stages of life.
Studies on nutrition concerning gender bias within households look at patterns of food allocation, and one study from 2003 suggested that women often receive a lower share of food requirements than men. Gender discrimination, gender roles, and social norms affecting women can lead to early marriage and childbearing, close birth spacing, and undernutrition, all of which contribute to malnourished mothers.
Within the household, there may be differences in levels of malnutrition between men and women, and these differences have been shown to vary significantly from one region to another, with problem areas showing relative deprivation of women. Samples of 1000 women in India in 2008 demonstrated that malnutrition in women is associated with poverty, lack of development and awareness, and illiteracy. The same study showed that gender discrimination in households can prevent a woman's access to sufficient food and healthcare. In an article about a research program on this topic, Najma Rivzi explains how socialization affects the health of women in Bangladesh. In some cases, such as in parts of Kenya in 2006, rates of malnutrition in pregnant women were even higher than rates in children.
Women in some societies are traditionally given less food than men since men are perceived to have heavier workloads. Household chores and agricultural tasks can in fact be very arduous and require additional energy and nutrients; however, physical activity, which largely determines energy requirements, is difficult to estimate.
Physiology
Women have unique nutritional requirements, and in some cases need more nutrients than men; for example, women need twice as much calcium as men.
Pregnancy and breastfeeding
During pregnancy and breastfeeding, women must ingest enough nutrients for themselves and their child, so they need significantly more protein and calories during these periods, as well as more vitamins and minerals (especially iron, iodine, calcium, folic acid, and vitamins A, C, and K). In 2001 the FAO of the UN reported that iron deficiency affected 43 percent of women in developing countries and increased the risk of death during childbirth. A 2008 review of interventions estimated that universal supplementation with calcium, iron, and folic acid during pregnancy could prevent 105,000 maternal deaths (23.6 percent of all maternal deaths). Malnutrition has been found to affect three-quarters of UK women aged 16–49, as indicated by folic acid levels below WHO-recommended levels.
Frequent pregnancies with short intervals between them and long periods of breastfeeding add an additional nutritional burden.
Educating children
"Action for Healthy Kids" has created several methods to teach children about nutrition. They introduce 2 different topics, self-awareness which teaches children about taking care of their own health and social awareness, which is how culinary arts vary from culture to culture. As well as its importance when it comes to nutrition. They include eBooks, tips, cooking clubs. including facts about vegetables and fruits.
Team Nutrition has created "MyPlate eBooks", a set of eight eBooks available to download for free. These eBooks contain drawings to color, audio narration, and a large cast of characters to make nutrition lessons entertaining for children.
According to the FAO, women are often responsible for preparing food and have the chance to educate their children about beneficial food and health habits, giving mothers another chance to improve the nutrition of their children.
Elderly
Malnutrition and being underweight are more common in the elderly than in adults of other ages. If elderly people are healthy and active, the aging process alone does not usually cause malnutrition. However, changes in body composition, organ functions, adequate energy intake and ability to eat or access food are associated with aging, and may contribute to malnutrition. Sadness or depression can play a role, causing changes in appetite, digestion, energy level, weight, and well-being. A study on the relationship between malnutrition and other conditions in the elderly found that malnutrition in the elderly can result from gastrointestinal and endocrine system disorders, loss of taste and smell, decreased appetite and inadequate dietary intake. Poor dental health, ill-fitting dentures, or chewing and swallowing problems can make eating difficult. As a result of these factors, malnutrition is seen to develop more easily in the elderly.
Rates of malnutrition tend to increase with age: less than 10 percent of the "young" elderly (up to age 75) are malnourished, while 30 to 65 percent of the elderly in home care, long-term care facilities, or acute hospitals are malnourished. Many elderly people require assistance with eating, which may contribute to malnutrition. However, the mortality rate due to undernourishment may be reduced; because of this, one of the main requirements of elderly care is to provide an adequate diet and all essential nutrients. Providing sufficient nutrients such as protein and energy helps maintain small but consistent weight gain. Hospital admissions for malnutrition in the United Kingdom have been related to insufficient social care, where vulnerable people at home or in care homes are not helped to eat.
In Australia, malnutrition or risk of malnutrition occurs in 80 percent of elderly people presenting to hospitals for admission. Malnutrition and weight loss can contribute to sarcopenia, with loss of lean body mass and muscle function. Abdominal obesity or weight loss coupled with sarcopenia leads to immobility, skeletal disorders, insulin resistance, hypertension, atherosclerosis, and metabolic disorders. A paper in the Journal of the American Dietetic Association noted that routine nutrition screening is one way to detect and therefore decrease the prevalence of malnutrition in the elderly.
| Biology and health sciences | Health and fitness | null |
23765719 | https://en.wikipedia.org/wiki/Sulfur%20hexafluoride%20circuit%20breaker | Sulfur hexafluoride circuit breaker | Sulfur hexafluoride circuit breakers protect electrical power stations and distribution systems by interrupting electric currents, when tripped by a protective relay. Instead of oil, air, or a vacuum, a sulfur hexafluoride circuit breaker uses sulfur hexafluoride (SF6) gas to cool and quench the arc on opening a circuit. Advantages over other media include lower operating noise and no emission of hot gases, and relatively low maintenance. Developed in the 1950s and onward, SF6 circuit breakers are widely used in electrical grids at transmission voltages up to 800 kV, as generator circuit breakers, and in distribution systems at voltages up to 35 kV.
Sulfur hexafluoride circuit breakers may be used as self-contained apparatus in outdoor air-insulated substations or may be incorporated into gas-insulated switchgear which allows compact installations at high voltages.
Operating principle
Current interruption in a high-voltage circuit breaker is obtained by separating two contacts in a medium, such as sulfur hexafluoride (SF6), having excellent dielectric and arc-quenching properties. After contact separation, current is carried through an arc and is interrupted when this arc is cooled by a gas blast of sufficient intensity.
SF6 gas is electronegative and has a strong tendency to absorb free electrons. The contacts of the breaker are opened in a high-pressure flow of sulfur hexafluoride gas, and an arc is struck between them. The gas captures the conducting free electrons in the arc to form relatively immobile negative ions. This loss of conducting electrons in the arc quickly builds up enough insulation strength to extinguish the arc.
A gas blast applied to the arc must cool it rapidly, reducing the gas temperature between the contacts from 20,000 K to less than 2000 K in a few hundred microseconds so that the gap can withstand the transient recovery voltage applied across the contacts after current interruption. Sulfur hexafluoride is generally used in present-day high-voltage circuit breakers at rated voltages higher than 52 kV.
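For a sense of scale, the cooling requirement quoted above corresponds to a very steep temperature drop over time; the short sketch below uses illustrative round numbers taken from that sentence, with "a few hundred microseconds" assumed to be 300 microseconds.

```python
# Order-of-magnitude cooling rate needed between the contacts after arcing.
T_arc = 20_000      # K, arc column temperature
T_target = 2_000    # K, temperature at which the gap regains dielectric strength
dt = 300e-6         # s, assumed value for "a few hundred microseconds"

cooling_rate = (T_arc - T_target) / dt
print(f"required cooling rate ~ {cooling_rate:.1e} K/s")  # ~6.0e+07 K/s
```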
Into the 1980s, the pressure necessary to blast the arc was generated mostly by gas heating using arc energy. It is now possible to use low-energy spring-loaded mechanisms to drive high-voltage circuit breakers up to 800 kV.
Brief history
High-voltage circuit breakers have changed since they were introduced in the mid-1950s, and several interrupting principles have been developed that have contributed successively to a large reduction of the operating energy. These breakers are available for indoor or outdoor applications, the latter being in the form of breaker poles housed in ceramic insulators mounted on a structure.
The first patents on the use of SF6 as an interrupting medium were filed in Germany in 1938 by Vitaly Grosse (AEG) and independently later in the United States in July 1951 by H. J. Lingal, T. E. Browne and A. P. Strom (Westinghouse).
The first industrial application of SF6 for current interruption dates to 1953. High-voltage 15 kV to 161 kV load switches were developed with a breaking capacity of 600 A. The first high-voltage SF6 circuit breaker, built in 1956 by Westinghouse, could interrupt 5 kA at 115 kV, but it had six interrupting chambers in series per pole.
In 1957, the puffer-type technique was introduced for SF6 circuit breakers, wherein the relative movement of a piston and a cylinder linked to the moving part is used to generate the pressure rise necessary to blast the arc via a nozzle made of insulating material. In this technique, the pressure rise is obtained mainly by gas compression.
The first high-voltage SF6 circuit breaker with a high short-circuit current capability was produced by Westinghouse in 1959. This circuit breaker, housed in a grounded tank (called a dead tank), could interrupt 41.8 kA at 138 kV (10,000 MV·A) and 37.6 kA at 230 kV (15,000 MV·A). This performance was already significant, but the three chambers per pole and the high-pressure source needed for the blast (1.35 MPa) were constraints that had to be avoided in subsequent developments.
The excellent properties of SF6 led to the fast extension of this technique in the 1970s and to its use for the development of circuit breakers with high interrupting capability, up to 800 kV.
The achievement around 1983 of the first single-break 245 kV chamber, and of the corresponding 420 kV to 550 kV and 800 kV designs with respectively 2, 3, and 4 chambers per pole, led to the dominance of SF6 circuit breakers across the complete range of high voltages.
Several characteristics of SF6 circuit breakers can explain their success:
Simplicity of the interrupting chamber which does not need an auxiliary breaking chamber
Autonomy provided by the puffer technique
The possibility to obtain the highest performance, up to 63 kA, with a reduced number of interrupting chambers
Short break time of 2 to 2.5 cycles
High electrical endurance, allowing at least 25 years of operation without reconditioning
Possible compact solutions when used for gas-insulated switchgear or hybrid switchgear
Integrated closing resistors or synchronized operations to reduce switching over-voltages
Reliability and availability
Low noise levels
The reduction in the number of interrupting chambers per pole has led to a considerable simplification of circuit breakers as well as the number of parts and seals required. As a direct consequence, the reliability of circuit breakers improved, as verified later on by International Council on Large Electric Systems (CIGRE) surveys.
Design features
Thermal blast chambers
New types of SF6 breaking chambers, which implement innovative interrupting principles, have been developed over the past 30 years, with the objective of reducing the operating energy of the circuit breaker. One aim of this evolution was to further increase the reliability by reducing the dynamic forces in the pole. Developments since 1980 have seen the use of the self-blast technique of interruption for SF6 interrupting chambers.
These developments have been facilitated by the progress made in digital simulations that were widely used to optimize the geometry of the interrupting chamber and the linkage between the poles and the mechanism.
This technique has proved to be very efficient and has been widely applied for high-voltage circuit breakers up to 550 kV. It has allowed the development of new ranges of circuit breakers operated by low-energy spring-operated mechanisms.
The reduction of operating energy was mainly achieved by lowering the energy used for gas compression and by making increased use of arc energy to produce the pressure necessary to quench the arc and obtain current interruption. Low-current interruption, up to about 30% of the rated short-circuit current, is obtained by a puffer blast.
Self-blast chambers
Further development in the thermal blast technique was made by the introduction of a valve between the expansion and compression volumes. When interrupting low currents, the valve opens under the effect of the overpressure generated in the compression volume, and the arc is blown out, as in a puffer circuit breaker, by the compression of the gas obtained through the piston action. In the case of high-current interruption, the arc energy produces a high overpressure in the expansion volume, which leads to the closure of the valve, isolating the expansion volume from the compression volume. The overpressure necessary for breaking is then obtained by the optimal use of the thermal effect and of the nozzle-clogging effect produced whenever the cross-section of the arc significantly reduces the exhaust of gas through the nozzle. In order to avoid excessive energy consumption by gas compression, a valve is fitted on the piston to limit the overpressure in the compression volume to the value necessary for the interruption of low short-circuit currents.
This technique, known as "self-blast", has been used extensively since 1980 for the development of many types of interrupting chambers. The increased understanding of arc interruption obtained through digital simulations and validated through breaking tests contributes to the higher reliability of these self-blast circuit breakers. In addition, the reduction in operating energy allowed by the self-blast technique leads to longer service life.
Double motion of contacts
An important decrease in operating energy can also be obtained by reducing the kinetic energy consumed during the tripping operation. One way is to displace the two arcing contacts in opposite directions so that the arc speed is half that of a conventional layout with a single mobile contact.
The thermal and self-blast principles have enabled the use of low-energy spring mechanisms for the operation of high-voltage circuit breakers. They progressively replaced the puffer technique in the 1980s; first in 72.5 kV breakers, and then from 145 kV to 800 kV.
Comparison of single motion and double motion techniques
The double motion technique halves the tripping speed of the moving part. In principle, the kinetic energy could be quartered if the total moving mass were not increased. However, as the total moving mass is increased, the practical reduction in kinetic energy is closer to 60%. The total tripping energy also includes the compression energy, which is almost the same for both techniques. Thus, the reduction of the total tripping energy is lower, about 30%, although the exact value depends on the application and the operating mechanism. Depending on the specific case, either the double motion or the single motion technique can be cheaper. Other considerations, such as rationalization of the circuit breaker range, can also influence the cost.
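A back-of-the-envelope calculation illustrates the trade-off described above; the masses and speeds below are made-up round numbers chosen only to reproduce the roughly 60% kinetic-energy reduction quoted in the text, not data for any real breaker.

```python
# Single-motion vs double-motion tripping: kinetic energy comparison.
def kinetic_energy(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

# Single motion: one contact assembly moving at full speed v.
ke_single = kinetic_energy(mass_kg=10.0, speed_m_s=8.0)

# Double motion: both contacts move at v/2, but the total moving mass is
# assumed to be ~1.6x larger because a second assembly must be driven.
ke_double = 2 * kinetic_energy(mass_kg=8.0, speed_m_s=4.0)

print(f"kinetic energy reduction ~ {1 - ke_double / ke_single:.0%}")  # ~60%
```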
Thermal blast chamber with arc-assisted opening
In this interruption principle arc energy is used, on the one hand to generate the blast by thermal expansion and, on the other hand, to accelerate the moving part of the circuit breaker when interrupting high currents. The overpressure produced by the arc energy downstream of the interruption zone is applied on an auxiliary piston linked with the moving part. The resulting force accelerates the moving part, thus increasing the energy available for tripping. With this interrupting principle it is possible, during high-current interruptions, to increase by about 30% the tripping energy delivered by the operating mechanism and to maintain the opening speed independently of the current. It is obviously better suited to circuit breakers with high breaking currents, such as generator circuit breakers.
Generator circuit breakers
Generator circuit breakers (GCB) are connected between a generator and the step-up voltage transformer. They are generally used at the outlet of high-power generators (30 MVA to 1800 MVA) in order to protect them in a reliable, fast and economic manner. Such circuit breakers have high carrying current rating (4 kA to 40 kA), and have a high breaking capacity (50 kA to 275 kA).
They belong to the medium-voltage range, but the transient recovery voltage withstand capability required by IEC/IEEE 62271-37-013 is such that specifically developed interrupting principles must be used. A particular embodiment of the thermal blast technique has been developed and applied to generator circuit breakers. The self-blast technique described above is also widely used in SF6 generator circuit breakers, in which the contact system is driven by a low-energy, spring-operated mechanism; such a circuit breaker can, for example, be rated for 17.5 kV and 63 kA.
High-power testing
The short-circuit interrupting capability of high-voltage circuit breakers is such that it cannot be demonstrated with a single source able to generate the necessary power. A special scheme is therefore used, in which a generator provides the short-circuit current until current interruption, after which a voltage source applies the recovery voltage across the terminals of the circuit breaker. Tests are usually performed single-phase, but can also be performed three-phase.
Issues related to SF6 circuit breakers
The following issues are associated with SF6 circuit breakers:
Toxic lower-order gases
When an arc occurs in SF6 gas, it produces small quantities of other gases. Some of these byproducts are toxic and can irritate the eyes and respiratory system. One byproduct, disulfur decafluoride (S2F10), has a toxicity similar to phosgene but does not produce lacrimation or skin irritation, thus providing little warning of exposure. Therefore, used SF6 must be handled as a hazardous substance whenever an interrupter is opened for maintenance or disposal.
Oxygen displacement
SF6 is heavier than air, so care must be taken when entering low places and confined spaces due to the risk of suffocation due to oxygen displacement.
Greenhouse gas
SF6 is the most potent greenhouse gas that the Intergovernmental Panel on Climate Change has evaluated, with a global warming potential 23,900 times that of CO2.
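To put that figure in perspective, the sketch below converts a hypothetical SF6 leak into CO2-equivalent emissions using the global warming potential quoted above; the leak size is an assumed example, not a measured value.

```python
# CO2-equivalent of an SF6 leak, using the 100-year GWP quoted above.
GWP_SF6 = 23_900

def co2_equivalent_tonnes(sf6_leak_kg):
    return sf6_leak_kg * GWP_SF6 / 1000.0

# Hypothetical example: 0.5 kg of SF6 leaking from one breaker per year
print(f"{co2_equivalent_tonnes(0.5):.1f} t CO2e")  # roughly 12 tonnes CO2-equivalent
```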
Some governments have implemented systems to monitor and control the emission of SF6 to the atmosphere.
Comparison with other types
Circuit breakers are usually classified by their insulating medium. The following types of circuit breakers may be alternatives to SF6 types.
air-blast
oil
vacuum
CO2
Compared with air-blast breakers, operation with SF6 is quieter and no hot gases are discharged in normal operation. No compressed-air plant is required to maintain blast air pressure. The higher dielectric strength of the gas allows more compact design or a larger interrupting rating for the same relative size as air-blast circuit breakers. This also has the desirable effect of minimizing size and weight of the circuit breakers, making foundations and installation less costly. Operating mechanisms are simpler, and less maintenance is required, generally with more mechanical operations allowed between inspections or maintenance. However, checking or replacing the SF6 gas requires special equipment and training to prevent accidental emissions. At very low outdoor temperatures, unlike air, SF6 gas can liquefy, reducing the ability of the circuit breaker to interrupt fault currents.
Oil-filled breakers contain some volume of mineral oil. A minimum-oil breaker may contain on the order of hundreds of litres of oil at transmission voltages; a dead-tank bulk oil-filled circuit breaker may contain tens of thousands of litres of oil. If this is discharged from the circuit breaker during a failure, it will be a fire hazard. Oil is also toxic to water systems and leakages must be carefully contained.
Vacuum circuit breakers have limited availability and are not made for transmission voltages, unlike SF6 breakers available up to 800 kV.
| Technology | Electrical protective devices | null |
23768607 | https://en.wikipedia.org/wiki/51%20Pegasi%20b | 51 Pegasi b | 51 Pegasi b, officially named Dimidium , is an extrasolar planet approximately away in the constellation of Pegasus. It was the first exoplanet to be discovered orbiting a main-sequence star, the Sun-like 51 Pegasi, and marked a breakthrough in astronomical research. It is the prototype for a class of planets called hot Jupiters.
In 2017, traces of water were discovered in the planet's atmosphere. In 2019, the Nobel Prize in Physics was awarded in part for the discovery of 51 Pegasi b.
Name
51 Pegasi is the Flamsteed designation of the host star. The planet was originally designated 51 Pegasi b by Michel Mayor and Didier Queloz, who discovered the planet in 1995. The following year it was unofficially dubbed "Bellerophon" by astronomer Geoffrey Marcy, who followed the convention of naming planets after Greek and Roman mythological figures (Bellerophon is a figure from Greek mythology who rode the winged horse Pegasus).
In July 2014, the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning name for this planet was Dimidium. The name was submitted by the , Switzerland. 'Dimidium' is Latin for 'half', referring to the planet's mass of approximately half the mass of Jupiter.
Discovery
The exoplanet's discovery was announced on October 6, 1995, by Michel Mayor and Didier Queloz of the University of Geneva in the journal Nature. They used the radial velocity method with the ELODIE spectrograph on the Observatoire de Haute-Provence telescope in France and made world headlines with their announcement. For this discovery, they were awarded the 2019 Nobel Prize in Physics.
The planet was discovered using a sensitive spectroscope that could detect the slight and regular velocity changes in the star's spectral lines of around 70 metres per second. These changes are caused by the planet's gravitational effects from just 7 million kilometres' distance from the star.
Within a week of the announcement, the planet was confirmed by another team using the Lick Observatory in California.
Physical characteristics
After its discovery, many teams confirmed the planet's existence and obtained more observations of its properties. It was discovered that the planet orbits its star in around four days. It is much closer to its star than Mercury is to the Sun, moves at an orbital speed of , yet has a minimum mass about half that of Jupiter (about 150 times that of the Earth). At the time, the presence of a huge world so close to its star was not compatible with theories of planet formation and was considered an anomaly. However, since then, numerous other "hot Jupiters" have been discovered (such as 55 Cancri and τ Boötis), and astronomers are revising their theories of planet formation to account for them by studying orbital migration.
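The orbital figures above follow from Kepler's third law. The sketch below assumes a roughly Sun-like host mass (about 1.05 solar masses) and an orbital period of about 4.23 days, values consistent with but not quoted in the text, and recovers a star-planet separation of roughly 8 million kilometres and an orbital speed of order 100 km/s.

```python
import math

# Kepler's third law: a^3 = G * M_star * P^2 / (4 * pi^2)
G = 6.674e-11           # m^3 kg^-1 s^-2
M_SUN = 1.989e30        # kg
M_star = 1.05 * M_SUN   # assumed: 51 Pegasi is roughly Sun-like
P = 4.23 * 86_400       # assumed orbital period (~4.23 days), in seconds

a = (G * M_star * P**2 / (4 * math.pi**2)) ** (1 / 3)
v = 2 * math.pi * a / P  # mean orbital speed for a near-circular orbit

print(f"star-planet separation ~ {a / 1e9:.1f} million km")  # ~7.8 million km
print(f"orbital speed ~ {v / 1e3:.0f} km/s")                 # ~134 km/s
```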
Assuming the planet is perfectly grey with no greenhouse or tidal effects, and a Bond albedo of 0.1, the temperature would be . This is between the predicted temperatures of HD 189733 b and HD 209458 b (–), before they were measured.
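A minimal sketch of the grey-body equilibrium-temperature estimate referred to above, with assumed approximate parameters for the host star (the effective temperature and radius are not given in the text); it shows how the Bond albedo of 0.1 enters the calculation and yields a value of order 1300 K with these inputs.

```python
import math

# Grey-body equilibrium temperature, ignoring greenhouse and tidal effects:
#   T_eq = T_star * sqrt(R_star / (2 * a)) * (1 - A) ** 0.25
T_star = 5770.0            # K, assumed effective temperature of 51 Pegasi
R_star = 1.27 * 6.957e8    # m, assumed stellar radius (~1.27 solar radii)
a = 7.8e9                  # m, star-planet separation (see previous sketch)
A = 0.1                    # Bond albedo assumed in the text

T_eq = T_star * math.sqrt(R_star / (2 * a)) * (1 - A) ** 0.25
print(f"equilibrium temperature ~ {T_eq:.0f} K")  # roughly 1340 K with these inputs
```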
In the report of the discovery, it was initially speculated that 51 Pegasi b was the stripped core of a brown dwarf and was therefore composed of heavy elements, but it is now believed to be a gas giant. It is sufficiently massive that its thick atmosphere is not blown away by the star's stellar wind.
51 Pegasi b probably has a greater radius than that of Jupiter despite its lower mass. This is because its superheated atmosphere must be puffed up into a thick but tenuous layer surrounding it. Beneath this, the gases that make up the planet would be so hot that the planet would glow red. Clouds of silicates may exist in the atmosphere.
The planet is tidally locked to its star, always presenting the same face to it.
The planet (along with Upsilon Andromedae b) was deemed a candidate for aperture polarimetry by Planetpol. It is also a candidate for "near-infrared characterisation ... with the VLTI Spectro-Imager".
Claims of direct detection of visible light
A 2015 study alleged the detection of 51 Pegasi b in the visible light spectrum using the High Accuracy Radial velocity Planet Searcher (HARPS) instrument at the European Southern Observatory's La Silla Observatory in Chile. This detection, if confirmed, would allow the inference of a true mass of 0.46 Jupiter masses. The findings also could suggest a high albedo for the planet, hence a large radius up to Jupiter radii, which could suggest 51 Pegasi b is an inflated hot Jupiter. The optical detection could not be replicated in 2020, implying the planet has an albedo below 0.15. Measurements in 2021 have marginally detected a polarized reflected light signal, which, while they cannot place limits on the albedo without assumptions made about the scattering mechanisms, could suggest a high albedo.
More recent studies found no evidence of reflected light, ruling out the previous radii and albedo estimates from previous studies. Instead, 51 Pegasi b is likely a low-albedo planet with a radius around .
| Physical sciences | Notable exoplanets | Astronomy |
12822286 | https://en.wikipedia.org/wiki/Conflict%20minerals%20law | Conflict minerals law | The eastern Democratic Republic of the Congo (DRC) has a history of conflict, where various armies, rebel groups, and outside actors have profited from mining while contributing to violence and exploitation during wars in the region. The four main end products of mining in the eastern DRC are tin, tungsten, tantalum, and gold, which are extracted and passed through a variety of intermediaries before being sold to international markets. These four products, (known as the 3TGs) are essential in the manufacture of a variety of devices, including consumer electronics such as smartphones, tablets, and computers.
Some have identified the conflict as significantly motivated by control over resources. In response, several countries and organizations, including the United States, European Union, and OECD have designated 3TG minerals connected to conflict in the DRC as conflict minerals and legally require companies to report trade or use of conflict minerals as a way to reduce incentives for armed groups to extract and fight over the minerals.
In the United States, the 2010 Dodd–Frank Wall Street Reform and Consumer Protection Act required manufacturers to audit their supply chains and report use of conflict minerals. In 2015, a US federal appeals court struck down some aspects of the reporting requirements as a violation of corporations’ freedom of speech, but left others in place.
Democratic Republic of the Congo
The history of extraction in the Congo began in 1885 following the Berlin West Africa Conference, as King Leopold II of Belgium forcibly dispossessed Congolese kings of their land through invalid treaties and established rubber plantations through the use of military violence. The country was characterized by extraordinarily violent treatment of natives, including mass killings, sex crimes, and torture for not meeting quotas because exploitation of resources was the first priority to secure profits for the Belgian colonial empire. In 1908 control was transferred from Leopold II to the Belgian colonial administration, although exploitation of resources remained key to economic growth. Racism, political subjugation, and forced labor remained prevalent and helped enforce a power dynamic to ensure continued economic production.
Independence movements, including the Alliance des Bakongo (ABAKO) and the Mouvement National Congolais (MNC), gained traction in the late 1950s, both supported partly by a strong nationalist wave. Clashes between the Belgian security forces, the Force Publique, and nationalist protestors in Léopoldville on January 4, 1959, triggered events leading to the Congo being granted independence in June of 1960. The execution of MNC leader Patrice Lumumba in January 1961 by CIA-backed coup leader Joseph Mobutu showed the violence experienced by those attempting to unify the new political landscape.
Continued conflicting intervention from the UN, USA, Soviets, Chinese, Belgians, and others left the Congo politically unstable without a representational government and lacking basic social services. The legacy of this instability simultaneously leaves new governments vulnerable to conflict by militia groups and unable to exercise sufficient oversight of their territory, enabling contemporary mineral conflicts. Armed conflict and mineral resource looting by the Congolese National Army and various armed rebel groups, including the Democratic Forces for the Liberation of Rwanda (FDLR) and the National Congress for the Defense of the People (CNDP), a proxy Rwandan militia group, has occurred throughout the late 20th century and the early 21st century. As of 2020, an estimated 113 armed groups were operating in the Kivu region, ranging from small militias to sophisticated groups with international support. Such armed groups have continued to commit severe human rights abuses, and battles, fatalities, and attacks on civilians have increased steadily from 2017 to 2021.
Extraction of the Congo's natural resources also occurs across its immediate borders. During the First Congo War (1996–1997) and Second Congo War (1998–2003), Rwanda, Uganda and Burundi particularly profited from the Congo's resources. These governments continue to smuggle resources out of the Congo to this day. Minerals mined in Eastern Congo pass through the hands of numerous middlemen as they are shipped out of Congo through neighboring countries such as Rwanda or Burundi, to East Asian processing plants. Because of this, the US Conflict Minerals Law applies to materials originating (or claimed to originate) from the DRC as well as the nine adjoining countries: Angola, Burundi, Central African Republic, Republic of Congo, Rwanda, South Sudan, Tanzania, Uganda, and Zambia. The profits from the sale of these minerals have financed fighting in derivative ongoing conflicts. Control of lucrative mines has also become a military objective.
As of 2024, the Congo contains an estimated $24 trillion in raw mineral deposits, making it the world's richest country measured by wealth of natural resources. The international market for critical minerals driven by the clean energy transition grew from $160bn to $320bn from 2017-2022, similarly increasing demand in the Congo for minerals including cobalt, copper, and lithium. Gold and diamonds continue to finance regional conflict due to their high value to weight ratio, and no jewellery industry standard exists for verifying gold origination as it does for diamonds (though jewellers' total outlay on gold is five times that on diamonds). Other conflict minerals being illicitly exported from the Congo include tungsten, tin, cassiterite, and coltan (which provides the tantalum for mobile phones, and is also said to be directly sustaining the conflict). Scholar Siddharth Kara explains that similar to the slavery-for-rubber economy that contributed to the human rights abuses of the early Congo, a blood-for-cobalt economy continues to dehumanize Congolese people and is responsible for continuing conflict.
A major research report from November 2012 by the Southern Africa Resource Watch revealed that gold miners in the east of the Democratic Republic of Congo were being exploited by corrupt government officials, bureaucrats and security personnel, who all demand illegal taxes, fees and levies from the miners without delivering any services in return. Despite the alleged gold rush in various regions of the country, neither the local population nor the workforce is benefiting from this highly lucrative industry.
While corruption due to weak governance is an issue, companies contributing to the economic pressure and facilitating government corruption are underlying factors. These companies include well-known brands such as Apple, Google, and Samsung, which profit from the relatively lower prices associated with corruption and illicit trade. Filings of Reasonable Country-of-Origin Inquiries (RCOI) to determine the sources of conflict minerals decreased from 2014 to 2021, a concern especially as demand has increased. The filings also show that companies are increasingly able to determine the source country (rather than being unable to do so), while the percentage of minerals sourced from covered countries has increased. These companies have continued to grow in part due to unethical extraction in covered countries, and enforcing corporate accountability is difficult because of international legal processes and the weak governance mentioned above.
Mines
Mines in eastern Congo are often located far from populated areas, in remote and dangerous regions. A recent International Peace Information Service (IPIS) study indicates that armed groups are present at more than 50% of mining sites. At many sites, armed groups illegally tax, extort, and coerce civilians to work. Miners, including children, work up to 48-hour shifts amidst mudslides and tunnel collapses that kill many. These groups are sometimes, but not always, affiliated with rebel groups or with the Congolese National Army, and both use rape and violence to control the local population. While these groups are the direct perpetrators of the violence, international power struggles, mainly between the US and China, ensure that demand for conflict minerals remains high regardless of known human rights abuses. Demand for conflict minerals used in military technology has also become a national defense issue, especially amid rising US-China tensions. The electronics and clean energy industries are the biggest cobalt consumers globally. Yet of the 16 multinational consumer brands listed in a 2016 Amnesty International report, none traced their cobalt supply chain. This chain starts with undocumented traders selling cobalt, often mined using child labor, to the Chinese-owned Congo Dongfang Mining company, which then supplies battery manufacturers, including CATL, LG, and Glencore. These battery manufacturers supply international electronics brands such as Apple, Microsoft, and Tesla, which claim "conflict-free" products despite widespread and documented human rights abuse cases.
United States law
Conflict minerals
The four conflict minerals codified in the U.S. Conflict Minerals Law are:
Columbite-tantalite (or coltan, the colloquial African term) is the metal ore from which the element tantalum is extracted. Tantalum is used primarily for the production of tantalum capacitors, particularly for applications requiring high performance, a small compact format and high reliability, from hearing aids and pacemakers, to airbags, GPS, ignition systems and anti-lock braking systems in automobiles, through to laptop computers, mobile phones, video game consoles, video cameras and digital cameras. In its carbide form, tantalum possesses significant hardness and wear resistance properties. As a result, it is used in jet engine/turbine blades, drill bits, end mills and other tools.
Cassiterite is the chief ore needed to produce tin, essential for the production of tin cans and the solder on the circuit boards of electronic equipment. Tin is also commonly a component of biocides and fungicides, and, as tetrabutyl tin/tetraoctyl tin, serves as an intermediate in polyvinyl chloride (PVC) and high-performance paint manufacturing.
Wolframite is an important source of the element tungsten. Tungsten is a very dense metal and is frequently used for this property, such as in fishing weights, dart tips and golf club heads. Like tantalum carbide, tungsten carbide possesses hardness and wear resistance properties and is frequently used in applications like metalworking tools, drill bits and milling. Smaller amounts are used to substitute lead in "green ammunition". Minimal amounts are used in electronic devices, including the vibration mechanism of cell phones.
Gold is used in jewelry, investments, electronics, and dental products. It is also present in some chemical compounds used in certain semiconductor manufacturing processes.
These are sometimes termed "the 3T's and gold", "3TG", or even simply the "3T's", referring to the elements of interest they contain (tantalum, tin, tungsten, gold). Under the US Conflict Minerals Law, additional minerals may be added to this list in the future.
There has been a push in recent years to consider cobalt an additional conflict mineral, as the Congo has accounted for about 70% of global production since 2019. Cobalt demand also increased 70% from 2017 to 2022, driven by lithium-ion battery demand, and “the Enough Project estimates that 60 percent of that production [in the Congo] comes from illegal mines.”
History
In April 2009, Senator Sam Brownback (R-KS) introduced the Congo Conflict Minerals Act of 2009 to require electronics companies to verify and disclose their sources of cassiterite, wolframite, and tantalum. This legislation died in committee. However, Brownback added similar language as Section 1502 of the Dodd–Frank Wall Street Reform and Consumer Protection Act, which passed Congress and was signed into law by President Barack Obama on July 21, 2010. This Conflict Minerals Law was published in the Federal Register on December 23, 2010.
The U.S. Securities and Exchange Commission (SEC) draft regulations to implement the law would have required U.S. and certain foreign companies to report and make public their use of so-called "conflict minerals" from the Democratic Republic of the Congo or adjoining countries in their products. Comments on this proposal were extended until March 2, 2011. The comments on the proposal were reviewable by the public.
One report on the proposal stated the following statistics for the submitted comments:
Slightly more than 700 comment letters were submitted to SEC on the proposal;
Approximately 65% of those were form letters or basic letters from the general public supporting the rule's intent;
The remaining 35% (roughly 270) represent views of businesses, trade/industry associations, the investment/financial community, professional auditing firms, and other relevant governmental entities; and
Of those 270 comments, an estimated 200 contained substantive and/or technical comments.
That report also contained what it calls a "preview of the final SEC regulations", synthesized from detailed research and analysis of a large body of documents, reports and other information on the law, the proposed regulation, and the budget/political setting facing the SEC under the current administration.
The final rule went into effect 13 November 2012.
The SEC rule did not go unnoticed by the international community, including entities seeking to undermine traceability efforts. A report published by a metals trading publication illustrated one DRC ore/mineral flow method that has apparently been devised to thwart detection.
On July 15, 2011, the US State Department issued a statement on the subject. Section 1502(c) of the Law mandates that the State Department work in conjunction with SEC on certain elements of conflict minerals policy development and support.
On October 23, 2012, U.S. State Department officials asserted that it ultimately falls to the State Department to determine when this rule would no longer apply.
In April 2014, the United States Court of Appeals for the District of Columbia Circuit struck down parts of Section 13(p) and Rule 13(p)-1 of the SEC Rules, deeming the requirement that companies describe their products as not "DRC conflict free" to be in violation of the First Amendment. The Court noted, however, that there was no “First Amendment objection to any other aspect of the conflict minerals report or required disclosures.”
Auditing and reporting requirements
US Conflict Minerals Law contains two requirements that are closely connected:
independent third party supply chain traceability audits
reporting of audit information to the public and SEC.
Even companies not directly regulated by the SEC will be affected by the audit requirements, because those requirements will be pushed down through entire supply chains, including to privately held and foreign-owned companies.
SEC estimated that 1,199 "issuers" (i.e., companies subject to filing other SEC reports) will be required to submit full conflict minerals reports. This estimate was developed by comparing the amount of tantalum produced by the DRC with global production (15%–20%). The Commission selected the higher figure of 20% and multiplied it by 6,000 (the total number of "issuers" the SEC expects will be required to perform initial product/process evaluations). This estimate does not account for the companies that supply materials to the "issuers" (but are not themselves SEC-regulated) and that will almost certainly be required to conduct conflict minerals audits to meet the demands of those customers. Other estimates indicate that the total number of US companies likely affected may exceed 12,000.
A study of the potential impact of the regulation, conducted in early 2011 by the IPC – Association Connecting Electronic Industries trade association, was submitted with the association's comments to the SEC. The study states that the IPC survey respondents had a median of 163 direct suppliers. Applying that number to the SEC's estimated number of impacted issuers suggests that over 195,000 businesses could be subject to some level of supply chain traceability effort.
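As a rough, back-of-the-envelope illustration of how these published figures combine (the calculation below simply multiplies the numbers quoted above and is not a reproduction of the SEC's or IPC's actual methodology):

```python
# Back-of-the-envelope arithmetic using only the figures quoted above;
# an illustration, not the SEC's or IPC's official calculation.

total_issuers_screened = 6_000     # issuers the SEC expects to perform initial evaluations
drc_tantalum_share = 0.20          # upper bound of the DRC's share of global tantalum output
median_direct_suppliers = 163      # median direct suppliers per respondent (IPC survey)

issuers_filing_full_reports = total_issuers_screened * drc_tantalum_share
print(issuers_filing_full_reports)       # 1200.0, close to the SEC's published estimate of 1,199

# Each reporting issuer may push traceability demands down to its direct suppliers.
potentially_affected_businesses = 1_199 * median_direct_suppliers
print(potentially_affected_businesses)   # 195437, i.e. "over 195,000 businesses"
```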
Applicability in general
Under the law, companies have to submit an annual conflict minerals report to the SEC if:
(a) they are required to file reports with the SEC under the Exchange Act of 1934
(b) conflict minerals are necessary to the functionality or production of a product that they manufacture or contract to be manufactured. That statement contains two separate but critical concepts: the purpose of the conflict mineral in the product/process, and the control that the company exerts over the manufacturing process/specifications.
A company would be deemed to contract an item to be manufactured if it:
Exerts any influence over the manufacturing process; or,
Offers a generic product under its own brand name or a separate brand name (regardless of whether the company has any influence over the manufacturing process), provided the company contracted to have the product manufactured specifically for itself.
This language implied that some retailers who are not manufacturers might be subject to the audit and disclosure requirements.
"Contracting to manufacture" a product requires some actual influence over the manufacturing of process that product, a determination based on facts and circumstances. A company is not to be deemed to have influence over the manufacturing process if it merely:
Affixes its brand, marks, logo, or label to a generic product manufactured by a third party.
Services, maintains, or repairs a product manufactured by a third party.
Specifies or negotiates contractual terms with a manufacturer that do not directly relate to the manufacturing of the product.
The proposed regulations attempted to clarify that tools used in assembly and manufacturing will not trigger the law. The intent was to cover minerals/metals in the final product only. Nothing specifically addresses intermediate chemical processes that use chemicals that contain conflict minerals. Additionally, neither the law nor the proposed regulation established a de minimis quantity or other form of materiality threshold that would preclude the applicability of the auditing/reporting requirements.
Supply chain traceability auditing
The law mandates the use of an "independent private sector auditor" to conduct the audits. SEC has proposed two different standards for the audits: the "reasonable inquiry" and the "due diligence". Should the final rule include this structure, the reasonable inquiry would be the first step to determine if the company can on its own, using reasonable efforts and trustworthy information, make a reliable determination as to the source/origin of its tin, tantalum, tungsten and/or gold. Where companies are unable to make such a determination for any reason, they would then be required to take the next step of the "due diligence", which is the independent private sector audit.
The statute specified that the audits be "conducted in accordance with standards established by the Comptroller General of the United States, in accordance with rules promulgated by the Commission". This means that the same auditing standards that apply to other SEC auditing requirements will apply to conflict minerals audits. Because of this language, SEC will have little discretion to allow companies to issue self-generated statements or certifications to satisfy the law.
Third party audits for conflict minerals supply chain traceability began in summer 2010 under the Electronic Industry Citizenship Coalition (EICC), a US-based electronics manufacturing trade association. Under this program, EICC selected three audit firms to conduct the actual audits, with two of the three participating in the pilot audits in 2010. After concluding the pilot, one of the two firms involved in 2010 withdrew from the program specifically in response to the SEC's proposal and to reduce potential legal risks to the audited entities.
Neither the law nor the proposed regulations provide guidance on what will be considered an acceptable audit scope or process, preferring to allow companies the flexibility to meet the requirement in a manner that is responsive to their own individual business and supply chain. At the same time, the law contains a provision that preserves the government's right to deem any report, audit or other due diligence process unreliable, in which case the report shall not satisfy the requirements of the regulations, further emphasizing the need for such audits to conform to established SEC auditing standards. Comments on the proposed regulation pointed out that, should SEC not specify an applicable audit standard, it cannot be silent or ambiguous on the auditor standards either, or the Commission will violate the plain language of the Law mandating "standards established by the Comptroller General of the United States". It is generally expected that SEC will provide specificity on both the audit standard and the auditor standard. SEC's proposal attempted to clarify its position on auditor requirements.
The Organisation for Economic Co-operation and Development (OECD) published its Guidance on conflict minerals supply chain traceability. This guidance is gaining much momentum as "the" standard within US policy. However, a recent critical analysis of the standard in comparison to existing US auditing standards under SEC highlighted a number of significant inconsistencies and conflict with relevant US standards. Companies subject to the US law who implement the OECD Guidance without regard for the SEC auditing standards may face legal compliance risks.
Reporting and disclosure
Companies subject to the SEC reporting requirement would be required to disclose whether the minerals used in their products originated in the DRC or adjoining countries (as defined above). The law mandates that this reporting be submitted/made available annually. Many comments to the proposed regulation asked SEC to clarify whether the report must be "furnished"—meaning it is made available to SEC but not directly incorporated within the company's formal financial report—or "submitted"—meaning the report is directly incorporated into the financial report. At first glance, this may appear to be a minor point; however, this difference is very important in determining the audit/auditor standards and related liabilities.
If it is determined that none of the minerals originated in the DRC or adjoining countries, the report must include a statement to that effect and provide an explanation of the country of origin analysis that was used to arrive at the ultimate conclusion. On the other hand, if conflict minerals originating in the DRC or adjoining countries were used (or if it is not possible to determine the country of origin of the conflict minerals used), companies would be required to state as such in the annual report. In either case, companies would also be required to make this information public by posting their annual conflict minerals report on their websites, and providing the SEC with the internet addresses where the reports may be found. Further, the proposed regulations would require companies to maintain records relating to the country of origin of conflict minerals used in their products.
Media outlets have reported that many companies required to file Specialized Disclosure Reports with the U.S. Securities and Exchange Commission (SEC), along with any necessary conflict minerals reports for 2013 under the SEC's conflict minerals rule, struggled to meet the June 2, 2014 filing deadline. Many affected companies had hoped for clarification of the filing requirements from the United States Court of Appeals for the District of Columbia Circuit in a lawsuit filed by the National Association of Manufacturers. The appellate court's ruling left the conflict minerals reporting requirements largely intact, and it has been suggested that affected companies review the response of the SEC's Division of Corporation Finance to the court's ruling, which provides guidance on its effect.
On August 18, 2015, a divided D.C. Circuit Court again held that the SEC's conflict minerals rule violates the First Amendment. Senior Circuit Judge A. Raymond Randolph, joined by Senior Circuit Judge David B. Sentelle, weighed whether the required disclosures were effective and uncontroversial (Recent Cases - D.C. Circuit Limits Compelled Commercial Disclosures to Voluntary Advertising, 129 Harv. L. Rev. 819 (2016), http://cdn.harvardlawreview.org/wp-content/uploads/2016/01/819-826-Online.pdf). Citing news reports and a Congressional hearing, the court decided the policy was ineffective. The court next found the required label controversial because it "is a metaphor that conveys moral responsibility for the Congo war". As such, the court struck down the conflict minerals rule's disclosure requirements as a violation of corporations' freedom of speech. Circuit Judge Sri Srinivasan dissented, writing that the required disclosures were not controversial because they were truthful.
Criticism of the law
The law has been criticised for not addressing the root causes of the conflict. It leaves to the Congolese government the responsibility of providing an environment in which companies can practice due diligence and legitimately purchase the minerals they need, when in reality the mechanisms for such transparency do not exist. The effect has been to halt legitimate mining ventures that provided livelihoods for people, reducing the Congo's legal exports of tantalum by 90%.
An investigation by the U.S. Government Accountability Office (GAO) found that most companies were unable to determine the source of their conflict minerals.
Technology manufacturers criticized the law's requirement to label a product as not "DRC Conflict Free", calling it compelled speech in violation of the First Amendment.
Conflict Minerals Regulation in the EU
Like the US, the EU wanted to stabilise and guarantee the steady supply of 3TG. On 16 June 2016 the European Parliament confirmed that "mandatory due diligence" would be required for "all but the smallest EU firms importing tin, tungsten, tantalum, gold and their ores".
On May 17, 2017, the EU passed Regulation (EU) 2017/821 of the European Parliament and of the Council on supply chain due diligence obligations for importers of tin, tantalum, tungsten, their ores, and gold from conflict-affected and high-risk areas. The regulation took effect in January 2021 and directly applies to certain companies that import mineral ores, concentrates and processed metals containing or consisting of 3TG into the EU from conflict-affected or high-risk areas.
On August 10, 2018, the European Commission published its non-binding guidelines for the identification of conflict-affected and high-risk areas and other supply chain risks under Regulation (EU) 2017/821 of the European Parliament and of the Council.
Conflict resources in supply chains
Increases in business process outsourcing to globally dispersed production facilities mean that social problems and human rights violations are no longer only an internal organizational matter; they also frequently occur in companies' supply chains and challenge supply chain managers.
Consequently, firms that are located downstream in the supply chain and that are more visible to stakeholders are particularly threatened by social supply chain problems. The recent debate concerning conflict minerals illustrates the importance of social and human rights issues in supply chain management practice as well as the emerging need to react to social conflicts.
Rapid developments continue to be made in clean energy technology, including solar PV, energy storage systems, and batteries, especially in the electric car market. Mining of the critical minerals required for these technologies has increased along with demand, which can drive conflict through supply chains in source countries. This contributes to increasing environmental degradation, especially of water resources, as poorly treated or untreated mine effluents destroy aquatic ecosystems and render ground and surface water unsafe for consumption. This degradation increases reliance on mining jobs for survival as food chains and land are destroyed, and it incentivizes violence-for-profit mechanisms. Addressing these issues requires greater transparency in supply chains. The Responsible Business Alliance code of conduct, the largest industry coalition related to conflict minerals in supply chains, states that “falsification of records or misrepresentation of conditions or practices in the supply chain are unacceptable."
Initiatives like the Dodd–Frank Wall Street Reform and Consumer Protection Act or the OECD Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High-Risk Areas demand that supply chain managers verify purchased goods as ‘‘conflict-free’’ or implement measures to better manage any inability to do so.
Firms have begun to apply governance mechanisms to avoid the adverse effects of conflict mineral sourcing. However, merely transferring responsibility upstream in the supply chain will apparently not stop the trade in conflict minerals, for two main reasons:
On the one hand, globalization has created governance gaps in the sense that companies are able to abuse human rights without being sanctioned by independent third parties. This gap results in a non-allocation of responsibility that makes the problem of human rights abuses and social conflicts within dispersed supply chains very likely to endure, particularly without collaborative approaches to remedy these deficiencies.
On the other hand, conflict minerals usually originate from globally diverse deposits and are difficult to track within components and manufactured products. This is the case because they are mixed with minerals of different origin and added to metal alloys. Consequently, although the share of these minerals in single end products may be negligible, they are prevalent in numerous products and commodities. Together, these circumstances leave downstream firms nearly incapable of detecting risks associated with conflict minerals. Hence, the topic of conflict minerals becomes one of supply chain management rather than of individual companies' legal or compliance divisions alone. What is needed are effective, supply-chain-wide mechanisms of traceability and due diligence that allow firms to take individual and collective responsibility as parts of supply chains.
In the context of mineral supply chains, due diligence represents a holistic concept that aims at providing a chain of custody tracking from mine to export at country level, regional tracking of mineral flows through the creation of a database on their purchases, independent audits on all actors in the supply chain, and a monitoring of the whole mineral chain by a mineral chain auditor. In this sense, due diligence transcends conventional risk management approaches that usually focus on the prevention of direct impacts on the core business activities of companies. Moreover, due diligence focuses on a maximum of transparency as an end in itself, while risk management is always directed towards averting direct damages. However, besides the Dodd–Frank Wall Street Reform and Consumer Protection Act and the OECD Guidance, there is still a gap in due diligence practices as international norms are just emerging. Studies found that the motivation for supply chain due diligence as well as expected outcomes of these processes vary among firms. Furthermore, different barriers, drivers, and implementation patterns of supply chain due diligence have been identified in scholarly research.
Implementation
Several industry organizations assist responsible companies with conducting due diligence tracking of minerals through the supply chain. Multiple international industry initiatives have been assessed for whether they fulfill the OECD guidance on conflict minerals.
The Dubai Multi Commodities Centre is a trade zone in the United Arab Emirates that is a major market for gold and diamonds.
The International Tin Association (ITA), previously known as the International Tin Research Institute (ITRI) until 2018, is a tin trade association based in the United Kingdom. The Tantalum-Niobium International Study Center (TIC) is a tantalum-niobium trade association based in Belgium. The organizations represent major buyers of tin, tantalum, and tungsten. Following the passage of the Dodd Frank bill, the two associations launched the ITRI Tin Supply Chain Initiative (ITSCI) in 2010.
The London bullion market is a market for gold and silver and runs the Responsible Gold Guidance (RGG).
The Responsible Jewellery Council is an industry organization for the watch and jewellery industry.
The Responsible Business Alliance (RBA), previously known as the Electronic Industry Citizenship Coalition (EICC), heads the Responsible Minerals Initiative (RMI), previously known as the Conflict Free Sourcing Initiative (CFSI).
Organizations and activists involved
The FairPhone Foundation raises awareness of conflict minerals in the mobile industry and is a company that tries to produce a smartphone with 'fair' conditions along the supply chain. Various industry and trade associations are also monitoring developments in conflict minerals laws and traceability frameworks; these represent the electronics, retail, jewelry, mining, electronic components, and general manufacturing sectors. One organization, ITRI (a UK-based international non-profit organization representing the tin industry, sponsored and supported by its members, principally miners and smelters), spearheaded efforts to develop and implement a "bag and tag" scheme at the mine as a key element of credible traceability. The program and related efforts initially seemed unlikely to extend beyond the pilot phase due to a variety of implementation and funding problems. In the end, however, the device did enter the market.
In late March 2011, the UK government launched an informational section on its Foreign & Commonwealth Office website dedicated to conflict minerals. This information resource is intended to assist British companies in understanding the issues and, specifically, the US requirements.
| Physical sciences | Earth science basics: General | Earth science |
15669707 | https://en.wikipedia.org/wiki/VK%20%28service%29 | VK (service) | VK (short for its original name VKontakte, meaning InContact) is a Russian online social media and social networking service based in Saint Petersburg. VK is available in multiple languages but it is predominantly used by Russian speakers. VK users can message each other publicly or privately, edit these messages, create groups, public pages, and events; share and tag images, audio, and video; and play browser-based games.
VK had at least 500 million accounts. As of November 2022, it was the sixth most popular website in Russia. The network was also popular in Ukraine until it was banned by the Verkhovna Rada in 2017.
According to Semrush, in 2024 VK is the 30th most visited website in the world.
History
VKontakte was conceived in 2006 when Pavel Durov, creator of the popular student forum spbgu.ru, met his former classmate Vyacheslav Mirilashvili in St. Petersburg after graduating from the Faculty of Philology at St Petersburg State University. Vyacheslav showed Durov the increasingly popular Facebook, after which the friends decided to create a new Russian social network. Lev Leviev, an Israeli classmate of Vyacheslav Mirilashvili, became the third co-founder. Vyacheslav Mirilashvili borrowed the money from his billionaire father and became the largest shareholder. Lev Leviev took over operational management, and Durov became CEO. Pavel Durov attracted his older brother Nikolai, a multiple winner of international math and programming competitions, to develop the site.
Durov launched VKontakte for beta testing in September 2006. The following month, the domain name Vkontakte.ru was registered. The new project was incorporated on 19 January 2007 as a Russian private limited company. In February 2007 the site reached a user base of over 100,000 and was recognized as the second largest company in Russia's nascent social network market. In the same month, the site was subjected to a severe DDoS attack, which briefly put it offline. The user base reached 1 million in July 2007, and 10 million in April 2008. In December 2008 VK overtook rival Odnoklassniki as Russia's most popular social networking service.
Website
Similar to many social networks, the platform's fundamental features revolve around private messaging, sharing photos, posting status updates, and exchanging links with friends. VK also provides tools for administering online communities and managing celebrity pages. The site allows its users to upload, search and stream media content, such as videos and music. VK features an advanced search engine that allows complex queries for finding friends, as well as a real-time news search. VK updated its features and design in April 2016.
Features
Messaging. VK Private Messages can be exchanged between groups of 2 to 500 people. An email address can also be specified as the recipient. Each message may contain up to 10 attachments: Photos, Videos, Audio Files, Maps (an embedded map with a manually placed marker), and Documents.
News. VK users can post on their profile walls, each post may contain up to 10 attachments – media files, maps, and documents (see above). User mentions and hashtags are supported. In the case of multiple photo attachments, the previews are automatically scaled and arranged in a magazine-style layout. The news feed can be switched between all news (default) and most interesting modes. The site features a news-recommendation engine, global real-time search, and individual search for posts and comments on specific users' walls.
Communities. VK features three types of communities. Groups are better suited for decentralized communities (discussion boards, wiki-style articles editable by all members, etc.). Public pages are a news-feed-oriented broadcasting tool for celebrities and businesses. The two types are largely interchangeable, the main difference being in the default settings. The third type of community, Events, is used for organizing concerts and similar occasions.
Like buttons. VK like buttons for posts, comments, media, and external sites operate differently from Facebook. Liked content doesn't get automatically pushed to the user's wall, but is saved in the private Favorites section instead. The user has to press a second 'share with friends' button to share an item on their wall or send it via private message to a friend.
Privacy. Users can control the availability of their content within the network and on the Internet. Blanket and granular privacy settings are available for pages and individual content.
Synchronization with other social networks. Any news published on the VK wall can also appear on Facebook or Twitter. Individual posts can be excluded from cross-posting by clicking the network's logo next to the "Send" button. Editing a post in VK does not change the post on Facebook or Twitter, and vice versa. However, deleting the news in VK will remove it from the other social networks.
SMS service. Russian users can receive and reply to a private message or leave a comment for community news using SMS.
Music. Users have access to the audio files uploaded by other users. In addition, users can upload audio files themselves, create playlists, and share audio with others by attaching it to messages and wall posts. Uploaded audio files must not violate copyright law.
Popularity
As of May 2017, according to Alexa Internet ranking, VK is one of the most visited websites in some Eurasian countries. It is:
4th most visited in Russia;
3rd most visited in Belarus;
6th most visited in Kazakhstan;
8th most visited in Kyrgyzstan and Moldova;
12th most visited in Latvia.
It was the fourth most viewed site in Ukraine until, in May 2017, the Ukrainian government banned the use of VK in Ukraine. According to a study for May 2018 conducted by Factum Group Ukraine, VK remained the fourth most viewed site in Ukraine, although Facebook received twice as many visits. For 2019, VK appeared as the most visited social network in Ukraine according to Alexa. According to the Internet Association of Ukraine, the share of Ukrainian Internet users who visited VK daily fell from 54% to 10% between September 2016 and September 2019. The association also claimed in November 2019 that Facebook was the most popular social network.
VK was expected to gain most of the users lost by Facebook and Instagram after they were blocked in Russia in 2022, according to a Calltouch poll.
Ownership
Initially, founder and CEO Pavel Durov owned 20% of shares (although he had majority voting power through proxy votes), and a trio of Russian-Israeli investors Yitzchak Mirilashvili, his father Mikhael Mirilashvili, and Lev Leviev owned 60%, 10%, and 10% respectively.
In 2007, Digital Sky Technologies, an investment company managed by Yuri Milner, acquired a total of 24.99% of the shares from shareholders, investing $16.3 million. In preparation for the IPO in September 2010, DST separated its international and Russian assets: the former formed the DST Global fund, while the latter, including VKontakte and rival social network Odnoklassniki, were merged into Mail.ru Group. Mail.ru Group used part of the money to acquire 7.5% of the social network for $112.5 million, valuing the entire project at $1.5 billion. After exercising a 7.5% option in July 2011 for $111.7 million, Mail.ru Group had accumulated a 39.99% stake in VKontakte.
The head of Mail.ru Group, Dmitry Grishin, voiced the company's intention to gain 100% control over VKontakte. MRG discussed with shareholders buying out their shares at a valuation of the entire company of $2–3 billion. In the summer of 2011, Mirilashvili and Leviev were ready to accept as payment shares of Facebook, Groupon, and Zynga owned by Mail.ru Group, but the deal failed due to Durov's unwillingness to sell a stake on MRG's terms. Later, the co-founders considered an IPO of VKontakte as an alternative. In March 2012, Durov was "accidentally" connected to negotiations in which Mirilashvili and Leviev discussed selling their stakes directly to Mail.ru Group's main investor, Alisher Usmanov. On the same day, Durov deleted the pages of the first co-investors, stopped contacting them, and soon announced that VKontakte would postpone its IPO indefinitely.
On 29 May 2012, Mail.ru Group announced its decision to yield control of the company to Durov by offering him the voting rights on its shares. Combined with Durov's personal 12% stake, this gave him 52% of the votes.
In April 2013, the Mirilashvili family sold its 40% share in VK to United Capital Partners for $1.12 billion, while Lev Leviev sold his 8% share in the same deal, giving United Capital Partners 48% ownership. In January 2014, VK's founder Pavel Durov sold his 12% stake in the company to Ivan Tavrin, the CEO of MegaFon, which is controlled by Alisher Usmanov. Following the deal, Usmanov and his allies controlled around 52% of the company. Shortly thereafter, Tavrin sold his 12% stake to Mail.ru, allowing Mail.ru to consolidate a controlling stake of 52% in VK.
On 1 April 2014, Durov submitted his resignation to the board; at first, because the company confirmed he had resigned, the move was believed to be related to the Russo-Ukrainian War, which had begun the previous February. However, Durov himself claimed on 3 April 2014 that it was an April Fools' joke. On 21 April 2014, Durov was dismissed as CEO, with the company claiming he had failed to withdraw his letter of resignation a month earlier. Durov then claimed the company had been effectively taken over by Vladimir Putin's political faction, suggesting his dismissal was the result of both his refusal to hand over personal details of users to federal law enforcement and his refusal to hand over the personal details of people who were members of a VKontakte group dedicated to the Euromaidan protest movement. Durov then left Russia and stated that he had "no plans to go back" and that "the country is incompatible with Internet business at the moment".
On 16 September 2014, the Mail.ru group bought the remaining 48% stake of VK from United Capital Partners (UCP) for $1.5 billion, thus becoming the sole proprietor of the social network.
In December 2021, Russian state-owned bank Gazprombank and insurance company Sogaz bought out 57.3% of VK shares, thus becoming the holders of the company's controlling interest.
Controversies
Copyright issues
Litigation
In 2008, the leading Russian television channel TV Russia (which used the name RTR from 1991 to 2002 and is now Russia 1) and the television company VGTRK sued VKontakte (now VK) over unlicensed copies of two of its films which had been uploaded by VK users. In 2010, this dispute was settled by the Russian High Arbitration Court in favour of the social network. The court ruled that VK is not responsible for its users' copyright violations, taking into account that both parties agreed on the technical possibility of identifying the user who posted the illegal content, who consequently must incur the liability. Another ruling early in 2012 went partially in favor of Gala Records (now Warner Music Russia), a recording studio, when the same court ordered VK to pay $7,000 for not being active enough with regard to copyrighted materials.
Efforts against copyright infringement
VK offers a content removal tool for copyright holders. Large-scale copyright holders may gain access to bulk content removal tools.
Since 2010, VK has also entered several partnerships with legal content providers such as television networks and streaming providers, most notably the video-on-demand provider Ivi.ru, which secured licensing rights with all Hollywood majors in 2012. These partnerships allow providers to remove user-uploaded content from VK and substitute it with legal embedded copies from the provider's site. This legal content can be either ad-sponsored, subscription-based, or free, depending on the provider's choices. VK does not display its own advertising in the site's music or video sections, nor in the videos themselves. In October 2013, VKontakte was cleared of copyright infringement charges by a court in Saint Petersburg. The judge ruled that the social network is not responsible for the content uploaded by its users.
In November 2014, the head of the Roskomnadzor, Maxim Ksenzov, said that VKontakte would complete the process of legalization of the content at the beginning of 2015. At that time (November 2014), negotiations between major label companies and the social network VKontakte were ongoing.
DDoS attacks on sites
Because the social network is one of the most popular and most visited sites on the Runet, its traffic can be used to mount DDoS attacks on smaller sites.
VK performed DDoS attacks on certain sites by making users' browsers send multiple requests to the target site without their consent. The targets were the Runet Prize voting page in 2008 and the CAPTCHA-solving service antigate.com in 2012. The attacks worked by inserting an iframe and a piece of JavaScript code that periodically reloaded the iframe. As a countermeasure, antigate detected whether the iframe was being loaded from VK and, if so, redirected the request to xHamster, a pornography website; VK then needed to cease the attack because the site is used by children. VK tried to use XMLHttpRequest instead, but overlooked the same-origin policy. The attack was eventually stopped, though there were many other ways the redirect problem could have been addressed.
Durov's dismissal
Durov was dismissed as CEO in April 2014 after he had failed to retract a letter of resignation. Durov contended that the resignation letter was an April fools prank. Durov then claimed Vladimir Putin's allies had, in effect, taken over the company, and suggested his ousting was the result of his refusal to hand over personal details of users to the Russian Federal Security Service and his refusal to shut down a VK group dedicated to anti-corruption activist Alexei Navalny.
Censorship
On 24 May 2013, it was reported in the media that the site had been mistakenly put on a list of websites banned by the Russian government. Some critics described the blacklisting as the latest in a series of suspicious incidents to affect the website in recent months, suggesting it was a way for the Russian government to increase its stake in, and control of, the site.
On 18 November 2013, following an order from the Court of Rome, VK was blocked in Italy after a complaint from Medusa Film stating that it was hosting an illegal copy of one of its films. However, in April 2015, the site was reopened for Italian users and its mobile app is available on both the App Store and Google Play.
In January 2016, China banned VKontakte, claiming that it was manipulating web content and cooperating with the Kremlin. According to Russia's media watchdog, the network has an estimated 300,000 users based in China. As of 14 February 2018, Chinese authorities had unblocked VKontakte and it was fully accessible in the country.
In May 2017, Ukrainian President Petro Poroshenko signed a decree to impose a ban on Mail.ru and its widely used social networks including VKontakte and Odnoklassniki as part of its continued sanctions on Russia for its annexation of Crimea and involvement in the war in Donbas. Reporters Without Borders condemned the ban, calling it a "disproportionate measure that seriously undermines the Ukrainian people's right to information and freedom of expression." VK closed its office in Ukraine's capital Kyiv in June 2017.
In December 2021, VKontakte's CEO, Boris Dobrodeev, resigned from his post. Reuters linked Dobrodeev's resignation to the acquisition of VK's majority interest by two state-owned companies that happened the same month. According to one analyst, the state consolidation of VKontakte would lead to greater censorship by the government.
On 12 May 2022, in connection with the sanctions imposed by the European Union (EU), NEPLP decided to limit the activity of the "VKontakte" ("vk.com"), "Odnoklassniki" ("ok.ru"), and "Moy Mir" ("my.mail.ru") social media platforms in Latvia. The decision was made because NEPLP had evidence that the platforms are owned and controlled by Yury Kovalchuk and Vladimir Kiriyenko, who are subject to EU sanctions in connection with undermining the territorial integrity, sovereignty, and independence of Ukraine.
After the Russian military invasion of Ukraine, on 26 September 2022 the VK application (as well as other applications of the holding's services) was removed from the Apple App Store due to international sanctions. On 28 September, the Russian communications regulator Roskomnadzor issued a statement demanding an explanation for the removal of the VK application from the App Store. CEO Vladimir Kiriyenko has been sanctioned by the United States, Canada, the United Kingdom, the European Union, Japan, Australia, and various other countries.
Prosecution of users in Russia
In July 2012, VKontakte was accused of close cooperation with the Centre for Combating Extremism (Centre E), a unit within the Russian Ministry of Internal Affairs heavily criticized for repressing opposition activists. For publications, reposts, comments and likes posted on their VKontakte pages, dozens of Russian citizens were sentenced to fines, suspended sentences and imprisonment. Most of the cases against users are qualified as propaganda of extremism, xenophobia and Nazism. Statistically, among all the social networking services available in Russia, the users of VKontakte were targeted by police almost exclusively.
Events and projects
Automated workplace of a civil servant
By 2023, an "Automated workplace of a civil servant" (АРМ ГС in Russian) had been developed on the basis of VK, to which it is planned to transfer all Russian civil servants. The AWP includes mail, a calendar, cloud storage, and an instant messenger, and supports audio and video calls. It is intended to replace Telegram, WhatsApp, Gmail, Google Docs, Zoom, and Skype, which are widely used in Russia. In this way, the Russian government intends to exclude foreign services from the public administration system. Officials were supposed to switch to the "Automated workplace" from May 1, 2023.
In June 2023, the Ministry of Digital Development, Communications and Mass Media announced a competition for software developers to "scale" the workspace and provide the necessary level of cryptographic protection. It is planned to spend 9 million rubles on the project.
Hackathons
VK organized their first 24-hour Hackathon in 2015 from 31 October to 1 November. The participants were invited to develop projects united by a common idea: “Make it Simple!” (Russian: «Упрощайте!»). 34 teams took part in the competition. A prize pool of 300 thousand rubles was split among the winners.
The second VK Hackathon took place from 26 to 27 November 2016. The participants developed projects for the community app platform. The “Search for Lost Cats” (Russian: «Поиск пропавших котиков») app won the “Developers’ Choice” category. The prize pool for the event was 300 thousand rubles.
The third VK Hackathon took place from 20 to 22 October 2017 with 320 participants competing in the event. The prize pool was one million rubles. An application designed to help users navigate the State Hermitage Museum won the “Culture” category.
Start Fellows
In 2011, Pavel Durov and Yuri Milner created Start Fellows, a grant program founded to support projects in the field of technology. In 2014, VK took over the Start Fellows program and made it more systematic. The grant was provided to 3 companies each month and included project consultation from VK along with 25 thousand rubles a month for advertisement on the VK platform. Winners of the grant include “University Schedules” (Russian: «Расписание вузов»), a scheduling app, LiveCamDroid, a mobile streaming service, HTML Academy, an educational project, and others.
VK re-launched the project in 2017. Only active projects with an earnings model could submit applications. 327 grant applications were received but only 67 of them passed the initial screening. The total prize pool was 2.5 million rubles.
VK Cup
The first VK Cup, a programming championship for young programmers aged 13–23, was held on 16 July 2012 in Saint Petersburg, Russia.
VK and Codeforces co-organized the second VK Cup programming championship, which took place from 24 to 27 July 2015. The winners received a total of 1,048,576 (2^20) rubles, an amount related to round binary numbers.
The third VK Cup took place from 1 to 4 July 2016 and had a prize pool of 2.5 million rubles.
VK and Codeforces co-organized the fourth VK Cup which took place from 8 to 9 July 2017. Teams from 52 countries applied to take part in the competition. The prize pool for the competition was 2.5 million rubles.
VK Music Awards
The first VK Music Awards ceremony took place on 25 December 2017. The VK Music Awards were produced by Timur Bekmambetov and the Bazelevs Company with Pavel Volya hosting the event. The awards ceremony was held in the form of an online live stream. Any VK user could watch the broadcast live. After the ceremony, a private concert was held in the Vegas City Hall in Moscow. Tickets to the event could be won through a contest held in the VK Music community. VK Music Awards winners were determined by the number of plays an artist's song got on VK and the BOOM app. The names of the 30 award winners were published on the official VK Music Awards community page and on the BOOM app website. “Rosé Wine” (Allj and Feduk), “Lambada” ( and Scriptonite), and “My Half” () topped the list of most listened to songs. The official pages of all award winners have been marked with a special symbol.
VK Fest
Since 2015, VK has held a yearly 2-day open-air music and entertainment festival. This festival traditionally takes place on a weekend in July at the Park of the 300th Anniversary of Saint Petersburg (Russian: Парк имени 300-летия Санкт-Петербурга) in St. Petersburg, Russia. According to data from the organizer, 70 thousand people attended the festival in 2016, with the number rising to 85 thousand attendees in 2017. In 2017, around 40 artists and groups performed on 3 stages, including Little Big, The Hatters, and others. Bloggers and other famous individuals, such as Dmitry Grishin, Timur Bekmambetov, and Mikhail Piotrovsky (speakers at the 2017 festival), are also an important part of the festival. More than 1.5 million people watched the festival's official live stream.
| Technology | Social network and blogging | null |
1240348 | https://en.wikipedia.org/wiki/Strength%20training | Strength training | Strength training, also known as weight training or resistance training, involves the performance of physical exercises that are designed to improve physical strength. It is often associated with the lifting of weights. It can also incorporate a variety of training techniques such as bodyweight exercises, isometrics, and plyometrics.
Training works by progressively increasing the force output of the muscles and uses a variety of exercises and types of equipment. Strength training is primarily an anaerobic activity, although circuit training also is a form of aerobic exercise.
Strength training can increase muscle, tendon, and ligament strength as well as bone density, metabolism, and the lactate threshold; improve joint and cardiac function; and reduce the risk of injury in athletes and the elderly. For many sports and physical activities, strength training is central or is used as part of their training regimen.
Principles and training methods
Strength training follows the fundamental principle of repeatedly overloading a muscle group. This is typically done by contracting the muscles against heavy resistance and then returning to the starting position, repeated for several repetitions until the muscles reach the point of failure. The basic method of resistance training uses the principle of progressive overload, in which the muscles are overloaded by working against as high a resistance as they are capable of. They respond by growing larger and stronger.
Beginning strength-trainers are in the process of training the neurological aspects of strength, the ability of the brain to generate a rate of neuronal action potentials that will produce a muscular contraction that is close to the maximum of the muscle's potential.
Proper form
Strength training also requires the use of proper or 'good form', performing the movements with the appropriate muscle group, and not transferring the weight to different body parts in order to move greater weight (called 'cheating'). An injury or an inability to reach training objectives might arise from poor form during a training set. If the desired muscle group is not challenged sufficiently, the threshold of overload is never reached and the muscle does not gain in strength. At a particularly advanced level, however, "cheating" can be used to break through strength plateaus and encourage neurological and muscular adaptation.
Maintaining proper form is one of many steps required to perform a technique correctly. Correct form in weight training improves strength and muscle tone and helps maintain a healthy weight. Improper form can lead to strains and fractures.
Stretching and warm-up
Weight trainers often spend time warming up before starting a workout, a practice strongly recommended by the National Strength and Conditioning Association (NSCA). A warm-up may include cardiovascular activity such as light stationary biking (a "pulse raiser"), flexibility and joint mobility exercises, static and/or dynamic stretching, "passive warm up" such as applying heat pads or taking a hot shower, and workout-specific warm-up, such as rehearsal of the intended exercise with no weights or light weights. The intended purpose of warming up is to enhance exercise effectiveness and reduce the risk of injury.
Evidence is limited regarding whether warming up reduces injuries during strength training. As of 2015, no articles existed on the effects of warm-up for upper body injury prevention. For the lower limbs, several programs significantly reduce injuries in sports and military training, but no universal injury prevention program has emerged, and it is unclear if warm-ups designed for these areas will also be applicable to strength training. Static stretching can increase the risk of injury due to its analgesic effect and cellular damage caused by it.
The effects of warming up on exercise effectiveness are clearer. For 1RM trials, an exercise rehearsal has significant benefits. For submaximal strength training (3 sets of 80% of 1RM to failure), exercise rehearsal does not provide any benefits regarding fatigue or total repetitions for exercises such as bench press, squats, and arm curl, compared to no warm-up. Dynamic warm-ups (performed with greater than 20% of maximal effort) enhance strength and power in upper-body exercises. When properly warmed up the lifter will have more strength and stamina since the blood has begun to flow to the muscle groups. Pulse raisers do not have any effect on either 1RM or submaximal training. Static stretching induces strength loss, and should therefore probably not be performed before strength training. Resistance training functions as an active form of flexibility training, with similar increases in range of motion when compared to performing a static stretching protocol. Static stretching, performed either before or after exercise, also does not reduce muscle soreness in healthy adults.
Breathing
Like numerous forms of exercise, weight training has the potential to cause the breathing pattern to deepen. This helps to meet increased oxygen requirements. One approach to breathing during weight training consists of avoiding holding one's breath and breathing shallowly. The benefits of this include protecting against a lack of oxygen, passing out, and increased blood pressure. The general procedure of this method is to inhale when lowering the weight (the eccentric portion) and exhale when lifting the weight (the concentric portion). However, the reverse, inhaling when lifting and exhaling when lowering, may also be recommended. There is little difference between the two techniques in terms of their influence on heart rate and blood pressure.
On the other hand, people working with extremely heavy loads (such as powerlifters) often breathe using the Valsalva maneuver. This involves deeply inhaling and then bracing down with the abdominal and lower back muscles as the air is held in during the entire rep. Air is then expelled once the rep is done, or after a number of reps is done. The Valsalva maneuver leads to an increase in intrathoracic and intra-abdominal pressure. This enhances the structural integrity of the torso—protecting against excessive spinal flexion or extension and providing a secure base to lift heavy weights effectively and securely. However, as the Valsalva maneuver increases blood pressure, lowers heart rate, and restricts breathing, it can be a dangerous method for those with hypertension or for those who faint easily.
Training volume
Training volume is commonly defined as sets × reps × load. That is, an individual moves a certain load for some number of repetitions, rests, and repeats this for some number of sets, and the volume is the product of these numbers. For non-weightlifting exercises, the load may be replaced with intensity, the amount of work required to achieve the activity. Training volume is one of the most critical variables in the effectiveness of strength training. There is a positive relationship between volume and hypertrophy.
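As a concrete illustration of this definition, the following minimal Python sketch computes the volume of a session; the exercises and numbers are hypothetical, chosen only to show the arithmetic:

    # Training volume = sets x reps x load, summed over the exercises in a session.
    session = [
        {"exercise": "squat", "sets": 3, "reps": 5, "load_kg": 100},
        {"exercise": "bench press", "sets": 3, "reps": 8, "load_kg": 60},
    ]

    def exercise_volume(sets, reps, load_kg):
        """Volume contributed by one exercise."""
        return sets * reps * load_kg

    total_volume = sum(exercise_volume(e["sets"], e["reps"], e["load_kg"]) for e in session)
    print(total_volume)  # 3*5*100 + 3*8*60 = 2940 (kg lifted in total)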
The load or intensity is often normalized as a percentage of an individual's one-repetition maximum (1RM). Because sets are limited by muscle failure, the intensity determines the maximum number of repetitions that can be carried out in one set, so load and repetition range are closely linked. Depending on the goal, different loads and repetition ranges may be appropriate (a rough sketch of these guideline ranges follows the list below):
Strength development (1RM performance): Gains may be achieved with a variety of loads. However, training efficiency is maximized by using heavy loads (80% to 100% of 1RM). The number of repetitions is secondary and may be 1 to 5 repetitions per set.
Muscle growth (hypertrophy): Hypertrophy can be maximized by taking sets to failure or close to failure. Any load of 30% of 1RM or greater may be used. The NSCA recommends "medium" loads of 8 to 12 repetitions per set with 60% to 80% of 1RM.
Endurance: Endurance may be trained by performing many repetitions, such as 15 or more per set. The NSCA recommends "light" loads below 60% of 1RM, but some studies suggest that "moderate" 15–20RM loads performed to failure may work better.
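The guideline ranges above can be summarized as a small lookup table. This is only a rough sketch of the recommendations described in this section, with simplified boundary values, not an authoritative prescription:

    # Approximate load and repetition guidelines per goal (simplified from the text above).
    GOAL_GUIDELINES = {
        "strength":    {"pct_1rm": (0.80, 1.00), "reps": (1, 5)},
        "hypertrophy": {"pct_1rm": (0.60, 0.80), "reps": (8, 12)},
        "endurance":   {"pct_1rm": (0.30, 0.60), "reps": (15, 25)},
    }

    def working_load_range(goal, one_rm_kg):
        """Approximate working-load range in kg for a given goal and known 1RM."""
        low, high = GOAL_GUIDELINES[goal]["pct_1rm"]
        return one_rm_kg * low, one_rm_kg * high

    print(working_load_range("hypertrophy", 120))  # (72.0, 96.0)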
Training to muscle failure is not necessary for increasing muscle strength and muscle mass, but it also is not harmful.
Movement tempo
The speed or pace at which each repetition is performed is also an important factor in strength and muscle gain. The emerging format for expressing this is a 4-number tempo code such as 3/1/4/2, meaning an eccentric phase lasting 3 seconds, a pause of 1 second, a concentric phase of 4 seconds, and another pause of 2 seconds. The letter X in a tempo code represents a voluntary explosive action, for which the actual velocity and duration are not controlled and may be involuntarily extended as fatigue sets in, while the letter V implies volitional freedom, "at your own pace". A phase's tempo may also be measured as the average movement velocity. Less precise but commonly used characterizations of tempo include the total time for the repetition or a qualitative description such as fast, moderate, or slow. The ACSM recommends a moderate or slower tempo of movement for novice- and intermediate-trained individuals, but a combination of slow, moderate, and fast tempos for advanced training.
Intentionally slowing down the movement tempo of each repetition can increase muscle activation for a given number of repetitions. However, the maximum number of repetitions, and the maximum possible load for a given number of repetitions, decrease as the tempo is slowed. Some trainers calculate training volume using the time under tension (TUT), namely the time per repetition multiplied by the number of repetitions, rather than simply the number of repetitions. However, hypertrophy is similar for a fixed number of repetitions with each repetition's duration varying from 0.5 s to 8 s. There is, however, a marked decrease in hypertrophy for "very slow" durations greater than 10 s. There are similar hypertrophic effects for 50–60% 1RM loads with a slower 3/0/3/0 tempo and 80–90% 1RM loads with a faster 1/1/1/0 tempo. It may be beneficial for both hypertrophy and strength to use fast, short concentric phases and slower, longer eccentric phases. Research has not yet isolated the effects of concentric and eccentric durations, or tested a wide variety of exercises and populations.
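The tempo code and the time-under-tension calculation described above amount to simple arithmetic; here is a minimal Python sketch (the tempo string and repetition count are hypothetical):

    # Parse a 4-number tempo code (eccentric / pause / concentric / pause) and
    # compute time under tension (TUT) = seconds per repetition x repetitions.
    def parse_tempo(code):
        """'3/1/4/2' -> [3.0, 1.0, 4.0, 2.0]; 'X' (explosive) is treated as ~0 s.
        'V' (at your own pace) has no defined duration and is not handled here."""
        return [0.0 if part.upper() == "X" else float(part) for part in code.split("/")]

    def time_under_tension(code, reps):
        return sum(parse_tempo(code)) * reps

    print(time_under_tension("3/1/4/2", 10))  # 10 s per rep x 10 reps = 100 s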
Weekly frequency
In general, more weekly training sessions lead to higher increases in physical strength. However, when training volume was equalized, training frequency had no influence on muscular strength. In addition, greater frequency had no significant effect on single-joint exercises. There may be a fatigue recovery effect in which spreading the same amount of training over multiple days boosts gains, but this has to be confirmed by future studies.
For muscle growth, a training frequency of two sessions per week had greater effects than once per week. Whether training a muscle group three times per week is superior to a twice-per-week protocol remains to be determined.
Rest period
The rest period is defined as the time dedicated to recovery between sets and exercises. Exercise causes metabolic stress, such as the buildup of lactic acid and the depletion of adenosine triphosphate and phosphocreatine. Resting 3–5 minutes between sets allows for significantly greater repetitions in the next set versus resting 1–2 minutes.
For untrained individuals (no previous resistance training experience), the effect of resting on muscular strength development is small, and other factors such as volitional fatigue and discomfort, cardiac stress, and the time available for training may be more important. Moderate rest intervals (60–160 s) are better than short ones (20–40 s), but long rest intervals (3–4 minutes) show no significant difference from moderate ones.
For trained individuals, rest of 3–5 minutes is sufficient to maximize strength gains, compared to shorter intervals of 20–60 s and longer intervals of 5 minutes. Intervals of greater than 5 minutes have not been studied. Starting at 2 minutes and progressively decreasing the rest interval to 30 s over the course of a few weeks can produce similar strength gains to a constant 2 minutes.
Among older adults, a one-minute rest appears to be sufficient in women.
Order
The largest increases in strength happen for the exercises in the beginning of a session.
Supersets are defined as a pair of different exercise sets performed without rest, followed by a normal rest period. Common superset configurations are two exercises for the same muscle group, agonist–antagonist muscles, or alternating upper and lower body muscle groups. Exercises for the same muscle group (for example, a flat bench press followed by an incline bench press) result in a significantly lower training volume than a traditional exercise format with rests. However, agonist–antagonist supersets result in a significantly higher training volume when compared to a traditional exercise format. Similarly, holding training volume constant but performing upper–lower body supersets and tri-sets reduces elapsed time but increases the rating of perceived exertion. These results suggest that specific exercise orders may allow more intense, more time-efficient workouts with results similar to longer workouts.
Periodization
Periodization refers to the organization of training into sequential phases and cyclical periods, and the change in training over time. The simplest strength training periodization involves keeping a fixed schedule of sets and reps (e.g. 2 sets of 12 reps of bicep curls every 2 days), and steadily increasing the intensity on a weekly basis. This is conceptually a parallel model, as several exercises are done each day and thus multiple muscles are developed simultaneously. It is also sometimes called linear periodization, but this designation is considered a misnomer.
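As a rough sketch of such a fixed-schedule progression (the starting load and weekly increment below are hypothetical examples, not recommendations):

    # Fixed sets and reps with a steady weekly increase in load.
    def simple_progression(start_kg, weekly_increment_kg, weeks, sets=2, reps=12):
        """Yield (week, sets, reps, load) for a fixed-schedule progression."""
        for week in range(1, weeks + 1):
            yield week, sets, reps, start_kg + weekly_increment_kg * (week - 1)

    for week, sets, reps, load in simple_progression(10.0, 1.25, 4):
        print(f"Week {week}: {sets}x{reps} at {load} kg")
    # Week 1: 2x12 at 10.0 kg ... Week 4: 2x12 at 13.75 kg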
Sequential or block periodization concentrates training into periods ("blocks"). For example, for athletes, performance can be optimized for specific events based on the competition schedule. An annual training plan may be divided hierarchically into several levels, from training phases down to individual sessions. Traditional periodization can be viewed as repeating one weekly block over and over. Block periodization has the advantage of focusing on specific motor abilities and muscle groups. Because only a few abilities are worked on at a time, the effects of fatigue are minimized. With careful goal selection and ordering, there may be synergistic effects. A traditional block consists of high-volume, low-intensity exercises, transitioning to low-volume, high-intensity exercises. However, to maximize progress to specific goals, individual programs may require different manipulations, such as decreasing the intensity and increasing volume.
Undulating periodization is an extension of block periodization to frequent changes in volume and intensity, usually daily or weekly. Because of the rapid changes, it is theorized that there will be more stress on the neuromuscular system and better training effects. Undulating periodization yields better strength improvements on 1RM than non-periodized training. For hypertrophy, it appears that daily undulating periodization has similar effect to more traditional models.
Training splits
A training split refers to how the trainee divides and schedules their training volume, or in other words which muscles are trained on a given day over a period of time (usually a week). Popular training splits include full body, upper/lower, push/pull/legs, and the "bro" split. Some training programs may alternate splits weekly.
Exercise selection
Exercise selection depends on the goals of the strength training program. If a specific sport or activity is targeted, the focus will be on specific muscle groups used in that sport. Various exercises may target improvements in strength, speed, agility, or endurance. For other populations such as older individuals, there is little information to guide exercise selection, but exercises can be selected on the basis of specific functional capabilities as well as the safety and efficiency of the exercises.
For strength and power training in able-bodied individuals, the NSCA recommends emphasizing integrated or compound movements (multi-joint exercises), such as with free weights, over exercises isolating a muscle (single-joint exercises), such as with machines. This is because only the compound movements improve gross motor coordination and proprioceptive stabilizing mechanisms. However, single-joint exercises can result in greater muscle growth in the targeted muscles and are more suitable for injury prevention and rehabilitation. Low variation in exercise selection or targeted muscle groups, combined with a high volume of training, is likely to lead to overtraining and training maladaptation. Many exercises such as the squat have several variations; some studies have analyzed their differing muscle activation patterns, which can aid in exercise selection.
Equipment
Commonly used equipment for resistance training include free weights—including dumbbells, barbells, and kettlebells—weight machines, and resistance bands.
Resistance can also be generated by inertia in flywheel training instead of by gravity from weights, facilitating variable resistance throughout the range of motion and eccentric overload.
Some bodyweight exercises do not require any equipment, and others may be performed with equipment such as suspension trainers or pull-up bars.
Types of strength training exercises
Isometric exercise
Isotonic exercise
Isokinetic exercise
Aerobic exercise versus anaerobic exercise
Strength training exercise is primarily anaerobic. Even while training at a lower intensity (training loads of ~20-RM), anaerobic glycolysis is still the major source of power, although aerobic metabolism makes a small contribution. Weight training is commonly perceived as anaerobic exercise, because one of the more common goals is to increase strength by lifting heavy weights. Other goals such as rehabilitation, weight loss, body shaping, and bodybuilding often use lower weights, adding aerobic character to the exercise.
Except in the extremes, a muscle will fire fibres of both the aerobic and anaerobic types on any given exercise, in a ratio that varies with the load and the intensity of the contraction. This is known as the energy system continuum. At higher loads, the muscle will recruit all muscle fibres possible, both anaerobic ("fast-twitch") and aerobic ("slow-twitch"), to generate the most force. However, at maximum load, the anaerobic processes contract so forcefully that the aerobic fibers are completely shut out, and all work is done by the anaerobic processes. Because the anaerobic muscle fibre uses its fuel faster than the blood and intracellular restorative cycles can resupply it, the maximum number of repetitions is limited. In the aerobic regime, the blood and intracellular processes can maintain a supply of fuel and oxygen, and continual repetition of the motion will not cause the muscle to fail.
Circuit weight training is a form of exercise that uses a number of weight training exercise sets separated by short intervals. The cardiovascular effort to recover from each set serves a function similar to an aerobic exercise, but this is not the same as saying that a weight training set is itself an aerobic process.
Strength training is typically associated with the production of lactate, which is a limiting factor of exercise performance. Regular endurance exercise leads to adaptations in skeletal muscle which can prevent lactate levels from rising during strength training. This is mediated via activation of PGC-1alpha, which alters the LDH (lactate dehydrogenase) isoenzyme complex composition, decreasing the activity of the lactate-generating enzyme LDHA and increasing the activity of the lactate-metabolizing enzyme LDHB.
Nutrition and supplementation
Supplementation of protein in the diet of healthy adults increases the size and strength of muscles during prolonged resistance exercise training (RET); protein intakes of greater than 1.62 grams per kilogram of body weight a day did not additionally increase fat–free mass (FFM), muscle size, or strength, with the caveat that "Increasing age reduces… the efficacy of protein supplementation during RET."
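The threshold reported above is a simple per-body-weight figure; a minimal sketch of the arithmetic (the body weight used is a hypothetical example):

    # Daily protein intake beyond which no additional benefit was observed (~1.62 g/kg).
    def protein_ceiling_g(body_weight_kg, g_per_kg=1.62):
        return body_weight_kg * g_per_kg

    print(protein_ceiling_g(80))  # 129.6 g per day for an 80 kg adult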
It is not known how much carbohydrate is necessary to maximize muscle hypertrophy. Strength adaptations may not be hindered by a low-carbohydrate diet.
A light, balanced meal prior to the workout (usually one to two hours beforehand) ensures that adequate energy and amino acids are available for the intense bout of exercise. The type of nutrients consumed affects the response of the body, and nutrient timing whereby protein and carbohydrates are consumed prior to and after workout has a beneficial impact on muscle growth. Water is consumed throughout the course of the workout to prevent poor performance due to dehydration. A protein shake is often consumed immediately following the workout. However, the anabolic window is not particularly narrow and protein can also be consumed before or hours after the exercise with similar effects. Glucose (or another simple sugar) is often consumed as well since this quickly replenishes any glycogen lost during the exercise period.
If a recovery drink is consumed after a workout, then to maximize muscle protein anabolism it is suggested that it contain glucose (dextrose), a protein (usually whey) hydrolysate containing mainly dipeptides and tripeptides, and leucine.
Some weight trainers also take ergogenic aids such as creatine or anabolic steroids to aid muscle growth. In a meta-analysis study that investigated the effects of creatine supplementation on repeated sprint ability, it was discovered that creatine increased body mass and mean power output. The creatine-induced increase in body mass was a result of fluid retention. The increase in mean power output was attributed to creatine's ability to counteract the lack of intramuscular phosphocreatine. Creatine does not have an effect on fatigue or maximum power output.
Hydration
As with other sports, weight trainers should avoid dehydration throughout the workout by drinking sufficient water. This is particularly true in hot environments, or for those older than 65.
Some athletic trainers advise athletes to drink at regular intervals, roughly every 15 minutes, while exercising, and to continue drinking throughout the day.
However, a much more accurate determination of how much fluid is necessary can be made by performing appropriate weight measurements before and after a typical exercise session, to determine how much fluid is lost during the workout. The greatest source of fluid loss during exercise is through perspiration, but as long as fluid intake is roughly equivalent to the rate of perspiration, hydration levels will be maintained.
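The weighing procedure described above reduces to simple arithmetic; a hedged sketch (all figures hypothetical), assuming roughly 1 kg of body-weight change corresponds to about 1 L of fluid:

    # Estimate fluid loss from pre/post-workout body weight and fluid drunk during the session.
    def estimated_fluid_loss_l(pre_workout_kg, post_workout_kg, fluid_drunk_l=0.0):
        return (pre_workout_kg - post_workout_kg) + fluid_drunk_l

    print(estimated_fluid_loss_l(80.0, 79.2, fluid_drunk_l=0.5))  # about 1.3 L lost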
Under most circumstances, sports drinks do not offer a physiological benefit over water during weight training.
Insufficient hydration may cause lethargy, soreness or muscle cramps. The urine of well-hydrated persons should be nearly colorless, while an intense yellow color is normally a sign of insufficient hydration.
Effects
The effects of strength training include greater muscular strength, improved muscle tone and appearance, increased endurance, cardiovascular health, and enhanced bone density.
Bones, joints, frailty, posture and in people at risk
Strength training also provides functional benefits. Stronger muscles improve posture, provide better support for joints, and reduce the risk of injury from everyday activities.
Progressive resistance training may improve function and quality of life and reduce pain in people at risk of fracture, with rare adverse effects. Weight-bearing exercise also helps to prevent osteoporosis and to improve bone strength in those with osteoporosis. For many people in rehabilitation or with an acquired disability, such as following stroke or orthopaedic surgery, strength training for weak muscles is a key factor to optimise recovery. Consistent exercise can strengthen bones and help prevent them from becoming fragile with age.
Mortality, longevity, muscle and body composition
Strength training appears to be associated with a "10–17% lower risk of all-cause mortality, cardiovascular disease (CVD), total cancer, diabetes and lung cancer". Two key outcomes of strength training are muscle hypertrophy and muscular strength gain which are associated with reduced all-cause mortality.
Strength training causes endocrine responses that could have positive effects. It also reduces blood pressure (SBP and DBP) and alters body composition, reducing body fat percentage, body fat mass and visceral fat, which is usually beneficial as obesity predisposes towards several chronic diseases and e.g. body fat distribution is one predictor of insulin resistance and related complications.
Neurobiological effects
Strength training also leads to various beneficial neurobiological effects – likely including functional brain changes, lower white matter atrophy, neuroplasticity (including some degree of BDNF expression), and white matter-related structural and functional changes in neuroanatomy. Although resistance training has been less studied for its effect on depression than aerobic exercise, it has shown benefits compared to no intervention.
Lipid and inflammatory outcomes
It also promotes decreases in total cholesterol (TC), triglycerides (TG), low-density lipoprotein (LDL), and C-reactive protein (CRP), as well as increases in high-density lipoprotein (HDL) and adiponectin concentrations.
Sports performance
Stronger muscles improve performance in a variety of sports. Sport-specific training routines are used by many competitors. These often specify that the speed of muscle contraction during weight training should be the same as that of the particular sport. Strength training can substantially prevent sports injuries, increase jump height and improve change of direction.
Neuromuscular adaptations
Strength training is not only associated with an increase in muscle mass, but also an improvement in the nervous system's ability to recruit muscle fibers and activate them at a faster rate. Neural adaptations can occur in the motor cortex, the spinal cord, and/or neuromuscular junctions. The initial significant improvements in strength amongst new lifters are a result of increased neural drive, motor unit synchronization, motor unit excitability, rate of force development, muscle fiber conduction velocity, and motor unit discharge rate. Together, these improvements provide an increase in strength separate from muscle hypertrophy. Typically, the main barbell lifts (squat, bench, and deadlift) are performed with a full range of motion, which provides the greatest neuromuscular improvements compared to one-third or two-thirds range of motion. However, there are reasons to perform these lifts with less range of motion, particularly in the powerlifting community. By limiting range of motion, lifters can target a specific joint angle in order to improve their sticking points by training their neural drive. Neuromuscular adaptations are critical for the development of strength, but are especially important in the aging adult population, as the decline in neuromuscular function is roughly three times as great (~3% per year) as the loss of muscle mass (~1% per year). By staying active and following a resistance training program, older adults can maintain their movement, stability, balance, and independence.
History
The genealogy of lifting can be traced back to the beginning of recorded history, when humanity's fascination with physical abilities appears in numerous ancient writings. Many prehistoric tribes had a large rock that members would try to lift, and the first to succeed would inscribe their name into the stone. Such rocks have been found in Greek and Scottish castles. Progressive resistance training dates back at least to Ancient Greece, when legend has it that wrestler Milo of Croton trained by carrying a newborn calf on his back every day until it was fully grown. Another Greek, the physician Galen, described strength training exercises using the halteres (an early form of dumbbell) in the 2nd century.
Ancient Greek sculptures also depict lifting feats. The weights were generally stones, but later gave way to dumbbells. The dumbbell was joined by the barbell in the later half of the 19th century. Early barbells had hollow globes that could be filled with sand or lead shot, but by the end of the century these were replaced by the plate-loading barbell commonly used today.
Weightlifting was first introduced in the Olympics in the 1896 Athens Olympic Games as a part of track and field, and was officially recognized as its own event in 1914.
The 1960s saw the gradual introduction of exercise machines into the still-rare strength training gyms of the time. Weight training became increasingly popular in the 1970s, following the release of the bodybuilding movie Pumping Iron, and the subsequent popularity of Arnold Schwarzenegger. Since the late 1990s, increasing numbers of women have taken up weight training; currently, nearly one in five U.S. women engage in weight training on a regular basis.
Subpopulations
Sex differences
Men and women have similar reactions to resistance training with comparable effect sizes for hypertrophy and lower body strength, although some studies have found that women experience a greater relative increase in upper-body strength. Because of their greater starting strength and muscle mass, absolute gains are higher in men. In older adults, women experienced a larger increase in lower-body strength.
Safety concerns and training related to children
Orthopaedic specialists used to recommend that children avoid weight training because the growth plates on their bones might be at risk. The very rare reports of growth plate fractures in children who trained with weights occurred as a result of inadequate supervision, improper form or excess weight, and there have been no reports of injuries to growth plates in youth training programs that followed established guidelines. The position of the National Strength and Conditioning Association is that strength training is safe for children if properly designed and supervised. The effects of training on youth have been shown to depend on the methods of training being implemented. Studies in the Journal of Strength and Conditioning Research concluded that both resistance training and plyometric training led to significant improvements in peak torque, peak rate of torque development, and jump performance, with plyometric training showing a greater improvement in jump performance than resistance training. Another study found that both high-load, low-repetition and moderate-load, high-repetition resistance training can be prescribed to improve muscular fitness in untrained adolescents, and that jump height also increased. These findings can be used to develop future training programs for youth athletes. The main takeaway from these studies is that training is important for the development of strength in young athletes, and that programs combining plyometric exercise and resistance training produce better adaptations in the short and long term. This can be attributed to neuromuscular development, which progresses faster in adolescents than muscular hypertrophy. Understanding this is crucial for those designing youth programs in order to avoid injury and overtraining, since adolescents are still growing and have not finished developing their musculature or their bone and joint structures. Younger children are at greater risk of injury than adults if they drop a weight on themselves or perform an exercise incorrectly; further, they may lack understanding of, or ignore, the safety precautions around weight training equipment. As a result, supervision of minors is considered vital to ensuring the safety of any youth engaging in strength training.
Older adults
Aging is associated with sarcopenia, a decrease in muscle mass and strength. Resistance training can mitigate this effect, and even the oldest old (those above age 85) can increase their muscle mass with a resistance training program, although to a lesser degree than younger individuals. With more strength older adults have better health, better quality of life, better physical function and fewer falls. Resistance training can improve physical functioning in older people, including the performance of activities of daily living. Resistance training programs are safe for older adults, can be adapted for mobility and disability limitations, and may be used in assisted living settings. Resistance training at lower intensities such as 45% of 1RM can still result in increased muscular strength.
Symmetry breaking
In physics, symmetry breaking is a phenomenon where a disordered but symmetric state collapses into an ordered, but less symmetric state. This collapse is often one of many possible bifurcations that a particle can take as it approaches a lower energy state. Due to the many possibilities, an observer may assume the result of the collapse to be arbitrary. This phenomenon is fundamental to quantum field theory (QFT) and, further, to contemporary understandings of physics. Specifically, it plays a central role in the Glashow–Weinberg–Salam model, which forms part of the Standard Model describing the electroweak sector. In an infinite system (Minkowski spacetime) symmetry breaking occurs; however, in a finite system (that is, any real super-condensed system) the system is less predictable, but in many cases quantum tunneling occurs. Symmetry breaking and tunneling relate through the collapse of a particle into a non-symmetric state as it seeks a lower energy.
Symmetry breaking can be distinguished into two types, explicit and spontaneous. They are characterized by whether the equations of motion fail to be invariant, or the ground state fails to be invariant.
Non-technical description
This section describes spontaneous symmetry breaking. This is the idea that for a physical system, the lowest energy configuration (the vacuum state) is not the most symmetric configuration of the system. Roughly speaking there are three types of symmetry that can be broken: discrete, continuous and gauge, ordered in increasing technicality.
An example of a system with discrete symmetry is given by the figure with the red graph: consider a particle moving on this graph, subject to gravity. A similar graph could be given by the function . This system is symmetric under reflection in the y-axis. There are three possible stationary states for the particle: the top of the hill at , or the bottom, at . When the particle is at the top, the configuration respects the reflection symmetry: the particle stays in the same place when reflected. However, the lowest energy configurations are those at . When the particle is in either of these configurations, it is no longer fixed under reflection in the y-axis: reflection swaps the two vacuum states.
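The specific function used in the figure is not reproduced here; a standard quartic double well with the same qualitative behaviour (an illustrative choice, not necessarily the one plotted) is

    V(x) = (x^{2} - 1)^{2}, \qquad V'(x) = 4x\,(x^{2} - 1) = 0 \;\Rightarrow\; x = 0,\ \pm 1,

with the symmetric maximum at x = 0 and the two degenerate minima at x = ±1, which the reflection x → −x exchanges.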
An example with continuous symmetry is given by a 3d analogue of the previous example, from rotating the graph around an axis through the top of the hill, or equivalently given by the graph . This is essentially the graph of the Mexican hat potential. This has a continuous symmetry given by rotation about the axis through the top of the hill (as well as a discrete symmetry by reflection through any radial plane). Again, if the particle is at the top of the hill it is fixed under rotations, but it has higher gravitational energy at the top. At the bottom, it is no longer invariant under rotations but minimizes its gravitational potential energy. Furthermore rotations move the particle from one energy minimizing configuration to another. There is a novelty here not seen in the previous example: from any of the vacuum states it is possible to access any other vacuum state with only a small amount of energy, by moving around the trough at the bottom of the hill, whereas in the previous example, to access the other vacuum, the particle would have to cross the hill, requiring a large amount of energy.
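Again as an illustrative choice, a standard "Mexican hat" profile with these features is

    V(x, y) = \left(x^{2} + y^{2} - 1\right)^{2},

which is invariant under rotations about the vertical axis; its maximum sits at the origin, and its minima form the circle x² + y² = 1, along which the particle can move from one vacuum to another at no cost in potential energy.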
Gauge symmetry breaking is the most subtle, but has important physical consequences. Roughly speaking, for the purposes of this section a gauge symmetry is an assignment of systems with continuous symmetry to every point in spacetime. Gauge symmetry forbids mass generation for gauge fields, yet massive gauge fields (W and Z bosons) have been observed. Spontaneous symmetry breaking was developed to resolve this inconsistency. The idea is that in an early stage of the universe it was in a high energy state, analogous to the particle being at the top of the hill, and so had full gauge symmetry and all the gauge fields were massless. As it cooled, it settled into a choice of vacuum, thus spontaneously breaking the symmetry, thus removing the gauge symmetry and allowing mass generation of those gauge fields. A full explanation is highly technical: see electroweak interaction.
Spontaneous symmetry breaking
In spontaneous symmetry breaking (SSB), the equations of motion of the system are invariant, but any vacuum state (lowest energy state) is not.
For an example with two-fold symmetry, if there is some atom that has two vacuum states, occupying either one of these states breaks the two-fold symmetry. This act of selecting one of the states as the system reaches a lower energy is SSB. When this happens, the atom is no longer symmetric (reflectively symmetric) and has collapsed into a lower energy state.
Such a symmetry breaking is parametrized by an order parameter. A special case of this type of symmetry breaking is dynamical symmetry breaking.
In the Lagrangian setting of quantum field theory (QFT), the Lagrangian is a functional of quantum fields which is invariant under the action of a symmetry group G. However, the vacuum expectation value formed when the particle collapses to a lower energy may not be invariant under G. In this instance, it will partially break the symmetry of G into a subgroup H. This is spontaneous symmetry breaking.
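A textbook example of this pattern, given here purely as an illustration (the paragraph above does not commit to a specific model), is a complex scalar field with a quartic potential:

    \mathcal{L} = \partial_{\mu}\phi^{*}\,\partial^{\mu}\phi - V(\phi), \qquad
    V(\phi) = -\mu^{2}\,\phi^{*}\phi + \lambda\,(\phi^{*}\phi)^{2}, \qquad \mu^{2}, \lambda > 0 .

The Lagrangian is invariant under the global U(1) rotation φ → e^{iα}φ, but the minima of V lie on the circle |φ|² = μ²/(2λ); choosing any particular vacuum expectation value singles out a phase and spontaneously breaks the U(1) symmetry.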
Within the context of gauge symmetry, however, SSB is the phenomenon by which gauge fields 'acquire mass' despite gauge-invariance enforcing that such fields be massless. This is because the SSB of gauge symmetry breaks gauge-invariance, and such a break allows for the existence of massive gauge fields. This is an important exception to Goldstone's theorem, where a Nambu–Goldstone boson can gain mass, becoming a Higgs boson in the process.
Further, in this context the usage of 'symmetry breaking', while standard, is a misnomer, as gauge 'symmetry' is not really a symmetry but a redundancy in the description of the system. Mathematically, this redundancy is a choice of trivialization, somewhat analogous to the redundancy arising from a choice of basis.
Spontaneous symmetry breaking is also associated with phase transitions. For example in the Ising model, as the temperature of the system falls below the critical temperature the symmetry of the vacuum is broken, giving a phase transition of the system.
Explicit symmetry breaking
In explicit symmetry breaking (ESB), the equations of motion describing a system are variant under the broken symmetry. In Hamiltonian mechanics or Lagrangian mechanics, this happens when there is at least one term in the Hamiltonian (or Lagrangian) that explicitly breaks the given symmetry.
In the Hamiltonian setting, this is often studied when the Hamiltonian can be written H = H0 + Hint.
Here H0 is a 'base Hamiltonian' which has some manifest symmetry; more explicitly, it is symmetric under the action of a (Lie) group G. Often this is an integrable Hamiltonian.
The Hint term is a perturbation or interaction Hamiltonian. It is not invariant under the action of G and is often proportional to a small, perturbative parameter.
This is essentially the paradigm for perturbation theory in quantum mechanics. An example of its use is in finding the fine structure of atomic spectra.
Examples
Symmetry breaking can cover any of the following scenarios:
The breaking of an exact symmetry of the underlying laws of physics by the apparently random formation of some structure;
A situation in physics in which a minimal energy state has less symmetry than the system itself;
Situations where the actual state of the system does not reflect the underlying symmetries of the dynamics because the manifestly symmetric state is unstable (stability is gained at the cost of local asymmetry);
Situations where the equations of a theory may have certain symmetries, though their solutions may not (the symmetries are "hidden").
One of the first cases of broken symmetry discussed in the physics literature is related to the form taken by a uniformly rotating body of incompressible fluid in gravitational and hydrostatic equilibrium. Jacobi and, soon after, Liouville, in 1834, discussed the fact that a tri-axial ellipsoid was an equilibrium solution for this problem when the kinetic energy compared to the gravitational energy of the rotating body exceeded a certain critical value. The axial symmetry presented by the Maclaurin spheroids is broken at this bifurcation point. Furthermore, above this bifurcation point, and for constant angular momentum, the solutions that minimize the kinetic energy are the non-axially symmetric Jacobi ellipsoids instead of the Maclaurin spheroids.
Parity (physics)
In physics, a parity transformation (also called parity inversion) is the flip in the sign of one spatial coordinate. In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection): (x, y, z) → (−x, −y, −z).
It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image.
All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity. As established by the Wu experiment conducted at the US National Bureau of Standards by Chinese-American scientist Chien-Shiung Wu, the weak interaction is chiral and thus provides a means for probing chirality in physics. In her experiment, Wu took advantage of the controlling role of weak interactions in radioactive decay of atomic isotopes to establish the chirality of the weak force.
By contrast, in interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions.
A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation, which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is not a parity transformation; it is the same as a 180° rotation.
In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions.
Simple symmetry relations
Under rotations, classical geometrical objects can be classified into scalars, vectors, and tensors of higher rank. In classical physics, physical configurations need to transform under representations of every symmetry group.
Quantum theory predicts that states in a Hilbert space do not need to transform under representations of the group of rotations, but only under projective representations. The word projective refers to the fact that if one projects out the phase of each state, where we recall that the overall phase of a quantum state is not observable, then a projective representation reduces to an ordinary representation. All representations are also projective representations, but the converse is not true, therefore the projective representation condition on quantum states is weaker than the representation condition on classical states.
The projective representations of any group are isomorphic to the ordinary representations of a central extension of the group. For example, projective representations of the 3-dimensional rotation group, which is the special orthogonal group SO(3), are ordinary representations of the special unitary group SU(2). Projective representations of the rotation group that are not representations are called spinors and so quantum states may transform not only as tensors but also as spinors.
If one adds to this a classification by parity, these can be extended, for example, into notions of
scalars (P = +1) and pseudoscalars (P = −1), which are rotationally invariant.
vectors (P = −1) and axial vectors (also called pseudovectors) (P = +1), which both transform as vectors under rotation.
One can define reflections such as (x, y, z) → (−x, y, z),
which also have negative determinant and form a valid parity transformation. Then, combining them with rotations (or successively performing x-, y-, and z-reflections) one can recover the particular parity transformation defined earlier. The first parity transformation given does not work in an even number of dimensions, though, because it results in a positive determinant. In even dimensions only the latter example of a parity transformation (or any reflection of an odd number of coordinates) can be used.
Parity forms the abelian group Z2 due to the relation P² = 1. All abelian groups have only one-dimensional irreducible representations. For Z2, there are two irreducible representations: one is even under parity (eigenvalue +1), the other is odd (eigenvalue −1). These are useful in quantum mechanics. However, as is elaborated below, in quantum mechanics states need not transform under actual representations of parity but only under projective representations, and so in principle a parity transformation may rotate a state by any phase.
Representations of O(3)
An alternative way to write the above classification of scalars, pseudoscalars, vectors and pseudovectors is in terms of the representation space that each object transforms in. This can be given in terms of the group homomorphism ρ which defines the representation. For a matrix R in O(3):
scalars: ρ(R) = 1, the trivial representation
pseudoscalars: ρ(R) = det(R)
vectors: ρ(R) = R, the fundamental representation
pseudovectors: ρ(R) = det(R) R
When the representation is restricted to SO(3), scalars and pseudoscalars transform identically, as do vectors and pseudovectors.
Classical mechanics
Newton's equation of motion (if the mass is constant) equates two vectors, and hence is invariant under parity. The law of gravity also involves only vectors and is also, therefore, invariant under parity.
However, angular momentum, L = r × p, is an axial vector: both r and p change sign under parity, so their cross product does not.
In classical electrodynamics, the charge density ρ is a scalar, the electric field E and the current density j are vectors, but the magnetic field B is an axial vector. However, Maxwell's equations are invariant under parity because the curl of an axial vector is a vector.
Effect of spatial inversion on some variables of classical physics
The two major divisions of classical physical variables have either even or odd parity. The way in which particular variables and vectors sort into either category depends on whether the number of dimensions of space is odd or even. The categories of odd or even given below for the parity transformation are a different, but intimately related, issue.
The answers given below are correct for 3 spatial dimensions. In a 2 dimensional space, for example, when constrained to remain on the surface of a planet, some of the variables switch sides.
Odd
Classical variables whose signs flip under spatial inversion are predominantly vectors. They include:
Even
Classical variables, predominantly scalar quantities, which do not change upon spatial inversion include:
Quantum mechanics
Possible eigenvalues
In quantum mechanics, spacetime transformations act on quantum states. The parity transformation, P̂, is a unitary operator that in general acts on a state ψ(r) by mapping it to ψ(−r), up to an overall phase.
Applying the transformation twice must then return the original state up to a phase, since an overall phase is unobservable. The operator P̂², which reverses the parity of a state twice, leaves the spacetime invariant, and so is an internal symmetry which rotates its eigenstates by phases e^{iφ}. If e^{iφ} is an element of a continuous U(1) symmetry group of phase rotations, then e^{−iφ/2} is part of this U(1) and so is also a symmetry. In particular, we can define P̂' = e^{−iφ/2} P̂, which is also a symmetry, and so we can choose to call P̂' our parity operator, instead of P̂. Note that P̂'² = 1 and so P̂' has eigenvalues ±1. Wave functions with eigenvalue +1 under a parity transformation are even functions, while eigenvalue −1 corresponds to odd functions. However, when no such symmetry group exists, it may be that all parity transformations have some eigenvalues which are phases other than ±1.
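The redefinition can be summarized compactly (a sketch, assuming for simplicity that the phase is the same for all states):

    \hat{P}^{2} = e^{i\phi}\,\hat{1} \quad\Longrightarrow\quad
    \hat{P}' \equiv e^{-i\phi/2}\,\hat{P}, \qquad \hat{P}'^{2} = \hat{1}, \qquad
    \hat{P}'\,\psi_{\pm} = \pm\,\psi_{\pm} .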
For electronic wavefunctions, even states are usually indicated by a subscript g for gerade (German: even) and odd states by a subscript u for ungerade (German: odd). For example, the lowest energy level of the hydrogen molecule ion (H2+) is labelled and the next-closest (higher) energy level is labelled .
The wave functions of a particle moving into an external potential, which is centrosymmetric (potential energy invariant with respect to a space inversion, symmetric to the origin), either remain invariable or change signs: these two possible states are called the even state or odd state of the wave functions.
The law of conservation of parity of particles states that, if an isolated ensemble of particles has a definite parity, then the parity remains invariable in the process of ensemble evolution. However this is not true for the beta decay of nuclei, because the weak nuclear interaction violates parity.
The parity of the states of a particle moving in a spherically symmetric external field is determined by the angular momentum, and the particle state is defined by three quantum numbers: total energy, angular momentum and the projection of angular momentum.
Consequences of parity symmetry
When parity generates the Abelian group Z2, one can always take linear combinations of quantum states such that they are either even or odd under parity (see the figure). Thus the parity of such states is ±1. The parity of a multiparticle state is the product of the parities of each state; in other words parity is a multiplicative quantum number.
In quantum mechanics, Hamiltonians are invariant (symmetric) under a parity transformation if P̂ commutes with the Hamiltonian. In non-relativistic quantum mechanics, this happens for any scalar potential V = V(r) that depends only on the radial distance; hence the potential is spherically symmetric. The following facts can be easily proven:
If |φ⟩ and |ψ⟩ have the same parity, then ⟨φ| X̂ |ψ⟩ = 0, where X̂ is the position operator.
For a state |ℓ, m⟩ of orbital angular momentum ℓ with z-axis projection m, P̂ |ℓ, m⟩ = (−1)^ℓ |ℓ, m⟩.
If [Ĥ, P̂] = 0, then atomic dipole transitions only occur between states of opposite parity.
If [Ĥ, P̂] = 0, then a non-degenerate eigenstate of Ĥ is also an eigenstate of the parity operator; i.e., a non-degenerate eigenfunction of Ĥ is either invariant to P̂ or is changed in sign by P̂.
Some of the non-degenerate eigenfunctions of Ĥ are unaffected (invariant) by parity and the others are merely reversed in sign when the Hamiltonian operator and the parity operator commute: P̂ψ = cψ, where c is a constant, the eigenvalue of P̂.
Many-particle systems: atoms, molecules, nuclei
The overall parity of a many-particle system is the product of the parities of the one-particle states. It is −1 if an odd number of particles are in odd-parity states, and +1 otherwise. Different notations are in use to denote the parity of nuclei, atoms, and molecules.
Atoms
Atomic orbitals have parity (−1)ℓ, where the exponent ℓ is the azimuthal quantum number. The parity is odd for orbitals p, f, ... with ℓ = 1, 3, ..., and an atomic state has odd parity if an odd number of electrons occupy these orbitals. For example, the ground state of the nitrogen atom has the electron configuration 1s22s22p3, and is identified by the term symbol 4So, where the superscript o denotes odd parity. However the third excited term at about 83,300 cm−1 above the ground state has electron configuration 1s22s22p23s has even parity since there are only two 2p electrons, and its term symbol is 4P (without an o superscript).
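The counting rule in the paragraph above can be written as a short sketch; the helper function below is illustrative, and the two configurations are the nitrogen ones quoted above:

    # Overall parity of an electron configuration: (-1) raised to the sum of the
    # azimuthal quantum numbers of all electrons.
    AZIMUTHAL = {"s": 0, "p": 1, "d": 2, "f": 3}

    def configuration_parity(subshells):
        """subshells: list of (orbital letter, electron count) pairs."""
        total_l = sum(AZIMUTHAL[letter] * count for letter, count in subshells)
        return (-1) ** total_l

    print(configuration_parity([("s", 2), ("s", 2), ("p", 3)]))            # -1: odd (4So)
    print(configuration_parity([("s", 2), ("s", 2), ("p", 2), ("s", 1)]))  # +1: even (4P)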
Molecules
The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or - as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass.
Centrosymmetric molecules at equilibrium have a centre of symmetry at their midpoint (the nuclear center of mass). This includes all homonuclear diatomic molecules as well as certain symmetric molecules such as ethylene, benzene, xenon tetrafluoride and sulphur hexafluoride. For centrosymmetric molecules, the point group contains the operation i which is not to be confused with the parity operation. The operation i involves the inversion of the electronic and vibrational displacement coordinates at the nuclear centre of mass. For centrosymmetric molecules the operation i commutes with the rovibronic (rotation-vibration-electronic) Hamiltonian and can be used to label such states. Electronic and vibrational states of centrosymmetric molecules are either unchanged by the operation i, or they are changed in sign by i. The former are denoted by the subscript g and are called gerade, while the latter are denoted by the subscript u and are called ungerade. The complete electromagnetic Hamiltonian of a centrosymmetric molecule
does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian. The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u vibronic states (called ortho-para mixing) and give rise to ortho-para transitions
Nuclei
In atomic nuclei, the state of each nucleon (proton or neutron) has even or odd parity, and nucleon configurations can be predicted using the nuclear shell model. As for electrons in atoms, the nucleon state has odd overall parity if and only if the number of nucleons in odd-parity states is odd. The parity is usually written as a + (even) or − (odd) following the nuclear spin value. For example, the isotopes of oxygen include 17O(5/2+), meaning that the spin is 5/2 and the parity is even. The shell model explains this because the first 16 nucleons are paired so that each pair has spin zero and even parity, and the last nucleon is in the 1d5/2 shell, which has even parity since ℓ = 2 for a d orbital.
Quantum field theory
If one can show that the vacuum state is invariant under parity, P̂|0⟩ = |0⟩, that the Hamiltonian is parity invariant, and that the quantization conditions remain unchanged under parity, then it follows that every state has good parity, and this parity is conserved in any reaction.
To show that quantum electrodynamics is invariant under parity, we have to prove that the action is invariant and the quantization is also invariant. For simplicity we will assume that canonical quantization is used; the vacuum state is then invariant under parity by construction. The invariance of the action follows from the classical invariance of Maxwell's equations. The invariance of the canonical quantization procedure can be worked out, and turns out to depend on the transformation of the annihilation operator:
where denotes the momentum of a photon and refers to its polarization state. This is equivalent to the statement that the photon has odd intrinsic parity. Similarly all vector bosons can be shown to have odd intrinsic parity, and all axial-vectors to have even intrinsic parity.
A straightforward extension of these arguments to scalar field theories shows that scalars have even parity. That is, , since
This is true even for a complex scalar field. (Details of spinors are dealt with in the article on the Dirac equation, where it is shown that fermions and antifermions have opposite intrinsic parity.)
With fermions, there is a slight complication because there is more than one spin group.
Parity in the Standard Model
Fixing the global symmetries
Applying the parity operator twice leaves the coordinates unchanged, meaning that must act as one of the internal symmetries of the theory, at most changing the phase of a state. For example, the Standard Model has three global U(1) symmetries with charges equal to the baryon number , the lepton number , and the electric charge . Therefore, the parity operator satisfies for some choice of , , and . This operator is also not unique in that a new parity operator can always be constructed by multiplying it by an internal symmetry such as for some .
To see if the parity operator can always be defined to satisfy , consider the general case when for some internal symmetry present in the theory. The desired parity operator would be . If is part of a continuous symmetry group then exists, but if it is part of a discrete symmetry then this element need not exist and such a redefinition may not be possible.
The Standard Model exhibits a symmetry, where is the fermion number operator counting how many fermions are in a state. Since all particles in the Standard Model satisfy , the discrete symmetry is also part of the continuous symmetry group. If the parity operator satisfied , then it can be redefined to give a new parity operator satisfying . But if the Standard Model is extended by incorporating Majorana neutrinos, which have and , then the discrete symmetry is no longer part of the continuous symmetry group and the desired redefinition of the parity operator cannot be performed. Instead it satisfies so the Majorana neutrinos would have intrinsic parities of .
Parity of the pion
In 1954, a paper by William Chinowsky and Jack Steinberger demonstrated that the pion has negative parity.
They studied the decay of an "atom" made from a deuteron (d) and a negatively charged pion (π−), in a state with zero orbital angular momentum, into two neutrons (n).
Neutrons are fermions and so obey Fermi–Dirac statistics, which implies that the final state is antisymmetric. Using the fact that the deuteron has spin one and the pion spin zero, together with the antisymmetry of the final state, they concluded that the two neutrons must have orbital angular momentum L = 1. The total parity is the product of the intrinsic parities of the particles and the extrinsic parity of the spherical harmonic function, (−1)^L. Since the orbital momentum changes from zero to one in this process, if the process is to conserve the total parity then the products of the intrinsic parities of the initial and final particles must have opposite sign. A deuteron nucleus is made from a proton and a neutron, and so, using the aforementioned convention that protons and neutrons have intrinsic parities equal to +1, they argued that the parity of the pion is equal to minus the product of the parities of the two neutrons divided by that of the proton and neutron in the deuteron; from this they concluded that the pion is a pseudoscalar particle.
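Written out, with the convention quoted above that the proton and neutron have intrinsic parity +1, the argument reads (a compact restatement of the reasoning in this paragraph):

    P_{\pi}\,P_{d}\,(-1)^{L_{i}} = P_{n}\,P_{n}\,(-1)^{L_{f}}, \qquad
    P_{d} = P_{p}P_{n} = +1,\quad L_{i} = 0,\quad L_{f} = 1
    \;\Longrightarrow\; P_{\pi} = -1 .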
Parity violation
Although parity is conserved in electromagnetism and gravity, it is violated in weak interactions, and perhaps, to some degree, in strong interactions. The Standard Model incorporates parity violation by expressing the weak interaction as a chiral gauge interaction. Only the left-handed components of particles and right-handed components of antiparticles participate in charged weak interactions in the Standard Model. This implies that parity is not a symmetry of our universe, unless a hidden mirror sector exists in which parity is violated in the opposite way.
An obscure 1928 experiment, undertaken by R. T. Cox, G. C. McIlwraith, and B. Kurrelmeyer, had in effect reported parity violation in weak decays, but, since the appropriate concepts had not yet been developed, those results had no impact. In 1929, Hermann Weyl explored, without any evidence, the existence of a two-component massless particle of spin one-half. This idea was rejected by Pauli, because it implied parity violation.
By the mid-20th century, it had been suggested by several scientists that parity might not be conserved (in different contexts), but without solid evidence these suggestions were not considered important. Then, in 1956, a careful review and analysis by theoretical physicists Tsung-Dao Lee and Chen-Ning Yang went further, showing that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. They were mostly ignored, but Lee was able to convince his Columbia colleague Chien-Shiung Wu to try it. She needed special cryogenic facilities and expertise, so the experiment was done at the National Bureau of Standards.
Wu, Ambler, Hayward, Hoppes, and Hudson (1957) found a clear violation of parity conservation in the beta decay of cobalt-60. As the experiment was winding down, with double-checking in progress, Wu informed Lee and Yang of their positive results and, saying the results needed further examination, asked them not to publicize the results first. However, Lee revealed the results to his Columbia colleagues on 4 January 1957 at a "Friday lunch" gathering of the Physics Department of Columbia. Three of them, R. L. Garwin, L. M. Lederman, and R. M. Weinrich, modified an existing cyclotron experiment and immediately verified the parity violation. They delayed publication of their results until after Wu's group was ready, and the two papers appeared back-to-back in the same physics journal.
The discovery of parity violation explained the outstanding puzzle in the physics of kaons.
In 2010, it was reported that physicists working with the Relativistic Heavy Ion Collider had created a short-lived parity symmetry-breaking bubble in quark–gluon plasmas. An experiment conducted by several physicists in the STAR collaboration suggested that parity may also be violated in the strong interaction. It is predicted that this local parity violation manifests itself through the chiral magnetic effect.
Intrinsic parity of hadrons
To every particle one can assign an intrinsic parity as long as nature preserves parity. Although weak interactions do not, one can still assign a parity to any hadron by examining the strong interaction reaction that produces it, or through decays not involving the weak interaction, such as rho meson decay to pions.
| Physical sciences | Quantum mechanics | Physics |
1240837 | https://en.wikipedia.org/wiki/Whistling%20duck | Whistling duck | The whistling ducks or tree ducks are a subfamily, Dendrocygninae, of the duck, goose and swan family of birds, Anatidae. In other taxonomic schemes, they are considered a separate family, Dendrocygnidae. Some taxonomists list only one genus, Dendrocygna, which contains eight living species, and one undescribed extinct species from Aitutaki of the Cook Islands, but other taxonomists also list the white-backed duck (Thalassornis leuconotus) under the subfamily.
Taxonomy and evolution
Whistling ducks were first described by Carl Linnaeus in the 10th edition of Systema Naturae in 1758: the black-bellied whistling duck (then Anas autumnalis) and the West Indian whistling duck (then Anas arborea). In 1837, William Swainson named the genus Dendrocygna to distinguish whistling ducks from the other waterfowl. The type species was listed as the wandering whistling duck (D. arcuata), formerly named by Thomas Horsfield as Anas arcuata.
Whistling duck taxonomy, including that of the entire order Anseriformes, is complicated and disputed. Under a traditional classification proposed by ornithologist Jean Théodore Delacour based on morphological and behavioral traits, whistling ducks belong to the tribe Dendrocygnini under the family Anatidae and subfamily Anserinae. Following the revisions by ornithologist Paul Johnsgard, Dendrocygnini includes the genus Thalassornis (the white-backed duck) under this system.
In 1997, Bradley C. Livezey proposed that Dendrocygna were a separate lineage from Anserinae, placing it and its tribe in its own subfamily, Dendrocygninae. Alternatively Charles Sibley and Jon Edward Ahlquist recommended placing Dendrocygna in its own family, Dendrocygnidae, which includes the genus Thalassornis.
Species
Eight species of whistling duck are currently recognized in the genus Dendrocygna. However, Johnsgard considers the white-backed duck (Thalassornis leuconotus) from Africa and Madagascar to be a distinct ninth species, a view first proposed in 1960 and initially supported by behavioral similarities. Later, similarities in anatomy, duckling vocalizations, and feather proteins gave additional support. Molecular analysis in 2009 also suggested that the white-backed duck was nested within the whistling duck clade. In addition to the extant species, subfossil remains of an extinct, undescribed species have been found on Aitutaki of the Cook Islands.
Description
Whistling ducks are found in the tropics and subtropics. As their name implies, they have distinctive whistling calls.
The whistling ducks have long legs and necks, and are very gregarious, flying to and from night-time roosts in large flocks. Both sexes have the same plumage, and all have a hunched appearance and black underwings in flight.
After pairing with a female, male whistling ducks (especially of the fulvous whistling duck species) will often help with the construction of nests and will take turns with the female incubating the eggs.
| Biology and health sciences | Anseriformes | Animals |
1241988 | https://en.wikipedia.org/wiki/System%20of%20units%20of%20measurement | System of units of measurement | A system of units of measurement, also known as a system of units or system of measurement, is a collection of units of measurement and rules relating them to each other. Systems of measurement have historically been important, regulated and defined for the purposes of science and commerce. Instances in use include the International System of Units or SI (the modern form of the metric system), the British imperial system, and the United States customary system.
History
In antiquity, systems of measurement were defined locally: the different units might be defined independently according to the length of a king's thumb or the size of his foot, the length of stride, the length of arm, or maybe the weight of water in a keg of specific size, perhaps itself defined in hands and knuckles. The unifying characteristic is that there was some definition based on some standard. Eventually cubits and strides gave way to "customary units" to meet the needs of merchants and scientists.
The preference for a more universal and consistent system only gradually spread with the growth of international trade and science. Changing a measurement system has costs in the near term, which often results in resistance to such a change. The substantial benefit of conversion to a more rational and internationally consistent system of measurement has been recognized and promoted by scientists, engineers, businesses and politicians, and has resulted in most of the world adopting a commonly agreed metric system.
The French Revolution gave rise to the metric system, and this has spread around the world, replacing most customary units of measure. In most systems, length (distance), mass, and time are base quantities.
Later scientific developments showed that an electromagnetic quantity such as electric charge or electric current could be added to extend the set of base quantities. Gaussian units have only length, mass, and time as base quantities, with no separate electromagnetic dimension. Other quantities, such as power and speed, are derived from the base quantities: for example, speed is distance per unit time. Historically, a wide range of units was used for the same type of quantity. In different contexts, length was measured in inches, feet, yards, fathoms, rods, chains, furlongs, miles, nautical miles, stadia, and leagues, with conversion factors that were not based on powers of ten.
In the metric system and other recent systems, the underlying relationships between quantities, as expressed by formulae of physics such as Newton's laws of motion, are used to select a small number of base quantities, for each of which a unit is defined and from which all other units may be derived. Secondary units (multiples and submultiples) are derived from these base and derived units by multiplying by powers of ten. For example, where the unit of length is the metre, a distance of 1 metre is 1,000 millimetres, or 0.001 kilometres.
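As a minimal illustration of how multiples and submultiples follow from powers of ten, here is a small Python sketch; the prefix table and the example quantity are purely illustrative and not taken from any standards document.

```python
# Minimal sketch: expressing one length in several SI prefixes.
PREFIXES = {"kilo": 1e3, "": 1.0, "milli": 1e-3}  # factor relative to the base unit

def in_prefix(value_in_base_units: float, prefix: str) -> float:
    """Convert a value given in the base unit (metre) to the chosen prefix."""
    return value_in_base_units / PREFIXES[prefix]

length_m = 1.0  # 1 metre
print(in_prefix(length_m, "milli"))  # 1000.0 millimetres
print(in_prefix(length_m, "kilo"))   # 0.001 kilometres
```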
Current practice
Metrication is complete or nearly complete in most countries.
However, US customary units remain heavily used in the United States and to some degree in Liberia. Traditional Burmese units of measurement are used in Burma, with partial transition to the metric system. U.S. units are used in limited contexts in Canada due to the large volume of trade with the U.S. There is also considerable use of imperial weights and measures, despite de jure Canadian conversion to metric.
A number of other jurisdictions have laws mandating or permitting other systems of measurement in some or all contexts, such as the United Kingdom, whose road signage legislation, for instance, only allows distance signs displaying imperial units (miles or yards), or Hong Kong.
In the United States, metric units are virtually always used in science, frequently in the military, and partially in industry. U.S. customary units are primarily used in U.S. households. At retail stores, the litre (spelled 'liter' in the U.S.) is a commonly used unit for volume, especially on bottles of beverages, and milligrams, rather than grains, are used for medications.
Some other non-SI units are still in international use, such as nautical miles and knots in aviation and shipping, and feet for aircraft altitude.
Metric system
Metric systems of units have evolved since the adoption of the first well-defined system in France in 1795. During this evolution the use of these systems has spread throughout the world, first to non-English-speaking countries, and then to English speaking countries.
Multiples and submultiples of metric units are related by powers of ten and their names are formed with prefixes. This relationship is compatible with the decimal system of numbers and it contributes greatly to the convenience of metric units.
In the early metric system there were two base units, the metre for length and the gram for mass. The other units of length and mass, and all units of area, volume, and derived units such as density were derived from these two base units.
Mesures usuelles (French for customary measures) were a system of measurement introduced as a compromise between the metric system and traditional measurements. It was used in France from 1812 to 1839.
A number of variations on the metric system have been in use. These include gravitational systems, the centimetre–gram–second systems (cgs) useful in science, the metre–tonne–second system (mts) once used in the USSR and the metre–kilogram–second system (mks). In some engineering fields, like computer-aided design, millimetre–gram–second (mmgs) is also used.
The current international standard for the metric system is the International System of Units (Système international d'unités, or SI). It is a system in which all units can be expressed in terms of seven units. The units that serve as the SI base units are the metre, kilogram, second, ampere, kelvin, mole, and candela.
British imperial and US customary units
Both British imperial units and US customary units derive from earlier English units. Imperial units were mostly used in the former British Empire and the British Commonwealth, but in all these countries they have been largely supplanted by the metric system. They are still used for some applications in the United Kingdom but have been mostly replaced by the metric system in commercial, scientific, and industrial applications. US customary units, however, are still the main system of measurement in the United States. While some steps towards metrication have been made (mainly in the late 1960s and early 1970s), the customary units have a strong hold due to the vast industrial infrastructure and commercial development.
While British imperial and US customary systems are closely related, there are a number of differences between them. Units of length and area (the inch, foot, yard, mile, etc.) have been identical since the adoption of the International Yard and Pound Agreement; however, the US and, formerly, India retained older definitions for surveying purposes. This gave rise to the US survey foot, for instance. The avoirdupois units of mass and weight differ for units larger than a pound (lb). The British imperial system uses a stone of 14 lb, a long hundredweight of 112 lb and a long ton of 2,240 lb. The stone is not a measurement of weight used in the US. The US customary system uses the short hundredweight of 100 lb and short ton of 2,000 lb.
Where these systems most notably differ is in their units of volume. A US fluid ounce (fl oz), about 29.6 millilitres (ml), is slightly larger than the imperial fluid ounce (about 28.4 ml). However, as there are 16 US fl oz to a US pint and 20 imp fl oz per imperial pint, the imperial pint is about 20% larger. The same is true of quarts, gallons, etc.; six US gallons are a little less than five imperial gallons.
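A quick numerical check of these volume relationships, using the modern litre definitions of the two gallons; this is just an illustrative calculation, not part of the original article.

```python
# Definitions: 1 US gallon = 3.785411784 L, 1 imperial gallon = 4.54609 L.
US_GALLON_L = 3.785411784
IMP_GALLON_L = 4.54609

us_floz_ml = US_GALLON_L * 1000 / 128    # 128 US fl oz per US gallon  -> ~29.57 ml
imp_floz_ml = IMP_GALLON_L * 1000 / 160  # 160 imp fl oz per imp gallon -> ~28.41 ml
print(us_floz_ml, imp_floz_ml)

# Pints: 16 US fl oz vs 20 imp fl oz -> the imperial pint is about 20% larger.
print((20 * imp_floz_ml) / (16 * us_floz_ml))  # ~1.20

# Six US gallons are a little less than five imperial gallons.
print(6 * US_GALLON_L, 5 * IMP_GALLON_L)  # ~22.71 L vs ~22.73 L
```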
The avoirdupois system served as the general system of mass and weight. In addition to this, there are the troy and the apothecaries' systems. Troy weight was customarily used for precious metals, black powder, and gemstones. The troy ounce is the only unit of the system in current use; it is used for precious metals. Although the troy ounce is larger than its avoirdupois equivalent, the pound is smaller. The obsolete troy pound was divided into 12 ounces, rather than the 16 ounces per pound of the avoirdupois system. The apothecaries' system was traditionally used in pharmacology, but has now been replaced by the metric system; it shared the same pound and ounce as the troy system but with different further subdivisions.
Natural units
Natural units are units of measurement defined in terms of universal physical constants in such a manner that selected physical constants take on the numerical value of one when expressed in terms of those units. Natural units are so named because their definition relies on only properties of nature and not on any human construct. Varying systems of natural units are possible, depending on the choice of constants used.
Some examples are as follows:
Geometrized unit systems are useful in relativistic physics. In these systems, the speed of light and the gravitational constant are among the constants chosen.
Planck units are a system of geometrized units in which the reduced Planck constant is included in the list of defining constants. They are based only on properties of free space rather than of any object or particle (see the sketch after this list).
Stoney units are a system of geometrized units in which the Coulomb constant and the elementary charge are included.
Atomic units are a system of units used in atomic physics, particularly for describing the properties of electrons. The atomic units have been chosen to use several constants relating to the electron: the electron mass, the elementary charge, the Coulomb constant and the reduced Planck constant. The unit of energy in this system is the total energy of the electron in the Bohr atom and is called the Hartree energy. The unit of length is the Bohr radius.
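To make the idea concrete, here is a small Python sketch that derives a few Planck units from the defining constants; the CODATA values are typed in by hand and the selection of quantities is purely illustrative.

```python
import math

# CODATA values (SI): reduced Planck constant, Newton's constant, speed of light.
hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m s^-1

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_time   = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s
planck_mass   = math.sqrt(hbar * c / G)      # ~2.2e-8 kg

# In Planck units these quantities all take the numerical value 1 by construction.
print(planck_length, planck_time, planck_mass)
```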
Non-standard units
Non-standard measurement units also found in books, newspapers etc., include:
Area
The American football field, which has a playing area long by wide. This is often used by the American public media for the sizes of large buildings or parks. It is used both as a unit of length (, the length of the playing field excluding goal areas) and as a unit of area (), about .
British media also frequently uses the football pitch for equivalent purposes, although soccer pitches are not of a fixed size, but instead can vary within defined limits ( long, and wide, giving an area of ). However the UEFA Champions League field must be exactly giving an area of or . For example, "HSS vessels are aluminium catamarans about the size of a football pitch."
Larger areas are also expressed as a multiple of the areas of states and countries understood to be familiar to the reader.
Energy
A ton of TNT equivalent, and its multiples the kiloton, the megaton, and the gigaton. Often used in stating the power of very energetic events such as explosions, volcanic events, earthquakes, and asteroid impacts. A gram of TNT as a unit of energy has been defined as 1000 thermochemical calories (4184 J); see the sketch after this list.
The atom bomb dropped on Hiroshima. Its energy yield is often used in the public media and popular books as a unit of energy. (Its yield was roughly 15 kilotons, or 60 TJ.)
One stick of dynamite.
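The conversion from TNT mass to energy in the first item above is simple arithmetic; the following Python sketch spells it out, using the defined 4184 J per gram and the roughly 15-kiloton Hiroshima figure quoted above purely for illustration.

```python
# A gram of TNT is defined as 1000 thermochemical calories = 4184 J.
GRAM_TNT_J = 4184.0

ton_tnt     = 1e6  * GRAM_TNT_J   # ~4.2e9  J (about 4.2 GJ)
kiloton_tnt = 1e9  * GRAM_TNT_J   # ~4.2e12 J (about 4.2 TJ)
megaton_tnt = 1e12 * GRAM_TNT_J   # ~4.2e15 J (about 4.2 PJ)
print(ton_tnt, kiloton_tnt, megaton_tnt)

# The roughly 15-kiloton Hiroshima yield, expressed in terajoules:
print(15 * kiloton_tnt / 1e12)    # ~63 TJ, i.e. roughly 60 TJ
```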
Units of currency
A unit of measurement that applies to money is called a unit of account in economics and unit of measure in accounting. This is normally a currency issued by a country or a fraction thereof; for instance, the US dollar and US cent ( of a dollar), or the euro and euro cent.
ISO 4217 is the international standard describing three letter codes (also known as the currency code) to define the names of currencies established by the International Organization for Standardization (ISO).
Historical systems of measurement
Throughout history, many official systems of measurement have been used. While no longer in official use, some of these customary systems are occasionally used in day-to-day life, for instance in cooking.
Africa
Algerian
Egyptian
Ethiopian
Eritrean
Guinean
Libyan
Malagasy
Mauritian
Moroccan
Seychellois
Somalian
Tunisian
South African
Tanzanian
Asia
Arabic
Afghan
Cambodian
Chinese
Hong Kong
Hebrew (Biblical and Talmudic)
Indian
Indonesian
Japanese
Korean
Omani
Pakistani
Philippine
Mesopotamian
Persian
Singaporean
Sri Lankan
Syrian
Taiwanese
Tamil
Thai
Vietnamese
Nepalese
Still in use:
Myanmar
Europe
Ancient Greek
Belgian
Byzantine
Czech
Cypriot
Danish
Dutch
English
Estonian
Finnish
French (now)
French (to 1795)
German
Greek
Hungary
Icelandic
Irish
Italian
Latvian
Luxembourgian
Maltese
Norwegian
Polish
Portuguese
Roman
Romanian
Russian
Scottish
Serbian
Slovak
Spanish
Swedish
Switzerland
Turkish
Tatar
Welsh
North America
Costa Rican
Cuban
Haitian
Honduran
Mexico
Nicaraguan
Puerto Rican
South America
Argentine
Bolivian
Brazilian
Chilean
Colombian
Paraguayan
Peruvian
Uruguayan
Venezuelan
Ancient
Arabic
Biblical and Talmudic
Egyptian
Greek
Hindu (time)
Indian
Mesopotamian
Persian
Roman
| Physical sciences | Measurement systems | Basics and measurement |
1242449 | https://en.wikipedia.org/wiki/Common%20collared%20lizard | Common collared lizard | The common collared lizard (Crotaphytus collaris), also commonly called eastern collared lizard, Oklahoma collared lizard, yellow-headed collared lizard, and collared lizard, is a North American species of lizard in the family Crotaphytidae. The common name "collared lizard" comes from the lizard's distinct coloration, which includes bands of black around the neck and shoulders that look like a collar. Males can be very colorful, with blue green bodies, yellow stripes on the tail and back, and yellow orange throats. There are five recognized subspecies.
Etymology
The subspecific name, baileyi, is in honor of American mammalogist Vernon Orlando Bailey.
Subspecies
Five subspecies are recognized as being valid, including the nominotypical subspecies.
Crotaphytus collaris auriceps – yellow-headed collared lizard
Crotaphytus collaris baileyi – western collared lizard
Crotaphytus collaris collaris – eastern collared lizard
Crotaphytus collaris fuscus – Chihuahuan collared lizard
Crotaphytus collaris melanomaculatus – black-spotted collared lizard
Nota bene: A trinomial authority in parentheses indicates that the subspecies was originally described in a genus other than Crotaphytus.
Description
C. collaris can grow up to in total length (including the tail), with a large head and powerful jaws. Males have a blue-green body with a light brown head. Females have a light brown head and body.
C. collaris exhibit a wide range of physical characteristics, particularly in coloration and spotting patterns, and this phenotypic variability may be attributed to a combination of differences in population, social organizations, or habitat. They are a sexually dichromatic lizard species with the adult males being more vivid and colorful than the females. Male dorsal and head color tend to range from green to tan and yellow to orange respectively, while females, overall, possess more muted body pigmentations, varying from brown to gray. However, when reproductively active during breeding seasons, females undergo a rapid color change, in which faint orange spots on their heads increase in brightness; this orange spotting reaches a maximum during egg maturation but gradually fades again after expulsion from the female's oviduct as she lays her eggs. Both males and females have two distinct black bands around their neck, providing additional context to their name, the common collared lizards.
Similar to adult females, juveniles also exhibit dull body colorations compared to adult males, but a key distinction is that the young have pronounced, dark brown markings that eventually fade as they grow and mature. Consequently, juvenile collared lizards lose this sharp cross-band pattern, and their features drastically change to resemble those of either adult males or females.
Moderate in size, C. collaris have disproportionately large heads and long hind limbs. It can reach a length of 14 inches, including the tail, with males being larger than females. Hence, they are sexually dimorphic, and adult males exhibit larger and more muscular heads than females, which tend to vary in size. Used as a weapon during male combat, the head dimensions play a key role in determining dominance, territoriality, fitness, as well as mating success. In general, bigger heads are associated with greater jaw strength and thus, bite force.
Bipedal locomotion
C. collaris are able to run on their hind legs and can sprint at speeds of up to 24 kilometers per hour. This behavior is usually observed when trying to escape predators.
Like many other lizards, including the frilled lizard and basilisk, the collared lizard can run on its hind legs, and is a relatively fast sprinter. Record speeds have been around , much slower than the world record for lizards () attained by the larger-bodied Costa Rican spiny-tailed iguana, Ctenosaura similis.
Geographic range and habitat
C. collaris is chiefly found in dry, open regions of Mexico and the south-central United States including Arizona, Arkansas, Colorado, Kansas, Missouri, New Mexico, Oklahoma, and Texas. The full extent of its habitat in the United States ranges from the Ozark Mountains to Western Arizona.
C. collaris is distributed across the Southwestern United States and extend to Northern Mexico as well. Individuals occupy a range of different habitats from rocky desert landscapes to grasslands, but they often prefer to inhabit mountainous regions with high environmental temperatures for optimal thermoregulation. In addition, the hilly topography allows these keen and highly alert lizards to stay hidden between rocks, despite their flamboyant features, and look out for potential predators or territory intruders from the top of elevated platforms.
Diet
As obligate carnivores, they consume insects and small vertebrates as their main diet. While they may occasionally ingest plant materials, it is not preferred. They feed on a variety of large insects, including crickets, grasshoppers, spiders, moths, beetles, and cicadas, along with other small lizards and even snakes. As plants do not provide enough nutrients for constant body weight maintenance, C. collaris cannot survive solely on an herbivorous diet. Their stomachs are too small to accommodate the amount of flowers, shrubs, herbs, etc. that would be needed to maintain a constant body weight. Thus, they are considered obligate carnivores, requiring nutrients from arthropods or other small reptiles.
Diet can also vary depending on age, sex, as well as seasonal changes. In the case of younger lizards, they consume the same kinds of foods, specifically insect species, that adults do, but since younger lizards and adults differ in body size and weight, the amount of food intake tends to vary. On the other hand, male and female adults are similar in terms of their sizes and the amounts of food ingested but exhibit drastic differences in the kinds of foods that they eat. From an evolutionary standpoint, these sexual differences in diet may act to reduce intra-species competition for resources, whereby females and males do not need to fight for the same type of food. Moreover, changes in season can drastically affect their diets as well.
Cultural impact
The collared lizard is the state reptile of Oklahoma, where it is known as the mountain boomer. The origin of the name "mountain boomer" is not clear, but it may be traceable to settlers traveling west during the Gold Rush. One theory is that settlers mistook the sound of wind in canyons for the call of an animal in an area where the collared lizard was abundant. In reality, collared lizards are silent.
Behavior
Collared lizards are diurnal; they are active during the day, and spend most of their time basking on top of elevated rocks or boulders. As a highly territorial species, they remain hyper-vigilant, scanning for predators or intruders, ready to sprint or fight when necessary. Generally, males are more active than females, as the former engage in more chase, fight, display, and courtship behaviors while the latter exhibit basking and foraging behaviors. The collared lizard in the wild has been the subject of a number of studies of sexual selection; in captivity if two males are placed in the same cage they will fight to the death. Females, on the other hand, do not demonstrate aggressive behaviors as frequently as males, experiencing less intra-species competition with other females.
Social behavior
In an effort to monopolize as many female mates as they can, male C. collaris viciously defend their exclusive territories through aggression, patrolling activities, and displays. These territories provide ample resources and shelter the harem of females claimed and protected by the male territory owners. However, when agonistic interactions between male rivals escalate to violent fights, both lizards must expend substantial amounts of energy and risk getting seriously injured. Thus, though males do actively exclude other males from territories, they do so without resorting to physical and unfavorable conflict. Instead, they partake in social displays, either at a distance or proximally from their competitors to advertise their superiority. Surprisingly, both types of social encounters, in which males perform push ups and compressions and elevations of the trunk with the dewlap extended, rarely lead to arduous and violent fights; rather, distant displays barely evoke a response while proximal confrontations may lead to chasing at most.
Furthermore, C. collaris territory owners exhibit differential behaviors in response to neighbors and strangers, in which residents reduce the cost of territorial defense by demonstrating less aggression for spatial neighbors. Thus, when nearby residents approach an owner's shared territorial boundaries, the owner will recognize this individual and only engage in aggressive behaviors, usually in the form of a costly fight, if a threat to its territory is perceived. However, in the case of a stranger, the owner will exhibit intense hostility towards the intruder without hesitation. In relation, male territorial behaviors also vary within the reproductive season, decreasing in June due to the higher prevalence of reproductively active females and instead, engaging in more courtship behaviors. This cost-benefit strategy demonstrates the complex social behaviors and decision making processes exhibited by the male collared lizards.
Reproduction
The reproductive season starts in mid-March to early April and concludes in mid-July. Females and smaller individuals emerge first from hibernation with males following around two weeks later. Though lizards are considered mature and may breed following their first hibernation, those that are two years and older exhibit greater reproductive success due to their larger size. In late May, courtship occurs between adult males and females. Subsequently, mature females, typically two years and older, produce their first clutches and lay them in a burrow or under a rock about two weeks after copulation. They may then go on to produce second and sometimes even third clutches throughout June until mid-July. The eggs are incubated in a temperature dependent manner, and the incubation period may vary from 50 to 100 days. On average, clutch size can range from 4 to 6 eggs, but larger, older females can produce more. By August, adults begin to hibernate again, and juveniles do the same after hatching. The earliest of the clutches can hatch in mid-July and later ones follow until mid-October. Upon hatching, juveniles are fully developed and behave independently of their parents, as the C. collaris do not exhibit any parental care in offspring.
Mating behavior and rituals
C. collaris are polygamous, which leads to intense territorial behaviors that include male-to-male competition for female partners. This triggers aggressive behaviors in males and induces fierce competition for mating. With regard to female selection of male mates, females not only prefer males who are bright and conspicuous in body coloration but also consider the resources such as food and territory that males may be able to provide in order to ensure reproductive success. Moreover, as males often must compete with other males for potential mates, their body and head size play a significant role in determining mating success. The variability in head size gives rise to differential jaw strength and bite force in males, which ultimately results in intra-species selection against smaller-headed males. For example, if an instance of male-to-male conflict escalates into a violent fight between two males, the larger male with a substantially larger body mass and head size will overpower its weaker and smaller counterpart. Consequently, successful males may, more often than not, possess vibrant body coloration and patterns and may be bigger in size, specifically having larger head proportions.
During courtship rituals, a male or a female lizard approaches the opposite sex within 1 body length and subsequently engages in various behavioral patterns, which include either individual superimposing its limbs, torso, or tail over its partner, mounting the dorsum of the other lizard, males nudging females with their snouts or grasping them with their jaws, and mutual displays. These mutual displays involve a complex set of movements and behaviors, unique to each sex. Males flex their forearms up and down and extend their dewlaps while females also extend their dewlaps and raise the base of their tails to signal receptivity. Ultimately, at the end of this courting process, both sexes walk in circles, making sure to remain within 1 body length of one another throughout.
Gallery
| Biology and health sciences | Iguania | Animals |
1242977 | https://en.wikipedia.org/wiki/Flocculation | Flocculation | In colloidal chemistry, flocculation is a process by which colloidal particles come out of suspension to sediment in the form of floc or flake, either spontaneously or due to the addition of a clarifying agent. The action differs from precipitation in that, prior to flocculation, colloids are merely suspended, under the form of a stable dispersion (where the internal phase (solid) is dispersed throughout the external phase (fluid) through mechanical agitation) and are not truly dissolved in solution.
Coagulation and flocculation are important processes in fermentation and water treatment with coagulation aimed to destabilize and aggregate particles through chemical interactions between the coagulant and colloids, and flocculation to sediment the destabilized particles by causing their aggregation into floc.
Term definition
According to the IUPAC definition, flocculation is "a process of contact and adhesion whereby the particles of a dispersion form larger-size clusters". Flocculation is synonymous with agglomeration and coagulation/coalescence.
Basically, coagulation is a process of addition of coagulant to destabilize a stabilized charged particle. Meanwhile, flocculation is a technique that promotes agglomeration and assists in the settling of particles. The most commonly used coagulant is alum, Al2(SO4)3·14H2O.
The chemical reaction involved:
Al2(SO4)3 · 14 H2O → 2 Al(OH)3 + 6 H+ + 3 SO42− + 8 H2O
During flocculation, gentle mixing accelerates the rate of particle collision, and the destabilized particles are further aggregated and enmeshed into larger precipitates. Flocculation is affected by several parameters, including mixing shear and intensity, time and pH. The product of the mixing intensity and mixing time is used to describe flocculation processes.
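The "product of mixing intensity and mixing time" mentioned above is commonly expressed through the mean velocity gradient G. Here is a minimal Python sketch assuming the standard Camp–Stein relation G = sqrt(P/(μV)); the basin numbers are made up purely for illustration.

```python
import math

# Illustrative (made-up) values for a flocculation basin.
P  = 500.0      # mixing power input, W
mu = 1.0e-3     # dynamic viscosity of water, Pa s (about 20 degC)
V  = 250.0      # basin volume, m^3
t  = 30 * 60    # flocculation time, s (30 minutes)

G = math.sqrt(P / (mu * V))   # mean velocity gradient, s^-1
Gt = G * t                    # dimensionless intensity x time product
print(G, Gt)                  # here ~45 s^-1 and ~8e4
```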
Jar test
The process by which the dosage and choice of flocculant are selected is called a jar test. The equipment used for jar testing consists of one or more beakers, each equipped with a paddle mixer. After the addition of flocculants, rapid mixing takes place, followed by slow mixing and later the sedimentation process. Samples can then be taken from the aqueous phase in each beaker.
Applications
Surface chemistry
In colloid chemistry, flocculation refers to the process by which fine particulates are caused to clump together into a floc. The floc may then float to the top of the liquid (creaming), settle to the bottom of the liquid (sedimentation), or be readily filtered from the liquid. Flocculation behavior of soil colloids is closely related to freshwater quality. High dispersibility of soil colloids not only directly causes turbidity of the surrounding water but also induces eutrophication due to the adsorption of nutritional substances in rivers and lakes.
Physical chemistry
For emulsions, flocculation describes clustering of individual dispersed droplets together, whereby the individual droplets do not lose their identity. Flocculation is thus the initial step leading to further ageing of the emulsion (droplet coalescence and the ultimate separation of the phases). Flocculation is used in mineral dressing, but can also be used in the design of physical properties of food and pharmaceutical products.
Medical diagnostics
In a medical laboratory, flocculation is the core principle used in various diagnostic tests, for example the rapid plasma reagin test.
Civil engineering/earth sciences
In civil engineering, and in the earth sciences, flocculation is a condition in which clays, polymers or other small charged particles become attached and form a fragile structure, a floc. In dispersed clay slurries,
flocculation occurs after mechanical agitation ceases and the dispersed clay platelets spontaneously form flocs because of attractions between negative face charges and positive edge charges.
Biology
Flocculation is used in biotechnology applications in conjunction with microfiltration to improve the efficiency of biological feeds. The addition of synthetic flocculants to the bioreactor can increase the average particle size making microfiltration more efficient. When flocculants are not added, cakes form and accumulate causing low cell viability. Positively charged flocculants work better than negatively charged ones since the cells are generally negatively charged.
Cheese industry
Flocculation is widely employed to measure the progress of curd formation in the initial stages of cheese making to determine how long the curds must set. The reaction involving the rennet micelles is modeled by Smoluchowski kinetics. During the renneting of milk, the micelles can approach one another and flocculate, a process that involves hydrolysis of molecules and macropeptides.
Flocculation is also used during cheese wastewater treatment. Three different coagulants are mainly used:
FeSO4 (iron(II) sulfate)
Al2(SO4)3 (aluminium sulfate)
FeCl3 (iron(III) chloride)
Brewing
In the brewing industry flocculation has a different meaning. It is a very important process in fermentation during the production of beer where cells form macroscopic flocs. These flocs cause the yeast to sediment or rise to the top of a fermentation at the end of the fermentation. Subsequently, the yeast can be collected (cropped) from the top (ale fermentation) or the bottom (lager fermentation) of the fermenter in order to be reused for the next fermentation.
Yeast flocculation is partially determined by the calcium concentration, often in the 50–100 ppm range. Calcium salts can be added to cause flocculation, or the process can be reversed by removing calcium by adding phosphate to form insoluble calcium phosphate, adding excess sulfate to form insoluble calcium sulfate, or adding EDTA to chelate the calcium ions. While it appears similar to sedimentation in colloidal dispersions, the mechanisms are different.
Water treatment process
Flocculation and sedimentation are widely employed in the purification of drinking water as well as in sewage treatment, storm-water treatment and treatment of industrial wastewater streams.
For drinking water, typical treatment processes consist of grates, coagulation, flocculation, sedimentation, granular filtration and disinfection. The coagulation and flocculation steps are similar, causing particles to aggregate and fall out of solution, but may use different chemicals or physical movement of water. A variety of salts may be added to adjust the pH and act as clarifying agents, depending on the water chemistry. These include sodium hydroxide, calcium hydroxide, aluminum sulfate, aluminum oxide, ferric sulfate, ferric chloride, and sodium aluminate, along with the flocculant aids polyaluminum chloride and polyferric chloride. A variety of cationic, anionic, and non-ionic polymers are also used, typically with a molecular weight below 500,000. Polydiallyldimethyl ammonium chloride (polyDADMAC) and epiDMA (a copolymer of epichlorohydrin and dimethylamine) are common choices, though these can produce carcinogenic nitrosamines. Sand, powdered activated carbon, and clay may also be used as nucleating agents; in some cases, these are re-used after extraction.
Biopolymers, especially chitosan, are increasingly popular as environmentally friendly flocculants. Chitosan is not only biodegradable but also exhibits a unique ability to bind with a wide range of contaminants, including heavy metals and organic pollutants, effectively removing them from water sources.
Flocculation provides promising results for removing fine particles and treating stormwater runoff from transportation construction projects, but it is not used by most state departments of transportation in the U.S. This may be due to regulatory restrictions or insufficient guidance on soil sampling requirements in light of changing soil characteristics. States that must achieve a numeric turbidity limit are more inclined to use flocculants to ensure the appropriate level of treatment.
Deflocculation
Deflocculation is the opposite of flocculation, sometimes known as peptization. Sodium silicate (Na2SiO3) is a typical example. Usually, in higher pH ranges, in addition to low ionic strength of solutions and domination of monovalent metal cations, the colloidal particles can be dispersed.
The additive that prevents the colloids from forming flocs is called a deflocculant. For deflocculation imparted through electrostatic barriers, the efficacy of a deflocculant can be gauged in terms of zeta potential. According to the Encyclopedic Dictionary of Polymers deflocculation is "a state or condition of a dispersion of a solid in a liquid in which each solid particle remains independent and unassociated with adjacent particles (much like emulsifier). A deflocculated suspension shows zero or very low yield value".
Deflocculation can be a problem in wastewater treatment plants, as it commonly causes problems with sludge settling and deterioration of the effluent quality.
| Physical sciences | Other separations | Chemistry |
1243550 | https://en.wikipedia.org/wiki/Rigid%20rotor | Rigid rotor | In rotordynamics, the rigid rotor is a mechanical model of rotating systems. An arbitrary rigid rotor is a 3-dimensional rigid object, such as a top. To orient such an object in space requires three angles, known as Euler angles. A special rigid rotor is the linear rotor, which requires only two angles to describe its orientation; an example is a diatomic molecule. More general molecules are 3-dimensional, such as water (asymmetric rotor), ammonia (symmetric rotor), or methane (spherical rotor).
Linear rotor
The linear rigid rotor model consists of two point masses located at fixed distances from their center of mass. The fixed distance between the two masses and the values of the masses are the only characteristics of the rigid model. However, for many actual diatomics this model is too restrictive since distances are usually not completely fixed. Corrections on the rigid model can be made to compensate for small variations in the distance. Even in such a case the rigid rotor model is a useful point of departure (zeroth-order model).
Classical linear rigid rotor
The classical linear rotor consists of two point masses $m_1$ and $m_2$ (with reduced mass $\mu = \frac{m_1 m_2}{m_1+m_2}$) at a distance $R$ from each other. The rotor is rigid if $R$ is independent of time. The kinematics of a linear rigid rotor is usually described by means of spherical polar coordinates, which form a coordinate system of R3. In the physics convention the coordinates are the co-latitude (zenith) angle $\theta$, the longitudinal (azimuth) angle $\varphi$ and the distance $R$. The angles specify the orientation of the rotor in space. The kinetic energy of the linear rigid rotor is given by
$$ T = \tfrac{1}{2}\mu R^2\left(\dot\theta^2 + \sin^2\theta\,\dot\varphi^2\right) = \tfrac{1}{2}\mu\left(h_\theta^2\,\dot\theta^2 + h_\varphi^2\,\dot\varphi^2\right), $$
where $h_\theta = R$ and $h_\varphi = R\sin\theta$ are scale (or Lamé) factors.
Scale factors are of importance for quantum mechanical applications since they enter the Laplacian expressed in curvilinear coordinates. In the case at hand (constant $R$)
$$ \nabla^2 = \frac{1}{R^2\sin\theta}\,\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}\right) + \frac{1}{R^2\sin^2\theta}\,\frac{\partial^2}{\partial\varphi^2}. $$
The classical Hamiltonian function of the linear rigid rotor is
$$ H = \frac{1}{2\mu R^2}\left(p_\theta^2 + \frac{p_\varphi^2}{\sin^2\theta}\right). $$
Quantum mechanical linear rigid rotor
The linear rigid rotor model can be used in quantum mechanics to predict the rotational energy of a diatomic molecule. The rotational energy depends on the moment of inertia for the system, $I$. In the center of mass reference frame, the moment of inertia is equal to:
$$ I = \mu R^2, $$
where $\mu$ is the reduced mass of the molecule and $R$ is the distance between the two atoms.
According to quantum mechanics, the energy levels of a system can be determined by solving the Schrödinger equation:
$$ \hat H \Psi = E \Psi, $$
where $\Psi$ is the wave function and $\hat H$ is the energy (Hamiltonian) operator. For the rigid rotor in a field-free space, the energy operator corresponds to the kinetic energy of the system:
$$ \hat H = -\frac{\hbar^2}{2\mu}\nabla^2, $$
where $\hbar$ is the reduced Planck constant and $\nabla^2$ is the Laplacian. The Laplacian is given above in terms of spherical polar coordinates. The energy operator written in terms of these coordinates is:
$$ \hat H = -\frac{\hbar^2}{2I}\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}\right].$$
This operator appears also in the Schrödinger equation of the hydrogen atom after the radial part is separated off. The eigenvalue equation becomes
$$ \hat H\, Y_\ell^m(\theta,\varphi) = \frac{\hbar^2}{2I}\,\ell(\ell+1)\, Y_\ell^m(\theta,\varphi). $$
The symbol $Y_\ell^m$ represents a set of functions known as the spherical harmonics. Note that the energy does depend on the distance $R$, through $I$. The energy
$$ E_\ell = \frac{\hbar^2}{2I}\,\ell(\ell+1) $$
is $(2\ell+1)$-fold degenerate: the functions with fixed $\ell$ and $m = -\ell, -\ell+1, \dots, \ell$ have the same energy.
Introducing the rotational constant $B \equiv \frac{\hbar^2}{2I}$, we write,
$$ E_\ell = B\,\ell(\ell+1). $$
In the units of reciprocal length the rotational constant is,
$$ \bar B \equiv \frac{B}{hc} = \frac{h}{8\pi^2 c I}, $$
with $c$ the speed of light. If cgs units are used for $h$, $c$, and $I$, $\bar B$ is expressed in cm−1, or wave numbers, which is a unit that is often used for rotational-vibrational spectroscopy. The rotational constant $\bar B(R)$ depends on the distance $R$. Often one writes $B_e \equiv \bar B(R_e)$, where $R_e$ is the equilibrium value of $R$ (the value for which the interaction energy of the atoms in the rotor has a minimum).
A typical rotational absorption spectrum consists of a series of peaks that correspond to transitions between levels with different values of the angular momentum quantum number ($\ell$) such that $\Delta\ell = \pm 1$, due to the selection rules (see below). Consequently, rotational peaks appear at energies with differences corresponding to an integer multiple of $2\bar B$.
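As a numerical illustration of these formulas, here is a minimal Python sketch computing the first few rotational levels and absorption-line positions of a diatomic molecule; the bond length and masses are approximate values for carbon monoxide, chosen only as an example.

```python
import math

# Approximate CO parameters (illustrative values).
m_C = 12.000 * 1.66054e-27    # kg
m_O = 15.995 * 1.66054e-27    # kg
R   = 1.128e-10               # bond length, m
h, c = 6.62607015e-34, 2.99792458e10   # J s, and c in cm/s so B comes out in cm^-1

mu = m_C * m_O / (m_C + m_O)  # reduced mass
I  = mu * R**2                # moment of inertia
B_wavenumber = h / (8 * math.pi**2 * c * I)   # rotational constant, ~1.9 cm^-1 for CO

# Energy levels E_l = B l(l+1) and allowed absorption lines l -> l+1 at 2B(l+1).
levels = [B_wavenumber * l * (l + 1) for l in range(5)]
lines  = [2 * B_wavenumber * (l + 1) for l in range(4)]
print(B_wavenumber, levels, lines)
```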
Selection rules
Rotational transitions of a molecule occur when the molecule absorbs a photon [a particle of a quantized electromagnetic (em) field]. Depending on the energy of the photon (i.e., the wavelength of the em field) this transition may be seen as a sideband of a vibrational and/or
electronic transition. Pure rotational transitions, in which the vibronic (= vibrational plus electronic) wave function does not change, occur in the microwave region of the electromagnetic spectrum.
Typically, rotational transitions can only be observed when the angular momentum quantum number changes by one unit ($\Delta\ell = \pm 1$). This selection rule arises from a first-order perturbation theory approximation of the time-dependent Schrödinger equation. According to this treatment, rotational transitions can only be observed when one or more components of the dipole operator have a non-vanishing transition moment. If is the direction of the electric field component of the incoming electromagnetic wave, the transition moment is,
A transition occurs if this integral is non-zero. By separating the rotational part of the molecular wavefunction from the vibronic part, one can show that this means that the molecule must have a permanent dipole moment. After integration over the vibronic coordinates the following rotational part of the transition moment remains,
Here is the z component of the permanent dipole moment. The moment is the vibronically averaged component of the dipole operator. Only the component of the permanent dipole along the axis of a heteronuclear molecule is non-vanishing.
By the use of the orthogonality of the spherical harmonics it is possible to determine which values of , , , and will result in nonzero values for the dipole transition moment integral. This constraint results in the observed selection rules for the rigid rotor:
Non-rigid linear rotor
The rigid rotor is commonly used to describe the rotational energy of diatomic molecules but it is not a completely accurate description of such molecules. This is because molecular bonds (and therefore the interatomic distance $R$) are not completely fixed; the bond between the atoms stretches out as the molecule rotates faster (higher values of the rotational quantum number $\ell$). This effect can be accounted for by introducing a correction factor known as the centrifugal distortion constant $\bar D$ (bars on top of various quantities indicate that these quantities are expressed in cm−1):
$$ \bar E_\ell = \bar B\,\ell(\ell+1) - \bar D\,\ell^2(\ell+1)^2, $$
where
$$ \bar D = \frac{4\bar B^3}{\bar\omega^2}; $$
$\bar\omega$ is the fundamental vibrational frequency of the bond (in cm−1). This frequency is related to the reduced mass and the force constant (bond strength) of the molecule according to
$$ \bar\omega = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}. $$
The non-rigid rotor is an acceptably accurate model for diatomic molecules but is still somewhat imperfect. This is because, although the model does account for bond stretching due to rotation, it ignores any bond stretching due to vibrational energy in the bond (anharmonicity in the potential).
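A short continuation of the earlier numerical sketch shows how the centrifugal-distortion term shifts the line positions; the B and D values below are rough literature-style numbers for CO and serve only as an illustration.

```python
# Term values with centrifugal distortion: E(l) = B l(l+1) - D l^2 (l+1)^2  (in cm^-1).
B = 1.93          # cm^-1, approximate rotational constant of CO
D = 6.1e-6        # cm^-1, approximate centrifugal distortion constant of CO

def term_value(l: int) -> float:
    return B * l * (l + 1) - D * (l * (l + 1)) ** 2

# Absorption lines l -> l+1: rigid-rotor spacing 2B(l+1) minus a small correction 4D(l+1)^3.
for l in range(5):
    rigid = 2 * B * (l + 1)
    non_rigid = term_value(l + 1) - term_value(l)
    print(l, rigid, non_rigid)
```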
Arbitrarily shaped rigid rotor
An arbitrarily shaped rigid rotor is a rigid body of arbitrary shape with its center of mass fixed (or in uniform rectilinear motion) in field-free space R3, so that its energy consists only of rotational kinetic energy (and possibly constant translational energy that can be ignored). A rigid body can be (partially) characterized by the three eigenvalues of its moment of inertia tensor, which are real nonnegative values known as principal moments of inertia.
In microwave spectroscopy—the spectroscopy based on rotational transitions—one usually classifies molecules (seen as rigid rotors) as follows:
spherical rotors
symmetric rotors
oblate symmetric rotors
prolate symmetric rotors
asymmetric rotors
This classification depends on the relative magnitudes of the principal moments of inertia.
Coordinates of the rigid rotor
Different branches of physics and engineering use different coordinates for the description of the kinematics of a rigid rotor. In molecular physics Euler angles are used almost exclusively. In quantum mechanical applications it is advantageous to use Euler angles in a convention that is a simple extension of the physical convention of spherical polar coordinates.
The first step is the attachment of a right-handed orthonormal frame (3-dimensional system of orthogonal axes) to the rotor (a body-fixed frame) . This frame can be attached arbitrarily to the body, but often one uses the principal axes frame—the normalized eigenvectors of the inertia tensor, which always can be chosen orthonormal, since the tensor is symmetric. When the rotor possesses a symmetry-axis, it usually coincides with one of the principal axes. It is convenient to choose
as body-fixed z-axis the highest-order symmetry axis.
One starts by aligning the body-fixed frame with a space-fixed frame (laboratory axes), so that the body-fixed x, y, and z axes coincide with the space-fixed X, Y, and Z axis. Secondly, the body and its frame are rotated actively over a positive angle around the z-axis (by the right-hand rule), which moves the - to the -axis. Thirdly, one rotates the body and its frame over a positive angle around the -axis. The z-axis of the body-fixed frame has after these two rotations the longitudinal angle (commonly designated by ) and the colatitude angle (commonly designated by ), both with respect to the space-fixed frame. If the rotor were cylindrical symmetric around its z-axis, like the linear rigid rotor, its orientation in space would be unambiguously specified at this point.
If the body lacks cylinder (axial) symmetry, a last rotation around its z-axis (which has polar coordinates and ) is necessary to specify its orientation completely. Traditionally the last rotation angle is called .
The convention for Euler angles described here is known as the convention; it can be shown (in the same manner as in this article) that it is equivalent to the convention in which the order of rotations is reversed.
The total matrix of the three consecutive rotations is the product
Let be the coordinate vector of an arbitrary point in the body with respect to the body-fixed frame. The elements of are the 'body-fixed coordinates' of . Initially is also the space-fixed coordinate vector of . Upon rotation of the body, the body-fixed coordinates of do not change, but the space-fixed coordinate vector of becomes,
In particular, if is initially on the space-fixed Z-axis, it has the space-fixed coordinates
which shows the correspondence with the spherical polar coordinates (in the physical convention).
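The composition of the three rotations can be checked numerically; below is a small NumPy sketch of the z–y–z convention as described above, with arbitrary test angles, confirming that a point initially on the space-fixed Z-axis acquires the spherical-polar direction given by the first two Euler angles.

```python
import numpy as np

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])

def Ry(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [ 0,         1, 0        ],
                     [-np.sin(b), 0, np.cos(b)]])

alpha, beta, gamma = 0.3, 1.1, 0.7    # arbitrary Euler angles (radians)
R = Rz(alpha) @ Ry(beta) @ Rz(gamma)  # total rotation applied to the body

# A point initially on the space-fixed Z-axis ends up in the direction (beta, alpha).
z_axis = np.array([0.0, 0.0, 1.0])
print(R @ z_axis)
print(np.array([np.cos(alpha) * np.sin(beta),
                np.sin(alpha) * np.sin(beta),
                np.cos(beta)]))
```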
Knowledge of the Euler angles as function of time t and the initial coordinates determine the kinematics of the rigid rotor.
Classical kinetic energy
The following text forms a generalization of the well-known special case of the rotational energy of an object that rotates around one axis.
It will be assumed from here on that the body-fixed frame is a principal axes frame; it diagonalizes the instantaneous inertia tensor (expressed with respect to the space-fixed frame), i.e.,
where the Euler angles are time-dependent and in fact determine the time dependence of by the inverse of this equation. This notation implies
that at $t = 0$ the Euler angles are zero, so that at $t = 0$ the body-fixed frame coincides with the space-fixed frame.
The classical kinetic energy T of the rigid rotor can be expressed in different ways:
as a function of angular velocity
in Lagrangian form
as a function of angular momentum
in Hamiltonian form.
Since each of these forms has its use and can be found in textbooks we will present all of them.
Angular velocity form
As a function of angular velocity T reads,
with
The vector on the left hand side contains the components of the angular velocity of the rotor expressed with respect to the body-fixed frame. The angular velocity satisfies equations of motion known as Euler's equations (with zero applied torque, since by assumption the rotor is in field-free space). It can be shown that is not the time derivative of any vector, in contrast to the usual definition of velocity.
The dots over the time-dependent Euler angles on the right hand side indicate time derivatives. Note that a different rotation matrix would result from a different choice of Euler angle convention used.
Lagrange form
Backsubstitution of the expression of into T gives
the kinetic energy in Lagrange form (as a function of the time derivatives of the Euler angles). In matrix-vector notation,
where is the metric tensor expressed in Euler angles—a non-orthogonal system of curvilinear coordinates—
Angular momentum form
Often the kinetic energy is written as a function of the angular momentum of the rigid rotor. With respect to the body-fixed frame it has the components , and can be shown to be related to the angular velocity,
This angular momentum is a conserved (time-independent) quantity if viewed from a stationary space-fixed frame. Since the body-fixed frame moves (depends on time) the components are not time independent. If we were to represent with respect to the stationary space-fixed frame, we would
find time independent expressions for its components.
The kinetic energy is expressed in terms of the angular momentum by
Hamilton form
The Hamilton form of the kinetic energy is written in terms of generalized momenta
where it is used that the is symmetric. In Hamilton form the kinetic energy is,
with the inverse metric tensor given by
This inverse tensor is needed to obtain the Laplace-Beltrami operator, which (multiplied by ) gives the quantum mechanical energy operator of the rigid rotor.
The classical Hamiltonian given above can be rewritten to the following expression, which is needed in the phase integral arising in the classical statistical mechanics of rigid rotors,
Quantum mechanical rigid rotor
As usual quantization is performed by the replacement of the generalized momenta by operators that give first derivatives with respect to its canonically conjugate variables (positions). Thus,
and similarly for and . It is remarkable that this rule replaces the fairly complicated function of all three Euler angles, time derivatives of Euler angles, and inertia moments (characterizing the rigid rotor) by a simple differential operator that does not depend on time or inertia moments and differentiates to one Euler angle only.
The quantization rule is sufficient to obtain the operators that correspond with the classical angular momenta. There are two kinds: space-fixed and body-fixed
angular momentum operators. Both are vector operators, i.e., both have three components that transform as vector components among themselves upon rotation of the space-fixed and the body-fixed frame, respectively. The explicit form of the rigid rotor angular momentum operators is given here (but beware, they must be multiplied with ). The body-fixed angular momentum operators are written as . They satisfy anomalous commutation relations.
The quantization rule is not sufficient to obtain the kinetic energy operator from the classical Hamiltonian. Since classically commutes with and and the inverses of these functions, the position of these trigonometric functions in the classical Hamiltonian is arbitrary. After
quantization the commutation does no longer hold and the order of operators and functions in the Hamiltonian (energy operator) becomes a point of concern. Podolsky proposed in 1928 that the Laplace-Beltrami operator (times ) has the appropriate form for the quantum mechanical kinetic energy operator. This operator has the general form (summation convention: sum over repeated indices—in this case over the three Euler angles ):
where is the determinant of the g-tensor:
Given the inverse of the metric tensor above, the explicit form of the kinetic energy operator in terms of Euler angles follows by simple substitution. (Note: The corresponding eigenvalue equation gives the Schrödinger equation for the rigid rotor in the form that it was solved for the first time by Kronig and Rabi (for the special case of the symmetric rotor). This is one of the few cases where the Schrödinger equation can be solved analytically. All these cases were solved within a year of the formulation of the Schrödinger equation.)
Nowadays it is common to proceed as follows. It can be shown that can be expressed in body-fixed angular momentum operators (in this proof one must carefully commute differential operators with trigonometric functions). The result has the same appearance as the classical formula expressed in body-fixed coordinates,
The action of the on the Wigner D-matrix is simple. In particular
so that the Schrödinger equation for the spherical rotor () is solved with the degenerate energy equal to .
The symmetric top (= symmetric rotor) is characterized by . It is a prolate (cigar shaped) top if . In the latter case we write the Hamiltonian as
and use that
Hence
The eigenvalue is -fold degenerate, for all eigenfunctions with have the same eigenvalue. The energies with |k| > 0 are -fold degenerate. This exact solution of the Schrödinger equation of the symmetric top was first found in 1927.
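For concreteness, here is a small Python sketch of the standard closed-form symmetric-top term values, E(j, k) = B j(j+1) + (A − B) k², together with their degeneracies; the rotational constants are arbitrary illustrative numbers (written in the prolate convention with A > B), and this is a sketch of the textbook result rather than a reconstruction of the article's own formulas.

```python
# Symmetric-top term values E(j, k) = B j(j+1) + (A - B) k^2  (prolate convention, A > B).
A, B = 5.0, 1.5   # arbitrary illustrative rotational constants, cm^-1

def energy(j: int, k: int) -> float:
    assert abs(k) <= j
    return B * j * (j + 1) + (A - B) * k**2

def degeneracy(j: int, k: int) -> int:
    # 2j+1 values of m; levels with +k and -k coincide in energy when k != 0.
    return (2 * j + 1) * (2 if k != 0 else 1)

for j in range(3):
    for k in range(j + 1):
        print(j, k, energy(j, k), degeneracy(j, k))
```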
The asymmetric top problem () is not soluble analytically.
Direct experimental observation of molecular rotations
For a long time, molecular rotations could not be directly observed experimentally. Only measurement techniques with atomic resolution made it possible to detect the rotation of a single molecule. At low temperatures, the rotations of molecules (or part thereof) can be frozen. This could be directly visualized by scanning tunneling microscopy, i.e., the stabilization could be explained at higher temperatures by the rotational entropy. The direct observation of rotational excitation at single molecule level was achieved recently using inelastic electron tunneling spectroscopy with the scanning tunneling microscope. The rotational excitation of molecular hydrogen and its isotopes were detected.
| Physical sciences | Molecular physics | Physics |
1243767 | https://en.wikipedia.org/wiki/Horn%20%28anatomy%29 | Horn (anatomy) | A horn is a permanent pointed projection on the head of various animals that consists of a covering of keratin and other proteins surrounding a core of live bone. Horns are distinct from antlers, which are not permanent.
In mammals, true horns are found mainly among the ruminant artiodactyls, in the families Antilocapridae (pronghorn) and Bovidae (cattle, goats, antelope etc.). Cattle horns arise from subcutaneous connective tissue (under the scalp) and later fuse to the underlying frontal bone.
One pair of horns is usual; however, two or more pairs occur in a few wild species and in some domesticated breeds of sheep. Polycerate (multi-horned) sheep breeds include the Hebridean, Icelandic, Jacob, Manx Loaghtan, and the Navajo-Churro.
Horns usually have a curved or spiral shape, often with ridges or fluting. In many species, only males have horns. Horns start to grow soon after birth and continue to grow throughout the life of the animal (except in pronghorns, which shed the outer layer annually, but retain the bony core). Partial or deformed horns in livestock are called scurs. Similar growths on other parts of the body are not usually called horns, but spurs, claws, or hooves, depending on the part of the body on which they occur.
Other hornlike growths
The term "horn" is also popularly applied to other hard and pointed features attached to the head of animals in various other families:
Giraffidae: Giraffes have one or more pairs of bony bumps on their heads, called ossicones. These are covered with furred skin.
Cervidae: Most deer have antlers, which are not true horns but are made of bone. When fully developed, antlers are dead bone without a horn or skin covering; they are borne only by adults (usually males, except for reindeer) and are shed and regrown each year.
Rhinocerotidae: The "horns" of rhinoceroses are made of keratin, the same substance as fingernails, and grow continuously, but do not have a bone core.
Chamaeleonidae: Many chameleons, most notably the Jackson's chameleon, possess horns on their skulls, which have a keratin covering.
Ceratopsidae: The "horns" of the Triceratops were extensions of its skull bones, although debate exists over whether they had a keratin covering.
Abelisauridae: Various abelisaurid theropods, such as Carnotaurus and Majungasaurus possessed extensions of the frontal bone which were likely covered in some form of keratinous integument.
Horned lizards (Phrynosoma): These lizards have horns on their heads which have a hard keratin covering over a bony core, like mammalian horns.
Insects: Some insects (such as rhinoceros beetles) have hornlike structures on the head or thorax (or both). These are pointed outgrowths of the hard chitinous exoskeleton. Some (such as stag beetles) have greatly enlarged jaws, also made of chitin.
Canidae: Golden jackals were once thought to occasionally develop a horny growth on the skull, which is associated with magical powers in south-eastern Asia. Although no evidence of its existence has been found, it remains a common belief in South Asia.
Azendohsauridae: the skull of the triassic azendohsaurid archosauromorph Shringasaurus possessed two massive, forward-facing conical horns, which were likely covered in cornified sheaths in life.
Anhimidae: The horned screamer possesses an entirely keratinous spine, which is loosely connected to its skull.
Many mammal species in various families have tusks, which often serve the same functions as horns, but are in fact oversized teeth. These include the Moschidae (Musk deer, which are ruminants), Suidae (Wild Boars), Proboscidea (Elephants), Monodontidae (Narwhals) and Odobenidae (Walruses).
Polled animals or pollards are those of normally-horned (mainly domesticated) species whose horns have been removed, or which have not grown. In some cases such animals have small horny growths in the skin where their horns would be – these are known as scurs.
On humans
Cutaneous horns are the only examples of horns growing on people.
Cases of people growing horns have been historically described, sometimes with mythical status. However, researchers have not discovered photographic evidence of the phenomenon. There are human cadaveric specimens that show outgrowths, but these are instead classified as osteomas or other excrescences.
The phenomenon of humans with horns has been observed in countries lacking advanced medicine. There are living people, several in China, with cutaneous horns, which are most common in the elderly.
Some people, notably The Enigma, have horn implants; that is, they have implanted silicone beneath the skin as a form of body modification.
Animal uses of horns
Animals have a variety of uses for horns and antlers, including defending themselves from predators and fighting members of their own species (horn fighting) for territory, dominance or mating priority. Horns are usually present only in males but in some species, females too may possess horns. It has been theorized by researchers that taller species living in the open are more visible from longer distances and more likely to benefit from horns to defend themselves against predators. Female bovids that are not hidden from predators due to their large size or open savannah-like habitat are more likely to bear horns than small or camouflaged species.
In addition, horns may be used to root in the soil or strip bark from trees. In animal courtship, many use horns in displays. For example, the male blue wildebeest reams the bark and branches of trees to impress the female and lure her into his territory. Some animals with true horns, such as goats, use them for cooling: the blood vessels in the bony core allow the horns to function as a radiator.
After the death of a horned animal, the keratin may be consumed by the larvae of the horn moth.
Human uses of horns
Horned animals are sometimes hunted so their mounted head or horns can be displayed as a hunting trophy or as decorative objects.
Some cultures use bovid horns as musical instruments, for example, the shofar. These have evolved into brass instruments in which, unlike the trumpet, the bore gradually increases in width through most of its length—that is to say, it is conical rather than cylindrical. These are called horns, though now made of metal.
Drinking horns are bovid horns removed from the bone core, cleaned, polished, and used as drinking vessels. (This is similar to the legend of the cornucopia.) It has been suggested that the shape of a natural horn was also the model for the rhyton, a horn-shaped drinking vessel.
Powder horns were originally bovid horns fitted with lids and carrying straps, used to carry gunpowder. Powder flasks of any material may be referred to as powder horns.
Shoehorns were originally made from slices of bovid horn, which provided the right curving shape and a smooth surface.
Antelope horns are used in traditional Chinese medicine.
Horns consist of keratin, and the term "horn" is used to refer to this material, sometimes including similarly solid keratin from other parts of animals, such as hoofs. Horn may be used as a material in tools, furniture and decoration, among other uses. In these applications, horn is valued for its hardness, and it has given rise to the expression hard as horn. Horn is somewhat thermoplastic and (like tortoiseshell) was formerly used for many purposes where plastic would now be used. Horn may be used to make glue.
Horn bows are bows made from a combination of horn, sinew and usually wood. These materials allow more energy to be stored in a short bow than wood alone.
Horns and horn tips from various animals have been used for centuries in the manufacture of scales, grips, or handles for knives and other weapons, and beginning in the 19th century, for the handle scales of handguns.
Horn buttons may be made from horns, and historically also from hooves, which are a similar material. The non-bony part of the horn or hoof may be softened by heating to a temperature just above the boiling point of water and then molded in metal dies. Alternatively, the hollow lower part of the horn may be slit spirally lengthwise and, again after heating, flattened in a vise between wood boards; it is later cut with a holesaw or similar tool into round or other shaped blanks, which are finished on a lathe or by hand. Toggle buttons are made by cutting off the solid tips of horns and perforating them. Antler buttons, and buttons made from hooves, are not technically horn buttons, but are often referred to as such in popular parlance. Horns from cattle, water buffalo, and sheep are all used for commercial button making, and those of other species as well, on a local and non-commercial basis.
Horn combs were common in the era before replacement by plastic, and are still made.
Horn needle cases and other small boxes, particularly of water buffalo horn, are still made. One occasionally finds horn used as a material in antique snuff boxes.
Horn strips for inlaying wood are a traditional technique.
Carved horn hairpins and other jewelry such as brooches and rings are manufactured, particularly in Asia, including for the souvenir trade.
Horn is used in artwork for small, detailed carvings. It is an easily worked and polished material, is strong and durable, and in the right variety, beautiful.
Horn chopsticks are found in Asian countries from highland Nepal and Tibet to the Pacific coast. Typically they are not the common material, but rather are higher quality decorative articles. Similarly other horn flatware, notably spoons, continues to be manufactured for decorations and other purposes.
Long dice made of horn that have a rodlike elongated shape with four numbered faces and two small unnumbered end faces continue to be manufactured in Asia where they are traditionally used in games like Chaupar (Pachisi) and many others.
Horn is sometimes found in walking sticks, cane handles, and shafts. In the latter use, the horn elements may be cut into short cylindrical segments held together by a metal core.
Horned deities appear in various guises across many world religions and mythologies.
Horned helmets arise in different cultures, for ritual purposes rather than combat.
Horns were treated and cut into strips to make semi-transparent windows in the vernacular architecture of the Middle Ages.
Dehorning
In some instances, wildlife parks may decide to remove the horns of some animals (such as rhinos) as a preventive measure against poaching. Animal horns can be sawn off safely without hurting the animal (the procedure is similar to clipping toenails). A poached animal, by contrast, is generally killed, since it is shot first; park rangers may therefore decide to tranquilize an animal and remove its horn pre-emptively.
Gallery
| Biology and health sciences | Skeletal system | Biology |
2577842 | https://en.wikipedia.org/wiki/Eurasian%20brown%20bear | Eurasian brown bear | The Eurasian brown bear (Ursus arctos arctos) is one of the most common subspecies of the brown bear, and is found in much of Eurasia. It is also called the European brown bear, common brown bear, common bear, and colloquially by many other names. The genetic diversity of present-day brown bears (Ursus arctos) has been extensively studied over the years and appears to be geographically structured into five main clades based upon analysis of the mtDNA.
Description
The Eurasian brown bear has brown fur, which ranges from yellowish-brown to dark brown, red-brown, and almost black in some cases; albinism has also been recorded. The fur is dense to varying degrees and the hair can grow up to in length. The head is normally quite round and has relatively small rounded ears, a wide skull, and a mouth equipped with 42 teeth, including predatory teeth. It has a powerful bone structure and large paws equipped with claws that can grow up to in length. The weight varies depending on habitat and the time of the year. A full-grown male weighs on average between 350 and 500 kg and reaches a maximum weight of 650 kg and length of nearly . Females typically range between 150 and 300 kg and reach a maximum weight of 450 kg. They have a lifespan of 20 to 30 years in the wild.
History
Eurasian brown bears were used in Ancient Rome for fighting in arenas. The strongest bears apparently came from Caledonia and Dalmatia.
In antiquity, the Eurasian brown bear was largely carnivorous, with 80% of its diet consisting of animal matter. However, as its habitat increasingly diminished, the share of meat in its diet decreased with it, until by the late Middle Ages meat made up only 40% of its dietary intake. Today, meat makes up little more than 10–15% of its diet. Whenever possible, the brown bear will consume sheep.
Unlike in North America, where an average of two people a year are killed by bears, Scandinavia only has records of three fatal bear attacks within the last century. In late 2019, brown bears killed three men in Romania in just over a month.
Species origin
The oldest fossils are from the Choukoutien, China, and date back about 500,000 years. It is known from mtDNA studies that during the Pleistocene ice age it was too cold for the brown bear to survive in Europe except in three places: Russia, Spain, and the Balkans. However, a newer study found that brown bears were present in France and Belgium during the Last Glacial Maximum as well, indicating they were not as restricted to southern refugia as previously thought.
Modern research has made it possible to track the origin of the subspecies. The species to which it belongs developed more than 500,000 years ago, and researchers have found that the Eurasian brown bear separated about 850,000 years ago, with one branch based in Western Europe and the other branch in Russia, Eastern Europe and Asia. Through the research of mitochondrial DNA (mtDNA), researchers have found that the European family has divided into two clades—one in the Iberian Peninsula and the Balkans, the other in Russia.
There is a population in Scandinavia that includes bears of the western and eastern lineages. By analyzing the mtDNA of the southern population, researchers have found that they have probably come from populations in the Pyrenees in Southern France and Spain and the Cantabrian Mountains (Spain). Bears from these populations spread to southern Scandinavia after the last ice age. The northern bear populations originate in the Finnish/Russian population. Probably their ancestors survived the ice age in the ice-free areas west of the Ural Mountains, and thereafter spread to Northern Europe.
Distribution
Brown bears could once be found across most of Eurasia, compared to the more limited range today. General habitats included areas such as grassland, sparsely vegetated land, and wetlands.
Although included as of Least Concern on the 2006 IUCN Red List of Threatened Species (which refers to the global species, not to the Eurasian brown bear specifically), local populations, specifically those in the European Union, are becoming increasingly scarce. As the IUCN itself adds: "Least Concern does not always mean that species are not at risk. There are declining species that are evaluated as Least Concern."
The brown bear has long been extinct in the British Isles (at least 1,500 years ago, possibly even 3,000 years ago), Denmark (about 6,500 years ago), the Netherlands (about 1,000 years ago, although later singles rarely wandered from Germany), Belgium and Luxembourg, with more recent extinctions in Germany (in the year 1835, although singles wandering from Italy were recorded in 2006 and 2019), Switzerland (in 1904, although a single was seen in 1923 and since 2005 there has been an increasing number of sightings of wanderers from Italy), and Portugal (in 1843, although a wanderer from Spain was recorded in 2019).
Globally, the largest population is found east of the Ural mountain range, in the large Siberian forests; brown bears are also present in smaller numbers in parts of central Asia.
The largest brown bear population in Europe is in Russia, where it has now recovered from an all-time low caused by intensive hunting. Populations in Baltoscandia are similarly, albeit slowly, increasing. They include almost 3,000 bears in Sweden, 2,000 in Finland, 1,100 in Estonia and around 100 in Norway.
Large populations can also be found in Romania (around 6,000), Slovakia (around 1,200), Bosnia and Herzegovina, Croatia (1,200), Slovenia (1,100), North Macedonia, Bulgaria, Poland, Turkey, and Georgia.
Small but still significant populations can also be found in Albania, Greece, Serbia and Montenegro. In 2005, there were an estimated 200 in Ukraine; these populations are part of two distinct metapopulations: the Carpathian with over 5000 individuals, and the Dinaric-Pindos (Balkans) with around 3000 individuals.
There is a small but growing population (at least 70 bears) in the Pyrenees, on the border between Spain and France, which was once on the edge of extinction, as well as two subpopulations in the Cantabrian Mountains in Spain (amounting to around 250 individuals). There are also populations totalling around 100 bears in the Abruzzo, South Tyrol and Trentino regions of Italy. Bears from the aforementioned Italian regions occasionally cross over to bordering Switzerland, which has not hosted a native population since its last bear was shot and killed in Graubünden in 1904.
Outside Europe and Russia/the CIS, clades of brown bear persist in small, isolated, and for the most part highly threatened populations in Iran, Afghanistan, Pakistan, parts of northwest India and central China, and on the island of Hokkaido in Japan.
Cultural depictions
The historic distribution of bears and the impression the Eurasian brown bear has made on people are reflected in the names of several localities (some notable examples include Bern, Medvednica, Otepää and Ayu-Dag), as well as personal names—for example, Xiong, Bernard, Arthur, Ursula, Urs, Ursicinus, Orsolya, Björn, Nedved, Medvedev, and Otso.
Bears of this subspecies appear very frequently in the fairy tales and fables of Europe, in particular, tales collected by Jakob and Wilhelm Grimm. The European brown bear was once common in Germany and alpine lands like Northern Italy, Eastern France, and most of Switzerland, and thus appears in tales of various dialects of German.
The bear is traditionally regarded as the symbol of Russian (military and political) might. It is also Finland's national animal; and in Croatia, a brown bear is depicted on the reverse of the Croatian 5 kuna coin, minted from 1993 to 2023.
| Biology and health sciences | Bears | Animals |
2578372 | https://en.wikipedia.org/wiki/Atropisomer | Atropisomer | Atropisomers are stereoisomers arising because of hindered rotation about a single bond, where energy differences due to steric strain or other contributors create a barrier to rotation that is high enough to allow for isolation of individual rotamers.
They occur naturally and are of occasional importance in pharmaceutical design. When the substituents are achiral, these conformers are enantiomers (atropoenantiomers), showing axial chirality; otherwise they are diastereomers (atropodiastereomers).
Etymology and history
The word atropisomer (from Greek, meaning "not to be turned") was coined in application to a theoretical concept by German biochemist Richard Kuhn for Karl Freudenberg's seminal Stereochemie volume in 1933. Atropisomerism was first experimentally detected in a tetra-substituted biphenyl, a diacid, by George Christie and James Kenner in 1922. Michinori Ōki further refined the definition of atropisomers, taking into account the temperature dependence associated with the interconversion of conformers, specifying that atropisomers interconvert with a half-life of at least 1000 seconds at a given temperature, corresponding to an energy barrier of 93 kJ mol−1 (22 kcal mol−1) at 300 K (27 °C).
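As a rough consistency check of this criterion (a sketch assuming transition-state theory with a transmission coefficient of 1, not a statement from the original definition), the Eyring equation gives, for ΔG‡ = 93 kJ mol−1 at T = 300 K,
$$k = \frac{k_\mathrm{B}T}{h}\,e^{-\Delta G^{\ddagger}/RT} \approx (6.2 \times 10^{12}\ \mathrm{s^{-1}})\,e^{-93000/(8.314 \times 300)} \approx 4 \times 10^{-4}\ \mathrm{s^{-1}},$$
so the half-life t1/2 = ln 2 / k is on the order of 10³ s, consistent with the 1000-second threshold quoted above.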
Energetics
The stability of individual atropisomers is conferred by the repulsive interactions that inhibit rotation. Both the steric bulk and, in principle, the length and rigidity of the bond connecting the two subunits contribute. Commonly, atropisomerism is studied by dynamic nuclear magnetic resonance spectroscopy, since atropisomerism is a form of fluxionality. Inferences from theory and results of reaction outcomes and yields also contribute.
Atropisomers exhibit axial chirality (planar chirality). When the barrier to racemization is high, as illustrated by the BINAP ligands, the phenomenon becomes of practical value in asymmetric synthesis. Methaqualone, the anxiolytic and hypnotic-sedative, is a classical example of a drug molecule that exhibits the phenomenon of atropisomerism.
Most examples of atropisomerism focus on derivatives or analogues of biphenyl. Some acyclic systems, such as amides and especially thioamides, also exhibit the phenomenon owing to the partial double-bond character of the C–N bonds in these systems.
Stereochemical assignment
Determining the axial stereochemistry of biaryl atropisomers can be accomplished through the use of a Newman projection along the axis of hindered rotation. The ortho, and in some cases meta, substituents are first assigned priority based on Cahn–Ingold–Prelog priority rules. One scheme of nomenclature is based on envisioning the helicity defined by these groups. Starting with the substituent of highest priority in the closest ring and moving along the shortest path to the substituent of highest priority in the other ring, the absolute configuration is assigned P or Δ for clockwise and M or Λ for counterclockwise. Alternatively, all four groups can be ranked by Cahn–Ingold–Prelog priority rules, with overall priority given to the two groups on the "front" atom of the Newman projection. The two configurations determined in this way are termed Ra and Sa, in analogy to the R/S notation for a conventional tetrahedral stereocenter.
Synthesis
Axially chiral biaryl compounds are prepared by coupling reactions, e.g., Ullmann coupling, Suzuki–Miyaura reaction, or palladium-catalyzed arylation of arenes. Subsequent to the synthesis, the racemic biaryl is resolved by classical methods. Diastereoselective coupling can be achieved through the use of a chiral bridge that links the two aryl groups or through the use of a chiral auxiliary at one of the positions proximal to axial bridge. Enantioselective coupling can be achieved through the use of a chiral leaving group on one of the biaryls or under oxidative conditions that utilize chiral amines to set the axial configuration.
Individual atropisomers can be isolated by seed-directed crystallization of racemates. Thus, 1,1′-binaphthyl crystallizes from the melt as individual enantiomers.
Scope
In one application the asymmetry in an atropisomer is transferred in a chemical reaction to a new stereocenter. The atropisomer is an iodoaryl compound synthesised starting from (S)-valine and exists as the (M,S) isomer and the (P,S) isomer. The interconversion barrier between the two is 24.3 kcal/mol (101.7 kJ/mol). The (M,S) isomer can be obtained exclusively from this mixture by recrystallisation from hexanes. The iodine group is homolytically removed to form an aryl radical by a tributyltin hydride/triethylboron/oxygen mixture, as in the Barton–McCombie reaction. Although the hindered rotation is now removed in the aryl radical, the intramolecular reaction with the alkene is so much faster than rotation about the carbon–nitrogen bond that the stereochemistry is preserved. In this way the (M,S) isomer yields the (S,S) dihydroindolone.
The most important class of atropisomers are biaryls such as diphenic acid, which is a derivative of biphenyl with a complete set of ortho substituents. Heteroaromatic analogues of the biphenyl compounds also exist, where hindered rotation occurs about a carbon-nitrogen or a nitrogen-nitrogen bond. Others are dimers of naphthalene derivatives such as 1,1'-bi-2-naphthol. In a similar way, aliphatic ring systems like cyclohexanes linked through a single bond may display atropisomerism provided that bulky substituents are present. The use of axially chiral biaryl compounds such as BINAP, QUINAP and BINOL, have been found to be useful in the area of asymmetric catalysis as chiral ligands.
Their ability to provide stereoinduction has led to use in metal catalyzed hydrogenation, epoxidation, addition, and allylic alkylation reactions. Other reactions that can be catalyzed by the use of chiral biaryl compounds are the Grignard reaction, Ullmann reaction, and the Suzuki reaction. A recent example in the area of chiral biaryl asymmetric catalysis employs a five-membered imidazole as part of the atropisomer scaffold. This specific phosphorus, nitrogen-ligand has been shown to perform enantioselective A3-coupling.
Natural products, drug design
Many atropisomers occur in nature, and some have applications to drug design. The natural product mastigophorene A has been found to aid in nerve growth.
Other examples of naturally occurring atropisomers include vancomycin, isolated from an Actinobacterium, and knipholone, which is found in the roots of Kniphofia foliosa of the family Asphodelaceae. The structural complexity of vancomycin is significant because it can bind with peptides due to the complexity of its stereochemistry, which includes multiple stereocenters and two chiral planes along its stereogenic biaryl axis. Knipholone, with its axial chirality, occurs in nature and has been shown to offer good antimalarial and antitumor activities, particularly in the M form.
The use of atropisomeric drugs provides an additional way for drugs to have stereochemical variations and specificity in design. One example is , a drug that was discovered to aid in chemotherapy cancer treatment.
Telenzepine is atropisomeric in the conformation of its central thienobenzodiazepine ring. The two enantiomers have been resolved, and it was found that the (+)-isomer is about 500-fold more active than the (–)-isomer at muscarinic receptors in rat cerebral cortex. However, drug design is not always aided by atropisomerism. In some cases, making drugs from atropisomers is challenging because isomers may interconvert faster than expected. Atropisomers also might interact differently in the body, and as with other types of stereoisomers, it is important to examine these properties before administering drugs to patients.
| Physical sciences | Stereochemistry | Chemistry |
2578624 | https://en.wikipedia.org/wiki/Pseudocereal | Pseudocereal | A pseudocereal or pseudograin is one of any non-grasses that are used in much the same way as cereals (true cereals are grasses). Pseudocereals can be further distinguished from other non-cereal staple crops (such as potatoes) by their being processed like a cereal: their seed can be ground into flour and otherwise used as a cereal. Prominent examples of pseudocereals include amaranth (love-lies-bleeding, red amaranth, Prince-of-Wales-feather), quinoa, and buckwheat. The pseudocereals have a good nutritional profile, with high levels of essential amino acids, essential fatty acids, minerals, and some vitamins. The starch in pseudocereals has small granules and low amylose content (except for buckwheat), which gives it similar properties to waxy-type cereal starches. The functional properties of pseudocereals, such as high viscosity, water-binding capacity, swelling capability, and freeze-thaw stability, are determined by their starch properties and seed morphology. Pseudocereals are gluten-free, and they are used to make 100% gluten-free products, which has increased their popularity.
Common pseudocereals
Acorn
Amaranth (love-lies-bleeding, red amaranth, Prince-of-Wales-feather)
Breadnut
Buckwheat
Cañahua
Chia
Cockscomb (also called quail grass or soko)
Goosefoot
Hanza
Oak
Pitseed goosefoot
Quinoa
Spinach
Wattleseed (also called acacia seed)
Production
This table shows the annual production of some pseudocereals in 1961, 2010, 2011, 2012, and 2013 ranked by 2013 production.
Other grains that are locally important, but are not included in FAO statistics, include:
Amaranth, an ancient pseudocereal, formerly a staple crop of the Aztec Empire, widely grown in Africa.
Kañiwa or Cañahua, close relative of quinoa.
| Biology and health sciences | Pseudocereals | Plants |
2578746 | https://en.wikipedia.org/wiki/Homogeneity%20%28physics%29 | Homogeneity (physics) | In physics, a homogeneous material or system has the same properties at every point; it is uniform without irregularities. A uniform electric field (which has the same strength and the same direction at each point) would be compatible with homogeneity (all points experience the same physics). A material constructed with different constituents can be described as effectively homogeneous in the electromagnetic materials domain, when interacting with a directed radiation field (light, microwave frequencies, etc.).
Mathematically, homogeneity has the connotation of invariance, as all components of the equation have the same degree of value whether or not each of these components is scaled to different values, for example, by multiplication or addition. A cumulative distribution fits this description: "the state of having identical cumulative distribution function or values".
Context
The definition of homogeneous strongly depends on the context used. For example, a composite material is made up of different individual materials, known as "constituents" of the material, but may be defined as a homogeneous material when assigned a function. For example, asphalt paves our roads, but it is a composite material consisting of asphalt binder and mineral aggregate that is laid down in layers and compacted. However, homogeneity of materials does not necessarily mean isotropy. In the previous example, a composite material may not be isotropic.
In another context, a material is not homogeneous in so far as it is composed of atoms and molecules. However, at the normal level of our everyday world, a pane of glass, or a sheet of metal is described as glass, or stainless steel. In other words, these are each described as a homogeneous material.
A few other instances of context are: dimensional homogeneity (see below) is the quality of an equation having quantities of same units on both sides; homogeneity (in space) implies conservation of momentum; and homogeneity in time implies conservation of energy.
Homogeneous alloy
An example in the context of composite metals is an alloy: a blend of a metal with one or more other metallic or nonmetallic materials. The components of an alloy do not combine chemically but, rather, are very finely mixed. An alloy might be homogeneous or might contain small particles of components that can be viewed with a microscope. Brass is an example of an alloy, being a homogeneous mixture of copper and zinc. Another example is steel, which is an alloy of iron with carbon and possibly other metals. The purpose of alloying is to produce desired properties in a metal that naturally lacks them. Brass, for example, is harder than copper and has a more gold-like color. Steel is harder than iron and can even be made rust proof (stainless steel).
Homogeneous cosmology
Homogeneity, in another context, plays a role in cosmology. From the perspective of 19th-century cosmology (and before), the universe was infinite, unchanging, homogeneous, and therefore filled with stars. However, German astronomer Heinrich Olbers asserted that if this were true, then the entire night sky would be filled with light and be as bright as day; this is known as Olbers' paradox. Olbers presented a technical paper in 1826 that attempted to answer this conundrum. The faulty premise, unknown in Olbers' time, was that the universe is infinite, static, and homogeneous. The Big Bang cosmology replaced this model (expanding, finite, and inhomogeneous universe). However, modern astronomers supply reasonable explanations to answer this question. One of at least several explanations is that distant stars and galaxies are red shifted, which weakens their apparent light and makes the night sky dark. However, the weakening is not sufficient to actually explain Olbers' paradox. Many cosmologists think that the fact that the Universe is finite in time, that is, that the Universe has not been around forever, is the solution to the paradox. The fact that the night sky is dark is thus an indication of the Big Bang.
Translation invariance
By translation invariance, one means independence of (absolute) position, especially when referring to a law of physics, or to the evolution of a physical system.
Fundamental laws of physics should not (explicitly) depend on position in space. That would make them quite useless. In some sense, this is also linked to the requirement that experiments should be reproducible.
This principle is true for all laws of mechanics (Newton's laws, etc.), electrodynamics, quantum mechanics, etc.
In practice, this principle is usually violated, since one studies only a small subsystem of the universe, which of course "feels" the influence of the rest of the universe. This situation gives rise to "external fields" (electric, magnetic, gravitational, etc.) which make the description of the evolution of the system depend upon its position (potential wells, etc.). This only stems from the fact that the objects creating these external fields are not considered as (a "dynamical") part of the system.
Translational invariance as described above is equivalent to shift invariance in system analysis, although here it is most commonly used in linear systems, whereas in physics the distinction is not usually made.
The notion of isotropy, for properties independent of direction, is not a consequence of homogeneity. For example, a uniform electric field (i.e., which has the same strength and the same direction at each point) would be compatible with homogeneity (at each point physics will be the same), but not with isotropy, since the field singles out one "preferred" direction.
Consequences
In the Lagrangian formalism, homogeneity in space implies conservation of momentum, and homogeneity in time implies conservation of energy. This is shown, using variational calculus, in standard textbooks like the classical reference text of Landau & Lifshitz. This is a particular application of Noether's theorem.
Dimensional homogeneity
As said in the introduction, dimensional homogeneity is the quality of an equation having quantities of same units on both sides. A valid equation in physics must be homogeneous, since equality cannot apply between quantities of different nature. This can be used to spot errors in formula or calculations. For example, if one is calculating a speed, units must always combine to [length]/[time]; if one is calculating an energy, units must always combine to [mass][length]2/[time]2, etc. For example, the following formulae could be valid expressions for some energy:
if m is a mass, v and c are velocities, p is a momentum, h is the Planck constant, λ a length. On the other hand, if the units of the right hand side do not combine to [mass][length]2/[time]2, it cannot be a valid expression for some energy.
Being homogeneous does not necessarily mean the equation will be true, since it does not take numerical factors into account. For example, an expression with the correct units may or may not be the correct formula for the energy of a particle of mass m traveling at speed v, and one cannot know whether hc/λ should be divided or multiplied by 2π.
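As a worked illustration (using the symbols defined above), the expression hc/λ has the units of an energy:
$$\left[\frac{hc}{\lambda}\right] = \frac{(\mathrm{kg\,m^{2}\,s^{-1}})\,(\mathrm{m\,s^{-1}})}{\mathrm{m}} = \mathrm{kg\,m^{2}\,s^{-2}} = [\text{mass}][\text{length}]^{2}/[\text{time}]^{2},$$
yet exactly the same check is passed by hc/(2πλ), which is why the numerical factor must be supplied by the physics rather than by dimensional analysis.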
Nevertheless, this is a very powerful tool in finding characteristic units of a given problem, see dimensional analysis.
| Physical sciences | Physics basics: General | Physics |
2579168 | https://en.wikipedia.org/wiki/Scolopendra%20gigantea | Scolopendra gigantea | Scolopendra gigantea, also known as the Peruvian giant yellow-leg centipede or Amazonian giant centipede, is a centipede in the genus Scolopendra. It is the largest centipede species in the world, with a length exceeding . Specimens may have 21 or 23 segments. It is found in various places throughout South America and the extreme south Caribbean, where it preys on a wide variety of animals, including other sizable arthropods, amphibians, mammals and reptiles.
Distribution and habitat
It is naturally found in northern South America. Countries from which verified museum specimens have been collected include Aruba, Brazil, Curaçao, Colombia, Venezuela (including Margarita Island) and Trinidad. Records from Saint Thomas, U.S. Virgin Islands, Hispaniola (both Haiti and the Dominican Republic), Mexico, Puerto Rico and Honduras are assumed to be accidental introductions or labelling errors.
Scolopendra gigantea can be found in tropical or sub-tropical rainforest and tropical dry forest, in dark, moist places such as in leaf litter or under rocks.
Behavior and diet
It is a carnivore that feeds on any other animal it can overpower and kill. It is capable of overpowering not only other invertebrates such as large insects, worms, snails, spiders, millipedes, scorpions, and even tarantulas, but also small vertebrates including small lizards, frogs (up to long), snakes (up to long), sparrow-sized birds, mice, and bats. Large individuals of S. gigantea have been known to employ unique strategies to catch bats, relying on their muscular strength: they climb cave ceilings and hold or manipulate their heavier prey with only a few legs attached to the ceiling. Natural predators of the giant centipede include large birds, spiders and arthropod-hunting mammals, including the coati, kinkajou and opossum.
Venom
At least one human death has been attributed to the venom of S. gigantea. In 2014, a four-year-old child in Venezuela died after being bitten by a giant centipede which was hidden inside an open soda can. Researchers at Universidad de Oriente later confirmed the specimen to be S. gigantea.
| Biology and health sciences | Myriapoda | Animals |
2581444 | https://en.wikipedia.org/wiki/Spring%20steel | Spring steel | Spring steel is a name given to a wide range of steels used in the manufacture of different products, including swords, saw blades, springs and many more. These steels are generally low-alloy manganese, medium-carbon steel or high-carbon steel with a very high yield strength. This allows objects made of spring steel to return to their original shape despite significant deflection or twisting.
Grades
Many grades of steel can be hardened and tempered to increase elasticity and resist deformation; however, some steels are inherently more elastic than others:
Applications
Applications include piano wire (also known as music wire) such as ASTM A228 (0.80–0.95% carbon), spring clamps, antennas, springs (e.g. vehicle coil springs or leaf springs), and s-tines.
Spring steel is commonly used in the manufacture of swords with rounded edges for training or stage combat, as well as sharpened swords for collectors and live combat.
Spring steel is one of the most popular materials used in the fabrication of lockpicks due to its pliability and resilience.
Tubular spring steel is used in the landing gear of some small aircraft due to its ability to absorb the impact of landing.
It is frequently used in the making of knives, machetes, and other edged tools.
It is a key component in electrician's fish tape.
It is used in binder clips.
Used extensively in shims due to its resistance to deformation in low thicknesses.
| Physical sciences | Iron alloys | Chemistry |
2581605 | https://en.wikipedia.org/wiki/Concurrent%20computing | Concurrent computing | Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts.
This is a property of a system—whether a program, computer, or a network—where there is a separate execution point or "thread of control" for each process. A concurrent system is one where a computation can advance without waiting for all other computations to complete.
Concurrent computing is a form of modular programming. In its paradigm an overall computation is factored into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare.
Introduction
The concept of concurrent computing is frequently confused with the related but distinct concept of parallel computing, although both can be described as "multiple processes executing during the same period of time". In parallel computing, execution occurs at the same physical instant: for example, on separate processors of a multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle). By contrast, concurrent computing consists of process lifetimes overlapping, but execution does not happen at the same instant. The goal here is to model processes that happen concurrently, like multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.
For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via time-sharing slices: only one process runs at a time, and if it does not complete during its time slice, it is paused, another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.
Concurrent computations may be executed in parallel, for example, by assigning each process to a separate processor or processor core, or distributing a computation across a network.
The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2:
T1 may be executed and finished before T2 or vice versa (serial and sequential)
T1 and T2 may be executed alternately (serial and concurrent)
T1 and T2 may be executed simultaneously at the same instant of time (parallel and concurrent)
The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, concurrent/sequential and parallel/serial are used as opposing pairs. A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called a serial schedule. A set of tasks that can be scheduled serially is serializable, which simplifies concurrency control.
Coordinating access to shared resources
The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions. Potential problems include race conditions, deadlocks, and resource starvation. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resource balance:
bool withdraw(int withdrawal)
{
    if (balance >= withdrawal)
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}
Suppose balance = 500, and two concurrent threads make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, or non-blocking algorithms.
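One standard remedy is to make the check and the update a single critical section, so that no other thread can observe or modify balance between them. The following is a minimal sketch in C++; it mirrors the pseudocode above, and the names balance_mutex, t1 and t2 are illustrative rather than part of the original example:

#include <iostream>
#include <mutex>
#include <thread>

int balance = 500;          // shared resource
std::mutex balance_mutex;   // guards every access to balance

bool withdraw(int withdrawal)
{
    // The comparison and the subtraction now form one atomic step
    // with respect to any other thread that also locks balance_mutex.
    std::lock_guard<std::mutex> lock(balance_mutex);
    if (balance >= withdrawal)
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}

int main()
{
    std::thread t1([] { withdraw(300); });
    std::thread t2([] { withdraw(350); });
    t1.join();
    t2.join();
    std::cout << "final balance: " << balance << '\n'; // 200 or 150, never negative
}

With the lock held across both the comparison and the subtraction, at most one of the two competing withdrawals can succeed, so the balance can never become negative.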
Advantages
The advantages of concurrent computing include:
Increased program throughput—parallel execution of a concurrent algorithm allows the number of tasks completed in a given time to increase proportionally to the number of processors according to Gustafson's law
High responsiveness for input/output—input/output-intensive programs mostly wait for input or output operations to complete. Concurrent programming allows the time that would be spent waiting to be used for another task.
More appropriate program structure—some problems and problem domains are well-suited to representation as concurrent tasks or processes. For example MVCC.
Models
Introduced in 1962, Petri nets were an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, and Dataflow architectures were created to physically implement the ideas of dataflow theory. Beginning in the late 1970s, process calculi such as Calculus of Communicating Systems (CCS) and Communicating Sequential Processes (CSP) were developed to permit algebraic reasoning about systems composed of interacting components. The π-calculus added the capability for reasoning about dynamic topologies.
Input/output automata were introduced in 1987.
Logics such as Lamport's TLA+, and mathematical models such as traces and Actor event diagrams, have also been developed to describe the behavior of concurrent systems.
Software transactional memory borrows from database theory the concept of atomic transactions and applies them to memory accesses.
Consistency models
Concurrent programming languages and multiprocessor programs must have a consistency model (also known as a memory model). The consistency model defines rules for how operations on computer memory occur and how results are produced.
One of the first consistency models was Leslie Lamport's sequential consistency model. Sequential consistency is the property of a program that its execution produces the same results as a sequential program. Specifically, a program is sequentially consistent if "the results of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program".
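A common way to see what sequential consistency rules out is the following litmus test, sketched here in C++ (the use of std::atomic with relaxed memory ordering is an assumption made to express the weaker behaviour; this example is not from the original text). Under a sequentially consistent execution at least one of the two loads must observe the other thread's store, so the outcome (0, 0) is impossible; with relaxed ordering it is allowed.

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

void thread_a()
{
    x.store(1, std::memory_order_relaxed);
    r1 = y.load(std::memory_order_relaxed);
}

void thread_b()
{
    y.store(1, std::memory_order_relaxed);
    r2 = x.load(std::memory_order_relaxed);
}

int main()
{
    std::thread ta(thread_a), tb(thread_b);
    ta.join();
    tb.join();
    // Sequential consistency forbids (r1, r2) == (0, 0);
    // relaxed ordering permits it on weakly ordered hardware.
    std::cout << r1 << ' ' << r2 << '\n';
}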
Implementation
A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.
Interaction and communication
In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes:
Shared memory communication Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually needs the use of some form of locking (e.g., mutexes, semaphores, or monitors) to coordinate between threads. A program that properly implements any of these is said to be thread-safe.
Message passing communication Concurrent components communicate by exchanging messages (exemplified by MPI, Go, Scala, Erlang and occam). The exchange of messages may be carried out asynchronously, or may use a synchronous "rendezvous" style in which the sender blocks until the message is received. Asynchronous message passing may be reliable or unreliable (sometimes referred to as "send and pray"). Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust form of concurrent programming. A wide variety of mathematical theories to understand and analyze message-passing systems are available, including the actor model, and various process calculi. Message passing can be efficiently implemented via symmetric multiprocessing, with or without shared memory cache coherence.
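To make the contrast concrete, the following is a minimal, hand-rolled "channel" sketch in C++ (the Channel class and its send/receive methods are illustrative, not a standard library API): a producer thread sends integers and a consumer thread blocks until each message arrives, so the two threads share no data other than the channel itself.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// A tiny unbounded channel: a queue guarded by a mutex, plus a
// condition variable so that receive() blocks until a message exists.
template <typename T>
class Channel {
    std::queue<T> items;
    std::mutex m;
    std::condition_variable cv;
public:
    void send(T value) {
        {
            std::lock_guard<std::mutex> lock(m);
            items.push(std::move(value));
        }
        cv.notify_one();
    }
    T receive() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !items.empty(); });
        T value = std::move(items.front());
        items.pop();
        return value;
    }
};

int main()
{
    Channel<int> ch;
    std::thread producer([&] { for (int i = 0; i < 3; ++i) ch.send(i); });
    std::thread consumer([&] {
        for (int i = 0; i < 3; ++i)
            std::cout << "received " << ch.receive() << '\n';
    });
    producer.join();
    consumer.join();
}

Languages with built-in message passing (Go channels, Erlang mailboxes) provide this pattern as a primitive; here it is layered on top of shared memory, which illustrates the point that either communication style can be implemented in terms of the other.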
Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors.
History
Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s).
The academic study of concurrent algorithms started in the 1960s, with credited with being the first paper in this field, identifying and solving mutual exclusion.
Prevalence
Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks. Examples follow.
At the programming language level:
Channel
Coroutine
Futures and promises
At the operating system level:
Computer multitasking, including both cooperative multitasking and preemptive multitasking
Time-sharing, which replaced sequential batch processing of jobs with concurrent use of a system
Process
Thread
At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices.
Languages supporting concurrent programming
Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures and promises. Such languages are sometimes described as concurrency-oriented languages or concurrency-oriented programming languages (COPL).
Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.
Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. A non-exhaustive list of languages which use or provide concurrent programming facilities:
Ada—general purpose, with native support for message passing and monitor based concurrency
Alef—concurrent, with threads and message passing, for system programming in early versions of Plan 9 from Bell Labs
Alice—extension to Standard ML, adds support for concurrency via futures
Ateji PX—extension to Java with parallel primitives inspired from π-calculus
Axum—domain specific, concurrent, based on actor model and .NET Common Language Runtime using a C-like syntax
BMDFM—Binary Modular DataFlow Machine
C++—thread and coroutine support libraries
Cω (C omega)—for research, extends C#, uses asynchronous communication
C#—supports concurrent computing using threads, locks, and tasks, with the async and await keywords introduced in version 5.0
Clojure—modern, functional dialect of Lisp on the Java platform
Concurrent Clean—functional programming, similar to Haskell
Concurrent Collections (CnC)—Achieves implicit parallelism independent of memory model by explicitly defining flow of data and control
Concurrent Haskell—lazy, pure functional language operating concurrent processes on shared memory
Concurrent ML—concurrent extension of Standard ML
Concurrent Pascal—by Per Brinch Hansen
Curry
D—multi-paradigm system programming language with explicit support for concurrent programming (actor model)
E—uses promises to preclude deadlocks
ECMAScript—uses promises for asynchronous operations
Eiffel—through its SCOOP mechanism based on the concepts of Design by Contract
Elixir—dynamic and functional meta-programming aware language running on the Erlang VM.
Erlang—uses synchronous or asynchronous message passing with no shared memory
FAUST—real-time functional, for signal processing, compiler provides automatic parallelization via OpenMP or a specific work-stealing scheduler
Fortran—coarrays and do concurrent are part of Fortran 2008 standard
Go—for system programming, with a concurrent programming model based on CSP
Haskell—concurrent, and parallel functional programming language
Hume—functional, concurrent, for bounded space and time environments where automata processes are described by synchronous channels patterns and message passing
Io—actor-based concurrency
Janus—features distinct askers and tellers to logical variables, bag channels; is purely declarative
Java—thread class or Runnable interface
Julia—"concurrent programming primitives: Tasks, async-wait, Channels."
JavaScript—via web workers, in a browser environment, promises, and callbacks.
JoCaml—concurrent and distributed channel based, extension of OCaml, implements the join-calculus of processes
Join Java—concurrent, based on Java language
Joule—dataflow-based, communicates by message passing
Joyce—concurrent, teaching, built on Concurrent Pascal with features from CSP by Per Brinch Hansen
LabVIEW—graphical, dataflow, functions are nodes in a graph, data is wires between the nodes; includes object-oriented language
Limbo—relative of Alef, for system programming in Inferno (operating system)
Locomotive BASIC—Amstrad variant of BASIC contains EVERY and AFTER commands for concurrent subroutines
MultiLisp—Scheme variant extended to support parallelism
Modula-2—for system programming, by N. Wirth as a successor to Pascal with native support for coroutines
Modula-3—modern member of Algol family with extensive support for threads, mutexes, condition variables
Newsqueak—for research, with channels as first-class values; predecessor of Alef
occam—influenced heavily by communicating sequential processes (CSP)
occam-π—a modern variant of occam, which incorporates ideas from Milner's π-calculus
ooRexx—object-based, message exchange for communication and synchronization
Orc—heavily concurrent, nondeterministic, based on Kleene algebra
Oz-Mozart—multiparadigm, supports shared-state and message-passing concurrency, and futures
ParaSail—object-oriented, parallel, free of pointers, race conditions
PHP—multithreading support with parallel extension implementing message passing inspired from Go
Pict—essentially an executable implementation of Milner's π-calculus
Python—uses thread-based parallelism and process-based parallelism
Raku—includes classes for threads, promises and channels by default
Reia—uses asynchronous message passing between shared-nothing objects
Red/System—for system programming, based on Rebol
Rust—for system programming, using message-passing with move semantics, shared immutable memory, and shared mutable memory.
Scala—general purpose, designed to express common programming patterns in a concise, elegant, and type-safe way
SequenceL—general purpose functional, main design objectives are ease of programming, code clarity-readability, and automatic parallelization for performance on multicore hardware, and provably free of race conditions
SR—for research
SuperPascal—concurrent, for teaching, built on Concurrent Pascal and Joyce by Per Brinch Hansen
Swift—built-in support for writing asynchronous and parallel code in a structured way
Unicon—for research
TNSDL—for developing telecommunication exchanges, uses asynchronous message passing
VHSIC Hardware Description Language (VHDL)—IEEE STD-1076
XC—concurrency-extended subset of C language developed by XMOS, based on communicating sequential processes, built-in constructs for programmable I/O
Many other languages provide support for concurrency in the form of libraries, at levels roughly comparable with the above list.
| Technology | Computer science | null |
2582879 | https://en.wikipedia.org/wiki/Vacuum%20permittivity | Vacuum permittivity | Vacuum permittivity, commonly denoted ε0 (pronounced "epsilon nought" or "epsilon zero"), is the value of the absolute dielectric permittivity of classical vacuum. It may also be referred to as the permittivity of free space, the electric constant, or the distributed capacitance of the vacuum. It is an ideal (baseline) physical constant. Its CODATA value is ε0 = 8.8541878128(13) × 10−12 F⋅m−1 (farads per metre).
It is a measure of how dense of an electric field is "permitted" to form in response to electric charges and relates the units for electric charge to mechanical quantities such as length and force. For example, the force between two separated electric charges with spherical symmetry (in the vacuum of classical electromagnetism) is given by Coulomb's law:
$$F_\text{C} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}.$$
Here, q1 and q2 are the charges, r is the distance between their centres, and the value of the constant fraction 1/(4πε0) is approximately 9 × 109 N⋅m2⋅C−2. Likewise, ε0 appears in Maxwell's equations, which describe the properties of electric and magnetic fields and electromagnetic radiation, and relate them to their sources. In electrical engineering, ε0 itself is used as a unit to quantify the permittivity of various dielectric materials.
Value
The value of ε0 is defined by the formula
ε0 = 1/(μ0c²)
where c is the defined value for the speed of light in classical vacuum in SI units, and μ0 is the parameter that international standards organizations refer to as the magnetic constant (also called vacuum permeability or the permeability of free space). Since μ0 has an approximate value 4π × 10⁻⁷ H/m, and c has the defined value 299,792,458 m/s, it follows that ε0 can be expressed numerically as
ε0 ≈ 8.854 × 10⁻¹² F⋅m⁻¹ (farads per metre).
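The defining relation can be checked numerically. The short calculation below is a sketch using the values quoted in this section; it recovers both ε0 and the Coulomb-law constant 1/(4πε0) mentioned above.

# Numerical check of epsilon_0 = 1/(mu_0 * c^2) using the values quoted above.
import math

c = 299_792_458.0            # speed of light in vacuum, m/s (exact by definition)
mu_0 = 4 * math.pi * 1e-7    # vacuum permeability, H/m (approximate value quoted above)

epsilon_0 = 1.0 / (mu_0 * c**2)                      # vacuum permittivity, F/m
coulomb_constant = 1.0 / (4 * math.pi * epsilon_0)   # N·m²/C²

print(f"epsilon_0 ~ {epsilon_0:.6e} F/m")                      # ~ 8.854e-12
print(f"1/(4*pi*epsilon_0) ~ {coulomb_constant:.4e} N·m²/C²")  # ~ 8.988e9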
The historical origins of the electric constant ε0, and its value, are explained in more detail below.
Revision of the SI
The ampere was redefined by defining the elementary charge as an exact number of coulombs as from 20 May 2019, with the effect that the vacuum electric permittivity no longer has an exactly determined value in SI units. The value of the electron charge became a numerically defined quantity, not measured, making μ0 a measured quantity. Consequently, ε0 is not exact. As before, it is defined by the equation ε0 = 1/(μ0c²), and is thus determined by the value of μ0, the magnetic vacuum permeability, which in turn is determined by the experimentally determined dimensionless fine-structure constant α:
μ0 = 2αh/(e²c)
with e being the elementary charge, h being the Planck constant, and c being the speed of light in vacuum, each with exactly defined values. The relative uncertainty in the value of ε0 is therefore the same as that for the dimensionless fine-structure constant, on the order of 10⁻¹⁰.
Terminology
Historically, the parameter ε0 has been known by many different names. The terms "vacuum permittivity" or its variants, such as "permittivity in/of vacuum", "permittivity of empty space", or "permittivity of free space" are widespread. Standards organizations also use "electric constant" as a term for this quantity.
Another historical synonym was "dielectric constant of vacuum", as "dielectric constant" was sometimes used in the past for the absolute permittivity. However, in modern usage "dielectric constant" typically refers exclusively to a relative permittivity ε/ε0 and even this usage is considered "obsolete" by some standards bodies in favor of relative static permittivity. Hence, the term "dielectric constant of vacuum" for the electric constant ε0 is considered obsolete by most modern authors, although occasional examples of continuing usage can be found.
As for notation, the constant can be denoted by either ε0 or ϵ0, using either of the common glyphs for the letter epsilon.
Historical origin of the parameter ε0
As indicated above, the parameter ε0 is a measurement-system constant. Its presence in the equations now used to define electromagnetic quantities is the result of the so-called "rationalization" process described below. But the method of allocating a value to it is a consequence of the result that Maxwell's equations predict that, in free space, electromagnetic waves move with the speed of light. Understanding why ε0 has the value it does requires a brief understanding of the history.
Rationalization of units
The experiments of Coulomb and others showed that the force F between two, equal, point-like "amounts" of electricity that are situated a distance r apart in free space, should be given by a formula that has the form
F = keQ²/r²
where Q is a quantity that represents the amount of electricity present at each of the two points, and ke depends on the units. If one is starting with no constraints, then the value of ke may be chosen arbitrarily. For each different choice of ke there is a different "interpretation" of Q: to avoid confusion, each different "interpretation" has to be allocated a distinctive name and symbol.
In one of the systems of equations and units agreed in the late 19th century, called the "centimetre–gram–second electrostatic system of units" (the cgs esu system), the constant ke was taken equal to 1, and a quantity now called "Gaussian electric charge" qs was defined by the resulting equation
F = qs²/r²
The unit of Gaussian charge, the statcoulomb, is such that two units, at a distance of 1 centimetre apart, repel each other with a force equal to the cgs unit of force, the dyne. Thus, the unit of Gaussian charge can also be written 1 dyne1/2⋅cm. "Gaussian electric charge" is not the same mathematical quantity as modern (MKS and subsequently the SI) electric charge and is not measured in coulombs.
The idea subsequently developed that it would be better, in situations of spherical geometry, to include a factor 4π in equations like Coulomb's law, and write it in the form:
F = ke′qs′²/(4πr²)
This idea is called "rationalization". The quantities qs′ and ke′ are not the same as those in the older convention. Putting ke′ = 1 generates a unit of electricity of different size, but it still has the same dimensions as the cgs esu system.
The next step was to treat the quantity representing "amount of electricity" as a fundamental quantity in its own right, denoted by the symbol q, and to write Coulomb's law in its modern form:
F = q²/(4πε0r²)
The system of equations thus generated is known as the rationalized metre–kilogram–second (RMKS) equation system, or "metre–kilogram–second–ampere (MKSA)" equation system. The new quantity q is given the name "RMKS electric charge", or (nowadays) just "electric charge". The quantity qs used in the old cgs esu system is related to the new quantity q by:
qs = q/√(4πε0)
In the 2019 revision of the SI, the elementary charge is fixed at 1.602176634 × 10⁻¹⁹ C and the value of the vacuum permittivity must be determined experimentally.
Determination of a value for ε0
One now adds the requirement that one wants force to be measured in newtons, distance in metres, and charge to be measured in the engineers' practical unit, the coulomb, which is defined as the charge accumulated when a current of 1 ampere flows for one second. This shows that the parameter ε0 should be allocated the unit C2⋅N−1⋅m−2 (or an equivalent unit – in practice, farad per metre).
In order to establish the numerical value of ε0, one makes use of the fact that if one uses the rationalized forms of Coulomb's law and Ampère's force law (and other ideas) to develop Maxwell's equations, then the relationship stated above is found to exist between ε0, μ0 and c0. In principle, one has a choice of deciding whether to make the coulomb or the ampere the fundamental unit of electricity and magnetism. The decision was taken internationally to use the ampere. This means that the value of ε0 is determined by the values of c0 and μ0, as stated above. For a brief explanation of how the value of μ0 is decided, see Vacuum permeability.
Permittivity of real media
By convention, the electric constant ε0 appears in the relationship that defines the electric displacement field D in terms of the electric field E and classical electrical polarization density P of the medium. In general, this relationship has the form:
D = ε0E + P
For a linear dielectric, P is assumed to be proportional to E, but a delayed response is permitted and a spatially non-local response, so one has:
P(r, t) = ε0 ∫ d³r′ ∫ dt′ χ(r − r′, t − t′) E(r′, t′)
where χ is the electric susceptibility.
In the event that nonlocality and delay of response are not important, the result is:
D = εE = εrε0E
where ε is the permittivity and εr the relative static permittivity. In the vacuum of classical electromagnetism, the polarization P = 0, so εr = 1 and ε = ε0.
| Physical sciences | Physical constants | Physics |
3524192 | https://en.wikipedia.org/wiki/NOR%20gate | NOR gate | The NOR gate is a digital logic gate that implements logical NOR: it behaves according to the truth table to the right. A HIGH output (1) results if both the inputs to the gate are LOW (0); if one or both inputs are HIGH (1), a LOW output (0) results. NOR is the result of the negation of the OR operator. It can also in some senses be seen as the inverse of an AND gate. NOR is a functionally complete operation—NOR gates can be combined to generate any other logical function. It shares this property with the NAND gate. By contrast, the OR operator is monotonic as it can only change LOW to HIGH but not vice versa.
In most, but not all, circuit implementations, the negation comes for free—including CMOS and TTL. In such logic families, OR is the more complicated operation; it may use a NOR followed by a NOT. A significant exception is some forms of the domino logic family.
Symbols
There are three symbols for NOR gates: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol, as well as the deprecated DIN symbol. For more information see Logic Gate Symbols. The ANSI symbol for the NOR gate is a standard OR gate with an inversion bubble connected.
The bubble indicates that the function of the OR gate has been inverted.
Hardware description and pinout
NOR gates are basic logic gates, and as such they are recognised in TTL and CMOS ICs. The standard, 4000 series, CMOS IC is the 4001, which includes four independent, two-input, NOR gates. The pinout diagram is as follows:
Availability
These devices are available from most semiconductor manufacturers such as Fairchild Semiconductor, Philips or Texas Instruments. These are usually available in both through-hole DIP and SOIC format. Datasheets are readily available in most datasheet databases.
In the popular CMOS and TTL logic families, NOR gates with up to 8 inputs are available:
CMOS
4001: Quad 2-input NOR gate
4025: Triple 3-input NOR gate
4002: Dual 4-input NOR gate
4078: Single 8-input NOR gate
TTL
7402: Quad 2-input NOR gate
7427: Triple 3-input NOR gate
7425: Dual 4-input NOR gate (with strobe, obsolete)
74260: Dual 5-input NOR gate
744078: Single 8-input NOR gate
In the older RTL and ECL families, NOR gates were efficient and most commonly used.
Implementations
The left diagram above shows the construction of a 2-input NOR gate using NMOS logic circuitry. If either of the inputs is high, the corresponding N-channel MOSFET is turned on and the output is pulled low; otherwise the output is pulled high through the pull-up resistor. In the CMOS implementation on the right, the function of the pull-up resistor is implemented by the two p-type transistors in series on the top.
In CMOS, NOR gates are less efficient than NAND gates. This is due to the faster charge mobility in n-MOSFETs compared to p-MOSFETs, so that the parallel connection of two p-MOSFETs realised in the NAND gate is more favourable than their series connection in the NOR gate. For this reason, NAND gates are generally preferred over NOR gates in CMOS circuits.
Functional completeness
The NOR gate has the property of functional completeness, which it shares with the NAND gate. That is, any other logic function (AND, OR, etc.) can be implemented using only NOR gates. An entire processor can be created using NOR gates alone. The original Apollo Guidance Computer used 4,100 integrated circuits (IC), each one containing only two 3-input NOR gates.
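As a small illustration of this functional completeness, the sketch below builds NOT, OR and AND purely from a two-input NOR function and checks them against Python's built-in Boolean operators. The helper names are hypothetical and chosen only for this example.

# Building NOT, OR and AND from NOR alone, illustrating functional completeness.
def nor(a, b):
    return not (a or b)

def not_(a):
    return nor(a, a)                    # NOT x == x NOR x

def or_(a, b):
    return nor(nor(a, b), nor(a, b))    # OR == NOT(NOR)

def and_(a, b):
    return nor(nor(a, a), nor(b, b))    # AND == NOR of the two negations

for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)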
As NAND gates are also functionally complete, if no specific NOR gates are available, one can be made from NAND gates using NAND logic.
| Technology | Digital logic | null |
3524381 | https://en.wikipedia.org/wiki/Human%20back | Human back | The human back, also called the dorsum (plural: dorsa), is the large posterior area of the human body, rising from the top of the buttocks to the back of the neck. It is the surface of the body opposite from the chest and the abdomen. The vertebral column runs the length of the back and creates a central area of recession. The breadth of the back is created by the shoulders at the top and the pelvis at the bottom.
Back pain is a common medical condition, generally benign in origin.
Structure
The central feature of the human back is the vertebral column, specifically the length from the top of the thoracic vertebrae to the bottom of the lumbar vertebrae, which houses the spinal cord in its spinal canal, and which generally has some curvature that gives shape to the back. The ribcage extends from the spine at the top of the back (with the top of the ribcage corresponding to the T1 vertebra), more than halfway down the length of the back, leaving an area with less protection between the bottom of the ribcage and the hips. The width of the back at the top is defined by the scapula, the broad, flat bones of the shoulders.
Muscles
The muscles of the back can be divided into three distinct groups; a superficial group, an intermediate group and a deep group.
Superficial group
The superficial group, also known as the appendicular group, is primarily associated with movement of the appendicular skeleton. It is composed of trapezius, latissimus dorsi, rhomboid major, rhomboid minor and levator scapulae. It is innervated by anterior rami of spinal nerves, reflecting its embryological origin outside the back.
Intermediate group
The intermediate group is also known as respiratory group as it may serve a respiratory function. It is composed of serratus posterior superior and serratus posterior inferior. Like the superficial group, it is innervated by anterior rami of spinal nerves.
Deep group
The deep group, also known as the intrinsic group due to its embryological origin in the back, can be further subdivided into four groups:
Spinotransversales composed of splenius capitis and splenius cervicis.
Erector spinae composed of iliocostalis, longissimus and spinalis
Transversospinales composed of semispinalis, multifidus and rotatores
Segmental muscles composed of levatores costarum, interspinales and intertransversarii
The deep group is innervated by the posterior rami of spinal nerves.
Organs near the back
The lungs are within the ribcage, and extend to the back of the ribcage, making it possible for them to be listened to through the back. The kidneys are situated beneath the muscles in the area below the end of the ribcage, loosely connected to the peritoneum. A strike to the lower back can damage the kidneys of the person being hit.
Surface of the back
The skin of the human back is thicker and has fewer nerve endings than the skin on any other part of the torso. With some notable exceptions (see, e.g., George "the Animal" Steele), it tends to have less hair than the chest on men. The upper-middle back is also the one area of the body which a typical human under normal conditions might be unable to physically touch.
The skin of the back is innervated by the dorsal cutaneous branches, as well as the lateral abdominal cutaneous branches of intercostal nerves.
Movement
The intricate anatomy of the back provides support for the head and trunk of the body, strength in the trunk of the body, as well as a great deal of flexibility and movement. The upper back has the most structural support, with the ribs attached firmly to each level of the thoracic spine and very limited movement. The lower back (lumbar vertebrae) allows for flexibility and movement in back bending (extension) and forward bending (flexion). It does not permit twisting.
Clinical significance
Back pain
The back comprises interconnecting nerves, bones, muscles, ligaments, and tendons, all of which can be a source of pain. Back pain is the second most common type of pain in adults (the most common being headaches). By far the most common cause of back pain is muscle strain. The back muscles can usually heal themselves within a couple of weeks, but the pain can be intense and debilitating. Other common sources of back pain include disc problems, such as degenerative disc disease or a lumbar disc herniation, many types of fractures, such as spondylolisthesis or an osteoporotic fracture, or osteoarthritis.
Society and culture
The curvature of the female back is a frequent theme in paintings, because the sensibilities of many cultures permit the back to be shown nude, implying full nudity without actually displaying it. Indeed, displaying the lower back in this way has been practised for centuries. Certain articles of clothing, such as the halter top and the backless dress, are designed to expose the back in this manner. The lower back is frequently exposed by many types of shirts in women's fashion, and even the more conservative shirts and blouses will reveal the lower back. This happens for a variety of reasons: the lower waist area is a pivot point for the body and lengthens and arches as a person sits or bends, and women's fashion typically favors tops that are waist length, allowing the back to be left bare during slight movement, bending or sitting. The back also serves as the largest canvas for body art on the human body. Because of its size and the relative lack of hair, the back presents an ideal canvas on the human body for lower back tattoos, mostly among young women. Indeed, some individuals have tattoos that cover the entirety of the back. Others have smaller tattoos at significant locations, such as the shoulder blade or the bottom of the back.
The part of the back that typically cannot be reached to be scratched is sometimes named the acnestis.
An itch there can be irritant, leading to the development and use of backscratchers.
Many English idioms mention the back, usually highlighting it as an area of vulnerability; one must "watch one's back", or one may end up "with one's back up against the wall"; worse yet, someone may "stab one in the back", but hopefully a friend "has got one's back".
"" is a derogatory name in American English for immigrants who cross the US-Mexico border illegally (purportedly swimming though the Rio Grande).
The back is also a symbol of strength and hard work, with those seeking physical labor looking for "strong backs", and workers being implored to "put their back into it".
Historically, flagellation of a person across the back with a whip was both a common form of punishment of criminals and a common means of forcing slaves to work. Self-flagellation, as a form of self-punishment, may also include the use of whipping oneself. This is one method of mortification, the practice of inflicting physical suffering on oneself with the religious belief that it will serve as penance for one's own sins or those of others. While more moderate forms of mortification are widely practiced, particularly in the Catholic Church, self-flagellation is not encouraged by mainstream religions or religious leaders. A well-known instrument used for flagellations is the infamous cat o' nine tails, a nine-corded whip with one handle enabling a much more effective whipping than would be possible with only one lashing at a time.
| Biology and health sciences | Human anatomy | Health |
3524476 | https://en.wikipedia.org/wiki/Particle%20statistics | Particle statistics | Particle statistics is a particular description of multiple particles in statistical mechanics. A key prerequisite concept is that of a statistical ensemble (an idealization comprising the state space of possible states of a system, each labeled with a probability) that emphasizes properties of a large system as a whole at the expense of knowledge about parameters of separate particles. When an ensemble describes a system of particles with similar properties, their number is called the particle number and usually denoted by N.
Classical statistics
In classical mechanics, all particles (fundamental and composite particles, atoms, molecules, electrons, etc.) in the system are considered distinguishable. This means that individual particles in a system can be tracked. As a consequence, switching the positions of any pair of particles in the system leads to a different configuration of the system. Furthermore, there is no restriction on placing more than one particle in any given state accessible to the system. The statistics of classical particles with these characteristics are called Maxwell–Boltzmann statistics.
Quantum statistics
The fundamental feature of quantum mechanics that distinguishes it from classical mechanics is that particles of a particular type are indistinguishable from one another. This means that in an ensemble of similar particles, interchanging any two particles does not lead to a new configuration of the system. In the language of quantum mechanics this means that the wave function of the system is invariant up to a phase with respect to the interchange of the constituent particles. In the case of a system consisting of particles of different kinds (for example, electrons and protons), the wave function of the system is invariant up to a phase separately for both assemblies of particles.
The applicable definition of a particle does not require it to be elementary or even "microscopic", but it requires that all its degrees of freedom (or internal states) that are relevant to the physical problem considered shall be known. All quantum particles, such as leptons and baryons, in the universe have three translational motion degrees of freedom (represented with the wave function) and one discrete degree of freedom, known as spin. Progressively more "complex" particles obtain progressively more internal freedoms (such as various quantum numbers in an atom), and, when the number of internal states that "identical" particles in an ensemble can occupy dwarfs their count (the particle number), then effects of quantum statistics become negligible. That's why quantum statistics is useful when one considers, say, helium liquid or ammonia gas (its molecules have a large, but conceivable number of internal states), but is useless when applied to systems constructed of macromolecules.
While this difference between classical and quantum descriptions of systems is fundamental to all of quantum statistics, quantum particles are divided into two further classes on the basis of the symmetry of the system. The spin–statistics theorem binds two particular kinds of combinatorial symmetry with two particular kinds of spin symmetry, namely bosons and fermions.
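In symbols, the exchange behaviour described above is usually summarized as follows; this is a standard two-particle statement in conventional notation, not drawn from this article.

% Exchange symmetry of a two-particle wave function
\psi(x_1, x_2) = +\,\psi(x_2, x_1) \quad \text{(bosons, symmetric)}
\psi(x_1, x_2) = -\,\psi(x_2, x_1) \quad \text{(fermions, antisymmetric)}

In both cases the observable probability density |\psi|^2 is unchanged by the interchange, which is what "invariant up to a phase" means here.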
| Physical sciences | Statistical mechanics | Physics |
3526295 | https://en.wikipedia.org/wiki/Port%20of%20Barcelona | Port of Barcelona | The Port of Barcelona is a major port in Barcelona, Catalonia, Spain. Its facilities are divided into three zones: Port Vell (the Old Port), the commercial/industrial port, and the logistics port (Barcelona Free Port). The port is managed by the Port Authority of Barcelona, itself owned by the state-owned Ports of the State.
It is the third largest container port in the country and the ninth largest in Europe, with a trade volume of 3.42 million TEUs in 2018. It is also the second cruise port by passengers in the Mediterranean after Rome's Port of Civitavecchia.
The city has two additional yacht harbors/marinas: Port Olímpic and Port Fòrum Sant Adrià to the north.
Overview
The Port Vell area comprises two marinas or yacht harbors, a fishing port, a maritime station for ferries travelling to the Balearic Islands and other destinations in the Mediterranean, and other stations or landing areas for cruise ships, and it abuts the industrial port.
In the central area, it also houses "Maremagnum" (a shopping mall and nightlife complex), a multiplex cinema, the IMAX Port Vell (large-format cinema complex), and Europe's largest aquarium, containing 8,000 fish and 11 sharks in 22 basins filled with 6 million litres of sea water. Because it is located in a designated tourist zone, the Maremagnum is the only commercial mall in the city that can open on Sundays and public holidays. Next to the Maremagnum area are the "Golondrines", small ships that take tourists for a visit around the port area and beyond.
The Barcelona industrial port is to the south and comprises the Zona Franca, a tariff-free industrial park that has developed within the Port of Barcelona, across the flat land of the Llobregat Delta between the city of Barcelona and that of El Prat de Llobregat and the Barcelona International Airport to the south.
A good place to view both the industrial and pleasure port is from Montjuïc, and more specifically, from Montjuïc Castle, as well as from the aerial cable car connecting Barceloneta with the Ferry Station and Montjuïc.
Information
In common with much of Western Europe, the older traditional industries in Spain, such as textiles, declined in the face of foreign competition. The surviving companies closed their factories in the city or along the rivers, leaving industrial wastelands or abandoned workers' colonies. In many cases within Spain, these industries moved to the Zona Franca.
The free trade zone is located within the port area, not far from downtown Barcelona, and is easy to access. It is a short distance from Barcelona International Airport and is connected via highway and railway.
Business investors here rent offices or bonded warehouses. They can also elect to purchase land to erect their own buildings.
The free trade zone offers a series of services. It is divided into a comprehensive service area, truck/lorry area, reception area, and sports facilities area. It has a customs duties service, bonded warehousing service, advanced telecommunication and computer system, security system, combined multiple transport system, and so on.
On 17 January 1977, a landing craft being used as a liberty boat by two visiting United States Navy ships was run over by a freighter. The Mike 8 boat capsized and came to rest against the fleet landing pier. Crew members from both vessels were on hand to assist with rescue operations. There were over one hundred sailors and marines on board the landing craft; 49 of them were killed. A memorial stands at the landing pier in their memory.
History
In 1978, the Ministry of Public Works declared Bilbao, Huelva, Valencia and Barcelona autonomous ports. The port then became known as the Autonomous Port of Barcelona and, while remaining a government body, it was able to function as a commercial enterprise subject to private law.
Opening the Bosch i Alsina wharf in Port Vell (also known as the Moll de la Fusta) to the public in 1981 marked the start of the transformation of the northern part of the port. This gained much momentum with the decision in 1986 that Barcelona would host the 1992 Summer Olympics. In the subsequent years, the run-down area of empty warehouses, railroad yards, and factories was converted into an attractive harborfront area in a huge urban renewal project. The neighbouring Barceloneta district and its beaches were also transformed to open the city up to the sea. During the Olympics the port hosted up to 11 cruise ships that served as floating hotels.
In November 1992, the central body Ports of the State was created by the Spanish government, which brought the end to the Autonomous Port of Barcelona. Since then the port is operated by the Barcelona Port Authority (APB).
The Logistics Activity Zone (ZAL) is a multimodal transport centre that was set up in 1993 with an initial area of 68 hectares in the first phase. The second phase then saw an extension of 143 hectares into El Prat de Llobregat.
In July 1999, the World Trade Center was opened.
Between 2001 and 2008 the port underwent an enlargement that doubled its size by diverting the mouth of the Llobregat River to the south and slightly pushing back the Llobregat Delta Nature Reserve.
Passenger ferries
The three passenger terminals Terminal Drassanes, Terminal Ferry Barcelona and Grimaldi Terminal Barcelona are located in Port Vell. While Baleària and Trasmediterránea operate connections to the Balearic Islands, the companies Grimaldi Lines and Grandi Navi Veloci serve destinations in Italy and Morocco.
Accidents and incidents
On 31 October 2018, at 8:00 am local time, the Grandi Navi Veloci (GNV) ferry Excellent crashed into the Port of Barcelona after a gust of wind drove it into the cargo pier. The ship smashed into a gantry crane, which tipped over onto containers holding flammable chemicals; these caught fire, producing toxic smoke and setting the pier ablaze. The Excellent had been trying to dock, but was prevented from doing so by bad weather.
| Technology | Specific piers and ports | null |
3528600 | https://en.wikipedia.org/wiki/Coucal | Coucal | A coucal is one of about 30 species of birds in the cuckoo family. All of them belong in the subfamily Centropodinae and the genus Centropus. Unlike many Old World cuckoos, coucals are not brood parasites, though they do have their own reproductive peculiarity: all members of the genus are (to varying degrees) sex-role reversed, so that the smaller male provides most of the parental care. Male pheasant coucals (Centropus phasianinus) invest in building the nest, incubate for the most part and take a major role in feeding the young. At least one coucal species, the black coucal, is polyandrous.
Taxonomy
The genus Centropus was introduced in 1811 by the German zoologist Johann Karl Wilhelm Illiger. The type species was subsequently designated as the Senegal coucal by George Robert Gray in 1840. The genus name combines the Ancient Greek kentron meaning "spur" or "spike" with pous meaning "foot".
Description
Many coucals have a long claw on their hind toe (hallux). The feet have minute spurs, and this is responsible for the German term for coucals, Sporenkuckucke. The common name is perhaps derived from the French coucou and alouette (for the long, lark-like claw) (Cuvier, in Newton 1896). The length of the claw can be about 68–76% of the tarsus length in the African black coucal C. grillii and lesser coucal C. bengalensis. Only the short-toed coucal C. rectunguis is an exception, with a hallux claw of only 23% of the tarsus length. Thread-like feather structures (elongated sheaths of the growing feathers that are sometimes termed trichoptiles) are found on the head and neck of hatchlings and can be as long as 20 mm. Nestlings can look spiny. Many coucals are opportunistic predators: the pheasant coucal Centropus phasianinus is known to attack birds caught in mist nets, while white-browed coucals Centropus superciliosus are attracted to smoke from grass fires, where they forage for insects and small mammals escaping from the fire.
Coucals generally make nests inside dense vegetation, usually with a covered top, though some species leave the top open. The pheasant coucal Centropus phasianinus, greater coucal C. sinensis and Madagascar coucal C. toulou sometimes build an open nest, while some species, such as the bay coucal C. celebensis, always build open nests.
Some coucal species have been seen to fly while carrying their young.
Species
The genus contains 29 species:
Buff-headed coucal, Centropus milo
White-necked coucal or pied coucal, Centropus ateralbus
Ivory-billed coucal or greater black coucal, Centropus menbeki
Biak coucal, Centropus chalybeus
Rufous coucal, Centropus unirufus
Green-billed coucal, Centropus chlororhynchos
Black-faced coucal, Centropus melanops
Black-hooded coucal, Centropus steerii
Short-toed coucal, Centropus rectunguis
Bay coucal, Centropus celebensis
Gabon coucal, Centropus anselli
Black-throated coucal, Centropus leucogaster
Senegal coucal, Centropus senegalensis
Blue-headed coucal, Centropus monachus
Coppery-tailed coucal, Centropus cupreicaudus
White-browed coucal, Centropus superciliosus
Burchell's coucal, Centropus burchellii
Sunda coucal, Centropus nigrorufus
Greater coucal, Centropus sinensis
Malagasy coucal or Madagascar coucal, Centropus toulou
Goliath coucal, Centropus goliath
Black coucal, Centropus grillii
Philippine coucal, Centropus viridis
Lesser coucal, Centropus bengalensis
Violaceous coucal, Centropus violaceus
Black-billed coucal or lesser black coucal, Centropus bernsteini
Kai coucal, Centropus spilopterus
Pheasant coucal, Centropus phasianinus
Andaman coucal or brown coucal, Centropus andamanensis
A fossil species, Centropus colossus, is known from the Quaternary-aged Fossil Cave, Tantanoola, South Australia.
| Biology and health sciences | Cuculiformes and relatives | Animals |
10553773 | https://en.wikipedia.org/wiki/Marine%20chronometer | Marine chronometer | A marine chronometer is a precision timepiece that is carried on a ship and employed in the determination of the ship's position by celestial navigation. It is used to determine longitude by comparing Greenwich Mean Time (GMT), and the time at the current location found from observations of celestial bodies. When first developed in the 18th century, it was a major technical achievement, as accurate knowledge of the time over a long sea voyage was vital for effective navigation, lacking electronic or communications aids. The first true chronometer was the life work of one man, John Harrison, spanning 31 years of persistent experimentation and testing that revolutionized naval (and later aerial) navigation.
The term chronometer was coined from the Greek words khronos (meaning time) and metron (meaning measure). The 1713 book Physico-Theology by the English cleric and scientist William Derham includes one of the earliest theoretical descriptions of a marine chronometer. It has recently become more commonly used to describe watches tested and certified to meet certain precision standards.
History
To determine a position on the Earth's surface, using classical models, it is necessary and sufficient to know the latitude, longitude, and altitude. Altitude considerations can naturally be ignored for vessels operating at sea level. Until the mid-1750s, accurate navigation at sea out of sight of land was an unsolved problem due to the difficulty in calculating longitude. Navigators could determine their latitude by measuring the sun's angle at noon (i.e., when it reached its highest point in the sky, or culmination) or, in the Northern Hemisphere, by measuring the angle of Polaris (the North Star) from the horizon (usually during twilight). To find their longitude, however, they needed a time standard that would work aboard a ship. Observation of regular celestial motions, such as Galileo's method based on observing Jupiter's natural satellites, was usually not possible at sea due to the ship's motion. The lunar distances method, initially proposed by Johannes Werner in 1514, was developed in parallel with the marine chronometer. The Dutch scientist Gemma Frisius was the first to propose the use of a chronometer to determine longitude in 1530.
The purpose of a chronometer is to measure accurately the time of a known fixed location. This is particularly important for navigation. As the Earth rotates at a regular, predictable rate, the time difference between the chronometer and the ship's local time, if known accurately enough, can be used to calculate the longitude of the ship relative to the Prime Meridian (defined as 0°) or another starting point, using spherical trigonometry. Practical celestial navigation usually requires a marine chronometer to measure time, a sextant to measure the angles, an almanac giving schedules of the coordinates of celestial objects, a set of sight reduction tables to help perform the height and azimuth computations, and a chart of the region. With sight reduction tables, the only calculations required are addition and subtraction. Most people can master simpler celestial navigation procedures after a day or two of instruction and practice, even using manual calculation methods. Determining longitude by chronometer in this way permits navigators to obtain a reasonably accurate position fix. For every four seconds that the time source is in error, the east–west position may be off by up to just over one nautical mile, since the ground distance corresponding to a given angle of longitude depends on latitude.
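To make the time–longitude relationship concrete, the short calculation below is a sketch using the standard conversion of 24 hours of time to 360° of longitude; the function name and the sample inputs are illustrative only.

# East-west position error caused by a chronometer error, at a given latitude.
import math

def longitude_error_nm(time_error_s, latitude_deg):
    # 24 h of time corresponds to 360 degrees of longitude,
    # i.e. 1 degree per 240 s, or 1 arcminute per 4 s.
    error_arcmin = time_error_s / 4.0
    # One arcminute of longitude spans about one nautical mile at the
    # equator and shrinks with the cosine of the latitude.
    return error_arcmin * math.cos(math.radians(latitude_deg))

print(longitude_error_nm(4.0, 0.0))    # ~1 nmi at the equator
print(longitude_error_nm(4.0, 60.0))   # ~0.5 nmi at 60 degrees latitude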
The creation of a timepiece which would work reliably at sea was difficult. Until the 20th century, the best timekeepers were pendulum clocks, but both the rolling of a ship at sea and the up to 0.2% variations in the gravity of Earth made a simple gravity-based pendulum useless both in theory and in practice.
First examples
Christiaan Huygens, following his invention of the pendulum clock in 1656, made the first attempt at a marine chronometer in 1673 in France, under the sponsorship of Jean-Baptiste Colbert. In 1675, Huygens, who was receiving a pension from Louis XIV, invented a chronometer that employed a balance wheel and a spiral spring for regulation, instead of a pendulum, opening the way to marine chronometers and modern pocket watches and wristwatches. He obtained a patent for his invention from Colbert, but his clock remained imprecise at sea. Huygens' attempt in 1675 to obtain an English patent from Charles II stimulated Robert Hooke, who claimed to have conceived of a spring-driven clock years earlier, to attempt to produce one and patent it. During 1675 Huygens and Hooke each delivered two such devices to Charles, but none worked well and neither Huygens nor Hooke received an English patent. It was during this work that Hooke formulated Hooke's law.
The first published use of the term chronometer was in 1684, in a theoretical work by Kiel professor Matthias Wasmuth. This was followed by a further theoretical description of a chronometer in works published by English scientist William Derham in 1713. Derham's principal work, Physico-theology, or a demonstration of the being and attributes of God from his works of creation, also proposed the use of vacuum sealing to ensure greater accuracy in the operation of clocks. Attempts to construct a working marine chronometer were begun by Jeremy Thacker in England in 1714, and by Henry Sully in France two years later. Sully published his work in 1726, but neither his nor Thacker's models were able to resist the rolling of the seas and keep precise time while in shipboard conditions.
In 1714, the British government offered a longitude prize for a method of determining longitude at sea, with the awards ranging from £10,000 to £20,000 (£2 million to £4 million in modern terms) depending on accuracy. John Harrison, a Yorkshire carpenter, submitted a project in 1730, and in 1735 completed a clock based on a pair of counter-oscillating weighted beams connected by springs whose motion was not influenced by gravity or the motion of a ship. His first two sea timepieces H1 and H2 (completed in 1741) used this system, but he realised that they had a fundamental sensitivity to centrifugal force, which meant that they could never be accurate enough at sea. Construction of his third machine, designated H3, in 1759 included novel circular balances and the invention of the bi-metallic strip and caged roller bearings, inventions which are still widely used. However, H3's circular balances still proved too inaccurate and he eventually abandoned the large machines.
Harrison solved the precision problems with his much smaller H4 chronometer design in 1761. H4 looked much like a large five-inch (12 cm) diameter pocket watch. In 1761, Harrison submitted H4 for the £20,000 longitude prize. His design used a fast-beating balance wheel controlled by a temperature-compensated spiral spring. These features remained in use until stable electronic oscillators allowed very accurate portable timepieces to be made at affordable cost. In 1767, the Board of Longitude published a description of his work in The Principles of Mr. Harrison's time-keeper. A French expedition under Charles-François-César Le Tellier de Montmirail performed the first measurement of longitude using marine chronometers aboard Aurore in 1767.
Further development
In France in 1748, Pierre Le Roy invented the detent escapement characteristic of modern chronometers. In 1766, he created a revolutionary chronometer that incorporated a detent escapement, the temperature-compensated balance and the isochronous balance spring: Harrison showed the possibility of having a reliable chronometer at sea, but these developments by Le Roy are considered by Rupert Gould to be the foundation of the modern chronometer. Le Roy's innovations made the chronometer a much more accurate piece than had been anticipated.
Ferdinand Berthoud in France, as well as Thomas Mudge in Britain also successfully produced marine timekeepers. Although none were simple, they proved that Harrison's design was not the only answer to the problem. The greatest strides toward practicality came at the hands of Thomas Earnshaw and John Arnold, who in 1780 developed and patented simplified, detached, "spring detent" escapements, moved the temperature compensation to the balance, and improved the design and manufacturing of balance springs. This combination of innovations served as the basis of marine chronometers until the electronic era.
The new technology was initially so expensive that not all ships carried chronometers, as illustrated by the fateful last journey of the East Indiaman Arniston, shipwrecked with the loss of 372 lives. However, by 1825, the Royal Navy had begun routinely supplying its vessels with chronometers.
Beginning in 1820, the British Royal Observatory in Greenwich tested marine chronometers in an Admiralty-instigated trial or "chronometer competition" program intended to encourage the improvement of chronometers. In 1840 a new series of trials in a different format was begun by the seventh Astronomer Royal George Biddell Airy. These trials continued in much the same format until the outbreak of World War I in 1914, at which point they were suspended. Although the formal trials ceased, the testing of chronometers for the Royal Navy did not.
Marine chronometer makers looked to a phalanx of astronomical observatories located in Western Europe to conduct accuracy assessments of their timepieces. Once mechanical timepiece movements developed sufficient precision to allow for adequately accurate marine navigation, these third party independent assessments also developed into what became known as "chronometer competitions" at the astronomical observatories located in Western Europe. The Neuchâtel Observatory, Geneva Observatory, Besançon Observatory, Kew Observatory, German Naval Observatory Hamburg and Glashütte Observatory are prominent examples of observatories that certified the accuracy of mechanical timepieces. The observatory testing regime typically lasted for 30 to 50 days and contained accuracy standards that were far more stringent and difficult than modern standards such as those set by the Contrôle Officiel Suisse des Chronomètres (COSC). When a movement passed the observatory test, it became certified as an observatory chronometer and received a Bulletin de Marche from the observatory, stipulating the performance of the movement.
It was common for ships at the time to observe a time ball, such as the one at the Royal Observatory, Greenwich, to check their chronometers before departing on a long voyage. Every day, ships would anchor briefly in the River Thames at Greenwich, waiting for the ball at the observatory to drop at precisely 1pm. This practice was in small part responsible for the subsequent adoption of Greenwich Mean Time as an international standard. (Time balls became redundant around 1920 with the introduction of radio time signals, which have themselves largely been superseded by GPS time.) In addition to setting their time before departing on a voyage, ship chronometers were also routinely checked for accuracy while at sea by carrying out lunar or solar observations. In typical use, the chronometer would be mounted in a sheltered location below decks to avoid damage and exposure to the elements. Mariners would use the chronometer to set a so-called hack watch, which would be carried on deck to make the astronomical observations. Though much less accurate (and less expensive) than the chronometer, the hack watch would be satisfactory for a short period of time after setting it (i.e., long enough to make the observations).
Rationalizing production methods
Although industrial production methods began revolutionizing watchmaking in the middle of the 19th century, chronometer manufacture remained craft-based much longer and was dominated by British and Swiss manufacturers. Around the turn of the 20th century, Swiss makers such as Ulysse Nardin made great strides toward incorporating modern production methods and using fully interchangeable parts, but it was only with the onset of World War II that the Hamilton Watch Company in the United States perfected the process of mass production, which enabled it to produce thousands of its Hamilton Model 21 and Model 22 chronometers from 1942 onwards for the branches of the United States military and merchant marine as well as other Allied forces. The Hamilton Model 21 had a chain-drive fusee, and its second hand advanced in discrete increments over a sub-dial marked for 60 seconds. In Germany, where marine chronometers had previously been imported or had used key foreign components, a unified chronometer design with a three-pillar movement was developed in a collaboration between the Wempe Chronometerwerke and A. Lange & Söhne companies to make more efficient production possible. The development of a precise and inexpensive unified chronometer was a 1939 initiative driven by the German naval command and the Aviation Ministry. Serial production began in 1942. All parts were made in Germany and were interchangeable. During the course of World War II, modifications that became necessary as raw materials grew scarce were applied, and work was shared, compulsorily and sometimes voluntarily, between various German manufacturers to speed up production. Production of the German unified-design chronometers with their harmonized components continued long after World War II in Germany and in the Soviet Union, which confiscated the original technical drawings and set up a production line in Moscow in 1949 that produced the first Soviet MX6 chronometers containing German-made movements. From 1952 until 1997, MX6 chronometers with minor alterations devised by NII Chasprom (the Soviet-era horological institute) were produced from components made entirely in the Soviet Union. The German unified design ultimately became the mechanical marine timekeeper produced in the highest volume, with about 58,000 units made. Of these, fewer than 3,000 were produced during World War II, about 5,000 after the war in West and East Germany, and about 50,000 in the Soviet Union and later post-Soviet Russia. About 13,000 units of the Hamilton Model 21 were produced during and after World War II. Despite the success of the unified design and of Hamilton, chronometers made in the old way never disappeared from the marketplace during the era of mechanical timekeepers. Thomas Mercer Chronometers was among the companies that continued to make them.
Historical significance
Ship’s marine chronometers are the most exact portable mechanical timepieces ever produced and in a static environment were only trumped by non-portable precision pendulum clocks for observatories. They served, alongside the sextant, to determine the location of ships at sea. The seafaring nations invested richly in the development of these precision instruments, as pinpointing location at sea gave a decisive naval advantage. Without their accuracy and the accuracy of the feats of navigation that marine chronometers enabled, it is arguable that the ascendancy of the Royal Navy, and by extension that of the British Empire, might not have occurred so overwhelmingly; the formation of the empire by wars and conquests of colonies abroad took place in a period in which British vessels had reliable navigation due to the chronometer, while their Portuguese, Dutch, and French opponents did not. For example: the French were well established in India and other places before Britain, but were defeated by naval forces in the Seven Years' War.
Rating and maintaining marine chronometers was deemed important well into the 20th century, as after World War I the work of the British Royal Observatory’s Chronometer Department became largely confined to rating of chronometers and watches that the Admiralty already owned and providing acceptance testing.
In 1937 a workshop was set up for the first time by the Time Department for the repair and adjustment of British armed forces issued chronometers and watches. These maintenance activities had previously been outsourced to commercial workshops.
From about the 1960s onwards mechanical spring detent marine chronometers were gradually replaced and supplanted by chronometers based on electric engineering techniques and technologies. In 1985 the British Ministry of Defence invited bids by tender for the disposal of their mechanical Hamilton Model 21 Marine Chronometers. The US Navy kept their Hamilton Model 21 Marine Chronometers in service as backups to the Loran-C hyperbolic radio navigation system until 1988, when the GPS global navigation satellite system was approved as reliable. At the end of the 20th century the production of mechanical marine chronometers had declined to the point where only a few were being made to special order by the First Moscow Watch Factory 'Kirov' (Poljot) in Russia, Wempe in Germany and Mercer in England.
The most complete international collection of marine chronometers, including Harrison's H1 to H4, is at the Royal Observatory, Greenwich, in London, UK.
Characteristics
The crucial problem was to find a resonator that remained unaffected by the changing conditions met by a ship at sea. The balance wheel, harnessed to a spring, solved most of the problems associated with the ship's motion. Unfortunately, the elasticity of most balance spring materials changes relative to temperature. To compensate for ever-changing spring strength, the majority of chronometer balances used bi-metallic strips to move small weights toward and away from the centre of oscillation, thus altering the period of the balance to match the changing force of the spring. The balance spring problem was solved with a nickel-steel alloy named Elinvar for its invariable elasticity at normal temperatures. The inventor was Charles Édouard Guillaume, who won the 1920 Nobel Prize for physics in recognition for his metallurgical work.
The escapement serves two purposes. First, it allows the train to advance fractionally and record the balance's oscillations. At the same time, it supplies minute amounts of energy to counter tiny losses from friction, thus maintaining the momentum of the oscillating balance. The escapement is the part that ticks. Since the natural resonance of an oscillating balance serves as the heart of a chronometer, chronometer escapements are designed to interfere with the balance as little as possible. There are many constant-force and detached escapement designs, but the most common are the spring detent and pivoted detent. In both of these, a small detent locks the escape wheel and allows the balance to swing completely free of interference except for a brief moment at the centre of oscillation, when it is least susceptible to outside influences. At the centre of oscillation, a roller on the balance staff momentarily displaces the detent, allowing one tooth of the escape wheel to pass. The escape wheel tooth then imparts its energy on a second roller on the balance staff. Since the escape wheel turns in only one direction, the balance receives impulse in only one direction. On the return oscillation, a passing spring on the tip of the detent allows the unlocking roller on the staff to move by without displacing the detent. The weakest link of any mechanical timekeeper is the escapement's lubrication. When the oil thickens through age or temperature or dissipates through humidity or evaporation, the rate will change, sometimes dramatically as the balance motion decreases through higher friction in the escapement. A detent escapement has a strong advantage over other escapements as it needs no lubrication. An impulse from the escape wheel to the impulse roller is nearly dead-beat, meaning little sliding action needing lubrication. Chronometer escape wheels and passing springs are typically gold due to the metal's lower slide friction over brass and steel.
Chronometers often included other innovations to increase their efficiency and precision. Hard stones such as ruby and sapphire were often used as jewel bearings to decrease friction and wear of the pivots and escapement. Diamond was often used as the cap stone for the lower balance staff pivot to prevent wear from years of the heavy balance turning on the small pivot end. Until the end of mechanical chronometer production in the third quarter of the 20th century, makers continued to experiment with things like ball bearings and chrome-plated pivots.
The timepieces were normally protected from the elements and kept below decks in a fixed position in a traditional box suspended in gimbals (a set of rings connected by bearings). This keeps the chronometer isolated in a horizontal "dial up" position to counter timing errors on the balance wheel that would otherwise be induced by the ship's inclination (rocking) movements.
Marine chronometers always contain a maintaining power which keeps the chronometer going while it is being wound, and a power reserve indicator to show how long the chronometer will continue to run without being wound.
These technical provisions usually yield timekeeping in mechanical marine chronometers accurate to within 0.5 second per day.
Chronometer rating
In strictly horological terms, "rating" a chronometer means that prior to the instrument entering service, the average rate of gaining or losing per day is observed and recorded on a rating certificate which accompanies the instrument. This daily rate is used in the field to correct the time indicated by the instrument to get an accurate time reading. Even the best-made chronometer with the finest temperature compensation etc. exhibits two types of error, (1) random and (2) consistent. The quality of design and manufacture of the instrument keeps the random errors small. In principle, the consistent errors should be amenable to elimination by adjustment, but in practice it is not possible to make the adjustment so precisely that this error is completely eliminated, so the technique of rating is used. The rate will also change while the instrument is in service due to e.g. thickening of the oil, so on long expeditions the instrument's rate would be periodically checked against accurate time determined by astronomical observations.
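A minimal numerical sketch of how a recorded daily rate is applied in practice follows; all numbers here are hypothetical, and the real procedure also accounts for the rate itself changing during a voyage, as noted above.

# Correcting a chronometer reading using its recorded daily rate.
# All numbers here are hypothetical, for illustration only.
daily_rate_s = 2.5         # chronometer gains 2.5 s per day (from the rating certificate)
days_since_set = 40        # days since the chronometer was last set to GMT

accumulated_error_s = daily_rate_s * days_since_set   # total gain since it was set
chronometer_reading_s = 12 * 3600 + 4 * 60 + 30       # 12:04:30 shown on the dial, in seconds

corrected_gmt_s = chronometer_reading_s - accumulated_error_s
h, rem = divmod(int(corrected_gmt_s), 3600)
m, s = divmod(rem, 60)
print(f"corrected GMT ~ {h:02d}:{m:02d}:{s:02d}")     # 12:02:50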
Marine chronometer use today
Since the 1990s boats and ships can use several Global Navigation Satellite Systems (GNSS) to navigate all the world's lakes, seas and oceans. Maritime GNSS units include functions useful on water, such as "man overboard" (MOB) functions that allow instantly marking the location where a person has fallen overboard, which simplifies rescue efforts. GNSS may be connected to the ship's self-steering gear and Chartplotters using the NMEA 0183 interface, and can also improve the security of shipping traffic by enabling Automatic Identification Systems (AIS).
Even with these convenient 21st-century technological tools, modern practical navigators usually combine celestial navigation, using electrically powered time sources, with satellite navigation. Small handheld computers, laptops, navigational calculators and even scientific calculators enable modern navigators to "reduce" sextant sights in minutes, by automating all the calculation and/or data lookup steps. Using multiple independent position-fix methods, rather than relying solely on subject-to-failure electronic systems, helps the navigator detect errors. Professional mariners are still required to be proficient in traditional piloting and celestial navigation, which requires the use of a precisely adjusted and rated chronometer, either autonomous or periodically corrected by an external time signal. These abilities are still a requirement for certain international mariner certifications, such as Officer in Charge of Navigational Watch and Master and Chief Mate deck officers, and supplement offshore yachtmaster qualifications on long-distance private cruising yachts.
Modern marine chronometers can be based on quartz clocks that are corrected periodically by satellite time signals or radio time signals (see radio clock). These quartz chronometers are not always the most accurate quartz clocks when no signal is received, and their signals can be lost or blocked. However, there are autonomous quartz movements, even in wrist watches, that are accurate to within 5 or 20 seconds per year.
At least one quartz chronometer made for advanced navigation utilizes multiple quartz crystals which are corrected by a computer using an average value, in addition to GPS time signal corrections.
| Technology | Clocks | null |
10553857 | https://en.wikipedia.org/wiki/Sunflower%20sea%20star | Sunflower sea star | Pycnopodia helianthoides, commonly known as the sunflower sea star, is a large sea star found in the northeastern Pacific Ocean. The only species of its genus, it is among the largest sea stars in the world, with a maximum arm span of about 1 m (3.3 ft). Adult sunflower sea stars usually have 16 to 24 limbs. They vary in color. Sunflower sea stars are predatory and carnivorous, feeding mostly on sea urchins, clams, sea snails, and other small invertebrates. Although the species was widely distributed throughout the northeast Pacific, its population rapidly declined from 2013. The sunflower sea star is classified as Critically Endangered on the IUCN Red List.
Description
Sunflower sea stars can reach an arm span of . They are the heaviest known sea star, weighing about 5 kg. They are the second-biggest sea star in the world, second only to the little-known deep-water Midgardia xandaros, whose arm span is and whose body is 2.6 cm (roughly 1 inch) wide. Growth is rapid at first, but slows as the animal ages. Researchers estimate a growth rate of 8 cm (3.1 in) per year in the first several years of life, and a rate of 2.5 cm (0.98 in) per year later.
Their color ranges from bright orange or yellow-red to brown, and sometimes purple, with soft, velvet-textured bodies and 5–24 arms with powerful suckers. Most sea star species have a mesh-like skeleton that protects their internal organs.
Distribution and habitat
Sunflower sea stars were once common in the northeast Pacific from Alaska to southern California, and were dominant in Puget Sound, British Columbia, northern California, and southern Alaska. Between 2013 and 2015, the population declined rapidly due to sea star wasting disease and warmer water temperatures caused by global climate change. The species disappeared from its habitats in the waters off the coast of California and Oregon, and saw its population reduced by 99.2% in the waters near Washington. Ecologists using shallow-water observations and deep offshore trawl surveys found that, in their study period (2004–2017), mean biomass of sunflower sea stars declined 80–100%. In 2020, the species was declared critically endangered by the International Union for Conservation of Nature. There are suggestions that sea star wasting disease was caused by bacterial pathogens or parasites and was contagious, due to its tendency to spread to multiple locations.
Sunflower sea stars generally inhabit low subtidal and intertidal areas up to 435 m deep that are rich in seaweed, kelp, sand, mud, shells, gravel, or rocky bottoms. They do not venture into high- and mid-tide areas because their body structure is heavy and requires water to support it.
Diet and behavior
Sunflower sea stars are efficient hunters, moving at a speed of using 15,000 tube feet that lie on their undersides. They are commonly found around urchin barrens, as the sea urchin is a favorite food. They also eat clams, snails, abalone, sea cucumbers and other sea stars. In Monterey Bay, California, they may feed on dead or dying squid. Sea star appetites and food can depend on environmental factors in their habitats, such as climate, amount of prey in the area, and latitude. Although the sunflower sea star can extend its mouth for larger prey, the stomach can extend outside the mouth to digest prey, such as abalone.
Easily stressed by predators such as large fish and other sea stars, they can shed arms to escape, which regrow within a few weeks. They are preyed upon by the king crab.
Reproduction
Sunflower sea stars can reproduce sexually through broadcast spawning. They have separate sexes. Sunflower sea stars breed from May through June. In preparing to spawn, they arch up using about a dozen arms to hoist their fleshy central mass above the seafloor and release gametes into the water for external fertilization. The larvae float and feed near the surface for two to ten weeks. After the planktonic larval period, the larvae settle to the bottom and mature into juveniles. Juvenile sunflower sea stars begin life with five arms, and grow the rest as they mature. The lifespan of most sunflower sea stars is three to five years.
Conservation efforts
Since 2013, sunflower sea star populations have been in a rapid decline due to disease and changes in climate. In 2020, the IUCN first assessed that the sunflower sea star was critically endangered. The Nature Conservancy and its partner institutions, along with the University of Washington are working to initiate captive breeding. Captive breeding efforts include seasonal production, larval development, and growth and feeding experiments. On August 18, 2021, the Center for Biological Diversity created a petition asking that the sunflower sea star be protected under the Endangered Species Act. In March 2023, the National Marine Fisheries Service proposed listing the sunflower sea star as threatened under the act.
Sea star wasting disease spreads throughout the whole body. The limbs become affected and eventually fall off, ultimately causing death from degradation. Sea star wasting disease appears to be caused by a sea star-associated densovirus (SSaDV). The disease causes behavioral changes and lesions. It is known to be more prevalent and harmful in warmer water. The warming waters in California, Washington, and Oregon have coincided with an increased risk of sea star wasting disease.
Sunflower sea stars are one of sea urchins' main predators. Sea stars control their population and help maintain the health of kelp forests. Due to the decrease in sea star population, sea urchin populations are increasing and posing a threat to biodiversity, particularly in kelp forests.
| Biology and health sciences | Echinoderms | Animals |
8170365 | https://en.wikipedia.org/wiki/Bicycle%20parking%20rack | Bicycle parking rack | A bicycle parking rack, usually shortened to bike rack and also called a bicycle stand, is a device to which bicycles can be securely attached for parking purposes. It may be freestanding, or securely attached to the ground or a stationary object, such as a building. Indoor racks are commonly used for private bicycle parking, while outdoor racks are often used in commercial areas. General styles of racks include the Inverted U, Serpentine, Bollard, Grid, and Decorative. The most effective and secure bike racks are those that can secure both wheels and the frame of the bicycle, using a bicycle lock.
Bike racks can be constructed from a number of materials, including stainless steel, steel, recycled plastic, and thermoplastic. Durability, weather resistance, appearance, and functionality are important factors when choosing the material.
The visibility of the bike rack, adequate spacing from automobile parking and pedestrian traffic, weather coverage, and proximity to destinations are all important factors determining usefulness of a bicycle rack, helping to increase its usage and assure cyclists that their bikes are securely parked.
History
Early models tend to offer a means of securing one wheel: these can be a grooved piece of concrete in the ground, a forked piece of metal into which a wheel of the bicycle is pushed, or a horizontal "ladder" providing positions for the front wheel of many bicycles. These are not very effective, since a thief need only detach the wheel in question from the bicycle to free the rest of the bicycle. They also do not offer much support, and a row of bicycles in this type of stand are susceptible to all being toppled in a domino effect. These types of stand are known as "wheel benders" among cyclists.
A modern version is known as the "Sheffield rack" or "Sheffield stand" after the city of Sheffield in England, where these were pioneered. These consist of a thick metal bar or tube bent into the shape of a square arch. The top part is about level with the top bar of the bicycle frame, and thus supports the bicycle and allows the frame to be secured. The racks originated when the frugal citizens of Sheffield had to decide what to do with some old gas piping. Local cyclists suggested the cycle rack idea and, two simple bends and a little concrete in the ground later, the rack was born. At the time this was a revolution in a world of 'single-point holders' that bent wheels and offered little lockability for frames. A version of this design features a second, lower horizontal bar to support smaller bikes (this version is also known as an "A stand") and is coated to reduce its surface hardness so that it does not scratch the bike's paintwork.
Since 1984 the City of Toronto has installed post-and-ring bicycle racks consisting of a steel bollard or post topped by a cast aluminium ring. In August 2006, it became publicly known that these stands could be defeated by prying the ring off with a two-by-four, limiting their effectiveness in high-crime areas.
In Amsterdam two-tiered bicycle stands are ubiquitous. Bikes can be parked in a smaller area because the handlebars (usually wider than the back of the bicycle) of alternating bikes sit at different heights (either high or low). These racks are made of steel and have a large bar to which the frame may be easily locked. Most Dutch bicycles have a rear wheel lock, so that wheel need not be locked.
Classes
Bike parking needs vary from environment to environment.
Class I
Some locations require Class I standards (commonly referred to as long-term bike parking). Class I parking regulations are implemented when bicycles will be parked for hours at a time. Examples of these environments are office buildings, elementary schools, libraries, etc. When implementing Class I bike racks, installers should also incorporate some form of weather protection for the racks and bikes. | Technology | Road infrastructure | null
18567040 | https://en.wikipedia.org/wiki/Intellectual%20disability | Intellectual disability | Intellectual disability (ID), also known as general learning disability (in the United Kingdom), and formerly mental retardation (in the United States), is a generalized neurodevelopmental disorder characterized by significant impairment in intellectual and adaptive functioning that is first apparent during childhood. Children with intellectual disabilities typically have an intelligence quotient (IQ) below 70 and deficits in at least two adaptive behaviors that affect everyday living. According to the DSM-5, intellectual functions include reasoning, problem solving, planning, abstract thinking, judgment, academic learning, and learning from experience. Deficits in these functions must be confirmed by clinical evaluation and individualized standard IQ testing. On the other hand, adaptive behaviors include the social, developmental, and practical skills people learn to perform tasks in their everyday lives. Deficits in adaptive functioning often compromise an individual's independence and ability to meet their social responsibility.
Intellectual disability is subdivided into syndromic intellectual disability, in which intellectual deficits associated with other medical and behavioral signs and symptoms are present, and non-syndromic intellectual disability, in which intellectual deficits appear without other abnormalities. Down syndrome and fragile X syndrome are examples of syndromic intellectual disabilities.
Intellectual disability affects about 2–3% of the general population. Seventy-five to ninety percent of the affected people have mild intellectual disability. Non-syndromic, or idiopathic cases account for 30–50% of these cases. About a quarter of cases are caused by a genetic disorder, and about 5% of cases are inherited. Cases of unknown cause affect about 95 million people .
Signs and symptoms
Intellectual disability (ID) becomes apparent during childhood, and involves deficits in mental abilities, social skills, and core activities of daily living (ADLs) when compared to peers of the same age. There are often no physical signs of mild forms of ID, although there may be characteristic physical traits when it is associated with a genetic disorder (e.g. Down syndrome).
The level of impairment ranges in severity for each person. Some of the early signs can include:
Delays in reaching, or failure to achieve milestones in motor skills development (sitting, crawling, walking)
Slowness learning to talk, or continued difficulties with speech and language skills after starting to talk
Difficulty with self-help and self-care skills (e.g., getting dressed, washing, and feeding themselves)
Poor planning or problem-solving abilities
Behavioral and social problems
Failure to grow intellectually, or continued infantile or childlike behavior
Problems keeping up in school
Failure to adapt or adjust to new situations
Difficulty understanding and following social rules
In early childhood, mild ID (IQ 50–69) may not be obvious or identified until children begin school. Even when poor academic performance is recognized, it may take expert assessment to distinguish mild intellectual disability from specific learning disability or emotional/behavioral disorders. People with mild ID are capable of learning reading and mathematics skills to approximately the level of a typical child aged nine to twelve. They can learn self-care and practical skills, such as cooking or using the local transit system. As individuals with intellectual disabilities reach adulthood, many learn to live independently and maintain gainful employment. About 85% of people with ID are likely to have mild ID.
Moderate ID (IQ 35–49) is almost always apparent within the first years of life. Speech delays are particularly common signs of moderate ID. People with moderate intellectual disabilities need considerable support in school, at home, and in the community in order to fully participate. While their academic potential is limited, they can learn simple health and safety skills and to participate in simple activities. As adults, they may live with their parents, in a supportive group home, or even semi-independently with significant supportive services to help them, for example, to manage their finances. As adults, they may work in a sheltered workshop. About 10% of people with ID are likely to have moderate ID.
People with severe ID (IQ 20–34), accounting for 3.5% of persons with ID, or profound ID (IQ 19 or below), accounting for 1.5% of people with ID, need more intensive support and supervision for their entire lives. They may learn some ADLs, but an intellectual disability is considered severe or profound when individuals are unable to independently care for themselves without ongoing significant assistance from a caregiver throughout adulthood. Individuals with profound ID are completely dependent on others for all ADLs and to maintain their physical health and safety. They may be able to learn to participate in some of these activities to a limited degree.
Co-morbidity
Autism and intellectual disability
Intellectual disability and autism spectrum disorder (ASD) share clinical characteristics, which can cause confusion during diagnosis. Because the two disorders overlap so often, misattribution can be detrimental to a person's well-being: those with ASD who show symptoms of ID may be given a co-diagnosis and treated for a disorder they do not have, and likewise those with ID who are mistaken to have ASD may be treated for symptoms of a disorder they do not have. Differentiating between these two disorders allows clinicians to deliver or prescribe the appropriate treatments. Comorbidity between ID and ASD is very common; roughly 30% of those with ASD also have ID. Both ASD and ID include deficits in communication and social awareness among their defining criteria.
In a study conducted in 2016 surveying 2816 cases, it was found that the top subsets that help differentiate between those with ID and ASD are "impaired non-verbal social behavior and lack of social reciprocity, [...] restricted interests, strict adherence to routines, stereotyped and repetitive motor mannerisms, and preoccupation with parts of objects". Those with ASD tend to show more deficits in non-verbal social behavior such as body language and understanding social cues. In a study done in 2008 of 336 individuals with varying levels of ID, it was found that those with ID display fewer instances of repetitive or ritualistic behaviors. It also recognized that those with ASD, when compared to those with ID, were more likely to isolate themselves and make less eye contact. When it comes to classification, ID and ASD have very different guidelines. ID has a standardized assessment called the Supports Intensity Scale (SIS); this measures severity on a system built around how much support an individual will need. While ASD also classifies severity by support needed, there is no standard assessment; clinicians are free to diagnose severity using their own judgment.
Epilepsy and intellectual disability
Around 22% of individuals with ID suffer from epilepsy. The incidence of epilepsy is associated with the level of ID; epilepsy affects around half of individuals with profound ID. Proper epilepsy management is particularly crucial in this population, as individuals are at increased risk of sudden unexpected death in epilepsy. Nonetheless, epilepsy management in the ID population can be challenging due to high levels of polypharmacy prescribing, drug interactions, and increased vulnerability to adverse effects. It is thought that 70% of individuals with ID and epilepsy are pharmaco-resistant; however, only around 10% of individuals are prescribed anti-seizure medications (ASMs) licensed for pharmaco-resistant epilepsy. Research shows that certain ASMs, including levetiracetam and brivaracetam, show similar efficacy and tolerability in individuals with ID as compared to those without. There is much ongoing research into epilepsy management in the ID population.
Causes
Among children, the cause of intellectual disability is unknown for one-third to one-half of cases. About 5% of cases are inherited. Genetic defects that cause intellectual disability, but are not inherited, can be caused by accidents or mutations in genetic development. Examples of such accidents are development of an extra chromosome 18 (trisomy 18) and Down syndrome, which is the most common genetic cause. DiGeorge syndrome and fetal alcohol spectrum disorders are the next most common causes. Some other frequently observed causes include:
Genetic conditions. Sometimes disability is caused by abnormal genes inherited from parents, errors when genes combine, or other reasons like de novo mutations in genes associated with intellectual disability. The most prevalent genetic conditions include Down syndrome, Klinefelter syndrome, Fragile X syndrome (common among boys), neurofibromatosis, congenital hypothyroidism, Williams syndrome, phenylketonuria (PKU), and Prader–Willi syndrome. Other genetic conditions include Phelan–McDermid syndrome (22q13del), Mowat–Wilson syndrome, genetic ciliopathy, and Siderius type X-linked intellectual disability, which is caused by mutations in the PHF8 gene. In the rarest of cases, abnormalities with the X or Y chromosome may also cause disability. Tetrasomy X and pentasomy X syndromes affect a small number of girls worldwide, while boys may be affected by 49,XXXXY or 49,XYYYY. 47,XYY is not associated with significantly lowered IQ, though affected individuals may have slightly lower IQs than non-affected siblings on average.
Problems during pregnancy. Intellectual disability can result when the fetus does not develop properly. For example, there may be a problem with the way the fetus's cells divide as it grows. A pregnant woman who drinks alcohol (see fetal alcohol spectrum disorder) or gets an infection like rubella during pregnancy may also have a baby with an intellectual disability.
Problems at birth. If a baby has problems during labor and birth, such as not getting enough oxygen, they may have a developmental disability due to brain damage.
The group of proteins known as histones have an essential part in gene regulation, and sometimes these proteins become modified and are prevented from working properly. When the genes responsible for the development of neurons are affected, it affects the brain and behavior in the individual.
Exposure to certain types of disease or toxins. Diseases like whooping cough, measles, or meningitis can cause intellectual disability if medical care is delayed or inadequate. Exposure to poisons like lead or mercury may also affect mental ability.
Iodine deficiency, affecting approximately 2 billion people worldwide, is the leading preventable cause of intellectual disability in areas of the developing world where iodine deficiency is endemic. Iodine deficiency also causes goiter, an enlargement of the thyroid gland. More common than full-fledged congenital iodine deficiency syndrome (formerly cretinism), as intellectual disability caused by severe iodine deficiency is called, is mild impairment of intelligence. Residents of certain areas of the world, due to natural deficiency and governmental inaction, are severely affected by iodine deficiency. India has 500 million people with a deficiency, 54 million with goiter, and 2 million with congenital iodine deficiency. Among other nations affected by iodine deficiency, China and Kazakhstan have instituted widespread salt iodization programs. But, as of 2006, Russia had not.
Malnutrition is a common cause of reduced intelligence in parts of the world affected by famine, such as Ethiopia and nations struggling with extended periods of warfare that disrupt agriculture production and distribution.
Absence of the arcuate fasciculus.
Furthermore, lack of stimulation of sensory pathways in infants can also cause developmental and cognitive delays.
Diagnosis
According to both the American Association on Intellectual and Developmental Disabilities and the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), three criteria must be met for a diagnosis of intellectual disability: significant limitation in general mental abilities (intellectual functioning), significant limitations in one or more areas of adaptive behavior across multiple environments (as measured by an adaptive behavior rating scale, i.e. communication, self-help skills, interpersonal skills, and more), and evidence that the limitations became apparent in childhood or adolescence (onset during developmental phase).
In general, people with intellectual disabilities have an IQ below 70, but clinical discretion may be necessary for individuals who have a somewhat higher IQ but severe impairment in adaptive functioning.
It is formally diagnosed by an assessment of IQ and adaptive behavior. A third condition requiring onset during the developmental period is used to distinguish intellectual disability from other conditions, such as traumatic brain injuries and dementias (including Alzheimer's disease).
Intelligence quotient
The first English-language IQ test, the Stanford–Binet Intelligence Scales, was adapted from a test battery designed for school placement by Alfred Binet in France. Lewis Terman adapted Binet's test and promoted it as a test measuring "general intelligence". Terman's test was the first widely used mental test to report scores in "intelligence quotient" form ("mental age" divided by chronological age, multiplied by 100). Current tests are scored in "deviation IQ" form, with a performance level by a test-taker two standard deviations below the median score for the test-taker's age group defined as IQ 70. Until the most recent revision of diagnostic standards, an IQ of 70 or below was a primary factor for intellectual disability diagnosis, and IQ scores were used to categorize degrees of intellectual disability.
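A minimal numerical sketch of the two scoring conventions described above; the age-group mean of 100 and standard deviation of 15 used here are the usual modern convention, assumed for illustration rather than stated in the text.

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    # Terman-style "ratio IQ": mental age divided by chronological age, times 100.
    return mental_age / chronological_age * 100

def deviation_iq(raw_score: float, group_mean: float, group_sd: float) -> float:
    # Modern "deviation IQ": distance from the age-group norm in standard
    # deviations, rescaled so two SD below the norm corresponds to IQ 70.
    return 100 + 15 * (raw_score - group_mean) / group_sd

print(ratio_iq(mental_age=8, chronological_age=10))             # 80.0
print(deviation_iq(raw_score=70, group_mean=100, group_sd=15))  # 70.0
```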
Since the current diagnosis of intellectual disability is not based on IQ scores alone, but must also take into consideration a person's adaptive functioning, the diagnosis is not made rigidly. It encompasses intellectual scores, adaptive functioning scores from an adaptive behavior rating scale based on descriptions of known abilities provided by someone familiar with the person, and also the observations of the assessment examiner, who is able to find out directly from the person what they can understand, communicate, and the like. IQ assessment must be based on a current test, so that the diagnosis avoids the pitfall of the Flynn effect, in which changes in population IQ test performance shift IQ test norms over time.
Distinction from other disabilities
Clinically, intellectual disability is a subtype of cognitive deficit or disabilities affecting intellectual abilities, which is a broader concept and includes intellectual deficits that are too mild to properly qualify as intellectual disability, or too specific (as in specific learning disability), or acquired later in life through acquired brain injuries or neurodegenerative diseases like dementia. Cognitive deficits may appear at any age. Developmental disability is any disability that is due to problems with growth and development. This term encompasses many congenital medical conditions that have no mental or intellectual components, although it, too, is sometimes used as a euphemism for intellectual disability.
Limitations in more than one area
Adaptive behavior, or adaptive functioning, refers to the skills needed to live independently (or at the minimally acceptable level for age). To assess adaptive behavior, professionals compare the functional abilities of a child to those of other children of similar age. To measure adaptive behavior, professionals use structured interviews, with which they systematically elicit information about persons' functioning in the community from people who know them well. There are many adaptive behavior scales, and accurate assessment of the quality of someone's adaptive behavior requires clinical judgment as well. Certain skills are important to adaptive behavior, such as:
Daily living skills, such as getting dressed, using the bathroom, and feeding oneself
Communication skills, such as understanding what is said and being able to answer
Social skills with peers, family members, spouses, adults, and others
Other specific skills can be critical to an individual's inclusion in the community and to develop appropriate social behaviors, as for example being aware of the different social expectations linked to the principal lifespan stages (i.e., childhood, adulthood, old age). The results of a Swiss study suggest that the performance of adults with ID in recognizing different lifespan stages is related to specific cognitive abilities and to the type of material used to test this performance.
Management
By most definitions, intellectual disability is more accurately considered a disability rather than a disease. Intellectual disability can be distinguished in many ways from mental illness, such as schizophrenia or depression. Currently, there is no "cure" for an established disability, though with appropriate support and teaching, most individuals can learn to do many things. Causes, such as congenital hypothyroidism, if detected early may be treated to prevent the development of an intellectual disability.
There are thousands of agencies around the world that provide assistance for people with developmental disabilities. They include state-run, for-profit, and non-profit, privately run agencies. Within one agency there could be departments that include fully staffed residential homes, day rehabilitation programs that approximate schools, workshops wherein people with disabilities can obtain jobs, programs that assist people with developmental disabilities in obtaining jobs in the community, programs that provide support for people with developmental disabilities who have their own apartments, programs that assist them with raising their children, and many more. There are also many agencies and programs for parents of children with developmental disabilities.
Beyond that, there are specific programs that people with developmental disabilities can take part in wherein they learn basic life skills. These "goals" may take a much longer amount of time for them to accomplish, but the ultimate goal is independence. This may be anything from independence in tooth brushing to an independent residence. People with developmental disabilities learn throughout their lives and can obtain many new skills even late in life with the help of their families, caregivers, clinicians and the people who coordinate the efforts of all of these people.
There are four broad areas of intervention that allow for active participation from caregivers, community members, clinicians, and of course, the individual(s) with an intellectual disability. These include psychosocial treatments, behavioral treatments, cognitive-behavioral treatments, and family-oriented strategies. Psychosocial treatments are intended primarily for children before and during the preschool years as this is the optimum time for intervention. This early intervention should include encouragement of exploration, mentoring in basic skills, celebration of developmental advances, guided rehearsal and extension of newly acquired skills, protection from harmful displays of disapproval, teasing, or punishment, and exposure to a rich and responsive language environment. A great example of a successful intervention is the Carolina Abecedarian Project that was conducted with over 100 children from low socioeconomic status families beginning in infancy through pre-school years. Results indicated that by age 2, the children provided the intervention had higher test scores than control group children, and they remained approximately 5 points higher 10 years after the end of the program. By young adulthood, children from the intervention group had better educational attainment, employment opportunities, and fewer behavioral problems than their control-group counterparts.
Core components of behavioral treatments include language and social skills acquisition. Typically, one-to-one training is offered in which a therapist uses a shaping procedure in combination with positive reinforcements to help the child pronounce syllables until words are completed. Sometimes involving pictures and visual aids, therapists aim at improving speech capacity so that short sentences about important daily tasks (e.g. bathroom use, eating, etc.) can be effectively communicated by the child. In a similar fashion, older children benefit from this type of training as they learn to sharpen their social skills such as sharing, taking turns, following instruction, and smiling. At the same time, a movement known as social inclusion attempts to increase valuable interactions between children with an intellectual disability and their non-disabled peers. Cognitive-behavioral treatments, a combination of the previous two treatment types, involves a strategical-metastrategical learning technique that teaches children math, language, and other basic skills pertaining to memory and learning. The first goal of the training is to teach the child to be a strategical thinker through making cognitive connections and plans. Then, the therapist teaches the child to be metastrategical by teaching them to discriminate among different tasks and determine which plan or strategy suits each task. Finally, family-oriented strategies delve into empowering the family with the skill set they need to support and encourage their child or children with an intellectual disability. In general, this includes teaching assertiveness skills or behavior management techniques as well as how to ask for help from neighbors, extended family, or day-care staff. As the child ages, parents are then taught how to approach topics such as housing/residential care, employment, and relationships. The ultimate goal for every intervention or technique is to give the child autonomy and a sense of independence using the acquired skills they have. In a 2019 Cochrane review on beginning reading interventions for children and adolescents with intellectual disability, small to moderate improvements in phonological awareness, word reading, decoding, expressive and receptive language skills, and reading fluency were noted when these elements were part of the teaching intervention.
Although there is no specific medication for intellectual disability, many people with developmental disabilities have further medical complications and may be prescribed several medications. For example, autistic children with developmental delay may be prescribed antipsychotics or mood stabilizers to help with their behavior. Use of psychotropic medications such as benzodiazepines in people with intellectual disability requires monitoring and vigilance as side effects occur commonly and are often misdiagnosed as behavioral and psychiatric problems.
Epidemiology
Intellectual disability affects about 2–3% of the general population. 75–90% of the affected people have mild intellectual disability. Non-syndromic or idiopathic ID accounts for 30–50% of cases. About a quarter of cases are caused by a genetic disorder. Cases of unknown cause affect about 95 million people . It is more common in males and in low to middle income countries.
History
Intellectual disability has been documented under a variety of names throughout history. Throughout much of human history, society was unkind to those with any type of disability, and people with intellectual disability were commonly viewed as burdens on their families.
Greek and Roman philosophers, who valued reasoning abilities, disparaged people with intellectual disability as barely human. The oldest physiological view of intellectual disability is in the writings of Hippocrates in the late fifth century BCE, who believed that it was caused by an imbalance in the four humors in the brain. In ancient Rome, people with intellectual disabilities had limited rights and were generally looked down upon. They were considered property and could be kept as slaves by their fathers. They could not marry, hold office, or raise children. Many were killed early in childhood and dumped into the Tiber to keep them from burdening society. However, they were not held responsible for their crimes under Roman law, and they were also used to perform menial labor.
Caliph Al-Walid (r. 705–715) built one of the first care homes for individuals with intellectual disabilities and built the first hospital which accommodated intellectually disabled individuals as part of its services. In addition, Al-Walid assigned each intellectually disabled individual a caregiver.
Until the Enlightenment in Europe, care and asylum was provided by families and the church (in monasteries and other religious communities), focusing on the provision of basic physical needs such as food, shelter, and clothing. Negative stereotypes were prominent in social attitudes of the time.
In the 13th century, England declared people with intellectual disabilities to be incapable of making decisions or managing their affairs. Guardianships were created to take over their financial affairs.
In the 17th century, Thomas Willis provided the first description of intellectual disability as a disease. He believed that it was caused by structural problems in the brain. According to Willis, the anatomical problems could be either an inborn condition or acquired later in life.
The first known person in the British colonies with an intellectual disability was Benoni Buck, son of Richard Buck, whose life and guardianship battles provide significant insight into the early legal and social treatment of people with disabilities in what is now the United States.
In the 18th and 19th centuries, housing and care moved away from families and towards an asylum model. People were placed by, or removed from, their families (usually in infancy) and housed in large professional institutions, many of which were self-sufficient through the labor of the residents. Some of these institutions provided a very basic level of education (such as differentiation between colors and basic word recognition and numeracy), but most continued to focus solely on the provision of basic needs of food, clothing, and shelter. Conditions in such institutions varied widely, but the support provided was generally non-individualized, with aberrant behavior and low levels of economic productivity regarded as a burden to society. Individuals of higher wealth were often able to afford higher degrees of care such as home care or private asylums. Heavy tranquilization and assembly-line methods of support were the norm, and the medical model of disability prevailed. Services were provided based on the relative ease to the provider, not based on the needs of the individual. A survey taken in 1891 in Cape Town, South Africa shows the distribution between different facilities. Out of 2,046 persons surveyed, 1,281 were in private dwellings, 120 in jails, and 645 in asylums, with men representing nearly two-thirds of the number surveyed. In situations of scarcity of accommodation, preference was given to white men and Black men (whose insanity threatened white society by disrupting employment relations and the taboo sexual contact with white women).
In the late 19th century, in response to Charles Darwin's On the Origin of Species, Francis Galton proposed selective breeding of humans to reduce intellectual disability. Early in the 20th century, the eugenics movement became popular throughout the world. This led to forced sterilization and prohibition of marriage in most of the developed world and was later used by Adolf Hitler as a rationale for the mass murder of people with intellectual disability during the Holocaust. Eugenics was later abandoned as a violation of human rights, and the practice of forced sterilization and prohibition from marriage was discontinued by most of the developed world by the mid-20th century.
In 1905, Alfred Binet produced the first standardized test for measuring intelligence in children.
Although ancient Roman law had declared people with intellectual disability to be incapable of the deliberate intent to harm that was necessary for a person to commit a crime, during the 1920s, Western society believed they were morally degenerate.
Ignoring the prevailing attitude, U.S.-based Civitans adopted service to people with developmental disabilities as a major organizational emphasis in 1952. Their earliest efforts included workshops for special education teachers and daycamps for children with disabilities, all at a time when such training and programs were almost nonexistent. The segregation of people with developmental disabilities was not widely questioned by academics or policy-makers until the 1969 publication of Wolf Wolfensberger's seminal work "The Origin and Nature of Our Institutional Models", drawing on some of the ideas proposed by S. G. Howe 100 years earlier. This study posited that society characterizes people with disabilities as deviant, sub-human and burdens of charity, resulting in the adoption of that "deviant" role. Wolfensberger argued that this dehumanization, and the segregated institutions that result from it, ignored the potential productive contributions that all people can make to society. He pushed for a shift in policy and practice that recognized the human needs of those with intellectual disability and provided the same basic human rights as for the rest of the population.
This publication may be regarded as the first move towards the widespread adoption of the social model of disability in regard to these types of disabilities, and was the impetus for the development of government strategies for desegregation. Successful lawsuits against governments and increasing awareness of human rights and self-advocacy also contributed to this process, resulting in the passing in the U.S. of the Civil Rights of Institutionalized Persons Act in 1980.
From the 1960s to the present, most states have moved towards the elimination of segregated institutions. Normalization and deinstitutionalization are dominant. Along with the work of Wolfensberger and others including Gunnar and Rosemary Dybwad, a number of scandalous revelations around the horrific conditions within state institutions created public outrage that led to change to a more community-based method of providing services.
By the mid-1970s, most governments had committed to de-institutionalization and had started preparing for the wholesale movement of people into the general community, in line with the principles of normalization. In most countries, this was essentially complete by the late 1990s, although the debate over whether or not to close institutions persists in some states, including Massachusetts.
In the past, lead poisoning and infectious diseases were significant causes of intellectual disability. Some causes of intellectual disability are becoming less common as medical advances, such as vaccination, become more widespread. Other causes are increasing as a proportion of cases, perhaps due to rising maternal age, which is associated with several syndromic forms of intellectual disability.
Along with the changes in terminology, and the downward drift in acceptability of the old terms, institutions of all kinds have had to repeatedly change their names. This affects the names of schools, hospitals, societies, government departments, and academic journals. For example, the Midlands Institute of Mental Sub-normality became the British Institute of Mental Handicap and is now the British Institute of Learning Disability. This phenomenon is shared with mental health and motor disabilities, and seen to a lesser degree in sensory disabilities.
Terminology
Over the past two decades, the term intellectual disability has become preferred by most advocates and researchers in most English-speaking countries. In a 2012 survey of 101 Canadian healthcare professionals, 78% said they would use the term developmental delay with parents over intellectual disability (8%). Expressions like developmentally disabled, special, special needs, or challenged are sometimes used, but have been criticized for "reinforc[ing] the idea that people cannot deal honestly with their disabilities".
The term mental retardation, which stemmed from the understanding that such conditions arose as a result of delays or retardation of a child's natural development, was used in the American Psychiatric Association's DSM-IV (1994) and in the World Health Organization's ICD-10 (codes F70–F79). In the next revision, ICD-11, it was replaced by the term "disorders of intellectual development" (codes 6A00–6A04; 6A00.Z for the "unspecified" diagnosis code). The term "intellectual disability (intellectual developmental disorder)" is used in the DSM-5 (2013). The term "mental retardation" is still used in some professional settings such as governmental aid programs or health insurance paperwork, where "mental retardation" is specifically covered but "intellectual disability" is not.
Historical terms for intellectual disability eventually come to be perceived as an insult, in a process commonly known as the euphemism treadmill. The terms mental retardation and mentally retarded became popular in the middle of the 20th century to replace the previous set of terms, which included "imbecile", "idiot", "feeble-minded", and "moron", among others, and are now considered offensive. By the end of the 20th century, retardation and retard had become widely seen as disparaging, politically incorrect, and in need of replacement.
Usage has changed over the years and differed from country to country. For example, mental retardation in some contexts covers the whole field, but it previously applied to people with milder impairments. Feeble-minded used to mean mild impairments in the UK, and once applied in the US to the whole field. "Borderline intellectual functioning" is not currently defined, but the term may be used to apply to people with IQs in the 70s. People with IQs of 70 to 85 used to be eligible for special consideration in the US public education system on grounds of intellectual disability.
United States
In North America, intellectual disability is subsumed into the broader term developmental disability, which also includes epilepsy, autism, cerebral palsy, and other disorders that develop during the developmental period (birth to age 18). Because service provision is tied to the designation "developmental disability", it is used by many parents, direct support professionals, and physicians. In the United States, however, in school-based settings, the more specific term mental retardation or, more recently (and preferably), intellectual disability, is still typically used, and is one of 13 categories of disability under which children may be identified for special education services under Public Law 108–446.
The phrase intellectual disability is increasingly used to describe people with significantly below-average cognitive ability. These terms are sometimes used as a means of separating general intellectual limitations from specific, limited deficits as well as indicating that it is not an emotional or psychological disability. It is not specific to congenital disorders such as Down syndrome.
The American Association on Mental Retardation changed its name to the American Association on Intellectual and Developmental Disabilities (AAIDD) in 2007, and soon thereafter changed the names of its scholarly journals to reflect the term "intellectual disability". In 2010, the AAIDD released its 11th edition of its terminology and classification manual, which also used the term intellectual disability.
United Kingdom
In the UK, mental handicap had become the common medical term, replacing mental subnormality in Scotland and mental deficiency in England and Wales, until Stephen Dorrell, Secretary of State for Health for the United Kingdom from 1995 to 1997, changed the NHS's designation to learning disability. The new term is not yet widely understood, and is often taken to refer to problems affecting schoolwork (the American usage), which are known in the UK as "learning difficulties". British social workers may use "learning difficulty" to refer to both people with intellectual disability and those with conditions such as dyslexia. In education, "learning difficulties" is applied to a wide range of conditions: "specific learning difficulty" may refer to dyslexia, dyscalculia or developmental coordination disorder, while "moderate learning difficulties", "severe learning difficulties" and "profound learning difficulties" refer to more significant impairments. The term "Profound and Multiple Learning Disability/ies" (PMLD) is used: the NHS describes PMLD as "when a person has a severe learning disability and other disabilities that significantly affect their ability to communicate and be independent".
In England and Wales between 1983 and 2008, the Mental Health Act 1983 defined "mental impairment" and "severe mental impairment" as "a state of arrested or incomplete development of mind which includes significant/severe impairment of intelligence and social functioning and is associated with abnormally aggressive or seriously irresponsible conduct on the part of the person concerned." As behavior was involved, these were not necessarily permanent conditions: they were defined for the purpose of authorizing detention in hospital or guardianship. The term mental impairment was removed from the Act in November 2008, but the grounds for detention remained. However, English statute law uses mental impairment elsewhere in a less well-defined manner—e.g. to allow exemption from taxes—implying that intellectual disability without any behavioral problems is what is meant.
A 2008 BBC poll conducted in the United Kingdom concluded that 'retard' was the most offensive disability-related word. Conversely, when a contestant on Celebrity Big Brother used the phrase "walking like a retard" during a live broadcast, despite complaints from the public and the charity Mencap, the communications regulator Ofcom did not uphold the complaint, saying "it was not used in an offensive context [...] and had been used light-heartedly". It was, however, noted that two previous similar complaints from other shows were upheld.
Australia
In the past, Australia has used British and American terms interchangeably, including "mental retardation" and "mental handicap". Today, "intellectual disability" is the preferred and more commonly used descriptor.
Society and culture
People with intellectual disabilities are often not seen as full citizens of society. Person-centered planning and approaches are seen as methods of addressing the continued labeling and exclusion of socially devalued people, such as people with disabilities, encouraging a focus on the person as someone with capacities and gifts as well as support needs. The self-advocacy movement promotes the right of self-determination and self-direction by people with intellectual disabilities, which means allowing them to make decisions about their own lives.
Until the middle of the 20th century, people with intellectual disabilities were routinely excluded from public education, or educated away from other typically developing children. Compared to peers who were segregated in special schools, students who are mainstreamed or included in regular classrooms report similar levels of stigma and social self-conception, but more ambitious plans for employment. As adults, they may live independently, with family members, or in different types of institutions organized to support people with disabilities. About 8% currently live in an institution or a group home.
In the United States, the average lifetime cost of a person with an intellectual disability amounts to $223,000 per person, in 2003 US dollars, for direct costs such as medical and educational expenses. The indirect costs were estimated at $771,000, due to shorter lifespans and lower than average economic productivity. The total direct and indirect costs, which amount to a little more than a million dollars, are slightly more than the economic costs associated with cerebral palsy, and double that associated with serious vision or hearing impairments. Of the costs, about 14% is due to increased medical expenses (not including what is normally incurred by the typical person), and 10% is due to direct non-medical expenses, such as the excess cost of special education compared to standard schooling. The largest amount, 76%, is indirect costs accounting for reduced productivity and shortened lifespans. Some expenses, such as ongoing costs to family caregivers or the extra costs associated with living in a group home, were excluded from this calculation.
Human rights and legal status
The law treats persons with intellectual disabilities differently from those without intellectual disabilities. Their human rights and freedoms, including the right to vote, to conduct business, to enter into contracts, to marry, and to be educated, are often limited. The courts have upheld some of these limitations and found discrimination in others. The UN Convention on the Rights of Persons with Disabilities, which sets minimum standards for the rights of persons with disabilities, has been ratified by more than 180 countries. In several U.S. states and several European Union states, persons with intellectual disabilities are disenfranchised. The European Court of Human Rights ruled in Alajos Kiss v. Hungary (2010) that Hungary cannot restrict voting rights solely on the basis of guardianship due to a psychosocial disability.
Health disparities
People with intellectual disabilities are usually at a higher risk of living with complex health conditions such as epilepsy and neurological disorders, gastrointestinal disorders, and behavioral and psychiatric problems compared to people without disabilities. Adults also have a higher prevalence of poor social determinants of health, behavioral risk factors, depression, diabetes, and poor or fair health status than adults without intellectual disability.
In the United Kingdom people with intellectual disability live on average 16 years less than the general population. Some of the barriers that exist for people with ID accessing quality healthcare include: communication challenges, service eligibility, lack of training for healthcare providers, diagnostic overshadowing, and absence of targeted health promotion services. Key recommendations from the CDC for improving the health status for people with intellectual disabilities include: improve access to health care, improve data collection, strengthen the workforce, include people with ID in public health programs, and prepare for emergencies with people with disabilities in mind.
| Biology and health sciences | Disability | null |
18567168 | https://en.wikipedia.org/wiki/Computer%20graphics%20%28computer%20science%29 | Computer graphics (computer science) | Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.
Overview
Computer graphics studies manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities.
Connected studies include:
Applied mathematics
Computational geometry
Computational topology
Computer vision
Image processing
Information visualization
Scientific visualization
Applications of computer graphics include:
Print design
Digital art
Special effects
Video games
Visual effects
History
There are several international conferences and journals where the most significant results in computer graphics are published. Among them are the SIGGRAPH and Eurographics conferences and the Association for Computing Machinery (ACM) Transactions on Graphics journal. The joint Eurographics and ACM SIGGRAPH symposium series features the major venues for the more specialized sub-fields: Symposium on Geometry Processing, Symposium on Rendering, Symposium on Computer Animation, and High Performance Graphics.
As in the rest of computer science, conference publications in computer graphics are generally more significant than journal publications (and subsequently have lower acceptance rates).
Subfields
A broad classification of major subfields in computer graphics might be:
Geometry: ways to represent and process surfaces
Animation: ways to represent and manipulate motion
Rendering: algorithms to reproduce light transport
Imaging: image acquisition or image editing
Geometry
The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, boundary representations are most commonly used. Two dimensional surfaces are a good representation for most objects, though they may be non-manifold. Since surfaces are not finite, discrete digital approximations are used. Polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have become more popular recently (see for instance the Symposium on Point-Based Graphics). These representations are Lagrangian, meaning the spatial locations of the samples are independent. Recently, Eulerian surface descriptions (i.e., where spatial samples are fixed) such as level sets have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example).
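As a concrete illustration of the polygonal-mesh representation just described, the sketch below stores a tiny tetrahedron as a vertex list plus index triples and computes one face normal; the data and function name are illustrative only, not drawn from any particular library.

```python
# A minimal indexed triangle mesh: vertex positions plus triangles that index
# into the vertex list (a Lagrangian representation: samples carry positions).
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]  # a tetrahedron

def face_normal(verts, face):
    # Un-normalized normal of a triangle via the cross product of two edges.
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (verts[i] for i in face)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

print(face_normal(vertices, faces[0]))  # (0.0, 0.0, 1.0)
```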
Geometry subfields include:
Implicit surface modeling – an older subfield which examines the use of algebraic surfaces, constructive solid geometry, etc., for surface representation.
Digital geometry processing – surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading.
Discrete differential geometry – a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics.
Point-based graphics – a recent field which focuses on points as the fundamental representation of surfaces.
Subdivision surfaces
Out-of-core mesh processing – another recent field which focuses on mesh datasets that do not fit in main memory.
Animation
The subfield of animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically, most work in this field has focused on parametric and data-driven models, but recently physical simulation has become more popular as computers have become more powerful computationally.
Animation subfields include:
Performance capture
Character animation
Physical simulation (e.g. cloth modeling, animation of fluid dynamics, etc.)
Rendering
Rendering generates images from a model. Rendering may simulate light transport to create realistic images or it may create images that have a particular artistic style in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light passes from one place to another) and scattering (how surfaces interact with light). See Rendering (computer graphics) for more information.
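As a small illustration of the scattering half of that picture, the sketch below evaluates an ideal diffuse (Lambertian) surface, one of the simplest scattering models; the vectors and values are arbitrary examples, not part of any particular renderer.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambertian_radiance(albedo, light_intensity, normal, light_dir):
    # Ideal diffuse scattering: reflected radiance is (albedo / pi) times the
    # incoming light times the cosine of the angle between normal and light.
    n = normalize(normal)
    l = normalize(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return (albedo / math.pi) * light_intensity * cos_theta

# Arbitrary example: a mid-grey surface lit from 45 degrees above.
print(lambertian_radiance(0.5, 1.0, normal=(0, 0, 1), light_dir=(0, 1, 1)))
```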
Rendering subfields include:
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.
Scattering: Models of scattering (how light interacts with the surface at a given point) and shading (how material properties vary across the surface) are used to describe the appearance of a surface. In graphics these problems are often studied within the context of rendering, since they can substantially affect the design of rendering algorithms. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function (BSDF). Shading addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where); descriptions of this kind are typically expressed with a program called a shader, and a minimal scattering-function sketch follows this list. (There is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.)
Non-photorealistic rendering
Physically based rendering – concerned with generating images according to the laws of geometric optics
Real-time rendering – focuses on rendering for interactive applications, typically using specialized hardware like GPUs
Relighting – recent area concerned with quickly re-rendering scenes
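Following up on the scattering item above, here is a minimal, hypothetical Python sketch of a diffuse (Lambertian) scattering function and a toy shader that applies it for a single directional light; it is a simplification for illustration, not production shading code.

```python
import math

def lambertian_brdf(albedo):
    """Constant BRDF of an ideal diffuse surface: albedo / pi per colour channel."""
    return [c / math.pi for c in albedo]

def shade(albedo, light_dir, normal, light_radiance):
    """Toy 'shader': outgoing radiance from one directional light on a diffuse surface."""
    cos_theta = max(0.0, sum(l * n for l, n in zip(light_dir, normal)))
    brdf = lambertian_brdf(albedo)
    return [f * light_radiance[i] * cos_theta for i, f in enumerate(brdf)]

# Example: a grey surface lit head-on by a white light.
print(shade(albedo=(0.5, 0.5, 0.5),
            light_dir=(0.0, 0.0, 1.0),
            normal=(0.0, 0.0, 1.0),
            light_radiance=(1.0, 1.0, 1.0)))  # ≈ [0.159, 0.159, 0.159], i.e. 0.5/π per channel
```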
Notable researchers
Arthur Appel
James Arvo
Brian A. Barsky
Jim Blinn
Jack E. Bresenham
Loren Carpenter
Edwin Catmull
James H. Clark
Robert L. Cook
Franklin C. Crow
Paul Debevec
David C. Evans
Ron Fedkiw
Steven K. Feiner
James D. Foley
David Forsyth
Henry Fuchs
Andrew Glassner
Henri Gouraud (computer scientist)
Donald P. Greenberg
Eric Haines
R. A. Hall
Pat Hanrahan
John Hughes
Jim Kajiya
Takeo Kanade
Kenneth Knowlton
Marc Levoy
Martin Newell (computer scientist)
James O'Brien
Ken Perlin
Matt Pharr
Bui Tuong Phong
Przemyslaw Prusinkiewicz
William Reeves
David F. Rogers
Holly Rushmeier
Peter Shirley
James Sethian
Ivan Sutherland
Demetri Terzopoulos
Kenneth Torrance
Greg Turk
Andries van Dam
Henrik Wann Jensen
Gregory Ward
John Warnock
J. Turner Whitted
Lance Williams
Applications
Bitmap Design / Image Editing
Adobe Photoshop
Corel Photo-Paint
GIMP
Krita
Vector drawing
Adobe Illustrator
CorelDRAW
Inkscape
Affinity Designer
Sketch
Architecture
VariCAD
FreeCAD
AutoCAD
QCAD
LibreCAD
DataCAD
Corel Designer
Video editing
Adobe Premiere Pro
Sony Vegas
Final Cut
DaVinci Resolve
Cinelerra
VirtualDub
Sculpting, Animation, and 3D Modeling
Blender 3D
Wings 3D
ZBrush
Sculptris
SolidWorks
Rhino3D
SketchUp
3ds Max
Cinema 4D
Maya
Houdini
Digital compositing
Nuke
Blackmagic Fusion
Adobe After Effects
Natron
Rendering
V-Ray
RedShift
RenderMan
Octane Render
Mantra
Lumion (Architectural visualization)
Other applications examples
ACIS - geometric core
Autodesk Softimage
POV-Ray
Scribus
Silo
Hexagon
Lightwave
Pygmy sperm whale
https://en.wikipedia.org/wiki/Pygmy%20sperm%20whale
The pygmy sperm whale (Kogia breviceps) is one of two extant species in the family Kogiidae in the sperm whale superfamily. They are not often sighted at sea, and most of what is known about them comes from the examination of stranded specimens.
Taxonomy
The pygmy sperm whale was first described by naturalist Henri Marie Ducrotay de Blainville in 1838. He based this on the head of an individual washed up on the coasts of Audierne in France in 1784, which was then stored in the Muséum d'histoire naturelle. He recognized it as a type of sperm whale and assigned it to the same genus as the sperm whale (Physeter macrocephalus) as Physeter breviceps. He noted its small size and nicknamed it "cachalot à tête courte" (short-headed sperm whale); the species name breviceps is likewise Latin for "short-headed". In 1846, zoologist John Edward Gray erected the genus Kogia for the pygmy sperm whale as Kogia breviceps, and said it was intermediate between the sperm whale and dolphins.
In 1871, mammalogist Theodore Gill assigned it and Euphysetes (now the dwarf sperm whale, Kogia sima) to the subfamily Kogiinae, and the sperm whale to the subfamily Physeterinae. Both have now been elevated to the family level. In 1878, naturalist James Hector synonymized the dwarf sperm whale with the pygmy sperm whale, with both being referred to as K. breviceps until 1998.
Description
The pygmy sperm whale is not much larger than many dolphins. They are about at birth, growing to about at maturity. Adults weigh about . The underside is a creamy, occasionally pinkish colour and the back and sides are a bluish grey; however, considerable intermixing occurs between the two colours. The shark-like head is large in comparison to body size, giving an almost swollen appearance when viewed from the side. A whitish marking, often described as a "false gill", is seen behind each eye.
The lower jaw is very small and slung low. The blowhole is displaced slightly to the left when viewed from above facing forward. The dorsal fin is very small and hooked; its size is considerably smaller than that of the dwarf sperm whale and may be used for diagnostic purposes.
Anatomy
Like its giant relative, the sperm whale, the pygmy sperm whale has a spermaceti organ in its forehead (see sperm whale for a discussion of its purpose). It also has a sac in its intestines that contains a dark red fluid. The whale may expel this fluid when frightened, perhaps to confuse and disorient predators.
Dwarf and pygmy sperm whales possess the shortest rostrum of current day cetaceans with a skull that is greatly asymmetrical.
Pygmy sperm whales have from 50 to 55 vertebrae, and from 12 to 14 ribs on either side, although the latter are not necessarily symmetrical, and the hindmost ribs do not connect with the vertebral column. Each of the flippers has seven carpals, and a variable number of phalanges in the digits, reportedly ranging from two in the first digit to as many as 10 in the second digit. No true innominate bone exists; it is replaced by a sheet of dense connective tissue. The hyoid bone is unusually large, and presumably has a role in the whale's suction feeding.
Teeth
The pygmy sperm whale has between 20 and 32 teeth, all of which are set into the rostral part of the lower jaw. Unusually, adults lack enamel due to a mutation in the enamelysin gene, although enamel is present in very young individuals.
Melon
Like other toothed whales, the pygmy sperm whale has a "melon", a body of fat and wax in the head that it uses to focus and modulate the sounds it makes. The inner core of the melon has a higher wax content than the outer cortex. The inner core transmits sound more slowly than the outer layer, allowing it to refract sound into a highly directional beam. Behind the melon, separated by a thin membrane, is the spermaceti organ. Both the melon and the spermaceti organ are encased in a thick fibrous coat, resembling a bursa. The whale produces sound by moving air through the right nasal cavity, which includes a valvular structure, or museau de singe, with a thickened vocal reed, functioning like the vocal cords of humans.
Stomach
The stomach has three chambers. The first chamber, or forestomach, is not glandular, and opens directly into the second, fundic chamber, which is lined by digestive glands. A narrow tube runs from the second to the third, or pyloric, stomach, which is also glandular, and connects, via a sphincter, to the duodenum. Although fermentation of food material apparently occurs in the small intestine, no caecum is present.
Brain
The rostroventral dura of the brain contains a significant concentration of magnetite crystals, which suggests that K. breviceps can navigate by magnetoreception.
Studies have also shown that compared to the sperm whale, the pygmy sperm whale brain has significantly fewer neurons, which may be connected to a decreased complexity in social interaction and group-based living.
Echolocation
Like all toothed whales, the pygmy sperm whale hunts prey by echolocation. The sound many odontocetes produce for echolocation comes in the form of high-frequency clicks. The frequencies it uses are mostly ultrasonic, peaking around 125 kHz. Its echolocation clicks have been recorded to last an average of 600 microseconds. When closing in on prey, the click repetition rate starts at 20 Hz and then rises as the whale nears the target.
The pulse sounds that pygmy sperm whales make for echolocation are generated primarily from the museau de singe or monkey's muzzle, an anatomical structure located within the whale's skull that produces sound when air passes through its lips. The sound from the museau de singe is transferred to the attached cone of the spermaceti organ. Unlike in other odontocetes, the spermaceti organ contacts the posterior of the whale's melon. Fat from the core of the spermaceti organ helps direct sonic energy from the museau de singe to the melon. The melon acts as a sonic magnifier and gives directionality to sound pulses, making echolocation more efficient. The fat in the interior of the melon contains lipids of lower molecular weight than those of the surrounding outer melon. Since the sound waves move from a lower-velocity material to a higher-velocity one during sound production, the sound undergoes inward refraction and becomes increasingly focused. Variation in fat density within the melon ultimately contributes to the production of highly directional, ultrasonic sound beams in front of the melon. The combined melon and spermaceti organ system cooperates to focus echolocative sounds.
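One hedged way to see how a slow core and a faster outer layer can produce a directional beam is through Snell's law for sound refraction at an interface between media with speeds v1 and v2; this is a generic acoustics relation, not a measurement taken from pygmy sperm whales:

```latex
\frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2}
```

Because the wax-rich core carries sound more slowly than the outer melon, rays that stray from the core into the faster outer layer are bent away from the radial normal, and beyond the critical angle they are reflected back into the core, so acoustic energy entering from the spermaceti organ tends to be channeled along the melon's axis into a narrow forward beam.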
As in most odontocetes, the known echoreception apparatus of the pygmy sperm whale is linked to the fat-filled lower mandibles within the skull. However, the compositional topography of the pygmy sperm whale's skull shows unusually large fatty jowls surrounding the mandibles, suggesting a more intricate echoreception apparatus. Additionally, an unusual cushion structure of porous, spongy texture found behind the museau de singe has been hypothesized to be a possible "pressure receptor". The position of this cushion structure immediately adjacent to the largest cavities nearest the museau de singe may suggest that it is a sound absorber used for echoreception.
Reproduction
Although firm details concerning pygmy sperm whale reproduction are limited, they are believed to mate from April to September in the Southern Hemisphere and March to August in the Northern Hemisphere. These whales become sexually mature at age 4-5, and like virtually all mammals, are iteroparous (reproducing many times during their lives). Once a female whale is impregnated, the average gestation period lasts 9–11 months, and unusually for cetaceans, the female gives birth to a single calf head-first. Newborn calves are about in length, weighing 50 kg, and are weaned around one year of age. They are believed to live up to age 23.
Behaviour
The whale makes very inconspicuous movements. It rises to the surface slowly, with little splash or blow, and remains there motionless for some time. In Japan, the whale was historically known as the "floating whale" because of this. Its dive is equally lacking in grand flourish: it simply drops out of view. The species has a tendency to back away from rather than approach boats. Breaching has been observed, but is not common.
Pygmy sperm whales are normally either solitary or found in pairs, but have been seen in groups up to six. Dives have been estimated to last an average of 11 minutes, although longer dives up to 45 minutes have been reported. The ultrasonic clicks of pygmy sperm whales range from 60 to 200 kHz, peaking at 125 kHz, and the animals also make much lower-frequency "cries" at 1 to 2 kHz.
Diet/foraging behavior
Analysis of stomach contents suggests that pygmy sperm whales feed primarily on cephalopods, most commonly including bioluminescent species found in midwater environments. Most of the cephalopod hunting is known to be pelagic, and fairly shallow, within the first 100 m of the surface. The most common prey are reported to include glass squid, and lycoteuthid and ommastrephid squid, although the whales also consume other squid, and octopuses. They have also been reported to eat some deep-sea shrimps, but, compared with dwarf sperm whales, relatively few fish.
Predators may include great white sharks and killer whales.
Pygmy sperm whales and dwarf sperm whales are unique among cetaceans in using a form of "ink" to evade predation in a manner similar to squid. Both species have a sac in the lower portion of their intestinal tracts that contains up to 12 liters of dark reddish-brown fluid, which can be ejected to confuse or discourage potential predators.
Population and distribution
Pygmy sperm whales are found throughout the tropical and temperate waters of the Atlantic, Pacific, and Indian Oceans, and occasionally in colder waters such as off Russia. However, they are rarely sighted at sea, so most data come from stranded animals, which makes a precise range and migration map difficult to construct. They are believed to prefer off-shore waters, and are most frequently sighted in waters ranging from in depth, especially where upwelling water produces local concentrations of zooplankton and animal prey. Their status is usually described as rare, but occasional patches of higher-density strandings suggest they may be more common than previously estimated. The total population is unknown.
Fossils identified as belonging to K. breviceps have been recovered from Miocene deposits in Italy, Japan, and southern Africa.
Human interaction
Pygmy sperm whales have never been hunted on a wide scale. Land-based whalers have hunted them from Indonesia, Japan, and the Lesser Antilles. This species was impacted by whaling in the 18th and 19th centuries, as sperm whales were especially sought after by whalers for their sperm oil, produced by the spermaceti organ. The oil was used to fuel kerosene lamps. Ambergris, a waste product produced by the whales, was also valuable to whalers as it was used by humans in cosmetics and perfume.
They were not as heavily hunted as their larger counterpart, the sperm whale (Physeter macrocephalus), which, at typically around 120,000 lb, was preferred by whalers. The pygmy sperm whale is also rather inactive and slow to rise when coming to the surface, and as such does not bring much attention to itself. When encountered, however, they were easy targets, as they tended to swim slowly and lie motionless at the surface.
Individuals have also been recorded killed in drift nets. Some stranded animals have been found with plastic bags in their stomachs, which may be a cause for concern. Whether these activities are causing long-term damage to the survival of the species is not known.
Pygmy sperm whales do not do well in captivity. The longest recorded survival in captivity is 21 months, and most captive individuals die within one month, mostly due to dehydration or dietary problems.
Pygmy sperm whales have been reported as being forced to change their diets and foraging behaviors due to anthropogenic factors such as deep-sea trawling and increased fishing for cephalopods off the coast of many Southeast Asian countries.
Conservation
In 1985, the International Whaling Commission ended sperm whaling.
The pygmy sperm whale is covered by the Agreement on the Conservation of Small Cetaceans of the Baltic, North East Atlantic, Irish and North Seas (ASCOBANS) and the Agreement on the Conservation of Cetaceans in the Black Sea, Mediterranean Sea and Contiguous Atlantic Area (ACCOBAMS). The species is further included in the Memorandum of Understanding Concerning the Conservation of the Manatee and Small Cetaceans of Western Africa and Macaronesia (Western African Aquatic Mammals MoU) and the Memorandum of Understanding for the Conservation of Cetaceans and Their Habitats in the Pacific Islands Region (Pacific Cetaceans MoU).
Because little is known about this species, and because of the whaling and conservation laws in place for marine mammals, it is listed as "lower risk/least concern" on the IUCN Red List. However, it faces some modern-day issues; it is one of the most commonly stranded species in Florida waters. Due to its slow-moving and quiet nature, the species is at a higher risk of boat strikes. Its small size also allows it to be taken as bycatch of commercial fishing, caught in seine nets. Anthropogenic noise caused by military activity and shipping is another issue affecting this species, as it relies on echolocation. Pygmy sperm whales have also repeatedly been found stranded with plastic in their stomachs.
Specimens
MNZ MM002651, collected Hawke's Bay, New Zealand, no date data
Mung bean
https://en.wikipedia.org/wiki/Mung%20bean
The mung bean or green gram (Vigna radiata) is a plant species in the legume family. The mung bean is mainly cultivated in East, Southeast, and South Asia. It is used as an ingredient in both savoury and sweet dishes.
Names
The English names "mung" or "mungo" originated from the Hindi word mūng (मूंग), which is derived from the Sanskrit word mudga (मुद्ग). It is also known in Philippine English as "mongo bean". Other less common English names include "golden gram" and "Jerusalem pea".
In other languages, mung beans are also known as
Persian, Kurdish: maash (ماش)
Urdu: mūng (مونگ)
Hindi: mūng (मूंग)
Punjabi: mūng (ਮੁੰਗ)
Gujarati: mag (મગ)
Marathi: hirve mug (हिरवे मूग)
Konkani: mugā (मुगा)
Bengali: mū̃g (মুঁগ)
Odia: mûgå (ମୁଗ)
Assamese: magu (মগু)
Kannada: hesaru bele (ಹೆಸರು ಬೇಳೆ)
Tamil: payatham paruppu (பயத்தம் பருப்பு)
Malayalam: cherupayar parippu (ചെറുപയർ പരിപ്പ്)
Sinhala: mung-ata (මුං ඇට), meaning "mung seeds"
Swahili: choroko
Tulu: padangi salayi (ಪದಂಗಿ ಸಲೈ)
Telugu: pesalu (పెసలు)
Vietnamese: đậu xanh (literally "green bean")
Indonesian/Malay: kacang hijau (literally "green bean")
Philippine languages: munggo or monggo
Mandarin Chinese: lǜ dòu (绿豆, literally "green bean")
Description
The green gram is an annual vine with yellow flowers and fuzzy brown pods.
Morphology
Mung bean (Vigna radiata) is a plant species in the family Fabaceae, also known as green gram. It is sometimes confused with black gram (Vigna mungo) because of their similar morphology, though they are two different species. There are three subgroups of Vigna radiata: one cultivated (Vigna radiata subsp. radiata) and two wild (Vigna radiata subsp. sublobata and Vigna radiata subsp. glabra). It has a height of about .
Mung bean has a well-developed root system. The lateral roots are many and slender, with root nodules grown. Stems are much branched, sometimes twining at the tips. Young stems are purple or green, and mature stems are grayish-yellow or brown. They can be divided into erect cespitose, semi-trailing and trailing types. Wild types tend to be prostrate while cultivated types are more erect.
Leaves are ovoid or broad-ovoid, cotyledons die after emergence, and ternate leaves are produced on two single leaves. The leaves are 6–12 cm long and 5–10 cm wide. Racemes with yellow flowers are borne in the axils and tips of the leaves, with 10–25 flowers per pedicel, self-pollinated. The fruits are elongated cylindrical or flat cylindrical pods, usually 30–50 per plant. The pods are 5–10 cm long and 0.4–0.6 cm wide and contain 12–14 septum-separated seeds, which can be either cylindrical or spherical in shape, and green, yellow, brown, or blue in color. Seed colors and presence or absence of a rough layer are used to distinguish different types of mung bean.
Growth stages
Germination is typically within 4–5 days, but the actual rate varies according to the amount of moisture introduced during the germination stage. It is epigeal, with the stem and cotyledons emerging from the seedbed.
After germination, the seed splits, and a soft, whitish root grows. Mung bean sprouts are harvested during this stage. If not harvested, it develops a root system, then a green stem which contains two leaves and shoots up from the soil. After that, seed pods begin to form on its branches, with 10–15 seeds contained in each pod.
The maturation can take up to 60 days. Once matured, it can reach up to 30 inches (76 cm) tall, with multiple branches with seed pods. Most of the seed pods become darker, while some remain green.
Nitrogen fixation and cover crop
As a legume plant, mung bean is in symbiotic association with Rhizobia which enables it to fix atmospheric nitrogen (58–109 kg per ha mung bean). It can provide large amounts of biomass (7.16 t biomass/ha) and nitrogen to the soil (ranging from 30 to 251 kg/ha). The nitrogen fixation ability not only enables it to meet its own nitrogen requirement, but also benefits the succeeding crops. It can be used as a cover crop before or after cereal crops in rotation, which makes a good green manure.
Taxonomy
Mung beans are one of many species moved from the genus Phaseolus to Vigna in the 1970s. The previous names were Phaseolus aureus or P. radiatus.
Cultivation
Varieties
Current mung bean varieties are mainly selected for resistance to pests and diseases, particularly the bean weevil and mung bean yellow mosaic virus (MYMV). The main varieties include Samrat, IPM2-3, SML 668 and Meha in India; Crystal, Jade-AU, Celera-AU, Satin II and Regur in Australia; and Zhonglv No. 1, Zhonglv No. 2, Jilv No. 2, Jilv No. 7, Weilv No. 4, Jihong 9218, Jihong 8937, Bao 876-16 and Bao 8824-17 in China. Also, with the help of the World Vegetable Center, the traits of mung bean have been considerably improved.
'Summer Moong' is a short-duration mung bean pulse crop grown in northern India. Due to its short duration, it can fit well between many cropping systems. It is mainly cultivated in East and Southeast Asia and the Indian subcontinent. It is considered to be the hardiest of all pulse crops and requires a hot climate for germination and growth.
Climate and soil requirements
Mung bean is a warm-season and frost-intolerant plant. Mung bean is suitable for being planted in temperate, sub-tropical and tropical regions. The most suitable temperature for mung bean's germination and growth is . Mung bean has high adaptability to various soil types, while the best pH of the soil is between 6.2 and 7.2. Mung bean is a short-day plant and long days will delay its flowering and podding.
Harvest
The yield potential of mung bean is around 2.5 to 3.0 t/ha; however, because of environmental stresses and improper management, the average productivity is usually only 0.5 t/ha. Because of the indeterminate flowering habit of mung bean, under suitable environmental conditions a single plant can bear flowers and pods at the same time, which makes harvesting difficult. The ideal harvesting stage is when 90% of the pods in a crop have turned black. Mung beans can be harvested mechanically; it is important to set up the harvester's header correctly to avoid over-threshing.
Transportation and storage condition
The ideal grain moisture content for transportation is 13%. Before storage, the grain must be cleaned and graded. During storage, the mung bean's moisture content should be kept at 12%.
Pests, diseases and abiotic stress
Most of the mung bean cultivars have a yield potential of 1.8–2.5 tons/ha. However, the actual average productivity of mung bean hovers around 0.5–0.7 t/ha. Several factors constrain its yield, including biotic stresses (pests and diseases) and abiotic stresses. Stresses not only decrease productivity but also affect the physical quality of seeds, making them unusable or unfit for human consumption. Collectively, these stresses can lead to yield losses of 10–100%.
Pests
Insect pests attack mung bean at all crop stages from sowing to storage stage and take a heavy toll on crop yield. Some insect pests directly damage the crop, while others act as vectors of diseases to transmit the virus.
Stem fly (bean fly) is one of the major pests of mung bean. This pest infests the crop within a week after germination and under epidemic conditions, it can cause total crop loss.
Whitefly, B. tabaci, is a serious pest in mung bean and damages the crop either directly by feeding on phloem sap and excreting honeydew on the plant that forms black sooty mould or indirectly by transmitting mung bean yellow mosaic disease (MYMD). Whitefly causes yield losses between 17% and 71% in mung bean.
Thrips infest mung bean both in the seedling and flowering stages. During the seedling stage, thrips infest the seedling's growing point when it emerges from the ground, and under severe infestation, the seedlings fail to grow. Flowering thrips cause heavy damage and attack during flowering and pod formation, which feed on the pedicles and stigma of flowers. Under severe infestation, flowers drop and no pod formation takes place.
Spotted pod borer, Maruca vitrata, is a major insect pest in mung bean in the tropics and subtropics. The pest causes a yield loss of 2–84% in mung bean amounting to US $30 million. The larvae damage all the stages of the crop including flowers, stems, peduncles, and pods; however, heavy damage occurs at the flowering stage where the larvae form webs combining flowers and leaves.
Cowpea aphid sucks plant sap that causes loss of plant vigor and may lead to yellowing, stunting or distortion of plant parts. Further, aphids secrete honeydew (unused sap) which leads to the development of sooty mould on plant parts. Cowpea aphid also can act as a vector of the mung bean common mosaic virus.
Bruchid is the most severe stored pest of legume seeds worldwide, with damage up to 100% losses within 3–6 months, if not controlled. Bruchid infestation in mungbean results in weight loss, low germination, and nutritional changes in seeds, thereby reducing the nutritional and market value, rendering it unfit for human consumption, and agricultural and commercial uses.
Diseases
Mungbean yellow mosaic disease (MYMD) is a significant viral disease of mung bean, which causes severe yield losses annually. MYMD is caused by three distinct begomoviruses, transmitted by whitefly. The economic losses due to MYMD account for up to 85% yield reduction in India.
The major fungal diseases are Cercospora leaf spot (CLS), dry root rot, powdery mildew and anthracnose. Dry root rot (Macrophomina phaseolina) is an emerging disease of mungbean, causing 10–44% yield losses in mung bean production in India and Pakistan. The pathogen affects the fibrovascular system of the roots and basal internodes of its host, impeding the transport of water and nutrients to the upper parts of the plant.
Halo blight, bacterial leaf spot, and tan spot are significant bacterial diseases.
Abiotic stress
Abiotic stresses negatively influence plant growth and productivity and are the primary causes of extensive agricultural losses worldwide. Reduction in crop yield due to environmental variations has increased steadily over the decades.
Salinity affects crop growth and yield by way of osmotic stress, ion toxicity, and reduced nodulation which ultimately lead to reduced nitrogen-fixing ability. Excessive salt leads to leaf injury and then reduced photosynthesis.
High-temperature stress negatively affects reproductive development in mung bean, impairing all reproductive traits such as flower initiation, pollen viability, fertilization, pod set, and seed quality. High temperatures over 42 °C during summer cause hardening of seeds due to incomplete sink development.
Mung bean requires a light moisture regime in the soil during its growing period, while at the time of harvest, completely dry conditions are required. Since it is mostly grown under rainfed conditions, it is more susceptible to water deficiencies than many other food legumes. Drought affects its growth and development by impairing vegetative growth, flower initiation, pollen behavior, and pod set. At the same time, however, excess moisture or waterlogging, even for a short period, especially at the early vegetative stage, may be detrimental to the crop.
Mung bean may also be affected by excess soil and atmospheric moisture during the rainy season which may lead to pre-harvest sprouting in mature pods. It deteriorates the quality of the seed/grain produced.
Integrated disease management
Web-based climate analysis tools can help farmers interrogate climate records for rainfall, temperature, radiation, and derived variables, and so avoid some of the abiotic stresses. Deployment of varieties with genetic resistance is the most effective and durable method of integrated disease management, while attention is also paid to yield, height, grain quality, market opportunities, and seed availability. For pre-harvest sprouting (PHS), the development of mung bean cultivars with a short (10–15 day) period of fresh seed dormancy (FSD) is important to curtail losses incurred by PHS.
Market
Mung bean plants have a long history of being consumed by humans. The main consumed parts are the seeds and sprouts. The mature seeds provide an invaluable source of digestible protein for humans in places where meat is lacking or where people are mostly vegetarian. Mung bean has a large market in Asia (India, Southeast Asia and East Asia) and is also consumed in Southern Europe and in the Southern US. Mung bean protein is considered safe as a novel food (NF) pursuant to Regulation (EU) 2015/2283. The consumption of mung bean varies depending on the geographic region. For instance, in India, mung bean is used in sweets, snacks and savoury items. In other parts of Asia, it is used in cakes, sprouts, noodles and soups. In Europe and America, it is mainly used as fresh bean sprouts. The consumption of mung beans as such in the US is in the order of 22–29 g/capita per year, while the consumption in some areas of Asia can be as high as 2 kg/capita per year.
Mung bean is considered an alternative crop in many regions, and growers generally prefer to sign a contract for the crop before planting. In the US, the average price of mung bean is around $0.20 per pound, double the price of soybeans. The difference in production costs between mung bean and soybean is due to post-harvest cleaning and/or transportation. Overall, mung bean is considered to have market potential because of its drought tolerance, and, as a food crop rather than a feed crop, it can help buffer the economic risk from variability in commodity crop prices for farmers.
Uses
Nutritional value
The mung bean is recognized for its high nutritive value. Mung beans contain about 55–65% carbohydrate (equal to 630 g/kg dry weight) and are rich in protein, vitamins, and minerals. Protein makes up about 20–50% of total dry weight, of which globulin (60%) and albumin (25%) are the primary storage proteins. The mung bean is considered a substantive source of dietary protein, and the proteolytic cleavage of these proteins is even higher during sprouting. Mung bean carbohydrates are easily digestible, causing less flatulence in humans than other legumes. Both seeds and sprouts of the mung bean provide fewer calories than cereals, which makes the bean more attractive to obese and diabetic individuals.
Culinary
Whole cooked mung beans are generally prepared from dried beans by boiling until they are soft. Mung beans are light yellow in colour when their skins are removed. Mung bean paste can be made by hulling, cooking, and pulverizing the beans to a dry paste.
South Asia
Although whole mung beans are also occasionally used in Indian cuisine, beans without skins are more commonly used. In Karnataka, Maharashtra, Odisha, Gujarat, Kerala, and Tamil Nadu, whole mung beans are commonly boiled to make a dry preparation often served with congee. Hulled mung beans can also be used, in a similar fashion to whole beans, for making sweet soups.
In Madhya Pradesh and Rajasthan, mung beans are partially mashed, fermented, and made into fritters called mangode, which serves as a common tea time snack similar to Pakora.
In Goa, sprouted mung beans are cooked in a coconut milk based, mild curry called moonga gaathi.
Mung beans in some regional cuisines of India are stripped of their outer coats to make mung dal. In Odisha, West Bengal and Bangladesh the stripped and split bean is used to make a soup-like dal known as ().
In Southern India, state of Karnataka, Tamil Nadu, Telangana, and Andhra Pradesh, as well as in Maharashtra, steamed whole beans are seasoned with spices and fresh grated coconut. In South India, especially Andhra Pradesh, batter made from ground whole moong beans (including skin) is used to make a popular variety of dosa called () or pesara-dosa.
In Pakistan, cooked mung dal is often paired with boiled white basmati rice in a dish called "dal chawal". If butter is added to this dal, it is called "dal makhani" and is eaten with chapati.
In Sri Lanka, boiled Mung beans are usually eaten with grated coconut and lunu-miris, a spicy chili and onion sambol, most commonly as a breakfast food. Mung beans are also added to kiribath, which is then termed mung-kiribath. During the traditional New Year Celebration (celebrated in April) mung beans are used to make a traditional fried sweet, mung-kavum.
East Asia
In southern Chinese cuisine, whole mung beans are used to make a , or dessert, called , which is served either warm or chilled. They are also often cooked with rice to make congee. Unlike in South Asia, whole mung beans seldom appear in savory dishes.
In Hong Kong, hulled mung beans and mung bean paste are made into ice cream or frozen ice pops. Mung bean paste is used as a common filling for Chinese mooncakes in East China and Taiwan. During the Dragon Boat Festival, the boiled and shelled beans are used as filling in zongzi prepared for consumption. The beans may also be cooked until soft, blended into a liquid, sweetened, and served as a beverage, popular in many parts of China. In South China and Vietnam, mung bean paste may be mixed with sugar, fat, and fruits or spices to make pastries, such as bánh đậu xanh.
In Korea, skinned mung beans are soaked and ground with some water to make a thick batter. This is used as a basis for the Korean pancakes called bindae-tteok. They are also commonly used for Hobak-tteok.
Southeast Asia
In the Philippines, ginisáng monggó/mónggo (sautéed mung bean stew), also known as monggó/mónggo guisado or balatong, is a savoury stew of whole mung beans with prawns or fish. It is traditionally served on Fridays of Lent, when the majority of Catholic Filipinos traditionally abstain from meat. Variants of ginisáng monggó/mónggo may also be made with chicken or pork. Mung beans are also used in the Filipino dessert ginataang munggo (also known as balatong), a rice gruel with coconut milk and sugar flavored with pandan leaves or vanilla.
Mung bean paste is also a common filling of pastries known as ondé-ondé and bakpia in Indonesia and hopia in the Philippines, and further afield in Guyana (where it is known as "black eye cake"). It is also used as a filling for pan de monggo, a Filipino bread. In Indonesia, mung beans are also made into a popular dessert snack called es kacang hijau, which has the consistency of a porridge. The beans are cooked with sugar, coconut milk, and a little ginger.
Middle East
A staple diet in some parts of the Middle East is mung beans and rice. Both are cooked together in a pilaf-like rice dish called , which means mung beans and rice.
Bean sprouts
Mung beans are germinated by leaving them in water for four hours of daytime light and spending the rest of the day in the dark. Mung bean sprouts can be grown under artificial light for four hours over the period of a week. They are usually simply called "bean sprouts". However, when bean sprouts are called for in recipes, it generally refers to mung bean or soybean sprouts.
Mung bean sprouts are stir-fried as a Chinese vegetable accompaniment to a meal, usually with garlic, ginger, spring onions, or pieces of salted dried fish to add flavour. Uncooked bean sprouts are used in filling for Vietnamese spring rolls, as well as a garnish for phở. They are a major ingredient in a variety of Malaysian and Peranakan cuisine, including char kway teow, hokkien mee, mee rebus, and pasembor.
In Korea, slightly cooked mung bean sprouts, called sukjunamul (), are often served as a side dish. They are blanched (placed into boiling water for less than a minute), immediately cooled in cold water, and mixed with sesame oil, garlic, salt, and often other ingredients.
In the Philippines, mung bean sprouts are called togue and are most commonly used in lumpia rolls called lumpiang togue.
In India, mung bean sprouts are cooked with green chili, garlic, and other spices.
In Indonesia, mung bean sprouts are often used as a filling, as in tahu isi (stuffed tofu), and as a complementary ingredient in many dishes such as rawon and soto.
In Japan, the sprouts are called moyashi.
Starch
Mung bean starch, which is extracted from ground mung beans, is used to make transparent cellophane noodles (also known as bean thread noodles, bean threads, glass noodles, fensi, or tung hoon). Cellophane noodles become soft and slippery when they are soaked in hot water. A variation of cellophane noodles, called mung bean sheets or green bean sheets, is also available.
In Korea, a jelly called nokdumuk (also called cheongpomuk) is made from mung bean starch; a similar jelly, colored yellow with the addition of gardenia coloring, is called hwangpomuk.
In northern China, mung bean jelly is called liangfen, and is a very popular food during summer. The Hokkiens add sugar to mung bean jelly to make a dessert called Lio̍k-tāu hún-kóe.
Plant-based protein
Mung beans are increasingly used in plant-based meat and egg alternatives such as Beyond Meat and Eat Just's Just Egg.
History of domestication and cultivation
The mung bean was domesticated in India, where its progenitor (Vigna radiata subspecies sublobata) occurs wild.
The 2nd-millennium BCE scripture Yajurveda, in its fourth chapter (the Rudradhyaya, a set of hymns still prevalent and popular in Shiva worship), refers to mudga (मुद्ग) as one of the important grains and asks Rudra to bless its harvest (मु॒द्गाश्च॑ मे॒ खल्वा॑श्च मे). The mung bean is listed as one of the nine auspicious grains (navdhānya) in Vedic astrology and associated with the planet Budha (Mercury).
Carbonized mung beans have been discovered in many archeological sites in India. Areas with early finds include the eastern zone of the Harappan civilisation in modern-day Pakistan and western and northwestern India, where finds date back about 4,500 years, and South India in the modern state of Karnataka where finds date back more than 4,000 years. Some scholars, therefore, infer two separate domestications in the northwest and south of India. On the other hand, a recent study suggested a single genetic origin likely contributing to the loss of pod shattering, the key domestication trait in legumes. In South India, there is evidence for the evolution of larger-seeded mung beans 3,500 to 3,000 years ago. By about 3500 years ago mung beans were widely cultivated throughout India.
Cultivated mung beans later spread from India to China and Southeast Asia. Archaeobotanical research at the site of Khao Sam Kaeo in southern Thailand indicates that mung beans had arrived in Thailand by at least 2,200 years ago.
A genetic study demonstrated that, following its domestication in South Asia, mung bean spread sequentially to Southeast Asia and East Asia and eventually to Central Asia, despite the geographic proximity of South and Central Asia. The study suggests that the short and dry growing seasons in the northern regions of Asia were not suitable for southern cultivars, which had been bred for extended life cycles to maximize yield. This highlights the critical role of ecological factors, such as climate, in shaping crop evolution.
Parietal lobe
https://en.wikipedia.org/wiki/Parietal%20lobe
The parietal lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The parietal lobe is positioned above the temporal lobe and behind the frontal lobe and central sulcus.
The parietal lobe integrates sensory information among various modalities, including spatial sense and navigation (proprioception), the main sensory receptive area for the sense of touch in the somatosensory cortex which is just posterior to the central sulcus in the postcentral gyrus, and the dorsal stream of the visual system. The major sensory inputs from the skin (touch, temperature, and pain receptors), relay through the thalamus to the parietal lobe.
Several areas of the parietal lobe are important in language processing. The somatosensory cortex can be illustrated as a distorted figure – the cortical homunculus (Latin: "little man") in which the body parts are rendered according to how much of the somatosensory cortex is devoted to them. The superior parietal lobule and inferior parietal lobule are the primary areas of body or spatial awareness. A lesion commonly in the right superior or inferior parietal lobule leads to hemispatial neglect.
The name comes from the parietal bone, which is named from the Latin paries-, meaning "wall".
Structure
The parietal lobe is defined by three anatomical boundaries: the central sulcus separates the parietal lobe from the frontal lobe; the parieto-occipital sulcus separates the parietal and occipital lobes; and the lateral sulcus (sylvian fissure) is the most lateral boundary, separating it from the temporal lobe. The longitudinal fissure divides the two hemispheres. Within each hemisphere, the somatosensory cortex represents the skin area on the contralateral surface of the body.
Immediately posterior to the central sulcus, and the most anterior part of the parietal lobe, is the postcentral gyrus (Brodmann area 3), the primary somatosensory cortical area. Separating this from the posterior parietal cortex is the postcentral sulcus.
The posterior parietal cortex can be subdivided into the superior parietal lobule (Brodmann areas 5 + 7) and the inferior parietal lobule (39 + 40), separated by the intraparietal sulcus (IPS). The intraparietal sulcus and adjacent gyri are essential in guidance of limb and eye movement, and—based on cytoarchitectural and functional differences—is further divided into medial (MIP), lateral (LIP), ventral (VIP), and anterior (AIP) areas.
Function
Functions of the parietal lobe include:
Two point discrimination – through touch alone without other sensory input (e.g. visual)
Graphesthesia – recognizing writing on skin by touch alone
Touch localization (bilateral simultaneous stimulation)
The parietal lobe plays important roles in integrating sensory information from various parts of the body, knowledge of numbers and their relations, and in the manipulation of objects. Its function also includes processing information relating to the sense of touch. Portions of the parietal lobe are involved with visuospatial processing. Although multisensory in nature, the posterior parietal cortex is often referred to by vision scientists as the dorsal stream of vision (as opposed to the ventral stream in the temporal lobe). This dorsal stream has been called both the "where" stream (as in spatial vision) and the "how" stream (as in vision for action). The posterior parietal cortex (PPC) receives somatosensory and visual input, which then, through motor signals, controls movement of the arm, hand, and eyes.
Various studies in the 1990s found that different regions of the posterior parietal cortex in macaques represent different parts of space.
The lateral intraparietal (LIP) area contains a map of neurons (retinotopically-coded when the eyes are fixed) representing the saliency of spatial locations, and attention to these spatial locations. It can be used by the oculomotor system for targeting eye movements, when appropriate.
The ventral intraparietal (VIP) area receives input from a number of senses (visual, somatosensory, auditory, and vestibular). Neurons with tactile receptive fields represent space in a head-centered reference frame. The cells with visual receptive fields also fire with head-centered reference frames, but possibly also with eye-centered coordinates.
The medial intraparietal (MIP) area neurons encode the location of a reach target in eye-centered coordinates.
The anterior intraparietal (AIP) area contains neurons responsive to shape, size, and orientation of objects to be grasped as well as for manipulation of the hands themselves, both to viewed and remembered stimuli. The AIP has neurons that are responsible for grasping and manipulating objects through motor and visual inputs. The AIP and ventral premotor together are responsible for visuomotor transformations for actions of the hand.
More recent fMRI studies have shown that humans have similar functional regions in and around the intraparietal sulcus and parietal-occipital junction. The human "parietal eye fields" and "parietal reach region", equivalent to LIP and MIP in the monkey, also appear to be organized in gaze-centered coordinates so that their goal-related activity is "remapped" when the eyes move.
Emerging evidence has linked processing in the inferior parietal lobe to declarative memory. Bilateral damage to this brain region does not cause amnesia; however, the strength of memory is diminished, details of complex events become harder to retrieve, and subjective confidence in memory is very low. This has been interpreted as reflecting either deficits in internal attention, deficits in subjective memory states, or problems with the computation that allows evidence to accumulate, thus allowing decisions to be made about internal representations.
Clinical significance
Features of parietal lobe lesions are as follows:
Unilateral parietal lobe
Contralateral hemisensory loss
Astereognosis – inability to determine 3-D shape by touch.
Agraphaesthesia – inability to read numbers or letters drawn on hand, with eyes shut.
Contralateral homonymous inferior quadrantanopia
Asymmetry of optokinetic nystagmus (OKN)
Sensory seizures
Dominant hemisphere
Conduction aphasia
Dyslexia – a general term for disorders that can involve difficulty in learning to read or interpret words, letters, and other symbols
Apraxia – inability to perform complex movements in the presence of normal motor, sensory and cerebellar function
Gerstmann syndrome – characterized by acalculia, agraphia, finger agnosia, and left-right disorientation
Non-dominant hemisphere
Contralateral hemispatial neglect
Constructional apraxia
Dress apraxia
Anosognosia – lack of awareness of the existence of one's disability
Bilateral hemispheres
Bálint's syndrome
Damage to this lobe in the right hemisphere results in the loss of imagery, visualization of spatial relationships and neglect of left-side space and left side of the body. Even drawings may be neglected on the left side. Damage to this lobe in the left hemisphere will result in problems in mathematics, long reading, writing, and understanding symbols. The parietal association cortex enables individuals to read, write, and solve mathematical problems. The sensory inputs from the right side of the body go to the left side of the brain and vice versa.
The syndrome of hemispatial neglect is usually associated with large deficits of attention of the non-dominant hemisphere. Optic ataxia is associated with difficulties reaching toward objects in the visual field opposite to the side of the parietal damage. Some aspects of optic ataxia have been explained in terms of the functional organization described above.
Apraxia is a disorder of motor control which can be attributed neither to "elemental" motor deficits nor to general cognitive impairment. The concept of apraxia was shaped by Hugo Liepmann. Apraxia is predominantly a symptom of left brain damage, but some symptoms of apraxia can also occur after right brain damage.
Amorphosynthesis is a loss of perception on one side of the body caused by a lesion in the parietal lobe. Usually, left-sided lesions cause agnosia, a full-body loss of perception, while right-sided lesions cause lack of recognition of the person's left side and extrapersonal space. The term amorphosynthesis was coined by D. Denny-Brown to describe patients he studied in the 1950s.
Parietal lobe damage can also result in sensory impairment, in which one of the affected person's senses (sight, hearing, smell, touch, taste and spatial awareness) is no longer normal.
Listeria monocytogenes
https://en.wikipedia.org/wiki/Listeria%20monocytogenes
Listeria monocytogenes is the species of pathogenic bacteria that causes the infection listeriosis. It is a facultative anaerobic bacterium, capable of surviving in the presence or absence of oxygen. It can grow and reproduce inside the host's cells and is one of the most virulent foodborne pathogens. Twenty to thirty percent of foodborne listeriosis infections in high-risk individuals may be fatal. In the European Union, listeriosis continues an upward trend that began in 2008, causing 2,161 confirmed cases and 210 reported deaths in 2014, 16% more than in 2013. In the EU, listeriosis mortality rates also are higher than those of other foodborne pathogens. Responsible for an estimated 1,600 illnesses and 260 deaths in the United States annually, listeriosis ranks third in total number of deaths among foodborne bacterial pathogens, with fatality rates exceeding even Salmonella spp. and Clostridium botulinum.
Named for Joseph Lister, Listeria monocytogenes is a Gram-positive bacterium, in the phylum Bacillota. Its ability to grow at temperatures as low as 0 °C permits multiplication at typical refrigeration temperatures, greatly increasing its ability to evade control in human foodstuffs. Motile via flagella at 30 °C and below, but usually not at 37 °C, L. monocytogenes can instead move within eukaryotic cells by explosive polymerization of actin filaments (known as comet tails or actin rockets). Once Listeria monocytogenes enters the host cytoplasm, multiple changes in bacterial metabolism and gene expression help to complete its metamorphosis from soil dweller to intracellular pathogen.
Studies suggest that up to 10% of human gastrointestinal tracts may be colonized by L. monocytogenes. Nevertheless, clinical diseases due to L. monocytogenes are more frequently recognized by veterinarians, especially as meningoencephalitis in ruminants. See: listeriosis in animals.
Due to its frequent pathogenicity, causing meningitis in newborns (acquired transvaginally), pregnant women are often advised not to eat soft cheeses such as Brie, Camembert, feta, and queso blanco fresco, which may be contaminated with and permit growth of L. monocytogenes. It is the third most common cause of meningitis in newborns. Listeria monocytogenes can infect the brain, spinal-cord membranes and bloodstream of the host through the ingestion of contaminated food such as unpasteurized dairy or raw foods.
Classification
L. monocytogenes is a Gram-positive, non-spore-forming, motile, facultatively anaerobic, rod-shaped bacterium. It is catalase-positive and oxidase-negative, and expresses a beta hemolysin, which causes destruction of red blood cells. This bacterium exhibits characteristic tumbling motility when viewed with light microscopy. Although L. monocytogenes is actively motile by means of peritrichous flagella at room temperature (20−25 °C), the organism does not synthesize flagella at body temperatures (37 °C).
The genus Listeria belongs to the class Bacilli and the order Bacillales, which also includes Bacillus and Staphylococcus. Listeria currently contains 27 species: Listeria aquatica, Listeria booriae, Listeria cornellensis, Listeria cossartiae, Listeria costaricensis, Listeria farberi, Listeria fleischmannii, Listeria floridensis, Listeria goaensis, Listeria grandensis, Listeria grayi, Listeria immobilis, Listeria innocua, Listeria ivanovii, Listeria marthii, Listeria monocytogenes, Listeria murrayi, Listeria newyorkensis, Listeria portnoyi, Listeria riparia, Listeria rocourtiae, Listeria rustica, Listeria seeligeri, Listeria thailandensis, Listeria valentina, Listeria weihenstephanensis, Listeria welshimeri. L. denitrificans, previously thought to be part of the genus Listeria, was reclassified into the new genus Jonesia. Both L. ivanovii and L. monocytogenes are pathogenic in mice, but only L. monocytogenes is consistently associated with human illness. The 13 serotypes of L. monocytogenes can cause disease, but more than 90% of human isolates belong to only three serotypes: 1/2a, 1/2b, and 4b. L. monocytogenes serotype 4b strains are responsible for 33 to 35% of sporadic human cases worldwide and for all major foodborne outbreaks in Europe and North America since the 1980s.
History
L. monocytogenes was first described by E.G.D. Murray (Everitt George Dunne Murray) in 1924, based on six cases of sudden death in young rabbits; he published a description with his colleagues in 1926.
Murray referred to the organism as Bacterium monocytogenes before Harvey Pirie changed the genus name to Listeria in 1940. Although clinical descriptions of L. monocytogenes infection in both animals and humans were published in the 1920s, it was not recognized as a significant cause of neonatal infection, sepsis, and meningitis until 1952 in East Germany. Listeriosis in adults was later associated with patients living with compromised immune systems, such as individuals taking immunosuppressant drugs and corticosteroids for malignancies or organ transplants, and those with HIV infection.
L. monocytogenes was not identified as a cause of foodborne illness until 1981, however. An outbreak of listeriosis in Halifax, Nova Scotia, involving 41 cases and 18 deaths, mostly in pregnant women and neonates, was epidemiologically linked to the consumption of coleslaw containing cabbage contaminated with sheep manure carrying L. monocytogenes. Since then, a number of cases of foodborne listeriosis have been reported, and L. monocytogenes is now widely recognized as an important hazard in the food industry.
Pathogenesis
Invasive infection by L. monocytogenes causes the disease listeriosis. When the infection is not invasive, any illness as a consequence of infection is termed febrile gastroenteritis. The manifestations of listeriosis include sepsis, meningitis (or meningoencephalitis), encephalitis, corneal ulcer, pneumonia, myocarditis, and intrauterine or cervical infections in pregnant women, which may result in spontaneous abortion (second to third trimester) or stillbirth. Surviving neonates of fetomaternal listeriosis may suffer granulomatosis infantiseptica — pyogenic granulomas distributed over the whole body — and may suffer from physical retardation. Influenza-like symptoms, including persistent fever, usually precede the onset of the aforementioned disorders. Gastrointestinal symptoms, such as nausea, vomiting, and diarrhea, may precede more serious forms of listeriosis or may be the only symptoms expressed. Gastrointestinal symptoms were epidemiologically associated with use of antacids or cimetidine. The onset time to serious forms of listeriosis is unknown, but may range from a few days to 3 weeks. The onset time to gastrointestinal symptoms is unknown, but probably exceeds 12 hours. An early study suggested that L. monocytogenes is unique among Gram-positive bacteria in that it might possess lipopolysaccharide, which serves as an endotoxin. Later, it was found to not be a true endotoxin. Listeria cell walls consistently contain lipoteichoic acids, in which a glycolipid moiety, such as a galactosyl-glucosyl-diglyceride, is covalently linked to the terminal phosphomonoester of the teichoic acid. This lipid region anchors the polymer chain to the cytoplasmic membrane. These lipoteichoic acids resemble the lipopolysaccharides of Gram-negative bacteria in both structure and function, being the only amphipathic polymers at the cell surface.
L. monocytogenes has D-galactose residues on its surface that can attach to D-galactose receptors on the host cell walls. These host cells are generally M cells and Peyer's patches of the intestinal mucosa. Once attached to these cells, L. monocytogenes can translocate past the intestinal membrane and into the body. Alternatively, losses of structural integrity (such as small lacerations) in the gastrointestinal epithelium could allow the microorganism to penetrate from the gastrointestinal tract to the bloodstream.
The infectious dose of L. monocytogenes varies with the strain and with the susceptibility of the victim. From cases contracted through raw or supposedly pasteurized milk, one may safely assume that, in susceptible persons, fewer than 1,000 total organisms may cause disease. L. monocytogenes may invade the gastrointestinal epithelium. Once the bacterium enters the host's monocytes, macrophages, or polymorphonuclear leukocytes, it becomes bloodborne (sepsis) and can grow. Its intracellular presence in phagocytic cells also permits access to the brain and probably transplacental migration to the fetus in pregnant women; this is known as the "Trojan horse" mechanism.

The pathogenesis of L. monocytogenes centers on its ability to survive and multiply in phagocytic host cells. Listeria appears to have originally evolved to invade the intestinal epithelium as an intracellular infection, and developed a chemical mechanism to do so. This involves the bacterial internalin proteins (InlA/InlB), which attach to cadherin, a protein on the intestinal cell membrane, and allow the bacteria to invade the cells through a zipper mechanism. The same adhesion molecules are also found at two other unusually tough barriers in humans, the blood-brain barrier and the fetal–placental barrier, which may explain the apparent affinity of L. monocytogenes for causing meningitis and affecting babies in utero. Once inside the cell, L. monocytogenes rapidly acidifies the lumen of the vacuole formed around it during cell entry to activate listeriolysin O, a cholesterol-dependent cytolysin capable of disrupting the vacuolar membrane. This frees the pathogen and gives it access to the cytosol of the cell, where it continues its pathogenesis. Motility in the intracellular space is provided by actin assembly-inducing protein (ActA), which co-opts the host cell's actin polymerization machinery to build an actin tail that propels the bacterial cell through the cytoplasm. The same mechanism also allows the bacteria to travel from cell to cell.
Regulation of pathogenesis
L. monocytogenes can act as a saprophyte or a pathogen, depending on its environment. When this bacterium is present within a host organism, quorum sensing and other signals cause the up-regulation of several virulence genes. Depending on the location of the bacterium within the host organism, different activators up-regulate the virulence genes: SigB, an alternative sigma factor, up-regulates the Vir genes in the intestines, whereas PrfA up-regulates gene expression when the bacterium is present in blood. L. monocytogenes also senses entry into a host by examining available nutrient sources. For example, L-glutamine, an abundant nitrogen source in the host, induces the expression of virulence genes in L. monocytogenes. Little is known about how this bacterium switches between acting as a saprophyte and a pathogen; however, several noncoding RNAs are thought to be required to induce this change.
Pathogenicity of lineages
L. monocytogenes has three distinct lineages, with differing evolutionary histories and pathogenic potentials. Lineage I strains contain the majority of human clinical isolates and all human epidemic clones, but are underrepresented in animal clinical isolates. Lineage II strains are overrepresented in animal cases and underrepresented in human clinical cases, and are more prevalent in environmental and food samples. Lineage III isolates are very rare, but significantly more common in animal than human isolates.
Detection
The Anton test is used in the identification of L. monocytogenes; instillation of a culture into the conjunctival sac of a rabbit or guinea pig causes severe keratoconjunctivitis within 24 hours.
Listeria species grow on media such as Mueller-Hinton agar. Identification is enhanced if the primary cultures are done on agar containing sheep blood, because the characteristic small zone of hemolysis can be observed around and under colonies. Isolation can be enhanced if the tissue is kept at 4 °C for some days before inoculation into bacteriologic media. The organism is a facultative anaerobe and is catalase-positive and motile. Listeria produces acid, but not gas, when fermenting a variety of carbohydrates.
The motility at room temperature and hemolysin production are primary findings that help differentiate Listeria from Corynebacterium.
The methods for analysis of food are complex and time-consuming. The present U.S. FDA method, revised in September 1990, requires 24 and 48 hours of enrichment, followed by a variety of other tests. Total time to identification takes five to seven days, but the announcement of specific non-radiolabelled DNA probes should soon allow a simpler and faster confirmation of suspect isolates.
Recombinant DNA technology may even permit two- to three-day positive analysis in the future. Currently, the FDA is collaborating in adapting its methodology to quantitate very low numbers of the organisms in foods.
Treatment
When listeric meningitis occurs, the overall mortality may reach 70%; from sepsis, 50%; and from perinatal/neonatal infections, greater than 80%. In infections during pregnancy, the mother usually survives. Reports of successful treatment with parenteral penicillin or ampicillin exist. Trimethoprim-sulfamethoxazole has been shown to be effective in patients allergic to penicillin.
A bacteriophage, Listeria phage P100, has been proposed as a food additive to control L. monocytogenes. Bacteriophage treatments have been developed by several companies; EBI Food Safety and Intralytix both have products suitable for treatment of the bacterium. The U.S. Food and Drug Administration (FDA) approved a cocktail of six bacteriophages from Intralytix, and a one-phage product from EBI Food Safety, designed to kill L. monocytogenes. Potential uses include spraying the preparations on fruits and ready-to-eat meats such as sliced ham and turkey.
Use as a transfection vector
Because L. monocytogenes is an intracellular bacterium, some studies have used this bacterium as a vector to deliver genes in vitro. Current transfection efficiency remains poor. One example of the successful use of L. monocytogenes in in vitro transfer technologies is in the delivery of gene therapies for cystic fibrosis cases.
Cancer treatment
Listeria monocytogenes is being investigated as a cancer immunotherapy for several types of cancer.
A live attenuated Listeria monocytogenes cancer vaccine, ADXS11-001, is under development as a possible treatment for cervical carcinoma.
Epidemiology
Researchers have found Listeria monocytogenes in at least 37 mammalian species, both domesticated and feral, as well as in at least 17 species of birds and possibly in some species of fish and shellfish. Laboratories can isolate Listeria monocytogenes from soil, silage, and other environmental sources. Listeria monocytogenes is quite hardy and resists the deleterious effects of freezing, drying, and heat remarkably well for a bacterium that does not form spores. Most Listeria monocytogenes strains are pathogenic to some degree.
Routes of infection
Listeria monocytogenes has been associated with such foods as raw milk, pasteurized fluid milk, cheeses (particularly soft-ripened varieties), hard-boiled eggs, ice cream, raw vegetables, fermented raw-meat sausages, raw and cooked poultry, raw meats (of all types), and raw and smoked fish. Most bacteria can survive near-freezing temperatures but cannot absorb nutrients, grow, or replicate; L. monocytogenes, however, can grow at temperatures as low as 0 °C, which permits exponential multiplication in refrigerated foods. At refrigeration temperatures, such as 4 °C, the amount of ferric iron can affect the growth of L. monocytogenes.
Infectious cycle
The primary site of infection is the intestinal epithelium, where the bacteria invade nonphagocytic cells via the "zipper" mechanism. Uptake is stimulated by the binding of listerial internalins (Inl) to E-cadherin, a host cell adhesion factor, or to Met (c-Met), the receptor for hepatocyte growth factor. This binding activates certain Rho-GTPases, which subsequently bind and stabilize Wiskott–Aldrich syndrome protein (WASp). WASp can then bind the Arp2/3 complex and serve as an actin nucleation point. Subsequent actin polymerization creates a "phagocytic cup", an actin-based structure normally formed around foreign materials by phagocytes prior to endocytosis. The net effect of internalin binding is to co-opt the junction-forming apparatus of the host cell into internalizing the bacterium. L. monocytogenes can also invade phagocytic cells (e.g., macrophages), but internalins are required only for invasion of nonphagocytic cells.
Following internalization, the bacterium must escape from the vacuole/phagosome before fusion with a lysosome can occur. Three main virulence factors allow the bacterium to escape: listeriolysin O (LLO, encoded by hly), phospholipase A (encoded by plcA), and phospholipase B (encoded by plcB). Secretion of LLO and PlcA disrupts the vacuolar membrane and allows the bacterium to escape into the cytoplasm, where it may proliferate.
Once in the cytoplasm, L. monocytogenes exploits host actin for the second time. ActA proteins associated with the old bacterial cell pole (being a bacillus, L. monocytogenes septates in the middle of the cell, thus has one new pole and one old pole) are capable of binding the Arp2/3 complex, thereby inducing actin nucleation at a specific area of the bacterial cell surface. Actin polymerization then propels the bacterium unidirectionally into the host cell membrane. The protrusion formed may then be internalized by a neighboring cell, forming a double-membrane vacuole from which the bacterium must escape using LLO and PlcB. This mode of direct cell-to-cell spread involves a cellular mechanism known as paracytophagy.
The ability of L. monocytogenes to establish infection depends on its resistance to the high concentrations of bile encountered throughout the gastrointestinal tract. This resistance is due, in part, to the nucleotide excision repair protein UvrA, which is necessary for the repair of DNA damage caused by bile salts.
| Biology and health sciences | Gram-positive bacteria | Plants |
465941 | https://en.wikipedia.org/wiki/Betula%20nigra | Betula nigra | Betula nigra, the black birch, river birch or water birch, is a species of birch native to the Eastern United States from New Hampshire west to southern Minnesota, and south to northern Florida and west to Texas. It is one of the few heat-tolerant birches in a family of mostly cold-weather trees which do not thrive in USDA Zone 6 and up. B. nigra commonly occurs in floodplains and swamps.
Description
Betula nigra is a deciduous tree growing to with a trunk in diameter. The base of the tree is often divided into multiple slender trunks.
Bark
Bark characteristics of the river birch differ between its youth stage, maturation, and old growth. The bark of a young river birch can vary from a salmon-pink to a brown-gray tint and can be described as having loose layers of curling, paper-thin scales. As the tree matures, the salmon-pink color gives way to a reddish-brown over a dark grey base color. The scales on a mature tree lack the loose curling and are closely pressed into thick, irregular plates. These scales are slightly separated from the trunk and can shift outward to the side. Once the river birch ages past maturity, the scales become thicker towards the base of the trunk and are divided by deep furrows.
Leaves and fruit
The twigs are glabrous or thinly hairy. There is an absence of terminal buds, and lateral buds often have a hook at the tip, which differs from other species in the family Betulaceae. The leaves are alternate, ovate, long and broad, with a serrated margin and five to twelve pairs of veins. The upper surface of the leaf is dark green, while the underside is a light yellow-green. The leaves turn yellow in autumn. The flowers are wind-pollinated catkins long, the male catkins pendulous, the female catkins erect. The fruit is unusual among birches in maturing in late spring; it is composed of numerous tiny winged seeds packed between the catkin bracts.
Taxonomy
Betula nigra is a tree classified in the family Betulaceae, commonly known as the birch or alder family. This family comprises six genera (Alnus, Betula, Carpinus, Corylus, Ostrya, and Ostryopsis) and includes alders, birches, hornbeams, and hazelnuts. Species within this family, including Betula nigra, are shrubs or trees that grow along streamsides or in poorly drained soils throughout the Northern Hemisphere. Betulaceae is included within the order Fagales, which branches from the rosid clade.
Habitat and range
The river birch is often found in low-elevation regions from as far north as Massachusetts to as far south as northern Florida. It can be found extending west to Kansas and east to the coast where suitable habitat conditions occur. As its name suggests, this birch is found along streamsides. It can also be a prominent species in forested wetland communities and in areas with moist soil, such as floodplains. States include: Alabama, Arkansas, Connecticut, Delaware, Florida, Georgia, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maryland, Massachusetts, Mississippi, Missouri, New Hampshire, New Jersey, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, South Carolina, Tennessee, Texas, Virginia, West Virginia, and Wisconsin.
River birch is best placed in USDA hardiness zones 4–9.
Conservation status in the United States
It is listed as threatened in New Hampshire.
Ecology
In states in which mining is prevalent, the river birch is often used for reclamation and erosion control, as it is well suited for soils that are too acidic for other species of hardwoods. In West Virginia, trees have been found to establish within mine refuse sites after seeds were blown in from neighboring areas. As the species occurs predominantly in floodplains and along stream banks, it is described as being moderately tolerant of flooding. Saplings were observed to survive up to 30 days of continuous flooding in some regions. While the species is tolerant of excessive water, it is intolerant of shade. Seeds will not germinate without a large amount of direct sunlight.
This species is utilized by many local bird species, such as waterfowl, ruffed grouse, and wild turkey. Many waterfowl use the cover for nesting sites, while the ruffed grouse and wild turkey use the seeds as a food source. Deer have been known to graze on saplings or reachable branches. It is a larval host for over fifteen moth species, including Acronicta betulae, Acrobasis betulivorella, Bucculatrix coronatella, Nemoria bistriaria, Nites betulella, Orgyia leucostigma, and Pseudotelphusa betulella.
Germination
Seeds are typically produced annually. Seasonal development begins in the fall as male catkins begin to form and mature. The emergence of female catkins corresponds with the return of leaves around early spring. Male and female fruit matures during the spring season or in the early summer months.
Once mature, the seeds are predominantly spread by wind or water from neighboring stream channels. Seeds spread by water are generally more successful as the moist banks of stream channels, where the seeds are deposited, are favorable for germination and sturdy establishment. Successful germination often occurs in large numbers along sandbars, where alluvial soil is present.
Cultivation and uses
While its native habitat is wet ground, it will grow on higher land, and its bark is quite distinctive, making it a favored ornamental tree for landscape use. A number of cultivars with much whiter bark than the normal wild type have been selected for garden planting, including 'Heritage' and 'Dura Heat'; these are notable as the only white-barked birches resistant to the bronze birch borer (Agrilus anxius) in warm areas of the southeastern United States of America. Native Americans used the boiled sap as a sweetener similar to maple syrup, and the inner bark as a survival food. The river birch is not typically used in the commercial lumber industry, due to knotting, but its strong, closely grained wood is sometimes used for local furniture, woodenware, and fuel.
Essential oils
The essential oils derived from leaves, inner bark, and buds of B. nigra are mostly composed of eugenol, linalool, palmitic acid, and heptacosane with many more compounds in smaller concentrations. The combined essential oils are phytotoxic to lettuce (Lactuca sativa) and perennial ryegrass (Lolium perenne) seedlings. They have also demonstrated insecticidal, nematicidal, and antibacterial properties.
| Biology and health sciences | Fagales | Plants |
466192 | https://en.wikipedia.org/wiki/Thermal%20equilibrium | Thermal equilibrium | Two physical systems are in thermal equilibrium if there is no net flow of thermal energy between them when they are connected by a path permeable to heat. Thermal equilibrium obeys the zeroth law of thermodynamics. A system is said to be in thermal equilibrium with itself if the temperature within the system is spatially uniform and temporally constant.
Systems in thermodynamic equilibrium are always in thermal equilibrium, but the converse is not always true. If the connection between the systems allows transfer of energy as 'change in internal energy' but does not allow transfer of matter or transfer of energy as work, the two systems may reach thermal equilibrium without reaching thermodynamic equilibrium.
Two varieties of thermal equilibrium
Relation of thermal equilibrium between two thermally connected bodies
The relation of thermal equilibrium is an instance of equilibrium between two bodies, which means that it refers to transfer through a partition permeable only to heat, not to matter or work; such a connection is called a diathermal connection. According to Lieb and Yngvason, the essential meaning of the relation of thermal equilibrium includes that it is reflexive and symmetric. It is not included in the essential meaning whether it is or is not transitive. After discussing the semantics of the definition, they postulate a substantial physical axiom, which they call the "zeroth law of thermodynamics", that thermal equilibrium is a transitive relation. They comment that the equivalence classes of systems so established are called isotherms.
Internal thermal equilibrium of an isolated body
Thermal equilibrium of a body in itself refers to the body when it is isolated. The background is that no heat enters or leaves it, and that it is allowed unlimited time to settle under its own intrinsic characteristics. When it is completely settled, so that macroscopic change is no longer detectable, it is in its own thermal equilibrium. It is not implied that it is necessarily in other kinds of internal equilibrium. For example, it is possible that a body might reach internal thermal equilibrium but not be in internal chemical equilibrium; glass is an example.
One may imagine an isolated system, initially not in its own state of internal thermal equilibrium. It could be subjected to a fictive thermodynamic operation of partition into two subsystems separated by nothing, no wall. One could then consider the possibility of transfers of energy as heat between the two subsystems. A long time after the fictive partition operation, the two subsystems will reach a practically stationary state, and so be in the relation of thermal equilibrium with each other. Such an adventure could be conducted in indefinitely many ways, with different fictive partitions. All of them will result in subsystems that could be shown to be in thermal equilibrium with each other, testing subsystems from different partitions. For this reason, an isolated system, initially not in its own state of internal thermal equilibrium, but left for a long time, practically always will reach a final state which may be regarded as one of internal thermal equilibrium. Such a final state is one of spatial uniformity or homogeneity of temperature. The existence of such states is a basic postulate of classical thermodynamics. This postulate is sometimes, but not often, called the minus first law of thermodynamics. A notable exception exists for isolated quantum systems which are many-body localized and which never reach internal thermal equilibrium.
Thermal contact
Heat can flow into or out of a closed system by way of thermal conduction or of thermal radiation to or from a thermal reservoir, and when this process is effecting net transfer of heat, the system is not in thermal equilibrium. While the transfer of energy as heat continues, the system's temperature can be changing.
Bodies prepared with separately uniform temperatures, then put into purely thermal communication with each other
If bodies are prepared with separately microscopically stationary states, and are then put into purely thermal connection with each other, by conductive or radiative pathways, they will be in thermal equilibrium with each other just when the connection is followed by no change in either body. But if initially they are not in a relation of thermal equilibrium, heat will flow from the hotter to the colder, by whatever pathway, conductive or radiative, is available, and this flow will continue until thermal equilibrium is reached and then they will have the same temperature.
One form of thermal equilibrium is radiative exchange equilibrium. Two bodies, each with its own uniform temperature, in solely radiative connection, no matter how far apart, or what partially obstructive, reflective, or refractive, obstacles lie in their path of radiative exchange, not moving relative to one another, will exchange thermal radiation, in net the hotter transferring energy to the cooler, and will exchange equal and opposite amounts just when they are at the same temperature. In this situation, Kirchhoff's law of equality of radiative emissivity and absorptivity and the Helmholtz reciprocity principle are in play.
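As an idealized illustration of radiative exchange equilibrium (a standard blackbody sketch, not taken from the source text): for two blackbody surfaces at uniform temperatures T_1 and T_2 that exchange radiation only with each other, the Stefan–Boltzmann law gives the net energy flux per unit area

q_{\text{net}} = \sigma \left( T_1^{4} - T_2^{4} \right),

which is positive (net transfer from body 1 to body 2) when T_1 > T_2 and vanishes exactly when T_1 = T_2, the condition under which the bodies exchange equal and opposite amounts of radiation.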
Change of internal state of an isolated system
If an initially isolated physical system, without internal walls that establish adiabatically isolated subsystems, is left long enough, it will usually reach a state of thermal equilibrium in itself, in which its temperature will be uniform throughout, but not necessarily a state of thermodynamic equilibrium, if there is some structural barrier that can prevent some possible processes in the system from reaching equilibrium; glass is an example. Classical thermodynamics in general considers idealized systems that have reached internal equilibrium, and idealized transfers of matter and energy between them.
An isolated physical system may be inhomogeneous, or may be composed of several subsystems separated from each other by walls. If an initially inhomogeneous physical system, without internal walls, is isolated by a thermodynamic operation, it will in general over time change its internal state. Or if it is composed of several subsystems separated from each other by walls, it may change its state after a thermodynamic operation that changes its walls. Such changes may include change of temperature or spatial distribution of temperature, by changing the state of constituent materials. A rod of iron, initially prepared to be hot at one end and cold at the other, when isolated, will change so that its temperature becomes uniform all along its length; during the process, the rod is not in thermal equilibrium until its temperature is uniform. In a system prepared as a block of ice floating in a bath of hot water, and then isolated, the ice can melt; during the melting, the system is not in thermal equilibrium; but eventually, its temperature will become uniform; the block of ice will not re-form. A system prepared as a mixture of petrol vapour and air can be ignited by a spark and produce carbon dioxide and water; if this happens in an isolated system, it will increase the temperature of the system, and during the increase, the system is not in thermal equilibrium; but eventually, the system will settle to a uniform temperature.
Such changes in isolated systems are irreversible in the sense that while such a change will occur spontaneously whenever the system is prepared in the same way, the reverse change will practically never occur spontaneously within the isolated system; this is a large part of the content of the second law of thermodynamics. Truly perfectly isolated systems do not occur in nature, and always are artificially prepared.
In a gravitational field
One may consider a system contained in a very tall adiabatically isolating vessel with rigid walls initially containing a thermally heterogeneous distribution of material, left for a long time under the influence of a steady gravitational field, along its tall dimension, due to an outside body such as the earth. It will settle to a state of uniform temperature throughout, though not of uniform pressure or density, and perhaps containing several phases. It is then in internal thermal equilibrium and even in thermodynamic equilibrium. This means that all local parts of the system are in mutual radiative exchange equilibrium. This means that the temperature of the system is spatially uniform. This is so in all cases, including those of non-uniform external force fields. For an externally imposed gravitational field, this may be proved in macroscopic thermodynamic terms, by the calculus of variations, using the method of Lagrange multipliers. Considerations of kinetic theory or statistical mechanics also support this statement.
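A minimal sketch of the variational argument alluded to above, under simplifying assumptions (a single-component fluid column with height-dependent internal energy density u(z) and particle density n(z); the notation is illustrative, not from the source): one maximizes the total entropy at fixed total energy, including gravitational potential energy, and at fixed particle number,

\delta \int \left[ s\big(u(z), n(z)\big) - \lambda \big( u(z) + n(z)\, m g z \big) - \mu' \, n(z) \right] dz = 0 .

Varying u(z) gives \partial s / \partial u = 1/T(z) = \lambda, a constant independent of height, so the equilibrium temperature is uniform; varying n(z) gives the condition \mu(z) + m g z = \text{constant}, which fixes the non-uniform density and pressure profiles.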
Distinctions between thermal and thermodynamic equilibria
There is an important distinction between thermal and thermodynamic equilibrium. According to Münster (1970), in states of thermodynamic equilibrium, the state variables of a system do not change at a measurable rate. Moreover, "The proviso 'at a measurable rate' implies that we can consider an equilibrium only with respect to specified processes and defined experimental conditions." Also, a state of thermodynamic equilibrium can be described by fewer macroscopic variables than any other state of a given body of matter. A single isolated body can start in a state which is not one of thermodynamic equilibrium, and can change until thermodynamic equilibrium is reached. Thermal equilibrium is a relation between two bodies or closed systems in which only energy may be transferred, through a partition permeable to heat, and in which the transfers have proceeded until the states of the bodies cease to change.
An explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by C.J. Adkins. He allows that two systems might be allowed to exchange heat but be constrained from exchanging work; they will naturally exchange heat till they have equal temperatures, and reach thermal equilibrium, but in general, will not be in thermodynamic equilibrium. They can reach thermodynamic equilibrium when they are allowed also to exchange work.
Another explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by B. C. Eu. He considers two systems in thermal contact, one a thermometer, the other a system in which several irreversible processes are occurring. He considers the case in which, over the time scale of interest, it happens that both the thermometer reading and the irreversible processes are steady. Then there is thermal equilibrium without thermodynamic equilibrium. Eu proposes consequently that the zeroth law of thermodynamics can be considered to apply even when thermodynamic equilibrium is not present; also he proposes that if changes are occurring so fast that a steady temperature cannot be defined, then "it is no longer possible to describe the process by means of a thermodynamic formalism. In other words, thermodynamics has no meaning for such a process."
Thermal equilibrium of planets
A planet is in thermal equilibrium when the incident energy reaching it (typically the solar irradiance from its parent star) is equal to the infrared energy radiated away to space.
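As an illustrative energy balance (a standard textbook idealization with assumed symbols, not a calculation from the source): for a planet of radius R at distance d from a star of luminosity L, with Bond albedo A and treated as a uniform-temperature blackbody emitter, setting absorbed power equal to emitted power gives

\frac{L (1-A)}{4 \pi d^{2}} \, \pi R^{2} = 4 \pi R^{2} \sigma T_{\mathrm{eq}}^{4}
\quad \Longrightarrow \quad
T_{\mathrm{eq}} = \left[ \frac{L (1-A)}{16 \pi \sigma d^{2}} \right]^{1/4} .

For Earth-like values this yields roughly 255 K, below the observed mean surface temperature because the balance ignores the greenhouse effect.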
| Physical sciences | Thermodynamics | Physics |
466322 | https://en.wikipedia.org/wiki/Temporal%20lobe | Temporal lobe | The temporal lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The temporal lobe is located beneath the lateral fissure on both cerebral hemispheres of the mammalian brain.
The temporal lobe is involved in processing sensory input into derived meanings for the appropriate retention of visual memory, language comprehension, and emotion association.
Temporal refers to the head's temples.
Structure
The temporal lobe consists of structures that are vital for declarative or long-term memory. Declarative (denotative) or explicit memory is conscious memory divided into semantic memory (facts) and episodic memory (events).
The medial temporal lobe structures are critical for long-term memory, and include the hippocampal formation, perirhinal cortex, parahippocampal, and entorhinal neocortical regions. The hippocampus is critical for memory formation, and the surrounding medial temporal cortex is currently theorized to be critical for memory storage. The prefrontal and visual cortices are also involved in explicit memory.
Research has shown that lesions in the hippocampus of monkeys result in limited impairment of function, whereas extensive lesions that include the hippocampus and the medial temporal cortex result in severe impairment.
A form of epilepsy that involves the medial temporal lobe is usually known as mesial temporal lobe epilepsy.
Function
Visual memories
The temporal lobe communicates with the hippocampus and plays a key role in the formation of explicit long-term memory modulated by the amygdala.
Processing sensory input
Auditory
Adjacent areas in the superior, posterior, and lateral parts of the temporal lobes are involved in high-level auditory processing. The temporal lobe is involved in primary auditory perception, such as hearing, and holds the primary auditory cortex. The primary auditory cortex receives sensory information from the ears, and secondary areas process the information into meaningful units such as speech and words. The superior temporal gyrus includes an area (within the lateral fissure) where auditory signals from the cochlea first reach the cerebral cortex and are processed by the primary auditory cortex in the left temporal lobe.
Visual
The areas associated with vision in the temporal lobe interpret the meaning of visual stimuli and establish object recognition. The ventral part of the temporal cortices appears to be involved in high-level visual processing of complex stimuli such as faces (fusiform gyrus) and scenes (parahippocampal gyrus). Anterior parts of this ventral stream for visual processing are involved in object perception and recognition.
Language recognition
In humans, temporal lobe regions are critical for accessing the semantic meaning of spoken words, printed words, and visual objects. Wernicke's area, which spans the region between the temporal and parietal lobes of the dominant cerebral hemisphere (the left, in the majority of cases), plays a key role (in tandem with Broca's area in the frontal lobe) in language comprehension, whether the language is spoken or signed. Functional MRI (fMRI) imaging shows these portions of the brain are activated by signed or spoken languages. These areas of the brain are active in children's language acquisition, whether accessed via hearing a spoken language, watching a signed language, or via hand-over-hand tactile versions of a signed language.
The functions of the left temporal lobe are not limited to low-level perception but extend to comprehension, naming, and verbal memory.
New memories
The medial temporal lobes (near the sagittal plane) are thought to be involved in encoding declarative long-term memory. The medial temporal lobes include the hippocampi, which are essential for memory storage; therefore, damage to this area can impair the formation of new memories, leading to permanent or temporary anterograde amnesia.
Clinical significance
Unilateral temporal lesion
Contralateral homonymous upper quadrantanopia (sector anopsia)
Complex hallucinations (smell, sound, vision, memory)
Dominant hemisphere
Receptive aphasia
Wernicke's aphasia
Anomic aphasia
Dyslexia
Impaired verbal memory
Word agnosia, word deafness
Non-dominant hemisphere
Impaired non-verbal memory
Impaired musical skills
Bitemporal lesions (additional features)
Deafness
Apathy (affective indifference)
Impaired learning and memory
Amnesia, Korsakoff syndrome, Klüver–Bucy syndrome
Damage
Individuals who suffer from medial temporal lobe damage have a difficult time recalling visual stimuli. This deficit is not due to a failure to perceive visual stimuli, but rather to an inability to interpret what is perceived. The most common symptom of inferior temporal lobe damage is visual agnosia, which involves impairment in the identification of familiar objects. Another, less common, type of inferior temporal lobe damage is prosopagnosia, an impairment in the recognition of faces and the distinction of unique individual facial features.
Damage specifically to the anterior portion of the left temporal lobe can cause savant syndrome.
Disorders
Pick's disease, also known as frontotemporal dementia, is caused by atrophy of the frontal and temporal lobes. Emotional symptoms include mood changes, of which the patient may be unaware, including poor attention span and aggressive behavior towards themselves or others. Language symptoms include loss of speech, inability to read or write, loss of vocabulary, and overall degeneration of motor ability.
Temporal lobe epilepsy is a chronic neurological condition characterized by recurrent seizures; symptoms include a variety of sensory (visual, auditory, olfactory, and gustatory) hallucinations, as well as an inability to process semantic and episodic memories.
Schizophrenia is a severe psychotic disorder characterized by severe disorientation. Its most explicit symptom is the perception of external voices in the form of auditory hallucinations. The cause of such hallucinations has been attributed to deficits in the left temporal lobe, specifically within the primary auditory cortex. Decreased gray matter, among other cellular deficits, contributes to spontaneous neural activity that affects the primary auditory cortex as if it were experiencing acoustic auditory input. The misrepresentation of speech in the auditory cortex results in the perception of external voices in the form of auditory hallucinations in schizophrenic patients. Structural and functional MRI techniques have accounted for this neural activity by testing affected and non-affected individuals with external auditory stimuli.
| Biology and health sciences | Nervous system | Biology |
466716 | https://en.wikipedia.org/wiki/Hundredweight | Hundredweight | The hundredweight (abbreviation: cwt), formerly also known as the centum weight or quintal, is a British imperial and United States customary unit of weight or mass. Its value differs between the United States customary and British imperial systems. The two values are distinguished in American English as the short and long hundredweight and in British English as the cental and imperial hundredweight.
The short hundredweight or cental of 100 pounds (about 45.36 kg) is defined in the United States customary system.
The long or imperial hundredweight of 8 stone, or 112 pounds (about 50.80 kg), is defined in the British imperial system.
Under both conventions, there are 20 hundredweight in a ton, producing a "short ton" of 2,000 pounds (907.2 kg) and a "long ton" of 2,240 pounds (1,016 kg).
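A minimal Python sketch of the relationships stated above; the constant and function names are illustrative, and the kilogram figures follow from the exact definition of the avoirdupois pound (0.45359237 kg):

# Hundredweight-to-ton arithmetic in the two conventions.
LB_PER_SHORT_CWT = 100        # US short hundredweight (cental)
LB_PER_LONG_CWT = 112         # British imperial hundredweight (8 stone)
KG_PER_LB = 0.45359237        # exact definition of the avoirdupois pound
CWT_PER_TON = 20              # both conventions use 20 hundredweight to the ton

def ton_in_pounds(lb_per_cwt):
    """Return the weight of a ton, in pounds, for the given hundredweight."""
    return CWT_PER_TON * lb_per_cwt

print(ton_in_pounds(LB_PER_SHORT_CWT))                        # 2000 lb (short ton)
print(ton_in_pounds(LB_PER_LONG_CWT))                         # 2240 lb (long ton)
print(round(ton_in_pounds(LB_PER_SHORT_CWT) * KG_PER_LB, 1))  # 907.2 kg
print(round(ton_in_pounds(LB_PER_LONG_CWT) * KG_PER_LB, 1))   # 1016.0 kg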
History
The hundredweight has had many values. In England in around 1300, different hundreds (centum in Medieval Latin) were defined. The Weights and Measures Act 1835 formally established the present imperial hundredweight of 112 pounds.
The United States and Canada came to use the term "hundredweight" to refer to a unit of 100 pounds. This measure was specifically banned from British use—upon risk of being sued for fraud—by the Weights and Measures Act 1824 (5 Geo. 4. c. 74), but in 1879 the measure was legalised under the name "cental" in response to legislative pressure from British merchants importing wheat and tobacco from the United States into the United Kingdom.
Use
The short hundredweight is commonly used as a measurement in the United States in the sale of livestock and some cereal grains and oilseeds, paper, and concrete additives and on some commodities in futures exchanges.
A few decades ago, commodities weighed in terms of long hundredweight included cattle, cattle fodder, fertilizers, coal, some industrial chemicals, other industrial materials, and so on. However, with the increasing usage of the metric system in most English-speaking countries, it is now used to a far lesser extent. Church bell ringers use the unit commonly, although church bell manufacturers are increasingly moving over to the metric system.
Older blacksmiths' anvils are often stamped with a three-digit number indicating their total weight in hundredweight, quarter-hundredweight (28 pounds, abbreviated qr), and pounds. Thus, an anvil stamped "1.1.8" weighs 148 pounds (112 lb + 28 lb + 8 lb).
The same three-part scheme is used for church bells (formatted cwt–qr–lb).
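A small sketch, assuming the long (112 lb) hundredweight and 28 lb quarter described above, for converting a cwt–qr–lb stamp such as the anvil's "1.1.8" into pounds; the function name, separator handling, and second example stamp are illustrative, not taken from the source:

def stamp_to_pounds(stamp):
    """Convert a 'cwt.qr.lb' or 'cwt-qr-lb' weight stamp to pounds."""
    cwt, qr, lb = (int(part) for part in stamp.replace("-", ".").split("."))
    return cwt * 112 + qr * 28 + lb

print(stamp_to_pounds("1.1.8"))     # 148 lb, the anvil example above
print(stamp_to_pounds("12-1-20"))   # 1392 lb, a hypothetical bell stamp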
The long hundredweight is used as a measurement of vehicle weight in the Bailiwick of Guernsey. It was also previously used to indicate the maximum recommended carrying load of vans and trucks, such as the Ford Thames 5 and 7 cwt vans and the 8, 15, 30 and 60 cwt Canadian Military Pattern trucks.
Europe
In Europe outside the British Isles, a centum or quintal was never defined in terms of British units. Instead, it was based on the kilogramme or former customary units. It is usually abbreviated q. It was in Germany, in France, in Austria, etc. The unit was phased out or metricized after the introduction of the metric system in the 1790s, being occasionally retained in informal use up to the mid-20th century.
| Physical sciences | Mass and weight | Basics and measurement |
466746 | https://en.wikipedia.org/wiki/Mycoplasma%20pneumoniae | Mycoplasma pneumoniae | Mycoplasma pneumoniae is a species of very small-cell bacteria that lack a cell wall, in the class Mollicutes. M. pneumoniae is a human pathogen that causes the disease Mycoplasma pneumonia, a form of atypical bacterial pneumonia related to cold agglutinin disease.
It is one of the smallest self-replicating organisms and its discovery traces back to 1898, when Nocard and Roux isolated a microorganism linked to cattle pneumonia. This microbe shared characteristics with pleuropneumonia-like organisms (PPLOs), which were soon linked to pneumonias and arthritis in several animals. A significant development occurred in 1944 when Monroe Eaton cultivated an agent thought responsible for human pneumonia in embryonated chicken eggs, referred to as the "Eaton agent." Because this cultivation method was associated with viruses, the agent was widely assumed to be a virus, although the effectiveness of antibiotics in treating the infection cast doubt on a viral nature. In 1961, a researcher named Robert Chanock, collaborating with Leonard Hayflick, revisited the Eaton agent and posited it could be a mycoplasma, a hypothesis confirmed by Hayflick's isolation of a unique mycoplasma, later named Mycoplasma pneumoniae. Hayflick's discovery proved M. pneumoniae was responsible for causing human pneumonia.
Taxonomically, Mycoplasma pneumoniae is part of the Mollicutes class, characterized by their lack of a peptidoglycan cell wall, making them inherently resistant to antibiotics targeting cell wall synthesis, such as beta-lactams. With a reduced genome and metabolic simplicity, mycoplasmas are obligate parasites with limited metabolic pathways, relying heavily on host resources. This bacterium uses a specialized attachment organelle to adhere to respiratory tract cells, facilitating motility and cell invasion. The persistence of M. pneumoniae infections even after treatment is associated with its ability to mimic host cell surface composition.
Pathogenic mechanisms of M. pneumoniae involve host cell adhesion and cytotoxic effects, including cilia loss and hydrogen peroxide release, which lead to respiratory symptoms and complications such as bronchial asthma and chronic obstructive pulmonary disease. Additionally, the bacterium produces a unique CARDS toxin, contributing to inflammation and respiratory distress. Treatment of M. pneumoniae infections typically involves macrolides or tetracyclines, as these antibiotics inhibit protein synthesis, though resistance has been increasing, particularly in Asia. This resistance predominantly arises from mutations in the 23S rRNA gene, which interfere with macrolide binding, complicating management and necessitating alternative treatment strategies.
Discovery and history
In 1898, Nocard and Roux isolated an agent assumed to be the cause of cattle pneumonia and named it microbe de la peripneumonie. Microorganisms from other sources, having properties similar to the pleuropneumonia organism (PPO) of cattle, soon came to be known as pleuropneumonia-like organisms (PPLO), but their true nature remained unknown. Many PPLO were later proven to be the cause of pneumonias and arthritis in several lower animals.
In 1944, Monroe Eaton used embryonated chicken eggs to cultivate an agent thought to be the cause of human primary atypical pneumonia (PAP), commonly known as "walking pneumonia." This unknown organism became known as the "Eaton agent". At that time, Eaton's use of embryonated eggs, then used for cultivating viruses, supported the idea that the Eaton agent was a virus. Yet it was known that PAP was amenable to treatment with broad-spectrum antibiotics, making a viral etiology suspect.
Robert Chanock, a researcher from the NIH who was studying the Eaton agent as a virus, visited the Wistar Institute in Philadelphia in 1961 to obtain a cell culture of a normal human cell strain developed by Leonard Hayflick. This cell strain was known to be exquisitely sensitive to isolate and grow human viruses. Chanock told Hayflick of his research on the Eaton agent, and his belief that its viral nature was questionable. Although Hayflick knew little about the current research on this agent, his doctoral dissertation had been done on animal diseases caused by PPLO. Hayflick knew that many lower animals suffered from pneumonias caused by PPLOs (later to be termed mycoplasmas). Hayflick reasoned that the Eaton agent might be a mycoplasma, and not a virus.
Chanock had never heard of mycoplasmas, and at Hayflick's request sent him egg yolk containing the Eaton agent.
Using a novel agar and fluid medium formulation he had devised, Hayflick isolated a unique mycoplasma from the egg yolk. This was soon proven by Chanock and Hayflick to be the causative agent of PAP. When this discovery became known to Emmy Klieneberger-Nobel of the Lister Institute in London, the world's leading authority on these organisms, she suggested that the organism be named Mycoplasma hayflickiae. Hayflick demurred in favor of Mycoplasma pneumoniae.
This smallest free-living microorganism was the first to be isolated and proven to be the cause of a human disease. For his discovery, Hayflick was presented with the Presidential Award by the International Organization of Mycoplasmology. The inverted microscope under which Hayflick discovered Mycoplasma pneumoniae is kept by the Smithsonian Institution.
Taxonomy and classification
The term mycoplasma (from the Greek mykes, meaning fungus, and plasma, meaning formed) is derived from the fungal-like growth of some mycoplasma species. The mycoplasmas were classified as Mollicutes ("mollis", meaning soft, and "cutis", meaning skin) in 1960 due to their small size and genome, lack of cell wall, low G+C content, and unusual nutritional needs.
Mycoplasmas, which are among the smallest self-replicating organisms, are parasitic species that lack a cell wall and periplasmic space, have reduced genomes, and show limited metabolic activity. M. pneumoniae has also been designated as an arginine-nonfermenting species. Mycoplasmas are further classified by the sequence composition of 16S rRNA. All mycoplasmas of the pneumoniae group possess similar 16S rRNA variations unique to the group, of which M. pneumoniae has a 6.3% variation in the conserved regions, which suggests that mycoplasmas formed by degenerative evolution from the Gram-positive eubacterial group that includes bacilli, streptococci, and lactobacilli. M. pneumoniae is a member of the family Mycoplasmataceae and order Mycoplasmatales.
Cell biology
Mycoplasma pneumoniae cells have an elongated shape that is approximately 0.1–0.2 μm (100–200 nm) in width and 1–2 μm (1,000–2,000 nm) in length. The extremely small cell size means they are incapable of being examined by light microscopy; a stereomicroscope is required for viewing the morphology of M. pneumoniae colonies, which are usually less than 100 μm in length. The inability to synthesize a peptidoglycan cell wall is due to the absence of genes encoding its formation and results in an increased importance of maintaining osmotic stability to avoid desiccation. The lack of a cell wall also calls for increased support of the cell membrane (reinforced with sterols), which includes a rigid cytoskeleton composed of an intricate protein network and, potentially, an extracellular capsule to facilitate adherence to the host cell. M. pneumoniae are the only bacterial cells that possess cholesterol in their cell membrane (obtained from the host) and possess more genes that encode for membrane lipoprotein variations than other mycoplasmas, which are thought to be associated with its parasitic lifestyle. M. pneumoniae cells also possess an attachment organelle, which is used in the gliding motility of the organism by an unknown mechanism.
Genomics and metabolic reconstruction
Sequencing of the M. pneumoniae genome in 1996 revealed it is 816,394 bp in size. The genome contains 687 genes that encode for proteins, of which about 56.6% code for essential metabolic enzymes; notably those involved in glycolysis and organic acid fermentation. M. pneumoniae is consequently very susceptible to loss of enzymatic function by gene mutations, as the only buffering systems against functional loss by point mutations are for maintenance of the pentose phosphate pathway and nucleotide metabolism. Loss of function in other pathways is suggested to be compensated by host cell metabolism. In addition to the potential for loss of pathway function, the reduced genome of M. pneumoniae outright lacks a number of pathways, including the TCA cycle, respiratory electron transport chain, and biosynthesis pathways for amino acids, fatty acids, cholesterol and purines and pyrimidines. These limitations make M. pneumoniae dependent upon import systems to acquire essential building blocks from their host or the environment that cannot be obtained through glycolytic pathways. Along with energy costly protein and RNA production, a large portion of energy metabolism is exerted to maintain proton gradients (up to 80%) due to the high surface area to volume ratio of M. pneumoniae cells. Only 12 – 29% of energy metabolism is directed at cell growth, which is unusually low for bacterial cells, and is thought to be an adaptation of its parasitic lifestyle.
Unlike other bacteria, M. pneumoniae uses the codon UGA to code for tryptophan rather than using it as a stop codon.
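A toy Python sketch (with a deliberately tiny, illustrative codon table rather than the full genetic code) showing the consequence of reassigning UGA from a stop signal to tryptophan, as described above:

# Minimal illustrative codon tables; only the codons used below are included.
STANDARD = {"AUG": "M", "UGG": "W", "UGA": "*"}   # '*' marks a stop codon
MYCOPLASMA = {**STANDARD, "UGA": "W"}             # UGA reassigned to tryptophan

def translate(rna, table):
    """Translate an mRNA string codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = table[rna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

mrna = "AUGUGAUGG"
print(translate(mrna, STANDARD))    # 'M'   -- UGA read as a stop
print(translate(mrna, MYCOPLASMA))  # 'MWW' -- UGA read as tryptophan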
Comparative metabolomics
Mycoplasma pneumoniae has a reduced metabolome in comparison to other bacterial species. This means that the pathogen has fewer metabolic reactions than other bacterial species such as B. subtilis and Escherichia coli.
Since Mycoplasma pneumoniae has a reduced genome, it has a smaller number of overall paths and metabolic enzymes, which contributes to its more linear metabolome. A linear metabolome causes Mycoplasma pneumoniae to be less adaptable to external factors. Additionally, since Mycoplasma pneumoniae has a reduced genome, the majority of its metabolic enzymes are essential. This is in contrast to another model organism, Escherichia coli, in which only 15% of its metabolic enzymes are essential. In summary, the linear topology of Mycoplasma pneumoniae's metabolome leads to reduced efficiency in its metabolic reactions, but still maintains similar levels of metabolite concentrations, cellular energetics, adaptability, and global gene expression.
Comparisons of mean path length across the metabolomes of M. pneumoniae, E. coli, L. lactis, and B. subtilis describe, essentially, the mean number of reactions that occur along a path in the metabolome. Mycoplasma pneumoniae, on average, has a high number of reactions per path within its metabolome in comparison to other model bacterial species.
One effect of Mycoplasma pneumoniae’s unique metabolome is its longer duplication time. It takes the pathogen significantly more time to duplicate on average compared to other model organism bacteria. This may be due to the fact that Mycoplasma pneumoniae’s metabolome is less efficient than that of Escherichia coli.
The metabolome of Mycoplasma pneumoniae can also be informative in analyzing its pathogenesis. Extensive study of the metabolic network of this organism has led to the identification of biomarkers that can potentially reveal the presence of the extensive complications the bacteria can cause. Metabolomics is increasingly being used as a useful tool for the verification of biomarkers of infectious pathogens.
Pathogenicity
Mycoplasma pneumoniae parasitizes the respiratory tract epithelium of humans. Adherence to the respiratory epithelial cells is thought to occur via the attachment organelle, followed by evasion of host immune system by intracellular localization and adjustment of the cell membrane composition to mimic the host cell membrane.
Mycoplasma pneumoniae grows exclusively by parasitizing mammals. Reproduction, therefore, is dependent upon attachment to a host cell. According to Waites and Talkington, specialized reproduction occurs by “binary fission, temporally linked with duplication of its attachment organelle, which migrates to the opposite pole of the cell during replication and before nucleoid separation”. Mutations that affect the formation of the attachment organelle not only hinder motility and cell division, but also reduce the ability of M. pneumoniae cells to adhere to the host cell.
Cytoadherence
Adherence of M. pneumoniae to a host cell (usually a respiratory tract cell, but occasionally an erythrocyte or urogenital lining cell) is the initiating event for pneumonic disease and related symptoms. The specialized attachment organelle is a polar, electron dense and elongated cell extension that facilitates motility and adherence to host cells. It is composed of a central filament surrounded by an intracytoplasmic space, along with a number of adhesins and structural and accessory proteins localized at the tip of the organelle.
A variety of proteins are known to contribute to the formation and functionality of the attachment organelle, including the accessory proteins HMW1–HMW5, P30, P56, and P90 that confer structure and adhesin support, and P1, P30 and P116 which are involved directly in attachment. This network of proteins participates not only in the initiation of attachment organelle formation and adhesion but also in motility.
The P1 adhesin (trypsin-sensitive protein) is a 120 kDa protein highly clustered on the surface of the attachment organelle tip in virulent mycoplasmas. Both the presence of P1 and its concentration on the cell surface are required for the attachment of M. pneumoniae to the host cell. M. pneumoniae cells treated with monoclonal antibodies specific to the immunogenic C-terminus of the P1 adhesin have been shown to be inhibited in their ability to attach to the host cell surface by approximately 75%, suggesting P1 is a major component in adherence. These antibodies also decreased the ability of the cell to glide quickly, which may contribute to decreased adherence to the host by hindering their capacity to locate a host cell. Furthermore, mutations in P1 or degradation by trypsin treatment yield avirulent M. pneumoniae cells. Loss of proteins in the cytoskeleton involved in the localization of P1 in the tip structure, such as HMW1–HMW3, also cause avirulence due to the lack of adhesin clustering. Another protein considered to play an important role in adherence is P30, as M. pneumoniae cells with mutations in this protein or that have had antibodies raised against P30 are incapable of adhering to host cells. P30 is not involved in the localization of P1 in the tip structure since P1 is trafficked to the attachment organelle in P30 mutants, but rather it may function as a receptor-binding accessory adhesin. P30 mutants also display distinct morphological features such as multiple lobes and a rounded shape as opposed to elongated, which suggests P30 may interact with the cytoskeleton during formation of the attachment organelle.
A number of eukaryotic cell surface components have been implicated in the adherence of M. pneumoniae cells to the respiratory tract epithelium. Among them are sialoglycoconjugates, sulfated glycolipids, glycoproteins, fibronectin, and neuraminic acid receptors. Lectins on the surface of the bacterial cells are capable of binding oligosaccharide chains on glycolipids and glycoproteins to facilitate attachment, in addition to the proteins TU and pyruvate dehydrogenase E1 β, which bind to fibronectin.
Intracellular localization
Mycoplasma pneumoniae fuses with host cells and survives intracellularly. It can thus evade host immune system detection, resist antibiotic treatment, and cross mucosal barriers. In addition to the close physical proximity of M. pneumoniae and host cells, the lack of cell wall and peculiar cell membrane components, like cholesterol, may facilitate fusion. Internal localization may produce chronic or latent infections, as M. pneumoniae is capable of persisting, synthesizing DNA, and replicating within the host cell even after treatment with antibiotics. The exact mechanism of intracellular localization is unknown; however, the potential for cytoplasmic sequestration within the host explains the difficulty in completely eliminating M. pneumoniae infections in afflicted individuals.
Immune response
In addition to evasion of host immune system by intracellular localization, M. pneumoniae can change the composition of its cell membrane to mimic the host cell membrane and avoid detection by immune system cells. M. pneumoniae cells possess a number of protein and glycolipid antigens that elicit immune responses, but variation of these surface antigens would allow the infection to persist long enough for M. pneumoniae cells to fuse with host cells and escape detection. The similarity between the compositions of M. pneumoniae and human cell membranes can also result in autoimmune responses in several organs and tissues.
Cytotoxicity and organismal effects
The main cytotoxic effect of M. pneumoniae is local disruption of tissue and cell structure along the respiratory tract epithelium due to its attachment to host cells. Attachment of the bacteria to host cells can result in loss of cilia, a reduction in metabolism, biosynthesis, and import of macromolecules, and, eventually, infected cells may be shed from the epithelial lining. Local damage may also be a result of lactoferrin acquisition and subsequent hydroxyl radical, superoxide anion and peroxide formation.
Secondly, M. pneumoniae produces a unique virulence factor known as Community Acquired Respiratory Distress Syndrome (CARDS) toxin. The CARDS toxin most likely aids in the colonization and pathogenic pathways of M. pneumoniae, leading to inflammation and airway dysfunction.
The third virulence factor is the formation of hydrogen peroxide in M. pneumoniae infections. When M. pneumoniae is attached to erythrocytes, hydrogen peroxide diffuses from the bacteria to the host cell without it being detoxified by catalase or peroxidase, thus injuring the host cell by reducing glutathione, damaging lipid membranes and causing protein denaturation, i.e. oxidation of heme and hemolysis.
Most recently it was shown that hydrogen peroxide plays a minor if any role in haemolysis, but that hydrogen sulfide is the true culprit.
The cytotoxic effects of M. pneumoniae infections translate into common symptoms like coughing and lung irritation that may persist for months after infection has subsided. Local inflammation and hyperresponsiveness by infection induced cytokine production has been associated with chronic conditions such as bronchial asthma and has also been linked to progression of symptoms in individuals with cystic fibrosis and COPD.
Antimicrobial activity
Infections can be treated with oral antibiotics from the macrolide family, which work by inhibiting Mycoplasma protein biosynthesis. Erythromycin is the oldest of these drugs. Azithromycin or clarithromycin are used as first choices because they have more convenient pharmacokinetics than erythromycin: they need to be taken only once or twice a day rather than four times, and they have fewer side effects.
Alternatively, tetracyclines (e.g., doxycycline) and respiratory fluoroquinolones (e.g., levofloxacin or moxifloxacin) can be used, although they have an undesirable side effect profile in children. Beta-lactams such as penicillin are completely ineffective because they target cell wall synthesis, and Mycoplasma lacks a cell wall.
Resistance
Resistance to macrolides was reported as early as 1967. Increased antibiotic usage has been followed by an increase in resistance since 2000. Resistance in the 2020s has been highest in Asia, where it reaches as high as 100%, while rates in the United States have varied from 3.5% to 13%. A single base mutation in the V region of 23S rRNA, such as A2063G or A2064G, is responsible for more than 90% of macrolide-resistant infections.
Because M. pneumoniae is difficult to grow, routine culture and susceptibility testing is not performed; clinicians instead select an antibiotic based on an estimate of local resistance, on treatment response, and on other factors.
| Biology and health sciences | Gram-positive bacteria | Plants |
466900 | https://en.wikipedia.org/wiki/Carpet%20shark | Carpet shark | Carpet sharks are sharks classified in the order Orectolobiformes . Sometimes the common name "carpet shark" (given because many species resemble ornately patterned carpets) is used interchangeably with "wobbegong", which is the common name of sharks in the family Orectolobidae. Carpet sharks have five gill slits, two spineless dorsal fins, and a small mouth that does not extend past the eyes. Many species have barbels.
Characteristics
The carpet sharks are a diverse group of sharks with differing sizes, appearances, diets, and habits. They first appeared in the fossil record in the Early Jurassic; the oldest known orectolobiform genera are Folipistrix (known from the Toarcian to Aalenian of Belgium and Germany), Palaeobrachaelurus (Aalenian to Barremian) and Annea (Toarcian to Bajocian of Europe). All species have two dorsal fins and a relatively short, transverse mouth that does not extend behind the eyes. Beside the nostrils are barbels, which are tactile sensory organs, and grooves known as nasoral grooves connect the nostrils to the mouth. Five short gill slits lie just in front of the origin of the pectoral fin, and the fifth slit tends to overlap the fourth. A spiracle, used in respiration, occurs beneath each eye; the only exception to this rule is the whale shark, whose spiracles are situated just behind the eyes. Carpet sharks derive their common name from the fact that many species have a mottled appearance with intricate patterns reminiscent of carpet designs. The patterning provides camouflage when the fish is lying on the seabed. The largest carpet shark is the whale shark (Rhincodon typus), which can grow to a length of . It is the largest species of fish, but despite its size it is not dangerous, as it is a filter feeder, drawing in water through its wide mouth and sifting out the plankton. The smallest carpet shark, at up to about long, is the barbelthroat carpet shark (Cirrhoscyllium expolitum). Some of the most spectacularly coloured members of the order are the necklace carpet shark (Parascyllium variolatum), the zebra shark (Stegostoma fasciatum), and the ornate wobbegong (Orectolobus ornatus). Nurse sharks and whale sharks have a fringe of barbels on their snouts, and barbelthroat carpet sharks have barbels dangling from their throat regions.
Behaviour
Most carpet sharks feed on the seabed in shallow to medium-depth waters, detecting and picking up molluscs, crustaceans, and other small creatures. The wobbegongs tend to be ambush predators, lying hidden on the seabed until prey approaches. One has been observed swallowing a bamboo shark whole.
The methods of reproduction of carpet sharks vary. Some species are oviparous and lay eggs, which may be liberated directly into the water or may be enclosed in horny egg cases. Some female sharks have been observed pushing egg cases into crevices, which would provide added protection for the developing embryos. Other species are ovoviviparous, and the fertilised eggs are retained in the mother's oviduct. There, the developing embryos, which are usually few in number, feed on their yolk sacs at first and later hatch out and feed on nutrients secreted by the walls of the oviduct. The young are born in an advanced state, ready to live independent lives.
Distribution
Carpet sharks are found in all the oceans of the world but predominantly in tropical and temperate waters. They are most common in the western Indo-Pacific region and are usually found in relatively deep water.
Classification
The order is small, with seven families comprising 13 genera and a total of around 43 species:
Extant species
Order Orectolobiformes
Family Brachaeluridae Applegate (blind sharks)
Genus Brachaelurus Ogilby, 1908
Brachaelurus colcloughi (Ogilby, 1908) (bluegrey carpetshark)
Brachaelurus waddi (Bloch & J. G. Schneider, 1801) (blind shark)
Family Ginglymostomatidae Gill, 1862 (nurse sharks)
Genus Ginglymostoma J. P. Müller & Henle, 1837
Ginglymostoma cirratum Bonnaterre, 1788 (nurse shark)
Ginglymostoma unami Del-Moral-Flores, Ramírez-Antonio, Angulo & Pérez-Ponce de León, 2015
Genus Nebrius Rüppell, 1837
Nebrius ferrugineus (Lesson, 1831) (tawny nurse shark)
Genus Pseudoginglymostoma Dingerkus, 1986
Pseudoginglymostoma brevicaudatum (Günther, 1867) (short-tail nurse shark)
Family Hemiscylliidae Gill, 1862 (bamboo sharks)
Genus Chiloscyllium J. P. Müller & Henle, 1837
Chiloscyllium arabicum Gubanov, 1980 (Arabian carpetshark)
Chiloscyllium burmensis Dingerkus & DeFino, 1983 (Burmese bamboo shark)
Chiloscyllium griseum J. P. Müller & Henle, 1838 (grey bamboo shark)
Chiloscyllium hasselti Bleeker, 1852 (Hasselt's bamboo shark)
Chiloscyllium indicum (J. F. Gmelin, 1789) (slender bamboo shark)
Chiloscyllium plagiosum (Anonymous, referred to Bennett, 1830) (white-spotted bamboo shark)
Chiloscyllium punctatum J. P. Müller & Henle, 1838 (brownbanded bamboo shark)
Genus Hemiscyllium J. P. Müller & Henle, 1837
Hemiscyllium freycineti (Quoy & Gaimard, 1824) (Indonesian speckled carpetshark)
Hemiscyllium galei G. R. Allen & Erdmann, 2008 (Cenderwasih epaulette shark)
Hemiscyllium hallstromi Whitley, 1967 (Papuan epaulette shark)
Hemiscyllium halmahera G. R. Allen, Erdmann & Dudgeon, 2013 (Halmahera epaulette shark)
Hemiscyllium henryi G. R. Allen & Erdmann, 2008 (Henry's epaulette shark)
Hemiscyllium michaeli G. R. Allen & Dudgeon, 2010 (Milne Bay epaulette shark)
Hemiscyllium ocellatum (Bonnaterre, 1788) (epaulette shark)
Hemiscyllium strahani Whitley, 1967 (hooded carpetshark)
Hemiscyllium trispeculare J. Richardson, 1843 (speckled carpetshark)
Family Orectolobidae Gill, 1896 (wobbegong sharks)
Genus Eucrossorhinus Regan, 1908
Eucrossorhinus dasypogon (Bleeker, 1867) (tasselled wobbegong)
Genus Orectolobus Bonaparte, 1834
Orectolobus floridus Last & Chidlow, 2008 (floral banded wobbegong)
Orectolobus halei Whitley, 1940
Orectolobus hutchinsi Last, Chidlow & Compagno, 2006 (western wobbegong)
Orectolobus japonicus Regan, 1906 (Japanese wobbegong)
Orectolobus leptolineatus Last, Pogonoski & W. T. White, 2010 (Indonesian wobbegong)
Orectolobus maculatus (Bonnaterre, 1788) (spotted wobbegong)
Orectolobus ornatus (De Vis, 1883) (ornate wobbegong)
Orectolobus parvimaculatus Last & Chidlow, 2008 (dwarf spotted wobbegong)
Orectolobus reticulatus Last, Pogonoski & W. T. White, 2008 (network wobbegong)
Orectolobus wardi Whitley, 1939 (northern wobbegong)
Genus Sutorectus Whitley, 1939
Sutorectus tentaculatus (W. K. H. Peters, 1864) (cobbler wobbegong)
Family Parascylliidae Gill, 1862 (collared carpet sharks)
Genus Cirrhoscyllium H. M. Smith & Radcliffe, 1913
Cirrhoscyllium expolitum H. M. Smith & Radcliffe, 1913 (barbelthroat carpetshark)
Cirrhoscyllium formosanum Teng, 1959 (Taiwan saddled carpetshark)
Cirrhoscyllium japonicum Kamohara, 1943 (saddle carpetshark)
Genus Parascyllium Gill, 1862
Parascyllium collare E. P. Ramsay & Ogilby, 1888 (collared carpetshark)
Parascyllium elongatum Last & Stevens, 2008 (elongate carpetshark)
Parascyllium ferrugineum McCulloch, 1911 (rusty carpetshark)
Parascyllium sparsimaculatum T. Goto & Last, 2002 (ginger carpetshark)
Parascyllium variolatum (A. H. A. Duméril, 1853) (necklace carpetshark)
Family Rhincodontidae (J. P. Müller & Henle, 1839) (whale sharks)
Genus Rhincodon A. Smith, 1828
Rhincodon typus A. Smith, 1828 (whale shark)
Family Stegostomatidae Gill, 1862 (zebra sharks)
Genus Stegostoma J. P. Müller & Henle, 1837
Stegostoma fasciatum (Hermann, 1783) (zebra shark)
Fossil genera
The following fossil genera are recognized:
Order Orectolobiformes
Genus †Akaimia Rees, 2010
Genus †Annea Thies, 1982
Genus †Dorsetoscyllium Underwood & Ward, 2004
Genus †Galagadon Gates, Gorscak & Makovicky, 2019
Genus †Heterophorcynus Underwood & Ward, 2004
Genus †Folipistrix Kriwet, 2003
Genus †Palaeorectolobus Kriwet, 2008
Genus †Parasquatina Herman, 1982
Genus †Phorcynis Thiollière, 1854
Genus †Similiteroscyllium Fuchs, Engelbrecht, Lukeneder & Kriwet, 2017
Family Brachaeluridae
Genus †Eostegostoma Herman & Crochard, 1977
Genus †Garrigascyllium Guinot et al., 2014
Genus †Magistrauia Guinot, Cappetta & Adnet, 2014
Genus †Palaeobrachaelurus Thies, 1983
Genus †Paraginglymostoma Thies, 1982
Genus †Parahemiscyllium Guinot, Cappetta & Adnet, 2014
Family Ginglymostomatidae
Genus †Cantioscyllium Woodward, 1889
Genus †Delpitoscyllium Noubhani & Cappetta, 1997
Genus †Ganntouria Noubhani & Cappetta, 1997
Genus †Hologinglymostoma Noubhani & Cappetta, 1997
Genus †Plicatoscyllium Case & Cappetta, 1997
Genus †Protoginglymostoma Herman & Crochard, 1977
Family Hemiscylliidae
Genus †Acanthoscyllium (Pictet & Humbert, 1966)
Genus †Adnetoscyllium Guinot, Underwood, Cappetta & Ward, 2013
Genus †Almascyllium Cappetta, 1980
Genus †Notaramphoscyllium Engelbrecht, Mörs, Reguero & Kriwet, 2017
Genus †Pseudospinax Müller & Diedrich, 1991
Family †Mesiteiidae Pfeil, 2021
Genus †Mesiteia Gorjanovic-Kramberger, 1885
Family Orectolobidae
Genus †Cederstroemia Siverson, 1995
Genus †Coelometlaouia Engelbrecht, Mörs, Reguero & Kriwet, 2017
Genus †Cretorectolobus Case, 1978
Genus †Eometlaouia Noubhani & Cappetta, 2002
Genus †Gryphodobatis Leidy, 1877
Genus †Orectoloboides Cappetta, 1977
Genus †Restesia Cook et al., 2014
Genus †Squatiscyllium Cappetta, 1980
Family Parascylliidae
Genus †Pararhincodon Herman in Cappetta, 1976
Family Rhincodontidae
Genus †Palaeorhincodon Herman, 1974
| Biology and health sciences | Sharks | Animals |
466928 | https://en.wikipedia.org/wiki/Zebra%20shark | Zebra shark | The zebra shark (Stegostoma tigrinum) is a species of carpet shark and the sole member of the family Stegostomatidae. It is found throughout the tropical Indo-Pacific, frequenting coral reefs and sandy flats to a depth of . Adult zebra sharks are distinctive in appearance, with five longitudinal ridges on a cylindrical body, a low caudal fin comprising nearly half the total length, and usually a pattern of dark spots on a pale background. Young zebra sharks under long have a completely different pattern, consisting of light vertical stripes on a brown background, and lack the ridges. This species attains a length of .
Zebra sharks are nocturnal and spend most of the day resting motionless on the sea floor. At night, they actively hunt for molluscs, crustaceans, small bony fishes, and possibly sea snakes inside holes and crevices in the reef. Though solitary for most of the year, they form large seasonal aggregations. The zebra shark is oviparous: females produce several dozen large egg capsules, which they anchor to underwater structures via adhesive tendrils. Innocuous to humans and hardy in captivity, zebra sharks are popular subjects of ecotourism dives and public aquaria. The World Conservation Union has assessed this species as Endangered worldwide, as it is taken by commercial fisheries across most of its range (except off Australia) for meat, fins, and liver oil. There is evidence that its numbers are dwindling.
Taxonomy
The zebra shark was first described as Squalus varius by Seba in 1758 (Seba died years earlier; the publication was posthumous). No type specimen was designated, though Seba included a comprehensive description in Latin and an accurate illustration of a juvenile. Müller and Henle placed this species in the genus Stegostoma in 1837, using the specific epithet fasciatus (or the neuter form fasciatum, as Stegostoma is neuter while Squalus is masculine) from an 1801 work by Bloch and Schneider. In 1984, Compagno rejected the name "varius/m" in favor of "fasciatus/m" for the zebra shark, because Seba did not consistently use binomial nomenclature in his species descriptions (though Squalus varius is one that can be construed as a binomial name). In Compagno's view, the first proper usage of "varius/m" was by Garman in 1913, making it a junior synonym. Both S. fasciatum and S. varium are currently in usage for this species; until the early 1990s most authorities used the latter name, but since then most have followed Compagno and used the former name. A taxonomic review in 2019 instead argued that S. tigrinum is its valid name. This name was omitted in Compagno's review in 1984, possibly due to confusion over its year of description (in a publication in 1941, Fowler mistakenly listed it as being described in 1795). Squalus tigrinus was described by Forster in 1781, two years before Squalus fasciatus was described by Hermann. Consequently, the former and older is the valid name (as Stegostoma tigrinum), while the latter and younger is its junior synonym. As the name proposed by Forster in 1781 has been used in tens of publications since 1899, it is not a nomen oblitum.
The genus name is derived from the Greek stego meaning "covered", and stoma meaning "mouth". The specific epithet fasciatum means "banded", referring to the striped pattern of the juvenile. The juvenile coloration is also the origin of the common name "zebra shark". The name "leopard shark" is sometimes applied to the spotted adult, but that name usually refers to the houndshark Triakis semifasciata, and is also sometimes used for the tiger shark (Galeocerdo cuvier). Due to their different color patterns and body proportions, both juveniles and subadults have historically been described as separate species (Squalus tigrinus and S. longicaudatus respectively).
Phylogeny
There is robust morphological support for the placement of the zebra shark, the whale shark (Rhincodon typus), and the nurse sharks (Ginglymostoma cirratum, Nebrius ferrugineus, and Pseudoginglymostoma brevicaudatum) in a single clade. However, the interrelationships between these taxa are disputed by various authors. Dingerkus (1986) suggested that the whale shark is the closest relative of the zebra shark, and proposed a single family encompassing all five species in the clade. Compagno (1988) suggested affinity between this species and either Pseudoginglymostoma or a clade containing Rhincodon, Ginglymostoma, and Nebrius. Goto (2001) placed the zebra shark as the sister group to a clade containing Rhincodon and Ginglymostoma.
Description
The zebra shark has a cylindrical body with a large, slightly flattened head and a short, blunt snout. The eyes are small and placed on the sides of the head; the spiracles are located behind them and are as large as or larger than the eyes. The last 3 of the 5 short gill slits are situated over the pectoral fin bases, and the fourth and fifth slits are much closer together than the others. Each nostril has a short barbel and a groove running from it to the mouth. The mouth is nearly straight, with three lobes on the lower lip and furrows at the corners. There are 28–33 tooth rows in the upper jaw and 22–32 tooth rows in the lower jaw; each tooth has a large central cusp flanked by two smaller ones.
There are five distinctive ridges running along the body in adults, one along the dorsal midline and two on each side. The dorsal midline ridge merges into the first dorsal fin, which is placed about halfway along the body and is twice the size of the second dorsal fin. The pectoral fins are large and broad; the pelvic and anal fins are much smaller, though still larger than the second dorsal fin. The caudal fin is almost as long as the rest of the body, with a barely developed lower lobe and a strong ventral notch near the tip of the upper lobe. The zebra shark attains a length of , with an unsubstantiated record of . Males and females are not dimorphic in size.
The color pattern in young sharks is dark brown above and light yellow below, with vertical yellow stripes and spots. As the shark grows to long, the dark areas begin to break up, changing the general pattern from light-on-dark stripes to dark-on-light spots. There is substantial variation in pattern amongst adults, which can be used to identify particular individuals. A rare morph, informally called the sandy zebra shark, is overall sandy–brown in color with inconspicuous dark brown freckles on its upperside, lacking the distinct dark-spotted and banded pattern typical of the species. The appearance of juveniles of this morph is unknown, but subadults that are transitioning into adult sandy zebra sharks have a brown-netted pattern. Faint remnants of this pattern can often be seen in adult sandy zebra sharks. This morph, which is genetically inseparable from the normal morph, is only known from the vicinity of Malindi in Kenya, although seemingly similar individuals have been reported from Japan and northwestern Australia.
In 1964, a partially albino zebra shark was discovered in the Indian Ocean. It was overall white and completely lacked spots, but its eyes were blackish-brown as typical of the species and unlike full albinos. The shark, a long mature female, was unusual in that albino animals rarely survive long in the wild due to their lack of crypsis.
Distribution and habitat
The zebra shark occurs in the tropical waters of the Indo-Pacific region, from South Africa to the Red Sea and the Persian Gulf (including Madagascar and the Maldives), to India and Southeast Asia (including Indonesia, the Philippines, and Palau), northward to Taiwan and Japan, eastward to New Caledonia and Tonga, and southward to northern Australia.
Bottom-dwelling in nature, the zebra shark is found from the intertidal zone to a depth of over the continental and insular shelves. Adults and large juveniles frequent coral reefs, rubble, and sandy areas. There are unsubstantiated reports of this species from fresh water in the Philippines. Zebra sharks sometimes cross oceanic waters to reach isolated seamounts. Movements of up to have been recorded for individual sharks. However, genetic data indicates that there is little exchange between populations of zebra sharks, even if their ranges are contiguous.
Biology and ecology
During the day, zebra sharks are sluggish and usually found resting on the sea bottom, sometimes using their pectoral fins to prop up the front part of their bodies and facing into the current with their mouths open to facilitate respiration. Reef channels are favored resting spots, since the tightened space yields faster, more oxygenated water. They become more active at night or when food becomes available. Zebra sharks are strong and agile swimmers, propelling themselves with pronounced anguilliform (eel-like) undulations of the body and tail. In a steady current, they have been seen hovering in place with sinuous waves of their tails.
The zebra shark feeds primarily on shelled molluscs, though it also takes crustaceans, small bony fishes, and possibly sea snakes. The slender, flexible body of this shark allows it to wriggle into narrow holes and crevices in search of food, while its small mouth and thickly muscled buccal cavity allow it to create a powerful suction force with which to extract prey. This species may be preyed upon by larger fishes (notably other larger sharks) and marine mammals. Known parasites of the zebra shark include four species of tapeworms in the genus Pedibothrium.
Social life
Zebra sharks are usually solitary, though aggregations of 20–50 individuals have been recorded. Off southeast Queensland, aggregations of several hundred zebra sharks form every summer in shallow water. These aggregations consist entirely of large adults, with females outnumbering males by almost three to one. The purpose of these aggregations is yet unclear; no definite mating behavior has been observed between the sharks. There is an observation of an adult male zebra shark biting the pectoral fin of another adult male and pushing him against the sea floor; the second male was turned on his back, and remained motionless for several minutes. This behavior resembles pre-copulatory behaviors between male and female sharks, and in both cases the biting and holding of the pectoral fin has been speculated to relate to one shark asserting dominance over the other.
Life history
The courtship behavior of the zebra shark consists of the male following the female and biting vigorously at her pectoral fins and tail, with periods in which he holds onto her pectoral fin and both sharks lie still on the bottom. On occasion this leads to mating, in which the male curls his body around the female and inserts one of his claspers into her cloaca. Copulation lasts for two to five minutes. The zebra shark is oviparous, with females laying large egg capsules measuring long, wide, and thick. The egg case is dark brown to purple in color, and has hair-like fibers along the sides that secure it to the substrate. The adhesive fibers emerge first from the female's vent; the female circles vertical structures such as reef outcroppings to entangle the fibers, so as to anchor the eggs. Females have been documented laying up to 46 eggs over a 112-day period. Eggs are deposited in batches of around four. Reproductive seasonality in the wild is unknown.
In captivity, the eggs hatch after four to six months, depending on temperature. The hatchlings measure long and have proportionately longer tails than adults. The habitat preferences of juveniles are unclear; one report places them at depths greater than , while another report from India suggests they inhabit shallower water than adults. The stripes of the juveniles may have an anti-predator function, making each individual in a group harder to target. Males attain sexual maturity at long, and females at long. Their lifespan has been estimated to be 25–30 years in the wild. There have been two reports of female zebra sharks producing young asexually. An additional study has observed parthenogenesis in females regardless of sexual history.
Human interactions
Docile and slow-moving, zebra sharks are not dangerous to humans and can be easily approached underwater. However, they have bitten divers who pull on their tails or attempt to ride them. As of 2008 there is one record of an unprovoked attack in the International Shark Attack File, though no injuries resulted. They are popular attractions for ecotourist divers in the Red Sea, off the Maldives, off Thailand's Phuket and Phi Phi islands, on the Great Barrier Reef, and elsewhere. Many zebra sharks at diving sites have become accustomed to the presence of humans, taking food from divers' hands and allowing themselves to be touched. The zebra shark adapts well to captivity and is displayed by a number of public aquaria around the world. The small, attractively colored young also find their way into the hands of private hobbyists, though this species grows far too large for the home aquarium.
The zebra shark is taken by commercial fisheries across most of its range, using bottom trawls, gillnets, and longlines. The meat is sold fresh or dried and salted for human consumption. Furthermore, the liver oil is used for vitamins, the fins for shark fin soup, and the offal for fishmeal. Zebra sharks are highly susceptible to localized depletion due to their shallow habitat and low levels of dispersal between populations, and market surveys suggest that they are much less common now than in the past. They are also threatened by the degradation of their coral reef habitat by human development, and by destructive fishing practices such as dynamiting or poisoning. As a result, the IUCN Red List has this species categorized as Endangered. Off Australia, the only threat to this species is a very low level of bycatch in prawn trawls, and there it has been assessed as of Least Concern.
| Biology and health sciences | Sharks | Animals |
467047 | https://en.wikipedia.org/wiki/Thermal%20energy | Thermal energy | The term "thermal energy" is often used ambiguously in physics and engineering. It can denote several different physical concepts, including:
Internal energy: The energy contained within a body of matter or radiation, excluding the potential energy of the whole system, and excluding the kinetic energy of the system moving as a whole.
Heat: Energy in transfer between a system and its surroundings by mechanisms other than thermodynamic work and transfer of matter.
The characteristic energy kBT associated with a single microscopic degree of freedom, where T denotes temperature and kB denotes the Boltzmann constant.
Mark Zemansky (1970) has argued that the term “thermal energy” is best avoided due to its ambiguity. He suggests using more precise terms such as “internal energy” and “heat” to avoid confusion. The term is, however, used in some textbooks.
Relation between heat and internal energy
In thermodynamics, heat is energy in transfer to or from a thermodynamic system by mechanisms other than thermodynamic work or transfer of matter, such as conduction, radiation, and friction. Heat refers to a quantity in transfer between systems, not to a property of any one system, or "contained" within it; on the other hand, internal energy and enthalpy are properties of a single system. Heat and work depend on the way in which an energy transfer occurs. In contrast, internal energy is a property of the state of a system and can thus be understood without knowing how the energy got there.
Macroscopic thermal energy
In addition to the microscopic kinetic energies of its molecules, the internal energy of a body includes chemical energy belonging to distinct molecules, and the global joint potential energy involved in the interactions between molecules and suchlike. Thermal energy may be viewed as contributing to internal energy or to enthalpy.
Chemical internal energy
The internal energy of a body can change in a process in which chemical potential energy is converted into non-chemical energy. In such a process, the thermodynamic system can change its internal energy by doing work on its surroundings, or by gaining or losing energy as heat. It is not quite lucid to merely say that "the converted chemical potential energy has simply become internal energy". It is, however, sometimes convenient to say that "the chemical potential energy has been converted into thermal energy". This is expressed in ordinary traditional language by talking of 'heat of reaction'.
Potential energy of internal interactions
In a body of material, especially in condensed matter, such as a liquid or a solid, in which the constituent particles, such as molecules or ions, interact strongly with one another, the energies of such interactions contribute strongly to the internal energy of the body. Still, they are not immediately apparent in the kinetic energies of molecules, as manifest in temperature. Such energies of interaction may be thought of as contributions to the global internal microscopic potential energies of the body.
Microscopic thermal energy
In a statistical mechanical account of an ideal gas, in which the molecules move independently between instantaneous collisions, the internal energy is just the sum total of the gas's independent particles' kinetic energies, and it is this kinetic motion that is the source and the effect of the transfer of heat across a system's boundary. For a gas that does not have particle interactions except for instantaneous collisions, the term "thermal energy" is effectively synonymous with "internal energy".
In many statistical physics texts, "thermal energy" refers to kBT, the product of the Boltzmann constant and the absolute temperature, also written as kT.
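As an illustrative aside (not from the source), the characteristic thermal energy kBT can be evaluated numerically; the temperature of 300 K used below and the function name are arbitrary choices for the example.

```python
# Illustrative sketch: evaluating the characteristic thermal energy k_B * T.
# The choice of T = 300 K (roughly room temperature) is an example, not from the text.

k_B = 1.380649e-23      # Boltzmann constant, J/K (exact SI value)
eV = 1.602176634e-19    # joules per electronvolt

def thermal_energy(T_kelvin: float) -> float:
    """Return k_B * T in joules for an absolute temperature in kelvin."""
    return k_B * T_kelvin

E = thermal_energy(300.0)
print(f"k_B*T at 300 K = {E:.3e} J = {E / eV * 1000:.1f} meV")
# Expected output: about 4.142e-21 J, i.e. roughly 25.9 meV
```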
Thermal current density
When there is no accompanying flow of matter, the term "thermal energy" is also applied to the energy carried by a heat flow.
| Physical sciences | Thermodynamics | Physics |
467147 | https://en.wikipedia.org/wiki/Radiative%20forcing | Radiative forcing | Radiative forcing (or climate forcing) is a concept used to quantify a change to the balance of energy flowing through a planetary atmosphere. Various factors contribute to this change in energy balance, such as concentrations of greenhouse gases and aerosols, and changes in surface albedo and solar irradiance. In more technical terms, it is defined as "the change in the net, downward minus upward, radiative flux (expressed in W/m2) due to a change in an external driver of climate change." These external drivers are distinguished from feedbacks and variability that are internal to the climate system, and that further influence the direction and magnitude of imbalance. Radiative forcing on Earth is meaningfully evaluated at the tropopause and at the top of the stratosphere. It is quantified in units of watts per square meter, and often summarized as an average over the total surface area of the globe.
A planet in radiative equilibrium with its parent star and the rest of space can be characterized by net zero radiative forcing and by a planetary equilibrium temperature.
Radiative forcing is not a quantity that can be measured directly by any single instrument. Rather, it is a scientific concept whose strength can be estimated from more fundamental physics principles. Scientists use measurements of changes in atmospheric parameters to calculate the radiative forcing.
The IPCC summarized the current scientific consensus about radiative forcing changes as follows: "Human-caused radiative forcing of 2.72 W/m2 in 2019 relative to 1750 has warmed the climate system. This warming is mainly due to increased GHG concentrations, partly reduced by cooling due to increased aerosol concentrations".
The atmospheric burden of greenhouse gases due to human activity has grown especially rapidly during the last several decades (since about year 1950). For carbon dioxide, the 50% increase (C/C0 = 1.5) realized as of year 2020 since 1750 corresponds to a cumulative radiative forcing change (ΔF) of +2.17 W/m2. Assuming no change in the emissions growth path, a doubling of concentrations (C/C0 = 2) within the next several decades would correspond to a cumulative radiative forcing change (ΔF) of +3.71 W/m2.
Radiative forcing can be a useful way to compare the growing warming influence of different anthropogenic greenhouse gases over time. The radiative forcing of long-lived and well-mixed greenhouse gases has been increasing in Earth's atmosphere since the industrial revolution. Carbon dioxide has the biggest impact on total forcing, while methane and chlorofluorocarbons (CFCs) play smaller roles as time goes on. The five major greenhouse gases account for about 96% of the direct radiative forcing by long-lived greenhouse gas increases since 1750. The remaining 4% is contributed by the 15 minor halogenated gases.
Definition and fundamentals
Radiative forcing is defined in the IPCC Sixth Assessment Report as follows: "The change in the net, downward minus upward, radiative flux (expressed in W/m2) due to a change in an external driver of climate change, such as a change in the concentration of carbon dioxide (CO2), the concentration of volcanic aerosols or the output of the Sun."
There are some different types of radiative forcing as defined in the literature:
Stratospherically adjusted radiative forcing: "when all tropospheric properties held fixed at their unperturbed values, and after allowing for stratospheric temperatures, if perturbed, to readjust to radiative-dynamical equilibrium."
Instantaneous radiative forcing: "if no change in stratospheric temperature is accounted for".
Effective radiative forcing: "once both stratospheric and tropospheric adjustments are accounted for".
The radiation balance of the Earth (i.e. the balance between absorbed and radiated energy) determines the average global temperature. This balance is also called Earth's energy balance. Changes to this balance occur due to factors such as the intensity of solar energy, reflectivity of clouds or gases, absorption by various greenhouse gases or surfaces and heat emission by various materials. Any such alteration is a radiative forcing, which along with its climate feedbacks, ultimately changes the balance. This happens continuously as sunlight hits the surface of Earth, clouds and aerosols form, the concentrations of atmospheric gases vary and seasons alter the groundcover.
Positive radiative forcing means Earth receives more incoming energy from sunlight than it radiates to space. This net gain of energy will cause global warming. Conversely, negative radiative forcing means that Earth loses more energy to space than it receives from the Sun, which produces cooling (global dimming).
History
Transport of energy and matter in the Earth-atmosphere system is governed by the principles of equilibrium thermodynamics and more generally non-equilibrium thermodynamics. During the first half of the 20th century, physicists developed a comprehensive description of radiative transfer that they began to apply to stellar and planetary atmospheres in radiative equilibrium. Studies of radiative-convective equilibrium (RCE) followed and matured through the 1960s and 1970s. RCE models began to account for more complex material flows within the energy balance, such as those from a water cycle, and thereby described observations better.
In another application of equilibrium models, a perturbation in the form of an externally imposed intervention can be used to estimate a change in state. The RCE work distilled this into a forcing-feedback framework for change, and produced climate sensitivity results agreeing with those from GCMs. This conceptual framework asserts that a homogeneous disturbance (effectively imposed onto the top-of-atmosphere energy balance) will be met by slower responses (correlated more or less with changes in a planet's surface temperature) that bring the system to a new equilibrium state. Radiative forcing was a term used to describe these disturbances and gained widespread traction in the literature by the 1980s.
Related metrics
The concept of radiative forcing has evolved from the initial proposal, nowadays named instantaneous radiative forcing (IRF), to other proposals that aim to better relate the radiative imbalance to global warming (global mean surface temperature). For example, researchers explained in 2003 how the adjusted troposphere and stratosphere forcing can be used in general circulation models.
The adjusted radiative forcing, in its different calculation methodologies, estimates the imbalance once stratospheric temperatures have been modified to achieve radiative equilibrium in the stratosphere (in the sense of zero radiative heating rates). This methodology does not estimate any adjustment or feedback that could be produced in the troposphere (in addition to stratospheric temperature adjustments); for that purpose, another definition, named effective radiative forcing, has been introduced. In general, ERF is the approach recommended in the CMIP6 radiative forcing analysis, although the stratospherically adjusted methodologies are still applied in cases where tropospheric adjustments and feedbacks are considered not critical, such as for the well-mixed greenhouse gases and ozone. A methodology named the radiative kernel approach allows climate feedbacks to be estimated within an offline calculation based on a linear approximation.
Uses
Climate change attribution
Radiative forcing is used to quantify the strengths of different natural and man-made drivers of Earth's energy imbalance over time. The detailed physical mechanisms by which these drivers cause the planet to warm or cool are varied. Radiative forcing allows the contribution of any one driver to be compared against others.
Another metric called effective radiative forcing or ERF removes the effect of rapid adjustments (so-called "fast feedbacks") within the atmosphere that are unrelated to longer term surface temperature responses. ERF means that climate change drivers can be placed onto a more level playing field to enable comparison of their effects and a more consistent view of how global surface temperature responds to various types of human forcing.
Climate sensitivity
Radiative forcing and climate feedbacks can be used together to estimate a subsequent change in steady-state (often denoted "equilibrium") surface temperature (ΔTs) via the equation:
ΔTs = λ ΔF,
where λ is commonly denoted the climate sensitivity parameter, usually with units K/(W/m2), and ΔF is the radiative forcing in W/m2. An estimate for λ is obtained from the inverse of the climate feedback parameter, which has units (W/m2)/K. An estimated value of λ ≈ 0.8 K/(W/m2) gives an increase in global temperature of about 1.6 K above the 1750 reference temperature due to the increase in CO2 over that time (278 to 405 ppm, for a forcing of 2.0 W/m2), and predicts a further warming of 1.4 K above present temperatures if the CO2 mixing ratio in the atmosphere were to become double its pre-industrial value. Both of these calculations assume no other forcings.
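The following minimal sketch (not part of the original article) simply evaluates ΔTs = λΔF with the values quoted in this section; the function name and rounding are illustrative choices.

```python
# Minimal sketch of the relation ΔTs = λ * ΔF, using the values quoted in this
# section: λ ≈ 0.8 K/(W/m²), a 1750–present CO2 forcing of 2.0 W/m², and a
# forcing of 3.71 W/m² for doubled CO2.

def delta_T_equilibrium(delta_F: float, lam: float = 0.8) -> float:
    """Steady-state surface temperature change (K) for a forcing delta_F (W/m²)."""
    return lam * delta_F

warming_to_date = delta_T_equilibrium(2.0)        # ≈ 1.6 K above the 1750 reference
warming_at_doubling = delta_T_equilibrium(3.71)   # ≈ 3.0 K above the 1750 reference
print(round(warming_to_date, 1), round(warming_at_doubling - warming_to_date, 1))
# 1.6 and 1.4 — the warming to date and the further warming at doubled CO2
```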
Historically, radiative forcing displays the best predictive capacity for specific types of forcing such as greenhouse gases. It is less effective for other anthropogenic influences like soot.
Calculations and measurements
Atmospheric observation
Earth's global radiation balance fluctuates as the planet rotates and orbits the Sun, and as global-scale thermal anomalies arise and dissipate within the terrestrial, oceanic and atmospheric systems (e.g. ENSO). Consequently, the planet's 'instantaneous radiative forcing' (IRF) is also dynamic and naturally fluctuates between states of overall warming and cooling. The combination of periodic and complex processes that give rise to these natural variations will typically revert over periods lasting as long as a few years to produce a net-zero average IRF. Such fluctuations also mask the longer-term (decade-long) forcing trends due to human activities, and thus make direct observation of such trends challenging.
Earth's radiation balance has been continuously monitored by NASA's Clouds and the Earth's Radiant Energy System (CERES) instruments since year 1998. Each scan of the globe provides an estimate of the total (all-sky) instantaneous radiation balance. This data record captures both the natural fluctuations and human influences on IRF; including changes in greenhouse gases, aerosols, land surface, etc. The record also includes the lagging radiative responses to the radiative imbalances; occurring mainly by way of Earth system feedbacks in temperature, surface albedo, atmospheric water vapor and clouds.
Researchers have used measurements from CERES, AIRS, CloudSat and other satellite-based instruments within NASA's Earth Observing System to parse out contributions by the natural fluctuations and system feedbacks. Removing these contributions within the multi-year data record allows observation of the anthropogenic trend in top-of-atmosphere (TOA) IRF. The data analysis has also been done in a way that is computationally efficient and independent of most related modelling methods and results. Radiative forcing was thus directly observed to have risen by +0.53 W m−2 (±0.11 W m−2) from years 2003 to 2018. About 20% of the increase was associated with a reduction in the atmospheric aerosol burden, and most of the remaining 80% was attributed to the rising burden of greenhouse gases.
A rising trend in the radiative imbalance due to increasing global CO2 has previously been observed by ground-based instruments. For example, such measurements have been separately gathered under clear-sky conditions at two Atmospheric Radiation Measurement (ARM) sites in Oklahoma and Alaska. Each direct observation found that the associated radiative (infrared) heating experienced by surface dwellers rose by +0.2 W m−2 (±0.07 W m−2) during the decade ending 2010. Because this result focuses only on longwave radiation and the most influential forcing gas (CO2), and because of buffering by atmospheric absorption, it is proportionally smaller than the TOA forcing.
Basic estimates
Radiative forcing can be evaluated for its dependence on different factors which are external to the climate system. Basic estimates summarized in the following sections have been derived (assembled) in accordance with first principles of the physics of matter and energy. Forcings (ΔF) are expressed as changes over the total surface of the planet and over a specified time interval. Estimates may be significant in the context of global climate forcing for times spanning decades or longer. Gas forcing estimates presented in the IPCC's AR6 report have been adjusted to include so-called "fast" feedbacks (positive or negative) which occur via atmospheric responses (i.e. effective radiative forcing).
Forcing due to changes in atmospheric gases
For a well-mixed greenhouse gas, radiative transfer codes that examine each spectral line for atmospheric conditions can be used to calculate the forcing ΔF as a function of a change in its concentration. These calculations may be simplified into an algebraic formulation that is specific to that gas.
Carbon dioxide
A simplified first-order approximation expression for carbon dioxide (CO2) is:
ΔF = 5.35 × ln((C0 + ΔC)/C0) W/m2,
where C0 is a reference concentration in parts per million (ppm) by volume and ΔC is the concentration change in ppm. For the purpose of some studies (e.g. climate sensitivity), C0 is taken as the concentration prior to substantial anthropogenic changes and has a value of 278 ppm as estimated for the year 1750.
The atmospheric burden of greenhouse gases due to human activity has grown especially rapidly during the last several decades (since about year 1950). For carbon dioxide, the 50% increase (C/C0 = 1.5) realized as of year 2020 since 1750 corresponds to a cumulative radiative forcing change (ΔF) of +2.17 W/m2. Assuming no change in the emissions growth path, a doubling of concentrations (C/C0 = 2) within the next several decades would correspond to a cumulative radiative forcing change (ΔF) of +3.71 W/m2.
The relationship between CO2 and radiative forcing is logarithmic at concentrations up to around eight times the current value. Constant concentration increases thus have a progressively smaller warming effect. However, the first-order approximation is inaccurate at higher concentrations, and there is no saturation in the absorption of infrared radiation by CO2. Various mechanisms behind the logarithmic scaling have been proposed, but the spectral distribution of carbon dioxide absorption appears to be essential, particularly a broadening of the relevant 15-μm band arising from a Fermi resonance in the molecule.
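As a hedged illustration, the simplified expression above can be evaluated for the concentration ratios mentioned in this article (a 50% increase and a doubling relative to the 278 ppm reference); the function name is an arbitrary choice.

```python
import math

# Sketch of the simplified first-order CO2 forcing expression ΔF = 5.35 * ln(C/C0) W/m²,
# reproducing the figures quoted in the text (C0 = 278 ppm, the 1750 reference).

def co2_forcing(C_ppm: float, C0_ppm: float = 278.0) -> float:
    """Radiative forcing in W/m² for a CO2 concentration C relative to reference C0."""
    return 5.35 * math.log(C_ppm / C0_ppm)

print(round(co2_forcing(1.5 * 278), 2))  # 2.17 W/m² for the ~50% increase as of 2020
print(round(co2_forcing(2.0 * 278), 2))  # 3.71 W/m² for a doubling of CO2
```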
Other trace gases
Somewhat different formulae apply for other trace greenhouse gases such as methane and nitrous oxide (square-root dependence) or CFCs (linear), with coefficients that may be found, for example, in the IPCC reports. A 2016 study suggests a significant revision to the IPCC formula for methane. Forcings by the most influential trace gases in Earth's atmosphere are included in the section describing recent growth trends, and in the IPCC list of greenhouse gases.
Water vapor
Water vapor is Earth's primary greenhouse gas currently responsible for about half of all atmospheric gas forcing. Its overall atmospheric concentration depends almost entirely on the average planetary temperature, and has the potential to increase by as much as 7% with every degree (°C) of temperature rise (see also: Clausius–Clapeyron relation). Thus over long time scales, water vapor behaves as a system feedback that amplifies the radiative forcing driven by the growth of carbon dioxide and other trace gases.
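A rough sketch, assuming the quoted ~7% per °C figure compounds per degree of warming; the warming values and the function name below are illustrative, not from the source.

```python
# Rough sketch of the water-vapor scaling quoted above: atmospheric water vapor
# can rise by up to ~7% per °C of warming (Clausius–Clapeyron relation).
# The warming values chosen below are arbitrary examples.

def vapor_increase_fraction(delta_T_celsius: float, rate_per_degC: float = 0.07) -> float:
    """Fractional increase in water vapor for a given warming, compounding per degree."""
    return (1.0 + rate_per_degC) ** delta_T_celsius - 1.0

for dT in (1.0, 2.0, 3.0):
    print(f"{dT:.0f} °C warming -> ~{vapor_increase_fraction(dT) * 100:.0f}% more water vapor")
# Roughly 7%, 14%, and 23%
```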
Forcing due to changes in solar irradiance
Variations in total solar irradiance (TSI)
The intensity of solar irradiance including all wavelengths is the Total Solar Irradiance (TSI) and on average is the solar constant. It is equal to about 1361 W m−2 at the distance of Earth's annual-mean orbital radius of one astronomical unit and as measured at the top of the atmosphere. Earth TSI varies with both solar activity and planetary orbital dynamics. Multiple satellite-based instruments including ERB, ACRIM 1-3, VIRGO, and TIM have continuously measured TSI with improving accuracy and precision since 1978.
Approximating Earth as a sphere, the cross-sectional area exposed to the Sun (πr², where r is Earth's radius) is equal to one quarter of the area of the planet's surface (4πr²). The globally and annually averaged amount of solar irradiance per square meter of Earth's atmospheric surface is therefore equal to one quarter of TSI, and has a nearly constant value of about 340 W m−2.
Earth follows an elliptical orbit around the Sun, so that the TSI received at any instant fluctuates between about 1321 W m−2 (at aphelion in early July) and 1412 W m−2 (at perihelion in early January), and thus by about ±3.4% over each year. This change in irradiance has minor influences on Earth's seasonal weather patterns and its climate zones, which primarily result from the annual cycling in Earth's relative tilt direction. Such repeating cycles contribute a net-zero forcing (by definition) in the context of decades-long climate changes.
Sunspot activity
Average annual TSI varies between about 1360 W m−2 and 1362 W m−2 (±0.05%) over the course of a typical 11-year sunspot activity cycle. Sunspot observations have been recorded since about year 1600 and show evidence of lengthier oscillations (Gleissberg cycle, Devries/Seuss cycle, etc.) which modulate the 11-year cycle (Schwabe cycle). Despite such complex behavior, the amplitude of the 11-year cycle has been the most prominent variation throughout this long-term observation record.
TSI variations associated with sunspots contribute a small but non-zero net forcing in the context of decadal climate changes. Some research suggests they may have partly influenced climate shifts during the Little Ice Age, along with concurrent changes in volcanic activity and deforestation. Since the late 20th century, average TSI has trended slightly lower along with a downward trend in sunspot activity.
Milankovitch shifts
Climate forcing caused by variations in solar irradiance has occurred during Milankovitch cycles, which span periods of about 40,000 to 100,000 years. Milankovitch cycles consist of long-duration cycles in Earth's orbital eccentricity (or ellipticity), cycles in its orbital obliquity (or axial tilt), and precession of its relative tilt direction. Among these, the 100,000-year cycle in eccentricity causes TSI to fluctuate by about ±0.2%. Currently, Earth's eccentricity is nearing its least elliptic (most circular) state, causing average annual TSI to decrease very slowly. Simulations indicate that Earth's orbital dynamics will remain stable, including these variations, for at least the next 10 million years.
Sun aging
The Sun has consumed about half its hydrogen fuel since forming approximately 4.5 billion years ago. TSI will continue to slowly increase during the aging process at a rate of about 1% each 100 million years. Such rate of change is far too small to be detectable within measurements and is insignificant on human timescales.
Total solar irradiance (TSI) forcing summary
The maximum fractional variations (Δτ) in Earth's solar irradiance during the last decade are summarized in the accompanying table. Each variation previously discussed contributes a forcing of:
ΔF = Δτ × (1 − R) × TSI / 4,
where R=0.30 is Earth's reflectivity. The radiative and climate forcings arising from changes in the Sun's insolation are expected to continue to be minor, notwithstanding some as-of-yet undiscovered solar physics.
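The reconstructed expression can be evaluated numerically as a sketch; the Δτ input below (0.05%, the stated amplitude of the 11-year sunspot cycle) and the function name are illustrative choices.

```python
# Sketch of the solar-irradiance forcing expression ΔF = Δτ * (1 - R) * TSI / 4,
# with TSI and R as given in the text. The Δτ value below (0.05%, the stated
# sunspot-cycle amplitude) is used only as an illustrative input.

TSI = 1361.0   # total solar irradiance, W/m²
R = 0.30       # Earth's reflectivity (albedo)

def solar_forcing(delta_tau: float) -> float:
    """Global-mean forcing (W/m²) for a fractional change delta_tau in TSI."""
    return delta_tau * (1.0 - R) * TSI / 4.0

print(f"{solar_forcing(0.0005):.2f} W/m²")  # ~0.12 W/m² for the 11-year cycle amplitude
```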
Forcing due to changes in albedo and aerosols
Variations in Earth's albedo
A fraction of incident solar radiation is reflected by clouds and aerosols, oceans and landforms, snow and ice, vegetation, and other natural and man-made surface features. The reflected fraction is known as Earth's bond albedo (R), is evaluated at the top of the atmosphere, and has an average annual global value of about 0.30 (30%). The overall fraction of solar power absorbed by Earth is then (1−R) or 0.70 (70%).
Atmospheric components contribute about three-quarters of Earth albedo, and clouds alone are responsible for half. The major roles of clouds and water vapor are linked with the majority presence of liquid water covering the planet's crust. Global patterns in cloud formation and circulation are highly complex, with couplings to ocean heat flows, and with jet streams assisting their rapid transport. Moreover, the albedos of Earth's northern and southern hemispheres have been observed to be essentially equal (within 0.2%). This is noteworthy since more than two-thirds of land and 85% of the human population are in the north.
Multiple satellite-based instruments including MODIS, VIIRS, and CERES have continuously monitored Earth's albedo since 1998. Landsat imagery, available since 1972, has also been used in some studies. Measurement accuracy has improved and results have converged in recent years, enabling more confident assessment of the recent decadal forcing influence of planetary albedo. Nevertheless, the existing data record is still too short to support longer-term predictions or to address other related questions.
Seasonal variations in planetary albedo can be understood as a set of system feedbacks that occur largely in response to the yearly cycling of Earth's relative tilt direction. Along with the atmospheric responses, most apparent to surface dwellers are the changes in vegetation, snow, and sea-ice coverage. Intra-annual variations of about ±0.02 (± 7%) around Earth's mean albedo have been observed throughout the course of a year, with maxima occurring twice per year near the time of each solar equinox. This repeating cycle contributes net-zero forcing in the context of decades-long climate changes.
Interannual variability
Regional albedos change from year to year due to shifts arising from natural processes, human actions, and system feedbacks. For example, human acts of deforestation typically raise Earth's reflectivity, while introducing water storage and irrigation to arid lands may lower it. Likewise, considering feedbacks, ice loss in arctic regions decreases albedo while expanding desertification at low to middle latitudes increases it.
During years 2000-2012, no overall trend in Earth's albedo was discernible within the 0.1% standard deviation of values measured by CERES. Along with the hemispherical equivalence, some researchers interpret the remarkably small interannual differences as evidence that planetary albedo may currently be constrained by the action of complex system feedbacks. Nevertheless, historical evidence also suggests that infrequent events such as major volcanic eruptions can significantly perturb the planetary albedo for several years or longer.
Albedo forcing summary
The measured fractional variations (Δα) in Earth's albedo during the first decade of the 21st century are summarized in the accompanying table. Similar to TSI, the radiative forcing due to a fractional change in planetary albedo (Δα) is:
ΔF = −Δα × R × TSI / 4.
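Mirroring the previous sketch, the albedo expression can be evaluated for a hypothetical fractional change; the Δα input (0.1%, of the order of the CERES interannual variability quoted above) is an assumption for illustration only.

```python
# Sketch of the albedo forcing expression ΔF = -Δα * R * TSI / 4 reconstructed above.
# The Δα value below (0.1%) is an illustrative input, not a measured forcing.

TSI = 1361.0   # total solar irradiance, W/m²
R = 0.30       # Earth's albedo

def albedo_forcing(delta_alpha: float) -> float:
    """Global-mean forcing (W/m²) for a fractional change delta_alpha in albedo."""
    return -delta_alpha * R * TSI / 4.0

print(f"{albedo_forcing(0.001):+.2f} W/m²")  # about -0.10 W/m² for a 0.1% albedo increase
```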
Satellite observations show that various Earth system feedbacks have stabilized planetary albedo despite recent natural and human-caused shifts. On longer timescales, it is more uncertain whether the net forcing which results from such external changes will remain minor.
Recent growth trends
The IPCC summarized the current scientific consensus about radiative forcing changes as follows: "Human-caused radiative forcing of 2.72 [1.96 to 3.48] W/m2 in 2019 relative to 1750 has warmed the climate system. This warming is mainly due to increased GHG concentrations, partly reduced by cooling due to increased aerosol concentrations".
Radiative forcing can be a useful way to compare the growing warming influence of different anthropogenic greenhouse gases over time.
The radiative forcing of long-lived and well-mixed greenhouse gases has been increasing in Earth's atmosphere since the industrial revolution. The table includes the direct forcing contributions from carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O); chlorofluorocarbons (CFCs) 12 and 11; and fifteen other halogenated gases. These data do not include the significant forcing contributions from shorter-lived and less-well-mixed gases or aerosols, including the indirect forcings from the decay of methane and some halogens. They also do not account for changes in land use or solar activity.
These data show that CO2 dominates the total forcing, with methane and chlorofluorocarbons (CFCs) becoming relatively smaller contributors to the total forcing over time. The five major greenhouse gases account for about 96% of the direct radiative forcing by long-lived greenhouse gas increases since 1750. The remaining 4% is contributed by the 15 minor halogenated gases.
It might be observed that the total forcing for year 2016, 3.027 W m−2, together with the commonly accepted value of climate sensitivity parameter λ, 0.8 K /(W m−2), results in an increase in global temperature of 2.4 K, much greater than the observed increase, about 1.2 K. Part of this difference is due to lag in the global temperature achieving steady state with the forcing. The remainder of the difference is due to negative aerosol forcing (compare climate effects of particulates), climate sensitivity being less than the commonly accepted value, or some combination thereof.
The table also includes an "Annual Greenhouse Gas Index" (AGGI), which is defined as the ratio of the total direct radiative forcing due to long-lived greenhouse gases in any year for which adequate global measurements exist to that which was present in 1990. 1990 was chosen because it is the baseline year for the Kyoto Protocol. This index is a measure of the inter-annual changes in conditions that affect carbon dioxide emission and uptake, methane and nitrous oxide sources and sinks, the decline in the atmospheric abundance of ozone-depleting chemicals related to the Montreal Protocol, and the increase in their substitutes (hydrogenated CFCs (HCFCs) and hydrofluorocarbons (HFCs)). Most of this increase is related to CO2. For 2013, the AGGI was 1.34 (representing an increase in total direct radiative forcing of 34% since 1990). The increase in CO2 forcing alone since 1990 was about 46%. The decline in CFCs considerably tempered the increase in net radiative forcing.
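As a sketch of the AGGI definition only; the forcing values in the example below are hypothetical placeholders chosen to reproduce the quoted 2013 index of 1.34, not NOAA measurements.

```python
# Sketch of the Annual Greenhouse Gas Index (AGGI) as defined above: the total
# direct radiative forcing from long-lived greenhouse gases in a given year,
# divided by the corresponding total for the 1990 baseline year.
# The numerical forcing values below are placeholders, not measured data.

def aggi(forcing_year: float, forcing_1990: float) -> float:
    """Annual Greenhouse Gas Index: forcing in the target year relative to 1990."""
    return forcing_year / forcing_1990

baseline_1990 = 2.2          # W/m², placeholder
example_2013 = 1.34 * 2.2    # W/m², placeholder consistent with an AGGI of 1.34
print(round(aggi(example_2013, baseline_1990), 2))  # 1.34
```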
An alternative table has been prepared for use in climate model intercomparisons conducted under the auspices of the IPCC; it includes all forcings, not just those of greenhouse gases.
| Physical sciences | Climate change | Earth science |
467164 | https://en.wikipedia.org/wiki/Bacteroidota | Bacteroidota | The phylum Bacteroidota (synonym Bacteroidetes) is composed of three large classes of Gram-negative, nonsporeforming, anaerobic or aerobic, and rod-shaped bacteria that are widely distributed in the environment, including in soil, sediments, and sea water, as well as in the guts and on the skin of animals.
Although some Bacteroides spp. can be opportunistic pathogens, many Bacteroidota are symbiotic species highly adapted to the gastrointestinal tract. Bacteroides are highly abundant in intestines, reaching up to 10^11 cells per gram of intestinal material. They perform metabolic conversions that are essential for the host, such as degradation of proteins or complex sugar polymers. Bacteroidota colonize the gastrointestinal tract as early as infancy, as non-digestible oligosaccharides in mother's milk support the growth of both Bacteroides and Bifidobacterium spp. Bacteroides spp. are selectively recognized by the immune system of the host through specific interactions.
History
Bacteroides fragilis was the first Bacteroides species isolated, in 1898, as a human pathogen linked to appendicitis among other clinical cases. By far, the species in the class Bacteroidia are the most well studied, including the genus Bacteroides (an abundant organism in the feces of warm-blooded animals including humans) and Porphyromonas, a group of organisms inhabiting the human oral cavity. The class Bacteroidia was formerly called Bacteroidetes; as it was until recently the only class in the phylum, the name was changed in Bergey's Manual of Systematic Bacteriology.
For a long time, it was thought that the majority of Gram-negative gastrointestinal tract bacteria belonged to the genus Bacteroides, but in recent years many species of Bacteroides have undergone reclassification. Based on current classification, the majority of the gastrointestinal Bacteroidota species belong to the families Bacteroidaceae, Prevotellaceae, Rikenellaceae, and Porphyromonadaceae.
This phylum is sometimes grouped with Chlorobiota, Fibrobacterota, Gemmatimonadota, Calditrichota, and marine group A to form the FCB group or superphylum. In the alternative classification system proposed by Cavalier-Smith, this taxon is instead a class in the phylum Sphingobacteria.
Medical and ecological role
In the gastrointestinal microbiota, Bacteroidota have a very broad metabolic potential and are regarded as one of the most stable parts of the gastrointestinal microflora. Reduced abundance of Bacteroidota is in some cases associated with obesity. The evidence for altered abundance of this bacterial group as a whole in patients with irritable bowel syndrome is conflicting, though its genus Bacteroides is likely enriched, and the group may be involved in type 1 and type 2 diabetes pathogenesis. Bacteroides spp., in contrast to Prevotella spp., were recently found to be enriched in the metagenomes of subjects with low gene richness, which was associated with adiposity, insulin resistance and dyslipidaemia as well as an inflammatory phenotype. Bacteroidota species that belong to the orders Flavobacteriales and Sphingobacteriales are typical soil bacteria and are only occasionally detected in the gastrointestinal tract, except for Capnocytophaga spp. and Sphingobacterium spp., which can be detected in the human oral cavity.
Bacteroidota are not limited to the gut microbiota; they colonize a variety of habitats on Earth. For example, Bacteroidota, together with "Pseudomonadota", "Bacillota", and "Actinomycetota", are also among the most abundant bacterial groups in the rhizosphere. They have been detected in soil samples from various locations, including cultivated fields, greenhouse soils and unexploited areas. Bacteroidota also inhabit freshwater lakes and rivers, as well as oceans. They are increasingly recognized as an important component of the bacterioplankton in marine environments, especially in pelagic oceans. The halophilic Bacteroidota genus Salinibacter inhabits hypersaline environments such as salt-saturated brines in hypersaline lakes. Salinibacter shares many properties with halophilic Archaea such as Halobacterium and Haloquadratum that inhabit the same environments. Phenotypically, Salinibacter is remarkably similar to Halobacterium and therefore for a long time remained unidentified.
Metabolism
Gastrointestinal Bacteroidota species produce succinic acid, acetic acid, and in some cases propionic acid, as the major end-products. Species belonging to the genera Alistipes, Bacteroides, Parabacteroides, Prevotella, Paraprevotella, Alloprevotella, Barnesiella, and Tannerella are saccharolytic, while species belonging to Odoribacter and Porphyromonas are predominantly asaccharolytic. Some Bacteroides spp. and Prevotella spp. can degrade complex plant polysaccharides such as starch, cellulose, xylans, and pectins. The Bacteroidota species also play an important role in protein metabolism through proteolytic activity attributed to cell-associated proteases. Some Bacteroides spp. have the potential to utilize urea as a nitrogen source. Other important functions of Bacteroides spp. include the deconjugation of bile acids and growth on mucus. Many members of the Bacteroidota genera (Flexibacter, Cytophaga, Sporocytophaga and relatives) are coloured yellow-orange to pink-red due to the presence of pigments of the flexirubin group. In some Bacteroidota strains, flexirubins may be present together with carotenoid pigments. Carotenoid pigments are usually found in marine and halophilic members of the group, whereas flexirubin pigments are more frequent in clinical, freshwater or soil-colonizing representatives.
Genomics
Comparative genomic analysis has led to the identification of 27 proteins which are present in most species of the phylum Bacteroidota. Of these, one protein is found in all sequenced Bacteroidota species, while two other proteins are found in all sequenced species with the exception of those from the genus Bacteroides. The absence of these two proteins in this genus is likely due to selective gene loss. Additionally, four proteins have been identified which are present in all Bacteroidota species except Cytophaga hutchinsonii; this is again likely due to selective gene loss. A further eight proteins have been identified which are present in all sequenced Bacteroidota genomes except Salinibacter ruber. The absence of these proteins may be due to selective gene loss, or because S. ruber branches very deeply, the genes for these proteins may have evolved after the divergence of S. ruber. A conserved signature indel has also been identified; this three-amino-acid deletion in the ClpB chaperone is present in all species of the Bacteroidota phylum except S. ruber. This deletion is also found in one Chlorobiota species and one archaeal species, which is likely due to horizontal gene transfer. These 27 proteins and the three-amino-acid deletion serve as molecular markers for the Bacteroidota.
Relatedness of Bacteroidota, Chlorobiota, and Fibrobacterota phyla
Species from the Bacteroidota and Chlorobiota phyla branch very closely together in phylogenetic trees, indicating a close relationship. Through the use of comparative genomic analysis, three proteins have been identified which are uniquely shared by virtually all members of the Bacteroidota and Chlorobiota phyla. The sharing of these three proteins is significant because other than them, no proteins from either the Bacteroidota or Chlorobiota phyla are shared by any other groups of bacteria. Several conserved signature indels have also been identified which are uniquely shared by members of the phyla. The presence of these molecular signatures supports their close relationship. Additionally, the phylum Fibrobacterota is indicated to be specifically related to these two phyla. A clade consisting of these three phyla is strongly supported by phylogenetic analyses based upon a number of different proteins. These phyla also branch in the same position based upon conserved signature indels in a number of important proteins. Lastly and most importantly, two conserved signature indels (in the RpoC protein and in serine hydroxymethyltransferase) and one signature protein, PG00081, have been identified that are uniquely shared by all of the species from these three phyla. All of these results provide compelling evidence that the species from these three phyla shared a common ancestor exclusive of all other bacteria, and it has been proposed that they should all be recognized as part of a single "FCB" superphylum.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature.
| Biology and health sciences | Gram-negative bacteria | Plants |
467198 | https://en.wikipedia.org/wiki/Gallium%20nitride | Gallium nitride | Gallium nitride (GaN) is a binary III/V direct bandgap semiconductor commonly used in blue light-emitting diodes since the 1990s. The compound is a very hard material that has a wurtzite crystal structure. Its wide band gap of 3.4 eV affords it special properties for applications in optoelectronics, high-power and high-frequency devices. For example, GaN is the substrate that makes violet (405 nm) laser diodes possible, without requiring nonlinear optical frequency doubling.
Its sensitivity to ionizing radiation is low (like other group III nitrides), making it a suitable material for solar cell arrays for satellites. Military and space applications could also benefit as devices have shown stability in high radiation environments.
Because GaN transistors can operate at much higher temperatures and work at much higher voltages than gallium arsenide (GaAs) transistors, they make ideal power amplifiers at microwave frequencies. In addition, GaN offers promising characteristics for THz devices. Due to its high power density and voltage breakdown limits, GaN is also emerging as a promising candidate for 5G cellular base station applications. Since the early 2020s, GaN power transistors have come into increasing use in power supplies in electronic equipment, converting AC mains electricity to low-voltage DC.
Physical properties
GaN is a very hard (Knoop hardness 14.21 GPa), mechanically stable wide-bandgap semiconductor material with high heat capacity and thermal conductivity. In its pure form it resists cracking and can be deposited in thin film on sapphire or silicon carbide, despite the mismatch in their lattice constants. GaN can be doped with silicon (Si) or with oxygen to n-type and with magnesium (Mg) to p-type. However, the Si and Mg atoms change the way the GaN crystals grow, introducing tensile stresses and making them brittle. Gallium nitride compounds also tend to have a high dislocation density, on the order of 10⁸ to 10¹⁰ defects per square centimeter.
The U.S. Army Research Laboratory (ARL) provided the first measurement of the high field electron velocity in GaN in 1999. Scientists at ARL experimentally obtained a peak steady-state velocity of , with a transit time of 2.5 picoseconds, attained at an electric field of 225 kV/cm. With this information, the electron mobility was calculated, thus providing data for the design of GaN devices.
Developments
One of the earliest syntheses of gallium nitride was at the George Herbert Jones Laboratory in 1932.
An early synthesis of gallium nitride was by Robert Juza and Harry Hahn in 1938.
GaN with a high crystalline quality can be obtained by depositing a buffer layer at low temperatures. Such high-quality GaN led to the discovery of p-type GaN, p–n junction blue/UV-LEDs and room-temperature stimulated emission (essential for laser action). This has led to the commercialization of high-performance blue LEDs and long-lifetime violet laser diodes, and to the development of nitride-based devices such as UV detectors and high-speed field-effect transistors.
LEDs
High-brightness GaN light-emitting diodes (LEDs) completed the range of primary colors, and made possible applications such as daylight-visible full-color LED displays, white LEDs and blue laser devices. The first GaN-based high-brightness LEDs used a thin film of GaN deposited via metalorganic vapour-phase epitaxy (MOVPE) on sapphire. Other substrates used are zinc oxide, with a lattice constant mismatch of only 2%, and silicon carbide (SiC). Group III nitride semiconductors are, in general, recognized as one of the most promising semiconductor families for fabricating optical devices in the visible short-wavelength and UV region.
GaN transistors and power ICs
The very high breakdown voltages, high electron mobility, and high saturation velocity of GaN have made it an ideal candidate for high-power and high-temperature microwave applications, as evidenced by its high Johnson's figure of merit. Potential markets for high-power/high-frequency devices based on GaN include microwave radio-frequency power amplifiers (e.g., those used in high-speed wireless data transmission) and high-voltage switching devices for power grids. A potential mass-market application for GaN-based RF transistors is as the microwave source for microwave ovens, replacing the magnetrons currently used. The large band gap means that the performance of GaN transistors is maintained up to higher temperatures (~400 °C) than silicon transistors (~150 °C) because it lessens the effects of thermal generation of charge carriers that are inherent to any semiconductor. The first gallium nitride metal semiconductor field-effect transistors (GaN MESFET) were experimentally demonstrated in 1993 and they are being actively developed.
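For context on the Johnson's figure of merit mentioned above, the sketch below combines breakdown field and saturation velocity using commonly cited approximate material parameters; the specific numbers are assumptions for illustration, not values taken from this article:

import math

def johnson_fom(breakdown_field, saturation_velocity):
    # Johnson figure of merit: (breakdown field × saturation velocity) / (2π)
    return breakdown_field * saturation_velocity / (2 * math.pi)

gan = johnson_fom(3.3e6, 2.5e7)  # GaN: roughly 3.3 MV/cm and 2.5e7 cm/s (assumed)
si = johnson_fom(3.0e5, 1.0e7)   # Si: roughly 0.3 MV/cm and 1.0e7 cm/s (assumed)
print(round(gan / si))           # GaN comes out a few tens of times higher than Si here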
In 2010, the first enhancement-mode GaN transistors became generally available. Only n-channel transistors were available. These devices were designed to replace power MOSFETs in applications where switching speed or power conversion efficiency is critical. These transistors are built by growing a thin layer of GaN on top of a standard silicon wafer, often referred to as GaN-on-Si by manufacturers. This allows the FETs to maintain costs similar to silicon power MOSFETs but with the superior electrical performance of GaN. Another seemingly viable solution for realizing enhancement-mode GaN-channel HFETs is to employ a lattice-matched quaternary AlInGaN layer of acceptably low spontaneous polarization mismatch to GaN.
GaN power ICs monolithically integrate a GaN FET, GaN-based drive circuitry and circuit protection into a single surface-mount device. Integration means that the gate-drive loop has essentially zero impedance, which further improves efficiency by virtually eliminating FET turn-off losses. Academic studies into creating low-voltage GaN power ICs began at the Hong Kong University of Science and Technology (HKUST) and the first devices were demonstrated in 2015. Commercial GaN power IC production began in 2018.
CMOS logic
In 2016 the first GaN CMOS logic using PMOS and NMOS transistors was reported with gate lengths of 0.5 μm (gate widths of the PMOS and NMOS transistors were 500 μm and 50 μm, respectively).
Applications
LEDs and lasers
GaN-based violet laser diodes are used to read Blu-ray Discs. The mixture of GaN with In (InGaN) or Al (AlGaN) with a band gap dependent on the ratio of In or Al to GaN allows the manufacture of light-emitting diodes (LEDs) with colors that can go from red to ultra-violet.
Transistors and power ICs
GaN transistors are suitable for high frequency, high voltage, high temperature and high-efficiency applications. GaN is efficient at transferring current, and this ultimately means that less energy is lost to heat.
GaN high-electron-mobility transistors (HEMT) have been offered commercially since 2006, and have found immediate use in various wireless infrastructure applications due to their high efficiency and high voltage operation. A second generation of devices with shorter gate lengths will address higher-frequency telecom and aerospace applications.
GaN-based metal–oxide–semiconductor field-effect transistors (MOSFETs) and metal–semiconductor field-effect transistors (MESFETs) also offer advantages, including lower loss in high power electronics, especially in automotive and electric car applications. Since 2008 these can be formed on a silicon substrate. High-voltage (800 V) Schottky barrier diodes (SBDs) have also been made.
The higher efficiency and high power density of integrated GaN power ICs allows them to reduce the size, weight and component count of applications including mobile and laptop chargers, consumer electronics, computing equipment and electric vehicles.
GaN-based electronics (not pure GaN) have the potential to drastically cut energy consumption, not only in consumer applications but even for power transmission utilities.
Unlike silicon transistors that switch off due to power surges, GaN transistors are typically depletion mode devices (i.e. on / resistive when the gate-source voltage is zero). Several methods have been proposed to reach normally-off (or E-mode) operation, which is necessary for use in power electronics:
the implantation of fluorine ions under the gate (the negative charge of the F-ions favors the depletion of the channel)
the use of a MIS-type gate stack, with recess of the AlGaN
the integration of a cascaded pair constituted by a normally-on GaN transistor and a low voltage silicon MOSFET
the use of a p-type layer on top of the AlGaN/GaN heterojunction
Radars
GaN technology is also utilized in military electronics such as active electronically scanned array radars.
Thales Group introduced the Ground Master 400 radar in 2010 utilizing GaN technology. By 2021, Thales had put into operation more than 50,000 GaN transmitters on radar systems.
The U.S. Army funded Lockheed Martin to incorporate GaN active-device technology into the AN/TPQ-53 radar system to replace two medium-range radar systems, the AN/TPQ-36 and the AN/TPQ-37. The AN/TPQ-53 radar system was designed to detect, classify, track, and locate enemy indirect fire systems, as well as unmanned aerial systems. The AN/TPQ-53 radar system provided enhanced performance, greater mobility, increased reliability and supportability, lower life-cycle cost, and reduced crew size compared to the AN/TPQ-36 and the AN/TPQ-37 systems.
Lockheed Martin fielded other tactical operational radars with GaN technology in 2018, including the TPS-77 Multi Role Radar System deployed to Latvia and Romania. In 2019, Lockheed Martin's partner ELTA Systems Limited developed a GaN-based ELM-2084 Multi Mission Radar that was able to detect and track aircraft and ballistic targets, while providing fire control guidance for missile interception or air defense artillery.
On April 8, 2020, Saab flight tested its new GaN-designed AESA X-band radar in a JAS-39 Gripen fighter. Saab already offers products with GaN-based radars, like the Giraffe radar, Erieye, GlobalEye, and Arexis EW. Saab also delivers major subsystems, assemblies and software for the AN/TPS-80 (G/ATOR).
India's Defence Research and Development Organisation is developing Virupaakhsha radar for Sukhoi Su-30MKI based on GaN technology. The radar is a further development of Uttam AESA Radar for use on HAL Tejas which employs GaAs technology.
Nanoscale
GaN nanotubes and nanowires are proposed for applications in nanoscale electronics, optoelectronics and biochemical-sensing applications.
Spintronics potential
When doped with a suitable transition metal such as manganese, GaN is a promising spintronics material (magnetic semiconductors).
Synthesis
Bulk substrates
GaN crystals can be grown from a molten Na/Ga melt held under 100 atmospheres of pressure of N2 at 750 °C. As Ga will not react with N2 below 1000 °C, the powder must be made from something more reactive, usually in one of the following ways:
2 Ga + 2 NH3 → 2 GaN + 3 H2
Ga2O3 + 2 NH3 → 2 GaN + 3 H2O
Gallium nitride can also be synthesized by injecting ammonia gas into molten gallium at at normal atmospheric pressure.
Metal-organic vapour phase epitaxy
Blue, white and ultraviolet LEDs are grown on industrial scale by metalorganic vapour-phase epitaxy (MOVPE). The precursors are ammonia with either trimethylgallium or triethylgallium, the carrier gas being nitrogen or hydrogen. Growth temperature ranges between . Introduction of trimethylaluminium and/or trimethylindium is necessary for growing quantum wells and other kinds of heterostructures.
Molecular beam epitaxy
Commercially, GaN crystals can be grown using molecular beam epitaxy or MOVPE. This process can be further modified to reduce dislocation densities. First, an ion beam is applied to the growth surface in order to create nanoscale roughness. Then, the surface is polished. This process takes place in a vacuum. Polishing methods typically employ a liquid electrolyte and UV irradiation to enable mechanical removal of a thin oxide layer from the wafer. More recent methods have been developed that utilize solid-state polymer electrolytes that are solvent-free and require no radiation before polishing.
Safety
GaN dust is an irritant to skin, eyes and lungs. The environment, health and safety aspects of gallium nitride sources (such as trimethylgallium and ammonia) and industrial hygiene monitoring studies of MOVPE sources have been reported in a 2004 review.
Bulk GaN is non-toxic and biocompatible. Therefore, it may be used in the electrodes and electronics of implants in living organisms.
| Physical sciences | Ceramic compounds | Chemistry |
467832 | https://en.wikipedia.org/wiki/Electrophile | Electrophile | In chemistry, an electrophile is a chemical species that forms bonds with nucleophiles by accepting an electron pair. Because electrophiles accept electrons, they are Lewis acids. Most electrophiles are positively charged, have an atom that carries a partial positive charge, or have an atom that does not have an octet of electrons.
Electrophiles mainly interact with nucleophiles through addition and substitution reactions. Frequently seen electrophiles in organic syntheses include cations such as H+ and NO+, polarized neutral molecules such as HCl, alkyl halides, acyl halides, and carbonyl compounds, polarizable neutral molecules such as Cl2 and Br2, oxidizing agents such as organic peracids, chemical species that do not satisfy the octet rule such as carbenes and radicals, and some Lewis acids such as BH3 and DIBAL.
Organic chemistry
Addition of halogens
These occur between alkenes and electrophiles, often halogens as in halogen addition reactions. Common reactions include use of bromine water to titrate against a sample to deduce the number of double bonds present. For example, ethene + bromine → 1,2-dibromoethane:
C2H4 + Br2 → BrCH2CH2Br
This takes the form of three main steps, shown below:
Forming of a π-complex
The electrophilic Br-Br molecule interacts with the electron-rich alkene molecule to form a π-complex 1.
Forming of a three-membered bromonium ion
The alkene acts as an electron donor and bromine as an electrophile. The three-membered bromonium ion 2, consisting of two carbon atoms and a bromine atom, forms with the release of Br−.
Attacking of bromide ion
The bromonium ion is opened by the attack of Br− from the back side. This yields the vicinal dibromide with an antiperiplanar configuration. When other nucleophiles such as water or alcohol are present, these may attack 2 to give an alcohol or an ether.
This process is called the AdE2 mechanism ("addition, electrophilic, second-order"). Iodine (I2), chlorine (Cl2), sulfenyl ion (RS+), mercury cation (Hg2+), and dichlorocarbene (:CCl2) also react through similar pathways. The direct conversion of 1 to 3 can occur when Br− is present in large excess in the reaction medium. A β-bromo carbenium ion intermediate may be predominant instead of 3 if the alkene has a cation-stabilizing substituent like a phenyl group. There is an example of the isolation of the bromonium ion 2.
Addition of hydrogen halides
Hydrogen halides such as hydrogen chloride (HCl) add to alkenes to give alkyl halides in hydrohalogenation. For example, the reaction of HCl with ethylene furnishes chloroethane. The reaction proceeds with a cation intermediate, differing from the above halogen addition. An example is shown below:
Proton (H+) adds (by working as an electrophile) to one of the carbon atoms on the alkene to form cation 1.
Chloride ion (Cl−) combines with the cation 1 to form the adducts 2 and 3.
The stereoselectivity of the product, that is, from which side Cl− will attack, depends on the type of alkene used and the conditions of the reaction. Which of the two carbon atoms is attacked by H+ is usually determined by Markovnikov's rule: H+ attacks the carbon atom that carries fewer substituents, so that the more stabilized carbocation (with the more stabilizing substituents) forms.
This is another example of an AdE2 mechanism. Hydrogen fluoride (HF) and hydrogen iodide (HI) react with alkenes in a similar manner, and Markovnikov-type products will be given. Hydrogen bromide (HBr) also takes this pathway, but sometimes a radical process competes and a mixture of isomers may form. Although introductory textbooks seldom mention this alternative, the AdE2 mechanism is generally competitive with the AdE3 mechanism (described in more detail for alkynes, below), in which transfer of the proton and nucleophilic addition occur in a concerted manner. The extent to which each pathway contributes depends on several factors, such as the nature of the solvent (e.g., polarity), nucleophilicity of the halide ion, stability of the carbocation, and steric effects. As brief examples, the formation of a sterically unencumbered, stabilized carbocation favors the AdE2 pathway, while a more nucleophilic bromide ion favors the AdE3 pathway to a greater extent compared to reactions involving the chloride ion.
In the case of dialkyl-substituted alkynes (e.g., 3-hexyne), the intermediate vinyl cation that would result from this process is highly unstable. In such cases, the simultaneous protonation (by HCl) and attack of the alkyne by the nucleophile (Cl−) is believed to take place. This mechanistic pathway is known by the Ingold label AdE3 ("addition, electrophilic, third-order"). Because the simultaneous collision of three chemical species in a reactive orientation is improbable, the termolecular transition state is believed to be reached when the nucleophile attacks a reversibly-formed weak association of the alkyne and HCl. Such a mechanism is consistent with the predominantly anti addition (>15:1 anti:syn for the example shown) of the hydrochlorination product and the termolecular rate law, Rate = k[alkyne][HCl]². In support of the proposed alkyne-HCl association, a T-shaped complex of an alkyne and HCl has been characterized crystallographically.
In contrast, phenylpropyne reacts by the AdE2ip ("addition, electrophilic, second-order, ion pair") mechanism to give predominantly the syn product (~10:1 syn:anti). In this case, the intermediate vinyl cation is formed by addition of HCl because it is resonance-stabilized by the phenyl group. Nevertheless, the lifetime of this high energy species is short, and the resulting vinyl cation-chloride anion ion pair immediately collapses, before the chloride ion has a chance to leave the solvent shell, to give the vinyl chloride. The proximity of the anion to the side of the vinyl cation where the proton was added is used to rationalize the observed predominance of syn addition.
Hydration
One of the more complex hydration reactions utilises sulfuric acid as a catalyst. This reaction occurs in a similar way to the addition reaction but has an extra step in which the OSO3H group is replaced by an OH group, forming an alcohol:
C2H4 + H2O → C2H5OH
As can be seen, the H2SO4 does take part in the overall reaction; however, it remains unchanged and so is classified as a catalyst.
This is the reaction in more detail:
The H–OSO3H molecule has a δ+ charge on the initial H atom. This is attracted to and reacts with the double bond in the same way as before.
The remaining (negatively charged) −OSO3H ion then attaches to the carbocation, forming ethyl hydrogensulphate (upper way on the above scheme).
When water (H2O) is added and the mixture heated, ethanol (C2H5OH) is produced. The "spare" hydrogen atom from the water goes into "replacing" the "lost" hydrogen and, thus, reproduces sulfuric acid. Another pathway, in which a water molecule combines directly with the intermediate carbocation (lower way), is also possible. This pathway becomes predominant when aqueous sulfuric acid is used.
Overall, this process adds a molecule of water to a molecule of ethene.
This is an important reaction in industry, as it produces ethanol, whose purposes include fuels and starting material for other chemicals.
Chiral derivatives
Many electrophiles are chiral and optically stable. Typically chiral electrophiles are also optically pure.
One such reagent is the fructose-derived organocatalyst used in the Shi epoxidation. The catalyst can accomplish highly enantioselective epoxidations of trans-disubstituted and trisubstituted alkenes. The Shi catalyst, a ketone, is oxidized by stoichiometric oxone to the active dioxirane form before proceeding in the catalytic cycle.
Oxaziridines such as chiral N-sulfonyloxaziridines effect enantioselective ketone alpha oxidation en route to the AB-ring segments of various natural products, including γ-rhodomycionone and α-citromycinone.
Polymer-bound chiral selenium electrophiles effect asymmetric selenenylation reactions. The reagents are aryl selenenyl bromides, and they were first developed for solution phase chemistry and then modified for solid phase bead attachment via an aryloxy moiety. The solid-phase reagents were applied toward the selenenylation of various alkenes with good enantioselectivities. The products can be cleaved from the solid support using organotin hydride reducing agents. Solid-supported reagents offer advantages over solution phase chemistry due to the ease of workup and purification.
Electrophilicity scale
Several methods exist to rank electrophiles in order of reactivity; one of them, devised by Robert Parr, is the electrophilicity index ω, given as:

ω = χ² / (2η)

with χ the electronegativity and η the chemical hardness. This equation is related to the classical equation for electrical power:

P = V² / R

where R is the resistance (in ohms, Ω) and V is the voltage. In this sense the electrophilicity index is a kind of electrophilic power. Correlations have been found between the electrophilicity of various chemical compounds and reaction rates in biochemical systems and such phenomena as allergic contact dermatitis.
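As a minimal worked example of the index defined above, with purely illustrative (hypothetical) values of the electronegativity χ and chemical hardness η in electronvolts:

def electrophilicity_index(chi, eta):
    # Parr electrophilicity index: omega = chi**2 / (2 * eta)
    return chi ** 2 / (2 * eta)

chi, eta = 4.5, 3.0                                # assumed values in eV, not from the article
print(round(electrophilicity_index(chi, eta), 2))  # ≈ 3.38 eV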
An electrophilicity index also exists for free radicals. Strongly electrophilic radicals such as the halogens react with electron-rich reaction sites, and strongly nucleophilic radicals such as the 2-hydroxypropyl-2-yl and tert-butyl radical react with a preference for electron-poor reaction sites.
Superelectrophiles
Superelectrophiles are defined as cationic electrophilic reagents with greatly enhanced reactivities in the presence of superacids. These compounds were first described by George A. Olah. Superelectrophiles form as doubly electron-deficient species by protosolvation of a cationic electrophile. As observed by Olah, a mixture of acetic acid and boron trifluoride is able to remove a hydride ion from isobutane when combined with hydrofluoric acid via the formation of a superacid from BF3 and HF. The responsible reactive intermediate is the [CH3CO2H3]2+ dication. Likewise, methane can be nitrated to nitromethane with nitronium tetrafluoroborate (NO2BF4) only in the presence of a strong acid like fluorosulfuric acid, via the protonated nitronium dication.
In gitionic (gitonic) superelectrophiles, charged centers are separated by no more than one atom, for example, the protonitronium ion O=N+=O+—H (a protonated nitronium ion). And, in distonic superelectrophiles, they are separated by 2 or more atoms, for example, in the fluorination reagent F-TEDA-BF4.
| Physical sciences | Concepts | Chemistry |
467899 | https://en.wikipedia.org/wiki/Systems%20biology | Systems biology | Systems biology is the computational and mathematical analysis and modeling of complex biological systems. It is a biology-based interdisciplinary field of study that focuses on complex interactions within biological systems, using a holistic approach (holism instead of the more traditional reductionism) to biological research.
Particularly from the year 2000 onwards, the concept has been used widely in biology in a variety of contexts. The Human Genome Project is an example of applied systems thinking in biology which has led to new, collaborative ways of working on problems in the biological field of genetics. One of the aims of systems biology is to model and discover emergent properties, properties of cells, tissues and organisms functioning as a system whose theoretical description is only possible using techniques of systems biology. These typically involve metabolic networks or cell signaling networks.
Overview
Systems biology can be considered from a number of different aspects.
As a field of study, particularly, the study of the interactions between the components of biological systems, and how these interactions give rise to the function and behavior of that system (for example, the enzymes and metabolites in a metabolic pathway or the heart beats).
As a paradigm, systems biology is usually defined in antithesis to the so-called reductionist paradigm (biological organisation), although it is consistent with the scientific method. The distinction between the two paradigms is referred to in these quotations: "the reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge ... the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models." (Sauer et al.) "Systems biology ... is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. ... It means changing our philosophy, in the full sense of the term." (Denis Noble)
As a series of operational protocols used for performing research, namely a cycle composed of theory, analytic or computational modelling to propose specific testable hypotheses about a biological system, experimental validation, and then using the newly acquired quantitative description of cells or cell processes to refine the computational model or theory. Since the objective is a model of the interactions in a system, the experimental techniques that most suit systems biology are those that are system-wide and attempt to be as complete as possible. Therefore, transcriptomics, metabolomics, proteomics and high-throughput techniques are used to collect quantitative data for the construction and validation of models.
As the application of dynamical systems theory to molecular biology. Indeed, the focus on the dynamics of the studied systems is the main conceptual difference between systems biology and bioinformatics.
As a socioscientific phenomenon defined by the strategy of pursuing integration of complex data about the interactions in biological systems from diverse experimental sources using interdisciplinary tools and personnel.
History
Although the concept of a systems view of cellular function has been well understood since at least the 1930s, technological limitations made it difficult to make system-wide measurements. The advent of microarray technology in the 1990s opened up an entirely new vista for studying cells at the systems level. In 2000, the Institute for Systems Biology was established in Seattle in an effort to lure "computational" type people who it was felt were not attracted to the academic settings of the university. The institute did not have a clear definition of what the field actually was: roughly bringing together people from diverse fields to use computers to holistically study biology in new ways. A Department of Systems Biology at Harvard Medical School was launched in 2003. In 2006 it was predicted that the buzz generated by the "very fashionable" new concept would cause all the major universities to need a systems biology department, such that there would be careers available for graduates with a modicum of ability in computer programming and biology. In 2006 the National Science Foundation put forward a challenge to build a mathematical model of the whole cell. In 2012 the first whole-cell model of Mycoplasma genitalium was achieved by the Covert Laboratory at Stanford University. The whole-cell model is able to predict viability of M. genitalium cells in response to genetic mutations.
An earlier precursor of systems biology, as a distinct discipline, may have been the work of systems theorist Mihajlo Mesarovic in 1966, with an international symposium at the Case Institute of Technology in Cleveland, Ohio, titled Systems Theory and Biology. Mesarovic predicted that perhaps in the future there would be such a thing as "systems biology". Other early precursors that focused on the view that biology should be analyzed as a system, rather than a simple collection of parts, were Metabolic Control Analysis, developed by Henrik Kacser and Jim Burns (and later thoroughly revised) and by Reinhart Heinrich and Tom Rapoport, and Biochemical Systems Theory, developed by Michael Savageau.
According to Robert Rosen in the 1960s, holistic biology had become passé by the early 20th century, as more empirical science dominated by molecular chemistry had become popular. Echoing him forty years later, in 2006, Kling writes that the success of molecular biology throughout the 20th century had suppressed holistic computational methods. By 2011 the National Institutes of Health had made grant money available to support over ten systems biology centers in the United States, but by 2012 Hunter writes that systems biology still has some way to go to achieve its full potential. Nonetheless, proponents hoped that it might one day prove more useful in the future.
An important milestone in the development of systems biology has become the international project Physiome.
Associated disciplines
According to the interpretation of systems biology as the analysis of large data sets using interdisciplinary tools, a typical application is metabolomics, which is the complete set of all the metabolic products, metabolites, in the system at the organism, cell, or tissue level.
Items that may be held in a computer database include: phenomics, organismal variation in phenotype as it changes during its life span; genomics, organismal deoxyribonucleic acid (DNA) sequence, including intra-organismal cell-specific variation (i.e., telomere length variation); epigenomics/epigenetics, organismal and corresponding cell-specific transcriptomic regulating factors not empirically coded in the genomic sequence (i.e., DNA methylation, histone acetylation and deacetylation, etc.); transcriptomics, organismal, tissue or whole-cell gene expression measurements by DNA microarrays or serial analysis of gene expression; interferomics, organismal, tissue, or cell-level transcript correcting factors (i.e., RNA interference); proteomics, organismal, tissue, or cell-level measurements of proteins and peptides via two-dimensional gel electrophoresis, mass spectrometry or multi-dimensional protein identification techniques (advanced HPLC systems coupled with mass spectrometry), with sub-disciplines including phosphoproteomics, glycoproteomics and other methods to detect chemically modified proteins; glycomics, organismal, tissue, or cell-level measurements of carbohydrates; and lipidomics, organismal, tissue, or cell-level measurements of lipids.
The molecular interactions within the cell are also studied; this is called interactomics. A discipline in this field of study is protein–protein interactions, although interactomics includes the interactions of other molecules. Other areas of study include neuroelectrodynamics, in which a computer's or a brain's computing function as a dynamic system is studied along with its (bio)physical mechanisms, and fluxomics, the measurement of the rates of metabolic reactions in a biological system (cell, tissue, or organism).
There are two main approaches to a systems biology problem: top-down and bottom-up. The top-down approach takes as much of the system into account as possible and relies largely on experimental results. The RNA-Seq technique is an example of an experimental top-down approach. Conversely, the bottom-up approach is used to create detailed models while also incorporating experimental data. An example of the bottom-up approach is the use of circuit models to describe a simple gene network, as sketched below.
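A minimal sketch of such a circuit-style, bottom-up model: a single gene that represses its own expression, written as two ordinary differential equations. All names and parameter values here are hypothetical and chosen only for illustration:

import numpy as np
from scipy.integrate import odeint

def autorepressor(y, t, k_tx=1.0, K=0.5, n=2, k_tl=2.0, d_m=0.2, d_p=0.1):
    # mRNA (m) and protein (p) for a gene repressed by its own protein product
    m, p = y
    dm = k_tx / (1 + (p / K) ** n) - d_m * m  # Hill-type repression of transcription
    dp = k_tl * m - d_p * p                   # translation and protein decay
    return [dm, dp]

t = np.linspace(0, 100, 500)
m, p = odeint(autorepressor, [0.0, 0.0], t).T
print(round(p[-1], 2))  # steady-state protein level for these illustrative parameters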
Various technologies are utilized to capture dynamic changes in mRNA, proteins, and post-translational modifications. Mechanobiology concerns forces and physical properties at all scales and their interplay with other regulatory mechanisms; biosemiotics is the analysis of the system of sign relations of an organism or other biosystems; physiomics is the systematic study of the physiome in biology.
Cancer systems biology is an example of the systems biology approach, which can be distinguished by the specific object of study (tumorigenesis and treatment of cancer). It works with the specific data (patient samples, high-throughput data with particular attention to characterizing cancer genome in patient tumour samples) and tools (immortalized cancer cell lines, mouse models of tumorigenesis, xenograft models, high-throughput sequencing methods, siRNA-based gene knocking down high-throughput screenings, computational modeling of the consequences of somatic mutations and genome instability). The long-term objective of the systems biology of cancer is ability to better diagnose cancer, classify it and better predict the outcome of a suggested treatment, which is a basis for personalized cancer medicine and virtual cancer patient in more distant prospective. Significant efforts in computational systems biology of cancer have been made in creating realistic multi-scale in silico models of various tumours.
The systems biology approach often involves the development of mechanistic models, such as the reconstruction of dynamic systems from the quantitative properties of their elementary building blocks. For instance, a cellular network can be modelled mathematically using methods coming from chemical kinetics and control theory. Due to the large number of parameters, variables and constraints in cellular networks, numerical and computational techniques are often used (e.g., flux balance analysis).
Bioinformatics and data analysis
Other aspects of computer science, informatics, and statistics are also used in systems biology. These include new forms of computational models, such as the use of process calculi to model biological processes (notable approaches include stochastic π-calculus, BioAmbients, Beta Binders, BioPEPA, and Brane calculus) and constraint-based modeling; integration of information from the literature, using techniques of information extraction and text mining; development of online databases and repositories for sharing data and models, approaches to database integration and software interoperability via loose coupling of software, websites and databases, or commercial suites; network-based approaches for analyzing high dimensional genomic data sets. For example, weighted correlation network analysis is often used for identifying clusters (referred to as modules), modeling the relationship between clusters, calculating fuzzy measures of cluster (module) membership, identifying intramodular hubs, and for studying cluster preservation in other data sets; pathway-based methods for omics data analysis, e.g. approaches to identify and score pathways with differential activity of their gene, protein, or metabolite members. Much of the analysis of genomic data sets also includes identifying correlations. Additionally, as much of the information comes from different fields, the development of syntactically and semantically sound ways of representing biological models is needed.
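A minimal sketch of the weighted correlation network idea mentioned above: gene-gene correlations are raised to a soft-thresholding power to form a weighted adjacency matrix, from which modules and hub genes can then be sought. The data and the power β below are synthetic assumptions, not results from any study:

import numpy as np

rng = np.random.default_rng(0)
expression = rng.normal(size=(30, 200))              # 30 samples × 200 genes (synthetic data)
correlation = np.corrcoef(expression, rowvar=False)  # gene-by-gene correlation matrix
beta = 6                                             # soft-thresholding power (assumed)
adjacency = np.abs(correlation) ** beta              # weighted network adjacency
connectivity = adjacency.sum(axis=0) - 1             # per-gene connectivity; hubs score highest
print(round(connectivity.max(), 2))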
Creating biological models
Researchers begin by choosing a biological pathway and diagramming all of the protein, gene, and/or metabolic pathways. After determining all of the interactions, mass action kinetics or enzyme kinetic rate laws are used to describe the speed of the reactions in the system. Using mass-conservation, the differential equations for the biological system can be constructed. Experiments or parameter fitting can be done to determine the parameter values to use in the differential equations. These parameter values will be the various kinetic constants required to fully describe the model. This model determines the behavior of species in biological systems and brings new insight to the specific activities of systems. Sometimes it is not possible to gather all reaction rates of a system. Unknown reaction rates are determined by simulating the model of known parameters and target behavior which provides possible parameter values.
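A minimal sketch of that workflow for a toy reaction A + B → C, using mass-action kinetics; the rate constant and initial concentrations are hypothetical:

import numpy as np
from scipy.integrate import odeint

def mass_action(y, t, k=0.5):
    # d[A]/dt = d[B]/dt = -k[A][B]; d[C]/dt = +k[A][B], so total mass is conserved
    a, b, c = y
    rate = k * a * b
    return [-rate, -rate, rate]

t = np.linspace(0, 20, 200)
a, b, c = odeint(mass_action, [1.0, 0.8, 0.0], t).T
print(round(c[-1], 3))  # concentration of product C at the end of the simulation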
The use of constraint-based reconstruction and analysis (COBRA) methods has become popular among systems biologists to simulate and predict the metabolic phenotypes, using genome-scale models. One of the methods is the flux balance analysis (FBA) approach, by which one can study the biochemical networks and analyze the flow of metabolites through a particular metabolic network, by optimizing the objective function of interest (e.g. maximizing biomass production to predict growth).
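A minimal sketch of flux balance analysis on a toy three-reaction network, posed as the linear program described above (steady-state mass balance S·v = 0, bounded fluxes, and a biomass-like objective to maximize). The stoichiometric matrix, bounds, and objective are hypothetical:

import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (v1) produces A; v2 converts A to B; v3 drains B into "biomass"
S = np.array([
    [1, -1,  0],   # metabolite A: made by v1, consumed by v2
    [0,  1, -1],   # metabolite B: made by v2, consumed by v3
])
bounds = [(0, 10), (0, 10), (0, 10)]  # assumed flux bounds
objective = [0, 0, -1]                # maximize v3 by minimizing its negative

result = linprog(objective, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(result.x)  # optimal flux distribution; here all three fluxes hit the upper bound of 10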
| Biology and health sciences | Biology basics | Biology |
468452 | https://en.wikipedia.org/wiki/Grenadiers%20%28fish%29 | Grenadiers (fish) | Grenadiers or rattails are generally large, brown to black gadiform marine fish of the subfamily Macrourinae, the largest subfamily of the family Macrouridae. Found at great depths from the Arctic to Antarctic, members of this subfamily are amongst the most abundant of the deep-sea fish.
The macrourins form a large and diverse subfamily with 28 extant genera recognized (well over half of the total species are contained in just three genera, Coelorinchus, Coryphaenoides, and Nezumia). They range in length from about in Hymenogadus gracilis to in Albatrossia pectoralis. Several attempts have been made to establish a commercial fishery for the most common larger species, such as the giant grenadier, but the fish is considered unpalatable, and attempts thus far have proven unsuccessful. The subfamily as a whole may represent up to 15% of the deep-sea fish population.
Rattails, characterized by large heads with large mouths and eyes, have slender bodies that taper to very thin caudal peduncles or tails (except for one species, which lacks a caudal fin): this rat-like tail explains the common name "rattail", and the names of the subfamily and family are derived from the Greek makros meaning "big" and oura meaning "tail". The first dorsal fin is small, tall and pointed (and may have rays modified into spines); the second dorsal fin runs along the rest of the back and connects to the tail and the large anal fin. The scales are small.
As with many deep-living fish, the lateral line system in grenadiers is well-developed; it is further aided by numerous chemoreceptors located on the head and lips and chemosensory barbels underneath the chin. Benthic species have swim bladders with unique muscles attached to them. The animals are thought to use these muscles to "strum" their bladders and produce sound, possibly playing a role in courtship and mate location. Light-producing organs, photophores, are present in some species; they are located in the middle of the abdomen, just before the anus and underneath the skin.
Grenadiers have been recorded from depths of about , and are among the most common benthic fish of the deep (however, two genera are known to prefer the midwater). They may be solitary or may form large schools, as with the roundnose grenadiers. The benthic species are attracted to structural oases, such as hydrothermal vents, cold seeps, and shipwrecks. They are thought to be generalists, feeding on smaller fish, pelagic crustaceans, such as shrimp, amphipods, cumaceans, and less often cephalopods and lanternfish. As well as being important apex predators in the benthic habitat, some species are also notable as scavengers.
As few rattail larvae have been recovered, little is known of their life histories. They are known to produce a large number (over 100,000) of tiny ( in diameter) eggs made buoyant by lipid droplets. The eggs are presumed to float up to the thermocline (the interface between warmer surface waters and cold, deeper waters) where they develop. The juveniles remain in shallower waters, gradually migrating to greater depths with age.
Spawning may or may not be tied to the seasons, depending on the species. At least one species, Coryphaenoides armatus, is thought to be semelparous; that is, the adults die after spawning. Nonsemelparous species may live to 56 years or more. The macrourins, in general, are thought to have low resilience; commercially exploited species may be overfished and this could soon lead to a collapse of their fisheries.
Genera
Currently 28 extant genera in this subfamily are recognized:
Albatrossia Jordan & Gilbert, 1898
Asthenomacrurus Sazonov & Shcherbachev, 1982
Cetonurichthys Sazonov & Shcherbachev, 1982
Cetonurus Günther, 1887
Coelorinchus Giorna, 1809
Coryphaenoides Gunnerus, 1765
Cynomacrurus Dollo, 1909
Echinomacrurus Roule, 1916
Haplomacrourus Trunov, 1980
Hymenocephalus Giglioli, 1884
Hymenogadus Gilbert & Hubbs, 1920
Kumba Marshall, 1973
Kuronezumia Iwamoto, 1974
Lepidorhynchus Richardson, 1846
Lucigadus Gilbert & Hubbs, 1920
Macrosmia Merrett, Sazonov & Shcherbachev, 1983
Macrourus Bloch, 1786
Malacocephalus Günther, 1862
Mataeocephalus Berg, 1898
Mesovagus Nakayama & Endo, 2016
Nezumia Jordan, 1904
Odontomacrurus Norman, 1939
Paracetonurus Marshall, 1973
Pseudocetonurus Sazonov & Shcherbachev, 1982
Pseudonezumia Okamura, 1970
Sphagemacrurus Fowler, 1925
Spicomacrurus Okamura, 1970
Trachonurus Günther, 1887
Ventrifossa Gilbert & Hubbs, 1920
| Biology and health sciences | Acanthomorpha | Animals |
1856394 | https://en.wikipedia.org/wiki/Helicoprion | Helicoprion | Helicoprion is a genus of extinct shark-like eugeneodont fish. Almost all fossil specimens are of spirally arranged clusters of the individuals' teeth, called "tooth whorls", which in life were embedded in the lower jaw. As with most extinct cartilaginous fish, the skeleton is mostly unknown. Fossils of Helicoprion are known from a 20 million-year timespan during the Permian period from the Artinskian stage of the Cisuralian (Early Permian) to the Roadian stage of the Guadalupian (Middle Permian). The closest living relatives of Helicoprion (and other eugeneodonts) are the chimaeras, though their relationship is very distant. The unusual tooth arrangement is thought to have been an adaption for feeding on soft-bodied prey, and may have functioned as a deshelling mechanism for hard-bodied cephalopods such as nautiloids and ammonoids. In 2013, systematic revision of Helicoprion via morphometric analysis of the tooth whorls found only H. davisii, H. bessonowi and H. ergassaminon to be valid, with some of the larger tooth whorls being outliers.
Fossils of Helicoprion have been found worldwide, as the genus is known from Russia, Western Australia, China, Kazakhstan, Japan, Laos, Norway, Canada, Mexico, and the United States (Idaho, Nevada, Wyoming, Texas, Utah, and California). More than 50% of the fossils referred to Helicoprion are H. davisii specimens from the Phosphoria Formation of Idaho. An additional 25% of fossils are found in the Ural Mountains of Russia, belonging to the species H. bessonowi.
Description
Like other chondrichthyan fish, Helicoprion and other eugeneodonts had skeletons made of cartilage. As a result, the entire body disintegrated once it began to decay, unless preserved by exceptional circumstances. This can make drawing precise conclusions on the full body appearance of Helicoprion difficult, but the body shape can be estimated from postcranial remains known from a few eugeneodonts. Eugeneodonts with preserved postcrania include the Pennsylvanian to Triassic-age caseodontoids Caseodus, Fadenia, and Romerodus.
These taxa have a fusiform (streamlined, torpedo-shaped) body plan, with triangular pectoral fins. They have a single large and triangular dorsal fin without a fin spine, and a tall, forked caudal fin, which externally appears to be homocercal (with two equally sized lobes). This general body plan is shared by active, open-water predatory fish such as tuna, swordfish, and lamnid sharks. Eugeneodonts also lack pelvic and anal fins, and judging by Romerodus, they would have had broad keels along the side of the body up to the caudal fin. Fadenia had five well-exposed gill slits, possibly with a vestigial sixth gill. No evidence has been found of the specialized gill basket and fleshy operculum present in living chimaeroids. Based on the proportional size of caseodontoid tooth whorls, Lebedev suggested that Helicoprion individuals with tooth whorls reaching in diameter could reach in length, rivaling the size of modern basking sharks. The largest known Helicoprion tooth whorl, specimen IMNH 49382 representing an unknown species, reached in diameter and in crown height, which would have belonged to an individual over in length.
Tooth whorls
Almost all Helicoprion specimens are known solely from "tooth whorls", which consist of dozens of enameloid-covered teeth embedded within a common logarithmic spiral-shaped root. The youngest and first tooth at the center of the spiral, referred to as the "juvenile tooth arch", is hooked, but all other teeth are generally triangular in shape, laterally compressed and often serrated. Tooth size increases away from the center of the spiral (abaxial), with the largest teeth possibly exceeding in length. The lower part of the teeth form projections that are shingled below the crown of the previous tooth. The lowest portion of the root below the enameloid tooth projections is referred to as the "shaft", and lies on cartilage that encapsulates the previous revolutions of the whorl. In a complete tooth whorl, the outermost part of the spiral terminates with an extended root that lacks the middle and upper portions of the tooth crown.
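For readers unfamiliar with the geometry, a logarithmic spiral is one whose radius grows by a constant factor per unit angle; the short sketch below is purely illustrative, and its growth coefficient and number of revolutions are arbitrary assumptions rather than measurements of Helicoprion:

import numpy as np

theta = np.linspace(0, 3 * 2 * np.pi, 300)   # three whorl revolutions
r0, k = 1.0, 0.18                            # starting radius and growth coefficient (assumed)
r = r0 * np.exp(k * theta)                   # logarithmic spiral: r(theta) = r0 * e**(k*theta)
x, y = r * np.cos(theta), r * np.sin(theta)  # Cartesian coordinates of the whorl outline
print(round(r[-1] / r[0], 1))                # factor by which the radius grows over three turns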
Cartilaginous skull
Helicoprion specimens preserving more than the tooth whorls are very rare. The best-preserved specimen of Helicoprion is IMNH 37899 (also known as "Idaho 4"), referred to Helicoprion davisii. It was found in Idaho in 1950 and was originally described in 1966 by Svend Erik Bendix-Almgreen. A 2013 redescription by Tapanila and colleagues was accompanied by CT scanning, to reveal the cartilaginous remains in more detail. CT scanning revealed a nearly complete jaw apparatus, articulated in a closed position with three-dimensional preservation. Alongside the tooth whorl, the specimen preserves a palatoquadrate (forming the upper jaw), Meckel's cartilage (forming the lower jaw), and a robust labial cartilage bracing the tooth whorl. All of these structures are composed of prismatic calcified cartilage, as with modern chondrichthyans. The specimen did not preserve a chondrocranium, the cartilaginous structure which would have housed the brain and sensory organs. The jaws are extensively laterally compressed (narrow) compared to living chondrichthyans, though this may at least partially be an artifact of post mortem compression.
Helicoprion had an autodiastylic jaw suspension, meaning that the inner edge of the palatoquadrate was firmly attached (but not fused) to the chondrocranium at two separate points. These two attachment points are the dome-shaped ethmoid process at the front of the palatoquadrate, and the flange-like basal process at its upper rear corner. Autodiastylic jaws are common in early euchondrocephalans, though in modern animals they can only be found in embryonic chimaeriforms. Another well-preserved specimen, USNM 22577+494391 (the "Sweetwood specimen"), has demonstrated that the inner surface of the palatoquadrate was covered with numerous small (~2 mm wide) teeth. The palatoquadrate teeth were low and rounded, forming a "pavement" that scraped against the tooth whorl. When seen from behind, the palatoquadrate forms a paired jaw joint with the Meckel's cartilage. No evidence is seen for an articulation between the palatoquadrate and the hyomandibula.
Meckel's cartilage has an additional projection right before the joint with the palatoquadrate. This extra process, unique to Helicoprion, likely served to limit jaw closure to prevent the whorl from puncturing the chondrocranium. Another unique characteristic of Helicoprion is that the preserved labial cartilage forms a synchondrosis (fused joint) with the upper surface of Meckel's cartilage. This joint is facilitated by a long facet on the upper edge of Meckel's cartilage. The labial cartilage provides lateral support for the tooth whorl, widening near the root of each volution. By wedging into the palatoquadrate while the mouth is closed, the upper edge of the labial cartilage helps to spread out the forces used to limit the extent of the jaw closure. The rear portion of the labial cartilage has a cup-like form, protecting the developing root of the last and youngest volution.
Scales
Tooth-like chondrichthyan scales, specifically known as odontodes, have been found associated with H. bessonowi remains in Kazakhstan. They are broadly similar to scales of other eugeneodonts such as Sarcoprion and Ornithoprion. The scales have a cap-shaped base with a concave lower surface. The crowns are conical and covered with serrated, longitudinal ridges. The scales may be monodontode (with one crown per base) or polyodontode (with a bundle of multiple crowns resulting from the fusion of several odontodes into a larger structure). Compared to other eugeneodonts, the scales of Helicoprion are more strongly pointed.
Paleobiology
The unusual saw-like tooth whorl and the lack of wear on the teeth of Helicoprion imply a diet of soft-bodied prey, as hard-shelled prey would simply slip out of the mouth. Due to the narrow nature of the jaw, suction feeding is unlikely to have been effective, and Helicoprion is thought to have been a bite feeder. Biomechanical modelling by Ramsay et al. (2015) suggests that the teeth in the whorl had distinct functions depending on where they were in the spiral. The frontmost teeth served to snag and pull prey further into the mouth, while the middle teeth served to spear, and the hind teeth served to puncture and bring prey further into the throat, with the prey being squeezed between the whorl and the two halves of the palatoquadrate. The labial cartilage served to buttress and provide support to the whorl.
Helicoprion may have started with a large gape during initial prey capture, followed by smaller jaw opening and closing cycles to further transport prey into the mouth, as is done by modern bite-feeding sharks. While modern sharks shake their heads from side to side to facilitate sawing and cutting their prey, the teeth of Helicoprion would likely further cut the prey during the jaw opening, due to the arc-like path of the front teeth, similar to the slashing motion of a knife. Helicoprion likely used a series of rapid, forceful jaw closures to initially capture and push prey deeper into the oral cavity, followed by cyclic opening and closing of the jaw to facilitate sawing through prey.
Ramsay and colleagues further suggested that the whorl could have served as an effective mechanism for deshelling hard-shelled cephalopods such as ammonoids and nautiloids, which were abundant in Early Permian oceans. If a hard-shelled cephalopod was bitten head-on, the whorl could have served to pull the soft body out of the shell and into the mouth. During jaw closure, the palatoquadrates and tooth whorl combined to form a three-point system, equivalent to the set-up of an inverted three-point flexural test. This system was effective at trapping and holding soft parts to increase cutting efficiency and provide leverage against hard-shelled prey. At the three points of contact, the estimated bite force ranges between , with estimated bite stresses ranging from during initial prey contact. This large bite force may have allowed Helicoprion to expand its diet to vertebrates, as its jaw apparatus was more than capable of cutting through skeletal elements of unarmoured bony fish and other chondrichthyans.
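For context on the flexural-test comparison, the standard textbook relation for a three-point bend test is given below; this is general engineering background rather than the specific model used by Ramsay and colleagues, and the symbols (applied load F, support span L, beam width b and depth d) are generic placeholders, not values reported for Helicoprion.

    \sigma_{\max} = \frac{3FL}{2bd^{2}}

In the jaw analogy described above, the paired palatoquadrate tooth pavements act as the outer supports and the tooth whorl as the central loading point, which is why the arrangement concentrates force on prey held between them.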
Classification
Skull data from IMNH 37899 reveal several characteristics, such as an autodiastylic jaw suspension without an integrated hyomandibula, which confirm the placement of Helicoprion within the chondrichthyan subgroup Euchondrocephali. In contrast to their sister group Elasmobranchii (containing true sharks, rays, and kin), euchondrocephalans are primarily an extinct group. Living members of the Euchondrocephali are solely represented by the order Chimaeriformes in the subclass Holocephali. Chimaeriforms, commonly known as chimaeras or ratfish, are a small and specialized group of rare deep-sea cartilaginous fish. The relationship between Helicoprion and living chimaeras is very distant, but had been previously suspected based on details of its tooth anatomy.
More specifically, Helicoprion can be characterized as a member of Eugeneodontida, an order of shark-like euchondrocephalans that lived from the Devonian to Triassic periods. Eugeneodonts have simple, autodiastylic skulls with reduced marginal dentition and enlarged whorls of blade-like symphysial teeth on the midline of the jaw. Within the Eugeneodontida, Helicoprion is placed within the Edestoidea, a group of eugeneodonts with particularly tall and angled symphysial teeth. Members of the Edestoidea are divided into two families based on the style of the dentition. One family, the Edestidae, has relatively short tooth blades with roots that incline backwards.
The other family, which contains Helicoprion, is sometimes called Agassizodontidae, based on the genus Agassizodus. Other authors, though, prefer the family name Helicoprionidae, which was first used 70 years prior to Agassizodontidae. Helicoprionids (or agassizodontids) have large, cartilage-supported whorls with strongly arched shapes. Helicoprionids do not shed their teeth; instead, their tooth whorls continually add new teeth with bases inclined forwards at the top of the whorl. As most eugeneodonts are based on fragmentary tooth remains, concrete phylogenetic relationships within the group remain unclear.
History and species
Three species of Helicoprion are currently considered valid based on morphometric analyses, differing in the proportions of the upper, middle and lower sections of the tooth crown. These differences are only apparent in adult individuals past the 85th tooth of the spiral.
H. davisii and synonyms
The first specimen of Helicoprion to be described was WAMAG 9080, a 15-tooth fragment of a tooth whorl found along a tributary of the Gascoyne River in Western Australia. Henry Woodward described the fossil in 1886 and named it as the species Edestus davisii, commemorating the man who discovered it. Upon naming H. bessonowi in 1899, Alexander Karpinsky reassigned E. davisii to Helicoprion. In 1902, Charles R. Eastman referred H. davisii to his new genus Campyloprion, but this proposal was never widely accepted. Karpinsky's identification of Edestus davisii as a species of Helicoprion would eventually be upheld by Curt Teichert, who described several more complete tooth whorls from the Wandagee Formation of Western Australia in the late 1930s.
In 1907 and 1909, Oliver Perry Hay described a new genus and species of eugeneodont, Lissoprion ferrieri, from numerous fossils found in the phosphate-rich Phosphoria Formation on the border between Idaho and Wyoming. He also synonymized H. davisii with his new genus and species. However, Karpinsky separated the two species once more and transferred them to Helicoprion in 1911. H. ferrieri was initially differentiated using the metrics of tooth angle and height, but Tapanila and Pruitt (2013) considered these characteristics to be intraspecifically variable. As a result, they reclassified H. ferrieri as a junior synonym of H. davisii. Outside the Phosphoria Formation, H. davisii specimens have also been found in Mexico, Texas, and Canada (Nunavut and Alberta). H. davisii is characterized by its tall and widely spaced tooth whorl, with both features becoming more pronounced with age. The teeth also noticeably curve forward.
In a 1939 publication, Harry E. Wheeler described two new species of Helicoprion from California and Nevada. One of these, H. sierrensis, was described from a specimen (UNMMPC 1002) found in glacial moraine deposits in Eastern California, likely originating from the Goodhue Formation. Tapanila and Pruitt determined that the distinguishing shaft range of H. sierrensis was well within the variation found in H. davisii.
H. jingmenense was described in 2007 from a nearly complete tooth whorl (YIGM V 25147) with more than four volutions, preserved on a part and counterpart slab. It was discovered during the construction of a road passing through the Lower Permian Qixia Formation of Hubei Province, China. The specimen is very similar to H. ferrieri and H. bessonowi, though it differs from the former by having teeth with a wider cutting blade and a shorter compound root, and from the latter by having fewer than 39 teeth per volution. Tapanila and Pruitt argued that the specimen was partially obscured by the surrounding matrix, resulting in an underestimation of tooth height. Taking into account intraspecific variation, they synonymized it with H. davisii.
H. bessonowi and synonyms
Helicoprion bessonowi was first described in an 1899 monograph by Alexander Karpinsky. Although it was not the first Helicoprion species to be described, it was the first known from complete tooth whorls, demonstrating that Helicoprion was distinct from Edestus. As a result, H. bessonowi serves as the type species for Helicoprion. H. bessonowi is primarily based on a number of specimens from Artinskian-age limestone of the Divya Formation, in the Ural Mountains of Russia. H. bessonowi specimens are also known from the Tanukihara Formation of Japan and Artinskian-age strata in Kazakhstan. It can be differentiated from other Helicoprion species by a short and narrowly spaced tooth whorl, backward-directed tooth tips, obtusely angled tooth bases, and a consistently narrow whorl shaft.
One of two Helicoprion species described by Wheeler in 1939, H. nevadensis, is based on a single partial fossil found in a Nevadan mine by Elbert A. Stuart in 1929. This fossil, UNMMPC 1001, has been lost. It was reported as having originated from the Rochester Trachyte deposits, which Wheeler considered to be of Artinskian age. However, the Rochester Trachyte is in fact Triassic, and H. nevadensis likely did not originate in the Rochester Trachyte, thus rendering its true age unknown. Wheeler differentiated H. nevadensis from H. bessonowi by its pattern of whorl expansion and tooth height, but Tapanila and Pruitt showed in 2013 that these were consistent with H. bessonowi at the developmental stage that the specimen represents.
Based on isolated teeth and partial whorls found on the island of Spitsbergen, Norway, H. svalis was described by Stanisław Siedlecki in 1970. The type specimen, a very large whorl with specimen number PMO A-33961, was noted for its narrow teeth that apparently are not in contact with each other, but this seems to be a consequence of only the central part of the teeth being preserved, according to Tapanila and Pruitt. Since the whorl shaft is partially obscured, H. svalis cannot be definitely assigned to H. bessonowi, but it closely approaches the latter species in many aspects of its proportions. With a maximum volution height of , H. svalis is similar in size to the largest H. bessonowi, which has a maximum volution height of .
In 1999, the holotype of H. bessonowi was stolen, but it was recovered shortly afterwards with the aid of an anonymous fossil dealer.
H. ergassaminon
Like H. davisii, H. ergassaminon is known from the Phosphoria Formation of Idaho, though it is comparatively much rarer. H. ergassaminon was named and described in detail in a 1966 monograph by Svend Erik Bendix-Almgreen, and the holotype specimen ("Idaho 5") bears breakage and wear marks indicative of its usage in feeding. H. ergassaminon is also represented by several other specimens from the Phosphoria Formation, though none of these show wear marks. This species is roughly intermediate between the two contrasting forms represented by H. bessonowi and H. davisii, having tall but narrowly spaced teeth. Its teeth are also gently curved, with obtusely angled tooth bases. The type specimen of this species was formerly considered lost, but following its rediscovery in 2023, it has been returned to the collection of the Idaho Museum of Natural History.
Other material
Several large whorls are difficult to assign to any particular species group, H. svalis among them. IMNH 14095, a specimen from Idaho, appears to be similar to H. bessonowi, but it has unique flange-like edges on the apices of its teeth. IMNH 49382, also from Idaho, has the largest known whorl diameter at for the outermost volution (the only one preserved), but it is incompletely preserved and still partially buried. H. mexicanus, named by F.K.G. Müllerreid in 1945, was supposedly distinguished by its tooth ornamentation. Its holotype is currently missing, though its morphology was similar to that of IMNH 49382. In the absence of other material, it is currently a nomen dubium. Vladimir Obruchev described H. karpinskii from two teeth in 1953. He provided no distinguishing traits for this species, thus it must be regarded as a nomen nudum. Various other indeterminate Helicoprion specimens have been described from Canada, Japan, Laos, Idaho, Utah, Wyoming, and Nevada.
In 1922, Karpinsky named a new species of Helicoprion, H. ivanovi, from Gzhelian (latest Carboniferous) strata near Moscow. However, this species has subsequently been removed from Helicoprion and placed as a second species of the related eugeneodont Campyloprion. In 1924, Karpinsky separated H. clerci from Helicoprion and reclassified it under the new genus name Parahelicoprion, but Parahelicoprion has recently been suggested to represent a junior synonym of Helicoprion.
Historical reconstructions
Earliest reconstructions
Hypotheses for the placement and identity of Helicoprion's tooth whorls were controversial from the moment it was discovered. Woodward (1886), who referred the first known Helicoprion fossils to Edestus, discussed the various hypotheses concerning the nature of Edestus fossils.
Joseph Leidy, who originally described Edestus vorax, argued that they represented the jaws of "plagiostomous" (chondrichthyan) fish. William Davies agreed, specifically comparing it to the jaws of Janassa bituminosa, a Permian petalodont. On the other hand, J.S. Newberry suggested that the jaw-like fossils were defensive spines of a stingray-like fish. Woodward eventually settled on E.D. Cope's argument that they represented pectoral fin spines from fish similar to "Pelecopterus" (now known as Protosphyraena).
Karpinsky's 1899 monograph on Helicoprion noted that the bizarre nature of the tooth whorl made reaching precise conclusions on its function difficult. He tentatively suggested that it curled up from the upper jaw for defensive or offensive purposes. This was justified by comparison to the upper tooth blades of Edestus, which by 1899 had been re-evaluated as structures belonging to the jaw.
Debates over the identity of Helicoprion's tooth whorl were abundant in the years following Karpinsky's monograph. In 1900, the publication was reviewed by Charles Eastman, who appreciated the paper as a whole, but derided the sketch of the supposed life position of the whorl. Though Eastman admitted that the teeth of the whorl were very similar to those of other chondrichthyans, he still supported the idea that the whorl may have been a defensive structure embedded into the body of the animal, rather than the mouth. Shortly after his original monograph, Karpinsky published the argument that the whorl represented a curled, scute-covered tail akin to that of Hippocampus (seahorses). This proposal was immediately criticized by various researchers. E. Van den Broeck noted the fragility of the structure and argued that it was most well-protected as a paired feeding apparatus in the cheek of the animal. A.S. Woodward (unrelated to Henry Woodward) followed this suggestion with the hypothesis that each whorl represented a tooth battery from a gigantic shark. G. Simoens illustrated Karpinsky's various proposals and used histological data to adamantly argue that the whorls were toothed structures placed within the mouth. In 1911, Karpinsky illustrated the whorls as components of the dorsal fins. Reconstructions similar to those of Karpinsky (1899) were common in Russian publications as late as 2001.
Later reconstructions
By the mid-20th century, the tooth whorl was generally accepted as positioned in the lower jaw of the animal. Though this general position was suspected almost immediately in the aftermath of Karpinsky's monograph, it was not illustrated as such until the mid-1900s. Around that time, an artist known only as "F. John" depicted Helicoprion within a set of "Tiere der Urwelt" trading cards. Their reconstruction presented the tooth whorl as an external structure curling down from the lower jaw of the animal. Similar downward-curling reconstructions have also been created by modern paleontologists and artists such as John A. Long, Todd Marshall, and Karen Carr. The utility of the tooth whorl in this type of reconstruction was inferred based on sawfish, which incapacitate prey using lateral blows of their denticle-covered snouts.
Information on the position of eugeneodont tooth whorls was bolstered by two major publications in 1966. The first was Rainer Zangerl's description of a new Carboniferous eugeneodont, Ornithoprion. This taxon had a highly specialized skull with a small tooth whorl in a symphysial position, i.e. at the midline of the base of the lower jaw. Although skull material had also been reported for Sarcoprion and Fadenia at the time, Ornithoprion was the first eugeneodont to have its skull described in detail.
The other publication was Bendix-Almgreen's monograph on Helicoprion. His investigations reinterpreted the tooth whorl as a symphyseal structure wedged between the meckelian cartilages, which were separated by a gap at the front. A pair of cartilage loops, the symphyseal crista, seems to develop as a paired extension of the jaw symphysis where the meckelian cartilages meet at the back of the jaw. Each loop arches up before curling back inwards, tracing over the root of the tooth whorl. The largest and youngest teeth form at the symphysis near the back of the jaw. Over time, they are carried along the symphyseal crista, spiraling forwards, then downwards and inwards. The series of teeth accumulates into a spiraling structure, which is housed within the cavity defined by the symphyseal crista. The lateral and lower edges of the tooth whorl would have been obscured by skin during life. According to Bendix-Almgreen, the most likely use of the tooth whorl was as a tool for tearing and cutting prey against the upper jaw.
In the 1994 book Planet Ocean: A Story of Life, the Sea, and Dancing to the Fossil Record, author Brad Matsen and artist Ray Troll describe and depict a reconstruction based on the information gleaned by Bendix-Almgreen (1966). They proposed that no teeth were present in the animal's upper jaw, besides crushing teeth for the whorl to cut against. The two envisioned the living animal to have a long and very narrow skull, creating a long nose akin to the modern-day goblin shark. A 1996 textbook by Philippe Janvier presented a similar reconstruction, albeit with sharp teeth at the front of the upper jaw and rows of low crushing teeth in the back of the jaw.
In 2008, Mary Parrish created a new reconstruction for the renovated Ocean Hall at the Smithsonian Museum of Natural History. Designed under the direction of Robert Purdy, Victor Springer, and Matt Carrano, Parrish's reconstruction places the whorl deeper within the throat. This hypothesis was justified by the argument that the teeth supposedly had no wear marks, and the assumption that the whorl would have created a drag-inducing bulge on the chin of the animal if located in a symphysial position. They envisioned the tooth whorl as a structure derived from throat denticles and designed to assist swallowing. This would hypothetically negate the disadvantages the tooth whorl would produce if positioned further forward in the jaw. This reconstruction was criticized for the overly intricate and potentially ineffective design of such a structure, if solely used to assist swallowing.
Lebedev (2009) found more support for a reconstruction similar to those of Bendix-Almgreen (1966) and Troll (1994). A tooth whorl found in Kazakhstan preserved radial scratch marks; the whorl was also found near several wide, tuberculated teeth similar to those of the putative caseodontoid Campodus. Lebedev's reconstruction presented a cartilage-protected tooth whorl in a symphysial position at the front of the long lower jaw. When the mouth was closed, the tooth whorl would fit into a deep longitudinal pocket on the upper jaw. Both the pocket in the upper jaw and the edges of the lower jaw would have been lined with dense rows of Campodus-like teeth. This was similar to the situation reported in related helicoprionids such as Sarcoprion and Agassizodus. As for Helicoprion's ecology, it was compared to modern cetaceans such as Physeter (the sperm whale), Kogia (dwarf and pygmy sperm whales), Grampus (Risso's dolphin), and Ziphius (Cuvier's beaked whale). These fish- and squid-eating mammals have reduced dentition, often restricted to the tip of the lower jaw. Lebedev's reconstruction approximates modern views on Helicoprion's anatomy, though the hypothetical long jaw and Campodus-like lateral dentition has been superseded by CT data.
| Biology and health sciences | Prehistoric chondrichthyans | Animals |
1856816 | https://en.wikipedia.org/wiki/Laminaria | Laminaria | Laminaria is a genus of brown seaweed in the order Laminariales (kelp), comprising 31 species native to the north Atlantic and northern Pacific Oceans. This economically important genus is characterized by long, leathery laminae and relatively large size. Some species are called Devil's apron, due to their shape, or sea colander, due to the perforations present on the lamina. Others are referred to as tangle. Laminaria form a habitat for many fish and invertebrates.
The life cycle of Laminaria has heteromorphic alternation of generations, which differs from that of Fucus. At meiosis the male and female zoospores are produced separately, then germinate into male and female gametophytes. The female egg matures in the oogonium until the male sperm fertilizes it. The most apparent form of Laminaria is its sporophyte phase, a structure composed of the holdfast, the stipe, and the blades. While it spends its time predominantly in the sporophyte phase, it alternates between the sporophyte and its microscopic gametophyte phase.
Laminaria japonica (J. E. Areschoug – Japón) is now regarded as a synonym of Saccharina japonica and Laminaria saccharina is now classified as Saccharina latissima.
History
Laminaria arrived in China from Hokkaido, Japan in the late 1920s. Once in China, Laminaria was cultivated on a much larger industrial scale. The rocky shores at Dalian, on the northern coast of the Yellow Sea, along with its cold waters, provided excellent growing conditions for these species. Laminaria was harvested for food, and the 1949 harvest yielded 40.3 metric tons of dry weight. Laminaria needs cold water to survive and can only live above 36° N latitude.
In 1949, the Chinese started to commercially grow laminaria as a crop. This increased the production of dry weight to 6,200 metric tons. Farming laminaria remains a major industry in China. However, since the 1980s production has dropped due to new mariculture technology.
Farming practices
Laminaria is generally farmed using the floating raft method, in which young laminaria sporophytes are attached to submerged ropes. These ropes are then attached to floating rafts.
Ecology
Laminaria is found in colder ocean waters, such as arctic regions. It prefers regions with rocky shores, which give the laminaria a surface to attach to. Because of their height, Laminaria provide protection for creatures that the open ocean rarely offers. Invertebrates are among the organisms that live among the algae. Sea snails and other invertebrates feed on the blades (leaves) of the laminaria. Other organisms, such as sea urchins, feed on the holdfasts, which can kill the algae. Red sea urchins, found on the North American Pacific Coast, can decimate kelp, including Laminaria, if the urchins are not kept in check by sea otters. Species such as Coelopa pilipes feed and lay eggs on Laminaria when it is washed up on beaches.
Life cycle
Laminaria expresses a haplo-diplophasic life history, in which it alternates between a macroscopic thallic sporophyte structure, consisting of the holdfast, a stipe, and the blades, and a filamentous, microscopic gametophyte. The sporophyte structure of laminaria can grow to , which is large in comparison to other algae, but still smaller than the giant kelps such as Macrocystis and Nereocystis, which can grow up to . The gametophyte structure, on the other hand, is no more than a few millimeters in length. In contrast to the gametophyte phase, which consists of only one type of tissue, the more complex sporophyte phase is made up of different types of tissue. One of these tissues includes a sieve-like element which translocates photoassimilates. These structures are very similar to mesophyll cells found in higher plant leaves.
Uses
Medical
A laminaria stick may be used to slowly dilate the cervix to induce labor, or for surgical procedures including abortions or to facilitate the placement of an intrauterine device. The stick is made up of a bundle of dried and compressed laminaria that expands as water is absorbed.
Laminaria is a source of the relatively rare element iodine, which is commonly used to promote thyroid health.
Food
Various species of Laminaria have been used for food since ancient times wherever humans have encountered them. Typically, the prepared parts, usually the blade, are consumed either immediately after boiling in broth or water, or after drying. The greater proportion of commercial cultivation is for algin, iodine, and mannitol, which are used in a range of industrial applications. In South Korea it is processed into a sweetmeat known as laminaria jelly; in other countries it is also eaten fresh in salads or canned for preservation, delivery, and sale in other regions. Many countries produce and consume laminaria products, the largest being China.
Energy
Due to their ability to grow underwater and in salt water, algae are being investigated as a source of biofuel. Laminaria is one of the five macroalgal genera farmed for products such as food, chemicals and power; those five genera contribute 76% of the total tonnage of farmed macroalgae. Laminaria is less desirable as a renewable energy source due to its high ash content when burned: it has an ash content of 33%, while wood has about a 2% ash content. Algae also have a high water content, so much energy is required to dry them before they can be used properly.
More research is being done with anaerobic digestion, which is the most promising practice to extract energy from Laminaria. There are still barriers to overcome before moving forward with anaerobic digestion, such as its cost per kWh.
Metal absorption
The ability of laminaria, along with other brown algae, to absorb heavy metals is a current area of interest regarding their use to remove heavy metals from wastewater. Recent research has shown that the alginate of Laminaria has a mannuronic/guluronic acid residue ratio (M/G ratio) favorable for heavy metal absorption. The M/G ratio is the ratio between the D-mannuronate (M) and L-guluronate (G) residues in the alginate, a natural anionic polymer found in all brown algae. This alginate is able to form a gel containing carboxyl groups that can bind heavy metal cations such as , , and , thereby allowing these metals to be removed from wastewater.
Predator
Coelopa frigida and related flies from the genus Coelopa are known to feed, mate, and create habitats out of different species of Laminaria. This is of particular notice when the Laminaria is stranded on the beach and not when it is submerged under seawater. With increasing amounts of seaweed washing up on shores, there is an increasing recognition of Laminaria and their close pairing with Coelopa.
Species
Laminaria abyssalis A.B. Joly & E.C. Oliveira – South American Atlantic
Laminaria agardhii Kjellman – North American Atlantic
Laminaria appressirhiza J. E. Petrov & V. B. Vozzhinskaya
Laminaria brasiliensis A. B. Loly & E. C. Oliveira
Laminaria brongardiana Postels & Ruprecht
Laminaria bulbosa J. V. Lamouroux
Laminaria bullata Kjellman
Laminaria complanata (Setchell & N. L. Garder) Muenscher
Laminaria digitata (Hudson) J. V. Lamouroux
Laminaria ephemera Setchell – Pacific of North America: From Vancouver to California
Laminaria farlowii Setchell – Coast of the North American Pacific
Laminaria groenlandica – British Columbia
Laminaria hyperborea (Gunnerus) Foslie – Northeast Atlantic, Baltic Sea and North Sea.
Laminaria inclinatorhiza J. Petrov & V. Vozzhinskaya
Laminaria longipes Bory de Saint-Vincent, 1826
Laminaria multiplicata J. Petrov & M. Suchovejeva
Laminaria nigripes J. Agardh
Laminaria ochroleuca Bachelot de la Pylaie
Laminaria pallida Greville – South Africa, Indian Ocean, Canary Islands and Tristán da Cunha
Laminaria platymeris Bachelot de la Pylaie
Laminaria rodriguezii Barnet
Laminaria ruprechtii (Areschoug) Setchell
Laminaria sachalinensis (Miyabe) Miyabe
Laminaria setchellii P. C. Silva
Laminaria sinclairii (Harvey ex J. D. Hooker & Harvey) Farlow, Anderson & Eaton – North American Pacific coast
Laminaria solidungula J. Agardh
Laminaria yezoensis Miyabe
| Biology and health sciences | SAR supergroup | Plants |
1857163 | https://en.wikipedia.org/wiki/Granodiorite | Granodiorite | Granodiorite is a coarse-grained (phaneritic) intrusive igneous rock similar to granite, but containing more plagioclase feldspar than orthoclase feldspar.
The term banatite is sometimes used informally for various rocks ranging from granite to diorite, including granodiorite.
Composition
According to the QAPF diagram, granodiorite has greater than 20% quartz by volume, and between 65% and 90% of its feldspar is plagioclase. A greater proportion of plagioclase would designate the rock as tonalite.
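As an illustration only, the boundary values quoted in this paragraph can be written as a simple check. The Python sketch below encodes nothing beyond the 20% quartz threshold and the 65–90% plagioclase range stated above, so it is not a complete QAPF classifier; the function name and the wording of its return values are invented for the example.

    def qapf_check(quartz_pct, plagioclase_share_of_feldspar):
        """Apply only the two QAPF rules quoted in the text."""
        if quartz_pct <= 20:
            return "not granodiorite (quartz at or below 20% by volume)"
        if plagioclase_share_of_feldspar > 90:
            return "tonalite (plagioclase exceeds 90% of the feldspar)"
        if 65 <= plagioclase_share_of_feldspar <= 90:
            return "granodiorite"
        return "outside the ranges quoted here"

    # Example: 25% quartz, with plagioclase making up 75% of the feldspar.
    print(qapf_check(25, 75))  # -> granodiorite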
Granodiorite is felsic to intermediate in composition. It is the intrusive igneous equivalent of the extrusive igneous dacite. It contains a large amount of sodium (Na) and calcium (Ca) rich plagioclase, potassium feldspar, quartz, and minor amounts of muscovite mica as the lighter colored mineral components. Biotite and amphiboles often in the form of hornblende are more abundant in granodiorite than in granite, giving it a more distinct two-toned or overall darker appearance. Mica may be present in well-formed hexagonal crystals, and hornblende may appear as needle-like crystals. Minor amounts of oxide minerals such as magnetite, ilmenite, and ulvöspinel, as well as some sulfide minerals may also be present.
Geology
On average, the upper continental crust has the same composition as granodiorite.
Granodiorite is a plutonic igneous rock, formed by intrusion of silica-rich magma, which cools in batholiths or stocks below the Earth's surface. It is usually only exposed at the surface after uplift and erosion have occurred.
Etymology
The name comes from the two related rocks between which granodiorite is intermediate: granite and diorite. The gran- root comes from the Latin grānum ("grain"), from which the English word "grain" also derives. Diorite is named after the contrasting colors of the rock.
Banatite
Banatite is a term used informally for various rocks ranging from granite to diorite, but often granodiorite, that were intruded in the Late Cretaceous in the Banat and nearby regions of present-day Hungary and Serbia. The term is also used in Australia in connection with Gulaga / Mount Dromedary in New South Wales, where it is described as "a rock of intermediate composition between quartz diorite and quartz monzonite".
Occurrence
United States
Plymouth Rock is a glacial erratic boulder of granodiorite. The Sierra Nevada mountains contain large sections of granodiorite.
Egypt
Granodiorite was quarried at Mons Claudianus in the Red Sea Governorate in eastern Egypt from the 1st century AD to the mid-3rd century AD. Much of the quarried stone was transported to Rome for use in major projects such as the Pantheon and Hadrian's Villa. Additionally, granodiorite was used for the Rosetta Stone.
The extent of Egyptian granodiorite masonry is unclear. Egypt's 6000-year history makes determining the period of usage difficult as well. Perhaps like porphyry, it was ignored by the successive dynasties of Egypt and only heavily mined during Ptolemaic or Roman times. This is evidenced by the fact that most examples of granodiorite sculpture seem to have come from later dates. However, its presence in the Rosetta Stone implies that they had considerable experience with it and the fact that only newer artifacts are found may simply be because earlier pieces were lost.
Ireland
Granodiorite is quarried in the Newry area of County Armagh with the common name of 'Newry granite'.
Uses
Granodiorite is most often used as crushed stone for road building. It is also used as construction material, building facade, and paving, and as an ornamental stone. The Rosetta Stone is a stele made from granodiorite. The portico columns of the Pantheon in Rome are formed from single shafts of granodiorite, each 12 metres tall by 1.5 metres in diameter.
| Physical sciences | Igneous rocks | Earth science |
1859590 | https://en.wikipedia.org/wiki/Acaricide | Acaricide | Acaricides are pesticides that kill members of the arachnid subclass Acari, which includes ticks and mites.
Acaricides are used both in medicine and agriculture, although the desired selective toxicity differs between the two fields.
Terminology
More specific words are sometimes used, depending upon the targeted group:
"Ixodicides" are substances that kill ticks.
"Miticides" are substances that kill mites.
The term scabicide is narrower, and refers to agents specifically targeting Sarcoptes.
The term "arachnicide" is more general, and refers to agents that target arachnids. This term is used much more rarely, but occasionally appears in informal writing.
As a practical matter, mites are a paraphyletic grouping, and mites and ticks are usually treated as a single group.
Examples
Examples include:
Permethrin can be applied as a spray. The effects are not limited to mites: lice, cockroaches, fleas, mosquitos, and other insects will be affected.
Ivermectin can be prescribed by a medical doctor to rid humans of mite and lice infestations, and agricultural formulations are available for infested birds and rodents.
Antibiotic miticides
Carbamate miticides
Dienochlor miticides
Formamidine miticides
Oxalic acid is used by some beekeepers against the parasitic varroa mite.
Organophosphate miticides
Diatomaceous earth will also kill mites by disrupting their cuticles, which dries out the mites.
Dicofol, a compound structurally related to the insecticide DDT, is a miticide that is effective against the red spider mite Tetranychus urticae.
Lime sulfur is effective against sarcoptic mange. It is made by mixing hydrated lime, sulfur, and water, and boiling for about 1 hour. Hydrated lime can bond with about 1.7 times its weight of sulfur (quicklime can bond with as much as 2.2 times its weight of sulfur). The strongest concentrate is diluted 1:32 before saturating the skin (avoiding the eyes), applied at six-day intervals.
A variety of commercially available systemic and non-systemic miticides: abamectin, acequinocyl, bifenazate, chlorfenapyr, clofentezine, cyflumetofen, cypermethrin, dicofol, etoxazole, fenazaquin, fenpyroximate, hexythiazox, imidacloprid, propargite, pyridaben, spiromesifen, spirotetramat.
Acaricides are also being used in attempts to stop rhinoceros poaching. Holes are drilled into the horn of a sedated rhino and acaricide is pumped in and pressurized. Should the horn be consumed by humans as in traditional Chinese medicine, it is expected to cause nausea, stomachache, and diarrhea, or convulsions, depending on the quantity, but not fatalities. Signs posted at wildlife refuges that the rhinos therein have been treated are thus expected to deter poaching. The original idea grew out of research into using the horn as a reservoir for one-time tick treatments; the acaricide is selected to be safe for the rhino, oxpeckers, vultures, and other animals in the preserve's ecosystem.
| Technology | Pest and disease control | null |
1859694 | https://en.wikipedia.org/wiki/Bioindicator | Bioindicator | A bioindicator is any species (an indicator species) or group of species whose function, population, or status can reveal the qualitative status of the environment. The most common indicator species are animals. For example, copepods and other small water crustaceans that are present in many water bodies can be monitored for changes (biochemical, physiological, or behavioural) that may indicate a problem within their ecosystem. Bioindicators can tell us about the cumulative effects of different pollutants in the ecosystem and about how long a problem may have been present, which physical and chemical testing cannot.
A biological monitor or biomonitor is an organism that provides quantitative information on the quality of the environment around it. Therefore, a good biomonitor will indicate the presence of the pollutant and can also be used in an attempt to provide additional information about the amount and intensity of the exposure.
A biological indicator is also the name given to a process for assessing the sterility of an environment through the use of resistant microorganism strains (e.g. Bacillus or Geobacillus). Biological indicators can be described as the introduction of highly resistant microorganisms into a given environment before sterilization; tests are then conducted to measure the effectiveness of the sterilization process. As biological indicators use highly resistant microorganisms, any sterilization process that renders them inactive will also have killed off more common, weaker pathogens.
Overview
A bioindicator is an organism or biological response that reveals the presence of pollutants by the occurrence of typical symptoms or measurable responses and is, therefore, more qualitative.
These organisms (or communities of organisms) can be used to deliver information on alterations in the environment or the quantity of environmental pollutants by changing in one of the following ways: physiologically, chemically or behaviourally.
The information can be deduced through the study of:
their content of certain elements or compounds
their morphological or cellular structure
metabolic biochemical processes
behaviour
population structure(s).
The importance and relevance of biomonitors, rather than man-made equipment, are justified by the observation that the best indicator of the status of a species or system is itself. Bioindicators can reveal indirect biotic effects of pollutants when many physical or chemical measurements cannot. Through bioindicators, scientists need to observe only the single indicating species to check on the environment rather than monitor the whole community. Small sets of indicator species can also be used to predict species richness for multiple taxonomic groups.
The use of a biomonitor is described as biological monitoring and is the use of the properties of an organism to obtain information on certain aspects of the biosphere. Biomonitoring of air pollutants can be passive or active. Experts use passive methods to observe plants growing naturally within the area of interest. Active methods are used to detect the presence of air pollutants by placing test plants of known response and genotype into the study area.
Biological monitoring, in this sense, refers to the measurement of specific properties of an organism to obtain information on the surrounding physical and chemical environment.
Bioaccumulative indicators are frequently regarded as biomonitors. Depending on the organism selected and their use, there are several types of bioindicators.
Use
In most instances, baseline data for biotic conditions within a pre-determined reference site are collected. Reference sites must be characterized by little to no outside disturbance (e.g. anthropogenic disturbances, land use change, invasive species). The biotic conditions of a specific indicator species are measured within both the reference site and the study region over time. Data collected from the study region are compared against similar data collected from the reference site in order to infer the relative environmental health or integrity of the study region.
An important limitation of bioindicators in general is that they have been reported as inaccurate when applied to geographically and environmentally diverse regions. As a result, researchers who use bioindicators need to consistently ensure that each set of indices is relevant within the environmental conditions they plan to monitor.
Plant and fungal indicators
The presence or absence of certain plant or other vegetative life in an ecosystem can provide important clues about the health of the environment. There are several types of plant biomonitors, including mosses, lichens, tree bark, bark pockets, tree rings, and leaves. As an example, environmental pollutants can be absorbed and incorporated into tree bark, which can then be analyzed to determine pollutant presence and concentration in the surrounding environment. The leaves of certain vascular plants experience harmful effects in the presence of ozone, particularly tissue damage, making them useful in detecting the pollutant. These plants are observed abundantly in Atlantic islands in the Northern Hemisphere, the Mediterranean Basin, equatorial Africa, Ethiopia, the Indian coastline, the Himalayan region, southern Asia, and Japan. These regions of high endemic richness are particularly vulnerable to ozone pollution, emphasizing the importance of certain vascular plant species as valuable indicators of environmental health in terrestrial ecosystems. Conservationists use such plant bioindicators as tools, allowing them to ascertain potential changes and damage to the environment.
As an example, Lobaria pulmonaria has been identified as an indicator species for assessing stand age and macrolichen diversity in Interior Cedar–Hemlock forests of east-central British Columbia, highlighting its ecological significance as a bioindicator. The abundance of Lobaria pulmonaria was strongly correlated with increased macrolichen diversity, suggesting its potential as an indicator of stand age in the ICH. Another lichen species, Xanthoria parietina, serves as a reliable indicator of air quality, effectively accumulating pollutants like heavy metals and organic compounds. Studies have shown that X. parietina samples collected from industrial areas exhibit significantly higher concentrations of these pollutants compared to those from greener, less urbanized environments. This highlights the lichen's valuable role in assessing environmental health and identifying areas with elevated pollution levels, aiding in targeted mitigation efforts and environmental management strategies.
Fungi are also useful as bioindicators, as they are found throughout the globe and undergo noticeable changes in different environments.
Lichens are organisms comprising both fungi and algae. They are found on rocks and tree trunks, and they respond to environmental changes in forests, including changes in forest structure, air quality, and climate. The disappearance of lichens in a forest may indicate environmental stresses, such as high levels of sulfur dioxide, sulfur-based pollutants, and nitrogen oxides.
The composition and total biomass of algal species in aquatic systems serve as an important metric for organic water pollution and nutrient loading such as nitrogen and phosphorus.
There are genetically engineered organisms that can respond to toxicity levels in the environment; e.g., a type of genetically engineered grass that grows a different colour if there are toxins in the soil.
Animal indicators and toxins
Changes in animal populations, whether increases or decreases, can indicate pollution. For example, if pollution causes depletion of a plant, animal species that depend on that plant will experience population decline. Conversely, overpopulation may reflect the opportunistic growth of a species in response to the loss of other species in an ecosystem. On the other hand, stress-induced sub-lethal effects can be manifested in the physiology, morphology, and behaviour of individual animals long before responses are expressed and observed at the population level. Such sub-lethal responses can be very useful as "early warning signals" to predict how populations will further respond.
Pollution and other stress agents can be monitored by measuring any of several variables in animals: the concentration of toxins in animal tissues; the rate at which deformities arise in animal populations; behaviour in the field or in the laboratory; and by assessing changes in individual physiology.
Frogs and toads
Amphibians, particularly anurans (frogs and toads), are increasingly used as bioindicators of contaminant accumulation in pollution studies. Anurans absorb toxic chemicals through their skin and their larval gill membranes and are sensitive to alterations in their environment. They have a poor ability to detoxify pesticides that are absorbed, inhaled, or ingested by eating contaminated food. This allows residues, especially of organochlorine pesticides, to accumulate in their systems. They also have permeable skin that can easily absorb toxic chemicals, making them a model organism for assessing the effects of environmental factors that may cause the declines of the amphibian population. These factors allow them to be used as bioindicator organisms to follow changes in their habitats and in ecotoxicological studies due to humans increasing demands on the environment.
Knowledge and control of environmental agents are essential for sustaining the health of ecosystems. Anurans are increasingly utilized as bioindicator organisms in pollution studies, such as studies of the effects of agricultural pesticides on the environment. Assessment of the environment in which they live is performed by analyzing their abundance in the area, as well as assessing their locomotive ability and any abnormal morphological changes, such as deformities and abnormalities in development. Declines and malformations in anurans could also suggest increased exposure to ultraviolet light and parasites. Expansive application of agrochemicals such as glyphosate has been shown to have harmful effects on frog populations throughout their life cycle, due to runoff of these agrochemicals into the water systems in which these species live and the species' proximity to human development.
Pond-breeding anurans are especially sensitive to pollution because of their complex life cycles, which can include both terrestrial and aquatic phases. During their embryonic development, morphological and behavioral alterations are the effects most frequently cited in connection with chemical exposures. Exposure may result in shorter body length, lower body mass and malformations of limbs or other organs. Slow development, late morphological change, and small metamorph size result in increased risk of mortality and exposure to predation.
Crustaceans
Crayfish have also been hypothesized as being suitable bioindicators, under the appropriate conditions. One example of use is an examination of accumulation of microplastics in the digestive tract of red swamp crayfish (Procambarus clarkii) being used as a bioindicator of wider microplastics pollution.
Microbial indicators
Chemical pollutants
Microorganisms can be used as indicators of aquatic or terrestrial ecosystem health. Found in large quantities, microorganisms are easier to sample than other organisms. Some microorganisms will produce new proteins, called stress proteins, when exposed to contaminants such as cadmium and benzene. These stress proteins can be used as an early warning system to detect changes in levels of pollution.
In oil and gas exploration
Microbial Prospecting for oil and gas (MPOG) can be used to identify prospective areas for oil and gas occurrences. In many cases, oil and gas is known to seep toward the surface as a hydrocarbon reservoir will usually leak or have leaked towards the surface through buoyancy forces overcoming sealing pressures. These hydrocarbons can alter the chemical and microbial occurrences found in the near-surface soils or can be picked up directly. Techniques used for MPOG include DNA analysis, simple bug counts after culturing a soil sample in a hydrocarbon-based medium or by looking at the consumption of hydrocarbon gases in a culture cell.
Microalgae in water quality
Microalgae have gained attention in recent years due to several reasons including their greater sensitivity to pollutants than many other organisms. In addition, they occur abundantly in nature, they are an essential component in very many food webs, they are easy to culture and to use in assays and there are few if any ethical issues involved in their use.
Euglena gracilis is a motile, freshwater, photosynthetic flagellate. Although Euglena is rather tolerant to acidity, it responds rapidly and sensitively to environmental stresses such as heavy metals or inorganic and organic compounds. Typical responses are the inhibition of movement and a change of orientation parameters. Moreover, this organism is very easy to handle and grow, making it a very useful tool for eco-toxicological assessments. One very useful particularity of this organism is gravitactic orientation, which is very sensitive to pollutants. The gravireceptors are impaired by pollutants such as heavy metals and organic or inorganic compounds. Therefore, the presence of such substances is associated with random movement of the cells in the water column. For short-term tests, gravitactic orientation of E. gracilis is very sensitive.
Other species such as Paramecium biaurelia (see Paramecium aurelia) also use gravitactic orientation.
Automatic bioassay is possible, using the flagellate Euglena gracilis in a device which measures their motility at different dilutions of the possibly polluted water sample, to determine the EC50 (the concentration of sample which affects 50 percent of organisms) and the G-value (lowest dilution factor at which no-significant toxic effect can be measured).
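As a rough sketch of how the two quantities defined above could be read off a dilution series, the Python snippet below uses hypothetical motility-inhibition data and simple linear interpolation; real instruments fit proper dose–response curves, so this is illustrative only, and the 5% "no significant effect" threshold is an assumption rather than a value from the text.

    # Hypothetical dilution series: fraction of organisms affected at each
    # dilution factor (1 = undiluted sample, 32 = a 1:32 dilution, etc.).
    data = [(1, 0.90), (2, 0.70), (4, 0.45), (8, 0.20), (16, 0.06), (32, 0.01)]

    def ec50_dilution(series):
        """Interpolate the dilution factor at which 50% of organisms are
        affected (expressed here as a dilution factor of the sample rather
        than an absolute concentration)."""
        for (d1, e1), (d2, e2) in zip(series, series[1:]):
            if e1 >= 0.5 >= e2:
                frac = (e1 - 0.5) / (e1 - e2)
                return d1 + frac * (d2 - d1)
        return None  # the 50% level is never crossed in this series

    def g_value(series, threshold=0.05):
        """Lowest dilution factor whose effect falls below the chosen
        'no significant effect' threshold."""
        for dilution, effect in series:
            if effect < threshold:
                return dilution
        return None

    print(ec50_dilution(data))  # about 3.6 for the data above
    print(g_value(data))        # -> 32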
Macroinvertebrates
Macroinvertebrates are useful and convenient indicators of the ecological health of water bodies and terrestrial ecosystems. They are almost always present, and are easy to sample and identify. This is largely due to the fact that most macro-invertebrates are visible to the naked eye, they typically have a short life-cycle (often the length of a single season) and are generally sedentary. Pre-existing river conditions such as river type and flow will affect macro invertebrate assemblages and so various methods and indices will be appropriate for specific stream types and within specific eco-regions. While some benthic macroinvertebrates are highly tolerant to various types of water pollution, others are not. Changes in population size and species type in specific study regions indicate the physical and chemical state of streams and rivers. Tolerance values are commonly used to assess ecological effects of water pollution such as pesticide contamination with the SPEAR system and environmental degradation, such as human activities (e.g. selective logging and wildfires) in tropical forests.
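To make the idea of tolerance values concrete, one widely used formulation is an abundance-weighted mean tolerance score, the approach behind Hilsenhoff-type biotic indices; the taxa, counts, and tolerance values in the Python sketch below are invented for illustration, and this is not the SPEAR calculation mentioned above.

    # Hypothetical sample: (taxon, individuals counted, tolerance value),
    # where higher tolerance values indicate greater tolerance of pollution.
    sample = [
        ("mayfly larvae",    40, 2.0),
        ("caddisfly larvae", 25, 3.0),
        ("midge larvae",     30, 6.0),
        ("aquatic worms",     5, 8.0),
    ]

    def biotic_index(taxa):
        """Abundance-weighted mean tolerance value for the whole sample."""
        total = sum(count for _, count, _ in taxa)
        return sum(count * tol for _, count, tol in taxa) / total

    # Lower scores suggest a community dominated by pollution-sensitive taxa.
    print(round(biotic_index(sample), 2))  # -> 3.75 for this sample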
Benthic indicators for water quality testing
Benthic macroinvertebrates are found within the benthic zone of a stream or river. They consist of aquatic insects, crustaceans, worms and mollusks that live in the vegetation and stream beds of rivers. Macroinvertebrate species can be found in nearly every stream and river, except in some of the world's harshest environments. They can also be found in streams and rivers of almost any size, excluding only those that dry up within a short timeframe. This makes them useful for many studies, because they can be found in regions where stream beds are too shallow to support larger species such as fish. Benthic indicators are often used to measure the biological components of fresh water streams and rivers. In general, if the biological functioning of a stream is considered to be in good standing, then it is assumed that the chemical and physical components of the stream are also in good condition. Benthic indicators are the most frequently used water quality test within the United States. While benthic indicators should not be used to track the origins of stressors in rivers and streams, they can provide background on the types of sources that are often associated with the observed stressors.
Global context
In Europe, the Water Framework Directive (WFD) went into effect on October 23, 2000. It requires all EU member states to show that all surface and groundwater bodies are in good status. The WFD requires member states to implement monitoring systems to estimate the integrity of biological stream components for specific sub-surface water categories. This requirement increased the incidence of biometrics applied to ascertain stream health in Europe. A remote online biomonitoring system was designed in 2006. It is based on bivalve molluscs and the exchange of real-time data between a remote intelligent device in the field (able to work for more than 1 year without in-situ human intervention) and a data centre designed to capture, process and distribute the web information derived from the data. The technique relates bivalve behaviour, specifically shell gaping activity, to water quality changes. This technology has been successfully used for the assessment of coastal water quality in various countries (France, Spain, Norway, Russia, Svalbard (Ny-Ålesund) and New Caledonia).
In the United States, the Environmental Protection Agency (EPA) published Rapid Bioassessment Protocols, in 1999, based on measuring macroinvertebrates, as well as periphyton and fish for assessment of water quality.
In South Africa, the Southern African Scoring System (SASS) method is based on benthic macroinvertebrates, and is used for the assessment of water quality in South African rivers. The SASS aquatic biomonitoring tool has been refined over the past 30 years and is now on the fifth version (SASS5) in accordance with the ISO/IEC 17025 protocol. The SASS5 method is used by the South African Department of Water Affairs as a standard method for River Health Assessment, which feeds the national River Health Programme and the national Rivers Database.
The imposex phenomenon in the dog conch species of sea snail leads to the abnormal development of a penis in females, but does not cause sterility. Because of this, the species has been suggested as a good indicator of pollution with organic man-made tin compounds in Malaysian ports.
| Biology and health sciences | Ecology | Biology |
1859848 | https://en.wikipedia.org/wiki/American%20cockroach | American cockroach | The American cockroach (Periplaneta americana) is the largest species of common cockroach, and often considered a pest. In certain regions of the U.S. it is colloquially known as the waterbug, though it is not a true waterbug since it is not aquatic. It is also known as the ship cockroach, kakerlac, and Bombay canary. It is often misidentified as a palmetto bug.
Despite their name, American cockroaches are native to Africa and the Middle East. They are believed to have been introduced to the Americas only from the 17th century onward as a result of human commercial patterns, including the Atlantic slave trade.
Distribution
Despite the name, none of the Periplaneta species is native to the Americas; P. americana was introduced to what is now the United States from Africa as early as 1625. They are now common in tropical climates because human activity has extended the insects' range of habitation, and are virtually cosmopolitan in distribution as a result of global commerce.
Biology
Characteristics
Of all common cockroach species, the American cockroach has the largest body size; molts 6–14 times (mostly 13 times) before metamorphosis; and has the longest life cycle, up to about 700 days.
It has an average length around and is about tall. They are reddish brown and have a yellowish margin on the pronotum, the body region behind the head. Immature cockroaches resemble adults except they are wingless.
The cockroach's body is divided into three sections and is flattened and broadly oval, with a shield-like pronotum covering its head. A pronotum is a plate-like structure that covers all or part of the dorsal surface of the thorax of certain insects. American cockroaches also have chewing mouth parts, long, segmented antennae, and leathery fore wings with delicate hind wings. The third section of the cockroach is the abdomen.
The insect can travel quickly, often darting out of sight when a threat is perceived, and can fit into small cracks and under doors despite its fairly large size. It is considered one of the fastest running insects.
In an experiment, a P. americana registered a record speed of , about 50 body lengths per second, which would be comparable to a human running at .
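The body-length scaling behind this comparison is straightforward to reproduce; in the Python sketch below, the figure of 50 body lengths per second comes from the text, while the cockroach body length and the human height are assumed round figures for illustration, not the measurements behind the values elided above.

    # Given in the text: about 50 body lengths per second.
    body_lengths_per_s = 50

    # Assumed round figures, for illustration only.
    roach_length_m = 0.03   # ~3 cm body length
    human_height_m = 1.7

    roach_speed_m_s  = body_lengths_per_s * roach_length_m   # 1.5 m/s
    human_equiv_m_s  = body_lengths_per_s * human_height_m   # 85 m/s
    human_equiv_km_h = human_equiv_m_s * 3.6                 # 306 km/h

    print(roach_speed_m_s, human_equiv_m_s, human_equiv_km_h)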
It has a pair of large compound eyes, each having over 3500 individual lenses (ommatidia, hexagonal apertures which provide a kind of vision known as mosaic vision, with more sensitivity but less resolution, particularly useful at night, hence called nocturnal vision). It is a very active night insect that shuns light.
American cockroach nymphs are capable of limb regeneration.
Morphology
The American cockroach shows a characteristic insect morphology with its body bearing divisions as head, trunk, and abdomen. The trunk, or thorax, is divisible into prothorax, mesothorax and metathorax. Each thoracic segment gives rise to a pair of walking appendages (known as cursorial legs). The organism bears two pairs of wings. The fore wings, known as tegmina, arise from mesothorax and are dark and opaque. The hind wings arise from the metathorax and are used in flight, though cockroaches rarely resort to flight. The abdomen is divisible into 10 segments, each of which is surrounded by chitinous exoskeleton plates called sclerites, including dorsal tergites, ventral sternites, and lateral pleurites.
Life cycle
American cockroaches have three developmental stages: egg, nymph, and adult. Females produce an egg case (ootheca) which protrudes from the tip of the abdomen. On average, females produce 9–10 oothecae, although they can sometimes produce as many as 90. After about two days, the egg case is deposited on a surface in a safe location. Egg cases are about 0.9 cm long, brown, and purse-shaped. Immature cockroaches emerge from egg cases in 6–8 weeks and require 6–12 months to mature. After hatching, the nymphs feed and undergo a series of 13 moults (ecdyses). Adult cockroaches can live up to an additional year, during which females produce an average of 150 young. The American cockroach reproductive cycle can last up to 600 days.
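As a rough consistency check on this timeline, summing the upper ends of the ranges above gives a total lifespan somewhat over the roughly 700-day figure quoted earlier, which is expected when stacking worst-case values. The Python sketch below is illustrative only; the day-count conversions are approximations.

# Rough upper-bound timeline from the developmental stages described above.
DAYS_PER_WEEK = 7
DAYS_PER_MONTH = 30  # approximation

incubation_days = 8 * DAYS_PER_WEEK   # nymphs emerge from egg cases in 6-8 weeks
nymph_days = 12 * DAYS_PER_MONTH      # nymphs need 6-12 months to mature
adult_days = 365                      # adults can live up to an additional year

total_days = incubation_days + nymph_days + adult_days
print(f"Upper-bound lifespan: {total_days} days")  # about 780 days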
Sex pheromone
The sex pheromone of the American cockroach is the sesquiterpene (1Z,5E)-1,10(14)-diepoxy-4(15),5-germacradien-9-one, which has been given the trivial name periplanone-B. This pheromone was isolated from the feces of virgin female cockroaches. Previously, 2,2-dimethyl-3-isopropylidenecyclopropyl propionate had been thought to be the structure of this pheromone, but on synthesis it was shown to be inactive. The structure determination of this pheromone was an eventful chapter in the history of pheromone chemistry.
Parthenogenesis
When female American cockroaches are housed in groups, this close association promotes facultative parthenogenic reproduction. The oothecae are produced asexually, without fertilization. The process by which the eggs are produced is automixis; during automixis, meiosis occurs, but instead of giving rise to haploid gametes as ordinarily happens, diploid gametes are produced (probably by terminal fusion of meiotic products) that can then develop into female cockroaches. Eggs produced by parthenogenesis have lower viability than eggs produced by sexual reproduction.
Genetics
The American cockroach genome is the second-largest insect genome on record, after Locusta migratoria. Around 60% of its genome is composed of repeat elements. Around 90% of the genome can be found in other members of Blattodea. The genome codes for a large number of chemoreceptor families, including 522 taste receptors and 154 olfactory receptors. The 522 taste receptors comprise the largest number found among insects for which genomes have been sequenced. About 329 of the taste receptors are involved in bitter taste perception. These traits, along with enlarged groups of genes relating to detoxification, the immune system, and growth and reproduction, are believed to be part of the reasons behind the cockroach's ability to adapt to human living spaces.
Diet
American cockroaches are omnivorous and opportunistic feeders that eat materials such as cheese, sweets, beer, tea, leather, bakery products, starch in book bindings, manuscripts, glue, hair, flakes of dried skin, dead animals, plant materials, soiled clothing, and glossy paper with starch sizing. They are particularly fond of fermenting foods. They have also been observed to feed upon dead or wounded cockroaches of their own or other species.
Flight
In the immature (nymph) stage, American cockroaches are wingless and incapable of flight. Adults have functional wings and can fly for short distances; if they start from a high place, such as a tree, they can glide for some distance. Despite this ability, however, American cockroaches are not regular fliers: they can run very fast and, when frightened, more commonly scatter on foot.
Habitat
American cockroaches generally live in moist areas but can survive in dry areas if they have access to water. They prefer warm temperatures of around 29 °C (84 °F) and do not tolerate low temperatures. These cockroaches are common in basements, crawl spaces, cracks and crevices of porches, foundations, and walkways adjacent to buildings. In residential areas outside the tropics, these cockroaches live in basements and sewers and may move outdoors into yards during warm weather.
Relationship with humans
Risk to humans
The odorous secretions produced by American cockroaches can alter the flavor of food. Also, if populations of cockroaches are high, a strong concentration of this odorous secretion can be present. Cockroaches can pick up disease-causing bacteria, such as Salmonella, on their legs and later deposit them on foods and cause food poisoning or infection if they walk on the food. House dust containing cockroach feces and body parts can trigger allergic reactions and asthma in certain individuals.
At least 22 species of pathogenic human bacteria, viruses, fungi, and protozoans, as well as five species of helminthic worms, have been isolated from field-collected P. americana (L.).
Control as pests
In cold climates, these cockroaches may move indoors, seeking warmer environments and food. Cockroaches may enter houses via wastewater plumbing, underneath doors, or via air ducts or other openings in the walls, windows or foundation. Cockroach populations may be controlled through the use of glue board traps or insecticides. Glue board traps (also called adhesive or sticky traps) are made using adhesive applied to cardboard or similar material. Bait can be placed in the center or a scent may be added to the adhesive. Inexpensive glue board traps are normally placed in warm indoor locations readily accessible to insects but not likely to be encountered by people: underneath refrigerators or freezers, behind trash cans, etc.
Covering any cracks or crevices through which cockroaches may enter, sealing food inside insect-proof containers, and quickly cleaning any spills or messes are all beneficial. Another way to prevent an infestation is to thoroughly check any materials brought inside: cockroaches and their egg cases (oothecae) can be hidden inside or on furniture, or inside boxes, suitcases, grocery bags, etc. Upon finding an egg case, use a napkin to pick it up and then forcefully crush it; the resulting fluid leakage indicates that the eggs inside have been destroyed. Discard the napkin and the crushed egg case with the garbage.
Use in traditional Chinese medicine
The American cockroach has been used as an ingredient in traditional Chinese medicine, with references to its usage in the Compendium of Materia Medica and Shennong Ben Cao Jing. In China, an ethanol extract of the American cockroach, Kāngfùxīn Yè, is prescribed for wound healing and tissue repair.
| Biology and health sciences | Cockroaches & Termites (Blattodea) | Animals |