| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
4,185,921 | https://en.wikipedia.org/wiki/Gompertz%E2%80%93Makeham%20law%20of%20mortality | The Gompertz–Makeham law states that the human death rate is the sum of an age-dependent component (the Gompertz function, named after Benjamin Gompertz), which increases exponentially with age, and an age-independent component (the Makeham term, named after William Makeham). In a protected environment where external causes of death are rare (laboratory conditions, low mortality countries, etc.), the age-independent mortality component is often negligible. In this case the formula simplifies to a Gompertz law of mortality. In 1825, Benjamin Gompertz proposed an exponential increase in death rates with age.
Description
The Gompertz–Makeham law of mortality describes the age dynamics of human mortality rather accurately in the age window from about 30 to 80 years of age. At more advanced ages, some studies have found that death rates increase more slowly – a phenomenon known as the late-life mortality deceleration – but more recent studies disagree.
The decline in the human mortality rate before the 1950s was mostly due to a decrease in the age-independent (Makeham) mortality component, while the age-dependent (Gompertz) mortality component was surprisingly stable. Since the 1950s, a new mortality trend has started in the form of an unexpected decline in mortality rates at advanced ages and "rectangularization" of the survival curve.
The hazard function for the Gompertz–Makeham distribution is most often characterised as $h(x) = \alpha e^{\beta x} + \lambda$. The empirical magnitude of the beta-parameter is about 0.085, implying a doubling of mortality every $\ln(2)/0.085 \approx 0.69/0.085 \approx 8$ years (Denmark, 2006).
The quantile function can be expressed in a closed-form expression using the Lambert W function:
$$Q(p) = \frac{\alpha}{\beta\lambda} - \frac{\ln(1-p)}{\lambda} - \frac{1}{\beta} W_0\!\left(\frac{\alpha}{\lambda}\, e^{\alpha/\lambda}\, (1-p)^{-\beta/\lambda}\right), \qquad 0 \le p < 1.$$
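As a numerical sketch of the two formulas above (the values of alpha and lambda below are illustrative placeholders; only beta ≈ 0.085 per year is quoted in the text), the following checks the mortality doubling time and verifies the Lambert-W quantile against the cumulative distribution:

```python
import numpy as np
from scipy.special import lambertw

# Gompertz-Makeham parameters: alpha and lam are illustrative
# placeholders; beta ~ 0.085/year is the value quoted in the text.
alpha, beta, lam = 2e-5, 0.085, 5e-4

def cdf(x):
    """F(x) = 1 - exp(-lam*x - (alpha/beta)*(exp(beta*x) - 1))."""
    return 1.0 - np.exp(-lam * x - (alpha / beta) * np.expm1(beta * x))

def quantile(p):
    """Closed-form quantile using the principal branch of Lambert W."""
    w = lambertw((alpha / lam) * np.exp(alpha / lam) * (1 - p) ** (-beta / lam))
    return alpha / (beta * lam) - np.log(1 - p) / lam - w.real / beta

print("mortality doubling time:", np.log(2) / beta, "years")  # ~8.2 years
x = quantile(0.5)
print("median age:", x, " CDF check:", cdf(x))  # CDF check ~ 0.5
```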
The Gompertz law is the same as a Fisher–Tippett distribution for the negative of age, restricted to negative values for the random variable (positive values for age).
See also
Bathtub curve
Biodemography
Biodemography of human longevity
Gerontology
Demography
Life table
Maximum life span
Reliability theory of aging and longevity
References
Actuarial science
Medical aspects of death
Population
Senescence
Statistical laws
Applied probability | Gompertz–Makeham law of mortality | ["Chemistry", "Mathematics", "Biology"] | 446 | ["Applied probability", "Applied mathematics", "Senescence", "Actuarial science", "Cellular processes", "Metabolism"] |
4,186,087 | https://en.wikipedia.org/wiki/Water%20dimer | The water dimer consists of two water molecules loosely bound by a hydrogen bond. It is the smallest water cluster. Because it is the simplest model system for studying hydrogen bonding in water, it has been the target of so many theoretical (and later experimental) studies that it has been called a "theoretical guinea pig".
Structure and properties
The ab initio binding energy between the two water molecules is estimated to be 5–6 kcal/mol, although values between 3 and 8 kcal/mol have been obtained depending on the method. The experimentally measured dissociation energies (including nuclear quantum effects) of (H2O)2 and (D2O)2 are 3.16 ± 0.03 kcal/mol (13.22 ± 0.12 kJ/mol) and 3.56 ± 0.03 kcal/mol (14.88 ± 0.12 kJ/mol), respectively. These values are in excellent agreement with calculations. The O–O distance of the vibrational ground state is experimentally measured at ca. 2.98 Å; the hydrogen bond is almost linear, but the angle with the plane of the acceptor molecule is about 57°. The vibrational ground state is known as the linear water dimer (shown in the figure to the right), which is a near-prolate top (viz., in terms of rotational constants, A > B ≈ C). Other configurations of interest include the cyclic dimer and the bifurcated dimer.
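As a quick arithmetic check of the unit conversions quoted above (using the standard thermochemical factor 1 kcal = 4.184 kJ):

```python
# Thermochemical calorie: 1 kcal = 4.184 kJ
for kcal in (3.16, 3.56):
    print(f"{kcal} kcal/mol = {kcal * 4.184:.2f} kJ/mol")
# 3.16 kcal/mol = 13.22 kJ/mol; 3.56 kcal/mol = 14.89 kJ/mol
```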
History and relevance
The first theoretical study of the water dimer was an ab initio calculation published in 1968 by Morokuma and Pedersen. Since then, the water dimer has been the focus of sustained interest by theoretical chemists concerned with hydrogen bonding—a search of the CAS database up to 2006 returns over 1100 related references (73 of them in 2005). In addition to serving as a model for hydrogen bonding, (H2O)2 is thought to play a significant role in many atmospheric processes, including chemical reactions, condensation, and solar energy absorption by the atmosphere. In addition, a complete understanding of the water dimer is thought to play a key role in a more thorough understanding of hydrogen bonding in liquid and solid forms of water.
References
Forms of water
Water chemistry
Cluster chemistry
Dimers (chemistry) | Water dimer | ["Physics", "Chemistry", "Materials_science"] | 480 | ["Cluster chemistry", "Phases of matter", "Dimers (chemistry)", "Forms of water", "Polymer chemistry", "nan", "Organometallic chemistry", "Matter"] |
4,186,556 | https://en.wikipedia.org/wiki/Mermin%E2%80%93Wagner%20theorem | In quantum field theory and statistical mechanics, the Hohenberg–Mermin–Wagner theorem or Mermin–Wagner theorem (also known as Mermin–Wagner–Berezinskii theorem or Coleman theorem) states that continuous symmetries cannot be spontaneously broken at finite temperature in systems with sufficiently short-range interactions in dimensions $d \le 2$. Intuitively, this theorem implies that long-range fluctuations can be created with little energy cost, and since they increase the entropy, they are favored.
The reason is that if such a spontaneous symmetry breaking occurred, then the corresponding Goldstone bosons, being massless, would have an infrared-divergent correlation function.
The absence of spontaneous symmetry breaking in $d \le 2$ dimensional infinite systems was rigorously proved by David Mermin and Herbert Wagner (1966), citing a more general unpublished proof by Pierre Hohenberg (published later in 1967) in statistical mechanics. It was later reformulated by Sidney Coleman for quantum field theory. The theorem does not apply to discrete symmetries, such as the one broken in the two-dimensional Ising model.
Introduction
Consider the free scalar field $\varphi$ of mass $m$ in two Euclidean dimensions. Its propagator is:
$$G(x) = \langle \varphi(x)\,\varphi(0)\rangle = \int \frac{d^2 k}{(2\pi)^2}\, \frac{e^{i k \cdot x}}{k^2 + m^2}.$$
For small $m$, $G$ is a solution to Laplace's equation with a point source:
$$\nabla^2 G = \delta^2(x).$$
This is because the propagator is the reciprocal of $k^2$ in $k$-space. To use Gauss's law, define the electric field analog to be $E = \nabla G$. The divergence of the electric field is zero away from the source. In two dimensions, using a large Gaussian ring:
$$E = \frac{1}{2\pi r},$$
so that the function $G$ has a logarithmic divergence both at small and large $r$:
$$G(r) = \frac{1}{2\pi}\ln r.$$
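The logarithmic behavior can be checked numerically: in two dimensions the massive propagator has the standard closed form $G(r) = \tfrac{1}{2\pi}K_0(mr)$ (a known result, not stated above), which reduces to a logarithm for $mr \ll 1$:

```python
import numpy as np
from scipy.special import k0

m = 1e-3  # small mass regulator
gamma = np.euler_gamma

for r in (0.1, 1.0, 10.0, 100.0):
    exact = k0(m * r) / (2 * np.pi)                      # G(r) = K0(mr)/(2*pi)
    log_approx = -(np.log(m * r / 2) + gamma) / (2 * np.pi)
    print(f"r={r:7.1f}  G={exact:.6f}  log approx={log_approx:.6f}")
# The two columns agree while m*r << 1: G grows like -ln(r)/(2*pi),
# i.e. the correlation diverges logarithmically as the regulator m -> 0.
```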
The interpretation of the divergence is that the field fluctuations cannot stay centered around a mean. If you start at a point where the field has the value 1, the divergence tells you that as you travel far away, the field is arbitrarily far from the starting value. This makes a two dimensional massless scalar field slightly tricky to define mathematically. If you define the field by a Monte Carlo simulation, it doesn't stay put, it slides to infinitely large values with time.
This happens in one dimension too, when the field is a one dimensional scalar field, a random walk in time. A random walk also moves arbitrarily far from its starting point, so that a one-dimensional or two-dimensional scalar does not have a well defined average value.
If the field is an angle, $\theta$, as it is in the Mexican hat model where the complex field $\phi = |\phi|\,e^{i\theta}$ has an expectation value but is free to slide in the $\theta$ direction, the angle $\theta$ will be random at large distances. This is the Mermin–Wagner theorem: there is no spontaneous breaking of a continuous symmetry in two dimensions.
XY model transition
While the Mermin–Wagner theorem prevents any spontaneous symmetry breaking on a global scale, ordering transitions of Kosterlitz–Thouless type may be allowed. This is the case for the XY model, where the continuous (internal) $O(2)$ symmetry on a spatial lattice of dimension $d \le 2$, i.e. the (spin-)field's expectation value, remains zero for any finite temperature (quantum phase transitions remain unaffected). However, the theorem does not prevent the existence of a phase transition in the sense of a diverging correlation length $\xi$. To this end, the model has two phases: a conventional disordered phase at high temperature, with dominating exponential decay of the correlation function $C(r) \propto e^{-r/\xi}$ for $r/\xi \gg 1$, and a low-temperature phase with quasi-long-range order, where $C(r)$ decays according to some power law for "sufficiently large", but finite, distance $r$ ($a \ll r \ll \xi$, with $a$ the lattice spacing).
Heisenberg model
We will present an intuitive way to understand the mechanism that prevents symmetry breaking in low dimensions, through an application to the Heisenberg model, that is, a system of $n$-component spins $\mathbf{S}_i$ of unit length $|\mathbf{S}_i| = 1$, located at the sites of a $d$-dimensional square lattice, with nearest-neighbour coupling $J$. Its Hamiltonian is
$$H = -J \sum_{\langle i,j\rangle} \mathbf{S}_i \cdot \mathbf{S}_j.$$
The name of this model comes from its rotational symmetry. Consider the low temperature behavior of this system and assume that there exists a spontaneously broken symmetry, that is, a phase where all spins point in the same direction, e.g. along the $z$-axis. Then the $O(n)$ rotational symmetry of the system is spontaneously broken, or rather reduced to the $O(n-1)$ symmetry under rotations around this direction. We can parametrize the field in terms of independent fluctuations $\sigma_\alpha$ around this direction as follows:
$$\mathbf{S} = \left(\sigma_1, \ldots, \sigma_{n-1}, \sqrt{1 - \boldsymbol{\sigma}^2}\right),$$
with $|\boldsymbol{\sigma}| \ll 1$, and Taylor expand the resulting Hamiltonian. We have
$$\mathbf{S}_i \cdot \mathbf{S}_j \approx 1 + \boldsymbol{\sigma}_i \cdot \boldsymbol{\sigma}_j - \tfrac{1}{2}\boldsymbol{\sigma}_i^2 - \tfrac{1}{2}\boldsymbol{\sigma}_j^2,$$
whence
$$H \approx H_0 + \frac{J}{2} \sum_{\langle i,j\rangle} \left(\boldsymbol{\sigma}_i - \boldsymbol{\sigma}_j\right)^2.$$
Ignoring the irrelevant constant term $H_0 = -JNd$ and passing to the continuum limit, given that we are interested in the low temperature phase where long-wavelength fluctuations dominate, we get
$$H \approx \frac{J}{2} \int d^d x \, \left(\nabla \boldsymbol{\sigma}\right)^2.$$
The field fluctuations are called spin waves and can be recognized as Goldstone bosons. Indeed, they are n-1 in number and they have zero mass since there is no mass term in the Hamiltonian.
To find if this hypothetical phase really exists we have to check if our assumption is self-consistent, that is if the expectation value of the magnetization, calculated in this framework, is finite as assumed. To this end we need to calculate the first order correction to the magnetization due to the fluctuations. This is the procedure followed in the derivation of the well-known Ginzburg criterion.
The model is Gaussian to first order and so the momentum-space correlation function is proportional to $k^{-2}$. Thus the real-space two-point correlation function for each of these modes is
$$\langle \sigma_\alpha(\mathbf{r})\,\sigma_\alpha(0)\rangle \propto \frac{1}{\beta J} \int_{1/L}^{1/a} \frac{d^d k}{(2\pi)^d}\, \frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{k^2},$$
where $a$ is the lattice spacing. The average magnetization is
$$\langle S_n \rangle = 1 - \tfrac{1}{2}\langle \boldsymbol{\sigma}^2 \rangle + \ldots,$$
and the first order correction can now easily be calculated:
$$\langle \boldsymbol{\sigma}^2 \rangle \propto \frac{1}{\beta J} \int_{1/L}^{1/a} \frac{d^d k}{(2\pi)^d}\, \frac{1}{k^2}.$$
The integral above is proportional to
$$\int_{1/L}^{1/a} k^{d-3}\, dk,$$
and so it is finite for $d > 2$, but appears to be divergent for $d \le 2$ (logarithmically for $d = 2$).
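The dimension dependence of this infrared integral is easy to check numerically; in the sketch below the cutoffs are arbitrary placeholders for $1/a$ and $1/L$:

```python
import numpy as np

LAMBDA = np.pi  # ultraviolet cutoff ~ 1/a, arbitrary placeholder

def ir_integral(d, eps):
    """Evaluate the radial integral of k**(d-3) from eps (~1/L) to LAMBDA."""
    if d == 2:
        return np.log(LAMBDA / eps)              # logarithmic case
    return (LAMBDA**(d - 2) - eps**(d - 2)) / (d - 2)

for eps in (1e-2, 1e-4, 1e-6):
    print(f"eps={eps:.0e}  d=1: {ir_integral(1, eps):12.1f}  "
          f"d=2: {ir_integral(2, eps):6.2f}  d=3: {ir_integral(3, eps):.3f}")
# d=1 blows up like 1/eps, d=2 like log(1/eps); d=3 stays finite,
# so spin-wave fluctuations destroy the order only for d <= 2.
```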
This divergence signifies that the fluctuations are large, so that the expansion in the small parameter $|\boldsymbol{\sigma}|$ performed above is not self-consistent. One can naturally expect then that, beyond that approximation, the average magnetization is zero.
We thus conclude that for $d \le 2$ our assumption that there exists a phase of spontaneous magnetization is incorrect for all $T > 0$, because the fluctuations are strong enough to destroy the spontaneous symmetry breaking. This is a general result:
Hohenberg–Mermin–Wagner theorem. There is no phase with spontaneous breaking of a continuous symmetry for $T > 0$, in dimensions $d \le 2$, for an infinite system.
The result can also be extended to other geometries, such as Heisenberg films with an arbitrary number of layers, as well as to other lattice systems (Hubbard model, s-f model).
Generalizations
Much stronger results than the absence of magnetization can actually be proved, and the setting can be substantially more general. In particular:
The Hamiltonian can be invariant under the action of an arbitrary compact, connected Lie group $G$.
Long-range interactions can be allowed (provided that they decay fast enough; necessary and sufficient conditions are known).
In this general setting, the Mermin–Wagner theorem admits the following strong form (stated here in an informal way):
All (infinite-volume) Gibbs states associated to this Hamiltonian are invariant under the action of $G$.
When the assumption that the Lie group be compact is dropped, a similar result holds, but with the conclusion that infinite-volume Gibbs states do not exist.
Finally, there are other important applications of these ideas and methods, most notably to the proof that there cannot be non-translation invariant Gibbs states in 2-dimensional systems. A typical such example would be the absence of crystalline states in a system of hard disks (with possibly additional attractive interactions).
It has been proved however that interactions of hard-core type can lead in general to violations of Mermin–Wagner theorem.
Historical arguments
In 1930, Felix Bloch argued that, by diagonalizing the Slater determinant for fermions, magnetism in 2D should not exist. Some easy arguments, which are summarized below, were given by Rudolf Peierls based on entropic and energetic considerations. Lev Landau also did some work on symmetry breaking in two dimensions.
Energetic argument
One reason for the lack of global symmetry breaking is that one can easily excite long-wavelength fluctuations which destroy perfect order. "Easily excited" means that the energy for those fluctuations tends to zero for large enough systems. Consider a magnetic model (e.g. the XY model in one dimension): a chain of $N$ magnetic moments of length $L$. We consider the harmonic approximation, where the forces (torques) between neighbouring moments increase linearly with the angle of twisting $\Delta\varphi$. This implies that the energy due to twisting increases quadratically, $\propto (\Delta\varphi)^2$. The total energy is the sum over all twisted pairs of magnetic moments, $E = J \sum_i (\Delta\varphi_i)^2$. If one considers the excited mode with the lowest energy in one dimension (see figure), then the moments on the chain of length $L$ are tilted by $\pi$ along the chain. The relative angle between neighbouring moments is the same for all pairs of moments in this mode and equals $\Delta\varphi = \pi/N$, if the chain consists of $N$ magnetic moments. It follows that the total energy of this lowest mode is $E_1 = JN(\pi/N)^2 = J\pi^2/N$. It decreases with increasing system size and tends to zero in the thermodynamic limit $N \to \infty$; for arbitrarily large systems it follows that the lowest modes do not cost any energy and will be thermally excited. Simultaneously, the long-range order on the chain is destroyed. In two dimensions (or in a plane), the number of magnetic moments is proportional to the area of the plane, $N^2$. The energy for the lowest excited mode is then $E_2 = JN^2(\pi/N)^2 = J\pi^2$, which tends to a constant in the thermodynamic limit. Thus the modes will be excited at sufficiently large temperatures. In three dimensions, the number of magnetic moments is proportional to the volume $N^3$, and the energy of the lowest mode is $E_3 = JN^3(\pi/N)^2 = J\pi^2 N$. It diverges with system size and will thus not be excited for large enough systems. Long-range order is not affected by this mode, and global symmetry breaking is allowed.
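A minimal sketch of this counting, using the mode energy $E_d = J N^d (\pi/N)^2 = J\pi^2 N^{d-2}$ implied by the paragraph above (with $J$ set to 1 for illustration):

```python
# Energy of the softest twist mode for a hypercubic system of linear
# size N in d dimensions: N**d moments, neighbouring pairs twisted by
# pi/N, so E = J * N**d * (pi/N)**2 = J * pi**2 * N**(d-2).
import math

J = 1.0
for N in (10, 100, 1000):
    E = {d: J * math.pi**2 * N**(d - 2) for d in (1, 2, 3)}
    print(f"N={N:5d}  E1={E[1]:.4f}  E2={E[2]:.2f}  E3={E[3]:.0f}")
# E1 -> 0 (mode always thermally excited), E2 -> constant,
# E3 -> infinity (mode frozen out): order survives only above 2D.
```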
Entropic argument
An entropic argument against perfect long-range order in crystals with $d \le 2$ is as follows (see figure): consider a chain of atoms/particles with an average particle distance of $\langle a \rangle$. Thermal fluctuations between one particle and the next will lead to fluctuations of the average particle distance of the order of $\xi$; thus the distance is given by $\langle a \rangle \pm \xi$. The fluctuations between the particle and its neighbour on the other side will be of the same size. We assume that the thermal fluctuations are statistically independent (which is evident if we consider only nearest-neighbour interaction), so the fluctuations across two particle spacings (double the distance) have to be summed statistically independently (incoherently): $\Delta x \approx \sqrt{2}\,\xi$. For particles $N$ average distances apart, the fluctuations increase with the square root, $\Delta x \approx \sqrt{N}\,\xi$, if neighbouring fluctuations are summed independently. Although the average distance $\langle a \rangle$ is well defined, the deviations from a perfect periodic chain increase with the square root of the system size. In three dimensions, one has to walk along three linearly independent directions to cover the whole space; in a cubic crystal, this is effectively along the space diagonal, from one particle to the particle at the opposite corner. As one can easily see in the figure, there are six different possibilities to do this. This implies that the fluctuations along the six different pathways cannot be statistically independent, since they pass the same particles at the start and end points. Now, the fluctuations of the six different ways have to be summed coherently and will be of the order of $\xi$, independent of the size of the cube. The fluctuations stay finite and lattice sites are well defined. For the case of two dimensions, Herbert Wagner and David Mermin have proved rigorously that fluctuation distances increase logarithmically with system size, $\Delta x \propto \sqrt{\ln(L/a)}$. This is frequently called the logarithmic divergence of displacements.
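The incoherent-summation step can be illustrated with a short Monte Carlo sketch (the per-bond fluctuation xi is an arbitrary placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)
xi = 0.1       # per-bond displacement fluctuation (placeholder)
trials = 20000

for N in (1, 4, 16, 64, 256):
    # position of the N-th particle = sum of N independent bond fluctuations
    deviations = rng.normal(0.0, xi, size=(trials, N)).sum(axis=1)
    print(f"N={N:4d}  measured std={deviations.std():.4f}  "
          f"sqrt(N)*xi={np.sqrt(N) * xi:.4f}")
# The measured spread matches sqrt(N)*xi: in 1D the deviation from a
# perfect lattice grows without bound as the chain gets longer.
```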
Crystals in 2D
The image shows a (quasi-) two-dimensional crystal of colloidal particles. These are micrometre-sized particles dispersed in water and sedimented on a flat interface, so they can perform Brownian motion only within a plane. The sixfold crystalline order is easy to detect on a local scale, since the logarithmic increase of displacements is rather slow. The deviations from the (red) lattice axis are easy to detect, too, here shown as green arrows. The deviations are basically given by the elastic lattice vibrations (acoustic phonons). A direct experimental proof of Hohenberg–Mermin–Wagner fluctuations would be if the displacements increased logarithmically with the distance of a locally fitted coordinate frame (blue). This logarithmic divergence goes along with an algebraic (slow) decay of positional correlations. The spatial order of a 2D crystal is called quasi-long-range (see also the hexatic phase for the phase behaviour of 2D ensembles).
Interestingly, significant signatures of Hohenberg–Mermin–Wagner fluctuations have not been found in crystals but in disordered amorphous systems.
This work did not investigate the logarithmic displacements of lattice sites (which are difficult to quantify for a finite system size), but the magnitude of the mean squared displacement of the particles as a function of time. This way, the displacements are not analysed in space but in the time domain. The theoretical background is given by D. Cassi, as well as F. Merkl and H. Wagner. This work analyses the recurrence probability of random walks and spontaneous symmetry breaking in various dimensions. The finite recurrence probability of a random walk in one and two dimensions shows a dualism to the lack of perfect long-range order in one and two dimensions, while the vanishing recurrence probability of a random walk in 3D is dual to the existence of perfect long-range order and the possibility of symmetry breaking.
Limits
Real magnets usually do not have a continuous symmetry, since the spin-orbit coupling of the electrons imposes an anisotropy. For atomic systems like graphene, one can show that monolayers of cosmological (or at least continental) size are necessary to measure a significant amplitude of the fluctuations.
A recent discussion about the Hohenberg–Mermin–Wagner theorems and its limitations in the thermodynamic limit is given by Bertrand Halperin.
More recently, it was shown that the most severe physical limitation is finite-size effects in 2D, because the suppression due to infrared fluctuations is only logarithmic in the size: the sample would have to be larger than the observable universe for a 2D superconducting transition to be suppressed below ~100 K.
For magnetism, there is a similar behaviour where the sample size must approach the size of the universe to have a Curie temperature Tc in the mK range. However, because disorder and interlayer coupling compete with finite-size effects at restoring order, it cannot be said a priori which of them is responsible for the observation of magnetic ordering in a given 2D sample.
Remarks
The discrepancy between the Hohenberg–Mermin–Wagner theorem (ruling out long-range order in 2D) and the first computer simulations (Alder & Wainwright), which indicated crystallization in 2D, once motivated J. Michael Kosterlitz and David J. Thouless to work on topological phase transitions in 2D. This work was awarded the 2016 Nobel Prize in Physics (together with Duncan Haldane).
See also
Elitzur's theorem
Notes
References
Eponymous theorems of physics
Quantum field theory
No-go theorems
Physics theorems
Theorems in quantum mechanics
Statistical mechanics theorems
Theorems in mathematical physics | Mermin–Wagner theorem | ["Physics", "Mathematics"] | 3,096 | ["Theorems in dynamical systems", "Theorems in quantum mechanics", "Quantum field theory", "No-go theorems", "Mathematical theorems", "Equations of physics", "Quantum mechanics", "Statistical mechanics theorems", "Eponymous theorems of physics", "Theorems in mathematical physics", "Statistical me... |
4,187,137 | https://en.wikipedia.org/wiki/Higman%27s%20lemma | In mathematics, Higman's lemma states that the set $\Sigma^*$ of finite sequences over a finite alphabet $\Sigma$, as partially ordered by the subsequence relation, is well-quasi-ordered. That is, if $w_1, w_2, \ldots$ is an infinite sequence of words over a finite alphabet $\Sigma$, then there exist indices $i < j$ such that $w_i$ can be obtained from $w_j$ by deleting some (possibly none) symbols. More generally this remains true when $\Sigma$ is not necessarily finite, but is itself well-quasi-ordered, and the subsequence relation is generalized into an "embedding" relation that allows the replacement of symbols by earlier symbols in the well-quasi-ordering of $\Sigma$. This is a special case of the later Kruskal's tree theorem. It is named after Graham Higman, who published it in 1952.
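To make the subsequence ordering concrete, here is a short sketch; the word list is invented for illustration, and `embeds(u, v)` tests whether u can be obtained from v by deleting symbols:

```python
from itertools import combinations

def embeds(u: str, v: str) -> bool:
    """True if u is a subsequence of v, i.e. u can be obtained
    from v by deleting some (possibly none) symbols."""
    it = iter(v)
    return all(ch in it for ch in u)  # 'in' consumes the iterator

# Any infinite word sequence over a finite alphabet must contain a pair
# w_i, w_j with i < j and w_i embedding into w_j; a finite sample may
# exhibit several such pairs.
words = ["ba", "abc", "cc", "aba", "bacb"]
for i, j in combinations(range(len(words)), 2):
    if embeds(words[i], words[j]):
        print(f"w_{i}={words[i]!r} embeds into w_{j}={words[j]!r}")
```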
Proof
Let $\Sigma$ be a well-quasi-ordered alphabet of symbols (in particular, $\Sigma$ could be finite and ordered by the identity relation). Suppose for a contradiction that there exist infinite bad sequences, i.e. infinite sequences of words $w_1, w_2, w_3, \ldots$ such that no $w_i$ embeds into a later $w_j$. Then there exists an infinite bad sequence of words that is minimal in the following sense: $w_1$ is a word of minimum length from among all words that start infinite bad sequences; $w_2$ is a word of minimum length from among all infinite bad sequences that start with $w_1$; $w_3$ is a word of minimum length from among all infinite bad sequences that start with $w_1, w_2$; and so on. In general, $w_i$ is a word of minimum length from among all infinite bad sequences that start with $w_1, \ldots, w_{i-1}$.
Since no $w_i$ can be the empty word, we can write $w_i = a_i z_i$ for a symbol $a_i$ and a word $z_i$. Since $\Sigma$ is well-quasi-ordered, the sequence of leading symbols $a_1, a_2, \ldots$ must contain an infinite increasing subsequence $a_{i_1} \le a_{i_2} \le \cdots$ with $i_1 < i_2 < \cdots$.
Now consider the sequence of words
$$w_1, \ldots, w_{i_1 - 1},\ z_{i_1}, z_{i_2}, \ldots$$
Because $z_{i_1}$ is shorter than $w_{i_1}$, this sequence is "more minimal" than the minimal bad sequence, and so it must contain a word that embeds into a later word. But the two words cannot both be $w$'s, because then the original sequence would not be bad. Similarly, it cannot be that the earlier word is some $w_j$ and the later word is some $z_{i_k}$, because then $w_j$ would also embed into $w_{i_k} = a_{i_k} z_{i_k}$. And similarly, it cannot be that the two words are $z_{i_j}$ and $z_{i_k}$ with $j < k$, because then $w_{i_j} = a_{i_j} z_{i_j}$ would embed into $w_{i_k} = a_{i_k} z_{i_k}$, since $a_{i_j} \le a_{i_k}$. In every case we arrive at a contradiction.
Ordinal type
The ordinal type of $\Sigma^*$, ordered by the embedding relation, is determined by the ordinal type of $\Sigma$.
Reverse-mathematical calibration
Higman's lemma has been reverse-mathematically calibrated (in terms of subsystems of second-order arithmetic) as equivalent to $\mathrm{WO}(\omega^{\omega^\omega})$, the well-orderedness of the ordinal $\omega^{\omega^\omega}$, over the base theory $\mathrm{RCA}_0$.
References
Citations
Wellfoundedness
Order theory
Lemmas | Higman's lemma | ["Mathematics"] | 513 | ["Mathematical induction", "Order theory", "Wellfoundedness", "Combinatorics", "Combinatorics stubs", "Mathematical problems", "Mathematical theorems", "Lemmas"] |
4,187,554 | https://en.wikipedia.org/wiki/Chinese%20magic%20mirror | Chinese magic mirrors trace back to at least the 5th century, although their existence during the Han dynasty (206 BC – 24 AD) has been claimed. The mirrors were made out of solid bronze. The front was polished and could be used as a mirror, while the back has a design cast in the bronze, or other decoration. When sunlight or other bright light shines onto the mirror, the mirror appears to become transparent. If that light is reflected from the mirror onto a wall, the pattern on the back of the mirror is then projected onto the wall.
Bronze mirrors were the standard in many Eurasian cultures, but most lacked this characteristic, as did most Chinese bronze mirrors.
Construction
Robert Temple describes their construction:
The basic mirror shape, with the design on the back, was cast flat, and the convexity of the surface produced afterwards by elaborate scraping and scratching. The surface was then polished to become shiny. The stresses set up by these processes caused the thinner parts of the surface to bulge outwards and become more convex than the thicker portions. Finally, a mercury amalgam was laid over the surface; this created further stresses and preferential buckling. The result was that imperfections of the mirror surface matched the patterns on the back, although they were too minute to be seen by the eye. But when the mirror reflected bright sunlight against a wall, with the resultant magnification of the whole image, the effect was to reproduce the patterns as if they were passing through the solid bronze by way of light beams.
History
China
In about 800 AD, during the Tang dynasty (618–907), a book entitled Record of Ancient Mirrors described the method of crafting solid bronze mirrors with decorations, written characters, or patterns on the reverse side that could cast these in a reflection on a nearby surface as light struck the front, polished side of the mirror; due to this seemingly transparent effect, they were called "light-penetration mirrors" by the Chinese.
This Tang-era book was lost over the centuries, but magic mirrors were described in the Dream Pool Essays by Shen Kuo (1031–1095), who owned three of them as family heirlooms. Perplexed as to how solid metal could be transparent, Shen guessed that some sort of quenching technique was used to produce tiny wrinkles on the face of the mirror too small to be observed by the eye. Although his explanation of different cooling rates was incorrect, he was right to suggest the surface contained minute variations which the naked eye could not detect; these mirrors also had no transparent quality at all, as discovered by the British scientist William Bragg in 1932. Bragg noted that "Only the magnifying effect of reflection makes them [the designs] plain".
Japan
As the manufacture of mirrors in China increased, it expanded to Korea and Japan. In fact, Emperor Cao Rui and the Wei Kingdom of China gave numerous bronze mirrors (known as Shinju-kyo in Japan) to Queen Himiko of Wa (Japan), where they were received as rare and mysterious objects. They were described as "sources of honesty" as they were said to reflect all good and evil without error. That is why Japan considers a sacred mirror called Yata-no-Kagami to be one of the three great imperial treasures.
Today, Yamamoto Akihisa is said to be the last manufacturer of magic mirrors in Japan. The Kyoto Journal interviewed the craftsman and he explained a small portion of the technique, that he learned from his father.
Western Europe
The first magic mirror to appear in Western Europe was owned by the director of the Paris Observatory, who, on his return from China, brought several mirrors, one of them magical. The latter was presented as an unknown object to the French Academy of Sciences in 1844. In total, just four magic mirrors were brought from China to Europe, but in 1878 two engineering professors presented to the Royal Society of London several models they had brought from Japan. The English called the artefacts "open mirrors" and for the first time made technical observations regarding their construction.
In 2022, the Cincinnati Art Museum discovered that they had a Chinese magic mirror in their collection. The curator, Hou-mei Sung, discovered that a mirror in their collection reflected an image of Amitabha, an important figure in Chinese Buddhism, his name being inscribed on the back of the mirror.
See also
TLV mirror
References
Chinese art
Optical illusions
Chinese inventions
Bronze mirrors | Chinese magic mirror | ["Physics"] | 901 | ["Optical phenomena", "Physical phenomena", "Optical illusions"] |
4,187,856 | https://en.wikipedia.org/wiki/Primary%20instrument | A primary instrument is a scientific instrument that, by its physical characteristics, is accurate and is not calibrated against anything else. A primary instrument must be able to be exactly duplicated anywhere, anytime, with identical results.
Example
Pressure. A U-tube filled with water is a primary instrument: the water-column differential is unchangeable, as water is a basic physical substance, so the instrument is accurate by its nature. Similarly, a liquid-in-glass thermometer is a primary instrument, as a temperature change causes a change in the height of the mercury column, whose differential is unchangeable.
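A worked example of reading such a U-tube (the density, gravity, and column height below are illustrative values, not from the text): the pressure differential follows directly from hydrostatics as $\Delta P = \rho g \Delta h$, with no calibration constant involved.

```python
# Hydrostatic reading of a water-filled U-tube manometer:
# delta_P = rho * g * delta_h, directly from physical constants.
rho_water = 998.0   # kg/m^3 at ~20 C (illustrative)
g = 9.80665         # m/s^2, standard gravity

def u_tube_pressure(delta_h_m: float) -> float:
    """Pressure differential (Pa) for a column height difference in metres."""
    return rho_water * g * delta_h_m

print(u_tube_pressure(0.25))  # 0.25 m of water ~ 2447 Pa
```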
Secondary instruments
Secondary instruments must be calibrated against a primary standard. For example:
a dial Bourdon-tube pressure gauge must be calibrated against a water or mercury U-tube to ensure good accuracy.
Time. The Earth moving in its orbit is primary; clocks must be calibrated against it.
Measurement | Primary instrument | ["Physics", "Mathematics"] | 179 | ["Quantity", "Physical quantities", "Measurement", "Size"] |
4,189,127 | https://en.wikipedia.org/wiki/River%20ecosystem | River ecosystems are flowing waters that drain the landscape, and include the biotic (living) interactions amongst plants, animals and micro-organisms, as well as abiotic (nonliving) physical and chemical interactions of its many parts. River ecosystems are part of larger watershed networks or catchments, where smaller headwater streams drain into mid-size streams, which progressively drain into larger river networks. The major zones in river ecosystems are determined by the river bed's gradient or by the velocity of the current. Faster moving turbulent water typically contains greater concentrations of dissolved oxygen, which supports greater biodiversity than the slow-moving water of pools. These distinctions form the basis for the division of rivers into upland and lowland rivers.
The food base of streams within riparian forests is mostly derived from the trees, but wider streams and those that lack a canopy derive the majority of their food base from algae. Anadromous fish are also an important source of nutrients. Environmental threats to rivers include loss of water, dams, chemical pollution and introduced species. A dam produces negative effects that continue down the watershed. The most important negative effects are the reduction of spring flooding, which damages wetlands, and the retention of sediment, which leads to the loss of deltaic wetlands.
River ecosystems are prime examples of lotic ecosystems. Lotic refers to flowing water, from the Latin lotus, meaning "washed". Lotic waters range from springs only a few centimeters wide to major rivers kilometers in width. Much of this article applies to lotic ecosystems in general, including related lotic systems such as streams and springs. Lotic ecosystems can be contrasted with lentic ecosystems, which involve relatively still terrestrial waters such as lakes, ponds, and wetlands. Together, these two ecosystems form the more general study area of freshwater or aquatic ecology.
The following unifying characteristics make the ecology of running waters unique among aquatic habitats: the flow is unidirectional; there is a state of continuous physical change; there is a high degree of spatial and temporal heterogeneity at all scales (microhabitats); the variability between lotic systems is quite high; and the biota is specialized to live with flow conditions.
Abiotic components (non-living)
The non-living components of an ecosystem are called abiotic components.
Examples include stone, air, and soil.
Water flow
Unidirectional water flow is the key factor in lotic systems influencing their ecology. Streamflow can be continuous or intermittent, though. Streamflow is the result of the summative inputs from groundwater, precipitation, and overland flow. Water flow can vary between systems, ranging from torrential rapids to slow backwaters that almost seem like lentic systems. The velocity of the water flow within the water column can also vary within a system and is subject to chaotic turbulence, though water velocity tends to be highest in the middle part of the stream channel (known as the thalweg). This turbulence results in divergences of flow from the mean downslope flow vector as typified by eddy currents. The mean flow rate vector is based on the variability of friction with the bottom or sides of the channel, sinuosity, obstructions, and the incline gradient. In addition, the amount of water input into the system from direct precipitation, snowmelt, and/or groundwater can affect the flow rate. The amount of water in a stream is measured as discharge (volume per unit time). As water flows downstream, streams and rivers most often gain water volume, so at base flow (i.e., no storm input), smaller headwater streams have very low discharge, while larger rivers have much higher discharge. The "flow regime" of a river or stream includes the general patterns of discharge over annual or decadal time scales, and may capture seasonal changes in flow.
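Discharge in this sense is commonly estimated from the velocity-area relation Q = mean velocity × cross-sectional area; the cross-section numbers in this sketch are invented for illustration:

```python
# Velocity-area estimate of stream discharge: Q = mean velocity * area.
width_m = 4.0            # stream width (illustrative)
mean_depth_m = 0.6       # mean depth of the cross-section (illustrative)
mean_velocity_ms = 0.35  # mean current velocity (illustrative)

area_m2 = width_m * mean_depth_m
discharge_m3s = mean_velocity_ms * area_m2
print(f"Q = {discharge_m3s:.2f} m^3/s")  # 0.84 m^3/s
```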
While water flow is strongly determined by slope, flowing waters can alter the general shape or direction of the stream bed, a characteristic also known as geomorphology. The profile of the river water column is made up of three primary actions: erosion, transport, and deposition. Rivers have been described as "the gutters down which run the ruins of continents". Rivers are continuously eroding, transporting, and depositing substrate, sediment, and organic material. The continuous movement of water and entrained material creates a variety of habitats, including riffles, glides, and pools.
Light
Light is important to lotic systems, because it provides the energy necessary to drive primary production via photosynthesis, and can also provide refuge for prey species in shadows it casts. The amount of light that a system receives can be related to a combination of internal and external stream variables. The area surrounding a small stream, for example, might be shaded by surrounding forests or by valley walls. Larger river systems tend to be wide, so the influence of external variables is minimized and the sun reaches the surface. These rivers also tend to be more turbulent, however, and particles in the water increasingly attenuate light as depth increases. Seasonal and diurnal factors might also play a role in light availability, because the angle of incidence (the angle at which light strikes the water) determines how much light is lost to reflection: the shallower the angle, the more light is reflected. In accordance with Beer's law, the amount of solar radiation received then declines exponentially with depth. Additional influences on light availability include cloud cover, altitude, and geographic position.
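A minimal sketch of this attenuation (the surface irradiance and attenuation coefficient are arbitrary placeholders; turbid water would attenuate far more strongly):

```python
import math

I0 = 1000.0  # surface irradiance, W/m^2 (illustrative)
k = 0.5      # attenuation coefficient, 1/m (illustrative placeholder)

def irradiance(depth_m: float) -> float:
    """Beer's law: I(z) = I0 * exp(-k * z)."""
    return I0 * math.exp(-k * depth_m)

for z in (0.0, 1.0, 2.0, 5.0):
    print(f"depth {z:>4.1f} m: {irradiance(z):7.1f} W/m^2")
```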
Temperature
Most lotic species are poikilotherms whose internal temperature varies with their environment, thus temperature is a key abiotic factor for them. Water can be heated or cooled through radiation at the surface and conduction to or from the air and surrounding substrate. Shallow streams are typically well mixed and maintain a relatively uniform temperature within an area. In deeper, slower moving water systems, however, a strong difference between the bottom and surface temperatures may develop. Spring fed systems have little variation as springs are typically from groundwater sources, which are often very close to ambient temperature. Many systems show strong diurnal fluctuations and seasonal variations are most extreme in arctic, desert and temperate systems. The amount of shading, climate and elevation can also influence the temperature of lotic systems.
Chemistry
Water chemistry in river ecosystems varies depending on which dissolved solutes and gases are present in the water column of the stream. Specifically river water can include, apart from the water itself,
dissolved inorganic matter and major ions (calcium, sodium, magnesium, potassium, bicarbonate, sulphide, chloride)
dissolved inorganic nutrients (nitrogen, phosphorus, silica)
suspended and dissolved organic matter
gases (nitrogen, nitrous oxide, carbon dioxide, oxygen)
trace metals and pollutants
Dissolved ions and nutrients
Dissolved stream solutes can be considered either reactive or conservative. Reactive solutes are readily biologically assimilated by the autotrophic and heterotrophic biota of the stream; examples can include inorganic nitrogen species such as nitrate or ammonium, some forms of phosphorus (e.g., soluble reactive phosphorus), and silica. Other solutes can be considered conservative, which indicates that the solute is not taken up and used biologically; chloride is often considered a conservative solute. Conservative solutes are often used as hydrologic tracers for water movement and transport. Both reactive and conservative stream water chemistry is foremost determined by inputs from the geology of its watershed, or catchment area. Stream water chemistry can also be influenced by precipitation, and the addition of pollutants from human sources. Large differences in chemistry do not usually exist within small lotic systems due to a high rate of mixing. In larger river systems, however, the concentrations of most nutrients, dissolved salts, and pH decrease as distance increases from the river's source.
Dissolved gases
In terms of dissolved gases, oxygen is likely the most important chemical constituent of lotic systems, as all aerobic organisms require it for survival. It enters the water mostly via diffusion at the water-air interface. Oxygen's solubility in water decreases as water pH and temperature increases. Fast, turbulent streams expose more of the water's surface area to the air and tend to have low temperatures and thus more oxygen than slow, backwaters. Oxygen is a byproduct of photosynthesis, so systems with a high abundance of aquatic algae and plants may also have high concentrations of oxygen during the day. These levels can decrease significantly during the night when primary producers switch to respiration. Oxygen can be limiting if circulation between the surface and deeper layers is poor, if the activity of lotic animals is very high, or if there is a large amount of organic decay occurring.
Suspended matter
Rivers can also transport suspended inorganic and organic matter. These materials can include sediment or terrestrially derived organic matter that falls into the stream channel. Often, organic matter is processed within the stream via mechanical fragmentation, consumption and grazing by invertebrates, and microbial decomposition. Leaves and woody debris break down from recognizable coarse particulate organic matter (CPOM) into particulate organic matter (POM), and further down to fine particulate organic matter (FPOM). Woody and non-woody plants have different instream breakdown rates, with leafy plants or plant parts (e.g., flower petals) breaking down faster than woody logs or branches.
Substrate
The inorganic substrate of lotic systems is composed of the geologic material present in the catchment that is eroded, transported, sorted, and deposited by the current. Inorganic substrates are classified by size on the Wentworth scale, which ranges from boulders, to pebbles, to gravel, to sand, and to silt. Typically, substrate particle size decreases downstream with larger boulders and stones in more mountainous areas and sandy bottoms in lowland rivers. This is because the higher gradients of mountain streams facilitate a faster flow, moving smaller substrate materials further downstream for deposition. Substrate can also be organic and may include fine particles, autumn shed leaves, large woody debris such as submerged tree logs, moss, and semi-aquatic plants. Substrate deposition is not necessarily a permanent event, as it can be subject to large modifications during flooding events.
Biotic components (living)
The living components of an ecosystem are called the biotic components. Streams have numerous types of biotic organisms that live in them, including bacteria, primary producers, insects and other invertebrates, as well as fish and other vertebrates.
Microorganisms
Bacteria are present in large numbers in lotic waters. Free-living forms are associated with decomposing organic material, biofilm on the surfaces of rocks and vegetation, in between particles that compose the substrate, and suspended in the water column. Other forms are also associated with the guts of lotic organisms as parasites or in commensal relationships. Bacteria play a large role in energy recycling (see below).
Diatoms are one of the main dominant groups of periphytic algae in lotic systems and have been widely used as efficient indicators of water quality, because they respond quickly to environmental changes, especially organic pollution and eutrophication, with a broad spectrum of tolerances to conditions ranging from oligotrophic to eutrophic.
Biofilm
A biofilm is a combination of algae (diatoms etc.), fungi, bacteria, and other small microorganisms that exist in a film along the streambed or the benthos. Biofilm assemblages themselves are complex, and add to the complexity of a streambed.
The different biofilm components (algae and bacteria are the principal components) are embedded in an exopolysaccharide matrix (EPS); they are net receptors of inorganic and organic elements and remain subject to the influence of the different environmental factors.
Biofilms are one of the main biological interphases in river ecosystems, and probably the most important in intermittent rivers, where the importance of the water column is reduced during extended low-activity periods of the hydrological cycle. Biofilms can be understood as microbial consortia of autotrophs and heterotrophs, coexisting in a matrix of hydrated extracellular polymeric substances (EPS). These two main biological components are respectively mainly algae and cyanobacteria on one side, and bacteria and fungi on the other. Micro- and meiofauna also inhabit the biofilm, preying on the organisms and organic particles and contributing to its evolution and dispersal. Biofilms therefore form a highly active biological consortium, ready to use organic and inorganic materials from the water phase, and also ready to use light or chemical energy sources. The EPS immobilize the cells and keep them in close proximity, allowing for intense interactions including cell-cell communication and the formation of synergistic consortia. The EPS is able to retain extracellular enzymes and therefore allows the utilization of materials from the environment and the transformation of these materials into dissolved nutrients for use by algae and bacteria. At the same time, the EPS helps protect the cells from desiccation as well as from other hazards (e.g., biocides, UV radiation, etc.) from the outer world. On the other hand, the packing and the EPS protection layer limit the diffusion of gases and nutrients, especially for the cells far from the biofilm surface, and this limits their survival and creates strong gradients within the biofilm. Both the biofilm's physical structure and the plasticity of the organisms that live within it ensure and support their survival in harsh environments or under changing environmental conditions.
Primary producers
Algae, consisting of phytoplankton and periphyton, are the most significant sources of primary production in most streams and rivers. Phytoplankton float freely in the water column and thus are unable to maintain populations in fast flowing streams. They can, however, develop sizeable populations in slow moving rivers and backwaters. Periphyton are typically filamentous and tufted algae that can attach themselves to objects to avoid being washed away by fast currents. In places where flow rates are negligible or absent, periphyton may form a gelatinous, unanchored floating mat.
Plants exhibit limited adaptations to fast flow and are most successful in reduced currents. More primitive plants, such as mosses and liverworts, attach themselves to solid objects. This typically occurs in colder headwaters where the mostly rocky substrate offers attachment sites. Some plants are free-floating at the water's surface in dense mats, like duckweed or water hyacinth. Others are rooted and may be classified as submerged or emergent. Rooted plants usually occur in areas of slackened current where fine-grained soils are found. These rooted plants are flexible, with elongated leaves that offer minimal resistance to current.
Living in flowing water can be beneficial to plants and algae because the current is usually well aerated and it provides a continuous supply of nutrients. These organisms are limited by flow, light, water chemistry, substrate, and grazing pressure. Algae and plants are important to lotic systems as sources of energy, for forming microhabitats that shelter other fauna from predators and the current, and as a food resource.
Insects and other invertebrates
Up to 90% of invertebrates in some lotic systems are insects. These species exhibit tremendous diversity and can be found occupying almost every available habitat, including the surfaces of stones, deep below the substratum in the hyporheic zone, adrift in the current, and in the surface film.
Insects have developed several strategies for living in the diverse flows of lotic systems. Some avoid high current areas, inhabiting the substratum or the sheltered side of rocks. Others have flat bodies to reduce the drag forces they experience from living in running water. Some insects, like the giant water bug (Belostomatidae), avoid flood events by leaving the stream when they sense rainfall. In addition to these behaviors and body shapes, insects have different life history adaptations to cope with the naturally-occurring physical harshness of stream environments. Some insects time their life events based on when floods and droughts occur. For example, some mayflies synchronize when they emerge as flying adults with when snowmelt flooding usually occurs in Colorado streams. Other insects do not have a flying stage and spend their entire life cycle in the river.
Like most of the primary consumers, lotic invertebrates often rely heavily on the current to bring them food and oxygen. Invertebrates are important as both consumers and prey items in lotic systems.
The common orders of insects that are found in river ecosystems include Ephemeroptera (mayflies), Trichoptera (caddisflies), Plecoptera (stoneflies), Diptera (true flies), some types of Coleoptera (beetles), Odonata (the group that includes dragonflies and damselflies), and some types of Hemiptera (true bugs).
Additional invertebrate taxa common to flowing waters include mollusks such as snails, limpets, clams, and mussels, as well as crustaceans like crayfish, amphipods, and crabs.
Fish and other vertebrates
Fish are probably the best-known inhabitants of lotic systems. The ability of a fish species to live in flowing waters depends upon the speed at which it can swim and the duration for which that speed can be maintained. This ability can vary greatly between species and is tied to the habitat in which it can survive. Continuous swimming expends a tremendous amount of energy and, therefore, fishes spend only short periods in full current. Instead, individuals remain close to the bottom or the banks, behind obstacles, and sheltered from the current, swimming in the current only to feed or change locations. Some species have adapted to living only on the system bottom, never venturing into the open water flow. These fishes are dorso-ventrally flattened to reduce flow resistance and often have eyes on top of their heads to observe what is happening above them. Some also have sensory barbels positioned under the head to assist in the testing of substratum.
Lotic systems typically connect to each other, forming a path to the ocean (spring → stream → river → ocean), and many fishes have life cycles that require stages in both fresh and salt water. Salmon, for example, are anadromous species that are born in freshwater but spend most of their adult life in the ocean, returning to fresh water only to spawn. Eels are catadromous species that do the opposite, living in freshwater as adults but migrating to the ocean to spawn.
Other vertebrate taxa that inhabit lotic systems include amphibians such as salamanders, reptiles (e.g. snakes, turtles, crocodiles and alligators), various bird species, and mammals (e.g., otters, beavers, hippos, and river dolphins). With the exception of a few species, these vertebrates are not tied to water as fishes are, and spend part of their time in terrestrial habitats. Many fish species are important as consumers and as prey species to the larger vertebrates mentioned above.
Trophic level dynamics
The concept of trophic levels is used in food webs to visualise the manner in which energy is transferred from one part of an ecosystem to another. Trophic levels can be assigned numbers indicating how far an organism is along the food chain.
Level one: Producers, plant-like organisms that generate their own food using solar radiation, including algae, phytoplankton, mosses and lichens.
Level two: Consumers, animal-like organisms that get their energy from eating producers, such as zooplankton, small fish, and crustaceans.
Level three: Decomposers, organisms that break down the dead matter of consumers and producers and return the nutrients back to the system. Examples are bacteria and fungi.
All energy transactions within an ecosystem derive from a single external source of energy, the sun. Some of this solar radiation is used by producers (plants) to turn inorganic substances into organic substances which can be used as food by consumers (animals). Plants release portions of this energy back into the ecosystem through a catabolic process. Animals then consume the potential energy that is being released from the producers. This system is followed by the death of the consumer organism, which returns nutrients back into the ecosystem. This allows further growth for the plants, and the cycle continues. Breaking cycles down into levels makes it easier for ecologists to understand ecological succession when observing the transfer of energy within a system.
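As a toy illustration of energy moving up these levels, the sketch below applies a fixed per-level transfer efficiency; the 10% figure is a common textbook rule of thumb, not a value from this article:

```python
# Toy energy-flow model: each trophic level passes on only a fixed
# fraction of the energy it receives (10% is a textbook rule of thumb).
producers_kcal = 10_000.0
efficiency = 0.10

energy = producers_kcal
for level in ("producers", "primary consumers", "secondary consumers"):
    print(f"{level:20s} {energy:10.1f} kcal")
    energy *= efficiency  # only a fraction reaches the next level
```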
Top-down and bottom-up effects
A common issue with trophic level dynamics is how resources and production are regulated. The usage of and interaction between resources have a large impact on the structure of food webs as a whole. Temperature plays a role in food web interactions, including top-down and bottom-up forces within ecological communities. Bottom-up regulation within a food web occurs when a resource available at the base or bottom of the food web increases productivity, which then climbs the chain and influences the biomass available to higher trophic organisms. Top-down regulation occurs when a predator population increases. This limits the available prey population, which limits the availability of energy for lower trophic levels within the food chain. Many biotic and abiotic factors can influence top-down and bottom-up interactions.
Trophic cascade
Another example of food web interactions is the trophic cascade. Understanding trophic cascades has allowed ecologists to better understand the structure and dynamics of food webs within an ecosystem. The phenomenon of trophic cascades allows keystone predators to structure an entire food web in terms of how they interact with their prey. Trophic cascades can cause drastic changes in the energy flow within a food web. For example, when a top or keystone predator consumes organisms below it in the food web, the density and behavior of the prey will change. This, in turn, affects the abundance of organisms consumed further down the chain, resulting in a cascade down the trophic levels. However, empirical evidence shows trophic cascades are much more prevalent in terrestrial food webs than in aquatic food webs.
Food chain
A food chain is a linear system of links that is part of a food web, and represents the order in which organisms are consumed from one trophic level to the next. Each link in a food chain is associated with a trophic level in the ecosystem. The number of steps it takes for the initial source of energy, starting from the bottom, to reach the top of the food web is called the food chain length. While food chain lengths can fluctuate, aquatic ecosystems start with primary producers that are consumed by primary consumers, which are consumed by secondary consumers, and those in turn can be consumed by tertiary consumers, and so on until the top of the food chain has been reached.
Primary producers
Primary producers start every food chain. Their production of energy and nutrients comes from the sun through photosynthesis. Algae contribute much of the energy and nutrients at the base of the food chain, along with terrestrial litter-fall that enters the stream or river. Production of organic compounds, such as fixed carbon, is what gets transferred up the food chain. Primary producers are consumed by herbivorous invertebrates that act as the primary consumers. The productivity of these producers and the function of the ecosystem as a whole are influenced by the organisms above them in the food chain.
Primary consumers
Primary consumers are the invertebrates and macro-invertebrates that feed upon the primary producers. They play an important role in initiating the transfer of energy from the base trophic level to the next. They are regulatory organisms which facilitate and control rates of nutrient cycling and the mixing of aquatic and terrestrial plant materials. They also transport and retain some of those nutrients and materials. There are many different functional groups of these invertebrates, including grazers, which feed on the algal biofilm that collects on submerged objects; shredders, which feed on large leaves and detritus and help break down large material; filter feeders, macro-invertebrates that rely on stream flow to deliver them fine particulate organic matter (FPOM) suspended in the water column; and gatherers, which feed on FPOM found on the substrate of the river or stream.
Secondary consumers
The secondary consumers in a river ecosystem are the predators of the primary consumers. These are mainly insectivorous fish. Consumption of invertebrate insects and macro-invertebrates is another step of energy flow up the food chain. Depending on their abundance, these predatory consumers can shape an ecosystem by the manner in which they affect the trophic levels below them. When fish are at high abundance and eat many invertebrates, then algal biomass and primary production in the stream are greater, and when secondary consumers are not present, algal biomass may decrease due to the high abundance of primary consumers. Energy and nutrients that start with primary producers continue to make their way up the food chain and, depending on the ecosystem, may end with these predatory fish.
Food web complexity
Diversity, productivity, species richness, composition, and stability are all interconnected by a series of feedback loops. Communities can have a series of complex, direct and/or indirect, responses to major changes in biodiversity. Food webs can include a wide array of variables; the three main variables ecologists examine are species richness, biomass of productivity, and stability/resistance to change. When a species is added to or removed from an ecosystem, it will have an effect on the remaining food web; the intensity of this effect is related to species connectedness and food web robustness. When a new species is added to a river ecosystem, the intensity of the effect is related to the robustness, or resistance to change, of the current food web. When a species is removed from a river ecosystem, the intensity of the effect is related to the connectedness of the species to the food web. An invasive species could be removed with little to no effect, but if important native primary producers, prey, or predatory fish are removed, a negative trophic cascade can result. One highly variable component of river ecosystems is food supply (biomass of primary producers). Food supply, or the type of producers, is ever changing with the seasons and differing habitats within the river ecosystem. Another highly variable component of river ecosystems is nutrient input from wetland and terrestrial detritus. Food and nutrient supply variability is important for the succession, robustness, and connectedness of river ecosystem organisms.
Trophic relationships
Energy inputs
Energy sources can be autochthonous or allochthonous.
Autochthonous (from the Greek "auto" = "self") energy sources are those derived from within the lotic system. During photosynthesis, for example, primary producers form organic carbon compounds out of carbon dioxide and inorganic matter. The energy they produce is important for the community because it may be transferred to higher trophic levels via consumption. Additionally, high rates of primary production can introduce dissolved organic matter (DOM) to the waters. Another form of autochthonous energy comes from the decomposition of dead organisms and feces that originate within the lotic system. In this case, bacteria decompose the detritus or coarse particulate organic material (CPOM; >1 mm pieces) into fine particulate organic matter (FPOM; <1 mm pieces) and then further into inorganic compounds that are required for photosynthesis. This process is discussed in more detail below.
Allochthonous energy sources are those derived from outside the lotic system, that is, from the terrestrial environment. Leaves, twigs, fruits, etc. are typical forms of terrestrial CPOM that have entered the water by direct litter fall or lateral leaf blow. In addition, terrestrial animal-derived materials, such as feces or carcasses that have been added to the system, are examples of allochthonous CPOM. The CPOM undergoes a specific process of degradation. Allan gives the example of a leaf fallen into a stream. First, the soluble chemicals are dissolved and leached from the leaf upon its saturation with water. This adds to the DOM load in the system. Next, microbes such as bacteria and fungi colonize the leaf, softening it as the mycelium of the fungus grows into it. The composition of the microbial community is influenced by the species of tree from which the leaves are shed (Rubbo and Kiesecker 2004). This combination of bacteria, fungi, and leaf is a food source for shredding invertebrates, which leave only FPOM after consumption. These fine particles may be colonized by microbes again or serve as a food source for animals that consume FPOM. Organic matter can also enter the lotic system already in the FPOM stage by wind, surface runoff, bank erosion, or groundwater. Similarly, DOM can be introduced through canopy drip from rain or from surface flows.
Invertebrates
Invertebrates can be organized into many feeding guilds in lotic systems. Some species are shredders, which use large and powerful mouthparts to feed on non-woody CPOM and its associated microorganisms. Others are suspension feeders, which use their setae, filtering apparatuses, nets, or even secretions to collect FPOM and microbes from the water. These species may be passive collectors, utilizing the natural flow of the system, or they may generate their own current to draw in water, and with it FPOM (Allan). Members of the gatherer-collector guild actively search for FPOM under rocks and in other places where the stream flow has slackened enough to allow deposition. Grazing invertebrates utilize scraping, rasping, and browsing adaptations to feed on periphyton and detritus. Finally, several families are predatory, capturing and consuming animal prey. Both the number of species and the abundance of individuals within each guild are largely dependent upon food availability. Thus, these values may vary across both seasons and systems.
Fish
Fish can also be placed into feeding guilds. Planktivores pick plankton out of the water column. Herbivore-detritivores are bottom-feeding species that ingest both periphyton and detritus indiscriminately. Surface and water column feeders capture surface prey (mainly terrestrial and emerging insects) and drift (benthic invertebrates floating downstream). Benthic invertebrate feeders prey primarily on immature insects, but will also consume other benthic invertebrates. Top predators consume fishes and/or large invertebrates. Omnivores ingest a wide range of prey. These can be floral, faunal, and/or detrital in nature. Finally, parasites live off of host species, typically other fishes. Fish are flexible in their feeding roles, capturing different prey with regard to seasonal availability and their own developmental stage. Thus, they may occupy multiple feeding guilds in their lifetime. The number of species in each guild can vary greatly between systems, with temperate warm water streams having the most benthic invertebrate feeders, and tropical systems having large numbers of detritus feeders due to high rates of allochthonous input.
Community patterns and diversity
Local species richness
Large rivers have comparatively more species than small streams. Many relate this pattern to the greater area and volume of larger systems, as well as an increase in habitat diversity. Some systems, however, show a poor fit between system size and species richness. In these cases, a combination of factors such as historical rates of speciation and extinction, type of substrate, microhabitat availability, water chemistry, temperature, and disturbance such as flooding seem to be important.
Resource partitioning
Although many alternate theories have been postulated for the ability of guild-mates to coexist (see Morin 1999), resource partitioning has been well documented in lotic systems as a means of reducing competition. The three main types of resource partitioning include habitat, dietary, and temporal segregation.
Habitat segregation was found to be the most common type of resource partitioning in natural systems (Schoener, 1974). In lotic systems, microhabitats provide a level of physical complexity that can support a diverse array of organisms (Vinson and Hawkins, 1998). The separation of species by substrate preferences has been well documented for invertebrates. Ward (1992) was able to divide substrate dwellers into six broad assemblages, including those that live in coarse substrate, gravel, sand, mud, or woody debris, and those associated with plants, showing one layer of segregation. On a smaller scale, further habitat partitioning can occur on or around a single substrate, such as a piece of gravel. Some invertebrates prefer the high-flow areas on the exposed top of the gravel, while others reside in the crevices between one piece of gravel and the next, while still others live on the bottom of the gravel piece.
Dietary segregation is the second-most common type of resource partitioning. High degrees of morphological specialization or behavioral differences allow organisms to use specific resources. The size of nets built by some species of invertebrate suspension feeders, for example, can filter varying particle sizes of FPOM from the water (Edington et al. 1984). Similarly, members of the grazing guild can specialize in the harvesting of algae or detritus depending upon the morphology of their scraping apparatus. In addition, certain species seem to show a preference for specific algal species.
Temporal segregation is a less common form of resource partitioning, but it is nonetheless an observed phenomenon. Typically, it accounts for coexistence by relating it to differences in life history patterns and the timing of maximum growth among guild mates. Tropical fishes in Borneo, for example, have shifted to shorter life spans in response to the reduction in ecological niche space that accompanies increasing levels of species richness in their ecosystem (Watson and Balon 1984).
Persistence and succession
Over long time scales, there is a tendency for species composition in pristine systems to remain in a stable state. This has been found for both invertebrate and fish species. On shorter time scales, however, flow variability and unusual precipitation patterns decrease habitat stability and can lead to declines in persistence levels. The ability to maintain this persistence over long time scales is related to the ability of lotic systems to return to the original community configuration relatively quickly after a disturbance (Townsend et al. 1987). This is one example of temporal succession, a site-specific change in a community involving changes in species composition over time. Another form of temporal succession might occur when a new habitat is opened up for colonization. In these cases, an entirely new community that is well adapted to the conditions found in this new area can establish itself.
River continuum concept
The river continuum concept (RCC) was an attempt to construct a single framework to describe the function of temperate lotic ecosystems from the headwaters to larger rivers and relate key characteristics to changes in the biotic community (Vannote et al. 1980). The physical basis for the RCC is size and location along the gradient from a small stream eventually linked to a large river. Stream order (see characteristics of streams) is used as the physical measure of position along the RCC.
According to the RCC, low-ordered sites are small shaded streams where allochthonous inputs of CPOM are a necessary resource for consumers. As the river widens at mid-ordered sites, energy inputs should change. Ample sunlight should reach the bottom in these systems to support significant periphyton production. Additionally, the biological processing of CPOM (coarse particulate organic matter larger than 1 mm) inputs at upstream sites is expected to result in the transport of large amounts of FPOM (fine particulate organic matter smaller than 1 mm) to these downstream ecosystems. Plants should become more abundant at the edges of the river with increasing river size, especially in lowland rivers where finer sediments have been deposited and facilitate rooting. The main channels likely have too much current and turbidity and a lack of substrate to support plants or periphyton. Phytoplankton should produce the only autochthonous inputs here, but photosynthetic rates will be limited due to turbidity and mixing. Thus, allochthonous inputs are expected to be the primary energy source for large rivers. This FPOM will come from both upstream sites via the decomposition process and through lateral inputs from floodplains.
Biota should change with this change in energy from the headwaters to the mouth of these systems. Namely, shredders should prosper in low-ordered systems and grazers in mid-ordered sites. Microbial decomposition should play the largest role in energy production for low-ordered sites and large rivers, while photosynthesis, in addition to degraded allochthonous inputs from upstream, will be essential in mid-ordered systems. As mid-ordered sites will theoretically receive the largest variety of energy inputs, they might be expected to host the most biological diversity (Vannote et al. 1980).
Just how well the RCC actually reflects patterns in natural systems is uncertain and its generality can be a handicap when applied to diverse and specific situations. The most noted criticisms of the RCC are: 1. It focuses mostly on macroinvertebrates, disregarding that plankton and fish diversity is highest in high orders; 2. It relies heavily on the fact that low ordered sites have high CPOM inputs, even though many streams lack riparian habitats; 3. It is based on pristine systems, which rarely exist today; and 4. It is centered around the functioning of temperate streams. Despite its shortcomings, the RCC remains a useful idea for describing how the patterns of ecological functions in a lotic system can vary from the source to the mouth.
Disturbances such as impoundment by dams or natural events such as shore flooding are not included in the RCC model. Various researchers have since expanded the model to account for such irregularities. For example, J.V. Ward and J.A. Stanford came up with the Serial Discontinuity Concept in 1983, which addresses the impact of geomorphological disturbances such as impoundments and tributary inflows. The same authors presented the Hyporheic Corridor concept in 1993, in which the vertical (in depth) and lateral (from shore to shore) structural complexity of the river were connected. The flood pulse concept, developed by W. J. Junk in 1989, further modified by P. B. Bayley in 1990 and K. Tockner in 2000, takes into account the large amount of nutrients and organic material that makes its way into a river from the sediment of surrounding flooded land.
Human impacts
Humans exert a geomorphic force that now rivals that of the natural Earth. The period of human dominance has been termed the Anthropocene, and several dates have been proposed for its onset. Many researchers have emphasised the dramatic changes associated with the Industrial Revolution in Europe after about 1750 CE (Common Era) and the Great Acceleration in technology at about 1950 CE.
However, a detectable human imprint on the environment extends back for thousands of years, and an emphasis on recent changes minimises the enormous landscape transformation caused by humans in antiquity. Important earlier human effects with significant environmental consequences include megafaunal extinctions between 14,000 and 10,500 cal yr BP; domestication of plants and animals close to the start of the Holocene at 11,700 cal yr BP; agricultural practices and deforestation at 10,000 to 5000 cal yr BP; and widespread generation of anthropogenic soils at about 2000 cal yr BP. Key evidence of early anthropogenic activity is encoded in early fluvial successions, long predating anthropogenic effects that have intensified over the past centuries and led to the modern worldwide river crisis.
Pollution
River pollution can include but is not limited to: increasing sediment export, excess nutrients from fertilizer or urban runoff, sewage and septic inputs, plastic pollution, nano-particles, pharmaceuticals and personal care products, synthetic chemicals, road salt, inorganic contaminants (e.g., heavy metals), and even heat via thermal pollution. The effects of pollution often depend on the context and material, but can reduce ecosystem functioning, limit ecosystem services, reduce stream biodiversity, and impact human health.
Pollutant sources of lotic systems are hard to control because they can originate, often in small amounts, over a very wide area and enter the system at many locations along its length. While direct pollution of lotic systems has been greatly reduced in the United States under the government's Clean Water Act, contaminants from diffuse non-point sources remain a large problem. Agricultural fields often deliver large quantities of sediments, nutrients, and chemicals to nearby streams and rivers. Urban and residential areas can also add to this pollution when contaminants accumulate on impervious surfaces such as roads and parking lots and then drain into the system. Elevated nutrient concentrations, especially of nitrogen and phosphorus, which are key components of fertilizers, can increase periphyton growth, which can be particularly dangerous in slow-moving streams. Another pollutant, acid rain, forms from sulfur dioxide and nitrogen oxides emitted from factories and power stations. These substances readily dissolve in atmospheric moisture and enter lotic systems through precipitation. This can lower the pH of these sites, affecting all trophic levels from algae to vertebrates. Mean species richness and total species numbers within a system decrease with decreasing pH.
Flow modification
Flow modification can occur as a result of dams, water regulation and extraction, channel modification, and the destruction of the river floodplain and adjacent riparian zones.
Dams alter the flow, temperature, and sediment regime of lotic systems. Additionally, many rivers are dammed at multiple locations, amplifying the impact. Dams can cause enhanced clarity and reduced variability in stream flow, which in turn cause an increase in periphyton abundance. Invertebrates immediately below a dam can show reductions in species richness due to an overall reduction in habitat heterogeneity. Also, thermal changes can affect insect development, with abnormally warm winter temperatures obscuring cues to break egg diapause and overly cool summer temperatures leaving too few acceptable days to complete growth. Finally, dams fragment river systems, isolating previously continuous populations, and preventing the migrations of anadromous and catadromous species.
Invasive species
Invasive species have been introduced to lotic systems through both purposeful events (e.g. stocking game and food species) as well as unintentional events (e.g. hitchhikers on boats or fishing waders). These organisms can affect natives via competition for prey or habitat, predation, habitat alteration, hybridization, or the introduction of harmful diseases and parasites. Once established, these species can be difficult to control or eradicate, particularly because of the connectivity of lotic systems. Invasive species can be especially harmful in areas that have endangered biota, such as mussels in the Southeast United States, or those that have localized endemic species, like lotic systems west of the Rocky Mountains, where many species evolved in isolation.
See also
Betty's Brain software that "learns" about river ecosystems
Flood pulse concept
Lake ecosystem
Rheophile
Riparian zone
River continuum concept
River drainage system
RIVPACS
The Riverkeepers
Upland and lowland rivers
References
Further reading
Brown, A. L. 1987. Freshwater Ecology. Heinemann Educational Books, London. P. 163.
Carlisle, D. M. and M. D. Woodside. 2013. Ecological health in the nation's streams, United States Geological Survey. P. 6.
Edington, J. M., Edington, M. A., and J. A. Dorman. 1984. Habitat partitioning amongst hydropsychid larvae of a Malaysian stream. Entomologica 30: 123–129.
Hynes, H. B. N. 1970. Ecology of Running Waters. Originally published in Toronto by University of Toronto Press, 555 p.
Morin, P. J. 1999. Community Ecology. Blackwell Science, Oxford. P. 424.
Ward, J. V. 1992. Aquatic Insect Ecology: biology and habitat. Wiley, New York. P. 456.
External links
USGS real time stream flow data for gauged systems nationwide
Aquatic ecology
Ecosystems
Freshwater ecology
Limnology
Riparian zone
Rivers
Water streams | River ecosystem | [
"Biology",
"Environmental_science"
] | 9,102 | [
"Hydrology",
"Symbiosis",
"Aquatic ecology",
"Ecosystems",
"Riparian zone"
] |
4,189,435 | https://en.wikipedia.org/wiki/Dill%20oil | Dill oil is an essential oil extracted from the seeds or leaves/stems (dillweed) of the Dill plant. It can be used with water to create dill water. Dill (Anethum graveolens) is an annual herb in the celery family Apiaceae. It is the sole species of the genus Anethum.
Origin
Also known as Indian dill, and originally from Southwest Asia, dill is an annual or biennial herb that grows up to 1 meter (3 feet) tall. It has green feathery leaves and umbels of small yellow flowers, followed by tiny compressed seeds.
It was popular with the Egyptians, Greeks and Romans, who called it "Anethon", from which the botanical name was derived. The common name comes from the Anglo-Saxon dylle or dylla, which later changed to dill. The word means 'to lull', referring to the herb's soothing properties. In the Middle Ages it was used as a charm against witchcraft.
From 812 onwards, when Charlemagne, King of the Franks and Emperor of the Romans, ordered the extensive cultivation of this herb, it has been widely used, especially as a culinary herb.
Properties
Dill oil is known for its grass-like smell and its pale yellow color, with a watery viscosity.
Production
Dill oil is extracted by steam distillation, mainly from the seeds, or the whole herb, fresh or partly dried.
References
Essential oils | Dill oil | [
"Chemistry"
] | 301 | [
"Essential oils",
"Natural products"
] |
4,189,740 | https://en.wikipedia.org/wiki/Plant%20defense%20against%20herbivory | Plant defense against herbivory or host-plant resistance is a range of adaptations evolved by plants which improve their survival and reproduction by reducing the impact of herbivores. Many plants produce secondary metabolites, known as allelochemicals, that influence the behavior, growth, or survival of herbivores. These chemical defenses can act as repellents or toxins to herbivores or reduce plant digestibility. Another defensive strategy of plants is changing their attractiveness. Plants can sense being touched, and they can respond with strategies to defend against herbivores. Plants alter their appearance by changing their size or quality in a way that prevents overconsumption by large herbivores, reducing the rate at which they are consumed.
Other defensive strategies used by plants include escaping or avoiding herbivores at any time and in any place, for example, by growing in a location where plants are not easily found or accessed by herbivores, or by changing seasonal growth patterns. Another approach diverts herbivores toward eating non-essential parts or enhances the ability of a plant to recover from the damage caused by herbivory. Some plants support the presence of natural enemies of herbivores, which protect the plant. Each type of defense can be either constitutive (always present in the plant) or induced (produced in reaction to damage or stress caused by herbivores).
Historically, insects have been the most significant herbivores, and the evolution of land plants is closely associated with the evolution of insects. While most plant defenses are directed against insects, other defenses have evolved that are aimed at vertebrate herbivores, such as birds and mammals. The study of plant defenses against herbivory is important from an evolutionary viewpoint; for the direct impact that these defenses have on agriculture, including human and livestock food sources; as beneficial 'biological control agents' in biological pest control programs; and in the search for plants of medical importance.
Evolution of defensive traits
The earliest land plants evolved from aquatic plants around 450 million years ago (Ma) in the Ordovician period. Many plants have adapted to an iodine-deficient terrestrial environment by removing iodine from their metabolism; in fact, iodine is essential only for animal cells. An important antiparasitic action is caused by the blockage of iodide transport in animal cells through inhibition of the sodium-iodide symporter (NIS). Many plant pesticides are glycosides (such as cardiac digitoxin) and cyanogenic glycosides that liberate cyanide, which, by blocking cytochrome c oxidase and NIS, is poisonous to a large part of parasites and herbivores but not to the plant cells, in which it seems useful in the seed dormancy phase. Iodide is not itself a pesticide, but it is oxidized by vegetable peroxidase to iodine, which is a strong oxidant able to kill bacteria, fungi, and protozoa.
The Cretaceous period saw the appearance of more plant defense mechanisms. The diversification of flowering plants (angiosperms) at that time is associated with the sudden burst of speciation in insects. This diversification of insects represented a major selective force in plant evolution and led to the selection of plants that had defensive adaptations. Early insect herbivores were mandibulate and bit or chewed vegetation, but the evolution of vascular plants led to the co-evolution of other forms of herbivory, such as sap-sucking, leaf mining, gall forming, and nectar-feeding.
The relative abundance of different species of plants in ecological communities including forests and grasslands may be determined in part by the level of defensive compounds in the different species. Since the cost of replacing damaged leaves is higher in conditions where resources are scarce, it may be that plants growing in areas where water and nutrients are scarce invest more resources into anti-herbivore defenses, resulting in slower plant growth.
Records of herbivores
Knowledge of herbivory in geological time comes from three sources: fossilized plants, which may preserve evidence of defense (such as spines) or herbivory-related damage; the observation of plant debris in fossilised animal feces; and the structure of herbivore mouthparts.
Long thought to be a Mesozoic phenomenon, evidence for herbivory is found almost as soon as fossils can show it. As previously discussed, the first land plants emerged around 450 million years ago; however, herbivory, and therefore the need for plant defenses, undoubtedly evolved among aquatic organisms in ancient lakes and oceans. Within 20 million years of the first fossils of sporangia and stems towards the close of the Silurian, around 420 million years ago, there is evidence that plants were being consumed. Animals fed on the spores of early Devonian plants, and the Rhynie chert provides evidence that organisms fed on plants using a "pierce and suck" technique.
During the ensuing 75 million years, plants evolved a range of more complex organs, from roots to seeds. There was a gap of 50 to 100 million years between each organ's evolution and its being eaten. Hole feeding and skeletonization are recorded in the early Permian, with surface fluid feeding evolving by the end of that period.
Co-evolution
Herbivores are dependent on plants for food and have evolved mechanisms to obtain this food despite the evolution of a diverse arsenal of plant defenses. Herbivore adaptations to plant defense have been likened to offensive traits and consist of adaptations that allow increased feeding and use of a host plant. Relationships between herbivores and their host plants often result in reciprocal evolutionary change, called co-evolution. When an herbivore eats a plant, it selects for plants that can mount a defensive response. In cases where this relationship demonstrates specificity (the evolution of each trait is due to the other) and reciprocity (both traits must evolve), the species are thought to have co-evolved.
The "escape and radiation" mechanism for co-evolution presents the idea that adaptations in herbivores and their host plants have been the driving force behind speciation and have played a role in the radiation of insect species during the age of angiosperms. Some herbivores have evolved ways to hijack plant defenses to their own benefit by sequestering these chemicals and using them to protect themselves from predators. Plant defenses against herbivores are generally not complete, so plants tend to evolve some tolerance to herbivory.
Types
Plant defenses can be classified as constitutive or induced. Constitutive defenses are always present, while induced defenses are produced or mobilized to the site where a plant is injured. There is wide variation in the composition and concentration of constitutive defenses; these range from mechanical defenses to digestibility reducers and toxins. Many external mechanical defenses and quantitative defenses are constitutive, as they require large amounts of resources to produce and are costly to mobilize. A variety of molecular and biochemical approaches are used to determine the mechanisms of constitutive and induced defensive responses.
Induced defenses include secondary metabolites, as well as morphological and physiological changes. An advantage of inducible, as opposed to constitutive, defenses is that they are only produced when needed, and are therefore potentially less costly, especially when herbivory is variable. Modes of induced defence include systemic acquired resistance and induced systemic resistance.
Chemical defenses
The evolution of chemical defenses in plants is linked to the emergence of chemical substances that are not involved in the essential photosynthetic and metabolic activities. These substances, secondary metabolites, are organic compounds that are not directly involved in the normal growth, development or reproduction of organisms, and are often produced as by-products during the synthesis of primary metabolic products. Examples of these by-products include phenolics, flavonoids, and tannins. Although these secondary metabolites have been thought to play a major role in defenses against herbivores, a meta-analysis of recent relevant studies has suggested that they have either a more minimal involvement (when compared to other non-secondary metabolites, such as primary chemistry and physiology) or a more complex involvement in defense.
Plants can communicate through the air. The release of pheromones and other scents can be detected by leaves and regulate the plant immune response. In other words, plants produce volatile organic compounds (VOCs) to warn other plants of danger so that they can change their state to better respond to threats and survive. These warning signals produced by infested neighboring trees allow undamaged trees to preemptively activate the necessary defense mechanisms. A damaged plant also transmits nonvolatile warning signals within itself, in addition to the airborne signals it sends to surrounding undamaged trees to strengthen their defense and immune systems. For instance, poplar and sugar maple trees have been shown to increase tannin production in response to cues from nearby damaged trees. In sagebrush, damaged plants send out airborne compounds, such as methyl jasmonate, to undamaged plants to increase proteinase inhibitor production and resistance to herbivory.
The release of unique VOCs and extrafloral nectar (EFN) allows plants to protect themselves against herbivores by attracting animals from the third trophic level. For example, caterpillar-damaged plants guide parasitic wasps to their prey through the release of chemical signals. The sources of these compounds are most likely glands in the leaves, which are ruptured when an herbivore chews. Injury by herbivores induces the release of linolenic acid and other enzymatic reactions in an octadecanoid cascade, leading to the synthesis of jasmonic acid, a hormone which plays a central role in regulating immune responses. Jasmonic acid induces the release of VOCs and EFN, which attract parasitic wasps and predatory mites that detect and feed on herbivores. These volatile organic compounds can also be released to nearby plants so they can prepare for potential threats. The volatile compounds emitted by plants are easily detected by third-trophic-level organisms, as these signals are unique to herbivore damage. An experiment measuring the VOCs from growing plants showed that signals are released immediately upon herbivore damage and slowly decline after the damage stops. It was also observed that plants release the strongest signals during the time of day at which animals tend to forage.
Since trees are sessile, they have established unique internal defense systems. For instance, when some trees experience herbivory, they release compounds that make their vegetation less palatable. The herbivore's saliva left on the leaves of the tree sends a chemical signal to the tree's cells. The tree cells respond by increasing production of salicylic acid, a phytohormone that is essential for regulating the plant immune system. This hormone then signals an increase in the production of chemicals called tannins within the leaves.
Antiherbivory compounds
Plants have evolved many secondary metabolites involved in plant defense, which are collectively known as antiherbivory compounds and can be classified into three sub-groups: nitrogen compounds (including alkaloids, cyanogenic glycosides, glucosinolates and benzoxazinoids), terpenoids, and phenolics.
Alkaloids are derived from various amino acids. Over 3,000 alkaloids are known, including nicotine, caffeine, morphine, cocaine, colchicine, ergolines, strychnine, and quinine. Alkaloids have pharmacological effects on humans and other animals. Some alkaloids can inhibit or activate enzymes, or alter carbohydrate and fat storage by inhibiting the formation of the phosphodiester bonds involved in their breakdown. Certain alkaloids bind to nucleic acids and can inhibit the synthesis of proteins and affect DNA repair mechanisms. Alkaloids can also affect cell membrane and cytoskeletal structure, causing the cells to weaken, collapse, or leak, and can affect nerve transmission. Although alkaloids act on a diversity of metabolic systems in humans and other animals, they almost uniformly invoke an aversively bitter taste.
Cyanogenic glycosides are stored in inactive forms in plant vacuoles. They become toxic when herbivores eat the plant and break cell membranes, allowing the glycosides to come into contact with enzymes in the cytoplasm, releasing hydrogen cyanide, which blocks cellular respiration. Glucosinolates are activated in much the same way as cyanogenic glucosides, and the products can cause gastroenteritis, salivation, diarrhea, and irritation of the mouth. Benzoxazinoids, such as DIMBOA, are secondary defence metabolites characteristic of certain grasses (Poaceae). Like cyanogenic glycosides, they are stored as inactive glucosides in the plant vacuole. Upon tissue disruption they come into contact with β-glucosidases from the chloroplasts, which enzymatically release the toxic aglucones. Whereas some benzoxazinoids are constitutively present, others are only synthesized following herbivore infestation and are thus considered inducible plant defenses against herbivory.
The terpenoids, sometimes referred to as isoprenoids, are organic chemicals similar to terpenes, derived from five-carbon isoprene units. There are over 10,000 known types of terpenoids. Most are multicyclic structures which differ from one another in both functional groups, and in basic carbon skeletons. Monoterpenoids, containing 2 isoprene units, are volatile essential oils such as citronella, limonene, menthol, camphor, and pinene. Diterpenoids, 4 isoprene units, are widely distributed in latex and resins, and can be quite toxic. Diterpenes are responsible for making Rhododendron leaves poisonous. Plant steroids and sterols are also produced from terpenoid precursors, including vitamin D, glycosides (such as digitalis) and saponins (which lyse red blood cells of herbivores).
Phenolics, sometimes called phenols, consist of an aromatic 6-carbon ring bonded to a hydroxy group. Some phenols have antiseptic properties, while others disrupt endocrine activity. Phenolics range from simple tannins to the more complex flavonoids that give plants much of their red, blue, yellow, and white pigments. Complex phenolics called polyphenols are capable of producing many different types of effects on humans, including antioxidant properties. Some examples of phenolics used for defense in plants are: lignin, silymarin and cannabinoids. Condensed tannins, polymers composed of 2 to 50 (or more) flavonoid molecules, inhibit herbivore digestion by binding to consumed plant proteins and making them more difficult for animals to digest, and by interfering with protein absorption and digestive enzymes.
In addition, some plants use fatty acid derivatives, amino acids and even peptides as defenses. The cholinergic toxin cicutoxin of water hemlock is a polyyne derived from the fatty acid metabolism. Oxalyldiaminopropionic acid is a neurotoxic amino acid produced as a defensive metabolite in the grass pea (Lathyrus sativus). The synthesis of fluoroacetate in several plants is an example of the use of small molecules to disrupt the metabolism of herbivores, in this case the citric acid cycle.
Plants interact by producing allelochemicals which interfere with the growth of other plants (allelopathy). These have a role in plant defense and may be used to suppress competitors such as weeds of crops. A result may be larger plants better able to survive damage by herbivores.
Enzymes
Premier examples are substances activated by the enzyme myrosinase. This enzyme converts glucosinolates to various compounds that are toxic to herbivorous insects. One product of this enzyme is allyl isothiocyanate, the pungent ingredient in horseradish sauces.
The myrosinase is released only upon crushing the flesh of horseradish. Since allyl isothiocyanate is harmful to the plant as well as the insect, it is stored in the harmless form of the glucosinolate, separate from the myrosinase enzyme.
Mechanical defenses
Mechanical defenses were reviewed by Lucas et al. (2000), whose treatment remains relevant and well regarded. Many plants have external structural defenses that discourage herbivory. Structural defenses can be described as morphological or physical traits that give the plant a fitness advantage by deterring herbivores from feeding. Depending on the herbivore's physical characteristics (i.e. size and defensive armor), plant structural defenses on stems and leaves can deter, injure, or kill the grazer. Some defensive compounds are produced internally but are released onto the plant's surface; for example, resins, lignins, silica, and wax cover the epidermis of terrestrial plants and alter the texture of the plant tissue. The leaves of holly plants, for instance, are very smooth and slippery, making feeding difficult. Some plants produce gummosis or sap that traps insects.
Spines and thorns
A plant's leaves and stem may be covered with sharp prickles, spines, thorns, or trichomes (hairs on the leaf, often with barbs, sometimes containing irritants or poisons). Plant structural features such as spines, thorns and awns reduce feeding by large ungulate herbivores (e.g. kudu, impala, and goats) by restricting the herbivores' feeding rate, or by wearing down the molars. Trichomes are frequently associated with lower rates of plant tissue digestion by insect herbivores. Raphides are sharp needles of calcium oxalate or calcium carbonate in plant tissues; they make ingestion painful, damage a herbivore's mouth and gullet, and allow more efficient delivery of the plant's toxins. The structure of a plant, its branching and leaf arrangement may also have evolved to reduce herbivore impact. The shrubs of New Zealand have evolved special wide-branching adaptations believed to be a response to browsing birds such as moas. Similarly, African Acacia trees have long spines low in the canopy, but very short spines high in the canopy, which is comparatively safe from herbivores such as giraffes.
Trees such as palms protect their fruit by multiple layers of armor, needing efficient tools to break through to the seed contents. Some plants, notably the grasses, use indigestible silica (and many plants use other relatively indigestible materials such as lignin) to defend themselves against vertebrate and invertebrate herbivores. Plants take up silicon from the soil and deposit it in their tissues in the form of solid silica phytoliths. These mechanically reduce the digestibility of plant tissue, causing rapid wear to vertebrate teeth and to insect mandibles, and are effective against herbivores above and below ground. The mechanism may offer future sustainable pest-control strategies.
Thigmonastic movements
Thigmonastic movements, those that occur in response to touch, are used as a defense in some plants. The leaves of the sensitive plant, Mimosa pudica, close up rapidly in response to direct touch, vibration, or even electrical and thermal stimuli. The proximate cause of this mechanical response is an abrupt change in the turgor pressure in the pulvini at the base of leaves resulting from osmotic phenomena. This is then spread via both electrical and chemical means through the plant; only a single leaflet need be disturbed. This response lowers the surface area available to herbivores, which are presented with the underside of each leaflet, and results in a wilted appearance. It may also physically dislodge small herbivores, such as insects.
Carnivorous plants
Carnivory in plants has evolved at least six times independently. Some examples include the Venus flytrap, pitcher plant, and butterwort. Many of these plants have evolved in nutrient-poor soil, and must procure nutrients from other sources. They use insects and small birds as a way to gain the minerals they need through carnivory. Carnivorous plants do not use carnivory as defense, but to get the nutrients they need.
Mimicry and camouflage
Some plants make use of various forms of mimicry to reduce herbivory. One mechanism is to mimic the presence of insect eggs on their leaves, dissuading insect species from laying their eggs there. Because female butterflies are less likely to lay their eggs on plants that already have butterfly eggs, some species of neotropical vines of the genus Passiflora (passion flowers) make use of Gilbertian mimicry: they possess physical structures resembling the yellow eggs of Heliconius butterflies on their leaves, which discourage oviposition by butterflies. Other plants make use of Batesian mimicry, with structures that imitate thorns or other objects to dissuade herbivores directly. A further approach is camouflage; the vine Boquila trifoliolata mimics the leaves of its host plant, while the pebble plant Lithops makes itself hard to spot among the stones of the Southern African environment.
Indirect defenses
Another category of plant defenses are those features that indirectly protect the plant by enhancing the probability of attracting the natural enemies of herbivores. Such an arrangement is known as mutualism, in this case of the "enemy of my enemy" variety. One such feature is the semiochemicals given off by plants. Semiochemicals are a group of volatile organic compounds involved in interactions between organisms. One group of semiochemicals is the allelochemicals, consisting of allomones, which play a defensive role in interspecies communication, and kairomones, which are used by members of higher trophic levels to locate food sources. When a plant is attacked, it releases allelochemicals containing an abnormal ratio of volatiles, known as herbivore-induced plant volatiles (HIPVs). Predators sense these volatiles as food cues, attracting them to the damaged plant and to the feeding herbivores. The subsequent reduction in the number of herbivores confers a fitness benefit to the plant and demonstrates the indirect defensive capabilities of semiochemicals. Induced volatiles also have drawbacks, however; some studies have suggested that these volatiles attract herbivores. Crop domestication has increased yield, sometimes at the expense of HIPV production. Orre Gordon et al. (2013) tested several methods of artificially restoring the plant-predator partnership, combining companion planting and synthetic predator attractants, and describe several strategies that work and several that do not.
Plants sometimes provide housing and food items for natural enemies of herbivores, known as "biotic" defense mechanisms, to maintain their presence. For example, trees from the genus Macaranga have adapted their thin stem walls to create ideal housing for ants (genus Crematogaster), which, in turn, protect the plant from herbivores. In addition to providing housing, the plant also provides the ant with its exclusive food source, in the form of food bodies produced by the plant. Similarly, several Acacia tree species have developed stipular spines (direct defenses) that are swollen at the base, forming a hollow structure that provides housing for protective ants. These Acacia trees also produce nectar in extrafloral nectaries on their leaves as food for the ants.
Plant use of endophytic fungi in defense is common. Most plants have endophytes, microbial organisms that live within them. While some cause disease, others protect plants from herbivores and pathogenic microbes. Endophytes can help the plant by producing toxins harmful to other organisms that would attack the plant, such as alkaloid producing fungi which are common in grasses such as tall fescue (Festuca arundinacea), which is infected by Neotyphodium coenophialum.
Trees of the same species form alliances with other tree species to improve their survival rate. They communicate and maintain dependent relationships through connections below the soil called underground mycorrhizal networks, which allow them to share water and nutrients and exchange signals about predatory attacks while also supporting their immune systems. Within a forest, trees under attack send distress signals that alert neighboring trees to alter their behavior (defense). Trees and fungi have a symbiotic relationship: fungi, intertwined with the trees' roots, support communication between trees to locate nutrients; in return, the fungi receive some of the sugar that trees photosynthesize. Trees send out several forms of communication, including chemical, hormonal, and slow-pulsing electrical signals. Researchers have investigated the electrical signals between trees using a voltage-based signal system, similar to an animal's nervous system, in which a tree in distress releases a warning signal to surrounding trees.
Leaf shedding and color
There have been suggestions that leaf shedding may be a response that provides protection against diseases and certain kinds of pests such as leaf miners and gall forming insects. Other responses such as the change of leaf colors prior to fall have also been suggested as adaptations that may help undermine the camouflage of herbivores. Autumn leaf color has also been suggested to act as an honest warning signal of defensive commitment towards insect pests that migrate to the trees in autumn.
Costs and benefits
Defensive structures and chemicals are costly as they require resources that could otherwise be used by plants to maximize growth and reproduction. In some situations, plant growth slows down when most of the nutrients are being used for the generation of toxins or regeneration of plant parts. Many models have been proposed to explore how and why some plants make this investment in defenses against herbivores.
Optimal defense hypothesis
The optimal defense hypothesis attempts to explain how the kinds of defenses a particular plant might use reflect the threats each individual plant faces. This model considers three main factors: risk of attack, value of the plant part, and the cost of defense.
The first factor determining optimal defense is risk: how likely is it that a plant or certain plant parts will be attacked? This is also related to the plant apparency hypothesis, which states that a plant will invest heavily in broadly effective defenses when the plant is easily found by herbivores. Examples of apparent plants that produce generalized protections include long-living trees, shrubs, and perennial grasses. Unapparent plants, such as short-lived plants of early successional stages, on the other hand, preferentially invest in small amounts of qualitative toxins that are effective against all but the most specialized herbivores.
The second factor is the value of protection: would the plant be less able to survive and reproduce after removal of part of its structure by a herbivore? Not all plant parts are of equal evolutionary value, thus valuable parts contain more defenses. A plant's stage of development at the time of feeding also affects the resulting change in fitness. Experimentally, the fitness value of a plant structure is determined by removing that part of the plant and observing the effect. In general, reproductive parts are not as easily replaced as vegetative parts, terminal leaves have greater value than basal leaves, and the loss of plant parts mid-season has a greater negative effect on fitness than removal at the beginning or end of the season. Seeds in particular tend to be very well protected. For example, the seeds of many edible fruits and nuts contain cyanogenic glycosides such as amygdalin. This results from the need to balance the effort needed to make the fruit attractive to animal dispersers while ensuring that the seeds are not destroyed by the animal.
The final consideration is cost: how much will a particular defensive strategy cost a plant in energy and materials? This is particularly important, as energy spent on defense cannot be used for other functions, such as reproduction and growth. The optimal defense hypothesis predicts that plants will allocate more energy towards defense when the benefits of protection outweigh the costs, specifically in situations where there is high herbivore pressure.
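The cost-benefit logic of the optimal defense hypothesis can be made concrete with a toy calculation. In the sketch below, the functional forms and all numbers are hypothetical illustrations, not values from the literature: protection is assumed to saturate with investment while cost grows linearly, so the fitness-maximizing investment rises with the risk of attack.

```python
import numpy as np

def net_benefit(defense, risk, part_value, unit_cost):
    # Toy model (hypothetical): averted damage saturates with investment,
    # while the resource cost of defense grows linearly.
    protection = 1 - np.exp(-defense)      # fraction of expected damage averted
    return risk * part_value * protection - unit_cost * defense

defense_levels = np.linspace(0.0, 5.0, 501)
for risk in (0.1, 0.5, 0.9):               # probability that the plant part is attacked
    fitness = net_benefit(defense_levels, risk, part_value=10.0, unit_cost=1.0)
    best = defense_levels[np.argmax(fitness)]
    print(f"attack risk {risk:.1f} -> optimal defense investment ~ {best:.2f}")
```

In this toy setup the optimum is ln(risk × value / cost) when that quantity is positive and zero otherwise, so higher attack risk or more valuable tissue justifies greater investment, in line with the hypothesis's prediction.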
Carbon:nutrient balance hypothesis
The carbon:nutrient balance hypothesis, also known as the environmental constraint hypothesis or Carbon Nutrient Balance Model (CNBM), states that the various types of plant defenses are responses to variations in the levels of nutrients in the environment. This hypothesis predicts that the carbon/nitrogen ratio in plants determines which secondary metabolites will be synthesized. For example, plants growing in nitrogen-poor soils will use carbon-based defenses (mostly digestibility reducers), while those growing in low-carbon environments (such as shady conditions) are more likely to produce nitrogen-based toxins. The hypothesis further predicts that plants can change their defenses in response to changes in nutrients. For example, if plants are grown in low-nitrogen conditions, they will implement a defensive strategy composed of constitutive carbon-based defenses. If nutrient levels subsequently increase, for example by the addition of fertilizers, these carbon-based defenses will decrease.
Growth rate hypothesis
The growth rate hypothesis, also known as the resource availability hypothesis, states that defense strategies are determined by the inherent growth rate of the plant, which is in turn determined by the resources available to the plant. A major assumption is that available resources are the limiting factor in determining the maximum growth rate of a plant species. This model predicts that the level of defense investment will increase as the potential of growth decreases. Additionally, plants in resource-poor areas, with inherently slow-growth rates, tend to have long-lived leaves and twigs, and the loss of plant appendages may result in a loss of scarce and valuable nutrients.
One test of this model involved reciprocal transplants of seedlings of 20 species of trees between clay soils (nutrient-rich) and white sand (nutrient-poor) to determine whether trade-offs between growth rate and defenses restrict species to one habitat. When planted in white sand and protected from herbivores, seedlings originating from clay outgrew those originating from the nutrient-poor sand, but in the presence of herbivores the seedlings originating from white sand performed better, likely due to their higher levels of constitutive carbon-based defenses. These findings suggest that defensive strategies limit the habitats of some plants.
Growth-differentiation balance hypothesis
The growth-differentiation balance hypothesis states that plant defenses are a result of a tradeoff between "growth-related processes" and "differentiation-related processes" in different environments. Differentiation-related processes are defined as "processes that enhance the structure or function of existing cells (i.e. maturation and specialization)." A plant will produce chemical defenses only when energy is available from photosynthesis, and plants with the highest concentrations of secondary metabolites are the ones with an intermediate level of available resources.
Synthesis tradeoffs
The vast majority of plant resistances to herbivores are either unrelated to each other or are positively correlated. However, there are some negative correlations: in Pastinaca sativa, resistances to various biotypes of Depressaria pastinacella trade off against one another because the secondary metabolites involved are negatively correlated with each other; a similar trade-off occurs among the resistances of Diplacus aurantiacus.
In Brassica rapa, resistance to Peronospora parasitica and growth rate are negatively correlated.
Mutualism and overcompensation of plants
Many plants do not have secondary metabolites, chemical processes, or mechanical defenses to help them fend off herbivores. Instead, these plants rely on overcompensation (which is regarded as a form of mutualism) when they are attacked by herbivores. Overcompensation is defined as having higher fitness when attacked by a herbivore. This is a mutual relationship: the herbivore gets a meal, while the plant quickly regrows the missing part. These plants have a higher chance of reproducing, and their fitness is increased.
Importance to humans
Agriculture
Crop plants can be bred for their ability to resist herbivory, thus protecting themselves from damage with reduced use of pesticides.
In addition, biological pest control sometimes makes use of plant defenses to reduce crop damage by herbivores. Techniques include polyculture, the planting together of two or more species such as a primary crop and a secondary plant. This can allow the secondary plant's defensive chemicals to protect the crop planted with it.
The variation of plant susceptibility to pests was probably known even in the early stages of human agriculture. In historic times, the observation of such variations in susceptibility has provided solutions for major socio-economic problems. The hemipteran pest insect phylloxera was introduced from North America to France in 1860 and within 25 years it destroyed nearly a third (100,000 km2) of French vineyards. Charles Valentine Riley noted that the American species Vitis labrusca was resistant to Phylloxera. Riley, with J. E. Planchon, helped save the French wine industry by suggesting the grafting of the susceptible but high-quality grapes onto Vitis labrusca root stocks. The formal study of plant resistance to herbivory was first covered extensively in 1951 by Reginald Henry Painter, who is widely regarded as the founder of this area of research, in his book Plant Resistance to Insects. While this work pioneered further research in the US, the work of Chesnokov was the basis of further research in the USSR.
Fresh growth of grass is sometimes high in prussic acid content and can cause poisoning of grazing livestock. The production of cyanogenic chemicals in grasses is primarily a defense against herbivores.
The human innovation of cooking may have been particularly helpful in overcoming many of the defensive chemicals of plants. Many enzyme inhibitors in cereal grains and pulses, such as trypsin inhibitors prevalent in pulse crops, are denatured by cooking, making them digestible.
It has been known since the late 17th century that plants contain noxious chemicals which are avoided by insects. These chemicals have been used by man as early insecticides; in 1690 nicotine was extracted from tobacco and used as a contact insecticide. In 1773, insect infested plants were treated with nicotine fumigation by heating tobacco and blowing the smoke over the plants. The flowers of Chrysanthemum species contain pyrethrin which is a potent insecticide. In later years, the applications of plant resistance became an important area of research in agriculture and plant breeding, particularly because they can serve as a safe and low-cost alternative to the use of pesticides. The important role of secondary plant substances in plant defense was described in the late 1950s by Vincent Dethier and G.S. Fraenkel. The use of botanical pesticides is widespread, including azadirachtin from the neem (Azadirachta indica), d-Limonene from Citrus species, rotenone from Derris, capsaicin from chili pepper, and pyrethrum from Chrysanthemum.
The selective breeding of crop plants often involves selection against the plant's intrinsic resistance strategies. This makes crop plant varieties particularly susceptible to pests unlike their wild relatives. In breeding for host-plant resistance, it is often the wild relatives that provide the source of resistance genes. These genes are incorporated using conventional approaches to plant breeding, but have been augmented by recombinant techniques, which allow introduction of genes from completely unrelated organisms. The most famous transgenic approach is the introduction of genes from the bacterial species, Bacillus thuringiensis, into plants. The bacterium produces proteins that, when ingested, kill lepidopteran caterpillars. The gene encoding for these highly toxic proteins, when introduced into the host plant genome so that it produces the same toxic proteins, confers resistance against caterpillars. This approach is controversial, however, due to the possibility of ecological and toxicological side effects.
Pharmaceutical
Many currently available pharmaceuticals are derived from the secondary metabolites plants use to protect themselves from herbivores, including opium, aspirin, cocaine, and atropine. These chemicals have evolved to affect the biochemistry of insects in very specific ways. However, many of these biochemical pathways are conserved in vertebrates, including humans, and the chemicals act on human biochemistry in ways similar to that of insects. It has therefore been suggested that the study of plant-insect interactions may help in bioprospecting.
There is evidence that humans began using plant alkaloids in medical preparations as early as 3000 B.C. Although the active components of most medicinal plants have been isolated only relatively recently (beginning in the early 19th century), these substances have been used as drugs throughout human history in potions, medicines, teas and as poisons. For example, to combat herbivory by the larvae of some Lepidoptera species, Cinchona trees produce a variety of alkaloids, the most familiar of which is quinine, which is extremely bitter, making the bark of the tree quite unpalatable.
Throughout history mandrakes (Mandragora officinarum) have been highly sought after for their reputed aphrodisiac properties. However, the roots of the mandrake plant also contain large quantities of the alkaloid scopolamine, which, at high doses, acts as a central nervous system depressant and makes the plant highly toxic to herbivores. Scopolamine later found medicinal use for pain management before and during labor; in smaller doses it is used to prevent motion sickness. One of the best-known medicinally valuable terpenes is the anticancer drug taxol, isolated from the bark of the Pacific yew, Taxus brevifolia, in the early 1960s.
See also
Anti-predator adaptation
Biopesticide
Chemical ecology
List of beneficial weeds
List of companion plants
List of pest-repelling plants
Plant disease resistance
Plant tolerance to herbivory
Plant communication
Tritrophic interactions in plant defense
References
Further reading
External links
Bruce A. Kimball Evolutionary Plant Defense Strategies Life Histories and Contributions to Future Generations
Plant Defense Systems & Medicinal Botany
Herbivore Defenses of Senecio viscosus L.
Sue Hartley Royal Institution Christmas Lectures 2009: The Animals Strike Back
Herbivory
Plant physiology
Biological pest control
Ecological restoration
Habitat management equipment and methods
Sustainable agriculture
Antipredator adaptations
Chemical ecology | Plant defense against herbivory | [
"Chemistry",
"Engineering",
"Biology"
] | 7,956 | [
"Plant physiology",
"Chemical ecology",
"Ecological restoration",
"Plants",
"Biological defense mechanisms",
"Herbivory",
"Antipredator adaptations",
"Environmental engineering",
"Biochemistry",
"Eating behaviors"
] |
4,190,350 | https://en.wikipedia.org/wiki/Overdispersion | In statistics, overdispersion is the presence of greater variability (statistical dispersion) in a data set than would be expected based on a given statistical model.
A common task in applied statistics is choosing a parametric model to fit a given set of empirical observations. This necessitates an assessment of the fit of the chosen model. It is usually possible to choose the model parameters in such a way that the theoretical population mean of the model is approximately equal to the sample mean. However, especially for simple models with few parameters, theoretical predictions may not match empirical observations for higher moments. When the observed variance is higher than the variance of a theoretical model, overdispersion has occurred. Conversely, underdispersion means that there was less variation in the data than predicted. Overdispersion is a very common feature in applied data analysis because in practice, populations are frequently heterogeneous (non-uniform) contrary to the assumptions implicit within widely used simple parametric models.
Examples
Poisson
Overdispersion is often encountered when fitting very simple parametric models, such as those based on the Poisson distribution. The Poisson distribution has one free parameter and does not allow for the variance to be adjusted independently of the mean. The choice of a distribution from the Poisson family is often dictated by the nature of the empirical data. For example, Poisson regression analysis is commonly used to model count data. If overdispersion is a feature, an alternative model with additional free parameters may provide a better fit. In the case of count data, a Poisson mixture model like the negative binomial distribution can be proposed instead, in which the mean of the Poisson distribution can itself be thought of as a random variable drawn – in this case – from the gamma distribution thereby introducing an additional free parameter (note the resulting negative binomial distribution is completely characterized by two parameters).
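As a minimal illustration (not part of the original article), the following Python sketch simulates gamma-mixed Poisson counts, which are marginally negative binomial, and compares the sample dispersion index (variance divided by mean) against the value of 1 expected under a pure Poisson model; all parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
# Heterogeneous rates: each observation gets its own Poisson mean drawn
# from a gamma distribution, so the marginal counts are negative binomial.
lam = rng.gamma(shape=2.0, scale=2.5, size=10_000)
counts = rng.poisson(lam)
index = counts.var(ddof=1) / counts.mean()      # well above 1: overdispersed
# A homogeneous Poisson control has a dispersion index near 1.
control = rng.poisson(lam.mean(), size=10_000)
print(index, control.var(ddof=1) / control.mean())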
Binomial
As a more concrete example, it has been observed that the number of boys born to families does not conform faithfully to a binomial distribution as might be expected. Instead, the sex ratios of families seem to skew toward either boys or girls (see, for example, the Trivers–Willard hypothesis for one possible explanation); i.e. there are more all-boy families and more all-girl families, and fewer families close to the population 51:49 boy-to-girl mean ratio, than expected from a binomial distribution, and the resulting empirical variance is larger than specified by a binomial model.
In this case, the beta-binomial distribution is a popular and analytically tractable alternative model to the binomial distribution since it provides a better fit to the observed data. To capture the heterogeneity of the families, one can think of the probability parameter of the binomial model (say, the probability of being a boy) as itself a random variable (i.e. a random effects model) drawn for each family from a beta distribution as the mixing distribution. The resulting compound distribution (beta-binomial) has an additional free parameter.
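A hedged numerical sketch of this mixing construction (with invented parameter values, not taken from any study): drawing each family's boy-probability from a beta distribution with mean 0.51 visibly inflates the variance of the counts beyond the binomial value n p (1 − p).

import numpy as np

rng = np.random.default_rng(1)
n = 8                                        # hypothetical children per family
binom = rng.binomial(n, 0.51, size=50_000)           # one shared p
p_family = rng.beta(5.1, 4.9, size=50_000)           # mean 5.1/(5.1+4.9) = 0.51
betabinom = rng.binomial(n, p_family)                # random-effects p per family
print(binom.var(ddof=1))       # about n p (1 - p) = 2.0
print(betabinom.var(ddof=1))   # larger: the compound distribution is overdispersed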
Another common model for overdispersion—when some of the observations are not Bernoulli—arises from introducing a normal random variable into a logistic model. Software is widely available for fitting this type of multilevel model. In this case, if the variance of the normal variable is zero, the model reduces to the standard (undispersed) logistic regression. This model has an additional free parameter, namely the variance of the normal variable.
With respect to binomial random variables, the concept of overdispersion makes sense only if n>1 (i.e. overdispersion is nonsensical for Bernoulli random variables).
Normal distribution
As the normal distribution (Gaussian) has variance as a parameter, any data with finite variance (including any finite data) can be modeled with a normal distribution with the exact variance – the normal distribution is a two-parameter model, with mean and variance. Thus, in the absence of an underlying model, there is no notion of data being overdispersed relative to the normal model, though the fit may be poor in other respects (such as the higher moments of skew, kurtosis, etc.). However, in the case that the data is modeled by a normal distribution with an expected variation, it can be over- or under-dispersed relative to that prediction.
For example, in a statistical survey, the margin of error (determined by sample size) predicts the sampling error and hence the dispersion of results on repeated surveys. If one performs a meta-analysis of repeated surveys of a fixed population (say with a given sample size, so the margin of error is the same), one expects the results to fall on a normal distribution with standard deviation equal to the margin of error. However, in the presence of study heterogeneity where studies have different sampling bias, the distribution is instead a compound distribution and will be overdispersed relative to the predicted distribution. For example, given repeated opinion polls all with a margin of error of 3%, if they are conducted by different polling organizations, one expects the results to have a standard deviation greater than 3%, due to pollster bias from different methodologies.
Differences in terminology among disciplines
Over- and underdispersion are terms which have been adopted in branches of the biological sciences. In parasitology, the term 'overdispersion' is generally used as defined here – meaning a distribution with a higher than expected variance.
In some areas of ecology, however, meanings have been transposed, so that overdispersion is actually taken to mean more even (lower variance) than expected. This confusion has caused some ecologists to suggest that the terms 'aggregated', or 'contagious', would be better used in ecology for 'overdispersed'. Such preferences are creeping into parasitology too. Generally this suggestion has not been heeded, and confusion persists in the literature.
Furthermore, in demography, overdispersion is often evident in the analysis of death count data, but demographers prefer the term 'unobserved heterogeneity'.
See also
Index of dispersion
Compound probability distribution
Quasi-likelihood
References
Probability distribution fitting
Point processes
Spatial analysis | Overdispersion | [
"Physics",
"Mathematics"
] | 1,298 | [
"Point (geometry)",
"Spatial analysis",
"Point processes",
"Space",
"Spacetime"
] |
7,282,499 | https://en.wikipedia.org/wiki/Surface-area-to-volume%20ratio | The surface-area-to-volume ratio or surface-to-volume ratio (denoted as SA:V, SA/V, or sa/vol) is the ratio between surface area and volume of an object or collection of objects.
SA:V is an important concept in science and engineering. It is used to explain the relation between structure and function in processes occurring through the surface and the volume. Good examples for such processes are processes governed by the heat equation, that is, diffusion and heat transfer by thermal conduction. SA:V is used to explain the diffusion of small molecules, like oxygen and carbon dioxide, between air, blood and cells, water loss by animals, bacterial morphogenesis, organisms' thermoregulation, the design of artificial bone tissue, artificial lungs and many more biological and biotechnological structures. For more examples see Glazier.
The relation between SA:V and the diffusion or heat conduction rate is explained from a flux and surface perspective, focusing on the surface of a body as the place where diffusion, or heat conduction, takes place: the larger the SA:V, the more surface area per unit volume through which material can diffuse, and therefore the faster the diffusion or heat conduction will be. A similar explanation appears in the literature: "Small size implies a large ratio of surface area to volume, thereby helping to maximize the uptake of nutrients across the plasma membrane", and elsewhere.
For a given volume, the object with the smallest surface area (and therefore with the smallest SA:V) is a ball, a consequence of the isoperimetric inequality in 3 dimensions. By contrast, objects with acute-angled spikes will have very large surface area for a given volume.
For solid spheres
A solid sphere or ball is a three-dimensional object, being the solid figure bounded by a sphere. (In geometry, the term sphere properly refers only to the surface, so a sphere thus lacks volume in this context.)
For an ordinary three-dimensional ball, the SA:V can be calculated using the standard equations for the surface and volume, which are, respectively, SA = 4πr² and V = (4/3)πr³. For the unit case in which r = 1 the SA:V is thus 3. For the general case, SA:V equals 3/r, in an inverse relationship with the radius: if the radius is doubled, the SA:V halves (see figure).
For n-dimensional balls
Balls exist in any dimension and are generically called n-balls or hyperballs, where n is the number of dimensions.
The same reasoning can be generalized to n-balls using the general equations for volume and surface area, which are:
volume = π^(n/2) rⁿ / Γ(n/2 + 1) and surface area = n π^(n/2) r^(n−1) / Γ(n/2 + 1).
So the ratio equals n/r. Thus, the same linear relationship between area and volume holds for any number of dimensions (see figure): doubling the radius always halves the ratio.
Dimension and units
The surface-area-to-volume ratio has physical dimension inverse length (L−1) and is therefore expressed in units of inverse metre (m−1) or its prefixed unit multiples and submultiples. As an example, a cube with sides of length 1 cm will have a surface area of 6 cm2 and a volume of 1 cm3. The surface to volume ratio for this cube is thus
SA:V = 6 cm2 / 1 cm3 = 6 cm−1.
For a given shape, SA:V is inversely proportional to size. A cube 2 cm on a side has a ratio of 3 cm−1, half that of a cube 1 cm on a side. Conversely, preserving SA:V as size increases requires changing to a less compact shape.
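A small Python sketch (illustrative only, not from the original article) makes the scaling explicit: for a sphere SA:V simplifies to 3/r and for a cube to 6/side, so doubling the linear size halves either ratio.

import numpy as np

def sphere_sa_to_v(r):
    return (4 * np.pi * r**2) / ((4 / 3) * np.pi * r**3)   # equals 3/r

def cube_sa_to_v(side):
    return (6 * side**2) / side**3                          # equals 6/side

for size in (1.0, 2.0, 4.0):
    print(size, sphere_sa_to_v(size), cube_sa_to_v(size))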
Applications
Physical chemistry
Materials with high surface area to volume ratio (e.g. very small diameter, very porous, or otherwise not compact) react at much faster rates than monolithic materials, because more surface is available to react. An example is grain dust: while grain is not typically flammable, grain dust is explosive. Finely ground salt dissolves much more quickly than coarse salt.
A high surface area to volume ratio provides a strong "driving force" to speed up thermodynamic processes that minimize free energy.
Biology
The ratio between the surface area and volume of cells and organisms has an enormous impact on their biology, including their physiology and behavior. For example, many aquatic microorganisms have increased surface area to increase their drag in the water. This reduces their rate of sinking and allows them to remain near the surface with less energy expenditure.
An increased surface area to volume ratio also means increased exposure to the environment. The finely-branched appendages of filter feeders such as krill provide a large surface area to sift the water for food.
Individual organs like the lung have numerous internal branchings that increase the surface area; in the case of the lung, the large surface supports gas exchange, bringing oxygen into the blood and releasing carbon dioxide from the blood. Similarly, the small intestine has a finely wrinkled internal surface, allowing the body to absorb nutrients efficiently.
Cells can achieve a high surface area to volume ratio with an elaborately convoluted surface, like the microvilli lining the small intestine.
Increased surface area can also lead to biological problems. More contact with the environment through the surface of a cell or an organ (relative to its volume) increases loss of water and dissolved substances. High surface area to volume ratios also present problems of temperature control in unfavorable environments.
The surface to volume ratios of organisms of different sizes also leads to some biological rules such as Allen's rule, Bergmann's rule and gigantothermy.
Fire spread
In the context of wildfires, the ratio of the surface area of a solid fuel to its volume is an important measurement. Fire spread behavior is frequently correlated to the surface-area-to-volume ratio of the fuel (e.g. leaves and branches). The higher its value, the faster a particle responds to changes in environmental conditions, such as temperature or moisture. Higher values are also correlated to shorter fuel ignition times, and hence faster fire spread rates.
Planetary cooling
A body of icy or rocky material in outer space may, if it can build and retain sufficient heat, develop a differentiated interior and alter its surface through volcanic or tectonic activity. The length of time through which a planetary body can maintain surface-altering activity depends on how well it retains heat, and this is governed by its surface area-to-volume ratio. For Vesta (r=263 km), the ratio is so high that astronomers were surprised to find that it did differentiate and have brief volcanic activity. The moon, Mercury and Mars have radii in the low thousands of kilometers; all three retained heat well enough to be thoroughly differentiated although after a billion years or so they became too cool to show anything more than very localized and infrequent volcanic activity. As of April 2019, however, NASA has announced the detection of a "marsquake" measured on April 6, 2019, by NASA's InSight lander. Venus and Earth (r>6,000 km) have sufficiently low surface area-to-volume ratios (roughly half that of Mars and much lower than all other known rocky bodies) so that their heat loss is minimal.
Mathematical examples
See also
Compactness measure of a shape
Dust explosion
Square–cube law
Specific surface area
References
Specific
External links
Sizes of Organisms: The Surface Area:Volume Ratio
National Wildfire Coordinating Group: Surface Area to Volume Ratio
Previous link not working, references are in this document, PDF
Further reading
On Being the Right Size, J.B.S. Haldane
Chemical kinetics
Cell biology
Physiology
Ratios | Surface-area-to-volume ratio | [
"Chemistry",
"Mathematics",
"Biology"
] | 1,544 | [
"Chemical reaction engineering",
"Cell biology",
"Physiology",
"Arithmetic",
"Chemical kinetics",
"Ratios"
] |
7,283,344 | https://en.wikipedia.org/wiki/Electrochemical%20window | The electrochemical window (EW) of a substance is the electrode electric potential range between which the substance is neither oxidized nor reduced. The EW is one of the most important characteristics to be identified for solvents and electrolytes used in electrochemical applications. The EW is a term that is commonly used to indicate the potential range and the potential difference. It is calculated by subtracting the reduction potential (cathodic limit) from the oxidation potential (anodic limit).
When the substance of interest is water, it is often referred to as the water window.
This range is important for the efficiency of an electrode. Out of this range, the electrodes will react with the electrolyte, instead of driving the electrochemical reaction.
In principle, ammonia has an extremely small electrochemical window, but thermodynamically-favored reactions less than 1 V outside the window are very slow. Consequently, the electrochemical window for many practical reactions is much larger, comparable to water. Ionic liquids famously have a very large electrochemical window, about 4–5 V.
The importance of electrochemical window (EW) in organic batteries
The electrochemical window (EW) is an important concept in organic electrosynthesis and in the design of batteries, especially organic batteries. This is because at higher voltages (greater than 4.0 V) organic electrolytes decompose and interfere with the oxidation and reduction of the organic cathode/anode materials. For this reason, the best organic electrolytes are characterized by a wide electrochemical window, i.e., one greater than the working range of the battery cell voltage. For example, the electrochemical window of lithium bis(trifluoromethanesulfonyl)imide, commercially known as LiTFSI, is about 3.0 V because it can operate in the range of 1.9–4.9 V. On the other hand, electrolytes that are characterized by a narrow electrochemical window are prone to irreversible decomposition, which in turn triggers decay of the battery capacity during subsequent battery cycling.
The electrochemical window of an organic electrolyte depends on many factors. These include temperature and the molecular frontier orbitals, such as the LUMO (Lowest Unoccupied Molecular Orbital) and HOMO (Highest Occupied Molecular Orbital), because the mechanisms of reduction (electron gain) and oxidation (electron loss) are governed by the band gap between the HOMO and LUMO. Solvation energy also plays an important role in defining the electrochemical window of the electrolyte.
In order to safeguard thermodynamically stable working conditions for the electrode materials in a given electrolyte, the electrochemical potentials of the electrode materials (anode and cathode) must lie within the electrochemical stability window of the electrolyte. This condition is strict because the electrolyte might be oxidized when the cathode material possesses an electrochemical potential that is less than the electrolyte oxidation potential, and when the electrochemical potential of the anode material is higher than the reduction potential of the electrolyte, the electrolyte will be degraded through reduction.
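The condition can be stated compactly in code. The following Python sketch is a schematic check only, with hypothetical potentials quoted against Li/Li+; the function name and all numbers are illustrative, not taken from the literature.

def electrolyte_stable(anode_v, cathode_v, cathodic_limit, anodic_limit):
    # Both electrode potentials must lie inside the electrolyte's
    # stability window [cathodic_limit, anodic_limit].
    window = anodic_limit - cathodic_limit        # the EW, e.g. 4.9 - 1.9 = 3.0 V
    stable = cathodic_limit <= anode_v and cathode_v <= anodic_limit
    return window, stable

# An anode at 1.8 V falls below a 1.9 V cathodic limit: unstable pairing.
print(electrolyte_stable(anode_v=1.8, cathode_v=4.2,
                         cathodic_limit=1.9, anodic_limit=4.9))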
Limitation of Electrochemical window
One shortcoming of the electrochemical window (EW) as a predictor of the stability of an electrolyte towards anode or cathode materials is that it ignores the voltage and the ionic conductivity, which are also important.
References
Electrochemistry | Electrochemical window | [
"Chemistry"
] | 709 | [
"Electrochemistry"
] |
7,290,730 | https://en.wikipedia.org/wiki/Rotation%20formalisms%20in%20three%20dimensions | In geometry, various formalisms exist to express a rotation in three dimensions as a mathematical transformation. In physics, this concept is applied to classical mechanics where rotational (or angular) kinematics is the science of quantitative description of a purely rotational motion. The orientation of an object at a given instant is described with the same tools, as it is defined as an imaginary rotation from a reference placement in space, rather than an actually observed rotation from a previous placement in space.
According to Euler's rotation theorem, the rotation of a rigid body (or three-dimensional coordinate system with a fixed origin) is described by a single rotation about some axis. Such a rotation may be uniquely described by a minimum of three real parameters. However, for various reasons, there are several ways to represent it. Many of these representations use more than the necessary minimum of three parameters, although each of them still has only three degrees of freedom.
An example where rotation representation is used is in computer vision, where an automated observer needs to track a target. Consider a rigid body, with three orthogonal unit vectors fixed to its body (representing the three axes of the object's local coordinate system). The basic problem is to specify the orientation of these three unit vectors, and hence the rigid body, with respect to the observer's coordinate system, regarded as a reference placement in space.
Rotations and motions
Rotation formalisms are focused on proper (orientation-preserving) motions of the Euclidean space with one fixed point, which is what a rotation refers to. Although physical motions with a fixed point are an important case (such as ones described in the center-of-mass frame, or motions of a joint), this approach creates knowledge about all motions. Any proper motion of the Euclidean space decomposes to a rotation around the origin and a translation. Whatever the order of their composition, the "pure" rotation component would not change; it is uniquely determined by the complete motion.
One can also understand "pure" rotations as linear maps in a vector space equipped with Euclidean structure, not as maps of points of a corresponding affine space. In other words, a rotation formalism captures only the rotational part of a motion, which contains three degrees of freedom, and ignores the translational part, which contains another three.
When representing a rotation as numbers in a computer, some people prefer the quaternion representation or the axis+angle representation, because they avoid the gimbal lock that can occur with Euler rotations.
Formalism alternatives
Rotation matrix
The above-mentioned triad of unit vectors is also called a basis. Specifying the coordinates (components) of vectors of this basis in its current (rotated) position, in terms of the reference (non-rotated) coordinate axes, will completely describe the rotation. The three unit vectors, û, v̂ and ŵ, that form the rotated basis each consist of 3 coordinates, yielding a total of 9 parameters.
These parameters can be written as the elements of a 3 × 3 matrix A, called a rotation matrix. Typically, the coordinates of each of these vectors are arranged along a column of the matrix (however, beware that an alternative definition of rotation matrix exists and is widely used, where the vectors' coordinates defined above are arranged by rows)
The elements of the rotation matrix are not all independent—as Euler's rotation theorem dictates, the rotation matrix has only three degrees of freedom.
The rotation matrix has the following properties:
A is a real, orthogonal matrix, hence each of its rows or columns represents a unit vector.
The eigenvalues of A are 1 and e^(±iθ) = cos θ ± i sin θ, where i is the standard imaginary unit with the property i² = −1.
The determinant of A is +1, equivalent to the product of its eigenvalues.
The trace of A is 1 + 2 cos θ, equivalent to the sum of its eigenvalues.
The angle θ which appears in the eigenvalue expression corresponds to the angle of the Euler axis and angle representation. The eigenvector corresponding to the eigenvalue of 1 is the accompanying Euler axis, since the axis is the only (nonzero) vector which remains unchanged by left-multiplying (rotating) it with the rotation matrix.
The above properties are equivalent to
|û| = |v̂| = 1, û · v̂ = 0, û × v̂ = ŵ
which is another way of stating that (û, v̂, ŵ) form a 3D orthonormal basis. These statements comprise a total of 6 conditions (the cross product contains 3), leaving the rotation matrix with just 3 degrees of freedom, as required.
Two successive rotations represented by matrices A₁ and A₂ are easily combined as elements of a group,
A = A₂ A₁
(Note the order, since the vector being rotated is multiplied from the right).
The ease by which vectors can be rotated using a rotation matrix, as well as the ease of combining successive rotations, make the rotation matrix a useful and popular way to represent rotations, even though it is less concise than other representations.
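A minimal NumPy sketch (not from the original article) of this composition rule, assuming the column-vector convention used above, so that the first rotation sits rightmost in the product:

import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

v = np.array([1.0, 0.0, 0.0])
A1, A2 = Rz(np.pi / 2), Rx(np.pi / 2)
A = A2 @ A1                              # A1 is applied first
assert np.allclose(A @ v, A2 @ (A1 @ v))
print((A @ v).round(6))                  # x -> y (by A1), then y -> z (by A2)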
Euler axis and angle (rotation vector)
From Euler's rotation theorem we know that any rotation can be expressed as a single rotation about some axis. The axis is the unit vector (unique except for sign) which remains unchanged by the rotation. The magnitude of the angle is also unique, with its sign being determined by the sign of the rotation axis.
The axis can be represented as a three-dimensional unit vector
ê = (e₁, e₂, e₃),
and the angle by a scalar θ.
Since the axis is normalized, it has only two degrees of freedom. The angle adds the third degree of freedom to this rotation representation.
One may wish to express rotation as a rotation vector, or Euler vector, an un-normalized three-dimensional vector the direction of which specifies the axis, and the length of which is θ, i.e. the rotation vector equals θ ê.
The rotation vector is useful in some contexts, as it represents a three-dimensional rotation with only three scalar values (its components), representing the three degrees of freedom. This is also true for representations based on sequences of three Euler angles (see below).
If the rotation angle is zero, the axis is not uniquely defined. Combining two successive rotations, each represented by an Euler axis and angle, is not straightforward, and in fact does not satisfy the law of vector addition, which shows that finite rotations are not really vectors at all. It is best to employ the rotation matrix or quaternion notation, calculate the product, and then convert back to Euler axis and angle.
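This failure of vector addition is easy to demonstrate numerically; the sketch below (illustrative, using SciPy's Rotation class) composes two 90° rotations about orthogonal axes and shows that the resulting rotation vector is not the sum of the two inputs:

import numpy as np
from scipy.spatial.transform import Rotation

a = np.array([np.pi / 2, 0, 0])          # 90 degrees about x
b = np.array([0, np.pi / 2, 0])          # 90 degrees about y
composed = Rotation.from_rotvec(b) * Rotation.from_rotvec(a)   # a first, then b
print(composed.as_rotvec())                   # not equal to a + b
print(np.linalg.norm(composed.as_rotvec()))   # angle 2*pi/3 ~ 2.09 rad
print(np.linalg.norm(a + b))                  # naive sum gives ~ 2.22 rad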
Euler rotations
The idea behind Euler rotations is to split the complete rotation of the coordinate system into three simpler constitutive rotations, called precession, nutation, and intrinsic rotation, being each one of them an increment on one of the Euler angles. Notice that the outer matrix will represent a rotation around one of the axes of the reference frame, and the inner matrix represents a rotation around one of the moving frame axes. The middle matrix represents a rotation around an intermediate axis called line of nodes.
However, the definition of Euler angles is not unique and in the literature many different conventions are used. These conventions depend on the axes about which the rotations are carried out, and their sequence (since rotations on a sphere are non-commutative).
The convention being used is usually indicated by specifying the axes about which the consecutive rotations (before being composed) take place, referring to them by index (1, 2, 3) or letter (X, Y, Z). The engineering and robotics communities typically use 3-1-3 Euler angles. Notice that after composing the independent rotations, they do not rotate about their axis anymore. The most external matrix rotates the other two, leaving the second rotation matrix over the line of nodes, and the third one in a frame comoving with the body. There are 3 × 3 × 3 = 27 possible combinations of three basic rotations but only 12 of them can be used for representing arbitrary 3D rotations as Euler angles. These 12 combinations avoid consecutive rotations around the same axis (such as XXY) which would reduce the degrees of freedom that can be represented.
Therefore, Euler angles are never expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. Other conventions (e.g., rotation matrix or quaternions) are used to avoid this problem.
In aviation the orientation of the aircraft is usually expressed as intrinsic Tait-Bryan angles following the z-y′-x″ convention, which are called heading, elevation, and bank (or synonymously, yaw, pitch, and roll).
Quaternions
Quaternions, which form a four-dimensional vector space, have proven very useful in representing rotations due to several advantages over the other representations mentioned in this article.
A quaternion representation of rotation is written as a versor (normalized quaternion):
q̂ = q₁ i + q₂ j + q₃ k + q₄
The above definition stores the quaternion as an array following the convention used in (Wertz 1980) and (Markley 2003). An alternative definition, used for example in (Coutsias 1999) and (Schmidt 2001), defines the "scalar" term as the first quaternion element, with the other elements shifted down one position.
In terms of the Euler axis
ê = (e₁, e₂, e₃)
and angle θ this versor's components are expressed as follows:
q₁ = e₁ sin(θ/2)
q₂ = e₂ sin(θ/2)
q₃ = e₃ sin(θ/2)
q₄ = cos(θ/2)
Inspection shows that the quaternion parametrization obeys the following constraint:
q₁² + q₂² + q₃² + q₄² = 1
The last term (in our definition) is often called the scalar term, which has its origin in quaternions when understood as the mathematical extension of the complex numbers, written as
a + b i + c j + d k
and where i, j, k are the hypercomplex numbers satisfying
i² = j² = k² = i j k = −1
Quaternion multiplication, which is used to specify a composite rotation, is performed in the same manner as multiplication of complex numbers, except that the order of the elements must be taken into account, since multiplication is not commutative. In matrix notation, quaternion multiplication can be written as a matrix-vector product, with the components of one quaternion arranged into a 4 × 4 matrix that multiplies the component vector of the other.
Combining two consecutive quaternion rotations is therefore just as simple as using the rotation matrix. Just as two successive rotation matrices, A₁ followed by A₂, are combined as
A₃ = A₂ A₁
we can represent this with quaternion parameters in a similarly concise way:
q̂₃ = q̂₂ ⊗ q̂₁
Quaternions are a very popular parametrization due to the following properties:
More compact than the matrix representation and less susceptible to round-off errors
The quaternion elements vary continuously over the unit sphere in ℝ⁴ (denoted by S³) as the orientation changes, avoiding discontinuous jumps (inherent to three-dimensional parameterizations)
Expression of the rotation matrix in terms of quaternion parameters involves no trigonometric functions
It is simple to combine two individual rotations represented as quaternions using a quaternion product
Like rotation matrices, quaternions must sometimes be renormalized due to rounding errors, to make sure that they correspond to valid rotations. The computational cost of renormalizing a quaternion, however, is much less than for normalizing a matrix.
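A minimal sketch of these operations in Python (assuming the scalar-last storage convention used above, with the Hamilton product; the function names are mine, not a standard API):

import numpy as np

def qmul(q2, q1):
    # Hamilton product, scalar-last (x, y, z, w): rotate by q1 first, then q2.
    v1, w1 = q1[:3], q1[3]
    v2, w2 = q2[:3], q2[3]
    return np.append(w2 * v1 + w1 * v2 + np.cross(v2, v1),
                     w2 * w1 - np.dot(v2, v1))

def qnormalize(q):
    return q / np.linalg.norm(q)   # far cheaper than re-orthogonalizing a matrix

qz = np.array([0, 0, np.sin(np.pi / 4), np.cos(np.pi / 4)])  # 90 deg about z
qx = np.array([np.sin(np.pi / 4), 0, 0, np.cos(np.pi / 4)])  # 90 deg about x
print(qnormalize(qmul(qx, qz)))   # composite: (0.5, -0.5, 0.5, 0.5)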
Quaternions also capture the spinorial character of rotations in three dimensions. For a three-dimensional object connected to its (fixed) surroundings by slack strings or bands, the strings or bands can be untangled after two complete turns about some fixed axis from an initial untangled state. Algebraically, the quaternion describing such a rotation changes from a scalar +1 (initially), through (scalar + pseudovector) values to scalar −1 (at one full turn), through (scalar + pseudovector) values back to scalar +1 (at two full turns). This cycle repeats every 2 turns. After turns (integer ), without any intermediate untangling attempts, the strings/bands can be partially untangled back to the turns state with each application of the same procedure used in untangling from 2 turns to 0 turns. Applying the same procedure times will take a -tangled object back to the untangled or 0 turn state. The untangling process also removes any rotation-generated twisting about the strings/bands themselves. Simple 3D mechanical models can be used to demonstrate these facts.
Rodrigues vector
The Rodrigues vector (sometimes called the Gibbs vector, with coordinates called Rodrigues parameters) can be expressed in terms of the axis and angle of the rotation as follows:
g = ê tan(θ/2)
This representation is a higher-dimensional analog of the gnomonic projection, mapping unit quaternions from a 3-sphere onto the 3-dimensional pure-vector hyperplane.
It has a discontinuity at 180° (π radians): as any rotation vector tends to an angle of π radians, its tangent tends to infinity.
A rotation g₁ followed by a rotation g₂ in the Rodrigues representation has the simple rotation composition form
g₃ = (g₁ + g₂ + g₂ × g₁) / (1 − g₁ · g₂)
Today, the most straightforward way to prove this formula is in the (faithful) doublet representation, where the rotations are represented by 2 × 2 unitary matrices built from the Pauli matrices.
The combinatoric features of the Pauli matrix derivation just mentioned are also identical to the equivalent quaternion derivation below. Construct a quaternion associated with a spatial rotation as
q_A = cos(α/2) + sin(α/2) m̂
where m̂ is the unit rotation axis and α the rotation angle.
Then the composition of the rotation about n̂ by β with the rotation about m̂ by α is the rotation about k̂ by γ, with rotation axis and angle defined by the product of the quaternions
q_A = cos(α/2) + sin(α/2) m̂ and q_B = cos(β/2) + sin(β/2) n̂,
that is
q_C = cos(γ/2) + sin(γ/2) k̂ = q_B q_A.
Expand this quaternion product to
cos(γ/2) = cos(β/2) cos(α/2) − sin(β/2) sin(α/2) (n̂ · m̂)
sin(γ/2) k̂ = sin(β/2) cos(α/2) n̂ + sin(α/2) cos(β/2) m̂ + sin(β/2) sin(α/2) (n̂ × m̂).
Divide both sides of this equation by the identity resulting from the previous one,
tan(γ/2) = sin(γ/2) / cos(γ/2),
and evaluate
tan(γ/2) k̂ = (tan(α/2) m̂ + tan(β/2) n̂ + tan(β/2) tan(α/2) (n̂ × m̂)) / (1 − tan(β/2) tan(α/2) (n̂ · m̂)).
This is Rodrigues' formula for the axis of a composite rotation defined in terms of the axes of the two component rotations. He derived this formula in 1840 (see page 408). The three rotation axes , , and form a spherical triangle and the dihedral angles between the planes formed by the sides of this triangle are defined by the rotation angles.
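The composition rule can be checked numerically; the sketch below (illustrative, assuming the Hamilton sign conventions used above, so signs may differ under other conventions) compares the tan-half-angle formula against SciPy's rotation composition:

import numpy as np
from scipy.spatial.transform import Rotation

def gibbs(rot):
    x, y, z, w = rot.as_quat()        # SciPy stores quaternions scalar-last
    return np.array([x, y, z]) / w    # g = e tan(theta/2); needs theta != pi

r1 = Rotation.from_rotvec(0.8 * np.array([0, 0, 1]))
r2 = Rotation.from_rotvec(0.5 * np.array([1, 0, 0]))
g1, g2 = gibbs(r1), gibbs(r2)
composed = gibbs(r2 * r1)             # rotate by r1, then r2
formula = (g1 + g2 + np.cross(g2, g1)) / (1 - np.dot(g1, g2))
assert np.allclose(composed, formula)
print(formula)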
Modified Rodrigues Parameters (MRPs)
Modified Rodrigues parameters (MRPs) can be expressed in terms of the Euler axis and angle by
σ = ê tan(θ/4)
Its components can be expressed in terms of the components of a unit quaternion representing the same rotation as
σᵢ = qᵢ / (1 + q₄), i = 1, 2, 3.
The modified Rodrigues vector is a stereographic projection mapping unit quaternions from a 3-sphere onto the 3-dimensional pure-vector hyperplane. The projection of the opposite quaternion −q results in a different modified Rodrigues vector σˢ than the projection of the original quaternion q. Comparing components one obtains that
σˢ = −σ / (σ · σ).
Notably, if one of these vectors lies inside the unit 3-sphere, the other will lie outside.
Cayley–Klein parameters
See definition at Wolfram Mathworld.
Higher-dimensional analogues
Vector transformation law
Active rotations of a 3D vector p in Euclidean space around an axis n over an angle η can be easily written in terms of dot and cross products as follows:
p′ = p∥ + p⊥ cos η + (n × p) sin η
wherein
p∥ = (p · n) n is the longitudinal component of p along n, given by the dot product,
p⊥ = p − (p · n) n is the transverse component of p with respect to n, and
n × p is the cross product of p with n.
The above formula shows that the longitudinal component of p remains unchanged, whereas the transverse portion of p is rotated in the plane perpendicular to n. This plane is spanned by the transverse portion of p itself and a direction perpendicular to both p and n. The rotation is directly identifiable in the equation as a 2D rotation over an angle η.
Passive rotations can be described by the same formula, but with an inverse sign of either η or n.
Conversion formulae between formalisms
Rotation matrix ↔ Euler angles
The Euler angles can be extracted from the rotation matrix by inspecting the rotation matrix in analytical form.
Rotation matrix → Euler angles (z-x-z extrinsic)
Using the x-convention, the 3-1-3 extrinsic Euler angles α, β and γ (around the z-axis, x-axis and again the z-axis) can be obtained as follows:
α = atan2(A₃₁, A₃₂)
β = arccos(A₃₃)
γ = atan2(A₁₃, −A₂₃)
Note that atan2(a, b) is equivalent to arctan(a/b) where it also takes into account the quadrant that the point (b, a) is in; see atan2.
When implementing the conversion, one has to take into account several situations:
There are generally two solutions in the interval [−π, π]³. The above formula works only when β is within the interval [0, π].
For the special case A₃₃ = ±1 (β = 0 or β = π), only the sum or difference of α and γ is determined, and it will be derived from A₁₁ and A₁₂.
There are infinitely many but countably many solutions outside of the interval [−π, π]³.
Whether all mathematical solutions apply for a given application depends on the situation.
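One way to implement the extraction (a sketch under the active, right-handed 3-1-3 convention reconstructed above; other conventions permute indices and signs) is:

import numpy as np
from scipy.spatial.transform import Rotation

def euler_zxz_extrinsic(A):
    beta = np.arccos(np.clip(A[2, 2], -1.0, 1.0))
    if np.isclose(np.sin(beta), 0.0):
        # Gimbal-locked: only the sum/difference of the two z-angles matters.
        return np.arctan2(-A[0, 1], A[0, 0]), beta, 0.0
    alpha = np.arctan2(A[2, 0], A[2, 1])
    gamma = np.arctan2(A[0, 2], -A[1, 2])
    return alpha, beta, gamma

A = Rotation.from_euler("zxz", [0.3, 0.7, -1.1]).as_matrix()
print(euler_zxz_extrinsic(A))   # recovers (0.3, 0.7, -1.1)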
Euler angles (z-y′-x″ intrinsic) → rotation matrix
The rotation matrix is generated from the 3-2-1 intrinsic Euler angles by multiplying the three matrices generated by rotations about the axes.
The axes of the rotation depend on the specific convention being used. For the z-y′-x″ convention the rotations are about the z-, y- and x-axes with angles γ, β and α; the individual matrices are the elementary rotation matrices Rz(γ), Ry(β) and Rx(α) about the corresponding axes.
This yields
A = Rz(γ) Ry(β) Rx(α).
Note: This is valid for a right-hand system, which is the convention used in almost all engineering and physics disciplines.
The interpretation of these right-handed rotation matrices is that they express coordinate transformations (passive) as opposed to point transformations (active). Because A expresses a rotation from the local frame to the global frame (i.e., A encodes the axes of the local frame with respect to the global frame), the elementary rotation matrices are composed as above. Because the inverse rotation is just the rotation transposed, if we wanted the global-to-local rotation, we would write Aᵀ = (Rz(γ) Ry(β) Rx(α))ᵀ.
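A short NumPy sketch of this composition (one common reading of the z-y′-x″ convention, checked against SciPy's intrinsic "ZYX" sequence; illustrative only):

import numpy as np
from scipy.spatial.transform import Rotation

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

yaw, pitch, roll = 0.4, -0.2, 0.9
A = Rz(yaw) @ Ry(pitch) @ Rx(roll)   # first (yaw) rotation ends up leftmost
assert np.allclose(A, Rotation.from_euler("ZYX", [yaw, pitch, roll]).as_matrix())
print(A)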
Rotation matrix ↔ Euler axis/angle
If the Euler angle θ is not a multiple of π, the Euler axis ê and angle θ can be computed from the elements of the rotation matrix A as follows:
θ = arccos((A₁₁ + A₂₂ + A₃₃ − 1) / 2)
e₁ = (A₃₂ − A₂₃) / (2 sin θ)
e₂ = (A₁₃ − A₃₁) / (2 sin θ)
e₃ = (A₂₁ − A₁₂) / (2 sin θ)
Alternatively, the following method can be used:
Eigendecomposition of the rotation matrix yields the eigenvalues 1 and cos θ ± i sin θ. The Euler axis is the eigenvector corresponding to the eigenvalue of 1, and θ can be computed from the remaining eigenvalues.
The Euler axis can be also found using singular value decomposition since it is the normalized vector spanning the null-space of the matrix A − I.
To convert the other way, the rotation matrix corresponding to an Euler axis ê and angle θ can be computed according to Rodrigues' rotation formula (with appropriate modification) as follows:
A = I₃ cos θ + (1 − cos θ) ê êᵀ + [ê]ₓ sin θ
with I₃ the 3 × 3 identity matrix, and
[ê]ₓ = [[0, −e₃, e₂], [e₃, 0, −e₁], [−e₂, e₁, 0]]
is the cross-product matrix.
This expands to:
A_ij = cos θ δ_ij + (1 − cos θ) e_i e_j − sin θ ε_ijk e_k (summing over k), with δ_ij the Kronecker delta and ε_ijk the Levi-Civita symbol.
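A direct NumPy transcription of the formula (a sketch; the helper name is mine), together with a check that the angle can be read back from the trace:

import numpy as np

def axis_angle_to_matrix(e, theta):
    # Rodrigues' rotation formula: A = cos(t) I + (1 - cos(t)) e e^T + sin(t) [e]x
    e = np.asarray(e, dtype=float)
    e = e / np.linalg.norm(e)
    K = np.array([[0, -e[2], e[1]],
                  [e[2], 0, -e[0]],
                  [-e[1], e[0], 0]])          # cross-product matrix [e]x
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(e, e)
            + np.sin(theta) * K)

A = axis_angle_to_matrix([0, 0, 1], np.pi / 2)
print(A.round(6))                             # 90 deg about z maps x to y
print(np.arccos((np.trace(A) - 1) / 2))       # recovers theta = pi/2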
Rotation matrix ↔ quaternion
When computing a quaternion from the rotation matrix there is a sign ambiguity, since q and −q represent the same rotation.
One way of computing the quaternion
q = (q₁, q₂, q₃, q₄)
from the rotation matrix A is as follows:
q₄ = ½ √(1 + A₁₁ + A₂₂ + A₃₃)
q₁ = (A₃₂ − A₂₃) / (4 q₄)
q₂ = (A₁₃ − A₃₁) / (4 q₄)
q₃ = (A₂₁ − A₁₂) / (4 q₄)
There are three other mathematically equivalent ways to compute q. Numerical inaccuracy can be reduced by avoiding situations in which the denominator is close to zero. One of the other three methods looks as follows:
q₁ = ½ √(1 + A₁₁ − A₂₂ − A₃₃)
q₂ = (A₁₂ + A₂₁) / (4 q₁)
q₃ = (A₁₃ + A₃₁) / (4 q₁)
q₄ = (A₃₂ − A₂₃) / (4 q₁)
The rotation matrix corresponding to the quaternion q can be computed as follows:
A = (q₄² − v · v) I₃ + 2 v vᵀ + 2 q₄ [v]ₓ
where
v = (q₁, q₂, q₃) is the vector part of the quaternion and [v]ₓ is its cross-product matrix,
which gives
A = [[1 − 2(q₂² + q₃²), 2(q₁q₂ − q₃q₄), 2(q₁q₃ + q₂q₄)], [2(q₁q₂ + q₃q₄), 1 − 2(q₁² + q₃²), 2(q₂q₃ − q₁q₄)], [2(q₁q₃ − q₂q₄), 2(q₂q₃ + q₁q₄), 1 − 2(q₁² + q₂²)]]
or equivalently, using q₁² + q₂² + q₃² + q₄² = 1, with the diagonal elements rewritten as, e.g., 1 − 2(q₂² + q₃²) = q₁² − q₂² − q₃² + q₄².
This is called the Euler–Rodrigues formula for the transformation matrix
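Both directions are short in code. The sketch below (illustrative; it only implements the well-conditioned branch, with q₄ away from zero) round-trips a unit quaternion through the matrix form:

import numpy as np

def quat_from_matrix(A):
    # Scalar-last (q1, q2, q3, q4); assumes q4 is not close to zero.
    q4 = 0.5 * np.sqrt(1.0 + A[0, 0] + A[1, 1] + A[2, 2])
    return np.array([(A[2, 1] - A[1, 2]) / (4 * q4),
                     (A[0, 2] - A[2, 0]) / (4 * q4),
                     (A[1, 0] - A[0, 1]) / (4 * q4),
                     q4])

def matrix_from_quat(q):
    q1, q2, q3, q4 = q
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q3*q4),     2*(q1*q3 + q2*q4)],
        [2*(q1*q2 + q3*q4),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q1*q4)],
        [2*(q1*q3 - q2*q4),     2*(q2*q3 + q1*q4),     1 - 2*(q1**2 + q2**2)]])

q = np.array([0.5, -0.5, 0.5, 0.5])            # unit norm
assert np.allclose(quat_from_matrix(matrix_from_quat(q)), q)   # round trip
print(matrix_from_quat(q))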
Euler angles ↔ quaternion
Euler angles (z-x-z extrinsic) → quaternion
We will consider the x-convention 3-1-3 extrinsic Euler angles for the following algorithm. The terms of the algorithm depend on the convention used.
We can compute the quaternion
q = (q₁, q₂, q₃, q₄)
from the Euler angles (α, β, γ) as follows:
q₁ = sin(β/2) cos((γ − α)/2)
q₂ = sin(β/2) sin((γ − α)/2)
q₃ = cos(β/2) sin((α + γ)/2)
q₄ = cos(β/2) cos((α + γ)/2)
Euler angles (z-y′-x″ intrinsic) → quaternion
A quaternion equivalent to yaw (ψ), pitch (θ) and roll (φ) angles, or intrinsic Tait–Bryan angles following the z-y′-x″ convention, can be computed by
q₁ = sin(φ/2) cos(θ/2) cos(ψ/2) − cos(φ/2) sin(θ/2) sin(ψ/2)
q₂ = cos(φ/2) sin(θ/2) cos(ψ/2) + sin(φ/2) cos(θ/2) sin(ψ/2)
q₃ = cos(φ/2) cos(θ/2) sin(ψ/2) − sin(φ/2) sin(θ/2) cos(ψ/2)
q₄ = cos(φ/2) cos(θ/2) cos(ψ/2) + sin(φ/2) sin(θ/2) sin(ψ/2)
Quaternion → Euler angles (z-x-z extrinsic)
Given the rotation quaternion
q = (q₁, q₂, q₃, q₄),
the x-convention 3-1-3 extrinsic Euler angles can be computed by
α = atan2(q₃, q₄) − atan2(q₂, q₁)
β = 2 atan2(√(q₁² + q₂²), √(q₃² + q₄²))
γ = atan2(q₃, q₄) + atan2(q₂, q₁)
Quaternion → Euler angles (z-y′-x″ intrinsic)
Given the rotation quaternion
q = (q₁, q₂, q₃, q₄),
the yaw, pitch and roll angles, or intrinsic Tait–Bryan angles following the z-y′-x″ convention, can be computed by
roll: φ = atan2(2(q₄q₁ + q₂q₃), 1 − 2(q₁² + q₂²))
pitch: θ = arcsin(2(q₄q₂ − q₃q₁))
yaw: ψ = atan2(2(q₄q₃ + q₁q₂), 1 − 2(q₂² + q₃²))
Euler axis–angle ↔ quaternion
Given the Euler axis ê and angle θ, the quaternion
q = (q₁, q₂, q₃, q₄)
can be computed by
(q₁, q₂, q₃) = ê sin(θ/2) and q₄ = cos(θ/2)
Given the rotation quaternion q, define the vector part v = (q₁, q₂, q₃).
Then the Euler axis ê and angle θ can be computed by
ê = v / |v| and θ = 2 arccos(q₄) = 2 atan2(|v|, q₄)
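In code (a sketch using the scalar-last convention; atan2 is preferred over arccos for numerical robustness, and the axis is undefined at zero angle):

import numpy as np

def quat_from_axis_angle(e, theta):
    e = np.asarray(e, dtype=float) / np.linalg.norm(e)
    return np.append(e * np.sin(theta / 2), np.cos(theta / 2))

def axis_angle_from_quat(q):
    v, w = q[:3], q[3]
    theta = 2 * np.arctan2(np.linalg.norm(v), w)
    axis = v / np.linalg.norm(v)        # undefined when theta == 0
    return axis, theta

q = quat_from_axis_angle([1, 1, 0], 2 * np.pi / 3)
print(q)
print(axis_angle_from_quat(q))          # recovers the normalized axis and angle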
Rotation matrix ↔ Rodrigues vector
Rodrigues vector → Rotation matrix
Since the Rodrigues vector is related to the rotation quaternion by g = v / q₄, where v = (q₁, q₂, q₃) is the vector part, and by making use of the unit-norm property q₄² (1 + g · g) = 1, the formula can be obtained by factoring q₄² out of the final expression obtained for quaternions.
Leading to the final formula:
A = ((1 − g · g) I₃ + 2 g gᵀ + 2 [g]ₓ) / (1 + g · g)
Conversion formulae for derivatives
Rotation matrix ↔ angular velocities
The angular velocity vector ω
can be extracted from the time derivative of the rotation matrix, dA/dt, by the following relation:
[ω]ₓ = (dA/dt) Aᵀ
where [ω]ₓ is the cross-product matrix of ω.
The derivation is adapted from Ioffe as follows:
For any vector r₀, consider r(t) = A(t) r₀ and differentiate it:
dr/dt = (dA/dt) r₀ = (dA/dt) Aᵀ r(t)
The derivative of a vector is the linear velocity of its tip. Since A is a rotation matrix, by definition the length of r(t) is always equal to the length of r₀, and hence it does not change with time. Thus, when r(t) rotates, its tip moves along a circle, and the linear velocity of its tip is tangential to the circle; i.e., always perpendicular to r(t). In this specific case, the relationship between the linear velocity vector and the angular velocity vector is
dr/dt = ω × r(t)
(see circular motion and cross product).
By the transitivity of the abovementioned equations,
(dA/dt) Aᵀ r(t) = ω × r(t)
which implies
(dA/dt) Aᵀ = [ω]ₓ.
Quaternion ↔ angular velocities
The angular velocity vector
ω = (ω₁, ω₂, ω₃)
can be obtained from the derivative of the quaternion dq/dt as follows:
(ω, 0) = 2 (dq/dt) ⊗ q̃
where q̃ is the conjugate (inverse) of q and (ω, 0) denotes the pure quaternion with vector part ω.
Conversely, the derivative of the quaternion is
dq/dt = ½ (ω, 0) ⊗ q.
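This kinematic relation is the basis of quaternion attitude integration. Below is a minimal sketch (forward-Euler steps with renormalization, scalar-last Hamilton convention as above; step size and rate are arbitrary):

import numpy as np

def qmul(q2, q1):
    v1, w1 = q1[:3], q1[3]
    v2, w2 = q2[:3], q2[3]
    return np.append(w2 * v1 + w1 * v2 + np.cross(v2, v1),
                     w2 * w1 - np.dot(v2, v1))

q = np.array([0.0, 0.0, 0.0, 1.0])       # identity orientation
omega = np.array([0.0, 0.0, 1.0])        # 1 rad/s about z
dt = 1e-3
for _ in range(int(np.pi / 2 / dt)):     # integrate for ~ pi/2 seconds
    omega_quat = np.append(omega, 0.0)   # pure quaternion (omega, 0)
    q = q + 0.5 * dt * qmul(omega_quat, q)   # dq/dt = (1/2)(omega, 0) (x) q
    q /= np.linalg.norm(q)               # renormalize against drift
print(q)   # ~ (0, 0, sin(pi/4), cos(pi/4)): a 90 deg rotation about z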
Rotors in a geometric algebra
The formalism of geometric algebra (GA) provides an extension and interpretation of the quaternion method. Central to GA is the geometric product of vectors, an extension of the traditional inner and cross products, given by
a b = a · b + a ∧ b
where the symbol ∧ denotes the exterior product or wedge product. This product of the vectors a and b produces two terms: a scalar part from the inner product and a bivector part from the wedge product. This bivector describes the plane perpendicular to what the cross product of the vectors would return.
Bivectors in GA have some unusual properties compared to vectors. Under the geometric product, bivectors have a negative square: the bivector x̂ŷ describes the xy-plane. Its square is (x̂ŷ)² = x̂ŷx̂ŷ. Because the unit basis vectors are orthogonal to each other, the geometric product reduces to the antisymmetric outer product, so x̂ and ŷ can be swapped freely at the cost of a factor of −1. The square reduces to −x̂x̂ŷŷ = −1 since the basis vectors themselves square to +1.
This result holds generally for all bivectors, and as a result the bivector plays a role similar to the imaginary unit. Geometric algebra uses bivectors in its analogue to the quaternion, the rotor, given by
R = exp(−Bθ/2) = cos(θ/2) − B sin(θ/2)
where B is a unit bivector that describes the plane of rotation. Because B squares to −1, the power series expansion of R generates the trigonometric functions. The rotation formula that maps a vector a to a rotated vector b is then
b = R a R†
where
R† = cos(θ/2) + B sin(θ/2)
is the reverse of R (reversing the order of the vectors in R is equivalent to changing its sign).
Example. A rotation about the axis ẑ
can be accomplished by converting ẑ to its dual bivector,
B = ẑ i, where i = x̂ŷẑ is the unit volume element, the only trivector (pseudoscalar) in three-dimensional space. The result is B = x̂ŷ.
In three-dimensional space, however, it is often simpler to leave the expression B = ẑi, using the fact that i commutes with all objects in 3D and also squares to −1. A rotation of the x̂ vector in this plane by an angle θ is then
x̂′ = R x̂ R†.
Recognizing that
and that is the reflection of about the plane perpendicular to gives a geometric interpretation to the rotation operation: the rotation preserves the components that are parallel to and changes only those that are perpendicular. The terms are then computed:
The result of the rotation is then
A simple check on this result is the angle θ = 90°. Such a rotation should map x̂ to ŷ. Indeed, the rotation reduces to
x̂ cos 90° + ŷ sin 90° = ŷ
exactly as expected. This rotation formula is valid not only for vectors but for any multivector. In addition, when Euler angles are used, the complexity of the operation is much reduced. Compounded rotations come from multiplying the rotors, so the total rotor from Euler angles is
but
These rotors come back out of the exponentials like so:
where refers to rotation in the original coordinates. Similarly for the rotation,
Noting that and commute (rotations in the same plane must commute), and the total rotor becomes
Thus, the compounded rotations of Euler angles become a series of equivalent rotations in the original fixed frame.
While rotors in geometric algebra work almost identically to quaternions in three dimensions, the power of this formalism is its generality: this method is appropriate and valid in spaces with any number of dimensions. In 3D, rotations have three degrees of freedom, a degree for each linearly independent plane (bivector) the rotation can take place in. It has been known that pairs of quaternions can be used to generate rotations in 4D, yielding six degrees of freedom, and the geometric algebra approach verifies this result: in 4D, there are six linearly independent bivectors that can be used as the generators of rotations.
See also
Euler filter
Orientation (geometry)
Rotation around a fixed axis
References
Further reading
External links
EuclideanSpace has a wealth of information on rotation representation
Q36. How do I generate a rotation matrix from Euler angles? and Q37. How do I convert a rotation matrix to Euler angles? — The Matrix and Quaternions FAQ
Imaginary numbers are not Real – the Geometric Algebra of Spacetime – Section "Rotations and Geometric Algebra" derives and applies the rotor description of rotations
Starlino's DCM Tutorial – Direction cosine matrix theory tutorial and applications. Space orientation estimation algorithm using accelerometer, gyroscope and magnetometer IMU devices. Using complimentary filter (popular alternative to Kalman filter) with DCM matrix.
Rotation
Euclidean symmetries
Orientation (geometry)
Rigid bodies mechanics | Rotation formalisms in three dimensions | [
"Physics",
"Mathematics"
] | 5,198 | [
"Physical phenomena",
"Functions and mappings",
"Euclidean symmetries",
"Mathematical objects",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Topology",
"Space",
"Mathematical relations",
"Geometry",
"Spacetime",
"Orientation (geometry)",
"Symmetry"
] |
7,290,910 | https://en.wikipedia.org/wiki/B%E2%80%93Bbar%20oscillation | Neutral B meson oscillations (or – oscillations) are one of the manifestations of the neutral particle oscillation, a fundamental prediction of the Standard Model of particle physics. It is the phenomenon of B mesons changing (or oscillating) between their matter and antimatter forms before their decay. The meson can exist as either a bound state of a strange antiquark and a bottom quark, or a strange quark and bottom antiquark. The oscillations in the neutral B sector are analogous to the phenomena that produce long and short-lived neutral kaons.
Mixing of the Bs meson with its antiparticle was observed by the CDF experiment at Fermilab in 2006 and by LHCb at CERN in 2011 and 2021.
Excess of matter over antimatter
The Standard Model predicts that regular matter is slightly favored in these oscillations over its antimatter counterpart, making strange B mesons of special interest to particle physicists. The observation of the B–Bbar mixing phenomena led physicists to propose the construction of the so-called "B factories" in the early 1990s. They realized that a precise measurement of B–Bbar oscillations could pin down the unitarity triangle and perhaps explain the excess of matter over antimatter in the universe. To this end, construction began on two "B factories" in the late nineties, one at the Stanford Linear Accelerator Center (SLAC) in California and one at KEK in Japan.
These B factories, BaBar and Belle, were set at the Υ(4S) resonance, which is just above the threshold for decay into two B mesons.
On 14 May 2010, physicists at the Fermi National Accelerator Laboratory reported that the oscillations decayed into matter 1% more often than into antimatter, which may help explain the abundance of matter over antimatter in the observed Universe. However, more recent results at LHCb in 2011, 2012, and 2021 with larger data samples have demonstrated no significant deviation from the Standard Model prediction of very nearly zero asymmetry.
See also
Baryogenesis
CP Violation
Kaon
Neutral particle oscillation
Strange B meson
References
Further reading
— paper describing the discovery of B-meson mixing by the ARGUS collaboration
— announcement of the 5 sigma discovery
External links
BaBar Public Homepage
Belle Public Homepage
B physics | B–Bbar oscillation | [
"Physics"
] | 479 | [
"Particle physics stubs",
"Particle physics"
] |
7,291,166 | https://en.wikipedia.org/wiki/Strange%20B%20meson | The meson is a meson composed of a bottom antiquark and a strange quark. Its antiparticle is the meson, composed of a bottom quark and a strange antiquark.
Bs–anti-Bs oscillations
Strange B mesons are noted for their ability to oscillate between matter and antimatter via a box diagram, with the oscillation frequency Δms measured by the CDF experiment at Fermilab.
That is, a meson composed of a bottom quark and a strange antiquark, the anti-Bs meson, can spontaneously change into a bottom antiquark and strange quark pair, the Bs meson, and vice versa.
On 25 September 2006, Fermilab announced that they had claimed discovery of previously-only-theorized Bs meson oscillation. According to Fermilab's press release:
Ronald Kotulak, writing for the Chicago Tribune, called the particle "bizarre" and stated that the meson "may open the door to a new era of physics" with its proven interactions with the "spooky realm of antimatter".
Better understanding of the meson is one of the main objectives of the LHCb experiment conducted at the Large Hadron Collider. On 24 April 2013, CERN physicists in the LHCb collaboration announced that they had observed CP violation in the decay of strange mesons for the first time. Scientists found the Bs meson decaying into two muons for the first time, with Large Hadron Collider experiments casting doubt on the scientific theory of supersymmetry.
CERN physicist Tara Shears described the CP violation observations as "verification of the validity of the Standard Model of physics".
Rare decays
The rare decays of the Bs meson are an important test of the Standard Model. The branching fraction of the strange B meson to a pair of muons is very precisely predicted, with a value of Br(Bs → μ+μ−)SM = (3.66 ± 0.23) × 10−9. Any variation from this rate would indicate possible physics beyond the Standard Model, such as supersymmetry. The first definitive measurement was made from a combination of LHCb and CMS experiment data.
This result is compatible with the Standard Model and set limits on possible extensions.
See also
B meson
B–Bbar oscillation
References
External links
Mesons
Strange quark
B physics | Strange B meson | [
"Physics"
] | 494 | [
"Particle physics stubs",
"Particle physics"
] |
11,935,110 | https://en.wikipedia.org/wiki/STAT6 | Signal transducer and activator of transcription 6 (STAT6) is a transcription factor that belongs to the Signal Transducer and Activator of Transcription (STAT) family of proteins. The proteins of STAT family transmit signals from a receptor complex to the nucleus and activate gene expression. Similarly as other STAT family proteins, STAT6 is also activated by growth factors and cytokines. STAT6 is mainly activated by cytokines interleukin-4 and interleukin-13.
Molecular biology
In the human genome, STAT6 protein is encoded by the STAT6 gene, located on the chromosome 12q13.3-q14.1. The gene encompasses over 19 kb and consists of 23 exons. STAT6 shares structural similarity with the other STAT proteins and is composed of the N-terminal domain, DNA binding domain, SH3- like domain, SH2 domain and transactivation domain (TAD).
STAT proteins are activated by the Janus family (JAK) tyrosine kinases in response to cytokine exposure. STAT6 is activated by the cytokines interleukin-4 (IL-4) and interleukin-13 (IL-13) through their receptors, which both contain the α subunit of the IL-4 receptor (IL-4Rα). Tyrosine phosphorylation of STAT6 after stimulation by IL-4 results in the formation of STAT6 homodimers that bind specific DNA elements via a DNA-binding domain.
Function
The STAT6-mediated signaling pathway is required for the development of T-helper type 2 (Th2) cells and the Th2 immune response. Expression of Th2 cytokines, including IL-4, IL-13, and IL-5, was reduced in STAT6-deficient mice. The STAT6 protein is crucial in IL-4-mediated biological responses. It was found that STAT6 induces the expression of BCL2L1/BCL-X(L), which is responsible for the anti-apoptotic activity of IL-4. IL-4 stimulates the phosphorylation of the IL-4 receptor, which recruits cytosolic STAT6 by its SH2 domain, and STAT6 is phosphorylated on tyrosine 641 (Y641) by JAK1, which results in the dimerization and nuclear translocation of STAT6 to activate target genes. Knockout studies in mice suggested roles of this gene in the differentiation of T helper 2 (Th2) cells, expression of cell surface markers, and class switching of immunoglobulins.
Activation of the STAT6 signaling pathway is necessary for macrophage function and is required for the M2 subtype activation of macrophages. The STAT6 protein also regulates other transcription factors such as Gata3, an important regulator of Th2 differentiation. STAT6 is also required for the development of IL-9-secreting T cells.
STAT6 also plays a critical role in Th2 lung inflammatory responses, including clearance of parasitic infections, and in the pathogenesis of asthma. Th2-cell derived cytokines such as IL-4 and IL-13 induce the production of IgE, which is a major mediator in the allergic response. Association studies searching for a relation of polymorphisms in STAT6 with IgE levels or asthma discovered a few polymorphisms significantly associated with the examined traits. Only two polymorphisms showed repeatedly significant clinical association and/or a functional effect on STAT6 function (GT repeats in exon 1 and the rs324011 polymorphism in intron 2).
Interactions
STAT6 has been shown to interact with:
CREB-binding protein,
EP300,
IRF4,
NFKB1,
Nuclear receptor coactivator 1, and
SND1.
Pathology
Gene fusion
Recurrent somatic fusions of the two genes, NGFI-A–binding protein 2 (NAB2) and STAT6, located at chromosomal region 12q13, have been identified in solitary fibrous tumors.
Amplification
STAT6 is amplified in a subset of dedifferentiated liposarcoma.
See also
Interleukin 4
References
Further reading
External links
Gene expression
Immune system
Proteins
Transcription factors
Signal transduction | STAT6 | [
"Chemistry",
"Biology"
] | 869 | [
"Biomolecules by chemical classification",
"Immune system",
"Gene expression",
"Signal transduction",
"Organ systems",
"Molecular genetics",
"Induced stem cells",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins",
"Neurochemistry",
"Transcription factors"
] |
11,939,462 | https://en.wikipedia.org/wiki/Ligase%20ribozyme | The RNA Ligase ribozyme was the first of several types of synthetic ribozymes produced by in vitro evolution and selection techniques. They are an important class of ribozymes because they catalyze the assembly of RNA fragments into phosphodiester RNA polymers, a reaction required of all extant nucleic acid polymerases and thought to be required for any self-replicating molecule. Ideas that the origin of life may have involved the first self-replicating molecules being ribozymes are called RNA World hypotheses. Ligase ribozymes may have been part of such a pre-biotic RNA world.
In order to copy RNA, fragments or monomers (individual building blocks) that have 5′-triphosphates must be ligated together. This is true for modern (protein-based) polymerases, and is also the most likely mechanism by which a ribozyme self-replicase in an RNA world might function. Yet no one has found a natural ribozyme that can perform this reaction.
In vitro evolution and selection
RNA in vitro evolution or SELEX enables the artificial evolution and selection of RNA molecules that possess a desired property, such as binding affinity for a particular ligand or an activity such as that of an enzyme or catalyst. The first such selections involved isolation of various aptamers that bind to small molecules. The first catalytic RNAs produced by in vitro evolution were RNA ligases, catalytic RNAs that join two RNA fragments to produce a single adduct. The most active ligase known to date is the Class I ligase, isolated from random sequence (work of David Bartel, while in the Szostak lab). Other examples of RNA ligases include the L1 ligase (Robertson and Ellington), the R3C ligase (Joyce), the DSL ligase (Inoue). All these ligases catalyze the formation of a 3′–5′ phosphodiester bond between two RNA fragments.
The L1 ligase
Michael Robertson and Andrew Ellington evolved a ligase ribozyme that performs the desired 5′–3′ RNA assembly reaction, and called this the L1 ligase. To better understand the details of how this ribozyme folds into a structure that permits it to catalyze this fundamental reaction, the X-ray crystal structure has been solved. The structure is composed of three helical stems called stem A, B and C, that connect at a three helix junction.
References
Further reading
Non-coding RNA
Ribozymes
RNA splicing | Ligase ribozyme | [
"Chemistry"
] | 531 | [
"Catalysis",
"Ribozymes"
] |
11,944,078 | https://en.wikipedia.org/wiki/Efficient%20energy%20use | Efficient energy use, or energy efficiency, is the process of reducing the amount of energy required to provide products and services. There are many technologies and methods available that are more energy efficient than conventional systems. For example, insulating a building allows it to use less heating and cooling energy while still maintaining a comfortable temperature. Another method is to remove energy subsidies that promote high energy consumption and inefficient energy use. Improved energy efficiency in buildings, industrial processes and transportation could reduce the world's energy needs in 2050 by one third.
There are two main motivations to improve energy efficiency. Firstly, one motivation is to achieve cost savings during the operation of the appliance or process. However, installing an energy-efficient technology comes with an upfront cost, the capital cost. The different types of costs can be analyzed and compared with a life-cycle assessment. Another motivation for energy efficiency is to reduce greenhouse gas emissions and hence work towards climate action. A focus on energy efficiency can also have a national security benefit because it can reduce the amount of energy that has to be imported from other countries.
Energy efficiency and renewable energy go hand in hand for sustainable energy policies. They are high priority actions in the energy hierarchy.
Aims
Energy productivity, which measures the output and quality of goods and services per unit of energy input, can come from either reducing the amount of energy required to produce something, or from increasing the quantity or quality of goods and services from the same amount of energy.
From the point of view of an energy consumer, the main motivation for energy efficiency is often simply saving money by lowering the cost of purchasing energy. From an energy policy point of view, there has been a long trend toward wider recognition of energy efficiency as the "first fuel", meaning the ability to replace or avoid the consumption of actual fuels. In fact, the International Energy Agency has calculated that the application of energy efficiency measures between 1974 and 2010 avoided more energy consumption in its member states than the consumption of any single fuel, including the fossil fuels oil, coal and natural gas.
Moreover, it has long been recognized that energy efficiency brings other benefits additional to the reduction of energy consumption. Some estimates of the value of these other benefits, often called multiple benefits, co-benefits, ancillary benefits or non-energy benefits, have put their summed value even higher than that of the direct energy benefits.
These multiple benefits of energy efficiency include things such as reduced greenhouse gas emissions, reduced air pollution and improved health, and improved energy security. Methods for calculating the monetary value of these multiple benefits have been developed, including e.g. the choice experiment method for improvements that have a subjective component (such as aesthetics or comfort) and Tuominen-Seppänen method for price risk reduction. When included in the analysis, the economic benefit of energy efficiency investments can be shown to be significantly higher than simply the value of the saved energy.
Energy efficiency has proved to be a cost-effective strategy for building economies without necessarily increasing energy consumption. For example, the state of California began implementing energy-efficiency measures in the mid-1970s, including building code and appliance standards with strict efficiency requirements. During the following years, California's energy consumption has remained approximately flat on a per capita basis while national US consumption doubled. As part of its strategy, California implemented a "loading order" for new energy resources that puts energy efficiency first, renewable electricity supplies second, and new fossil-fired power plants last. States such as Connecticut and New York have created quasi-public Green Banks to help residential and commercial building-owners finance energy efficiency upgrades that reduce emissions and cut consumers' energy costs.
Related concepts
Energy conservation
Energy conservation is broader than energy efficiency in that it includes active efforts to decrease energy consumption, for example through behaviour change, in addition to using energy more efficiently. Examples of conservation without efficiency improvements are heating a room less in winter, using the car less, air-drying clothes instead of using a dryer, or enabling energy-saving modes on a computer. As with other definitions, the boundary between efficient energy use and energy conservation can be fuzzy, but both are important in environmental and economic terms.
Sustainable energy
Energy efficiency (using less energy to deliver the same goods or services, or delivering comparable services with fewer goods) is a cornerstone of many sustainable energy strategies. The International Energy Agency (IEA) has estimated that increasing energy efficiency could achieve 40% of greenhouse gas emission reductions needed to fulfil the Paris Agreement's goals. Energy can be conserved by increasing the technical efficiency of appliances, vehicles, industrial processes, and buildings.
Unintended consequences
If the demand for energy services remains constant, improving energy efficiency will reduce energy consumption and carbon emissions. However, many efficiency improvements do not reduce energy consumption by the amount predicted by simple engineering models. This is because they make energy services cheaper, and so consumption of those services increases. For example, since fuel efficient vehicles make travel cheaper, consumers may choose to drive farther, thereby offsetting some of the potential energy savings. Similarly, an extensive historical analysis of technological efficiency improvements has conclusively shown that energy efficiency improvements were almost always outpaced by economic growth, resulting in a net increase in resource use and associated pollution. These are examples of the direct rebound effect.
Estimates of the size of the rebound effect range from roughly 5% to 40%. The rebound effect is likely to be less than 30% at the household level and may be closer to 10% for transport. A rebound effect of 30% implies that improvements in energy efficiency should achieve 70% of the reduction in energy consumption projected using engineering models.
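The arithmetic implied here is simple enough to sketch directly. A minimal illustration (all numbers are assumptions chosen for the example, not data from the studies cited above):

```python
# Sketch: net energy savings after a direct rebound effect.

def net_savings(predicted_savings: float, rebound: float) -> float:
    """Fraction of baseline energy actually saved, given the savings
    predicted by an engineering model and a direct rebound effect."""
    return predicted_savings * (1.0 - rebound)

baseline_use = 100.0   # arbitrary units of energy
predicted = 0.20       # engineering model predicts 20% savings

for rebound in (0.05, 0.10, 0.30, 0.40):   # range quoted in the text
    saved = net_savings(predicted, rebound) * baseline_use
    print(f"rebound {rebound:.0%}: {saved:.1f} units saved")
```

With a 30% rebound, only 14 of the predicted 20 units are saved, i.e. 70% of the engineering estimate, matching the statement above.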
Options
Appliances
Modern appliances, such as freezers, ovens, stoves, dishwashers, and clothes washers and dryers, use significantly less energy than older appliances. Current energy-efficient refrigerators, for example, use 40 percent less energy than conventional models did in 2001. If all households in Europe replaced their more-than-ten-year-old appliances with new ones, 20 billion kWh of electricity would be saved annually, reducing CO2 emissions by almost 18 billion kg. In the US, the corresponding saving would be 17 billion kWh of electricity, with a proportional reduction in CO2 emissions. According to a 2009 study from McKinsey & Company, the replacement of old appliances is one of the most efficient global measures to reduce emissions of greenhouse gases. Modern power management systems also reduce energy usage by idle appliances by turning them off or putting them into a low-energy mode after a certain time. Many countries identify energy-efficient appliances using energy input labeling.
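As a quick sanity check on the European figures just quoted, dividing the avoided emissions by the avoided electricity gives the implied average emission factor (this bundles many country-specific factors into one assumed constant):

```python
# Implied CO2 emission factor behind the European appliance figures above.
electricity_saved_kwh = 20e9   # ~20 billion kWh per year
co2_avoided_kg = 18e9          # ~18 billion kg CO2 per year
print(f"{co2_avoided_kg / electricity_saved_kwh:.2f} kg CO2 per kWh")  # ~0.90
```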
The impact of energy efficiency on peak demand depends on when the appliance is used. For example, an air conditioner uses more energy during the afternoon when it is hot. Therefore, an energy-efficient air conditioner will have a larger impact on peak demand than off-peak demand. An energy-efficient dishwasher, on the other hand, uses more energy during the late evening when people do their dishes. This appliance may have little to no impact on peak demand.
Between 2001 and 2021, technology companies replaced traditional silicon switches in electric circuits with faster gallium nitride transistors to make new devices as energy efficient as feasible, although gallium nitride transistors remain more costly. This is a significant step toward lowering the carbon footprint of electronics.
Building design
A building's location and surroundings play a key role in regulating its temperature and illumination. For example, trees, landscaping, and hills can provide shade and block wind. In cooler climates, designing northern hemisphere buildings with south facing windows and southern hemisphere buildings with north facing windows increases the amount of sun (ultimately heat energy) entering the building, minimizing energy use, by maximizing passive solar heating. Tight building design, including energy-efficient windows, well-sealed doors, and additional thermal insulation of walls, basement slabs, and foundations can reduce heat loss by 25 to 50 percent.
Dark roofs may become up to 39 °C (70 °F) hotter than the most reflective white surfaces. They transmit some of this additional heat inside the building. US Studies have shown that lightly colored roofs use 40 percent less energy for cooling than buildings with darker roofs. White roof systems save more energy in sunnier climates. Advanced electronic heating and cooling systems can moderate energy consumption and improve the comfort of people in the building.
Proper placement of windows and skylights as well as the use of architectural features that reflect light into a building can reduce the need for artificial lighting. Increased use of natural and task lighting has been shown by one study to increase productivity in schools and offices. Compact fluorescent lamps use two-thirds less energy and may last 6 to 10 times longer than incandescent light bulbs. Newer fluorescent lights produce a natural light, and in most applications they are cost effective, despite their higher initial cost, with payback periods as low as a few months. LED lamps use only about 10% of the energy an incandescent lamp requires.
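A simple payback calculation of the kind alluded to above can be sketched as follows; the lamp wattages reflect the roughly 90% saving quoted for LEDs, while the prices and usage hours are illustrative assumptions, not figures from this article:

```python
# Sketch: simple payback period for swapping an incandescent lamp for an LED.
incandescent_w, led_w = 60.0, 6.0   # LED uses ~10% of the energy
hours_per_year = 1000.0             # assumed usage
price_per_kwh = 0.15                # assumed electricity price
extra_lamp_cost = 3.0               # assumed LED price premium

annual_savings = (incandescent_w - led_w) / 1000.0 * hours_per_year * price_per_kwh
print(f"annual savings: {annual_savings:.2f}, "
      f"payback: {extra_lamp_cost / annual_savings:.2f} years")
```

Under these assumptions the price premium pays back in a few months, consistent with the payback periods mentioned above.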
Leadership in Energy and Environmental Design (LEED) is a rating system organized by the US Green Building Council (USGBC) to promote environmental responsibility in building design. They currently offer four levels of certification for existing buildings (LEED-EBOM) and new construction (LEED-NC) based on a building's compliance with the following criteria: Sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, and innovation in design. In 2013, USGBC developed the LEED Dynamic Plaque, a tool to track building performance against LEED metrics and a potential path to recertification. The following year, the council collaborated with Honeywell to pull data on energy and water use, as well as indoor air quality from a BAS to automatically update the plaque, providing a near-real-time view of performance. The USGBC office in Washington, D.C. is one of the first buildings to feature the live-updating LEED Dynamic Plaque.
Industry
Industries use a large amount of energy to power a diverse range of manufacturing and resource extraction processes. Many industrial processes require large amounts of heat and mechanical power, most of which is delivered as natural gas, petroleum fuels, and electricity. In addition some industries generate fuel from waste products that can be used to provide additional energy.
Because industrial processes are so diverse it is impossible to describe the multitude of possible opportunities for energy efficiency in industry. Many depend on the specific technologies and processes in use at each industrial facility. There are, however, a number of processes and energy services that are widely used in many industries.
Various industries generate steam and electricity for subsequent use within their facilities. When electricity is generated, the heat that is produced as a by-product can be captured and used for process steam, heating or other industrial purposes. Conventional electricity generation is about 30% efficient, whereas combined heat and power (also called co-generation) converts up to 90 percent of the fuel into usable energy.
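A back-of-the-envelope comparison using the two efficiencies just quoted might look like the following; it assumes the co-generated heat is fully used, which is the best case:

```python
# Sketch: fuel needed for the same useful energy output, with and without CHP.
useful_energy = 30.0      # arbitrary units

conventional_eff = 0.30   # ~30% of fuel becomes electricity; heat is wasted
chp_eff = 0.90            # up to 90% of fuel becomes usable energy (power + heat)

print(f"conventional: {useful_energy / conventional_eff:.0f} units of fuel")
print(f"CHP:          {useful_energy / chp_eff:.1f} units of fuel")
```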
Advanced boilers and furnaces can operate at higher temperatures while burning less fuel. These technologies are more efficient and produce fewer pollutants.
Over 45 percent of the fuel used by US manufacturers is burnt to make steam. The typical industrial facility can reduce this energy usage by 20 percent (according to the US Department of Energy) by insulating steam and condensate return lines, stopping steam leakage, and maintaining steam traps.
Electric motors usually run at a constant speed, but a variable speed drive allows the motor's energy output to match the required load. This achieves energy savings ranging from 3 to 60 percent, depending on how the motor is used. Motor coils made of superconducting materials can also reduce energy losses. Motors may also benefit from voltage optimization.
Industry uses a large number of pumps and compressors of all shapes and sizes and in a wide variety of applications. The efficiency of pumps and compressors depends on many factors but often improvements can be made by implementing better process control and better maintenance practices. Compressors are commonly used to provide compressed air which is used for sand blasting, painting, and other power tools. According to the US Department of Energy, optimizing compressed air systems by installing variable speed drives, along with preventive maintenance to detect and fix air leaks, can improve energy efficiency 20 to 50 percent.
Transportation
Automobiles
The estimated energy efficiency for an automobile is 280 passenger-miles per million Btu. There are several ways to enhance a vehicle's energy efficiency. Using improved aerodynamics to minimize drag can increase vehicle fuel efficiency. Reducing vehicle weight can also improve fuel economy, which is why composite materials are widely used in car bodies.
More advanced tires, with decreased tire-to-road friction and rolling resistance, can save gasoline. Fuel economy can be improved by up to 3.3% by keeping tires inflated to the correct pressure. Replacing a clogged air filter can improve a car's fuel consumption by as much as 10 percent on older vehicles. On newer vehicles (1980s and up) with fuel-injected, computer-controlled engines, a clogged air filter has no effect on mpg, but replacing it may improve acceleration by 6-11 percent. Aerodynamics also aid the efficiency of a vehicle: the design of a car determines how much gas is needed to move it through the surrounding air, and thus how efficiently the expended energy is used.
Turbochargers can increase fuel efficiency by allowing a smaller displacement engine. The 'Engine of the year 2011' is the Fiat TwinAir engine equipped with an MHI turbocharger. "Compared with a 1.2-liter 8v engine, the new 85 HP turbo has 23% more power and a 30% better performance index. The performance of the two-cylinder is not only equivalent to a 1.4-liter 16v engine, but fuel consumption is 30% lower."
Energy-efficient vehicles may reach twice the fuel efficiency of the average automobile. Cutting-edge designs, such as the diesel Mercedes-Benz Bionic concept vehicle, have achieved fuel efficiency roughly four times the conventional automotive average of their time.
The mainstream trend in automotive efficiency is the rise of electric vehicles (all-electric or hybrid electric). Electric engines have more than double the efficiency of internal combustion engines. Hybrids, like the Toyota Prius, use regenerative braking to recapture energy that would dissipate in normal cars; the effect is especially pronounced in city driving. Plug-in hybrids also have increased battery capacity, which makes it possible to drive limited distances without burning any gasoline; in this case, energy efficiency is dictated by whatever process (such as coal-burning, hydroelectric, or renewable sources) generated the power. Plug-ins can typically drive a limited distance purely on electricity without recharging; if the battery runs low, a gas engine kicks in, allowing for extended range. Finally, all-electric cars are also growing in popularity; the Tesla Model S sedan is the only high-performance all-electric car currently on the market.
Street lighting
Cities around the globe light up millions of streets with 300 million lights. Some cities are seeking to reduce street light power consumption by dimming lights during off-peak hours or switching to LED lamps. LED lamps are known to reduce the energy consumption by 50% to 80%.
Aircraft
There are several ways to improve aviation's use of energy through modifications to aircraft and to air traffic management. Aircraft improve with better aerodynamics, engines, and weight reduction. Seat density and cargo load factors also contribute to efficiency.
Air traffic management systems can allow automation of takeoff, landing, and collision avoidance. Automation is also possible within airports, from simple tasks such as HVAC and lighting to more complex ones such as security and scanning.
International Action
International agreements and pledges
At the 2023 United Nations Climate Change Conference, one of the adopted declarations was the Global Renewables and Energy Efficiency Pledge, signed by 123 countries. The declaration includes commitments to treat energy efficiency as the "first fuel" and to double the rate of annual energy efficiency improvement from 2% to 4% by 2030. China and India did not sign the pledge.
International standards
International standards ISO 17743 and ISO 17742 provide a documented methodology for calculating and reporting on energy savings and energy efficiency for countries and cities.
Examples by country or region
Europe
The first EU-wide energy efficiency target was set in 1998. Member states agreed to improve energy efficiency by 1 percent a year over twelve years. In addition, legislation about products, industry, transport and buildings has contributed to a general energy efficiency framework. More effort is needed to address heating and cooling: there is more heat wasted during electricity production in Europe than is required to heat all buildings in the continent. All in all, EU energy efficiency legislation is estimated to deliver savings worth the equivalent of up to 326 million tons of oil per year by 2020.
The EU set itself a 20% energy savings target by 2020 compared to 1990 levels, but member states decide individually how energy savings will be achieved. At an EU summit in October 2014, EU countries agreed on a new energy efficiency target of 27% or greater by 2030. One mechanism used to achieve the target of 27% is the 'Suppliers Obligations & White Certificates'. The ongoing debate around the 2016 Clean Energy Package also puts an emphasis on energy efficiency, but the goal will probably remain around 30% greater efficiency compared to 1990 levels. Some have argued that this will not be enough for the EU to meet its Paris Agreement goals of reducing greenhouse gas emissions by 40% compared to 1990 levels.
In the European Union, 78% of enterprises proposed energy-saving methods in 2023, 67% listed energy contract renegotiation as a strategy, and 62% stated passing on costs to consumers as a plan to deal with energy market trends. Larger organisations were found more likely to invest in energy efficiency, green innovation, and climate change, with a significant rise in energy efficiency investments reported by SMEs and mid-cap companies.
Germany
Energy efficiency is central to energy policy in Germany.
As of late 2015, national policy included a set of efficiency and consumption targets, with actual values tracked against them for 2014.
Recent progress toward improved efficiency has been steady aside from the financial crisis of 2007–08.
Some however believe energy efficiency is still under-recognized in terms of its contribution to Germany's energy transformation (or Energiewende).
Efforts to reduce final energy consumption in the transport sector have not been successful, with growth of 1.7% between 2005 and 2014. This growth is due to both road passenger and road freight transport. Both sectors increased their overall distance travelled to record the highest figures ever for Germany. Rebound effects played a significant role, both between improved vehicle efficiency and the distance travelled, and between improved vehicle efficiency and an increase in vehicle weights and engine power.
In 2014, the German federal government released its National Action Plan on Energy Efficiency (NAPE).
The areas covered are the energy efficiency of buildings, energy conservation for companies, consumer energy efficiency, and transport energy efficiency. The central short-term measures of NAPE include the introduction of competitive tendering for energy efficiency, the raising of funding for building renovation, the introduction of tax incentives for efficiency measures in the building sector, and the setting up energy efficiency networks together with business and industry.
In 2016, the German government released a green paper on energy efficiency for public consultation (in German). It outlines the potential challenges and actions needed to reduce energy consumption in Germany over the coming decades. At the document's launch, economics and energy minister Sigmar Gabriel said "we do not need to produce, store, transmit and pay for the energy that we save". The green paper prioritizes the efficient use of energy as the "first" response and also outlines opportunities for sector coupling, including using renewable power for heating and transport. Other proposals include a flexible energy tax which rises as petrol prices fall, thereby incentivizing fuel conservation despite low oil prices.
Spain
In Spain, four out of every five buildings use more energy than they should. They are either inadequately insulated or consume energy inefficiently.
The Unión de Créditos Immobiliarios (UCI), which has operations in Spain and Portugal, is increasing loans to homeowners and building management groups for energy-efficiency initiatives. Its Residential Energy Rehabilitation initiative aims to remodel and encourage the use of renewable energy in at least 3,720 homes in Madrid, Barcelona, Valencia, and Seville. The works are expected to mobilize around €46.5 million in energy efficiency upgrades by 2025 and save approximately 8.1 GWh of energy, with the potential to reduce carbon emissions by 7,545 tonnes per year.
Poland
In May 2016 Poland adopted a new Act on Energy Efficiency, to enter into force on 1 October 2016.
Australia
In July 2009, the Council of Australian Governments, which represents the individual states and territories of Australia, agreed to a National Strategy on Energy Efficiency (NSEE). This is a ten-year plan accelerating the implementation of a nationwide adoption of energy-efficient practices and a preparation for the country's transformation into a low carbon future. The overriding agreement that governs this strategy is the National Partnership Agreement on Energy Efficiency.
Canada
In August 2017, the Government of Canada released Build Smart - Canada's Buildings Strategy, as a key driver of the Pan-Canadian Framework on Clean Growth and Climate Change, Canada's national climate strategy.
United States
A 2011 Energy Modeling Forum study covering the United States examined how energy efficiency opportunities will shape future fuel and electricity demand over the next several decades. The US economy is already set to lower its energy and carbon intensity, but explicit policies will be necessary to meet climate goals. These policies include: a carbon tax, mandated standards for more efficient appliances, buildings and vehicles, and subsidies or reductions in the upfront costs of new more energy-efficient equipment.
Programs and organizations:
Alliance to Save Energy
American Council for an Energy-Efficient Economy
Building Codes Assistance Project
Building Energy Codes Program
Consortium for Energy Efficiency
Energy Star, from United States Environmental Protection Agency
See also
Carbon footprint
Energy audit
Energy conservation measures
Energy efficiency implementation
Energy law
Energy recovery
Energy recycling
Energy resilience
List of least carbon efficient power stations
Waste-to-energy
References
Energy efficiency
Energy policy
Industrial ecology
Sustainable energy | Efficient energy use | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 4,533 | [
"Industrial engineering",
"Energy policy",
"Environmental engineering",
"Industrial ecology",
"Environmental social science"
] |
11,944,175 | https://en.wikipedia.org/wiki/Zirconia%20toughened%20alumina | Zirconia toughened alumina is a ceramic material comprising alumina and zirconia. It is a composite ceramic material with zirconia grains in the alumina matrix.
It is also known in industry as ZTA.
Zirconia alumina (or zirconia toughened alumina), a combination of zirconium oxide and aluminum oxide, is part of a class of composite ceramics called AZ composites. Noted for their mechanical properties, AZ composites are commonly used in structural applications, as cutting tools, and in many medical applications. Additionally, AZ composites feature high strength, fracture toughness, elasticity, hardness, and wear resistance. Zirconia toughened alumina (ZTA), in particular, offers several key properties.
Structure
The mechanical robustness compared to alumina is attributed to the displacive phase transformation of the metastable tetragonal zirconia grains when the material is stressed. The stress concentration at a crack tip can cause a transformation from a tetragonal crystal structure to a monoclinic one, which has an associated volume expansion of the zirconia. This volume expansion effectively pushes back against the propagation of the crack and results in higher toughness and strength. A common specimen of zirconia toughened alumina contains 10-20% zirconium oxide. The resulting 20-30% increase in strength often meets design criteria at a much lower cost. Depending on the zirconia fraction, the properties of the ceramic can be tailored to the application. Zirconia toughened alumina is generally regarded as intermediate between alumina and zirconia, and is priced accordingly, giving ZTA a much lower price range than other similar materials. The strengthening mechanism is called stress-induced transformation toughening: the stress field around a propagating crack triggers the tetragonal-to-monoclinic transformation in nearby zirconia grains, and the accompanying volume expansion places the crack faces under compression, hindering further crack growth.
Chemical and mechanical properties
Uses
Zirconia toughened alumina has found many uses, including valve seals, bushings, pump components, joint implants, wire bonding capillaries, and cutting tool inserts. ZTA's diverse range of properties accounts for this breadth of applications. In the medical industry, ZTA serves as a ceramic for joint replacement and rehabilitation, where its high wear resistance helps create high-performance implants. Its high strength and corrosion resistance enable the material to withstand heavy loads without degrading, making it suitable for load-bearing applications. ZTA's toughness also lends itself to cutting tools; ZTA and other aluminas are often used in metal cutting applications. Certain engine components, labware, industrial crucibles, and refractory tubes can be manufactured from ZTA, as can abrasive media for applications such as sandblasting.
References
External links
Material Properties Data: Zirconia-Toughened Alumina (ZTA)
Ceramic materials
Zirconium dioxide
Aluminium compounds
Composite materials | Zirconia toughened alumina | [
"Physics",
"Engineering"
] | 725 | [
"Composite materials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
9,472,437 | https://en.wikipedia.org/wiki/Connected%20Mathematics | Connected Mathematics is a comprehensive mathematics program intended for U.S. students in grades 6–8. The curriculum design, text materials for students, and supporting resources for teachers were created and have been progressively refined by the Connected Mathematics Project (CMP) at Michigan State University with advice and contributions from many mathematics teachers, curriculum developers, mathematicians, and mathematics education researchers.
The current third edition of Connected Mathematics is a major revision of the program to reflect new expectations of the Common Core State Standards for Mathematics and what the authors have learned from over twenty years of field experience by thousands of teachers working with millions of middle grades students. This CMP3 program is now published in paper and electronic form by Pearson Education.
Core principles
The first edition of Connected Mathematics, developed with financial support from the National Science Foundation, was designed to provide instructional materials for middle grades mathematics. It was based on the 1989 Curriculum and Evaluation Standards and the 1991 Professional Standards for Teaching Mathematics from the National Council of Teachers of Mathematics. These standards highlighted four core features of the curriculum:
Comprehensive coverage of mathematical concepts and skills across four content strands—number, algebra, geometry and measurement, and probability and statistics.
Connections between the concepts and methods of the four major content strands, and between the abstractions of mathematics and their applications in real-world problem contexts.
Instructional materials that transform classrooms into dynamic environments where students learn by solving problems and sharing their thinking with others, while teachers encourage and support students to be curious, to ask questions, and to enjoy learning and using mathematics.
Developing students' understanding of mathematical concepts, principles, procedures, and habits of mind, and fostering the disposition to use mathematical reasoning in making sense of new situations and solving problems.
These principles have guided the development and refinement of the Connected Mathematics program for over twenty years. The first edition was published in 1995; a major revision, also supported by National Science Foundation funding, was published in 2006; and the current third edition was published in 2014. In the third edition, the collection of units was expanded to cover Common Core Standards for both grade eight and Algebra I.
Each CMP grade level course aims to advance student understanding, skills, and problem-solving in every content strand, with increasing sophistication and challenge over the middle school grades. The problem tasks for students are designed to make connections within mathematics, between mathematics and other subject areas, and/or to real-world settings that appeal to students.
Curriculum units consist of 3–5 investigations, each focused on a key mathematical idea; each investigation consists of several major problems that the teacher and students explore in class. Applications/Connections/Extensions problem sets are included for each investigation to help students practice, apply, connect, and extend essential understandings.
While engaged in collaborative problem-solving and classroom discourse about mathematics, students are explicitly encouraged to reflect on their use of what the NCTM standards once called mathematical processes and now refer to as mathematical practices—making sense of problems and solving them, reasoning abstractly and quantitatively, constructing arguments and critiquing the reasoning of others, modeling with mathematics, using mathematical tools strategically, seeking and using structure, expressing regularity in repeated reasoning, and communicating ideas and results with precision.
Implementation challenges
The introduction of new curriculum content, instructional materials, and teaching methods is challenging in K–12 education. When the proposed changes contrast with long-standing traditional practice, it is common to hear concerns from parents, teachers, and other professionals, as well as from students who have been successful and comfortable in traditional classrooms. In recognition of this innovation challenge, the National Science Foundation complemented its investment in new curriculum materials with substantial investments in professional development for teachers. By funding state and urban systemic initiatives, local systemic change projects, and math-science partnership programs, as well as national centers for standards-based school mathematics curriculum dissemination and implementation, the NSF provided powerful support for the adoption and implementation of the various reform mathematics curricula developed during the standards era.
In addition to those programs, for nearly twenty years, CMP has sponsored summer Getting to Know CMP institutes, workshops for leaders of CMP implementation, and an annual User's Conference for the sharing of implementation experiences and insights, all on the campus of Michigan State University. The whole reform curriculum effort has greatly enhanced the field's understanding of what works in that important and challenging process—the clearest message being that significant lasting change takes time, persistent effort, and coordination of work by teachers at all levels in a system.
Research findings
Connected Mathematics has become the most widely used of the middle school curriculum materials developed to implement the NCTM Standards. The effects of its use have been described in expository journal articles and evaluated in mathematics education research projects. Many of the research studies are master's or doctoral dissertation research projects focused on specific aspects of the CMP classroom experience and student learning. But there have also been a number of large-scale independent evaluations of the results of the program.
In the large-scale controlled research studies the most common (but by no means universal) pattern of results has been better performance by CMP students on measures of conceptual understanding and problem solving and no significant difference between students of CMP and traditional curriculum materials on measures of routine skills and factual knowledge. For example, this pattern is what the LieCal project found from a longitudinal study comparing learning by students in CMP and traditional middle grades curricula:
(1) Students did not sacrifice basic mathematical skills if they were taught using a standards-based or reform mathematics curriculum like CMP; (2) African American students experienced greater gains in symbol manipulation when they used a traditional curriculum; (3) the use of either the CMP or a non-CMP curriculum improved the mathematics achievement of all students, including students of color; (4) the use of CMP contributed to significantly higher problem-solving growth for all ethnic groups; and (5) a high level of conceptual emphasis in a classroom improved the students’ ability to represent problem situations.
Perhaps the most telling result of all is reported in the 2008 study by James Tarr and colleagues at the University of Missouri. While finding no overall significant effects from use of reform or traditional curriculum materials, the study did discover effects favoring the NSF-funded curricula when those programs were implemented with high or even moderate levels of fidelity to Standards-based learning environments. That is, when the innovative programs are used as designed, they produce positive effects.
Historical controversy
Like other curricula designed and developed during the 1990s to implement the NCTM Standards, Connected Math was criticized by supporters of more traditional curricula. Critics made the following claims:
Reform curricula like CMP pay too little attention to the development of basic computational skills in number and algebra;
Student investigation and discovery of key mathematical concepts and skills might lead to critical gaps and misconceptions in their knowledge.
Emphasis on mathematics in real-world contexts might cause students to miss abstractions and generalizations that are the powerful heart of the subject.
The lack of explanatory prose in textbooks makes it hard for parents to help their children with homework and puts students with weak note-taking abilities, poor or slow handwriting, and attention deficits at a distinct disadvantage. Additionally, with limited explanatory written materials, students who miss one or more days of school will struggle to catch up on missed material.
Small-group learning is less efficient than teacher-led direct instructional methods, and the most able and interested students might be held back by having to collaborate with less able and motivated students.
The CMP program does not take into account the needs of students with minor learning disabilities or other disabilities who might be integrated into general education classrooms but still need extra help and need associated or modified learning materials.
The publishers and creators of CMP have stated that reassuring results from a variety of research projects blunted concerns about basic skill mastery, missing knowledge, and student misconceptions resulting from use of CMP and other reform curricula. However, many teachers and parents remain wary.
References
External links
Connected Mathematics Project http://connectedmath.msu.edu/
Pearson http://www.connectedmathematics3.com
Common Core State Standards http://www.corestandards.org/Math
Education reform in the United States
Mathematics education
Mathematics education reform
Standards-based education
Algebra education | Connected Mathematics | [
"Mathematics"
] | 1,699 | [
"Algebra education",
"Algebra"
] |
9,476,628 | https://en.wikipedia.org/wiki/List%20of%20Y-DNA%20single-nucleotide%20polymorphisms |
See also
Single-nucleotide polymorphism
Unique-event polymorphism
Human Y-chromosome DNA haplogroups
List of Y-STR markers
External links
Sequence information for 218 M series markers published by 2001
ISOGG Y-DNA SNP Index - 2007
Karafet et al. (2008) Supplemental Research Data
DNA
Y DNA
Human evolution
Human population genetics
Genetic genealogy
Phylogenetics
Bioinformatics
Evolutionary biology
Molecular genetics | List of Y-DNA single-nucleotide polymorphisms | [
"Chemistry",
"Engineering",
"Biology"
] | 87 | [
"Evolutionary biology",
"Biological engineering",
"Taxonomy (biology)",
"Bioinformatics",
"Molecular genetics",
"Molecular biology",
"Phylogenetics"
] |
9,477,975 | https://en.wikipedia.org/wiki/Completion%20of%20a%20ring | In abstract algebra, a completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have a simpler structure than general ones, and Hensel's lemma applies to them. In algebraic geometry, a completion of a ring of functions R on a space X concentrates on a formal neighborhood of a point of X: heuristically, this is a neighborhood so small that all Taylor series centered at the point are convergent. An algebraic completion is constructed in a manner analogous to completion of a metric space with Cauchy sequences, and agrees with it in the case when R has a metric given by a non-Archimedean absolute value.
General construction
Suppose that E is an abelian group with a descending filtration

$$E = F^0 E \supseteq F^1 E \supseteq F^2 E \supseteq \cdots$$

of subgroups. One then defines the completion (with respect to the filtration) as the inverse limit:

$$\hat{E} = \varprojlim_n \, (E / F^n E).$$

This is again an abelian group. Usually E is an additive abelian group. If E has additional algebraic structure compatible with the filtration, for instance E is a filtered ring, a filtered module, or a filtered vector space, then its completion is again an object with the same structure that is complete in the topology determined by the filtration. This construction may be applied both to commutative and noncommutative rings. As may be expected, when the intersection of the $F^n E$ equals zero, this produces a complete topological ring.
Krull topology
In commutative algebra, the filtration on a commutative ring R by the powers of a proper ideal I determines the Krull (after Wolfgang Krull) or I-adic topology on R. The case of a maximal ideal $I = \mathfrak{m}$ is especially important, for example the distinguished maximal ideal of a valuation ring. The basis of open neighbourhoods of 0 in R is given by the powers $I^n$, which are nested and form a descending filtration on R:

$$R \supseteq I \supseteq I^2 \supseteq \cdots \supseteq I^n \supseteq \cdots$$

(Open neighborhoods of any r ∈ R are given by the cosets $r + I^n$.) The (I-adic) completion is the inverse limit of the factor rings,

$$\hat{R}_I = \varprojlim_n \, (R / I^n),$$

pronounced "R I hat". The kernel of the canonical map $\pi : R \to \hat{R}_I$ from the ring to its completion is the intersection of the powers of I. Thus $\pi$ is injective if and only if this intersection reduces to the zero element of the ring; by the Krull intersection theorem, this is the case for any commutative Noetherian ring which is an integral domain or a local ring.
There is a related topology on R-modules, also called the Krull or I-adic topology. A basis of open neighborhoods of a module M is given by the sets of the form

$$x + I^n M \quad \text{for } x \in M.$$

The I-adic completion of an R-module M is the inverse limit of the quotients

$$\hat{M}_I = \varprojlim_n \, (M / I^n M).$$

If I is finitely generated, this procedure converts any module over R into a complete topological module over $\hat{R}_I$; for a general ideal I, the I-adic completion of a module need not itself be complete.
Examples
The ring of p-adic integers is obtained by completing the ring of integers at the ideal (p).
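Concretely, an element of the completion is a compatible sequence of residues, one for each quotient in the tower. A small sketch (with p = 5 and the element −1, both chosen arbitrarily for illustration):

```python
# An element of Z_p as a compatible sequence of residues r_n in Z/p^n,
# with r_{n+1} = r_n (mod p^n); here the tower is truncated at six levels.
p, levels = 5, 6
x = -1
tower = [x % p**n for n in range(1, levels + 1)]
print(tower)  # [4, 24, 124, 624, 3124, 15624]

# Compatibility: each residue reduces to the previous one.
assert all(tower[n] % p**n == tower[n - 1] for n in range(1, levels))
```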
Let $R = K[x_1,\ldots,x_n]$ be the polynomial ring in n variables over a field K and $\mathfrak{m} = (x_1,\ldots,x_n)$ be the maximal ideal generated by the variables. Then the completion is the ring $K[[x_1,\ldots,x_n]]$ of formal power series in n variables over K.
Given a noetherian ring R and an ideal $I = (f_1,\ldots,f_n)$, the I-adic completion of R is an image of a formal power series ring, specifically, the image of the surjection

$$R[[x_1,\ldots,x_n]] \to \hat{R}_I, \qquad x_i \mapsto f_i.$$

The kernel is the ideal

$$(x_1 - f_1, \ldots, x_n - f_n).$$
Completions can also be used to analyze the local structure of singularities of a scheme. For example, the affine schemes associated to $\mathbb{C}[x,y]/(xy)$ and the nodal cubic plane curve $\mathbb{C}[x,y]/(y^2 - x^2(1+x))$ have similar looking singularities at the origin when viewing their graphs (both look like a plus sign). Notice that in the second case, any Zariski neighborhood of the origin is still an irreducible curve. If we use completions, then we are looking at a "small enough" neighborhood where the node has two components. Taking the localizations of these rings along the ideal $(x,y)$ and completing gives $\mathbb{C}[[x,y]]/(xy)$ and $\mathbb{C}[[x,y]]/((y+u)(y-u))$ respectively, where u is the formal square root of $x^2(1+x)$ in $\mathbb{C}[[x,y]]$. More explicitly, the power series:

$$u = x\sqrt{1+x} = \sum_{n=0}^{\infty} \binom{1/2}{n} x^{n+1}.$$

Since both completed rings are given by the intersection of two ideals generated by a homogeneous degree 1 polynomial, we can see algebraically that the singularities "look" the same. This is because such a scheme is the union of two non-equal linear subspaces of the affine plane.
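The formal factorization above can be checked numerically by truncating the power series for u; in the sketch below (using SymPy, with an arbitrary truncation order) the only discrepancy is the truncation error, which would vanish in the full ring of formal power series:

```python
import sympy as sp

x, y = sp.symbols('x y')
N = 8  # truncation order, chosen arbitrarily

# u = x*sqrt(1+x), the formal square root of x^2*(1+x), truncated at order N.
u = sp.series(x * sp.sqrt(1 + x), x, 0, N).removeO()

# (y - u)(y + u) should reproduce y^2 - x^2*(1+x) up to the truncation error.
diff = sp.expand((y - u) * (y + u) - (y**2 - x**2 * (1 + x)))
print(sp.series(diff, x, 0, N))  # O(x**8): no terms below the truncation order
```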
Properties
The completion of a Noetherian ring with respect to some ideal is a Noetherian ring.
The completion of a Noetherian local ring with respect to the unique maximal ideal is a Noetherian local ring.
The completion is a functorial operation: a continuous map f: R → S of topological rings gives rise to a map of their completions,

$$\hat{f} : \hat{R} \to \hat{S}.$$

Moreover, if M and N are two modules over the same topological ring R and f: M → N is a continuous module map, then f uniquely extends to the map of the completions:

$$\hat{f} : \hat{M} \to \hat{N},$$

where $\hat{M}$, $\hat{N}$ are modules over $\hat{R}$.
The completion of a Noetherian ring R is a flat module over R.
The completion of a finitely generated module M over a Noetherian ring R can be obtained by extension of scalars:

$$\hat{M} = M \otimes_R \hat{R}.$$

Together with the previous property, this implies that the functor of completion on finitely generated R-modules is exact: it preserves short exact sequences. In particular, taking quotients of rings commutes with completion, meaning that for any quotient R-algebra $R/I$, there is an isomorphism

$$\widehat{R/I} \cong \hat{R}/I\hat{R}.$$
Cohen structure theorem (equicharacteristic case). Let R be a complete local Noetherian commutative ring with maximal ideal $\mathfrak{m}$ and residue field K. If R contains a field, then

$$R \cong K[[x_1,\ldots,x_n]]/I$$

for some n and some ideal I (Eisenbud, Theorem 7.7).
See also
Formal scheme
Profinite integer
Locally compact field
Zariski ring
Linear topology
Quasi-unmixed ring
Citations
References
David Eisenbud, Commutative algebra. With a view toward algebraic geometry. Graduate Texts in Mathematics, 150. Springer-Verlag, New York, 1995. xvi+785 pp. ;
Commutative algebra
Topological algebra | Completion of a ring | [
"Mathematics"
] | 1,302 | [
"Topological algebra",
"Fields of abstract algebra",
"Commutative algebra",
"Topology"
] |
9,478,630 | https://en.wikipedia.org/wiki/Integral%20element | In commutative algebra, an element b of a commutative ring B is said to be integral over a subring A of B if b is a root of some monic polynomial over A.
If A, B are fields, then the notions of "integral over" and of an "integral extension" are precisely "algebraic over" and "algebraic extensions" in field theory (since the root of any polynomial is the root of a monic polynomial).
The case of greatest interest in number theory is that of complex numbers integral over Z (e.g., $\sqrt{2}$ or $1+i$); in this context, the integral elements are usually called algebraic integers. The algebraic integers in a finite extension field k of the rationals Q form a subring of k, called the ring of integers of k, a central object of study in algebraic number theory.
In this article, the term ring will be understood to mean commutative ring with a multiplicative identity.
Definition
Let B be a ring and let A be a subring of B.
An element b of B is said to be integral over A if for some $n \geq 1$ there exist $a_0, a_1, \ldots, a_{n-1}$ in A such that

$$b^n + a_{n-1} b^{n-1} + \cdots + a_1 b + a_0 = 0.$$

The set of elements of B that are integral over A is called the integral closure of A in B. The integral closure of any subring A in B is, itself, a subring of B and contains A. If every element of B is integral over A, then we say that B is integral over A, or equivalently that B is an integral extension of A.
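As a concrete check of this definition, a computer algebra system can exhibit the monic witness polynomial. The sketch below uses SymPy and the standard example $\sqrt{2}+\sqrt{3}$ (chosen here for illustration):

```python
import sympy as sp

x = sp.symbols('x')

# The minimal polynomial over Q is monic with integer coefficients,
# witnessing that sqrt(2) + sqrt(3) is integral over Z.
b = sp.sqrt(2) + sp.sqrt(3)
p = sp.minimal_polynomial(b, x)
print(p)                        # x**4 - 10*x**2 + 1
print(sp.expand(p.subs(x, b)))  # 0
```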
Examples
Integral closure in algebraic number theory
There are many examples of integral closure which can be found in algebraic number theory, since it is fundamental for defining the ring of integers $\mathcal{O}_K$ for an algebraic field extension $K/\mathbb{Q}$ (or $L/\mathbb{Q}_p$).
Integral closure of integers in rationals
Integers are the only elements of Q that are integral over Z. In other words, Z is the integral closure of Z in Q.
Quadratic extensions
The Gaussian integers are the complex numbers of the form $a + b\sqrt{-1}$ with $a, b \in \mathbb{Z}$, and are integral over Z. $\mathbb{Z}[\sqrt{-1}]$ is then the integral closure of Z in $\mathbb{Q}(\sqrt{-1})$. Typically this ring is denoted $\mathbb{Z}[i]$.
The integral closure of Z in $\mathbb{Q}(\sqrt{5})$ is the ring

$$\mathbb{Z}\left[\frac{1+\sqrt{5}}{2}\right].$$
This example and the previous one are examples of quadratic integers. The integral closure of a quadratic extension can be found by constructing the minimal polynomial of an arbitrary element and finding number-theoretic criterion for the polynomial to have integral coefficients. This analysis can be found in the quadratic extensions article.
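As a worked instance of that criterion, take d = 5 (consistent with the example above); the derivation below checks that $(1+\sqrt{5})/2$ satisfies a monic integer polynomial:

```latex
% x = (1+\sqrt{5})/2 is integral over \mathbb{Z}:
x = \frac{1+\sqrt{5}}{2}
\;\Longrightarrow\; (2x-1)^{2} = 5
\;\Longrightarrow\; 4x^{2} - 4x - 4 = 0
\;\Longrightarrow\; x^{2} - x - 1 = 0 .
```

In general $x = (a+\sqrt{d})/2$ leads to $x^2 - ax + (a^2-d)/4 = 0$, so the coefficients are integers precisely when $a^2 \equiv d \pmod{4}$; this is the kind of number-theoretic criterion alluded to above.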
Roots of unity
Let ζ be a root of unity. Then the integral closure of Z in the cyclotomic field Q(ζ) is Z[ζ]. This can be found by using the minimal polynomial and using Eisenstein's criterion.
Ring of algebraic integers
The integral closure of Z in the field of complex numbers C, or the algebraic closure is called the ring of algebraic integers.
Other
The roots of unity, nilpotent elements and idempotent elements in any ring are integral over Z.
Integral closure in algebraic geometry
In geometry, integral closure is closely related with normalization and normal schemes. It is the first step in resolution of singularities since it gives a process for resolving singularities of codimension 1.
For example, the integral closure of $\mathbb{C}[x,y,z]/(xy)$ is the ring $\mathbb{C}[x,z] \times \mathbb{C}[y,z]$ since, geometrically, the first ring corresponds to the xz-plane unioned with the yz-plane. They have a codimension 1 singularity along the z-axis, where they intersect.
Let a finite group G act on a ring A. Then A is integral over $A^G$, the set of elements fixed by G; see Ring of invariants.
Let R be a ring and u a unit in a ring containing R. Then
$u^{-1}$ is integral over R if and only if $u^{-1} \in R[u]$.
$R[u] \cap R[u^{-1}]$ is integral over R.
The integral closure of the homogeneous coordinate ring of a normal projective variety X is the ring of sections

$$\bigoplus_{n \geq 0} H^0(X, \mathcal{O}_X(n)).$$
Integrality in algebra
If $\bar{k}$ is an algebraic closure of a field k, then $\bar{k}$ is integral over k.
The integral closure of C[[x]] in a finite extension of C((x)) is of the form $\mathbb{C}[[x^{1/n}]]$ (cf. Puiseux series).
Equivalent definitions
Let B be a ring, and let A be a subring of B. Given an element b in B, the following conditions are equivalent:
(i) b is integral over A;
(ii) the subring A[b] of B generated by A and b is a finitely generated A-module;
(iii) there exists a subring C of B containing A[b] and which is a finitely generated A-module;
(iv) there exists a faithful A[b]-module M such that M is finitely generated as an A-module.
The usual proof of this uses the following variant of the Cayley–Hamilton theorem on determinants:
Theorem. Let u be an endomorphism of an A-module M generated by n elements and I an ideal of A such that $u(M) \subseteq IM$. Then there is a relation:

$$u^n + a_1 u^{n-1} + \cdots + a_{n-1} u + a_n = 0, \qquad a_i \in I^i.$$

This theorem (with I = A and u multiplication by b) gives (iv) ⇒ (i), and the rest is easy. Coincidentally, Nakayama's lemma is also an immediate consequence of this theorem.
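A minimal worked instance of the theorem, with $A = \mathbb{Z}$, $I = A$, $M = \mathbb{Z}[\sqrt{2}]$ generated by $1$ and $\sqrt{2}$, and u multiplication by $b = \sqrt{2}$ (the setup of (iv) ⇒ (i)):

```latex
% Multiplication by \sqrt{2} acts on the generators (1, \sqrt{2}) by the
% integer matrix T, and the characteristic polynomial of T is the monic
% relation witnessing integrality.
\sqrt{2}\cdot\begin{pmatrix}1\\[2pt]\sqrt{2}\end{pmatrix}
= \begin{pmatrix}0 & 1\\ 2 & 0\end{pmatrix}
  \begin{pmatrix}1\\[2pt]\sqrt{2}\end{pmatrix},
\qquad
\det\!\begin{pmatrix}x & -1\\ -2 & x\end{pmatrix} = x^{2}-2,
\qquad (\sqrt{2})^{2}-2 = 0 .
```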
Elementary properties
Integral closure forms a ring
It follows from the above four equivalent statements that the set of elements of B that are integral over A forms a subring of B containing A. (Proof: If x, y are elements of B that are integral over A, then $x + y$, $xy$, $x - y$ are integral over A, since they stabilize $A[x]A[y] = A[x,y]$, which is a finitely generated module over A and is annihilated only by zero.) This ring is called the integral closure of A in B.
Transitivity of integrality
Another consequence of the above equivalence is that "integrality" is transitive, in the following sense. Let C be a ring containing B and let $c \in C$. If c is integral over B and B is integral over A, then c is integral over A. In particular, if C is itself integral over B and B is integral over A, then C is also integral over A.
Integral closed in fraction field
If A happens to be the integral closure of A in B, then A is said to be integrally closed in B. If B is the total ring of fractions of A (e.g., the field of fractions when A is an integral domain), then one sometimes drops the qualification "in B" and simply says "integral closure of A" and "A is integrally closed." For example, the ring of integers $\mathcal{O}_K$ is integrally closed in the field K.
Transitivity of integral closure with integrally closed domains
Let A be an integral domain with the field of fractions K and A' the integral closure of A in an algebraic field extension L of K. Then the field of fractions of A' is L. In particular, A' is an integrally closed domain.
Transitivity in algebraic number theory
This situation is applicable in algebraic number theory when relating the ring of integers and a field extension. In particular, given a field extension $L/K$, the integral closure of $\mathcal{O}_K$ in L is the ring of integers $\mathcal{O}_L$.
Remarks
Note that transitivity of integrality above implies that if B is integral over A, then B is a union (equivalently an inductive limit) of subrings that are finitely generated A-modules.
If A is noetherian, transitivity of integrality can be weakened to the statement:
There exists a finitely generated A-submodule of B that contains $A[b]$.
Relation with finiteness conditions
Finally, the assumption that A be a subring of B can be modified a bit. If $f: A \to B$ is a ring homomorphism, then one says f is integral if B is integral over $f(A)$. In the same way one says f is finite (B is a finitely generated A-module) or of finite type (B is a finitely generated A-algebra). In this viewpoint, one has that
f is finite if and only if f is integral and of finite type.
Or more explicitly,
B is a finitely generated A-module if and only if B is generated as an A-algebra by a finite number of elements integral over A.
Integral extensions
Cohen-Seidenberg theorems
An integral extension A ⊆ B has the going-up property, the lying over property, and the incomparability property (Cohen–Seidenberg theorems). Explicitly, given a chain of prime ideals $\mathfrak{p}_1 \subseteq \cdots \subseteq \mathfrak{p}_n$ in A, there exists a chain $\mathfrak{p}'_1 \subseteq \cdots \subseteq \mathfrak{p}'_n$ in B with $\mathfrak{p}_i = \mathfrak{p}'_i \cap A$ (going-up and lying over), and two distinct prime ideals with inclusion relation cannot contract to the same prime ideal (incomparability). In particular, the Krull dimensions of A and B are the same. Furthermore, if A is an integrally closed domain, then the going-down holds (see below).
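A standard concrete illustration of lying-over (not taken from this article's text) is the integral extension $\mathbb{Z} \subset \mathbb{Z}[i]$, where the fibers of the induced map on prime spectra can be computed by factoring:

```latex
% Fibers of \operatorname{Spec}\mathbb{Z}[i] \to \operatorname{Spec}\mathbb{Z}:
(5)\,\mathbb{Z}[i] = (2+i)(2-i)
  \;\Rightarrow\; \text{two primes lie over } (5);
\qquad
(7)\,\mathbb{Z}[i] \text{ is prime}
  \;\Rightarrow\; \text{one prime lies over } (7).
```

Every prime of $\mathbb{Z}$ has at least one prime of $\mathbb{Z}[i]$ over it, and both rings have Krull dimension 1, as the theorems require.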
In general, the going-up implies the lying-over. Thus, in the below, we simply say the "going-up" to mean "going-up" and "lying-over".
When A, B are domains such that B is integral over A, A is a field if and only if B is a field. As a corollary, one has: given a prime ideal $\mathfrak{q}$ of B, $\mathfrak{q}$ is a maximal ideal of B if and only if $\mathfrak{q} \cap A$ is a maximal ideal of A. Another corollary: if L/K is an algebraic extension, then any subring of L containing K is a field.
Applications
Let B be a ring that is integral over a subring A and k an algebraically closed field. If $f: A \to k$ is a homomorphism, then f extends to a homomorphism B → k. This follows from the going-up.
Geometric interpretation of going-up
Let $f : A \to B$ be an integral extension of rings. Then the induced map

$$f^{\#} : \operatorname{Spec} B \to \operatorname{Spec} A, \qquad \mathfrak{p} \mapsto f^{-1}(\mathfrak{p}),$$

is a closed map; in fact, $f^{\#}(V(I)) = V(f^{-1}(I))$ for any ideal I, and $f^{\#}$ is surjective if f is injective. This is a geometric interpretation of the going-up.
Geometric interpretation of integral extensions
Let B be a ring and A a subring that is a noetherian integrally closed domain (i.e., $\operatorname{Spec} A$ is a normal scheme). If B is integral over A, then $\operatorname{Spec} B \to \operatorname{Spec} A$ is submersive; i.e., the topology of $\operatorname{Spec} A$ is the quotient topology. The proof uses the notion of constructible sets. (See also: Torsor (algebraic geometry).)
Integrality, base-change, universally-closed, and geometry
If B is integral over A, then $B \otimes_A R$ is integral over R for any A-algebra R. In particular, $\operatorname{Spec}(B \otimes_A R) \to \operatorname{Spec} R$ is closed; i.e., the integral extension induces a "universally closed" map. This leads to a geometric characterization of integral extension. Namely, let B be a ring with only finitely many minimal prime ideals (e.g., an integral domain or a noetherian ring). Then B is integral over a (subring) A if and only if $\operatorname{Spec}(B \otimes_A R) \to \operatorname{Spec} R$ is closed for any A-algebra R. In particular, every proper map is universally closed.
Galois actions on integral extensions of integrally closed domains
Proposition. Let A be an integrally closed domain with field of fractions K, L a finite normal extension of K, and B the integral closure of A in L. Then the group $G = \operatorname{Gal}(L/K)$ acts transitively on each fiber of $\operatorname{Spec} B \to \operatorname{Spec} A$.
Proof. Suppose $\mathfrak{p}_2 \neq \sigma(\mathfrak{p}_1)$ for any $\sigma$ in G. Then, by prime avoidance, there is an element x in $\mathfrak{p}_2$ such that $\sigma(x) \notin \mathfrak{p}_1$ for any $\sigma$. G fixes the element $y = \prod_{\sigma} \sigma(x)$ and thus y is purely inseparable over K. Then some power $y^e$ belongs to K; since A is integrally closed we have $y^e \in A$. Thus, we found that $y^e$ is in $\mathfrak{p}_2 \cap A$ but not in $\mathfrak{p}_1 \cap A$; i.e., $\mathfrak{p}_1 \cap A \neq \mathfrak{p}_2 \cap A$.
Application to algebraic number theory
The Galois group $\operatorname{Gal}(L/K)$ then acts on all of the prime ideals $\mathfrak{q}_1, \ldots, \mathfrak{q}_k$ lying over a fixed prime ideal $\mathfrak{p} \subset \mathcal{O}_K$. That is, if

$$\mathfrak{p} \, \mathcal{O}_L = \mathfrak{q}_1^{e_1} \cdots \mathfrak{q}_k^{e_k} \subset \mathcal{O}_L,$$

then there is a Galois action on the set $S_{\mathfrak{p}} = \{\mathfrak{q}_1, \ldots, \mathfrak{q}_k\}$. This is called the splitting of prime ideals in Galois extensions.
Remarks
The same idea in the proof shows that if $L/K$ is a purely inseparable extension (which need not be normal), then $\operatorname{Spec} B \to \operatorname{Spec} A$ is bijective.
Let A, K, etc. as before but assume L is only a finite field extension of K. Then
(i) $\operatorname{Spec} B \to \operatorname{Spec} A$ has finite fibers.
(ii) the going-down holds between A and B: given prime ideals $\mathfrak{p}_1 \subseteq \mathfrak{p}_2$ in A and a prime $\mathfrak{p}'_2$ in B lying over $\mathfrak{p}_2$, there exists a prime $\mathfrak{p}'_1 \subseteq \mathfrak{p}'_2$ that contracts to $\mathfrak{p}_1$.
Indeed, in both statements, by enlarging L, we can assume L is a normal extension. Then (i) is immediate. As for (ii), by the going-up, we can find a chain $\mathfrak{p}''_1 \subseteq \mathfrak{p}''_2$ that contracts to $\mathfrak{p}_1 \subseteq \mathfrak{p}_2$. By transitivity, there is $\sigma \in G$ such that $\sigma(\mathfrak{p}''_2) = \mathfrak{p}'_2$, and then $\sigma(\mathfrak{p}''_1) \subseteq \mathfrak{p}'_2$ is the desired chain.
Integral closure
Let A ⊂ B be rings and A' the integral closure of A in B. (See above for the definition.)
Integral closures behave nicely under various constructions. Specifically, for a multiplicatively closed subset S of A, the localization S−1A' is the integral closure of S−1A in S−1B, and $A'[t]$ is the integral closure of $A[t]$ in $B[t]$. If $A_i$ are subrings of rings $B_i$ ($1 \le i \le n$), then the integral closure of $\prod A_i$ in $\prod B_i$ is $\prod A'_i$, where $A'_i$ are the integral closures of $A_i$ in $B_i$.
The integral closure of a local ring A in, say, B, need not be local. (If this is the case, the ring is called unibranch.) This is the case for example when A is Henselian and B is a field extension of the field of fractions of A.
If A is a subring of a field K, then the integral closure of A in K is the intersection of all valuation rings of K containing A.
Let A be a $\mathbb{Z}$-graded subring of a $\mathbb{Z}$-graded ring B. Then the integral closure of A in B is a $\mathbb{Z}$-graded subring of B.
There is also a concept of the integral closure of an ideal. The integral closure of an ideal I ⊂ R, usually denoted by Ī, is the set of all elements r ∈ R such that there exists a monic polynomial
xⁿ + a₁xⁿ⁻¹ + ⋯ + aₙ₋₁x + aₙ
with aᵢ ∈ Iⁱ having r as a root. The radical of an ideal is integrally closed.
For noetherian rings, there are alternate definitions as well.
r ∈ Ī if there exists an element c in R, not contained in any minimal prime ideal, such that c rⁿ ∈ Iⁿ for all n ≥ 1.
r ∈ Ī if, in the normalized blow-up of I, the pull back of r is contained in the inverse image of I. The blow-up of an ideal is an operation on schemes which replaces the given ideal with a principal ideal. The normalization of a scheme is simply the scheme corresponding to the integral closure of all of its rings.
The notion of integral closure of an ideal is used in some proofs of the going-down theorem.
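As a concrete illustration (our addition, not part of the original text): in k[x, y], the element r = xy lies in the integral closure of the monomial ideal I = (x², y²), since r satisfies the monic equation T² − x²y² = 0 and the coefficient x²y² lies in I², even though r is not in I itself. The sketch below checks the two monomial-ideal memberships; all helper names are ours.

```python
from itertools import product

def divides(gen, mono):
    # A monomial generator divides mono iff each exponent is <= componentwise.
    return all(g <= m for g, m in zip(gen, mono))

def in_monomial_ideal(mono, gens):
    return any(divides(g, mono) for g in gens)

def ideal_power(gens, n):
    # Generators of I^n: all products of n generators (sums of exponent tuples).
    return {tuple(map(sum, zip(*combo))) for combo in product(gens, repeat=n)}

I = [(2, 0), (0, 2)]   # I = (x^2, y^2), monomials as exponent tuples
r = (1, 1)             # r = xy
r2 = (2, 2)            # r^2 = x^2 y^2

print(in_monomial_ideal(r, I))                   # False: xy is not in I
print(in_monomial_ideal(r2, ideal_power(I, 2)))  # True: (xy)^2 lies in I^2
```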
Conductor
Let B be a ring and A a subring of B such that B is integral over A. Then the annihilator of the A-module B/A is called the conductor of A in B. Because the notion has origin in algebraic number theory, the conductor is denoted by 𝔣 = 𝔣(B/A). Explicitly, 𝔣 consists of the elements a in A such that aB ⊂ A. (cf. idealizer in abstract algebra.) It is the largest ideal of A that is also an ideal of B. If S is a multiplicatively closed subset of A, then
S−1𝔣(B/A) ⊆ 𝔣(S−1B / S−1A).
If B is a subring of the total ring of fractions of A, then we may identify
𝔣(B/A) = HomA(B, A).
Example: Let k be a field and let A = k[t², t³] ⊂ B = k[t] (i.e., A is the coordinate ring of the cuspidal affine curve y² = x³). B is the integral closure of A in k(t). The conductor of A in B is the ideal (t², t³)A. More generally, the conductor of A = k[[tᵃ, tᵇ]], a, b relatively prime, is (tᶜ, tᶜ⁺¹, …)A with c = (a − 1)(b − 1); a small numeric check of this exponent follows below.
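A small numeric check of the conductor exponent above (our addition): the exponents of monomials in k[[tᵃ, tᵇ]] form the numerical semigroup generated by a and b, and the stated formula says its conductor, the least c such that every exponent from c onward is realized, equals (a − 1)(b − 1). For (a, b) = (2, 3) this gives c = 2, matching the ideal (t², t³, …) in the example.

```python
from math import gcd

def semigroup_conductor(a, b, bound=2000):
    # Least c such that every integer >= c is a non-negative combination of a and b.
    reachable = [False] * (bound + 1)
    reachable[0] = True
    for n in range(1, bound + 1):
        reachable[n] = (n >= a and reachable[n - a]) or (n >= b and reachable[n - b])
    last_gap = max((n for n in range(bound + 1) if not reachable[n]), default=-1)
    return last_gap + 1

for a, b in [(2, 3), (3, 5), (4, 7), (5, 9)]:
    assert gcd(a, b) == 1
    assert semigroup_conductor(a, b) == (a - 1) * (b - 1)
print("conductor of <a, b> equals (a - 1)(b - 1) for all tested coprime pairs")
```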
Suppose B is the integral closure of an integral domain A in the field of fractions of A, such that the A-module B/A is finitely generated. Then the conductor 𝔣 of A is an ideal defining the support of B/A; thus, A coincides with B in the complement of V(𝔣) in Spec A. In particular, the set {𝔭 ∈ Spec A | A𝔭 is integrally closed}, being the complement of V(𝔣), is an open set.
Finiteness of integral closure
An important but difficult question is on the finiteness of the integral closure of a finitely generated algebra. There are several known results.
The integral closure of a Dedekind domain in a finite extension of the field of fractions is a Dedekind domain; in particular, a noetherian ring. This is a consequence of the Krull–Akizuki theorem. In general, the integral closure of a noetherian domain of dimension at most 2 is noetherian; Nagata gave an example of a noetherian domain of dimension 3 whose integral closure is not noetherian. A nicer statement is this: the integral closure of a noetherian domain is a Krull domain (Mori–Nagata theorem). Nagata also gave an example of a noetherian local domain of dimension 1 such that the integral closure is not finite over that domain.
Let A be a noetherian integrally closed domain with field of fractions K. If L/K is a finite separable extension, then the integral closure of A in L is a finitely generated A-module. This is easy and standard (uses the fact that the trace defines a non-degenerate bilinear form).
Let A be a finitely generated algebra over a field k that is an integral domain with field of fractions K. If L is a finite extension of K, then the integral closure of A in L is a finitely generated A-module and is also a finitely generated k-algebra. The result is due to Noether and can be shown using the Noether normalization lemma as follows. It is clear that it is enough to show the assertion when L/K is either separable or purely inseparable. The separable case is noted above, so assume L/K is purely inseparable. By the normalization lemma, A is integral over the polynomial ring S = k[x1, ..., xd]. Since L/K is a finite purely inseparable extension, there is a power q of a prime number such that every element of L is a q-th root of an element in K. Let k' be a finite extension of k containing all the q-th roots of the coefficients of finitely many rational functions that generate L. Then we have: L ⊆ k'(x1^(1/q), ..., xd^(1/q)). The ring on the right is the field of fractions of k'[x1^(1/q), ..., xd^(1/q)], which is the integral closure of S in that field; thus, it contains the integral closure of A in L. Hence, the latter is finite over S; a fortiori, over A. The result remains true if we replace k by Z.
The integral closure of a complete local noetherian domain A in a finite extension of the field of fractions of A is finite over A. More precisely, for a noetherian local ring A, we have the following chains of implications:
(i) A complete ⇒ A is a Nagata ring
(ii) A is a Nagata domain ⇒ A analytically unramified ⇒ the integral closure of the completion Â is finite over Â ⇒ the integral closure of A is finite over A.
Noether's normalization lemma
Noether's normalization lemma is a theorem in commutative algebra. Given a field K and a finitely generated K-algebra A, the theorem says it is possible to find elements y1, y2, ..., ym in A that are algebraically independent over K such that A is finite (and hence integral) over B = K[y1,..., ym]. Thus the extension K ⊂ A can be written as a composite K ⊂ B ⊂ A where K ⊂ B is a purely transcendental extension and B ⊂ A is finite.
Integral morphisms
In algebraic geometry, a morphism f : X → Y of schemes is integral if it is affine and if for some (equivalently, every) affine open cover of Y, every induced map is of the form Spec(A) → Spec(B), where A is an integral B-algebra. The class of integral morphisms is more general than the class of finite morphisms because there are integral extensions that are not finite, such as, in many cases, the algebraic closure of a field over the field.
Absolute integral closure
Let A be an integral domain and L (some) algebraic closure of the field of fractions of A. Then the integral closure of A in L is called the absolute integral closure of A. It is unique up to a non-canonical isomorphism. The ring of all algebraic integers is an example (and thus is typically not noetherian).
See also
Normal scheme
Noether normalization lemma
Algebraic integer
Splitting of prime ideals in Galois extensions
Torsor (algebraic geometry)
Notes
References
H. Matsumura Commutative ring theory. Translated from the Japanese by M. Reid. Second edition. Cambridge Studies in Advanced Mathematics, 8.
M. Reid, Undergraduate Commutative Algebra, London Mathematical Society, 29, Cambridge University Press, 1995.
Further reading
Irena Swanson, Integral closures of ideals and rings
Do DG-algebras have any sensible notion of integral closure?
Is always an integral extension of for a regular sequence ?]
Commutative algebra
Ring theory
Algebraic structures | Integral element | [
"Mathematics"
] | 4,168 | [
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures",
"Commutative algebra"
] |
9,478,903 | https://en.wikipedia.org/wiki/Wild%20Fermentation | Wild Fermentation: The Flavor, Nutrition, and Craft of Live-Culture Foods is a 2003 book by Sandor Katz that discusses the ancient practice of fermentation. While most of the conventional literature assumes the use of modern technology, Wild Fermentation focuses more on the practice and culture of fermenting food.
The term "wild fermentation" refers to the reliance on naturally occurring bacteria and yeast to ferment food. For example, conventional bread making requires the use of a commercial, highly specialized yeast, while wild-fermented bread relies on naturally occurring cultures that are found on the flour, in the air, and so on. Similarly, the book's instructions on sauerkraut require only cabbage and salt, relying on the cultures that naturally exist on the vegetable to perform the fermentation.
The book also discusses some foods that are not, strictly speaking, wild ferments such as miso, yogurt, kefir, and nattō.
Beyond food, the book includes some discussion of social, personal, and political issues, such as the legality of raw milk cheeses in the United States.
Newsweek has referred to Wild Fermentation as the "fermentation bible".
References
External links
Wild Fermentation updated and revised edition on author website
2003 non-fiction books
Fermentation
Chelsea Green Publishing books | Wild Fermentation | [
"Chemistry",
"Biology"
] | 279 | [
"Biochemistry",
"Cellular respiration",
"Fermentation"
] |
9,481,141 | https://en.wikipedia.org/wiki/Polyvinyl%20nitrate | Polyvinyl nitrate (abbreviated: PVN) is a high-energy polymer with the idealized formula of [CH2CH(ONO2)]. Polyvinyl nitrate is a long carbon chain (polymer) with nitrate groups (-O-NO2) bonded randomly along the chain. PVN is a white, fibrous solid, and is soluble in polar organic solvents such as acetone. PVN can be prepared by nitrating polyvinyl alcohol with an excess of nitric acid. Because PVN is also a nitrate ester such as nitroglycerin (a common explosive), it exhibits energetic properties and is commonly used in explosives and propellants.
Preparation
Polyvinyl nitrate was first synthesized by submersing polyvinyl alcohol (PVA) in a solution of concentrated sulfuric and nitric acids. The PVA loses a hydrogen atom from its hydroxy group (deprotonation), while the nitric acid (HNO3), in sulfuric acid, forms the nitronium ion (NO2+). The NO2+ attaches to the oxygen in the PVA and creates a nitrate group, producing polyvinyl nitrate. This method results in a low nitrogen content of 10% and an overall yield of 80%. It is inferior because PVA has a low solubility in sulfuric acid and a slow rate of nitration, so a large amount of sulfuric acid is needed relative to PVA, and the product does not have the high nitrogen content that is desirable for its energetic properties.
An improved method nitrates PVA without sulfuric acid; however, when this solution is exposed to air, the PVA combusts. In this method, either the nitration is done under an inert gas (carbon dioxide or nitrogen) or the PVA powder is clumped into larger particles and submerged beneath the nitric acid to limit air exposure.
Currently, the most common method dissolves PVA powder in acetic anhydride at -10°C; cooled nitric acid is then slowly added. This produces a high-nitrogen PVN within about 5-7 hours. Because acetic anhydride is used as the solvent instead of sulfuric acid, the PVA does not combust when exposed to air.
Physical properties
PVN is a white thermoplastic with a softening point of 40-50°C. The theoretical maximum nitrogen content of PVN is 15.73%. PVN is a polymer that has an atactic configuration, meaning the nitrate groups are randomly distributed along the main chain. Fibrous PVN increases in crystallinity as the nitrogen content increases, showing that the PVN molecules organize themselves more orderly as nitrogen percent increases. Intramolecularly, the geometry of the polymer is planar zigzag. The porous PVN can be gelatinized when added to acetone at room temperature. This creates a viscous slurry and loses its fibrous and porous nature; however, it retains most of its energetic properties.
Chemical properties
Combustion
Polyvinyl nitrate is a high-energy polymer due to the significant presence of O-NO2 groups, similar to nitrocellulose and nitroglycerin. These nitrate groups, which have an activation energy of 53 kcal/mol, are the primary cause of PVN's high chemical potential energy. The complete combustion reaction of PVN, assuming full nitration, is:
2 CH2CH(ONO2) + 5/2 O2 → 4 CO2 + N2 + 3 H2O
When burned in air, PVN samples with less nitrogen had a significantly higher heat of combustion, because they contain more hydrogen and generate more heat when oxygen is present: about 3,000 cal/g for 15.71% N versus 3,700 cal/g for 11.76% N. Alternatively, PVN samples with a higher nitrogen content had a significantly higher heat of explosion: their additional O-NO2 groups carry more oxygen, leading to more complete combustion and more heat when burned in inert or low-oxygen environments.
Stability
Nitrate esters, in general, are unstable because of the weak N - O bond and tend to decompose at higher temperatures. Fibrous PVN is relatively stable at 80°C and is less stable as the nitrogen content increases. Gelatinized PVN is less stable than fibrous PVN.
Activation energy
Ignition temperature is the temperature at which a substance combusts spontaneously, requiring no additional energy beyond the heat itself. This temperature can be used to determine the activation energy. For samples of varying nitrogen content, the ignition temperature decreases as the nitrogen percentage increases, showing that PVN is more ignitable as nitrogen content increases. Using the Semenov equation
ln D = E/(RT) + C
where D is the ignition delay (the time it takes for a substance to ignite), E is the activation energy, R is the universal gas constant, T is absolute temperature, and C is a constant, dependent on the material.
The activation energy is greater than 13 kcal/mol, reaching 16 kcal/mol at 15.71% nitrogen (near the theoretical maximum); it varies greatly between different nitrogen concentrations, with no linear relationship between activation energy and degree of nitration.
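To show how the Semenov relation is used in practice, here is a short Python sketch (our addition) that fits ln D against 1/T; the slope is E/R. The (T, D) pairs are hypothetical, chosen only to land near the quoted 16 kcal/mol; they are not measured PVN data.

```python
import math

R = 1.987e-3  # universal gas constant, kcal/(mol*K)

# Hypothetical (temperature K, ignition delay s) pairs -- illustrative only.
data = [(460, 12.0), (480, 5.8), (500, 3.0), (520, 1.65)]

# Semenov relation: ln D = E/(R*T) + C, so ln D is linear in 1/T with slope E/R.
xs = [1.0 / T for T, D in data]
ys = [math.log(D) for T, D in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)

print(f"estimated activation energy: {slope * R:.1f} kcal/mol")  # ~15.7
```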
Impact sensitivity
The height from which a mass must be dropped onto PVN to cause an explosion indicates the sensitivity of PVN to impacts. As nitrogen content increases, fibrous PVN becomes more sensitive to impacts. Gelatinous PVN is similar to fibrous PVN in impact sensitivity.
Applications
Because of the nitrate groups of PVN, polyvinyl nitrate is mainly used for its explosive and energetic capabilities. Structurally, PVN is similar to nitrocellulose in that it is a polymer with several nitrate groups off the main branch, differing only in their main chain (carbon and cellulose respectively). Because of this similarity, PVN is typically used in explosives and propellants as a binder. In explosives, a binder is used to form an explosive where the explosive materials are difficult to mold (see Polymer-bonded explosive (PBX)). A common binder polymer is hydroxyl-terminated polybutadiene (HTPB) or glycidyl azide polymer (GAP). Moreover, the binder needs a plasticizer such as dioctyl adipate (DOP) or 2-nitrodiphenylamine (2-NDPA) to make the explosive more flexible. Polyvinyl nitrate combines the traits of both a binder and a plasticizer, as this polymer binds the explosive ingredients together and is flexible at its softening point (40-50°C). Moreover, PVN adds to the explosive's overall energetic potential due to its nitrate groups.
An example composition including polyvinyl nitrate is PVN, nitrocellulose and/or polyvinyl acetate, and 2-nitrodiphenylamine. This creates a moldable thermoplastic that can be combined with a powder containing nitrocellulose to create a cartridge case where the PVN composition acts as a propellant and assists as an explosive material.
See also
Nitrate ester
Polyvinyl ester
Vinyl polymer
References
Explosive chemicals
Explosive polymers
Nitrate esters
Plastics
Vinyl polymers | Polyvinyl nitrate | [
"Physics",
"Chemistry"
] | 1,549 | [
"Explosive chemicals",
"Amorphous solids",
"Unsolved problems in physics",
"Plastics"
] |
9,481,422 | https://en.wikipedia.org/wiki/24-cell%20honeycomb | In four-dimensional Euclidean geometry, the 24-cell honeycomb, or icositetrachoric honeycomb is a regular space-filling tessellation (or honeycomb) of 4-dimensional Euclidean space by regular 24-cells. It can be represented by Schläfli symbol {3,4,3,3}.
The dual tessellation by regular 16-cell honeycomb has Schläfli symbol {3,3,4,3}. Together with the tesseractic honeycomb (or 4-cubic honeycomb) these are the only regular tessellations of Euclidean 4-space.
Coordinates
The 24-cell honeycomb can be constructed as the Voronoi tessellation of the D4 or F4 root lattice. Each 24-cell is then centered at a D4 lattice point, i.e. one of the integer points (i, j, k, l) with even coordinate sum i + j + k + l.
These points can also be described as Hurwitz quaternions with even square norm.
The vertices of the honeycomb lie at the deep holes of the D4 lattice. These are the Hurwitz quaternions with odd square norm.
It can be constructed as a birectified tesseractic honeycomb, by taking a tesseractic honeycomb and placing vertices at the centers of all the square faces. The 24-cell facets exist between these vertices as rectified 16-cells. If the coordinates of the tesseractic honeycomb are integers (i,j,k,l), the birectified tesseractic honeycomb vertices can be placed at all permutations of half-unit shifts in two of the four dimensions, thus: (i+½,j+½,k,l), (i+½,j,k+½,l), (i+½,j,k,l+½), (i,j+½,k+½,l), (i,j+½,k,l+½), (i,j,k+½,l+½).
Configuration
Each 24-cell in the 24-cell honeycomb has 24 neighboring 24-cells. With each neighbor it shares exactly one octahedral cell.
It has 24 more neighbors such that with each of these it shares a single vertex.
It has no neighbors with which it shares only an edge or only a face.
The vertex figure of the 24-cell honeycomb is a tesseract (4-dimensional cube). So there are 16 edges, 32 triangles, 24 octahedra, and 8 24-cells meeting at every vertex. The edge figure is a tetrahedron, so there are 4 triangles, 6 octahedra, and 4 24-cells surrounding every edge. Finally, the face figure is a triangle, so there are 3 octahedra and 3 24-cells meeting at every face.
Cross-sections
One way to visualize a 4-dimensional figure is to consider various 3-dimensional cross-sections. That is, the intersection of various hyperplanes with the figure in question. Applying this technique to the 24-cell honeycomb gives rise to various 3-dimensional honeycombs with varying degrees of regularity.
A vertex-first cross-section uses some hyperplane orthogonal to a line joining opposite vertices of one of the 24-cells. For instance, one could take any of the coordinate hyperplanes in the coordinate system given above (i.e. the planes determined by xi = 0). The cross-section of {3,4,3,3} by one of these hyperplanes gives a rhombic dodecahedral honeycomb. Each of the rhombic dodecahedra corresponds to a maximal cross-section of one of the 24-cells intersecting the hyperplane (the center of each such (4-dimensional) 24-cell lies in the hyperplane). Accordingly, the rhombic dodecahedral honeycomb is the Voronoi tessellation of the D3 root lattice (a face-centered cubic lattice). Shifting this hyperplane halfway to one of the vertices (e.g. xi = ½) gives rise to a regular cubic honeycomb. In this case the center of each 24-cell lies off the hyperplane. Shifting again, so the hyperplane intersects the vertex, gives another rhombic dodecahedral honeycomb but with new 24-cells (the former ones having shrunk to points). In general, for any integer n, the cross-section through xi = n is a rhombic dodecahedral honeycomb, and the cross-section through xi = n + ½ is a cubic honeycomb. As the hyperplane moves through 4-space, the cross-section morphs between the two periodically.
A cell-first cross-section uses some hyperplane parallel to one of the octahedral cells of a 24-cell. Consider, for instance, some hyperplane orthogonal to the vector (1,1,0,0). The cross-section of {3,4,3,3} by this hyperplane is a rectified cubic honeycomb. Each cuboctahedron in this honeycomb is a maximal cross-section of a 24-cell whose center lies in the plane. Meanwhile, each octahedron is a boundary cell of a (4-dimensional) 24-cell whose center lies off the plane. Shifting this hyperplane till it lies halfway between the center of a 24-cell and the boundary, one obtains a bitruncated cubic honeycomb. The cuboctahedra have shrunk, and the octahedra have grown until they are both truncated octahedra. Shifting again, so the hyperplane intersects the boundary of the central 24-cell gives a rectified cubic honeycomb again, the cuboctahedra and octahedra having swapped positions. As the hyperplane sweeps through 4-space, the cross-section morphs between these two honeycombs periodically.
Kissing number
If a 3-sphere is inscribed in each hypercell of this tessellation, the resulting arrangement is the densest known regular sphere packing in four dimensions, with the kissing number 24. The packing density of this arrangement is
π²/16 ≈ 0.6169.
Each inscribed 3-sphere kisses 24 others at the centers of the octahedral facets of its 24-cell, since each such octahedral cell is shared with an adjacent 24-cell. In a unit-edge-length tessellation, the diameter of the spheres (the distance between the centers of kissing spheres) is √2.
Just outside this surrounding shell of 24 kissing 3-spheres is another less dense shell of 24 3-spheres which do not kiss each other or the central 3-sphere; they are inscribed in 24-cells with which the central 24-cell shares only a single vertex (rather than an octahedral cell). The center-to-center distance between one of these spheres and any of its shell neighbors or the central sphere is 2.
Alternatively, the same sphere packing arrangement with kissing number 24 can be carried out with smaller 3-spheres of edge-length-diameter, by locating them at the centers and the vertices of the 24-cells. (This is equivalent to locating them at the vertices of a 16-cell honeycomb of unit-edge-length.) In this case the central 3-sphere kisses 24 others at the centers of the cubical facets of the three tesseracts inscribed in the 24-cell. (This is the unique body-centered cubic packing of edge-length spheres of the tesseractic honeycomb.)
Just outside this shell of kissing 3-spheres of diameter 1 is another less dense shell of 24 non-kissing 3-spheres of diameter 1; they are centered in the adjacent 24-cells with which the central 24-cell shares an octahedral facet. The center-to-center distance between one of these spheres and any of its shell neighbors or the central sphere is √2.
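The shell structure described above can be checked directly on the D4 lattice. This Python sketch (our addition) enumerates the minimal vectors of D4, recovering the kissing number 24, and then counts the next shell at center-to-center distance 2, which also has 24 members:

```python
from itertools import product

# D4 lattice: integer 4-vectors with even coordinate sum.
vectors = [v for v in product(range(-2, 3), repeat=4)
           if any(v) and sum(v) % 2 == 0]

norms = {v: sum(x * x for x in v) for v in vectors}
min_norm = min(norms.values())

kissing = [v for v in vectors if norms[v] == min_norm]
second = [v for v in vectors if norms[v] == 4]

print(min_norm, len(kissing))  # 2 24: permutations of (+-1, +-1, 0, 0)
print(len(second))             # 24: (+-2, 0, 0, 0) and (+-1, +-1, +-1, +-1)
```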
Symmetry constructions
There are five different Wythoff constructions of this tessellation as a uniform polytope. They are geometrically identical to the regular form, but the symmetry differences can be represented by colored 24-cell facets. In all cases, eight 24-cells meet at each vertex, but the vertex figures have different symmetry generators.
See also
Other uniform honeycombs in 4-space:
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
Truncated 24-cell honeycomb
Rectified 24-cell honeycomb
Snub 24-cell honeycomb
Notes
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) - Model 88
o4o3x3o4o, o3x3o *b3o4o, o3x3o *b3o4o, o3x3o4o3o, o3o3o4o3x - icot - O88
5-polytopes
Honeycombs (geometry)
Regular tessellations | 24-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,992 | [
"Regular tessellations",
"Honeycombs (geometry)",
"Tessellation",
"Crystallography",
"Symmetry"
] |
9,482,345 | https://en.wikipedia.org/wiki/Dose%20profile | In external beam Radiotherapy, transverse and longitudinal dose measurements are taken by a radiation detector in order to characterise the radiation beams from medical linear accelerators. Typically, an ionisation chamber and water phantom are used to create these radiation dose profiles. Water is used due to its tissue equivalence.
Transverse dose measurements are performed in the x (crossplane) or y (inplane) directions perpendicular to the radiation beam, and at a given depth (z) in the phantom. These are known as dose profiles.
Dose measurements taken along the z direction create radiation dose distribution known as a depth-dose curve.
See also
Dosimetry
Percentage depth dose curve
References
Cancer treatments
Radiation
Radiation therapy
Medical physics | Dose profile | [
"Physics",
"Chemistry"
] | 140 | [
"Transport phenomena",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Waves",
"Radiation",
"Medical physics"
] |
2,223,114 | https://en.wikipedia.org/wiki/Baryon%20asymmetry | In physical cosmology, the baryon asymmetry problem, also known as the matter asymmetry problem or the matter–antimatter asymmetry problem, is the observed imbalance in baryonic matter (the type of matter experienced in everyday life) and antibaryonic matter in the observable universe. Neither the standard model of particle physics nor the theory of general relativity provides a known explanation for why this should be so, and it is a natural assumption that the universe is neutral with all conserved charges. The Big Bang should have produced equal amounts of matter and antimatter. Since this does not seem to have been the case, it is likely some physical laws must have acted differently or did not exist for matter and/or antimatter. Several competing hypotheses exist to explain the imbalance of matter and antimatter that resulted in baryogenesis. However, there is as of yet no consensus theory to explain the phenomenon, which has been described as "one of the great mysteries in physics".
Sakharov conditions
In 1967, Andrei Sakharov proposed a set of three necessary conditions that a baryon-generating interaction must satisfy to produce matter and antimatter at different rates. These conditions were inspired by the then-recent discoveries of the cosmic microwave background and of CP violation in the neutral kaon system. The three necessary "Sakharov conditions" are:
Baryon number violation.
C-symmetry and CP-symmetry violation.
Interactions out of thermal equilibrium.
Baryon number violation
Baryon number violation is a necessary condition to produce an excess of baryons over anti-baryons. But C-symmetry violation is also needed so that the interactions which produce more baryons than anti-baryons will not be counterbalanced by interactions which produce more anti-baryons than baryons. CP-symmetry violation is similarly required because otherwise equal numbers of left-handed baryons and right-handed anti-baryons would be produced, as well as equal numbers of left-handed anti-baryons and right-handed baryons. Finally, the interactions must be out of thermal equilibrium, since otherwise CPT symmetry would assure compensation between processes increasing and decreasing the baryon number.
Currently, there is no experimental evidence of particle interactions where the conservation of baryon number is broken perturbatively: this would appear to suggest that all observed particle reactions have equal baryon number before and after. Mathematically, the commutator of the baryon number quantum operator with the (perturbative) Standard Model Hamiltonian is zero: [B, H] = BH − HB = 0. However, the Standard Model is known to violate the conservation of baryon number only non-perturbatively: a global U(1) anomaly. To account for baryon violation in baryogenesis, such events (including proton decay) can occur in Grand Unification Theories (GUTs) and supersymmetric (SUSY) models via hypothetical massive bosons such as the X boson.
CP-symmetry violation
The second condition for generating baryon asymmetry—violation of charge-parity symmetry—is that a process is able to happen at a different rate to its antimatter counterpart. In the Standard Model, CP violation appears as a complex phase in the quark mixing matrix of the weak interaction. There may also be a non-zero CP-violating phase in the neutrino mixing matrix, but this is currently unmeasured. The first in a series of basic physics principles to be violated was parity, shown to be violated in Chien-Shiung Wu's experiment. This led to CP violation being verified in the 1964 Fitch–Cronin experiment with neutral kaons, which resulted in the 1980 Nobel Prize in Physics (direct CP violation, that is violation of CP symmetry in a decay process, was discovered later, in 1999). Due to CPT symmetry, violation of CP symmetry demands violation of time inversion symmetry, or T-symmetry. Despite the allowance for CP violation in the Standard Model, it is insufficient to account for the observed baryon asymmetry of the universe (BAU) given the limits on baryon number violation, meaning that beyond-Standard Model sources are needed.
A possible new source of CP violation was found at the Large Hadron Collider (LHC) by the LHCb collaboration during the first three years of LHC operations (beginning March 2010). The experiment analyzed the decays of two particles, the bottom Lambda (Λb0) and its antiparticle, and compared the distributions of decay products. The data showed an asymmetry of up to 20% of CP-violation sensitive quantities, implying a breaking of CP-symmetry. This analysis will need to be confirmed by more data from subsequent runs of the LHC.
One method to search for additional CP-violation is the search for electric dipole moments of fundamental or composed particles. The existence of electric dipole moments in equilibrium states requires violation of T-symmetry. That way finding a non zero electric dipole moment would imply the existence of T-violating interactions in the vacuum corrections to the measured particle. So far all measurements are consistent with zero putting strong bounds on the properties of the yet unknown new CP-violating interactions.
Interactions out of thermal equilibrium
In the out-of-equilibrium decay scenario, the last condition states that the rate of a reaction which generates baryon-asymmetry must be less than the rate of expansion of the universe. In this situation the particles and their corresponding antiparticles do not achieve thermal equilibrium due to rapid expansion decreasing the occurrence of pair-annihilation.
Other explanations
Regions of the universe where antimatter dominates
Another possible explanation of the apparent baryon asymmetry is that matter and antimatter are essentially separated into different, widely distant regions of the universe. The formation of antimatter galaxies was originally thought to explain the baryon asymmetry, as from a distance, antimatter atoms are indistinguishable from matter atoms; both produce light (photons) in the same way. Along the boundary between matter and antimatter regions, however, annihilation (and the subsequent production of gamma radiation) would be detectable, depending on its distance and the density of matter and antimatter. Such boundaries, if they exist, would likely lie in deep intergalactic space. The density of matter in intergalactic space is reasonably well established at about one atom per cubic meter. Assuming this is a typical density near a boundary, the gamma ray luminosity of the boundary interaction zone can be calculated. No such zones have been detected, but 30 years of research have placed bounds on how far they might be. On the basis of such analyses, it is now deemed unlikely that any region within the observable universe is dominated by antimatter.
Mirror anti-universe
The state of the universe, as it is, does not violate the CPT symmetry, because the Big Bang could be considered as a double sided event, both classically and quantum mechanically, consisting of a universe-antiuniverse pair. This means that this universe is the charge (C), parity (P) and time (T) image of the anti-universe. This pair emerged from the Big Bang epoch directly into a hot, radiation-dominated era. The antiuniverse would flow back in time from the Big Bang, becoming bigger as it does so, and would also be dominated by antimatter. Its spatial properties are inverted if compared to those in our universe, a situation analogous to creating electron–positron pairs in a vacuum. This model, devised by physicists from the Perimeter Institute for Theoretical Physics in Canada, proposes that temperature fluctuations in the cosmic microwave background (CMB) are due to the quantum-mechanical nature of space-time near the Big Bang singularity. This means that a point in the future of our universe and a point in the distant past of the anti-universe would provide fixed classical points, while all possible quantum-based permutations would exist in between. Quantum uncertainty causes the universe and antiuniverse to not be exact mirror images of each other.
This model has not shown if it can reproduce certain observations regarding the inflation scenario, such as explaining the uniformity of the cosmos on large scales. However, it provides a natural and straightforward explanation for dark matter. Such a universe-antiuniverse pair would produce large numbers of superheavy neutrinos, also known as sterile neutrinos. These neutrinos might also be the source of recently observed bursts of high-energy cosmic rays.
Baryon asymmetry parameter
The challenges to the physics theories are then to explain how to produce the predominance of matter over antimatter, and also the magnitude of this asymmetry. An important quantifier is the asymmetry parameter
η = (nB − n̄B) / nγ.
This quantity relates the overall number density difference between baryons and antibaryons (nB and n̄B, respectively) and the number density of cosmic background radiation photons nγ.
According to the Big Bang model, matter decoupled from the cosmic background radiation (CBR) at a temperature of roughly 3000 kelvin, corresponding to an average kinetic energy of 3000 K / (10.08 × 10³ K/eV) = 0.3 eV. After the decoupling, the total number of CBR photons remains constant. Therefore, due to space-time expansion, the photon density decreases. The photon density at equilibrium temperature T, per cubic centimeter, is given by
nγ = (2ζ(3)/π²) (kB T / (ħ c))³
with kB as the Boltzmann constant, ħ as the reduced Planck constant (the Planck constant divided by 2π), c as the speed of light in vacuum, and ζ(3) as Apéry's constant. At the current CBR photon temperature of 2.725 K, this corresponds to a photon density nγ of around 411 CBR photons per cubic centimeter.
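As a numeric check (our addition, using standard constant values), the formula indeed gives roughly 411 photons per cubic centimeter at 2.725 K:

```python
import math

kB = 1.380649e-23           # Boltzmann constant, J/K
hbar = 1.054571817e-34      # reduced Planck constant, J*s
c = 2.99792458e8            # speed of light, m/s
zeta3 = 1.2020569031595943  # Apery's constant

T = 2.725  # current CBR photon temperature, K

n_gamma = (2 * zeta3 / math.pi**2) * (kB * T / (hbar * c))**3  # photons per m^3
print(f"{n_gamma * 1e-6:.0f} photons per cm^3")  # ~411
```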
Therefore, the asymmetry parameter η, as defined above, is not the "good" parameter. Instead, the preferred asymmetry parameter uses the entropy density s,
ηs = (nB − n̄B) / s,
because the entropy density of the universe remained reasonably constant throughout most of its evolution. The entropy density is
s = (p + ρ)/T = (2π²/45) g* T³ (in natural units)
with p and ρ as the pressure and density from the energy density tensor Tμν, and g* as the effective number of degrees of freedom for "massless" particles (inasmuch as mc2 ≪ kBT holds) at temperature T,
g*(T) = Σbosons gi (Ti/T)³ + (7/8) Σfermions gj (Tj/T)³
for bosons and fermions with gi and gj degrees of freedom at temperatures Ti and Tj respectively. Presently, s = 7.04 nγ.
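A quick consistency check of the quoted ratio (our addition): taking today's effective entropy degrees of freedom to be g* = 43/11 ≈ 3.91 (the standard value for photons plus decoupled neutrinos; an assumption, since the text does not state it), the T³ factors cancel in s/nγ:

```python
import math

zeta3 = 1.2020569031595943
g_star_s = 43 / 11  # photons + decoupled neutrinos, ~3.91 (assumed)

# s = (2*pi^2/45) g* T^3 and n_gamma = (2*zeta3/pi^2) T^3 in natural units,
# so the temperature dependence cancels in the ratio.
ratio = (2 * math.pi**2 / 45) * g_star_s / (2 * zeta3 / math.pi**2)
print(f"s / n_gamma = {ratio:.2f}")  # ~7.04
```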
See also
Baryogenesis
CP violation
List of unsolved problems in physics
References
Concepts in astrophysics
Antimatter
Asymmetry
Unsolved problems in physics | Baryon asymmetry | [
"Physics"
] | 2,172 | [
"Symmetry",
"Antimatter",
"Concepts in astrophysics",
"Unsolved problems in physics",
"Astrophysics",
"Asymmetry",
"Matter"
] |
2,223,535 | https://en.wikipedia.org/wiki/Mass%20flow%20rate | In physics and engineering, mass flow rate is the rate at which mass of a substance changes over time. Its unit is kilogram per second (kg/s) in SI units, and slug per second or pound per second in US customary units. The common symbol is (pronounced "m-dot"), although sometimes (Greek lowercase mu) is used.
Sometimes, mass flow rate as defined here is termed "mass flux" or "mass current".
Confusingly, "mass flow" is also a term for mass flux, the rate of mass flow per unit of area.
Formulation
Mass flow rate is defined by the limit
ṁ = lim(Δt→0) Δm/Δt = dm/dt,
i.e., the flow of mass m through a surface per time t.
The overdot on ṁ is Newton's notation for a time derivative. Since mass is a scalar quantity, the mass flow rate (the time derivative of mass) is also a scalar quantity. The change in mass is the amount that flows after crossing the boundary for some time duration, not the initial amount of mass at the boundary minus the final amount at the boundary, since the change in mass flowing through the area would be zero for steady flow.
Alternative equations
Mass flow rate can also be calculated by
ṁ = ρ · V̇ = ρ · v · A = jm · A
where
ρ = mass density of the fluid,
V̇ = volume flow rate,
v = flow velocity of the mass elements,
A = cross-sectional vector area/surface,
jm = mass flux.
The above equation is only true for a flat, plane area. In general, including cases where the area is curved, the equation becomes a surface integral:
ṁ = ∬A ρ v · dA
The area required to calculate the mass flow rate is real or imaginary, flat or curved, either as a cross-sectional area or a surface, e.g. for substances passing through a filter or a membrane, the real surface is the (generally curved) surface area of the filter, macroscopically - ignoring the area spanned by the holes in the filter/membrane. The spaces would be cross-sectional areas. For liquids passing through a pipe, the area is the cross-section of the pipe, at the section considered. The vector area is a combination of the magnitude of the area through which the mass passes through, A, and a unit vector normal to the area, n̂. The relation is A = A n̂.
The reason for the dot product is as follows. The only mass flowing through the cross-section is the amount normal to the area, i.e. parallel to the unit normal. This amount is
ṁ = ρ v A cos θ
where θ is the angle between the unit normal n̂ and the velocity of the mass elements. The amount passing through the cross-section is reduced by the factor cos θ; as θ increases, less mass passes through. All mass which passes in tangential directions to the area, that is perpendicular to the unit normal, doesn't actually pass through the area, so the mass passing through the area is zero. This occurs when θ = π/2:
ṁ = ρ v A cos(π/2) = 0
These results are equivalent to the equation containing the dot product. Sometimes these equations are used to define the mass flow rate.
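A minimal numeric sketch of the flat-area formula (our addition), for water in a hypothetical 5 cm pipe:

```python
import math

def mass_flow_rate(rho, speed, area, theta=0.0):
    # m_dot = rho * v * A * cos(theta), with theta the angle between the
    # flow velocity and the unit normal of the cross-section.
    return rho * speed * area * math.cos(theta)

rho = 1000.0                    # water density, kg/m^3
v = 2.0                         # flow speed, m/s
A = math.pi * (0.05 / 2) ** 2   # cross-section of a 5 cm diameter pipe, m^2

print(mass_flow_rate(rho, v, A))               # ~3.93 kg/s for normal flow
print(mass_flow_rate(rho, v, A, math.pi / 2))  # ~0: tangential flow carries no mass through
```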
Considering flow through porous media, a special quantity, the superficial mass flow rate, can be introduced. It is related to the superficial velocity, vs, by the following relationship:
ṁs = vs · ρ = ṁ/A
The quantity ṁs can be used in particle Reynolds number or mass transfer coefficient calculation for fixed and fluidized bed systems.
Usage
In the elementary form of the continuity equation for mass, in hydrodynamics:
∂ρ/∂t + ∇ · (ρ v) = 0
In elementary classical mechanics, mass flow rate is encountered when dealing with objects of variable mass, such as a rocket ejecting spent fuel. Often, descriptions of such objects erroneously invoke Newton's second law by treating both the mass and the velocity as time-dependent and then applying the derivative product rule. A correct description of such an object requires the application of Newton's second law to the entire, constant-mass system consisting of both the object and its ejected mass.
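The rocket case can be made concrete with a short numeric sketch (our addition): integrating the thrust vₑṁ over a constant-ṁ burn reproduces the Tsiolkovsky result Δv = vₑ ln(m₀/m₁), which is what the correct constant-mass-system treatment yields. All numbers are hypothetical.

```python
import math

v_e = 3000.0  # exhaust speed, m/s
m0 = 1000.0   # initial mass, kg
m_dot = 5.0   # propellant mass flow rate, kg/s
burn = 100.0  # burn time, s  -> final mass m1 = 500 kg

dt, t, v, m = 1e-3, 0.0, 0.0, m0
while t < burn:
    v += (v_e * m_dot / m) * dt  # acceleration = thrust / current mass (gravity-free)
    m -= m_dot * dt
    t += dt

m1 = m0 - m_dot * burn
print(v, v_e * math.log(m0 / m1))  # both ~2079 m/s
```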
Mass flow rate can be used to calculate the energy flow rate of a fluid:
Ė = ṁ e
where e is the unit mass energy of a system.
Energy flow rate has SI units of kilojoule per second or kilowatt.
See also
Continuity equation
Fluid dynamics
Mass flow controller
Mass flow meter
Mass flux
Orifice plate
Standard cubic centimetres per minute
Thermal mass flow meter
Volumetric flow rate
Notes
References
Fluid dynamics
Temporal rates
Mass
Mechanical quantities | Mass flow rate | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 815 | [
"Temporal quantities",
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Mass",
"Temporal rates",
"Size",
"Mechanics",
"Piping",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
2,225,052 | https://en.wikipedia.org/wiki/International%20Society%20for%20the%20Interdisciplinary%20Study%20of%20Symmetry | The International Symmetry Society ("International Society for the Interdisciplinary Study of Symmetry"; abbreviated name SIS) is an international non-governmental, non-profit organization registered in Hungary (Budapest, Tisza u. 7, H-1029).
Its main objectives are:
to bring together artists and scientists, educators and students devoted to, or interested in, the research and understanding of the concept and application of symmetry (asymmetry, dissymmetry);
to provide regular information to the general public about events in symmetry studies;
to ensure a regular forum (including the organization of symposia and the publication of a periodical) for all those interested in symmetry studies.
The topic was first introduced by Russian and Polish scholars. Then in 1952, Hermann Weyl published his book Symmetry, which was later translated into 10 languages. Since then, it has become an attractive subject of research in various fields. A variety of manifestations of the principle of symmetry in sculpture, painting, architecture, ornament, and design, in organic and inorganic nature, has been revealed; the philosophical and mathematical significance of this principle has been studied.
During the 1980s, the discussions concerning the nature of the world, whether it was essentially probabilistic or naturally geometric, revived the interest of the researchers in the topic. The intellectual atmosphere of this period facilitated the idea of the establishment of a new institution devoted to the study of all forms of complexity and patterns of symmetry and orderly structures pervading science, nature and society, which ultimately led to the establishment of the International Society for the Interdisciplinary Study of Symmetry.
The Society's community comprises several branches of science and art, while symmetry studies have gained the rank of an individual interdisciplinary field in the judgement of the scientific community. The Society has members on over 40 countries on all continents.
The Society was founded in 1989 following a successful international meeting in Budapest.
It has operated continuously since its foundation, publishing printed and web journals and hosting an International Congress and Exhibition entitled Symmetry: Art and Science every three years:
1989 in Budapest, Hungary
1992 in Hiroshima, Japan
1995 in Washington DC, US
1998 in Haifa, Israel
2001 in Sydney, Australia
2004 in Tihany, Hungary
2007 in Buenos Aires, Argentina
2010 in Gmünd, Austria
2013 in Crete, Greece
2016 in Adelaide, Australia
2019 in Kanazawa, Japan
2022 in Porto, Portugal
Interim, full conferences have been held in
Tsukuba Science City (co-organized with Katachi no kagaku kai, Japan), 1994 and 1998
Brussels (2002)
Lviv [Lemberg] (2008)
Kraków and Wroclaw (2008).
A new series of conferences under the general heading Logics of Image was launched in 2013 and is planned to take place every two years. This series is co-organised with the Research Group on Universal Logic:
ISSC 2016: Logics of Image - Visualization, Iconicity, Imagination and Human Creativity, in Santorini, Greece
ISSC 2018: Logics of Image - Visual Learning, Logic and Philosophy of Form in East and West, in Crete, Greece
The President of the International Society for the Interdisciplinary Study of Symmetry is Dénes Nagy.
The Society is governed by a number of special Boards and Committees.
The International Advisory Board consists of:
Rima Ajlouni (United States of America)
Alireza Behnejad (UK),
Oleh Bodnar (Ukraine),
Beth Cardier (Australia),
Liu Dun (China),
Shozo Isihara (Japan),
Ritsuko Izuhara (Japan),
Eugene Katz (Israel),
Patricia Muñoz (Argentina, representing SEMA),
Janusz Rębielak (Poland),
Vera Viana (Portugal),
Dmitry Weise (Russia).
Among the Honorary Members of the Society are:
Carol Bier (USA)
Jürgen Bokowski (Germany)
Michael Burt (Israel)
Donald Crowe (United States of America)
Istvan Hargittai (Hungary)
William Huff (United States of America)
Peter Klein (Germany)
Koryo Miura (Japan)
Tohru Ogawa (Japan)
Werner Schulze (Austria)
Caspar Schwabe (Switzerland)
Dan Shechtman (Israel)
Ryuji Takaki (Japan)
Honorary Members of the Society (deceased)
Johann Jakob Burckhardt (Switzerland)
Harold S. M. Coxeter (Canada)
Victor A. Frank-Kamenetsky (Russia)
Heinrich Heesch (Germany)
Kodi Husimi (Japan)
Michael Longuet-Higgins (UK and United States of America)
Yuval Ne’eman (Israel)
Ilarion I. Shafranovskii (Russia)
Cyril Smith (United States of America)
Eugene P. Wigner (United States of America)
External links
Home Page
Facebook site
References
Organizations established in 1989
Symmetry | International Society for the Interdisciplinary Study of Symmetry | [
"Physics",
"Mathematics"
] | 1,004 | [
"Geometry",
"Symmetry"
] |
3,062,954 | https://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman%20absorber%20theory | The Wheeler–Feynman absorber theory (also called the Wheeler–Feynman time-symmetric theory), named after its originators, the physicists Richard Feynman and John Archibald Wheeler, is a theory of electrodynamics based on a relativistic correct extension of action at a distance electron particles. The theory postulates no independent electromagnetic field. Rather, the whole theory is encapsulated by the Lorentz-invariant action of particle trajectories defined as
where ‖x‖² = xμ xμ denotes the Minkowski norm, the products dxa · dxb are four-vector inner products, and the integrals run over the particle world lines.
The absorber theory is invariant under time-reversal transformation, consistent with the lack of any physical basis for microscopic time-reversal symmetry breaking. Another key principle resulting from this interpretation, and somewhat reminiscent of Mach's principle and the work of Hugo Tetrode, is that elementary particles are not self-interacting. This immediately removes the problem of electron self-energy giving an infinity in the energy of an electromagnetic field.
Motivation
Wheeler and Feynman begin by observing that classical electromagnetic field theory was designed before the discovery of electrons: charge is a continuous substance in the theory. An electron particle does not naturally fit in to the theory: should a point charge see the effect of its own field? They reconsider the fundamental problem of a collection of point charges, taking up a field-free action at a distance theory developed separately by Karl Schwarzschild, Hugo Tetrode, and Adriaan Fokker. Unlike instantaneous action at a distance theories of the early 1800s these "direct interaction" theories are based on interaction propagation at the speed of light. They differ from the classical field theory in three ways 1) no independent field is postulated; 2) the point charges do not act upon themselves; 3) the equations are time symmetric. Wheeler and Feynman propose to develop these equations into a relativistically correct generalization of electromagnetism based on Newtonian mechanics.
Problems with previous direct-interaction theories
The Tetrode-Fokker work left unsolved two major problems. First, in a non-instantaneous action at a distance theory, the equal action-reaction of Newton's laws of motion conflicts with causality. If an action propagates forward in time, the reaction would necessarily propagate backwards in time. Second, existing explanations of radiation reaction force or radiation resistance depended upon accelerating electrons interacting with their own field; the direct interaction models explicitly omit self-interaction.
Absorber and radiation resistance
Wheeler and Feynman postulate the "universe" of all other electrons as an absorber of radiation to overcome these issues and extend the direct interaction theories.
Rather than considering an unphysical isolated point charge, they model all charges in the universe with a uniform absorber in a shell around a charge. As the charge moves relative to the absorber, it radiates into the absorber which "pushes back", causing the radiation resistance.
Key result
Feynman and Wheeler obtained their result in a very simple and elegant way. They considered all the charged particles (emitters) present in our universe and assumed all of them to generate time-reversal symmetric waves, half retarded and half advanced. The resulting field is
E(x, t) = Σn ½ [En^ret(x, t) + En^adv(x, t)]
Then they observed that if the relation
E_free(x, t) = Σn ½ [En^ret(x, t) − En^adv(x, t)] = 0
holds, then E_free, being a solution of the homogeneous Maxwell equation, can be used to obtain the total field
E_total(x, t) = Σn ½ [En^ret + En^adv] + Σn ½ [En^ret − En^adv] = Σn En^ret(x, t).
The total field is then the observed pure retarded field.
The assumption that the free field is identically zero is the core of the absorber idea. It means that the radiation emitted by each particle is completely absorbed by all other particles present in the universe. To better understand this point, it may be useful to consider how the absorption mechanism works in common materials. At the microscopic scale, it results from the sum of the incoming electromagnetic wave and the waves generated from the electrons of the material, which react to the external perturbation. If the incoming wave is absorbed, the result is a zero outgoing field. In the absorber theory the same concept is used, however, in presence of both retarded and advanced waves.
Arrow of time ambiguity
The resulting wave appears to have a preferred time direction, because it respects causality. However, this is only an illusion. Indeed, it is always possible to reverse the time direction by simply exchanging the labels emitter and absorber. Thus, the apparently preferred time direction results from the arbitrary labelling. Wheeler and Feynman claimed that thermodynamics picked the observed direction; cosmological selections have also been proposed.
The requirement of time-reversal symmetry, in general, is difficult to reconcile with the principle of causality. Maxwell's equations and the equations for electromagnetic waves have, in general, two possible solutions: a retarded (delayed) solution and an advanced one. Accordingly, any charged particle generates waves, say at time t0 and point x0, which will arrive at point x1 at the instant t0 + |x1 − x0|/c (here c is the speed of light), after the emission (retarded solution), and other waves, which will arrive at the same place at the instant t0 − |x1 − x0|/c, before the emission (advanced solution). The latter, however, violates the causality principle: advanced waves could be detected before their emission. Thus the advanced solutions are usually discarded in the interpretation of electromagnetic waves.
In the absorber theory, instead charged particles are considered as both emitters and absorbers, and the emission process is connected with the absorption process as follows: Both the retarded waves from emitter to absorber and the advanced waves from absorber to emitter are considered. The sum of the two, however, results in causal waves, although the anti-causal (advanced) solutions are not discarded a priori.
Alternatively, the way that Wheeler/Feynman came up with the primary equation is: They assumed that their Lagrangian only interacted when and where the fields for the individual particles were separated by a proper time of zero. So since only massless particles propagate from emission to detection with zero proper time separation, this Lagrangian automatically demands an electromagnetic like interaction.
New interpretation of radiation damping
One of the major results of the absorber theory is the elegant and clear interpretation of the electromagnetic radiation process. A charged particle that experiences acceleration is known to emit electromagnetic waves, i.e., to lose energy. Thus, the Newtonian equation for the particle must contain a dissipative force (damping term), which takes into account this energy loss. In the causal interpretation of electromagnetism, Hendrik Lorentz and Max Abraham proposed that such a force, later called Abraham–Lorentz force, is due to the retarded self-interaction of the particle with its own field. This first interpretation, however, is not completely satisfactory, as it leads to divergences in the theory and needs some assumptions on the structure of charge distribution of the particle. Paul Dirac generalized the formula to make it relativistically invariant. While doing so, he also suggested a different interpretation. He showed that the damping term can be expressed in terms of a free field acting on the particle at its own position:
E_damping(xj, t) = ½ [Ej^ret(xj, t) − Ej^adv(xj, t)].
However, Dirac did not propose any physical explanation of this interpretation.
A clear and simple explanation can instead be obtained in the framework of absorber theory, starting from the simple idea that each particle does not interact with itself. This is actually the opposite of the first Abraham–Lorentz proposal. The field acting on the particle j at its own position (the point xj) is then
E(xj, t) = Σn≠j ½ [En^ret(xj, t) + En^adv(xj, t)].
If we sum the free-field term of this expression, we obtain
E(xj, t) = Σn≠j En^ret(xj, t) + ½ [Ej^ret(xj, t) − Ej^adv(xj, t)]
and, thanks to Dirac's result,
E(xj, t) = Σn≠j En^ret(xj, t) + E_damping(xj, t).
Thus, the damping force is obtained without the need for self-interaction, which is known to lead to divergences, and also giving a physical justification to the expression derived by Dirac.
Developments since original formulation
Gravity theory
Inspired by the Machian nature of the Wheeler–Feynman absorber theory for electrodynamics, Fred Hoyle and Jayant Narlikar proposed their own theory of gravity in the context of general relativity. This model still exists in spite of recent astronomical observations that have challenged the theory. Stephen Hawking had criticized the original Hoyle-Narlikar theory believing that the advanced waves going off to infinity would lead to a divergence, as indeed they would, if the universe were only expanding.
Transactional interpretation of quantum mechanics
Again inspired by the Wheeler–Feynman absorber theory, the transactional interpretation of quantum mechanics (TIQM) first proposed in 1986 by John G. Cramer, describes quantum interactions in terms of a standing wave formed by retarded (forward-in-time) and advanced (backward-in-time) waves. Cramer claims it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes, such as quantum nonlocality, quantum entanglement and retrocausality.
Attempted resolution of causality
T. C. Scott and R. A. Moore demonstrated that the apparent acausality suggested by the presence of advanced Liénard–Wiechert potentials could be removed by recasting the theory in terms of retarded potentials only, without the complications of the absorber idea.
The Lagrangian describing a particle (p1) under the influence of the time-symmetric potential generated by another particle (p2) is
L1 = T1 − ½ [(VR)1(2) + (VA)1(2)]
where Ti is the relativistic kinetic energy functional of particle i, and (VR)i(j) and (VA)i(j) are respectively the retarded and advanced Liénard–Wiechert potentials acting on particle i and generated by particle j. The corresponding Lagrangian for particle p2 is
L2 = T2 − ½ [(VR)2(1) + (VA)2(1)].
It was originally demonstrated with computer algebra and then proven analytically that the difference
(VA)i(j) − (VR)j(i)
is a total time derivative, i.e. a divergence in the calculus of variations, and thus it gives no contribution to the Euler–Lagrange equations. Thanks to this result the advanced potentials can be eliminated; here the total derivative plays the same role as the free field. The Lagrangian for the N-body system is therefore
LN = Σi Ti − ½ Σi Σj≠i (VR)i(j).
The resulting Lagrangian is symmetric under the exchange of particle i with particle j. For N = 2 this Lagrangian will generate exactly the same equations of motion as L1 and L2. Therefore, from the point of view of an outside observer, everything is causal. This formulation reflects particle-particle symmetry with the variational principle applied to the N-particle system as a whole, and thus Tetrode's Machian principle. Only if we isolate the forces acting on a particular body do the advanced potentials make their appearance. This recasting of the problem comes at a price: the N-body Lagrangian depends on all the time derivatives of the curves traced by all particles, i.e. the Lagrangian is infinite-order. However, much progress was made in examining the unresolved issue of quantizing the theory. Also, this formulation recovers the Darwin Lagrangian, from which the Breit equation was originally derived, but without the dissipative terms. This ensures agreement with theory and experiment, up to but not including the Lamb shift. Numerical solutions for the classical problem were also found. Furthermore, Moore showed that a model by Feynman and Albert Hibbs is amenable to the methods of higher than first-order Lagrangians and revealed chaotic-like solutions. Moore and Scott showed that the radiation reaction can be alternatively derived using the notion that, on average, the net dipole moment is zero for a collection of charged particles, thereby avoiding the complications of the absorber theory.
On this view, the acausality is merely apparent, and the entire problem goes away. An opposing view was held by Einstein.
Alternative Lamb shift calculation
As mentioned previously, a serious criticism against the absorber theory is that its Machian assumption that point particles do not act on themselves does not allow (infinite) self-energies and consequently an explanation for the Lamb shift according to quantum electrodynamics (QED). Ed Jaynes proposed an alternate model where the Lamb-like shift is due instead to the interaction with other particles very much along the same notions of the Wheeler–Feynman absorber theory itself. One simple model is to calculate the motion of an oscillator coupled directly with many other oscillators. Jaynes has shown that it is easy to get both spontaneous emission and Lamb shift behavior in classical mechanics. Furthermore, Jaynes' alternative provides a solution to the process of "addition and subtraction of infinities" associated with renormalization.
This model leads to the same type of Bethe logarithm (an essential part of the Lamb shift calculation), vindicating Jaynes' claim that two different physical models can be mathematically isomorphic to each other and therefore yield the same results, a point also apparently made by Scott and Moore on the issue of causality.
Relationship to quantum field theory
This universal absorber theory is mentioned in the chapter titled "Monster Minds" in Feynman's autobiographical work Surely You're Joking, Mr. Feynman! and in Vol. II of the Feynman Lectures on Physics. It led to the formulation of a framework of quantum mechanics using a Lagrangian and action as starting points, rather than a Hamiltonian, namely the formulation using Feynman path integrals, which proved useful in Feynman's earliest calculations in quantum electrodynamics and quantum field theory in general. Both retarded and advanced fields appear respectively as retarded and advanced propagators and also in the Feynman propagator and the Dyson propagator. In hindsight, the relationship between retarded and advanced potentials shown here is not so surprising in view of the fact that, in quantum field theory, the advanced propagator can be obtained from the retarded propagator by exchanging the roles of field source and test particle (usually within the kernel of a Green's function formalism). In quantum field theory, advanced and retarded fields are simply viewed as mathematical solutions of Maxwell's equations whose combinations are decided by the boundary conditions.
See also
Abraham–Lorentz force
Causality
Paradox of radiation of charged particles in a gravitational field
Retrocausality
Symmetry in physics and T-symmetry
Transactional interpretation
Two-state vector formalism
Notes
Sources
Electromagnetism
Richard Feynman | Wheeler–Feynman absorber theory | [
"Physics"
] | 2,920 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
3,063,673 | https://en.wikipedia.org/wiki/Control%20variates | The control variates method is a variance reduction technique used in Monte Carlo methods. It exploits information about the errors in estimates of known quantities to reduce the error of an estimate of an unknown quantity.
Underlying principle
Let the unknown parameter of interest be $\mu$, and assume we have a statistic $m$ such that the expected value of $m$ is $\mu$: $\mathbb{E}[m] = \mu$, i.e. $m$ is an unbiased estimator for $\mu$. Suppose we calculate another statistic $t$ such that $\mathbb{E}[t] = \tau$ is a known value. Then
$m^{\star} = m + c\,(t - \tau)$
is also an unbiased estimator for $\mu$ for any choice of the coefficient $c$.
The variance of the resulting estimator is
$\operatorname{Var}(m^{\star}) = \operatorname{Var}(m) + c^2\,\operatorname{Var}(t) + 2c\,\operatorname{Cov}(m, t).$
By differentiating the above expression with respect to $c$, it can be shown that choosing the optimal coefficient
$c^{\star} = -\frac{\operatorname{Cov}(m, t)}{\operatorname{Var}(t)}$
minimizes the variance of $m^{\star}$. (Note that this coefficient is the same as the coefficient obtained from a linear regression.) With this choice,
$\operatorname{Var}(m^{\star}) = \left(1 - \rho_{m,t}^{2}\right)\operatorname{Var}(m),$
where $\rho_{m,t} = \operatorname{Corr}(m, t)$ is the correlation coefficient of $m$ and $t$. The greater the value of $|\rho_{m,t}|$, the greater the variance reduction achieved.
In the case that $\operatorname{Var}(t)$, $\operatorname{Cov}(m, t)$, and/or $\rho_{m,t}$ are unknown, they can be estimated across the Monte Carlo replicates. This is equivalent to solving a certain least squares system; therefore this technique is also known as regression sampling.
When the expectation of the control variable, $\mathbb{E}[t] = \tau$, is not known analytically, it is still possible to increase the precision in estimating $\mu$ (for a given fixed simulation budget), provided that two conditions are met: 1) evaluating $t$ is significantly cheaper than computing $m$; 2) the magnitude of the correlation coefficient $|\rho_{m,t}|$ is close to unity.
Example
We would like to estimate
$I = \int_0^1 \frac{1}{1+x}\,dx$
using Monte Carlo integration. This integral is the expected value of $f(U)$, where
$f(U) = \frac{1}{1+U}$
and U follows a uniform distribution [0, 1].
Using a sample of size n, denote the points in the sample as $u_1, \ldots, u_n$. Then the estimate is given by
$I \approx \frac{1}{n}\sum_{i=1}^{n} f(u_i).$
Now we introduce $t = 1 + U$ as a control variate with a known expected value $\mathbb{E}[t] = \tfrac{3}{2}$ and combine the two into a new estimate
$I \approx \frac{1}{n}\sum_{i=1}^{n} f(u_i) + c^{\star}\left(\frac{1}{n}\sum_{i=1}^{n}(1 + u_i) - \tfrac{3}{2}\right).$
Using $n$ realizations and an estimated optimal coefficient $c^{\star}$, the variance is significantly reduced compared with the plain Monte Carlo estimate. (The exact result is $I = \ln 2 \approx 0.69315$.)
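A minimal simulation makes the variance reduction concrete. In the sketch below the sample size and seed are arbitrary choices; the integral is estimated by plain Monte Carlo and then corrected with the control variate $t = 1 + U$, with the optimal coefficient estimated from the same sample.

```python
# Control-variate estimate of I = ∫₀¹ 1/(1+x) dx = ln 2 ≈ 0.6931.
import numpy as np

rng = np.random.default_rng(0)           # arbitrary seed
n = 1500                                 # arbitrary sample size
u = rng.random(n)

m = 1.0 / (1.0 + u)                      # f(U), the quantity of interest
t = 1.0 + u                              # control variate with E[t] = 3/2
tau = 1.5

c_star = -np.cov(m, t)[0, 1] / np.var(t, ddof=1)   # regression coefficient
m_cv = m + c_star * (t - tau)

print("plain MC       : mean %.5f  sample var %.5f" % (m.mean(), m.var(ddof=1)))
print("control variate: mean %.5f  sample var %.5f" % (m_cv.mean(), m_cv.var(ddof=1)))
print("exact          : %.5f" % np.log(2))
# m and t are strongly negatively correlated, so the corrected estimator's
# variance is far smaller than that of plain Monte Carlo.
```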
See also
Antithetic variates
Importance sampling
Notes
References
Ross, Sheldon M. (2002) Simulation 3rd edition
Averill M. Law & W. David Kelton (2000), Simulation Modeling and Analysis, 3rd edition.
S. P. Meyn (2007) Control Techniques for Complex Networks, Cambridge University Press. . Downloadable draft (Section 11.4: Control variates and shadow functions)
Monte Carlo methods
Statistical randomness
Computational statistics
Variance reduction | Control variates | [
"Physics",
"Mathematics"
] | 502 | [
"Monte Carlo methods",
"Computational statistics",
"Computational mathematics",
"Computational physics"
] |
3,065,512 | https://en.wikipedia.org/wiki/Hyaline | A hyaline substance is one with a glassy appearance. The word is derived from , and .
Histopathology
Hyaline cartilage is named after its glassy appearance on fresh gross pathology. On light microscopy of H&E stained slides, the extracellular matrix of hyaline cartilage looks homogeneously pink, and the term "hyaline" is used to describe similarly homogeneously pink material besides the cartilage. Hyaline material is usually acellular and proteinaceous. For example, arterial hyaline is seen in aging, high blood pressure, diabetes mellitus and in association with some drugs (e.g. calcineurin inhibitors). It is bright pink with PAS staining.
Ichthyology and entomology
In ichthyology and entomology, hyaline denotes a colorless, transparent substance, such as unpigmented fins of fishes or clear insect wings.
Botany
In botany, hyaline refers to thin and translucent plant parts, such as the margins of some sepals, bracts and leaves.
See also
Hyaline arteriolosclerosis
Hyaloid canal, which passes through the eye
Hyalopilitic
Hyaloserositis
Infant respiratory distress syndrome, previously known as hyaline membrane disease
References
Taber's Cyclopedic Medical Dictionary, 19th Edition. Donald Venes ed. 1997 F.A. Davis. Page 1008.
Histopathology
Fungal morphology and anatomy | Hyaline | [
"Chemistry"
] | 306 | [
"Histopathology",
"Microscopy"
] |
3,065,940 | https://en.wikipedia.org/wiki/Optical%20vortex | An optical vortex (also known as a photonic quantum vortex, screw dislocation or phase singularity) is a zero of an optical field; a point of zero intensity. The term is also used to describe a beam of light that has such a zero in it. The study of these phenomena is known as singular optics.
Explanation
In an optical vortex, light is twisted like a corkscrew around its axis of travel. Because of the twisting, the light waves at the axis itself cancel each other out. When projected onto a flat surface, an optical vortex looks like a ring of light with a dark hole in the center. The vortex is given a number, called the topological charge, according to how many twists the light makes in one wavelength. The number is always an integer, and can be positive or negative, depending on the direction of the twist. The higher the number of twists, the faster the light is spinning around the axis.
This spinning carries orbital angular momentum with the wave train, and will induce torque on an electric dipole. Orbital angular momentum is distinct from the more commonly encountered spin angular momentum, which produces circular polarization. Orbital angular momentum of light can be observed in the orbiting motion of trapped particles. Interfering an optical vortex with a plane wave of light reveals the spiral phase as concentric spirals. The number of arms in the spiral equals the topological charge.
Optical vortices are studied by creating them in the lab in various ways. They can be generated directly in a laser, or a laser beam can be twisted into a vortex using any of several methods, such as computer-generated holograms, spiral-phase delay structures, or birefringent vortices in materials.
Properties
An optical singularity is a zero of an optical field. The phase in the field circulates around these points of zero intensity (giving rise to the name vortex). Vortices are points in 2D fields and lines in 3D fields (as they have codimension two). Integrating the phase of the field around a path enclosing a vortex yields an integer multiple of $2\pi$. This integer is known as the topological charge, or strength, of the vortex.
A hypergeometric-Gaussian mode (HyGG) has an optical vortex in its center. The beam is a solution to the paraxial wave equation (see paraxial approximation, and the Fourier optics article for the actual equation) whose amplitude involves a confluent hypergeometric function. Photons in a hypergeometric-Gaussian beam have an orbital angular momentum of mħ. The integer m also gives the strength of the vortex at the beam's centre. Spin angular momentum of circularly polarized light can be converted into orbital angular momentum.
Creation
Several methods exist to create hypergeometric-Gaussian modes, including with a spiral phase plate, computer-generated holograms, mode conversion, a q-plate, or a spatial light modulator.
Static spiral phase plate(s) or mirror(s) are spiral-shaped pieces of crystal or plastic that are engineered specifically to the desired topological charge and incident wavelength. They are efficient, yet expensive. Adjustable spiral phase plates can be made by moving a wedge between two sides of a cracked piece of plastic. Off-axis spiral phase mirrors can be used to mode convert high-power and ultra-short lasers.
Computer-generated holograms (CGHs) are the calculated interferogram between a plane wave and a Laguerre-Gaussian beam which is transferred to film. The CGH resembles a common Ronchi linear diffraction grating, save a "fork" dislocation. An incident laser beam creates a diffraction pattern with vortices whose topological charge increases with diffraction order. The zero order is Gaussian, and the vortices have opposite helicity on either side of this undiffracted beam. The number of prongs in the CGH fork is directly related to the topological charge of the first diffraction order vortex. The CGH can be blazed to direct more intensity into the first order. Bleaching transforms it from an intensity grating to a phase grating, which increases efficiency.
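A fork hologram of this kind is straightforward to compute. The sketch below (grid size, charge, and carrier period are arbitrary choices) builds the binarized interference pattern between a tilted plane wave and a helical phase $e^{im\phi}$; the resulting grating shows the characteristic fork dislocation on axis, and its first diffraction order carries topological charge $m$.

```python
# Computer-generated "fork" hologram for an optical vortex of charge m.
import numpy as np

npix = 512                               # grid size (arbitrary)
m = 3                                    # desired topological charge
kx = 2 * np.pi / 16                      # carrier: one fringe per 16 pixels

x = np.arange(npix) - npix / 2
X, Y = np.meshgrid(x, x)
phi = np.arctan2(Y, X)                   # azimuthal angle about the axis

# Interferogram of a plane wave exp(i kx X) with a vortex exp(i m phi):
pattern = 0.5 * (1 + np.cos(kx * X - m * phi))
mask = (pattern > 0.5).astype(float)     # binarize into an amplitude mask

# The central dislocation has m extra prongs; illuminated by a Gaussian
# beam, the first diffraction order acquires a charge-m helical phase.
print(mask.shape)
```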
Mode conversion requires Hermite-Gaussian (HG) modes, which can easily be made inside the laser cavity or externally by less accurate means. A pair of astigmatic lenses introduces a Gouy phase shift which creates an LG beam with azimuthal and radial indices dependent upon the input HG.
A spatial light modulator is a computer-controlled electronic liquid-crystal device which can create dynamic vortices, arrays of vortices, and other types of beams by creating a hologram of varying refractive indices. This hologram may be a fork pattern, a spiral phase plate, or some similar pattern with non-zero topological charge.
Deformable mirror made of segments can be used to dynamically (with a rate of up to a few kHz) create vortices, even if illuminated by high power lasers.
A q-plate is a birefringent liquid crystal plate with an azimuthal distribution of the local optical axis, which has a topological charge q at its center defect. The q-plate with topological charge q can generate a charge vortex based on the input beam polarization.
An s-plate is a similar technology to a q-plate, using a high-intensity UV laser to permanently etch a birefringent pattern into silica glass with an azimuthal variation in the fast axis with topological charge of s. Unlike a q-plate, which may be wavelength tuned by adjusting the bias voltage on the liquid crystal, an s-plate only works for one wavelength of light.
At radio frequencies it is trivial to produce a (non optical) electromagnetic vortex. Simply arrange a one wavelength or greater diameter ring of antennas such that the phase shift of the broadcast antennas varies an integral multiple of $2\pi$ around the ring.
Nanophotonic metasurfaces can enable transverse phase modulation to create optical vortices. The vortex beams can be generated in either free space or on an integrated photonic chip.
A spiral lens can “[incorporate] the elements necessary to make an optical vortex directly into its surface.” Spiralizing a diopter can achieve multifocality, allowing—for instance in ophthalmic applications—increased acuity over a wide range of focal distances and light levels.
Detection
An optical vortex, being fundamentally a phase structure, cannot be detected from its intensity profile alone. Furthermore, as vortex beams of the same order have roughly identical intensity profiles, they cannot be solely characterized from their intensity distributions. As a result, a wide range of interferometric techniques are employed.
The simplest of the techniques is to interfere a vortex beam with an inclined plane wave, which results in a fork-like interferogram. By making a count of the number of forks in the pattern and their relative orientations, the vortex order and its corresponding sign can be precisely estimated.
A vortex beam can be deformed into its characteristic lobe structure while passing through a tilted lens. This happens as a result of a self-interference between different phase points in a vortex. A vortex beam of order $m$ will be split into $m + 1$ lobes, roughly around the depth of focus of a tilted convex lens. Furthermore, the orientation of the lobes (right and left diagonal) determines the positive and negative orbital angular momentum orders.
A vortex beam generates a lobe structure when interfered with a vortex of opposite sign. This technique offers no mechanism to characterize the signs, however. This technique can be employed by placing a Dove prism in one of the paths of a Mach–Zehnder interferometer, pumped with a vortex profile.
Applications
There are a broad variety of applications of optical vortices in diverse areas of communications and imaging.
Extrasolar planets have only recently been directly detected, as their parent star is so bright. Progress has been made in creating an optical vortex coronagraph to directly observe planets with too low a contrast ratio to their parent to be observed with other techniques.
Optical vortices are used in optical tweezers to manipulate micrometer-sized particles such as cells. Such particles can be rotated in orbits around the axis of the beam using OAM. Micro-motors have also been created using optical vortex tweezers.
Optical vortices can significantly improve communication bandwidth. For instance, twisted radio beams could increase radio spectral efficiency by using the large number of vortical states. The amount of phase front ‘twisting’ indicates the orbital angular momentum state number, and beams with different orbital angular momentum are orthogonal. Such orbital angular momentum based multiplexing can potentially increase the system capacity and spectral efficiency of millimetre-wave wireless communication.
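The orthogonality exploited here follows from $\int_0^{2\pi} e^{i(m - m')\phi}\,\mathrm{d}\phi = 2\pi\,\delta_{mm'}$, which a quick numerical check confirms (a minimal sketch; the grid resolution is arbitrary):

```python
# Numerical check that helical modes exp(i m φ) with different charges
# are mutually orthogonal over the azimuthal coordinate.
import numpy as np

phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
dphi = phi[1] - phi[0]

def overlap(m1, m2):
    return np.sum(np.exp(1j * m1 * phi) * np.conj(np.exp(1j * m2 * phi))) * dphi

print(abs(overlap(2, 2)))   # ≈ 2π: identical charges
print(abs(overlap(2, 5)))   # ≈ 0: different charges do not mix
```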
Similarly, early experimental results for orbital angular momentum multiplexing in the optical domain have shown results over short distances, but longer distance demonstrations are still forthcoming. The main challenge that these demonstrations have faced is that conventional optical fibers change the spin angular momentum of vortices as they propagate, and may change the orbital angular momentum when bent or stressed. So far stable propagation of up to 50 meters has been demonstrated in specialty optical fibers. Free-space transmission of orbital angular momentum modes of light over a distance of 143 km has been demonstrated to be able to support encoding of information with good robustness.
Current computers use electronics that have two states, zero and one. Quantum computing could use light to encode and store information. Optical vortices theoretically have an infinite number of states in free space, as there is no limit to the topological charge. This could allow for faster data manipulation. The cryptography community is also interested in optical vortices for the promise of higher bandwidth communication discussed above.
In optical microscopy, optical vortices may be used to achieve spatial resolution beyond normal diffraction limits using a technique called Stimulated Emission Depletion (STED) Microscopy. This technique takes advantage of the low intensity at the singularity in the center of the beam to deplete the fluorophores around a desired area with a high-intensity optical vortex beam without depleting fluorophores in the desired target area.
Optical vortices can be also directly (resonantly) transferred into polariton fluids of light and matter to study the dynamics of quantum vortices upon linear or nonlinear interaction regimes.
Optical vortices can be identified in the non-local correlations of entangled photon pairs.
See also
Orbital angular momentum of light
References
External links
Video of propagation simulation of Vortex Diffractive Optical Element from near field to far field by Holo/Or .
Optical vortices and optical tweezers at the University of Glasgow.
Singular Optics Master list by Grover Swartzlander Jr., University of Arizona, Tucson.
Optical vortex coronograph, Gregory Foo, et al., University of Arizona, Tucson.
Optical tweezers, David Grier, New York University.
Selected Publications on Optical Vortices at Australian National University.
Physical optics
Orbital angular momentum of waves
Vortices | Optical vortex | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,272 | [
"Physical phenomena",
"Physical quantities",
"Vortices",
"Angular momentum of light",
"Waves",
"Orbital angular momentum of waves",
"Dynamical systems",
"Fluid dynamics",
"Angular momentum",
"Moment (physics)"
] |
3,067,278 | https://en.wikipedia.org/wiki/Heisenberg%27s%20microscope | Heisenberg's microscope is a thought experiment proposed by Werner Heisenberg that has served as the nucleus of some commonly held ideas about quantum mechanics. In particular, it provides an argument for the uncertainty principle on the basis of the principles of classical optics.
The concept was criticized by Heisenberg's mentor Niels Bohr, and theoretical and experimental developments have suggested that Heisenberg's intuitive explanation of his mathematical result might be misleading. While the act of measurement does lead to uncertainty, the loss of precision is less than that predicted by Heisenberg's argument when measured at the level of an individual state. The formal mathematical result remains valid, however, and the original intuitive argument has also been vindicated mathematically when the notion of disturbance is expanded to be independent of any specific state.
Heisenberg's argument
Heisenberg supposes that an electron is like a classical particle, moving in the $x$ direction along a line below the microscope. Let the cone of light rays leaving the microscope lens and focusing on the electron make an angle $\varepsilon$ with the electron. Let $\lambda$ be the wavelength of the light rays. Then, according to the laws of classical optics, the microscope can only resolve the position of the electron up to an accuracy of
$\Delta x = \frac{\lambda}{2 \sin \varepsilon}.$
An observer perceives an image of the particle because the light rays strike the particle and bounce back through the microscope to the observer's eye. We know from experimental evidence that when a photon strikes an electron, the latter has a Compton recoil with momentum proportional to $h/\lambda$, where $h$ is the Planck constant. However, the extent of the "recoil cannot be exactly known, since the direction of the scattered photon is undetermined within the bundle of rays entering the microscope." In particular, the electron's momentum in the $x$ direction is only determined up to
$\Delta p_x \approx \frac{2h}{\lambda} \sin \varepsilon.$
Combining the relations for $\Delta x$ and $\Delta p_x$, we thus have
$\Delta x \, \Delta p_x \approx \left(\frac{\lambda}{2 \sin \varepsilon}\right)\left(\frac{2h}{\lambda} \sin \varepsilon\right) = h,$
which is an approximate expression of Heisenberg's uncertainty principle.
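The cancellation of $\lambda$ and $\varepsilon$ in this product is easy to verify symbolically (a minimal check of the algebra above):

```python
# Symbolic check that Δx·Δp ≈ h, independent of wavelength and aperture.
import sympy as sp

lam, eps, h = sp.symbols('lambda epsilon h', positive=True)
dx = lam / (2 * sp.sin(eps))        # resolving power of the microscope
dp = 2 * h * sp.sin(eps) / lam      # momentum spread of the Compton recoil

print(sp.simplify(dx * dp))         # prints: h
```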
Analysis of argument
Although the thought experiment was formulated as an introduction to Heisenberg's uncertainty principle, one of the pillars of modern physics, it attacks the very premises under which it was constructed, thereby contributing to the development of an area of physics—namely, quantum mechanics—that redefined the terms under which the original thought experiment was conceived.
Some interpretations of quantum mechanics question whether an electron actually has a determinate position before it is disturbed by the measurement used to establish said determinate position. Under the Copenhagen interpretation, an electron has some probability of showing up at any point in the universe, though the probability that it will be far from where one expects becomes very low at great distances from the neighborhood in which it is originally found. In other words, the "position" of an electron can only be stated in terms of a probability distribution, as can predictions of where it may move.
See also
Atom localization
Quantum mechanics
Basics of quantum mechanics
Interpretation of quantum mechanics
Philosophical interpretation of classical physics
Schrödinger's cat
Uncertainty principle
Quantum field theory
Electromagnetic radiation
References
Sources
External links
History of Heisenberg's Microscope
Lectures on Heisenberg's Microscope
Thought experiments in quantum mechanics
Werner Heisenberg | Heisenberg's microscope | [
"Physics"
] | 626 | [
"Quantum mechanics",
"Thought experiments in quantum mechanics"
] |
3,068,500 | https://en.wikipedia.org/wiki/Cream%20%28pharmacy%29 | A cream is a preparation usually for application to the skin. Creams for application to mucous membranes such as those of the rectum or vagina are also used. Creams may be considered pharmaceutical products, since even cosmetic creams are manufactured using techniques developed by pharmacy and unmedicated creams are highly used in a variety of skin conditions (dermatoses). The use of the finger tip unit concept may be helpful in guiding how much topical cream is required to cover different areas.
Creams are semi-solid emulsions of oil and water. They are divided into two types: oil-in-water (O/W) creams which are composed of small droplets of oil dispersed in a continuous water phase, and water-in-oil (W/O) creams which are composed of small droplets of water dispersed in a continuous oily phase. Oil-in-water creams are more comfortable and cosmetically acceptable as they are less greasy and more easily washed off using water. Water-in-oil creams are more difficult to handle but many drugs which are incorporated into creams are hydrophobic and will be released more readily from a water-in-oil cream than an oil-in-water cream. Water-in-oil creams are also more moisturising as they provide an oily barrier which reduces water loss from the stratum corneum, the outermost layer of the skin.
Uses
The provision of a barrier to protect the skin
This may be a physical barrier or a chemical barrier as with sunscreens
To aid in the retention of moisture (especially water-in-oil creams)
Cleansing
Emollient effects
As a vehicle for drug substances such as local anaesthetics, anti-inflammatories (NSAIDs or corticosteroids), hormones, antibiotics, antifungals or counter-irritants.
Creams are semisolid dosage forms containing more than 20% water or volatile components and typically less than 50% hydrocarbons, waxes, or polyols as vehicles. They may also contain one or more drug substances dissolved or dispersed in a suitable cream base. This term has traditionally been applied to semisolids that possess a relatively fluid consistency formulated as either water-in-oil (e.g., cold cream) or oil-in-water (e.g., fluocinolone acetonide cream) emulsions. However, more recently the term has been restricted to products consisting of oil-in-water emulsions or aqueous microcrystalline dispersions of long-chain fatty acids or alcohols that are water washable and more cosmetically and aesthetically acceptable. Creams can be used for administering drugs via the vaginal route (e.g., Triple Sulfa vaginal cream). Creams are also used to treat sunburns.
Composition
There are four main ingredients of the cold cream:
Water
Oil
Emulsifier
Thickening agent
Topical medication forms
There are many types of preparations applied to a body surface, such as:
ointments – consist of a single-phase in which solids or liquids may be dispersed. There are hydrophobic, water-emulsifying, and hydrophilic ointments.
creams – consist of a lipophilic phase and an aqueous phase. There are lipophilic (W/O) and hydrophilic (O/W) creams, depending on the continuous phase.
gels – consist of liquids gelled by suitable gelling agents. There are lipophilic gels (oleogels) and Hydrophilic gels (hydrogels).
pastes – contain large proportions of solids finely dispersed in the basis.
poultices – consist of a hydrophilic heat-retentive basis in which solids or liquids are dispersed. They are usually spread thickly on a suitable dressing and heated before application to the skin.
topical powders – consist of solid, loose, dry particles of varying degrees of fineness.
medicated plasters – consist of an adhesive basis spread as a uniform layer on an appropriate support made of natural or synthetic material.
See also
Lotion topical
References
External links
Dosage forms
Drug delivery devices | Cream (pharmacy) | [
"Chemistry"
] | 869 | [
"Pharmacology",
"Drug delivery devices"
] |
3,069,483 | https://en.wikipedia.org/wiki/Dynamic%20recrystallization | Dynamic recrystallization (DRX) is a type of recrystallization process, found within the fields of metallurgy and geology. In dynamic recrystallization, as opposed to static recrystallization, the nucleation and growth of new grains occurs during deformation rather than afterwards as part of a separate heat treatment. The reduction of grain size increases the risk of grain boundary sliding at elevated temperatures, while also decreasing dislocation mobility within the material. The new grains are less strained, causing a decrease in the hardening of a material. Dynamic recrystallization allows for new grain sizes and orientation, which can prevent crack propagation. Rather than strain causing the material to fracture, strain can initiate the growth of a new grain, consuming atoms from neighboring pre-existing grains. After dynamic recrystallization, the ductility of the material increases.
In a stress–strain curve, the onset of dynamic recrystallization can be recognized by a distinct peak in the flow stress in hot working data, due to the softening effect of recrystallization. However, not all materials display well-defined peaks when tested under hot working conditions. The onset of DRX can also be detected from an inflection point in plots of the strain hardening rate against stress. It has been shown that this technique can be used to establish the occurrence of DRX when this cannot be determined unambiguously from the shape of the flow curve.
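The inflection-point criterion lends itself to a simple numerical treatment. The sketch below follows the spirit of that approach on a synthetic flow curve (the curve and all constants are invented for illustration; real data would come from a hot compression test): it computes the strain-hardening rate $\theta = d\sigma/d\varepsilon$ and locates the sign change of $d^2\theta/d\sigma^2$ below the peak stress.

```python
# Locating the critical stress for DRX onset from the inflection of θ(σ).
# The flow curve is synthetic and purely illustrative.
import numpy as np

eps = np.linspace(0.02, 0.8, 800)
sigma = 150 * eps**0.25 - 60 * np.tanh(2.5 * (eps - 0.3))**2 * (eps > 0.3)

theta = np.gradient(sigma, eps)              # strain-hardening rate dσ/dε
peak = np.argmax(sigma)                      # analyse only up to peak stress
s, th = sigma[:peak], theta[:peak]

d2 = np.gradient(np.gradient(th, s), s)      # d²θ/dσ²
flip = np.where(np.diff(np.sign(d2)))[0]     # inflection: d²θ/dσ² changes sign
if flip.size:
    print("critical stress ≈ %.1f (peak stress %.1f)" % (s[flip[-1]], s.max()))
```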
If stress oscillations appear before reaching the steady state, then several recrystallization and grain growth cycles occur and the stress behavior is said to be of the cyclic or multiple peak type. The particular stress behavior before reaching the steady state depends on the initial grain size, temperature, and strain rate.
DRX can occur in various forms, including:
Geometric dynamic recrystallization
Discontinuous dynamic recrystallization
Continuous dynamic recrystallization
Dynamic recrystallization is dependent on the rate of dislocation creation and movement. It is also dependent on the recovery rate (the rate at which dislocations annihilate). The interplay between work hardening and dynamic recovery determines grain structure. It also determines the susceptibility of grains to various types of dynamic recrystallization. Regardless of the mechanism, for dynamic recrystallization to occur, the material must have experienced a critical deformation. The final grain size decreases with increasing stress; to achieve very fine-grained structures, the stresses have to be high.
Some authors have used the term 'postdynamic' or 'metadynamic' to describe recrystallization that occurs during the cooling phase of a hot-working process or between successive passes. This emphasises the fact that the recrystallization is directly linked to the process in question, while acknowledging that there is no concurrent deformation.
Geometric Dynamic Recrystallization (GDRX)
Geometric dynamic recrystallization occurs in grains with local serrations. Upon deformation, grains undergoing GDRX elongate until the thickness of the grain falls below a threshold (below which the serration boundaries intersect and small grains pinch off into equiaxed grains). The serrations may predate stresses being exerted on the material, or may result from the material’s deformation.
Geometric Dynamic Recrystallization has 6 main characteristics:
It generally occurs with deformation at elevated temperatures, in materials with high stacking fault energy
Stress increases and then declines to a steady state
Subgrain formation requires a critical deformation
Subgrain misorientation peaks at 2˚
There is little texture change
Pinning of grain boundaries causes an increase in the required strain
While GDRX is primarily affected by the initial grain size and strain (geometry-dependent), other factors that occur during the hot working process complicate the development of predictive modeling (which tend to oversimplify the process) and can lead to incomplete recrystallization. The equiaxed grain formation does not occur immediately and uniformly along the entire grain once the threshold stress is reached, as individual regions are subjected to different strains/stresses. In practice, a generally sinusoidal edge (as predicted by Martorano et al.) gradually forms as the grains begin to pinch off as they each reach the threshold. More sophisticated models consider complex initial grain geometries, local pressures along grain boundaries, and hot working temperature, but the models are unable to make accurate predictions throughout the entire stress regime and the evolution of the overall microstructure. Additionally, grain boundaries may migrate during GDRX at high temperatures and GB curvatures, dragging along subgrain boundaries and resulting in unwanted growth of the original grain. This new, larger grain will require far more deformation for GDRX to occur, and the local area will be weaker rather than strengthened. Lastly, recrystallization can be accelerated as grains are shifted and stretched, causing subgrain boundaries to become grain boundaries (angle increases). The affected grains are thinner and longer, and thus more easily undergo deformation.
Discontinuous Dynamic Recrystallization
Discontinuous recrystallization is heterogeneous; there are distinct nucleation and growth stages. It is common in materials with low stacking-fault energy. Once dislocations have accumulated sufficiently, nucleation occurs, generating new strain-free grains which absorb the pre-existing strained grains. Nucleation occurs more easily at grain boundaries, which decreases the grain size and thereby increases the number of nucleation sites. This further increases the rate of discontinuous dynamic recrystallization.
Discontinuous Dynamic Recrystallization has 5 main characteristics:
Recrystallization does not occur until the threshold strain has been reached
The stress-strain curve may have several peaks – there is not a universal equation
Nucleation generally occurs along pre-existing grain boundaries
Recrystallization rates increase as the initial grain size decreases
There is a steady grain size which is approached as recrystallization proceeds
Discontinuous dynamic recrystallization is caused by the interplay of work hardening and recovery. If the annihilation of dislocations is slow relative to the rate at which they are generated, dislocations accumulate. Once a critical dislocation density is achieved, nucleation occurs on grain boundaries. Grain boundary migration, i.e. the transfer of atoms from a large pre-existing grain to a smaller nucleus, allows the growth of the new nuclei at the expense of the pre-existing grains. The nucleation can occur through the bulging of existing grain boundaries. A bulge forms if the subgrains abutting a grain boundary are of different sizes, causing a disparity in energy between the two subgrains. If the bulge achieves a critical radius, it will successfully transition to a stable nucleus and continue its growth. This can be modeled using Cahn's theories pertaining to nucleation and growth.
Discontinuous dynamic recrystallization commonly produces a ‘necklace’ microstructure. Since new grain growth is energetically favorable along grain boundaries, new grain formation and bulging preferentially occurs along pre-existing grain boundaries. This generates layers of new, very fine grains along the grain boundary initially leaving the interior of the pre-existing grain unaffected. As the dynamic recrystallization continues, it consumes the unrecrystallized region. As deformation continues, the recrystallization does not maintain coherency between layers of new nuclei, producing a random texture.
Continuous Dynamic Recrystallization
Continuous dynamic recrystallization is common in materials with high stacking-fault energies. It occurs when low angle grain boundaries form and evolve into high angle boundaries, forming new grains in the process. For continuous dynamic recrystallization there is no clear distinction between nucleation and growth phases of the new grains.
Continuous Dynamic Recrystallization has 4 main characteristics:
As strain increases, stress increases
As strain increases, subgrain boundary misorientation increases
As low angle grain boundaries evolve into high angle grain boundaries, the misorientation increases homogeneously
As deformation increases, crystallite size decreases
There are three main mechanisms of continuous dynamic recrystallization:
First, continuous dynamic recrystallization can occur when low angle grain boundaries are assembled from dislocations formed within the grain. When the material is subjected to continued stress, the misorientation angle increases until the critical angle is achieved, creating a high angle grain boundary. This evolution can be promoted by the pinning of subgrain boundaries.
Second, continuous dynamic recrystallization can occur through subgrain rotation recrystallization; subgrains rotate increasing the misorientation angle. Once the misorientation angle exceeds the critical angle, the former subgrains qualify as independent grains.
Third, continuous dynamic recrystallization can occur due to deformation caused by microshear bands. Subgrains are assembled by dislocations within the grain formed during work hardening. If microshear bands are formed within the grain, the stress they introduce rapidly increases the misorientation of low angle grain boundaries, transforming them into high angle grain boundaries. However, the impact of microshear bands is localized, so this mechanism preferentially impacts regions which deform heterogeneously, such as microshear bands or areas near pre-existing grain boundaries. As recrystallization proceeds, it spreads out from these zones, generating a homogeneous, equiaxed microstructure.
Mathematical Formulas
Based on the method developed by Poliak and Jonas, several models have been developed to describe the critical strain for the onset of DRX as a function of the peak strain of the stress–strain curve. The models are derived for systems with a single peak, i.e. for materials with medium to low stacking fault energy values. The models can be found in the following papers:
Determination of flow stress and the critical strain for the onset of dynamic recrystallization using a sine function
Determination of flow stress and the critical strain for the onset of dynamic recrystallization using a hyperbolic tangent function
Determination of critical strain for initiation of dynamic recrystallization
Characteristic points of stress–strain curve at high temperature
The DRX behavior for systems with multiple peaks (and a single peak as well) can be modeled by considering the interaction of multiple grains during deformation, i.e. the ensemble model describes the transition between single- and multi-peak behavior based on the initial grain size. It can also describe the effect of transient changes of the strain rate on the shape of the flow curve. The model can be found in the following paper:
A new unified approach for modeling recrystallization during hot rolling of steel
Literature
A one-parameter approach to determining the critical conditions for the initiation of dynamic recrystallization, onset of DRX
Flow Curve Analysis of 17–4 PH Stainless Steel under Hot Compression Test, comprehensive study of DRX
Constitutive relations to model the hot flow of commercial purity copper, chapter 6, doctoral thesis by V.G. García, UPC (2004)
A review of dynamic recrystallization phenomena in metallic materials, Latest review paper on DRX
A Cellular Automaton Model of Dynamic Recrystallization: Introduction & Source Code, Software simulating DRX by CA: Introduction, Video of software run
References
Metallurgy
Geology | Dynamic recrystallization | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,315 | [
"Metallurgy",
"Materials science",
"nan"
] |
3,069,503 | https://en.wikipedia.org/wiki/Structural%20health%20monitoring | Structural health monitoring (SHM) involves the observation and analysis of a system over time using periodically sampled response measurements to monitor changes to the material and geometric properties of engineering structures such as bridges and buildings.
In an operational environment, structures degrade with age and use. Long term SHM outputs periodically updated information regarding the ability of the structure to continue performing its intended function. After extreme events, such as earthquakes or blast loading, SHM is used for rapid condition screening. SHM is intended to provide reliable information regarding the integrity of the structure in near real time.
The SHM process involves selecting the excitation methods, the sensor types, number and locations, and the data acquisition/storage/transmittal hardware commonly called health and usage monitoring systems. Measurements may be taken to either directly detect any degradation or damage that may occur to a system or indirectly by measuring the size and frequency of loads experienced to allow the state of the system to be predicted.
To directly monitor the state of a system it is necessary to identify features in the acquired data that allow one to distinguish between the undamaged and damaged structure. One of the most common feature extraction methods is based on correlating measured system response quantities, such as vibration amplitude or frequency, with observations of the degraded system. Damage accumulation testing, during which significant structural components of the system under study are degraded by subjecting them to realistic loading conditions, can also be used to identify appropriate features. This process may involve induced-damage testing, fatigue testing, corrosion growth, or temperature cycling to accumulate certain types of damage in an accelerated fashion.
Introduction
Qualitative and non-continuous methods have long been used to evaluate structures for their capacity to serve their intended purpose. Since the beginning of the 19th century, railroad wheel-tappers have used the sound of a hammer striking the train wheel to evaluate if damage was present. In rotating machinery, vibration monitoring has been used for decades as a performance evaluation technique. Two techniques in the field of SHM are wave propagation based techniques and vibration based techniques. Broadly the literature for vibration based SHM can be divided into two aspects, the first wherein models are proposed for the damage to determine the dynamic characteristics, also known as the direct problem, and the second, wherein the dynamic characteristics are used to determine damage characteristics, also known as the inverse problem.
Several fundamental axioms, or general principles, have emerged:
Axiom I: All materials have inherent flaws or defects;
Axiom II: The assessment of damage requires a comparison between two system states;
Axiom III: Identifying the existence and location of damage can be done in an unsupervised learning mode, but identifying the type of damage present and the damage severity can generally only be done in a supervised learning mode;
Axiom IVa: Sensors cannot measure damage. Feature extraction through signal processing and statistical classification is necessary to convert sensor data into damage information;
Axiom IVb: Without intelligent feature extraction, the more sensitive a measurement is to damage, the more sensitive it is to changing operational and environmental conditions;
Axiom V: The length- and time-scales associated with damage initiation and evolution dictate the required properties of the SHM sensing system;
Axiom VI: There is a trade-off between the sensitivity to damage of an algorithm and its noise rejection capability;
Axiom VII: The size of damage that can be detected from changes in system dynamics is inversely proportional to the frequency range of excitation.
SHM System's elements typically include:
Structure
Sensors
Data acquisition systems
Data transfer and storage mechanism
Data management
Data interpretation and diagnosis:
System Identification
Structural model update
Structural condition assessment
Prediction of remaining service life
An example of this technology is embedding sensors in structures like bridges and aircraft. These sensors provide real-time monitoring of various structural changes like stress and strain. In the case of civil engineering structures, the data provided by the sensors is usually transmitted to remote data acquisition centres. With the aid of modern technology, real-time control of structures (Active Structural Control) based on the information from sensors is possible.
Health assessment of engineered structures of bridges, buildings and other related infrastructures
Commonly known as Structural Health Assessment (SHA) or SHM, this concept is widely applied to various forms of infrastructure, especially as countries all over the world enter an even greater period of construction of infrastructures ranging from bridges to skyscrapers. Where damage to structures is concerned, it is important to note that there are stages of increasing difficulty, each requiring knowledge of the previous stages, namely:
Detecting the existence of the damage on the structure
Locating the damage
Identifying the types of damage
Quantifying the severity of the damage
It is necessary to employ signal processing and statistical classification to convert sensor data on the infrastructural health status into damage info for assessment.
Operational evaluation
Operational evaluation attempts to answer four questions regarding the implementation of a damage identification capability:
i) What are the life-safety and/or economic justification for performing the SHM?
ii) How is damage defined for the system being investigated and, for multiple damage possibilities, which cases are of the most concern?
iii) What are the conditions, both operational and environmental, under which the system to be monitored functions?
iv) What are the limitations on acquiring data in the operational environment?
Operational evaluation begins to set the limitations on what will be monitored and how the monitoring will be accomplished. This evaluation starts to tailor the damage identification process to features that are unique to the system being monitored and tries to take advantage of unique features of the damage that is to be detected.
Data acquisition, normalization and cleansing
The data acquisition portion of the SHM process involves selecting the excitation methods, the sensor types, number and locations, and the data acquisition/storage/transmittal hardware. Again, this process will be application specific. Economic considerations will play a major role in making these decisions. The intervals at which data should be collected is another consideration that must be addressed.
Because data can be measured under varying conditions, the ability to normalize the data becomes very important to the damage identification process. As it applies to SHM, data normalization is the process of separating changes in sensor reading caused by damage from those caused by varying operational and environmental conditions. One of the most common procedures is to normalize the measured responses by the measured inputs. When environmental or operational variability is an issue, the need can arise to normalize the data in some temporal fashion to facilitate the comparison of data measured at similar times of an environmental or operational cycle. Sources of variability in the data acquisition process and with the system being monitored need to be identified and minimized to the extent possible. In general, not all sources of variability can be eliminated. Therefore, it is necessary to make the appropriate measurements such that these sources can be statistically quantified. Variability can arise from changing environmental and test conditions, changes in the data reduction process, and unit-to-unit inconsistencies.
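When the input excitation is measured, the most common normalization is a frequency response function, which divides the input spectrum out of the response. A minimal sketch follows (the signals are simulated stand-ins; in practice both channels would come from the data acquisition system):

```python
# Normalizing a measured response by the measured input: the H1 estimate
# of a frequency response function, H1(f) = S_xy(f) / S_xx(f).
import numpy as np
from scipy import signal

fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)

x = rng.standard_normal(t.size)              # measured input (broadband)
# Simulated response: one lightly damped mode near 50 Hz plus sensor noise.
b, a = signal.iirpeak(50, Q=30, fs=fs)
y = signal.lfilter(b, a, x) + 0.01 * rng.standard_normal(t.size)

f, Sxy = signal.csd(x, y, fs=fs, nperseg=4096)
_, Sxx = signal.welch(x, fs=fs, nperseg=4096)
H1 = Sxy / Sxx                               # input-normalized response

print("|H1| peaks near %.1f Hz" % f[np.argmax(np.abs(H1))])
```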
Data cleansing is the process of selectively choosing data to pass on to or reject from the feature selection process. The data cleansing process is usually based on knowledge gained by individuals directly involved with the data acquisition. As an example, an inspection of the test setup may reveal that a sensor was loosely mounted and, hence, based on the judgment of the individuals performing the measurement, this set of data or the data from that particular sensor may be selectively deleted from the feature selection process. Signal processing techniques such as filtering and re-sampling can also be thought of as data cleansing procedures.
Finally, the data acquisition, normalization, and cleansing portion of SHM process should not be static. Insight gained from the feature selection process and the statistical model development process will provide information regarding changes that can improve the data acquisition process.
Feature extraction and data compression
The area of the SHM process that receives the most attention in the technical literature is the identification of data features that allows one to distinguish between the undamaged and damaged structure. Inherent in this feature selection process is the condensation of the data. The best features for damage identification are, again, application specific.
One of the most common feature extraction methods is based on correlating measured system response quantities, such as vibration amplitude or frequency, with the first-hand observations of the degrading system. Another method of developing features for damage identification is to apply engineered flaws, similar to ones expected in actual operating conditions, to systems and develop an initial understanding of the parameters that are sensitive to the expected damage. The flawed system can also be used to validate that the diagnostic measurements are sensitive enough to distinguish between features identified from the undamaged and damaged system. The use of analytical tools such as experimentally-validated finite element models can be a great asset in this process. In many cases the analytical tools are used to perform numerical experiments where the flaws are introduced through computer simulation. Damage accumulation testing, during which significant structural components of the system under study are degraded by subjecting them to realistic loading conditions, can also be used to identify appropriate features. This process may involve induced-damage testing, fatigue testing, corrosion growth, or temperature cycling to accumulate certain types of damage in an accelerated fashion. Insight into the appropriate features can be gained from several types of analytical and experimental studies as described above and is usually the result of information obtained from some combination of these studies.
The operational implementation and diagnostic measurement technologies needed to perform SHM produce more data than traditional uses of structural dynamics information. A condensation of the data is advantageous and necessary when comparisons of many feature sets obtained over the lifetime of the structure are envisioned. Also, because data will be acquired from a structure over an extended period of time and in an operational environment, robust data reduction techniques must be developed to retain feature sensitivity to the structural changes of interest in the presence of environmental and operational variability. To further aid in the extraction and recording of quality data needed to perform SHM, the statistical significance of the features should be characterized and used in the condensation process.
Statistical model development
The portion of the SHM process that has received the least attention in the technical literature is the development of statistical models for discrimination between features from the undamaged and damaged structures. Statistical model development is concerned with the implementation of the algorithms that operate on the extracted features to quantify the damage state of the structure. The algorithms used in statistical model development usually fall into three categories. When data are available from both the undamaged and damaged structure, the statistical pattern recognition algorithms fall into the general classification category, commonly referred to as supervised learning. Group classification and regression analysis are categories of supervised learning algorithms. Unsupervised learning refers to algorithms that are applied to data not containing examples from the damaged structure. Outlier or novelty detection is the primary class of algorithms applied in unsupervised learning applications. All of the algorithms analyze statistical distributions of the measured or derived features to enhance the damage identification process.
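As a concrete baseline for the unsupervised case, outlier detection is often implemented as a Mahalanobis-distance novelty index over features collected from the undamaged structure. A minimal sketch (the natural-frequency features, their scatter, and the few-percent "damage" shift are all hypothetical):

```python
# Unsupervised damage detection: Mahalanobis-distance novelty index over
# baseline features (e.g. identified natural frequencies).
import numpy as np

rng = np.random.default_rng(2)

# Baseline: 3 natural frequencies (Hz) with operational scatter.
baseline = rng.normal([2.1, 5.4, 9.8], [0.02, 0.05, 0.08], size=(200, 3))

mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def novelty(x):
    d = x - mu
    return float(d @ cov_inv @ d)            # squared Mahalanobis distance

threshold = np.quantile([novelty(b) for b in baseline], 0.99)

healthy = rng.normal([2.1, 5.4, 9.8], [0.02, 0.05, 0.08])
damaged = healthy * np.array([0.95, 0.97, 0.96])   # hypothetical stiffness loss

print("healthy flagged:", novelty(healthy) > threshold)   # usually False
print("damaged flagged:", novelty(damaged) > threshold)   # True
```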
Specific structures
Bridges
Health monitoring of large bridges can be performed by simultaneous measurement of loads on the bridge and effects of these loads. It typically includes monitoring of:
Wind and weather
Traffic
Prestressing and stay cables
Deck
Pylons
Ground
Provided with this knowledge, the engineer can:
Estimate the loads and their effects
Estimate the state of fatigue or other limit state
Forecast the probable evolution of the bridge's health
The Oregon Department of Transportation (United States) Bridge Engineering Department has developed and implemented a Structural Health Monitoring (SHM) program, as referenced in a technical paper by Steven Lovejoy, Senior Engineer.
References are available that provide an introduction to the application of fiber optic sensors to Structural Health Monitoring on bridges.
Examples
The following projects are currently known as some of the biggest on-going bridge monitoring
The California Department of Transportation is supporting development of the Bridge rapid assessment center for extreme events (BRACE2) to facilitate real-time structural health monitoring across the California highway network.
Bridges in Hong Kong. The Wind and Structural Health Monitoring System is a sophisticated bridge monitoring system, costing US$1.3 million, used by the Hong Kong Highways Department to ensure road user comfort and safety of the Tsing Ma, Ting Kau, Kap Shui Mun and Stonecutters bridges. The sensory system consists of approximately 900 sensors and their relevant interfacing units. With more than 350 sensors on the Tsing Ma bridge, 350 on Ting Kau and 200 on Kap Shui Mun, the structural behaviour of the bridges is measured 24 hours a day, seven days a week. The sensors include accelerometers, strain gauges, displacement transducers, level sensing stations, anemometers, temperature sensors, dynamic weight-in-motion sensors and GPS receivers. They measure everything from tarmac temperature and strains in structural members to wind speed and the deflection and rotation of the kilometres of cables and any movement of the bridge decks and towers.
The Rio–Antirrio bridge, Greece: has more than 100 sensors monitoring the structure and the traffic in real time.
Millau Viaduc, France: has one of the largest systems with fiber optics in the world which is considered state of the art.
The Huey P Long bridge, USA: has over 800 static and dynamic strain gauges designed to measure axial and bending load effects.
The Fatih Sultan Mehmet Bridge, Turkey: also known as the Second Bosphorus Bridge. It has been monitored using an innovative wireless sensor network under normal traffic conditions.
The Masjid al-Haram (third Saudi expansion), Mecca, Saudi Arabia: has more than 600 sensors (concrete pressure cells, embedment-type strain gauges, sister-bar strain gauges, etc.) installed at the foundation and concrete columns. This project is under construction.
The Sydney Harbour Bridge in Australia is currently implementing a monitoring system involving over 2,400 sensors. Asset managers and bridge inspectors have mobile and web browser decision support tools based on analysis of sensor data.
The Queensferry Crossing, currently under construction across the Firth of Forth, will have a monitoring system including more than 2,000 sensors upon its completion. Asset managers will have access to data for all sensors from a web-based data management interface, including automated data analysis.
The Penang Second Bridge in Penang, Malaysia has completed implementation and is monitoring the bridge elements with 3,000 sensors. For the safety of bridge users and as protection of such an investment, the firm responsible for the bridge wanted a structural health monitoring system. The system is used for disaster control, structural health management and data analysis. There were many considerations before implementation, which included: force (wind, earthquake, temperature, vehicles); weather (air temperature, wind, humidity and precipitation); and response (strain, acceleration, cable tension, displacement and tilt).
The Lakhta Center, Russia: has more than 3000 sensors and more than 8000 parameters monitoring the structure in real time.
See also
Deformation monitoring
Civionics
Structural Health Monitoring, a peer-reviewed journal devoted to the subject
Value of structural health information
References
External links
NDT.net Open Access Database contains EWSHM proceedings and much more SHM articles
International Society for Structural Health Monitoring of Intelligent Infrastructure (ISHMII)
SHM at low cost for earthquake zones
Journals
SHM Proceedings (NDT.net)
Journal of Structural Health Monitoring (sagepub)
Journal of Intelligent Material Systems & Structures (sagepub)
Structural Durability & Health Monitoring (techscience)
Structural Control and Health Monitoring (John Wiley & Sons, Ltd.)
Journal of Civil Structural Health Monitoring (Springer)
Smart Materials and Structures (IOP)
Smart Materials Bulletin (science direct)
Structural engineering
Maintenance
Infrastructure asset management | Structural health monitoring | [
"Engineering"
] | 3,173 | [
"Structural engineering",
"Construction",
"Civil engineering",
"Maintenance",
"Mechanical engineering"
] |
3,069,520 | https://en.wikipedia.org/wiki/Measurement%20uncertainty | In metrology, measurement uncertainty is the expression of the statistical dispersion of the values attributed to a quantity measured on an interval or ratio scale.
All measurements are subject to uncertainty and a measurement result is complete only when it is accompanied by a statement of the associated uncertainty, such as the standard deviation. By international agreement, this uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity value. It is a non-negative parameter.
The measurement uncertainty is often taken as the standard deviation of a state-of-knowledge probability distribution over the possible values that could be attributed to a measured quantity. Relative uncertainty is the measurement uncertainty relative to the magnitude of a particular single choice for the value for the measured quantity, when this choice is nonzero. This particular single choice is usually called the measured value, which may be optimal in some well-defined sense (e.g., a mean, median, or mode). Thus, the relative measurement uncertainty is the measurement uncertainty divided by the absolute value of the measured value, when the measured value is not zero.
Background
The purpose of measurement is to provide information about a quantity of interest – a measurand. Measurands on ratio or interval scales include the size of a cylindrical feature, the volume of a vessel, the potential difference between the terminals of a battery, or the mass concentration of lead in a flask of water.
No measurement is exact. When a quantity is measured, the outcome depends on the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects. Even if the quantity were to be measured several times, in the same way and in the same circumstances, a different measured value would in general be obtained each time, assuming the measuring system has sufficient resolution to distinguish between the values.
The dispersion of the measured values would relate to how well the measurement is performed. If measured on a ratio or interval scale, their average would provide an estimate of the true value of the quantity that generally would be more reliable than an individual measured value.
The dispersion and the number of measured values would provide information relating to the average value as an estimate of the true value.
However, this information would not generally be adequate.
The measuring system may provide measured values that are not dispersed about the true value, but about some value offset from it. Take a domestic bathroom scale. Suppose it is not set to show zero when there is nobody on the scale, but to show some value offset from zero. Then, no matter how many times the person's mass were re-measured, the effect of this offset would be inherently present in the average of the values.
The "Guide to the Expression of Uncertainty in Measurement" (commonly known as the GUM) is the definitive document on this subject. The GUM has been adopted by all major National Measurement Institutes (NMIs) and by international laboratory accreditation standards such as ISO/IEC 17025 General requirements for the competence of testing and calibration laboratories, which is required for international laboratory accreditation, and is employed in most modern national and international documentary standards on measurement methods and technology. See Joint Committee for Guides in Metrology.
Measurement uncertainty has important economic consequences for calibration and measurement activities. In calibration reports, the magnitude of the uncertainty is often taken as an indication of the quality of the laboratory, and smaller uncertainty values generally are of higher value and of higher cost. The American Society of Mechanical Engineers (ASME) has produced a suite of standards addressing various aspects of measurement uncertainty. For example, ASME standards are used to address the role of measurement uncertainty when accepting or rejecting products based on a measurement result and a product specification, to provide a simplified approach (relative to the GUM) to the evaluation of dimensional measurement uncertainty, to resolve disagreements over the magnitude of the measurement uncertainty statement, and to provide guidance on the risks involved in any product acceptance/rejection decision.
Indirect measurement
The above discussion concerns the direct measurement of a quantity, which incidentally occurs rarely. For example, the bathroom scale may convert a measured extension of a spring into an estimate of the measurand, the mass of the person on the scale. The particular relationship between extension and mass is determined by the calibration of the scale. A measurement model converts a quantity value into the corresponding value of the measurand.
There are many types of measurement in practice and therefore many models. A simple measurement model (for example for a scale, where the mass is proportional to the extension of the spring) might be sufficient for everyday domestic use. Alternatively, a more sophisticated model of a weighing, involving additional effects such as air buoyancy, is capable of delivering better results for industrial or scientific purposes. In general there are often several different quantities, for example temperature, humidity and displacement, that contribute to the definition of the measurand, and that need to be measured.
Correction terms should be included in the measurement model when the conditions of measurement are not exactly as stipulated. These terms correspond to systematic errors. Given an estimate of a correction term, the relevant quantity should be corrected by this estimate. There will be an uncertainty associated with the estimate, even if the estimate is zero, as is often the case. Instances of systematic errors arise in height measurement, when the alignment of the measuring instrument is not perfectly vertical, and the ambient temperature is different from that prescribed. Neither the alignment of the instrument nor the ambient temperature is specified exactly, but information concerning these effects is available, for example the lack of alignment is at most 0.001° and the ambient temperature at the time of measurement differs from that stipulated by at most 2 °C.
As well as raw data representing measured values, there is another form of data that is frequently needed in a measurement model. Some such data relate to quantities representing physical constants, each of which is known imperfectly. Examples are material constants such as modulus of elasticity and specific heat. There are often other relevant data given in reference books, calibration certificates, etc., regarded as estimates of further quantities.
The items required by a measurement model to define a measurand are known as input quantities. The model is often referred to as a functional relationship. The output quantity in a measurement model is the measurand.
Formally, the output quantity, denoted by Y, about which information is required, is often related to input quantities, denoted by X1, ..., XN, about which information is available, by a measurement model in the form of

Y = f(X1, ..., XN),

where f is known as the measurement function. A general expression for a measurement model is

h(Y, X1, ..., XN) = 0.

It is taken that a procedure exists for calculating Y given X1, ..., XN, and that Y is uniquely defined by this equation.
Propagation of distributions
The true values of the input quantities X1, ..., XN are unknown.
In the GUM approach, X1, ..., XN are characterized by probability distributions and treated mathematically as random variables.
These distributions describe the respective probabilities of their true values lying in different intervals, and are assigned based on available knowledge concerning X1, ..., XN.
Sometimes, some or all of X1, ..., XN are interrelated and the relevant distributions, which are known as joint, apply to these quantities taken together.
Consider estimates x1, ..., xN, respectively, of the input quantities X1, ..., XN, obtained from certificates and reports, manufacturers' specifications, the analysis of measurement data, and so on.
The probability distributions characterizing X1, ..., XN are chosen such that the estimates x1, ..., xN, respectively, are the expectations of X1, ..., XN.
Moreover, for the ith input quantity, consider a so-called standard uncertainty, given the symbol u(xi), defined as the standard deviation of the input quantity Xi.
This standard uncertainty is said to be associated with the (corresponding) estimate xi.
The use of available knowledge to establish a probability distribution to characterize each quantity of interest applies to the Xi and also to Y.
In the latter case, the characterizing probability distribution for Y is determined by the measurement model together with the probability distributions for the Xi.
The determination of the probability distribution for Y from this information is known as the propagation of distributions.
For example, for a measurement model Y = X1 + X2 in which X1 and X2 are each characterized by a (different) rectangular, or uniform, probability distribution, Y has a symmetric trapezoidal probability distribution.
Once the input quantities X1, ..., XN have been characterized by appropriate probability distributions, and the measurement model has been developed, the probability distribution for the measurand Y is fully specified in terms of this information. In particular, the expectation of Y is used as the estimate of Y, and the standard deviation of Y as the standard uncertainty associated with this estimate.
Often an interval containing Y with a specified probability is required. Such an interval, a coverage interval, can be deduced from the probability distribution for Y. The specified probability is known as the coverage probability. For a given coverage probability, there is more than one coverage interval. The probabilistically symmetric coverage interval is an interval for which the probabilities (summing to one minus the coverage probability) of a value to the left and the right of the interval are equal. The shortest coverage interval is an interval for which the length is least over all coverage intervals having the same coverage probability.
Prior knowledge about the true value of the output quantity Y can also be considered. For the domestic bathroom scale, the fact that the person's mass is positive, and that it is the mass of a person, rather than that of a motor car, that is being measured, both constitute prior knowledge about the possible values of the measurand in this example. Such additional information can be used to provide a probability distribution for Y that can give a smaller standard deviation for Y and hence a smaller standard uncertainty associated with the estimate of Y.
Type A and Type B evaluation of uncertainty
Knowledge about an input quantity Xi is inferred from repeated measured values ("Type A evaluation of uncertainty"), or scientific judgement or other information concerning the possible values of the quantity ("Type B evaluation of uncertainty").
In Type A evaluations of measurement uncertainty, the assumption is often made that the distribution best describing an input quantity Xi given repeated measured values of it (obtained independently) is a Gaussian distribution.
Xi then has expectation equal to the average measured value and standard deviation equal to the standard deviation of the average.
When the uncertainty is evaluated from a small number of measured values (regarded as instances of a quantity characterized by a Gaussian distribution), the corresponding distribution can be taken as a t-distribution.
Other considerations apply when the measured values are not obtained independently.
For a Type B evaluation of uncertainty, often the only available information is that Xi lies in a specified interval [a, b].
In such a case, knowledge of the quantity can be characterized by a rectangular probability distribution with limits a and b.
If different information were available, a probability distribution consistent with that information would be used.
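The two evaluations reduce to simple formulas in the common cases above: the Type A standard uncertainty of an average is the experimental standard deviation of the mean, and a rectangular distribution over [a, b] has standard deviation (b − a)/(2√3). A minimal Python sketch, with made-up readings and interval limits:

```python
import numpy as np

# Type A: standard uncertainty from n repeated, independent readings,
# taken as the experimental standard deviation of the mean.
readings = np.array([10.03, 10.01, 10.04, 9.99, 10.02])  # illustrative values
x_a = readings.mean()
u_a = readings.std(ddof=1) / np.sqrt(len(readings))

# Type B: only the interval [a, b] is known, so a rectangular distribution
# is assigned; its standard deviation is (b - a) / (2 * sqrt(3)).
a, b = 9.95, 10.05  # illustrative limits
x_b = (a + b) / 2
u_b = (b - a) / (2 * np.sqrt(3))

print(f"Type A: x = {x_a:.4f}, u(x) = {u_a:.4f}")
print(f"Type B: x = {x_b:.4f}, u(x) = {u_b:.4f}")
```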
Sensitivity coefficients
Sensitivity coefficients c1, ..., cN describe how the estimate y of Y would be influenced by small changes in the estimates x1, ..., xN of the input quantities X1, ..., XN.
For the measurement model Y = f(X1, ..., XN), the sensitivity coefficient ci equals the partial derivative of first order of f with respect to Xi evaluated at x1, x2, etc.
For a linear measurement model

Y = c1 X1 + ... + cN XN,

with X1, ..., XN independent, a change in xi equal to u(xi) would give a change ci u(xi) in y.
This statement would generally be approximate for measurement models Y = f(X1, ..., XN).
The relative magnitudes of the terms |ci| u(xi) are useful in assessing the respective contributions from the input quantities to the standard uncertainty u(y) associated with y.
The standard uncertainty u(y) associated with the estimate y of the output quantity Y is not given by the sum of the |ci| u(xi), but these terms combined in quadrature, namely by an expression that is generally approximate for measurement models Y = f(X1, ..., XN):

u(y)^2 = c1^2 u(x1)^2 + ... + cN^2 u(xN)^2,

which is known as the law of propagation of uncertainty.
When the input quantities Xi contain dependencies, the above formula is augmented by terms containing covariances, which may increase or decrease u(y).
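The law of propagation of uncertainty is straightforward to apply numerically: estimate each sensitivity coefficient by a finite difference and combine the terms in quadrature. A minimal sketch for independent inputs (the model P = V^2/R and all numbers are illustrative only):

```python
import numpy as np

def propagate(f, x, u, eps=1e-6):
    """Combined standard uncertainty of y = f(x) for independent inputs,
    using the law of propagation of uncertainty with sensitivity
    coefficients c_i estimated by central finite differences."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    c = np.empty_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps * max(1.0, abs(x[i]))
        c[i] = (f(x + dx) - f(x - dx)) / (2 * dx[i])  # c_i = df/dx_i
    return f(x), np.sqrt(np.sum((c * u) ** 2))        # quadrature sum

# Illustrative model: power P = V^2 / R from measured voltage and resistance.
y, u_y = propagate(lambda v: v[0] ** 2 / v[1], x=[12.0, 4.0], u=[0.1, 0.05])
print(f"y = {y:.2f}, u(y) = {u_y:.2f}")  # y = 36.00, u(y) = 0.75
```

For correlated inputs, the quadrature sum would be replaced by the full quadratic form with the input covariance matrix, matching the covariance terms mentioned above.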
Uncertainty evaluation
The main stages of uncertainty evaluation constitute formulation and calculation, the latter consisting of propagation and summarizing.
The formulation stage constitutes
defining the output quantity Y (the measurand),
identifying the input quantities on which Y depends,
developing a measurement model relating Y to the input quantities, and
on the basis of available knowledge, assigning probability distributions — Gaussian, rectangular, etc. — to the input quantities (or a joint probability distribution to those input quantities that are not independent).
The calculation stage consists of propagating the probability distributions for the input quantities through the measurement model to obtain the probability distribution for the output quantity Y, and summarizing by using this distribution to obtain
the expectation of Y, taken as an estimate y of Y,
the standard deviation of Y, taken as the standard uncertainty u(y) associated with y, and
a coverage interval containing Y with a specified coverage probability.
The propagation stage of uncertainty evaluation is known as the propagation of distributions, various approaches for which are available, including
the GUM uncertainty framework, constituting the application of the law of propagation of uncertainty, and the characterization of the output quantity Y by a Gaussian or a t-distribution,
analytic methods, in which mathematical analysis is used to derive an algebraic form for the probability distribution for Y, and
a Monte Carlo method, in which an approximation to the distribution function for Y is established numerically by making random draws from the probability distributions for the input quantities, and evaluating the model at the resulting values.
For any particular uncertainty evaluation problem, approach 1), 2) or 3) (or some other approach) is used, 1) being generally approximate, 2) exact, and 3) providing a solution with a numerical accuracy that can be controlled.
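Approach 3) is the easiest to sketch in code. The following illustrative Python snippet (the distributions, model and trial count are arbitrary choices) propagates two input distributions through a model and summarizes the output with an estimate, a standard uncertainty and a probabilistically symmetric 95% coverage interval:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200_000  # number of Monte Carlo trials

# Random draws from the distributions assigned to the input quantities
# (one Gaussian and one rectangular input, as illustration).
x1 = rng.normal(loc=10.0, scale=0.2, size=M)
x2 = rng.uniform(low=4.9, high=5.1, size=M)

y = x1 / x2  # evaluate the measurement model at the drawn values

estimate = y.mean()                      # estimate of Y
u_y = y.std(ddof=1)                      # standard uncertainty u(y)
lo, hi = np.quantile(y, [0.025, 0.975])  # probabilistically symmetric
                                         # 95% coverage interval
print(f"y = {estimate:.4f}, u(y) = {u_y:.4f}, 95% interval = [{lo:.4f}, {hi:.4f}]")
```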
Models with any number of output quantities
When the measurement model is multivariate, that is, it has any number of output quantities, the above concepts can be extended. The output quantities are now described by a joint probability distribution, the coverage interval becomes a coverage region, the law of propagation of uncertainty has a natural generalization, and a calculation procedure that implements a multivariate Monte Carlo method is available.
Uncertainty as an interval
The most common view of measurement uncertainty uses random variables as mathematical models for uncertain quantities and simple probability distributions as sufficient for representing measurement uncertainties. In some situations, however, a mathematical interval might be a better model of uncertainty than a probability distribution. This may include situations involving periodic measurements, binned data values, censoring, detection limits, or plus-minus ranges of measurements where no particular probability distribution seems justified or where one cannot assume that the errors among individual measurements are completely independent.
A more robust representation of measurement uncertainty in such cases can be fashioned from intervals. An interval [a, b] is different from a rectangular or uniform probability distribution over the same range in that the latter suggests that the true value lies inside the right half of the range [(a + b)/2, b] with probability one half, and within any subinterval of [a, b] with probability equal to the width of the subinterval divided by b − a. The interval makes no such claims, except simply that the measurement lies somewhere within the interval. Distributions of such measurement intervals can be summarized as probability boxes and Dempster–Shafer structures over the real numbers, which incorporate both aleatoric and epistemic uncertainties.
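The operational difference is easy to show: an interval is propagated by bounding the model output over the whole input box, with no probability weighting inside it. A rough sketch follows; the grid-scan enclosure is adequate only for smooth functions like the illustrative one below, whereas rigorous interval arithmetic would use directed rounding:

```python
import numpy as np

def interval_propagate(f, intervals, n=101):
    """Enclose f over a box of input intervals by scanning a dense grid.
    No distribution is assumed inside the intervals: the result is only
    a pair of bounds, not a probability statement."""
    grids = [np.linspace(a, b, n) for a, b in intervals]
    mesh = np.meshgrid(*grids, indexing="ij")
    values = f(*mesh)
    return values.min(), values.max()

# Measured values known only to lie in intervals, e.g. from plus-minus
# instrument specifications (numbers are illustrative).
lo, hi = interval_propagate(lambda v, r: v**2 / r, [(11.9, 12.1), (3.95, 4.05)])
print(f"output interval: [{lo:.3f}, {hi:.3f}]")
```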
See also
References
Further reading
Bich, W., Cox, M. G., and Harris, P. M. Evolution of the "Guide to the Expression of Uncertainty in Measurement". Metrologia, 43(4):S161–S166, 2006.
Cox, M. G., and Harris, P. M. SSfM Best Practice Guide No. 6, Uncertainty evaluation. Technical report DEM-ES-011, National Physical Laboratory, 2006.
Ellison S. L. R., Williams A. (Eds). Eurachem/CITAC guide: Quantifying Uncertainty in Analytical Measurement, Third edition, (2012) ISBN 978-0-948926-30-3. Available from www.eurachem.org.
Grabe, M. Generalized Gaussian Error Calculus, Springer 2010.
EA. Expression of the uncertainty of measurement in calibration. Technical Report EA-4/02, European Co-operation for Accreditation, 1999.
Lira., I. Evaluating the Uncertainty of Measurement. Fundamentals and Practical Guidance. Institute of Physics, Bristol, UK, 2002.
Majcen N., Taylor P. (Editors), Practical examples on traceability, measurement uncertainty and validation in chemistry, Vol 1, 2010; .
Possolo A and Iyer H K 2017 Concepts and tools for the evaluation of measurement uncertainty Rev. Sci. Instrum.,88 011301 (2017).
UKAS M3003 The Expression of Uncertainty and Confidence in Measurement (Edition 3, November 2012) UKAS
External links
NPLUnc
Estimate of temperature and its uncertainty in small systems, 2011.
Introduction to evaluating uncertainty of measurement
JCGM 200:2008. International Vocabulary of Metrology – Basic and general concepts and associated terms, 3rd Edition. Joint Committee for Guides in Metrology.
ISO 3534-1:2006. Statistics – Vocabulary and symbols – Part 1: General statistical terms and terms used in probability. ISO
JCGM 106:2012. Evaluation of measurement data – The role of measurement uncertainty in conformity assessment. Joint Committee for Guides in Metrology.
NIST. Uncertainty of measurement results.
Measurement | Measurement uncertainty | [
"Physics",
"Mathematics"
] | 3,461 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
3,069,854 | https://en.wikipedia.org/wiki/Electrolyte%E2%80%93insulator%E2%80%93semiconductor%20sensor | Within electronics, an electrolyte–insulator–semiconductor (EIS) sensor is a sensor made of these three components:
an electrolyte with the chemical that should be measured
an insulator that allows field-effect interaction, without leak currents between the two other components
a semiconductor to register the chemical changes
The EIS sensor can be used in combination with other structures, for example to construct a light-addressable potentiometric sensor (LAPS).
References
Sensors | Electrolyte–insulator–semiconductor sensor | [
"Technology",
"Engineering"
] | 99 | [
"Sensors",
"Measuring instruments"
] |
3,070,443 | https://en.wikipedia.org/wiki/Stem-cell%20line | A stem cell line is a group of stem cells that is cultured in vitro and can be propagated indefinitely. Stem cell lines are derived from either animal or human tissues and come from one of three sources: embryonic stem cells, adult stem cells, or induced pluripotent stem cells. They are commonly used in research and regenerative medicine.
Properties
By definition, stem cells possess two properties: (1) they can self-renew, which means that they can divide indefinitely while remaining in an undifferentiated state; and (2) they are pluripotent or multipotent, which means that they can differentiate to form specialized cell types. Due to the self-renewal capacity of stem cells, a stem cell line can be cultured in vitro indefinitely.
A stem-cell line is distinctly different from an immortalized cell line, such as the HeLa line. While stem cells can propagate indefinitely in culture due to their inherent properties, immortalized cells would not normally divide indefinitely but have gained this ability due to mutation. Immortalized cell lines can be generated from cells isolated from tumors, or mutations can be introduced to make the cells immortal.
A stem cell line is also distinct from primary cells. Primary cells are cells that have been isolated and then used immediately. Primary cells cannot divide indefinitely and thus cannot be cultured for long periods of time in vitro.
Types and methods of derivation
Embryonic stem cell line
An embryonic stem cell line is created from cells derived from the inner cell mass of a blastocyst, an early stage, pre-implantation embryo. In humans, the blastocyst stage occurs 4–5 days post fertilization. To create an embryonic stem cell line, the inner cell-mass is removed from the blastocyst, separated from the trophoectoderm, and cultured on a layer of supportive cells in vitro. In the derivation of human embryonic stem cell lines, embryos left over from in vitro fertilization (IVF) procedures are used. The fact that the blastocyst is destroyed during the process has raised controversy and ethical concerns.
Embryonic stem cells are pluripotent, meaning they can differentiate to form all cell types in the body. In vitro, embryonic stem cells can be cultured under defined conditions to keep them in their pluripotent state, or they can be stimulated with biochemical and physical cues to differentiate them to different cell types.
Adult stem cell line
Adult stem cells are found in juvenile or adult tissues. Adult stem cells are multipotent: they can generate a limited number of differentiated cell types (unlike pluripotent embryonic stem cells). Types of adult stem cells include hematopoietic stem cells and mesenchymal stem cells. Hematopoietic stem cells are found in the bone marrow and generate all cells of the immune system and all blood cell types. Mesenchymal stem cells are found in umbilical cord blood, amniotic fluid, and adipose tissue and can generate a number of cell types, including osteoblasts, chondrocytes, and adipocytes. In medicine, adult stem cells are most commonly used in bone marrow transplants to treat many bone and blood cancers as well as some autoimmune diseases. (See Hematopoietic stem cell transplantation)
Of the types of adult stem cells that have successfully been isolated and identified, only mesenchymal stem cells can successfully be grown in culture for long periods of time. Other adult stem cell types, such as hematopoietic stem cells, are difficult to grow and propagate in vitro. Identifying methods for maintaining hematopoietic stem cells in vitro is an active area of research. Thus, while mesenchymal stem cell lines exist, other types of adult stem cells that are grown in vitro can better be classified as primary cells.
Induced pluripotent stem-cell (iPSC) line
Induced pluripotent stem cell (iPSC) lines are pluripotent stem cells that have been generated from adult/somatic cells. The method of generating iPSCs was developed by Shinya Yamanaka's lab in 2006; his group demonstrated that the introduction of four specific genes could induce somatic cells to revert to a pluripotent stem cell state.
Compared to embryonic stem-cell lines, iPSC lines are also pluripotent in nature but can be derived without the use of human embryos—a process that has raised ethical concerns. Furthermore, patient-specific iPSC cell lines can be generated—that is, cell lines that are genetically matched to an individual. Patient-specific iPSC lines have been generated for the purposes of studying diseases and for developing patient-specific medical therapies.
Methods of culture
Stem-cell lines are grown and maintained at specific temperature and atmospheric conditions (37 degrees Celsius and 5% CO2) in incubators. Culture conditions such as the cell growth medium and surface on which cells are grown vary widely depending on the specific stem cell line. Different biochemical factors can be added to the medium to control the cell phenotype—for example to keep stem cells in a pluripotent state or to differentiate them to a specific cell type.
Uses
Stem-cell lines are used in research and regenerative medicine. They can be used to study stem-cell biology and early human development. In the field of regenerative medicine, it has been proposed that stem cells be used in cell-based therapies to replace injured or diseased cells and tissues. Examples of conditions that researchers are working to develop stem-cell-based treatments for include neurodegenerative diseases, diabetes, and spinal cord injuries.
Stem-cell in-vitro
Stem cells could be used as an ideal in vitro platform to study developmental changes at the molecular level. Neural stem cells (NSCs), for example, have been used as a model to study the mechanisms behind the differentiation and maturation of cells of the central nervous system (CNS). These studies have been gaining more attention recently since they can be optimised and are relevant to modelling neurodegenerative diseases and brain tumors.
Ethical issues
There is controversy associated with the derivation and use of human embryonic stem cell lines. This controversy stems from the fact that derivation of human embryonic stem cells requires the destruction of a blastocyst-stage, pre-implantation human embryo. There is a wide range of viewpoints regarding the moral consideration that blastocyst-stage human embryos should be given.
Access to human embryonic stem-cell lines
United States
In the United States, Executive Order 13505 established that federal money can be used for research in which approved human embryonic stem-cell (hESC) lines are used, but it cannot be used to derive new lines. The National Institutes of Health (NIH) Guidelines on Human Stem Cell Research, effective July 7, 2009, implemented the Executive Order 13505 by establishing criteria which hESC lines must meet to be approved for funding. The NIH Human Embryonic Stem Cell Registry can be accessed online and has updated information on cell lines eligible for NIH funding. There are 486 approved lines as of January 2022.
Studies have found that approved hESC lines are not uniformly used in the US; data from cell banks and surveys of researchers indicate that only a handful of the available hESC lines are routinely used in research. Access and utility are cited as the two primary factors influencing which hESC lines scientists choose to work with.
A 2011 survey of stem cell scientists in the US who use hESC lines in their research found that 54% of respondents used two or fewer lines and 75% used three or fewer lines.
Another study tracked cell-line requests fulfilled from the largest US repositories, the National Stem Cell Bank (NSCB) and the Harvard Stem Cell Institute (HSCI; Cambridge, MA, USA), for the periods March 1999 – December 2008 (for NSCB) and April 2004 – December 2008 (for HSCI). For NSCB, out of twenty-one approved cell lines, 77% of requests were for two of the lines (H1 and H9). For HSCI, out of the 17 lines requested more than once, 24.7% of requests were for the two most commonly requested lines.
See also
Stem cell
Embryonic stem cell
Induced pluripotent stem cell
Induced stem cells
Adult stem cell
Cell culture
Immortalised cell line
Stem-cell controversy
Stem-cell treatments
Regenerative Medicine
References
Stem cells
Induced stem cells
Cell culture | Stem-cell line | [
"Biology"
] | 1,758 | [
"Model organisms",
"Cell culture",
"Induced stem cells",
"Stem cell research"
] |
14,626,122 | https://en.wikipedia.org/wiki/Lactate%20dehydrogenase | Lactate dehydrogenase (LDH or LD) is an enzyme found in nearly all living cells. LDH catalyzes the conversion of pyruvate to lactate and back, as it converts NAD+ to NADH and back. A dehydrogenase is an enzyme that transfers a hydride from one molecule to another.
LDH exists in four distinct enzyme classes. This article is specifically about the NAD(P)-dependent L-lactate dehydrogenase. Other LDHs act on D-lactate and/or are dependent on cytochrome c: D-lactate dehydrogenase (cytochrome) and L-lactate dehydrogenase (cytochrome).
LDH is expressed extensively in body tissues, such as blood cells and heart muscle. Because it is released during tissue damage, it is a marker of common injuries and disease such as heart failure.
Reaction
Lactate dehydrogenase catalyzes the interconversion of pyruvate and lactate with concomitant interconversion of NADH and NAD+. It converts pyruvate, the final product of glycolysis, to lactate when oxygen is absent or in short supply, and it performs the reverse reaction during the Cori cycle in the liver. At high concentrations of lactate, the enzyme exhibits feedback inhibition, and the rate of conversion of pyruvate to lactate is decreased. It also catalyzes the dehydrogenation of 2-hydroxybutyrate, but this is a much poorer substrate than lactate.
Active site
LDH in humans uses His(193) as the proton acceptor, and works in unison with the coenzyme-binding residues (Arg99 and Asn138) and substrate-binding residues (Arg106, Arg169, Thr248). The His(193) active site is found not only in the human form of LDH but in many different animals, showing the convergent evolution of LDH. The two different subunits of LDH (LDHA, also known as the M subunit of LDH, and LDHB, also known as the H subunit of LDH) both retain the same active site and the same amino acids participating in the reaction. The noticeable difference between the two subunits that make up LDH's tertiary structure is the replacement of alanine (in the M chain) with a glutamine (in the H chain). This small but notable change is believed to be the reason the H subunit can bind NAD faster, and why the M subunit's catalytic activity is not reduced in the presence of acetylpyridine adenine dinucleotide, whereas the H subunit's activity is reduced fivefold.
Isoenzymes
Enzymatically active lactate dehydrogenase consists of four subunits (a tetramer). The two most common subunits are the LDH-M and LDH-H peptides, named for their discovery in muscle and heart tissue, and encoded by the LDHA and LDHB genes, respectively. These two subunits can form five possible tetramers (isoenzymes): LDH-1 (4H), LDH-5 (4M), and the three mixed tetramers (LDH-2/3H1M, LDH-3/2H2M, LDH-4/1H3M). These five isoforms are enzymatically similar but show different tissue distribution.
LDH-1 (4H)—in the heart and in RBC (red blood cells), as well as the brain
LDH-2 (3H1M)—in the reticuloendothelial system
LDH-3 (2H2M)—in the lungs
LDH-4 (1H3M)—in the kidneys, placenta, and pancreas
LDH-5 (4M)—in the liver and striated muscle, also present in the brain
LDH-2 is usually the predominant form in the serum. An LDH-1 level higher than the LDH-2 level (a "flipped pattern") suggests myocardial infarction (damage to heart tissues releases heart LDH, which is rich in LDH-1, into the bloodstream). The use of this phenomenon to diagnose infarction has been largely superseded by the use of Troponin I or T measurement.
There are two more mammalian LDH subunits that can be included in LDH tetramers: LDHC and LDHBx. LDHC is a testes-specific LDH protein, that is encoded by the LDHC gene. LDHBx is a peroxisome-specific LDH protein. LDHBx is the readthrough-form of LDHB. LDHBx is generated by translation of the LDHB mRNA, but the stop codon is interpreted as an amino acid-encoding codon. In consequence, translation continues to the next stop codon. This leads to the addition of seven amino acid residues to the normal LDH-H protein. The extension contains a peroxisomal targeting signal, so that LDHBx is imported into the peroxisome.
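The five classical isoenzymes listed above are exactly the distinct ways of filling a tetramer from two subunit types, which a short illustrative Python snippet can enumerate (the naming follows the 4H = LDH-1 ... 4M = LDH-5 convention given above):

```python
from itertools import combinations_with_replacement

# Distinct tetramers assembled from the H (LDHB) and M (LDHA) subunits.
for combo in combinations_with_replacement("HM", 4):
    h, m = combo.count("H"), combo.count("M")
    print(f"LDH-{m + 1}: {h}H{m}M")  # LDH-1 = 4H0M ... LDH-5 = 0H4M
```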
Protein families
The family also contains L-lactate dehydrogenases that catalyse the conversion of pyruvate to L-lactate, the last step in anaerobic glycolysis. Malate dehydrogenases that catalyse the interconversion of malate to oxaloacetate and participate in the citric acid cycle, and L-2-hydroxyisocaproate dehydrogenases are also members of the family. The N-terminus is a Rossmann NAD-binding fold and the C-terminus is an unusual alpha+beta fold.
Interactive pathway map
Enzyme regulation
This protein may use the morpheein model of allosteric regulation.
Ethanol-induced hypoglycemia
Ethanol is dehydrogenated to acetaldehyde by alcohol dehydrogenase, and further into acetyl CoA by acetaldehyde dehydrogenase. During this reaction 2 NADH are produced. If large amounts of ethanol are present, then large amounts of NADH are produced, leading to a depletion of NAD+. Thus, the conversion of pyruvate to lactate is increased due to the associated regeneration of NAD+. Therefore, anion-gap metabolic acidosis (lactic acidosis) may ensue in ethanol poisoning.
The increased NADH/NAD+ ratio also can cause hypoglycemia in an (otherwise) fasting individual who has been drinking and is dependent on gluconeogenesis to maintain blood glucose levels. Alanine and lactate are major gluconeogenic precursors that enter gluconeogenesis as pyruvate. The high NADH/NAD+ ratio shifts the lactate dehydrogenase equilibrium to lactate, so that less pyruvate can be formed and, therefore, gluconeogenesis is impaired.
Substrate regulation
LDH is also regulated by the relative concentrations of its substrates. LDH becomes more active under periods of extreme muscular output due to an increase in substrates for the LDH reaction. When skeletal muscles are pushed to produce high levels of power, the demand for ATP in regards to aerobic ATP supply leads to an accumulation of free ADP, AMP, and Pi. The subsequent glycolytic flux, specifically production of pyruvate, exceeds the capacity for pyruvate dehydrogenase and other shuttle enzymes to metabolize pyruvate. The flux through LDH increases in response to increased levels of pyruvate and NADH to metabolize pyruvate into lactate.
Transcriptional regulation
LDH undergoes transcriptional regulation by PGC-1α. PGC-1α regulates LDH by decreasing LDH A mRNA transcription and the enzymatic activity of pyruvate to lactate conversion.
Genetics
The M and H subunits are encoded by two different genes:
The M subunit is encoded by LDHA, located on chromosome 11p15.4 ().
The H subunit is encoded by LDHB, located on chromosome 12p12.2-p12.1 ().
A third isoform, LDHC or LDHX, is expressed only in the testis (); its gene is likely a duplicate of LDHA and is also located on the eleventh chromosome (11p15.5-p15.3).
The fourth isoform is localized in the peroxisome. It is tetramer containing one LDHBx subunit, which is also encoded by the LDHB gene. The LDHBx protein is seven amino acids longer than the LDHB (LDH-H) protein. This amino acid extension is generated by functional translational readthrough.
Mutations of the M subunit have been linked to the rare disease exertional myoglobinuria (see OMIM article), and mutations of the H subunit have been described but do not appear to lead to disease.
Mutations
In rare cases, a mutation in the genes controlling the production of lactate dehydrogenase will lead to a medical condition known as lactate dehydrogenase deficiency. Depending on which gene carries the mutation, one of two types will occur: either lactate dehydrogenase-A deficiency (also known as glycogen storage disease XI) or lactate dehydrogenase-B deficiency. Both of these conditions affect how the body breaks down sugars, primarily in certain muscle cells. Lactate dehydrogenase-A deficiency is caused by a mutation to the LDHA gene, while lactate dehydrogenase-B deficiency is caused by a mutation to the LDHB gene.
This condition is inherited in an autosomal recessive pattern, meaning that both parents must contribute a mutated gene in order for this condition to be expressed.
A complete lactate dehydrogenase enzyme consists of four protein subunits. Since the two most common subunits found in lactate dehydrogenase are encoded by the LDHA and LDHB genes, either variation of this disease causes abnormalities in many of the lactate dehydrogenase enzymes found in the body. In the case of lactate dehydrogenase-A deficiency, mutations to the LDHA gene result in the production of an abnormal lactate dehydrogenase-A subunit that cannot bind to the other subunits to form the complete enzyme. This lack of a functional subunit reduces the amount of enzyme formed, leading to an overall decrease in activity. During the anaerobic phase of glycolysis (the Cori cycle), the mutated enzyme is unable to convert pyruvate into lactate to produce the extra energy the cells need. Since this subunit has the highest concentration in the LDH enzymes found in the skeletal muscles (which are the primary muscles responsible for movement), high-intensity physical activity will lead to an insufficient amount of energy being produced during this anaerobic phase. This in turn will cause the muscle tissue to weaken and eventually break down, a condition known as rhabdomyolysis. The process of rhabdomyolysis also releases myoglobin into the blood, which will eventually end up in the urine and cause it to become red or brown: another condition known as myoglobinuria. Some other common symptoms are exercise intolerance, which consists of fatigue, muscle pain, and cramps during exercise, and skin rashes. In severe cases, myoglobinuria can damage the kidneys and lead to life-threatening kidney failure. In order to obtain a definitive diagnosis, a muscle biopsy may be performed to confirm low or absent LDH activity. There is currently no specific treatment for this condition.
In the case of lactate dehydrogenase-B deficiency, mutations to the LDHB gene result in the production of an abnormal lactate dehydrogenase-B subunit that cannot bind to the other subunits to form the complete enzyme. As with lactate dehydrogenase-A deficiency, this mutation reduces the overall effectiveness in the enzyme. However, there are some major differences between these two cases. The first is the location where the condition manifests itself. With lactate dehydrogenase-B deficiency, the highest concentration of B subunits can be found within the cardiac muscle, or the heart. Within the heart, lactate dehydrogenase plays the role of converting lactate back into pyruvate so that the pyruvate can be used again to create more energy. With the mutated enzyme, the overall rate of this conversion is decreased. However, unlike lactate dehydrogenase-A deficiency, this mutation does not appear to cause any symptoms or health problems linked to this condition. At the present moment, it is unclear why this is the case. Affected individuals are usually discovered only when routine blood tests indicate low LDH levels present within the blood.
Role in muscular fatigue
The onset of acidosis during periods of intense exercise is commonly attributed to accumulation of hydrogen ions that are dissociated from lactate. Previously, lactic acid was thought to cause fatigue, and from this reasoning the idea of lactate production being a primary cause of muscle fatigue during exercise was widely adopted. A closer, mechanistic analysis of lactate production under "anaerobic" conditions shows that there is no biochemical evidence for the production of lactate through LDH contributing to acidosis. While LDH activity is correlated to muscle fatigue, the production of lactate by means of the LDH complex works as a system to delay the onset of muscle fatigue. George Brooks and colleagues at UC Berkeley, where the lactate shuttle was discovered, showed that lactate is actually a metabolic fuel, not a waste product or the cause of fatigue.
LDH works to prevent muscular failure and fatigue in multiple ways. The lactate-forming reaction generates cytosolic NAD+, which feeds into the glyceraldehyde 3-phosphate dehydrogenase reaction to help maintain cytosolic redox potential and promote substrate flux through the second phase of glycolysis to promote ATP generation. This, in effect, provides more energy to contracting muscles under heavy workloads. The production and removal of lactate from the cell also ejects a proton consumed in the LDH reaction; the removal of excess protons produced in the wake of this fermentation reaction serves as a buffer system against muscle acidosis. Once proton accumulation exceeds the rate of uptake in lactate production and removal through the LDH symport, muscular acidosis occurs.
Blood test
On blood tests, an elevated level of lactate dehydrogenase usually indicates tissue damage, which has multiple potential causes, reflecting its widespread tissue distribution:
Hemolytic anemia
Vitamin B12 deficiency anemia
Infections such as infectious mononucleosis, meningitis, encephalitis, HIV/AIDS. It is notably increased in sepsis.
Infarction, such as bowel infarction, myocardial infarction and lung infarction
Acute kidney disease
Acute liver disease
Rhabdomyolysis
Pancreatitis
Bone fractures
Cancers, notably testicular cancer and lymphoma. A high LDH after chemotherapy may indicate that it has not been successful.
Severe shock
Hypoxia
Low and normal levels of LDH do not usually indicate any pathology. Low levels may be caused by large intake of vitamin C.
LDH is a protein that normally appears throughout the body in small amounts.
Testing in cancer
Many cancers can raise LDH levels, so LDH may be used as a tumor marker, but at the same time, it is not useful in identifying a specific kind of cancer. Measuring LDH levels can be helpful in monitoring treatment for cancer. Noncancerous conditions that can raise LDH levels include heart failure, hypothyroidism, anemia, pre-eclampsia, meningitis, encephalitis, acute pancreatitis, HIV and lung or liver disease.
Tissue breakdown releases LDH, and therefore, LDH can be measured as a surrogate for tissue breakdown (e.g., hemolysis). LDH is measured by the lactate dehydrogenase (LDH) test (also known as the LDH test or lactic acid dehydrogenase test). Comparison of the measured LDH values with the normal range help guide diagnosis.
Hemolysis
In medicine, LDH is often used as a marker of tissue breakdown as LDH is abundant in red blood cells and can function as a marker for hemolysis. A blood sample that has been handled incorrectly can show false-positively high levels of LDH due to erythrocyte damage.
It can also be used as a marker of myocardial infarction. Following a myocardial infarction, levels of LDH peak at 3–4 days and remain elevated for up to 10 days. In this way, elevated levels of LDH (where the level of LDH1 is higher than that of LDH2, i.e. the LDH Flip, as normally, in serum, LDH2 is higher than LDH1) can be useful for determining whether a patient has had a myocardial infarction if they come to doctors several days after an episode of chest pain.
Tissue turnover
Other uses are assessment of tissue breakdown in general; this is possible when there are no other indicators of hemolysis. It is used to follow up cancer (especially lymphoma) patients, as cancer cells have a high rate of turnover, with destroyed cells leading to an elevated LDH activity.
HIV
LDH is often measured in HIV patients as a non-specific marker for pneumonia due to Pneumocystis jirovecii (PCP). Elevated LDH in the setting of upper respiratory symptoms in an HIV patient suggests, but is not diagnostic for, PCP. However, in HIV-positive patients with respiratory symptoms, a very high LDH level (>600 IU/L) indicated histoplasmosis (9.33 times more likely) in a study of 120 PCP and 30 histoplasmosis patients.
Testing in other body fluids
Exudates and transudates
Measuring LDH in fluid aspirated from a pleural effusion (or pericardial effusion) can help in the distinction between exudates (actively secreted fluid, e.g., due to inflammation) or transudates (passively secreted fluid, due to a high hydrostatic pressure or a low oncotic pressure). The usual criterion (included in Light's criteria) is that a ratio of pleural LDH to serum LDH greater than 0.6, or a pleural LDH above two-thirds of the upper limit of the normal laboratory value for serum LDH, indicates an exudate, while lower values indicate a transudate. Different laboratories have different values for the upper limit of serum LDH, but examples include 200 and 300 IU/L. In empyema, the LDH levels, in general, will exceed 1000 IU/L.
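The LDH component of Light's criteria, as stated above, amounts to a simple decision rule; a minimal sketch (the 200 IU/L default is only one of the example thresholds mentioned, and real laboratories supply their own upper limit of normal):

```python
def ldh_suggests_exudate(pleural_ldh, serum_ldh, serum_ldh_upper_normal=200.0):
    """LDH component of Light's criteria: an exudate is suggested when
    pleural/serum LDH exceeds 0.6, or pleural LDH exceeds two-thirds of
    the laboratory's upper limit of normal for serum LDH (all in IU/L)."""
    return (pleural_ldh / serum_ldh > 0.6
            or pleural_ldh > (2 / 3) * serum_ldh_upper_normal)

print(ldh_suggests_exudate(pleural_ldh=180.0, serum_ldh=250.0))  # True
```

Note that the full Light's criteria also include a pleural-to-serum protein ratio, which is outside the scope of this LDH-only sketch.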
Meningitis and encephalitis
High levels of lactate dehydrogenase in cerebrospinal fluid are often associated with bacterial meningitis. In the case of viral meningitis, high LDH, in general, indicates the presence of encephalitis and poor prognosis.
In cancer treatment
LDH is involved in tumor initiation and metabolism. Cancer cells rely on increased glycolysis resulting in increased lactate production in addition to aerobic respiration in the mitochondria, even under oxygen-sufficient conditions (a process known as the Warburg effect). This state of fermentative glycolysis is catalyzed by the A form of LDH. This mechanism allows tumorous cells to convert the majority of their glucose stores into lactate regardless of oxygen availability, shifting use of glucose metabolites from simple energy production to the promotion of accelerated cell growth and replication.
LDH A and the possibility of inhibiting its activity has been identified as a promising target in cancer treatments focused on preventing carcinogenic cells from proliferating. Chemical inhibition of LDH A has demonstrated marked changes in metabolic processes and overall survival of carcinoma cells. Oxamate is a cytosolic inhibitor of LDH A that significantly decreases ATP production in tumorous cells as well as increasing production of reactive oxygen species (ROS). At low concentrations, these ROS drive cancer cell proliferation by activating kinases involved in cell cycle progression and growth factor signaling, but at higher concentrations they can damage DNA through oxidative stress. Secondary lipid oxidation products can also inactivate LDH and impact its ability to regenerate NADH, directly disrupting the enzyme's ability to convert lactate to pyruvate.
While recent studies have shown that LDH activity is not necessarily an indicator of metastatic risk, LDH expression can act as a general marker in the prognosis of cancers. Expression of LDH5 and VEGF in tumors and the stroma has been found to be a strong prognostic factor for diffuse or mixed-type gastric cancers.
Prokaryotes
A cap-membrane-binding domain is found in prokaryotic lactate dehydrogenase. This consists of a large seven-stranded antiparallel beta-sheet flanked on both sides by alpha-helices. It allows for membrane association.
See also
Dehydrogenase
Erythrocyte lactate transporter defect (formerly, myopathy due to lactate transport defect)
Glycogen storage disease
Lactate
Metabolic myopathies
Oxidoreductase
References
Further reading
External links
Chemical pathology
Tumor markers
EC 1.1.1
Enzymes of known structure | Lactate dehydrogenase | [
"Chemistry",
"Biology"
] | 4,565 | [
"Biomarkers",
"Exercise biochemistry",
"Tumor markers",
"Biochemistry",
"Chemical pathology"
] |
14,626,533 | https://en.wikipedia.org/wiki/Retinal%20G%20protein%20coupled%20receptor | RPE-retinal G protein-coupled receptor also known as RGR-opsin is a protein that in humans is encoded by the RGR gene. RGR-opsin is a member of the rhodopsin-like receptor subfamily of GPCR. Like other opsins which bind retinaldehyde, it contains a conserved lysine residue in the seventh transmembrane domain. RGR-opsin comes in different isoforms produced by alternative splicing.
Function
RGR-opsin preferentially binds all-trans-retinal, which is the dominant form in the dark adapted retina, upon light exposure it is isomerized to 11-cis-retinal. Therefore, RGR-opsin presumably acts as a photoisomerase to convert all-trans-retinal to 11-cis-retinal, similar to retinochrome in invertebrates. 11-cis-retinal is isomerized back within rhodopsin and the iodopsins in the rods and cones of the retina. RGR-opsin is exclusively expressed in tissue close to the rods and cones, the retinal pigment epithelium (RPE) and Müller cells.
Phylogeny
The RGR-opsins are restricted to the echinoderms, the hemichordates, and the craniates. The craniates are the taxon that contains the mammals, and with them humans. The RGR-opsins are one of the seven subgroups of the chromopsins. The other groups are the peropsins, the retinochromes, the nemopsins, the astropsins, the varropsins, and the gluopsins. The chromopsins are one of three subgroups of the tetraopsins (also known as RGR/Go or Group 4 opsins). The other groups are the neuropsins and the Go-opsins. The tetraopsins are one of the five major groups of the animal opsins (also known as type 2 opsins). The other groups are the ciliary opsins (c-opsins, cilopsins), the rhabdomeric opsins (r-opsins, rhabopsins), the xenopsins, and the nessopsins. Four of these subclades occur in Bilateria (all but the nessopsins). However, the bilaterian clades constitute a paraphyletic taxon without the opsins from the cnidarians.
In the phylogeny above, each clade contains sequences from opsins and other G protein-coupled receptors. The number of sequences and two pie charts are shown next to each clade. The first pie chart shows the percentage of a certain amino acid at the position in the sequences corresponding to position 296 in cattle rhodopsin. The amino acids are color-coded: red for lysine (K), purple for glutamic acid (E), orange for arginine (R), dark and mid-gray for other amino acids, and light gray for sequences that have no data at that position. The second pie chart gives the taxon composition for each clade: green stands for craniates, dark green for cephalochordates, mid green for echinoderms, brown for nematodes, pale pink for annelids, dark blue for arthropods, light blue for mollusks, and purple for cnidarians. The branches to the clades have pie charts, which give support values for the branches. The values are, from right to left, SH-aLRT/aBayes/UFBoot. The branches are considered supported when SH-aLRT ≥ 80%, aBayes ≥ 0.95, and UFBoot ≥ 95%. If a support value is above its threshold the pie chart is black, otherwise gray.
Clinical significance
RGR-opsin may be associated with autosomal recessive and autosomal dominant retinitis pigmentosa (arRP and adRP, respectively).
Interactions
RGR-opsin has been shown to interact with KIAA1279.
References
Further reading
G protein-coupled receptors | Retinal G protein coupled receptor | [
"Chemistry"
] | 881 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,626,877 | https://en.wikipedia.org/wiki/Paracrystallinity | In materials science, paracrystalline materials are defined as having short- and medium-range ordering in their lattice (similar to the liquid crystal phases) but lacking crystal-like long-range ordering at least in one direction.
Origin and definition
The words "paracrystallinity" and "paracrystal" were coined by the late Friedrich Rinne in the year 1933. Their German equivalents, e.g. "Parakristall", appeared in print one year earlier.
A general theory of paracrystals has been formulated in a basic textbook, and then further developed/refined by various authors.
Rolf Hosemann's definition of an ideal paracrystal is: "The electron density distribution of any material is equivalent to that of a paracrystal when there is for every building block one ideal point so that the distance statistics to other ideal points are identical for all of these points. The electron configuration of each building block around its ideal point is statistically independent of its counterpart in neighboring building blocks. A building block corresponds then to the material content of a cell of this "blurred" space lattice, which is to be considered a paracrystal."
Theory
Ordering is the regularity in which atoms appear in a predictable lattice, as measured from one point. In a highly ordered, perfectly crystalline material, or single crystal, the location of every atom in the structure can be described exactly measuring out from a single origin. Conversely, in a disordered structure such as a liquid or amorphous solid, the location of the nearest and, perhaps, second-nearest neighbors can be described from an origin (with some degree of uncertainty) and the ability to predict locations decreases rapidly from there out. The distance at which atom locations can be predicted is referred to as the correlation length. A paracrystalline material exhibits a correlation somewhere between the fully amorphous and fully crystalline.
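Hosemann's ideal-paracrystal picture can be illustrated numerically in one dimension: if each lattice spacing is drawn independently from the same distribution, position errors accumulate like a random walk, so the positional spread of the nth neighbour grows as σ√n and long-range order is lost while short-range order survives. A rough Python sketch (the spacing distribution and parameters are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma = 1.0, 0.05         # mean lattice spacing and spacing disorder
n_sites, n_chains = 200, 5000

# Ideal 1D paracrystal (Hosemann-type lattice statistics): independent,
# identically distributed spacings accumulated into lattice positions.
spacings = rng.normal(d, sigma, size=(n_chains, n_sites))
positions = np.cumsum(spacings, axis=1)

# The spread of the n-th neighbour grows like sigma * sqrt(n): sharp
# short-range order, washed-out long-range order.
for n in (1, 10, 100):
    print(n, positions[:, n - 1].std(), sigma * np.sqrt(n))
```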
The primary, most accessible source of crystallinity information is X-ray diffraction and cryo-electron microscopy, although other techniques may be needed to observe the complex structure of paracrystalline materials, such as fluctuation electron microscopy in combination with density of states modeling of electronic and vibrational states. Scanning transmission electron microscopy can provide real-space and reciprocal space characterization of paracrystallinity in nanoscale material, such as quantum dot solids.
The scattering of X-rays, neutrons and electrons on paracrystals is quantitatively described by the theories of the ideal and real paracrystal.
Numerical differences in analyses of diffraction experiments on the basis of either of these two theories of paracrystallinity can often be neglected.
Just like ideal crystals, ideal paracrystals extend theoretically to infinity. Real paracrystals, on the other hand, follow the empirical α*-law, which restricts their size. That size is also indirectly proportional to the components of the tensor of the paracrystalline distortion. Larger solid state aggregates are then composed of micro-paracrystals.
Applications
The paracrystal model has been useful, for example, in describing the state of partially amorphous semiconductor materials after deposition. It has also been successfully applied to synthetic polymers, liquid crystals, biopolymers, quantum dot solids, and biomembranes.
See also
Amorphous solid
Crystallite
Crystallography
DNA
Single crystal
X-ray pattern of a B-DNA paracrystal
X-ray scattering techniques
References
Phases of matter | Paracrystallinity | [
"Physics",
"Chemistry"
] | 722 | [
"Phases of matter",
"Matter"
] |
14,627,460 | https://en.wikipedia.org/wiki/Seismic%20base%20isolation | Seismic base isolation, also known as base isolation, or base isolation system, is one of the most popular means of protecting a structure against earthquake forces. It is a collection of structural elements which should substantially decouple a superstructure from its substructure that is in turn resting on the shaking ground, thus protecting a building or non-building structure's integrity.
Base isolation is one of the most powerful tools of earthquake engineering pertaining to the passive structural vibration control technologies.
The isolation can be obtained by the use of various techniques like rubber bearings, friction bearings, ball bearings, spring systems and other means. It is meant to enable a building or non-building structure to survive a potentially devastating seismic impact through a proper initial design or subsequent modifications. In some cases, application of base isolation can raise both a structure's seismic performance and its seismic sustainability considerably. Contrary to popular belief, base isolation does not make a building earthquake proof.
A base isolation system consists of isolation units with or without isolation components, where:
Isolation units are the basic elements of a base isolation system which are intended to provide the aforementioned decoupling effect to a building or non-building structure.
Isolation components are the connections between isolation units and their parts having no decoupling effect of their own.
Isolation units could consist of shear or sliding units.
This technology can be used for both new structural design and seismic retrofit. In process of seismic retrofit, some of the most prominent U.S. monuments, e.g. Pasadena City Hall, San Francisco City Hall, Salt Lake City and County Building or LA City Hall were mounted on base isolation systems. It required creating rigidity diaphragms and moats around the buildings, as well as making provisions against overturning and P-Delta Effect.
Base isolation is also used on a smaller scale—sometimes down to a single room in a building. Isolated raised-floor systems are used to safeguard essential equipment against earthquakes. The technique has been incorporated to protect statues and other works of art—see, for instance, Rodin's Gates of Hell at the National Museum of Western Art in Tokyo's Ueno Park.
Base isolation units consist of linear-motion bearings, which allow the building to move; oil dampers, which absorb the forces generated by the movement of the building; and laminated rubber bearings, which allow the building to return to its original position when the earthquake has ended.
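The decoupling effect can be illustrated with the simplest possible model: treat the structure as a single linear oscillator and compare a stiff, fixed-base period with a long, isolated period plus added damping. A rough Python sketch under a harmonic ground motion (all periods, damping ratios and amplitudes are illustrative assumptions, not design values):

```python
import numpy as np

def peak_abs_acceleration(period, damping, t, ag):
    """Peak absolute acceleration of a linear single-degree-of-freedom
    oscillator under ground acceleration ag(t), integrated with a
    simple semi-implicit Euler scheme (illustrative, not design-grade)."""
    w = 2 * np.pi / period
    dt = t[1] - t[0]
    x = v = peak = 0.0
    for a_g in ag:
        a_rel = -a_g - 2 * damping * w * v - w**2 * x  # relative-motion EOM
        v += a_rel * dt
        x += v * dt
        peak = max(peak, abs(a_rel + a_g))  # absolute acceleration
    return peak

t = np.arange(0.0, 20.0, 0.001)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * t / 0.5)  # harmonic ground motion, T = 0.5 s

# Stiff fixed-base structure (short period) versus base-isolated structure
# (lengthened period plus supplemental damping from the isolation units).
print("fixed-base peak acceleration:", peak_abs_acceleration(0.5, 0.05, t, ag))
print("isolated   peak acceleration:", peak_abs_acceleration(2.5, 0.15, t, ag))
```

Lengthening the period shifts the structure away from the dominant frequencies of the shaking, which is the essential mechanism behind the bearings described above.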
History
Base isolator bearings were pioneered in New Zealand by Dr Bill Robinson during the 1970s. The bearing, which consists of layers of rubber and steel with a lead core, was invented by Dr Robinson in 1974. Later, in 2018, the technology was commercialized by Kamalakannan Ganesan and subsequently made patent-free, allowing for broader access and application of this earthquake-resistant technology.
The earliest uses of base isolation systems date back all the way to 550 B.C. in the construction of the Tomb of Cyrus the Great in Pasargadae, Iran. More than 90% of Iran’s territory, including this historic site, is located in the Alpine-Himalaya belt, which is one of the Earth’s most active seismic zones. Historians discovered that this structure, predominantly composed of limestone, was designed to have two foundations. The first and lower foundation, composed of stones that were bonded together with a lime plaster and sand mortar, known as Saroj mortar, was designed to move in the case of an earthquake. The top foundation layer, which formed a large plate that was in no way attached to the structure’s base, was composed of polished stones. The reason this second foundation was not tied down to the base was that in the case of an earthquake, this plate-like layer would be able to slide freely over the structure’s first foundation. As historians discovered thousands of years later, this system worked exactly as its designers had predicted, and as a result, the Tomb of Cyrus the Great still stands today. The development of the idea of base isolation can be divided into two eras: in ancient times the isolation was achieved through the construction of multilayered cut stones (or by laying sand or gravel under the foundation), while in recent history, besides layers of gravel or sand as an isolation interface, wooden logs between the ground and the foundation have been used.
Research
Through the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES), researchers are studying the performance of base isolation systems.
The project, a collaboration among researchers at University of Nevada, Reno; University of California, Berkeley; University of Wisconsin, Green Bay; and the University at Buffalo is conducting a strategic assessment of the economic, technical, and procedural barriers to the widespread adoption of seismic isolation in the United States.
NEES resources have been used for experimental and numerical simulation, data mining, networking and collaboration to understand the complex interrelationship among the factors controlling the overall performance of an isolated structural system.
This project involves earthquake shaking table and hybrid tests at the NEES experimental facilities at the University of California, Berkeley, and the University at Buffalo, aimed at understanding ultimate performance limits to examine the propagation of local isolation failures (e.g., bumping against stops, bearing failures, uplift) to the system level response. These tests will include a full-scale, three-dimensional test of an isolated 5-story steel building on the E-Defense shake table in Miki, Hyōgo, Japan.
Seismic isolation research in the middle and late 1970s was largely predicated on the observation that most strong-motion records available up to that time had very low spectral acceleration values in the long-period range (around 2 s and beyond).
Records obtained from lakebed sites in the 1985 Mexico City earthquake raised concerns of the possibility of resonance, but such examples were considered exceptional and predictable.
One of the early examples of an earthquake-isolation design strategy was given by Dr. J.A. Calantariens in 1909. He proposed that a building could be built on a layer of fine sand, mica or talc that would allow it to slide in an earthquake, thereby reducing the forces transmitted to the building.
A detailed literature review of semi-active control systems by Michael D. Symans et al. (1999) provides references to both theoretical and experimental research, but concentrates on describing the results of experimental work. Specifically, the review focuses on the dynamic behavior and distinguishing features of various systems that have been tested experimentally, both at the component level and within small-scale structural models.
Adaptive base isolation
An adaptive base isolation system includes a tunable isolator that can adjust its properties based on the input to minimize the transferred vibration. Magnetorheological fluid dampers and isolators with magnetorheological elastomers have been suggested as adaptive base isolators.
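As a rough illustration of how a tunable isolator might adapt, the sketch below implements the classic on-off "skyhook" logic often used with semi-active (e.g., magnetorheological) dampers: command high damping only when the damper force opposes the absolute motion of the isolated mass. The control law and all parameter values are generic illustrations, not a specific published design.

```python
def skyhook_damping(v_abs: float, v_rel: float,
                    c_high: float = 5.0e5, c_low: float = 5.0e4) -> float:
    """On-off skyhook logic for a semi-active damper.

    v_abs: absolute velocity of the isolated mass (m/s)
    v_rel: velocity across the damper, mass relative to ground (m/s)
    Returns the commanded damping coefficient (N*s/m).
    """
    # The damper force acts against v_rel; it helps only when that force
    # also opposes the absolute motion of the mass, i.e. v_abs * v_rel > 0.
    return c_high if v_abs * v_rel > 0.0 else c_low
```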
Notable buildings and structures on base isolation systems
Tomb of Cyrus
LA City Hall
Oakland City Hall
Pasadena City Hall
San Francisco City Hall
California Palace of the Legion of Honor in San Francisco
M. H. de Young Memorial Museum in San Francisco
Asian Art Museum in San Francisco
James R. Browning United States Court of Appeals Building in San Francisco
San Francisco International Airport's International Terminal, one of the largest base-isolated structures in the world
Salt Lake City and County Building
Başakşehir Çam and Sakura City Hospital in Istanbul
New Zealand Parliament Buildings in Wellington
Museum of New Zealand Te Papa Tongarewa in Wellington
Salt Lake Temple of the Church of Jesus Christ of Latter-day Saints in Salt Lake City (undergoing seismic renovation 2019–2024)
BAPS Shri Swaminarayan Mandir Chino Hills, the first earthquake-proof Hindu temple in the world
Apple Park
See also
Earthquake-resistant structures
Geotechnical engineering
Seismic retrofit
Shock absorber
Shock mount
Vibration isolation
References
Earthquake engineering
Seismic vibration control
Structural connectors
Structural system
New Zealand inventions | Seismic base isolation | [
"Technology",
"Engineering"
] | 1,582 | [
"Structural engineering",
"Building engineering",
"Structural system",
"Structural connectors",
"Civil engineering",
"Seismic vibration control",
"Earthquake engineering"
] |
13,480,124 | https://en.wikipedia.org/wiki/Spinodal%20decomposition | Spinodal decomposition is a mechanism by which a single thermodynamic phase spontaneously separates into two phases (without nucleation). Decomposition occurs when there is no thermodynamic barrier to phase separation. As a result, phase separation via decomposition does not require the nucleation events resulting from thermodynamic fluctuations, which normally trigger phase separation.
Spinodal decomposition is observed when mixtures of metals or polymers separate into two co-existing phases, each rich in one species and poor in the other. When the two phases emerge in approximately equal proportion (each occupying about the same volume or area), characteristic intertwined structures are formed that gradually coarsen (see animation). The dynamics of spinodal decomposition is commonly modeled using the Cahn–Hilliard equation.
Spinodal decomposition is fundamentally different from nucleation and growth. When there is a nucleation barrier to the formation of a second phase, time is taken by the system to overcome that barrier. As there is no barrier (by definition) to spinodal decomposition, some fluctuations (in the order parameter that characterizes the phase) start growing instantly. Furthermore, in spinodal decomposition, the two distinct phases start growing in any location uniformly throughout the volume, whereas a nucleated phase change begins at a discrete number of points.
Spinodal decomposition occurs when a homogenous phase becomes thermodynamically unstable. An unstable phase lies at a maximum in free energy. In contrast, nucleation and growth occur when a homogenous phase becomes metastable. That is, another biphasic system becomes lower in free energy, but the homogenous phase remains at a local minimum in free energy, and so is resistant to small fluctuations. J. Willard Gibbs described two criteria for a metastable phase: that it must remain stable against a small change over a large area, and against a large change over a small area.
History
In the early 1940s, Bradley reported the observation of sidebands around the Bragg peaks in the X-ray diffraction pattern of a Cu-Ni-Fe alloy that had been quenched and then annealed inside the miscibility gap. Further observations on the same alloy were made by Daniel and Lipson, who demonstrated that the sidebands could be explained by a periodic modulation of composition in the <100> directions. From the spacing of the sidebands, they were able to determine the wavelength of the modulation, which was of the order of 100 angstroms (10 nm).
The growth of a composition modulation in an initially homogeneous alloy implies uphill diffusion or a negative diffusion coefficient. Becker and Dehlinger had already predicted a negative diffusivity inside the spinodal region of a binary system, but their treatments could not account for the growth of a modulation of a particular wavelength, such as was observed in the Cu-Ni-Fe alloy. In fact, any model based on Fick's law yields a physically unacceptable solution when the diffusion coefficient is negative.
The first explanation of the periodicity was given by Mats Hillert in his 1955 Doctoral Dissertation at MIT. Starting with a regular solution model, he derived a flux equation for one-dimensional diffusion on a discrete lattice. This equation differed from the usual one by the inclusion of a term, which allowed for the effect of the interfacial energy on the driving force of adjacent interatomic planes that differed in composition. Hillert solved the flux equation numerically and found that inside the spinodal it yielded a periodic variation of composition with distance. Furthermore, the wavelength of the modulation was of the same order as that observed in the Cu-Ni-Fe alloys.
Building on Hillert's work, a more flexible continuum model was subsequently developed by John W. Cahn and John Hilliard, who included the effects of coherency strains as well as the gradient energy term. The strains are significant in that they dictate the ultimate morphology of the decomposition in anisotropic materials.
Cahn–Hilliard model for spinodal decomposition
Free energies in the presence of small amplitude fluctuations, e.g. in concentration, can be evaluated using an approximation introduced by Ginzburg and Landau to describe magnetic field gradients in superconductors. This approach allows one to approximate the free energy as an expansion in terms of the concentration gradient ∇c, a vector. Since free energy is a scalar and we are probing near its minima, the term proportional to ∇c is negligible. The lowest-order term is the quadratic expression κ(∇c)², a scalar. Here κ is a parameter that controls the free-energy cost of variations in the concentration c.
The Cahn–Hilliard free energy is then

F = ∫ [ f(c) + κ(∇c)² ] dV,

where f(c) is the bulk free energy per unit volume of the homogeneous solution, and the integral is over the volume of the system.
We now want to study the stability of the system with respect to small fluctuations in the concentration c, for example a sine wave δc = a sin(q·r) of amplitude a and wavevector q = 2π/λ, where λ is the wavelength of the concentration wave. To be thermodynamically stable, the free energy change δF due to any small-amplitude concentration fluctuation δc must be positive.
We may expand f about the average composition c₀ as follows:

f(c) = f(c₀) + (c − c₀) ∂f/∂c + ((c − c₀)²/2) ∂²f/∂c² + ...,

and for the perturbation δc = a sin(q·r) the free energy change is

δF = ∫ [ (∂²f/∂c²)(δc)²/2 + κ(∇δc)² ] dV.

When this is integrated over the volume V, the term linear in δc gives zero, while the sin² and cos² terms integrate to give V/2, so that

δF = (a²V/4) [ ∂²f/∂c² + 2κq² ].
As the prefactor a²V/4 is positive, thermodynamic stability requires that the term in brackets be positive. The 2κq² term is always positive but tends to zero at small wavevectors, i.e. large wavelengths. Since we are interested in macroscopic fluctuations, q → 0, stability requires that the second derivative of the free energy, ∂²f/∂c², be positive. When it is, there is no spinodal decomposition, but when it is negative, spinodal decomposition will occur. Fluctuations with wavevectors q < q_c then become spontaneously unstable, where the critical wavenumber q_c is given by:

q_c = √( −(∂²f/∂c²) / (2κ) ),

which corresponds to fluctuations above a critical wavelength

λ_c = 2π/q_c = 2π √( −2κ / (∂²f/∂c²) ).
Dynamics of spinodal decomposition when molecules move via diffusion
Spinodal decomposition can be modeled using a generalized diffusion equation:

∂c/∂t = M ∇²μ

for μ the chemical potential and M the mobility. As pointed out by Cahn, this equation can be considered as a phenomenological definition of the mobility M, which must by definition be positive.
It consists of the ratio of the flux to the local gradient in chemical potential. The chemical potential is a variational derivative of the free energy; when this is the Cahn–Hilliard free energy, it is

μ = δF/δc = ∂f/∂c − 2κ∇²c,

and so, for small fluctuations about a uniform composition,

∂c/∂t = M ∇²( ∂f/∂c − 2κ∇²c ) = M [ (∂²f/∂c²) ∇²c − 2κ∇⁴c ],
and now we want to see what happens to a small concentration fluctuation δc = a exp(ωt) sin(q·r); note that it now has a time dependence as well as a wavevector dependence. Here ω is a growth rate. If ω < 0 then the perturbation shrinks to nothing, the system is stable with respect to small perturbations or fluctuations, and there is no spinodal decomposition. However, if ω > 0 then the perturbation grows and the system is unstable with respect to small perturbations or fluctuations: there is spinodal decomposition.
Substituting in this concentration fluctuation, we get

ω = −Mq² [ ∂²f/∂c² + 2κq² ].

This gives the same expressions for stability as above, but it also gives an expression for the growth rate of concentration perturbations, which has a maximum at a wavevector

q_max = q_c/√2 = √( −(∂²f/∂c²) / (4κ) ).

So, at least at the beginning of spinodal decomposition, we expect the growing concentration modulations to be dominated by this wavevector.
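As a quick numerical check of this result, the sketch below evaluates the linearized growth rate ω(q) = −Mq²(∂²f/∂c² + 2κq²) on a grid of wavevectors and verifies that the fastest-growing mode sits at q_max = √(−(∂²f/∂c²)/(4κ)). The parameter values are arbitrary illustrations.

```python
import numpy as np

M = 1.0       # mobility (arbitrary units)
fpp = -1.0    # d2f/dc2, negative inside the spinodal
kappa = 0.5   # gradient-energy coefficient

q = np.linspace(1e-3, 1.5, 2000)              # wavevector grid
omega = -M * q**2 * (fpp + 2 * kappa * q**2)  # linearized growth rate

q_fastest = q[np.argmax(omega)]
q_max_theory = np.sqrt(-fpp / (4 * kappa))

print(f"fastest-growing q (numerical): {q_fastest:.4f}")
print(f"fastest-growing q (theory):    {q_max_theory:.4f}")  # sqrt(0.5) ~ 0.7071
```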
Phase diagram
This type of phase transformation is known as spinodal decomposition, and can be illustrated on a phase diagram exhibiting a miscibility gap. Thus, phase separation occurs whenever a material transitions into the unstable region of the phase diagram. The boundary of the two-phase region, sometimes referred to as the binodal or coexistence curve, is found by performing a common tangent construction on the free-energy diagram. Inside the binodal is a region called the spinodal, which is found by determining where the curvature of the free-energy curve is negative. The binodal and spinodal meet at the critical point. It is when a material is moved into the spinodal region of the phase diagram that spinodal decomposition can occur.
The free energy curve is plotted as a function of composition for a temperature below the consolute temperature, T. Equilibrium phase compositions are those corresponding to the free energy minima. Regions of negative curvature (∂²f/∂c² < 0) lie within the inflection points of the curve (∂²f/∂c² = 0), which are called the spinodes. Their locus as a function of temperature defines the spinodal curve. For compositions within the spinodal, a homogeneous solution is unstable against infinitesimal fluctuations in density or composition, and there is no thermodynamic barrier to the growth of a new phase. Thus, the spinodal represents the limit of physical and chemical stability.
To reach the spinodal region of the phase diagram, a transition must take the material through the binodal region or the critical point. Often phase separation will occur via nucleation during this transition, and spinodal decomposition will not be observed. To observe spinodal decomposition, a very fast transition, often called a quench, is required to move from the stable to the spinodal unstable region of the phase diagram.
In some systems, ordering of the material leads to a compositional instability and this is known as a conditional spinodal, e.g. in the feldspars.
Coherency strains
For most crystalline solid solutions, there is a variation of lattice parameters with the composition. If the lattice of such a solution is to remain coherent in the presence of a composition modulation, mechanical work has to be done to strain the rigid lattice structure. The maintenance of coherency thus affects the driving force for diffusion.
Consider a crystalline solid containing a one-dimensional composition modulation along the x-direction. We calculate the elastic strain energy for a cubic crystal by estimating the work required to deform a slice of material so that it can be added coherently to an existing slab of cross-sectional area. We will assume that the composition modulation is along the x' direction and, as indicated, a prime will be used to distinguish the reference axes from the standard axes of a cubic system (that is, along the <100>).
Let the lattice spacing in the plane of the slab be a₀ and that of the undeformed slice a. If the slice is to be coherent after the addition of the slab, it must be subjected to a strain ε in the z' and y' directions which is given by:

ε = (a₀ − a)/a = δ.
In the first step, the slice is deformed hydrostatically in order to produce the required strains to the z' and y' directions. We use the linear compressibility of a cubic system 1 / ( c11 + 2 c12 ) where the c's are the elastic constants. The stresses required to produce a hydrostatic strain of δ are therefore given by:
The elastic work per unit volume is given by:
where the ε's are the strains. The work performed per unit volume of the slice during the first step is therefore given by:
In the second step, the sides of the slice parallel to the x' direction are clamped and the stress in this direction is relaxed reversibly. Thus, εz' = εy' = 0. The result is that:
The net work performed on the slice in order to achieve coherency is given by:
or
The final step is to express c1'1' in terms of the constants referred to the standard axes. From the rotation of axes, we obtain the following:
where l, m, n are the direction cosines of the x' axis and, therefore the direction cosines of the composition modulation. Combining these, we obtain the following:
The existence of any shear strain has not been accounted for. Cahn considered this problem, and concluded that shear would be absent for modulations along <100>, <110>, <111> and that for other directions the effect of shear strains would be small. It then follows that the total elastic strain energy of a slab of cross-sectional area A is given by:
We next have to relate the strain δ to the composition variation. Let a₀ be the lattice parameter of the unstrained solid of the average composition c₀. Using a Taylor series expansion about c₀ yields the following:

a = a₀ [ 1 + η(c − c₀) + ... ],

in which

η = (1/a)(da/dc),

where the derivatives are evaluated at c₀. Thus, neglecting higher-order terms, we have:

δ = η(c − c₀).

Substituting, we obtain for the strain energy per unit volume:

W_E = η² Y (c − c₀)².
This simple result indicates that the strain energy of a composition modulation depends only on the amplitude and is independent of the wavelength. For a given amplitude, the strain energy WE is proportional to Y. Consider a few special cases.
For an isotropic material:

2c₄₄ = c₁₁ − c₁₂,

so that:

Y = c₁₁ + c₁₂ − 2(c₁₂²/c₁₁).

This equation can also be written in terms of Young's modulus E and Poisson's ratio ν using the standard relationships between the elastic constants. Substituting, we obtain the following:

Y = E / (1 − ν).
For most metals, the anisotropy factor

2c₄₄ − (c₁₁ − c₁₂)

is positive, so that the elastic energy will be a minimum for those directions that minimize the term l²m² + m²n² + l²n². By inspection, those are seen to be <100>. For this case:

Y<100> = c₁₁ + c₁₂ − 2(c₁₂²/c₁₁),

the same as for an isotropic material. At least one metal (molybdenum) has an anisotropy of the opposite sign. In this case, the directions for minimum WE will be those that maximize the directional cosine function. These directions are <111>, and

Y<111> = 6c₄₄(c₁₁ + 2c₁₂) / (c₁₁ + 2c₁₂ + 4c₄₄).
As we will see, the growth rate of the modulations will be a maximum in the directions that minimize Y. These directions, therefore, determine the morphology and structural characteristics of the decomposition in cubic solid solutions.
Rewriting the diffusion equation to include the term derived for the elastic energy yields

∂c/∂t = M ∇² [ ∂f/∂c + 2η²Y(c − c₀) − 2κ∇²c ],

which can alternatively be written in terms of the diffusion coefficient D = M ∂²f/∂c². The simplest way of solving this equation is by using the method of Fourier transforms.
Fourier transform
The motivation for the Fourier transformation comes from the study of a Fourier series. In the study of a Fourier series, complicated periodic functions are written as the sum of simple waves mathematically represented by sines and cosines. Due to the properties of sine and cosine, it is possible to recover the amount of each wave in the sum by an integral. In many cases it is desirable to use Euler's formula, which states that e^(2πiθ) = cos 2πθ + i sin 2πθ, to write Fourier series in terms of the basic waves e^(2πiθ), with the distinct advantage of simplifying many unwieldy formulas.

The passage from sines and cosines to complex exponentials makes it necessary for the Fourier coefficients to be complex-valued. The usual interpretation of this complex number is that it gives both the amplitude (or size) of the wave present in the function and the phase (or the initial angle) of the wave. This passage also introduces the need for negative "frequencies". (For example, if θ were measured in seconds, then the waves e^(2πiθ) and e^(−2πiθ) would both complete one cycle per second, but they represent different frequencies in the Fourier transform. Hence, frequency no longer measures the number of cycles per unit time, but is closely related.)
If A(β) is the amplitude of a Fourier component of wavelength λ and wavenumber β = 2π/λ, the spatial variation in composition can be expressed by the Fourier integral:

c − c₀ = ∫ A(β) exp(iβx) dβ,

in which the coefficients are defined by the inverse relationship:

A(β) = (1/2π) ∫ (c − c₀) exp(−iβx) dx.

Substituting, we obtain on equating coefficients:

dA(β, t)/dt = R(β) A(β, t).

This is an ordinary differential equation that has the solution:

A(β, t) = A(β, 0) exp[ R(β) t ],

in which A(β, 0) is the initial amplitude of the Fourier component of wavenumber β and R(β) is defined by:

R(β) = −Mβ² [ ∂²f/∂c² + 2κβ² ]

or, expressed in terms of the diffusion coefficient D = M ∂²f/∂c²:

R(β) = −Dβ² [ 1 + 2κβ² / (∂²f/∂c²) ].
In a similar manner, the new diffusion equation

∂c/∂t = M ∇²( ∂f/∂c − 2κ∇²c )

has a simple sine wave solution given by:

c − c₀ = A exp[ R(β) t ] sin(βx),

where the amplification factor R(β) is obtained by substituting this solution back into the diffusion equation as above.
For solids, the elastic strains resulting from coherency add terms to the amplification factor as follows:

R(β) = −Mβ² [ ∂²f/∂c² + 2η²Y + 2κβ² ],

where, for isotropic solids:

Y = E / (1 − ν),
where E is Young's modulus of elasticity, ν is Poisson's ratio, and η is the linear strain per unit composition difference. For anisotropic solids, the elastic term depends on the direction in a manner that can be predicted by elastic constants and how the lattice parameters vary with composition. For the cubic case, Y is a minimum for either (100) or (111) directions, depending only on the sign of the elastic anisotropy.
Thus, by describing any composition fluctuation in terms of its Fourier components, Cahn showed that a solution would be unstable with respect to sinusoidal fluctuations above a critical wavelength. By relating the elastic strain energy to the amplitudes of such fluctuations, he formalized the wavelength or frequency dependence of the growth of such fluctuations, and thus introduced the principle of selective amplification of Fourier components of certain wavelengths. The treatment yields the expected mean particle size or wavelength of the most rapidly growing fluctuation.
Thus, the amplitude of composition fluctuations should grow continuously until a metastable equilibrium is reached, with preferential amplification of components of particular wavelengths. The kinetic amplification factor R is negative when the solution is stable to the fluctuation, zero at the critical wavelength, and positive for longer wavelengths, exhibiting a maximum at exactly √2 times the critical wavelength.
Consider a homogeneous solution within the spinodal. It will initially have a certain amount of fluctuation from the average composition which may be written as a Fourier integral. Each Fourier component of that fluctuation will grow or diminish according to its wavelength.
Because of the maximum in R as a function of wavelength, those components of the fluctuation with √2 times the critical wavelength will grow fastest and will dominate. This "principle of selective amplification" depends on the initial presence of these wavelengths but does not critically depend on their exact amplitude relative to other wavelengths (if the time is large compared with 1/R). It does not depend on any additional assumptions, since different wavelengths can coexist and do not interfere with one another.
Limitations of this theory would appear to arise from this assumption and the absence of an expression formulated to account for irreversible processes during phase separation which may be associated with internal friction and entropy production. In practice, frictional damping is generally present and some of the energy is transformed into thermal energy. Thus, the amplitude and intensity of a one-dimensional wave decrease with distance from the source, and for a three-dimensional wave, the decrease will be greater.
Dynamics in k-space
In the spinodal region of the phase diagram, the free energy can be lowered by allowing the components to separate, thus increasing the relative concentration of a component material in a particular region of the material. The concentration will continue to increase until the material reaches the stable part of the phase diagram. Very large regions of material will change their concentration slowly due to the amount of material that must be moved. Very small regions will shrink away due to the energy cost of maintaining an interface between two dissimilar component materials.
To initiate a homogeneous quench, a control parameter, such as temperature, is abruptly and globally changed. For a binary mixture of A-type and B-type materials, the Landau free energy

F = ∫ dr [ (A/2)φ² + (B/4)φ⁴ + (κ/2)(∇φ)² ]

is a good approximation of the free energy near the critical point and is often used to study homogeneous quenches. The mixture concentration φ = ρ_A − ρ_B is the density difference of the mixture components, the control parameters which determine the stability of the mixture are A and B, and the interfacial energy cost is determined by κ.
Diffusive motion often dominates at the length scale of spinodal decomposition. The equation of motion for a diffusive system is

∂φ/∂t = M ∇²μ + ξ,

where M is the diffusive mobility, ξ is some random noise such that ⟨ξ⟩ = 0, and the chemical potential is derived from the Landau free energy:

μ = δF/δφ = Aφ + Bφ³ − κ∇²φ.
We see that if A < 0, small fluctuations around φ = 0 have a negative effective diffusive mobility and will grow rather than shrink. To understand the growth dynamics, we disregard the fluctuating currents due to ξ, linearize the equation of motion around φ = 0 and perform a Fourier transform into k-space. This leads to

∂φ(k, t)/∂t = −Mk²(A + κk²) φ(k, t) = R(k) φ(k, t),

which has an exponential growth solution:

φ(k, t) = φ(k, 0) exp[ R(k) t ].
Since the growth is exponential in time, the fastest-growing angular wavenumber

k_sp = √( −A / (2κ) )

will quickly dominate the morphology. We now see that spinodal decomposition results in domains of the characteristic length scale called the spinodal length:

λ_sp = 2π / k_sp = 2π √( 2κ / (−A) ).

The growth rate of the fastest-growing angular wavenumber is

1/t_sp = R(k_sp) = M A² / (4κ),

where t_sp is known as the spinodal time.
The spinodal length and spinodal time can be used to nondimensionalize the equation of motion, resulting in universal scaling for spinodal decomposition.
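The nondimensionalized dynamics are easy to explore numerically. Below is a minimal pseudo-spectral sketch of the noise-free dynamics above, with a semi-implicit update for the stiff linear terms; all parameter values (A = −1, B = 1, κ = 1, M = 1, grid size, time step) are arbitrary illustrations rather than values tied to any particular material.

```python
import numpy as np

# Illustrative parameters for the Landau free energy (A < 0: inside the spinodal).
A, B, kappa, M = -1.0, 1.0, 1.0, 1.0
N, L, dt, steps = 128, 64.0, 0.1, 2000

rng = np.random.default_rng(0)
phi = 0.01 * rng.standard_normal((N, N))  # small fluctuations around phi = 0

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2

for _ in range(steps):
    mu_nl = B * phi**3                 # nonlinear part of mu = A*phi + B*phi^3 - kappa*lap(phi)
    phi_h = np.fft.fft2(phi)
    mu_nl_h = np.fft.fft2(mu_nl)
    # Semi-implicit update: linear terms treated implicitly for stability.
    phi_h = (phi_h - dt * M * k2 * mu_nl_h) / (1 + dt * M * k2 * (A + kappa * k2))
    phi = np.real(np.fft.ifft2(phi_h))

print("phi range after coarsening:", phi.min(), phi.max())
```

Starting from small random fluctuations, the field coarsens into interpenetrating domains near φ = ±√(−A/B), the bicontinuous morphology described earlier.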
Spinodal Architected Materials
Spinodal phase decomposition has been used to generate architected materials by interpreting one phase as solid and the other phase as void. These spinodal architected materials exhibit interesting mechanical properties, such as high energy absorption, insensitivity to imperfections, superior mechanical resilience, and a high stiffness-to-weight ratio. Furthermore, by controlling the phase separation, i.e., controlling the proportion of the materials and/or imposing preferential directions in the decomposition, one can control the density and preferential directions, effectively tuning the strength, weight, and anisotropy of the resulting architected material. Another interesting property of spinodal materials is the capability to seamlessly transition between different classes, orientations, and densities, thereby enabling the manufacture of effectively multi-material structures.
References
Further reading
External links
Brief statement by Mats Hillert
John Cahn's Homepage
Binary alloys
Composition profiles
Copper / Nickel / Tin alloys
Graphical representation of microstructural evolution
Condensed matter physics
Thermodynamics
Materials science
Critical phenomena
Phase transitions | Spinodal decomposition | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 4,472 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Critical phenomena",
"Condensed matter physics",
"Thermodynamics",
"nan",
"Statistical mechanics",
"Matter",
"Dynamical systems"
] |
13,485,805 | https://en.wikipedia.org/wiki/Radio-frequency%20engineering | Radio-frequency (RF) engineering is a subset of electrical engineering involving the application of transmission line, waveguide, antenna, radar, and electromagnetic field principles to the design and application of devices that produce or use signals within the radio band, the frequency range of about 20 kHz up to 300 GHz.
It is incorporated into almost everything that transmits or receives a radio wave, which includes, but is not limited to, mobile phones, radios, Wi-Fi, and two-way radios.
RF engineering is a highly specialized field that typically includes the following areas of expertise:
Design of antenna systems to provide radiative coverage of a specified geographical area by an electromagnetic field or to provide specified sensitivity to an electromagnetic field impinging on the antenna.
Design of coupling and transmission line structures to transport RF energy without radiation.
Application of circuit elements and transmission line structures in the design of oscillators, amplifiers, mixers, detectors, combiners, filters, impedance transforming networks and other devices.
Verification and measurement of performance of radio frequency devices and systems.
To produce quality results, the RF engineer needs to have an in-depth knowledge of mathematics, physics and general electronics theory as well as specialized training in areas such as wave propagation, impedance transformations, filters and microstrip printed circuit board design.
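As a small, self-contained illustration of the impedance-transformation work mentioned above, the sketch below computes the reflection coefficient of a mismatched load and the characteristic impedance Z1 = √(Z0·ZL) of a quarter-wave matching section; the 50 Ω line and 100 Ω load are arbitrary example values.

```python
import math

def quarter_wave_z(z0: float, z_load: float) -> float:
    """Characteristic impedance of a quarter-wave transformer matching z_load to z0."""
    return math.sqrt(z0 * z_load)

def reflection_coefficient(z0: float, z_load: float) -> float:
    """Magnitude of the reflection coefficient of z_load on a z0 line."""
    return abs((z_load - z0) / (z_load + z0))

z0, z_load = 50.0, 100.0  # ohms, illustrative values
print(f"unmatched |Gamma| = {reflection_coefficient(z0, z_load):.3f}")    # 0.333
print(f"quarter-wave section Z1 = {quarter_wave_z(z0, z_load):.1f} ohm")  # 70.7
```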
Radio electronics
Radio electronics is concerned with electronic circuits which receive or transmit radio signals.
Typically, such circuits must operate at radio frequency and power levels, which imposes special constraints on their design. These constraints increase in their importance with higher frequencies. At microwave frequencies, the reactance of signal traces becomes a crucial part of the physical layout of the circuit.
List of radio electronics topics:
RF oscillators: Phase-locked loop, voltage-controlled oscillator
Transmitters, transmission lines, transmission line tuners, RF connectors
Antennas, antenna theory
Receivers, tuners
Amplifiers
Modulators, demodulators, detectors
RF filters
RF shielding, ground plane
Direct-sequence spread spectrum (DSSS), noise power
Digital radio
RF power amplifiers
Metal–oxide–semiconductor field-effect transistor (MOSFET)s: Power MOSFET, Laterally-diffused metal-oxide semiconductor (LDMOS)
Bipolar junction transistors
Baseband processors (Complementary metal–oxide–semiconductor (CMOS))
RF CMOS (mixed-signal integrated circuits)
Duties
Radio-frequency engineers are specialists in their field and can take on many different roles, such as design, installation, and maintenance, typically after many years of extensive experience in the area. This type of engineer has experience with transmission systems, device design, and placement of antennas for optimum performance.
The RF engineer job description at a broadcast facility can include maintenance of the station's high-power broadcast transmitters and associated systems. This includes transmitter site emergency power, remote control, main transmission line and antenna adjustments, microwave radio relay STL/TSL links, and more.
In addition, a radio-frequency design engineer must be able to understand electronic hardware design, circuit board material, antenna radiation, and the effect of interfering frequencies that prevent optimum performance within the piece of equipment being developed.
Mathematics
There are many applications of electromagnetic theory to radio-frequency engineering, using conceptual tools such as vector calculus and complex analysis. Topics studied in this area include waveguides and transmission lines, the behavior of radio antennas, and the propagation of radio waves through the Earth's atmosphere. Historically, the subject played a significant role in the development of nonlinear dynamics.
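One standard result from the study of radio wave propagation is free-space path loss. As a rough illustration (assuming ideal isotropic antennas and line-of-sight conditions), the sketch below evaluates FSPL = (4πdf/c)² in decibels for an example link; the distance and frequency are arbitrary.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss (4*pi*d*f/c)^2, expressed in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Illustrative link: 10 km at 2.4 GHz.
print(f"FSPL = {fspl_db(10_000, 2.4e9):.1f} dB")  # ~120 dB
```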
See also
Broadcast engineering
Information theory
Microwave engineering
Overlap zone
Radar engineering
Radio resource management
Radio-frequency current
SPLAT! (software)
List of textbooks in electromagnetism
References
External links
Practical Guide to Radio-Frequency Analysis and Design
Radio spectrum
Radio waves
Radio waves
Electromagnetic spectrum
Broadcast engineering
Electrical engineering
Electronic engineering
Broadcasting occupations
Engineering occupations
MOSFETs
Telecommunications techniques | Radio-frequency engineering | [
"Physics",
"Technology",
"Engineering"
] | 786 | [
"Information and communications technology",
"Broadcast engineering",
"Physical phenomena",
"Telecommunications engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Computer engineering",
"Electromagnetic spectrum",
"Radio technology",
"Waves",
"Motion (physics)",
"Electronic engin... |
1,575,643 | https://en.wikipedia.org/wiki/Mass%20flow%20meter | A mass flow meter, also known as an inertial flow meter, is a device that measures mass flow rate of a fluid traveling through a tube. The mass flow rate is the mass of the fluid traveling past a fixed point per unit time.
The mass flow meter does not measure the volume per unit time (e.g. cubic meters per second) passing through the device; it measures the mass per unit time (e.g. kilograms per second) flowing through the device. Volumetric flow rate is the mass flow rate divided by the fluid density. If the density is constant, then the relationship is simple. If the fluid has varying density, then the relationship is not simple. For example, the density of the fluid may change with temperature, pressure, or composition. The fluid may also be a combination of phases, such as a fluid with entrained bubbles. The actual density can be determined from the dependence of the speed of sound on the concentration of the liquid being measured.
Operating principle of a Coriolis flow meter
The Coriolis flow meter is based on the Coriolis force, which deflects moving objects in a rotating frame of reference in proportion to their velocity.
There are two basic configurations of Coriolis flow meter: the curved tube flow meter and the straight tube flow meter. This article discusses the curved tube design.
The animations on the right do not represent an actually existing Coriolis flow meter design. The purpose of the animations is to illustrate the operating principle, and to show the connection with rotation.
Fluid is being pumped through the mass flow meter. When there is mass flow, the tube twists slightly. The arm through which fluid flows away from the axis of rotation must exert a force on the fluid to increase its angular momentum, so it bends backwards. The arm through which fluid is pushed back to the axis of rotation must exert a force on the fluid to decrease the fluid's angular momentum again, hence that arm will bend forward. In other words, the inlet arm (containing an outwards directed flow) is lagging behind the overall rotation, the part which at rest is parallel to the axis is now skewed, and the outlet arm (containing an inwards directed flow) leads the overall rotation.
The animation on the right represents how curved tube mass flow meters are designed. The fluid is led through two parallel tubes. An actuator (not shown) induces equal counter vibrations on the sections parallel to the axis, to make the measuring device less sensitive to outside vibrations. The actual frequency of the vibration depends on the size of the mass flow meter, and ranges from 80 to 1000 Hz. The amplitude of the vibration is too small to be seen, but it can be felt by touch.
When no fluid is flowing, the motion of the two tubes is symmetrical, as shown in the left animation. The animation on the right illustrates what happens during mass flow: some twisting of the tubes. The arm carrying the flow away from the axis of rotation must exert a force on the fluid to accelerate the flowing mass to the vibrating speed of the tubes at the outside (increase of absolute angular momentum), so it is lagging behind the overall vibration. The arm through which fluid is pushed back towards the axis of movement must exert a force on the fluid to decrease the fluid's absolute angular speed (angular momentum) again, hence that arm leads the overall vibration.
The inlet arm and the outlet arm vibrate with the same frequency as the overall vibration, but when there is mass flow the two vibrations are out of sync: the inlet arm is behind, the outlet arm is ahead. The two vibrations are shifted in phase with respect to each other, and the degree of phase-shift is a measure for the amount of mass that is flowing through the tubes and line.
Density and volume measurements
The mass flow of a U-shaped Coriolis flow meter is given as:

Qm = (Ku − Iu ω²) τ / (2 K d²),

where Ku is the temperature-dependent stiffness of the tube, K is a shape-dependent factor, d is the width, τ is the time lag, ω is the vibration frequency, and Iu is the inertia of the tube. As the inertia of the tube depends on its contents, knowledge of the fluid density is needed for the calculation of an accurate mass flow rate.
If the density changes too often for manual calibration to be sufficient, the Coriolis flow meter can be adapted to measure the density as well. The natural vibration frequency of the flow tubes depends on the combined mass of the tube and the fluid contained in it. By setting the tube in motion and measuring the natural frequency, the mass of the fluid contained in the tube can be deduced. Dividing this mass by the known volume of the tube gives the density of the fluid.
An instantaneous density measurement allows the calculation of flow in volume per time by dividing mass flow with density.
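A rough numerical sketch of the density and volume measurements just described: modeling the fluid-filled tube as a simple resonator with effective stiffness k and empty-tube mass m_tube, the measured natural frequency f yields the contained fluid mass and hence its density, from which a measured mass flow can be converted to volumetric flow. All meter constants below are hypothetical, not values for any real instrument.

```python
import math

def fluid_density(f_hz: float, k: float, m_tube: float, v_tube: float) -> float:
    """Density from resonance: f = (1/2pi) * sqrt(k / (m_tube + rho * V))."""
    m_total = k / (2 * math.pi * f_hz) ** 2
    return (m_total - m_tube) / v_tube

# Hypothetical meter constants.
k = 5.0e5        # effective tube stiffness, N/m
m_tube = 0.40    # empty tube mass, kg
v_tube = 1.0e-4  # internal tube volume, m^3

rho = fluid_density(f_hz=159.2, k=k, m_tube=m_tube, v_tube=v_tube)
mass_flow = 0.25  # measured mass flow, kg/s
print(f"density: {rho:.0f} kg/m^3")                   # ~997, water-like
print(f"volumetric flow: {mass_flow / rho * 1000:.2f} L/s")
```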
Calibration
Both mass flow and density measurements depend on the vibration of the tube. Calibration is affected by changes in the rigidity of the flow tubes.
Changes in temperature and pressure will cause the tube rigidity to change, but these can be compensated for through pressure and temperature zero and span compensation factors.
Additional effects on tube rigidity will cause shifts in the calibration factor over time due to degradation of the flow tubes. These effects include pitting, cracking, coating, erosion or corrosion. It is not possible to compensate for these changes dynamically, but efforts to monitor the effects may be made through regular meter calibration or verification checks. If a change is deemed to have occurred, but is considered to be acceptable, the offset may be added to the existing calibration factor to ensure continued accurate measurement.
See also
Coriolis effect
Flow measurement
Gaspard-Gustave Coriolis
Oscillating U-tube
References
External links
Lecture slides on flow measurement, University of Minnesota
Classical mechanics
Flow meters
Mass | Mass flow meter | [
"Physics",
"Chemistry",
"Mathematics",
"Technology",
"Engineering"
] | 1,191 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Measuring instruments",
"Size",
"Mechanics",
"Flow meters",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
1,578,212 | https://en.wikipedia.org/wiki/Hille%E2%80%93Yosida%20theorem | In functional analysis, the Hille–Yosida theorem characterizes the generators of strongly continuous one-parameter semigroups of linear operators on Banach spaces. It is sometimes stated for the special case of contraction semigroups, with the general case being called the Feller–Miyadera–Phillips theorem (after William Feller, Isao Miyadera, and Ralph Phillips). The contraction semigroup case is widely used in the theory of Markov processes. In other scenarios, the closely related Lumer–Phillips theorem is often more useful in determining whether a given operator generates a strongly continuous contraction semigroup. The theorem is named after the mathematicians Einar Hille and Kōsaku Yosida who independently discovered the result around 1948.
Formal definitions
If X is a Banach space, a one-parameter semigroup of operators on X is a family of operators

{T(t)} t ∈ [0, ∞)

indexed by the non-negative real numbers such that T(0) = I, the identity operator on X, and T(s + t) = T(s)T(t) for all s, t ≥ 0.

The semigroup is said to be strongly continuous, also called a (C0) semigroup, if and only if the mapping

t ↦ T(t)x

is continuous for all x ∈ X, where [0, ∞) has the usual topology and X has the norm topology.
The infinitesimal generator of a one-parameter semigroup T is an operator A defined on a possibly proper subspace of X as follows:

The domain of A is the set of x ∈ X such that

(T(h)x − x) / h

has a limit as h approaches 0 from the right.

The value of Ax is the value of the above limit. In other words, Ax is the right-derivative at 0 of the function t ↦ T(t)x.
The infinitesimal generator of a strongly continuous one-parameter semigroup is a closed linear operator defined on a dense linear subspace of X.
The Hille–Yosida theorem provides a necessary and sufficient condition for a closed linear operator A on a Banach space to be the infinitesimal generator of a strongly continuous one-parameter semigroup.
Statement of the theorem
Let A be a linear operator defined on a linear subspace D(A) of the Banach space X, ω a real number, and M > 0. Then A generates a strongly continuous semigroup T that satisfies ‖T(t)‖ ≤ M e^(ωt) if and only if

A is closed and D(A) is dense in X,

every real λ > ω belongs to the resolvent set of A and for such λ and for all positive integers n,

‖(λI − A)^(−n)‖ ≤ M / (λ − ω)^n.
Hille–Yosida theorem for contraction semigroups
In the general case the Hille–Yosida theorem is mainly of theoretical importance since the estimates on the powers of the resolvent operator that appear in the statement of the theorem can usually not be checked in concrete examples. In the special case of contraction semigroups (M = 1 and ω = 0 in the above theorem) only the case n = 1 has to be checked and the theorem also becomes of some practical importance. The explicit statement of the Hille–Yosida theorem for contraction semigroups is:
Let A be a linear operator defined on a linear subspace D(A) of the Banach space X. Then A generates a contraction semigroup if and only if

A is closed and D(A) is dense in X,

every real λ > 0 belongs to the resolvent set of A and for such λ,

‖(λI − A)^(−1)‖ ≤ 1/λ.
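As a quick illustration of the contraction criterion, consider the left-translation semigroup (T(t)f)(x) = f(x + t) on X = C0(ℝ), whose generator is Af = f′ on its natural domain; this is a standard textbook example, sketched here rather than taken from the statement above. For λ > 0 the resolvent is the Laplace transform of the semigroup, and the required bound follows directly:

```latex
\[
\bigl((\lambda I - A)^{-1} g\bigr)(x)
   = \int_0^\infty e^{-\lambda s}\,(T(s)g)(x)\,ds
   = \int_0^\infty e^{-\lambda s}\, g(x+s)\,ds ,
\]
\[
\bigl\|(\lambda I - A)^{-1} g\bigr\|_\infty
   \le \|g\|_\infty \int_0^\infty e^{-\lambda s}\,ds
   = \frac{\|g\|_\infty}{\lambda} .
\]
```

So ‖(λI − A)^(−1)‖ ≤ 1/λ, matching the second condition, and A is indeed the generator of a contraction semigroup.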
See also
C0 semigroup
Lumer–Phillips theorem
Stone's theorem on one-parameter unitary groups
Notes
References
Semigroup theory
Theorems in functional analysis | Hille–Yosida theorem | [
"Mathematics"
] | 709 | [
"Theorems in mathematical analysis",
"Mathematical structures",
"Theorems in functional analysis",
"Fields of abstract algebra",
"Algebraic structures",
"Semigroup theory"
] |
1,578,822 | https://en.wikipedia.org/wiki/Hill%27s%20muscle%20model | In biomechanics, Hill's muscle model refers to the 3-element model consisting of a contractile element (CE) in series with a lightly-damped elastic spring element (SE) and in parallel with lightly-damped elastic parallel element (PE). Within this model, the estimated force-velocity relation for the CE element is usually modeled by what is commonly called Hill's equation, which was based on careful experiments involving tetanized muscle contraction where various muscle loads and associated velocities were measured. They were derived by the famous physiologist Archibald Vivian Hill, who by 1938 when he introduced this model and equation had already won the Nobel Prize for Physiology. He continued to publish in this area through 1970. There are many forms of the basic "Hill-based" or "Hill-type" models, with hundreds of publications having used this model structure for experimental and simulation studies. Most major musculoskeletal simulation packages make use of this model.
AV Hill's force-velocity equation for tetanized muscle
This is a popular state equation applicable to skeletal muscle that has been stimulated to show tetanic contraction. It relates tension to velocity with regard to the internal thermodynamics. The equation is

(v + b)(F + a) = b(F0 + a),

where

F is the tension (or load) in the muscle

v is the velocity of contraction

F0 is the maximum isometric tension (or load) generated in the muscle

a is the coefficient of shortening heat

b = a·v0/F0

v0 is the maximum velocity, when F = 0
Although Hill's equation looks very much like the van der Waals equation, the former has units of energy dissipation, while the latter has units of energy. Hill's equation demonstrates that the relationship between F and v is hyperbolic. Therefore, the higher the load applied to the muscle, the lower the contraction velocity. Similarly, the higher the contraction velocity, the lower the tension in the muscle. This hyperbolic form has been found to fit the empirical data only during isotonic contractions near resting length.
The muscle tension decreases as the shortening velocity increases. This feature has been attributed to two main causes. The major one appears to be the loss in tension as the cross bridges in the contractile element break and then re-form in a shortened condition. The second cause appears to be the fluid viscosity in both the contractile element and the connective tissue. Whatever the cause of the loss of tension, it is a viscous friction and can therefore be modeled as a fluid damper.
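Solving Hill's equation for the tension gives F(v) = b(F0 + a)/(v + b) − a, with maximum shortening velocity v0 = b·F0/a. The sketch below evaluates this hyperbolic force-velocity curve with normalized, illustrative constants (a/F0 = 0.25 is a commonly quoted order of magnitude, not a measured value):

```python
F0, A, B = 1.0, 0.25, 0.25  # normalized illustrative constants

def hill_force(v: float) -> float:
    """Tension F at shortening velocity v, from (v + B)(F + A) = B*(F0 + A)."""
    return B * (F0 + A) / (v + B) - A

v_max = B * F0 / A  # velocity at which force drops to zero (= 1.0 in these units)
for v in (0.0, 0.25, 0.5, v_max):
    print(f"v = {v:.2f}  ->  F = {hill_force(v):.3f}")
```

At v = 0 the function returns the full isometric tension F0, and at v = v_max it returns zero, reproducing the hyperbolic trade-off described above.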
Three-element model
The three-element Hill muscle model is a representation of the muscle mechanical response. The model is constituted by a contractile element (CE) and two non-linear spring elements, one in series (SE) and another in parallel (PE). The active force of the contractile element comes from the force generated by the actin and myosin cross-bridges at the sarcomere level. It is fully extensible when inactive but capable of shortening when activated. The connective tissues (fascia, epimysium, perimysium and endomysium) that surround the contractile element influence the muscle's force-length curve. The parallel element represents the passive force of these connective tissues and has a soft-tissue mechanical behavior. The parallel element is responsible for the muscle's passive behavior when it is stretched, even when the contractile element is not activated. The series element represents the tendon and the intrinsic elasticity of the myofilaments. It also has a soft-tissue response and provides an energy-storing mechanism.
The net force-length characteristic of a muscle is a combination of the force-length characteristics of both active and passive elements. The forces in the contractile element, in the series element and in the parallel element, F_CE, F_SE and F_PE, respectively, satisfy

F_SE = F_CE, F = F_PE + F_SE. (1, 2)

On the other hand, the muscle length L and the lengths L_CE, L_SE and L_PE of those elements satisfy

L = L_PE = L_CE + L_SE. (3)

During isometric contractions the series elastic component is under tension and therefore is stretched a finite amount. Because the overall length of the muscle is kept constant, the stretching of the series element can only occur if there is an equal shortening of the contractile element itself.
The forces in the parallel, series and contractile elements are defined by constitutive relations of the form

F = F0 f(ε), with the contractile-element force additionally scaled by an activation a, (4)

where the ε are strain measures for the different elements, defined in terms of the deformed muscle length L and the deformed length L_CE due to the motion of the contractile element (both from equation (3)) relative to the rest length of the muscle. The force term F0 is the peak isometric muscle force, and the functions f are empirical expressions whose constants are fitted to experiment.

The activation function a in equation (4) represents the muscle activation. It is defined by an ordinary differential equation of the form

da/dt = (u − a)/τ(a, u),

where the time constants in τ are related to the rise and decay of muscle activation and a_min is a minimum bound on a, all determined from experiments; u is the neural excitation that leads to muscle contraction.
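A minimal sketch of first-order activation dynamics of the kind just described, integrated with an explicit Euler step; the switching rule between rise and decay time constants, and all numbers, are common modeling choices shown for illustration, not parameters from a specific study:

```python
def activation_step(a: float, u: float, dt: float,
                    tau_rise: float = 0.01, tau_decay: float = 0.04,
                    a_min: float = 0.01) -> float:
    """One explicit-Euler step of da/dt = (u - a)/tau, with tau chosen by
    whether activation is rising (u > a) or decaying (u <= a)."""
    tau = tau_rise if u > a else tau_decay
    a += dt * (u - a) / tau
    return max(a, a_min)

# Example: a 0.2 s burst of full excitation followed by rest.
a, dt = 0.01, 0.001
trace = []
for i in range(400):
    u = 1.0 if i * dt < 0.2 else 0.0
    a = activation_step(a, u, dt)
    trace.append(a)
print(f"peak activation ~ {max(trace):.2f}, final ~ {trace[-1]:.2f}")
```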
Viscoelasticity
Muscles present viscoelasticity, therefore a viscous damper may be included in the model when the dynamics of the second-order critically damped twitch are regarded. One common model for muscular viscosity is an exponential-form damper, whose force term, governed by two constant parameters, is added to the model's global equation.
See also
Muscle contraction
References
Biomechanics
Equations
Exercise physiology | Hill's muscle model | [
"Physics",
"Mathematics"
] | 1,073 | [
"Biomechanics",
"Mathematical objects",
"Mechanics",
"Equations"
] |
1,579,423 | https://en.wikipedia.org/wiki/Reaction%20quotient | In chemical thermodynamics, the reaction quotient (Qr or just Q) is a dimensionless quantity that provides a measurement of the relative amounts of products and reactants present in a reaction mixture for a reaction with well-defined overall stoichiometry at a particular point in time. Mathematically, it is defined as the ratio of the activities (or molar concentrations) of the product species over those of the reactant species involved in the chemical reaction, taking stoichiometric coefficients of the reaction into account as exponents of the concentrations. In equilibrium, the reaction quotient is constant over time and is equal to the equilibrium constant.
A general chemical reaction in which α moles of a reactant A and β moles of a reactant B react to give ρ moles of a product R and σ moles of a product S can be written as
α A + β B ⇌ ρ R + σ S.
The reaction is written as an equilibrium even though, in many cases, it may appear that all of the reactants on one side have been converted to the other side. When any initial mixture of A, B, R, and S is made, and the reaction is allowed to proceed (either in the forward or reverse direction), the reaction quotient Qr, as a function of time t, is defined as

Qr(t) = ( {R}t^ρ {S}t^σ ) / ( {A}t^α {B}t^β ),

where {X}t denotes the instantaneous activity of a species X at time t.
A compact general definition is

Qr = ∏j aj(t)^(νj),

where ∏j denotes the product across all j-indexed variables, aj(t) is the activity of species j at time t, and νj is the stoichiometric number (the stoichiometric coefficient multiplied by +1 for products and −1 for starting materials).
Relationship to K (the equilibrium constant)
As the reaction proceeds with the passage of time, the species' activities, and hence the reaction quotient, change in a way that reduces the free energy of the chemical system. The direction of the change is governed by the Gibbs free energy of reaction by the relation

ΔrG = RT ln(Qr / K),

where K is a constant independent of initial composition, known as the equilibrium constant. The reaction proceeds in the forward direction (towards larger values of Qr) when ΔrG < 0 or in the reverse direction (towards smaller values of Qr) when ΔrG > 0. Eventually, as the reaction mixture reaches chemical equilibrium, the activities of the components (and thus the reaction quotient) approach constant values. The equilibrium constant is defined to be the asymptotic value approached by the reaction quotient:

K = Qr(t → ∞), at which point ΔrG = 0.
The timescale of this process depends on the rate constants of the forward and reverse reactions. In principle, equilibrium is approached asymptotically at t → ∞; in practice, equilibrium is considered to be reached, in a practical sense, when concentrations of the equilibrating species no longer change perceptibly with respect to the analytical instruments and methods used.
If a reaction mixture is initialized with all components having an activity of unity, that is, in their standard states, then

Qr = 1 and ΔrG = ΔrG° = −RT ln K.
This quantity, ΔrG°, is called the standard Gibbs free energy of reaction.
All reactions, regardless of how favorable, are equilibrium processes, though practically speaking, if no starting material is detected after a certain point by a particular analytical technique in question, the reaction is said to go to completion.
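The relations above are easy to apply numerically. The sketch below computes Qr from assumed activities for a hypothetical reaction A + 2B ⇌ R and evaluates ΔrG = RT ln(Qr/K) to decide which way the reaction proceeds; the activities and K are made-up example values.

```python
import math

R_GAS = 8.314  # J/(mol*K)

def reaction_quotient(activities: dict, nu: dict) -> float:
    """Q = product over species of a_j ** nu_j (nu > 0 products, nu < 0 reactants)."""
    q = 1.0
    for species, n in nu.items():
        q *= activities[species] ** n
    return q

# Hypothetical example: A + 2B <=> R.
nu = {"A": -1, "B": -2, "R": +1}
activities = {"A": 0.50, "B": 0.20, "R": 0.010}
K, T = 75.0, 298.15

Q = reaction_quotient(activities, nu)
dG = R_GAS * T * math.log(Q / K)  # J/mol
direction = "forward" if dG < 0 else "reverse"
print(f"Q = {Q:.3g}, dG = {dG/1000:.1f} kJ/mol -> reaction proceeds {direction}")
```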
In biochemistry
In biochemistry, the reaction quotient is often referred to as the mass-action ratio, with the symbol Γ.
Example
The burning of octane, C8H18 + 25/2 O2 → 8CO2 + 9H2O has a ΔrG° ~ –240 kcal/mol, corresponding to an equilibrium constant of 10175, a number so large that it is of no practical significance, since there are only ~5 × 1024 molecules in a kilogram of octane.
Significance and applications
The reaction quotient plays a crucial role in understanding the direction and extent of a chemical reaction's progress towards equilibrium:
Equilibrium condition: At equilibrium, the reaction quotient (Q) is equal to the equilibrium constant (K) for the reaction. This condition is represented as Q = K, indicating that the forward and reverse reaction rates are equal.
Predicting reaction direction: If Q < K, the reaction will proceed in the forward direction to establish equilibrium. If Q > K, the reaction will proceed in the reverse direction to reach equilibrium.
Extent of reaction: The difference between Q and K provides information about how far the reaction is from equilibrium. A larger difference indicates a greater driving force for the reaction to proceed towards equilibrium.
Reaction kinetics: The reaction quotient can be used to study the kinetics of reversible reactions and determine rate laws, as it is related to the concentrations of reactants and products at any given time.
Equilibrium constant determination: By measuring the concentrations of reactants and products at equilibrium, the equilibrium constant (K) can be calculated from the reaction quotient (Q = K at equilibrium).
The reaction quotient is a powerful concept in chemical kinetics and thermodynamics, enabling the prediction of reaction directions, the extent of reaction progress, and the determination of equilibrium constants. It finds applications in various fields, including chemical engineering, biochemistry, and environmental chemistry, where understanding the behavior of reversible reactions is crucial.
References
External links
Reaction quotient tutorials
tutorial I No longer accessible as of November 2023
tutorial II
tutorial III
Equilibrium chemistry
Physical chemistry | Reaction quotient | [
"Physics",
"Chemistry"
] | 1,170 | [
"Equilibrium chemistry",
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
1,579,458 | https://en.wikipedia.org/wiki/Pargeting | Pargeting (or sometimes pargetting) is a decorative or waterproof plastering applied to building walls. The term, if not the practice, is particularly associated with the English counties of Suffolk and Essex. In the neighbouring county of Norfolk, the term "pinking" is used.
Patrick Leigh Fermor describes similar decorations on pre-World War II buildings in Linz, Austria. "Pargeted façades rose up, painted chocolate, green, purple, cream and blue. They were adorned with medallions in high relief and the stone and plaster scroll-work gave them a feeling of motion and flow."
Pargeting derives from the word 'parget', a Middle English term that is probably derived from the Old French pargeter or parjeter, to throw about, or porgeter, to roughcast a wall. However, the term is more usually applied only to the decoration in relief of the plastering between the studwork on the outside of half-timber houses, or sometimes covering the whole wall.
The devices were stamped on the wet plaster. This seems generally to have been done by sticking a number of pins in a board in certain lines or curves, and then pressing on the wet plaster in various directions, so as to form geometrical figures. Sometimes these devices are in relief, and in the time of Elizabeth I of England represent figures, birds and foliage. Fine examples can be seen at Ipswich, Maidstone, and Newark-on-Trent.
The term is also applied to the lining of the inside of smoke flues to form an even surface for the passage of the smoke.
See also
Harl
Parge coat
Plasterwork
Yeseria
References
External links
Architectural elements
Plastering
Wallcoverings | Pargeting | [
"Chemistry",
"Technology",
"Engineering"
] | 353 | [
"Building engineering",
"Coatings",
"Architecture",
"Architectural elements",
"Plastering",
"Components"
] |
1,579,816 | https://en.wikipedia.org/wiki/Impeller | An impeller, or impellor, is a driven rotor used to increase the pressure and flow of a fluid. It is the opposite of a turbine, which extracts energy from, and reduces the pressure of, a flowing fluid.
Strictly speaking, propellers are a sub-class of impellers where the flow both enters and leaves axially, but in many contexts the term "impeller" is reserved for non-propeller rotors where the flow enters axially and leaves radially, especially when creating suction in a pump or compressor.
In pumps
An impeller is a rotating component of a centrifugal pump that accelerates fluid outward from the center of rotation, thus transferring energy from the motor that drives the pump to the fluid being pumped. The velocity achieved by the impeller transfers into pressure when the outward movement of the fluid is confined by the pump casing. An impeller is usually a short cylinder with an open inlet (called an eye) to accept incoming fluid, vanes to push the fluid radially, and a splined, keyed, or threaded bore to accept a drive shaft.
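A small illustration of how impeller speed maps to pump performance: the sketch below applies the standard pump affinity laws for a fixed impeller diameter (flow scales with speed, head with speed squared, power with speed cubed) to rescale a known operating point. The baseline duty point is invented for the example.

```python
def scale_operating_point(q, h, p, n_old, n_new):
    """Pump affinity laws for a fixed impeller diameter: Q ~ N, H ~ N^2, P ~ N^3."""
    r = n_new / n_old
    return q * r, h * r**2, p * r**3

# Invented baseline: 100 m^3/h at 30 m head, 12 kW, running at 1450 rpm.
q2, h2, p2 = scale_operating_point(100.0, 30.0, 12.0, 1450, 1750)
print(f"at 1750 rpm: Q = {q2:.0f} m^3/h, H = {h2:.1f} m, P = {p2:.1f} kW")
```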
It can be cheaper to cast an impeller and its spindle as one piece, rather than separately. This combination is sometimes referred to simply as the "rotor."
Types
Open
An open impeller has a hub with attached vanes and is mounted on a shaft. The vanes do not have a wall, making open impellers slightly weaker than closed or semi-closed impellers. However, as the side plate is not fixed to the inlet side of the vane, the blade stresses are significantly lower. In pumps, the fluid enters the impeller's eye, where vanes add energy and direct it to the nozzle discharge. A close clearance between the vanes and the pump volute or back plate prevents most of the fluid from flowing back. Wear on the bowl and on the edges of the vanes can be compensated for by adjusting the clearance, maintaining efficiency over time. Because the internal parts are visible, open impellers are easier to inspect for damage and maintain than closed impellers. They can also be more easily modified to change flow properties. Open impellers operate over a narrow range of specific speed. Open impellers are usually faster and easier to maintain. For small pumps and those dealing with suspended solids, open impellers are generally used. Sand locking does not occur as easily as with the closed type.
Semi-closed
A semi-closed impeller has an additional back wall, giving it more strength. These impellers can pass mixed solid-liquid mixtures at the cost of reduced efficiency.
Closed or shrouded
The construction of closed impellers includes additional back and front walls on both sides of the vanes, which enhances strength. This also reduces the thrust load on the shaft, increasing bearing life and reliability and reducing shafting cost. However, this more complicated design, including the use of additional wear rings, makes closed impellers more difficult to manufacture and more expensive than open impellers. A closed impeller's efficiency decreases as the wear ring clearance increases with use, and adjustment of the impeller bowl clearance does not compensate for wear on the vanes as effectively as it does for an open impeller. Closed impellers can be used over a wider range of specific speeds than open impellers. They are generally used in large pumps and clear-water applications. These impellers cannot handle solids effectively and become difficult to clean if clogged.
Screw
The screw impeller design uses an axially progressive channel that allows solids to be handled openly as the impeller rotates.
In centrifugal compressors
The main part of a centrifugal compressor is the impeller. An open impeller has no cover and can therefore work at higher speeds. A compressor with a covered impeller can have more stages than one with an open impeller.
In water jets
Some impellers are similar to small propellers but without the large blades. Among other uses, they are used in water jets to power high speed boats.
Because impellers do not have large blades to turn, they can spin at much higher speeds than propellers. The water forced through the impeller is channeled by the housing, creating a water jet that propels the vessel forward. The housing is normally tapered into a nozzle to increase the speed of the water, which also creates a Venturi effect in which low pressure behind the impeller pulls more water towards the blades, tending to increase the speed.
To work efficiently, there must be a close fit between the impeller and the housing. The housing is normally fitted with a replaceable wear ring which tends to wear as sand or other particles are thrown against the housing side by the impeller.
Vessels using impellers are normally steered by changing the direction of the water jet.
Compare to propeller and jet aircraft engines.
In agitated tanks
Impellers in agitated tanks are used to mix fluids or slurry in the tank. This can be used to combine materials in the form of solids, liquids and gas. Mixing the fluids in a tank is very important if there are gradients in conditions such as temperature or concentration.
There are two types of impellers, depending on the flow regime created:
Axial flow impeller
Radial flow impeller
Radial flow impellers impose essentially shear stress on the fluid, and are used, for example, to mix immiscible liquids or, in general, whenever there is a deformable interface to break. Another application of radial flow impellers is the mixing of very viscous fluids.
Axial flow impellers impose essentially bulk motion and are used in homogenization processes, in which an increased fluid volumetric flow rate is important.
Impellers can be further classified principally into three sub-types:
Propeller
Paddles
Turbines
Propellers
Propellers are axial thrust-giving elements. These elements produce a very high degree of swirl in the vessel. The flow pattern generated in the fluid resembles a helix.
In washing machines
Some constructions of top loading washing machines use impellers to agitate the laundry during washing.
Firefighting rank badge
Fire services in the United Kingdom and many countries of the Commonwealth use a stylized depiction of an impeller as a rank badge. Officers wear one or more on their epaulettes or the collar of their firefighting uniform as an equivalent to the "pips" worn by the army and police.
In air pumps
Air pumps, such as the Roots blower, use meshing impellers to move air through a system. Applications include blast furnaces, ventilation systems, and superchargers for internal combustion engines.
In medicine
Impellers are an integral part of axial-flow pumps, used in ventricular assist devices to augment or fully replace cardiac function.
See also
Axial fan design
Bladelet (impeller)
Centrifugal fan
Rim-driven thruster
Turbine
References
Pumps
Marine propulsion
Fluid dynamics
cs:Rotor | Impeller | [
"Physics",
"Chemistry",
"Engineering"
] | 1,410 | [
"Pumps",
"Turbomachinery",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Marine engineering",
"Piping",
"Marine propulsion",
"Fluid dynamics"
] |
5,594,272 | https://en.wikipedia.org/wiki/Slush | Slush, also called slush ice, is a slurry mixture of small ice crystals (e.g. snow) and liquid water.
In the natural environment, slush forms when ice or snow melts or during mixed precipitation. This often mixes with dirt and other pollutants on the surface, resulting in a gray or muddy brown color. Often, solid ice or snow can block the drainage of fluid water from slushy areas, so slush often goes through multiple freeze/thaw cycles before being able to completely drain and disappear.
In areas where road salt is used to clear roadways, slush forms at lower temperatures in salted areas than it would ordinarily. This can produce a number of different consistencies over the same geographical area with scattered salted areas covered with slush and others covered with frozen precipitation.
Hazards
Slush behaves like a non-Newtonian fluid: it acts as a mostly solid mass until its internal shear forces rise beyond a specific threshold, at which point it can very suddenly become fluid. This makes its behavior very difficult to predict, and it is the underlying mechanism of slush avalanches, whose unpredictability gives slush a hidden potential to become a natural hazard if caution is not exercised.
Slush can also be a problem on aircraft runways, since excess slush acting on the aircraft's wheels has a retarding effect during takeoff, which can cause an accident such as the Munich air disaster. Slush on roads can also make them slippery and increase braking distances for cars and trucks, raising the risk of rear-end collisions and other road accidents.
Slush can refreeze and become hazardous to vehicles and pedestrians.
In some cases though, slush can be beneficial. When snow hits the slush, it partially melts and also becomes slush on contact. This prevents roads from becoming too congested with snow or sleet.
References
Snow or ice weather phenomena
Forms of water
Water ice | Slush | [
"Physics",
"Chemistry"
] | 402 | [
"Forms of water",
"Phases of matter",
"Matter"
] |
5,595,981 | https://en.wikipedia.org/wiki/P21 | p21Cip1 (alternatively p21Waf1), also known as cyclin-dependent kinase inhibitor 1 or CDK-interacting protein 1, is a cyclin-dependent kinase inhibitor (CKI) that is capable of inhibiting all cyclin/CDK complexes, though is primarily associated with inhibition of CDK2. p21 represents a major target of p53 activity and thus is associated with linking DNA damage to cell cycle arrest. This protein is encoded by the CDKN1A gene located on chromosome 6 (6p21.2) in humans.
Function
CDK inhibition
p21 is a potent cyclin-dependent kinase inhibitor (CKI). The p21 (CIP1/WAF1) protein binds to and inhibits the activity of cyclin-CDK2, -CDK1, and -CDK4/6 complexes, and thus functions as a regulator of cell cycle progression at the G1 and S phases. The binding of p21 to CDK complexes occurs through p21's N-terminal domain, which is homologous to the other CIP/KIP CDK inhibitors p27 and p57. Specifically, it contains a Cy1 motif in the N-terminal half and a weaker Cy2 motif in the C-terminal domain, which allow it to bind CDK in a region that blocks its ability to complex with cyclins and thus prevents CDK activation.
Experiments examining CDK2 activity within single cells have also shown p21 to be responsible for a bifurcation in CDK2 activity following mitosis: cells with high p21 enter a G0/quiescent state, whilst those with low p21 continue to proliferate. Follow-up work found evidence that this bistability is underpinned by double-negative feedback between p21 and CDK2, in which CDK2 inhibits p21 via ubiquitin ligase activity.
PCNA inhibition
p21 interacts with proliferating cell nuclear antigen (PCNA), a DNA polymerase accessory factor, and plays a regulatory role in S phase DNA replication and DNA damage repair. Specifically, p21 has a high affinity for the PIP-box binding region on PCNA, binding of p21 to this region is proposed to block the binding of processivity factors necessary for PCNA dependent S-phase DNA synthesis, but not PCNA dependent nucleotide excision repair (NER). As such, p21 acts as an effective inhibitor of S-phase DNA synthesis though permits NER, leading to the proposal that p21 acts to preferentially select polymerase processivity factors depending on the context of DNA synthesis.
Apoptosis inhibition
This protein was reported to be specifically cleaved by CASP3-like caspases, which leads to a dramatic activation of CDK2 and may be instrumental in the execution of apoptosis following caspase activation. However, p21 may inhibit apoptosis and does not induce cell death on its own. The ability of p21 to inhibit apoptosis in response to replication fork stress has also been reported.
Regulation
p53 dependent response
Studies of p53-dependent cell cycle arrest in response to DNA damage identified p21 as the primary mediator of downstream cell cycle arrest. Notably, El-Deiry et al. identified a protein, p21 (WAF1), which was present in cells expressing wild-type p53 but not in those with mutant p53; moreover, constitutive expression of p21 led to cell cycle arrest in a number of cell types. Dulcic et al. also found that γ-irradiation of fibroblasts induced a p53- and p21-dependent cell cycle arrest, with p21 found bound to inactive cyclin E/CDK2 complexes. Working in mouse models, it was also shown that whilst mice lacking p21 were healthy, spontaneous tumours developed and G1 checkpoint control was compromised in cells derived from these mice. Taken together, these studies defined p21 as the primary mediator of p53-dependent cell cycle arrest in response to DNA damage.
Recent work exploring p21 activation in response to DNA damage at the single-cell level has demonstrated that pulsatile p53 activity leads to subsequent pulses of p21, and that the strength of p21 activation is cell cycle phase dependent. Moreover, studies of p21 levels in populations of cycling cells not exposed to DNA-damaging agents have shown that DNA damage occurring in mother-cell S-phase can induce p21 accumulation over both mother G2 and daughter G1 phases, which subsequently induces cell cycle arrest; this is responsible for the bifurcation in CDK2 activity observed by Spencer et al.
Degradation
p21 is negatively regulated by ubiquitin ligases both over the course of the cell cycle and in response to DNA damage. Specifically, over the G1/S transition it has been demonstrated that the E3 ubiquitin ligase complex SCFSkp2 induces degradation of p21. Studies have also demonstrated that the E3 ubiquitin ligase complex CRL4Cdt2 degrades p21 in a PCNA dependent manner over S-phase, necessary to prevent p21 dependent re-replication, as well as in response to UV irradiation. Recent work has now found that in human cell lines SCFSkp2 degrades p21 towards the end of G1 phase, allowing cells to exit a quiescent state, whilst CRL4Cdt2 acts to degrade p21 at a much higher rate than SCFSkp2 over the G1/S transition and subsequently maintain low levels of p21 throughout S-phase.
Clinical significance
Cytoplasmic p21 expression can be significantly correlated with lymph node metastasis, distant metastases, advanced TNM stage (a cancer staging classification that stands for tumor size, nearby lymph node involvement, and distant metastasis), depth of invasion, and overall survival (OS). A study on immunohistochemical markers in malignant thymic epithelial tumors showed that p21 expression negatively influenced survival and significantly correlated with WHO (World Health Organization) type B2/B3. When combined with low p27 and high p53, disease-free survival (DFS) decreases.
p21 mediates the resistance of hematopoietic cells to an infection with HIV by complexing with the HIV integrase and thereby aborting chromosomal integration of the provirus. HIV infected individuals who naturally suppress viral replication have elevated levels of p21 and its associated mRNA. p21 expression affects at least two stages in the HIV life cycle inside CD4 T cells, significantly limiting production of new viruses.
Metastatic canine mammary tumors display increased levels of p21 in the primary tumors but also in their metastases, despite increased cell proliferation.
Mice that lack the p21 gene gain the ability to regenerate lost appendages.
Interactions
P21 has been shown to interact with:
Nrf2
BCCIP,
CIZ1,
CUL4A,
CCNE1,
CDK,
DDB1,
DTL,
GADD45A,
GADD45G,
HDAC,
PCNA,
PIM1,
TK1, and
TSG101.
References
Further reading
External links
Drosophila dacapo - The Interactive Fly
Cell cycle regulators
Tumor suppressor genes | P21 | [
"Chemistry"
] | 1,539 | [
"Cell cycle regulators",
"Signal transduction"
] |
5,599,330 | https://en.wikipedia.org/wiki/Sensitivity%20and%20specificity | In medicine and statistics, sensitivity and specificity mathematically describe the accuracy of a test that reports the presence or absence of a medical condition. If individuals who have the condition are considered "positive" and those who do not are considered "negative", then sensitivity is a measure of how well a test can identify true positives and specificity is a measure of how well a test can identify true negatives:
Sensitivity (true positive rate) is the probability of a positive test result, conditioned on the individual truly being positive.
Specificity (true negative rate) is the probability of a negative test result, conditioned on the individual truly being negative.
If the true status of the condition cannot be known, sensitivity and specificity can be defined relative to a "gold standard test" which is assumed correct. For all testing, both diagnostic and screening, there is usually a trade-off between sensitivity and specificity, such that higher sensitivities will mean lower specificities and vice versa.
A test which reliably detects the presence of a condition, resulting in a high number of true positives and low number of false negatives, will have a high sensitivity. This is especially important when the consequence of failing to treat the condition is serious and/or the treatment is very effective and has minimal side effects.
A test which reliably excludes individuals who do not have the condition, resulting in a high number of true negatives and low number of false positives, will have a high specificity. This is especially important when people who are identified as having a condition may be subjected to more testing, expense, stigma, anxiety, etc.
The terms "sensitivity" and "specificity" were introduced by American biostatistician Jacob Yerushalmy in 1947.
There are different definitions within laboratory quality control, wherein "analytical sensitivity" is defined as the smallest amount of substance in a sample that can accurately be measured by an assay (synonymously to detection limit), and "analytical specificity" is defined as the ability of an assay to measure one particular organism or substance, rather than others. However, this article deals with diagnostic sensitivity and specificity as defined at top.
Application to screening study
Imagine a study evaluating a test that screens people for a disease. Each person taking the test either has or does not have the disease. The test outcome can be positive (classifying the person as having the disease) or negative (classifying the person as not having the disease). The test results for each subject may or may not match the subject's actual status. In that setting:
True positive: Sick people correctly identified as sick
False positive: Healthy people incorrectly identified as sick
True negative: Healthy people correctly identified as healthy
False negative: Sick people incorrectly identified as healthy
After getting the numbers of true positives, false positives, true negatives, and false negatives, the sensitivity and specificity for the test can be calculated. If it turns out that the sensitivity is high then any person who has the disease is likely to be classified as positive by the test. On the other hand, if the specificity is high, any person who does not have the disease is likely to be classified as negative by the test. An NIH web site has a discussion of how these ratios are calculated.
Definition
Sensitivity
Consider the example of a medical test for diagnosing a condition. Sensitivity (sometimes also named the detection rate in a clinical setting) refers to the test's ability to correctly detect ill patients out of those who do have the condition. Mathematically, this can be expressed as:

sensitivity = TP / (TP + FN) = (number of true positives) / (number of individuals with the condition)
A negative result in a test with high sensitivity can be useful for "ruling out" disease, since it rarely misdiagnoses those who do have the disease. A test with 100% sensitivity will recognize all patients with the disease by testing positive. In this case, a negative test result would definitively rule out the presence of the disease in a patient. However, a positive result in a test with high sensitivity is not necessarily useful for "ruling in" disease. Suppose a 'bogus' test kit is designed to always give a positive reading. When used on diseased patients, all patients test positive, giving the test 100% sensitivity. However, sensitivity does not take into account false positives. The bogus test also returns positive on all healthy patients, giving it a false positive rate of 100%, rendering it useless for detecting or "ruling in" the disease.
The calculation of sensitivity does not take into account indeterminate test results.
If a test cannot be repeated, indeterminate samples either should be excluded from the analysis (the number of exclusions should be stated when quoting sensitivity) or can be treated as false negatives (which gives the worst-case value for sensitivity and may therefore underestimate it).
A test with a higher sensitivity has a lower type II error rate.
Specificity
Consider the example of a medical test for diagnosing a disease. Specificity refers to the test's ability to correctly reject healthy patients without a condition. Mathematically, this can be written as:

specificity = TN / (TN + FP) = (number of true negatives) / (number of individuals without the condition)
A positive result in a test with high specificity can be useful for "ruling in" disease, since the test rarely gives positive results in healthy patients. A test with 100% specificity will recognize all patients without the disease by testing negative, so a positive test result would definitively rule in the presence of the disease. However, a negative result from a test with high specificity is not necessarily useful for "ruling out" disease. For example, a test that always returns a negative test result will have a specificity of 100% because specificity does not consider false negatives. A test like that would return negative for patients with the disease, making it useless for "ruling out" the disease.
A test with a higher specificity has a lower type I error rate.
Graphical illustration
The above graphical illustration is meant to show the relationship between sensitivity and specificity. The black, dotted line in the center of the graph is where the sensitivity and specificity are the same. As one moves to the left of the black dotted line, the sensitivity increases, reaching its maximum value of 100% at line A, and the specificity decreases. The sensitivity at line A is 100% because at that point there are zero false negatives, meaning that all the negative test results are true negatives. When moving to the right, the opposite applies, the specificity increases until it reaches the B line and becomes 100% and the sensitivity decreases. The specificity at line B is 100% because the number of false positives is zero at that line, meaning all the positive test results are true positives.
The middle solid line in both figures above showing the level of sensitivity and specificity is the test cut-off point. As previously described, moving this line results in a trade-off between the levels of sensitivity and specificity. The left-hand side of this line contains the data points that test below the cut-off point and are considered negative (the blue dots indicate the false negatives (FN), the white dots true negatives (TN)). The right-hand side of the line shows the data points that test above the cut-off point and are considered positive (red dots indicate false positives (FP)). Each side contains 40 data points.
For the figure that shows high sensitivity and low specificity, there are 3 FN and 8 FP. Using the fact that positive results = true positives (TP) + FP, we get TP = positive results - FP, or TP = 40 - 8 = 32. The number of sick people in the data set is equal to TP + FN, or 32 + 3 = 35. The sensitivity is therefore 32 / 35 = 91.4%. Using the same method, we get TN = 40 - 3 = 37, and the number of healthy people 37 + 8 = 45, which results in a specificity of 37 / 45 = 82.2 %.
For the figure that shows low sensitivity and high specificity, there are 8 FN and 3 FP. Using the same method as the previous figure, we get TP = 40 - 3 = 37. The number of sick people is 37 + 8 = 45, which gives a sensitivity of 37 / 45 = 82.2 %. There are 40 - 8 = 32 TN. The specificity therefore comes out to 32 / 35 = 91.4%.
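The worked figures above can be checked with a short script. This is a minimal sketch in Python; the counts (40 predicted positive, 40 predicted negative, and the stated FN/FP) are taken directly from the text, and the function names are illustrative only.

```python
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# High-sensitivity / low-specificity figure: 3 FN and 8 FP.
tp, fn = 40 - 8, 3     # positive results = TP + FP  ->  TP = 40 - FP = 32
tn, fp = 40 - 3, 8     # negative results = TN + FN  ->  TN = 40 - FN = 37
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 91.4%
print(f"specificity = {specificity(tn, fp):.1%}")  # 82.2%
```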
The red dot indicates the patient with the medical condition. The red background indicates the area where the test predicts the data point to be positive. The number of true positives in this figure is 6, and the number of false negatives is 0 (because every positive condition is correctly predicted as positive). Therefore, the sensitivity is 100% (from sensitivity = TP / (TP + FN)). This situation is also illustrated in the previous figure where the dotted line is at position A (the left-hand side is predicted as negative by the model, the right-hand side is predicted as positive by the model). When the dotted line, the test cut-off line, is at position A, the test correctly predicts the entire population of the true positive class, but it will fail to correctly identify data points from the true negative class.
Similar to the previously explained figure, the red dot indicates the patient with the medical condition. However, in this case, the green background indicates that the test predicts that all patients are free of the medical condition. The number of true negative data points is then 26, and the number of false positives is 0. This results in 100% specificity (from specificity = TN / (TN + FP)). Therefore, sensitivity or specificity alone cannot be used to measure the performance of the test.
Medical usage
In medical diagnosis, test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate), whereas test specificity is the ability of the test to correctly identify those without the disease (true negative rate).
If 100 patients known to have a disease were tested, and 43 test positive, then the test has 43% sensitivity. If 100 with no disease are tested and 96 return a completely negative result, then the test has 96% specificity. Sensitivity and specificity are prevalence-independent test characteristics, as their values are intrinsic to the test and do not depend on the disease prevalence in the population of interest. Positive and negative predictive values, but not sensitivity or specificity, are values influenced by the prevalence of disease in the population that is being tested. These concepts are illustrated graphically in this applet, Bayesian clinical diagnostic model, which shows the positive and negative predictive values as a function of the prevalence, sensitivity and specificity.
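The prevalence dependence of the predictive values can be illustrated with a small calculation. The sketch below applies Bayes' theorem to the 43%-sensitive, 96%-specific test from the example above at two hypothetical prevalences; the numbers are illustrative, not from a real study.

```python
def predictive_values(sens, spec, prevalence):
    """Return (PPV, NPV) via Bayes' theorem."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Same test at two prevalences: at 1% prevalence the PPV is only ~10%,
# while at 30% prevalence it rises to ~82%.
for p in (0.01, 0.30):
    ppv, npv = predictive_values(0.43, 0.96, p)
    print(f"prevalence {p:.0%}: PPV = {ppv:.1%}, NPV = {npv:.1%}")
```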
Misconceptions
It is often claimed that a highly specific test is effective at ruling in a disease when positive, while a highly sensitive test is deemed effective at ruling out a disease when negative. This has led to the widely used mnemonics SPPIN and SNNOUT, according to which a highly specific test, when positive, rules in disease (SP-P-IN), and a highly sensitive test, when negative, rules out disease (SN-N-OUT). Both rules of thumb are, however, inferentially misleading, as the diagnostic power of any test is determined by the prevalence of the condition being tested, the test's sensitivity and its specificity. The SNNOUT mnemonic has some validity when the prevalence of the condition in question is extremely low in the tested sample.
The tradeoff between specificity and sensitivity is explored in ROC analysis as a trade off between TPR and FPR (that is, recall and fallout). Giving them equal weight optimizes informedness = specificity + sensitivity − 1 = TPR − FPR, the magnitude of which gives the probability of an informed decision between the two classes (> 0 represents appropriate use of information, 0 represents chance-level performance, < 0 represents perverse use of information).
Sensitivity index
The sensitivity index or d′ (pronounced "dee-prime") is a statistic used in signal detection theory. It provides the separation between the means of the signal and the noise distributions, compared against the standard deviation of the signal and noise distributions. For normally distributed signal and noise with means and standard deviations μS and σS, and μN and σN, respectively, d′ is defined as:

d′ = (μS − μN) / sqrt((σS² + σN²) / 2)

An estimate of d′ can also be found from measurements of the hit rate and false-alarm rate. It is calculated as:
d′ = Z(hit rate) − Z(false alarm rate),
where function Z(p), p ∈ [0, 1], is the inverse of the cumulative Gaussian distribution.
d′ is a dimensionless statistic. A higher d′ indicates that the signal can be more readily detected.
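The estimate of d′ from hit and false-alarm rates is straightforward to compute. A minimal sketch using the inverse cumulative Gaussian from the Python standard library; the example rates are hypothetical.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # Z(p) is the inverse of the cumulative Gaussian distribution.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

print(d_prime(0.85, 0.10))  # ≈ 2.32: signal well separated from noise
```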
Confusion matrix
The relationship between sensitivity, specificity, and similar terms can be understood using the following table. Consider a group with P positive instances and N negative instances of some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:

                          Condition positive     Condition negative
  Test outcome positive   True positive (TP)     False positive (FP)
  Test outcome negative   False negative (FN)    True negative (TN)

with sensitivity = TP / P = TP / (TP + FN) and specificity = TN / N = TN / (TN + FP); several other metrics can be derived from the four outcomes.
Estimation of errors in quoted sensitivity or specificity
Sensitivity and specificity values alone may be highly misleading. The 'worst-case' sensitivity or specificity must be calculated in order to avoid reliance on experiments with few results. For example, a particular test may easily show 100% sensitivity if tested against the gold standard four times, but a single additional test against the gold standard that gave a poor result would imply a sensitivity of only 80%. A common way to do this is to state the binomial proportion confidence interval, often calculated using a Wilson score interval.
Confidence intervals for sensitivity and specificity can be calculated, giving the range of values within which the correct value lies at a given confidence level (e.g., 95%).
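As a sketch of the interval calculation described above, the following implements the Wilson score interval directly (assuming the usual normal-approximation constant z = 1.96 for 95% confidence); applying it to the four-of-four example shows how wide the interval around a quoted 100% sensitivity can be.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Four of four positives against the gold standard: the point estimate is
# 100% sensitivity, but the interval shows how little that proves.
print(wilson_interval(4, 4))   # roughly (0.51, 1.0)
```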
Terminology in information retrieval
In information retrieval, the positive predictive value is called precision, and sensitivity is called recall. Unlike the Specificity vs Sensitivity tradeoff, these measures are both independent of the number of true negatives, which is generally unknown and much larger than the actual numbers of relevant and retrieved documents. This assumption of very large numbers of true negatives versus positives is rare in other applications.
The F-score can be used as a single measure of performance of the test for the positive class. The F-score is the harmonic mean of precision and recall:

F = 2 × (precision × recall) / (precision + recall)
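For illustration, a minimal calculation of the F-score from hypothetical retrieval counts:

```python
def f_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f_score(tp=30, fp=10, fn=20))  # precision 0.75, recall 0.60 -> F = 2/3
```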
In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word power in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer Type II errors.
Terminology in genome analysis
Similarly to the domain of information retrieval, in the research area of gene prediction, the number of true negatives (non-genes) in genomic sequences is generally unknown and much larger than the actual number of genes (true positives). The convenient and intuitively understood term specificity in this research area has been frequently used with the mathematical formula for precision and recall as defined in biostatistics. The pair of thus defined specificity (as positive predictive value) and sensitivity (true positive rate) represent major parameters characterizing the accuracy of gene prediction algorithms.
Conversely, the term specificity in a sense of true negative rate would have little, if any, application in the genome analysis research area.
See also
Notes
References
Further reading
External links
UIC Calculator
Vassar College's Sensitivity/Specificity Calculator
MedCalc Free Online Calculator
Bayesian clinical diagnostic model applet
Accuracy and precision
Bioinformatics
Biostatistics
Cheminformatics
Medical statistics
Statistical ratios
Statistical classification | Sensitivity and specificity | [
"Chemistry",
"Engineering",
"Biology"
] | 3,214 | [
"Biological engineering",
"Bioinformatics",
"Computational chemistry",
"nan",
"Cheminformatics"
] |
5,599,414 | https://en.wikipedia.org/wiki/Heteroduplex%20analysis | Heteroduplex analysis (HDA) is a method in biochemistry, used since 1992, to detect point mutations in DNA (deoxyribonucleic acid). Heteroduplexes are dsDNA molecules that contain one or more mismatched base pairs, whereas homoduplexes are dsDNA molecules that are perfectly paired. The method depends on the fact that heteroduplexes show reduced electrophoretic mobility relative to homoduplex DNA. Heteroduplexes are formed between different DNA alleles: in a mixture of wild-type and mutant amplified DNA, heteroduplexes form where mutant and wild-type strands anneal, while homoduplexes re-form between strands of the same allele. There are two types of heteroduplexes, depending on the type and extent of the mutation in the DNA. Small deletions or insertions create bulge-type heteroduplexes, which are stable and have been verified by electron microscopy. Single base substitutions create less stable heteroduplexes called bubble-type heteroduplexes; because of their low stability, they are difficult to visualize by electron microscopy. HDA is widely used for rapid screening for the 3 bp p.F508del deletion in the CFTR gene.
References
Biochemistry methods
Biochemistry
Molecular biology | Heteroduplex analysis | [
"Chemistry",
"Biology"
] | 265 | [
"Biochemistry methods",
"Biochemistry",
"nan",
"Molecular biology"
] |
5,599,416 | https://en.wikipedia.org/wiki/Annexin%20A5%20affinity%20assay | In molecular biology, an annexin A5 affinity assay is a test to quantify the number of cells undergoing apoptosis. The assay uses the protein annexin A5 to tag apoptotic and dead cells, and the numbers are then counted using either flow cytometry or a fluorescence microscope.
The annexin A5 protein binds to apoptotic cells in a calcium-dependent manner, recognizing phosphatidylserine-containing membrane surfaces that are normally present only on the inner leaflet of the cell membrane.
Background
Apoptosis is a form of programmed cell death that is used by the body to remove unwanted, damaged, or senescent cells from tissues. Removal of apoptotic cells is carried out via phagocytosis by white blood cells such as macrophages and dendritic cells. Phagocytic white blood cells recognize apoptotic cells by their exposure of negatively charged phospholipids (phosphatidylserine) on the cell surface.
In normal cells, the negative phospholipids reside on the inner side of the cellular membrane while the outer surface of the membrane is occupied by uncharged phospholipids. After a cell has entered apoptosis, the negatively charged phospholipids are transported to the outer cell surface by a hypothetical protein known as scramblase. Phagocytic white blood cells express a receptor that can bind to and detect the negatively charged phospholipids on the apoptotic cell surfaces. After detection the apoptotic cells are removed.
Detection of cell death with annexin A5
In healthy individuals, apoptotic cells are rapidly removed by phagocytes. However, in pathological processes, the removal of apoptotic cells may be delayed or even absent. Dying cells in tissue can be detected with annexin A5. Labeling annexin A5 with fluorescent or radioactive molecules makes it possible to detect binding of labeled annexin A5 to the cell surface of apoptotic cells. After binding to the phospholipid surface, annexin A5 assembles into a trimeric cluster. This trimer consists of three annexin A5 molecules bound to each other via non-covalent protein-protein interactions. The formation of annexin A5 trimers results in a two-dimensional crystal lattice on the phospholipid membrane. This clustering of annexin A5 on the membrane greatly increases the signal intensity when annexin A5 is labeled with a fluorescent or radioactive probe. Two-dimensional crystal formation is believed to cause internalization of annexin A5 through a novel process of endocytosis if it occurs on cells that are in the early phase of executing cell death. Internalization additionally amplifies the signal of the annexin A5-stained cell.
Annexin A5 has been used to successfully detect apoptotic cells in vitro and in vivo. Pathological processes in which apoptosis occurs include inflammation, ischemic damage of the heart caused by myocardial infarction, apoptotic white blood cells and smooth muscle cells present in atherosclerotic plaques of blood vessels, transplanted organs that are rejected by the recipient's immune system, and tumour cells exposed to cytostatic drugs during chemotherapy.
The non-invasive detection of diseased tissue with, for example, radioactively labeled annexin A5 is the goal of a recently developed line of research known as Molecular Imaging.
Molecular Imaging of cell death using radioactive annexin A5 can become of clinical significance to diagnose vulnerability of atherosclerotic plaques (unstable atherosclerosis), heart failure, transplant rejection, and to monitor efficacy of anti-cancer therapy.
References
Laboratory techniques
Flow cytometry | Annexin A5 affinity assay | [
"Chemistry",
"Biology"
] | 794 | [
"Flow cytometry",
"nan"
] |
7,300,829 | https://en.wikipedia.org/wiki/Pi%20Josephson%20junction | A Josephson junction (JJ) is a quantum mechanical device which is made of two superconducting electrodes separated by a barrier (thin insulating tunnel barrier, normal metal, semiconductor, ferromagnet, etc.).
A π Josephson junction is a Josephson junction in which the Josephson phase φ equals π in the ground state, i.e. when no external current or magnetic field is applied.
Background
The supercurrent Is through a Josephson junction is generally given by Is = Ic sin(φ),
where φ is the phase difference of the superconducting wave functions of the two
electrodes, i.e. the Josephson phase.
The critical current Ic is the maximum supercurrent that can exist through the Josephson junction.
In experiment, one usually causes some current through the Josephson junction and the junction reacts by changing the Josephson phase. From the above formula it is clear that the phase φ = arcsin(I/Ic), where I is the applied (super)current.
Since the phase is 2π-periodic, i.e. φ and φ + 2πn are physically equivalent, without loss of generality the discussion below refers to the interval 0 ≤ φ < 2π.
When no current (I = 0) exists through the Josephson junction, e.g. when the junction is disconnected, the junction is in the ground state and the Josephson phase across it is zero (φ = 0). The phase can also be φ = π, also resulting in no current through the junction. It turns out that the state with φ = π is unstable and corresponds to the Josephson energy maximum, while the state φ = 0 corresponds to the Josephson energy minimum and is a ground state.
In certain cases, one may obtain a Josephson junction where the critical current is negative (Ic < 0). In this case, the first Josephson relation becomes

Is = −|Ic| sin(φ) = |Ic| sin(φ + π).
The ground state of such a Josephson junction is φ = π and corresponds to the Josephson energy minimum, while the conventional state φ = 0 is unstable and corresponds to the Josephson energy maximum. Such a Josephson junction, with φ = π in the ground state, is called a π Josephson junction.
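The two ground states can be illustrated by numerically minimizing the Josephson energy U(φ) ∝ −Ic cos(φ). The sketch below is illustrative only, with the energy scale set to unit magnitude; it is not a physical simulation.

```python
import math

def josephson_energy(phi, Ic_sign=+1):
    # U(phi) = -E_J * cos(phi), with E_J proportional to Ic; the minimum
    # sits at phi = 0 for Ic > 0 but at phi = pi when Ic < 0.
    return -Ic_sign * math.cos(phi)

phis = [i * 2 * math.pi / 360 for i in range(360)]
for sign, name in ((+1, "ordinary junction"), (-1, "pi junction")):
    ground = min(phis, key=lambda p: josephson_energy(p, sign))
    print(f"{name}: ground state near phi = {ground:.3f} rad")
# ordinary junction: ~0.000 rad; pi junction: ~3.142 rad
```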
π Josephson junctions have quite unusual properties. For example, if one connects (shorts) the superconducting electrodes with an inductance L (e.g. a superconducting wire), one may expect a spontaneous supercurrent circulating in the loop, passing through the junction and through the inductance clockwise or counterclockwise. This supercurrent is spontaneous and belongs to the ground state of the system. The direction of its circulation is chosen at random. This supercurrent will of course induce a magnetic field which can be detected experimentally. The magnetic flux passing through the loop will have a value between 0 and half of the magnetic flux quantum, i.e. between 0 and Φ0/2, depending on the value of the inductance L.
Technologies and physical principles
Ferromagnetic Josephson junctions. Consider a Josephson junction with a ferromagnetic Josephson barrier, i.e. the multilayers superconductor-ferromagnet-superconductor (SFS) or superconductor-insulator-ferromagnet-superconductor (SIFS). In such structures the superconducting order parameter inside the F-layer oscillates in the direction perpendicular to the junction plane. As a result, for certain thicknesses of the F-layer and temperatures, the order parameter may have opposite signs at the two superconducting electrodes, e.g. +1 at one and −1 at the other. In this situation one gets a π Josephson junction. Note that inside the F-layer a competition between different solutions takes place, and the one with the lower energy wins out. Various ferromagnetic π junctions have been fabricated: SFS junctions with weak ferromagnetic interlayers; SFS junctions with strong ferromagnetic interlayers such as Co, Ni, PdFe and NiFe; SIFS junctions; and S-Fi-S junctions.
Josephson junctions with unconventional order parameter symmetry. Novel superconductors, notably high-temperature cuprate superconductors, have an anisotropic superconducting order parameter which can change its sign depending on the direction. In particular, a so-called d-wave order parameter has a value of +1 if one looks along the crystal axis a and −1 if one looks along the crystal axis b. If one looks along the ab direction (45° between a and b), the order parameter vanishes. By making Josephson junctions between d-wave superconducting films with different orientations, or between d-wave and conventional isotropic s-wave superconductors, one can get a phase shift of π. Nowadays there are several realizations of π Josephson junctions of this type:
tri-crystal grain boundary Josephson junctions,
tetra-crystal grain boundary Josephson junctions,
d-wave/s-wave ramp zigzag Josephson junctions,
tilt-twist grain boundary Josephson junctions,
p-wave based Josephson junctions.
Superconductor–normal metal–superconductor (SNS) Josephson junctions with non-equilibrium electron distribution in N-layer.
Superconductor–quantum dot–superconductor (S-QuDot-S) Josephson junctions (implemented by carbon nanotube Josephson junctions).
Historical developments
Theoretically, the possibility of creating a π Josephson junction was first discussed by Bulaevskii et al.,
who considered a Josephson junction with paramagnetic scattering in the barrier. Almost one decade later, the possibility of having a π Josephson junction was discussed in the context of heavy-fermion p-wave superconductors.
Experimentally, the first π Josephson junction was a corner junction made of yttrium barium copper oxide (d-wave) and Pb (s-wave) superconductors. The first unambiguous proof of a π Josephson junction with a ferromagnetic barrier was given only a decade later. That work used a weak ferromagnet consisting of a copper-nickel alloy (CuxNi1−x, with x around 0.5), optimized so that its Curie temperature was close to the superconducting transition temperature of the niobium leads.
See also
Josephson effect
φ Josephson junction
Semifluxon
Fractional vortices
Brian D. Josephson
References
Superconductivity
Josephson effect | Pi Josephson junction | [
"Physics",
"Materials_science",
"Engineering"
] | 1,369 | [
"Josephson effect",
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
7,302,010 | https://en.wikipedia.org/wiki/Receptor%E2%80%93ligand%20kinetics | In biochemistry, receptor–ligand kinetics is a branch of chemical kinetics in which the kinetic species are defined by different non-covalent bindings and/or conformations of the molecules involved, which are denoted as receptor(s) and ligand(s). Receptor–ligand binding kinetics also involves the on- and off-rates of binding.
A main goal of receptor–ligand kinetics is to determine the concentrations of the various kinetic species (i.e., the states of the receptor and ligand) at all times, from a given set of initial concentrations and a given set of rate constants. In a few cases, an analytical solution of the rate equations may be determined, but this is relatively rare. However, most rate equations can be integrated numerically, or approximately, using the steady-state approximation. A less ambitious goal is to determine the final equilibrium concentrations of the kinetic species, which is adequate for the interpretation of equilibrium binding data.
A converse goal of receptor–ligand kinetics is to estimate the rate constants and/or dissociation constants of the receptors and ligands from experimental kinetic or equilibrium data. The total concentrations of receptor and ligands are sometimes varied systematically to estimate these constants.
Binding kinetics
The binding constant is a special case of the equilibrium constant. It is associated with the binding and unbinding reaction of receptor (R) and ligand (L) molecules, which is formalized as:
{R} + {L} <=> {RL}.
The reaction is characterized by the on-rate constant kon and the off-rate constant koff, which have units of 1/(concentration × time) and 1/time, respectively. In equilibrium, the forward binding transition {R} + {L} -> {RL} should be balanced by the backward unbinding transition {RL} -> {R} + {L}. That is,

kon [{R}] [{L}] = koff [{RL}],

where [{R}], [{L}] and [{RL}] represent the concentration of unbound free receptors, the concentration of unbound free ligand and the concentration of receptor-ligand complexes. The binding constant, or the association constant, is defined by

Ka = kon / koff = [{RL}] / ([{R}] [{L}]).
Simplest case: single receptor and single ligand bind to form a complex
The simplest example of receptor–ligand kinetics is that of a single ligand L binding to a single receptor R to form a single complex C
{R} + {L} <-> {C}
The equilibrium concentrations are related by the dissociation constant Kd:

Kd = k−1 / k1 = [R] [L] / [C]

where k1 and k−1 are the forward and backward rate constants, respectively. The total concentrations of receptor and ligand in the system are constant:

Rtot = [R] + [C]
Ltot = [L] + [C]
Thus, only one concentration of the three ([R], [L] and [C]) is independent; the other two concentrations may be determined from Rtot, Ltot and the independent concentration.
This system is one of the few systems whose kinetics can be determined analytically. Choosing [R] as the independent concentration and representing the concentrations by italic variables for brevity (e.g., R ≡ [R]), the kinetic rate equation can be written

dR/dt = −k1 R L + k−1 C = −k1 R (Ltot − Rtot + R) + k−1 (Rtot − R)

Dividing both sides by k1 and introducing the constant 2E = Rtot − Ltot − Kd, the rate equation becomes

(1/k1) dR/dt = −R² + 2ER + Kd Rtot = −(R − R+)(R − R−)

where the two equilibrium concentrations R± = E ± D are given by the quadratic formula and D is defined

D = sqrt(E² + Kd Rtot)

However, only the R+ equilibrium has a positive concentration, corresponding to the equilibrium observed experimentally.

Separation of variables and a partial-fraction expansion yield the integrable ordinary differential equation

(1 / (2D)) [1/(R − R+) − 1/(R − R−)] dR = −k1 dt

whose solution is

R(t) = (R+ − R− φ0 e^(−2 D k1 t)) / (1 − φ0 e^(−2 D k1 t))

which describes both association and dissociation, depending on the initial concentration R0; the integration constant φ0 is defined

φ0 = (R0 − R+) / (R0 − R−)

From this solution, the corresponding solutions for the other concentrations C and L can be obtained.
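As a check on the closed-form solution, the sketch below evaluates R(t) for illustrative, arbitrary rate constants and total concentrations (all numerical values are placeholders):

```python
import math

k1, km1 = 1.0, 0.1          # forward / backward rate constants (arbitrary units)
Kd = km1 / k1
Rtot, Ltot = 1.0, 2.0       # conserved totals
E = 0.5 * (Rtot - Ltot - Kd)
D = math.sqrt(E**2 + Kd * Rtot)
Rp, Rm = E + D, E - D       # equilibrium roots; only Rp is physical

def R(t, R0=Rtot):          # R0 = Rtot corresponds to association from C = 0
    phi0 = (R0 - Rp) / (R0 - Rm)
    u = phi0 * math.exp(-2 * D * k1 * t)
    return (Rp - u * Rm) / (1 - u)

print(R(0.0), R(50.0), Rp)  # starts at Rtot, relaxes to the equilibrium Rp
```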
See also
Binding potential
Patlak plot
Scatchard plot
References
Further reading
D.A. Lauffenburger and J.J. Linderman (1993) Receptors: Models for Binding, Trafficking, and Signaling, Oxford University Press. (hardcover) and 0-19-510663-6 (paperback)
Receptors
Chemical kinetics | Receptor–ligand kinetics | [
"Chemistry"
] | 828 | [
"Receptors",
"Chemical kinetics",
"Chemical reaction engineering",
"Signal transduction"
] |
8,802,094 | https://en.wikipedia.org/wiki/Absorption%20cross%20section | In physics, absorption cross-section is a measure of the probability of an absorption process. More generally, the term cross-section is used in physics to quantify the probability of a certain particle-particle interaction, e.g., scattering, electromagnetic absorption, etc. (Note that light in this context is described as consisting of particles, i.e., photons.) A typical absorption cross-section has units of cm2⋅molecule−1. In honor of the fundamental contribution of Maria Goeppert Mayer to this area, the unit for the two-photon absorption cross section is named the "GM". One GM is 10−50 cm4⋅s⋅photon−1.
In the context of ozone shielding of ultraviolet light, absorption cross section is the ability of a molecule to absorb a photon of a particular wavelength and polarization. Analogously, in the context of nuclear engineering, it refers to the probability of a particle (usually a neutron) being absorbed by a nucleus. Although the units are given as an area, it does not refer to an actual size area, at least partially because the density or state of the target molecule will affect the probability of absorption. Quantitatively, the number dN of photons absorbed between the points x and x + dx along the path of a beam is the product of the number N of photons penetrating to depth x, times the number n of absorbing molecules per unit volume, times the absorption cross section σ:

dN/dx = −N n σ.
The absorption cross-section is closely related to molar absorptivity and mass absorption coefficient.
For a given particle and its energy, the absorption cross-section of the target material can be calculated from the mass absorption coefficient using:

σ = (μ/ρ) M / NA

where:
(μ/ρ) is the mass absorption coefficient
M is the molar mass in g/mol
NA is the Avogadro constant

This is also commonly expressed as:

σ = μ / n

where:
μ is the absorption coefficient
n is the atomic number density
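Both conversions are one-line calculations. A hedged sketch, using water's molar mass and a placeholder mass absorption coefficient purely for illustration:

```python
N_A = 6.02214076e23          # Avogadro constant, 1/mol

def sigma_from_mass_coeff(mu_over_rho_cm2_per_g, molar_mass_g_per_mol):
    """sigma = (mu/rho) * M / N_A, in cm^2 per molecule."""
    return mu_over_rho_cm2_per_g * molar_mass_g_per_mol / N_A

def sigma_from_linear_coeff(mu_per_cm, number_density_per_cm3):
    """sigma = mu / n, in cm^2 per molecule."""
    return mu_per_cm / number_density_per_cm3

print(sigma_from_mass_coeff(0.1, 18.015))   # ~3e-24 cm^2 for these inputs
```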
See also
Cross section (physics)
Photoionisation cross section
Nuclear cross section
Neutron cross section
Mean free path
Compton scattering
Transmittance
Attenuation
Beer–Lambert law
High energy X-rays
Attenuation coefficient
Absorption spectroscopy
References
Electromagnetism
Nuclear physics
Scattering, absorption and radiative transfer (optics) | Absorption cross section | [
"Physics",
"Chemistry"
] | 439 | [
"Electromagnetism",
"Physical phenomena",
" absorption and radiative transfer (optics)",
"Scattering",
"Fundamental interactions",
"Nuclear physics"
] |
8,802,504 | https://en.wikipedia.org/wiki/Voltage%20graph | In graph theory, a voltage graph is a directed graph whose edges are labelled invertibly by elements of a group. It is formally identical to a gain graph, but it is generally used in topological graph theory as a concise way to specify another graph called the derived graph of the voltage graph.
Typical choices of the groups used for voltage graphs include the two-element group Z_2 (for defining the bipartite double cover of a graph), free groups (for defining the universal cover of a graph), d-dimensional integer lattices Z^d (viewed as a group under vector addition, for defining periodic structures in d-dimensional Euclidean space), and finite cyclic groups Z_n for n > 2. When Π is a cyclic group, the voltage graph may be called a cyclic-voltage graph.
Definition
Formal definition of a Π-voltage graph, for a given group Π:
Begin with a digraph G. (The direction is solely for convenience in notation.)
A Π-voltage on an arc of G is a label of the arc by an element x of Π. For instance, in the case where Π = Z_n, the label is a number i (mod n).
A Π-voltage assignment is a function α that labels each arc of G with a Π-voltage.
A Π-voltage graph is a pair (G, α) such that G is a digraph and α is a voltage assignment.
The voltage group of a voltage graph (G, α) is the group Π from which the voltages are assigned.
Note that the voltages of a voltage graph need not satisfy Kirchhoff's voltage law, that the sum of voltages around a closed path is 0 (the identity element of the group), although this law does hold for the derived graphs described below. Thus, the name may be somewhat misleading. It results from the origin of voltage graphs as dual to the current graphs of topological graph theory.
The derived graph
The derived graph of a voltage graph (G, α) is the graph whose vertex set is V(G) × Π and whose edge set is E(G) × Π, where the endpoints of an edge (e, k) such that e has tail v and head w are (v, k) and (w, k α(e)).
Although voltage graphs are defined for digraphs, they may be extended to undirected graphs by replacing each undirected edge by a pair of oppositely ordered directed edges and by requiring that these edges have labels that are inverse to each other in the group structure. In this case, the derived graph will also have the property that its directed edges form pairs of oppositely oriented edges, so the derived graph may itself be interpreted as being an undirected graph.
The derived graph is a covering graph of the given voltage graph. If no edge label of the voltage graph is the identity element, then the group elements associated with the vertices of the derived graph provide a coloring of the derived graph with a number of colors equal to the group order. An important special case is the bipartite double cover, the derived graph of a voltage graph in which all edges are labeled with the non-identity element of a two-element group. Because the order of the group is two, the derived graph in this case is guaranteed to be bipartite.
Polynomial time algorithms are known for determining whether the derived graph of a Z^d-voltage graph contains any directed cycles.
Examples
Any Cayley graph of a group Π, with a given set Γ of generators, may be defined as the derived graph for a Π-voltage graph having one vertex and |Γ| self-loops, each labeled with one of the generators in Γ.
The Petersen graph is the derived graph for a Z_5-voltage graph in the shape of a dumbbell with two vertices and three edges: one edge connecting the two vertices, and one self-loop on each vertex. One self-loop is labeled with 1, the other with 2, and the edge connecting the two vertices is labeled 0. More generally, the same construction allows any generalized Petersen graph GP(n, k) to be constructed as the derived graph of the same dumbbell graph over Z_n, with labels 1, 0, and k.
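The dumbbell construction can be verified computationally. The following sketch builds the derived graph for the Z_5-voltage dumbbell described above and confirms it has the Petersen graph's vertex count, edge count, and regularity (the encoding of arcs and vertices is an implementation choice, not standard notation):

```python
from itertools import product

n = 5
# Arcs of the dumbbell as (tail, head, voltage); undirected edges are taken
# once and treated as undirected (frozensets) in the derived graph.
arcs = [("u", "u", 1), ("v", "v", 2), ("u", "v", 0)]

edges = set()
for (t, h, a), k in product(arcs, range(n)):
    # Edge (e, k) joins (tail, k) to (head, k + voltage mod n).
    edges.add(frozenset({(t, k), (h, (k + a) % n)}))

vertices = {x for e in edges for x in e}
degree = {x: sum(x in e for e in edges) for x in vertices}
print(len(vertices), len(edges), set(degree.values()))  # 10 15 {3}
```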
The vertices and edges of any periodic tessellation of the plane may be formed as the derived graph of a finite graph, with voltages in Z^2 (the two-dimensional integer lattice).
Notes
References
Extensions and generalizations of graphs | Voltage graph | [
"Mathematics"
] | 844 | [
"Mathematical relations",
"Graph theory",
"Extensions and generalizations of graphs"
] |
8,806,722 | https://en.wikipedia.org/wiki/Planar%20Hall%20sensor | The planar Hall sensor is a type of magnetic sensor based on the planar Hall effect of ferromagnetic materials. It measures the change in anisotropic magnetoresistance caused by an external magnetic field in the Hall geometry. As opposed to an ordinary Hall sensor, which measures field components perpendicular to the sensor plane, the planar Hall sensor responds to magnetic field components in the sensor plane. Generally speaking, for ferromagnetic materials, the resistance is larger when the current flows along the direction of magnetization than when it flows perpendicular to the magnetization vector. This creates an asymmetric electric field perpendicular to the current, which depends on the magnetization state of the sensor. Precisely controlling the magnetization state is the key to the operation of the planar Hall sensor. By fabrication, the magnetization is confined to one particular direction at zero applied field, and the application of a field perpendicular to this direction changes the magnetization state in such a way that the electronic readout is linear with respect to the magnitude of the applied field. This holds for applied fields smaller than a fourth of the intrinsic effective anisotropy field (see ref. 1 for details on the working principle).
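The linear readout regime can be illustrated with a single-domain sketch. Assuming a Stoner–Wohlfarth-like balance in which a field H applied perpendicular to the easy axis rotates the magnetization by an angle θ with sin θ = H/HK, and a transverse signal proportional to sin θ cos θ, the response is nearly linear for H well below HK. All prefactors (current, anisotropic magnetoresistance, film thickness) are folded into an arbitrary unit amplitude; this is an idealization, not the full sensor model.

```python
import math

H_K = 1.0  # effective anisotropy field (arbitrary units)

def planar_hall_signal(H):
    s = max(-1.0, min(1.0, H / H_K))   # sin(theta), clipped at saturation
    return s * math.sqrt(1 - s * s)    # sin(theta) * cos(theta)

for H in (0.05, 0.10, 0.25):           # below H_K/4 the response is ~linear
    print(H, round(planar_hall_signal(H), 4))
```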
The planar Hall sensor has been demonstrated as a magnetic bead detector and has been used to measure the Earth's field with nanotesla precision. As a magnetic bead sensor, the planar Hall sensor can be used as the sensing principle in a magnetic bioassay. In ref. 5, detection of influenza viruses was demonstrated using an immunoassay imitating a sandwich ELISA based on monoclonal antibodies.
References
Electrical components
Electric and magnetic fields in matter
Hall effect
Spintronics | Planar Hall sensor | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 344 | [
"Physical phenomena",
"Electrical components",
"Hall effect",
"Spintronics",
"Electric and magnetic fields in matter",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Electrical engineering",
"Solid state engineering",
"Components"
] |
17,426,327 | https://en.wikipedia.org/wiki/Droxidopa | Droxidopa, also known as L-threo-dihydroxyphenylserine (L-DOPS) and sold under the brand names Northera and Dops among others, is a sympathomimetic medication which is used in the treatment of hypotension (low blood pressure) and for other indications. It is taken by mouth.
Side effects of droxidopa include headache, dizziness, nausea, and hypertension, among others. Droxidopa is a synthetic amino acid precursor which acts as a prodrug to the neurotransmitter norepinephrine (noradrenaline). Hence, it acts as a non-selective agonist of the α- and β-adrenergic receptors. Unlike norepinephrine, but similarly to levodopa (L-DOPA), droxidopa is capable of crossing the protective blood–brain barrier (BBB).
Droxidopa was first described by 1971. It was approved for use in Japan in 1989 and was introduced in the United States in 2014.
Medical uses
Droxidopa is approved for use in the treatment of orthostatic hypotension, intradialytic hypotension (IDH; hemodialysis-induced hypotension), dizziness, and amyloid polyneuropathy. For hypotension, it is specifically used in the treatment of neurogenic orthostatic hypotension (NOH) in dopamine β-hydroxylase deficiency, as well as NOH associated with multiple system atrophy (MSA), familial amyloid polyneuropathy (FAP), and pure autonomic failure (PAF). The drug is also used off-label in the treatment of freezing of gait in Parkinson's disease.
Side effects
With over 20 years on the market, droxidopa has proven to have few side effects, most of which are mild. The most common side effects reported in clinical trials include headache, dizziness, nausea, hypertension and fatigue.
Pharmacology
Droxidopa is a prodrug of norepinephrine, used to increase the concentration of this neurotransmitter in the body and brain. It is metabolized by aromatic L-amino acid decarboxylase (AAAD), also known as DOPA decarboxylase (DDC). Patients with NOH have depleted levels of norepinephrine, which leads to decreased blood pressure, or hypotension, upon orthostatic challenge. Droxidopa works by increasing the levels of norepinephrine in the peripheral nervous system (PNS), thus enabling the body to maintain blood flow upon and while standing.
Droxidopa can also cross the blood–brain barrier (BBB) where it is converted to norepinephrine from within the brain. Increased levels of norepinephrine in the central nervous system (CNS) may be beneficial to patients in a wide range of indications. Droxidopa can be coupled with a peripheral aromatic L-amino acid decarboxylase inhibitor (AAADI) or DOPA decarboxylase inhibitor (DDC) such as carbidopa (Lodosyn) to increase central norepinephrine concentrations while minimizing increases of peripheral levels.
Chemistry
Droxidopa, also known as (–)-threo-3-(3,4-dihydroxyphenyl)-L-serine (L-DOPS), is a substituted phenethylamine and is chemically analogous to levodopa (L-3,4-dihydroxyphenylalanine; L-DOPA). Whereas levodopa functions as a precursor and prodrug to dopamine, droxidopa is a precursor and prodrug of norepinephrine.
History
Droxidopa was first described in the scientific literature by 1971.
Droxidopa was developed by Sumitomo Pharmaceuticals for the treatment of hypotension, including NOH, and NOH associated with various disorders such as MSA, FAP, and PD, as well as IDH. The drug has been used in Japan and some surrounding Asian areas for these indications since 1989.
Following a merger with Dainippon Pharmaceuticals in 2006, Dainippon Sumitomo Pharma licensed droxidopa to Chelsea Therapeutics to develop and market it worldwide except in Japan, Korea, China, and Taiwan. In February 2014, the United States Food and Drug Administration approved droxidopa for the treatment of symptomatic neurogenic orthostatic hypotension.
Clinical trials
A systematic review and meta-analysis of clinical trials comparing the clinical use of droxidopa and midodrine found that midodrine was more likely to cause supine hypertension than droxidopa in patients with NOH. Midodrine was also found to be slightly, though not statistically significantly, more effective at raising blood pressure.
Chelsea Therapeutics obtained orphan drug status (ODS) for droxidopa in the US for NOH, and that of which associated with PD, PAF, and MSA. In 2014, Chelsea Therapeutics was acquired by Lundbeck along with the rights to droxidopa which was launched in the US in Sept 2014.
Society and culture
Names
Droxidopa is the generic name of the drug and its International Nonproprietary Name (INN) and United States Adopted Name (USAN). Brand names of droxidopa include Dops and Northera.
Research
Droxidopa, alone and in combination with carbidopa, has been studied in the treatment of attention deficit hyperactivity disorder (ADHD). Droxidopa was under development for the treatment of ADHD, chronic fatigue syndrome, and fibromyalgia, but development for these indications was discontinued.
References
External links
Alpha-adrenergic agonists
Alpha-Amino acids
Antihypotensive agents
Beta-adrenergic agonists
Cardiac stimulants
Catecholamines
Monoamine precursors
Phenylethanolamines
Prodrugs
Sympathomimetics
Vasoconstrictors | Droxidopa | [
"Chemistry"
] | 1,311 | [
"Chemicals in medicine",
"Prodrugs"
] |
17,428,178 | https://en.wikipedia.org/wiki/Deepwater%20drilling | Deepwater drilling, or deep well drilling, is the process of creating holes in the Earth's crust using a drilling rig for oil extraction under the deep sea. There are approximately 3400 deepwater wells in the Gulf of Mexico with depths greater than 150 meters.
Deepwater drilling was not technologically or economically feasible for many years, but with rising oil prices, more companies are investing in this sector. Major investors include Halliburton, Diamond Offshore, Transocean, Geoservices, and Schlumberger. The deepwater gas and oil market has been back on the rise since the 2010 Deepwater Horizon disaster, with total expenditures of around US$35 billion per year in the market and total global capital expenditures of US$167 billion in the past four years. Industry analysis by business intelligence company Visiongain estimated in 2011 that total expenditures in global deepwater infrastructure would reach US$145 billion.
A HowStuffWorks article, listed under external articles below, explains how and why deepwater drilling is practiced.
In the Deepwater Horizon oil spill of 2010, a large explosion occurred on a BP oil rig drilling in deep waters, killing 11 workers and spilling oil into the Gulf of Mexico.
History
Some of the earliest evidence of water wells is located in China. The Chinese discovered and made extensive use of deep drilled groundwater for drinking. The Chinese text The Book of Changes, originally a divination text of the Western Zhou dynasty (1046–771 BC), contains an entry describing how the ancient Chinese maintained their wells and protected their sources of water. Archaeological evidence and old Chinese documents reveal that the prehistoric and ancient Chinese had the aptitude and skills for digging deep water wells for drinking water as early as 6000 to 7000 years ago. A well excavated at the Hemudu excavation site is believed to have been built during the Neolithic era. The well was cased by four rows of logs with a square frame attached to them at the top of the well. Sixty additional tile wells southwest of Beijing are also believed to have been built around 600 BC for drinking and irrigation.
Types of deepwater drilling facilities
Drilling in deep waters can be performed by two main types of mobile deepwater drilling rigs: semi-submersible drilling rigs and drillships. Drilling can also be performed from a fixed-position installation such as a fixed platform, or a floating platform, such as a spar platform, a tension-leg platform, or a semi-submersible production platform.
Fixed Platform – A Fixed Platform consists of a tall (usually steel) structure that supports a deck. Because the Fixed Platform is anchored to the sea floor, it is very costly to build. This type of platform can be installed in water depth up to .
Jack-Up Rig – Jack-up rigs are mobile units with a floating hull that can be moved around; once they arrive at the desired location, the legs are lowered to the seafloor and locked into place, and the platform is raised up out of the water. This makes this type of rig safer to work on, because weather and waves are not an issue.
Compliant Tower Platform – A compliant tower is a particular type of fixed platform. Both are anchored to the seafloor, and both workplaces are above the water surface. However, the compliant tower is taller and narrower and can operate up to 1 kilometer (3,000 feet) water depth.
Semi-Submersible Production Platform – This platform is buoyant, meaning the bulk of it is floating above the surface. However, the well head is typically located on the seafloor, so extra precautions must be taken to prevent a leak. A contributing cause to the oil spill disaster of 2010 was a failure of the leak-preventing system. These rigs can operate anywhere from below the surface.
Tension-Leg Platform – The Tension-leg Platform consists of a floating structure, held in place by tendons that run down to the seafloor. These rigs drill smaller deposits in narrower areas, meaning this is a low-cost way to get a little oil, which attracts many companies. These rigs can drill anywhere from below the surface.
Subsea System – Subsea Systems are actually wellheads, which sit on the seafloor and extract oil straight from the ground. They use pipes to force the oil back up to the surface, and can siphon oil to nearby platform rigs, a ship overhead, a local production hub, or even a faraway onshore site. This makes the Subsea system very versatile and a popular choice for companies.
Spar Platform – Spar Platforms use a large cylinder to support the floating deck from the seafloor. On average, about 90% of the Spar Platform's structure is underwater. Most Spar Platforms are used up to depths of 1 kilometer (3,000 feet), but new technology can extend them to function up to below the surface. That makes it one of the deepest drilling rigs in use today.
2010 Deepwater Horizon oil spill
On 20 April 2010, a BP deepwater oil rig (Deepwater Horizon) exploded, killing 11 workers and releasing 750,000 cubic meters (200 million gallons) of oil into the Gulf of Mexico. Given the scale of the spill, many scientists consider this disaster to be one of the worst environmental disasters in the history of the US.
A large number of animal deaths resulted from the release of the oil. A Center for Biological Diversity study estimates that over 82,000 birds, about 6,000 sea turtles, and nearly 26,000 marine mammals were killed by either the initial explosion or the oil spill. Deepwater wellbore integrity has become an increasingly important topic in the field of petroleum engineering.
See also
General: Offshore drilling, Well drilling, Shallow water drilling, Extraction of petroleum, Age of Oil, Fossil fuel drilling (disambiguation), Energy development, Hubbert peak theory
Other: 2010 United States deepwater drilling moratorium, Submersible pump, IntelliServ, Petroleum industry in Mexico, Deepwater Horizon
People: Michael Klare, Jason Leopold
References
External articles
Deepwater Drilling: How It Works | Chevron | Video. chevron.com.
HowStuffWorks "Ultra Deep Water Oil Drilling". science.howstuffworks.com.
Rigzone – Deepwater Gulf of Mexico Drilling Activity to Keep Rising. rigzone.com. April 24, 2013.
Chinese inventions
Petroleum production
Petroleum industry | Deepwater drilling | [
"Chemistry"
] | 1,300 | [
"Chemical process engineering",
"Petroleum",
"Petroleum industry"
] |
17,428,977 | https://en.wikipedia.org/wiki/Devitrification | Devitrification is the process of crystallization in a formerly crystal-free (amorphous) glass. The term is derived from the Latin vitreus, meaning glassy and transparent.
Devitrification in glass art
Devitrification occurs in glass art during the firing process of fused glass whereby the surface of the glass develops a whitish scum, crazing, or wrinkles instead of a smooth glossy shine, as the molecules in the glass change their structure into that of crystalline solids. While this condition is normally undesired, in glass art it is possible to use devitrification as a deliberate artistic technique.
Causes of devitrification, commonly referred to as "devit", can include holding a high temperature for too long, which causes the nucleation of crystals. The presence of foreign residue such as dust on the surface of the glass or inside the kiln prior to firing can provide nucleation points where crystals can propagate easily. The chemical compositions of some types of glass can make them more vulnerable to devitrification than others; for example, a high lime content can be a factor in inducing this condition. In general, opaque glass devitrifies easily because crystals are already present in the glass, giving it its opaque appearance, and these raise the chance of further crystal growth.
Techniques for avoiding devitrification include cleaning the glass surfaces of dust or unwanted residue, and allowing rapid cooling once the piece reaches the desired temperature, until the temperature approaches the annealing temperature. Devit spray can be purchased to apply to the surfaces of the glass pieces prior to firing and is supposed to help prevent devitrification; however, there is disagreement over the long-term effectiveness of this solution and whether it should be used as a substitute for proper firing techniques.
Once devit has occurred, there are techniques that can be attempted to fix it, with varying degrees of success. One technique is to cover the surface with a sheet of clear glass and refire. Since devitrification can change the coefficient of expansion (COE) somewhat, and devitrified glass tends to be somewhat harder to melt again, this technique can result in a less stable piece; however, it has also been used effectively with full-fused pieces with no apparent problems. Applying devit spray and refiring can also be effective. Alternatively, sandblasting, an acid bath, or polishing with a pumice stone or rotary brush can be used to remove the unwanted surface.
Devitrification in geology
In a general sense, any crystallization from a magma could be considered devitrification, but the term is most commonly used for the formation of spherulites in otherwise glassy rocks such as obsidian.
The process of conversion of glass material to crystallized material is known as devitrification. Spherulites are evidence of this process. Perlite is due to hydration of glass causing expansion and not necessarily devitrification.
Glass wool
Devitrification can occur in glass wool used in high-temperature applications, resulting in the formation of potentially carcinogenic mineral powders.
References
External links
Encyclopædia Britannica Online
WarmTIPS: Devitrification
Troubleshooting Fusing and Slumping Problems
Tech Report: Devitrification of glass
Glass engineering and science
Glass art
Glass physics | Devitrification | [
"Physics",
"Materials_science",
"Engineering"
] | 661 | [
"Glass engineering and science",
"Glass physics",
"Condensed matter physics",
"Materials science"
] |
17,435,122 | https://en.wikipedia.org/wiki/Feryal%20%C3%96zel | Feryal Özel (born May 27, 1975) is a Turkish-American astrophysicist born in Istanbul, Turkey, specializing in the physics of compact objects and high energy astrophysical phenomena. As of 2022, Özel is the department chair and a professor at the Georgia Institute of Technology School of Physics in Atlanta. She was previously a professor at the University of Arizona in Tucson, in the Astronomy Department and Steward Observatory.
Özel graduated summa cum laude from Columbia University's Fu Foundation School of Engineering and Applied Science and received her PhD at Harvard University with Ramesh Narayan as her thesis advisor. She was a Hubble Fellow and member at the Institute for Advanced Study in Princeton, New Jersey. She was a Fellow at the Harvard-Radcliffe Institute and a visiting professor at the Miller Institute at UC Berkeley.
Özel is widely recognized for her contributions to the study of neutron stars, black holes, and magnetars. She is the modeling lead and a member of the Event Horizon Telescope (EHT) collaboration, which released the first image of a black hole.
Özel received the Maria Goeppert Mayer Award from the American Physical Society in 2013 for her outstanding contributions to neutron star astrophysics. Özel has appeared in numerous TV documentaries, including Big Ideas on PBS and The Universe series on the History Channel.
Along with Alexey Vikhlinin, Özel is the Science and Technology Definition Team Community Co-chair for the Lynx X-ray Observatory NASA Large Mission Concept Study.
Education
The following list summarizes Prof. Özel's education path:
1992 - Üsküdar American Academy, İstanbul, Turkey
1996 - BSc in Physics and Applied Mathematics, Columbia University, New York City
1997 - MSc in Physics, Niels Bohr Institute, Copenhagen
2002 - PhD in Astrophysics, Harvard University, Cambridge, USA
Honors and awards
Breakthrough Prize, 2020
Chair, Astrophysics Advisory Committee (APAC), NASA, 2019
Fellowship, John Simon Guggenheim Memorial Foundation, 2016
Visiting Miller Professorship, University of California Berkeley, 2014
Maria Goeppert Mayer Award, American Physical Society, 2013
Fellowship, Radcliffe Institute for Advanced Studies, 2012-2013
Bart J. Bok Prize, Harvard University, 2010
Lucas Award, San Diego Astronomy Association, 2010
Visiting Scholar Fellowship, Turkish Scientific and Technical Research Foundation, 2007
Hubble Postdoctoral Fellowship, 2002–2005
Distinguished Scholar Award, Daughters of Atatürk Foundation, 2003
Keck Fellowship, Institute for Advanced Study, 2002
Van Vleck Fellowship, Harvard University, 1999
Kostrup Prize, Niels Bohr Institute, 1997
Niels Bohr Institute Graduate Fellowship, 1996–1997
Applied Mathematics Faculty Award, Columbia University, 1996
Fu Foundation Scholarship, Columbia University, 1994–1996
Research Fellowship, CERN, 1995
Turkish Health and Education Foundation Scholarship, 1992-1994
References
External links
"Big Ideas" Website (Resume)
Personal webpage at the University of Arizona
Nature Magazine online service
List of published articles according to IOP Publishing
List of published articles according to NASA/ADS
Georgia Tech faculty
University of Arizona faculty
American women astronomers
Columbia School of Engineering and Applied Science alumni
Harvard University alumni
Living people
1975 births
Turkish women academics
Academics from Istanbul
People associated with CERN
American astrophysicists
American academics of Turkish descent
Harvard–Smithsonian Center for Astrophysics people
Turkish astronomers
Black holes
Hubble Fellows
Aspen Center for Physics people
Fellows of the American Physical Society | Feryal Özel | [
"Physics",
"Astronomy"
] | 677 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
53,916 | https://en.wikipedia.org/wiki/Herbicide | Herbicides (, ), also commonly known as weed killers, are substances used to control undesired plants, also known as weeds. Selective herbicides control specific weed species while leaving the desired crop relatively unharmed, while non-selective herbicides (sometimes called "total weed killers") kill plants indiscriminately. The combined effects of herbicides, nitrogen fertilizer, and improved cultivars has increased yields (per acre) of major crops by three to six times from 1900 to 2000.
In the United States in 2012, about 91% of all herbicide usage, determined by weight applied, was in agriculture. In 2012, world pesticide expenditures totaled nearly $24.7 billion; herbicides were about 44% of those sales and constituted the biggest portion, followed by insecticides, fungicides, and fumigants. Herbicide is also used in forestry, where certain formulations have been found to suppress hardwood varieties in favor of conifers after clearcutting, as well as pasture systems.
History
Prior to the widespread use of herbicides, cultural controls, such as altering soil pH, salinity, or fertility levels, were used to control weeds. Mechanical controls, including tillage and flooding, were also used. In the late 19th and early 20th centuries, inorganic chemicals such as sulfuric acid, arsenic, copper salts, kerosene, and sodium chlorate were used to control weeds, but these chemicals were either toxic, flammable, or corrosive, and were expensive and ineffective at controlling weeds.
First herbicides
The major breakthroughs occurred during the Second World War as the result of research conducted independently in the United Kingdom and the United States into the potential use of herbicides in war. The compound 2,4-D was first synthesized by W. G. Templeman at Imperial Chemical Industries. In 1940, his work with indoleacetic acid and naphthaleneacetic acid indicated that "growth substances applied appropriately would kill certain broad-leaved weeds in cereals without harming the crops," though these substances were too expensive and too short-lived in soil due to degradation by microorganisms to be of practical agricultural use; by 1941, his team succeeded in synthesizing a wide range of chemicals to achieve the same effect at lower cost and better efficacy, including 2,4-D. In the same year, R. Pokorny in the US achieved this as well. Independently, a team under Juda Hirsch Quastel, working at the Rothamsted Experimental Station made the same discovery. Quastel was tasked by the Agricultural Research Council (ARC) to discover methods for improving crop yield. By analyzing soil as a dynamic system, rather than an inert substance, he was able to apply techniques such as perfusion. Quastel was able to quantify the influence of various plant hormones, inhibitors, and other chemicals on the activity of microorganisms in the soil and assess their direct impact on plant growth. While the full work of the unit remained secret, certain discoveries were developed for commercial use after the war, including the 2,4-D compound.
When 2,4-D was commercially released in 1946, it became the first successful selective herbicide, triggering a worldwide revolution in agricultural output. It allowed for greatly enhanced weed control in wheat, maize (corn), rice, and similar cereal grass crops, because it kills dicots (broadleaf plants), but not most monocots (grasses). The low cost of 2,4-D has led to continued usage today, and it remains one of the most commonly used herbicides in the world. Like other acid herbicides, current formulations use either an amine salt (often trimethylamine) or one of many esters of the parent compound.
Further discoveries
The triazine family of herbicides, which includes atrazine, was introduced in the 1950s; they have the current distinction of being the herbicide family of greatest concern regarding groundwater contamination. Atrazine does not break down readily (within a few weeks) after being applied to soils of above-neutral pH. Under alkaline soil conditions, atrazine may be carried into the soil profile as far as the water table by soil water following rainfall causing the aforementioned contamination. Atrazine is thus said to have "carryover", a generally undesirable property for herbicides.
Glyphosate was first prepared in the 1950s, but its herbicidal activity was only recognized in the 1960s. It was marketed as Roundup in 1971. With the development of glyphosate-resistant crop plants, it is now used very extensively for selective weed control in growing crops. The pairing of the herbicide with the resistant seed contributed to the consolidation of the seed and chemistry industry in the late 1990s.
Many modern herbicides used in agriculture and gardening are specifically formulated to degrade within a short period after application.
Terminology
Herbicides can be classified/grouped in various ways; for example, according to their activity, the timing of application, method of application, mechanism of their action, and their chemical structures.
Selectivity
The chemical structure of an herbicide is a primary factor affecting efficacy. For example, 2,4-D, mecoprop, and dicamba control many broadleaf weeds but remain ineffective against turf grasses.
Chemical additives influence selectivity. Surfactants alter the physical properties of the spray solution and the overall phytotoxicity of the herbicide, increasing translocation. Herbicide safeners enhance selectivity by boosting the crop's tolerance of the herbicide while still allowing the herbicide to damage the weed.
Selectivity is determined by the circumstances and technique of application. Climatic factors affect absorption including humidity, light, precipitation, and temperature. Foliage-applied herbicides will enter the leaf more readily at high humidity by lengthening the drying time of the spray droplet and increasing cuticle hydration. Light of high intensity may break down some herbicides and cause the leaf cuticle to thicken, which can interfere with absorption. Precipitation may wash away or remove some foliage-applied herbicides, but it will increase root absorption of soil-applied herbicides. Drought-stressed plants are less likely to translocate herbicides. As temperature increases, herbicides' performance may decrease. Absorption and translocation may be reduced in very cold weather.
Non-selective herbicides
Non-selective herbicides, generally known as defoliants, are used to clear industrial sites, waste grounds, railways, and railway embankments. Paraquat, glufosinate, and glyphosate are non-selective herbicides.
Timing of application
Preplant: Preplant herbicides are nonselective herbicides applied to the soil before planting. Some preplant herbicides may be mechanically incorporated into the soil. The objective for incorporation is to prevent dissipation through photodecomposition and/or volatility. The herbicides kill weeds as they grow through the herbicide-treated zone. Volatile herbicides have to be incorporated into the soil before planting the pasture. Crops grown in soil treated with a preplant herbicide include tomatoes, corn, soybeans, and strawberries. Soil fumigants like metam-sodium and dazomet are in use as preplant herbicides.
Preemergence: Preemergence herbicides are applied before the weed seedlings emerge through the soil surface. These herbicides do not prevent weeds from germinating, but kill them as they grow through the herbicide-treated zone by affecting cell division in the emerging seedlings. Dithiopyr and pendimethalin are preemergence herbicides. Weeds that have already emerged before application or activation are not affected by preemergence herbicides, as their primary growing point escapes the treatment.
Postemergence: These herbicides are applied after weed seedlings have emerged through the soil surface. They can be foliar or root absorbed, selective or nonselective, and contact or systemic. Application of these herbicides is avoided during rain since being washed off the soil makes it ineffective. 2,4-D is a selective, systemic, foliar-absorbed postemergence herbicide.
Method of application
Soil applied: Herbicides applied to the soil are usually taken up by the root or shoot of the emerging seedlings and are used as preplant or preemergence treatment. Several factors influence the effectiveness of soil-applied herbicides. Weeds absorb herbicides by both passive and active mechanisms. Herbicide adsorption to soil colloids or organic matter often reduces the amount available for weed absorption. Positioning of the herbicide in the correct layer of soil is very important, which can be achieved mechanically and by rainfall. Herbicides on the soil surface are subjected to several processes that reduce their availability. Volatility and photolysis are two common processes that reduce the availability of herbicides. Many soil-applied herbicides are absorbed through plant shoots while they are still underground leading to their death or injury. EPTC and trifluralin are soil-applied herbicides.
Foliar applied: These are applied to a portion of the plant above the ground and are absorbed by exposed tissues. These are generally postemergence herbicides and can either be translocated (systemic) throughout the plant or remain at a specific site (contact). External barriers of plants like cuticles, waxes, cell walls etc. affect herbicide absorption and action. Glyphosate, 2,4-D, and dicamba are foliar-applied herbicides.
Persistence
An herbicide is described as having low residual activity if it is neutralized within a short time of application (within a few weeks or months) – typically this is due to rainfall, or reactions in the soil. A herbicide described as having high residual activity will remain potent for the long term in the soil. For some compounds, the residual activity can leave the ground almost permanently barren.
Mechanism of action
Herbicides interfere with the biochemical machinery that supports plant growth. Herbicides often mimic natural plant hormones, enzyme substrates, and cofactors. They interfere with the metabolism in the target plants. Herbicides are often classified according to their site of action because as a general rule, herbicides within the same site of action class produce similar symptoms on susceptible plants. Classification based on the site of action of the herbicide is preferable as herbicide resistance management can be handled more effectively. Classification by mechanism of action (MOA) indicates the first enzyme, protein, or biochemical step affected in the plant following application:
ACCase inhibitors: Acetyl coenzyme A carboxylase (ACCase) is part of the first step of lipid synthesis. Thus, ACCase inhibitors affect cell membrane production in the meristems of the grass plant. The ACCases of grasses are sensitive to these herbicides, whereas the ACCases of dicot plants are not.
ALS inhibitors: Acetolactate synthase (ALS; also known as acetohydroxyacid synthase, or AHAS) is part of the first step in the synthesis of the branched-chain amino acids (valine, leucine, and isoleucine). These herbicides slowly starve affected plants of these amino acids, which eventually leads to the inhibition of DNA synthesis. They affect grasses and dicots alike. The ALS inhibitor family includes various sulfonylureas (SUs) (such as flazasulfuron and metsulfuron-methyl), imidazolinones (IMIs), triazolopyrimidines (TPs), pyrimidinyl oxybenzoates (POBs), and sulfonylamino carbonyl triazolinones (SCTs). The ALS biological pathway exists only in plants and microorganisms (but not animals), thus making the ALS-inhibitors among the safest herbicides.
EPSPS inhibitors: Enolpyruvylshikimate 3-phosphate synthase enzyme (EPSPS) is used in the synthesis of the amino acids tryptophan, phenylalanine and tyrosine. They affect grasses and dicots alike. Glyphosate (Roundup) is a systemic EPSPS inhibitor inactivated by soil contact.
Auxin-like herbicides: The discovery of synthetic auxins inaugurated the era of organic herbicides. They were discovered in the 1940s after a long study of the plant growth regulator auxin. Synthetic auxins mimic this plant hormone in some way. They have several points of action on the cell membrane, and are effective in the control of dicot plants. 2,4-D, 2,4,5-T, and Aminopyralid are examples of synthetic auxin herbicides.
Photosystem II inhibitors reduce electron flow from water to NADP+ at the photochemical step in photosynthesis. They bind to the Qb site on the D1 protein, and prevent quinone from binding to this site. Therefore, this group of compounds causes electrons to accumulate on chlorophyll molecules. As a consequence, oxidation reactions in excess of those normally tolerated by the cell occur, killing the plant. The triazine herbicides (including simazine, cyanazine, atrazine) and urea derivatives (diuron) are photosystem II inhibitors. Other members of this class are chlorbromuron, pyrazon, isoproturon, bromacil, and terbacil.
Photosystem I inhibitors steal electrons from ferredoxins, specifically the normal pathway through FeS to Fdx to NADP+, leading to direct discharge of electrons on oxygen. As a result, reactive oxygen species are produced and oxidation reactions in excess of those normally tolerated by the cell occur, leading to plant death. Bipyridinium herbicides (such as diquat and paraquat) inhibit the FeS to Fdx step of that chain, while diphenyl ether herbicides (such as nitrofen, nitrofluorfen, and acifluorfen) inhibit the Fdx to NADP+ step.
HPPD inhibitors inhibit 4-hydroxyphenylpyruvate dioxygenase, which are involved in tyrosine breakdown. Tyrosine breakdown products are used by plants to make carotenoids, which protect chlorophyll in plants from being destroyed by sunlight. If this happens, the plants turn white due to complete loss of chlorophyll, and the plants die. Mesotrione and sulcotrione are herbicides in this class; a drug, nitisinone, was discovered in the course of developing this class of herbicides.
Complementary to mechanism-based classifications, herbicides are often classified according to their chemical structures or motifs. Similar structural types work in similar ways. For example, the aryloxyphenoxypropionate herbicides (diclofop, chlorazifop, fluazifop) appear to all act as ACCase inhibitors. The so-called cyclohexanedione herbicides, which are used against grasses, include the commercial products cycloxydim, clethodim, tralkoxydim, butroxydim, sethoxydim, and profoxydim. Knowing about herbicide chemical family grouping serves as a short-term strategy for managing resistance to site of action. The synthetic auxin herbicides, such as the phenoxyacetic acids, mimic the natural auxin indoleacetic acid (IAA). This family includes MCPA, 2,4-D, 2,4,5-T, picloram, dicamba, clopyralid, and triclopyr.
WSSA and HRAC classification
Under the Weed Science Society of America (WSSA) and Herbicide Resistance Action Committee (HRAC) systems, herbicides are classified by mode of action. The HRAC and the WSSA developed this classification scheme, in which groups are designated by numbers and letters; it makes users aware of each herbicide's mode of action and supports more accurate recommendations for resistance management.
Use and application
Most herbicides are applied as water-based sprays using ground equipment. Ground equipment varies in design, but large areas can be sprayed using self-propelled sprayers equipped with long booms carrying evenly spaced spray nozzles. Towed, handheld, and even horse-drawn sprayers are also used. On large areas, herbicides may also at times be applied aerially using helicopters or airplanes, or through irrigation systems (known as chemigation).
Weed-wiping may also be used, where a wick wetted with herbicide is suspended from a boom and dragged or rolled across the tops of the taller weed plants. This allows treatment of taller grassland weeds by direct contact without affecting related but desirable shorter plants in the grassland sward beneath. The method has the benefit of avoiding spray drift. In Wales, a scheme offering free weed-wiper hire was launched in 2015 in an effort to reduce the levels of MCPA in water courses.
There is little difference in forestry in the early growth stages, when the height similarities between growing trees and growing annual crops yields a similar problem with weed competition. Unlike with annuals however, application is mostly unnecessary thereafter and is thus mostly used to decrease the delay between productive economic cycles of lumber crops.
Misuse and misapplication
Herbicide volatilisation or spray drift may result in herbicide affecting neighboring fields or plants, particularly in windy conditions. Sometimes, the wrong field or plants may be sprayed due to error.
Use politically, militarily, and in conflict
Although herbicidal warfare uses chemical substances, its main purpose is to disrupt agricultural food production or to destroy plants which provide cover or concealment to the enemy. During the Malayan Emergency, British Commonwealth forces deployed herbicides and defoliants in the Malaysian countryside in order to deprive Malayan National Liberation Army (MNLA) insurgents of cover, potential sources of food and to flush them out of the jungle. Deployment of herbicides and defoliants served the dual purpose of thinning jungle trails to prevent ambushes and destroying crop fields in regions where the MNLA was active to deprive them of potential sources of food. As part of this process, herbicides and defoliants were also sprayed from Royal Air Force aircraft.
The use of herbicides as a chemical weapon by the U.S. military during the Vietnam War has left tangible, long-term impacts upon the Vietnamese people and the U.S. soldiers who handled the chemicals. More than 20% of South Vietnam's forests and 3.2% of its cultivated land were sprayed at least once during the war. The government of Vietnam says that up to four million people in Vietnam were exposed to the defoliant, and as many as three million people have suffered illness because of Agent Orange, while the Viet Nam Red Cross Society estimates that up to one million people were disabled or have health problems as a result of exposure to Agent Orange. The United States government has described these figures as unreliable.
Health and environmental effects
Human health
Many questions exist about herbicides' health and environmental effects, because of the many kinds of herbicide and the myriad potential targets, mostly unintended. For example, a 1995 panel of 13 scientists reviewing studies on the carcinogenicity of 2,4-D had divided opinions on the likelihood that 2,4-D causes cancer in humans. Studies on phenoxy herbicides have been too few to accurately assess the risk of many types of cancer from these herbicides, although evidence is stronger that exposure to these herbicides is associated with increased risk of soft tissue sarcoma and non-Hodgkin lymphoma.
Toxicity
Herbicides have widely variable toxicity, spanning acute toxicity (effects of short-term exposure) and chronic toxicity (effects of long-term environmental or occupational exposure). Much public suspicion of herbicides confuses valid statements of acute toxicity with equally valid statements of lack of chronic toxicity at the recommended levels of usage. For instance, while glyphosate formulations with tallowamine adjuvants are acutely toxic, their use was found to be uncorrelated with any health issues like cancer in a massive US Department of Health study that followed 90,000 members of farmer families over a period of 23 years. That is, the study shows lack of chronic toxicity, but does not address the herbicide's acute toxicity.
Health effects
Some herbicides cause a range of health effects ranging from skin rashes to death. The pathway of attack can arise from intentional or unintentional direct consumption, improper application resulting in the herbicide coming into direct contact with people or wildlife, inhalation of aerial sprays, or food consumption prior to the labelled preharvest interval. Under some conditions, certain herbicides can be transported via leaching or surface runoff to contaminate groundwater or distant surface water sources. Generally, the conditions that promote herbicide transport include intense storm events (particularly shortly after application) and soils with limited capacity to adsorb or retain the herbicides. Herbicide properties that increase likelihood of transport include persistence (resistance to degradation) and high water solubility.
Contamination
Cases have been reported where phenoxy herbicides were contaminated with dioxins such as TCDD; research has suggested such contamination results in a small rise in cancer risk after occupational exposure to these herbicides. Triazine exposure has been implicated in a likely relationship to increased risk of breast cancer, although a causal relationship remains unclear.
False claims
Herbicide manufacturers have at times made false or misleading claims about the safety of their products. Chemical manufacturer Monsanto Company agreed to change its advertising after pressure from New York attorney general Dennis Vacco; Vacco complained about misleading claims that its spray-on glyphosate-based herbicides, including Roundup, were safer than table salt and "practically non-toxic" to mammals, birds, and fish (though proof that this was ever said is hard to find). Roundup is toxic and has resulted in death after being ingested in quantities ranging from 85 to 200 ml, although it has also been ingested in quantities as large as 500 ml with only mild or moderate symptoms. The manufacturer of Tordon 101 (Dow AgroSciences, owned by the Dow Chemical Company) has claimed Tordon 101 has no effects on animals and insects, in spite of evidence of strong carcinogenic activity of the active ingredient, picloram, in studies on rats.
Ecological effects
Herbicide use generally has negative impacts on many aspects of the environment. Insects, non-targeted plants, animals, and aquatic systems are all subject to serious damage from herbicides, although impacts are highly variable.
Aquatic life
Atrazine has often been blamed for affecting reproductive behavior of aquatic life, but the data do not support this assertion.
Bird populations
Bird populations are one of many indicators of herbicide damage. Most observed effects are due not to toxicity, but to habitat changes and the decreases in abundance of species on which birds rely for food or shelter. Herbicide use in silviculture, used to favor certain types of growth following clearcutting, can cause significant drops in bird populations. Even when herbicides which have low toxicity to birds are used, they decrease the abundance of many types of vegetation on which the birds rely. Herbicide use in agriculture in the UK has been linked to a decline in seed-eating bird species which rely on the weeds killed by the herbicides. Heavy use of herbicides in neotropical agricultural areas has been one of many factors implicated in limiting the usefulness of such agricultural land for wintering migratory birds.
Resistance
One major complication to the use of herbicides for weed control is the ability of plants to evolve herbicide resistance, rendering the herbicides ineffective against target plants. Out of 31 known herbicide modes of action, weeds have evolved resistance to 21. A total of 268 plant species are known to have evolved herbicide resistance at least once. Herbicide resistance was first observed in 1957, and has since evolved repeatedly in weed species from 30 families across the globe. Weed resistance to herbicides has become a major concern in crop production worldwide.
Resistance to herbicides is often attributed to overuse as well as the strong evolutionary pressure on the affected weeds. Three agricultural practices account for the evolutionary pressure upon weeds to evolve resistance: monoculture, neglecting non-herbicide weed control practices, and reliance on one herbicide for weed control. To minimize resistance, rotational programs of herbicide application, where herbicides with multiple modes of action are used, have been widely promoted. In particular, glyphosate resistance evolved rapidly in part because when glyphosate use first began, it was continuously and heavily relied upon for weed control. This caused incredibly strong selective pressure upon weeds, encouraging mutations conferring glyphosate resistance to persist and spread.
However, in 2015, an expansive study showed an increase in herbicide resistance as a result of rotation, and instead recommended mixing multiple herbicides for simultaneous application. As of 2023, the effectiveness of combining herbicides is also questioned, particularly in light of the rise of non-target site resistance.
Plants developed resistance to atrazine and to ALS-inhibitors relatively early, but more recently, glyphosate resistance has dramatically risen. Marestail is one weed that has developed glyphosate resistance. Glyphosate-resistant weeds are present in the vast majority of soybean, cotton and corn farms in some U.S. states. Weeds that can resist multiple other herbicides are spreading. Few new herbicides are near commercialization, and none with a molecular mode of action for which there is no resistance. Because most herbicides could not kill all weeds, farmers rotate crops and herbicides to stop the development of resistant weeds.
A 2008–2009 survey of 144 populations of waterhemp in 41 Missouri counties revealed glyphosate resistance in 69%. Weeds from some 500 sites throughout Iowa in 2011 and 2012 revealed glyphosate resistance in approximately 64% of waterhemp samples. As of 2023, 58 weed species have developed glyphosate resistance. Weeds resistant to multiple herbicides with completely different biological action modes are on the rise. In Missouri, 43% of waterhemp samples were resistant to two different herbicides; 6% resisted three; and 0.5% resisted four. In Iowa 89% of waterhemp samples resist two or more herbicides, 25% resist three, and 10% resist five.
As of 2023, Palmer amaranth with resistance to six different herbicide modes of action has emerged. Annual bluegrass collected from a golf course in the U.S. state of Tennessee was found in 2020 to be resistant to seven herbicides at once. Rigid ryegrass and annual bluegrass share the distinction of the species with confirmed resistance to the largest number of herbicide modes of action, both with confirmed resistance to 12 different modes of action; however, this number references how many forms of herbicide resistance are known to have emerged in the species at some point, not how many have been found simultaneously in a single plant.
In 2015, Monsanto released crop seed varieties resistant to both dicamba and glyphosate, allowing for use of a greater variety of herbicides on fields without harming the crops. By 2020, five years after the release of dicamba-resistant seed, the first example of dicamba-resistant Palmer amaranth was found in one location.
Evolutionary insights
When mutations occur in the genes responsible for the biological mechanisms that herbicides interfere with, these mutations may cause the herbicide mode of action to work less effectively. This is called target-site resistance. Specific mutations that have the most helpful effect for the plant have been shown to occur in separate instances and dominate throughout resistant weed populations. This is an example of convergent evolution. Some mutations conferring herbicide resistance may have fitness costs, reducing the plant's ability to survive in other ways, but over time, the least costly mutations tend to dominate in weed populations.
Recently, incidences of non-target site resistance have increasingly emerged, such as examples where plants are capable of producing enzymes that neutralize herbicides before they can enter the plant's cells – metabolic resistance. This form of resistance is particularly challenging, since plants can develop non-target-site resistance to herbicides their ancestors were never directly exposed to.
Biochemistry of resistance
Resistance to herbicides can be based on one of the following biochemical mechanisms:
Target-site resistance: In target-site resistance, the genetic change that causes the resistance directly alters the chemical mechanism the herbicide targets. The mutation may relate to an enzyme with a crucial function in a metabolic pathway, or to a component of an electron-transport system. For example, ALS-resistant weeds developed by genetic mutations leading to an altered enzyme. Such changes render the herbicide impotent. Target-site resistance may also be caused by an over-expression of the target enzyme (via gene amplification or changes in a gene promoter). A related mechanism is that an adaptable enzyme such as cytochrome P450 is redesigned to neutralize the pesticide itself.
Non-target-site resistance: In non-target-site resistance, the genetic change giving resistance is not directly related to the target site, but causes the plant to be less susceptible by some other means. Some mechanisms include metabolic detoxification of the herbicide in the weed, reduced uptake and translocation, sequestration of the herbicide, or reduced penetration of the herbicide into the leaf surface. These mechanisms all cause less of the herbicide's active ingredient to reach the target site in the first place.
The following terms are also used to describe cases where plants are resistant to multiple herbicides at once:
Cross-resistance: In this case, a single resistance mechanism causes resistance to several herbicides. The term target-site cross-resistance is used when the herbicides bind to the same target site, whereas non-target-site cross-resistance is due to a single non-target-site mechanism (e.g., enhanced metabolic detoxification) that entails resistance across herbicides with different sites of action.
Multiple resistance: In this situation, two or more resistance mechanisms are present within individual plants, or within a plant population.
Resistance management
Due to herbicide resistance – a major concern in agriculture – a number of products combine herbicides with different means of action. Integrated pest management may use herbicides alongside other pest control methods.
The integrated weed management (IWM) approach uses several tactics to combat weeds and forestall resistance. Because it relies less on herbicides and draws on diverse weed control methods, including non-herbicide methods, it lowers the selection pressure on weeds to evolve resistance. Researchers warn that if herbicide resistance is combatted only with more herbicides, "evolution will most likely win." In 2017, the USEPA issued a revised Pesticide Registration Notice (PRN 2017-1), which provides guidance to pesticide registrants on required pesticide resistance management labeling. This requirement applies to all conventional pesticides and is meant to provide end-users with guidance on managing pesticide resistance. An example of a fully executed label compliant with the USEPA resistance management labeling guidance can be seen on the specimen label for the herbicide cloransulam-methyl, updated in 2022.
Optimising herbicide input to the economic threshold level should avoid the unnecessary use of herbicides and reduce selection pressure. Herbicides should be used to their greatest potential by ensuring that the timing, dose, application method, soil and climatic conditions are optimal for good activity. In the UK, partially resistant grass weeds such as Alopecurus myosuroides (blackgrass) and Avena genus (wild oat) can often be controlled adequately when herbicides are applied at the 2-3 leaf stage, whereas later applications at the 2-3 tiller stage can fail badly. Patch spraying, or applying herbicide to only the badly infested areas of fields, is another means of reducing total herbicide use.
Approaches to treating resistant weeds
Alternative herbicides
When resistance is first suspected or confirmed, the efficacy of alternatives is likely to be the first consideration. If there is resistance to a single group of herbicides, then the use of herbicides from other groups may provide a simple and effective solution, at least in the short term. For example, many triazine-resistant weeds have been readily controlled by the use of alternative herbicides such as dicamba or glyphosate.
Mixtures and sequences
The use of two or more herbicides which have differing modes of action can reduce the selection for resistant genotypes. Ideally, each component in a mixture should:
Be active at different target sites
Have a high level of efficacy
Be detoxified by different biochemical pathways
Have similar persistence in the soil (if it is a residual herbicide)
Exert negative cross-resistance
Synergise the activity of the other component
No mixture is likely to have all these attributes, but the first two listed are the most important. There is a risk that mixtures will select for resistance to both components in the longer term. One practical advantage of sequences of two herbicides compared with mixtures is that a better appraisal of the efficacy of each herbicide component is possible, provided that sufficient time elapses between each application. A disadvantage with sequences is that two separate applications have to be made and it is possible that the later application will be less effective on weeds surviving the first application. If these are resistant, then the second herbicide in the sequence may increase selection for resistant individuals by killing the susceptible plants which were damaged but not killed by the first application, but allowing the larger, less affected, resistant plants to survive. This has been cited as one reason why ALS-resistant Stellaria media has evolved in Scotland recently (2000), despite the regular use of a sequence incorporating mecoprop, a herbicide with a different mode of action.
Natural herbicide
The term organic herbicide has come to mean herbicides intended for organic farming. Few natural herbicides rival the effectiveness of synthetics. Some plants also produce their own herbicides, such as the genus Juglans (walnuts), or the tree of heaven; such actions of natural herbicides, and other related chemical interactions, is called allelopathy. The applicability of these agents is unclear.
Farming practices and resistance: a case study
Herbicide resistance became a critical problem in Australian agriculture after many Australian sheep farmers began to exclusively grow wheat in their pastures in the 1970s. Introduced varieties of ryegrass, while good for grazing sheep, compete intensely with wheat. Ryegrasses produce so many seeds that, if left unchecked, they can completely choke a field. Herbicides provided excellent control, reducing soil disruption because of less need to plough. Within little more than a decade, ryegrass and other weeds began to develop resistance. In response Australian farmers changed methods. By 1983, patches of ryegrass had become immune to Hoegrass (diclofop-methyl), a family of herbicides that inhibit an enzyme called acetyl coenzyme A carboxylase.
Ryegrass populations were large and had substantial genetic diversity because farmers had planted many varieties. Ryegrass is cross-pollinated by wind, so genes shuffle frequently. To control its spread, farmers sprayed inexpensive Hoegrass, creating selection pressure. In addition, farmers sometimes diluted the herbicide to save money, which allowed some plants to survive application. When resistance appeared, farmers turned to a group of herbicides that block acetolactate synthase. Once again, ryegrass in Australia evolved a kind of "cross-resistance" that allowed it to break down various herbicides rapidly. Four classes of herbicides became ineffective within a few years. In 2013, only two herbicide classes, photosystem II inhibitors and long-chain fatty acid inhibitors, remained effective against ryegrass.
See also
Bioherbicide
Environmental impact assessment
HRAC classification
Index of pesticide articles
Integrated pest management
List of environmental health hazards
Preemergent herbicide
Soil contamination
Surface runoff
Weed
Weed control
Defoliant
References
Further reading
A Brief History of On-track Weed Control in the N.S.W. SRA during the Steam Era. Longworth, Jim. Australian Railway Historical Society Bulletin, April 1996, pp. 99–116.
External links
General Information
National Pesticide Information Center, Information about pesticide-related topics
National Agricultural Statistics Service
Regulatory policy
US EPA
UK Pesticides Safety Directorate
European Commission pesticide information
pmra Pest Management Regulatory Agency of Canada
Pesticides
Soil contamination
Lawn care
Toxicology
Biocides
Chemical anti-agriculture weapons | Herbicide | [
"Chemistry",
"Biology",
"Environmental_science"
] | 7,603 | [
"Herbicides",
"Pesticides",
"Toxicology",
"Chemical weapons",
"Environmental chemistry",
"Soil contamination",
"Chemical anti-agriculture weapons",
"Biocides"
] |
53,932 | https://en.wikipedia.org/wiki/Euclidean%20distance | In mathematics, the Euclidean distance between two points in Euclidean space is the length of the line segment between them. It can be calculated from the Cartesian coordinates of the points using the Pythagorean theorem, and therefore is occasionally called the Pythagorean distance.
These names come from the ancient Greek mathematicians Euclid and Pythagoras. In the Greek deductive geometry exemplified by Euclid's Elements, distances were not represented as numbers but as line segments of the same length, which were considered "equal". The notion of distance is inherent in the compass tool used to draw a circle, whose points all have the same distance from a common center point. The connection from the Pythagorean theorem to distance calculation was not made until the 18th century.
The distance between two objects that are not points is usually defined to be the smallest distance among pairs of points from the two objects. Formulas are known for computing distances between different types of objects, such as the distance from a point to a line. In advanced mathematics, the concept of distance has been generalized to abstract metric spaces, and other distances than Euclidean have been studied. In some applications in statistics and optimization, the square of the Euclidean distance is used instead of the distance itself.
Distance formulas
One dimension
The distance between any two points on the real line is the absolute value of the numerical difference of their coordinates, their absolute difference. Thus if $p$ and $q$ are two points on the real line, then the distance between them is given by: $d(p,q) = |p - q|.$
A more complicated formula, giving the same value, but generalizing more readily to higher dimensions, is: $d(p,q) = \sqrt{(p - q)^2}.$
In this formula, squaring and then taking the square root leaves any positive number unchanged, but replaces any negative number by its absolute value.
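As a quick illustration of this equivalence, the following sketch (standard library Python only; the variable names are my own, not from the article) checks that the absolute-value form and the square-root form give the same one-dimensional distance.

```python
import math

# Distance between 3 and -4 on the real line, computed both ways.
p, q = 3.0, -4.0
assert abs(p - q) == math.sqrt((p - q) ** 2) == 7.0
```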
Two dimensions
In the Euclidean plane, let point $p$ have Cartesian coordinates $(p_1, p_2)$ and let point $q$ have coordinates $(q_1, q_2)$. Then the distance between $p$ and $q$ is given by: $d(p,q) = \sqrt{(p_1 - q_1)^2 + (p_2 - q_2)^2}.$
This can be seen by applying the Pythagorean theorem to a right triangle with horizontal and vertical sides, having the line segment from $p$ to $q$ as its hypotenuse. The two squared terms inside the square root give the areas of squares on the horizontal and vertical sides, and the outer square root converts the area of the square on the hypotenuse into the length of the hypotenuse.
It is also possible to compute the distance for points given by polar coordinates. If the polar coordinates of $p$ are $(r, \theta)$ and the polar coordinates of $q$ are $(s, \psi)$, then their distance is given by the law of cosines: $d(p,q) = \sqrt{r^2 + s^2 - 2rs\cos(\theta - \psi)}.$
When $p$ and $q$ are expressed as complex numbers in the complex plane, the same formula for one-dimensional points expressed as real numbers can be used, although here the absolute value sign indicates the complex norm: $d(p,q) = |p - q|.$
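To tie the three planar formulas together, here is a minimal sketch (standard library Python; the helper names are my own) that computes the same distance via the Pythagorean formula, the polar law-of-cosines form, and the complex modulus, and checks that they agree.

```python
import math

p, q = (1.0, 2.0), (4.0, 6.0)

# Cartesian / Pythagorean form.
d_cartesian = math.hypot(p[0] - q[0], p[1] - q[1])

# Polar form: convert each point to (radius, angle), then apply the law of cosines.
r, theta = math.hypot(*p), math.atan2(p[1], p[0])
s, psi = math.hypot(*q), math.atan2(q[1], q[0])
d_polar = math.sqrt(r**2 + s**2 - 2 * r * s * math.cos(theta - psi))

# Complex form: modulus of the difference of the points viewed as complex numbers.
d_complex = abs(complex(*p) - complex(*q))

assert math.isclose(d_cartesian, d_polar) and math.isclose(d_cartesian, d_complex)
print(d_cartesian)  # 5.0
```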
Higher dimensions
In three dimensions, for points given by their Cartesian coordinates, the distance is $d(p,q) = \sqrt{(p_1 - q_1)^2 + (p_2 - q_2)^2 + (p_3 - q_3)^2}.$
In general, for points given by Cartesian coordinates in $n$-dimensional Euclidean space, the distance is $d(p,q) = \sqrt{(p_1 - q_1)^2 + (p_2 - q_2)^2 + \cdots + (p_n - q_n)^2}.$
The Euclidean distance may also be expressed more compactly in terms of the Euclidean norm of the Euclidean vector difference: $d(p,q) = \lVert p - q \rVert.$
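The general formula translates directly into code. The sketch below (standard library Python; `euclidean` is my own illustrative helper, while `math.dist` is the built-in equivalent available from Python 3.8) sums the squared coordinate differences and takes one square root at the end.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two equal-length coordinate sequences."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

p, q = (1, 0, -2, 3), (4, 4, -2, 3)
assert euclidean(p, q) == 5.0
assert math.dist(p, q) == 5.0  # built-in equivalent (Python 3.8+)
```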
Objects other than points
For pairs of objects that are not both points, the distance can most simply be defined as the smallest distance between any two points from the two objects, although more complicated generalizations from points to sets such as Hausdorff distance are also commonly used. Formulas for computing distances between different types of objects include:
The distance from a point to a line, in the Euclidean plane (a short code sketch of this case follows the list)
The distance from a point to a plane in three-dimensional Euclidean space
The distance between two lines in three-dimensional Euclidean space
The distance from a point to a curve can be used to define its parallel curve, another curve all of whose points have the same distance to the given curve.
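As an example of one of the object-to-object formulas above, the sketch below (standard library Python; the function name is my own) computes the distance from a point to a line in the plane, with the line given in the general form $ax + by + c = 0$.

```python
import math

def point_line_distance(x0, y0, a, b, c):
    """Distance from (x0, y0) to the line a*x + b*y + c = 0 (a, b not both zero)."""
    return abs(a * x0 + b * y0 + c) / math.hypot(a, b)

# The distance from the origin to the line x + y - 2 = 0 is sqrt(2).
assert math.isclose(point_line_distance(0, 0, 1, 1, -2), math.sqrt(2))
```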
Properties
The Euclidean distance is the prototypical example of the distance in a metric space, and obeys all the defining properties of a metric space:
It is symmetric, meaning that for all points $p$ and $q$, $d(p,q) = d(q,p)$. That is (unlike road distance with one-way streets) the distance between two points does not depend on which of the two points is the start and which is the destination.
It is positive, meaning that the distance between every two distinct points is a positive number, while the distance from any point to itself is zero.
It obeys the triangle inequality: for every three points $p$, $q$, and $r$, $d(p,q) \le d(p,r) + d(r,q)$. Intuitively, traveling from $p$ to $q$ via $r$ cannot be any shorter than traveling directly from $p$ to $q$.
Another property, Ptolemy's inequality, concerns the Euclidean distances among four points $p$, $q$, $r$, and $s$. It states that $d(p,q) \cdot d(r,s) + d(q,r) \cdot d(p,s) \ge d(p,r) \cdot d(q,s).$
For points in the plane, this can be rephrased as stating that for every quadrilateral, the products of opposite sides of the quadrilateral sum to at least as large a number as the product of its diagonals. However, Ptolemy's inequality applies more generally to points in Euclidean spaces of any dimension, no matter how they are arranged. For points in metric spaces that are not Euclidean spaces, this inequality may not be true. Euclidean distance geometry studies properties of Euclidean distance such as Ptolemy's inequality, and their application in testing whether given sets of distances come from points in a Euclidean space.
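Since Ptolemy's inequality holds for any four points, it lends itself to a quick numerical spot check. The following sketch (standard library Python; a non-exhaustive sanity check, not a proof) verifies the inequality on many random planar quadruples, with a small tolerance for floating-point error.

```python
import math
import random

def d(a, b):
    return math.dist(a, b)  # Euclidean distance (Python 3.8+)

random.seed(0)
for _ in range(1000):
    p, q, r, s = [(random.random(), random.random()) for _ in range(4)]
    # Sum of products of "opposite sides" is at least the product of "diagonals".
    assert d(p, q) * d(r, s) + d(q, r) * d(p, s) >= d(p, r) * d(q, s) - 1e-12
```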
According to the Beckman–Quarles theorem, any transformation of the Euclidean plane or of a higher-dimensional Euclidean space that preserves unit distances must be an isometry, preserving all distances.
Squared Euclidean distance
In many applications, and in particular when comparing distances, it may be more convenient to omit the final square root in the calculation of Euclidean distances, as the square root does not change the order (d(p,q) ≤ d(p,r) if and only if d²(p,q) ≤ d²(p,r)). The value resulting from this omission is the square of the Euclidean distance, and is called the squared Euclidean distance. For instance, the Euclidean minimum spanning tree can be determined using only the ordering between distances, and not their numeric values. Comparing squared distances produces the same result but avoids an unnecessary square-root calculation and sidesteps issues of numerical precision. As an equation, the squared distance can be expressed as a sum of squares:
$$d^2(p,q) = (p_1 - q_1)^2 + (p_2 - q_2)^2 + \cdots + (p_n - q_n)^2.$$
Beyond its application to distance comparison, squared Euclidean distance is of central importance in statistics, where it is used in the method of least squares, a standard method of fitting statistical estimates to data by minimizing the average of the squared distances between observed and estimated values, and as the simplest form of divergence to compare probability distributions. The addition of squared distances to each other, as is done in least squares fitting, corresponds to an operation on (unsquared) distances called Pythagorean addition. In cluster analysis, squared distances can be used to strengthen the effect of longer distances.
Squared Euclidean distance does not form a metric space, as it does not satisfy the triangle inequality. However, it is a smooth, strictly convex function of the two points, unlike the (unsquared) distance, which is non-smooth (near pairs of equal points) and convex but not strictly convex. The squared distance is thus preferred in optimization theory, since it allows convex analysis to be used. Since squaring is a monotonic function of non-negative values, minimizing squared distance is equivalent to minimizing the Euclidean distance, so the optimization problem is equivalent in terms of either, but easier to solve using squared distance.
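One common use of the order-preserving property is nearest-neighbor search, where only the ordering of distances matters. A minimal sketch, with made-up sample points:

```python
def squared_distance(p, q):
    """Squared Euclidean distance: cheaper than the distance, same ordering."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

def nearest_neighbor(query, points):
    """Return the point closest to `query`, comparing squared distances only."""
    return min(points, key=lambda p: squared_distance(p, query))

points = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0)]
print(nearest_neighbor((1.2, 0.8), points))  # (1.0, 1.0)
```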
The collection of all squared distances between pairs of points from a finite set may be stored in a Euclidean distance matrix, and is used in this form in distance geometry.
Generalizations
In more advanced areas of mathematics, when viewing Euclidean space as a vector space, its distance is associated with a norm called the Euclidean norm, defined as the distance of each vector from the origin. One of the important properties of this norm, relative to other norms, is that it remains unchanged under arbitrary rotations of space around the origin. By Dvoretzky's theorem, every finite-dimensional normed vector space has a high-dimensional subspace on which the norm is approximately Euclidean; the Euclidean norm is the only norm with this property. It can be extended to infinite-dimensional vector spaces as the L² norm or L² distance. The Euclidean distance gives Euclidean space the structure of a topological space, the Euclidean topology, with the open balls (subsets of points at less than a given distance from a given point) as its neighborhoods.
Other common distances in real coordinate spaces and function spaces (a numerical comparison follows this list):
Chebyshev distance (L^∞ distance), which measures distance as the maximum of the distances in each coordinate.
Taxicab distance (L¹ distance), also called Manhattan distance, which measures distance as the sum of the distances in each coordinate.
Minkowski distance (L^p distance), a generalization that unifies Euclidean distance, taxicab distance, and Chebyshev distance.
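All three of these metrics arise from one formula. The sketch below implements the Minkowski distance and shows how the choices p = 1, p = 2, and p → ∞ recover the taxicab, Euclidean, and Chebyshev distances; the example points are arbitrary:

```python
import math

def minkowski(x, y, p=2):
    """Minkowski (L^p) distance between coordinate sequences x and y.

    p=1 gives taxicab, p=2 Euclidean, p=math.inf Chebyshev distance.
    """
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if p == math.inf:
        return max(diffs)
    return sum(d ** p for d in diffs) ** (1 / p)

a, b = (0.0, 0.0), (3.0, 4.0)
print(minkowski(a, b, p=1))         # 7.0 (taxicab)
print(minkowski(a, b, p=2))         # 5.0 (Euclidean)
print(minkowski(a, b, p=math.inf))  # 4.0 (Chebyshev)
```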
For points on surfaces in three dimensions, the Euclidean distance should be distinguished from the geodesic distance, the length of a shortest curve that belongs to the surface. In particular, for measuring great-circle distances on the Earth or other spherical or near-spherical surfaces, distances that have been used include the haversine distance giving great-circle distances between two points on a sphere from their longitudes and latitudes, and Vincenty's formulae also known as "Vincent distance" for distance on a spheroid.
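The haversine formula mentioned above can be sketched as follows, assuming a perfectly spherical Earth with a mean radius of roughly 6371 km; the city coordinates are illustrative:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (latitude, longitude) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Approximate Paris -> London great-circle distance (roughly 340 km).
print(haversine_km(48.8566, 2.3522, 51.5074, -0.1278))
```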
History
Euclidean distance is the distance in Euclidean space. Both concepts are named after ancient Greek mathematician Euclid, whose Elements became a standard textbook in geometry for many centuries. Concepts of length and distance are widespread across cultures, can be dated to the earliest surviving "protoliterate" bureaucratic documents from Sumer in the fourth millennium BC (far before Euclid), and have been hypothesized to develop in children earlier than the related concepts of speed and time. But the notion of a distance, as a number defined from two points, does not actually appear in Euclid's Elements. Instead, Euclid approaches this concept implicitly, through the congruence of line segments, through the comparison of lengths of line segments, and through the concept of proportionality.
The Pythagorean theorem is also ancient, but it could only take its central role in the measurement of distances after the invention of Cartesian coordinates by René Descartes in 1637. The distance formula itself was first published in 1731 by Alexis Clairaut. Because of this formula, Euclidean distance is also sometimes called Pythagorean distance. Although accurate measurements of long distances on the Earth's surface, which are not Euclidean, had again been studied in many cultures since ancient times (see history of geodesy), the idea that Euclidean distance might not be the only way of measuring distances between points in mathematical spaces came even later, with the 19th-century formulation of non-Euclidean geometry. The definition of the Euclidean norm and Euclidean distance for geometries of more than three dimensions also first appeared in the 19th century, in the work of Augustin-Louis Cauchy.
References
Distance
Length
Metric geometry
Pythagorean theorem
distance | Euclidean distance | [
"Physics",
"Mathematics"
] | 2,205 | [
"Scalar physical quantities",
"Planes (geometry)",
"Physical quantities",
"Distance",
"Quantity",
"Euclidean plane geometry",
"Mathematical objects",
"Equations",
"Size",
"Space",
"Length",
"Pythagorean theorem",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
53,933 | https://en.wikipedia.org/wiki/Permittivity | In electromagnetism, the absolute permittivity, often simply called permittivity and denoted by the Greek letter (epsilon), is a measure of the electric polarizability of a dielectric material. A material with high permittivity polarizes more in response to an applied electric field than a material with low permittivity, thereby storing more energy in the material. In electrostatics, the permittivity plays an important role in determining the capacitance of a capacitor.
In the simplest case, the electric displacement field D resulting from an applied electric field E is
$$\mathbf{D} = \varepsilon \mathbf{E}.$$
More generally, the permittivity is a thermodynamic function of state. It can depend on the frequency, magnitude, and direction of the applied field. The SI unit for permittivity is farad per meter (F/m).
The permittivity is often represented by the relative permittivity εr, which is the ratio of the absolute permittivity ε and the vacuum permittivity ε0:
$$\varepsilon_r = \frac{\varepsilon}{\varepsilon_0}.$$
This dimensionless quantity is also often and ambiguously referred to as the permittivity. Another common term encountered for both absolute and relative permittivity is the dielectric constant which has been deprecated in physics and engineering as well as in chemistry.
By definition, a perfect vacuum has a relative permittivity of exactly 1, whereas at standard temperature and pressure, air has a relative permittivity of approximately 1.0006.
Relative permittivity is directly related to electric susceptibility (χ) by
$$\chi = \varepsilon_r - 1,$$
otherwise written as
$$\varepsilon = \varepsilon_r \varepsilon_0 = (1 + \chi)\varepsilon_0.$$
The term "permittivity" was introduced in the 1880s by Oliver Heaviside to complement Thomson's (1872) "permeability". Formerly written as , the designation with has been in common use since the 1950s.
Units
The SI unit of permittivity is farad per meter (F/m or F·m⁻¹).
Explanation
In electromagnetism, the electric displacement field D represents the distribution of electric charges in a given medium resulting from the presence of an electric field E. This distribution includes charge migration and electric dipole reorientation. Its relation to permittivity in the very simple case of linear, homogeneous, isotropic materials with "instantaneous" response to changes in electric field is:
$$\mathbf{D} = \varepsilon \mathbf{E},$$
where the permittivity ε is a scalar. If the medium is anisotropic, the permittivity is a second rank tensor.
In general, permittivity is not a constant, as it can vary with the position in the medium, the frequency of the field applied, humidity, temperature, and other parameters. In a nonlinear medium, the permittivity can depend on the strength of the electric field. Permittivity as a function of frequency can take on real or complex values.
In SI units, permittivity is measured in farads per meter (F/m or A²·s⁴·kg⁻¹·m⁻³). The displacement field D is measured in units of coulombs per square meter (C/m²), while the electric field E is measured in volts per meter (V/m). D and E describe the interaction between charged objects. D is related to the charge densities associated with this interaction, while E is related to the forces and potential differences.
Vacuum permittivity
The vacuum permittivity ε0 (also called permittivity of free space or the electric constant) is the ratio D/E in free space. It also appears in the Coulomb force constant,
$$k_e = \frac{1}{4\pi\varepsilon_0}.$$
Its value is
$$\varepsilon_0 = \frac{1}{\mu_0 c^2} \approx 8.854 \times 10^{-12}\ \mathrm{F/m},$$
where
c is the speed of light in free space,
μ0 is the vacuum permeability.
The constants c and μ0 were both defined in SI units to have exact numerical values until the 2019 revision of the SI. Therefore, until that date, ε0 could be also stated exactly as a fraction, even if the result was irrational (because the fraction contained π). In contrast, the ampere was a measured quantity before 2019, but since then the ampere is exactly defined and it is μ0 that is an experimentally measured quantity (with consequent uncertainty) and therefore so is the new 2019 definition of ε0 (c remains exactly defined before and since 2019).
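The relation ε0 = 1/(μ0 c²) is easy to evaluate numerically. A minimal sketch, using the exact defined value of c and the conventional value μ0 = 4π × 10⁻⁷ H/m (exact before 2019, and still accurate to about one part in 10¹⁰ today):

```python
import math

c = 299_792_458.0         # speed of light in vacuum, m/s (exact by definition)
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m (exact pre-2019 value)

eps0 = 1 / (mu0 * c ** 2)
print(eps0)  # ~8.854e-12 F/m
```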
Relative permittivity
The linear permittivity of a homogeneous material is usually given relative to that of free space, as a relative permittivity εr (also called dielectric constant, although this term is deprecated and sometimes only refers to the static, zero-frequency relative permittivity). In an anisotropic material, the relative permittivity may be a tensor, causing birefringence. The actual permittivity is then calculated by multiplying the relative permittivity by ε0:
$$\varepsilon = \varepsilon_r \varepsilon_0 = (1 + \chi)\varepsilon_0,$$
where χ (frequently written χe) is the electric susceptibility of the material.
The susceptibility is defined as the constant of proportionality (which may be a tensor) relating an electric field E to the induced dielectric polarization density P such that
$$\mathbf{P} = \varepsilon_0 \chi \mathbf{E},$$
where ε0 is the electric permittivity of free space.
The susceptibility of a medium is related to its relative permittivity εr by
$$\chi = \varepsilon_r - 1.$$
So in the case of a vacuum,
$$\chi = 0.$$
The susceptibility is also related to the polarizability of individual particles in the medium by the Clausius-Mossotti relation.
The electric displacement D is related to the polarization density P by
$$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} = \varepsilon_0 (1 + \chi)\mathbf{E} = \varepsilon_r \varepsilon_0 \mathbf{E}.$$
The permittivity ε and permeability μ of a medium together determine the phase velocity v of electromagnetic radiation through that medium:
$$\varepsilon \mu = \frac{1}{v^2}.$$
Practical applications
Determining capacitance
The capacitance of a capacitor is based on its design and architecture, meaning it will not change with charging and discharging. The formula for capacitance in a parallel plate capacitor is written as
$$C = \varepsilon \frac{A}{d},$$
where A is the area of one plate, d is the distance between the plates, and ε is the permittivity of the medium between the two plates. For a capacitor with relative permittivity εr, it can be said that
$$C = \varepsilon_r \varepsilon_0 \frac{A}{d}.$$
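A quick numerical check of the parallel-plate formula; the plate geometry and relative permittivity below are arbitrary illustration values:

```python
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """C = eps_r * eps0 * A / d for an ideal parallel-plate capacitor."""
    return eps_r * eps0 * area_m2 / gap_m

# 10 cm x 10 cm plates, 0.1 mm apart, dielectric with eps_r = 3.
print(parallel_plate_capacitance(0.01, 1e-4, eps_r=3.0))  # ~2.7e-9 F (2.7 nF)
```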
Gauss's law
Permittivity is connected to electric flux (and by extension electric field) through Gauss's law. Gauss's law states that for a closed Gaussian surface S,
$$\Phi_E = \frac{Q_{\mathrm{enc}}}{\varepsilon} = \oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A},$$
where Φ_E is the net electric flux passing through the surface, Q_enc is the charge enclosed in the Gaussian surface, E is the electric field vector at a given point on the surface, and dA is a differential area vector on the Gaussian surface.
If the Gaussian surface uniformly encloses an insulated, symmetrical charge arrangement, the formula can be simplified to
$$\Phi_E = \frac{Q_{\mathrm{enc}}}{\varepsilon} = E A \cos\theta,$$
where θ represents the angle between the electric field lines and the normal (perpendicular) to the area A.
If all of the electric field lines cross the surface at 90°, the formula can be further simplified to
$$E = \frac{Q_{\mathrm{enc}}}{\varepsilon A}.$$
Because the surface area of a sphere is 4πr², the electric field a distance r away from a uniform, spherical charge arrangement is
$$E = \frac{Q}{4\pi \varepsilon r^2}.$$
This formula applies to the electric field due to a point charge, outside of a conducting sphere or shell, outside of a uniformly charged insulating sphere, or between the plates of a spherical capacitor.
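The spherically symmetric case is equally simple to check numerically; the charge and distance below are illustrative:

```python
import math

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def point_charge_field(q_coulomb, r_m, eps=eps0):
    """Magnitude of E a distance r from a point charge: E = Q / (4*pi*eps*r^2)."""
    return q_coulomb / (4 * math.pi * eps * r_m ** 2)

# Field 1 m from a 1 microcoulomb charge in vacuum: ~8988 V/m.
print(point_charge_field(1e-6, 1.0))
```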
Dispersion and causality
In general, a material cannot polarize instantaneously in response to an applied field, and so the more general formulation as a function of time is
$$\mathbf{P}(t) = \varepsilon_0 \int_{-\infty}^{t} \chi(t - t')\,\mathbf{E}(t')\,\mathrm{d}t'.$$
That is, the polarization is a convolution of the electric field at previous times with time-dependent susceptibility given by χ(Δt). The upper limit of this integral can be extended to infinity as well if one defines χ(Δt) = 0 for Δt < 0. An instantaneous response would correspond to a Dirac delta function susceptibility χ(Δt) = χδ(Δt).
It is convenient to take the Fourier transform with respect to time and write this relationship as a function of frequency. Because of the convolution theorem, the integral becomes a simple product,
$$\mathbf{P}(\omega) = \varepsilon_0 \chi(\omega)\,\mathbf{E}(\omega).$$
This frequency dependence of the susceptibility leads to frequency dependence of the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material.
Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e. effectively χ(Δt) = 0 for Δt < 0), a consequence of causality, imposes Kramers–Kronig constraints on the susceptibility χ(ω).
Complex permittivity
As opposed to the response of a vacuum, the response of normal materials to external fields generally depends on the frequency of the field. This frequency dependence reflects the fact that a material's polarization does not change instantaneously when an electric field is applied. The response must always be causal (arising after the applied field), which can be represented by a phase difference. For this reason, permittivity is often treated as a complex function of the (angular) frequency ω of the applied field:
$$\varepsilon \rightarrow \hat{\varepsilon}(\omega)$$
(since complex numbers allow specification of magnitude and phase). The definition of permittivity therefore becomes
$$D_0 e^{-i\omega t} = \hat{\varepsilon}(\omega)\, E_0 e^{-i\omega t},$$
where
D0 and E0 are the amplitudes of the displacement and electric fields, respectively,
i is the imaginary unit, i² = −1.
The response of a medium to static electric fields is described by the low-frequency limit of permittivity, also called the static permittivity εs (also ε_DC):
$$\varepsilon_s = \lim_{\omega \to 0} \hat{\varepsilon}(\omega).$$
At the high-frequency limit (meaning optical frequencies), the complex permittivity is commonly referred to as ε∞ (or sometimes ε_opt). At the plasma frequency and below, dielectrics behave as ideal metals, with electron gas behavior. The static permittivity is a good approximation for alternating fields of low frequencies, and as the frequency increases a measurable phase difference δ emerges between D and E. The frequency at which the phase shift becomes noticeable depends on temperature and the details of the medium. For moderate field strength (E0), D and E remain proportional, and
$$\hat{\varepsilon} = \frac{D_0}{E_0} = |\varepsilon| e^{-i\delta}.$$
Since the response of materials to alternating fields is characterized by a complex permittivity, it is natural to separate its real and imaginary parts, which is done by convention in the following way:
$$\hat{\varepsilon}(\omega) = \varepsilon'(\omega) - i\varepsilon''(\omega) = \frac{D_0}{E_0}\left(\cos\delta - i\sin\delta\right),$$
where
ε' is the real part of the permittivity;
ε'' is the imaginary part of the permittivity;
δ is the loss angle.
The choice of sign for time-dependence, e^{−iωt}, dictates the sign convention for the imaginary part of permittivity. The signs used here correspond to those commonly used in physics, whereas for the engineering convention one should reverse all imaginary quantities.
The complex permittivity is usually a complicated function of frequency ω, since it is a superimposed description of dispersion phenomena occurring at multiple frequencies. The dielectric function must have poles only for frequencies with positive imaginary parts, and therefore satisfies the Kramers–Kronig relations. However, in the narrow frequency ranges that are often studied in practice, the permittivity can be approximated as frequency-independent or by model functions.
At a given frequency, the imaginary part, ε'', leads to absorption loss if it is positive (in the above sign convention) and gain if it is negative. More generally, the imaginary parts of the eigenvalues of the anisotropic dielectric tensor should be considered.
In the case of solids, the complex dielectric function is intimately connected to band structure. The primary quantity that characterizes the electronic structure of any crystalline material is the probability of photon absorption, which is directly related to the imaginary part of the optical dielectric function ε₂(ω). In the fundamental expression for the optical dielectric function, the Brillouin zone-averaged transition probability at a given energy is multiplied by the joint density of states, and a broadening function represents the role of scattering in smearing out the energy levels. In general, the broadening is intermediate between Lorentzian and Gaussian; for an alloy it is somewhat closer to Gaussian because of strong scattering from statistical fluctuations in the local composition on a nanometer scale.
Tensorial permittivity
According to the Drude model of magnetized plasma, a more general expression which takes into account the interaction of the carriers with an alternating electric field at millimeter and microwave frequencies in an axially magnetized semiconductor requires the expression of the permittivity as a non-diagonal tensor:
$$\varepsilon(\omega) = \begin{pmatrix} \varepsilon_1 & -i\varepsilon_2 & 0 \\ i\varepsilon_2 & \varepsilon_1 & 0 \\ 0 & 0 & \varepsilon_z \end{pmatrix}.$$
If ε₂ vanishes, then the tensor is diagonal but not proportional to the identity and the medium is said to be a uniaxial medium, which has similar properties to a uniaxial crystal.
Classification of materials
Materials can be classified according to their complex-valued permittivity ε̂, upon comparison of its real and imaginary components (or, equivalently, conductivity, σ, when accounted for in the latter). A perfect conductor has infinite conductivity, σ = ∞, while a perfect dielectric is a material that has no conductivity at all, σ = 0; this latter case, of real-valued permittivity (or complex-valued permittivity with zero imaginary component) is also associated with the name lossless media. Generally, when σ/(ωε') ≪ 1 we consider the material to be a low-loss dielectric (although not exactly lossless), whereas σ/(ωε') ≫ 1 is associated with a good conductor; such materials with non-negligible conductivity yield a large amount of loss that inhibit the propagation of electromagnetic waves, thus are also said to be lossy media. Those materials that do not fall under either limit are considered to be general media.
Lossy media
In the case of a lossy medium, i.e. when the conduction current is not negligible, the total current density flowing is:
$$J_{\mathrm{tot}} = J_c + J_d = \sigma \mathbf{E} + i\omega\varepsilon' \mathbf{E} = i\omega\hat{\varepsilon}\mathbf{E},$$
where
σ is the conductivity of the medium;
ε' is the real part of the permittivity;
ε̂ is the complex permittivity.
Note that this is using the electrical engineering convention of the complex conjugate ambiguity; the physics/chemistry convention involves the complex conjugate of these equations.
The size of the displacement current is dependent on the frequency ω of the applied field; there is no displacement current in a constant field.
In this formalism, the complex permittivity is defined as:
$$\hat{\varepsilon} = \varepsilon' - i\frac{\sigma}{\omega}.$$
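Evaluating this expression shows how conduction adds to the imaginary part. A minimal sketch with illustrative, seawater-like values (εr ≈ 80, σ ≈ 4 S/m) at 1 GHz:

```python
import math

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def complex_permittivity(eps_r, sigma, freq_hz):
    """eps_hat = eps' - i*sigma/omega for a lossy medium (physics convention)."""
    omega = 2 * math.pi * freq_hz
    return eps_r * eps0 - 1j * sigma / omega

# Illustrative seawater-like values at 1 GHz.
eps_hat = complex_permittivity(80.0, 4.0, 1e9)
loss_tangent = -eps_hat.imag / eps_hat.real
print(eps_hat, loss_tangent)  # loss tangent ~0.9: a decidedly lossy medium
```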
In general, the absorption of electromagnetic energy by dielectrics is covered by a few different mechanisms that influence the shape of the permittivity as a function of frequency:
First are the relaxation effects associated with permanent and induced molecular dipoles. At low frequencies the field changes slowly enough to allow dipoles to reach equilibrium before the field has measurably changed. For frequencies at which dipole orientations cannot follow the applied field because of the viscosity of the medium, absorption of the field's energy leads to energy dissipation. The mechanism of dipoles relaxing is called dielectric relaxation and for ideal dipoles is described by classic Debye relaxation.
Second are the resonance effects, which arise from the rotations or vibrations of atoms, ions, or electrons. These processes are observed in the neighborhood of their characteristic absorption frequencies.
The above effects often combine to cause non-linear effects within capacitors. For example, dielectric absorption refers to the inability of a capacitor that has been charged for a long time to completely discharge when briefly discharged. Although an ideal capacitor would remain at zero volts after being discharged, real capacitors will develop a small voltage, a phenomenon that is also called soakage or battery action. For some dielectrics, such as many polymer films, the resulting voltage may be less than 1–2% of the original voltage. However, it can be as much as 15–25% in the case of electrolytic capacitors or supercapacitors.
Quantum-mechanical interpretation
In terms of quantum mechanics, permittivity is explained by atomic and molecular interactions.
At low frequencies, molecules in polar dielectrics are polarized by an applied electric field, which induces periodic rotations. For example, at the microwave frequency, the microwave field causes the periodic rotation of water molecules, sufficient to break hydrogen bonds. The field does work against the bonds and the energy is absorbed by the material as heat. This is why microwave ovens work very well for materials containing water. There are two maxima of the imaginary component (the absorptive index) of water, one at the microwave frequency, and the other at far ultraviolet (UV) frequency. Both of these resonances are at higher frequencies than the operating frequency of microwave ovens.
At moderate frequencies, the energy is too high to cause rotation, yet too low to affect electrons directly, and is absorbed in the form of resonant molecular vibrations. In water, this is where the absorptive index starts to drop sharply, and the minimum of the imaginary permittivity is at the frequency of blue light (optical regime).
At high frequencies (such as UV and above), molecules cannot relax, and the energy is purely absorbed by atoms, exciting electron energy levels. Thus, these frequencies are classified as ionizing radiation.
While carrying out a complete ab initio (that is, first-principles) modelling is now computationally possible, it has not been widely applied yet. Thus, a phenomenological model is accepted as being an adequate method of capturing experimental behaviors. The Debye model and the Lorentz model use a first-order and second-order (respectively) lumped system parameter linear representation (such as an RC and an LRC resonant circuit).
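As a concrete illustration of the Debye model just mentioned, the sketch below evaluates ε̂(ω) = ε∞ + (εs − ε∞)/(1 + iωτ); the parameter values are illustrative, loosely based on liquid water near room temperature:

```python
import math

def debye_permittivity(omega, eps_s=80.1, eps_inf=5.2, tau=8.3e-12):
    """Complex relative permittivity eps' - i*eps'' in the Debye model.

    Defaults are illustrative values loosely based on liquid water near
    room temperature; tau is the dipole relaxation time in seconds.
    """
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)

omega = 2 * math.pi * 2.45e9        # angular frequency of a 2.45 GHz microwave field
eps_hat = debye_permittivity(omega)
print(eps_hat.real, -eps_hat.imag)  # eps' ~ 79, eps'' (the loss part) ~ 9
```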
Measurement
The relative permittivity of a material can be found by a variety of static electrical measurements. The complex permittivity is evaluated over a wide range of frequencies by using different variants of dielectric spectroscopy, covering nearly 21 orders of magnitude from 10⁻⁶ to 10¹⁵ hertz. Also, by using cryostats and ovens, the dielectric properties of a medium can be characterized over an array of temperatures. In order to study systems for such diverse excitation fields, a number of measurement setups are used, each adequate for a special frequency range.
Various microwave measurement techniques are outlined in Chen et al. Typical errors for the Hakki–Coleman method employing a puck of material between conducting planes are about 0.3%.
Low-frequency time domain measurements
Low-frequency frequency domain measurements
Reflective coaxial methods
Transmission coaxial methods
Quasi-optical methods
Terahertz time-domain spectroscopy
Fourier-transform methods
At infrared and optical frequencies, a common technique is ellipsometry. Dual polarisation interferometry is also used to measure the complex refractive index for very thin films at optical frequencies.
For the 3D measurement of dielectric tensors at optical frequency, dielectric tensor tomography can be used.
See also
Acoustic attenuation
Density functional theory
Electric-field screening
Green–Kubo relations
Green's function (many-body theory)
Linear response function
Permeability (electromagnetism)
Rotational Brownian motion
References
Further reading
(volume 2 publ. 1978)
External links
– a chapter from an online textbook
Electric and magnetic fields in matter
Physical quantities | Permittivity | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 3,787 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
"Physical properties"
] |
54,000 | https://en.wikipedia.org/wiki/Biophysics | Biophysics is an interdisciplinary science that applies approaches and methods traditionally used in physics to study biological phenomena. Biophysics covers all scales of biological organization, from molecular to organismic and populations. Biophysical research shares significant overlap with biochemistry, molecular biology, physical chemistry, physiology, nanotechnology, bioengineering, computational biology, biomechanics, developmental biology and systems biology.
The term biophysics was originally introduced by Karl Pearson in 1892. The term biophysics is also regularly used in academia to indicate the study of the physical quantities (e.g. electric current, temperature, stress, entropy) in biological systems. Other biological sciences also perform research on the biophysical properties of living organisms including molecular biology, cell biology, chemical biology, and biochemistry.
Overview
Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions.
Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) with both X-rays and neutrons (SAXS/SANS), are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules.
In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology involving both experimental and theoretical tools. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems. Biophysical models are used extensively in the study of electrical conduction in single neurons, as well as neural circuit analysis in both tissue and whole brain.
Medical physics, a branch of biophysics, is any application of physics to medicine or healthcare, ranging from radiology to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanomachines). Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom.
History
The studies of Luigi Galvani (1737–1798) laid groundwork for the later field of biophysics. Some of the earlier studies in biophysics were conducted in the 1840s by a group known as the Berlin school of physiologists. Among its members were pioneers such as Hermann von Helmholtz, Ernst Heinrich Weber, Carl F. W. Ludwig, and Johannes Peter Müller.
William T. Bovie (1882–1958) is credited as a leader of the field's further development in the mid-20th century. He was a leader in developing electrosurgery.
The popularity of the field rose when the book What Is Life? by Erwin Schrödinger was published. Since 1957, biophysicists have organized themselves into the Biophysical Society, which now has about 9,000 members around the world.
Some authors such as Robert Rosen criticize biophysics on the ground that the biophysical method does not take into account the specificity of biological phenomena.
Focus as a subfield
While some colleges and universities have dedicated departments of biophysics, usually at the graduate level, many do not have university-level biophysics departments, instead having groups in related departments such as biochemistry, cell biology, chemistry, computer science, engineering, mathematics, medicine, molecular biology, neuroscience, pharmacology, physics, and physiology. Depending on the strengths of a department at a university, differing emphasis will be given to fields of biophysics. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all-inclusive. Nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments.
Biology and molecular biology – Gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics, virophysics.
Structural biology – Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof.
Biochemistry and chemistry – biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships.
Computer science – Neural networks, biomolecular and drug databases.
Computational chemistry – molecular dynamics simulation, molecular docking, quantum chemistry
Bioinformatics – sequence alignment, structural alignment, protein structure prediction
Mathematics – graph/network theory, population modeling, dynamical systems, phylogenetics.
Medicine – biophysical research that emphasizes medicine. Medical biophysics is a field closely related to physiology. It explains various aspects and systems of the body from a physical and mathematical perspective. Examples are fluid dynamics of blood flow, gas physics of respiration, radiation in diagnostics/treatment and much more. Biophysics is taught as a preclinical subject in many medical schools, mainly in Europe.
Neuroscience – studying neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity.
Pharmacology and physiology – channelomics, electrophysiology, biomolecular interactions, cellular membranes, polyketides.
Physics – negentropy, stochastic processes, and the development of new physical techniques and instrumentation as well as their application.
Quantum biology – The field of quantum biology applies quantum mechanics to biological objects and problems, such as decohered isomers that yield time-dependent base substitutions. These studies imply applications in quantum computing.
Agronomy and agriculture
Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were biologists, chemists or physicists by training.
See also
Biophysical Society
Index of biophysics articles
List of publications in biology – Biophysics
List of publications in physics – Biophysics
List of biophysicists
Outline of biophysics
Biophysical chemistry
European Biophysical Societies' Association
Mathematical and theoretical biology
Medical biophysics
Membrane biophysics
Molecular biophysics
Neurophysics
Physiomics
Virophysics
Single-particle trajectory
References
Sources
External links
Biophysical Society
Journal of Physiology: 2012 virtual issue Biophysics and Beyond
bio-physics-wiki
Link archive of learning resources for students: biophysika.de (60% English, 40% German)
Applied and interdisciplinary physics | Biophysics | [
"Physics",
"Biology"
] | 1,568 | [
"Applied and interdisciplinary physics",
"Biophysics"
] |
54,034 | https://en.wikipedia.org/wiki/Misnay%E2%80%93Schardin%20effect | The Misnay–Schardin effect, or platter effect, is a characteristic of the detonation of a broad sheet of explosive.
Description
Explosive blasts expand directly away from, and perpendicular to, the surface of an explosive. Unlike the blast from a rounded explosive charge, which expands in all directions, the blast produced by an explosive sheet expands primarily perpendicular to its plane, in both directions. However, if one side is backed by a heavy or fixed mass, most of the blast (i.e. most of the rapidly expanding gas and its kinetic energy) will be reflected in the direction away from the mass.
Uses
The Misnay–Schardin effect was studied and experimented with by explosive experts József Misnay (sometimes incorrectly spelled Misznay), a Hungarian, and Hubert Schardin, a German, who initially sought to develop a more effective antitank mine for Nazi Germany. Some sources claim that World War II ended before their design became usable, but they and others continued their work. Misnay designed two weapons: the 43M TAK antitank mine and the 44M LŐTAK side-attack mine. The Hungarian army used these weapons in 1944–1945.
The later AT2 and M18 Claymore mines rely on this effect.
See also
High-explosive squash head
Explosively formed penetrator
Munroe effect
M93 Hornet mine
References
Explosives | Misnay–Schardin effect | [
"Chemistry"
] | 282 | [
"Explosives",
"Explosions"
] |
54,125 | https://en.wikipedia.org/wiki/Breccia | Breccia ( , ; ) is a rock composed of large angular broken fragments of minerals or rocks cemented together by a fine-grained matrix.
The word has its origins in the Italian language, in which it means "rubble". A breccia may have a variety of different origins, as indicated by the named types including sedimentary breccia, fault or tectonic breccia, igneous breccia, impact breccia, and hydrothermal breccia.
A megabreccia is a breccia composed of very large rock fragments, sometimes kilometers across, which can be formed by landslides, impact events, or caldera collapse.
Types
Breccia is composed of coarse rock fragments held together by cement or a fine-grained matrix. Like conglomerate, breccia contains at least 30 percent of gravel-sized particles (particles over 2 mm in size), but it is distinguished from conglomerate because the rock fragments have sharp edges that have not been worn down. These indicate that the gravel was deposited very close to its source area, since otherwise the edges would have been rounded during transport. Most of the rounding of rock fragments takes place within the first few kilometers of transport, though complete rounding of pebbles of very hard rock may require a very long distance of river transport.
A megabreccia is a breccia containing very large rock fragments, from at least a meter in size to greater than 400 meters. In some cases, the clasts are so large that the brecciated nature of the rock is not obvious. Megabreccias can be formed by landslides, impact events, or caldera collapse.
Breccias are further classified by their mechanism of formation.
Sedimentary
Sedimentary breccia is breccia formed by sedimentary processes. For example, scree deposited at the base of a cliff may become cemented to form a talus breccia without ever experiencing transport that might round the rock fragments.
Thick sequences of sedimentary (colluvial) breccia are generally formed next to fault scarps in grabens.
Sedimentary breccia may be formed by submarine debris flows. Turbidites occur as fine-grained peripheral deposits to sedimentary breccia flows.
In a karst terrain, a collapse breccia may form due to collapse of rock into a sinkhole or in cave development. Collapse breccias also form by dissolution of underlying evaporite beds.
Fault
Fault or tectonic breccia results from the grinding action of two fault blocks as they slide past each other. Subsequent cementation of these broken fragments may occur by means of the introduction of mineral matter in groundwater.
Igneous
Igneous clastic rocks can be divided into two classes:
Broken, fragmental rocks associated with volcanic eruptions, both of the lava and pyroclastic type;
Broken, fragmental rocks produced by intrusive processes, usually associated with plutons or porphyry stocks.
Volcanic
Volcanic pyroclastic rocks are formed by explosive eruption of lava and any rocks which are entrained within the eruptive column. This may include rocks plucked off the wall of the magma conduit, or physically picked up by the ensuing pyroclastic surge. Lavas, especially rhyolite and dacite flows, tend to form clastic volcanic rocks by a process known as autobrecciation. This occurs when the thick, nearly solid lava breaks up into blocks and these blocks are then reincorporated into the lava flow again and mixed in with the remaining liquid magma. The resulting breccia is uniform in rock type and chemical composition.
Caldera collapse leads to the formation of megabreccias, which are sometimes mistaken for outcrops of the caldera floor. These are instead blocks of precaldera rock, often coming from the unstable oversteepened rim of the caldera. They are distinguished from mesobreccias whose clasts are less than a meter in size and which form layers in the caldera floor. Some clasts of caldera megabreccias can be over a kilometer in length.
Within the volcanic conduits of explosive volcanoes the volcanic breccia environment merges into the intrusive breccia environment. There the upwelling lava tends to solidify during quiescent intervals only to be shattered by ensuing eruptions. This produces an alloclastic volcanic breccia.
Intrusive
Clastic rocks are also commonly found in shallow subvolcanic intrusions such as porphyry stocks, granites and kimberlite pipes, where they are transitional with volcanic breccias. Intrusive rocks can become brecciated in appearance by multiple stages of intrusion, especially if fresh magma is intruded into partly consolidated or solidified magma. This may be seen in many granite intrusions where later aplite veins form a late-stage stockwork through earlier phases of the granite mass. When particularly intense, the rock may appear as a chaotic breccia.
Clastic rocks in mafic and ultramafic intrusions have been found and form via several processes:
consumption and melt-mingling with wall rocks, where the wall rocks are softened and gradually invaded by the hotter ultramafic intrusion (producing taxitic texture);
accumulation of rocks which fall through the magma chamber from the roof, forming chaotic remnants;
autobrecciation of partly consolidated cumulate by fresh magma injections;
accumulation of xenoliths within a feeder conduit or vent conduit, forming a diatreme breccia pipe.
Impact
Impact breccias are thought to be diagnostic of an impact event such as an asteroid or comet striking the Earth and are normally found at impact craters. Impact breccia, a type of impactite, forms during the process of impact cratering when large meteorites or comets impact with the Earth or other rocky planets or asteroids. Breccia of this type may be present on or beneath the floor of the crater, in the rim, or in the ejecta expelled beyond the crater.
Impact breccia may be identified by its occurrence in or around a known impact crater, and/or an association with other products of impact cratering such as shatter cones, impact glass, shocked minerals, and chemical and isotopic evidence of contamination with extraterrestrial material (e.g., iridium and osmium anomalies). An example of an impact breccia is the Neugrund breccia, which was formed in the Neugrund impact.
Hydrothermal
Hydrothermal breccias usually form at shallow crustal levels (<1 km) between 150 and 350 °C, when seismic or volcanic activity causes a void to open along a fault deep underground. The void draws in hot water, and as pressure in the cavity drops, the water violently boils. In addition, the sudden opening of a cavity causes rock at the sides of the fault to destabilise and implode inwards, and the broken rock gets caught up in a churning mixture of rock, steam and boiling water. Rock fragments collide with each other and the sides of the void, and the angular fragments become more rounded. Volatile gases are lost to the steam phase as boiling continues, in particular carbon dioxide. As a result, the chemistry of the fluids changes and ore minerals rapidly precipitate. Breccia-hosted ore deposits are quite common.
The morphology of breccias associated with ore deposits varies from tabular sheeted veins and clastic dikes associated with overpressured sedimentary strata, to large-scale intrusive diatreme breccias (breccia pipes), or even some synsedimentary diatremes formed solely by the overpressure of pore fluid within sedimentary basins. Hydrothermal breccias are usually formed by hydrofracturing of rocks by highly pressured hydrothermal fluids. They are typical of the epithermal ore environment and are intimately associated with intrusive-related ore deposits such as skarns, greisens and porphyry-related mineralisation. Epithermal deposits are mined for copper, silver and gold.
In the mesothermal regime, at much greater depths, fluids under lithostatic pressure can be released during seismic activity associated with mountain building. The pressurised fluids ascend towards shallower crustal levels that are under lower hydrostatic pressure. On their journey, high-pressure fluids crack rock by hydrofracturing, forming an angular in situ breccia. Rounding of rock fragments is less common in the mesothermal regime, as the formational event is brief. If boiling occurs, methane and hydrogen sulfide may be lost to the steam phase, and ore may precipitate. Mesothermal deposits are often mined for gold.
Ornamental uses
For thousands of years, the striking visual appearance of breccias has made them a popular sculptural and architectural material. Breccia was used for column bases in the Minoan palace of Knossos on Crete in about 1800 BC. Breccia was used on a limited scale by the ancient Egyptians; one of the best-known examples is the statue of the goddess Tawaret in the British Museum. Breccia was regarded by the Romans as an especially precious stone and was often used in high-profile public buildings. Many types of marble are brecciated, such as Breccia Oniciata.
See also
References
Further reading
"Materials_science"
] | 1,969 | [
"Breccias",
"Fracture mechanics"
] |
54,140 | https://en.wikipedia.org/wiki/Clathrate%20hydrate | Clathrate hydrates, or gas hydrates, clathrates, or hydrates, are crystalline water-based solids physically resembling ice, in which small non-polar molecules (typically gases) or polar molecules with large hydrophobic moieties are trapped inside "cages" of hydrogen bonded, frozen water molecules. In other words, clathrate hydrates are clathrate compounds in which the host molecule is water and the guest molecule is typically a gas or liquid. Without the support of the trapped molecules, the lattice structure of hydrate clathrates would collapse into conventional ice crystal structure or liquid water. Most low molecular weight gases, including , , , , , , , , , and as well as some higher hydrocarbons and freons, will form hydrates at suitable temperatures and pressures. Clathrate hydrates are not officially chemical compounds, as the enclathrated guest molecules are never bonded to the lattice. The formation and decomposition of clathrate hydrates are first order phase transitions, not chemical reactions. Their detailed formation and decomposition mechanisms on a molecular level are still not well understood.
Clathrate hydrates were first documented in 1810 by Sir Humphry Davy who found that water was a primary component of what was earlier thought to be solidified chlorine.
Clathrates have been found to occur naturally in large quantities. Around 6.4 trillion (6.4×10¹²) tonnes of methane is trapped in deposits of methane clathrate on the deep ocean floor. Such deposits can be found on the Norwegian continental shelf in the northern headwall flank of the Storegga Slide. Clathrates can also exist as permafrost, as at the Mallik gas hydrate site in the Mackenzie Delta of northwestern Canadian Arctic. These natural gas hydrates are seen as a potentially vast energy resource and several countries have dedicated national programs to develop this energy resource. Clathrate hydrate has also been of great interest as a technology enabler for many applications such as seawater desalination, gas storage, carbon dioxide capture and storage, cooling medium for data centres, and district cooling. Hydrocarbon clathrates cause problems for the petroleum industry, because they can form inside gas pipelines, often resulting in obstructions. Deep sea deposition of carbon dioxide clathrate has been proposed as a method to remove this greenhouse gas from the atmosphere and control climate change. Clathrates are suspected to occur in large quantities on some outer planets, moons and trans-Neptunian objects, binding gas at fairly high temperatures.
History and etymology
Clathrate hydrates were discovered in 1810 by Humphry Davy. Clathrates were studied by P. Pfeiffer in 1927 and in 1930, E. Hertel defined "molecular compounds" as substances decomposed into individual components following the mass action law in solution or gas state. Clathrate hydrates were discovered to form blockages in gas pipelines in 1934 by Hammerschmidt that led to increase in research to avoid hydrate formation. In 1945, H. M. Powell analyzed the crystal structure of these compounds and named them clathrates. Gas production through methane hydrates has since been realized and has been tested for energy production in Japan and China.
The word clathrate is derived from the Latin (), meaning 'with bars, latticed'.
Structure
Gas hydrates usually form two crystallographic cubic structures: structure (Type) I (named sI) and structure (Type) II (named sII), of space groups Pm3n and Fd3m respectively. A third hexagonal structure of space group P6/mmm may also be observed (Type H).
The unit cell of Type I consists of 46 water molecules, forming two types of cages – small and large. The unit cell contains two small cages and six large ones. The small cage has the shape of a pentagonal dodecahedron (5¹²) (which is not a regular dodecahedron) and the large one that of a tetradecahedron, specifically a hexagonal truncated trapezohedron (5¹²6²). Together, they form a version of the Weaire–Phelan structure. Typical guests forming Type I hydrates are CO2 in carbon dioxide clathrate and CH4 in methane clathrate.
The unit cell of Type II consists of 136 water molecules, again forming two types of cages – small and large. In this case there are sixteen small cages and eight large ones in the unit cell. The small cage again has the shape of a pentagonal dodecahedron (5¹²), but the large one is a hexadecahedron (5¹²6⁴). Type II hydrates are formed by gases like O2 and N2.
The unit cell of Type H consists of 34 water molecules, forming three types of cages – two small ones of different types, and one "huge". In this case, the unit cell consists of three small cages of type 5¹², two small ones of type 4³5⁶6³ and one huge of type 5¹²6⁸. The formation of Type H requires the cooperation of two guest gases (large and small) to be stable. It is the large cavity that allows structure H hydrates to fit in large molecules (e.g. butane, hydrocarbons), given the presence of other smaller help gases to fill and support the remaining cavities. Structure H hydrates were suggested to exist in the Gulf of Mexico. Thermogenically produced supplies of heavy hydrocarbons are common there.
The molar fraction of water of most clathrate hydrates is 85%. Clathrate hydrates are derived from organic hydrogen-bonded frameworks. These frameworks are prepared from molecules that "self-associate" by multiple hydrogen-bonding interactions. Small molecules or gases (e.g. methane, carbon dioxide, hydrogen) can be encaged as guests in hydrates. The ideal guest/host ratio for clathrate hydrates ranges from 0.8 to 0.9. The guest interaction with the host is limited to van der Waals forces. Certain exceptions exist in semiclathrates where guests incorporate into the host structure via hydrogen bonding with the host structure. Hydrates form often with partial guest filling and collapse in the absence of guests occupying the water cages. Like ice, clathrate hydrates are stable at low temperatures and high pressure and possess similar properties like electrical resistivity. Clathrate hydrates are naturally occurring and can be found in the permafrost and oceanic sediments. Hydrates can also be synthesized through seed crystallization or using amorphous precursors for nucleation.
Clathrates have been explored for many applications including: gas storage, gas production, gas separation, desalination, thermoelectrics, photovoltaics, and batteries.
Hydrates on Earth
Natural gas hydrates
Naturally on Earth gas hydrates can be found on the seabed, in ocean sediments, in deep lake sediments (e.g. Lake Baikal), as well as in the permafrost regions. The amount of methane potentially trapped in natural methane hydrate deposits may be significant (10¹⁵ to 10¹⁷ cubic metres), which makes them of major interest as a potential energy resource. Catastrophic release of methane from the decomposition of such deposits may lead to a global climate change, referred to as the "clathrate gun hypothesis", because CH4 is a more potent greenhouse gas than CO2 (see Atmospheric methane). The fast decomposition of such deposits is considered a geohazard, due to its potential to trigger landslides, earthquakes and tsunamis. However, natural gas hydrates do not contain only methane but also other hydrocarbon gases, as well as H2S and CO2. Air hydrates are frequently observed in polar ice samples.
Pingos are common structures in permafrost regions. Similar structures are found in deep water related to methane vents. Significantly, gas hydrates can even be formed in the absence of a liquid phase. Under that situation, water is dissolved in gas or in liquid hydrocarbon phase.
In 2017, both Japan and China announced that attempts at large-scale resource extraction of methane hydrates from under the seafloor were successful. However, commercial-scale production remains years away.
The 2020 Research Fronts report identified gas hydrate accumulation and mining technology as one of the top 10 research fronts in the geosciences.
Gas hydrates in pipelines
Thermodynamic conditions favouring hydrate formation are often found in pipelines. This is highly undesirable, because the clathrate crystals might agglomerate and plug the line and cause flow assurance failure and damage valves and instrumentation. The results can range from flow reduction to equipment damage.
Hydrate formation, prevention and mitigation philosophy
Hydrates have a strong tendency to agglomerate and to adhere to the pipe wall and thereby plug the pipeline. Once formed, they can be decomposed by increasing the temperature and/or decreasing the pressure. Even under these conditions, the clathrate dissociation is a slow process.
Therefore, preventing hydrate formation appears to be the key to the problem. A hydrate prevention philosophy could typically be based on three levels of security, listed in order of priority:
Avoid operational conditions that might cause formation of hydrates by depressing the hydrate formation temperature using glycol dehydration;
Temporarily change operating conditions in order to avoid hydrate formation;
Prevent formation of hydrates by addition of chemicals that (a) shift the hydrate equilibrium conditions towards lower temperatures and higher pressures or (b) increase hydrate formation time (inhibitors)
The actual philosophy would depend on operational circumstances such as pressure, temperature, type of flow (gas, liquid, presences of water etc.).
Hydrate inhibitors
When operating within a set of parameters where hydrates could be formed, there are still ways to avoid their formation. Altering the gas composition by adding chemicals can lower the hydrate formation temperature and/or delay their formation. Two options generally exist:
Thermodynamic inhibitors
Kinetic inhibitors and anti-agglomerants
The most common thermodynamic inhibitors are methanol, monoethylene glycol (MEG), and diethylene glycol (DEG), commonly referred to as glycol. All may be recovered and recirculated, but the economics of methanol recovery is not favourable in most cases. MEG is preferred over DEG for applications where the temperature is expected to be −10 °C or lower due to high viscosity at low temperatures. Triethylene glycol (TEG) has too low vapour pressure to be suited as an inhibitor injected into a gas stream. More methanol is lost in the gas phase when compared to MEG or DEG.
The use of kinetic inhibitors and anti-agglomerants in actual field operations is a new and evolving technology. It requires extensive tests and optimisation to the actual system. While kinetic inhibitors work by slowing down the kinetics of the nucleation, anti-agglomerants do not stop the nucleation, but stop the agglomeration (sticking together) of gas hydrate crystals. These two kinds of inhibitors are also known as low dosage hydrate inhibitors, because they require much smaller concentrations than the conventional thermodynamic inhibitors. Kinetic inhibitors, which do not require water and hydrocarbon mixture to be effective, are usually polymers or copolymers and anti-agglomerants (requires water and hydrocarbon mixture) are polymers or zwitterionic – usually ammonium and COOH – surfactants being both attracted to hydrates and hydrocarbons.
Empty clathrate hydrates
Empty clathrate hydrates are thermodynamically unstable (guest molecules are of paramount importance to stabilize these structures) with respect to ice, and as such their study using experimental techniques is greatly limited to very specific formation conditions; however, their mechanical stability renders theoretical and computer simulation methods the ideal choice to address their thermodynamic properties. Starting from very cold samples (110–145 K), Falenty et al. degassed Ne–sII clathrates for several hours using vacuum pumping to obtain a so-called ice XVI, while employing neutron diffraction to observe that (i) the empty sII hydrate structure decomposes above a threshold temperature and, furthermore, (ii) the empty hydrate shows a negative thermal expansion at low temperatures, and it is mechanically more stable and has a larger lattice constant at low temperatures than the Ne-filled analogue. The existence of such a porous ice had been theoretically predicted before. From a theoretical perspective, empty hydrates can be probed using Molecular Dynamics or Monte Carlo techniques. Conde et al. used empty hydrates and a fully atomic description of the solid lattice to estimate the phase diagram of H2O at negative pressures and low temperatures, and obtain the differences in chemical potentials between ice Ih and the empty hydrates, central to the van der Waals−Platteeuw theory. Jacobson et al. performed simulations using a monoatomic (coarse-grained) model developed for H2O that is capable of capturing the tetrahedral symmetry of hydrates. Their calculations revealed that, under 1 atm pressure, sI and sII empty hydrates are metastable regarding the ice phases up to their respective melting temperatures. Matsui et al. employed molecular dynamics to perform a thorough and systematic study of several ice polymorphs, namely space fullerene ices, zeolitic ices, and aeroices, and interpreted their relative stability in terms of geometrical considerations.
The thermodynamics of metastable empty sI clathrate hydrates have been probed over broad temperature and pressure ranges by Cruz et al. using large-scale simulations and compared with experimental data at 100 kPa. The entire p–V–T surface obtained was fitted by the universal form of the Parsafar and Mason equation of state with an accuracy of 99.7–99.9%. Framework deformation caused by applied temperature followed a parabolic law, and there is a critical temperature above which the isobaric thermal expansion becomes negative, ranging from 194.7 K at 100 kPa to 166.2 K at 500 MPa. The response to the applied (p, T) field was analysed in terms of angle and distance descriptors of a classical tetrahedral structure and observed to occur essentially by means of angular alteration for (p, T) > (200 MPa, 200 K). The length of the hydrogen bonds responsible for framework integrity was insensitive to the thermodynamic conditions, with its average value essentially constant.
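The sign change of the isobaric thermal expansion reported above can be illustrated numerically: given a parabolic fit V(T) at fixed pressure (the functional form found for the framework deformation), the coefficient α = (1/V)·dV/dT changes sign where dV/dT does. The coefficients in this sketch are invented for illustration and are not the fitted values from the study.

```python
import numpy as np

# Illustrative only: locate the temperature at which the isobaric thermal
# expansion alpha = (1/V) dV/dT changes sign for a parabolic V(T) fit.
# The coefficients below are hypothetical and are NOT the values fitted
# by Cruz et al.; they merely reproduce the qualitative behaviour.

a, b, c = -1.0e-6, 3.9e-4, 22.0        # hypothetical V(T) = a*T^2 + b*T + c

T = np.linspace(100.0, 300.0, 2001)    # temperature grid, K
V = a * T**2 + b * T + c               # molar volume along the isobar
alpha = (2.0 * a * T + b) / V          # isobaric thermal expansion

T_star = -b / (2.0 * a)                # dV/dT = 0, so alpha changes sign
print(f"alpha changes sign near T = {T_star:.1f} K")
print(f"alpha(150 K) = {alpha[T <= 150.0][-1]:.2e} 1/K (positive)")
print(f"alpha(250 K) = {alpha[T <= 250.0][-1]:.2e} 1/K (negative)")
```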
CO2 hydrate
A clathrate hydrate that encages CO2 as its guest molecule is termed CO2 hydrate. The term has become more common in recent years with its relevance to anthropogenic CO2 capture and sequestration. A nonstoichiometric compound, carbon dioxide hydrate is composed of hydrogen-bonded water molecules arranged in ice-like frameworks whose cavities are occupied by guest molecules of appropriate size. In structure I, CO2 hydrate crystallizes as one of the two cubic hydrate structures, composed of 46 H2O molecules (or D2O) and eight CO2 molecules occupying both the large (tetrakaidecahedral) and small (pentagonal dodecahedral) cavities. Researchers believe that oceans and permafrost have immense potential to capture anthropogenic CO2 in the form of CO2 hydrates. The use of additives to shift the CO2 hydrate equilibrium curve in the phase diagram towards higher temperatures and lower pressures is still under scrutiny, with the aim of making extensive large-scale storage of CO2 viable at shallower subsea depths.
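Since the unit-cell composition above fixes the ideal stoichiometry (46 H2O and eight CO2 per sI cell at full occupancy), a short arithmetic sketch gives the hydration number and the maximum CO2 mass fraction; because real hydrates are nonstoichiometric, these full-occupancy figures are upper bounds.

```python
# Ideal structure-I CO2 hydrate stoichiometry from the unit-cell
# composition quoted above: 46 H2O and 8 CO2 per cell at full occupancy.
# Real hydrates are nonstoichiometric, so these are upper bounds.

N_H2O, N_CO2 = 46, 8
M_H2O, M_CO2 = 18.015, 44.01                        # g/mol

hydration_number = N_H2O / N_CO2                    # H2O per CO2
w_co2 = N_CO2 * M_CO2 / (N_CO2 * M_CO2 + N_H2O * M_H2O)

print(f"hydration number : {hydration_number:.2f}")          # 5.75
print(f"CO2 mass fraction: {100.0 * w_co2:.1f} wt% (max)")   # ~29.8 wt%
```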
See also
Clathrate
Star formation and evolution
Clathrate gun hypothesis
References
Further reading
External links
Gas hydrates, from Leibniz Institute of Marine Sciences, Kiel (IFM-GEOMAR)
The SUGAR Project (Submarine Gas Hydrate Reservoirs), from Leibniz Institute of Marine Sciences, Kiel (IFM-GEOMAR)
Gas hydrates in video – background knowledge about gas hydrates, their prevention and removal (by a manufacturer of hydrate autoclaves)
Ice
Gases
Industrial gases
Natural gas | Clathrate hydrate | [
"Physics",
"Chemistry"
] | 3,351 | [
"Physical phenomena",
"Phase transitions",
"Matter",
"Phases of matter",
"Critical phenomena",
"Hydrates",
"Industrial gases",
"Clathrates",
"Clathrate hydrates",
"Chemical process engineering",
"Statistical mechanics",
"Gases"
] |
54,232 | https://en.wikipedia.org/wiki/Reinforced%20concrete | Reinforced concrete, also called ferroconcrete, is a composite material in which concrete's relatively low tensile strength and ductility are compensated for by the inclusion of reinforcement having higher tensile strength or ductility. The reinforcement is usually, though not necessarily, steel reinforcing bars (known as rebar) and is usually embedded passively in the concrete before the concrete sets. However, post-tensioning is also employed as a technique to reinforce the concrete. In terms of volume used annually, it is one of the most common engineering materials. In corrosion engineering terms, when designed correctly, the alkalinity of the concrete protects the steel rebar from corrosion.
Description
Reinforcing schemes are generally designed to resist tensile stresses in particular regions of the concrete that might cause unacceptable cracking and/or structural failure. Modern reinforced concrete can contain varied reinforcing materials made of steel, polymers or alternate composite material in conjunction with rebar or not. Reinforced concrete may also be permanently stressed (concrete in compression, reinforcement in tension), so as to improve the behavior of the final structure under working loads. In the United States, the most common methods of doing this are known as pre-tensioning and post-tensioning.
For a strong, ductile and durable construction the reinforcement needs to have the following properties at least:
High relative strength
High toleration of tensile strain
Good bond to the concrete, irrespective of pH, moisture, and similar factors
Thermal compatibility, not causing unacceptable stresses (such as expansion or contraction) in response to changing temperatures.
Durability in the concrete environment, irrespective of corrosion or sustained stress for example.
History
French builder François Coignet was the first to use iron-reinforced concrete as a building technique. In 1853, Coignet built the first iron-reinforced concrete structure, a four-story house at 72 rue Charles Michels in the suburbs of Paris. Coignet's descriptions of reinforcing concrete suggest that he did not do it as a means of adding strength to the concrete but to keep walls in monolithic construction from overturning. The 1872–73 Pippen Building in Brooklyn, although not designed by Coignet, stands as a testament to his technique.
In 1854, English builder William B. Wilkinson reinforced the concrete roof and floors in the two-story house he was constructing. His positioning of the reinforcement demonstrated that, unlike his predecessors, he had knowledge of tensile stresses. Between 1869 and 1870, Henry Eton designed, and Messrs W & T Phillips of London constructed, the wrought-iron-reinforced Homersfield Bridge, with a 50-foot (15.25 m) span, over the River Waveney between the English counties of Norfolk and Suffolk.
In 1877, Thaddeus Hyatt, published a report entitled An Account of Some Experiments with Portland-Cement-Concrete Combined with Iron as a Building Material, with Reference to Economy of Metal in Construction and for Security against Fire in the Making of Roofs, Floors, and Walking Surfaces, in which he reported his experiments on the behaviour of reinforced concrete. His work played a major role in the evolution of concrete construction as a proven and studied science. Without Hyatt's work, more dangerous trial and error methods might have been depended on for the advancement in the technology.
Joseph Monier, a 19th-century French gardener, was a pioneer in the development of structural, prefabricated and reinforced concrete, having been dissatisfied with the existing materials available for making durable flowerpots. He was granted a patent for reinforcing concrete flowerpots by means of mixing a wire mesh and a mortar shell. In 1877, Monier was granted another patent for a more advanced technique of reinforcing concrete columns and girders, using iron rods placed in a grid pattern. Though Monier undoubtedly knew that reinforcing concrete would improve its inner cohesion, it is not clear whether he even knew how much the tensile strength of concrete was improved by the reinforcing.
Before the 1870s, the use of concrete construction, though dating back to the Roman Empire, and having been reintroduced in the early 19th century, was not yet a proven scientific technology.
Ernest L. Ransome, an English-born engineer, was an early innovator of reinforced concrete techniques at the end of the 19th century. Using the knowledge of reinforced concrete developed during the previous 50 years, Ransome improved nearly all the styles and techniques of the earlier inventors of reinforced concrete. Ransome's key innovation was to twist the reinforcing steel bar, thereby improving its bond with the concrete. Gaining increasing fame from his concrete-constructed buildings, Ransome was able to build two of the first reinforced concrete bridges in North America. One of his bridges still stands on Shelter Island in New York's East End. One of the first concrete buildings constructed in the United States was a private home designed by William Ward, completed in 1876. The home was particularly designed to be fireproof.
G. A. Wayss was a German civil engineer and a pioneer of iron- and steel-reinforced concrete construction. In 1879, Wayss bought the German rights to Monier's patents and, in 1884, his firm, Wayss & Freytag, made the first commercial use of reinforced concrete. Up until the 1890s, Wayss and his firm greatly contributed to the advancement of Monier's system of reinforcing, establishing it as a well-developed scientific technology.
One of the first skyscrapers made with reinforced concrete was the 16-story Ingalls Building in Cincinnati, constructed in 1904.
The first reinforced concrete building in Southern California was the Laughlin Annex in downtown Los Angeles, constructed in 1905. In 1906, 16 building permits were reportedly issued for reinforced concrete buildings in the City of Los Angeles, including the Temple Auditorium and 8-story Hayward Hotel.
In 1906, a partial collapse of the Bixby Hotel in Long Beach killed 10 workers during construction when shoring was removed prematurely. That event spurred a scrutiny of concrete erection practices and building inspections. The structure was constructed of reinforced concrete frames with hollow clay tile ribbed flooring and hollow clay tile infill walls. That practice was strongly questioned by experts and recommendations for "pure" concrete construction were made, using reinforced concrete for the floors and walls as well as the frames.
In April 1904, Julia Morgan, an American architect and engineer, who pioneered the aesthetic use of reinforced concrete, completed her first reinforced concrete structure, El Campanil, a bell tower at Mills College, which is located across the bay from San Francisco. Two years later, El Campanil survived the 1906 San Francisco earthquake without any damage, which helped build her reputation and launch her prolific career. The 1906 earthquake also changed the public's initial resistance to reinforced concrete as a building material, which had been criticized for its perceived dullness. In 1908, the San Francisco Board of Supervisors changed the city's building codes to allow wider use of reinforced concrete.
In 1906, the National Association of Cement Users (NACU) published Standard No. 1 and, in 1910, the Standard Building Regulations for the Use of Reinforced Concrete.
Use in construction
Many different types of structures and components of structures can be built using reinforced concrete elements including slabs, walls, beams, columns, foundations, frames and more.
Reinforced concrete can be classified as precast or cast-in-place concrete.
Designing and implementing the most efficient floor system is key to creating optimal building structures. Small changes in the design of a floor system can have significant impact on material costs, construction schedule, ultimate strength, operating costs, occupancy levels and end use of a building.
Without reinforcement, constructing modern structures with concrete material would not be possible.
Reinforced concrete elements
When used in construction, reinforced concrete elements exhibit characteristic behavior under external loads: they may be subject to tension, compression, bending, shear, and/or torsion.
Behavior
Materials
Concrete is a mixture of coarse (stone or brick chips) and fine (generally sand and/or crushed stone) aggregates with a paste of binder material (usually Portland cement) and water. When cement is mixed with a small amount of water, it hydrates to form microscopic opaque crystal lattices encapsulating and locking the aggregate into a rigid shape. The aggregates used for making concrete should be free from harmful substances like organic impurities, silt, clay, lignite, etc. Typical concrete mixes have high resistance to compressive stresses (about ); however, any appreciable tension (e.g., due to bending) will break the microscopic rigid lattice, resulting in cracking and separation of the concrete. For this reason, typical non-reinforced concrete must be well supported to prevent the development of tension.
If a material with high strength in tension, such as steel, is placed in concrete, then the composite material, reinforced concrete, resists not only compression but also bending and other direct tensile actions. A composite section where the concrete resists compression and reinforcement "rebar" resists tension can be made into almost any shape and size for the construction industry.
Key characteristics
Three physical characteristics give reinforced concrete its special properties:
The coefficient of thermal expansion of concrete is similar to that of steel, eliminating large internal stresses due to differences in thermal expansion or contraction.
When the cement paste within the concrete hardens, this conforms to the surface details of the steel, permitting any stress to be transmitted efficiently between the different materials. Usually steel bars are roughened or corrugated to further improve the bond or cohesion between the concrete and steel.
The alkaline chemical environment provided by the alkali reserve (KOH, NaOH) and the portlandite (calcium hydroxide) contained in the hardened cement paste causes a passivating film to form on the surface of the steel, making it much more resistant to corrosion than it would be in neutral or acidic conditions. When the cement paste is exposed to air and meteoric water that react with atmospheric CO2, the portlandite and the calcium silicate hydrate (CSH) of the hardened cement paste become progressively carbonated, and the high pH gradually decreases from 13.5–12.5 to 8.5, the pH of water in equilibrium with calcite (calcium carbonate); the steel is then no longer passivated.
As a rule of thumb, only to give an idea on orders of magnitude, steel is protected at pH above ~11 but starts to corrode below ~10 depending on steel characteristics and local physico-chemical conditions when concrete becomes carbonated. Carbonation of concrete along with chloride ingress are amongst the chief reasons for the failure of reinforcement bars in concrete.
The relative cross-sectional area of steel required for typical reinforced concrete is usually quite small and varies from 1% for most beams and slabs to 6% for some columns. Reinforcing bars are normally round in cross-section and vary in diameter. Reinforced concrete structures sometimes have provisions such as ventilated hollow cores to control their moisture & humidity.
The distribution of concrete strength characteristics along the cross-section of vertical reinforced concrete elements is inhomogeneous, in spite of the reinforcement.
Mechanism of composite action of reinforcement and concrete
The reinforcement in a RC structure, such as a steel bar, has to undergo the same strain or deformation as the surrounding concrete in order to prevent discontinuity, slip or separation of the two materials under load. Maintaining composite action requires transfer of load between the concrete and steel. The direct stress is transferred from the concrete to the bar interface so as to change the tensile stress in the reinforcing bar along its length. This load transfer is achieved by means of bond (anchorage) and is idealized as a continuous stress field that develops in the vicinity of the steel-concrete interface.
The reasons that the two different material components concrete and steel can work together are as follows:
(1) Reinforcement can be well bonded to the concrete, thus they can jointly resist external loads and deform.
(2) The thermal expansion coefficients of concrete and steel are close enough that thermal stress-induced damage to the bond between the two components can be prevented, as the sketch after this list illustrates.
(3) Concrete can protect the embedded steel from corrosion and high-temperature induced softening.
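The benefit of matched expansion coefficients in point (2) can be quantified with the fully restrained upper bound σ = E·|Δα|·ΔT for the interface stress induced by a temperature swing ΔT. The material values below are typical textbook figures used only for illustration, with aluminium included as a deliberately poorly matched counter-example; creep and partial restraint make real stresses much lower.

```python
# Upper-bound interface stress from a thermal-expansion mismatch, using
# the fully restrained estimate sigma = E * |d_alpha| * dT.  Material
# values are typical textbook figures; creep and partial restraint make
# real stresses considerably lower.

E = 200e9                      # Young's modulus of steel, Pa (typical)
dT = 40.0                      # assumed seasonal temperature swing, K
alpha = {"steel": 1.2e-5, "concrete": 1.0e-5, "aluminium": 2.3e-5}  # 1/K

def mismatch_stress(mat: str) -> float:
    return E * abs(alpha[mat] - alpha["concrete"]) * dT

for mat in ("steel", "aluminium"):
    print(f"{mat:9s}/concrete: ~{mismatch_stress(mat) / 1e6:.0f} MPa")
# steel gives ~16 MPa as a worst-case bound; aluminium gives ~104 MPa.
# The close steel/concrete match is what keeps the bond intact.
```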
Anchorage (bond) in concrete: Codes of specifications
Because the actual bond stress varies along the length of a bar anchored in a zone of tension, current international codes of specifications use the concept of development length rather than bond stress. The main requirement for safety against bond failure is to provide a sufficient extension of the length of the bar beyond the point where the steel is required to develop its yield stress, and this length must be at least equal to its development length. However, if the actual available length is inadequate for full development, special anchorages must be provided, such as cogs, hooks, or mechanical end plates. The same concept applies to the lap splice length mentioned in the codes, where splices (overlapping) are provided between two adjacent bars to maintain the required continuity of stress in the splice zone.
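The development-length concept reduces to a force balance: the bar's yield force (As·fy) must be carried by bond stress acting over the bar's perimeter and embedded length, giving ld = fy·db/(4u) for an assumed uniform average bond stress u. The sketch below shows this textbook equilibrium form only; the formulas in ACI 318 and Eurocode 2 add modification and safety factors, and the bond stress assumed here is illustrative.

```python
# Textbook equilibrium sketch of development length: the bar yield force
#   A_s * f_y = (pi * d_b**2 / 4) * f_y
# must equal the bond force u * (pi * d_b) * l_d, which gives
#   l_d = f_y * d_b / (4 * u).
# This is NOT a code design equation (ACI 318 / Eurocode 2 add
# modification and safety factors); the bond stress is an assumption.

def development_length(f_y: float, d_b: float, u_bond: float) -> float:
    """Anchorage length (mm) for yield strength f_y (MPa), bar diameter
    d_b (mm), and assumed uniform bond stress u_bond (MPa)."""
    return f_y * d_b / (4.0 * u_bond)

# 16 mm bar, 500 MPa steel, assumed 2.5 MPa average bond stress:
print(f"l_d ~ {development_length(500.0, 16.0, 2.5):.0f} mm")   # 800 mm
```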
Anticorrosion measures
In wet and cold climates, reinforced concrete for roads, bridges, parking structures and other structures that may be exposed to deicing salt may benefit from use of corrosion-resistant reinforcement such as uncoated, low carbon/chromium (micro composite), epoxy-coated, hot dip galvanized or stainless steel rebar. Good design and a well-chosen concrete mix will provide additional protection for many applications.
Uncoated, low carbon/chromium rebar looks similar to standard carbon steel rebar due to its lack of a coating; its highly corrosion-resistant features are inherent in the steel microstructure. It can be identified by the unique ASTM specified mill marking on its smooth, dark charcoal finish. Epoxy-coated rebar can easily be identified by the light green color of its epoxy coating. Hot dip galvanized rebar may be bright or dull gray depending on length of exposure, and stainless rebar exhibits a typical white metallic sheen that is readily distinguishable from carbon steel reinforcing bar. Reference ASTM standard specifications A1035/A1035M Standard Specification for Deformed and Plain Low-carbon, Chromium, Steel Bars for Concrete Reinforcement, A767 Standard Specification for Hot Dip Galvanized Reinforcing Bars, A775 Standard Specification for Epoxy Coated Steel Reinforcing Bars and A955 Standard Specification for Deformed and Plain Stainless Bars for Concrete Reinforcement.
Another, cheaper way of protecting rebars is coating them with zinc phosphate. Zinc phosphate slowly reacts with calcium cations and the hydroxyl anions present in the cement pore water and forms a stable hydroxyapatite layer.
Penetrating sealants typically must be applied some time after curing. Sealants include paint, plastic foams, films and aluminum foil, felts or fabric mats sealed with tar, and layers of bentonite clay, sometimes used to seal roadbeds.
Corrosion inhibitors, such as calcium nitrite [Ca(NO2)2], can also be added to the water mix before pouring concrete. Generally, 1–2 wt. % of [Ca(NO2)2] with respect to cement weight is needed to prevent corrosion of the rebars. The nitrite anion is a mild oxidizer that oxidizes the soluble and mobile ferrous ions (Fe2+) present at the surface of the corroding steel and causes them to precipitate as an insoluble ferric hydroxide (Fe(OH)3). This causes the passivation of steel at the anodic oxidation sites. Nitrite is a much more active corrosion inhibitor than nitrate, which is a less powerful oxidizer of the divalent iron.
Reinforcement and terminology of beams
A beam bends under bending moment, resulting in a small curvature. At the outer face (tensile face) of the curvature the concrete experiences tensile stress, while at the inner face (compressive face) it experiences compressive stress.
A singly reinforced beam is one in which the concrete element is only reinforced near the tensile face and the reinforcement, called tension steel, is designed to resist the tension.
A doubly reinforced beam is a section in which, besides the tensile reinforcement, the concrete element is also reinforced near the compressive face to help the concrete resist compression and carry stresses. The latter reinforcement is called compression steel. When the compression zone of the concrete is inadequate to resist the compressive moment (positive moment), extra reinforcement has to be provided if the architect limits the dimensions of the section.
An under-reinforced beam is one in which the tension capacity of the tensile reinforcement is smaller than the combined compression capacity of the concrete and the compression steel (under-reinforced at tensile face). When the reinforced concrete element is subject to increasing bending moment, the tension steel yields while the concrete does not reach its ultimate failure condition. As the tension steel yields and stretches, an "under-reinforced" concrete also yields in a ductile manner, exhibiting a large deformation and warning before its ultimate failure. In this case the yield stress of the steel governs the design.
An over-reinforced beam is one in which the tension capacity of the tension steel is greater than the combined compression capacity of the concrete and the compression steel (over-reinforced at tensile face). So the "over-reinforced concrete" beam fails by crushing of the compressive-zone concrete and before the tension zone steel yields, which does not provide any warning before failure as the failure is instantaneous.
A balanced-reinforced beam is one in which both the compressive and tensile zones reach yielding at the same imposed load on the beam: the concrete crushes and the tensile steel yields at the same time. This design criterion is, however, as risky as over-reinforced concrete, because failure is sudden as the concrete crushes at the same time as the tensile steel yields, giving very little warning of distress in tension failure.
Steel-reinforced concrete moment-carrying elements should normally be designed to be under-reinforced so that users of the structure will receive warning of impending collapse.
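A practical way to guarantee the under-reinforced, ductile behaviour recommended above is to keep the provided steel ratio below the balanced ratio at which crushing and yielding coincide. The sketch below uses the rectangular-stress-block expression in its common SI textbook form (ultimate concrete strain 0.003, Es = 200 GPa, ACI-style β1); exact factors vary between codes, so the numbers are illustrative.

```python
# Balanced steel ratio from the rectangular stress block, in the SI form
# often given in textbooks (ACI-style factors; illustrative only):
#   rho_b = 0.85 * beta1 * (fc / fy) * 600 / (600 + fy)
# The 600 comes from assuming an ultimate concrete strain of 0.003 with
# E_s = 200 GPa.  Keeping rho < rho_b gives ductile, under-reinforced
# behaviour; exact factor values differ between design codes.

def balanced_ratio(fc: float, fy: float) -> float:
    beta1 = 0.85 if fc <= 28.0 else max(0.65, 0.85 - 0.05 * (fc - 28.0) / 7.0)
    return 0.85 * beta1 * (fc / fy) * 600.0 / (600.0 + fy)

fc, fy = 30.0, 500.0        # MPa: concrete cylinder strength, steel yield
rho = 0.012                 # provided ratio A_s / (b * d), assumed

rho_b = balanced_ratio(fc, fy)
print(f"rho_b = {rho_b:.4f}, provided rho = {rho:.4f}")
print("ductile (under-reinforced)" if rho < rho_b
      else "brittle (over-reinforced)")
```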
The characteristic strength is the strength level of a material below which no more than 5% of test specimens fall (the 5% fractile; see the sketch after these definitions).
The design strength or nominal strength is the strength of a material, including a material-safety factor. The value of the safety factor generally ranges from 0.75 to 0.85 in Permissible stress design.
The ultimate limit state is the theoretical failure point with a certain probability. It is stated under factored loads and factored resistances.
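Assuming, as codes commonly do, that test strengths are normally distributed, the 5% fractile definition reduces to fk = fm − 1.645·s, with fm the sample mean and s the standard deviation (1.645 being the one-sided 95% normal quantile). The sketch below uses invented test data; actual code procedures add margins for small sample sizes.

```python
import statistics

# Characteristic strength as the 5% fractile of a normal distribution:
#   f_k = mean - 1.645 * stdev.
# The test results below are invented; real code procedures add margins
# for small sample sizes.

cube_tests_mpa = [38.1, 41.5, 36.9, 40.2, 39.4, 37.8, 42.0, 38.7]

f_m = statistics.mean(cube_tests_mpa)
s = statistics.stdev(cube_tests_mpa)       # sample standard deviation
f_k = f_m - 1.645 * s

print(f"mean = {f_m:.1f} MPa, s = {s:.2f} MPa, f_k = {f_k:.1f} MPa")
```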
Reinforced concrete structures are normally designed according to the rules and regulations or recommendations of a code such as ACI-318, CEB, Eurocode 2 or the like. WSD, USD or LRFD methods are used in the design of RC structural members. Analysis and design of RC members can be carried out using linear or non-linear approaches. When applying safety factors, building codes normally propose linear approaches, but for some cases non-linear approaches. Examples of non-linear numerical simulation and calculation can be found in the references.
Prestressed concrete
Prestressing concrete is a technique that greatly increases the load-bearing strength of concrete beams. The reinforcing steel in the bottom part of the beam, which will be subjected to tensile forces when in service, is placed in tension before the concrete is poured around it. Once the concrete has hardened, the tension on the reinforcing steel is released, placing a built-in compressive force on the concrete. When loads are applied, the reinforcing steel takes on more stress and the compressive force in the concrete is reduced, but does not become a tensile force. Since the concrete is always under compression, it is less subject to cracking and failure.
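The mechanism described above can be checked with simple stress superposition on a rectangular section: the prestress force P at eccentricity e and the service moment M give extreme-fibre stresses σ = −P/A ∓ P·e·y/I ± M·y/I (compression negative). All dimensions and loads in this sketch are invented for illustration.

```python
# Stress superposition at the bottom fibre of a rectangular prestressed
# beam (compression negative).  The axial prestress and its eccentric
# moment both compress the bottom fibre; the service moment tensions it.
# All dimensions and loads are invented for illustration.

b, h = 0.30, 0.60            # section width and depth, m (assumed)
A = b * h                    # cross-sectional area, m^2
I = b * h**3 / 12.0          # second moment of area, m^4
y = h / 2.0                  # distance to extreme fibre, m

P = 1.0e6                    # prestress force, N (assumed)
e = 0.15                     # tendon eccentricity below centroid, m
M = 220e3                    # sagging service moment, N*m (assumed)

sigma_bottom = -P / A - P * e * y / I + M * y / I
print(f"bottom-fibre stress = {sigma_bottom / 1e6:.2f} MPa")
# Negative result: the fibre that would crack in a plain concrete beam
# remains in compression thanks to the prestress.
```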
Common failure modes of steel reinforced concrete
Reinforced concrete can fail due to inadequate strength, leading to mechanical failure, or due to a reduction in its durability. Corrosion and freeze/thaw cycles may damage poorly designed or constructed reinforced concrete. When rebar corrodes, the oxidation products (rust) expand and tend to flake, cracking the concrete and unbonding the rebar from the concrete. Typical mechanisms leading to durability problems are discussed below.
Mechanical failure
Cracking of the concrete section is nearly impossible to prevent; however, the size and location of cracks can be limited and controlled by appropriate reinforcement, control joints, curing methodology and concrete mix design. Cracking can allow moisture to penetrate and corrode the reinforcement. This is a serviceability failure in limit state design. Cracking is normally the result of an inadequate quantity of rebar, or rebar spaced at too great a distance. The concrete cracks either under excess loading, or due to internal effects such as early thermal shrinkage while it cures.
Ultimate failure leading to collapse can be caused by crushing the concrete, which occurs when compressive stresses exceed its strength, by yielding or failure of the rebar when bending or shear stresses exceed the strength of the reinforcement, or by bond failure between the concrete and the rebar.
Carbonation
Carbonation, or neutralisation, is a chemical reaction between carbon dioxide in the air and calcium hydroxide and hydrated calcium silicate in the concrete.
When a concrete structure is designed, it is usual to specify the concrete cover for the rebar (the depth of the rebar within the object). The minimum concrete cover is normally regulated by design or building codes. If the reinforcement is too close to the surface, early failure due to corrosion may occur. The concrete cover depth can be measured with a cover meter. However, carbonated concrete incurs a durability problem only when there is also sufficient moisture and oxygen to cause electropotential corrosion of the reinforcing steel.
One method of testing a structure for carbonation is to drill a fresh hole in the surface and then treat the cut surface with phenolphthalein indicator solution. This solution turns pink when in contact with alkaline concrete, making it possible to see the depth of carbonation. Using an existing hole does not suffice because the exposed surface will already be carbonated.
Chlorides
Chlorides can promote the corrosion of embedded rebar if present in sufficiently high concentration. Chloride anions induce both localized corrosion (pitting corrosion) and generalized corrosion of steel reinforcements. For this reason, only fresh raw water or potable water should be used for mixing concrete, the coarse and fine aggregates should be verified to be free of chlorides, and admixtures that might contain chlorides should be avoided.
It was once common for calcium chloride to be used as an admixture to promote rapid set-up of the concrete. It was also mistakenly believed that it would prevent freezing. However, this practice fell into disfavor once the deleterious effects of chlorides became known. It should be avoided whenever possible.
The use of de-icing salts on roadways, used to lower the freezing point of water, is probably one of the primary causes of premature failure of reinforced or prestressed concrete bridge decks, roadways, and parking garages. The use of epoxy-coated reinforcing bars and the application of cathodic protection has mitigated this problem to some extent. Also FRP (fiber-reinforced polymer) rebars are known to be less susceptible to chlorides. Properly designed concrete mixtures that have been allowed to cure properly are effectively impervious to the effects of de-icers.
Another important source of chloride ions is sea water. Sea water contains by weight approximately 3.5% salts. These salts include sodium chloride, magnesium sulfate, calcium sulfate, and bicarbonates. In water these salts dissociate into free ions (Na+, Mg2+, Ca2+, Cl−, SO42−, HCO3−) and migrate with the water into the capillaries of the concrete. Chloride ions, which make up about 50% of these ions, are particularly aggressive as a cause of corrosion of carbon steel reinforcement bars.
In the 1960s and 1970s it was also relatively common for magnesite, a chloride rich carbonate mineral, to be used as a floor-topping material. This was done principally as a levelling and sound attenuating layer. However it is now known that when these materials come into contact with moisture they produce a weak solution of hydrochloric acid due to the presence of chlorides in the magnesite. Over a period of time (typically decades), the solution causes corrosion of the embedded rebars. This was most commonly found in wet areas or areas repeatedly exposed to moisture.
Alkali silica reaction
This is a reaction of amorphous silica (chalcedony, chert, siliceous limestone) sometimes present in the aggregates with the hydroxyl ions (OH−) from the cement pore solution. Poorly crystallized silica (SiO2) dissolves and dissociates at high pH (12.5–13.5) in alkaline water. The soluble dissociated silicic acid reacts in the porewater with the calcium hydroxide (portlandite) present in the cement paste to form an expansive calcium silicate hydrate (CSH). The alkali–silica reaction (ASR) causes localised swelling responsible for tensile stress and cracking. The conditions required for alkali–silica reaction are threefold:
(1) aggregate containing an alkali-reactive constituent (amorphous silica), (2) sufficient availability of hydroxyl ions (OH−), and (3) sufficient moisture, above 75% relative humidity (RH) within the concrete. This phenomenon is sometimes popularly referred to as "concrete cancer". This reaction occurs independently of the presence of rebars; massive concrete structures such as dams can be affected.
Conversion of high alumina cement
Resistant to weak acids and especially sulfates, this cement cures quickly and has very high durability and strength. It was frequently used after World War II to make precast concrete objects. However, it can lose strength with heat or time (conversion), especially when not properly cured. After the collapse of three roofs made of prestressed concrete beams using high alumina cement, this cement was banned in the UK in 1976. Subsequent inquiries into the matter showed that the beams were improperly manufactured, but the ban remained.
Sulfates
Sulfates (SO4) in the soil or in groundwater, in sufficient concentration, can react with the Portland cement in concrete causing the formation of expansive products, e.g., ettringite or thaumasite, which can lead to early failure of the structure. The most typical attack of this type is on concrete slabs and foundation walls at grades where the sulfate ion, via alternate wetting and drying, can increase in concentration. As the concentration increases, the attack on the Portland cement can begin. For buried structures such as pipe, this type of attack is much rarer, especially in the eastern United States. The sulfate ion concentration increases much slower in the soil mass and is especially dependent upon the initial amount of sulfates in the native soil. A chemical analysis of soil borings to check for the presence of sulfates should be undertaken during the design phase of any project involving concrete in contact with the native soil. If the concentrations are found to be aggressive, various protective coatings can be applied. Also, in the US ASTM C150 Type 5 Portland cement can be used in the mix. This type of cement is designed to be particularly resistant to a sulfate attack.
Steel plate construction
In steel plate construction, stringers join parallel steel plates. The plate assemblies are fabricated off site, and welded together on-site to form steel walls connected by stringers. The walls become the form into which concrete is poured. Steel plate construction speeds reinforced concrete construction by cutting out the time-consuming on-site manual steps of tying rebar and building forms. The method results in excellent strength because the steel is on the outside, where tensile forces are often greatest.
Fiber-reinforced concrete
Fiber reinforcement is mainly used in shotcrete, but can also be used in normal concrete. Fiber-reinforced normal concrete is mostly used for on-ground floors and pavements, but can also be considered for a wide range of construction parts (beams, pillars, foundations, etc.), either alone or with hand-tied rebars.
Concrete reinforced with fibers (which are usually steel, glass, plastic fibers) or cellulose polymer fiber is less expensive than hand-tied rebar. The shape, dimension, and length of the fiber are important. A thin and short fiber, for example short, hair-shaped glass fiber, is only effective during the first hours after pouring the concrete (its function is to reduce cracking while the concrete is stiffening), but it will not increase the concrete tensile strength. A normal-size fiber for European shotcrete (1 mm diameter, 45 mm length—steel or plastic) will increase the concrete's tensile strength. Fiber reinforcement is most often used to supplement or partially replace primary rebar, and in some cases it can be designed to fully replace rebar.
Steel is the strongest commonly available fiber, and comes in different lengths (30 to 80 mm in Europe) and shapes (end-hooks). Steel fibers can only be used on surfaces that can tolerate or avoid corrosion and rust stains. In some cases, a steel-fiber surface is faced with other materials.
Glass fiber is inexpensive and corrosion-proof, but not as ductile as steel. Recently, spun basalt fiber, long available in Eastern Europe, has become available in the U.S. and Western Europe. Basalt fiber is stronger and less expensive than glass, but historically has not resisted the alkaline environment of Portland cement well enough to be used as direct reinforcement. New materials use plastic binders to isolate the basalt fiber from the cement.
The premium fibers are graphite-reinforced plastic fibers, which are nearly as strong as steel, lighter in weight, and corrosion-proof. Some experiments have had promising early results with carbon nanotubes, but the material is still far too expensive for any building.
Non-steel reinforcement
There is considerable overlap between the subjects of non-steel reinforcement and fiber-reinforcement of concrete. The introduction of non-steel reinforcement of concrete is relatively recent; it takes two major forms: non-metallic rebar rods, and non-steel (usually also non-metallic) fibers incorporated into the cement matrix. For example, there is increasing interest in glass fiber reinforced concrete (GFRC) and in various applications of polymer fibers incorporated into concrete. Although currently there is not much suggestion that such materials will replace metal rebar, some of them have major advantages in specific applications, and there also are new applications in which metal rebar simply is not an option. However, the design and application of non-steel reinforcing is fraught with challenges. For one thing, concrete is a highly alkaline environment, in which many materials, including most kinds of glass, have a poor service life. Also, the behavior of such reinforcing materials differs from the behavior of metals, for instance in terms of shear strength, creep and elasticity.
Fiber-reinforced plastic/polymer (FRP) and glass-reinforced plastic (GRP) consist of fibers of polymer, glass, carbon, aramid or other polymers or high-strength fibers set in a resin matrix to form a rebar rod, or grid, or fiber. These rebars are installed in much the same manner as steel rebars. The cost is higher but, suitably applied, the structures have advantages, in particular a dramatic reduction in problems related to corrosion, either by intrinsic concrete alkalinity or by external corrosive fluids that might penetrate the concrete. These structures can be significantly lighter and usually have a longer service life. The cost of these materials has dropped dramatically since their widespread adoption in the aerospace industry and by the military.
In particular, FRP rods are useful for structures where the presence of steel would not be acceptable. For example, MRI machines have huge magnets, and accordingly require non-magnetic buildings. Again, toll booths that read radio tags need reinforced concrete that is transparent to radio waves. Also, where the design life of the concrete structure is more important than its initial costs, non-steel reinforcing often has its advantages where corrosion of reinforcing steel is a major cause of failure. In such situations corrosion-proof reinforcing can extend a structure's life substantially, for example in the intertidal zone. FRP rods may also be useful in situations where it is likely that the concrete structure may be compromised in future years, for example the edges of balconies when balustrades are replaced, and bathroom floors in multi-story construction where the service life of the floor structure is likely to be many times the service life of the waterproofing building membrane.
Plastic reinforcement often is stronger, or at least has a better strength to weight ratio than reinforcing steels. Also, because it resists corrosion, it does not need a protective concrete cover as thick as steel reinforcement does (typically 30 to 50 mm or more). FRP-reinforced structures therefore can be lighter and last longer. Accordingly, for some applications the whole-life cost will be price-competitive with steel-reinforced concrete.
The material properties of FRP or GRP bars differ markedly from steel, so there are differences in the design considerations. FRP or GRP bars have relatively higher tensile strength but lower stiffness, so that deflections are likely to be higher than for equivalent steel-reinforced units. Structures with internal FRP reinforcement typically have an elastic deformability comparable to the plastic deformability (ductility) of steel reinforced structures. Failure in either case is more likely to occur by compression of the concrete than by rupture of the reinforcement. Deflection is always a major design consideration for reinforced concrete. Deflection limits are set to ensure that crack widths in steel-reinforced concrete are controlled to prevent water, air or other aggressive substances reaching the steel and causing corrosion. For FRP-reinforced concrete, aesthetics and possibly water-tightness will be the limiting criteria for crack width control. FRP rods also have relatively lower compressive strengths than steel rebar, and accordingly require different design approaches for reinforced concrete columns.
One drawback to the use of FRP reinforcement is their limited fire resistance. Where fire safety is a consideration, structures employing FRP have to maintain their strength and the anchoring of the forces at temperatures to be expected in the event of fire. For purposes of fireproofing, an adequate thickness of cement concrete cover or protective cladding is necessary. The addition of 1 kg/m3 of polypropylene fibers to concrete has been shown to reduce spalling during a simulated fire. (The improvement is thought to be due to the formation of pathways out of the bulk of the concrete, allowing steam pressure to dissipate.)
Another problem is the effectiveness of shear reinforcement. FRP rebar stirrups formed by bending before hardening generally perform relatively poorly in comparison to steel stirrups or to structures with straight fibers. When strained, the zone between the straight and curved regions are subject to strong bending, shear, and longitudinal stresses. Special design techniques are necessary to deal with such problems.
There is growing interest in applying external reinforcement to existing structures using advanced materials such as composite (fiberglass, basalt, carbon) rebar, which can impart exceptional strength. Worldwide, there are a number of brands of composite rebar recognized by different countries, such as Aslan, DACOT, V-rod, and ComBar. The number of projects using composite rebar increases day by day around the world, in countries ranging from USA, Russia, and South Korea to Germany.
See also
Anchorage in reinforced concrete
Concrete cover
Concrete slab
Corrosion engineering
Cover meter
Falsework
Ferrocement
Formwork
Henri de Miffonis
Interfacial transition zone
Precast concrete
Reinforced concrete structures durability
Reinforced solid
Structural robustness
Types of concrete
References
Further reading / External links
Threlfall A., et al. Reynolds's Reinforced Concrete Designer's Handbook, 11th ed.
Newby F., Early Reinforced Concrete, Ashgate Variorum, 2001.
Kim, S., Surek, J and J. Baker-Jarvis. "Electromagnetic Metrology on Concrete and Corrosion." Journal of Research of the National Institute of Standards and Technology, Vol. 116, No. 3 (May–June 2011): 655–669.
Daniel R., Formwork UK "Concrete frame structures.".
Short documentary about reinforced concrete and its challenges, 2024 (The Aesthetic City)
Concrete buildings and structures
Structural engineering
Materials science
Civil engineering | Reinforced concrete | [
"Physics",
"Materials_science",
"Engineering"
] | 7,562 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Materials science",
"Construction",
"Civil engineering",
"nan"
] |
54,423 | https://en.wikipedia.org/wiki/Phase%20transition | In physics, chemistry, and other related fields like biology, a phase transition (or phase change) is the physical process of transition between one state of a medium and another. Commonly the term is used to refer to changes among the basic states of matter: solid, liquid, and gas, and in rare cases, plasma. A phase of a thermodynamic system and the states of matter have uniform physical properties. During a phase transition of a given medium, certain properties of the medium change as a result of the change of external conditions, such as temperature or pressure. This can be a discontinuous change; for example, a liquid may become gas upon heating to its boiling point, resulting in an abrupt change in volume. The identification of the external conditions at which a transformation occurs defines the phase transition point.
Types of phase transition
States of matter
Phase transitions commonly refer to when a substance transforms between one of the four states of matter to another. At the phase transition point for a substance, for instance the boiling point, the two phases involved - liquid and vapor, have identical free energies and therefore are equally likely to exist. Below the boiling point, the liquid is the more stable state of the two, whereas above the boiling point the gaseous form is the more stable.
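Because the two phases' free energies are equal along the coexistence line, the line's slope follows as the Clausius–Clapeyron relation dP/dT = L/(T·Δv). A minimal sketch, using handbook-level values for water and the ideal-gas approximation for the vapour, estimates how the boiling point shifts with pressure.

```python
import math

# Clausius-Clapeyron sketch: along the liquid-gas line dP/dT = L/(T dv).
# Treating the vapour as an ideal gas and neglecting the liquid volume,
# integration gives ln(P2/P1) = -(L/R) * (1/T2 - 1/T1).
# Values are handbook-level approximations for water.

L = 40.7e3                   # molar latent heat of vaporisation, J/mol
R = 8.314                    # gas constant, J/(mol K)
T1, P1 = 373.15, 101.325e3   # normal boiling point and pressure of water

def boiling_temperature(P2: float) -> float:
    """Boiling temperature (K) at pressure P2 (Pa)."""
    return 1.0 / (1.0 / T1 - (R / L) * math.log(P2 / P1))

# At 0.7 atm (roughly 3 km altitude) water boils near 90 C:
print(f"T_boil(0.7 atm) ~ {boiling_temperature(0.7 * P1) - 273.15:.1f} C")
```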
Common transitions between the solid, liquid, and gaseous phases of a single component, due to the effects of temperature and/or pressure, include melting and freezing (solid–liquid), vaporization and condensation (liquid–gas), and sublimation and deposition (solid–gas).
For a single component, the most stable phase at different temperatures and pressures can be shown on a phase diagram. Such a diagram usually depicts states in equilibrium. A phase transition usually occurs when the pressure or temperature changes and the system crosses from one region to another, like water turning from liquid to solid as soon as the temperature drops below the freezing point. As an exception to the usual case, it is sometimes possible to change the state of a system diabatically (as opposed to adiabatically) in such a way that it can be brought past a phase transition point without undergoing a phase transition. The resulting state is metastable, i.e., less stable than the phase to which the transition would have occurred, but not unstable either. This occurs in superheating and supercooling, for example. Metastable states do not appear on usual phase diagrams.
Structural
Phase transitions can also occur when a solid changes to a different structure without changing its chemical makeup. In elements, this is known as allotropy, whereas in compounds it is known as polymorphism. The change from one crystal structure to another, from a crystalline solid to an amorphous solid, or from one amorphous structure to another are all examples of solid-to-solid phase transitions.
The martensitic transformation occurs as one of the many phase transformations in carbon steel and stands as a model for displacive phase transformations. Order–disorder transitions, such as in alpha-titanium aluminides, are another example. As with states of matter, there is also a metastable-to-equilibrium phase transformation for structural phase transitions. A metastable polymorph which forms rapidly due to lower surface energy will transform to an equilibrium phase given sufficient thermal input to overcome an energetic barrier.
Magnetic
Phase transitions can also describe the change between different kinds of magnetic ordering. The most well-known is the transition between the ferromagnetic and paramagnetic phases of magnetic materials, which occurs at what is called the Curie point. Another example is the transition between differently ordered, commensurate or incommensurate, magnetic structures, such as in cerium antimonide. A simplified but highly useful model of magnetic phase transitions is provided by the Ising model.
Mixtures
Phase transitions involving solutions and mixtures are more complicated than transitions involving a single compound. While chemically pure compounds exhibit a single temperature melting point between solid and liquid phases, mixtures can either have a single melting point, known as congruent melting, or they have different liquidus and solidus temperatures resulting in a temperature span where solid and liquid coexist in equilibrium. This is often the case in solid solutions, where the two components are isostructural.
There are also a number of phase transitions involving three phases: a eutectic transformation, in which a two-component single-phase liquid is cooled and transforms into two solid phases. The same process, but beginning with a solid instead of a liquid, is called a eutectoid transformation. A peritectic transformation, in which a two-component single-phase solid is heated and transforms into a solid phase and a liquid phase. A peritectoid reaction is a peritectic reaction, except involving only solid phases. A monotectic reaction consists of a change from a liquid to a combination of a solid and a second liquid, where the two liquids display a miscibility gap.
Separation into multiple phases can occur via spinodal decomposition, in which a single phase is cooled and separates into two different compositions.
Non-equilibrium mixtures can occur, such as in supersaturation.
Other examples
Other phase changes include:
Transition to a mesophase between solid and liquid, such as one of the "liquid crystal" phases.
The dependence of the adsorption geometry on coverage and temperature, such as for hydrogen on iron (110).
The emergence of superconductivity in certain metals and ceramics when cooled below a critical temperature.
The emergence of metamaterial properties in artificial photonic media as their parameters are varied (Zhou, W., and Fan, S., eds., Semiconductors and Semimetals, Vol. 100: Photonic Crystal Metasurface Optoelectronics, Elsevier, 2019).
Quantum condensation of bosonic fluids (Bose–Einstein condensation). The superfluid transition in liquid helium is an example of this.
The breaking of symmetries in the laws of physics during the early history of the universe as its temperature cooled.
Isotope fractionation occurs during a phase transition: the ratio of light to heavy isotopes in the involved molecules changes. When water vapor condenses (an equilibrium fractionation), the heavier water isotopes (18O and 2H) become enriched in the liquid phase while the lighter isotopes (16O and 1H) tend toward the vapor phase.
Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables (cf. phases). This condition generally stems from the interactions of a large number of particles in a system, and does not appear in systems that are small. Phase transitions can occur for non-thermodynamic systems, where temperature is not a parameter. Examples include: quantum phase transitions, dynamic phase transitions, and topological (structural) phase transitions. In these types of systems other parameters take the place of temperature. For instance, connection probability replaces temperature for percolating networks.
Classifications
Ehrenfest classification
Paul Ehrenfest classified phase transitions based on the behavior of the thermodynamic free energy as a function of other thermodynamic variables. Under this scheme, phase transitions were labeled by the lowest derivative of the free energy that is discontinuous at the transition. First-order phase transitions exhibit a discontinuity in the first derivative of the free energy with respect to some thermodynamic variable. The various solid/liquid/gas transitions are classified as first-order transitions because they involve a discontinuous change in density, which is the (inverse of the) first derivative of the free energy with respect to pressure. Second-order phase transitions are continuous in the first derivative (the order parameter, which is the first derivative of the free energy with respect to the external field, is continuous across the transition) but exhibit a discontinuity in a second derivative of the free energy. These include the ferromagnetic phase transition in materials such as iron, where the magnetization, which is the first derivative of the free energy with respect to the applied magnetic field strength, increases continuously from zero as the temperature is lowered below the Curie temperature. The magnetic susceptibility, the second derivative of the free energy with respect to the field, changes discontinuously. Under the Ehrenfest classification scheme, there could in principle be third, fourth, and higher-order phase transitions. For example, the Gross–Witten–Wadia phase transition in 2-d lattice quantum chromodynamics is a third-order phase transition. The Curie point of many ferromagnets is also a third-order transition, as shown by their specific heat having a sudden change in slope.
In practice, only first- and second-order phase transitions are typically observed. The second-order phase transition was for a while controversial, as it seems to require two sheets of the Gibbs free energy to osculate exactly, which is so unlikely as to never occur in practice. Cornelis Gorter replied to the criticism by pointing out that the Gibbs free energy surface might have two sheets on one side, but only one sheet on the other side, creating a forked appearance.
The Ehrenfest classification implicitly allows for continuous phase transformations, where the bonding character of a material changes, but there is no discontinuity in any free energy derivative. An example of this occurs at the supercritical liquid–gas boundaries.
The first example of a phase transition which did not fit into the Ehrenfest classification was the exact solution of the Ising model, discovered in 1944 by Lars Onsager. The exact specific heat differed from the earlier mean-field approximations, which had predicted that it has a simple discontinuity at critical temperature. Instead, the exact specific heat had a logarithmic divergence at the critical temperature. In the following decades, the Ehrenfest classification was replaced by a simplified classification scheme that is able to incorporate such transitions.
Modern classifications
In the modern classification scheme, phase transitions are divided into two broad categories, named similarly to the Ehrenfest classes:
First-order phase transitions are those that involve a latent heat. During such a transition, a system either absorbs or releases a fixed (and typically large) amount of energy per volume. During this process, the temperature of the system will stay constant as heat is added: the system is in a "mixed-phase regime" in which some parts of the system have completed the transition and others have not (Faghri, A., and Zhang, Y., Fundamentals of Multiphase Heat Transfer and Flow, Springer, New York, NY, 2020).
Familiar examples are the melting of ice or the boiling of water (the water does not instantly turn into vapor, but forms a turbulent mixture of liquid water and vapor bubbles). Yoseph Imry and Michael Wortis showed that quenched disorder can broaden a first-order transition. That is, the transformation is completed over a finite range of temperatures, but phenomena like supercooling and superheating survive and hysteresis is observed on thermal cycling.
Second-order phase transitions are also called "continuous phase transitions". They are characterized by a divergent susceptibility, an infinite correlation length, and a power-law decay of correlations near criticality. Examples of second-order phase transitions are the ferromagnetic transition, the superconducting transition (for a Type-I superconductor the phase transition is second-order at zero external field, and for a Type-II superconductor the phase transition is second-order for both normal-state–mixed-state and mixed-state–superconducting-state transitions) and the superfluid transition. In contrast to viscosity, the thermal expansion and heat capacity of amorphous materials show a relatively sudden change at the glass transition temperature, which enables accurate detection using differential scanning calorimetry measurements. Lev Landau gave a phenomenological theory of second-order phase transitions.
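Landau's picture can be made concrete with the minimal free energy F(m) = a(T − Tc)·m² + b·m⁴ for an order parameter m: minimisation gives m = 0 above Tc and m = ±√(a(Tc − T)/2b) below it, continuous at the transition with the mean-field exponent β = 1/2. A short sketch with arbitrary coefficients:

```python
import math

# Landau free energy for a second-order transition:
#   F(m) = a * (T - Tc) * m**2 + b * m**4,   with a, b > 0.
# Setting dF/dm = 0 gives m = 0 for T >= Tc and
# m = sqrt(a * (Tc - T) / (2 * b)) below Tc: the order parameter is
# continuous at Tc but rises with the mean-field exponent beta = 1/2.
# The coefficients are arbitrary illustrative choices.

a, b, Tc = 1.0, 1.0, 100.0

def order_parameter(T: float) -> float:
    return 0.0 if T >= Tc else math.sqrt(a * (Tc - T) / (2.0 * b))

for T in (105.0, 100.0, 99.0, 96.0, 75.0):
    print(f"T = {T:5.1f}: m = {order_parameter(T):.3f}")
```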
Apart from isolated, simple phase transitions, there exist transition lines as well as multicritical points, when varying external parameters like the magnetic field or composition.
Several transitions are known as infinite-order phase transitions.
They are continuous but break no symmetries. The most famous example is the Kosterlitz–Thouless transition in the two-dimensional XY model. Many quantum phase transitions, e.g., in two-dimensional electron gases, belong to this class.
The liquid–glass transition is observed in many polymers and other liquids that can be supercooled far below the melting point of the crystalline phase. This is atypical in several respects. It is not a transition between thermodynamic ground states: it is widely believed that the true ground state is always crystalline. Glass is a quenched disorder state, and its entropy, density, and so on, depend on the thermal history. Therefore, the glass transition is primarily a dynamic phenomenon: on cooling a liquid, internal degrees of freedom successively fall out of equilibrium. Some theoretical methods predict an underlying phase transition in the hypothetical limit of infinitely long relaxation times. No direct experimental evidence supports the existence of these transitions.
Characteristic properties
Phase coexistence
A disorder-broadened first-order transition occurs over a finite range of temperatures where the fraction of the low-temperature equilibrium phase grows from zero to one (100%) as the temperature is lowered. This continuous variation of the coexisting fractions with temperature raised interesting possibilities. On cooling, some liquids vitrify into a glass rather than transform to the equilibrium crystal phase. This happens if the cooling rate is faster than a critical cooling rate, and is attributed to the molecular motions becoming so slow that the molecules cannot rearrange into the crystal positions. This slowing down happens below a glass-formation temperature Tg, which may depend on the applied pressure. If the first-order freezing transition occurs over a range of temperatures, and Tg falls within this range, then there is an interesting possibility that the transition is arrested when it is partial and incomplete. Extending these ideas to first-order magnetic transitions being arrested at low temperatures, resulted in the observation of incomplete magnetic transitions, with two magnetic phases coexisting, down to the lowest temperature. First reported in the case of a ferromagnetic to anti-ferromagnetic transition, such persistent phase coexistence has now been reported across a variety of first-order magnetic transitions. These include colossal-magnetoresistance manganite materials, magnetocaloric materials, magnetic shape memory materials, and other materials.
The interesting feature of these observations of Tg falling within the temperature range over which the transition occurs is that the first-order magnetic transition is influenced by magnetic field, just like the structural transition is influenced by pressure. The relative ease with which magnetic fields can be controlled, in contrast to pressure, raises the possibility that one can study the interplay between Tg and Tc in an exhaustive way. Phase coexistence across first-order magnetic transitions will then enable the resolution of outstanding issues in understanding glasses.
Critical points
In any system containing liquid and gaseous phases, there exists a special combination of pressure and temperature, known as the critical point, at which the transition between liquid and gas becomes a second-order transition. Near the critical point, the fluid is sufficiently hot and compressed that the distinction between the liquid and gaseous phases is almost non-existent. This is associated with the phenomenon of critical opalescence, a milky appearance of the liquid due to density fluctuations at all possible wavelengths (including those of visible light).
Symmetry
Phase transitions often involve a symmetry breaking process. For instance, the cooling of a fluid into a crystalline solid breaks continuous translation symmetry: each point in the fluid has the same properties, but each point in a crystal does not have the same properties (unless the points are chosen from the lattice points of the crystal lattice). Typically, the high-temperature phase contains more symmetries than the low-temperature phase due to spontaneous symmetry breaking, with the exception of certain accidental symmetries (e.g. the formation of heavy virtual particles, which only occurs at low temperatures).
Order parameters
An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. At the critical point, the order parameter susceptibility will usually diverge.
An example of an order parameter is the net magnetization in a ferromagnetic system undergoing a phase transition. For liquid/gas transitions, the order parameter is the difference of the densities.
From a theoretical perspective, order parameters arise from symmetry breaking. When this happens, one needs to introduce one or more extra variables to describe the state of the system. For example, in the ferromagnetic phase, one must provide the net magnetization, whose direction was spontaneously chosen when the system cooled below the Curie point. However, note that order parameters can also be defined for non-symmetry-breaking transitions.
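As a concrete illustration (not from the original article), the mean-field magnetization of a ferromagnet can be obtained from the Curie–Weiss self-consistency condition m = tanh(Tc m / T). The minimal Python sketch below, with illustrative parameter values, shows the order parameter vanishing above Tc and becoming nonzero below it:

```python
import numpy as np

def mean_field_magnetization(T, Tc=1.0, tol=1e-12, max_iter=10000):
    """Solve the Curie-Weiss self-consistency m = tanh(Tc*m/T) by fixed-point
    iteration, starting from a small positive seed so that the
    broken-symmetry branch is selected below Tc."""
    m = 0.1
    for _ in range(max_iter):
        m_new = np.tanh(Tc * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

for T in (0.5, 0.9, 0.99, 1.01, 1.5):
    print(f"T/Tc = {T:4.2f}  ->  m = {mean_field_magnetization(T):.4f}")
# m is nonzero below Tc (ordered phase) and iterates to ~0 above Tc.
```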
Some phase transitions, such as superconducting and ferromagnetic, can have order parameters for more than one degree of freedom. In such phases, the order parameter may take the form of a complex number, a vector, or even a tensor, the magnitude of which goes to zero at the phase transition.
There also exist dual descriptions of phase transitions in terms of disorder parameters. These indicate the presence of line-like excitations such as vortex- or defect lines.
Relevance in cosmology
Symmetry-breaking phase transitions play an important role in cosmology. As the universe expanded and cooled, the vacuum underwent a series of symmetry-breaking phase transitions. For example, the electroweak transition broke the SU(2)×U(1) symmetry of the electroweak field into the U(1) symmetry of the present-day electromagnetic field. This transition is important to explain the asymmetry between the amount of matter and antimatter in the present-day universe, according to electroweak baryogenesis theory.
Progressive phase transitions in an expanding universe are implicated in the development of order in the universe, as is illustrated by the work of Eric Chaisson and David Layzer.
See also relational order theories and order and disorder.
Critical exponents and universality classes
Continuous phase transitions are easier to study than first-order transitions due to the absence of latent heat, and they have been discovered to have many interesting properties. The phenomena associated with continuous phase transitions are called critical phenomena, due to their association with critical points.
Continuous phase transitions can be characterized by parameters known as critical exponents. The most important one is perhaps the exponent describing the divergence of the thermal correlation length on approaching the transition. For instance, let us examine the behavior of the heat capacity near such a transition. We vary the temperature T of the system while keeping all the other thermodynamic variables fixed and find that the transition occurs at some critical temperature Tc. When T is near Tc, the heat capacity C typically has a power-law behavior
C ∝ |T − Tc|^(−α).
The heat capacity of amorphous materials has such a behaviour near the glass transition temperature, where the universal critical exponent α = 0.59. A similar behavior, but with the exponent ν instead of α, applies for the correlation length.
The exponent ν is positive. This is different from α, whose actual value and sign depend on the type of phase transition under consideration.
The critical exponents are not necessarily the same above and below the critical temperature. When a continuous symmetry is explicitly broken down to a discrete symmetry by irrelevant (in the renormalization group sense) anisotropies, then some exponents (such as γ, the exponent of the susceptibility) are not identical.
For −1 < α < 0, the heat capacity has a "kink" at the transition temperature. This is the behavior of liquid helium at the lambda transition from a normal state to the superfluid state, for which experiments have found α = −0.013 ± 0.003.
At least one experiment was performed in the zero-gravity conditions of an orbiting satellite to minimize pressure differences in the sample. This experimental value of α agrees with theoretical predictions based on variational perturbation theory.
For 0 < α < 1, the heat capacity diverges at the transition temperature (though, since α < 1, the enthalpy stays finite). An example of such behavior is the 3D ferromagnetic phase transition. In the three-dimensional Ising model for uniaxial magnets, detailed theoretical studies have yielded the exponent α ≈ +0.110.
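A minimal numerical sketch (illustrative, with prefactors and regular background terms omitted) contrasting these two cases, the cusp-like helium exponent and the divergent 3D Ising exponent:

```python
import numpy as np

t = np.array([1e-1, 1e-2, 1e-3, 1e-4])   # reduced distance |1 - T/Tc|

for alpha, label in [(-0.013, "helium lambda point (alpha < 0)"),
                     (+0.110, "3D Ising ferromagnet (alpha > 0)")]:
    C = t ** (-alpha)  # singular part of C ~ |t|^(-alpha), prefactors omitted
    print(f"{label}: {np.round(C, 4)}")
# For alpha < 0 the singular part shrinks as T -> Tc, leaving a finite kink
# in C; for alpha > 0 it grows without bound (slowly, since alpha is small).
```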
Some model systems do not obey a power-law behavior. For example, mean field theory predicts a finite discontinuity of the heat capacity at the transition temperature, and the two-dimensional Ising model has a logarithmic divergence. However, these systems are limiting cases and an exception to the rule. Real phase transitions exhibit power-law behavior.
Several other critical exponents, β, γ, δ, ν, and η, are defined by examining the power-law behavior of a measurable physical quantity near the phase transition. Exponents are related by scaling relations, such as β = γ/(δ − 1) and ν = γ/(2 − η).
It can be shown that there are only two independent exponents, e.g. ν and η.
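A quick numerical check of such scaling relations, using commonly quoted approximate values for the 3D Ising universality class (the specific digits are literature estimates, assumed here for illustration only):

```python
# Approximate 3D Ising universality-class exponents (literature estimates).
alpha, beta, gamma, delta, nu, eta = 0.110, 0.326, 1.237, 4.790, 0.630, 0.036

print("Rushbrooke: alpha + 2*beta + gamma =", alpha + 2 * beta + gamma)  # ~2
print("Widom:      beta*(delta - 1) =", beta * (delta - 1), "vs gamma =", gamma)
print("Fisher:     nu*(2 - eta)     =", nu * (2 - eta), "vs gamma =", gamma)
# Only two exponents are independent; the others follow from such relations.
```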
It is a remarkable fact that phase transitions arising in different systems often possess the same set of critical exponents. This phenomenon is known as universality. For example, the critical exponents at the liquid–gas critical point have been found to be independent of the chemical composition of the fluid.
More impressively, but understandably from above, they are an exact match for the critical exponents of the ferromagnetic phase transition in uniaxial magnets. Such systems are said to be in the same universality class. Universality is a prediction of the renormalization group theory of phase transitions, which states that the thermodynamic properties of a system near a phase transition depend only on a small number of features, such as dimensionality and symmetry, and are insensitive to the underlying microscopic properties of the system. Again, the divergence of the correlation length is the essential point.
Critical phenomena
There are also other critical phenomena; e.g., besides static functions there is also critical dynamics. As a consequence, at a phase transition one may observe critical slowing down or speeding up. Connected to the previous phenomenon is also the phenomenon of enhanced fluctuations before the phase transition, as a consequence of lower degree of stability of the initial phase of the system. The large static universality classes of a continuous phase transition split into smaller dynamic universality classes. In addition to the critical exponents, there are also universal relations for certain static or dynamic functions of the magnetic fields and temperature differences from the critical value.
Phase transitions in biological systems
Phase transitions play many important roles in biological systems. Examples include the lipid bilayer formation, the coil-globule transition in the process of protein folding and DNA melting, liquid crystal-like transitions in the process of DNA condensation, and cooperative ligand binding to DNA and proteins with the character of phase transition.
In biological membranes, gel to liquid crystalline phase transitions play a critical role in the physiological functioning of biomembranes. In the gel phase, due to low fluidity of membrane lipid fatty-acyl chains, membrane proteins have restricted movement and thus are restrained in the exercise of their physiological role. Plants depend critically on photosynthesis by chloroplast thylakoid membranes, which are exposed to cold environmental temperatures. Thylakoid membranes retain innate fluidity even at relatively low temperatures because of the high degree of fatty-acyl disorder allowed by their high content of linolenic acid, an 18-carbon chain with 3 double bonds. The gel-to-liquid crystalline phase transition temperature of biological membranes can be determined by many techniques, including calorimetry, fluorescence, spin label electron paramagnetic resonance and NMR, by recording measurements of the concerned parameter at a series of sample temperatures. A simple method for its determination from 13-C NMR line intensities has also been proposed.
It has been proposed that some biological systems might lie near critical points. Examples include neural networks in the salamander retina, bird flocks, gene expression networks in Drosophila, and protein folding. However, it is not clear whether or not alternative reasons could explain some of the phenomena supporting arguments for criticality. It has also been suggested that biological organisms share two key properties of phase transitions: the change of macroscopic behavior and the coherence of a system at a critical point. Phase transitions are a prominent feature of motor behavior in biological systems. Spontaneous gait transitions, as well as fatigue-induced motor task disengagements, show typical critical behavior as an intimation of the sudden qualitative change of the previously stable motor behavioral pattern.
The characteristic feature of second-order phase transitions is the appearance of fractals in some scale-free properties. It has long been known that protein globules are shaped by interactions with water. The 20 amino acids that form side groups on protein peptide chains range from hydrophilic to hydrophobic, causing the former to lie near the globular surface, while the latter lie closer to the globular center. Twenty fractals were discovered in solvent-associated surface areas of more than 5000 protein segments. The existence of these fractals proves that proteins function near critical points of second-order phase transitions.
In groups of organisms in stress (when approaching critical transitions), correlations tend to increase, while at the same time, fluctuations also increase. This effect is supported by many experiments and observations of groups of people, mice, trees, and grassy plants.
Phase transitions in social systems
Phase transitions have been hypothesised to occur in social systems viewed as dynamical systems. A hypothesis proposed in the 1990s and 2000s in the context of peace and armed conflict is that when a conflict that is non-violent shifts to a phase of armed conflict, this is a phase transition from latent to manifest phases within the dynamical system.
Experimental
A variety of methods are applied for studying the various effects. Selected examples are:
Hall effect (measurement of magnetic transitions)
Mössbauer spectroscopy (simultaneous measurement of magnetic and non-magnetic transitions. Limited up to about 800–1000 °C)
Neutron diffraction
Perturbed angular correlation (simultaneous measurement of magnetic and non-magnetic transitions; no temperature limits: over 2000 °C already performed, theoretically possible up to the highest-melting crystalline materials, such as tantalum hafnium carbide at 4215 °C)
Raman Spectroscopy
SQUID (measurement of magnetic transitions)
Thermogravimetry (very common)
X-ray diffraction
See also
Landau theory of second order phase transitions
References
Further reading
Anderson, P.W., Basic Notions of Condensed Matter Physics, Perseus Publishing (1997).
Faghri, A., and Zhang, Y., Fundamentals of Multiphase Heat Transfer and Flow, Springer Nature Switzerland AG, 2020.
Goldenfeld, N., Lectures on Phase Transitions and the Renormalization Group, Perseus Publishing (1992).
M.R. Khoshbin-e-Khoshnazar, Ice Phase Transition as a sample of finite system phase transition, (Physics Education (India) Volume 32. No. 2, Apr - Jun 2016)
Kleinert, H., Gauge Fields in Condensed Matter, Vol. I, "Superfluidity and Vortex lines; Disorder Fields, Phase Transitions", pp. 1–742, World Scientific (Singapore, 1989); Paperback (physik.fu-berlin.de readable online)
Krieger, Martin H., Constitutions of matter : mathematically modelling the most everyday of physical phenomena, University of Chicago Press, 1996. Contains a detailed pedagogical discussion of Onsager's solution of the 2-D Ising Model.
Landau, L.D. and Lifshitz, E.M., Statistical Physics Part 1, vol. 5 of Course of Theoretical Physics, Pergamon Press, 3rd Ed. (1994).
Mussardo G., "Statistical Field Theory. An Introduction to Exactly Solved Models of Statistical Physics", Oxford University Press, 2010.
Schroeder, Manfred R., Fractals, chaos, power laws : minutes from an infinite paradise, New York: W. H. Freeman, 1991. Very well-written book in "semi-popular" style—not a textbook—aimed at an audience with some training in mathematics and the physical sciences. Explains what scaling in phase transitions is all about, among other things.
H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena (Oxford University Press, Oxford and New York 1971).
Yeomans, J. M., Statistical Mechanics of Phase Transitions, Oxford University Press, 1992.
External links
Interactive Phase Transitions on lattices with Java applets
Universality classes from Sklogwiki
Physical phenomena
Critical phenomena | Phase transition | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 6,037 | [
"Physical phenomena",
"Phase transitions",
"Critical phenomena",
"Phases of matter",
"Condensed matter physics",
"Statistical mechanics",
"Matter",
"Dynamical systems"
] |
54,681 | https://en.wikipedia.org/wiki/NP-hardness | In computational complexity theory, a computational problem H is called NP-hard if, for every problem L which can be solved in non-deterministic polynomial time, there is a polynomial-time reduction from L to H. That is, assuming a solution for H takes 1 unit time, H's solution can be used to solve L in polynomial time. As a consequence, finding a polynomial-time algorithm to solve a single NP-hard problem would give polynomial-time algorithms for all the problems in the complexity class NP. As it is suspected, but unproven, that P ≠ NP, it is unlikely that any polynomial-time algorithms for NP-hard problems exist.
A simple example of an NP-hard problem is the subset sum problem.
Informally, if H is NP-hard, then it is at least as difficult to solve as the problems in NP. However, the opposite direction is not true: some problems are undecidable, and therefore even more difficult to solve than all problems in NP, but they are probably not NP-hard (unless P=NP).
Definition
A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time many-one reduction from L to H.
Another definition is to require that there be a polynomial-time reduction from an NP-complete problem G to H. As any problem L in NP reduces in polynomial time to G, L reduces in turn to H in polynomial time, so this new definition implies the previous one. It does not restrict the class NP-hard to decision problems, and it also includes search problems and optimization problems.
Consequences
If P ≠ NP, then NP-hard problems cannot be solved in polynomial time.
Some NP-hard optimization problems can be polynomial-time approximated up to some constant approximation ratio (in particular, those in APX) or even up to any approximation ratio (those in PTAS or FPTAS). There are many classes of approximability, each one enabling approximation up to a different level.
Examples
All NP-complete problems are also NP-hard (see List of NP-complete problems). For example, the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph—commonly known as the travelling salesman problem—is NP-hard. The subset sum problem is another example: given a set of integers, does any non-empty subset of them add up to zero? That is a decision problem and happens to be NP-complete.
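To make this asymmetry concrete, the following Python sketch (illustrative, not from the original article) decides subset sum by brute force in exponential time, while verifying a proposed certificate takes only linear time; this gap between finding and checking is what membership in NP captures:

```python
from itertools import combinations

def subset_sum(nums):
    """Decide whether some non-empty subset of nums sums to zero.
    Brute force: enumerates all 2^n - 1 non-empty subsets (exponential time)."""
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == 0:
                return True, combo
    return False, None

def verify(certificate):
    """Checking a proposed solution takes O(n) time; it is this asymmetry
    between finding and checking that places the problem in NP."""
    return len(certificate) > 0 and sum(certificate) == 0

found, witness = subset_sum([3, -9, 8, 4, 5, 1])
print(found, witness, verify(witness) if witness else None)
```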
There are decision problems that are NP-hard but not NP-complete, such as the halting problem. That is the problem which asks "given a program and its input, will it run forever?" That is a yes/no question and so is a decision problem. It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it to the description of a Turing machine that tries all truth value assignments; when it finds one that satisfies the formula, it halts, and otherwise it goes into an infinite loop. It is also easy to see that the halting problem is not in NP, since all problems in NP are decidable in a finite number of operations, but the halting problem, in general, is undecidable. There are also NP-hard problems that are neither NP-complete nor undecidable. For instance, the language of true quantified Boolean formulas is decidable in polynomial space, but not in non-deterministic polynomial time (unless NP = PSPACE). The reduction from satisfiability to halting described above can be sketched in code, as shown below.
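The helper below (hypothetical names, illustrative only) maps a CNF formula to a program that halts exactly when the formula is satisfiable, mirroring the construction in the text:

```python
from itertools import product

def sat_to_halting_instance(clauses, n_vars):
    """Given a CNF formula (list of clauses; literal k means variable |k|,
    negated if k < 0), return a zero-argument program that halts iff the
    formula is satisfiable -- the reduction sketched in the text."""
    def program():
        for assignment in product([False, True], repeat=n_vars):
            def value(lit):
                v = assignment[abs(lit) - 1]
                return v if lit > 0 else not v
            if all(any(value(lit) for lit in clause) for clause in clauses):
                return assignment          # satisfying assignment found: halt
        while True:                        # unsatisfiable: loop forever
            pass
    return program

# (x1 OR x2) AND (NOT x1 OR x2) is satisfiable (x2 = True), so this halts:
print(sat_to_halting_instance([[1, 2], [-1, 2]], 2)())
```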
NP-naming convention
NP-hard problems do not have to be elements of the complexity class NP.
As NP plays a central role in computational complexity, it is used as the basis of several classes:
NP: the class of computational decision problems for which any given yes-solution can be verified as a solution in polynomial time by a deterministic Turing machine (or solvable by a non-deterministic Turing machine in polynomial time).
NP-hard: the class of problems which are at least as hard as the hardest problems in NP. Problems that are NP-hard do not have to be elements of NP; indeed, they may not even be decidable.
NP-complete: the class of decision problems which contains the hardest problems in NP. Each NP-complete problem has to be in NP.
NP-easy: at most as hard as NP, but not necessarily in NP.
NP-equivalent: decision problems that are both NP-hard and NP-easy, but not necessarily in NP.
NP-intermediate: if P and NP are different, then there exist decision problems in the region of NP that fall between P and the NP-complete problems. (If P and NP are the same class, then NP-intermediate problems do not exist because in this case every NP-complete problem would fall in P, and by definition, every problem in NP can be reduced to an NP-complete problem.)
Application areas
NP-hard problems are often tackled with rules-based languages in areas including:
Approximate computing
Configuration
Cryptography
Data mining
Decision support
Phylogenetics
Planning
Process monitoring and control
Rosters or schedules
Routing/vehicle routing
Scheduling
See also
Lists of problems
List of unsolved problems
Reduction (complexity)
Unknowability
References
Complexity classes | NP-hardness | [
"Mathematics"
] | 1,089 | [
"NP-hard problems",
"Mathematical problems",
"Computational problems"
] |
54,717 | https://en.wikipedia.org/wiki/De%20Broglie%E2%80%93Bohm%20theory | The de Broglie–Bohm theory is an interpretation of quantum mechanics which postulates that, in addition to the wavefunction, an actual configuration of particles exists, even when unobserved. The evolution over time of the configuration of all particles is defined by a guiding equation. The evolution of the wave function over time is given by the Schrödinger equation. The theory is named after Louis de Broglie (1892–1987) and David Bohm (1917–1992).
The theory is deterministic and explicitly nonlocal: the velocity of any one particle depends on the value of the guiding equation, which depends on the configuration of all the particles under consideration.
Measurements are a particular case of quantum processes described by the theory—for which it yields the same quantum predictions as other interpretations of quantum mechanics. The theory does not have a "measurement problem", due to the fact that the particles have a definite configuration at all times. The Born rule in de Broglie–Bohm theory is not a postulate. Rather, in this theory, the link between the probability density and the wave function has the status of a theorem, a result of a separate postulate, the "quantum equilibrium hypothesis", which is additional to the basic principles governing the wave function.
There are several equivalent mathematical formulations of the theory.
Overview
De Broglie–Bohm theory is based on the following postulates:
There is a configuration q of the universe, described by coordinates q ∈ Q, which is an element of the configuration space Q. The configuration space is different for different versions of pilot-wave theory. For example, this may be the space of positions q = (q1, ..., qN) of N particles, or, in case of field theory, the space of field configurations φ(x). The configuration evolves (for spin = 0) according to the guiding equation
dQ/dt = j/|ψ|² = (ℏ/m) Im(∇ψ/ψ), evaluated at the actual configuration Q(t),
where j = (ℏ/m) Im(ψ*∇ψ) is the probability current or probability flux, and p̂ = −iℏ∇ is the momentum operator. Here, ψ(q, t) is the standard complex-valued wavefunction from quantum theory, which evolves according to Schrödinger's equation
iℏ ∂ψ/∂t = −(ℏ²/2m) ∇²ψ + Vψ.
This completes the specification of the theory for any quantum theory with Hamilton operator of type H = −(ℏ²/2m) ∇² + V.
The configuration is distributed according to |ψ(q, t)|² at some moment of time t, and this consequently holds for all times. Such a state is named quantum equilibrium. With quantum equilibrium, this theory agrees with the results of standard quantum mechanics.
Even though this latter relation is frequently presented as an axiom of the theory, Bohm presented it as derivable from statistical-mechanical arguments in the original papers of 1952. This argument was further supported by the work of Bohm in 1953 and was substantiated by Vigier and Bohm's paper of 1954, in which they introduced stochastic fluid fluctuations that drive a process of asymptotic relaxation from quantum non-equilibrium to quantum equilibrium (ρ → |ψ|2).
Double-slit experiment
The double-slit experiment is an illustration of wave–particle duality. In it, a beam of particles (such as electrons) travels through a barrier that has two slits. If a detector screen is on the side beyond the barrier, the pattern of detected particles shows interference fringes characteristic of waves arriving at the screen from two sources (the two slits); however, the interference pattern is made up of individual dots corresponding to particles that had arrived on the screen. The system seems to exhibit the behaviour of both waves (interference patterns) and particles (dots on the screen).
If this experiment is modified so that one slit is closed, no interference pattern is observed. Thus, the state of both slits affects the final results. It can also be arranged to have a minimally invasive detector at one of the slits to detect which slit the particle went through. When that is done, the interference pattern disappears.
In de Broglie–Bohm theory, the wavefunction is defined at both slits, but each particle has a well-defined trajectory that passes through exactly one of the slits. The final position of the particle on the detector screen and the slit through which the particle passes is determined by the initial position of the particle. Such initial position is not knowable or controllable by the experimenter, so there is an appearance of randomness in the pattern of detection. In Bohm's 1952 papers he used the wavefunction to construct a quantum potential that, when included in Newton's equations, gave the trajectories of the particles streaming through the two slits. In effect the wavefunction interferes with itself and guides the particles by the quantum potential in such a way that the particles avoid the regions in which the interference is destructive and are attracted to the regions in which the interference is constructive, resulting in the interference pattern on the detector screen.
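A minimal numerical sketch of such two-slit trajectories (illustrative, not from Bohm's papers; it assumes ℏ = m = 1, models the slits as two free Gaussian wavepackets with an analytic time evolution, and uses a simple Euler integrator):

```python
import numpy as np

# hbar = m = 1; the two "slits" are modelled as Gaussian wavepackets at x = +/- a.
a, sigma = 2.0, 0.4
alpha = 1.0 / (4.0 * sigma**2)

def packet(x, t, centre):
    """Analytic free Gaussian wavepacket: solves i dpsi/dt = -(1/2) psi''."""
    c = 1.0 + 2j * alpha * t
    return c**-0.5 * np.exp(-alpha * (x - centre)**2 / c)

def velocity(x, t):
    """Guidance velocity v = Im(psi'/psi) for the two-slit superposition."""
    c = 1.0 + 2j * alpha * t
    p1, p2 = packet(x, t, +a), packet(x, t, -a)
    dpsi = p1 * (-2 * alpha * (x - a) / c) + p2 * (-2 * alpha * (x + a) / c)
    return np.imag(dpsi / (p1 + p2))

# Launch trajectories from initial positions clustered around the two slits.
x = np.concatenate([np.linspace(-a - sigma, -a + sigma, 8),
                    np.linspace(a - sigma, a + sigma, 8)])
dt, steps = 0.01, 400
for n in range(steps):
    x = x + dt * velocity(x, n * dt)   # explicit Euler step

print(np.round(np.sort(x), 2))  # positions bunch into bands: the fringes
```

Trajectories launched near either slit are steered by the guidance velocity so that, at later times, they cluster into bands corresponding to the bright interference fringes.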
To explain the behavior when the particle is detected to go through one slit, one needs to appreciate the role of the conditional wavefunction and how it results in the collapse of the wavefunction; this is explained below. The basic idea is that the environment registering the detection effectively separates the two wave packets in configuration space.
Theory
Pilot wave
The de Broglie–Bohm theory describes a pilot wave in a configuration space and trajectories of particles as in classical mechanics but defined by non-Newtonian mechanics. At every moment of time there exists not only a wavefunction, but also a well-defined configuration of the whole universe (i.e., the system as defined by the boundary conditions used in solving the Schrödinger equation).
The de Broglie–Bohm theory works on particle positions and trajectories like classical mechanics, but the dynamics are different. In classical mechanics, the accelerations of the particles are imparted directly by forces, which exist in physical three-dimensional space. In de Broglie–Bohm theory, the quantum field exerts a "new kind of 'quantum-mechanical' force". Bohm hypothesized that each particle has a "complex and subtle inner structure" that provides the capacity to react to the information provided by the wavefunction by the quantum potential. Also, unlike in classical mechanics, physical properties (e.g., mass, charge) are spread out over the wavefunction in de Broglie–Bohm theory, not localized at the position of the particle.
The wavefunction itself, and not the particles, determines the dynamical evolution of the system: the particles do not act back onto the wave function. As Bohm and Hiley worded it, "the Schrödinger equation for the quantum field does not have sources, nor does it have any other way by which the field could be directly affected by the condition of the particles [...] the quantum theory can be understood completely in terms of the assumption that the quantum field has no sources or other forms of dependence on the particles". P. Holland considers this lack of reciprocal action of particles and wave function to be one "[a]mong the many nonclassical properties exhibited by this theory". Holland later called this a merely apparent lack of back reaction, due to the incompleteness of the description.
In what follows below, the setup for one particle moving in ℝ³ is given, followed by the setup for N particles moving in 3 dimensions. In the first instance, configuration space and real space are the same, while in the second, real space is still ℝ³, but configuration space becomes ℝ^(3N). While the particle positions themselves are in real space, the velocity field and wavefunction are on configuration space, which is how particles are entangled with each other in this theory.
Extensions to this theory include spin and more complicated configuration spaces.
We use variations of Q for particle positions, while ψ represents the complex-valued wavefunction on configuration space.
Guiding equation
For a spinless single particle moving in ℝ³, the particle's velocity is
dQ/dt = (ℏ/m) Im(∇ψ/ψ)(Q, t).
For many particles, labeled Qk for the k-th particle, their velocities are
dQk/dt = (ℏ/mk) Im(∇kψ/ψ)(Q1, ..., QN, t).
The main fact to notice is that this velocity field depends on the actual positions of all of the particles in the universe. As explained below, in most experimental situations, the influence of all of those particles can be encapsulated into an effective wavefunction for a subsystem of the universe.
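As a sanity check on the guiding equation (a sketch assuming ℏ = m = 1), a plane wave ψ = e^(ikx) should yield the uniform classical velocity ℏk/m:

```python
import numpy as np

hbar, m, k = 1.0, 1.0, 2.5
x = np.linspace(0.0, 10.0, 2001)
psi = np.exp(1j * k * x)              # plane wave of momentum hbar*k

dpsi = np.gradient(psi, x)            # numerical derivative of psi
v = (hbar / m) * np.imag(dpsi / psi)  # guiding-equation velocity field

print(v[1000])                        # ~2.5: the classical velocity hbar*k/m
```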
Schrödinger's equation
The one-particle Schrödinger equation governs the time evolution of a complex-valued wavefunction on ℝ³. The equation represents a quantized version of the total energy of a classical system evolving under a real-valued potential function V on ℝ³:
iℏ ∂ψ/∂t = −(ℏ²/2m) ∇²ψ + Vψ.
For many particles, the equation is the same except that ψ and V are now on configuration space, ℝ^(3N):
iℏ ∂ψ/∂t = −Σk (ℏ²/2mk) ∇k²ψ + Vψ.
This is the same wavefunction as in conventional quantum mechanics.
Relation to the Born rule
In Bohm's original papers, he discusses how de Broglie–Bohm theory results in the usual measurement results of quantum mechanics. The main idea is that this is true if the positions of the particles satisfy the statistical distribution given by |ψ|². And that distribution is guaranteed to be true for all time by the guiding equation if the initial distribution of the particles satisfies |ψ|².
For a given experiment, one can postulate this as being true and verify it experimentally. But, as argued by Dürr et al., one needs to argue that this distribution for subsystems is typical. The authors argue that |ψ|², by virtue of its equivariance under the dynamical evolution of the system, is the appropriate measure of typicality for initial conditions of the positions of the particles. The authors then prove that the vast majority of possible initial configurations will give rise to statistics obeying the Born rule (i.e., ρ = |ψ|²) for measurement outcomes. In summary, in a universe governed by the de Broglie–Bohm dynamics, Born rule behavior is typical.
The situation is thus analogous to the situation in classical statistical physics. A low-entropy initial condition will, with overwhelmingly high probability, evolve into a higher-entropy state: behavior consistent with the second law of thermodynamics is typical. There are anomalous initial conditions that would give rise to violations of the second law; however in the absence of some very detailed evidence supporting the realization of one of those conditions, it would be quite unreasonable to expect anything but the actually observed uniform increase of entropy. Similarly in the de Broglie–Bohm theory, there are anomalous initial conditions that would produce measurement statistics in violation of the Born rule (conflicting the predictions of standard quantum theory), but the typicality theorem shows that absent some specific reason to believe one of those special initial conditions was in fact realized, the Born rule behavior is what one should expect.
It is in this qualified sense that the Born rule is, for the de Broglie–Bohm theory, a theorem rather than (as in ordinary quantum theory) an additional postulate.
It can also be shown that a distribution of particles which is not distributed according to the Born rule (that is, a distribution "out of quantum equilibrium") and evolving under the de Broglie–Bohm dynamics is overwhelmingly likely to evolve dynamically into a state distributed as |ψ|².
The conditional wavefunction of a subsystem
In the formulation of the de Broglie–Bohm theory, there is only a wavefunction for the entire universe (which always evolves by the Schrödinger equation). Here, the "universe" is simply the system limited by the same boundary conditions used to solve the Schrödinger equation. However, once the theory is formulated, it is convenient to introduce a notion of wavefunction also for subsystems of the universe. Let us write the wavefunction of the universe as ψ(t, q^I, q^II), where q^I denotes the configuration variables associated to some subsystem (I) of the universe, and q^II denotes the remaining configuration variables. Denote respectively by Q^I(t) and Q^II(t) the actual configuration of subsystem (I) and of the rest of the universe. For simplicity, we consider here only the spinless case. The conditional wavefunction of subsystem (I) is defined by
ψ^I(t, q^I) = ψ(t, q^I, Q^II(t)).
It follows immediately from the fact that Q(t) = (Q^I(t), Q^II(t)) satisfies the guiding equation that the configuration Q^I(t) also satisfies a guiding equation identical to the one presented in the formulation of the theory, with the universal wavefunction ψ replaced with the conditional wavefunction ψ^I. Also, the fact that Q(t) is random with probability density given by the square modulus of ψ(t, ·) implies that the conditional probability density of Q^I(t) given Q^II(t) is given by the square modulus of the (normalized) conditional wavefunction ψ^I(t, ·) (in the terminology of Dürr et al. this fact is called the fundamental conditional probability formula).
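The definition is easy to realize numerically. The toy sketch below (illustrative grid and wavefunction, not from the original papers) slices a two-particle wavefunction at the actual configuration of subsystem (II), showing how conditioning selects a single branch:

```python
import numpy as np

# Toy universe: two particles on a 1D grid, entangled wavefunction psi(q1, q2).
q = np.linspace(-5.0, 5.0, 401)
Q1, Q2 = np.meshgrid(q, q, indexing="ij")
psi = np.exp(-(Q1 - 1)**2 - (Q2 + 1)**2) + np.exp(-(Q1 + 1)**2 - (Q2 - 1)**2)

# Actual Bohmian configuration of subsystem II (the "environment"):
Q2_actual = 1.0
j = np.argmin(np.abs(q - Q2_actual))

# Conditional wavefunction of subsystem I: psi_I(q1) = psi(q1, Q2_actual)
psi_I = psi[:, j]
psi_I = psi_I / np.sqrt(np.trapz(np.abs(psi_I)**2, q))   # normalize

print(q[np.argmax(np.abs(psi_I))])  # ~ -1: conditioning selects one branch
```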
Unlike the universal wavefunction, the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation, but in many situations it does. For instance, if the universal wavefunction factors as
ψ(t, q^I, q^II) = ψ^I(t, q^I) ψ^II(t, q^II),
then the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to ψ^I (this is what standard quantum theory would regard as the wavefunction of subsystem (I)). If, in addition, the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then ψ^I does satisfy a Schrödinger equation. More generally, assume that the universal wave function ψ can be written in the form
ψ(t, q^I, q^II) = ψ^I(t, q^I) ψ^II(t, q^II) + φ(t, q^I, q^II),
where φ solves the Schrödinger equation and φ(t, q^I, Q^II(t)) = 0 for all t and q^I. Then, again, the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to ψ^I, and if the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then ψ^I satisfies a Schrödinger equation.
The fact that the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation is related to the fact that the usual collapse rule of standard quantum theory emerges from the Bohmian formalism when one considers conditional wavefunctions of subsystems.
Extensions
Relativity
Pilot-wave theory is explicitly nonlocal, which is in ostensible conflict with special relativity. Various extensions of "Bohm-like" mechanics exist that attempt to resolve this problem. Bohm himself in 1953 presented an extension of the theory satisfying the Dirac equation for a single particle. However, this was not extensible to the many-particle case because it used an absolute time.
A renewed interest in constructing Lorentz-invariant extensions of Bohmian theory arose in the 1990s; see Bohm and Hiley: The Undivided Universe and references therein. Another approach is given by Dürr et al., who use Bohm–Dirac models and a Lorentz-invariant foliation of space-time.
Thus, Dürr et al. (1999) showed that it is possible to formally restore Lorentz invariance for the Bohm–Dirac theory by introducing additional structure. This approach still requires a foliation of space-time. While this is in conflict with the standard interpretation of relativity, the preferred foliation, if unobservable, does not lead to any empirical conflicts with relativity. In 2013, Dürr et al. suggested that the required foliation could be covariantly determined by the wavefunction.
The relation between nonlocality and preferred foliation can be better understood as follows. In de Broglie–Bohm theory, nonlocality manifests as the fact that the velocity and acceleration of one particle depends on the instantaneous positions of all other particles. On the other hand, in the theory of relativity the concept of instantaneousness does not have an invariant meaning. Thus, to define particle trajectories, one needs an additional rule that defines which space-time points should be considered instantaneous. The simplest way to achieve this is to introduce a preferred foliation of space-time by hand, such that each hypersurface of the foliation defines a hypersurface of equal time.
Initially, it had been considered impossible to set out a description of photon trajectories in the de Broglie–Bohm theory in view of the difficulties of describing bosons relativistically. In 1996, Partha Ghose presented a relativistic quantum-mechanical description of spin-0 and spin-1 bosons starting from the Duffin–Kemmer–Petiau equation, setting out Bohmian trajectories for massive bosons and for massless bosons (and therefore photons). In 2001, Jean-Pierre Vigier emphasized the importance of deriving a well-defined description of light in terms of particle trajectories in the framework of either the Bohmian mechanics or the Nelson stochastic mechanics. The same year, Ghose worked out Bohmian photon trajectories for specific cases. Subsequent weak-measurement experiments yielded trajectories that coincide with the predicted trajectories. The significance of these experimental findings is controversial.
Chris Dewdney and G. Horton have proposed a relativistically covariant, wave-functional formulation of Bohm's quantum field theory and have extended it to a form that allows the inclusion of gravity.
Nikolić has proposed a Lorentz-covariant formulation of the Bohmian interpretation of many-particle wavefunctions. He has developed a generalized relativistic-invariant probabilistic interpretation of quantum theory, in which |ψ|² is no longer a probability density in space, but a probability density in space-time. He uses this generalized probabilistic interpretation to formulate a relativistic-covariant version of de Broglie–Bohm theory without introducing a preferred foliation of space-time. His work also covers the extension of the Bohmian interpretation to a quantization of fields and strings.
Roderick I. Sutherland at the University of Sydney has a Lagrangian formalism for the pilot wave and its beables. It draws on Yakir Aharonov's retrocausal weak measurements to explain many-particle entanglement in a special relativistic way without the need for configuration space. The basic idea was already published by Olivier Costa de Beauregard in the 1950s and is also used by John Cramer in his transactional interpretation, except for the beables that exist between the von Neumann strong projection operator measurements. Sutherland's Lagrangian includes two-way action-reaction between pilot wave and beables. Therefore, it is a post-quantum non-statistical theory with final boundary conditions that violate the no-signal theorems of quantum theory. Just as special relativity is a limiting case of general relativity when the spacetime curvature vanishes, so, too, is statistical no-entanglement-signaling quantum theory with the Born rule a limiting case of the post-quantum action-reaction Lagrangian when the reaction is set to zero and the final boundary condition is integrated out.
Spin
To incorporate spin, the wavefunction becomes complex-vector-valued. The value space is called spin space; for a spin-1/2 particle, spin space can be taken to be C². The guiding equation is modified by taking inner products in spin space to reduce the complex vectors to complex numbers. The Schrödinger equation is modified by adding a Pauli spin term:
iℏ ∂ψ/∂t = ( −Σk (ℏ²/2mk) Dk² − Σk (μk/(sk ℏ)) Sk·B(rk) + V ) ψ,
where
mk, ek and μk are the mass, charge and magnetic moment of the k-th particle;
Sk is the appropriate spin operator acting in the k-th particle's spin space;
sk is the spin quantum number of the k-th particle (sk = 1/2 for an electron);
A is the vector potential in ℝ³;
B = ∇ × A is the magnetic field in ℝ³;
Dk is the covariant derivative, involving the vector potential, ascribed to the coordinates of the k-th particle (in SI units);
ψ is the wavefunction defined on the multidimensional configuration space; e.g. a system consisting of two spin-1/2 particles and one spin-1 particle has a wavefunction of the form ψ: ℝ⁹ × ℝ → C² ⊗ C² ⊗ C³, where ⊗ is a tensor product, so this spin space is 12-dimensional;
(·,·) is the inner product in spin space C^d: (φ, ψ) = Σs φs* ψs.
Stochastic electrodynamics
Stochastic electrodynamics (SED) is an extension of the de Broglie–Bohm interpretation of quantum mechanics, with the electromagnetic zero-point field (ZPF) playing a central role as the guiding pilot wave. Modern approaches to SED, like those proposed by the group around the late Gerhard Grössing, among others, consider wave-like and particle-like quantum effects as well-coordinated emergent systems. These emergent systems are the result of speculated and calculated sub-quantum interactions with the zero-point field.
Quantum field theory
In Dürr et al., the authors describe an extension of de Broglie–Bohm theory for handling creation and annihilation operators, which they refer to as "Bell-type quantum field theories". The basic idea is that configuration space becomes the (disjoint) space of all possible configurations of any number of particles. For part of the time, the system evolves deterministically under the guiding equation with a fixed number of particles. But under a stochastic process, particles may be created and annihilated. The distribution of creation events is dictated by the wavefunction. The wavefunction itself is evolving at all times over the full multi-particle configuration space.
Hrvoje Nikolić introduces a purely deterministic de Broglie–Bohm theory of particle creation and destruction, according to which particle trajectories are continuous, but particle detectors behave as if particles have been created or destroyed even when a true creation or destruction of particles does not take place.
Curved space
To extend de Broglie–Bohm theory to curved space (Riemannian manifolds in mathematical parlance), one simply notes that all of the elements of these equations make sense, such as gradients and Laplacians. Thus, we use equations that have the same form as above. Topological and boundary conditions may apply in supplementing the evolution of Schrödinger's equation.
For a de Broglie–Bohm theory on curved space with spin, the spin space becomes a vector bundle over configuration space, and the potential in Schrödinger's equation becomes a local self-adjoint operator acting on that space.
The field equations for the de Broglie–Bohm theory in the relativistic case with spin can also be given for curved space-times with torsion.
In a general spacetime with curvature and torsion, the guiding equation for the four-velocity u^μ of an elementary fermion particle is
u^μ = (ψ̄ e^μ_a γ^a ψ) / (ψ̄ ψ),
where the wave function ψ is a spinor, ψ̄ is the corresponding adjoint, γ^a are the Dirac matrices, and e^μ_a is a tetrad. If the wave function propagates according to the curved Dirac equation, then the particle moves according to the Mathisson–Papapetrou equations of motion, which are an extension of the geodesic equation. This relativistic wave-particle duality follows from the conservation laws for the spin tensor and energy-momentum tensor, and also from the covariant Heisenberg picture equation of motion.
Exploiting nonlocality
De Broglie and Bohm's causal interpretation of quantum mechanics was later extended by Bohm, Vigier, Hiley, Valentini and others to include stochastic properties. Bohm and other physicists, including Valentini, view the Born rule linking ρ to the probability density function |ψ|² as representing not a basic law, but a result of a system having reached quantum equilibrium during the course of the time development under the Schrödinger equation. It can be shown that, once an equilibrium has been reached, the system remains in such equilibrium over the course of its further evolution: this follows from the continuity equation associated with the Schrödinger evolution of ψ. It is less straightforward to demonstrate whether and how such an equilibrium is reached in the first place.
Antony Valentini has extended de Broglie–Bohm theory to include signal nonlocality that would allow entanglement to be used as a stand-alone communication channel without a secondary classical "key" signal to "unlock" the message encoded in the entanglement. This violates orthodox quantum theory but has the virtue of making the parallel universes of the chaotic inflation theory observable in principle.
Unlike in de Broglie–Bohm theory, in Valentini's theory the wavefunction evolution also depends on the ontological variables. This introduces an instability, a feedback loop that pushes the hidden variables out of "sub-quantal heat death". The resulting theory becomes nonlinear and non-unitary. Valentini argues that the laws of quantum mechanics are emergent and form a "quantum equilibrium" that is analogous to thermal equilibrium in classical dynamics, such that other "quantum non-equilibrium" distributions may in principle be observed and exploited, for which the statistical predictions of quantum theory are violated. It is controversially argued that quantum theory is merely a special case of a much wider nonlinear physics, a physics in which non-local (superluminal) signalling is possible, and in which the uncertainty principle can be violated.
Results
Below are some highlights of the results that arise out of an analysis of de Broglie–Bohm theory. Experimental results agree with all of quantum mechanics' standard predictions insofar as it has them. But while standard quantum mechanics is limited to discussing the results of "measurements", de Broglie–Bohm theory governs the dynamics of a system without the intervention of outside observers (p. 117 in Bell).
The basis for agreement with standard quantum mechanics is that the particles are distributed according to |ψ|². This is a statement of observer ignorance: the initial positions are represented by a statistical distribution, so deterministic trajectories will result in a statistical distribution.
Measuring spin and polarization
According to ordinary quantum theory, it is not possible to measure the spin or polarization of a particle directly; instead, the component in one direction is measured; the outcome from a single particle may be 1, meaning that the particle is aligned with the measuring apparatus, or −1, meaning that it is aligned the opposite way. An ensemble of particles prepared by a polarizer to be in state 1 will all measure polarized in state 1 in a subsequent apparatus. A polarized ensemble sent through a polarizer set at an angle θ to the first pass will result in some values of 1 and some of −1 with a probability that depends on the relative alignment. For a full explanation of this, see the Stern–Gerlach experiment.
In de Broglie–Bohm theory, the results of a spin experiment cannot be analyzed without some knowledge of the experimental setup. It is possible to modify the setup so that the trajectory of the particle is unaffected, but that the particle with one setup registers as spin-up, while in the other setup it registers as spin-down. Thus, for the de Broglie–Bohm theory, the particle's spin is not an intrinsic property of the particle; instead spin is, so to speak, in the wavefunction of the particle in relation to the particular device being used to measure the spin. This is an illustration of what is sometimes referred to as contextuality and is related to naive realism about operators. Interpretationally, measurement results are a deterministic property of the system and its environment, which includes information about the experimental setup including the context of co-measured observables; in no sense does the system itself possess the property being measured, as would have been the case in classical physics.
Measurements, the quantum formalism, and observer independence
De Broglie–Bohm theory gives the same results as (non-relativistic) quantum mechanics. It treats the wavefunction as a fundamental object in the theory, as the wavefunction describes how the particles move. This means that no experiment can distinguish between the two theories. This section outlines the ideas as to how the standard quantum formalism arises out of the de Broglie–Bohm theory.
Collapse of the wavefunction
De Broglie–Bohm theory is a theory that applies primarily to the whole universe. That is, there is a single wavefunction governing the motion of all of the particles in the universe according to the guiding equation. Theoretically, the motion of one particle depends on the positions of all of the other particles in the universe. In some situations, such as in experimental systems, we can represent the system itself in terms of a de Broglie–Bohm theory in which the wavefunction of the system is obtained by conditioning on the environment of the system. Thus, the system can be analyzed with Schrödinger's equation and the guiding equation, with an initial distribution for the particles in the system (see the section on the conditional wavefunction of a subsystem for details).
It requires a special setup for the conditional wavefunction of a system to obey a quantum evolution. When a system interacts with its environment, such as through a measurement, the conditional wavefunction of the system evolves in a different way. The evolution of the universal wavefunction can become such that the wavefunction of the system appears to be in a superposition of distinct states. But if the environment has recorded the results of the experiment, then using the actual Bohmian configuration of the environment to condition on, the conditional wavefunction collapses to just one alternative, the one corresponding with the measurement results.
Collapse of the universal wavefunction never occurs in de Broglie–Bohm theory. Its entire evolution is governed by Schrödinger's equation, and the particles' evolutions are governed by the guiding equation. Collapse only occurs in a phenomenological way for systems that seem to follow their own Schrödinger's equation. As this is an effective description of the system, it is a matter of choice as to what to define the experimental system to include, and this will affect when "collapse" occurs.
Operators as observables
In the standard quantum formalism, measuring observables is generally thought of as measuring operators on the Hilbert space. For example, measuring position is considered to be a measurement of the position operator. This relationship between physical measurements and Hilbert space operators is, for standard quantum mechanics, an additional axiom of the theory. The de Broglie–Bohm theory, by contrast, requires no such measurement axioms (and measurement as such is not a dynamically distinct or special sub-category of physical processes in the theory). In particular, the usual operators-as-observables formalism is, for de Broglie–Bohm theory, a theorem. A major point of the analysis is that many of the measurements of the observables do not correspond to properties of the particles; they are (as in the case of spin discussed above) measurements of the wavefunction.
In the history of de Broglie–Bohm theory, the proponents have often had to deal with claims that this theory is impossible. Such arguments are generally based on inappropriate analysis of operators as observables. If one believes that spin measurements are indeed measuring the spin of a particle that existed prior to the measurement, then one does reach contradictions. De Broglie–Bohm theory deals with this by noting that spin is not a feature of the particle, but rather that of the wavefunction. As such, it only has a definite outcome once the experimental apparatus is chosen. Once that is taken into account, the impossibility theorems become irrelevant.
There are also objections to this theory based on what it says about particular situations usually involving eigenstates of an operator. For example, the ground state of hydrogen is a real wavefunction. According to the guiding equation, this means that the electron is at rest when in this state. Nevertheless, it is distributed according to |ψ|², and no contradiction with experimental results can be detected.
Operators as observables leads many to believe that many operators are equivalent. De Broglie–Bohm theory, from this perspective, chooses the position observable as a favored observable rather than, say, the momentum observable. Again, the link to the position observable is a consequence of the dynamics. The motivation for de Broglie–Bohm theory is to describe a system of particles. This implies that the goal of the theory is to describe the positions of those particles at all times. Other observables do not have this compelling ontological status. Having definite positions explains having definite results such as flashes on a detector screen. Other observables would not lead to that conclusion, but there need not be any problem in defining a mathematical theory for other observables; see Hyman et al. for an exploration of the fact that a probability density and probability current can be defined for any set of commuting operators.
Hidden variables
De Broglie–Bohm theory is often referred to as a "hidden-variable" theory. Bohm used this description in his original papers on the subject, writing: "From the point of view of the usual interpretation, these additional elements or parameters [permitting a detailed causal and continuous description of all processes] could be called 'hidden' variables." Bohm and Hiley later stated that they found Bohm's choice of the term "hidden variables" to be too restrictive. In particular, they argued that a particle is not actually hidden but rather "is what is most directly manifested in an observation [though] its properties cannot be observed with arbitrary precision (within the limits set by uncertainty principle)". However, others nevertheless treat the term "hidden variable" as a suitable description.
Generalized particle trajectories can be extrapolated from numerous weak measurements on an ensemble of equally prepared systems, and such trajectories coincide with the de Broglie–Bohm trajectories. In particular, an experiment with two entangled photons, in which a set of Bohmian trajectories for one of the photons was determined using weak measurements and postselection, can be understood in terms of a nonlocal connection between that photon's trajectory and the other photon's polarization. However, not only the De Broglie–Bohm interpretation, but also many other interpretations of quantum mechanics that do not include such trajectories are consistent with such experimental evidence.
Different predictions
A specialized version of the double slit experiment has been devised to test characteristics of the trajectory predictions.
Experimental realization of this concept disagreed with the Bohm predictions where they differed from standard quantum mechanics. These conclusions have been the subject of debate.
Heisenberg's uncertainty principle
Heisenberg's uncertainty principle states that when two complementary measurements are made, there is a limit to the product of their accuracy. As an example, if one measures the position with an accuracy of Δx and the momentum with an accuracy of Δp, then
Δx Δp ≥ ℏ/2.
In de Broglie–Bohm theory, there is always a matter of fact about the position and momentum of a particle. Each particle has a well-defined trajectory, as well as a wavefunction. Observers have limited knowledge as to what this trajectory is (and thus of the position and momentum). It is the lack of knowledge of the particle's trajectory that accounts for the uncertainty relation. What one can know about a particle at any given time is described by the wavefunction. Since the uncertainty relation can be derived from the wavefunction in other interpretations of quantum mechanics, it can be likewise derived (in the epistemic sense mentioned above) on the de Broglie–Bohm theory.
To put the statement differently, the particles' positions are only known statistically. As in classical mechanics, successive observations of the particles' positions refine the experimenter's knowledge of the particles' initial conditions. Thus, with succeeding observations, the initial conditions become more and more restricted. This formalism is consistent with the normal use of the Schrödinger equation.
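Numerically, this uncertainty product can be checked on a grid (a sketch with ℏ = 1; the Gaussian is the minimum-uncertainty case, so the bound is saturated):

```python
import numpy as np

# Gaussian wavepacket on a grid: the minimum-uncertainty state, hbar = 1.
sigma = 0.7
x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

prob = np.abs(psi) ** 2
delta_x = np.sqrt(np.sum(prob * x**2) * dx)   # <x> = 0 by symmetry

dpsi = np.gradient(psi, x)
# <p^2> = integral of |psi'|^2 dx (for real psi, <p> = 0)
delta_p = np.sqrt(np.sum(np.abs(dpsi) ** 2) * dx)

print(delta_x * delta_p)   # ~0.5 = hbar/2: the uncertainty bound is saturated
```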
For the derivation of the uncertainty relation, see Heisenberg uncertainty principle, noting that this article describes the principle from the viewpoint of the Copenhagen interpretation.
Quantum entanglement, Einstein–Podolsky–Rosen paradox, Bell's theorem, and nonlocality
De Broglie–Bohm theory highlighted the issue of nonlocality: it inspired John Stewart Bell to prove his now-famous theorem, which in turn led to the Bell test experiments.
In the Einstein–Podolsky–Rosen paradox, the authors describe a thought experiment that one could perform on a pair of particles that have interacted, the results of which they interpreted as indicating that quantum mechanics is an incomplete theory.
Decades later John Bell proved Bell's theorem (see p. 14 in Bell), in which he showed that, if they are to agree with the empirical predictions of quantum mechanics, all such "hidden-variable" completions of quantum mechanics must either be nonlocal (as the Bohm interpretation is) or give up the assumption that experiments produce unique results (see counterfactual definiteness and many-worlds interpretation). In particular, Bell proved that any local theory with unique results must make empirical predictions satisfying a statistical constraint called "Bell's inequality".
Alain Aspect performed a series of Bell test experiments that test Bell's inequality using an EPR-type setup. Aspect's results show experimentally that Bell's inequality is in fact violated, meaning that the relevant quantum-mechanical predictions are correct. In these Bell test experiments, entangled pairs of particles are created; the particles are separated, traveling to remote measuring apparatus. The orientation of the measuring apparatus can be changed while the particles are in flight, demonstrating the apparent nonlocality of the effect.
The de Broglie–Bohm theory makes the same (empirically correct) predictions for the Bell test experiments as ordinary quantum mechanics. It is able to do this because it is manifestly nonlocal. It is often criticized or rejected based on this; Bell's attitude was: "It is a merit of the de Broglie–Bohm version to bring this [nonlocality] out so explicitly that it cannot be ignored."
The de Broglie–Bohm theory describes the physics in the Bell test experiments as follows: to understand the evolution of the particles, we need to set up a wave equation for both particles; the orientation of the apparatus affects the wavefunction. The particles in the experiment follow the guidance of the wavefunction. It is the wavefunction that carries the faster-than-light effect of changing the orientation of the apparatus. Maudlin provides an analysis of exactly what kind of nonlocality is present and how it is compatible with relativity. Bell has shown that the nonlocality does not allow superluminal communication. Maudlin has shown this in greater detail.
Classical limit
Bohm's formulation of de Broglie–Bohm theory in a classical-looking version has the merits that the emergence of classical behavior seems to follow immediately for any situation in which the quantum potential is negligible, as noted by Bohm in 1952. Modern methods of decoherence are relevant to an analysis of this limit. See Allori et al. for steps towards a rigorous analysis.
Quantum trajectory method
Work by Robert E. Wyatt in the early 2000s attempted to use the Bohm "particles" as an adaptive mesh that follows the actual trajectory of a quantum state in time and space. In the "quantum trajectory" method, one samples the quantum wavefunction with a mesh of quadrature points. One then evolves the quadrature points in time according to the Bohm equations of motion. At each time step, one then re-synthesizes the wavefunction from the points, recomputes the quantum forces, and continues the calculation. (QuickTime movies of this for H + H2 reactive scattering can be found on the Wyatt group web-site at UT Austin.)
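The loop structure of such methods can be sketched in a few lines. The example below is an illustration with assumed parameters, not Wyatt's actual scheme: the wave is advanced spectrally on a fixed grid rather than re-synthesized from the moving points, and the system is a free particle in natural units $\hbar = m = 1$. It advects an ensemble of sample points with the Bohmian velocity field at every time step:

```python
# Sketch: evolve a free Gaussian packet and advect Bohmian sample points
# with v(x, t) = (hbar/m) * Im(dpsi/dx / psi), re-evaluated each step.
import numpy as np

hbar = m = 1.0
N, L = 2048, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

sigma0, k0 = 1.0, 2.0                          # assumed width and mean momentum
psi = np.exp(-x**2 / (4 * sigma0**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

dt, steps = 0.005, 2000
free_step = np.exp(-1j * hbar * k**2 * dt / (2 * m))   # exact free evolution

rng = np.random.default_rng(0)
particles = rng.normal(0.0, sigma0, size=50)   # drawn from |psi|^2 at t = 0

def velocity_field(psi):
    dpsi = np.gradient(psi, dx)
    return (hbar / m) * np.imag(dpsi / (psi + 1e-30))  # regularized where psi ~ 0

for _ in range(steps):
    psi = np.fft.ifft(free_step * np.fft.fft(psi))           # move the wave
    particles += np.interp(particles, x, velocity_field(psi)) * dt

print("mean trajectory endpoint:", particles.mean())  # ~ k0 * t = 20
```

Because the sample points are drawn from $|\psi|^2$ (quantum equilibrium), their ensemble mean tracks the packet's centre while individual trajectories fan out with the spreading wave.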
This approach has been adapted, extended, and used by a number of researchers in the chemical physics community as a way to compute semi-classical and quasi-classical molecular dynamics. A 2007 issue of The Journal of Physical Chemistry A was dedicated to Prof. Wyatt and his work on "computational Bohmian dynamics".
Eric R. Bittner's group at the University of Houston has advanced a statistical variant of this approach that uses a Bayesian sampling technique to sample the quantum density and compute the quantum potential on a structureless mesh of points. This technique was recently used to estimate quantum effects in the heat capacity of small neon clusters Nen for n ≈ 100.
There remain difficulties using the Bohmian approach, mostly associated with the formation of singularities in the quantum potential due to nodes in the quantum wavefunction. In general, nodes forming due to interference effects lead to the case where $R \to 0$, so that the quantum potential becomes singular there. This results in an infinite force on the sample particles, forcing them to move away from the node and often crossing the path of other sample points (which violates single-valuedness). Various schemes have been developed to overcome this; however, no general solution has yet emerged.
These methods, like Bohm's Hamilton–Jacobi formulation, do not apply to situations in which the full dynamics of spin need to be taken into account.
The properties of trajectories in the de Broglie–Bohm theory differ significantly from the Moyal quantum trajectories as well as the quantum trajectories from the unraveling of an open quantum system.
Similarities with the many-worlds interpretation
Kim Joris Boström has proposed a non-relativistic quantum mechanical theory that combines elements of de Broglie-Bohm mechanics and Everett's many-worlds. In particular, the unreal many-worlds interpretation of Hawking and Weinberg is similar to the Bohmian concept of unreal empty branch worlds:
Many authors have expressed critical views of de Broglie–Bohm theory by comparing it to Everett's many-worlds approach. Many (but not all) proponents of de Broglie–Bohm theory (such as Bohm and Bell) interpret the universal wavefunction as physically real. According to some supporters of Everett's theory, if the (never collapsing) wavefunction is taken to be physically real, then it is natural to interpret the theory as having the same many worlds as Everett's theory. In the Everettian view the role of the Bohmian particle is to act as a "pointer", tagging, or selecting, just one branch of the universal wavefunction (the assumption that this branch indicates which wave packet determines the observed result of a given experiment is called the "result assumption"); the other branches are designated "empty" and implicitly assumed by Bohm to be devoid of conscious observers. H. Dieter Zeh comments on these "empty" branches:
David Deutsch has expressed the same point more "acerbically":
This conclusion has been challenged by Detlef Dürr and Justin Lazarovici:
The Bohmian, of course, cannot accept this argument. For her, it is decidedly the particle configuration in three-dimensional space and not the wave function on the abstract configuration space that constitutes a world (or rather, the world). Instead, she will accuse the Everettian of not having local beables (in Bell's sense) in her theory, that is, the ontological variables that refer to localized entities in three-dimensional space or four-dimensional spacetime. The many worlds of her theory thus merely appear as a grotesque consequence of this omission.
Occam's-razor criticism
Both Hugh Everett III and Bohm treated the wavefunction as a physically real field. Everett's many-worlds interpretation is an attempt to demonstrate that the wavefunction alone is sufficient to account for all our observations. When we see the particle detectors flash or hear the click of a Geiger counter, Everett's theory interprets this as our wavefunction responding to changes in the detector's wavefunction, which is responding in turn to the passage of another wavefunction (which we think of as a "particle", but is actually just another wave packet). No particle (in the Bohm sense of having a defined position and velocity) exists according to that theory. For this reason Everett sometimes referred to his own many-worlds approach as the "pure wave theory". Of Bohm's 1952 approach, Everett said:
In the Everettian view, then, the Bohm particles are superfluous entities, similar to, and equally as unnecessary as, for example, the luminiferous ether, which was found to be unnecessary in special relativity. This argument is sometimes called the "redundancy argument", since the superfluous particles are redundant in the sense of Occam's razor.
According to Brown & Wallace, the de Broglie–Bohm particles play no role in the solution of the measurement problem. For these authors, the "result assumption" (see above) is inconsistent with the view that there is no measurement problem in the predictable outcome (i.e. single-outcome) case. They also say that a standard tacit assumption of de Broglie–Bohm theory (that an observer becomes aware of configurations of particles of ordinary objects by means of correlations between such configurations and the configuration of the particles in the observer's brain) is unreasonable. This conclusion has been challenged by Valentini, who argues that the entirety of such objections arises from a failure to interpret de Broglie–Bohm theory on its own terms.
According to Peter R. Holland, in a wider Hamiltonian framework, theories can be formulated in which particles do act back on the wave function.
Derivations
De Broglie–Bohm theory has been derived many times and in many ways. Below are six derivations, all of which are very different and lead to different ways of understanding and extending this theory.
Schrödinger's equation can be derived by using Einstein's light quanta hypothesis: $E = \hbar \omega$, and de Broglie's hypothesis: $\mathbf{p} = \hbar \mathbf{k}$.
The guiding equation can be derived in a similar fashion. We assume a plane wave: $\psi(\mathbf{x},t) = A e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}$. Notice that $i\mathbf{k} = \nabla\psi / \psi$. Assuming that $\mathbf{p} = m\mathbf{v}$ for the particle's actual velocity, we have that $\mathbf{v} = \frac{\hbar}{m} \operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)$. Thus, we have the guiding equation.
Notice that this derivation does not use Schrödinger's equation.
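A quick numerical check of the plane-wave step (added for illustration; the wavenumber is an arbitrary assumption) confirms that the guidance formula returns the de Broglie velocity $\hbar k/m$ at every point:

```python
# Check: for psi = exp(i k x), (hbar/m) * Im(psi'/psi) equals hbar*k/m.
import numpy as np

hbar = m = 1.0
k = 3.0                                   # arbitrary wavenumber (assumption)
x = np.linspace(0.0, 10.0, 1001)
psi = np.exp(1j * k * x)                  # plane wave at t = 0

dpsi = np.gradient(psi, x)
v = (hbar / m) * np.imag(dpsi / psi)
print(np.allclose(v, hbar * k / m, atol=1e-3))   # True
```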
Preserving the density under the time evolution is another method of derivation. This is the method that Bell cites. It is this method that generalizes to many possible alternative theories. The starting point is the continuity equation $\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\, \mathbf{v}) = 0$ for the density $\rho = |\psi|^2$. This equation describes a probability flow along a current. We take the velocity field associated with this current as the velocity field whose integral curves yield the motion of the particle.
A method applicable for particles without spin is to do a polar decomposition of the wavefunction and transform Schrödinger's equation into two coupled equations: the continuity equation from above and the Hamilton–Jacobi equation. This is the method used by Bohm in 1952. The decomposition and equations are as follows:
Decomposition: $\psi(\mathbf{x},t) = R(\mathbf{x},t)\, e^{i S(\mathbf{x},t)/\hbar}$. Note that $R^2(\mathbf{x},t)$ corresponds to the probability density $\rho(\mathbf{x},t) = |\psi(\mathbf{x},t)|^2$.
Continuity equation: $\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho\, \frac{\nabla S}{m} \right) = 0$.
Hamilton–Jacobi equation: $\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m} \frac{\nabla^2 R}{R} = 0.$
The Hamilton–Jacobi equation is the equation derived from a Newtonian system with potential $V - \frac{\hbar^2}{2m} \frac{\nabla^2 R}{R}$ and velocity field $\frac{\nabla S}{m}$. The potential $V$ is the classical potential that appears in Schrödinger's equation, and the other term, involving the Laplacian of the amplitude $R$, is the quantum potential, terminology introduced by Bohm.
This leads to viewing the quantum theory as particles moving under the classical force modified by a quantum force. However, unlike standard Newtonian mechanics, the initial velocity field is already specified by $\frac{\nabla S}{m}$, which is a symptom of this being a first-order theory, not a second-order theory.
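The polar decomposition and the quantum potential can be evaluated directly on a grid. The sketch below (added for illustration, with an arbitrary test state in natural units $\hbar = m = 1$) splits $\psi$ into $R$ and $S$ and computes $Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}$; near the deep interference minimum of $R$, $|Q|$ becomes very large, which is the node-singularity behaviour noted in the quantum trajectory method section above:

```python
# Sketch: polar-decompose psi = R exp(iS/hbar) and evaluate the quantum
# potential Q = -(hbar^2/2m) * R''/R on a grid (natural units).
import numpy as np

hbar = m = 1.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

# Arbitrary test state: a Gaussian envelope carrying two unequal
# counter-propagating components, giving deep interference minima.
psi = np.exp(-x**2 / 2) * (np.exp(2j * x) + 0.9 * np.exp(-2j * x))

R = np.abs(psi)
S = hbar * np.unwrap(np.angle(psi))            # phase of psi
d2R = np.gradient(np.gradient(R, dx), dx)
Q = -(hbar**2 / (2 * m)) * d2R / R             # quantum potential

i = np.argmin(np.abs(x - np.pi / 4))           # a deep minimum of R for this psi
print(f"R = {R[i]:.3f}, Q = {Q[i]:.0f}")       # R is small, |Q| is large there
```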
A fourth derivation was given by Dürr et al. In their derivation, they derive the velocity field by demanding the appropriate transformation properties given by the various symmetries that Schrödinger's equation satisfies, once the wavefunction is suitably transformed. The guiding equation is what emerges from that analysis.
A fifth derivation, given by Dürr et al., is appropriate for generalization to quantum field theory and the Dirac equation. The idea is that a velocity field can also be understood as a first-order differential operator acting on functions. Thus, if we know how it acts on functions, we know what it is. Then, given the Hamiltonian operator $H$, the equation to satisfy for all functions $f$ (with associated multiplication operator $\hat{f}$) is $(v(f))(q) = \operatorname{Re} \left( \frac{(\psi, \frac{i}{\hbar}[H, \hat{f}]\, \psi)}{(\psi, \psi)} \right)(q)$, where $(\psi, \phi)(q) = \psi^*(q)\, \phi(q)$ is the local Hermitian inner product on the value space of the wavefunction.
This formulation allows for stochastic theories such as the creation and annihilation of particles.
A further derivation has been given by Peter R. Holland, on which he bases his quantum-physics textbook The Quantum Theory of Motion. It is based on three basic postulates and an additional fourth postulate that links the wavefunction to measurement probabilities:
A physical system consists in a spatiotemporally propagating wave and a point particle guided by it.
The wave is described mathematically by a solution to Schrödinger's wave equation.
The particle motion is described by a solution to $\dot{\mathbf{x}}(t) = \left[ \frac{\nabla S(\mathbf{x},t)}{m} \right]_{\mathbf{x} = \mathbf{x}(t)}$ in dependence on the initial condition $\mathbf{x}(0) = \mathbf{x}_0$, with $S$ the phase of $\psi$. The fourth postulate is subsidiary yet consistent with the first three:
The probability to find the particle in the differential volume $\mathrm{d}^3 x$ at time t equals $|\psi(\mathbf{x},t)|^2\, \mathrm{d}^3 x$.
History
The theory was historically developed in the 1920s by de Broglie, who, in 1927, was persuaded to abandon it in favour of the then-mainstream Copenhagen interpretation. David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot-wave theory in 1952. Bohm's suggestions were not then widely received, partly due to reasons unrelated to their content, such as Bohm's youthful communist affiliations. The de Broglie–Bohm theory was widely deemed unacceptable by mainstream theorists, mostly because of its explicit non-locality. On the theory, John Stewart Bell, author of the 1964 Bell's theorem wrote in 1982:
Since the 1990s, there has been renewed interest in formulating extensions to de Broglie–Bohm theory, attempting to reconcile it with special relativity and quantum field theory, besides other features such as spin or curved spatial geometries.
De Broglie–Bohm theory has a history of different formulations and names. In this section, each stage is given a name and a main reference.
Pilot-wave theory
Louis de Broglie presented his pilot wave theory at the 1927 Solvay Conference, after close collaboration with Schrödinger, who developed his wave equation for de Broglie's theory. At the end of the presentation, Wolfgang Pauli pointed out that it was not compatible with a semi-classical technique Fermi had previously adopted in the case of inelastic scattering. Contrary to a popular legend, de Broglie actually gave the correct rebuttal that the particular technique could not be generalized for Pauli's purpose, although the audience might have been lost in the technical details and de Broglie's mild manner left the impression that Pauli's objection was valid. He was eventually persuaded to abandon this theory nonetheless because he was "discouraged by criticisms which [it] roused". De Broglie's theory already applies to multiple spin-less particles, but lacks an adequate theory of measurement as no one understood quantum decoherence at the time. An analysis of de Broglie's presentation is given in Bacciagaluppi et al. Also, in 1932 John von Neumann published a no hidden variables proof in his book Mathematical Foundations of Quantum Mechanics, that was widely believed to prove that all hidden-variable theories are impossible. This sealed the fate of de Broglie's theory for the next two decades.
In 1926, Erwin Madelung had developed a hydrodynamic version of Schrödinger's equation, which is incorrectly considered as a basis for the density current derivation of the de Broglie–Bohm theory. The Madelung equations, being quantum analog of Euler equations of fluid dynamics, differ philosophically from the de Broglie–Bohm mechanics and are the basis of the stochastic interpretation of quantum mechanics.
Peter R. Holland has pointed out that, earlier in 1927, Einstein had actually submitted a preprint with a similar proposal but, not convinced, had withdrawn it before publication. According to Holland, failure to appreciate key points of the de Broglie–Bohm theory has led to confusion, the key point being "that the trajectories of a many-body quantum system are correlated not because the particles exert a direct force on one another (à la Coulomb) but because all are acted upon by an entity – mathematically described by the wavefunction or functions of it – that lies beyond them". This entity is the quantum potential.
After publishing his popular textbook Quantum Theory that adhered entirely to the Copenhagen orthodoxy, Bohm was persuaded by Einstein to take a critical look at von Neumann's no hidden variables proof. The result was 'A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables" I and II' [Bohm 1952]. It was an independent origination of the pilot wave theory, and extended it to incorporate a consistent theory of measurement, and to address a criticism of Pauli that de Broglie did not properly respond to; it is taken to be deterministic (though Bohm hinted in the original papers that there should be disturbances to this, in the way Brownian motion disturbs Newtonian mechanics). This stage is known as the de Broglie–Bohm Theory in Bell's work [Bell 1987] and is the basis for 'The Quantum Theory of Motion' [Holland 1993].
This stage applies to multiple particles, and is deterministic.
The de Broglie–Bohm theory is an example of a hidden-variables theory. Bohm originally hoped that hidden variables could provide a local, causal, objective description that would resolve or eliminate many of the paradoxes of quantum mechanics, such as Schrödinger's cat, the measurement problem and the collapse of the wavefunction. However, Bell's theorem complicates this hope, as it demonstrates that there can be no local hidden-variable theory that is compatible with the predictions of quantum mechanics. The Bohmian interpretation is causal but not local.
Bohm's paper was largely ignored or panned by other physicists. Albert Einstein, who had suggested that Bohm search for a realist alternative to the prevailing Copenhagen approach, did not consider Bohm's interpretation to be a satisfactory answer to the quantum nonlocality question, calling it "too cheap", while Werner Heisenberg considered it a "superfluous 'ideological superstructure' ". Wolfgang Pauli, who had been unconvinced by de Broglie in 1927, conceded to Bohm as follows:
I just received your long letter of 20th November, and I also have studied more thoroughly the details of your paper. I do not see any longer the possibility of any logical contradiction as long as your results agree completely with those of the usual wave mechanics and as long as no means is given to measure the values of your hidden parameters both in the measuring apparatus and in the observe [sic] system. As far as the whole matter stands now, your 'extra wave-mechanical predictions' are still a check, which cannot be cashed.
He subsequently described Bohm's theory as "artificial metaphysics".
According to physicist Max Dresden, when Bohm's theory was presented at the Institute for Advanced Study in Princeton, many of the objections were ad hominem, focusing on Bohm's sympathy with communists as exemplified by his refusal to give testimony to the House Un-American Activities Committee.
In 1979, Chris Philippidis, Chris Dewdney and Basil Hiley were the first to perform numeric computations on the basis of the quantum potential to deduce ensembles of particle trajectories. Their work renewed the interests of physicists in the Bohm interpretation of quantum physics.
Eventually John Bell began to defend the theory. In "Speakable and Unspeakable in Quantum Mechanics" [Bell 1987], several of the papers refer to hidden-variables theories (which include Bohm's).
The trajectories of the Bohm model that would result for particular experimental arrangements were termed "surreal" by some. Still in 2016, mathematical physicist Sheldon Goldstein said of Bohm's theory: "There was a time when you couldn't even talk about it because it was heretical. It probably still is the kiss of death for a physics career to be actually working on Bohm, but maybe that's changing."
Bohmian mechanics
Bohmian mechanics is the same theory, but with an emphasis on the notion of current flow, which is determined on the basis of the quantum equilibrium hypothesis that the probability follows the Born rule. The term "Bohmian mechanics" is also often used to include most of the further extensions past the spin-less version of Bohm. While de Broglie–Bohm theory has Lagrangians and Hamilton–Jacobi equations as a primary focus and backdrop, with the icon of the quantum potential, Bohmian mechanics considers the continuity equation as primary and has the guiding equation as its icon. They are mathematically equivalent in so far as the Hamilton–Jacobi formulation applies, i.e., spin-less particles.
All of non-relativistic quantum mechanics can be fully accounted for in this theory. Recent studies have used this formalism to compute the evolution of many-body quantum systems, with a considerable increase in speed as compared to other quantum-based methods.
Causal interpretation and ontological interpretation
Bohm developed his original ideas, calling them the Causal Interpretation. Later he felt that causal sounded too much like deterministic and preferred to call his theory the Ontological Interpretation. The main reference is "The Undivided Universe" (Bohm, Hiley 1993).
This stage covers work by Bohm and in collaboration with Jean-Pierre Vigier and Basil Hiley. Bohm is clear that this theory is non-deterministic (the work with Hiley includes a stochastic theory). As such, this theory is not strictly speaking a formulation of de Broglie–Bohm theory, but it deserves mention here because the term "Bohm Interpretation" is ambiguous between this theory and de Broglie–Bohm theory.
In 1996 philosopher of science Arthur Fine gave an in-depth analysis of possible interpretations of Bohm's model of 1952.
William Simpson has suggested a hylomorphic interpretation of Bohmian mechanics, in which the cosmos is an Aristotelian substance composed of material particles and a substantial form. The wave function is assigned a dispositional role in choreographing the trajectories of the particles.
Hydrodynamic quantum analogs
Experiments on hydrodynamical analogs of quantum mechanics beginning with the work of Couder and Fort (2006) have purported to show that macroscopic classical pilot-waves can exhibit characteristics previously thought to be restricted to the quantum realm. Hydrodynamic pilot-wave analogs have been claimed to duplicate the double slit experiment, tunneling, quantized orbits, and numerous other quantum phenomena which have led to a resurgence in interest in pilot wave theories.
The analogs have been compared to the Faraday wave.
These results have been disputed: experiments fail to reproduce aspects of the double-slit experiments. High precision measurements in the tunneling case point to a different origin of the unpredictable crossing: rather than initial position uncertainty or environmental noise, interactions at the barrier seem to be involved.
Another classical analog has been reported in surface gravity waves.
Surrealistic trajectories
In 1992, Englert, Scully, Sussman, and Walther proposed experiments that would show particles taking paths that differ from the Bohm trajectories. They described the Bohm trajectories as "surrealistic"; their proposal was later referred to as ESSW after the last names of the authors.
In 2016, Mahler et al. verified the ESSW predictions. However, they propose that the surrealistic effect is a consequence of the nonlocality inherent in Bohm's theory.
See also
Madelung equations
Local hidden-variable theory
Superfluid vacuum theory
Fluid analogs in quantum mechanics
Probability current
Notes
References
Sources
(Demonstrates incompleteness of the Bohm interpretation in the face of fractal, differentiable-nowhere wavefunctions.)
(Describes a Bohmian resolution to the dilemma posed by non-differentiable wavefunctions.)
Bohmian mechanics on arxiv.org
Further reading
John S. Bell: Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy, Cambridge University Press, 2004,
David Bohm, Basil Hiley: The Undivided Universe: An Ontological Interpretation of Quantum Theory, Routledge Chapman & Hall, 1993,
Detlef Dürr, Sheldon Goldstein, Nino Zanghì: Quantum Physics Without Quantum Philosophy, Springer, 2012,
Detlef Dürr, Stefan Teufel: Bohmian Mechanics: The Physics and Mathematics of Quantum Theory, Springer, 2009,
Peter R. Holland: The quantum theory of motion, Cambridge University Press, 1993 (re-printed 2000, transferred to digital printing 2004),
External links
"Pilot-Wave Hydrodynamics" Bush, J. W. M., Annual Review of Fluid Mechanics, 2015
"Bohmian Mechanics" (Stanford Encyclopedia of Philosophy)
"Bohmian-Mechanics.net", the homepage of the international research network on Bohmian Mechanics that was started by D. Dürr, S. Goldstein and N. Zanghì.
Workgroup Bohmian Mechanics at LMU Munich (D. Dürr)
Bohmian Mechanics Group at University of Innsbruck (G. Grübl)
"Pilot waves, Bohmian metaphysics, and the foundations of quantum mechanics" , lecture course on de Broglie-Bohm theory by Mike Towler, Cambridge University.
"21st-century directions in de Broglie-Bohm theory and beyond", August 2010 international conference on de Broglie-Bohm theory. Site contains slides for all the talks – the latest cutting-edge deBB research.
"Observing the Trajectories of a Single Photon Using Weak Measurement"
"Bohmian trajectories are no longer 'hidden variables'"
The David Bohm Society
De Broglie–Bohm theory inspired visualization of atomic orbitals.
Interpretations of quantum mechanics
Quantum measurement | De Broglie–Bohm theory | [
"Physics"
] | 12,868 | [
"Interpretations of quantum mechanics",
"Quantum measurement",
"Quantum mechanics"
] |
54,813 | https://en.wikipedia.org/wiki/Shellac | Shellac is a resin secreted by the female lac bug on trees in the forests of India and Thailand. Chemically, it is mainly composed of aleuritic acid, jalaric acid, shellolic acid, and other natural waxes. It is processed and sold as dry flakes and dissolved in alcohol to make liquid shellac, which is used as a brush-on colorant, food glaze and wood finish. Shellac functions as a tough natural primer, sanding sealant, tannin-blocker, odour-blocker, stain, and high-gloss varnish. Shellac was once used in electrical applications as it possesses good insulation qualities and seals out moisture. Phonograph and 78 rpm gramophone records were made of shellac until they were gradually replaced by vinyl from 1948 onwards.
From the time shellac replaced oil and wax finishes in the 19th century, it was one of the dominant wood finishes in the western world until it was largely replaced by nitrocellulose lacquer in the 1920s and 1930s. Besides wood finishing, shellac is used as an ingredient in food, medication and candy as confectioner's glaze, as well as a means of preserving harvested citrus fruit.
Etymology
Shellac comes from shell and lac, a partial calque of French laque en écailles, 'lac in thin pieces', later gomme-laque, 'gum lac'. Most European languages (except Romance ones and Greek) have borrowed the word for the substance from English or from the German equivalent Schellack.
Production
Shellac is scraped from the bark of the trees where the female lac bug, Kerria lacca (order Hemiptera, family Kerriidae, also known as Laccifer lacca), secretes it to form a tunnel-like tube as it traverses the branches of the tree. Though these tunnels are sometimes referred to as "cocoons", they are not cocoons in the entomological sense. This insect is in the same superfamily as the insect from which cochineal is obtained. The insects suck the sap of the tree and excrete "sticklac" almost constantly. The least-coloured shellac is produced when the insects feed on the kusum tree (Schleichera).
The number of lac bugs required to produce 1 kg (2.2 lb) of shellac has variously been estimated between 50,000 and 300,000. The root word lakh is a unit in the Indian numbering system for 100,000 and presumably refers to the huge numbers of insects that swarm on host trees.
The raw shellac, which contains bark shavings and lac bugs removed during scraping, is placed in canvas tubes (much like long socks) and heated over a fire. This causes the shellac to liquefy, and it seeps out of the canvas, leaving the bark and bugs behind. The thick, sticky shellac is then dried into a flat sheet and broken into flakes, or dried into "buttons" (pucks/cakes), then bagged and sold. The end-user then crushes it into a fine powder and mixes it with ethyl alcohol before use, to dissolve the flakes and make liquid shellac.
Liquid shellac has a limited shelf life (about 1 year), so is sold in dry form for dissolution before use. Liquid shellac sold in hardware stores is often marked with the production (mixing) date, so the consumer can know whether the shellac inside is still good. Some manufacturers (e.g., Zinsser) have ceased labeling shellac with the production date, but the production date may be discernible from the production lot code. Alternatively, old shellac may be tested to see if it is still usable: a few drops on glass should dry to a hard surface in roughly 15 minutes. Shellac that remains tacky for a long time is no longer usable. Storage life depends on peak temperature, so refrigeration extends shelf life.
The thickness (concentration) of shellac is measured by the unit "pound cut", referring to the amount (in pounds) of shellac flakes dissolved in a gallon of denatured alcohol. For example: a 1-lb. cut of shellac is the strength obtained by dissolving one pound of shellac flakes in a gallon of alcohol (equivalent to about 120 grams per litre). Most pre-mixed commercial preparations come at a 3-lb. cut. Multiple thin layers of shellac produce a significantly better end result than a few thick layers. Thick layers of shellac do not adhere to the substrate or to each other well, and thus can peel off with relative ease; in addition, thick shellac will obscure fine details in carved designs in wood and other substrates.
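The pound-cut arithmetic is straightforward to express in code (a minimal helper added for illustration, assuming a US gallon and standard unit conversions):

```python
# Convert a shellac "pound cut" (pounds of flakes per US gallon of
# alcohol) into grams of flakes per litre.
GRAMS_PER_POUND = 453.592
LITRES_PER_US_GALLON = 3.78541

def pound_cut_to_grams_per_litre(cut: float) -> float:
    return cut * GRAMS_PER_POUND / LITRES_PER_US_GALLON

print(round(pound_cut_to_grams_per_litre(1)))   # ~120 g/L for a 1-lb. cut
print(round(pound_cut_to_grams_per_litre(3)))   # ~359 g/L for a 3-lb. cut
```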
Shellac naturally dries to a high-gloss sheen. For applications where a flatter (less shiny) sheen is desired, products containing amorphous silica, such as "Shellac Flat", may be added to the dissolved shellac.
Shellac naturally contains a small amount of wax (3%–5% by volume), which comes from the lac bug. In some preparations, this wax is removed (the resulting product being called "dewaxed shellac"). This is done for applications where the shellac will be coated with something else (such as paint or varnish), so the topcoat will adhere. Waxy (non-dewaxed) shellac appears milky in liquid form, but dries clear.
Colours and availability
Shellac comes in many warm colours, ranging from a very light blonde ("platina") to a very dark brown ("garnet"), with many varieties of brown, yellow, orange and red in between. The colour is influenced by the sap of the tree the lac bug is living on and by the time of harvest. Historically, the most commonly sold shellac is called "orange shellac", and was used extensively as a combination stain and protectant for wood panelling and cabinetry in the 20th century.
Shellac was once very common anywhere paints or varnishes were sold (such as hardware stores). However, cheaper and more abrasion- and chemical-resistant finishes, such as polyurethane, have almost completely replaced it in decorative residential wood finishing such as hardwood floors, wooden wainscoting plank panelling, and kitchen cabinets. These alternative products, however, must be applied over a stain if the user wants the wood to be coloured; clear or blonde shellac may be applied over a stain without affecting the colour of the finished piece, as a protective topcoat. "Wax over shellac" (an application of buffed-on paste wax over several coats of shellac) is often regarded as a beautiful, if fragile, finish for hardwood floors. Luthiers still use shellac to French polish fine acoustic stringed instruments, but it has been replaced by synthetic plastic lacquers and varnishes in many workshops, especially high-volume production environments.
Shellac dissolved in alcohol, typically more dilute than as used in French polish, is now commonly sold as "sanding sealer" by several companies. It is used to seal wooden surfaces, often as preparation for a final more durable finish; it reduces the amount of final coating required by reducing its absorption into the wood.
Properties
Shellac is a natural bioadhesive polymer and is chemically similar to synthetic polymers. It can thus be considered a natural form of plastic.
With a melting point of about 75 °C (167 °F), it can be classed as a thermoplastic; mixed with wood flour as a binder, the compound can be moulded with heat and pressure.
Shellac scratches more easily than most lacquers and varnishes, and application is more labour-intensive, which is why it has been replaced by plastic in most areas. Shellac is much softer than Urushi lacquer, for instance, which is far superior with regard to both chemical and mechanical resistance. But damaged shellac can easily be touched up with another coat of shellac (unlike polyurethane, which chemically cures to a solid) because the new coat merges with and bonds to the existing coat(s).
Shellac is soluble in alkaline solutions of ammonia, sodium borate, sodium carbonate, and sodium hydroxide, and also in various organic solvents. When dissolved in alcohol (typically denatured ethanol) for application, shellac yields a coating of good durability and hardness.
Upon mild hydrolysis shellac gives a complex mix of aliphatic and alicyclic hydroxy acids and their polymers that varies in exact composition depending upon the source of the shellac and the season of collection. The major component of the aliphatic component is aleuritic acid, whereas the main alicyclic component is shellolic acid.
Shellac is UV-resistant, and does not darken as it ages (though the wood under it may do so, as in the case of pine).
History
The earliest written evidence of shellac goes back 3,000 years, but shellac is known to have been used earlier. According to the ancient Indian epic poem, the Mahabharata, an entire palace was coated with dried shellac.
Shellac was uncommonly used as a dyestuff for as long as there was a trade with the East Indies. According to Merrifield, shellac was first used as a binding agent in artist's pigments in Spain in the year 1220.
The use of overall paint or varnish decoration on large pieces of furniture was first popularised in Venice (then later throughout Italy). There are a number of 13th-century references to painted or varnished cassone, often dowry cassone that were made deliberately impressive as part of dynastic marriages. The definition of varnish is not always clear, but it seems to have been a spirit varnish based on gum benjamin or mastic, both traded around the Mediterranean. At some time, shellac began to be used as well. An article from the Journal of the American Institute of Conservation describes using infrared spectroscopy to identify shellac coating on a 16th-century cassone. This is also the period in history where "varnisher" was identified as a distinct trade, separate from both carpenter and artist.
Another use for shellac is sealing wax. The widespread use of shellac seals in Europe dates back to the 17th century, thanks to the increasing trade with India.
Uses
Historical
In the early- and mid-twentieth century, orange shellac was used as a one-product finish (combination stain and varnish-like topcoat) on decorative wood panelling used on walls and ceilings in homes, particularly in the US. In the American South, use of knotty pine plank panelling covered with orange shellac was once as common in new construction as drywall is today. It was also often used on kitchen cabinets and hardwood floors, prior to the advent of polyurethane.
Until the advent of vinyl, most gramophone records were pressed from shellac compounds. From 1921 to 1928, tons of shellac were used to create 260 million records for Europe. In the 1930s, it was estimated that half of all shellac was used for gramophone records. Use of shellac for records was common until the 1950s and continued into the 1970s in some non-Western countries, as well as for some children's records.
Until recent advances in technology, shellac (French polish) was the only glue used in the making of ballet dancers' pointe shoes, to stiffen the box (toe area) to support the dancer en pointe. Many manufacturers of pointe shoes still use the traditional techniques, and many dancers use shellac to revive a softening pair of shoes.
Shellac was historically used as a protective coating on paintings.
Sheets of Braille were coated with shellac to help protect them from wear due to being read by hand.
Shellac was used from the mid-nineteenth century to produce small moulded goods such as picture frames, boxes, toilet articles, jewelry, inkwells and even dentures. Advances in plastics have rendered shellac obsolete as a moulding compound.
Shellac (both orange and white varieties) was used both in the field and laboratory to glue and stabilise dinosaur bones until about the mid-1960s. While effective at the time, the long-term negative effects of shellac (being organic in nature) on dinosaur bones and other fossils is debated, and shellac is very rarely used by professional conservators and fossil preparators today.
Shellac was used for fixing inductor, motor, generator and transformer windings. It was applied directly to single-layer windings in an alcohol solution. For multi-layer windings, the whole coil was submerged in shellac solution, then drained and placed in a warm location to allow the alcohol to evaporate. The shellac locked the wire turns in place, provided extra insulation, prevented movement and vibration and reduced buzz and hum. In motors and generators it also helps transfer force generated by magnetic attraction and repulsion from the windings to the rotor or armature. In more recent times, shellac has been replaced in these applications by synthetic resins such as polyester resin. Some applications use shellac mixed with other natural or synthetic resins, such as pine resin or phenol-formaldehyde resin, of which Bakelite is the best known, for electrical use. Mixed with other resins, barium sulfate, calcium carbonate, zinc sulfide, aluminium oxide and/or cuprous carbonate (malachite), shellac forms a component of heat-cured capping cement used to fasten the caps or bases to the bulbs of electric lamps.
Current uses
It is the central element of the traditional "French polish" method of finishing furniture, fine string instruments, and pianos.
Shellac, being edible, is used as a glazing agent on pills (see excipient) and sweets, in the form of pharmaceutical glaze (or, "confectioner's glaze"). Because of its acidic properties (resisting stomach acids), shellac-coated pills may be used for a timed enteric or colonic release. Shellac is used as a 'wax' coating on citrus fruit to prolong its shelf/storage life. It is also used to replace the natural wax of the apple, which is removed during the cleaning process. When used for this purpose, it has the food additive E number E904.
Shellac is an odour and stain blocker and so is often used as the base of "all-purpose" primers. Although its durability against abrasives and many common solvents is not very good, shellac provides an excellent barrier against water vapour penetration. Shellac-based primers are an effective sealant to control odours associated with fire damage.
Shellac has traditionally been used as a dye for cotton and, especially, silk cloth in Thailand, particularly in the north-eastern region. It yields a range of warm colours from pale yellow through to dark orange-reds and dark ochre. Naturally dyed silk cloth, including that using shellac, is widely available in the rural northeast, especially in Ban Khwao District, Chaiyaphum province. The Thai name for the insect and the substance is "khrang" (Thai: ครั่ง).
Wood finish
Wood finishing is one of the most traditional and still popular uses of shellac mixed with solvents or alcohol. This dissolved shellac liquid, applied to a piece of wood, is an evaporative finish: the alcohol of the shellac mixture evaporates, leaving behind a protective film.
Shellac as a wood finish is natural and non-toxic in its pure form. A finish made of shellac is UV-resistant. In water resistance and durability, however, it does not match synthetic finishing products.
Because it is compatible with most other finishes, shellac is also used as a barrier or primer coat on wood to prevent the bleeding of resin or pigments into the final finish, or to prevent wood stain from blotching.
Other
Shellac is used:
in the tying of artificial flies for trout and salmon, where the shellac was used to seal all trimmed materials at the head of the fly.
in combination with wax for preserving and imparting a shine to citrus fruits, such as lemons and oranges.
in dental technology, where it is occasionally used in the production of custom impression trays and temporary denture baseplate production.
as a binder in India ink.
for bicycles, as a protective and decorative coating for bicycle handlebar tape, and as a hard-drying adhesive for tubular tyres, particularly for track racing.
for re-attaching ink sacs when restoring vintage fountain pens, the orange variety preferably.
applied as a coating with either a standard or modified Huon-Stuehrer nozzle, can be economically micro-sprayed onto various smooth candies, such as chocolate coated peanuts. Irregularities on the surface of the product being sprayed may result in the formation of unsightly aggregates ("lac-aggs") which precludes the use of this technique on foods such as walnuts or raisins.
for fixing pads to the key-cups of woodwind instruments.
for luthierie applications, to bind wood fibres down and prevent tear out on the soft spruce soundboards.
to stiffen and impart water-resistance to felt hats, for wood finishing and as a constituent of gossamer (or goss for short), a cheesecloth fabric coated in shellac and ammonia solution used in the shell of traditional silk top and riding hats.
for mounting insects, in the form of a gel adhesive mixture composed of 75% ethyl alcohol.
as a binder in the fabrication of abrasive wheels, imparting flexibility and smoothness not found in vitrified (ceramic bond) wheels. 'Elastic' bonded wheels typically contain plaster of paris, yielding a stronger bond when mixed with shellac; the mixture of dry plaster powder, abrasive (e.g. corundum/aluminium oxide Al2O3), and shellac are heated and the mixture pressed in a mould.
in fireworks pyrotechnic compositions as a low-temperature fuel, where it allows the creation of pure 'greens' and 'blues'- colours difficult to achieve with other fuel mixes.
in jewellery; shellac is often applied to the top of a 'shellac stick' in order to hold small, complex, objects. By melting the shellac, the jeweller can press the object (such as a stone setting mount) into it. The shellac, once cool, can firmly hold the object, allowing it to be manipulated with tools.
in watchmaking, due to its low melting temperature, shellac is used in most mechanical movements to adjust and adhere pallet stones to the pallet fork and secure the roller jewel to the roller table of the balance wheel. Also for securing small parts to a 'wax chuck' (faceplate) in a watchmakers' lathe.
in the early twentieth century, it was used to protect some military rifle stocks.
in Jelly Belly jelly beans, in combination with beeswax to give them their final buff and polish.
in modern traditional archery, shellac is one of the hot-melt glue/resin products used to attach arrowheads to wooden or bamboo arrow shafts.
in alcohol solution as sanding sealer, widely sold to seal sanded surfaces, typically wooden surfaces before a final coat of a more durable finish. Similar to French polish but more dilute.
as a topcoat in nail polish (although not all nail polish sold as "shellac" contains shellac, and some nail polish not labelled in this way does).
in sculpture, to seal plaster and in conjunction with wax or oil-soaps, to act as a barrier during mold-making processes.
as a dilute solution in the sealing of harpsichord soundboards, protecting them from dust and buffering humidity changes while maintaining a bare-wood appearance.
as a waterproofing agent for leather (e.g., for the soles of figure skate boots).
as a way for ballet dancers to harden their pointe shoes, making them last longer.
Gallery
See also
Polymers
Rosin
References
External links
Shellac.net US shellac vendor – properties and uses of dewaxed and non-dewaxed shellac
The Story of Shellac (history)
DIYinfo.org's Shellac Wiki, practical information on everything to do with shellac
Reactive Pyrolysis-Gas Chromatography of Shellac
Shellac A short introduction to the origin of shellac, the history of Japanning and French polishing, and how to conserve and repair these finishes sympathetically
Shellac Application By Smith & Rodger
Wood finishing materials
Food additives
Insect products
Polymers
Resins
Waxes
Excipients
Forestry in India
Non-timber forest products
E-number additives | Shellac | [
"Physics",
"Chemistry",
"Materials_science"
] | 4,311 | [
"Resins",
"Unsolved problems in physics",
"Materials",
"Polymer chemistry",
"Polymers",
"Amorphous solids",
"Matter",
"Waxes"
] |
55,017 | https://en.wikipedia.org/wiki/Fusion%20power | Fusion power is a proposed form of power generation that would generate electricity by using heat from nuclear fusion reactions. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus, while releasing energy. Devices designed to harness this energy are known as fusion reactors. Research into fusion reactors began in the 1940s, but as of 2024, no device has reached net power, although net positive reactions have been achieved.
Fusion processes require fuel and a confined environment with sufficient temperature, pressure, and confinement time to create a plasma in which fusion can occur. The combination of these figures that results in a power-producing system is known as the Lawson criterion. In stars the most common fuel is hydrogen, and gravity provides extremely long confinement times that reach the conditions needed for fusion energy production. Proposed fusion reactors generally use heavy hydrogen isotopes such as deuterium and tritium (and especially a mixture of the two), which react more easily than protium (the most common hydrogen isotope) and produce a helium nucleus and an energized neutron, allowing them to reach the Lawson criterion requirements with less extreme conditions. Most designs aim to heat their fuel to around 100 million kelvins, which presents a major challenge in producing a successful design. Tritium is extremely rare on Earth, having a half-life of only about 12.3 years. Consequently, during the operation of envisioned fusion reactors, known as breeder reactors, helium-cooled pebble beds (HCPBs) are subjected to neutron fluxes to generate tritium to complete the fuel cycle.
As a source of power, nuclear fusion has a number of potential advantages compared to fission. These include reduced radioactivity in operation, little high-level nuclear waste, ample fuel supplies (assuming tritium breeding or some forms of aneutronic fuels), and increased safety. However, the necessary combination of temperature, pressure, and duration has proven to be difficult to produce in a practical and economical manner. A second issue that affects common reactions is managing neutrons that are released during the reaction, which over time degrade many common materials used within the reaction chamber.
Fusion researchers have investigated various confinement concepts. The early emphasis was on three main systems: z-pinch, stellarator, and magnetic mirror. The current leading designs are the tokamak and inertial confinement (ICF) by laser. Both designs are under research at very large scales, most notably the ITER tokamak in France and the National Ignition Facility (NIF) laser in the United States. Researchers are also studying other designs that may offer less expensive approaches. Among these alternatives, there is increasing interest in magnetized target fusion and inertial electrostatic confinement, and new variations of the stellarator.
Background
Mechanism
Fusion reactions occur when two or more atomic nuclei come close enough for long enough that the nuclear force pulling them together exceeds the electrostatic force pushing them apart, fusing them into heavier nuclei. For nuclei heavier than iron-56, the reaction is endothermic, requiring an input of energy: such nuclei have many more protons, resulting in a greater repulsive force. For nuclei lighter than iron-56, the reaction is exothermic, releasing energy when they fuse. Since hydrogen has a single proton in its nucleus, it requires the least effort to attain fusion and yields the most net energy output. Also, since it has one electron, hydrogen is the easiest fuel to fully ionize.
The repulsive electrostatic interaction between nuclei operates across larger distances than the strong force, which has a range of roughly one femtometer—the diameter of a proton or neutron. The fuel atoms must be supplied enough kinetic energy to approach one another closely enough for the strong force to overcome the electrostatic repulsion in order to initiate fusion. The "Coulomb barrier" is the quantity of kinetic energy required to move the fuel atoms near enough. Atoms can be heated to extremely high temperatures or accelerated in a particle accelerator to produce this energy.
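The scale of the Coulomb barrier follows from the electrostatic energy of two elementary charges at the range of the strong force (a back-of-envelope sketch added here; the one-femtometre separation is the rough figure quoted above):

```python
# Estimate: Coulomb energy U = e^2 / (4*pi*eps0*r) of two singly charged
# nuclei at roughly the range of the strong force (~1 fm).
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m
R = 1.0e-15                   # separation, m (rough strong-force range)

u_joules = E_CHARGE**2 / (4 * math.pi * EPS0 * R)
u_kev = u_joules / E_CHARGE / 1e3
print(f"Coulomb barrier ~ {u_kev:.0f} keV")   # ~1440 keV, i.e. ~1.4 MeV
```

Quantum tunnelling lets fusion proceed at mean particle energies far below this figure, which is why plasmas in the tens-of-keV range (on the order of 100 million kelvins) suffice in practice.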
An atom loses its electrons once it is heated past its ionization energy. An ion is the name for the resultant bare nucleus. The result of this ionization is plasma, which is a heated cloud of ions and free electrons that were formerly bound to them. Plasmas are electrically conducting and magnetically controlled because the charges are separated. This is used by several fusion devices to confine the hot particles.
Cross section
A reaction's cross section, denoted σ, measures the probability that a fusion reaction will happen. This depends on the relative velocity of the two nuclei. Higher relative velocities generally increase the probability, but the probability begins to decrease again at very high energies.
In a plasma, particle velocity can be characterized using a probability distribution. If the plasma is thermalized, the distribution looks like a Gaussian curve, or Maxwell–Boltzmann distribution. In this case, it is useful to use the average particle cross section over the velocity distribution. This is entered into the volumetric fusion rate:
where:
is the energy made by fusion, per time and volume
n is the number density of species A or B, of the particles in the volume
is the cross section of that reaction, average over all the velocities of the two species v
is the energy released by that fusion reaction.
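For scale, the rate formula above can be evaluated with representative deuterium–tritium numbers (a sketch added for illustration; the rate coefficient and density are assumed order-of-magnitude values, not figures from this article):

```python
# Volumetric D-T fusion power, P = n_D * n_T * <sigma v> * E_fusion,
# with assumed order-of-magnitude inputs near 10 keV.
MEV = 1.602176634e-13            # joules per MeV

n = 1.0e20                       # total fuel ions per m^3 (assumed, ITER-like)
n_d = n_t = n / 2                # 50/50 deuterium-tritium mix
sigma_v = 1.1e-22                # m^3/s, assumed rate coefficient near 10 keV
e_fusion = 17.6 * MEV            # energy per D-T reaction

power_density = n_d * n_t * sigma_v * e_fusion     # W/m^3
print(f"{power_density / 1e6:.2f} MW/m^3")         # ~0.8 MW/m^3
```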
Lawson criterion
The Lawson criterion considers the energy balance between the energy produced in fusion reactions and the energy being lost to the environment. In order to generate usable energy, a system would have to produce more energy than it loses. Lawson assumed an energy balance, shown below:

$P_{\text{net}} = \eta_{\text{capture}} \left( P_{\text{fusion}} - P_{\text{conduction}} - P_{\text{radiation}} \right)$

where:
$P_{\text{net}}$ is the net power from fusion
$\eta_{\text{capture}}$ is the efficiency of capturing the output of the fusion
$P_{\text{fusion}}$ is the rate of energy generated by the fusion reactions
$P_{\text{conduction}}$ is the conduction losses as energetic mass leaves the plasma
$P_{\text{radiation}}$ is the radiation losses as energy leaves as light.
The rate of fusion, and thus Pfusion, depends on the temperature and density of the plasma. The plasma loses energy through conduction and radiation. Conduction occurs when ions, electrons, or neutrals impact other substances, typically a surface of the device, and transfer a portion of their kinetic energy to the other atoms. The rate of conduction is also based on the temperature and density. Radiation is energy that leaves the cloud as light. Radiation also increases with temperature as well as the mass of the ions. Fusion power systems must operate in a region where the rate of fusion is higher than the losses.
Triple product: density, temperature, time
The Lawson criterion argues that a machine holding a thermalized and quasi-neutral plasma has to generate enough energy to overcome its energy losses. The amount of energy released in a given volume is a function of the temperature, and thus the reaction rate on a per-particle basis, the density of particles within that volume, and finally the confinement time, the length of time that energy stays within the volume. This is known as the "triple product": the plasma density, temperature, and confinement time.
In magnetic confinement, the density is low, on the order of a "good vacuum". For instance, in the ITER device the fuel density is about one-millionth of atmospheric density. This means that the temperature and/or confinement time must increase. Fusion-relevant temperatures have been achieved using a variety of heating methods that were developed in the early 1970s. In modern machines, the major remaining issue was the confinement time. Plasmas in strong magnetic fields are subject to a number of inherent instabilities, which must be suppressed to reach useful durations. One way to do this is to simply make the reactor volume larger, which reduces the rate of leakage due to classical diffusion. This is why ITER is so large.
In contrast, inertial confinement systems approach useful triple product values via higher density, and have short confinement intervals. In NIF, the initial frozen hydrogen fuel load has a density less than water that is increased to about 100 times the density of lead. In these conditions, the rate of fusion is so high that the fuel fuses in the microseconds it takes for the heat generated by the reactions to blow the fuel apart. Although NIF is also large, this is a function of its "driver" design, not inherent to the fusion process.
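The two regimes can be compared through the triple product itself (an added sketch; the threshold of roughly 3×10^21 keV·s/m^3 is the commonly quoted textbook requirement for deuterium–tritium fuel, and the machine numbers are assumed, illustrative values):

```python
# Compare assumed magnetic- and inertial-confinement operating points
# against a textbook D-T triple-product threshold.
THRESHOLD = 3.0e21                     # keV * s / m^3 (assumed requirement)

def triple_product(n_per_m3, t_kev, tau_s):
    """Density * temperature * energy-confinement time."""
    return n_per_m3 * t_kev * tau_s

regimes = {
    "magnetic (assumed ITER-like)": triple_product(1.0e20, 12.0, 3.0),
    "inertial (assumed NIF-like)": triple_product(3.0e32, 10.0, 1.0e-11),
}
for name, value in regimes.items():
    print(f"{name}: {value:.1e} keV*s/m^3, above threshold: {value >= THRESHOLD}")
```

Both routes target a similar product, reached with densities and confinement times that differ by more than ten orders of magnitude.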
Energy capture
Multiple approaches have been proposed to capture the energy that fusion produces. The simplest is to heat a fluid. The commonly targeted D-T reaction releases much of its energy as fast-moving neutrons. Electrically neutral, the neutron is unaffected by the confinement scheme. In most designs, it is captured in a thick "blanket" of lithium surrounding the reactor core. When struck by a high-energy neutron, the blanket heats up. It is then actively cooled with a working fluid that drives a turbine to produce power.
Another design proposed to use the neutrons to breed fission fuel in a blanket of nuclear waste, a concept known as a fission-fusion hybrid. In these systems, the power output is enhanced by the fission events, and power is extracted using systems like those in conventional fission reactors.
Designs that use other fuels, notably the proton-boron aneutronic fusion reaction, release much more of their energy in the form of charged particles. In these cases, power extraction systems based on the movement of these charges are possible. Direct energy conversion was developed at Lawrence Livermore National Laboratory (LLNL) in the 1980s as a method to maintain a voltage directly using fusion reaction products. This has demonstrated energy capture efficiency of 48 percent.
Plasma behavior
Plasma is an ionized gas that conducts electricity. In bulk, it is modeled using magnetohydrodynamics, which is a combination of the Navier–Stokes equations governing fluids and Maxwell's equations governing how magnetic and electric fields behave. Fusion exploits several plasma properties, including:
Self-organizing plasma conducts electric and magnetic fields. Its motions generate fields that can in turn contain it.
Diamagnetic plasma can generate its own internal magnetic field. This can reject an externally applied magnetic field, making it diamagnetic.
Magnetic mirrors can reflect plasma when it moves from a low to high density field.
Methods
Magnetic confinement
Tokamak: the most well-developed and well-funded approach. This method drives hot plasma around in a magnetically confined torus, with an internal current. When completed, ITER will become the world's largest tokamak. As of September 2018 an estimated 226 experimental tokamaks were either planned, decommissioned or operating (50 of them operating) worldwide.
Spherical tokamak: also known as spherical torus. A variation on the tokamak with a spherical shape.
Stellarator: Twisted rings of hot plasma. The stellarator attempts to create a natural twisted plasma path, using external magnets. Stellarators were developed by Lyman Spitzer in 1950 and evolved into four designs: Torsatron, Heliotron, Heliac and Helias. One example is Wendelstein 7-X, a German device. It is the world's largest stellarator.
Internal rings: Stellarators create a twisted plasma using external magnets, while tokamaks do so using a current induced in the plasma. Several classes of designs provide this twist using conductors inside the plasma. Early calculations showed that collisions between the plasma and the supports for the conductors would remove energy faster than fusion reactions could replace it. Modern variations, including the Levitated Dipole Experiment (LDX), use a solid superconducting torus that is magnetically levitated inside the reactor chamber.
Magnetic mirror: Developed by Richard F. Post and teams at Lawrence Livermore National Laboratory (LLNL) in the 1960s. Magnetic mirrors reflect plasma back and forth in a line. Variations included the Tandem Mirror, magnetic bottle and the biconic cusp. A series of mirror machines were built by the US government in the 1970s and 1980s, principally at LLNL. However, calculations in the 1970s estimated it was unlikely these would ever be commercially useful.
Bumpy torus: A number of magnetic mirrors are arranged end-to-end in a toroidal ring. Any fuel ions that leak out of one are confined in a neighboring mirror, permitting the plasma pressure to be raised arbitrarily high without loss. An experimental facility, the ELMO Bumpy Torus or EBT was built and tested at Oak Ridge National Laboratory (ORNL) in the 1970s.
Field-reversed configuration: This device traps plasma in a self-organized quasi-stable structure; where the particle motion makes an internal magnetic field which then traps itself.
Spheromak: Similar to a field-reversed configuration, a semi-stable plasma structure made by using the plasmas' self-generated magnetic field. A spheromak has both toroidal and poloidal fields, while a field-reversed configuration has no toroidal field.
Dynomak is a spheromak that is formed and sustained using continuous magnetic flux injection.
Reversed field pinch: Here the plasma moves inside a ring. It has an internal magnetic field. Moving out from the center of this ring, the magnetic field reverses direction.
Inertial confinement
Indirect drive: Lasers heat a structure known as a Hohlraum that becomes so hot it begins to radiate x-ray light. These x-rays heat a fuel pellet, causing it to collapse inward to compress the fuel. The largest system using this method is the National Ignition Facility, followed closely by Laser Mégajoule.
Direct drive: Lasers directly heat the fuel pellet. Notable direct drive experiments have been conducted at the Laboratory for Laser Energetics (LLE) and the GEKKO XII facilities. Good implosions require fuel pellets with close to a perfect shape in order to generate a symmetrical inward shock wave that produces the high-density plasma.
Fast ignition: This method uses two laser blasts. The first blast compresses the fusion fuel, while the second ignites it. This technique has since lost favor for energy production.
Magneto-inertial fusion or Magnetized Liner Inertial Fusion: This combines a laser pulse with a magnetic pinch. The pinch community refers to it as magnetized liner inertial fusion while the ICF community refers to it as magneto-inertial fusion.
Ion Beams: Ion beams replace laser beams to heat the fuel. The main difference is that the beam has momentum due to mass, whereas lasers do not. As of 2019 it appears unlikely that ion beams can be sufficiently focused spatially and in time.
Z-machine: Sends an electric current through thin tungsten wires, heating them sufficiently to generate x-rays. Like the indirect drive approach, these x-rays then compress a fuel capsule.
Magnetic or electric pinches
Z-pinch: A current travels in the z-direction through the plasma. The current generates a magnetic field that compresses the plasma. Pinches were the first method for human-made controlled fusion. The z-pinch has inherent instabilities that limit its compression and heating to values too low for practical fusion. The largest such machine, the UK's ZETA, was the last major experiment of the sort. The problems in z-pinch led to the tokamak design. The dense plasma focus is a possibly superior variation.
Theta-pinch: A current circles around the outside of a plasma column, in the theta direction. This induces a magnetic field running down the center of the plasma, as opposed to around it. The early theta-pinch device Scylla was the first to conclusively demonstrate fusion, but later work demonstrated it had inherent limits that made it uninteresting for power production.
Sheared Flow Stabilized Z-Pinch: Research at the University of Washington under Uri Shumlak investigated the use of sheared-flow stabilization to smooth out the instabilities of Z-pinch reactors. This involves accelerating neutral gas along the axis of the pinch. Experimental machines included the FuZE and Zap Flow Z-Pinch experimental reactors. In 2017, British technology investor and entrepreneur Benj Conway, together with physicists Brian Nelson and Uri Shumlak, co-founded Zap Energy to attempt to commercialize the technology for power production.
Screw Pinch: This method combines a theta and z-pinch for improved stabilization.
Inertial electrostatic confinement
Fusor: An electric field heats ions to fusion conditions. The machine typically uses two spherical cages, a cathode inside the anode, inside a vacuum. These machines are not considered a viable approach to net power because of their high conduction and radiation losses. They are simple enough to build that amateurs have fused atoms using them.
Polywell: Attempts to combine magnetic confinement with electrostatic fields, to avoid the conduction losses generated by the cage.
Other
Magnetized target fusion: Confines hot plasma using a magnetic field and squeezes it using inertia. Examples include LANL FRX-L machine, General Fusion (piston compression with liquid metal liner), HyperJet Fusion (plasma jet compression with plasma liner).
Uncontrolled: Fusion has been initiated using uncontrolled fission explosions to stimulate fusion. Early proposals for fusion power included using bombs to initiate reactions. See Project PACER.
Colliding beam fusion: A beam of high energy particles fired at another beam or target can initiate fusion. This was used in the 1970s and 1980s to study the cross sections of fusion reactions. However beam systems cannot be used for power because keeping a beam coherent takes more energy than comes from fusion.
Muon-catalyzed fusion: This approach replaces electrons in diatomic molecules of isotopes of hydrogen with muons—more massive particles with the same electric charge. Their greater mass pulls the nuclei close enough together that the strong interaction can cause fusion. As of 2007, producing muons required more energy than can be obtained from muon-catalyzed fusion.
Lattice confinement fusion: Lattice confinement fusion (LCF) is a type of nuclear fusion in which deuteron-saturated metals are exposed to gamma radiation or ion beams, such as in an IEC fusor, avoiding the confined high-temperature plasmas used in other methods of fusion.
Common tools
Many approaches, equipment, and mechanisms are employed across multiple projects to address fusion heating, measurement, and power production.
Machine learning
A deep reinforcement learning system has been used to control a tokamak-based reactor. The system manipulated the magnetic coils to manage the plasma, continuously adjusting to maintain the desired behavior (a more complex task than step-based control systems). In 2014, Google began working with California-based fusion company TAE Technologies on using machine learning to predict plasma behavior; similar techniques have been applied to the Joint European Torus (JET). DeepMind has also developed a control scheme with TCV.
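As an illustration of the control-loop structure such systems share, the sketch below reduces the problem to reading diagnostics and mapping them to coil currents on each step. Everything here is hypothetical: PlasmaSimulator is a toy stand-in for a real diagnostic/actuator interface, and the fixed linear "policy" substitutes for a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

class PlasmaSimulator:
    """Hypothetical stand-in for a tokamak diagnostic/actuator interface."""
    def __init__(self, n_sensors: int = 32, n_coils: int = 16):
        self.state = rng.normal(size=n_sensors)
        self.n_coils = n_coils
    def observe(self) -> np.ndarray:
        return self.state
    def apply_coil_currents(self, currents: np.ndarray) -> None:
        # Toy dynamics: the plasma drifts and responds linearly to the coils.
        self.state = 0.95 * self.state + 0.1 * rng.normal(size=self.state.size)
        self.state[: self.n_coils] -= 0.05 * currents

# A "policy" here is just a fixed linear map from sensors to coil currents;
# in a real system this would be the trained network.
W = rng.normal(scale=0.1, size=(16, 32))

sim = PlasmaSimulator()
for step in range(100):                 # one control step per iteration
    obs = sim.observe()                 # read magnetic/diagnostic sensors
    currents = W @ obs                  # policy maps observation -> actuation
    sim.apply_coil_currents(currents)   # update the coil set-points
print("final state norm:", np.linalg.norm(sim.observe()))
```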
Heating
Electrostatic heating: an electric field can do work on charged ions or electrons, heating them.
Neutral beam injection: hydrogen is ionized and accelerated by an electric field to form a charged beam that is shone through a source of neutral hydrogen gas towards the plasma which itself is ionized and contained by a magnetic field. Some of the intermediate hydrogen gas is accelerated towards the plasma by collisions with the charged beam while remaining neutral: this neutral beam is thus unaffected by the magnetic field and so reaches the plasma. Once inside the plasma the neutral beam transmits energy to the plasma by collisions which ionize it and allow it to be contained by the magnetic field, thereby both heating and refueling the reactor in one operation. The remainder of the charged beam is diverted by magnetic fields onto cooled beam dumps.
Radio frequency heating: a radio wave causes the plasma to oscillate (as in a microwave oven). This is also known as electron cyclotron resonance heating, using for example gyrotrons, or dielectric heating (a worked frequency example follows this list).
Magnetic reconnection: when plasma gets dense, its electromagnetic properties can change, which can lead to magnetic reconnection. Reconnection helps fusion because it instantly dumps energy into a plasma, heating it quickly. Up to 45% of the magnetic field energy can heat the ions.
Magnetic oscillations: varying electric currents can be supplied to magnetic coils that heat plasma confined within a magnetic wall.
Antiproton annihilation: antiprotons injected into a mass of fusion fuel can induce thermonuclear reactions. This possibility as a method of spacecraft propulsion, known as antimatter-catalyzed nuclear pulse propulsion, was investigated at Pennsylvania State University in connection with the proposed AIMStar project.
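For electron cyclotron resonance heating, the drive frequency must match the electron gyrofrequency set by the local magnetic field, f = eB / (2π m_e). A quick check of the numbers (standard physical constants; the 5 T field is an arbitrary illustrative value):

```python
import math

E_CHARGE = 1.602e-19    # elementary charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg

def electron_cyclotron_freq_ghz(b_tesla: float) -> float:
    """Electron gyrofrequency in GHz for a given magnetic field strength."""
    return E_CHARGE * b_tesla / (2 * math.pi * M_ELECTRON) / 1e9

# At 5 T the resonance sits near 140 GHz, which is why gyrotrons in this
# band are used for electron cyclotron resonance heating.
print(electron_cyclotron_freq_ghz(5.0))  # ~140 GHz
```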
Measurement
The diagnostics of a fusion scientific reactor are extremely complex and varied. The diagnostics required for a fusion power reactor will be various but less complicated than those of a scientific reactor as by the time of commercialization, many real-time feedback and control diagnostics will have been perfected. However, the operating environment of a commercial fusion reactor will be harsher for diagnostic systems than in a scientific reactor because continuous operations may involve higher plasma temperatures and higher levels of neutron irradiation. In many proposed approaches, commercialization will require the additional ability to measure and separate diverter gases, for example helium and impurities, and to monitor fuel breeding, for instance the state of a tritium breeding liquid lithium liner. The following are some basic techniques.
Flux loop: a loop of wire is inserted into the magnetic field. As the field passes through the loop, a current is induced, which measures the total magnetic flux through that loop (a reconstruction sketch follows this list). This has been used on the National Compact Stellarator Experiment, the polywell, and the LDX machines. A Langmuir probe, a metal object placed in a plasma, can also be employed: a potential is applied to it, giving it a voltage against the surrounding plasma, and the metal collects charged particles, drawing a current. As the voltage changes, the current changes, tracing out an I-V curve. The I-V curve can be used to determine the local plasma density, potential and temperature.
Thomson scattering: light scattered from a plasma can be used to reconstruct plasma behavior, including density and temperature. It is common in inertial confinement fusion, tokamaks, and fusors. In ICF systems, firing a second beam into a gold foil adjacent to the target makes x-rays that traverse the plasma. In tokamaks, this can be done using mirrors and detectors to reflect light.
Neutron detectors: Several types of neutron detectors can record the rate at which neutrons are produced.
X-ray detectors: Visible, IR, UV, and X-rays are emitted whenever a particle changes velocity. If the reason is deflection by a magnetic field, the radiation is cyclotron radiation at low speeds and synchrotron radiation at high speeds. If the reason is deflection by another particle, the plasma radiates X-rays, known as Bremsstrahlung radiation.
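A flux loop is an application of Faraday's law: the loop voltage equals the (negative) rate of change of the enclosed flux, so integrating the measured voltage over time recovers the flux. A minimal sketch with an invented waveform, not data from any real device:

```python
import numpy as np

# Synthetic measurement: voltage induced in the loop, sampled at 1 MHz.
t = np.linspace(0.0, 1e-3, 1001)                  # 1 ms window
true_flux = 2e-3 * np.sin(2 * np.pi * 1e3 * t)    # Wb, made-up waveform
voltage = -np.gradient(true_flux, t)              # Faraday: V = -dPhi/dt

# Reconstruction: integrate the voltage back up to recover the flux.
dt = t[1] - t[0]
flux = -np.cumsum(voltage) * dt
print("max reconstruction error (Wb):", np.abs(flux - true_flux).max())
# The error is small compared with the 2 mWb flux amplitude.
```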
Power production
Neutron blankets absorb neutrons, which heats the blanket. Power can be extracted from the blanket in various ways:
Steam turbines can be driven by heat transferred into a working fluid that turns into steam, driving electric generators.
Neutron blankets: These neutrons can regenerate spent fission fuel. Tritium can be produced using a breeder blanket of liquid lithium or a helium cooled pebble bed made of lithium-bearing ceramic pebbles.
Direct conversion: The kinetic energy of a particle can be converted into voltage. It was first suggested by Richard F. Post in conjunction with magnetic mirrors, in the late 1960s. It has been proposed for Field-Reversed Configurations as well as Dense Plasma Focus devices. The process converts a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. This method has demonstrated an experimental efficiency of 48 percent.
Traveling-wave tubes: charged helium nuclei coming off the fusion reaction at several megavolts pass through a tube with a coil of wire around the outside. The passing charge at high voltage induces a current in the wire, extracting electricity.
Confinement
Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion. General principles:
Equilibrium: The forces acting on the plasma must be balanced. One exception is inertial confinement, where the fusion must occur faster than the dispersal time.
Stability: The plasma must be constructed so that disturbances will not lead to the plasma dispersing.
Transport or conduction: The loss of material must be sufficiently slow. The plasma carries energy off with it, so rapid loss of material will disrupt fusion. Material can be lost by transport into different regions or conduction through a solid or liquid.
To produce self-sustaining fusion, part of the energy released by the reaction must be used to heat new reactants and maintain the conditions for fusion.
Magnetic confinement
Magnetic Mirror
Magnetic mirror effect: if a particle follows a field line and enters a region of higher field strength, it can be reflected. Several devices apply this effect. The most famous were the magnetic mirror machines, a series of devices built at LLNL from the 1960s to the 1980s. Other examples include magnetic bottles and the biconic cusp. Because the mirror machines were straight, they had some advantages over ring-shaped designs: the mirrors were easier to construct and maintain, and direct conversion energy capture was easier to implement. Poor confinement has led this approach to be abandoned, except in the polywell design.
Magnetic loops
Magnetic loops bend the field lines back on themselves, either in circles or more commonly in nested toroidal surfaces. The most highly developed systems of this type are the tokamak, the stellarator, and the reversed field pinch. Compact toroids, especially the field-reversed configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area.
Inertial confinement
Inertial confinement is the use of rapid implosion to heat and confine plasma. A shell surrounding the fuel is imploded using a direct laser blast (direct drive), a secondary x-ray blast (indirect drive), or heavy beams. The fuel must be compressed to about 30 times solid density with energetic beams. Direct drive can in principle be efficient, but insufficient uniformity has prevented success. Indirect drive uses beams to heat a shell, driving the shell to radiate x-rays, which then implode the pellet. The beams are commonly laser beams, but ion and electron beams have been investigated.
Electrostatic confinement
Electrostatic confinement fusion devices use electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage, and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Fusion rates in fusors are low because of competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, a magnetically shielded-grid, a penning trap, the polywell, and the F1 cathode driver concept.
Fuels
The fuels considered for fusion power have all been light elements like the isotopes of hydrogen—protium, deuterium, and tritium. The deuterium and helium-3 reaction requires helium-3, an isotope of helium so scarce on Earth that it would have to be mined extraterrestrially or produced by other nuclear reactions. Ultimately, researchers hope to adopt the protium–boron-11 reaction, because it does not directly produce neutrons, although side reactions can.
Deuterium, tritium
The easiest nuclear reaction, at the lowest energy, is D+T:
D + T → 4He (3.5 MeV) + n (14.1 MeV)
This reaction is common in research, industrial and military applications, usually as a neutron source. Deuterium is a naturally occurring isotope of hydrogen and is commonly available. The large mass ratio of the hydrogen isotopes makes their separation easy compared to the uranium enrichment process. Tritium is a natural isotope of hydrogen, but because it has a short half-life of 12.32 years, it is hard to find, store, and produce, and it is expensive. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions:
n + 6Li → 3H + 4He
n + 7Li → 3H + 4He + n
The reactant neutron is supplied by the D-T fusion reaction shown above, the reaction with the greatest energy yield. The reaction with 6Li is exothermic, providing a small energy gain for the reactor. The reaction with 7Li is endothermic, but does not consume the neutron. Neutron multiplication reactions are required to replace the neutrons lost to absorption by other elements. Leading candidate neutron multiplication materials are beryllium and lead, but the 7Li reaction helps to keep the neutron population high. Natural lithium is mainly 7Li, which has a low tritium production cross section compared to 6Li, so most reactor designs use breeding blankets with enriched 6Li.
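To put these numbers in perspective, the 17.6 MeV released per D-T reaction translates into an enormous specific energy for the fuel. A back-of-envelope sketch using standard constants, assuming complete burn:

```python
AVOGADRO = 6.022e23   # particles per mole
MEV_TO_J = 1.602e-13  # joules per MeV

# One D-T reaction consumes one deuteron (A=2) and one triton (A=3)
# and releases 17.6 MeV (3.5 MeV alpha + 14.1 MeV neutron).
energy_per_reaction_j = 17.6 * MEV_TO_J
fuel_mass_per_reaction_kg = 5.0 / AVOGADRO / 1e3  # ~5 g/mol pair -> kg

specific_energy = energy_per_reaction_j / fuel_mass_per_reaction_kg
print(f"{specific_energy:.2e} J/kg")  # ~3.4e14 J/kg, ~1e7x chemical fuels
```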
Drawbacks commonly attributed to D-T fusion power include:
The supply of neutrons results in neutron activation of the reactor materials.
80% of the resultant energy is carried off by neutrons, which limits the use of direct energy conversion.
It requires the radioisotope tritium. Tritium may leak from reactors. Some estimates suggest that this would represent a substantial environmental radioactivity release.
The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of fission power reactors, posing problems for material design. After a series of D-T tests at JET, the vacuum vessel was sufficiently radioactive that it required remote handling for the year following the tests.
In a production setting, the neutrons would react with lithium in the breeding blanket composed of lithium ceramic pebbles or liquid lithium, yielding tritium. The energy of the neutrons ends up in the lithium, which would then be transferred to drive electrical production. The lithium blanket protects the outer portions of the reactor from the neutron flux. Newer designs, the advanced tokamak in particular, use lithium inside the reactor core as a design element. The plasma interacts directly with the lithium, preventing a problem known as "recycling". The advantage of this design was demonstrated in the Lithium Tokamak Experiment.
Deuterium
Fusing two deuterium nuclei is the second easiest fusion reaction. The reaction has two branches that occur with nearly equal probability:
D + D → 3H (1.01 MeV) + p (3.02 MeV)
D + D → 3He (0.82 MeV) + n (2.45 MeV)
This reaction is also common in research. The optimum energy to initiate this reaction is 15 keV, only slightly higher than that for the D-T reaction. The first branch produces tritium, so that a D-D reactor is not tritium-free, even though it does not require an input of tritium or lithium. Unless the tritons are quickly removed, most of the tritium produced is burned in the reactor, which reduces the handling of tritium, with the disadvantage of producing more, and higher-energy, neutrons. The neutron from the second branch of the D-D reaction has an energy of only 2.45 MeV, while the neutron from the D-T reaction has an energy of 14.1 MeV, resulting in greater isotope production and material damage. When the tritons are removed quickly while allowing the 3He to react, the fuel cycle is called "tritium suppressed fusion". The removed tritium decays to 3He with a 12.32-year half-life. By recycling the 3He decay product into the reactor, the fusion reactor does not require materials resistant to fast neutrons.
Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons would be only about 18%, so that the primary advantage of the D-D fuel cycle is that tritium breeding is not required. Other advantages are independence from lithium resources and a somewhat softer neutron spectrum. The disadvantage of D-D compared to D-T is that the energy confinement time (at a given pressure) must be 30 times longer and the power produced (at a given pressure and volume) is 68 times less.
Assuming complete removal of tritium and 3He recycling, only 6% of the fusion energy is carried by neutrons. The tritium-suppressed D-D fusion requires an energy confinement that is 10 times longer compared to D-T and double the plasma temperature.
Deuterium, helium-3
A second-generation approach to controlled fusion power involves combining helium-3 (3He) and deuterium (2H):
D + 3He → 4He (3.6 MeV) + p (14.7 MeV)
This reaction produces 4He and a high-energy proton. As with the p-11B aneutronic fusion fuel cycle, most of the reaction energy is released as charged particles, reducing activation of the reactor housing and potentially allowing more efficient energy harvesting (via any of several pathways). In practice, D-D side reactions produce a significant number of neutrons, leaving p-11B as the preferred cycle for aneutronic fusion.
Proton, boron-11
Both material science problems and non-proliferation concerns are greatly diminished by aneutronic fusion. Theoretically, the most reactive aneutronic fuel is 3He. However, obtaining reasonable quantities of 3He implies large scale extraterrestrial mining on the Moon or in the atmosphere of Uranus or Saturn. Therefore, the most promising candidate fuel for such fusion is fusing the readily available protium (i.e. a proton) and boron. Their fusion releases no neutrons, but produces energetic charged alpha (helium) particles whose energy can directly be converted to electrical power:
p + 11B → 3 4He (8.7 MeV)
Side reactions are likely to yield neutrons that carry only about 0.1% of the power, which means that neutron scattering is not used for energy transfer and material activation is reduced several thousand-fold. The optimum temperature for this reaction, 123 keV, is nearly ten times higher than that for pure hydrogen reactions, and energy confinement must be 500 times better than that required for the D-T reaction. In addition, the power density is 2500 times lower than for D-T, although per unit mass of fuel, this is still considerably higher compared to fission reactors.
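Plasma temperatures are conventionally quoted in kiloelectronvolts; converting the 123 keV optimum to kelvin via T = E / k_B (a standard identity, using the Boltzmann constant) makes the demand concrete:

```python
BOLTZMANN = 1.381e-23  # J/K
KEV_TO_J = 1.602e-16   # J per keV

def kev_to_kelvin(t_kev: float) -> float:
    """Convert a plasma temperature from keV to kelvin (T = E / k_B)."""
    return t_kev * KEV_TO_J / BOLTZMANN

print(f"{kev_to_kelvin(123):.2e} K")  # ~1.4e9 K, vs ~1.5e7 K in the Sun's core
```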
Because the confinement properties of the tokamak and laser pellet fusion are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts, such as the Polywell and the Dense Plasma Focus. In 2013, a research team led by Christine Labaune at École Polytechnique, reported a new fusion rate record for proton-boron fusion, with an estimated 80 million fusion reactions during a 1.5 nanosecond laser fire, 100 times greater than reported in previous experiments.
Material selection
Structural material stability is a critical issue. Materials that can survive the high temperatures and neutron bombardment experienced in a fusion reactor are considered key to success. The principal issues are the conditions generated by the plasma, neutron degradation of wall surfaces, and the related issue of plasma-wall surface conditions. Reducing hydrogen permeability is seen as crucial to hydrogen recycling and control of the tritium inventory. Materials with the lowest bulk hydrogen solubility and diffusivity provide the optimal candidates for stable barriers. A few pure metals, including tungsten and beryllium, and compounds such as carbides, dense oxides, and nitrides have been investigated. Research has highlighted that coating techniques for preparing well-adhered and perfect barriers are of equivalent importance. The most attractive techniques are those in which an ad-layer is formed by oxidation alone. Alternative methods utilize specific gas environments with strong magnetic and electric fields. Assessing barrier performance represents an additional challenge; gas permeation through classical coated membranes remains the most reliable method of determining hydrogen permeation barrier (HPB) efficiency. In 2021, in response to the increasing number of designs for fusion power reactors for 2040, the United Kingdom Atomic Energy Authority published the UK Fusion Materials Roadmap 2021–2040, focusing on five priority areas, with a focus on tokamak family reactors:
Novel materials to minimize the amount of activation in the structure of the fusion power plant;
Compounds that can be used within the power plant to optimise breeding of tritium fuel to sustain the fusion process;
Magnets and insulators that are resistant to irradiation from fusion reactions—especially under cryogenic conditions;
Structural materials able to retain their strength under neutron bombardment at high operating temperatures (over 550 degrees C);
Engineering assurance for fusion materials—providing irradiated sample data and modelled predictions such that plant designers, operators and regulators have confidence that materials are suitable for use in future commercial power stations.
Superconducting materials
In a plasma that is embedded in a magnetic field (known as a magnetized plasma) the fusion rate scales as the magnetic field strength to the 4th power. For this reason, many fusion companies that rely on magnetic fields to control their plasma are trying to develop high-temperature superconducting devices. In 2021, SuperOx, a Russian and Japanese company, developed a new manufacturing process for making superconducting YBCO wire for fusion reactors. This wire was shown to conduct between 700 and 2,000 amperes per square millimeter. The company was able to produce 186 miles of wire in nine months.
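The quartic scaling follows because fusion power density goes roughly as the square of the plasma pressure, and the attainable pressure at fixed beta scales with the magnetic energy density, which is proportional to B². A quick illustration of what that buys (the field values are illustrative):

```python
def relative_power_density(b_new: float, b_old: float) -> float:
    """Fusion power density ratio at fixed beta, using the ~B^4 scaling."""
    return (b_new / b_old) ** 4

# Doubling the field from 6 T to 12 T, roughly the jump that HTS magnets
# are hoped to enable, raises the attainable power density 16-fold.
print(relative_power_density(12.0, 6.0))  # 16.0
```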
Containment considerations
Even on smaller production scales, the containment apparatus is blasted with matter and energy. Designs for plasma containment must consider:
A heating and cooling cycle, with up to a 10 MW/m² thermal load.
Neutron radiation, which over time leads to neutron activation and embrittlement.
High energy ions leaving at tens to hundreds of electronvolts.
Alpha particles leaving at millions of electronvolts.
Electrons leaving at high energy.
Light radiation (IR, visible, UV, X-ray).
Depending on the approach, these effects may be higher or lower than fission reactors. One estimate put the radiation at 100 times that of a typical pressurized water reactor. Depending on the approach, other considerations such as electrical conductivity, magnetic permeability, and mechanical strength matter. Materials must also not end up as long-lived radioactive waste.
Plasma-wall surface conditions
For long term use, each atom in the wall is expected to be hit by a neutron and displaced about 100 times before the material is replaced. These high-energy neutron collisions with the atoms in the wall result in the absorption of the neutrons, forming unstable isotopes of the atoms. When the isotope decays, it may emit alpha particles, protons, or gamma rays. Alpha particles, once stabilized by capturing electrons, form helium atoms which accumulate at grain boundaries and may result in swelling, blistering, or embrittlement of the material.
Selection of materials
Tungsten is widely regarded as the optimal material for plasma-facing components in next-generation fusion devices due to its unique properties and potential for enhancements. Its low sputtering rates and high melting point make it particularly suitable for the high-stress environments of fusion reactors, allowing it to withstand intense conditions without rapid degradation. Additionally, tungsten's low tritium retention through co-deposition and implantation is essential in fusion contexts, as it helps to minimize the accumulation of this radioactive isotope.
Liquid metals (lithium, gallium, tin) have been proposed, e.g., by injection of 1–5 mm thick streams flowing at 10 m/s on solid substrates.
Graphite features a gross erosion rate due to physical and chemical sputtering amounting to many meters per year, requiring redeposition of the sputtered material. The redeposition site generally does not exactly match the sputter site, allowing net erosion that may be prohibitive. An even larger problem is that tritium is redeposited with the redeposited graphite. The tritium inventory in the wall and dust could build up to many kilograms, representing a waste of resources and a radiological hazard in case of an accident. Graphite found favor as material for short-lived experiments, but appears unlikely to become the primary plasma-facing material (PFM) in a commercial reactor.
Ceramic materials such as silicon carbide (SiC) have issues similar to those of graphite. Tritium retention in silicon carbide plasma-facing components is approximately 1.5–2 times higher than in graphite, resulting in reduced fuel efficiency and heightened safety risks in fusion reactors. SiC tends to trap more tritium, limiting its availability for fusion and increasing the risk of hazardous accumulation, complicating tritium management. Furthermore, the chemical and physical sputtering of SiC remains significant, contributing to tritium buildup through co-deposition over time and with increasing particle fluence. As a result, carbon-based materials have been excluded from ITER, DEMO, and similar devices.
Tungsten's sputtering rate is orders of magnitude smaller than carbon's, and tritium is much less incorporated into redeposited tungsten. However, tungsten plasma impurities are much more damaging than carbon impurities, and self-sputtering can be high, requiring the plasma in contact with the tungsten not be too hot (a few tens of eV rather than hundreds of eV). Tungsten also has issues around eddy currents and melting in off-normal events, as well as some radiological issues.
Recent advances in containment materials have found that certain ceramics can actually improve the longevity of the containment apparatus. Studies on MAX phases, such as titanium silicon carbide, show that under the high operating temperatures of nuclear fusion, the material undergoes a phase transformation from a hexagonal structure to a face-centered-cubic (FCC) structure, driven by helium bubble growth. Helium atoms preferentially accumulate in the Si layer of the hexagonal structure, as the Si atoms are more mobile than the Ti-C slabs. As more atoms are trapped, the Ti-C slab is peeled off, causing the Si atoms to become highly mobile interstitial atoms in the new FCC structure. Lattice strain induced by the He bubbles causes Si atoms to diffuse out of compressive areas, typically towards the surface of the material, forming a protective silicon dioxide layer.
Doping vessel materials with iron silicate has emerged as a promising approach to enhance containment materials in fusion reactors, as well. This method targets helium embrittlement at grain boundaries, a common issue that arises as helium atoms accumulate and form bubbles. Over time, these bubbles coalesce at grain boundaries, causing them to expand and degrade the material's structural integrity. By contrast, introducing iron silicate creates nucleation sites within the metal matrix that are more thermodynamically favorable for helium aggregation. This localized congregation around iron silicate nanoparticles induces matrix strain rather than weakening grain boundaries, preserving the material’s strength and longevity.
Safety and the environment
Accident potential
Accident potential and effects on the environment are critical to social acceptance of nuclear fusion, also known as a social license. Fusion reactors are not subject to catastrophic meltdown: producing net energy requires precise and controlled temperature, pressure and magnetic field parameters, and any damage or loss of required control would rapidly quench the reaction. Fusion reactors operate with seconds or even microseconds worth of fuel at any moment. Without active refueling, the reactions immediately quench.
The same constraints prevent runaway reactions. Although the plasma is expected to have a large volume, it typically contains only a few grams of fuel. By comparison, a fission reactor is typically loaded with enough fuel for months or years, and no additional fuel is necessary to continue the reaction. This large fuel supply is what makes a meltdown possible.
In magnetic containment, strong fields develop in coils that are mechanically held in place by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to other industrial accidents or an MRI machine quench/explosion, and could be effectively contained within a containment building similar to those used in fission reactors.
In laser-driven inertial containment the larger size of the reaction chamber reduces the stress on materials. Although failure of the reaction chamber is possible, stopping fuel delivery prevents catastrophic failure.
Most reactor designs rely on liquid hydrogen as a coolant and to convert stray neutrons into tritium, which is fed back into the reactor as fuel. Hydrogen is flammable, and it is possible that hydrogen stored on-site could ignite. In this case, the tritium fraction of the hydrogen would enter the atmosphere, posing a radiation risk. Calculations suggest that the total amount of tritium and other radioactive gases present in a typical power station would be small enough to dilute to legally acceptable limits by the time it reached the station's perimeter fence.
The likelihood of small industrial accidents, including the local release of radioactivity and injury to staff, are estimated to be minor compared to fission. They would include accidental releases of lithium or tritium or mishandling of radioactive reactor components.
Magnet quench
A magnet quench is an abnormal termination of magnet operation that occurs when part of the superconducting coil exits the superconducting state (becomes normal). This can occur because the field inside the magnet is too large, the rate of change of field is too large (causing eddy currents and resultant heating in the copper support matrix), or a combination of the two.
More rarely a magnet defect can cause a quench. When this happens, that particular spot is subject to rapid Joule heating from the current, which raises the temperature of the surrounding regions. This pushes those regions into the normal state as well, which leads to more heating in a chain reaction. The entire magnet rapidly becomes normal over several seconds, depending on the size of the superconducting coil. This is accompanied by a loud bang as the energy in the magnetic field is converted to heat, and the cryogenic fluid boils away. The abrupt decrease of current can result in kilovolt inductive voltage spikes and arcing. Permanent damage to the magnet is rare, but components can be damaged by localized heating, high voltages, or large mechanical forces.
In practice, magnets usually have safety devices to stop or limit the current when a quench is detected. If a large magnet undergoes a quench, the inert vapor formed by the evaporating cryogenic fluid can present a significant asphyxiation hazard to operators by displacing breathable air.
A large section of the superconducting magnets in CERN's Large Hadron Collider unexpectedly quenched during start-up operations in 2008, destroying multiple magnets. In order to prevent a recurrence, the LHC's superconducting magnets are equipped with fast-ramping heaters that are activated when a quench event is detected. The dipole bending magnets are connected in series. Each power circuit includes 154 individual magnets, and should a quench event occur, the entire combined stored energy of these magnets must be dumped at once. This energy is transferred into massive blocks of metal that heat up to several hundred degrees Celsius—because of resistive heating—in seconds. A magnet quench is a "fairly routine event" during the operation of a particle accelerator.
Effluents
The natural product of the fusion reaction is a small amount of helium, which is harmless to life. Hazardous tritium is difficult to retain completely.
Although tritium is volatile and biologically active, the health risk posed by a release is much lower than that of most radioactive contaminants, because of tritium's short half-life (12.32 years) and very low decay energy (~14.95 keV), and because it does not bioaccumulate (it cycles out of the body as water, with a biological half-life of 7 to 14 days). ITER incorporates total containment facilities for tritium.
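The radiological decay can be made concrete: with a 12.32-year half-life, the fraction of an initial inventory remaining after t years is 0.5^(t/12.32). A short check:

```python
def tritium_fraction_remaining(years: float, half_life: float = 12.32) -> float:
    """Fraction of an initial tritium inventory left after the given time."""
    return 0.5 ** (years / half_life)

for t in (12.32, 50, 100):
    print(f"after {t:>6} years: {tritium_fraction_remaining(t):.4f}")
# After ~100 years, less than 0.4% of a release remains.
```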
Radioactive waste
Fusion reactors create far less radioactive material than fission reactors. Further, the material they create is less damaging biologically, and the radioactivity dissipates within a time period that is well within existing engineering capabilities for safe long-term waste storage. In specific terms, except in the case of aneutronic fusion, the neutron flux turns the structural materials radioactive. The amount of radioactive material at shut-down may be comparable to that of a fission reactor, with important differences. The half-lives of fusion and neutron-activation radioisotopes tend to be less than those from fission, so that the hazard decreases more rapidly. Whereas fission reactors produce waste that remains radioactive for thousands of years, the radioactive material in a fusion reactor (other than tritium) would be the reactor core itself, and most of this would be radioactive for about 50 years, with other low-level waste being radioactive for another 100 years or so thereafter. The fusion waste's short half-life eliminates the challenge of long-term storage. By 500 years, the material would have the same radiotoxicity as coal ash.
Nonetheless, classification as intermediate level waste rather than low-level waste may complicate safety discussions.
The choice of materials is less constrained than in conventional fission, where many materials are required for their specific neutron cross-sections. Fusion reactors can be designed using "low activation", materials that do not easily become radioactive. Vanadium, for example, becomes much less radioactive than stainless steel. Carbon fiber materials are also low-activation, are strong and light, and are promising for laser-inertial reactors where a magnetic field is not required.
Nuclear proliferation
In some scenarios, fusion power technology could be adapted to produce materials for military purposes. A huge amount of tritium could be produced by a fusion power station; tritium is used in the trigger of hydrogen bombs and in modern boosted fission weapons, but it can be produced in other ways. The energetic neutrons from a fusion reactor could be used to breed weapons-grade plutonium or uranium for an atomic bomb (for example by transmutation of 238U to 239Pu, or of 232Th to 233U).
A study conducted in 2011 assessed three scenarios:
Small-scale fusion station: As a result of much higher power consumption, heat dissipation and a more recognizable design compared to enrichment gas centrifuges, this choice would be much easier to detect and therefore implausible.
Commercial facility: The production potential is significant. But no fertile or fissile substances necessary for the production of weapon-usable materials need to be present in a civil fusion system at all. If not shielded, these materials can be detected by their characteristic gamma radiation. The underlying redesign could be detected by regular design information verification. In the (technically more feasible) case of solid breeder blanket modules, incoming components would need to be inspected for the presence of fertile material, otherwise plutonium for several weapons could be produced each year.
Prioritizing weapon-grade material regardless of secrecy: The fastest way to produce weapon-usable material was seen in modifying a civil fusion power station. No weapons-compatible material is required during civil use. Even without the need for covert action, such a modification would take about two months to start production and at least an additional week to generate a significant amount. This was considered to be enough time to detect a military use and to react with diplomatic or military means. To stop the production, a military destruction of parts of the facility while leaving out the reactor would be sufficient.
Another study concluded "...large fusion reactors—even if not designed for fissile material breeding—could easily produce several hundred kg Pu per year with high weapon quality and very low source material requirements." It was emphasized that the implementation of features for intrinsic proliferation resistance might only be possible at an early phase of research and development. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with magnetic confinement fusion.
Fuel reserves
Fusion power commonly proposes the use of deuterium as fuel and many current designs also use lithium. Assuming a fusion energy output equal to the 1995 global power output of about 100 EJ/yr (= 1 × 10²⁰ J/yr) and that this does not increase in the future, which is unlikely, then known current lithium reserves would last 3000 years. Lithium from sea water would last 60 million years, however, and a more complicated fusion process using only deuterium would have fuel for 150 billion years. To put this in context, 150 billion years is close to 30 times the remaining lifespan of the Sun, and more than 10 times the estimated age of the universe.
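The reserve arithmetic here reduces to dividing the energy extractable from a reserve by annual demand. A sketch using the demand figure from the text; the reserve energy content below is an assumed round number chosen for illustration, not a sourced estimate:

```python
ANNUAL_DEMAND_J = 1e20  # ~100 EJ/yr, the 1995 figure used above

def years_of_supply(reserve_energy_j: float,
                    annual_demand_j: float = ANNUAL_DEMAND_J) -> float:
    """Years a fuel reserve lasts at a constant annual energy demand."""
    return reserve_energy_j / annual_demand_j

# Hypothetical example: a reserve holding 3e23 J of extractable fusion
# energy would last 3,000 years at 1995 consumption, matching the
# land-based lithium figure quoted above.
print(years_of_supply(3e23))  # 3000.0
```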
Economics
The EU spent heavily on fusion research through the 1990s. ITER represents an investment of over twenty billion dollars, and possibly tens of billions more, including in-kind contributions. Under the European Union's Sixth Framework Programme, nuclear fusion research received funding (in addition to ITER funding) comparable to that for all sustainable energy research combined, putting research into fusion power well ahead of that of any single rival technology. The United States Department of Energy has allocated $US367M–$US671M every year since 2010, peaking in 2020, with plans to reduce investment to $US425M in its FY2021 Budget Request. About a quarter of this budget is directed to support ITER.
The size of the investments and time lines meant that fusion research was traditionally almost exclusively publicly funded. However, starting in the 2010s, the promise of commercializing a paradigm-changing low-carbon energy source began to attract a raft of companies and investors. Over two dozen start-up companies attracted over one billion dollars from roughly 2000 to 2020, mainly from 2015, and a further three billion in funding and milestone related commitments in 2021, with investors including Jeff Bezos, Peter Thiel and Bill Gates, as well as institutional investors including Legal & General, and energy companies including Equinor, Eni, Chevron, and the Chinese ENN Group. In 2021, Commonwealth Fusion Systems (CFS) obtained $1.8 billion in scale-up funding, and Helion Energy obtained a half-billion dollars with an additional $1.7 billion contingent on meeting milestones.
Scenarios developed in the 2000s and early 2010s discussed the effects of the commercialization of fusion power on the future of human civilization. Using nuclear fission as a guide, these saw ITER and later DEMO as bringing online the first commercial reactors around 2050 and a rapid expansion after mid-century. Some scenarios emphasized "fusion nuclear science facilities" as a step beyond ITER. However, the economic obstacles to tokamak-based fusion power remain immense, requiring investment to fund prototype tokamak reactors and development of new supply chains, a problem which will affect any kind of fusion reactor. Tokamak designs appear to be labour-intensive, while the commercialization risk of alternatives like inertial fusion energy is high due to the lack of government resources.
Scenarios since 2010 note computing and material science advances enabling multi-phase national or cost-sharing "Fusion Pilot Plants" (FPPs) along various technology pathways, such as the UK Spherical Tokamak for Energy Production, within the 2030–2040 time frame. Notably, in June 2021, General Fusion announced it would accept the UK government's offer to host the world's first substantial public-private partnership fusion demonstration plant, at Culham Centre for Fusion Energy. The plant will be constructed from 2022 to 2025 and is intended to lead the way for commercial pilot plants in the late 2020s. The plant will be 70% of full scale and is expected to attain a stable plasma of 150 million degrees. In the United States, cost-sharing public-private partnership FPPs appear likely, and in 2022 the DOE announced a new Milestone-Based Fusion Development Program as the centerpiece of its Bold Decadal Vision for Commercial Fusion Energy, which envisages private sector-led teams delivering FPP pre-conceptual designs, defining technology roadmaps, and pursuing the R&D necessary to resolve critical-path scientific and technical issues towards an FPP design. Compact reactor technology based on such demonstration plants may enable commercialization via a fleet approach from the 2030s if early markets can be located.
The widespread adoption of non-nuclear renewable energy has transformed the energy landscape. Such renewables are projected to supply 74% of global energy by 2050. The steady fall of renewable energy prices challenges the economic competitiveness of fusion power.
Some economists suggest fusion power is unlikely to match other renewable energy costs. Fusion plants are expected to face large start up and capital costs. Moreover, operation and maintenance are likely to be costly. While the costs of the China Fusion Engineering Test Reactor are not well known, an EU DEMO fusion concept was projected to feature a levelized cost of energy (LCOE) of $121/MWh.
Fuel costs are low, but economists suggest that the energy cost for a one-gigawatt plant would increase by $16.5 per MWh for every $1 billion increase in the capital investment in construction. There is also the risk that easily obtained lithium will be used up making batteries. Obtaining it from seawater would be very costly and might require more energy than the energy that would be generated.
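The quoted sensitivity — roughly $16.5/MWh per extra $1 billion of capital for a one-gigawatt plant — can be reproduced approximately by annualizing the capital over the plant's output. The discount rate, lifetime, and capacity factor below are illustrative assumptions, not figures from the cited study:

```python
def lcoe_increment_per_billion(rate: float = 0.10, lifetime_yr: int = 25,
                               capacity_gw: float = 1.0,
                               capacity_factor: float = 0.85) -> float:
    """Approximate $/MWh added per $1B of capital, spreading the cost
    over annual generation via a capital recovery factor (assumed inputs)."""
    crf = rate * (1 + rate) ** lifetime_yr / ((1 + rate) ** lifetime_yr - 1)
    annual_mwh = capacity_gw * 1e3 * 8760 * capacity_factor  # MW * hours
    return 1e9 * crf / annual_mwh

print(f"${lcoe_increment_per_billion():.1f}/MWh")  # ~$14.8/MWh, same ballpark
```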
In contrast, renewable levelized cost of energy estimates are substantially lower. For instance, the 2019 levelized cost of energy of solar energy was estimated at $40–$46/MWh, onshore wind at $29–$56/MWh, and offshore wind at approximately $92/MWh.
However, fusion power may still have a role filling energy gaps left by renewables, depending on how administration priorities for energy and environmental justice influence the market. In the 2020s, socioeconomic studies of fusion that began to consider these factors emerged, and in 2022 EUROFusion launched its Socio-Economic Studies and Prospective Research and Development strands to investigate how such factors might affect commercialization pathways and timetables. Similarly, in April 2023 Japan announced a national strategy to industrialise fusion. Thus, fusion power may work in tandem with other renewable energy sources rather than becoming the primary energy source. In some applications, fusion power could provide the base load, especially if including integrated thermal storage and cogeneration and considering the potential for retrofitting coal plants.
Regulation
As fusion pilot plants move within reach, legal and regulatory issues must be addressed. In September 2020, the United States National Academy of Sciences consulted with private fusion companies to consider a national pilot plant. The following month, the United States Department of Energy, the Nuclear Regulatory Commission (NRC) and the Fusion Industry Association co-hosted a public forum to begin the process. In November 2020, the International Atomic Energy Agency (IAEA) began working with various nations to create safety standards such as dose regulations and radioactive waste handling. In January and March 2021, NRC hosted two public meetings on regulatory frameworks. A public-private cost-sharing approach was endorsed in the 27 December H.R.133 Consolidated Appropriations Act, 2021, which authorized $325 million over five years for a partnership program to build fusion demonstration facilities, with a 100% match from private industry.
Subsequently, the UK Regulatory Horizons Council published a report calling for a fusion regulatory framework by early 2022 in order to position the UK as a global leader in commercializing fusion power. This call was met by the UK government publishing in October 2021 both its Fusion Green Paper and its Fusion Strategy, to regulate and commercialize fusion, respectively. Then, in April 2023, in a decision likely to influence other nuclear regulators, the NRC announced in a unanimous vote that fusion energy would be regulated not as fission but under the same regulatory regime as particle accelerators.
Then, in October 2023, the UK government, in enacting the Energy Act 2023, made the UK the first country to legislate for fusion separately from fission, to support planning and investment, including the UK's planned prototype fusion power plant for 2040, STEP; the UK is working with Canada and Japan in this regard. Meanwhile, in February 2024 the US House of Representatives passed the Atomic Energy Advancement Act, which includes the Fusion Energy Act, establishing a regulatory framework for fusion energy systems.
Geopolitics
Given the potential of fusion to transform the world's energy industry and mitigate climate change, fusion science has traditionally been seen as an integral part of peace-building science diplomacy. However, technological developments and private sector involvement have raised concerns over intellectual property, regulatory administration, global leadership, equity, and potential weaponization. These challenge ITER's peace-building role and led to calls for a global commission. A significant contribution by fusion power to mitigating climate change by 2050 seems unlikely without substantial breakthroughs and a space race mentality emerging, but a contribution by 2100 appears possible, with the extent depending on the type and particularly the cost of technology pathways.
Developments from late 2020 onwards have led to talk of a "new space race" with multiple entrants, pitting the US against China and the UK's STEP FPP, with China now outspending the US and threatening to leapfrog US technology. On 24 September 2020, the United States House of Representatives approved a research and commercialization program. The Fusion Energy Research section incorporated a milestone-based, cost-sharing, public-private partnership program modeled on NASA's COTS program, which launched the commercial space industry. In February 2021, the National Academies published Bringing Fusion to the U.S. Grid, recommending a market-driven, cost-sharing plant for 2035–2040, and the launch of the Congressional Bipartisan Fusion Caucus followed.
In December 2020, an independent expert panel reviewed EUROfusion's design and R&D work on DEMO, and EUROfusion confirmed it was proceeding with its Roadmap to Fusion Energy, beginning the conceptual design of DEMO in partnership with the European fusion community, suggesting an EU-backed machine had entered the race.
In October 2023, the UK-oriented Agile Nations group announced a fusion working group. One month later, the UK and the US announced a bilateral partnership to accelerate fusion energy. In December 2023, at COP28, the US announced a global strategy to commercialize fusion energy. In April 2024, Japan and the US announced a similar partnership, and in May of the same year the G7 announced a G7 Working Group on Fusion Energy to promote international collaborations to accelerate the development of commercial energy, promote R&D between countries, and rationalize fusion regulation. Later the same year, the US partnered with the IAEA to launch the Fusion Energy Solutions Taskforce, to collaboratively crowdsource ideas to accelerate commercial fusion energy, in line with the US COP28 statement.
Specifically to resolve the tritium supply problem, in February 2024, the UK (UKAEA) and Canada (Canadian Nuclear Laboratories) announced an agreement by which Canada could refurbish its Candu deuterium-uranium tritium-generating heavywater nuclear plants and even build new ones, guaranteeing a supply of tritium into the 2070s, while the UKAEA would test breeder materials and simulate how tritium could be captured, purified, and injected back into the fusion reaction.
In 2024, both South Korea and Japan announced major initiatives to accelerate their national fusion strategies by building electricity-generating public-private fusion plants, with South Korea aiming to begin operations in the 2040s and Japan in the 2030s.
Advantages
Fusion power promises to provide more energy for a given weight of fuel than any fuel-consuming energy source currently in use. The fuel (primarily deuterium) exists abundantly in the ocean: about 1 in 6500 hydrogen atoms in seawater is deuterium. Although this is only about 0.015%, seawater is plentiful and easy to access, implying that fusion could supply the world's energy needs for millions of years.
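The abundance figure translates into a striking energy density for ordinary water. A back-of-envelope sketch assuming complete burn of the full D-D chain at roughly 7.2 MeV per deuteron (a textbook figure for the combined branches, assumed here for illustration):

```python
AVOGADRO = 6.022e23   # particles per mole
MEV_TO_J = 1.602e-13  # joules per MeV

def deuterium_energy_per_litre(d_fraction: float = 1 / 6500,
                               mev_per_deuteron: float = 7.2) -> float:
    """Fusion energy (J) locked in the deuterium of one litre of water."""
    h_atoms = 2 * (1000 / 18.0) * AVOGADRO  # hydrogen atoms per litre of water
    deuterons = h_atoms * d_fraction
    return deuterons * mev_per_deuteron * MEV_TO_J

print(f"{deuterium_energy_per_litre():.2e} J")  # ~1.2e10 J, ~350 L of gasoline
```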
First generation fusion plants are expected to use the deuterium-tritium fuel cycle. This will require the use of lithium for breeding of the tritium. It is not known for how long global lithium supplies will suffice to supply this need as well as those of the battery and metallurgical industries. It is expected that second generation plants will move on to the more formidable deuterium-deuterium reaction. The deuterium-helium-3 reaction is also of interest, but the light helium isotope is practically non-existent on Earth. It is thought to exist in useful quantities in the lunar regolith, and is abundant in the atmospheres of the gas giant planets.
Fusion power could be used for so-called "deep space" propulsion within the solar system and for interstellar space exploration where solar energy is not available, including via antimatter-fusion hybrid drives.
Helium production
Deuterium–tritium fusion produces helium as a by-product.
Disadvantages
Fusion power has a number of disadvantages. Because 80 percent of the energy in any reactor fueled by deuterium and tritium appears in the form of neutron streams, such reactors share many of the drawbacks of fission reactors. This includes the production of large quantities of radioactive waste and serious radiation damage to reactor components. Additionally, naturally occurring tritium is extremely rare. While the hope is that fusion reactors can breed their own tritium, tritium self-sufficiency is extremely challenging, not least because tritium is difficult to contain (tritium has leaked from 48 of 65 nuclear sites in the US). In any case the reserve and start-up tritium inventory requirements are likely to be unacceptably large.
If reactors can be made to operate using only deuterium fuel, then the tritium replenishment issue is eliminated and neutron radiation damage may be reduced. However, the probabilities of deuterium-deuterium reactions are about 20 times lower than for deuterium-tritium. Additionally, the temperature needed is about 3 times higher than for deuterium-tritium (see cross section). The higher temperatures and lower reaction rates thus significantly complicate the engineering challenges. In any case, other drawbacks remain, for instance reactors requiring only deuterium fueling will have greatly enhanced nuclear weapons proliferation potential.
History
Early experiments
The first machine to achieve controlled thermonuclear fusion was a pinch machine at Los Alamos National Laboratory called Scylla I, at the start of 1958. The team that achieved it was led by a British scientist named James Tuck and included a young Marshall Rosenbluth. Tuck had been involved in the Manhattan Project, but had switched to working on fusion in the early 1950s. He applied for funding for the project as part of a White House-sponsored contest to develop a fusion reactor, alongside Lyman Spitzer. The previous year, 1957, the British had claimed that they had achieved thermonuclear fusion reactions on the ZETA pinch machine. However, it turned out that the neutrons they had detected were from beam-target interactions, not fusion, and they withdrew the claim.
Scylla I was a classified machine at the time, so the achievement was hidden from the public. A traditional Z-pinch passes a current down the center of a plasma, producing a magnetic force around the outside that squeezes the plasma to fusion conditions. Scylla I was a θ-pinch, which passed a current around the outside of a deuterium-filled cylinder to create a magnetic field along its axis that compressed the plasma. After the success of Scylla I, Los Alamos went on to build multiple pinch machines over the next few years.
Spitzer continued his stellarator research at Princeton. While fusion did not immediately transpire, the effort led to the creation of the Princeton Plasma Physics Laboratory.
First tokamak
In the early 1950s, Soviet physicists I.E. Tamm and A.D. Sakharov developed the concept of the tokamak, combining a low-power pinch device with a low-power stellarator.
A.D. Sakharov's group constructed the first tokamaks, achieving the first quasistationary fusion reaction.
Over time, the "advanced tokamak" concept emerged, which included non-circular plasma, internal diverters and limiters, superconducting magnets, operation in the "H-mode" island of increased stability, and the compact tokamak, with the magnets on the inside of the vacuum chamber.
First inertial confinement experiments
Laser fusion was suggested in 1962 by scientists at Lawrence Livermore National Laboratory (LLNL), shortly after the invention of the laser in 1960. Inertial confinement fusion experiments using lasers began as early as 1965. Several laser systems were built at LLNL, including the Argus, the Cyclops, the Janus, the Long Path, the Shiva laser, and the Nova.
Laser advances included frequency-tripling crystals that transformed infrared laser beams into ultraviolet beams and "chirping", which changed a single wavelength into a full spectrum that could be amplified and then reconstituted into one frequency. Laser research cost over one billion dollars in the 1980s.
1980s
The Tore Supra, JET, T-15, and JT-60 tokamaks were built in the 1980s. In 1984, Martin Peng of ORNL proposed the spherical tokamak with a much smaller radius. It used a single large conductor in the center, with magnets as half-rings off this conductor. The aspect ratio fell to as low as 1.2. Peng's advocacy caught the interest of Derek Robinson, who built the Small Tight Aspect Ratio Tokamak (START).
1990s
In 1991, the Preliminary Tritium Experiment at the Joint European Torus achieved the world's first controlled release of fusion power.
In 1996, Tore Supra created a plasma for two minutes with a current of almost 1 million amperes, totaling 280 MJ of injected and extracted energy.
In 1997, JET produced a peak of 16.1 MW of fusion power (65% of heat to plasma), with fusion power of over 10 MW sustained for over 0.5 sec.
2000s
"Fast ignition" saved power and moved ICF into the race for energy production.
In 2006, China's Experimental Advanced Superconducting Tokamak (EAST) test reactor was completed. It was the first tokamak to use superconducting magnets to generate both toroidal and poloidal fields.
In March 2009, the laser-driven ICF NIF became operational.
In the 2000s, privately backed fusion companies entered the race, including TAE Technologies, General Fusion, and Tokamak Energy.
2010s
Private and public research accelerated in the 2010s. General Fusion developed plasma injector technology and Tri Alpha Energy tested its C-2U device. The French Laser Mégajoule began operation. In 2013, NIF reported achieving net energy gain, defined in the very limited sense of comparing the energy released to the energy absorbed by the hot spot at the core of the collapsed target, rather than by the whole target.
In 2014, Phoenix Nuclear Labs sold a high-yield neutron generator that could sustain 5×10¹¹ deuterium fusion reactions per second over a 24-hour period.
In 2015, MIT announced a tokamak it named the ARC fusion reactor, using rare-earth barium-copper oxide (REBCO) superconducting tapes to produce high-magnetic field coils that it claimed could produce comparable magnetic field strength in a smaller configuration than other designs.
In October, researchers at the Max Planck Institute of Plasma Physics in Greifswald, Germany, completed building the largest stellarator to date, the Wendelstein 7-X (W7-X). The W7-X stellarator began Operational phase 1 (OP1.1) on 10 December 2015, successfully producing helium plasma. The objective was to test vital systems and understand the machine's physics. By February 2016, hydrogen plasma was achieved, with temperatures reaching up to 100 million Kelvin. The initial tests used five graphite limiters. After over 2,000 pulses and achieving significant milestones, OP1.1 concluded on 10 March 2016. An upgrade followed, and OP1.2 in 2017 aimed to test an uncooled divertor. By June 2018, record temperatures were reached. W7-X concluded its first campaigns with limiter and island divertor tests, achieving notable advancements by the end of 2018. It soon produced helium and hydrogen plasmas lasting up to 30 minutes.
In 2017, Helion Energy's fifth-generation plasma machine went into operation. The UK's Tokamak Energy's ST40 generated "first plasma". The next year, Eni announced a $50 million investment in Commonwealth Fusion Systems, to attempt to commercialize MIT's ARC technology.
2020s
In January 2021, SuperOx announced the commercialization of a new superconducting wire with more than 700 A/mm² current capability.
TAE Technologies announced results for its Norman device, holding a temperature of about 60 MK for 30 milliseconds, 8 and 10 times higher, respectively, than the company's previous devices.
In October, Oxford-based First Light Fusion revealed its projectile fusion project, which fires an aluminum disc at a fusion target, accelerated by a 9 mega-amp electrical pulse to hypervelocity speeds. The resulting fusion generates neutrons whose energy is captured as heat.
On November 8, in an invited talk to the 63rd Annual Meeting of the APS Division of Plasma Physics, the National Ignition Facility claimed to have triggered fusion ignition in the laboratory on August 8, 2021, for the first time in the 60+ year history of the ICF program. The shot yielded 1.3 MJ of fusion energy, an over 8× improvement on tests done in spring of 2021. NIF estimates that 230 kJ of energy reached the fuel capsule, which resulted in an almost 6-fold energy output from the capsule. A researcher from Imperial College London stated that the majority of the field agreed that ignition had been demonstrated.
In November 2021, Helion Energy reported receiving $500 million in Series E funding for its seventh-generation Polaris device, designed to demonstrate net electricity production, with an additional $1.7 billion of commitments tied to specific milestones, while Commonwealth Fusion Systems raised an additional $1.8 billion in Series B funding to construct and operate its SPARC tokamak, the single largest investment in any private fusion company.
In April 2022, First Light announced that their hypersonic projectile fusion prototype had produced neutrons compatible with fusion. Their technique electromagnetically fires projectiles at Mach 19 at a caged fuel pellet. The deuterium fuel is compressed at Mach 204, reaching pressure levels of 100 TPa.
On December 13, 2022, the US Department of Energy reported that researchers at the National Ignition Facility had achieved a net energy gain from a fusion reaction. The reaction of hydrogen fuel at the facility produced about 3.15 MJ of energy while consuming 2.05 MJ of input. However, while the fusion reactions may have produced more than 3 megajoules of energy—more than was delivered to the target—NIF's 192 lasers consumed 322 MJ of grid energy in the conversion process.
In May 2023, the United States Department of Energy (DOE) provided a grant of $46 million to eight companies across seven states to support fusion power plant design and research efforts. This funding, under the Milestone-Based Fusion Development Program, aligns with objectives to demonstrate pilot-scale fusion within a decade and to develop fusion as a carbon-neutral energy source by 2050. The granted companies are tasked with addressing the scientific and technical challenges to create viable fusion pilot plant designs in the next 5–10 years. The recipient firms include Commonwealth Fusion Systems, Focused Energy Inc., Princeton Stellarators Inc., Realta Fusion Inc., Tokamak Energy Inc., Type One Energy Group, Xcimer Energy Inc., and Zap Energy Inc.
In December 2023, the largest and most advanced tokamak JT-60SA was inaugurated in Naka, Japan. The reactor is a joint project between Japan and the European Union. The reactor had achieved its first plasma in October 2023. Subsequently, South Korea's fusion reactor project, the Korean Superconducting Tokamak Advanced Research, successfully operated for 102 seconds in a high-containment mode (H-mode) containing high ion temperatures of more than 100 million degrees in plasma tests conducted from December 2023 to February 2024.
In January 2025, the EAST fusion reactor in China was reported to have maintained a steady-state high-confinement plasma operation for 1066 seconds.
Future development
Claims of commercially viable fusion power being relatively imminent have often attracted ridicule within the scientific community. A common joke is that human-engineered fusion has always been promised as 30 years away since the concept was first discussed, or that it has been "20 years away for 50 years".
In 2024, Commonwealth Fusion Systems announced plans to build the world's first grid-scale commercial nuclear fusion power plant at the James River Industrial Center in Chesterfield County, Virginia, which is part of the Greater Richmond Region; the plant is designed to produce about 400 MW of electric power, and is intended to come online in the early 2030s.
Records
Fusion records continue to advance in measures such as plasma temperature, confinement time, and fusion energy gain.
See also
COLEX process, for production of Li-6
Fusion ignition
High beta fusion reactor
Inertial electrostatic confinement
Levitated dipole
List of fusion experiments
Magnetic mirror
Starship
References
Bibliography
Nuttall, William J., Konishi, Satoshi, Takeda, Shutaro, and Webbe-Wood, David (2020). Commercialising Fusion Energy: How Small Businesses are Transforming Big Science. IOP Publishing.
Further reading
Oreskes, Naomi, "Fusion's False Promise: Despite a recent advance, nuclear fusion is not the solution to the climate crisis", Scientific American, vol. 328, no. 6 (June 2023), p. 86.
External links
Fusion Device Information System
Fusion Energy Base
Fusion Industry Association
Princeton Satellite Systems News
U.S. Fusion Energy Science Program
Sustainable energy | Fusion power | [
"Physics",
"Chemistry"
] | 16,024 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
55,184 | https://en.wikipedia.org/wiki/Hop%20%28telecommunications%29 | In telecommunications, a hop is a portion of a signal's journey from source to receiver. Examples include:
The excursion of a radio wave from the Earth to the ionosphere and back to the Earth. The number of hops indicates the number of reflections from the ionosphere.
A similar excursion from an earth station to a communications satellite to another station, counted similarly except that if the return trip is not by satellite, then it is only a half hop.
In computer networks, a hop is the step from one network segment to the next.
References
Telecommunications engineering
Radio frequency propagation | Hop (telecommunications) | [
"Physics",
"Engineering"
] | 117 | [
"Physical phenomena",
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves",
"Electrical engineering"
] |
55,212 | https://en.wikipedia.org/wiki/Newton%27s%20laws%20of%20motion | Newton's laws of motion are three physical laws that describe the relationship between the motion of an object and the forces acting on it. These laws, which provide the basis for Newtonian mechanics, can be paraphrased as follows:
A body remains at rest, or in motion at a constant speed in a straight line, except insofar as it is acted upon by a force.
At any instant of time, the net force on a body is equal to the body's acceleration multiplied by its mass or, equivalently, the rate at which the body's momentum is changing with time.
If two bodies exert forces on each other, these forces have the same magnitude but opposite directions.
The three laws of motion were first stated by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), originally published in 1687. Newton used them to investigate and explain the motion of many physical objects and systems. In the time since Newton, new insights, especially around the concept of energy, built the field of classical mechanics on his foundations. Limitations to Newton's laws have also been discovered; new theories are necessary when objects move at very high speeds (special relativity), are very massive (general relativity), or are very small (quantum mechanics).
Prerequisites
Newton's laws are often stated in terms of point or particle masses, that is, bodies whose volume is negligible. This is a reasonable approximation for real bodies when the motion of internal parts can be neglected, and when the separation between bodies is much larger than the size of each. For instance, the Earth and the Sun can both be approximated as pointlike when considering the orbit of the former around the latter, but the Earth is not pointlike when considering activities on its surface.
The mathematical description of motion, or kinematics, is based on the idea of specifying positions using numerical coordinates. Movement is represented by these numbers changing over time: a body's trajectory is represented by a function that assigns to each value of a time variable the values of all the position coordinates. The simplest case is one-dimensional, that is, when a body is constrained to move only along a straight line. Its position can then be given by a single number, indicating where it is relative to some chosen reference point. For example, a body might be free to slide along a track that runs left to right, and so its location can be specified by its distance from a convenient zero point, or origin, with negative numbers indicating positions to the left and positive numbers indicating positions to the right. If the body's location as a function of time is $s(t)$, then its average velocity over the time interval from $t_0$ to $t_1$ is
$$\langle v \rangle = \frac{\Delta s}{\Delta t} = \frac{s(t_1) - s(t_0)}{t_1 - t_0}.$$
Here, the Greek letter $\Delta$ (delta) is used, per tradition, to mean "change in". A positive average velocity means that the position coordinate $s$ increases over the interval in question, a negative average velocity indicates a net decrease over that interval, and an average velocity of zero means that the body ends the time interval in the same place as it began. Calculus gives the means to define an instantaneous velocity, a measure of a body's speed and direction of movement at a single moment of time, rather than over an interval. One notation for the instantaneous velocity is to replace $\Delta$ with the symbol $d$, for example,
$$v = \frac{ds}{dt}.$$
This denotes that the instantaneous velocity is the derivative of the position with respect to time. It can roughly be thought of as the ratio between an infinitesimally small change in position to the infinitesimally small time interval over which it occurs. More carefully, the velocity and all other derivatives can be defined using the concept of a limit. A function $f(t)$ has a limit of $L$ at a given input value $t_0$ if the difference between $f(t)$ and $L$ can be made arbitrarily small by choosing an input sufficiently close to $t_0$. One writes,
$$\lim_{t \to t_0} f(t) = L.$$
Instantaneous velocity can be defined as the limit of the average velocity as the time interval shrinks to zero:
$$v = \lim_{\Delta t \to 0} \frac{\Delta s}{\Delta t}.$$
Acceleration is to velocity as velocity is to position: it is the derivative of the velocity with respect to time. Acceleration can likewise be defined as a limit:
$$a = \lim_{\Delta t \to 0} \frac{\Delta v}{\Delta t}.$$
Consequently, the acceleration is the second derivative of position, often written $\frac{d^2 s}{dt^2}$.
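To make the limit definition concrete, here is a minimal Python sketch (an illustrative example, not from the article) that approximates the instantaneous velocity of an assumed position function $s(t) = t^2$ by shrinking the time interval:

```python
# A minimal sketch (assumed example): approximating the instantaneous velocity
# v = ds/dt of s(t) = t**2 at t = 1 by shrinking the interval dt, illustrating
# the limit definition above. The exact answer is 2*t = 2.0.
def average_velocity(s, t0, dt):
    return (s(t0 + dt) - s(t0)) / dt

s = lambda t: t**2  # assumed position function

for dt in (1.0, 0.1, 0.01, 0.001):
    print(dt, average_velocity(s, 1.0, dt))  # approaches the exact value 2.0
```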
Position, when thought of as a displacement from an origin point, is a vector: a quantity with both magnitude and direction. Velocity and acceleration are vector quantities as well. The mathematical tools of vector algebra provide the means to describe motion in two, three or more dimensions. Vectors are often denoted with an arrow, as in $\vec{s}$, or in bold typeface, such as $\mathbf{s}$. Often, vectors are represented visually as arrows, with the direction of the vector being the direction of the arrow, and the magnitude of the vector indicated by the length of the arrow. Numerically, a vector can be represented as a list; for example, a body's velocity vector might be $(3~\text{m/s}, 4~\text{m/s})$, indicating that it is moving at 3 metres per second along the horizontal axis and 4 metres per second along the vertical axis. The same motion described in a different coordinate system will be represented by different numbers, and vector algebra can be used to translate between these alternatives.
The study of mechanics is complicated by the fact that household words like energy are used with a technical meaning. Moreover, words which are synonymous in everyday speech are not so in physics: force is not the same as power or pressure, for example, and mass has a different meaning than weight. The physics concept of force makes quantitative the everyday idea of a push or a pull. Forces in Newtonian mechanics are often due to strings and ropes, friction, muscle effort, gravity, and so forth. Like displacement, velocity, and acceleration, force is a vector quantity.
Laws
First law
Translated from Latin, Newton's first law reads,
Every object perseveres in its state of rest, or of uniform motion in a right line, except insofar as it is compelled to change that state by forces impressed thereon.
Newton's first law expresses the principle of inertia: the natural behavior of a body is to move in a straight line at constant speed. A body's motion preserves the status quo, but external forces can perturb this.
The modern understanding of Newton's first law is that no inertial observer is privileged over any other. The concept of an inertial observer makes quantitative the everyday idea of feeling no effects of motion. For example, a person standing on the ground watching a train go past is an inertial observer. If the observer on the ground sees the train moving smoothly in a straight line at a constant speed, then a passenger sitting on the train will also be an inertial observer: the train passenger feels no motion. The principle expressed by Newton's first law is that there is no way to say which inertial observer is "really" moving and which is "really" standing still. One observer's state of rest is another observer's state of uniform motion in a straight line, and no experiment can deem either point of view to be correct or incorrect. There is no absolute standard of rest. Newton himself believed that absolute space and time existed, but that the only measures of space or time accessible to experiment are relative.
Second law
The change of motion of an object is proportional to the force impressed; and is made in the direction of the straight line in which the force is impressed.
By "motion", Newton meant the quantity now called momentum, which depends upon the amount of matter contained in a body, the speed at which that body is moving, and the direction in which it is moving. In modern notation, the momentum of a body is the product of its mass and its velocity:
where all three quantities can change over time.
Newton's second law, in modern form, states that the time derivative of the momentum is the force:
$$\vec{F} = \frac{d\vec{p}}{dt}.$$
If the mass does not change with time, then the derivative acts only upon the velocity, and so the force equals the product of the mass and the time derivative of the velocity, which is the acceleration:
$$\vec{F} = m\frac{d\vec{v}}{dt} = m\vec{a}.$$
As the acceleration is the second derivative of position with respect to time, this can also be written
$$\vec{F} = m\frac{d^2\vec{s}}{dt^2}.$$
The forces acting on a body add as vectors, and so the total force on a body depends upon both the magnitudes and the directions of the individual forces. When the net force on a body is equal to zero, then by Newton's second law, the body does not accelerate, and it is said to be in mechanical equilibrium. A state of mechanical equilibrium is stable if, when the position of the body is changed slightly, the body remains near that equilibrium. Otherwise, the equilibrium is unstable.
A common visual representation of forces acting in concert is the free body diagram, which schematically portrays a body of interest and the forces applied to it by outside influences. For example, a free body diagram of a block sitting upon an inclined plane can illustrate the combination of gravitational force, "normal" force, friction, and string tension.
Newton's second law is sometimes presented as a definition of force, i.e., a force is that which exists when an inertial observer sees a body accelerating. In order for this to be more than a tautology — acceleration implies force, force implies acceleration — some other statement about force must also be made. For example, an equation detailing the force might be specified, like Newton's law of universal gravitation. By inserting such an expression for into Newton's second law, an equation with predictive power can be written. Newton's second law has also been regarded as setting out a research program for physics, establishing that important goals of the subject are to identify the forces present in nature and to catalogue the constituents of matter.
Third law
To every action, there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.
Overly brief paraphrases of the third law, like "action equals reaction" might have caused confusion among generations of students: the "action" and "reaction" apply to different bodies. For example, consider a book at rest on a table. The Earth's gravity pulls down upon the book. The "reaction" to that "action" is not the support force from the table holding up the book, but the gravitational pull of the book acting on the Earth.
Newton's third law relates to a more fundamental principle, the conservation of momentum. The latter remains true even in cases where Newton's statement does not, for instance when force fields as well as material bodies carry momentum, and when momentum is defined properly, in quantum mechanics as well. In Newtonian mechanics, if two bodies have momenta $\vec{p}_1$ and $\vec{p}_2$ respectively, then the total momentum of the pair is $\vec{p} = \vec{p}_1 + \vec{p}_2$, and the rate of change of $\vec{p}$ is
$$\frac{d\vec{p}}{dt} = \frac{d\vec{p}_1}{dt} + \frac{d\vec{p}_2}{dt}.$$
By Newton's second law, the first term is the total force upon the first body, and the second term is the total force upon the second body. If the two bodies are isolated from outside influences, the only force upon the first body can be that from the second, and vice versa. By Newton's third law, these forces have equal magnitude but opposite direction, so they cancel when added, and $\vec{p}$ is constant. Alternatively, if $\vec{p}$ is known to be constant, it follows that the forces have equal magnitude and opposite direction.
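The cancellation argument can be checked numerically. The following sketch (an assumed setup, not from the article) couples two bodies by a spring-like force, applies equal and opposite forces per the third law, and shows that the total momentum stays constant:

```python
# A minimal sketch (assumed masses, velocities, and coupling): two bodies
# exerting equal-and-opposite forces; total momentum is conserved.
m1, m2 = 2.0, 3.0        # masses (kg)
v1, v2 = 1.0, -0.5       # velocities (m/s)
x1, x2 = 0.0, 1.0        # positions (m)
dt, k = 1e-3, 10.0       # time step (s) and spring constant (N/m)

for _ in range(10_000):
    f = k * (x2 - x1 - 1.0)   # force on body 1; body 2 feels -f (third law)
    v1 += f / m1 * dt
    v2 += -f / m2 * dt
    x1 += v1 * dt
    x2 += v2 * dt

# Equals the initial total momentum, 2*1 + 3*(-0.5) = 0.5, up to rounding.
print(m1 * v1 + m2 * v2)
```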
Candidates for additional laws
Various sources have proposed elevating other ideas used in classical mechanics to the status of Newton's laws. For example, in Newtonian mechanics, the total mass of a body made by bringing together two smaller bodies is the sum of their individual masses. Frank Wilczek has suggested calling attention to this assumption by designating it "Newton's Zeroth Law". Another candidate for a "zeroth law" is the fact that at any instant, a body reacts to the forces applied to it at that instant. Likewise, the idea that forces add like vectors (or in other words obey the superposition principle), and the idea that forces change the energy of a body, have both been described as a "fourth law".
Moreover, some texts organize the basic ideas of Newtonian mechanics into different postulates, other than the three laws as commonly phrased, with the goal of being more clear about what is empirically observed and what is true by definition.
Examples
The study of the behavior of massive bodies using Newton's laws is known as Newtonian mechanics. Some example problems in Newtonian mechanics are particularly noteworthy for conceptual or historical reasons.
Uniformly accelerated motion
If a body falls from rest near the surface of the Earth, then in the absence of air resistance, it will accelerate at a constant rate. This is known as free fall. The speed attained during free fall is proportional to the elapsed time, and the distance traveled is proportional to the square of the elapsed time. Importantly, the acceleration is the same for all bodies, independently of their mass. This follows from combining Newton's second law of motion with his law of universal gravitation. The latter states that the magnitude of the gravitational force from the Earth upon the body is
$$F = \frac{GMm}{r^2},$$
where $m$ is the mass of the falling body, $M$ is the mass of the Earth, $G$ is Newton's constant, and $r$ is the distance from the center of the Earth to the body's location, which is very nearly the radius of the Earth. Setting this equal to $ma$, the body's mass $m$ cancels from both sides of the equation, leaving an acceleration that depends upon $G$, $M$, and $r$, and $r$ can be taken to be constant. This particular value of acceleration is typically denoted $g$:
$$g = \frac{GM}{r^2} \approx 9.8~\mathrm{m/s^2}.$$
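As a quick numerical check, the formula for $g$ can be evaluated with standard values for $G$, the Earth's mass, and its radius (a minimal sketch; the constants are standard values supplied here, not taken from the article):

```python
# A minimal sketch: the free-fall acceleration g = G*M/r**2 computed from
# Newton's constant, the Earth's mass, and the Earth's mean radius.
G = 6.674e-11   # m^3 kg^-1 s^-2
M = 5.972e24    # kg, mass of the Earth
r = 6.371e6     # m, mean radius of the Earth

g = G * M / r**2
print(g)        # about 9.82 m/s^2
```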
If the body is not released from rest but instead launched upwards and/or horizontally with nonzero velocity, then free fall becomes projectile motion. When air resistance can be neglected, projectiles follow parabola-shaped trajectories, because gravity affects the body's vertical motion and not its horizontal. At the peak of the projectile's trajectory, its vertical velocity is zero, but its acceleration is downwards, as it is at all times. Setting the wrong vector equal to zero is a common confusion among physics students.
Uniform circular motion
When a body is in uniform circular motion, the force on it changes the direction of its motion but not its speed. For a body moving in a circle of radius $r$ at a constant speed $v$, its acceleration has a magnitude
$$a = \frac{v^2}{r}$$
and is directed toward the center of the circle. The force required to sustain this acceleration, called the centripetal force, is therefore also directed toward the center of the circle and has magnitude $mv^2/r$. Many orbits, such as that of the Moon around the Earth, can be approximated by uniform circular motion. In such cases, the centripetal force is gravity, and by Newton's law of universal gravitation has magnitude $GMm/r^2$, where $M$ is the mass of the larger body being orbited. Therefore, the mass of a body can be calculated from observations of another body orbiting around it.
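As an illustration of that last point, the following sketch (with assumed orbital data for the Moon) estimates the Earth's mass by equating the centripetal force $mv^2/r$, with $v = 2\pi r/T$, to the gravitational force $GMm/r^2$, which gives $M = 4\pi^2 r^3/(GT^2)$:

```python
import math

# A minimal sketch (assumed orbital data): estimating the Earth's mass from
# the Moon's orbit via M = 4*pi**2 * r**3 / (G * T**2).
G = 6.674e-11          # m^3 kg^-1 s^-2
r = 3.844e8            # m, mean Earth-Moon distance
T = 27.32 * 86400.0    # s, sidereal orbital period

M = 4 * math.pi**2 * r**3 / (G * T**2)
print(M)               # about 6e24 kg, close to the accepted Earth mass
```

The small overestimate relative to the accepted value comes from treating the Moon as orbiting a fixed Earth rather than their common center of mass.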
Newton's cannonball is a thought experiment that interpolates between projectile motion and uniform circular motion. A cannonball that is lobbed weakly off the edge of a tall cliff will hit the ground in the same amount of time as if it were dropped from rest, because the force of gravity only affects the cannonball's momentum in the downward direction, and its effect is not diminished by horizontal movement. If the cannonball is launched with a greater initial horizontal velocity, then it will travel farther before it hits the ground, but it will still hit the ground in the same amount of time. However, if the cannonball is launched with an even larger initial velocity, then the curvature of the Earth becomes significant: the ground itself will curve away from the falling cannonball. A very fast cannonball will fall away from the inertial straight-line trajectory at the same rate that the Earth curves away beneath it; in other words, it will be in orbit (imagining that it is not slowed by air resistance or obstacles).
Harmonic motion
Consider a body of mass $m$ able to move along the $x$ axis, and suppose an equilibrium point exists at the position $x = 0$. That is, at $x = 0$, the net force upon the body is the zero vector, and by Newton's second law, the body will not accelerate. If the force upon the body is proportional to the displacement from the equilibrium point, and directed to the equilibrium point, then the body will perform simple harmonic motion. Writing the force as $F = -kx$, Newton's second law becomes
$$m\frac{d^2x}{dt^2} = -kx.$$
This differential equation has the solution
$$x(t) = A\cos(\omega t) + B\sin(\omega t),$$
where the frequency $\omega$ is equal to $\sqrt{k/m}$, and the constants $A$ and $B$ can be calculated knowing, for example, the position and velocity the body has at a given time, like $t = 0$.
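The analytic solution can be cross-checked by integrating Newton's second law directly. This minimal sketch (assumed values of $m$, $k$, and initial conditions, not from the article) uses the semi-implicit Euler method:

```python
import math

# A minimal sketch (assumed parameters): integrating m*x'' = -k*x numerically
# and comparing with the analytic solution x(t) = A*cos(w*t), with A fixed by
# the initial conditions x(0) = 1, v(0) = 0.
m, k = 1.0, 4.0
w = math.sqrt(k / m)          # angular frequency, sqrt(k/m) = 2.0 here
x, v, dt = 1.0, 0.0, 1e-4

for _ in range(int(5.0 / dt)):
    v += (-k * x / m) * dt    # acceleration from F = -k*x
    x += v * dt

print(x, math.cos(w * 5.0))   # numerical and analytic positions at t = 5 agree
```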
One reason that the harmonic oscillator is a conceptually important example is that it is a good approximation for many systems near a stable mechanical equilibrium. For example, a pendulum has a stable equilibrium in the vertical position: if motionless there, it will remain there, and if pushed slightly, it will swing back and forth. Neglecting air resistance and friction in the pivot, the force upon the pendulum is gravity, and Newton's second law becomes
$$mL\frac{d^2\theta}{dt^2} = -mg\sin\theta,$$
where $L$ is the length of the pendulum and $\theta$ is its angle from the vertical. When the angle $\theta$ is small, the sine of $\theta$ is nearly equal to $\theta$ (see small-angle approximation), and so this expression simplifies to the equation for a simple harmonic oscillator with frequency $\omega = \sqrt{g/L}$.
A harmonic oscillator can be damped, often by friction or viscous drag, in which case energy bleeds out of the oscillator and the amplitude of the oscillations decreases over time. Also, a harmonic oscillator can be driven by an applied force, which can lead to the phenomenon of resonance.
Objects with variable mass
Newtonian physics treats matter as being neither created nor destroyed, though it may be rearranged. It can be the case that an object of interest gains or loses mass because matter is added to or removed from it. In such a situation, Newton's laws can be applied to the individual pieces of matter, keeping track of which pieces belong to the object of interest over time. For instance, if a rocket of mass $M(t)$, moving at velocity $\vec{v}(t)$, ejects matter at a velocity $\vec{u}$ relative to the rocket, then
$$\vec{F} = M\frac{d\vec{v}}{dt} - \vec{u}\frac{dM}{dt},$$
where $\vec{F}$ is the net external force (e.g., a planet's gravitational pull).
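Integrating this equation with no external force recovers the classical Tsiolkovsky result $\Delta v = u\ln(m_0/m_f)$. A minimal sketch with assumed numbers:

```python
import math

# A minimal sketch (assumed numbers): integrating the variable-mass equation
# with zero external force to recover delta-v = u_e * ln(m0 / mf).
m0, mf, u_e = 1000.0, 400.0, 2500.0   # initial/final mass (kg), exhaust speed (m/s)
mdot, dt = -1.0, 1e-3                 # mass flow rate (kg/s), time step (s)

m, v = m0, 0.0
while m > mf:
    v += (-u_e * mdot / m) * dt   # thrust = -u_e * mdot, so dv = thrust/m * dt
    m += mdot * dt

print(v, u_e * math.log(m0 / mf))  # both close to ~2290 m/s
```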
Work and energy
The concept of energy was developed after Newton's time, but it has become an inseparable part of what is considered "Newtonian" physics. Energy can broadly be classified into kinetic, due to a body's motion, and potential, due to a body's position relative to others. Thermal energy, the energy carried by heat flow, is a type of kinetic energy not associated with the macroscopic motion of objects but instead with the movements of the atoms and molecules of which they are made. According to the work-energy theorem, when a force acts upon a body while that body moves along the line of the force, the force does work upon the body, and the amount of work done is equal to the change in the body's kinetic energy. In many cases of interest, the net work done by a force when a body moves in a closed loop — starting at a point, moving along some trajectory, and returning to the initial point — is zero. If this is the case, then the force can be written in terms of the gradient of a function called a scalar potential:
$$\vec{F} = -\nabla U.$$
This is true for many forces including that of gravity, but not for friction; indeed, almost any problem in a mechanics textbook that does not involve friction can be expressed in this way. The fact that the force can be written in this way can be understood from the conservation of energy. Without friction to dissipate a body's energy into heat, the body's energy will trade between potential and (non-thermal) kinetic forms while the total amount remains constant. Any gain of kinetic energy, which occurs when the net force on the body accelerates it to a higher speed, must be accompanied by a loss of potential energy. So, the net force upon the body is determined by the manner in which the potential energy decreases.
Rigid-body motion and rotation
A rigid body is an object whose size is too large to neglect and which maintains the same shape over time. In Newtonian mechanics, the motion of a rigid body is often understood by separating it into movement of the body's center of mass and movement around the center of mass.
Center of mass
Significant aspects of the motion of an extended body can be understood by imagining the mass of that body concentrated to a single point, known as the center of mass. The location of a body's center of mass depends upon how that body's material is distributed. For a collection of pointlike objects with masses $m_1, \ldots, m_N$ at positions $\vec{r}_1, \ldots, \vec{r}_N$, the center of mass is located at
$$\vec{R} = \sum_{i=1}^{N} \frac{m_i \vec{r}_i}{M},$$
where $M$ is the total mass of the collection. In the absence of a net external force, the center of mass moves at a constant speed in a straight line. This applies, for example, to a collision between two bodies. If the total external force is not zero, then the center of mass changes velocity as though it were a point body of mass $M$. This follows from the fact that the internal forces within the collection, the forces that the objects exert upon each other, occur in balanced pairs by Newton's third law. In a system of two bodies with one much more massive than the other, the center of mass will approximately coincide with the location of the more massive body.
Rotational analogues of Newton's laws
When Newton's laws are applied to rotating extended bodies, they lead to new quantities that are analogous to those invoked in the original laws. The analogue of mass is the moment of inertia, the counterpart of momentum is angular momentum, and the counterpart of force is torque.
Angular momentum is calculated with respect to a reference point. If the displacement vector from a reference point to a body is $\vec{r}$ and the body has momentum $\vec{p}$, then the body's angular momentum with respect to that point is, using the vector cross product,
$$\vec{L} = \vec{r} \times \vec{p}.$$
Taking the time derivative of the angular momentum gives
$$\frac{d\vec{L}}{dt} = \left(\frac{d\vec{r}}{dt}\right) \times \vec{p} + \vec{r} \times \frac{d\vec{p}}{dt}.$$
The first term vanishes because $\frac{d\vec{r}}{dt}$ and $\vec{p}$ point in the same direction. The remaining term is the torque,
$$\vec{\tau} = \vec{r} \times \vec{F}.$$
When the torque is zero, the angular momentum is constant, just as when the force is zero, the momentum is constant. The torque can vanish even when the force is non-zero, if the body is located at the reference point ($\vec{r} = \vec{0}$) or if the force $\vec{F}$ and the displacement vector $\vec{r}$ are directed along the same line.
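These cross-product relations are easy to evaluate numerically. A minimal sketch (assumed vectors, using NumPy) computes $\vec{L} = \vec{r} \times \vec{p}$ and shows that a central force, directed along $\vec{r}$, exerts zero torque:

```python
import numpy as np

# A minimal sketch (assumed values): angular momentum L = r x p and torque
# tau = r x F; the torque vanishes when F is parallel to r.
r = np.array([1.0, 0.0, 0.0])
p = np.array([0.0, 2.0, 0.0])
print(np.cross(r, p))            # L = [0, 0, 2]

F_central = -5.0 * r             # a central force, anti-parallel to r
print(np.cross(r, F_central))    # torque = [0, 0, 0]
```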
The angular momentum of a collection of point masses, and thus of an extended body, is found by adding the contributions from each of the points. This provides a means to characterize a body's rotation about an axis, by adding up the angular momenta of its individual pieces. The result depends on the chosen axis, the shape of the body, and the rate of rotation.
Multi-body gravitational system
Newton's law of universal gravitation states that any body attracts any other body along the straight line connecting them. The size of the attracting force is proportional to the product of their masses, and inversely proportional to the square of the distance between them. Finding the shape of the orbits that an inverse-square force law will produce is known as the Kepler problem. The Kepler problem can be solved in multiple ways, including by demonstrating that the Laplace–Runge–Lenz vector is constant, or by applying a duality transformation to a 2-dimensional harmonic oscillator. However it is solved, the result is that orbits will be conic sections, that is, ellipses (including circles), parabolas, or hyperbolas. The eccentricity of the orbit, and thus the type of conic section, is determined by the energy and the angular momentum of the orbiting body. Planets do not have sufficient energy to escape the Sun, and so their orbits are ellipses, to a good approximation; because the planets pull on one another, actual orbits are not exactly conic sections.
If a third mass is added, the Kepler problem becomes the three-body problem, which in general has no exact solution in closed form. That is, there is no way to start from the differential equations implied by Newton's laws and, after a finite sequence of standard mathematical operations, obtain equations that express the three bodies' motions over time. Numerical methods can be applied to obtain useful, albeit approximate, results for the three-body problem. The positions and velocities of the bodies can be stored in variables within a computer's memory; Newton's laws are used to calculate how the velocities will change over a short interval of time, and knowing the velocities, the changes of position over that time interval can be computed. This process is looped to calculate, approximately, the bodies' trajectories. Generally speaking, the shorter the time interval, the more accurate the approximation.
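The looping procedure described above can be written down directly. This minimal sketch (in units with $G = 1$ and with assumed initial conditions, not from the article) advances three gravitating bodies with the semi-implicit Euler method:

```python
import numpy as np

# A minimal sketch (assumed units with G = 1): looping Newton's laws over
# short time steps to approximate three-body trajectories.
G = 1.0
m = np.array([1.0, 1.0, 1.0])                        # masses
r = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # positions
v = np.array([[0.0, 0.2], [0.0, -0.2], [0.2, 0.0]])  # velocities
dt = 1e-3

def accelerations(r):
    """Pairwise inverse-square accelerations on each body."""
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return a

for _ in range(10_000):           # 10 time units of evolution
    v += accelerations(r) * dt    # update velocities from current forces
    r += v * dt                   # then update positions

print(r)
```

Shrinking `dt` (and increasing the step count accordingly) improves the accuracy, in line with the remark above that shorter time intervals give better approximations.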
Chaos and unpredictability
Nonlinear dynamics
Newton's laws of motion allow the possibility of chaos. That is, qualitatively speaking, physical systems obeying Newton's laws can exhibit sensitive dependence upon their initial conditions: a slight change of the position or velocity of one part of a system can lead to the whole system behaving in a radically different way within a short time. Noteworthy examples include the three-body problem, the double pendulum, dynamical billiards, and the Fermi–Pasta–Ulam–Tsingou problem.
Newton's laws can be applied to fluids by considering a fluid as composed of infinitesimal pieces, each exerting forces upon neighboring pieces. The Euler momentum equation is an expression of Newton's second law adapted to fluid dynamics. A fluid is described by a velocity field, i.e., a function that assigns a velocity vector to each point in space and time. A small object being carried along by the fluid flow can change velocity for two reasons: first, because the velocity field at its position is changing over time, and second, because it moves to a new location where the velocity field has a different value. Consequently, when Newton's second law is applied to an infinitesimal portion of fluid, the acceleration has two terms, a combination known as a total or material derivative. The mass of an infinitesimal portion depends upon the fluid density, and there is a net force upon it if the fluid pressure varies from one side of it to another. Accordingly, $\vec{F} = m\vec{a}$ becomes
$$\frac{\partial \vec{v}}{\partial t} + (\vec{v} \cdot \nabla)\vec{v} = -\frac{1}{\rho}\nabla P + \vec{f},$$
where $\rho$ is the density, $P$ is the pressure, and $\vec{f}$ stands for an external influence like a gravitational pull. Incorporating the effect of viscosity turns the Euler equation into a Navier–Stokes equation:
$$\frac{\partial \vec{v}}{\partial t} + (\vec{v} \cdot \nabla)\vec{v} = -\frac{1}{\rho}\nabla P + \nu \nabla^2 \vec{v} + \vec{f},$$
where $\nu$ is the kinematic viscosity.
Singularities
It is mathematically possible for a collection of point masses, moving in accord with Newton's laws, to launch some of themselves away so forcefully that they fly off to infinity in a finite time. This unphysical behavior, known as a "noncollision singularity", depends upon the masses being pointlike and able to approach one another arbitrarily closely, as well as the lack of a relativistic speed limit in Newtonian physics.
It is not yet known whether or not the Euler and Navier–Stokes equations exhibit the analogous behavior of initially smooth solutions "blowing up" in finite time. The question of existence and smoothness of Navier–Stokes solutions is one of the Millennium Prize Problems.
Relation to other formulations of classical physics
Classical mechanics can be mathematically formulated in multiple different ways, other than the "Newtonian" description (which itself, of course, incorporates contributions from others both before and after Newton). The physical content of these different formulations is the same as the Newtonian, but they provide different insights and facilitate different types of calculations. For example, Lagrangian mechanics helps make apparent the connection between symmetries and conservation laws, and it is useful when calculating the motion of constrained bodies, like a mass restricted to move along a curving track or on the surface of a sphere. Hamiltonian mechanics is convenient for statistical physics, leads to further insight about symmetry, and can be developed into sophisticated techniques for perturbation theory. Due to the breadth of these topics, the discussion here will be confined to concise treatments of how they reformulate Newton's laws of motion.
Lagrangian
Lagrangian mechanics differs from the Newtonian formulation by considering entire trajectories at once rather than predicting a body's motion at a single instant. It is traditional in Lagrangian mechanics to denote position with $q$ and velocity with $\dot{q}$. The simplest example is a massive point particle, the Lagrangian for which can be written as the difference between its kinetic and potential energies:
$$L(q, \dot{q}) = T - V,$$
where the kinetic energy is
$$T = \frac{1}{2}m\dot{q}^2$$
and the potential energy is some function of the position, $V(q)$. The physical path that the particle will take between an initial point and a final point is the path for which the integral of the Lagrangian is "stationary". That is, the physical path has the property that small perturbations of it will, to a first approximation, not change the integral of the Lagrangian. Calculus of variations provides the mathematical tools for finding this path. Applying the calculus of variations to the task of finding the path yields the Euler–Lagrange equation for the particle,
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) = \frac{\partial L}{\partial q}.$$
Evaluating the partial derivatives of the Lagrangian gives
$$\frac{d}{dt}\left(m\dot{q}\right) = -\frac{dV}{dq},$$
which is a restatement of Newton's second law. The left-hand side is the time derivative of the momentum, and the right-hand side is the force, represented in terms of the potential energy.
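This derivation can be reproduced symbolically. A minimal sketch (assuming the mass-on-a-spring Lagrangian $L = \tfrac{1}{2}m\dot{x}^2 - \tfrac{1}{2}kx^2$, using SymPy) applies the Euler–Lagrange equation and recovers Newton's second law:

```python
import sympy as sp

# A minimal sketch (assumed example): deriving the equation of motion for a
# mass on a spring from its Lagrangian via the Euler-Lagrange equation.
t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')
xdot = sp.Derivative(x(t), t)

L = sp.Rational(1, 2) * m * xdot**2 - sp.Rational(1, 2) * k * x(t)**2

# d/dt (dL/dxdot) - dL/dx = 0
eom = sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x(t))
print(sp.simplify(eom))   # m*x''(t) + k*x(t): Newton's second law with F = -k*x
```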
Landau and Lifshitz argue that the Lagrangian formulation makes the conceptual content of classical mechanics more clear than starting with Newton's laws. Lagrangian mechanics provides a convenient framework in which to prove Noether's theorem, which relates symmetries and conservation laws. The conservation of momentum can be derived by applying Noether's theorem to a Lagrangian for a multi-particle system, and so, Newton's third law is a theorem rather than an assumption.
Hamiltonian
In Hamiltonian mechanics, the dynamics of a system are represented by a function called the Hamiltonian, which in many cases of interest is equal to the total energy of the system. The Hamiltonian is a function of the positions and the momenta of all the bodies making up the system, and it may also depend explicitly upon time. The time derivatives of the position and momentum variables are given by partial derivatives of the Hamiltonian, via Hamilton's equations. The simplest example is a point mass constrained to move in a straight line, under the effect of a potential. Writing $q$ for the position coordinate and $p$ for the body's momentum, the Hamiltonian is
$$\mathcal{H}(p, q) = \frac{p^2}{2m} + V(q).$$
In this example, Hamilton's equations are
$$\frac{dq}{dt} = \frac{\partial \mathcal{H}}{\partial p}$$
and
$$\frac{dp}{dt} = -\frac{\partial \mathcal{H}}{\partial q}.$$
Evaluating these partial derivatives, the former equation becomes
$$\frac{dq}{dt} = \frac{p}{m},$$
which reproduces the familiar statement that a body's momentum is the product of its mass and velocity. The time derivative of the momentum is
$$\frac{dp}{dt} = -\frac{dV}{dq},$$
which, upon identifying the negative derivative of the potential with the force, is just Newton's second law once again.
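Hamilton's equations also lend themselves to direct numerical integration, since they are two first-order equations. A minimal sketch (with an assumed quartic potential $V(q) = q^4/4$, chosen purely for illustration) steps them with the symplectic Euler method:

```python
# A minimal sketch (assumed potential V(q) = q**4 / 4): stepping Hamilton's
# equations dq/dt = dH/dp = p/m and dp/dt = -dH/dq = -dV/dq numerically.
m, dt = 1.0, 1e-3
q, p = 1.0, 0.0

def dV_dq(q):
    return q**3          # derivative of the assumed quartic potential

for _ in range(10_000):
    p -= dV_dq(q) * dt   # dp/dt = -dH/dq
    q += (p / m) * dt    # dq/dt =  dH/dp

energy = p**2 / (2 * m) + q**4 / 4
print(q, p, energy)      # the energy stays near its initial value, 0.25
```

The symplectic update (momentum first, then position) is a standard choice here because it approximately conserves the Hamiltonian over long runs.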
As in the Lagrangian formulation, in Hamiltonian mechanics the conservation of momentum can be derived using Noether's theorem, making Newton's third law an idea that is deduced rather than assumed.
Among the proposals to reform the standard introductory-physics curriculum is one that teaches the concept of energy before that of force, essentially "introductory Hamiltonian mechanics".
Hamilton–Jacobi
The Hamilton–Jacobi equation provides yet another formulation of classical mechanics, one which makes it mathematically analogous to wave optics. This formulation also uses Hamiltonian functions, but in a different way than the formulation described above. The paths taken by bodies or collections of bodies are deduced from a function of positions and time, $S(\vec{q}, t)$. The Hamiltonian is incorporated into the Hamilton–Jacobi equation, a differential equation for $S$. Bodies move over time in such a way that their trajectories are perpendicular to the surfaces of constant $S$, analogously to how a light ray propagates in the direction perpendicular to its wavefront. This is simplest to express for the case of a single point mass, in which $S$ is a function $S(\vec{q}, t)$, and the point mass moves in the direction along which $S$ changes most steeply. In other words, the momentum of the point mass is the gradient of $S$:
$$\vec{v} = \frac{1}{m}\nabla S.$$
The Hamilton–Jacobi equation for a point mass is
$$-\frac{\partial S}{\partial t} = H\left(\vec{q}, \nabla S, t\right).$$
The relation to Newton's laws can be seen by considering a point mass moving in a time-independent potential $V(\vec{q})$, in which case the Hamilton–Jacobi equation becomes
$$-\frac{\partial S}{\partial t} = \frac{1}{2m}\left(\nabla S\right)^2 + V.$$
Taking the gradient of both sides, this becomes
$$-\nabla\frac{\partial S}{\partial t} = \frac{1}{2m}\nabla\left(\nabla S\right)^2 + \nabla V.$$
Interchanging the order of the partial derivatives on the left-hand side, and using the power and chain rules on the first term on the right-hand side,
$$-\frac{\partial}{\partial t}\nabla S = \frac{1}{m}\left(\nabla S \cdot \nabla\right)\nabla S + \nabla V.$$
Gathering together the terms that depend upon the gradient of $S$,
$$-\left[\frac{\partial}{\partial t} + \frac{1}{m}\left(\nabla S \cdot \nabla\right)\right]\nabla S = \nabla V.$$
This is another re-expression of Newton's second law. The expression in brackets is a total or material derivative as mentioned above, in which the first term indicates how the function being differentiated changes over time at a fixed location, and the second term captures how a moving particle will see different values of that function as it travels from place to place:
$$\frac{d}{dt} = \frac{\partial}{\partial t} + \vec{v} \cdot \nabla.$$
Relation to other physical theories
Thermodynamics and statistical physics
In statistical physics, the kinetic theory of gases applies Newton's laws of motion to large numbers (typically on the order of the Avogadro number) of particles. Kinetic theory can explain, for example, the pressure that a gas exerts upon the container holding it as the aggregate of many impacts of atoms, each imparting a tiny amount of momentum.
The Langevin equation is a special case of Newton's second law, adapted for the case of describing a small object bombarded stochastically by even smaller ones. It can be written
$$m\frac{d\vec{v}}{dt} = -\gamma\vec{v} + \vec{\xi},$$
where $\gamma$ is a drag coefficient and $\vec{\xi}$ is a force that varies randomly from instant to instant, representing the net effect of collisions with the surrounding particles. This is used to model Brownian motion.
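A minimal sketch of how such an equation is used in practice (assumed parameters; the Gaussian noise discretization is a standard numerical choice, not from the article):

```python
import random

# A minimal sketch (assumed parameters): integrating m dv/dt = -gamma*v + xi
# with Gaussian white-noise kicks xi, a crude model of Brownian motion.
m, gamma, dt = 1.0, 0.5, 1e-3
noise_scale = 1.0
v, x = 0.0, 0.0

for _ in range(100_000):
    xi = random.gauss(0.0, noise_scale / dt**0.5)  # white noise scales as 1/sqrt(dt)
    v += (-gamma * v + xi) / m * dt
    x += v * dt

print(x)   # one random endpoint; repeated runs spread out diffusively
```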
Electromagnetism
Newton's three laws can be applied to phenomena involving electricity and magnetism, though subtleties and caveats exist.
Coulomb's law for the electric force between two stationary, electrically charged bodies has much the same mathematical form as Newton's law of universal gravitation: the force is proportional to the product of the charges, inversely proportional to the square of the distance between them, and directed along the straight line between them. The Coulomb force that a charge exerts upon a charge is equal in magnitude to the force that exerts upon , and it points in the exact opposite direction. Coulomb's law is thus consistent with Newton's third law.
Electromagnetism treats forces as produced by fields acting upon charges. The Lorentz force law provides an expression for the force upon a charged body that can be plugged into Newton's second law in order to calculate its acceleration. According to the Lorentz force law, a charged body in an electric field experiences a force in the direction of that field, a force proportional to its charge and to the strength of the electric field. In addition, a moving charged body in a magnetic field experiences a force that is also proportional to its charge, in a direction perpendicular to both the field and the body's direction of motion. Using the vector cross product,
$$\vec{F} = q\vec{E} + q\vec{v} \times \vec{B}.$$
If the electric field vanishes ($\vec{E} = 0$), then the force will be perpendicular to the charge's motion, just as in the case of uniform circular motion studied above, and the charge will circle (or more generally move in a helix) around the magnetic field lines at the cyclotron frequency $\omega = qB/m$. Mass spectrometry works by applying electric and/or magnetic fields to moving charges and measuring the resulting acceleration, which by the Lorentz force law yields the mass-to-charge ratio.
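The circling motion can be demonstrated numerically. A minimal sketch (assumed charge, mass, and field values) integrates the Lorentz force with $\vec{E} = 0$ for one cyclotron period:

```python
import math
import numpy as np

# A minimal sketch (assumed values): a charge in a uniform magnetic field
# along z circles at the cyclotron frequency omega = q*B/m.
q, m, B = 1.0, 1.0, 2.0
omega = q * B / m
period = 2 * math.pi / omega        # one full revolution, ~3.1416 here
v = np.array([1.0, 0.0, 0.0])
r = np.zeros(3)
dt = 1e-4

for _ in range(int(period / dt)):
    F = q * np.cross(v, np.array([0.0, 0.0, B]))  # Lorentz force with E = 0
    v += F / m * dt
    r += v * dt

print(r)                  # back near the starting point: one closed circle
print(np.linalg.norm(v))  # the speed is (nearly) unchanged by the magnetic force
```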
Collections of charged bodies do not always obey Newton's third law: there can be a change of one body's momentum without a compensatory change in the momentum of another. The discrepancy is accounted for by momentum carried by the electromagnetic field itself. The momentum per unit volume of the electromagnetic field is proportional to the Poynting vector.
There is subtle conceptual conflict between electromagnetism and Newton's first law: Maxwell's theory of electromagnetism predicts that electromagnetic waves will travel through empty space at a constant, definite speed. Thus, some inertial observers seemingly have a privileged status over the others, namely those who measure the speed of light and find it to be the value predicted by the Maxwell equations. In other words, light provides an absolute standard for speed, yet the principle of inertia holds that there should be no such standard. This tension is resolved in the theory of special relativity, which revises the notions of space and time in such a way that all inertial observers will agree upon the speed of light in vacuum.
Special relativity
In special relativity, the rule that Wilczek called "Newton's Zeroth Law" breaks down: the mass of a composite object is not merely the sum of the masses of the individual pieces. Newton's first law, inertial motion, remains true. A form of Newton's second law, that force is the rate of change of momentum, also holds, as does the conservation of momentum. However, the definition of momentum is modified. Among the consequences of this is the fact that the more quickly a body moves, the harder it is to accelerate, and so, no matter how much force is applied, a body cannot be accelerated to the speed of light. Depending on the problem at hand, momentum in special relativity can be represented as a three-dimensional vector, $\vec{p} = m\gamma\vec{v}$, where $m$ is the body's rest mass and $\gamma$ is the Lorentz factor, which depends upon the body's speed. Alternatively, momentum and force can be represented as four-vectors.
Newton's third law must be modified in special relativity. The third law refers to the forces between two bodies at the same moment in time, and a key feature of special relativity is that simultaneity is relative. Events that happen at the same time relative to one observer can happen at different times relative to another. So, in a given observer's frame of reference, action and reaction may not be exactly opposite, and the total momentum of interacting bodies may not be conserved. The conservation of momentum is restored by including the momentum stored in the field that describes the bodies' interaction.
Newtonian mechanics is a good approximation to special relativity when the speeds involved are small compared to that of light.
General relativity
General relativity is a theory of gravity that advances beyond that of Newton. In general relativity, the gravitational force of Newtonian mechanics is reimagined as curvature of spacetime. A curved path like an orbit, attributed to a gravitational force in Newtonian mechanics, is not the result of a force deflecting a body from an ideal straight-line path, but rather the body's attempt to fall freely through a background that is itself curved by the presence of other masses. A remark by John Archibald Wheeler that has become proverbial among physicists summarizes the theory: "Spacetime tells matter how to move; matter tells spacetime how to curve." Wheeler himself thought of this reciprocal relationship as a modern, generalized form of Newton's third law. The relation between matter distribution and spacetime curvature is given by the Einstein field equations, which require tensor calculus to express.
The Newtonian theory of gravity is a good approximation to the predictions of general relativity when gravitational effects are weak and objects are moving slowly compared to the speed of light.
Quantum mechanics
Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is very different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence.
The Ehrenfest theorem provides a connection between quantum expectation values and Newton's second law, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, position and momentum are represented by mathematical entities known as Hermitian operators, and the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance.
History
The concepts invoked in Newton's laws of motion — mass, velocity, momentum, force — have predecessors in earlier work, and the content of Newtonian physics was further developed after Newton's time. Newton combined knowledge of celestial motions with the study of events on Earth and showed that one theory of mechanics could encompass both.
As noted by scholar I. Bernard Cohen, Newton's work was more than a mere synthesis of previous results, as he selected certain ideas and further transformed them, with each in a new form that was useful to him, while at the same time proving false certain basic or fundamental principles of scientists such as Galileo Galilei, Johannes Kepler, René Descartes, and Nicolaus Copernicus. He approached natural philosophy with mathematics in a completely novel way, in that instead of a preconceived natural philosophy, his style was to begin with a mathematical construct, and build on from there, comparing it to the real world to show that his system accurately accounted for it.
Antiquity and medieval background
Aristotle and "violent" motion
The subject of physics is often traced back to Aristotle, but the history of the concepts involved is obscured by multiple factors. An exact correspondence between Aristotelian and modern concepts is not simple to establish: Aristotle did not clearly distinguish what we would call speed and force, used the same term for density and viscosity, and conceived of motion as always through a medium, rather than through space. In addition, some concepts often termed "Aristotelian" might better be attributed to his followers and commentators upon him. These commentators found that Aristotelian physics had difficulty explaining projectile motion. Aristotle divided motion into two types: "natural" and "violent". The "natural" motion of terrestrial solid matter was to fall downwards, whereas a "violent" motion could push a body sideways. Moreover, in Aristotelian physics, a "violent" motion requires an immediate cause; separated from the cause of its "violent" motion, a body would revert to its "natural" behavior. Yet, a javelin continues moving after it leaves the thrower's hand. Aristotle concluded that the air around the javelin must be imparted with the ability to move the javelin forward.
Philoponus and impetus
John Philoponus, a Byzantine Greek thinker active during the sixth century, found this absurd: the same medium, air, was somehow responsible both for sustaining motion and for impeding it. If Aristotle's idea were true, Philoponus said, armies would launch weapons by blowing upon them with bellows. Philoponus argued that setting a body into motion imparted a quality, impetus, that would be contained within the body itself. As long as its impetus was sustained, the body would continue to move. In the following centuries, versions of impetus theory were advanced by individuals including Nur ad-Din al-Bitruji, Avicenna, Abu'l-Barakāt al-Baghdādī, John Buridan, and Albert of Saxony. In retrospect, the idea of impetus can be seen as a forerunner of the modern concept of momentum. The intuition that objects move according to some kind of impetus persists in many students of introductory physics.
Inertia and the first law
The French philosopher René Descartes introduced the concept of inertia by way of his "laws of nature" in The World (Traité du monde et de la lumière), written 1629–33. However, The World presented a heliocentric worldview, and in 1633 this view had given rise to a great conflict between Galileo Galilei and the Roman Catholic Inquisition. Descartes knew about this controversy and did not wish to get involved. The World was not published until 1664, ten years after his death.
The modern concept of inertia is credited to Galileo. Based on his experiments, Galileo concluded that the "natural" behavior of a moving body was to keep moving, until something else interfered with it, a principle he stated in Two New Sciences (1638). Galileo recognized that in projectile motion, the Earth's gravity affects vertical but not horizontal motion. However, Galileo's idea of inertia was not exactly the one that would be codified into Newton's first law. Galileo thought that a body moving a long distance inertially would follow the curve of the Earth. This idea was corrected by Isaac Beeckman, Descartes, and Pierre Gassendi, who recognized that inertial motion should be motion in a straight line. Descartes published his laws of nature (laws of motion) with this correction in Principles of Philosophy (Principia Philosophiae) in 1644, with the heliocentric part toned down.
According to American philosopher Richard J. Blackwell, Dutch scientist Christiaan Huygens had worked out his own, concise version of the law in 1656. It was not published until 1703, eight years after his death, in the opening paragraph of De Motu Corporum ex Percussione.
According to Huygens, this law was already known by Galileo and Descartes among others.
Force and the second law
Christiaan Huygens, in his Horologium Oscillatorium (1673), put forth the hypothesis that "By the action of gravity, whatever its sources, it happens that bodies are moved by a motion composed both of a uniform motion in one direction or another and of a motion downward due to gravity." Newton's second law generalized this hypothesis from gravity to all forces.
One important characteristic of Newtonian physics is that forces can act at a distance without requiring physical contact. For example, the Sun and the Earth pull on each other gravitationally, despite being separated by millions of kilometres. This contrasts with the idea, championed by Descartes among others, that the Sun's gravity held planets in orbit by swirling them in a vortex of transparent matter, aether. Newton considered aetherial explanations of force but ultimately rejected them. The study of magnetism by William Gilbert and others created a precedent for thinking of immaterial forces, and unable to find a quantitatively satisfactory explanation of his law of gravity in terms of an aetherial model, Newton eventually declared, "I feign no hypotheses": whether or not a model like Descartes's vortices could be found to underlie the Principia's theories of motion and gravity, the first grounds for judging them must be the successful predictions they made. And indeed, since Newton's time every attempt at such a model has failed.
Momentum conservation and the third law
Johannes Kepler suggested that gravitational attractions were reciprocal — that, for example, the Moon pulls on the Earth while the Earth pulls on the Moon — but he did not argue that such pairs are equal and opposite. In his Principles of Philosophy (1644), Descartes introduced the idea that during a collision between bodies, a "quantity of motion" remains unchanged. Descartes defined this quantity somewhat imprecisely by adding up the products of the speed and "size" of each body, where "size" for him incorporated both volume and surface area. Moreover, Descartes thought of the universe as a plenum, that is, filled with matter, so all motion required a body to displace a medium as it moved.
During the 1650s, Huygens studied collisions between hard spheres and deduced a principle that is now identified as the conservation of momentum. Christopher Wren would later deduce the same rules for elastic collisions that Huygens had, and John Wallis would apply momentum conservation to study inelastic collisions. Newton cited the work of Huygens, Wren, and Wallis to support the validity of his third law.
Newton arrived at his set of three laws incrementally. In a 1684 manuscript written to Huygens, he listed four laws: the principle of inertia, the change of motion by force, a statement about relative motion that would today be called Galilean invariance, and the rule that interactions between bodies do not change the motion of their center of mass. In a later manuscript, Newton added a law of action and reaction, while saying that this law and the law regarding the center of mass implied one another. Newton probably settled on the presentation in the Principia, with three primary laws and then other statements reduced to corollaries, during 1685.
After the Principia
Newton expressed his second law by saying that the force on a body is proportional to its change of motion, or momentum. By the time he wrote the Principia, he had already developed calculus (which he called "the science of fluxions"), but in the Principia he made no explicit use of it, perhaps because he believed geometrical arguments in the tradition of Euclid to be more rigorous. Consequently, the Principia does not express acceleration as the second derivative of position, and so it does not give the second law as $F = ma$. This form of the second law was written (for the special case of constant force) at least as early as 1716, by Jakob Hermann; Leonhard Euler would employ it as a basic premise in the 1740s. Euler pioneered the study of rigid bodies and established the basic theory of fluid dynamics. Pierre-Simon Laplace's five-volume Traité de mécanique céleste (1798–1825) forsook geometry and developed mechanics purely through algebraic expressions, while resolving questions that the Principia had left open, like a full theory of the tides.
The concept of energy became a key part of Newtonian mechanics in the post-Newton period. Huygens' solution of the collision of hard spheres showed that in that case, not only is momentum conserved, but kinetic energy is as well (or, rather, a quantity that in retrospect we can identify as one-half the total kinetic energy). The question of what is conserved during all other processes, like inelastic collisions and motion slowed by friction, was not resolved until the 19th century. Debates on this topic overlapped with philosophical disputes between the metaphysical views of Newton and Leibniz, and variants of the term "force" were sometimes used to denote what we would call types of energy. For example, in 1742, Émilie du Châtelet wrote, "Dead force consists of a simple tendency to motion: such is that of a spring ready to relax; living force is that which a body has when it is in actual motion." In modern terminology, "dead force" and "living force" correspond to potential energy and kinetic energy respectively. Conservation of energy was not established as a universal principle until it was understood that the energy of mechanical work can be dissipated into heat. With the concept of energy given a solid grounding, Newton's laws could then be derived within formulations of classical mechanics that put energy first, as in the Lagrangian and Hamiltonian formulations described above.
Modern presentations of Newton's laws use the mathematics of vectors, a topic that was not developed until the late 19th and early 20th centuries. Vector algebra, pioneered by Josiah Willard Gibbs and Oliver Heaviside, stemmed from and largely supplanted the earlier system of quaternions invented by William Rowan Hamilton.
See also
Euler's laws of motion
History of classical mechanics
List of eponymous laws
List of equations in classical mechanics
List of scientific laws named after people
List of textbooks on classical mechanics and quantum mechanics
Norton's dome
Notes
References
Further reading
Newton’s Laws of Dynamics - The Feynman Lectures on Physics
Classical mechanics
Isaac Newton
Texts in Latin
Equations of physics
Scientific observation
Experimental physics
Copernican Revolution
Articles containing video clips
Scientific laws
Eponymous laws of physics | Newton's laws of motion | [
"Physics",
"Astronomy",
"Mathematics"
] | 10,810 | [
"Equations of physics",
"History of astronomy",
"Mathematical objects",
"Classical mechanics",
"Equations",
"Scientific laws",
"Mechanics",
"Experimental physics",
"Copernican Revolution"
] |
55,227 | https://en.wikipedia.org/wiki/Baire%20category%20theorem | The Baire category theorem (BCT) is an important result in general topology and functional analysis. The theorem has two forms, each of which gives sufficient conditions for a topological space to be a Baire space (a topological space such that the intersection of countably many dense open sets is still dense). It is used in the proof of results in many areas of analysis and geometry, including some of the fundamental theorems of functional analysis.
Versions of the Baire category theorem were first proved independently in 1897 by Osgood for the real line $\mathbb{R}$ and in 1899 by Baire for Euclidean space $\mathbb{R}^n$. The more general statement for completely metrizable spaces was first shown by Hausdorff in 1914.
Statement
A Baire space is a topological space $X$ in which every countable intersection of open dense sets is dense in $X$. See the corresponding article for a list of equivalent characterizations, as some are more useful than others depending on the application.
(BCT1) Every complete pseudometric space is a Baire space. In particular, every completely metrizable topological space is a Baire space.
(BCT2) Every locally compact regular space is a Baire space. In particular, every locally compact Hausdorff space is a Baire space.
Neither of these statements directly implies the other, since there are complete metric spaces that are not locally compact (the irrational numbers with the metric defined below; also, any Banach space of infinite dimension), and there are locally compact Hausdorff spaces that are not metrizable (for instance, any uncountable product of non-trivial compact Hausdorff spaces; also, several function spaces used in functional analysis; the uncountable Fort space).
See Steen and Seebach in the references below.
Relation to the axiom of choice
The proof of BCT1 for arbitrary complete metric spaces requires some form of the axiom of choice; and in fact BCT1 is equivalent over ZF to the axiom of dependent choice, a weak form of the axiom of choice.
A restricted form of the Baire category theorem, in which the complete metric space is also assumed to be separable, is provable in ZF with no additional choice principles.
This restricted form applies in particular to the real line, the Baire space $\mathbb{N}^{\mathbb{N}}$, the Cantor space $2^{\mathbb{N}}$, and a separable Hilbert space such as the $L^2$-space $L^2(\mathbb{R}^n)$.
Uses
In functional analysis, BCT1 can be used to prove the open mapping theorem, the closed graph theorem and the uniform boundedness principle.
BCT1 also shows that every nonempty complete metric space with no isolated point is uncountable. (If $X$ is a nonempty countable metric space with no isolated point, then each singleton $\{x\}$ in $X$ is nowhere dense, and $X$ is meagre in itself.) In particular, this proves that the set of all real numbers is uncountable.
BCT1 shows that each of the following is a Baire space:
The space of real numbers
The irrational numbers, with the metric defined by $d(x, y) = \tfrac{1}{n+1}$, where $n$ is the first index for which the continued fraction expansions of $x$ and $y$ differ (this is a complete metric space; a small computational sketch follows this list)
The Cantor set
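To make the metric concrete, here is a minimal Python sketch (not part of the article; the function names are illustrative). It compares rational approximants, whose continued-fraction expansions terminate, whereas the metric above lives on the irrationals, whose expansions are infinite; the comparison rule is the same.

from fractions import Fraction
from itertools import zip_longest

def continued_fraction(x, terms=12):
    # First `terms` continued-fraction coefficients [a0; a1, a2, ...] of x > 0.
    coeffs = []
    for _ in range(terms):
        a = int(x)                # integer part
        coeffs.append(a)
        frac = x - a
        if frac == 0:             # expansion terminates (rational input)
            break
        x = 1 / frac              # continue with the reciprocal of the remainder
    return coeffs

def metric(x, y, terms=12):
    # d(x, y) = 1/(n+1), where n is the first index at which the expansions
    # of x and y differ; returns 0 when they agree on every computed term.
    cx, cy = continued_fraction(x, terms), continued_fraction(y, terms)
    if cx == cy:
        return Fraction(0)
    n = next(i for i, (a, b) in enumerate(zip_longest(cx, cy)) if a != b)
    return Fraction(1, n + 1)

# 99/70 = [1; 2, 2, 2, 2, 2] approximates sqrt(2), while 7/5 = [1; 2, 2]:
print(metric(Fraction(99, 70), Fraction(7, 5)))   # 1/4, first difference at index 3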
By BCT2, every finite-dimensional Hausdorff manifold is a Baire space, since it is locally compact and Hausdorff. This is so even for non-paracompact (hence nonmetrizable) manifolds such as the long line.
BCT is used to prove Hartogs's theorem, a fundamental result in the theory of several complex variables.
BCT1 is used to prove that a Banach space cannot have countably infinite dimension.
Proof
(BCT1) The following is a standard proof that a complete pseudometric space is a Baire space.
Let $W_1, W_2, \ldots$ be a countable collection of open dense subsets. We want to show that the intersection $W_1 \cap W_2 \cap \cdots$ is dense.
A subset is dense if and only if every nonempty open subset intersects it. Thus to show that the intersection is dense, it suffices to show that any nonempty open subset $U$ of $X$ has some point $x$ in common with all of the $W_n$.
Because $W_1$ is dense, $U$ intersects $W_1$; consequently, there exists a point $x_1$ and a number $0 < r_1 < 1$ such that:
$$\overline{B}(x_1, r_1) \subseteq U \cap W_1,$$
where $B(x, r)$ and $\overline{B}(x, r)$ denote an open and closed ball, respectively, centered at $x$ with radius $r$.
Since each $W_n$ is dense, this construction can be continued recursively to find a pair of sequences $x_n$ and $0 < r_n < \tfrac{1}{2^n}$ such that:
$$\overline{B}(x_n, r_n) \subseteq B(x_{n-1}, r_{n-1}) \cap W_n.$$
(This step relies on the axiom of choice and the fact that a finite intersection of open sets is open and hence an open ball can be found inside it centered at $x_n$.)
The sequence $(x_n)$ is Cauchy because $x_n \in B(x_m, r_m)$ whenever $n > m$, and hence $(x_n)$ converges to some limit $x$ by completeness.
If $n$ is a positive integer then $x \in \overline{B}(x_n, r_n)$ (because this set is closed).
Thus $x \in U$ and $x \in W_n$ for all $n$.
There is an alternative proof using Choquet's game.
(BCT2) The proof that a locally compact regular space is a Baire space is similar. It uses the facts that (1) in such a space every point has a local base of closed compact neighborhoods; and (2) in a compact space any collection of closed sets with the finite intersection property has nonempty intersection. The result for locally compact Hausdorff spaces is a special case, as such spaces are regular.
Notes
References
Reprinted by Dover Publications, New York, 1995. (Dover edition).
External links
Encyclopaedia of Mathematics article on Baire theorem
Articles containing proofs
Functional analysis
General topology
Theorems in topology | Baire category theorem | [
"Mathematics"
] | 1,104 | [
"General topology",
"Functions and mappings",
"Mathematical theorems",
"Functional analysis",
"Mathematical objects",
"Theorems in topology",
"Topology",
"Mathematical relations",
"Articles containing proofs",
"Mathematical problems"
] |
55,236 | https://en.wikipedia.org/wiki/Compton%20scattering | Compton scattering (or the Compton effect) is the quantum theory of high frequency photons scattering following an interaction with a charged particle, usually an electron. Specifically, when the photon hits electrons, it releases loosely bound electrons from the outer valence shells of atoms or molecules.
The effect was discovered in 1923 by Arthur Holly Compton while researching the scattering of X-rays by light elements, and earned him the Nobel Prize for Physics in 1927. The Compton effect significantly deviated from the then-dominant classical theories, using both special relativity and quantum mechanics to explain the interaction between high-frequency photons and charged particles.
Photons can interact with matter at the atomic level (e.g. photoelectric effect and Rayleigh scattering), at the nucleus, or with just an electron. Pair production and the Compton effect occur at the level of the electron. When a high frequency photon scatters due to an interaction with a charged particle, there is a decrease in the energy of the photon and thus, an increase in its wavelength. This tradeoff between wavelength and energy in response to the collision is the Compton effect. Because of conservation of energy, the lost energy from the photon is transferred to the recoiling particle (such an electron would be called a "Compton Recoil electron").
This implies that if the recoiling particle initially carried more energy than the photon, the reverse would occur. This is known as inverse Compton scattering, in which the scattered photon increases in energy.
Introduction
In Compton's original experiment (see Fig. 1), the energy of the X-ray photon (≈ 17 keV) was significantly larger than the binding energy of the atomic electron, so the electrons could be treated as being free after scattering. The amount by which the light's wavelength changes is called the Compton shift. Although nuclear Compton scattering exists, Compton scattering usually refers to the interaction involving only the electrons of an atom. The Compton effect was observed by Arthur Holly Compton in 1923 at Washington University in St. Louis and further verified by his graduate student Y. H. Woo in the years following. Compton was awarded the 1927 Nobel Prize in Physics for the discovery.
The effect is significant because it demonstrates that light cannot be explained purely as a wave phenomenon. Thomson scattering, the classical theory of an electromagnetic wave scattered by charged particles, cannot explain shifts in wavelength at low intensity: classically, light of sufficient intensity for the electric field to accelerate a charged particle to a relativistic speed will cause radiation-pressure recoil and an associated Doppler shift of the scattered light, but the effect would become arbitrarily small at sufficiently low light intensities regardless of wavelength. Thus, if we are to explain low-intensity Compton scattering, light must behave as if it consists of particles. Alternatively, the assumption that the electron can be treated as free may be invalid, in which case the effective mass is the nuclear mass and is effectively infinite (see, e.g., the comment below on elastic scattering of X-rays arising from that effect). Compton's experiment convinced physicists that light can be treated as a stream of particle-like objects (quanta called photons), whose energy is proportional to the light wave's frequency.
As shown in Fig. 2, the interaction between an electron and a photon results in the electron being given part of the energy (making it recoil), and a photon of the remaining energy being emitted in a different direction from the original, so that the overall momentum of the system is also conserved. If the scattered photon still has enough energy, the process may be repeated. In this scenario, the electron is treated as free or loosely bound. Experimental verification of momentum conservation in individual Compton scattering processes by Bothe and Geiger as well as by Compton and Simon has been important in disproving the BKS theory.
Compton scattering is commonly described as inelastic scattering. This is because, unlike the more common Thomson scattering that happens at the low-energy limit, the energy in the scattered photon in Compton scattering is less than the energy of the incident photon. As the electron is typically weakly bound to the atom, the scattering can be viewed from either the perspective of an electron in a potential well, or as an atom with a small ionization energy. In the former perspective, energy of the incident photon is transferred to the recoil particle, but only as kinetic energy. The electron gains no internal energy, respective masses remain the same, the mark of an elastic collision. From this perspective, Compton scattering could be considered elastic because the internal state of the electron does not change during the scattering process. In the latter perspective, the atom's state is changed, constituting an inelastic collision. Whether Compton scattering is considered elastic or inelastic depends on which perspective is being used, as well as the context.
Compton scattering is one of four competing processes when photons interact with matter. At energies of a few eV to a few keV, corresponding to visible light through soft X-rays, a photon can be completely absorbed and its energy can eject an electron from its host atom, a process known as the photoelectric effect. High-energy photons of 1.022 MeV (twice the electron rest energy) and above may bombard the nucleus and cause an electron and a positron to be formed, a process called pair production; even-higher-energy photons (beyond a threshold energy of at least about 1.7 MeV, depending on the nuclei involved) can eject a nucleon or alpha particle from the nucleus in a process called photodisintegration. Compton scattering is the most important interaction in the intervening energy region, at photon energies greater than those typical of the photoelectric effect but less than the pair-production threshold.
Description of the phenomenon
By the early 20th century, research into the interaction of X-rays with matter was well under way. It was observed that when X-rays of a known wavelength interact with atoms, the X-rays are scattered through an angle $\theta$ and emerge at a different wavelength related to $\theta$. Although classical electromagnetism predicted that the wavelength of scattered rays should be equal to the initial wavelength, multiple experiments had found that the wavelength of the scattered rays was longer (corresponding to lower energy) than the initial wavelength.
In 1923, Compton published a paper in the Physical Review that explained the X-ray shift by attributing particle-like momentum to light quanta (Albert Einstein had proposed light quanta in 1905 in explaining the photo-electric effect, but Compton did not build on Einstein's work). The energy of light quanta depends only on the frequency of the light. In his paper, Compton derived the mathematical relationship between the shift in wavelength and the scattering angle of the X-rays by assuming that each scattered X-ray photon interacted with only one electron. His paper concludes by reporting on experiments which verified his derived relation:
$$\lambda' - \lambda = \frac{h}{m_e c}\,(1 - \cos\theta),$$
where
$\lambda$ is the initial wavelength,
$\lambda'$ is the wavelength after scattering,
$h$ is the Planck constant,
$m_e$ is the electron rest mass,
$c$ is the speed of light, and
$\theta$ is the scattering angle.
The quantity $h/(m_e c)$ is known as the Compton wavelength of the electron; it is equal to 2.43×10⁻¹² m. The wavelength shift is at least zero (for $\theta = 0°$) and at most twice the Compton wavelength of the electron (for $\theta = 180°$).
Compton found that some X-rays experienced no wavelength shift despite being scattered through large angles; in each of these cases the photon failed to eject an electron. Thus the magnitude of the shift is related not to the Compton wavelength of the electron, but to the Compton wavelength of the entire atom, which can be upwards of 10000 times smaller. This is known as "coherent" scattering off the entire atom since the atom remains intact, gaining no internal excitation.
In Compton's original experiments the wavelength shift given above was the directly measurable observable. In modern experiments it is conventional to measure the energies, not the wavelengths, of the scattered photons. For a given incident energy $E_\gamma = hc/\lambda$, the outgoing final-state photon energy, $E_{\gamma'}$, is given by
$$E_{\gamma'} = \frac{E_\gamma}{1 + (E_\gamma / m_e c^2)(1 - \cos\theta)}.$$
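As a numerical illustration (a sketch, not part of the article), the following Python code evaluates both the wavelength shift and the outgoing photon energy. The constants are standard CODATA-style values; the 17.5 keV example roughly matches the molybdenum Kα X-rays of Compton's original experiment.

import math

H = 6.62607015e-34           # Planck constant, J·s
M_E = 9.1093837015e-31       # electron rest mass, kg
C = 2.99792458e8             # speed of light, m/s
J_PER_EV = 1.602176634e-19   # joules per electronvolt

def compton_shift(theta_rad):
    # Wavelength shift: lambda' - lambda = (h / m_e c)(1 - cos theta), in metres.
    return H / (M_E * C) * (1 - math.cos(theta_rad))

def scattered_energy(e_gamma_ev, theta_rad):
    # Outgoing photon energy E' = E / (1 + (E / m_e c^2)(1 - cos theta)), in eV.
    mc2_ev = M_E * C**2 / J_PER_EV        # electron rest energy, ~511 keV
    return e_gamma_ev / (1 + (e_gamma_ev / mc2_ev) * (1 - math.cos(theta_rad)))

print(compton_shift(math.pi / 2))            # ~2.43e-12 m: one Compton wavelength at 90°
print(scattered_energy(17.5e3, math.pi / 2)) # ~16.9 keV scattered photon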
Derivation of the scattering formula
A photon $\gamma$ with wavelength $\lambda$ collides with an electron $e$ in an atom, which is treated as being at rest. The collision causes the electron to recoil, and a new photon $\gamma'$ with wavelength $\lambda'$ emerges at angle $\theta$ from the photon's incoming path. Let $e'$ denote the electron after the collision. Compton allowed for the possibility that the interaction would sometimes accelerate the electron to speeds sufficiently close to the velocity of light as to require the application of Einstein's special relativity theory to properly describe its energy and momentum.
At the conclusion of Compton's 1923 paper, he reported results of experiments confirming the predictions of his scattering formula, thus supporting the assumption that photons carry momentum as well as quantized energy. At the start of his derivation, he had postulated an expression for the momentum of a photon from equating Einstein's already established mass–energy relationship of $E = mc^2$ to the quantized photon energies of $E = hf$, which Einstein had separately postulated. If $mc^2 = hf$, the equivalent photon mass must be $hf/c^2$. The photon's momentum is then simply this effective mass times the photon's frame-invariant velocity $c$. For a photon, its momentum $p = hf/c$, and thus $hf$ can be substituted for $pc$ for all photon momentum terms which arise in course of the derivation below. The derivation which appears in Compton's paper is more terse, but follows the same logic in the same sequence as the following derivation.
The conservation of energy merely equates the sum of energies before and after scattering:
$$E_\gamma + E_e = E_{\gamma'} + E_{e'}.$$
Compton postulated that photons carry momentum; thus from the conservation of momentum, the momenta of the particles should be similarly related by
$$\mathbf{p}_\gamma = \mathbf{p}_{\gamma'} + \mathbf{p}_{e'},$$
in which the initial electron momentum ($\mathbf{p}_e$) is omitted on the assumption it is effectively zero.
The photon energies are related to the frequencies by
$$E_\gamma = hf, \qquad E_{\gamma'} = hf',$$
where h is the Planck constant.
Before the scattering event, the electron is treated as sufficiently close to being at rest that its total energy consists entirely of the mass–energy equivalence of its (rest) mass $m_e$,
$$E_e = m_e c^2.$$
After scattering, the possibility that the electron might be accelerated to a significant fraction of the speed of light requires that its total energy be represented using the relativistic energy–momentum relation
$$E_{e'} = \sqrt{(p_{e'}c)^2 + (m_e c^2)^2}.$$
Substituting these quantities into the expression for the conservation of energy gives
$$hf + m_e c^2 = hf' + \sqrt{(p_{e'}c)^2 + (m_e c^2)^2}.$$
This expression can be used to find the magnitude of the momentum of the scattered electron,
$$p_{e'}^2 c^2 = (hf - hf' + m_e c^2)^2 - m_e^2 c^4. \qquad (1)$$
Note that this magnitude of the momentum gained by the electron (formerly zero) exceeds the energy/c lost by the photon,
$$\frac{1}{c}(hf - hf').$$
Equation (1) relates the various energies associated with the collision. The electron's momentum change involves a relativistic change in the energy of the electron, so it is not simply related to the change in energy occurring in classical physics. The change of the magnitude of the momentum of the photon is not just related to the change of its energy; it also involves a change in direction.
Solving the conservation of momentum expression for the scattered electron's momentum gives
$$\mathbf{p}_{e'} = \mathbf{p}_\gamma - \mathbf{p}_{\gamma'}.$$
Making use of the scalar product yields the square of its magnitude,
$$p_{e'}^2 = p_\gamma^2 + p_{\gamma'}^2 - 2\, p_\gamma p_{\gamma'} \cos\theta.$$
In anticipation of $p_\gamma c$ being replaced with $hf$, multiply both sides by $c^2$,
$$p_{e'}^2 c^2 = p_\gamma^2 c^2 + p_{\gamma'}^2 c^2 - 2 c^2 p_\gamma p_{\gamma'} \cos\theta.$$
After replacing the photon momentum terms with $hf/c$, we get a second expression for the magnitude of the momentum of the scattered electron,
$$p_{e'}^2 c^2 = (hf)^2 + (hf')^2 - 2(hf)(hf')\cos\theta. \qquad (2)$$
Equating the alternate expressions for this momentum gives
$$(hf - hf' + m_e c^2)^2 - m_e^2 c^4 = (hf)^2 + (hf')^2 - 2(hf)(hf')\cos\theta,$$
which, after evaluating the square and canceling and rearranging terms, further yields
$$2 h f m_e c^2 - 2 h f' m_e c^2 = 2 h^2 f f' (1 - \cos\theta).$$
Dividing both sides by $2 h f f' m_e c$ yields
$$\frac{c}{f'} - \frac{c}{f} = \frac{h}{m_e c}(1 - \cos\theta).$$
Finally, since $f\lambda = f'\lambda' = c$,
$$\lambda' - \lambda = \frac{h}{m_e c}(1 - \cos\theta).$$
It can further be seen that the angle $\varphi$ of the outgoing electron with the direction of the incoming photon is specified by
$$\cot\varphi = \left(1 + \frac{hf}{m_e c^2}\right)\tan(\theta/2).$$
Applications
Compton scattering
Compton scattering is of prime importance to radiobiology, as it is the most probable interaction of gamma rays and high energy X-rays with atoms in living beings and is applied in radiation therapy.
Compton scattering is an important effect in gamma spectroscopy which gives rise to the Compton edge, as it is possible for the gamma rays to scatter out of the detectors used. Compton suppression is used to detect stray scatter gamma rays to counteract this effect.
Magnetic Compton scattering
Magnetic Compton scattering is an extension of the previously mentioned technique which involves the magnetisation of a crystal sample hit with high energy, circularly polarised photons. By measuring the scattered photons' energy and reversing the magnetisation of the sample, two different Compton profiles are generated (one for spin up momenta and one for spin down momenta). Taking the difference between these two profiles gives the magnetic Compton profile (MCP), $J_{\text{mag}}(p_z)$ – a one-dimensional projection of the electron spin density, given by
$$J_{\text{mag}}(p_z) = \frac{1}{\mu}\iint \left( n_\uparrow(\mathbf{p}) - n_\downarrow(\mathbf{p}) \right) dp_x\, dp_y,$$
where $\mu$ is the number of spin-unpaired electrons in the system, and $n_\uparrow(\mathbf{p})$ and $n_\downarrow(\mathbf{p})$ are the three-dimensional electron momentum distributions for the majority spin and minority spin electrons respectively.
Since this scattering process is incoherent (there is no phase relationship between the scattered photons), the MCP is representative of the bulk properties of the sample and is a probe of the ground state. This means that the MCP is ideal for comparison with theoretical techniques such as density functional theory.
The area under the MCP is directly proportional to the spin moment of the system and so, when combined with total moment measurement methods (such as SQUID magnetometry), can be used to isolate both the spin and orbital contributions to the total moment of a system.
The shape of the MCP also yields insight into the origin of the magnetism in the system.
Inverse Compton scattering
Inverse Compton scattering is important in astrophysics. In X-ray astronomy, the accretion disk surrounding a black hole is presumed to produce a thermal spectrum. The lower energy photons produced from this spectrum are scattered to higher energies by relativistic electrons in the surrounding corona. This is surmised to cause the power law component in the X-ray spectra (0.2–10 keV) of accreting black holes.
The effect is also observed when photons from the cosmic microwave background (CMB) move through the hot gas surrounding a galaxy cluster. The CMB photons are scattered to higher energies by the electrons in this gas, resulting in the Sunyaev–Zel'dovich effect. Observations of the Sunyaev–Zel'dovich effect provide a nearly redshift-independent means of detecting galaxy clusters.
Some synchrotron radiation facilities scatter laser light off the stored electron beam.
This Compton backscattering produces high energy photons in the MeV to GeV range subsequently used for nuclear physics experiments.
Non-linear inverse Compton scattering
Non-linear inverse Compton scattering (NICS) is the scattering of multiple low-energy photons, given by an intense electromagnetic field, in a high-energy photon (X-ray or gamma ray) during the interaction with a charged particle, such as an electron. It is also called non-linear Compton scattering and multiphoton Compton scattering. It is the non-linear version of inverse Compton scattering in which the conditions for multiphoton absorption by the charged particle are reached due to a very intense electromagnetic field, for example the one produced by a laser.
Non-linear inverse Compton scattering is an interesting phenomenon for all applications requiring high-energy photons since NICS is capable of producing photons with energy comparable to the charged particle rest energy and higher. As a consequence NICS photons can be used to trigger other phenomena such as pair production, Compton scattering, nuclear reactions, and can be used to probe non-linear quantum effects and non-linear QED.
See also
References
Further reading
(the original 1923 paper on the APS website)
Stuewer, Roger H. (1975), The Compton Effect: Turning Point in Physics (New York: Science History Publications)
External links
Compton Scattering – Georgia State University
Compton Scattering Data – Georgia State University
Derivation of Compton shift equation
Astrophysics
Observational astronomy
Atomic physics
Foundational quantum physics
Quantum electrodynamics
X-ray scattering | Compton scattering | [
"Physics",
"Chemistry",
"Astronomy"
] | 3,167 | [
"X-ray scattering",
"Foundational quantum physics",
"Observational astronomy",
"Quantum mechanics",
"Astrophysics",
"Scattering",
"Atomic physics",
" molecular",
"Atomic",
"Astronomical sub-disciplines",
" and optical physics"
] |
55,244 | https://en.wikipedia.org/wiki/Hypersonic%20speed | In aerodynamics, a hypersonic speed is one that exceeds five times the speed of sound, often stated as starting at speeds of Mach 5 and above.
The precise Mach number at which a craft can be said to be flying at hypersonic speed varies, since individual physical changes in the airflow (like molecular dissociation and ionization) occur at different speeds; these effects collectively become important around Mach 5–10. The hypersonic regime can alternatively be defined as the range of speeds where the specific heat capacity changes with the temperature of the flow, as kinetic energy of the moving object is converted into heat.
Characteristics of flow
While the definition of hypersonic flow can be quite vague and is generally debatable (especially because of the absence of discontinuity between supersonic and hypersonic flows), a hypersonic flow may be characterized by certain physical phenomena that can no longer be analytically discounted as in supersonic flow. The peculiarities in hypersonic flows are as follows:
Shock layer
Aerodynamic heating
Entropy layer
Real gas effects
Low density effects
Independence of aerodynamic coefficients from Mach number.
Small shock stand-off distance
As a body's Mach number increases, the density behind a bow shock generated by the body also increases, which corresponds to a decrease in volume behind the shock due to conservation of mass. Consequently, the distance between the bow shock and the body decreases at higher Mach numbers.
Entropy layer
As Mach numbers increase, the entropy change across the shock also increases, which results in a strong entropy gradient and highly vortical flow that mixes with the boundary layer.
Viscous interaction
A portion of the large kinetic energy associated with flow at high Mach numbers transforms into internal energy in the fluid due to viscous effects. The increase in internal energy is realized as an increase in temperature. Since the pressure gradient normal to the flow within a boundary layer is approximately zero for low to moderate hypersonic Mach numbers, the increase of temperature through the boundary layer coincides with a decrease in density. This causes the bottom of the boundary layer to expand, so that the boundary layer over the body grows thicker and can often merge with the shock wave near the body leading edge.
High-temperature flow
High temperatures, a manifestation of viscous dissipation, cause non-equilibrium chemical flow properties such as vibrational excitation and the dissociation and ionization of molecules, resulting in convective and radiative heat flux.
Classification of Mach regimes
Although "subsonic" and "supersonic" usually refer to speeds below and above the local speed of sound respectively, aerodynamicists often use these terms to refer to particular ranges of Mach values. When an aircraft approaches transonic speeds (around Mach 1), it enters a special regime. The usual approximations based on the Navier–Stokes equations, which work well for subsonic designs, start to break down because, even in the freestream, some parts of the flow locally exceed Mach 1. So, more sophisticated methods are needed to handle this complex behavior.
The "supersonic regime" usually refers to the set of Mach numbers for which linearised theory may be used; for example, where the (air) flow is not chemically reacting and where heat transfer between air and vehicle may be reasonably neglected in calculations. Generally, NASA defines "high" hypersonic as any Mach number from 10 to 25, and re-entry speeds as anything greater than Mach 25. Among the spacecraft operating in these regimes are returning Soyuz and Dragon space capsules; the previously-operated Space Shuttle; various reusable spacecraft in development such as SpaceX Starship and Rocket Lab Electron; and (theoretical) spaceplanes.
In the following table, the "regimes" or "ranges of Mach values" are referenced instead of the usual meanings of "subsonic" and "supersonic".
Similarity parameters
The categorization of airflow relies on a number of similarity parameters, which allow the simplification of a nearly infinite number of test cases into groups of similarity. For transonic and compressible flow, the Mach and Reynolds numbers alone allow good categorization of many flow cases.
Hypersonic flows, however, require other similarity parameters. First, the analytic equations for the oblique shock angle become nearly independent of Mach number at high (~>10) Mach numbers. Second, the formation of strong shocks around aerodynamic bodies means that the freestream Reynolds number is less useful as an estimate of the behavior of the boundary layer over a body (although it is still important). Finally, the increased temperature of hypersonic flow mean that real gas effects become important. Research in hypersonics is therefore often called aerothermodynamics, rather than aerodynamics.
The introduction of real gas effects means that more variables are required to describe the full state of a gas. Whereas a stationary gas can be described by three variables (pressure, temperature, adiabatic index), and a moving gas by four (flow velocity), a hot gas in chemical equilibrium also requires state equations for the chemical components of the gas, and a gas in nonequilibrium solves those state equations using time as an extra variable. This means that for nonequilibrium flow, something between 10 and 100 variables may be required to describe the state of the gas at any given time. Additionally, rarefied hypersonic flows (usually defined as those with a Knudsen number above 0.1) do not follow the Navier–Stokes equations.
Hypersonic flows are typically categorized by their total energy, expressed as total enthalpy (MJ/kg), total pressure (kPa-MPa), stagnation pressure (kPa-MPa), stagnation temperature (K), or flow velocity (km/s).
Wallace D. Hayes developed a similarity parameter, similar to the Whitcomb area rule, which allowed similar configurations to be compared. In the study of hypersonic flow over slender bodies, the product of the freestream Mach number $M$ and the flow deflection angle $\theta$, known as the hypersonic similarity parameter
$$K = M\theta,$$
is considered to be an important governing parameter. The slenderness ratio of a vehicle, $\tau = d/\ell$, where $d$ is the diameter and $\ell$ is the length, is often substituted for $\theta$.
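As a minimal sketch (not from the source; the numbers are purely illustrative), the similarity parameter can be evaluated directly, either from a flow deflection angle or from the slenderness ratio:

import math

def hypersonic_similarity(mach, deflection_deg=None, diameter=None, length=None):
    # K = M * theta, with theta in radians; the slenderness ratio d/l
    # may be substituted for the deflection angle of a slender body.
    if deflection_deg is not None:
        theta = math.radians(deflection_deg)
    else:
        theta = diameter / length
    return mach * theta

print(hypersonic_similarity(8, deflection_deg=5))          # K ≈ 0.70
print(hypersonic_similarity(8, diameter=0.5, length=5.0))  # K = 0.80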
Regimes
Hypersonic flow can be approximately separated into a number of regimes. The selection of these regimes is rough, due to the blurring of the boundaries where a particular effect can be found.
Perfect gas
In this regime, the gas can be regarded as an ideal gas. Flow in this regime is still Mach number dependent. Simulations start to depend on the use of a constant-temperature wall, rather than the adiabatic wall typically used at lower speeds. The lower border of this region is around Mach 5, where ramjets become inefficient, and the upper border around Mach 10–12.
Two-temperature ideal gas
This is a subset of the perfect gas regime, where the gas can be considered chemically perfect, but the rotational and vibrational temperatures of the gas must be considered separately, leading to two temperature models. See particularly the modeling of supersonic nozzles, where vibrational freezing becomes important.
Dissociated gas
In this regime, diatomic or polyatomic gases (the gases found in most atmospheres) begin to dissociate as they come into contact with the bow shock generated by the body. Surface catalysis plays a role in the calculation of surface heating, meaning that the type of surface material also has an effect on the flow. The lower border of this regime is where any component of a gas mixture first begins to dissociate in the stagnation point of a flow (which for nitrogen is around 2000 K). At the upper border of this regime, the effects of ionization start to have an effect on the flow.
Ionized gas
In this regime the ionized electron population of the stagnated flow becomes significant, and the electrons must be modeled separately. Often the electron temperature is handled separately from the temperature of the remaining gas components. This region occurs for freestream flow velocities around 3–4 km/s. Gases in this region are modeled as non-radiating plasmas.
Radiation-dominated regime
Above around 12 km/s, the heat transfer to a vehicle changes from being conductively dominated to radiatively dominated. The modeling of gases in this regime is split into two classes:
Optically thin: where the gas does not re-absorb radiation emitted from other parts of the gas
Optically thick: where the radiation must be considered a separate source of energy.
The modeling of optically thick gases is extremely difficult, since, due to the calculation of the radiation at each point, the computation load theoretically expands exponentially as the number of points considered increases.
See also
Hypersonic glide vehicle
Supersonic transport
Lifting body
Atmospheric entry
Hypersonic flight
DARPA Falcon Project
Reaction Engines Skylon (design study)
Reaction Engines A2 (design study)
HyperSoar (concept)
Boeing X-51 Waverider
X-20 Dyna-Soar (cancelled)
Rockwell X-30 (cancelled)
Avatar RLV (2001 Indian concept study)
Hypersonic Technology Demonstrator Vehicle (Indian project)
Ayaks (Russian wave rider project from the 1990s)
Avangard (Russian hypersonic glide vehicle, in service)
DF-ZF (Chinese hypersonic glide vehicle, operational)
Lockheed Martin SR-72 (planned)
WZ-8 Chinese Hypersonic surveillance UAV (In Service)
MD-22 Chinese Hypersonic Unmanned combat aerial vehicle (In development)
Engines
Rocket engine
Ramjet
Scramjet
Reaction Engines SABRE, LAPCAT (design studies)
Missiles
3M22 Zircon Anti-ship hypersonic cruise missile (in production)
BrahMos-II Cruise Missile – (Under Development)
Other flow regimes
Subsonic flight
Transonic
Supersonic speed
References
External links
NASA's Guide to Hypersonics
Hypersonics Group at Imperial College
University of Queensland Centre for Hypersonics
High Speed Flow Group at University of New South Wales
Hypersonics Group at the University of Oxford
Aerodynamics
Aerospace engineering
Airspeed
Spacecraft propulsion | Hypersonic speed | [
"Physics",
"Chemistry",
"Engineering"
] | 2,046 | [
"Physical quantities",
"Aerodynamics",
"Airspeed",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
55,275 | https://en.wikipedia.org/wiki/Denotational%20semantics | In computer science, denotational semantics (initially known as mathematical semantics or Scott–Strachey semantics) is an approach of formalizing the meanings of programming languages by constructing mathematical objects (called denotations) that describe the meanings of expressions from the languages. Other approaches providing formal semantics of programming languages include axiomatic semantics and operational semantics.
Broadly speaking, denotational semantics is concerned with finding mathematical objects called domains that represent what programs do. For example, programs (or program phrases) might be represented by partial functions or by games between the environment and the system.
An important tenet of denotational semantics is that semantics should be compositional: the denotation of a program phrase should be built out of the denotations of its subphrases.
Historical development
Denotational semantics originated in the work of Christopher Strachey and Dana Scott published in the early 1970s. As originally developed by Strachey and Scott, denotational semantics provided the meaning of a computer program as a function that mapped input into output. To give meanings to recursively defined programs, Scott proposed working with continuous functions between domains, specifically complete partial orders. As described below, work has continued in investigating appropriate denotational semantics for aspects of programming languages such as sequentiality, concurrency, non-determinism and local state.
Denotational semantics has been developed for modern programming languages that use capabilities like concurrency and exceptions, e.g., Concurrent ML, CSP, and Haskell. The semantics of these languages is compositional in that the meaning of a phrase depends on the meanings of its subphrases. For example, the meaning of the applicative expression f(E1,E2) is defined in terms of semantics of its subphrases f, E1 and E2. In a modern programming language, E1 and E2 can be evaluated concurrently and the execution of one of them might affect the other by interacting through shared objects causing their meanings to be defined in terms of each other. Also, E1 or E2 might throw an exception which could terminate the execution of the other one. The sections below describe special cases of the semantics of these modern programming languages.
Meanings of recursive programs
Denotational semantics is ascribed to a program phrase as a function from an environment (holding current values of its free variables) to its denotation. For example, the phrase n*m produces a denotation when provided with an environment that has bindings for its two free variables: n and m. If in the environment n has the value 3 and m has the value 5, then the denotation is 15.
A function can be represented as a set of ordered pairs of argument and corresponding result values. For example, the set {(0,1), (4,3)} denotes a function with result 1 for argument 0, result 3 for the argument 4, and undefined otherwise.
Consider for example the factorial function, which might be defined recursively as:
int factorial(int n) { if (n == 0) return 1; else return n * factorial(n - 1); }
To provide a meaning for this recursive definition, the denotation is built up as the limit of approximations, where each approximation limits the number of calls to factorial. At the beginning, we start with no calls - hence nothing is defined. In the next approximation, we can add the ordered pair (0,1), because this doesn't require calling factorial again. Similarly we can add (1,1), (2,2), etc., adding one pair each successive approximation because computing factorial(n) requires n+1 calls. In the limit we get a total function from $\mathbb{N}$ to $\mathbb{N}$ defined everywhere in its domain.
Formally we model each approximation as a partial function $\mathbb{N} \rightharpoonup \mathbb{N}$. Our approximation is then repeatedly applying a function implementing "make a more defined partial factorial function", i.e. $F : (\mathbb{N} \rightharpoonup \mathbb{N}) \to (\mathbb{N} \rightharpoonup \mathbb{N})$, starting with the empty function (empty set). F could be defined in code as follows (using Map<int,int> for $\mathbb{N} \rightharpoonup \mathbb{N}$):
int factorial_nonrecursive(Map<int,int> factorial_less_defined, int n)
{
    // Computes factorial(n) using at most one lookup into the less-defined
    // approximation; NOT_DEFINED signals a gap in the partial function.
    if (n == 0) return 1;
    else if ((fprev = lookup(factorial_less_defined, n-1)) != NOT_DEFINED)
        return n * fprev;
    else
        return NOT_DEFINED;
}

Map<int,int> F(Map<int,int> factorial_less_defined)
{
    // Returns a strictly more defined partial factorial function.
    Map<int,int> new_factorial = Map.empty();
    for (int n in all<int>()) {  // conceptually ranges over every natural number
        if ((f = factorial_nonrecursive(factorial_less_defined, n)) != NOT_DEFINED)
            new_factorial.put(n, f);
    }
    return new_factorial;
}
Then we can introduce the notation $F^n$ to indicate $F$ applied $n$ times.
$F^0(\{\})$ is the totally undefined partial function, represented as the set {};
$F^1(\{\})$ is the partial function represented as the set {(0,1)}: it is defined at 0, to be 1, and undefined elsewhere;
$F^5(\{\})$ is the partial function represented as the set {(0,1), (1,1), (2,2), (3,6), (4,24)}: it is defined for arguments 0,1,2,3,4.
This iterative process builds a sequence of partial functions from $\mathbb{N}$ to $\mathbb{N}$. Partial functions form a chain-complete partial order using ⊆ as the ordering. Furthermore, this iterative process of better approximations of the factorial function forms an expansive (also called progressive) mapping because each $f \subseteq F(f)$ under this ordering. So by a fixed-point theorem (specifically the Bourbaki–Witt theorem), there exists a fixed point for this iterative process.
In this case, the fixed point is the least upper bound of this chain, which is the full factorial function, which can be expressed as the union $\bigcup_{i \in \mathbb{N}} F^i(\{\})$.
The fixed point we found is the least fixed point of F, because our iteration started with the smallest element in the domain (the empty set). To prove this we need a more complex fixed point theorem such as the Knaster–Tarski theorem.
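The chain of approximations can be carried out mechanically. The following Python sketch (an illustration, not part of the original presentation) represents each partial function as a dictionary and iterates F starting from the empty function; the domain is truncated at max_n, since only finitely many pairs can be materialized at once.

def F(partial, max_n=10):
    # One step of "make a more defined partial factorial": dict -> dict.
    better = {}
    for n in range(max_n):
        if n == 0:
            better[n] = 1
        elif n - 1 in partial:              # defined only where the previous
            better[n] = n * partial[n - 1]  # approximation was already defined
    return better

approx = {}                                 # F^0({}): totally undefined
for i in range(6):
    print(i, sorted(approx.items()))
    approx = F(approx)
# The i = 5 line prints [(0, 1), (1, 1), (2, 2), (3, 6), (4, 24)],
# matching the description of F^5({}) above.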
Denotational semantics of non-deterministic programs
The concept of power domains has been developed to give a denotational semantics to non-deterministic sequential programs. Writing P for a power-domain constructor, the domain P(D) is the domain of non-deterministic computations of type denoted by D.
There are difficulties with fairness and unboundedness in domain-theoretic models of non-determinism.
Denotational semantics of concurrency
Many researchers have argued that the domain-theoretic models given above do not suffice for the more general case of concurrent computation. For this reason various new models have been introduced. In the early 1980s, people began using the style of denotational semantics to give semantics for concurrent languages. Examples include Will Clinger's work with the actor model; Glynn Winskel's work with event structures and Petri nets; and the work by Francez, Hoare, Lehmann, and de Roever (1979) on trace semantics for CSP. All these lines of inquiry remain under investigation (see e.g. the various denotational models for CSP).
Recently, Winskel and others have proposed the category of profunctors as a domain theory for concurrency.
Denotational semantics of state
State (such as a heap) and simple imperative features can be straightforwardly modeled in the denotational semantics described above. The key idea is to consider a command as a partial function on some domain of states. The meaning of "x := v" is then the function that takes a state to the state with v assigned to x. The sequencing operator ";" is denoted by composition of functions. Fixed-point constructions are then used to give a semantics to looping constructs, such as "while".
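A minimal Python sketch of this idea (illustrative, not from the article): states are dictionaries, the denotation of a command is a function on states, and sequencing is literal function composition.

def assign(var, expr):
    # Denotation of "var := expr": a function from states to states.
    return lambda state: {**state, var: expr(state)}

def seq(c1, c2):
    # Denotation of "c1 ; c2": composition of the two state transformers.
    return lambda state: c2(c1(state))

# x := 3 ; y := x + 1
prog = seq(assign('x', lambda s: 3),
           assign('y', lambda s: s['x'] + 1))
print(prog({}))   # {'x': 3, 'y': 4}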
Things become more difficult in modelling programs with local variables. One approach is to no longer work with domains, but instead to interpret types as functors from some category of worlds to a category of domains. Programs are then denoted by natural continuous functions between these functors.
Denotations of data types
Many programming languages allow users to define recursive data types. For example, the type of lists of numbers can be specified by
datatype list = Cons of nat * list | Empty
This section deals only with functional data structures that cannot change. Conventional imperative programming languages would typically allow the elements of such a recursive list to be changed.
For another example: the type of denotations of the untyped lambda calculus is
datatype D = D of (D → D)
The problem of solving domain equations is concerned with finding domains that model these kinds of datatypes. One approach, roughly speaking, is to consider the collection of all domains as a domain itself, and then solve the recursive definition there.
Polymorphic data types are data types that are defined with a parameter. For example, the type of α lists is defined by
datatype α list = Cons of α * α list | Empty
Lists of natural numbers, then, are of type nat list, while lists of strings are of type string list.
Some researchers have developed domain theoretic models of polymorphism. Other researchers have also modeled parametric polymorphism within constructive set theories.
A recent research area has involved denotational semantics for object and class based programming languages.
Denotational semantics for programs of restricted complexity
Following the development of programming languages based on linear logic, denotational semantics have been given to languages for linear usage (see e.g. proof nets, coherence spaces) and also polynomial time complexity.
Denotational semantics of sequentiality
The problem of full abstraction for the sequential programming language PCF was, for a long time, a big open question in denotational semantics. The difficulty with PCF is that it is a very sequential language. For example, there is no way to define the parallel-or function in PCF. It is for this reason that the approach using domains, as introduced above, yields a denotational semantics that is not fully abstract.
This open question was mostly resolved in the 1990s with the development of game semantics and also with techniques involving logical relations. For more details, see the page on PCF.
Denotational semantics as source-to-source translation
It is often useful to translate one programming language into another. For example, a concurrent programming language might be translated into a process calculus; a high-level programming language might be translated into byte-code. (Indeed, conventional denotational semantics can be seen as the interpretation of programming languages into the internal language of the category of domains.)
In this context, notions from denotational semantics, such as full abstraction, help to satisfy security concerns.
Abstraction
It is often considered important to connect denotational semantics with operational semantics. This is especially important when the denotational semantics is rather mathematical and abstract, and the operational semantics is more concrete or closer to the computational intuitions. The following properties of a denotational semantics are often of interest.
Syntax independence: The denotations of programs should not involve the syntax of the source language.
Adequacy (or soundness): All observably distinct programs have distinct denotations;
Full abstraction: All observationally equivalent programs have equal denotations.
For semantics in the traditional style, adequacy and full abstraction may be understood roughly as the requirement that "operational equivalence coincides with denotational equality". For denotational semantics in more intensional models, such as the actor model and process calculi, there are different notions of equivalence within each model, and so the concepts of adequacy and of full abstraction are a matter of debate, and harder to pin down. Also the mathematical structure of operational semantics and denotational semantics can become very close.
Additional desirable properties we may wish to hold between operational and denotational semantics are:
Constructivism: Constructivism is concerned with whether domain elements can be shown to exist by constructive methods.
Independence of denotational and operational semantics: The denotational semantics should be formalized using mathematical structures that are independent of the operational semantics of a programming language; However, the underlying concepts can be closely related. See the section on Compositionality below.
Full completeness or definability: Every morphism of the semantic model should be the denotation of a program.
Compositionality
An important aspect of denotational semantics of programming languages is compositionality, by which the denotation of a program is constructed from denotations of its parts. For example, consider the expression "7 + 4". Compositionality in this case is to provide a meaning for "7 + 4" in terms of the meanings of "7", "4" and "+".
A basic denotational semantics in domain theory is compositional because it is given as follows. We start by considering program fragments, i.e. programs with free variables. A typing context assigns a type to each free variable. For instance, the expression (x + y) might be considered in a typing context (x: nat, y: nat). We now give a denotational semantics to program fragments, using the following scheme.
We begin by describing the meaning of the types of our language: the meaning of each type must be a domain. We write 〚τ〛 for the domain denoting the type τ. For instance, the meaning of type nat should be the domain of natural numbers: 〚nat〛 = ℕ⊥.
From the meaning of types we derive a meaning for typing contexts. We set 〚x1:τ1, ..., xn:τn〛 = 〚τ1〛 × ... × 〚τn〛. For instance, 〚x:nat, y:nat〛 = ℕ⊥ × ℕ⊥. As a special case, the meaning of the empty typing context, with no variables, is the domain with one element, denoted 1.
Finally, we must give a meaning to each program-fragment-in-typing-context. Suppose that P is a program fragment of type σ, in typing context Γ, often written Γ⊢P:σ. Then the meaning of this program-in-typing-context must be a continuous function 〚Γ⊢P:σ〛: 〚Γ〛 → 〚σ〛. For instance, 〚⊢7:nat〛: 1 → ℕ⊥ is the constantly "7" function, while 〚x:nat, y:nat ⊢ x+y : nat〛: ℕ⊥ × ℕ⊥ → ℕ⊥ is the function that adds two numbers.
Now, the meaning of the compound expression (7+4) is determined by composing the three functions 〚⊢7:nat〛: 1 → ℕ⊥, 〚⊢4:nat〛: 1 → ℕ⊥, and 〚x:nat, y:nat ⊢ x+y : nat〛: ℕ⊥ × ℕ⊥ → ℕ⊥.
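The same scheme can be mimicked concretely. In the Python sketch below (illustrative; it ignores the bottom element ⊥ and uses plain values in place of domains), each fragment denotes a function from the meaning of its typing context to the meaning of its type, and the meaning of 7+4 arises purely by composing the meanings of its parts:

# Denotations of "7" and "4" in the empty context: functions from the
# one-element domain 1 (here, the empty tuple) to numbers.
seven = lambda _: 7
four = lambda _: 4

# Denotation of "x + y" in context (x: nat, y: nat): a function on pairs.
plus = lambda pair: pair[0] + pair[1]

# Meaning of the compound expression (7+4): compose, never inspect syntax.
compound = lambda ctx: plus((seven(ctx), four(ctx)))
print(compound(()))   # 11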
In fact, this is a general scheme for compositional denotational semantics. There is nothing specific about domains and continuous functions here. One can work with a different category instead. For example, in game semantics, the category of games has games as objects and strategies as morphisms: we can interpret types as games, and programs as strategies. For a simple language without general recursion, we can make do with the category of sets and functions. For a language with side-effects, we can work in the Kleisli category for a monad. For a language with state, we can work in a functor category. Milner has advocated modelling location and interaction by working in a category with interfaces as objects and bigraphs as morphisms.
Semantics versus implementation
According to Dana Scott (1980):
It is not necessary for the semantics to determine an implementation, but it should provide criteria for showing that an implementation is correct.
According to Clinger (1981):
Usually, however, the formal semantics of a conventional sequential programming language may itself be interpreted to provide an (inefficient) implementation of the language. A formal semantics need not always provide such an implementation, though, and to believe that semantics must provide an implementation leads to confusion about the formal semantics of concurrent languages. Such confusion is painfully evident when the presence of unbounded nondeterminism in a programming language's semantics is said to imply that the programming language cannot be implemented.
Connections to other areas of computer science
Some work in denotational semantics has interpreted types as domains in the sense of domain theory, which can be seen as a branch of model theory, leading to connections with type theory and category theory. Within computer science, there are connections with abstract interpretation, program verification, and model checking.
References
Further reading
Textbooks
(A classic if dated textbook.)
out of print now; free electronic version available:
Lecture notes
Other references
External links
Denotational Semantics. Overview of book by Lloyd Allison
1970 in computing
Logic in computer science
Models of computation
Formal specification languages
Programming language semantics
| Denotational semantics | [
"Mathematics"
] | 3,555 | [
"Mathematical logic",
"Logic in computer science"
] |
55,345 | https://en.wikipedia.org/wiki/Net%20present%20value | The net present value (NPV) or net present worth (NPW) is a way of measuring the value of an asset that has cashflow by adding up the present value of all the future cash flows that asset will generate. The present value of a cash flow depends on the interval of time between now and the cash flow because of the Time value of money (which includes the annual effective discount rate). It provides a method for evaluating and comparing capital projects or financial products with cash flows spread over time, as in loans, investments, payouts from insurance contracts plus many other applications.
Time value of money dictates that time affects the value of cash flows. For example, a lender may offer 99 cents for the promise of receiving $1.00 a month from now, but the promise to receive that same dollar 20 years in the future would be worth much less today to that same person (lender), even if the payback in both cases was equally certain. This decrease in the current value of future cash flows is based on a chosen rate of return (or discount rate). If for example there exists a time series of identical cash flows, the cash flow in the present is the most valuable, with each future cash flow becoming less valuable than the previous cash flow. A cash flow today is more valuable than an identical cash flow in the future because a present flow can be invested immediately and begin earning returns, while a future flow cannot.
NPV is determined by calculating the costs (negative cash flows) and benefits (positive cash flows) for each period of an investment. After the cash flow for each period is calculated, the present value (PV) of each one is obtained by discounting its future value (see Formula) at a periodic rate of return (the rate of return dictated by the market). NPV is the sum of all the discounted future cash flows.
Because of its simplicity, NPV is a useful tool to determine whether a project or investment will result in a net profit or a loss. A positive NPV results in profit, while a negative NPV results in a loss. The NPV measures the excess or shortfall of cash flows, in present value terms, above the cost of funds. In a theoretical situation of unlimited capital budgeting, a company should pursue every investment with a positive NPV. However, in practical terms a company's capital constraints limit investments to projects with the highest NPV whose cost cash flows, or initial cash investment, do not exceed the company's capital. NPV is a central tool in discounted cash flow (DCF) analysis and is a standard method for using the time value of money to appraise long-term projects. It is widely used throughout economics, financial analysis, and financial accounting.
In the case when all future cash flows are positive, or incoming (such as the principal and coupon payment of a bond), and the only outflow of cash is the purchase price, the NPV is simply the PV of future cash flows minus the purchase price (which is its own PV). NPV can be described as the "difference amount" between the sums of discounted cash inflows and cash outflows. It compares the present value of money today to the present value of money in the future, taking inflation and returns into account.
The NPV of a sequence of cash flows takes as input the cash flows and a discount rate or discount curve and outputs a present value, which is the current fair price. The converse process in discounted cash flow (DCF) analysis takes a sequence of cash flows and a price as input and outputs the discount rate, or internal rate of return (IRR), which would yield the given price as NPV. This rate, called the yield, is widely used in bond trading.
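The following is a minimal sketch of this converse process (not from the original article): it recovers the yield by bisection, assuming the NPV changes sign exactly once on the bracketing interval; the names npv and irr_bisect, and the sample bond, are invented for illustration.

def npv(rate, cash_flows):
    # Present value of cash_flows[t] received at the end of period t
    # (cash_flows[0], the purchase price, is negative and undiscounted).
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr_bisect(cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    # Discount rate at which the NPV is (approximately) zero.
    assert npv(lo, cash_flows) > 0 > npv(hi, cash_flows), "IRR not bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid               # NPV still positive: the rate must rise
        else:
            hi = mid
    return (lo + hi) / 2

# A bond bought for 95 paying a 5 coupon for 3 years plus 100 at maturity:
print(round(irr_bisect([-95, 5, 5, 105]), 4))   # ~0.069, a yield of about 6.9%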
Formula
Each cash inflow/outflow is discounted back to its present value (PV). Then all are summed such that NPV is the sum of all terms:

NPV(i, N) = \sum_{t=0}^{N} \frac{R_t}{(1+i)^t}
where:
t is the time of the cash flow
i is the discount rate, i.e. the return that could be earned per unit of time on an investment with similar risk
Rt is the net cash flow i.e. cash inflow – cash outflow, at time t. For educational purposes, R0 is commonly placed to the left of the sum to emphasize its role as (minus) the investment.
1/(1 + i)^t is the discount factor, also known as the present value factor.
When the periodic cash flows are equal, the discount factor can be multiplied by the annual net cash inflow, and the initial cash outlay subtracted, to give the present value; in cases where the cash flows are not equal in amount, the formula above is used to determine the present value of each cash flow separately. Any cash flow within the first 12 months is not discounted for NPV purposes; nevertheless, the usual initial investments during the first year, R0, are summed up as a negative cash flow.
The NPV can also be thought of as the difference between the discounted benefits and costs over time. As such, the NPV can also be written as:

NPV = PV(B) − PV(C)

where:
B are the benefits or cash inflows
C are the costs or cash outflows
Given the (period, cash inflows, cash outflows) shown by (t, Bt, Ct) where N is the total number of periods, the net present value is given by:

NPV = \sum_{t=0}^{N} \frac{B_t - C_t}{(1+i)^t}

where:
Bt are the benefits or cash inflows at time t.
Ct are the costs or cash outflows at time t.
The NPV can be rewritten using the net cash flow Rt = Bt − Ct in each time period as:

NPV = \sum_{t=0}^{N} \frac{R_t}{(1+i)^t}

By convention, the initial period occurs at time t = 0, where cash flows in successive periods are then discounted from t = 1, 2, 3, and so on. Furthermore, all future cash flows during a period are assumed to be at the end of each period. For constant cash flow R, the net present value is a finite geometric series and is given by:

NPV = R \left( \frac{1 - (1+i)^{-(N+1)}}{1 - (1+i)^{-1}} \right)
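As a quick sanity check of the closed form, the following short sketch (illustrative; the values R = 1000, i = 5%, N = 10 are arbitrary) compares direct summation with the geometric-series expression:

# Direct summation versus the geometric-series closed form for a constant
# cash flow R received at t = 0, 1, ..., N (values chosen arbitrarily).
R, i, N = 1000.0, 0.05, 10

direct = sum(R / (1 + i) ** t for t in range(N + 1))
closed = R * (1 - (1 + i) ** -(N + 1)) / (1 - (1 + i) ** -1)

print(round(direct, 2), round(closed, 2))   # both 8721.73
assert abs(direct - closed) < 1e-6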
Inclusion of the R0 term is important in the above formulae. A typical capital project involves a large negative cashflow (the initial investment) with positive future cashflows (the return on the investment). A key assessment is whether, for a given discount rate, the NPV is positive (profitable) or negative (loss-making). The IRR is the discount rate for which the NPV is exactly 0.
Capital efficiency
The NPV method can be slightly adjusted to calculate how much money is contributed to a project's investment per dollar invested. This is known as the capital efficiency ratio. The formula for the net present value per dollar investment (NPVI) is given below:

NPVI = \frac{\sum_{t=0}^{N} R_t/(1+i)^t}{\sum_{t=0}^{N} I_t/(1+i)^t}

where:
Rt is the net cash flow i.e. cash inflow – cash outflow, at time t.
It are the net cash outflows, at time t.
Example
If the discounted benefits across the life of a project are B and the discounted net costs across the life of a project are C, then the NPVI is (B − C)/C.
That is, for every dollar invested in the project, a contribution of (B − C)/C dollars is made to the project's NPV.
Alternative discounting frequencies
The NPV formula assumes that the benefits and costs occur at the end of each period, resulting in a more conservative NPV. However, it may be that the cash inflows and outflows occur at the beginning of the period or in the middle of the period.
The NPV formula for mid period discounting is given by:

NPV = \sum_{t=1}^{N} \frac{R_t}{(1+i)^{t-0.5}}
Over a project's lifecycle, cash flows are typically spread across each period (for example spread across each year), and as such the middle of the year represents the average point in time in which these cash flows occur. Hence mid period discounting typically provides a more accurate, although less conservative NPV.
The NPV formula using beginning of period discounting is given by:

NPV = \sum_{t=1}^{N} \frac{R_t}{(1+i)^{t-1}}
This results in the least conservative NPV.
The discount rate
The rate used to discount future cash flows to the present value is a key variable of this process.
A firm's weighted average cost of capital (after tax) is often used, but many people believe that it is appropriate to use higher discount rates to adjust for risk, opportunity cost, or other factors. A variable discount rate with higher rates applied to cash flows occurring further along the time span might be used to reflect the yield curve premium for long-term debt.
Another approach to choosing the discount rate factor is to decide the rate which the capital needed for the project could return if invested in an alternative venture. If, for example, the capital required for Project A can earn 5% elsewhere, use this discount rate in the NPV calculation to allow a direct comparison to be made between Project A and the alternative. Related to this concept is to use the firm's reinvestment rate. Re-investment rate can be defined as the rate of return for the firm's investments on average. When analyzing projects in a capital constrained environment, it may be appropriate to use the reinvestment rate rather than the firm's weighted average cost of capital as the discount factor. It reflects opportunity cost of investment, rather than the possibly lower cost of capital.
An NPV calculated using variable discount rates (if they are known for the duration of the investment) may better reflect the situation than one calculated from a constant discount rate for the entire investment duration. Refer to the tutorial article written by Samuel Baker for a more detailed relationship between the NPV and the discount rate.
For some professional investors, their investment funds are committed to target a specified rate of return. In such cases, that rate of return should be selected as the discount rate for the NPV calculation. In this way, a direct comparison can be made between the profitability of the project and the desired rate of return.
To some extent, the selection of the discount rate is dependent on the use to which it will be put. If the intent is simply to determine whether a project will add value to the company, using the firm's weighted average cost of capital may be appropriate. If trying to decide between alternative investments in order to maximize the value of the firm, the corporate reinvestment rate would probably be a better choice.
Risk-adjusted net present value (rNPV)
Using variable rates over time, or discounting "guaranteed" cash flows differently from "at risk" cash flows, may be a superior methodology but is seldom used in practice. Using the discount rate to adjust for risk is often difficult to do in practice (especially internationally) and is difficult to do well.
An alternative to using discount factor to adjust for risk is to explicitly correct the cash flows for the risk elements using risk-adjusted net present value (rNPV) or a similar method, then discount at the firm's rate.
Use in decision making
NPV is an indicator of how much value an investment or project adds to the firm. With a particular project, if Rt is a positive value, the project is in the status of positive cash inflow at time t. If Rt is a negative value, the project is in the status of discounted cash outflow at time t. Appropriately risked projects with a positive NPV could be accepted. This does not necessarily mean that they should be undertaken since NPV at the cost of capital may not account for opportunity cost, i.e., comparison with other available investments. In financial theory, if there is a choice between two mutually exclusive alternatives, the one yielding the higher NPV should be selected. A positive net present value indicates that the projected earnings generated by a project or investment (in present dollars) exceed the anticipated costs (also in present dollars). This concept is the basis for the Net Present Value Rule, which dictates that the only investments that should be made are those with positive NPVs.
An investment with a positive NPV is profitable, but one with a negative NPV will not necessarily result in a net loss: it is just that the internal rate of return of the project falls below the required rate of return.
Advantages and disadvantages of using Net Present Value
NPV is an indicator for project investments, and has several advantages and disadvantages for decision-making.
Advantages
The NPV includes all relevant time and cash flows for the project by considering the time value of money, which is consistent with the goal of wealth maximization by creating the highest wealth for shareholders.
The NPV formula accounts for cash flow timing patterns and size differences for each project, and provides an easy, unambiguous dollar value comparison of different investment options.
The NPV can be easily calculated using modern spreadsheets, under the assumption that the discount rate and future cash flows are known. For a firm considering investing in multiple projects, the NPV has the benefit of being additive. That is, the NPVs of different projects may be aggregated to calculate the highest wealth creation, based on the available capital that can be invested by a firm.
Disadvantages
The NPV method has several disadvantages.
The NPV approach does not consider hidden costs and project size. Thus, investment decisions on projects with substantial hidden costs may not be accurate.
Relies on input parameters such as knowledge of future cash flows
The NPV is heavily dependent on knowledge of future cash flows, their timing, the length of a project, the initial investment required, and the discount rate. Hence, it can only be accurate if these input parameters are correct, although sensitivity analyses can be undertaken to examine how the NPV changes as the input variables are changed, thus reducing the uncertainty of the NPV.
Relies on choice of discount rate and discount factor
The accuracy of the NPV method relies heavily on the choice of a discount rate and hence discount factor, representing an investment's true risk premium. The discount rate is assumed to be constant over the life of an investment; however, discount rates can change over time. For example, discount rates can change as the cost of capital changes. There are other drawbacks to the NPV method, such as the fact that it displays a lack of consideration for a project’s size and the cost of capital.
Lack of consideration of non-financial metrics
The NPV calculation is purely financial and thus does not consider non-financial metrics that may be relevant to an investment decision.
Difficulty in comparing mutually exclusive projects
Comparing mutually exclusive projects with different investment horizons can be difficult. If unequal projects are assumed to repeat over duplicated investment horizons, the NPV approach can be used to compare each project's NPV over its optimal duration.
Interpretation as integral transform
The time-discrete formula of the net present value

NPV(i, N) = \sum_{t=0}^{N} \frac{R_t}{(1+i)^t}

can also be written in a continuous variation

NPV(i) = \int_{0}^{\infty} r(t) \, (1+i)^{-t} \, dt
where
r(t) is the rate of flowing cash given in money per time, and r(t) = 0 when the investment is over.
Net present value can be regarded as Laplace-, respectively Z-transformed, cash flow with the integral operator including the complex number s, which corresponds to the interest rate i from the real number space, or more precisely s = ln(1 + i).
From this follow simplifications known from cybernetics, control theory and system dynamics. Imaginary parts of the complex number s describe the oscillating behaviour (compare with the pork cycle, cobweb theorem, and phase shift between commodity price and supply offer) whereas real parts are responsible for representing the effect of compound interest (compare with damping).
Example
A corporation must decide whether to introduce a new product line. The company will have immediate costs of 100,000 at t = 0. Recall, a cost is a negative for outgoing cash flow, thus this cash flow is represented as −100,000. The company assumes the product will provide equal benefits of 10,000 for each of 12 years beginning at t = 1. For simplicity, assume the company will have no outgoing cash flows after the initial 100,000 cost. This also makes the simplifying assumption that the net cash received or paid is lumped into a single transaction occurring on the last day of each year. At the end of the 12 years the product no longer provides any cash flow and is discontinued without any additional costs. Assume that the effective annual discount rate is 10%.
The present value (value at t = 0) can be calculated for each year: the 10,000 inflow received at the end of year t has a present value of 10,000/(1.10)^t.
The total present value of the incoming cash flows is 68,136.91. The total present value of the outgoing cash flows is simply the 100,000 at time t = 0.
Thus:

NPV = 68,136.91 − 100,000 = −31,863.09
In this example:
Observe that as t increases the present value of each cash flow at t decreases. For example, the final incoming cash flow has a future value of 10,000 at t = 12 but has a present value (at t = 0) of 3,186.31. The opposite of discounting is compounding. Taking the example in reverse, it is the equivalent of investing 3,186.31 at t = 0 (the present value) at an interest rate of 10% compounded for 12 years, which results in a cash flow of 10,000 at t = 12 (the future value).
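For readers who want to reproduce these figures, the following is a small illustrative script (not part of the original article); rounding each year's present value to cents mirrors how such yearly tables are usually presented:

rate = 0.10
# Present value of each year's 10,000 inflow, rounded to cents:
yearly_pv = [round(10_000 / (1 + rate) ** t, 2) for t in range(1, 13)]

print(yearly_pv[-1])                        # 3186.31, PV of the final inflow
print(round(sum(yearly_pv), 2))             # 68136.91, total PV of the inflows
print(round(sum(yearly_pv) - 100_000, 2))   # -31863.09, the NPV of the project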
The importance of NPV becomes clear in this instance. Although the incoming cash flows (12 × 10,000 = 120,000) appear to exceed the outgoing cash flow (100,000), the future cash flows are not adjusted using the discount rate. Thus, the project appears misleadingly profitable. When the cash flows are discounted however, it indicates the project would result in a net loss of 31,863.09. Thus, the NPV calculation indicates that this project should be disregarded because investing in this project is the equivalent of a loss of 31,863.09 at t = 0. The concept of time value of money indicates that cash flows in different periods of time cannot be accurately compared unless they have been adjusted to reflect their value at the same period of time (in this instance, t = 0). It is the present value of each future cash flow that must be determined in order to provide any meaningful comparison between cash flows at different periods of time. There are a few inherent assumptions in this type of analysis:
The investment horizon of all possible investment projects considered are equally acceptable to the investor (e.g. a 3-year project is not necessarily preferable vs. a 20-year project.)
The 10% discount rate is the appropriate (and stable) rate to discount the expected cash flows from each project being considered. Each project is assumed equally speculative.
The shareholders cannot get above a 10% return on their money if they were to directly assume an equivalent level of risk. (If the investor could do better elsewhere, no projects should be undertaken by the firm, and the excess capital should be turned over to the shareholder through dividends and stock repurchases.)
More realistic problems would also need to consider other factors, generally including: smaller time buckets, the calculation of taxes (including the cash flow timing), inflation, currency exchange fluctuations, hedged or unhedged commodity costs, risks of technical obsolescence, potential future competitive factors, uneven or unpredictable cash flows, and a more realistic salvage value assumption, as well as many others.
A simpler example of the net present value of incoming cash flow over a set period of time would be winning a Powerball lottery jackpot. If one does not select the "CASH" option, one is paid in equal annual installments over 20 years; if one does select the "CASH" option, one receives a one-time lump sum payment that is approximately the NPV of the installments paid over time. See "other factors" above that could affect the payment amount. Both scenarios are before taxes.
Common pitfalls
If, for example, the Rt are generally negative late in the project (e.g., an industrial or mining project might have clean-up and restoration costs), then at that stage the company owes money, so a high discount rate is not cautious but too optimistic. Some people see this as a problem with NPV. A way to avoid this problem is to include explicit provision for financing any losses after the initial investment, that is, explicitly calculate the cost of financing such losses.
Another common pitfall is to adjust for risk by adding a premium to the discount rate. Whilst a bank might charge a higher rate of interest for a risky project, that does not mean that this is a valid approach to adjusting a net present value for risk, although it can be a reasonable approximation in some specific cases. One reason such an approach may not work well can be seen from the following: if some risk is incurred resulting in some losses, then a discount rate in the NPV will reduce the effect of such losses below their true financial cost. A rigorous approach to risk requires identifying and valuing risks explicitly, e.g., by actuarial or Monte Carlo techniques, and explicitly calculating the cost of financing any losses incurred.
Yet another issue can result from the compounding of the risk premium. R is a composite of the risk free rate and the risk premium. As a result, future cash flows are discounted by both the risk-free rate as well as the risk premium and this effect is compounded by each subsequent cash flow. This compounding results in a much lower NPV than might be otherwise calculated. The certainty equivalent model can be used to account for the risk premium without compounding its effect on present value.
Another issue with relying on NPV is that it does not provide an overall picture of the gain or loss of executing a certain project. To see a percentage gain relative to the investments for the project, usually, Internal rate of return or other efficiency measures are used as a complement to NPV.
Non-specialist users frequently make the error of computing NPV based on cash flows after interest. This is wrong because it double counts the time value of money. Free cash flow should be used as the basis for NPV computations.
When using Microsoft's Excel, the "=NPV(...)" formula makes two assumptions that result in an incorrect solution. The first is that the amount of time between each item in the input array is constant and equidistant (e.g., 30 days of time between item 1 and item 2) which may not always be correct based on the cash flow that is being discounted. The second item is that the function will assume the item in the first position of the array is period 1 not period zero. This then results in incorrectly discounting all array items by one extra period. The easiest fix to both of these errors is to use the "=XNPV(...)" formula.
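The off-by-one-period behaviour can be made concrete with a short sketch (illustrative, not from the article); npv_excel_style mimics the way "=NPV(...)" treats the first array item as period 1, while npv_period_zero treats it as period 0:

def npv_excel_style(rate, flows):
    # Treats flows[0] as occurring at period 1 (as Excel's NPV does),
    # so every cash flow is discounted by one extra period.
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(flows))

def npv_period_zero(rate, flows):
    # Treats flows[0] as occurring at period 0 (undiscounted).
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

flows = [-100_000] + [10_000] * 12              # the worked example above
print(round(npv_period_zero(0.10, flows), 2))   # -31863.08
print(round(npv_excel_style(0.10, flows), 2))   # -28966.44, off by 1/(1+i)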
Software support
Many computer-based spreadsheet programs have built-in formulae for PV and NPV.
History
Net present value as a valuation methodology dates at least to the 19th century. Karl Marx refers to NPV as fictitious capital, and the calculation as "capitalising".
In mainstream neo-classical economics, NPV was formalized and popularized by Irving Fisher, in his 1907 The Rate of Interest and became included in textbooks from the 1950s onwards, starting in finance texts.
Alternative capital budgeting methods
Adjusted present value (APV): adjusted present value, is the net present value of a project if financed solely by ownership equity plus the present value of all the benefits of financing.
Accounting rate of return (ARR): a ratio similar to IRR and MIRR
Cost-benefit analysis: which includes issues other than cash, such as time savings.
Internal rate of return (IRR): which calculates the rate of return of a project while disregarding the absolute amount of money to be gained.
Modified internal rate of return (MIRR): similar to IRR, but it makes explicit assumptions about the reinvestment of the cash flows. Sometimes it is called Growth Rate of Return.
Payback period: which measures the time required for the cash inflows to equal the original outlay. It measures risk, not return.
Real option: which attempts to value managerial flexibility that is assumed away in NPV.
Equivalent annual cost (EAC): a capital budgeting technique that is useful in comparing two or more projects with different lifespans.
See also
Profitability index
References
Mathematical finance
Investment
Engineering economics
Management accounting
Capital budgeting
Valuation (finance) | Net present value | [
"Mathematics",
"Engineering"
] | 4,912 | [
"Applied mathematics",
"Engineering economics",
"Mathematical finance"
] |
55,530 | https://en.wikipedia.org/wiki/Personal%20protective%20equipment | Personal protective equipment (PPE) is protective clothing, helmets, goggles, or other garments or equipment designed to protect the wearer's body from injury or infection. The hazards addressed by protective equipment include physical, electrical, heat, chemical, biohazards, and airborne particulate matter. Protective equipment may be worn for job-related occupational safety and health purposes, as well as for sports and other recreational activities. Protective clothing is applied to traditional categories of clothing, and protective gear applies to items such as pads, guards, shields, or masks, and others. PPE suits can be similar in appearance to a cleanroom suit.
The purpose of personal protective equipment is to reduce employee exposure to hazards when engineering controls and administrative controls are not feasible or effective to reduce these risks to acceptable levels. PPE is needed when there are hazards present. PPE has the serious limitation that it does not eliminate the hazard at the source and may result in employees being exposed to the hazard if the equipment fails.
Any item of PPE imposes a barrier between the wearer/user and the working environment. This can create additional strains on the wearer, impair their ability to carry out their work and create significant levels of discomfort. Any of these can discourage wearers from using PPE correctly, therefore placing them at risk of injury, ill-health or, under extreme circumstances, death. Good ergonomic design can help to minimise these barriers and can therefore help to ensure safe and healthy working conditions through the correct use of PPE.
Practices of occupational safety and health can use hazard controls and interventions to mitigate workplace hazards, which pose a threat to the safety and quality of life of workers. The hierarchy of hazard controls provides a policy framework which ranks the types of hazard controls in terms of absolute risk reduction. At the top of the hierarchy are elimination and substitution, which remove the hazard entirely or replace the hazard with a safer alternative. If elimination or substitution measures cannot be applied, engineering controls and administrative controls, which seek to design safer mechanisms and coach safer human behavior, are implemented. Personal protective equipment ranks last on the hierarchy of controls, as the workers are regularly exposed to the hazard, with a barrier of protection. The hierarchy of controls is important in acknowledging that, while personal protective equipment has tremendous utility, it is not the desired mechanism of control in terms of worker safety.
History
Early PPE such as body armor, boots and gloves focused on protecting the wearer's body from physical injury. The plague doctors of sixteenth-century Europe also wore protective uniforms consisting of a full-length gown, helmet, glass eye coverings, gloves and boots (see Plague doctor costume) to prevent contagion when dealing with plague victims. These were made of thick material which was then covered in wax to make it water-resistant. A mask with a beak-like structure was filled with pleasant-smelling flowers, herbs and spices to prevent the spread of miasma, the prescientific belief that bad smells spread disease through the air. In more recent years, scientific personal protective equipment is generally believed to have begun with the cloth facemasks promoted by Wu Lien-teh in the 1910–11 Manchurian pneumonic plague outbreak, although some doctors and scientists of the time doubted the efficacy of facemasks in preventing the spread of that disease since they didn't believe it was transmitted through the air.
Types
Personal protective equipment can be categorized by the area of the body protected, by the type of hazard, and by the type of garment or accessory. A single itemfor example, bootsmay provide multiple forms of protection: a steel toe cap and steel insoles for protection of the feet from crushing or puncture injuries, impervious rubber and lining for protection from water and chemicals, high reflectivity and heat resistance for protection from radiant heat, and high electrical resistivity for protection from electric shock. The protective attributes of each piece of equipment must be compared with the hazards expected to be found in the workplace. More breathable types of personal protective equipment may not lead to more contamination but do result in greater user satisfaction.
Respirators
Respirators are protective breathing equipment, which protect the user from inhaling contaminants in the air, thus preserving the health of their respiratory tract. There are two main types of respirators. One type of respirator functions by filtering out chemicals and gases, or airborne particles, from the air breathed by the user. The filtration may be either passive or active (powered). Gas masks and particulate respirators (like N95 masks) are examples of this type of respirator. A second type of respirator protects users by providing clean, respirable air from another source. This type includes airline respirators and self-contained breathing apparatus (SCBA). In work environments, respirators are relied upon when adequate ventilation is not available or other engineering control systems are not feasible or inadequate.
In the United Kingdom, an organization that has extensive expertise in respiratory protective equipment is the Institute of Occupational Medicine. This expertise has been built on a long-standing and varied research programme that has included the setting of workplace protection factors to the assessment of efficacy of masks available through high street retail outlets.
The Health and Safety Executive (HSE), NHS Health Scotland and Healthy Working Lives (HWL) have jointly developed the RPE (Respiratory Protective Equipment) Selector Tool, which is web-based. This interactive tool provides descriptions of different types of respirators and breathing apparatuses, as well as "dos and don'ts" for each type.
In the United States, The National Institute for Occupational Safety and Health (NIOSH) provides recommendations on respirator use, in accordance to NIOSH federal respiratory regulations 42 CFR Part 84. The National Personal Protective Technology Laboratory (NPPTL) of NIOSH is tasked towards actively conducting studies on respirators and providing recommendations.
Surgical masks
Surgical masks are sometimes considered as PPE, but are not considered as respirators, being unable to stop submicron particles from passing through, and also having unrestricted air flow at the edges of the masks.
Surgical masks are not certified for the prevention of tuberculosis.
Skin protection
Occupational skin diseases such as contact dermatitis, skin cancers, and other skin injuries and infections are the second-most common type of occupational disease and can be very costly. Skin hazards, which lead to occupational skin disease, can be classified into four groups. Chemical agents can come into contact with the skin through direct contact with contaminated surfaces, deposition of aerosols, immersion or splashes. Physical agents such as extreme temperatures and ultraviolet or solar radiation can be damaging to the skin over prolonged exposure. Mechanical trauma occurs in the form of friction, pressure, abrasions, lacerations and contusions. Biological agents such as parasites, microorganisms, plants and animals can have varied effects when exposed to the skin.
Any form of PPE that acts as a barrier between the skin and the agent of exposure can be considered skin protection. Because much work is done with the hands, gloves are an essential item in providing skin protection. Some examples of gloves commonly used as PPE include rubber gloves, cut-resistant gloves, chainsaw gloves and heat-resistant gloves. For sports and other recreational activities, many different gloves are used for protection, generally against mechanical trauma.
Other than gloves, any other article of clothing or protection worn for a purpose serve to protect the skin. Lab coats for example, are worn to protect against potential splashes of chemicals. Face shields serve to protect one's face from potential impact hazards, chemical splashes or possible infectious fluid.
Many migrant workers need training in PPE for Heat Related Illnesses prevention (HRI). Based on study results, research identified some potential gaps in heat safety education. While some farm workers reported receiving limited training on pesticide safety, others did not. This could be remedied by incoming groups of farm workers receiving video and in-person training on HRI prevention. These educational programs for farm workers are most effective when they are based on health behavior theories, use adult learning principles and employ train-the-trainer approaches.
Eye protection
Each day, about 2,000 US workers have a job-related eye injury that requires medical attention. Eye injuries can happen through a variety of means. Most eye injuries occur when solid particles such as metal slivers, wood chips, sand or cement chips get into the eye. Smaller particles in smokes and larger particles such as broken glass also account for particulate matter-causing eye injuries. Blunt force trauma can occur to the eye when excessive force comes into contact with the eye. Chemical burns, biological agents, and thermal agents, from sources such as welding torches and UV light, also contribute to occupational eye injury.
While the required eye protection varies by occupation, the safety provided can be generalized. Safety glasses provide protection from external debris, and should provide side protection via a wrap-around design or side shields.
Goggles provide better protection than safety glasses, and are effective in preventing eye injury from chemical splashes, impact, dusty environments and welding. Goggles with high air flow should be used to prevent fogging.
Face shields provide additional protection and are worn over the standard eyewear; they also provide protection from impact, chemical, and blood-borne hazards.
Full-facepiece respirators are considered the best form of eye protection when respiratory protection is needed as well, but may be less effective against potential impact hazards to the eye.
Eye protection for welding is shaded to different degrees, depending on the specific operation.
Hearing protection
Industrial noise is often overlooked as an occupational hazard, as it is not visible to the eye. Overall, about 22 million workers in the United States are exposed to potentially damaging noise levels each year. Occupational hearing loss accounted for 14% of all occupational illnesses in 2007, with about 23,000 cases significant enough to cause permanent hearing impairment. About 82% of occupational hearing loss cases occurred to workers in the manufacturing sector. In the US the Occupational Safety and Health Administration establishes occupational noise exposure standards. The National Institute for Occupational Safety and Health recommends that worker exposures to noise be reduced to a level equivalent to 85 dBA for eight hours to reduce occupational noise-induced hearing loss.
PPE for hearing protection consists of earplugs and earmuffs. Workers who are regularly exposed to noise levels above the NIOSH recommendation should be provided with hearing protection by the employers, as they are a low-cost intervention. A personal attenuation rating can be objectively measured through a hearing protection fit-testing system. The effectiveness of hearing protection varies with the training offered on their use.
Protective clothing and ensembles
This form of PPE is all-encompassing and refers to the various suits and uniforms worn to protect the user from harm. Lab coats worn by scientists and ballistic vests worn by law enforcement officials, which are worn on a regular basis, would fall into this category. Entire sets of PPE, worn together in a combined suit, are also in this category.
Ensembles
Below are some examples of ensembles of personal protective equipment, worn together for a specific occupation or task, to provide maximum protection for the user:
PPE gowns are used by medical personnel like doctors and nurses.
Chainsaw protection (especially a helmet with face guard, hearing protection, kevlar chaps, anti-vibration gloves, and chainsaw safety boots).
Bee-keepers wear various levels of protection depending on the temperament of their bees and the reaction of the bees to nectar availability. At minimum, most beekeepers wear a brimmed hat and a veil made of fine mesh netting. The next level of protection involves leather gloves with long gauntlets and some way of keeping bees from crawling up one's trouser legs. In extreme cases, specially fabricated shirts and trousers can serve as barriers to the bees' stingers.
Diving equipment, for underwater diving, constitutes equipment such as a diving helmet or diving mask, an underwater breathing apparatus, and a diving suit.
Firefighters wear PPE designed to provide protection against fires and various fumes and gases. PPE worn by firefighters include bunker gear, self-contained breathing apparatus, a helmet, safety boots, and a PASS device.
In sports
Participants in sports often wear protective equipment. Studies performed on the injuries of professional athletes, such as that on NFL players, question the effectiveness of existing personal protective equipment.
Limits of the definition
The definition of what constitutes personal protective equipment varies by country. In the United States, the laws regarding PPE also vary by state. In 2011, workplace safety complaints were brought against Hustler and other adult film production companies by the AIDS Healthcare Foundation, leading to several citations brought by Cal/OSHA. The failure to use condoms by adult film stars was a violation of Cal/OSHA's Blood borne Pathogens Program, Personal Protective Equipment. This example shows that personal protective equipment can cover a variety of occupations in the United States, and has a wide-ranging definition.
Legislation
United States
The National Defense Authorization Act for 2022 provides a statutory definition of personal protective equipment for the purposes of the Act.
Under this Act, US military services are prohibited from purchasing PPE from suppliers in North Korea, China, Russia or Iran, unless there are problems with the supply or cost of PPE of "satisfactory quality and quantity".
European Union
At the European Union level, personal protective equipment is governed by Directive 89/686/EEC on personal protective equipment (PPE). The Directive is designed to ensure that PPE meets common quality and safety standards by setting out basic safety requirements for personal protective equipment, as well as conditions for its placement on the market and free movement within the EU single market. It covers "any device or appliance designed to be worn or held by an individual for protection against one or more health and safety hazards". The directive was adopted on 21 January 1989 and came into force on 1 July 1992. The European Commission additionally allowed for a transition period until 30 June 1995 to give companies sufficient time to adapt to the legislation. After this date, all PPE placed on the market in EU Member States was required to comply with the requirements of Directive 89/686/EEC and carry the CE Marking.
Article 1 of Directive 89/686/EEC defines personal protective equipment as any device or appliance designed to be worn or held by an individual for protection against one or more health and safety hazards. PPE which falls under the scope of the Directive is divided into three categories:
Category I: simple design (e.g. gardening gloves, footwear, ski goggles)
Category II: PPE not falling into category I or III (e.g. personal flotation devices, dry and wet suits, motorcycle personal protective equipment)
Category III: complex design (e.g. respiratory equipment, harnesses)
Directive 89/686/EEC on personal protective equipment does not distinguish between PPE for professional use and PPE for leisure purposes.
Personal protective equipment falling within the scope of the Directive must comply with the basic health and safety requirements set out in Annex II of the Directive. To facilitate conformity with these requirements, harmonized standards are developed at the European or international level by the European Committee for Standardization (CEN, CENELEC) and the International Organization for Standardization in relation to the design and manufacture of the product. Usage of the harmonized standards is voluntary and provides presumption of conformity. However, manufacturers may choose an alternative method of complying with the requirements of the Directive.
Personal protective equipment excluded from the scope of the Directive includes:
PPE designed for and used by the armed forces or in the maintenance of law and order;
PPE for self-defence (e.g. aerosol canisters, personal deterrent weapons);
PPE designed and manufactured for personal use against adverse atmospheric conditions (e.g. seasonal clothing, umbrellas), damp and water (e.g. dish-washing gloves) and heat;
PPE used on vessels and aircraft but not worn at all times;
helmets and visors intended for users of two- or three-wheeled motor vehicles.
The European Commission is currently working to revise Directive 89/686/EEC. The revision will look at the scope of the Directive, the conformity assessment procedures and technical requirements regarding market surveillance. It will also align the Directive with the New Legislative Framework. The European Commission is likely to publish its proposal in 2013. It will then be discussed by the European Parliament and Council of the European Union under the ordinary legislative procedure before being published in the Official Journal of the European Union and becoming law.
Research
Research studies in the form of randomized controlled trials and simulation studies are needed to determine the most effective types of PPE for preventing the transmission of infectious diseases to healthcare workers.
There is low certainty evidence that supports making improvements or modifications to PPE in order to help decrease contamination. Examples of modifications include adding tabs to masks or gloves to ease removal and designing protective gowns so that gloves are removed at the same time. In addition, there is low certainty evidence that the following PPE approaches or techniques may lead to reduced contamination and improved compliance with PPE protocols: Wearing double gloves, following specific doffing (removal) procedures such as those from the CDC, and providing people with spoken instructions while removing PPE.
See also
(Chemical Biological Radiological Nuclear, known formerly as NBC)
(hazardous materials)
Normalization of deviance – one reason people stop using effective prevention measures
References
External links
CDC - Emergency Response Resources: Personal Protective Equipment - NIOSH Workplace Safety and Health Topic
European Commission, DG Enterprise, Personal Protective Equipment
Directive 89/686/EEC on Personal Protective Equipment
A short guide to the Personal Protective Equipment at Work Regulations 1992' INDG174(rev1), revised 8/05 (HSE)
Occupational safety and health
Risk management in business
Industrial hygiene
Safety engineering
Environmental social science
Working conditions | Personal protective equipment | [
"Engineering",
"Environmental_science"
] | 3,669 | [
"Safety engineering",
"Systems engineering",
"Personal protective equipment",
"Environmental social science"
] |
56,098 | https://en.wikipedia.org/wiki/Monte%20Carlo%20method | Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanisław Ulam, was inspired by his uncle's gambling habits.
Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically.
Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs.
Monte Carlo methods also have some limitations and challenges, such as the trade-off between accuracy and computational cost, the curse of dimensionality, the reliability of random number generators, and the verification and validation of the results.
Overview
Monte Carlo methods vary, but tend to follow a particular pattern:
Define a domain of possible inputs
Generate inputs randomly from a probability distribution over the domain
Perform a deterministic computation of the outputs
Aggregate the results
For example, consider a quadrant (circular sector) inscribed in a unit square. Given that the ratio of their areas is π/4, the value of π can be approximated using a Monte Carlo method:
Draw a square, then inscribe a quadrant within it
Uniformly scatter a given number of points over the square
Count the number of points inside the quadrant, i.e. having a distance from the origin of less than 1
The ratio of the inside-count and the total-sample-count is an estimate of the ratio of the two areas, π/4. Multiply the result by 4 to estimate π.
In this procedure the domain of inputs is the square that circumscribes the quadrant. One can generate random inputs by scattering grains over the square then perform a computation on each input (test whether it falls within the quadrant). Aggregating the results yields our final result, the approximation of π.
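A direct transcription of this procedure into code might look like the following sketch (illustrative; the function name and sample size are arbitrary):

import random

def estimate_pi(num_points):
    # Scatter points uniformly over the unit square; count those whose
    # distance from the origin is less than 1 (inside the quadrant).
    inside = 0
    for _ in range(num_points):
        x, y = random.random(), random.random()
        if x * x + y * y < 1.0:
            inside += 1
    return 4 * inside / num_points       # the area ratio is pi/4

print(estimate_pi(1_000_000))            # typically ~3.14, varying run to run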
There are two important considerations:
If the points are not uniformly distributed, then the approximation will be poor.
The approximation is generally poor if only a few points are randomly placed in the whole square. On average, the approximation improves as more points are placed.
Uses of Monte Carlo methods require large amounts of random numbers, and their use benefitted greatly from pseudorandom number generators, which are far quicker to use than the tables of random numbers that had been previously used for statistical sampling.
Application
Monte Carlo methods are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).
Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.
In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (the 'sample mean') of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
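As an illustration of the MCMC idea, the following is a minimal sketch (not from the article) of a random-walk Metropolis sampler whose stationary distribution is a standard normal; the proposal width and function names are arbitrary choices:

import math
import random

def metropolis(num_samples, step=1.0):
    # Random-walk Metropolis chain targeting the (unnormalized) standard
    # normal density exp(-x^2 / 2); its stationary distribution is N(0, 1).
    samples, x = [], 0.0
    for _ in range(num_samples):
        proposal = x + random.uniform(-step, step)   # symmetric proposal
        ratio = math.exp(-0.5 * (proposal ** 2 - x ** 2))
        if random.random() < ratio:                  # accept w.p. min(1, ratio)
            x = proposal
        samples.append(x)
    return samples

chain = metropolis(100_000)
print(sum(chain) / len(chain))   # ~0.0, the mean of the target distribution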
In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances, a flow of probability distributions with an increasing level of sampling complexity arise (path spaces models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples ( particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
Simple Monte Carlo
Suppose one wants to know the expected value μ of a population (and knows that μ exists), but does not have a formula available to compute it. The simple Monte Carlo method gives an estimate for μ by running n simulations and averaging the simulations’ results. It has no restrictions on the probability distribution of the inputs to the simulations, requiring only that the inputs are randomly generated and are independent of each other and that μ exists. A sufficiently large n will produce a value for m that is arbitrarily close to μ; more formally, for any ε > 0, |μ – m| ≤ ε with probability arbitrarily close to 1 for sufficiently large n.
Typically, the algorithm to obtain m can be written as follows (shown here as a runnable Python sketch; the run_simulation callable is an assumed name standing in for the problem-specific simulation):

def simple_monte_carlo(run_simulation, n):
    s = 0.0
    for _ in range(n):
        r_i = run_simulation()   # run the simulation for the ith time
        s += r_i                 # accumulate the results
    return s / n                 # m, the average of the n results
An example
Suppose we want to know how many times we should expect to throw three eight-sided dice for the total of the dice throws to be at least T. We know the expected value exists. The dice throws are randomly distributed and independent of each other. So simple Monte Carlo is applicable:
As a Python sketch (random.randint models one eight-sided die; the names are illustrative):

import random

def throws_needed(T):
    total, throws = 0, 0
    while total < T:            # throw until T is met or first exceeded
        total += sum(random.randint(1, 8) for _ in range(3))   # three dice
        throws += 1
    return throws               # ri, the number of throws

def estimate_mean_throws(T, n):
    return sum(throws_needed(T) for _ in range(n)) / n         # m
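For instance, estimate_mean_throws(50, 100_000) would estimate the expected number of throws needed for the running total to reach at least 50; the arguments here are arbitrary illustrative choices.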
If n is large enough, m will be within ε of μ for any ε > 0, with high probability.
Determining a sufficiently large n
General formula
Let ε > 0 be the desired bound on the error |μ – m|. Choose the desired confidence level – the percent chance that, when the Monte Carlo algorithm completes, m is indeed within ε of μ. Let z be the z-score corresponding to that confidence level.
Let s2 be the estimated variance, sometimes called the “sample” variance; it is the variance of the results obtained from a relatively small number k of “sample” simulations. Choose a k; Driels and Shin observe that “even for sample sizes an order of magnitude lower than the number required, the calculation of that number is quite stable.”
The following algorithm computes s2 in one pass while minimizing the possibility that accumulated numerical error produces erroneous results (shown as a Python sketch of the same running-mean recurrence, reusing the illustrative run_simulation callable):

def sample_variance(run_simulation, k):
    m = run_simulation()            # mi is the mean of the first i simulations
    s = 0.0                         # accumulated sum of scaled squared deviations
    for i in range(2, k + 1):
        r_i = run_simulation()      # run the simulation for the ith time
        delta = r_i - m             # δi = ri − mi−1
        m += delta / i              # mi = mi−1 + (1/i)δi
        s += (i - 1) / i * delta ** 2   # si = si−1 + ((i − 1)/i)(δi)^2
    return s / (k - 1), m           # s2 and mk, the mean of the k results
Note that, when the algorithm completes, mk is the mean of the k results.
n is sufficiently large when n ≥ z^2s^2/ε^2.
If n ≤ k, then mk = m; sufficient sample simulations were done to ensure that mk is within ε of μ. If n > k, then n simulations can be run “from scratch,” or, since k simulations have already been done, one can just run n – k more simulations and add their results into those from the sample simulations:
# m_k below is the mean of the k sample simulations, as returned above
s = m_k * k                    # recover the total of the k sample results
for _ in range(n - k):
    r_i = run_simulation()     # run one of the remaining n - k simulations
    s += r_i
m = s / n
A formula when simulations' results are bounded
An alternate formula can be used in the special case where all simulation results are bounded above and below.
Choose a value for ε that is twice the maximum allowed difference between μ and m. Let 0 < δ < 100 be the desired confidence level, expressed as a percentage. Let every simulation result r1, r2, …ri, … rn be such that a ≤ ri ≤ b for finite a and b. To have confidence of at least δ that |μ – m| < ε/2, use a value for n such that n ≥ 2(b – a)^2 ln(2/(1 − δ/100))/ε^2.
For example, if δ = 99%, then n ≥ 2(b – a)^2 ln(2/0.01)/ε^2 ≈ 10.6(b – a)^2/ε^2.
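Evaluated in code, the bound looks as follows (an illustrative sketch; the values of a, b, ε and δ are arbitrary):

import math

def required_n(a, b, eps, delta_pct):
    # Smallest n with n >= 2(b - a)^2 ln(2 / (1 - delta/100)) / eps^2.
    return math.ceil(2 * (b - a) ** 2
                     * math.log(2 / (1 - delta_pct / 100)) / eps ** 2)

# If every result lies in [1, 30] and we want |μ – m| < 0.5 (so eps = 1.0)
# with 99% confidence:
print(required_n(a=1, b=30, eps=1.0, delta_pct=99))   # 8912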
Computational costs
Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.
History
Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using probabilistic metaheuristics (see simulated annealing).
An early variant of the Monte Carlo method was devised to solve the Buffon's needle problem, in which can be estimated by dropping needles on a floor made of parallel equidistant strips. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but he did not publish this work.
In the late 1940s, Stanisław Ulam invented the modern version of the Markov Chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He later recounted that the inspiration came while he was playing solitaire during convalescence from an illness, when he realized that the chance of a successful layout could be estimated far more practically by playing out many random deals and counting the successes than by exhaustive combinatorial calculation.
Being secret, the work of von Neumann and Ulam required a code name. A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money from relatives to gamble.
Monte Carlo methods were central to the simulations required for further postwar development of nuclear weapons, including the design of the H-bomb, though severely limited by the computational tools at the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948. In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.
The theory of more sophisticated mean-field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics. An earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, used mean-field genetic-type Monte Carlo methods for estimating particle transmission energies. Mean-field genetic type Monte Carlo methodologies are also used as heuristic natural search algorithms (a.k.a. metaheuristic) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.
Quantum Monte Carlo, and more specifically diffusion Monte Carlo methods can also be interpreted as a mean-field particle Monte Carlo approximation of Feynman–Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.
The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. In 1993, Gordon et al. published in their seminal work the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that, compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state-space or the noise of the system. Another pioneering article in this field was Genshiro Kitagawa's, on a related "Monte Carlo filter", and the ones by Pierre Del Moral, and by Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut on particle filters, published in the mid-1990s. Particle filters were also developed in signal processing in 1989–1992 by P. Del Moral, J. C. Noyer, G. Rigal, and G. Salut at the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales) and the IT company DIGILOG on radar/sonar and GPS signal processing problems. These Sequential Monte Carlo methodologies can be interpreted as an acceptance-rejection sampler equipped with an interacting recycling mechanism.
From 1950 to 1996, all the publications on Sequential Monte Carlo methodologies, including the pruning and resample Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, or any discussion of the bias of the estimates or of genealogical and ancestral tree-based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms were written by Pierre Del Moral in 1996.
Branching-type particle methodologies with varying population sizes were also developed at the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons, and by Dan Crisan, Pierre Del Moral and Terry Lyons. Further developments in this field were described from 1999 to 2001 by P. Del Moral, A. Guionnet and L. Miclo.
Definitions
There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior).
Here are some examples:
Simulation: Drawing one pseudo-random uniform variable from the interval [0,1] can be used to simulate the tossing of a coin: if the value is less than or equal to 0.50, designate the outcome as heads; if it is greater than 0.50, designate the outcome as tails. This is a simulation, but not a Monte Carlo simulation.
Monte Carlo method: Pouring out a box of coins on a table, and then computing the ratio of coins that land heads versus tails is a Monte Carlo method of determining the behavior of repeated coin tosses, but it is not a simulation.
Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.
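A sketch of the third case in Python; the sample sizes are arbitrary illustrations:

```python
import random

def simulate_coin_tosses(n: int) -> float:
    """Monte Carlo simulation of n coin tosses: returns the observed fraction of heads."""
    heads = sum(1 for _ in range(n) if random.random() <= 0.50)
    return heads / n

# The estimate approaches 0.5 as n grows (law of large numbers).
for n in (100, 10_000, 1_000_000):
    print(n, simulate_coin_tosses(n))
```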
Kalos and Whitlock point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling."
Convergence of the Monte Carlo simulation can be checked with the Gelman-Rubin statistic.
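A compact sketch of that diagnostic, assuming several independent chains of scalar samples with equal length (the split-chain refinement used in modern practice is omitted for brevity):

```python
import statistics

def gelman_rubin(chains: list[list[float]]) -> float:
    """Potential scale reduction factor (R-hat) for m chains of equal length n.
    Values near 1 suggest the chains have converged to the same distribution."""
    m = len(chains)
    n = len(chains[0])
    chain_means = [statistics.fmean(c) for c in chains]
    grand_mean = statistics.fmean(chain_means)
    # Between-chain variance B and mean within-chain variance W.
    b = n / (m - 1) * sum((mu - grand_mean) ** 2 for mu in chain_means)
    w = statistics.fmean(statistics.variance(c) for c in chains)
    var_hat = (n - 1) / n * w + b / n  # pooled estimate of the posterior variance
    return (var_hat / w) ** 0.5
```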
Monte Carlo and random numbers
The main idea behind this method is that results are computed based on repeated random sampling and statistical analysis. A Monte Carlo simulation is, in effect, a set of random experiments, used in cases where the outcomes of those experiments are not known in advance.
Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally. Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense.
What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary.
Sawilowsky lists the characteristics of a high-quality Monte Carlo simulation:
the (pseudo-random) number generator has certain characteristics (e.g. a long "period" before the sequence repeats)
the (pseudo-random) number generator produces values that pass tests for randomness
there are enough samples to ensure accurate results
the proper sampling technique is used
the algorithm used is valid for what is being modeled
it simulates the phenomenon in question.
Pseudo-random number sampling algorithms are used to transform uniformly distributed pseudo-random numbers into numbers that are distributed according to a given probability distribution.
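One standard such algorithm is inverse transform sampling. A minimal Python sketch for the exponential distribution (the rate parameter below is an arbitrary example):

```python
import math
import random

def sample_exponential(lam: float) -> float:
    """Inverse transform sampling: if U ~ Uniform(0,1), then
    X = -ln(1 - U) / lam follows an Exponential(lam) distribution,
    because the CDF F(x) = 1 - exp(-lam * x) is inverted here."""
    u = random.random()
    return -math.log(1.0 - u) / lam

samples = [sample_exponential(2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to the theoretical mean 1/lam = 0.5
```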
Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods.
In an effort to assess the impact of random number quality on Monte Carlo simulation outcomes, astrophysical researchers tested cryptographically secure pseudorandom numbers generated via Intel's RDRAND instruction set, as compared to those derived from algorithms, like the Mersenne Twister, in Monte Carlo simulations of radio flares from brown dwarfs. No statistically significant difference was found between models generated with typical pseudorandom number generators and RDRAND for trials consisting of the generation of 10^7 random numbers.
Monte Carlo simulation versus "what if" scenarios
There are ways of using probabilities that are definitely not Monte Carlo simulations – for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded.
By contrast, Monte Carlo simulations sample from a probability distribution for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring. For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then running the comparison again with Monte Carlo simulation and triangular probability distributions shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called "rare events".
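The contrast can be sketched with a toy cost model; the three line items and their (low, most likely, high) figures below are invented for illustration:

```python
import random

# (low, mode, high) estimates for three uncertain cost items, arbitrary units.
items = [(8, 10, 15), (40, 50, 70), (3, 5, 9)]

# "What if" scenarios: sum the like-for-like point estimates.
best = sum(low for low, mode, high in items)
likely = sum(mode for low, mode, high in items)
worst = sum(high for low, mode, high in items)
print("what-if range:", best, likely, worst)

# Monte Carlo: sample each item from a triangular distribution and total them.
totals = [sum(random.triangular(low, high, mode) for low, mode, high in items)
          for _ in range(100_000)]
totals.sort()
# The central 90% interval is typically much narrower than the what-if extremes,
# because jointly extreme draws for all items are rare events.
print("MC 5th-95th percentile:", totals[5_000], totals[95_000])
```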
Applications
Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with many coupled degrees of freedom. Areas of application include:
Physical sciences
Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms as well as in modeling radiation transport for radiation dosimetry calculations. In statistical physics, Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems. Quantum Monte Carlo methods solve the many-body problem for quantum systems. In radiation materials science, the binary collision approximation for simulating ion implantation is usually based on a Monte Carlo approach to select the next colliding atom. In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both galaxy evolution and microwave radiation transmission through a rough planetary surface. Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting.
Engineering
Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example,
In microelectronics engineering, Monte Carlo methods are applied to analyze correlated and uncorrelated variations in analog and digital integrated circuits.
In geostatistics and geometallurgy, Monte Carlo methods underpin the design of mineral processing flowsheets and contribute to quantitative risk analysis.
In fluid dynamics, in particular rarefied gas dynamics, where the Boltzmann equation is solved for finite Knudsen number fluid flows using the direct simulation Monte Carlo method in combination with highly efficient computational algorithms.
In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that forms the heart of the SLAM (simultaneous localization and mapping) algorithm.
In telecommunications, when planning a wireless network, the design must be proven to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if results are not satisfactory, the network design goes through an optimization process.
In reliability engineering, Monte Carlo simulation is used to compute system-level response given the component-level response.
In signal processing and Bayesian inference, particle filters and sequential Monte Carlo techniques are a class of mean-field particle methods for sampling and computing the posterior distribution of a signal process given some noisy and partial observations using interacting empirical measures.
Climate change and radiative forcing
The Intergovernmental Panel on Climate Change relies on Monte Carlo methods in probability density function analysis of radiative forcing.
Computational biology
Monte Carlo methods are used in various fields of computational biology, for example for Bayesian inference in phylogeny, or for studying biological systems such as genomes, proteins, or membranes.
The systems can be studied in the coarse-grained or ab initio frameworks depending on the desired accuracy.
Computer simulations allow monitoring of the local environment of a particular molecule to see if some chemical reaction is happening for instance. In cases where it is not feasible to conduct a physical experiment, thought experiments can be conducted (for instance: breaking bonds, introducing impurities at specific sites, changing the local/global structure, or introducing external fields).
Computer graphics
Path tracing, occasionally referred to as Monte Carlo ray tracing, renders a 3D scene by randomly tracing samples of possible light paths. Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence.
Applied statistics
The standards for Monte Carlo experiments in statistics were set by Sawilowsky. In applied statistics, Monte Carlo methods may be used for at least four purposes:
To compare competing statistics for small samples under realistic data conditions. Although type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions.
To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions.
To provide a random sample from the posterior distribution in Bayesian inference. This sample then approximates and summarizes all the essential features of the posterior.
To provide efficient random estimates of the Hessian matrix of the negative log-likelihood function that may be averaged to form an estimate of the Fisher information matrix.
Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice—or more frequently—for the efficiency of not having to track which permutations have already been selected).
Artificial intelligence for games
Monte Carlo methods have been developed into a technique called Monte-Carlo tree search that is useful for searching for the best move in a game. Possible moves are organized in a search tree and many random simulations are used to estimate the long-term potential of each move. A black box simulator represents the opponent's moves.
The Monte Carlo tree search (MCTS) method has four steps:
Starting at the root node of the tree, select optimal child nodes until a leaf node is reached.
Expand the leaf node and choose one of its children.
Play a simulated game starting with that node.
Use the results of that simulated game to update the node and its ancestors.
The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move.
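The four steps can be made concrete with a small sketch. The following Python program is an illustrative UCT-style implementation (not taken from any particular source) applied to misère Nim, a toy game in which players remove one to three sticks and whoever takes the last stick loses:

```python
import math
import random

class Node:
    """A game state; wins counts rollout wins for the player whose move led here."""
    def __init__(self, sticks, parent=None, move=None):
        self.sticks, self.parent, self.move = sticks, parent, move
        self.children = []
        self.untried = [m for m in (1, 2, 3) if m <= sticks]
        self.wins = 0
        self.visits = 0

def best_uct_child(node):
    # Selection rule (UCB1): balance win rate against under-explored moves.
    return max(node.children, key=lambda c:
               c.wins / c.visits + math.sqrt(2 * math.log(node.visits) / c.visits))

def rollout(sticks):
    """Random playout; returns True if the player to move from here wins."""
    turn = 0
    while True:
        sticks -= random.choice([m for m in (1, 2, 3) if m <= sticks])
        if sticks == 0:
            return turn % 2 == 1  # whoever took the last stick loses
        turn += 1

def mcts(root_sticks, iterations=5_000):
    root = Node(root_sticks)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes.
        while not node.untried and node.children:
            node = best_uct_child(node)
        # 2. Expansion: add one child for an as-yet untried move.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.sticks - move, parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: play out a random game from the new node.
        if node.sticks == 0:
            mover_wins = False  # the move into this node took the last stick
        else:
            mover_wins = not rollout(node.sticks)
        # 4. Backpropagation: update the path, flipping perspective per level.
        while node is not None:
            node.visits += 1
            node.wins += 1 if mover_wins else 0
            mover_wins = not mover_wins
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

print(mcts(10))  # with 10 sticks the winning move is to take 1, leaving 9
```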
Monte Carlo Tree Search has been used successfully to play games such as Go, Tantrix, Battleship, Havannah, and Arimaa.
Design and visuals
Monte Carlo methods are also efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, and cinematic special effects.
Search and rescue
The US Coast Guard utilizes Monte Carlo methods within its computer modeling software SAROPS in order to calculate the probable locations of vessels during search and rescue operations. Each simulation can generate as many as ten thousand data points that are randomly distributed based upon provided variables. Search patterns are then generated based upon extrapolations of these data in order to optimize the probability of containment (POC) and the probability of detection (POD), which together will equal an overall probability of success (POS). Ultimately this serves as a practical application of probability distribution in order to provide the swiftest and most expedient method of rescue, saving both lives and resources.
Finance and business
Monte Carlo simulation is commonly used to evaluate the risk and uncertainty that would affect the outcome of different decision options. Monte Carlo simulation allows the business risk analyst to incorporate the total effects of uncertainty in variables like sales volume, commodity and labor prices, interest and exchange rates, as well as the effect of distinct risk events like the cancellation of a contract or the change of a tax law.
Monte Carlo methods in finance are often used to evaluate investments in projects at a business unit or corporate level, or other financial valuations. They can be used to model project schedules, where simulations aggregate estimates for worst-case, best-case, and most likely durations for each task to determine outcomes for the overall project. Monte Carlo methods are also used in option pricing and default risk analysis. Additionally, they can be used to estimate the financial impact of medical interventions.
Law
A Monte Carlo approach was used for evaluating the potential value of a proposed program to help female petitioners in Wisconsin be successful in their applications for harassment and domestic abuse restraining orders. It was proposed to help women succeed in their petitions by providing them with greater advocacy thereby potentially reducing the risk of rape and physical assault. However, there were many variables in play that could not be estimated perfectly, including the effectiveness of restraining orders, the success rate of petitioners both with and without advocacy, and many others. The study ran trials that varied these variables to come up with an overall estimate of the success level of the proposed program as a whole.
Library science
A Monte Carlo approach has also been used to simulate the number of book publications by genre in Malaysia. The simulation utilized previously published national book publication data and book prices by genre in the local market. The Monte Carlo results were used to determine which book genres Malaysians favor and to compare book publications between Malaysia and Japan.
Other
Nassim Nicholas Taleb writes about Monte Carlo generators in his 2001 book Fooled by Randomness as a real instance of the reverse Turing test: a human can be declared unintelligent if their writing cannot be told apart from a generated one.
Use in mathematics
In general, the Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.
Integration
Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then 10^100 points are needed for 100 dimensions—far too many to be computed. This is called the curse of dimensionality. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to an iterated integral. 100 dimensions is by no means unusual, since in many physical problems, a "dimension" is equivalent to a degree of freedom.
Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space, and taking some kind of average of the function values at these points. By the central limit theorem, this method displays 1/√N convergence—i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions.
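A minimal sketch of this in Python (the integrand and sample sizes are arbitrary illustrations): the function summing the squared coordinates has the exact integral 100/3 over the unit 100-cube, so the error of the estimate can be tracked directly.

```python
import random

def mc_integrate(f, dim, n_samples):
    """Estimate the integral of f over the unit hypercube [0,1]^dim
    by averaging f at uniformly random points (the cube has volume 1)."""
    total = 0.0
    for _ in range(n_samples):
        x = [random.random() for _ in range(dim)]
        total += f(x)
    return total / n_samples

f = lambda x: sum(xi * xi for xi in x)  # exact integral over [0,1]^100 is 100/3
for n in (1_000, 4_000, 16_000):
    est = mc_integrate(f, dim=100, n_samples=n)
    # The error shrinks roughly like 1/sqrt(n): each 4x increase in samples
    # roughly halves it, independent of the 100 dimensions.
    print(n, est, abs(est - 100 / 3))
```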
A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling or the VEGAS algorithm.
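A sketch of the idea on a classic example, estimating the tail probability P(X > 4) for a standard normal variable: samples are drawn from a shifted exponential proposal concentrated where the integrand is non-negligible, then reweighted. The proposal choice here is one common textbook option, not the only one.

```python
import math
import random

def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def tail_prob_importance(n, threshold=4.0):
    """Estimate P(X > threshold) for X ~ N(0,1) by sampling where the
    integrand lives: draws come from an Exponential(1) shifted to start
    at the threshold, and each sample is reweighted by p(x)/q(x)."""
    total = 0.0
    for _ in range(n):
        x = threshold + random.expovariate(1.0)   # proposal sample, always > threshold
        q = math.exp(-(x - threshold))            # proposal density at x
        total += normal_pdf(x) / q                # importance weight
    return total / n

print(tail_prob_importance(100_000))  # ~3.17e-5; naive sampling would almost
                                      # never land in this region
```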
A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly.
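A sketch of the idea using a hand-rolled Halton sequence (the prime bases 2 and 3 are the conventional choice in two dimensions); the π estimate is only an illustration of how low-discrepancy points are consumed in place of random ones.

```python
def halton(index, base):
    """The index-th element of the van der Corput sequence in the given base;
    coordinates built from different prime bases form a Halton point."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# 2-D Halton points (bases 2 and 3) used to estimate pi via the quarter circle.
n = 10_000
inside = sum(1 for i in range(1, n + 1)
             if halton(i, 2) ** 2 + halton(i, 3) ** 2 <= 1.0)
print(4 * inside / n)  # tends to converge faster than the same estimate
                       # computed with pseudorandom points
```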
Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis–Hastings algorithm, Gibbs sampling, Wang and Landau algorithm, and interacting type MCMC methodologies such as the sequential Monte Carlo samplers.
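A minimal random-walk Metropolis sketch (a special case of Metropolis–Hastings with a symmetric proposal; the target density and step size below are illustrative):

```python
import math
import random

def metropolis(log_target, x0, n_steps, step=1.0):
    """Random-walk Metropolis: propose x' = x + N(0, step), accept with
    probability min(1, target(x')/target(x)); the chain's samples approximate
    the target distribution without needing its normalizing constant."""
    samples, x = [], x0
    log_p = log_target(x)
    for _ in range(n_steps):
        x_new = x + random.gauss(0.0, step)
        log_p_new = log_target(x_new)
        if math.log(random.random()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new  # accept the proposal
        samples.append(x)                # on rejection, the old x is repeated
    return samples

# Example target: unnormalized standard normal, log p(x) = -x^2 / 2.
chain = metropolis(lambda x: -x * x / 2, x0=0.0, n_steps=50_000)
print(sum(chain) / len(chain))  # near 0, the target mean
```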
Simulation and optimization
Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. The problem is to minimize (or maximize) functions of some vector that often has many dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the traveling salesman problem the goal is to minimize distance traveled. There are also applications to engineering design, such as multidisciplinary design optimization. It has been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration spaces. Comprehensive reviews of the issues related to simulation and optimization are available in the literature.
The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty, and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. Suppose, however, that instead of minimizing the total distance traveled to visit each desired destination, the goal were to minimize the total time needed to reach each destination. This goes beyond conventional optimization, since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, to determine the optimal path a different simulation is required: first understand the range of potential times it could take to go from one point to another (represented here by a probability distribution rather than a specific distance), and then optimize the travel decisions to identify the best path to follow, taking that uncertainty into account.
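A toy sketch of this uncertain-time variant: travel times are drawn from invented ranges rather than given as fixed distances, and each candidate tour is scored by a Monte Carlo estimate of its expected duration.

```python
import itertools
import random

# Hypothetical travel times between 4 points: (low, high) ranges in minutes,
# modeling traffic uncertainty rather than fixed distances.
time_range = {
    (0, 1): (10, 30), (0, 2): (15, 25), (0, 3): (20, 40),
    (1, 2): (12, 18), (1, 3): (25, 45), (2, 3): (8, 22),
}

def leg_time(a, b):
    lo, hi = time_range[(min(a, b), max(a, b))]
    return random.uniform(lo, hi)  # one random realization of this leg

def expected_tour_time(tour, n_sims=5_000):
    """Monte Carlo estimate of a tour's expected duration under uncertainty."""
    total = 0.0
    for _ in range(n_sims):
        total += sum(leg_time(a, b) for a, b in zip(tour, tour[1:]))
    return total / n_sims

# Enumerate round trips from point 0 and pick the best-scoring one.
tours = [(0,) + p + (0,) for p in itertools.permutations((1, 2, 3))]
best = min(tours, key=expected_tour_time)
print(best, expected_tour_time(best))
```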
Inverse problems
Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines prior information with new information obtained by measuring some observable parameters (data).
As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as normally information on the resolution power of the data is desired. In the general case many parameters are modeled, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.
Philosophy
A popular exposition of the Monte Carlo method was given by McCracken. The method's general philosophy was discussed by Elishakoff, and by Grüne-Yanoff and Weirich.
See also
Auxiliary-field Monte Carlo
Biology Monte Carlo method
Direct simulation Monte Carlo
Dynamic Monte Carlo method
Ergodicity
Genetic algorithms
Kinetic Monte Carlo
List of software for Monte Carlo molecular modeling
Mean-field particle methods
Monte Carlo method for photon transport
Monte Carlo methods for electron transport
Monte Carlo N-Particle Transport Code
Morris method
Multilevel Monte Carlo method
Quasi-Monte Carlo method
Sobol sequence
Temporal difference learning
References
Citations
Sources
External links
Numerical analysis
Statistical mechanics
Computational physics
Sampling techniques
Statistical approximations
Stochastic simulation
Randomized algorithms
Risk analysis methodologies | Monte Carlo method | [
"Physics",
"Mathematics"
] | 7,879 | [
"Monte Carlo methods",
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Statistical approximations",
"Numerical analysis",
"Statistical mechanics",
"Approximations"
] |
603,278 | https://en.wikipedia.org/wiki/Nanosensor | Nanosensors are nanoscale devices that measure physical quantities and convert these to signals that can be detected and analyzed. There are several ways proposed today to make nanosensors; these include top-down lithography, bottom-up assembly, and molecular self-assembly. There are different types of nanosensors in the market and in development for various applications, most notably in defense, environmental, and healthcare industries. These sensors share the same basic workflow: a selective binding of an analyte, signal generation from the interaction of the nanosensor with the bio-element, and processing of the signal into useful metrics.
Characteristics
Nanomaterials-based sensors have several benefits in sensitivity and specificity over sensors made from traditional materials, due to nanomaterial features not present in bulk material that arise at the nanoscale. Nanosensors can have increased specificity because they operate at a similar scale as natural biological processes, allowing functionalization with chemical and biological molecules, with recognition events that cause detectable physical changes. Enhancements in sensitivity stem from the high surface-to-volume ratio of nanomaterials, as well as novel physical properties of nanomaterials that can be used as the basis for detection, including nanophotonics. Nanosensors can also potentially be integrated with nanoelectronics to add native processing capability to the nanosensor.
In addition to their sensitivity and specificity, nanosensors offer significant advantages in cost and response times, making them suitable for high-throughput applications. Nanosensors provide real-time monitoring compared to traditional detection methods such as chromatography and spectroscopy. These traditional methods may take days to weeks to obtain results and often require investment in capital costs as well as time for sample preparation.
One-dimensional nanomaterials such as nanowires and nanotubes are well suited for use in nanosensors, as compared to bulk or thin-film planar devices. They can function both as transducers and wires to transmit the signal. Their high surface area can cause large signal changes upon binding of an analyte. Their small size can enable extensive multiplexing of individually addressable sensor units in a small device. Their operation is also "label free" in the sense of not requiring fluorescent or radioactive labels on the analytes. Zinc oxide nanowire is used for gas sensing applications, given that it exhibits high sensitivity toward low concentrations of gas under ambient conditions and can be fabricated easily at low cost.
There are several challenges for nanosensors, including avoiding drift and fouling, developing reproducible calibration methods, applying preconcentration and separation methods to attain a proper analyte concentration that avoids saturation, and integrating the nanosensor with other elements of a sensor package in a reliable manufacturable manner. Because nanosensors are a relatively new technology, there are many unanswered questions regarding nanotoxicology, which currently limits their application in biological systems.
Potential applications for nanosensors include medicine, detection of contaminants and pathogens, and monitoring manufacturing processes and transportation systems. By measuring changes in physical properties (volume, concentration, displacement and velocity, gravitational, electrical, and magnetic forces, pressure, or temperature) nanosensors may be able to distinguish between and recognize certain cells at the molecular level in order to deliver medicine or monitor development to specific places in the body. The type of signal transduction defines the major classification system for nanosensors. Some of the main types of nanosensor readouts include optical, mechanical, vibrational, or electromagnetic.
As an example of classification, nanosensors that use molecularly imprinted polymers (MIP) can be divided into three categories: electrochemical, piezoelectric, or spectroscopic sensors. Electrochemical sensors induce a change in the electrochemical properties of the sensing material, which include charge, conductivity, and electric potential. Piezoelectric sensors either convert mechanical force into electric force or vice versa; this force is then transduced into a signal. MIP spectroscopic sensors can be divided into three subcategories: chemiluminescent sensors, surface plasmon resonance sensors, and fluorescence sensors. As the names suggest, these sensors produce light-based signals in the form of chemiluminescence, resonance, and fluorescence. As these examples illustrate, the type of change that a sensor detects and the type of signal it induces depend on the type of sensor.
Mechanisms of operation
There are multiple mechanisms by which a recognition event can be transduced into a measurable signal; generally, these take advantage of the nanomaterial sensitivity and other unique properties to detect a selectively bound analyte.
Electrochemical nanosensors are based on detecting a resistance change in the nanomaterial upon binding of an analyte, due to changes in scattering or to the depletion or accumulation of charge carriers. One possibility is to use nanowires such as carbon nanotubes, conductive polymers, or metal oxide nanowires as gates in field-effect transistors, although as of 2009 they had not yet been demonstrated in real-world conditions. Chemical nanosensors contain a chemical recognition system (receptor) and a physiochemical transducer, in which the receptor interacts with analyte to produce electrical signals. In one case, upon interaction of the analyte with the receptor, the nanoporous transducer had a change in impedance which was determined as the sensor signal. Other examples include electromagnetic or plasmonic nanosensors, spectroscopic nanosensors such as surface-enhanced Raman spectroscopy, magnetoelectronic or spintronic nanosensors, and mechanical nanosensors.
Biological nanosensors consist of a bio-receptor and a transducer. The transduction method of choice is currently fluorescence because of the high sensitivity and relative ease of measurement. The measurement can be achieved by using the following methods: binding active nanoparticles to active proteins within the cell, using site-directed mutagenesis to produce indicator proteins, allowing for real-time measurements, or by creating a nanomaterial (e.g. nanofibers) with attachment sites for the bio-receptors. Even though electrochemical nanosensors can be used to measure intracellular properties, they are typically less selective for biological measurements, as they lack the high specificity of bio-receptors (e.g. antibody, DNA).
Photonic devices can also be used as nanosensors to quantify concentrations of clinically relevant samples. A principle of operation of these sensors is based on the chemical modulation of a hydrogel film volume that incorporates a Bragg grating. As the hydrogel swells or shrinks upon chemical stimulation, the Bragg grating changes color and diffracts light at different wavelengths. The diffracted light can be correlated with the concentration of a target analyte.
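The wavelength shift can be illustrated with the first-order Bragg condition at normal incidence, λ = 2nΛ, which relates the diffracted wavelength to the grating spacing; the refractive index and spacings in this small sketch are invented for illustration only.

```python
def bragg_wavelength(n_eff, spacing_nm, order=1):
    """First-order Bragg condition at normal incidence: lambda = 2 * n * d / m."""
    return 2 * n_eff * spacing_nm / order

# Illustrative numbers: swelling stretches the grating spacing, red-shifting
# the diffracted color from green toward red.
for spacing in (180, 200, 220):  # grating spacing in nm
    print(spacing, bragg_wavelength(1.45, spacing))  # ~522, ~580, ~638 nm
```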
Another type of nanosensor is one that works on a colorimetric basis. Here, the presence of the analyte causes a chemical reaction or morphological alteration that produces a visible color change. One such application is the use of gold nanoparticles to detect heavy metals. Many harmful gases can also be detected by a colorimetric change, such as through the commercially available Dräger Tube. These provide an alternative to bulky, lab-scale systems, as they can be miniaturized into point-of-sample devices. For example, many chemicals are regulated by the Environmental Protection Agency and require extensive testing to ensure contaminant levels are within the appropriate limits. Colorimetric nanosensors provide a method for on-site determination of many contaminants.
Production methods
The production method plays a central role in determining the characteristics of the manufactured nanosensor, in that the function of a nanosensor can be tailored by controlling the surface of its nanoparticles. There are two main approaches to the manufacturing of nanosensors: top-down methods, which begin with a pattern generated at a larger scale that is then reduced to the microscale, and bottom-up methods, which start with atoms or molecules that build up to nanostructures.
Top-down methods
Lithography
Lithography involves starting out with a larger block of some material and carving out the desired form. These carved-out devices, notably put to use in specific microelectromechanical systems used as microsensors, generally only reach the micro size, but the most recent of these have begun to incorporate nanosized components. One of the most common methods is called electron-beam lithography. Although very costly, this technique effectively forms a distribution of circular or ellipsoidal plots on a two-dimensional surface. Another method is electrodeposition, which requires conductive elements to produce miniaturized devices.
Fiber pulling
This method consists of using a tension device to stretch the major axis of a fiber while it is heated, to achieve nanoscale dimensions. It is especially used with optical fibers to develop optical-fiber-based nanosensors.
Chemical etching
Two different types of chemical etching have been reported. In the Turner method, a fiber is etched to a point while placed in the meniscus between hydrofluoric acid and an organic overlayer. This technique has been shown to produce fibers with large taper angles (thus increasing the light reaching the tip of the fiber) and tip diameters comparable to the pulling method. The second method is tube etching, which involves etching an optical fiber with a single-component solution of hydrogen fluoride. A silica fiber, surrounded with an organic cladding, is polished and one end is placed in a container of hydrofluoric acid. The acid then begins to etch away the tip of the fiber without destroying the cladding. As the silica fiber is etched away, the polymer cladding acts as a wall, creating microcurrents in the hydrofluoric acid that, coupled with capillary action, cause the fiber to be etched into the shape of a cone with large, smooth tapers. This method shows much less susceptibility to environmental parameters than the Turner method.
Bottom-up methods
These methods involve assembling the sensors out of smaller components, usually individual atoms or molecules. This is done by arranging atoms in specific patterns, which has been achieved in laboratory tests through the use of atomic force microscopy, but is still difficult to achieve en masse and is not economically viable.
Self-assembly
Also known as “growing”, this method most often entails an already complete set of components that would automatically assemble themselves into a finished product. Accurately being able to reproduce this effect for a desired sensor in a laboratory would imply that scientists could manufacture nanosensors much more quickly and potentially far more cheaply by letting numerous molecules assemble themselves with little or no outside influence, rather than having to manually assemble each sensor.
Although the conventional fabrication techniques have proven to be efficient, further improvements in the production method can lead to minimization of cost and enhancement of performance. Challenges with current production methods include uneven distribution, size, and shape of nanoparticles, which all lead to limitations in performance. In 2006, researchers in Berlin patented their invention of a novel diagnostic nanosensor fabricated with nanosphere lithography (NSL), which allows precise control over the size and shape of nanoparticles and creates nanoislands. The metallic nanoislands produced an increase in signal transduction and thus increased the sensitivity of the sensor. The results also showed that the sensitivity and specificity of the diagnostic nanosensor depend on the size of the nanoparticles: decreasing the nanoparticle size increases the sensitivity.
Current density is influenced by the distribution, size, and shape of nanoparticles. These properties can be improved by exploiting capillary forces. In recent research, capillary forces were induced by applying five microliters of ethanol; as a result, individual nanoparticles merged into larger islands (i.e., roughly 20-micrometer-sized particles) separated by 10 micrometers on average, while the smaller ones were dissolved and absorbed. On the other hand, applying twice as much ethanol (i.e., 10 microliters) damaged the nanolayers, while applying too little (i.e., two microliters) failed to spread across them.
Applications
One of the first working examples of a synthetic nanosensor was built by researchers at the Georgia Institute of Technology in 1999. It involved attaching a single particle onto the end of a carbon nanotube and measuring the vibrational frequency of the nanotube both with and without the particle. The discrepancy between the two frequencies allowed the researchers to measure the mass of the attached particle.
Since then, increasing amounts of research have gone into nanosensors, whereby modern nanosensors have been developed for many applications. Currently, the applications of nanosensors in the market include: healthcare, defense and military, and others such as food, environment, and agriculture.
Defense and military
Nanoscience as a whole has many potential applications in the defense and military sector, including chemical detection, decontamination, and forensics. Some nanosensors in development for defense applications include nanosensors for the detection of explosives or toxic gases. Such nanosensors work on the principle that gas molecules can be distinguished based on their mass using, for example, piezoelectric sensors. If a gas molecule is adsorbed at the surface of the detector, the resonance frequency of the crystal changes, and this can be measured as a change in electrical properties. In addition, field-effect transistors, used as potentiometers, can detect toxic gases if their gate is made sensitive to them.
In a similar application, nanosensors can be utilized in military and law enforcement clothing and gear. The Navy Research Laboratory's Institute for Nanoscience has studied quantum dots for application in nanophotonics and identifying biological materials. Nanoparticles layered with polymers and other receptor molecules will change color when contacted by analytes such as toxic gases. This alerts the user that they are in danger. Other projects involve embedding clothing with biometric sensors to relay information regarding the user's health and vitals, which would be useful for monitoring soldiers in combat.
Surprisingly, some of the most challenging aspects in creating nanosensors for defense and military use are political in nature, rather than technical. Many different government agencies must work together to allocate budgets and share information and progress in testing; this can be difficult with such large and complex institutions. In addition, visas and immigration status can become an issue for foreign researchers - as the subject matter is very sensitive, government clearance can sometimes be required. Finally, there are currently not well defined or clear regulations on nanosensor testing or applications in the sensor industry, which contributes to the difficulty of implementation.
Food and the environment
Nanosensors can improve various sub-areas within food and environment sectors including food processing, agriculture, air and water quality monitoring, and packaging and transport. Due to their sensitivity, as well as their tunability and resulting binding selectivity, nanosensors are very effective and can be designed for a wide variety of environmental applications. Such applications of nanosensors help in a convenient, rapid, and ultrasensitive assessment of many types of environmental pollutants.
Chemical sensors are useful for analyzing odors from food samples and detecting atmospheric gases. The "electronic nose" was developed in 1988 to determine the quality and freshness of food samples using traditional sensors, but more recently the sensing film has been improved with nanomaterials. A sample is placed in a chamber where volatile compounds become concentrated in the gas phase, whereby the gas is then pumped through the chamber to carry the aroma to the sensor that measures its unique fingerprint. The high surface area to volume ratio of the nanomaterials allows for greater interaction with analytes and the nanosensor's fast response time enables the separation of interfering responses. Chemical sensors, too, have been built using nanotubes to detect various properties of gaseous molecules. Many carbon nanotube based sensors are designed as field effect transistors, taking advantage of their sensitivity. The electrical conductivity of these nanotubes will change due to charge transfer and chemical doping by other molecules, enabling their detection. To enhance their selectivity, many of these involve a system by which nanosensors are built to have a specific pocket for another molecule. Carbon nanotubes have been used to sense ionization of gaseous molecules while nanotubes made out of titanium have been employed to detect atmospheric concentrations of hydrogen at the molecular level. Some of these have been designed as field effect transistors, while others take advantage of optical sensing capabilities. Selective analyte binding is detected through spectral shift or fluorescence modulation. In a similar fashion, Flood et al. have shown that supramolecular host–guest chemistry offers quantitative sensing using Raman scattered light as well as SERS.
Other types of nanosensors, including quantum dots and gold nanoparticles, are currently being developed to detect pollutants and toxins in the environment. These take advantage of the localized surface plasmon resonance (LSPR) that arises at the nanoscale, which results in wavelength specific absorption. This LSPR spectrum is particularly sensitive, and its dependence on nanoparticle size and environment can be used in various ways to design optical sensors. To take advantage of the LSPR spectrum shift that occurs when molecules bind to the nanoparticle, their surfaces can be functionalized to dictate which molecules will bind and trigger a response. For environmental applications, quantum dot surfaces can be modified with antibodies that bind specifically to microorganisms or other pollutants. Spectroscopy can then be used to observe and quantify this spectrum shift, enabling precise detection, potentially on the order of molecules. Similarly, fluorescent semiconducting nanosensors may take advantage of fluorescence resonance energy transfer (FRET) to achieve optical detection. Quantum dots can be used as donors, and will transfer electronic excitation energy when positioned near acceptor molecules, thus losing their fluorescence. These quantum dots can be functionalized to determine which molecules will bind, upon which fluorescence will be restored. Gold nanoparticle-based optical sensors can be used to detect heavy metals very precisely; for example, mercury levels as low as 0.49 nanometers. This sensing modality takes advantage of FRET, in which the presence of metals inhibits the interaction between quantum dots and gold nanoparticles, and quenches the FRET response. Another potential implementation takes advantage of the size dependence of the LSPR spectrum to achieve ion sensing. In one study, Liu et al. functionalized gold nanoparticles with a Pb2+ sensitive enzyme to produce a lead sensor. Generally, the gold nanoparticles would aggregate as they approached each other, and the change in size would result in a color change. Interactions between the enzyme and Pb2+ ions would inhibit this aggregation, and thus the presence of ions could be detected.
The main challenge associated with using nanosensors in food and the environment is determining their associated toxicity and overall effect on the environment. Currently, there is insufficient knowledge on how the implementation of nanosensors will affect the soil, plants, and humans in the long-term. This is difficult to fully address because nanoparticle toxicity depends heavily on the type, size, and dosage of the particle as well as environmental variables including pH, temperature, and humidity. To mitigate potential risk, research is being done to manufacture safe, nontoxic nanomaterials, as part of an overall effort towards green nanotechnology.
Healthcare
Nanosensors possess great potential for diagnostic medicine, enabling early identification of disease without reliance on observable symptoms. Ideal nanosensor implementations look to emulate the response of immune cells in the body, incorporating both diagnostic and immune response functionalities, while transmitting data to allow for monitoring of the sensor input and response. However, this model remains a long-term goal, and research is currently focused on the immediate diagnostic capabilities of nanosensors. The intracellular implementation of nanosensors synthesized with biodegradable polymers induces signals that enable real-time monitoring and thus paves the way for advancement in drug delivery and treatment.
One example of these nanosensors involves using the fluorescence properties of cadmium selenide quantum dots as sensors to uncover tumors within the body. A downside to the cadmium selenide dots, however, is that they are highly toxic to the body. As a result, researchers are working on developing alternate dots made out of a different, less toxic material while still retaining some of the fluorescence properties. In particular, they have been investigating the particular benefits of zinc sulfide quantum dots which, though they are not quite as fluorescent as cadmium selenide, can be augmented with other metals including manganese and various lanthanide elements. In addition, these newer quantum dots become more fluorescent when they bond to their target cells.
Another application of nanosensors involves using silicon nanowires in IV lines to monitor organ health. The nanowires are sensitive enough to detect trace biomarkers that diffuse into the IV line from blood, allowing monitoring for kidney or organ failure. These nanowires would allow for continuous biomarker measurement, which provides some benefits in terms of temporal sensitivity over traditional biomarker quantification assays such as ELISA.
Nanosensors can also be used to detect contamination in organ implants. The nanosensor is embedded into the implant and detects contamination in the cells surrounding the implant through an electric signal sent to a clinician or healthcare provider. The nanosensor can detect whether the cells are healthy, inflammatory, or contaminated with bacteria. However, a main drawback is found within the long term use of the implant, where tissue grows on top of the sensors, limiting their ability to compress. This impedes the production of electrical charges, thus shortening the lifetime of these nanosensors, as they use the piezoelectric effect to self-power.
Similarly to those used to measure atmospheric pollutants, gold-particle based nanosensors are used to give an early diagnosis of several types of cancer by detecting volatile organic compounds (VOCs) in breath, as tumor growth is associated with peroxidation of the cell membrane. Another cancer-related application, though still at the mouse-trial stage, is the use of peptide-coated nanoparticles as activity-based sensors to detect lung cancer. The two main advantages of using nanoparticles to detect diseases are that they allow detection at an early stage, since they can detect tumors on the order of millimeters in size, and that they provide a cost-effective, easy-to-use, portable, and non-invasive diagnostic tool.
A recent effort towards advancement in nanosensor technology has employed molecular imprinting, a technique used to synthesize polymer matrices that act as a receptor in molecular recognition. Analogous to the enzyme-substrate lock and key model, molecular imprinting uses template molecules with functional monomers to form polymer matrices with a specific shape corresponding to the target template molecules, thus increasing the selectivity and affinity of the matrices. This technique has enabled nanosensors to detect chemical species. In the field of biotechnology, molecularly imprinted polymers (MIP) are synthesized receptors that have shown promising, cost-effective alternatives to natural antibodies in that they are engineered to have high selectivity and affinity. For example, an experiment with an MI sensor containing nanotips with a non-conductive polyphenol nano-coating (PPn coating) showed selective detection of the E7 protein and thus demonstrated the potential use of these nanosensors in detection and diagnosis of human papillomavirus, other human pathogens, and toxins. As shown above, nanosensors with the molecular imprinting technique are capable of selectively detecting ultrasensitive chemical species, in that artificially modifying the polymer matrices increases the affinity and selectivity. Although molecularly imprinted polymers provide advantages in selective molecular recognition of nanosensors, the technique itself is relatively recent and there still remain challenges such as attenuated signals, detection systems lacking effective transducers, and surfaces lacking efficient detection. Further investigation and research on the field of molecularly imprinted polymers is crucial for the development of highly effective nanosensors.
In order to develop smart health care with nanosensors, a network of nanosensors, often called a nanonetwork, needs to be established to overcome the size and power limitations of individual nanosensors. Nanonetworks not only mitigate the existing challenges but also provide numerous improvements. Cell-level resolution of nanosensors would enable treatments that eliminate side effects and allow continuous monitoring and reporting of patients' conditions.
Nanonetworks require further study in that nanosensors are different from traditional sensors. The most common mechanism of sensor networks is electromagnetic communication. However, the current paradigm is not applicable to nanodevices due to their low range and power. Optical signal transduction has been suggested as an alternative to classical electromagnetic telemetry and has monitoring applications in human bodies. Other suggested mechanisms include bioinspired molecular communications, wired and wireless active transport in molecular communications, Förster energy transfer, and more. It is crucial to build an efficient nanonetwork so that it can be applied in fields such as medical implants, body area networks (BAN), the internet of nano things (IoNT), drug delivery, and more. With an adept nanonetwork, bio-implantable nanodevices can provide higher accuracy, resolution, and safety compared to macroscale implants. Body area networks (BAN) enable sensors and actuators to collect physical and physiological data from the human body to better anticipate any diseases, which will thus facilitate treatment. Potential applications of BAN include cardiovascular disease monitoring, insulin management, artificial vision and hearing, and hormonal therapy management. The Internet of Bio-Nano Things (IoBNT) refers to networks of nanodevices that can be accessed through the internet. The development of the IoBNT has paved the way to new treatments and diagnostic techniques. Nanonetworks may also help drug delivery by increasing the localization and circulation time of drugs.
Existing challenges with the aforementioned applications include the biocompatibility of the nano implants, physical limitations leading to a lack of power and memory storage, and the biocompatibility of the transmitter and receiver design of the IoBNT. The nanonetwork concept has numerous areas for improvement: these include developing nanomachines, protocol stack issues, power provisioning techniques, and more.
There are still stringent regulations in place for the development of standards for nanosensors to be used in the medical industry, due to insufficient knowledge of the adverse effects of nanosensors as well as potential cytotoxic effects of nanosensors. Additionally, there can be a high cost of raw materials such as silicon, nanowires, and carbon nanotubes, which prevent commercialization and manufacturing of nanosensors requiring scale-up for implementation. To mitigate the drawback of cost, researchers are looking into manufacturing nanosensors made of more cost-effective materials. There is also a high degree of precision needed to reproducibly manufacture nanosensors, due to their small size and sensitivity to different synthesis techniques, which creates additional technical challenges to be overcome.
See also
Nanotechnology
List of nanotechnology topics
Surface plasmon resonance
References
External links
Weighing the Very Small: 'Nanobalance' Based on Carbon Nanotubes Shows New Application for Nanomechanics, Georgia Tech Research News.
Emerging Technologies and the Environment
Nanotechnology and Societal Transformation
Nanotechnology, Privacy and Shifting Social Conventions
Nanotechnology and Surveillance
Nanotechnology
Nanomedicine | Nanosensor | [
"Materials_science",
"Engineering"
] | 5,798 | [
"Nanomedicine",
"Nanotechnology",
"Materials science"
] |
604,063 | https://en.wikipedia.org/wiki/Verbal%20Behavior | Verbal Behavior is a 1957 book by psychologist B. F. Skinner, in which he describes what he calls verbal behavior, or what was traditionally called linguistics. Skinner's work describes the controlling elements of verbal behavior with terminology invented for the analysis - echoics, mands, tacts, autoclitics and others - as well as carefully defined uses of ordinary terms such as audience.
Origins
The origin of Verbal Behavior was an outgrowth of a series of lectures first presented at the University of Minnesota in the early 1940s and developed further in his summer lectures at Columbia and William James lectures at Harvard in the decade before the book's publication.
Research
Skinner's analysis of verbal behavior drew heavily on methods of literary analysis. This tradition has continued. The book Verbal Behavior is almost entirely theoretical, involving little experimental research in the work itself. Many research papers and applied extensions based on Verbal Behavior have been done since its publication.
Functional analysis
Skinner's Verbal Behavior also introduced the autoclitic and six elementary operants: mand, tact, audience relation, echoic, textual, and intraverbal. For Skinner, the proper object of study is behavior itself, analyzed without reference to hypothetical (mental) structures, but rather with reference to the functional relationships of the behavior in the environment in which it occurs. This analysis extends Ernst Mach's pragmatic inductive position in physics, and extends even further a disinclination towards hypothesis-making and testing. Verbal Behavior is divided into 5 parts with 19 chapters. The first chapter sets the stage for this work, a functional analysis of verbal behavior. Skinner presents verbal behavior as a function of controlling consequences and stimuli, not as the product of a special inherent capacity. Neither does he ask us to be satisfied with simply describing the structure, or patterns, of behavior. Skinner deals with some alternative, traditional formulations, and moves on to his own functional position.
General problems
In ascertaining the strength of a response, Skinner suggests several criteria for strength (probability): emission, energy level, speed, and repetition. He notes that these are all limited means for inferring the strength of a response, as they do not always vary together and may come under the control of other factors. Emission is a yes/no measure; the other three (energy level, speed, and repetition) provide possible indications of relative strength.
Emission – If a response is emitted it may tend to be interpreted as having some strength. Unusual or difficult conditions would tend to lend evidence to the inference of strength. Under typical conditions it becomes a less compelling basis for inferring strength. This is an inference that is either there or not, and has no gradation of value.
Energy-level – Unlike emission, energy level (response magnitude) provides a graded basis for inferring response strength. A high energy level supports the inference of a strong tendency to respond: an energetic, strong "Water!" forms a basis for inferring greater strength than a weak, brief "Water".
Speed – Speed refers to the speed of the response itself, or to the latency from the time at which it could have occurred to the time at which it occurs. A response given quickly when prompted forms the basis for inferring high strength.
Repetition – "Water! Water! Water!" may be emitted and used as an indication of relative strength compared to the speedy and/or energetic emission of "Water!". In this way repetition can be used as a way to infer strength.
Mands
Chapter Three of Skinner's work Verbal Behavior discusses a functional relationship called the mand. A mand is verbal behavior under the functional control of satiation or deprivation (that is, motivating operations) and is followed by characteristic reinforcement often specified by the response. A mand is typically a demand, command, or request. The mand is often said to "describe its own reinforcer", although this is not always the case, especially as Skinner's definition of verbal behavior does not require that mands be vocal. A loud knock at the door may be a mand for "open the door", and a servant may be called by a hand clap just as a child might "ask for milk".
Lamarre and Holland's (1985) study of mands demonstrated the role of motivating operations. The authors contrived motivating operations for objects by training behavior chains that could not be completed without certain objects. The participants learned to mand for these missing objects, which they had previously only been able to tact.
Behavior under the control of verbal stimuli
Textual
In Chapter Four Skinner notes forms of control by verbal stimuli. One form is textual behavior, which refers to the type of behavior we might typically call reading or writing: a vocal response is controlled by a verbal stimulus that is not heard, so two different modalities are involved ("reading"). If the stimulus and response modalities are the same, the behavior becomes "copying text" (see Jack Michael on copying text); if the stimulus is heard and the response written, it becomes "taking dictation", and so on.
Echoic
Skinner was one of the first to seriously consider the role of imitation in language learning, introducing the concept into Verbal Behavior as the echoic: behavior under the functional control of a verbal stimulus in which the verbal response and the verbal stimulus share what is called point-to-point correspondence (a formal similarity). The speaker repeats what is said. In echoic behavior, the stimulus is auditory and the response is vocal. It is often seen in early shaping behavior. For example, in learning a new language, a teacher might say "parsimonious" and then say "can you say it?" to induce an echoic response. Winokur (1978) is one example of research about echoic relations.
Tacts
Chapter Five of Verbal Behavior discusses the tact in depth. A tact is said to "make contact with" the world, and refers to behavior that is under the functional control of a non-verbal stimulus and generalized conditioned reinforcement. The controlling stimulus is nonverbal, "the whole of the physical environment". In linguistic terms, the tact might be regarded as "expressive labelling". The tact is the most useful form of verbal behavior to other listeners, as it extends the listener's contact with the environment. In contrast, the mand is the most useful form of verbal behavior to the speaker, as it allows the speaker to contact tangible reinforcement.
Tacts can undergo many extensions: generic, metaphoric, metonymical, solecistic, nomination, and "guessing". It can also be involved in abstraction. Lowe, Horne, Harris & Randle (2002) would be one example of recent work in tacts.
Intraverbal
Intraverbals are verbal behavior under the control of other verbal behavior. Intraverbals are often studied by the use of classic association techniques.
Audiences
Audience control is developed through long histories of reinforcement and punishment. Skinner's three-term contingency can be used to analyze how this works: the first term, the antecedent, refers to the audience, in whose presence the verbal response (the second term) occurs. The consequences of the response are the third term, and whether or not those consequences strengthen or weaken the response will affect whether that response will occur again in the presence of that audience. Through this process, audience control, or the probability that certain responses will occur in the presence of certain audiences, develops. Skinner notes that while audience control is developed due to histories with certain audiences, we do not have to have a long history with every listener in order to effectively engage in verbal behavior in their presence (p. 176). We can respond to new audiences (new stimuli) as we would to similar audiences with whom we have a history.
Negative audiences
An audience that has punished certain kinds of verbal behavior is called a negative audience (p. 178): in the presence of this audience, the punished verbal behavior is less likely to occur. Skinner gives examples of adults punishing certain verbal behavior of children, and a king punishing the verbal behavior of his subjects.
Summary of verbal operants
The following summarizes the new verbal operants in the analysis of verbal behavior by their controlling variables, as described in the preceding chapters:
Mand – controlled by a motivating operation (satiation or deprivation); followed by characteristic reinforcement often specified by the response.
Tact – controlled by a nonverbal stimulus; followed by generalized conditioned reinforcement.
Echoic – controlled by a heard verbal stimulus, with point-to-point correspondence between the auditory stimulus and the vocal response.
Textual – controlled by a verbal stimulus that is not heard (e.g., written text), with the response in a different modality.
Intraverbal – controlled by other verbal behavior, without point-to-point correspondence.
Audience relation – verbal behavior under the control of the audience in whose presence it occurs.
Verbal operants as a unit of analysis
Skinner notes his categories of verbal behavior: mand, textual, intraverbal, tact, audience relations, and notes how behavior might be classified. He notes that form alone is not sufficient (he uses the example of "fire!" having multiple possible relationships depending on the circumstances). Classification depends on knowing the circumstances under which the behavior is emitted. Skinner then notes that the "same response" may be emitted under different operant conditions.
That is, classification alone does little to further the analysis—the functional relations controlling the operants outlined must be analyzed consistent with the general approach of a scientific analysis of behavior.
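Because form alone is not sufficient, classification must consult the controlling antecedent and consequence. The sketch below encodes that idea using the operant definitions summarized above; the function name and the simplified antecedent/consequence labels are illustrative inventions, not Skinner's terminology.

```python
# Simplified classification of a verbal response by its controlling
# variables, following the operant definitions given in the text above.
def classify_operant(antecedent: str, consequence: str) -> str:
    if antecedent == "motivating operation" and consequence == "specific reinforcement":
        return "mand"
    if antecedent == "nonverbal stimulus":
        return "tact"
    if antecedent == "heard verbal stimulus, same form":
        return "echoic"
    if antecedent == "written verbal stimulus":
        return "textual"
    if antecedent == "verbal stimulus, different form":
        return "intraverbal"
    return "unclassified"

# The same form ("fire!") is classified differently depending on circumstances:
print(classify_operant("motivating operation", "specific reinforcement"))   # mand
print(classify_operant("nonverbal stimulus", "generalized reinforcement"))  # tact
```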
Multiple causation
Skinner notes in this chapter that any given response is likely to be the result of multiple variables, and that any given variable usually affects multiple responses. The issue of multiple audiences is also addressed, as each audience is, as already noted, an occasion for strong and successful responding. Combining audiences produces differing tendencies to respond.
Supplementary stimulation
Supplementary stimulation is a discussion of practical matters in controlling verbal behavior, given the context of the material presented thus far. Issues of multiple control, involving many of the elementary operants stated in previous chapters, are discussed.
New combinations of fragmentary responses
A special case where multiple causation comes into play, creating new verbal forms, is what Skinner describes as fragmentary responses. Such combinations are typically vocal, although this may be due to different conditions of self-editing rather than any special property. Such mutations may be "nonsense" and may not further the verbal interchange in which they occur. Freudian slips may be one special case of fragmentary responses which tend to be given reinforcement and may discourage self-editing. The phenomenon appears to be more common in children, and in adults learning a second language. Fatigue, illness and insobriety may tend to produce fragmentary responding.
Autoclitics
An autoclitic is a form of verbal behavior which modifies the functions of other forms of verbal behavior. For example, "I think it is raining" possesses the autoclitic "I think" which moderates the strength of the statement "it is raining". An example of research that involved autoclitics would be Lodhi & Greer (1989).
Self-strengthening
Here Skinner draws a parallel to his position on self-control and notes: "A person controls his own behavior, verbal or otherwise, as he controls the behavior of others." Appropriate verbal behavior may be weak, as in forgetting a name, and in need of strengthening; it may have been inadequately learned, as in a foreign language, or in repeating a formula or reciting a poem. The techniques for strengthening include manipulating stimuli, changing the level of editing, the mechanical production of verbal behavior, changing motivational and emotional variables, incubation, and so on. Skinner gives an example of the use of some of these techniques provided by an author.
Logical and scientific
The special audience in this case is one concerned with "successful action". Special methods of stimulus control are encouraged that will allow for maximum effectiveness. Skinner notes that "graphs, models, tables" are forms of text that allow for this kind of development. The logical and scientific community also sharpens responses to assure accuracy and avoid distortion. Little progress in the area of science has been made from a verbal behavior perspective; however, suggestions of a research agenda have been laid out.
Tacting private events
Private events are events accessible to only the speaker. Public events are events that occur outside of an organism's skin that are observed by more than one individual. A headache is an example of a private event and a car accident is an example of a public event.
The tacting of private events by an organism is shaped by the verbal community who differentially reinforce a variety of behaviors and responses to the private events that occur (Catania, 2007, p. 9). For example, if a child verbally states, "a circle" when a circle is in the immediate environment, it may be a tact. If a child verbally states, "I have a toothache", she/he may be tacting a private event, whereas the stimulus is present to the speaker, but not the rest of the verbal community.
The verbal community shapes the original development and the maintenance or discontinuation of the tacts for private events (Catania, 2007, p. 232). An organism responds similarly to both private stimuli and public stimuli (Skinner, 1957, p. 130). However, it is harder for the verbal community to shape the verbal behavior associated with private events (Catania, 2007, p. 403). It may be more difficult to shape private events, but there are critical things that occur within an organism's skin that should not be excluded from our understanding of verbal behavior (Catania, 2007, p. 9).
Several concerns are associated with tacting private events. Skinner (1957) acknowledged two major dilemmas. First, he acknowledges our difficulty with predicting and controlling the stimuli associated with tacting private events (p. 130). Catania (2007) describes this as the unavailability of the stimulus to the members of the verbal community (p. 253). The second problem Skinner (1957) describes is our current inability to understand how the verbal behavior associated with private events is developed (p. 131).
Skinner (1957) goes on to describe four ways a verbal community can encourage verbal behavior despite having no access to the speaker's stimuli. He suggests the most frequent method is via "a common public accompaniment": for example, when a child falls and starts bleeding, the caregiver makes statements such as "you got hurt". Another method relies on a "collateral response" associated with the private stimulus: when a child comes running, crying and holding their knee, the caregiver might likewise say "you got hurt". The third way is when the verbal community provides reinforcement contingent on overt behavior and the organism generalizes this to the private event that is occurring; Skinner refers to this as a "metaphorical or metonymical extension". The final method Skinner suggests is when behavior that is initially at a low level turns into a private event (Skinner, 1957, p. 134). In sum, the verbal behavior of private events can be shaped by the verbal community through extending the language of tacts (Catania, 2007, p. 263).
Private events are limited and should not serve as "explanations of behavior" (Skinner, 1957, p. 254). Skinner (1957) continues to caution that, "the language of private events can easily distract us from the public causes of behavior" (see functions of behavior).
Chomsky's review and replies
In 1959, Noam Chomsky published an influential critique of Verbal Behavior. Chomsky pointed out that children acquire their first language without being explicitly or overtly "taught" in a way that would be consistent with behaviorist theory (see Language acquisition and Poverty of the stimulus), and that Skinner's theories of "operants" and behavioral reinforcements are not able to account for the fact that people can speak and understand sentences that they have never heard before.
According to Frederick J. Newmeyer:
Chomsky's review has come to be regarded as one of the foundational documents of the discipline of cognitive psychology, and even after the passage of twenty-five years it is considered the most important refutation of behaviorism. Of all his writings, it was the Skinner review which contributed most to spreading his reputation beyond the small circle of professional linguists.
Chomsky's 1959 review, amongst his other work of the period, is generally thought to have been influential in the decline of behaviorism's influence within linguistics, philosophy and cognitive science. One reply to it was Kenneth MacCorquodale's 1970 paper On Chomsky's Review of Skinner's Verbal Behavior. MacCorquodale argued that Chomsky did not possess an adequate understanding of either behavioral psychology in general, or the differences between Skinner's behaviorism and other varieties. As a consequence, he argued, Chomsky made several serious errors of logic. On account of these problems, MacCorquodale maintains that the review failed to demonstrate what it has often been cited as doing, implying that those most influenced by Chomsky's paper probably already substantially agreed with him. Chomsky's review has been further argued to misrepresent the work of Skinner and others, including by taking quotes out of context. Chomsky has maintained that the review was directed at the way Skinner's variant of behavioral psychology "was being used in Quinean empiricism and naturalization of philosophy".
Current research
Current research in verbal behavior is published in The Analysis of Verbal Behavior (TAVB), and other Behavior Analytic journals such as The Journal of the Experimental Analysis of Behavior (JEAB) and the Journal of Applied Behavior Analysis (JABA). Also research is presented at poster sessions and conferences, such as at regional Behavior Analysis conventions or Association for Behavior Analysis (ABA) conventions nationally or internationally. There is also a Verbal Behavior Special Interest Group (SIG) of the Association for Behavior Analysis (ABA) which has a mailing list.
Journal of Early and Intensive Behavior Intervention and the Journal of Speech-Language Pathology and Applied Behavior Analysis both publish clinical articles on interventions based on verbal behavior.
Skinner argued that his account of verbal behavior might have a strong evolutionary parallel. In his essay Selection by Consequences, he argued that operant conditioning was part of a three-level process involving genetic evolution, cultural evolution and operant conditioning, all three being examples of parallel processes of selection by consequences. David L. Hull, Rodney E. Langman and Sigrid S. Glenn have developed this parallel in detail. The topic continues to be a focus for behavior analysts, who have been developing ideas based on Verbal Behavior for fifty years yet still have difficulty explaining generative verbal behavior.
See also
The Analysis of Verbal Behavior
Applied behavior analysis
Child development
Experimental analysis of behavior
Functional analytic psychotherapy
Jack Michael
Reinforcement
Relational frame theory
References
External links
An Introduction to Verbal Behavior Online Tutorial
Chomsky's 1959 Review of Verbal Behavior
On Chomsky's Appraisal of Skinner's Verbal Behavior: A Half Century of Misunderstanding
The Analysis of Verbal Behavior pubmed archive
abainternational.org
contextualpsychology.org
ironshrink.com
A Tutorial of B.F. Skinner's Verbal Behavior (1957)
Psychology books
Linguistics books
1957 non-fiction books
Behaviorism
Cognitive science literature
Works by B. F. Skinner
History of psychology | Verbal Behavior | [
"Biology"
] | 3,940 | [
"Behavior",
"Behaviorism"
] |
604,111 | https://en.wikipedia.org/wiki/Stone%27s%20representation%20theorem%20for%20Boolean%20algebras | In mathematics, Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a certain field of sets. The theorem is fundamental to the deeper understanding of Boolean algebra that emerged in the first half of the 20th century. The theorem was first proved by Marshall H. Stone. Stone was led to it by his study of the spectral theory of operators on a Hilbert space.
Stone spaces
Each Boolean algebra B has an associated topological space, denoted here S(B), called its Stone space. The points in S(B) are the ultrafilters on B, or equivalently the homomorphisms from B to the two-element Boolean algebra. The topology on S(B) is generated by a basis consisting of all sets of the form
{x ∈ S(B) : b ∈ x},
where b is an element of B. These sets are also closed and so are clopen (both closed and open). This is the topology of pointwise convergence of nets of homomorphisms into the two-element Boolean algebra.
For every Boolean algebra B, S(B) is a compact totally disconnected Hausdorff space; such spaces are called Stone spaces (also profinite spaces). Conversely, given any topological space X, the collection of subsets of X that are clopen is a Boolean algebra.
Representation theorem
A simple version of Stone's representation theorem states that every Boolean algebra B is isomorphic to the algebra of clopen subsets of its Stone space S(B). The isomorphism sends an element b to the set of all ultrafilters that contain b; this is a clopen set because of the choice of topology on S(B) and because B is a Boolean algebra.
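For a finite Boolean algebra the theorem can be checked directly, since every ultrafilter is principal, generated by an atom. The following sketch (an illustrative check, not an optimized implementation) verifies, for the power set of a three-element set, that the map b ↦ {ultrafilters containing b} is injective and turns meets and joins into intersections and unions:

```python
from itertools import combinations

X = {1, 2, 3}

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

B = powerset(X)  # the Boolean algebra of all subsets of X

# In a finite Boolean algebra every ultrafilter is principal, generated
# by an atom; here the atoms are the singletons {1}, {2}, {3}.
ultrafilters = {x: frozenset(b for b in B if x in b) for x in sorted(X)}

def stone(b):
    """Map b to the set of (labels of) ultrafilters containing b."""
    return frozenset(x for x, u in ultrafilters.items() if b in u)

# Injectivity, and meet/join become intersection/union:
assert len({stone(b) for b in B}) == len(B)
for a in B:
    for c in B:
        assert stone(a & c) == stone(a) & stone(c)
        assert stone(a | c) == stone(a) | stone(c)
print("verified the Stone map for the power set of", set(X))
```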
Restated in the language of category theory, the theorem asserts a duality between the category of Boolean algebras and the category of Stone spaces: in addition to the correspondence between Boolean algebras and their Stone spaces, each homomorphism from a Boolean algebra A to a Boolean algebra B corresponds in a natural way to a continuous function from S(B) to S(A). In other words, there is a contravariant functor that gives an equivalence between the categories. This was an early example of a nontrivial duality of categories.
The theorem is a special case of Stone duality, a more general framework for dualities between topological spaces and partially ordered sets.
The proof requires either the axiom of choice or a weakened form of it. Specifically, the theorem is equivalent to the Boolean prime ideal theorem, a weakened choice principle that states that every Boolean algebra has a prime ideal.
An extension of the classical Stone duality to the category of Boolean spaces (that is, zero-dimensional locally compact Hausdorff spaces) and continuous maps (respectively, perfect maps) was obtained by G. D. Dimov (respectively, by H. P. Doctor).
See also
Citations
References
General topology
Boolean algebra
Theorems in lattice theory
Categorical logic | Stone's representation theorem for Boolean algebras | [
"Mathematics"
] | 626 | [
"Boolean algebra",
"General topology",
"Mathematical structures",
"Categorical logic",
"Mathematical logic",
"Fields of abstract algebra",
"Topology",
"Category theory"
] |
604,224 | https://en.wikipedia.org/wiki/Insulin%20receptor | The insulin receptor (IR) is a transmembrane receptor that is activated by insulin, IGF-I and IGF-II and belongs to the large class of receptor tyrosine kinases. Metabolically, the insulin receptor plays a key role in the regulation of glucose homeostasis, a functional process that under degenerate conditions may result in a range of clinical manifestations including diabetes and cancer. Insulin signalling controls body cells' access to blood glucose. When insulin falls, especially in those with high insulin sensitivity, body cells begin to have access only to lipids, which do not require transport across the membrane; in this way, insulin is also the key regulator of fat metabolism. Biochemically, the insulin receptor is encoded by a single gene, INSR, from which alternative splicing during transcription results in either the IR-A or IR-B isoform. Downstream post-translational events of either isoform result in the formation of proteolytically cleaved α and β subunits, which upon combination are ultimately capable of homo- or hetero-dimerisation to produce the ≈320 kDa disulfide-linked transmembrane insulin receptor.
Structure
Initially, transcription of alternative splice variants derived from the INSR gene are translated to form one of two monomeric isomers; IR-A in which exon 11 is excluded, and IR-B in which exon 11 is included. Inclusion of exon 11 results in the addition of 12 amino acids upstream of the intrinsic furin proteolytic cleavage site.
Upon receptor dimerisation, after proteolytic cleavage into the α- and β-chains, the additional 12 amino acids remain present at the C-terminus of the α-chain (designated αCT) where they are predicted to influence receptor–ligand interaction.
Each isomeric monomer is structurally organized into eight distinct domains: a leucine-rich repeat domain (L1, residues 1–157), a cysteine-rich region (CR, residues 158–310), an additional leucine-rich repeat domain (L2, residues 311–470), and three fibronectin type III domains, FnIII-1 (residues 471–595), FnIII-2 (residues 596–808) and FnIII-3 (residues 809–906). Additionally, an insert domain (ID, residues 638–756) resides within FnIII-2, containing the α/β furin cleavage site, from which proteolysis results in both IDα and IDβ domains. Within the β-chain, downstream of the FnIII-3 domain, lie a transmembrane helix (TH) and an intracellular juxtamembrane (JM) region, just upstream of the intracellular tyrosine kinase (TK) catalytic domain, which is responsible for subsequent intracellular signaling pathways.
Upon cleavage of the monomer into its respective α- and β-chains, receptor hetero- or homo-dimerisation is maintained covalently between chains by a single disulphide link, and between monomers in the dimer by two disulphide links extending from each α-chain. The overall 3D ectodomain structure, possessing four ligand-binding sites, resembles an inverted 'V', with each monomer rotated approximately two-fold about an axis running parallel to the inverted 'V', and with the L2 and FnIII-1 domains of each monomer forming the apex of the inverted 'V'.
Ligand binding
The insulin receptor's endogenous ligands include insulin, IGF-I and IGF-II. Cryo-EM has provided structural insight into the conformational changes that occur upon insulin binding. Binding of ligand to the α-chains of the dimeric IR ectodomain shifts it from an inverted V-shape to a T-shaped conformation, and this change is propagated structurally to the transmembrane domains, which move closer together, eventually leading to autophosphorylation of various tyrosine residues within the intracellular TK domain of the β-chain. These changes facilitate the recruitment of specific adapter proteins such as the insulin receptor substrate proteins (IRS), SH2-B (Src homology 2-B) and APS, as well as protein phosphatases such as PTP1B, eventually promoting downstream processes involving blood glucose homeostasis.
Strictly speaking, the relationship between the IR and its ligand shows complex allosteric properties. This was indicated by Scatchard plots, which showed that the ratio of IR-bound to unbound ligand does not vary linearly with the concentration of bound ligand, suggesting that the IR and its ligand share a relationship of cooperative binding. Furthermore, the observation that the rate of IR-ligand dissociation is accelerated upon addition of unbound ligand implies that the nature of this cooperation is negative; said differently, the initial binding of ligand to the IR inhibits further binding to its second active site, an exhibition of allosteric inhibition.
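For context, under the standard single-site model (with B the bound-ligand concentration, F the free-ligand concentration, B_max the total site concentration and K_d the dissociation constant), the Scatchard relation is linear:

```latex
\frac{B}{F} = \frac{B_{\max} - B}{K_d}
```

A Scatchard plot of B/F against B is therefore a straight line for independent, identical sites; the departure from linearity observed for the insulin receptor is what signals cooperative (here, negatively cooperative) binding.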
These models state that each IR monomer possesses two insulin-binding sites: site 1, which binds to the 'classical' binding surface of insulin and consists of the L1 plus αCT domains, and site 2, consisting of loops at the junction of FnIII-1 and FnIII-2 that are predicted to bind to the 'novel' hexamer-face binding site of insulin. As each monomer contributing to the IR ectodomain exhibits 3D 'mirrored' complementarity, the N-terminal site 1 of one monomer ultimately faces the C-terminal site 2 of the second monomer, and the same holds for each monomer's mirrored complement (the opposite side of the ectodomain structure). Current literature distinguishes the complementary binding sites by designating the second monomer's site 1 and site 2 as either site 3 and site 4, or as site 1' and site 2', respectively.
As such, these models state that each IR may bind an insulin molecule (which has two binding surfaces) via four locations: site 1, 2, (3/1') or (4/2'). As each site 1 proximally faces a site 2, upon insulin binding to a specific site, 'crosslinking' between monomers via the ligand is predicted to occur (i.e., as [monomer 1 site 1 - insulin - monomer 2 site (4/2')] or as [monomer 1 site 2 - insulin - monomer 2 site (3/1')]). In accordance with current mathematical modelling of IR-insulin kinetics, insulin crosslinking has two important consequences: (1) by the aforementioned negative cooperation between the IR and its ligand, subsequent binding of ligand to the IR is reduced; and (2) the physical action of crosslinking brings the ectodomain into the conformation required for intracellular tyrosine phosphorylation events to ensue (i.e., these events serve as the requirements for receptor activation and the eventual maintenance of blood glucose homeostasis).
Applying cryo-EM and molecular dynamics simulations of the receptor reconstituted in nanodiscs, the structure of the entire dimeric insulin receptor ectodomain with four insulin molecules bound was visualized, directly confirming the four biochemically predicted binding locations.
Agonists
4548-G05
Insulin
Insulin-like growth factor 1
Mecasermin
A number of small-molecule insulin receptor agonists have been identified.
Signal transduction pathway
The insulin receptor is a type of tyrosine kinase receptor, in which the binding of an agonistic ligand triggers autophosphorylation of tyrosine residues, with each subunit phosphorylating its partner. The addition of the phosphate groups generates a binding site for insulin receptor substrate 1 (IRS-1), which is subsequently activated via phosphorylation. The activated IRS-1 initiates the signal transduction pathway and binds to phosphoinositide 3-kinase (PI3K), in turn causing its activation. PI3K then catalyses the conversion of phosphatidylinositol 4,5-bisphosphate into phosphatidylinositol 3,4,5-trisphosphate (PIP3). PIP3 acts as a second messenger and induces the activation of phosphoinositide-dependent protein kinase, which then activates several other kinases, most notably protein kinase B (PKB, also known as Akt). PKB triggers the translocation of glucose transporter (GLUT4)-containing vesicles to the cell membrane, via the activation of SNARE proteins, to facilitate the diffusion of glucose into the cell. PKB also phosphorylates and inhibits glycogen synthase kinase, an enzyme that inhibits glycogen synthase. PKB thus starts the process of glycogenesis, which ultimately reduces blood-glucose concentration.
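At its core, the pathway just described is an ordered activation cascade. The sketch below records only that ordering, with step names taken from the text above; it is a didactic simplification, since real signalling is branched, kinetic, and heavily regulated.

```python
# Didactic ordering of the insulin signalling steps described above;
# each entry is (activated component, immediate effect).
INSULIN_CASCADE = [
    ("insulin receptor", "autophosphorylation of tyrosine residues"),
    ("IRS-1", "phosphorylated; docks downstream effectors"),
    ("PI3K", "converts PIP2 to PIP3"),
    ("PIP3", "second messenger; activates the PIP3-dependent kinase"),
    ("PKB/Akt", "mobilizes GLUT4 vesicles; inhibits GSK-3"),
    ("GLUT4 at membrane", "facilitates glucose uptake"),
]

def trace(cascade):
    # Print the cascade as an ordered sequence of activation steps.
    for step, (component, effect) in enumerate(cascade, start=1):
        print(f"{step}. {component}: {effect}")

trace(INSULIN_CASCADE)
```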
Function
Regulation of gene expression
The activated IRS-1 acts as a secondary messenger within the cell to stimulate the transcription of insulin-regulated genes. First, the protein Grb2 binds the P-Tyr residue of IRS-1 in its SH2 domain. Grb2 is then able to bind SOS, which in turn catalyzes the replacement of bound GDP with GTP on Ras, a G protein. This protein then begins a phosphorylation cascade, culminating in the activation of mitogen-activated protein kinase (MAPK), which enters the nucleus and phosphorylates various nuclear transcription factors (such as Elk1).
Stimulation of glycogen synthesis
Glycogen synthesis is also stimulated by the insulin receptor via IRS-1. In this case, it is the SH2 domain of PI-3 kinase (PI-3K) that binds the P-Tyr of IRS-1. Now activated, PI-3K can convert the membrane lipid phosphatidylinositol 4,5-bisphosphate (PIP2) to phosphatidylinositol 3,4,5-triphosphate (PIP3). This indirectly activates a protein kinase, PKB (Akt), via phosphorylation. PKB then phosphorylates several target proteins, including glycogen synthase kinase 3 (GSK-3). GSK-3 is responsible for phosphorylating (and thus deactivating) glycogen synthase. When GSK-3 is phosphorylated, it is deactivated, and prevented from deactivating glycogen synthase. In this roundabout manner, insulin increases glycogen synthesis.
Degradation of insulin
Once an insulin molecule has docked onto the receptor and effected its action, it may be released back into the extracellular environment or it may be degraded by the cell. Degradation normally involves endocytosis of the insulin-receptor complex followed by the action of insulin degrading enzyme. Most insulin molecules are degraded by liver cells. It has been estimated that a typical insulin molecule is finally degraded about 71 minutes after its initial release into circulation.
Immune system
Besides its metabolic function, the insulin receptor is also expressed on immune cells, such as macrophages, B cells, and T cells. On T cells, expression of the insulin receptor is undetectable in the resting state but is up-regulated upon T-cell receptor (TCR) activation. Indeed, exogenously supplied insulin has been shown to promote T cell proliferation in vitro in animal models. Insulin receptor signalling is important for maximizing the potential effect of T cells during acute infection and inflammation.
Pathology
The main activity of activation of the insulin receptor is inducing glucose uptake. For this reason "insulin insensitivity", or a decrease in insulin receptor signaling, leads to diabetes mellitus type 2 – the cells are unable to take up glucose, and the result is hyperglycemia (an increase in circulating glucose), and all the sequelae that result from diabetes.
Patients with insulin resistance may display acanthosis nigricans.
A few patients with homozygous mutations in the INSR gene have been described, which causes Donohue syndrome or Leprechaunism. This autosomal recessive disorder results in a totally non-functional insulin receptor. These patients have low-set, often protuberant, ears, flared nostrils, thickened lips, and severe growth retardation. In most cases, the outlook for these patients is extremely poor, with death occurring within the first year of life. Other mutations of the same gene cause the less severe Rabson-Mendenhall syndrome, in which patients have characteristically abnormal teeth, hypertrophic gingiva (gums), and enlargement of the pineal gland. Both diseases present with fluctuations of the glucose level: After a meal the glucose is initially very high, and then falls rapidly to abnormally low levels. Other genetic mutations to the insulin receptor gene can cause Severe Insulin Resistance.
Interactions
Insulin receptor has been shown to interact with
ENPP1,
GRB10,
GRB7,
IRS1,
MAD2L1,
PRKCD,
PTPN11, and
SH2B1.
References
Further reading
External links
Clusters of differentiation
EC 2.7.10
Single-pass transmembrane proteins
Tyrosine kinase receptors | Insulin receptor | [
"Chemistry"
] | 2,823 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
604,486 | https://en.wikipedia.org/wiki/Environmental%20impact%20statement | An environmental impact statement (EIS), under United States environmental law, is a document required by the 1969 National Environmental Policy Act (NEPA) for certain actions "significantly affecting the quality of the human environment". An EIS is a tool for decision making. It describes the positive and negative environmental effects of a proposed action, and it usually also lists one or more alternative actions that may be chosen instead of the action described in the EIS. One of the primary authors of the act is Lynton K. Caldwell.
Preliminary versions of these documents are officially known as a draft environmental impact statement (DEIS) or draft environmental impact report (DEIR).
Purpose
The purpose of the NEPA is to promote informed decision-making by federal agencies by making "detailed information concerning significant environmental impacts" available to both agency leaders and the public. The NEPA was the first piece of legislation that created a comprehensive method to assess potential and existing environmental risks at once. It also encourages communication and cooperation between all the actors involved in environmental decisions, including government officials, private businesses, and citizens.
In particular, an EIS acts as an enforcement mechanism to ensure that the federal government adheres to the goals and policies outlined in the NEPA. An EIS should be created in a timely manner as soon as the agency is planning development or is presented with a proposal for development. The statement should use an interdisciplinary approach so that it accurately assesses both the physical and social impacts of the proposed development. In many instances an action may be deemed subject to NEPA's EIS requirement even though the action is not specifically sponsored by a federal agency. These factors may include actions that receive federal funding, federal licensing or authorization, or that are subject to federal control.
Not all federal actions require a full EIS. If the action may or may not cause a significant impact, the agency can first prepare a smaller, shorter document called an Environmental Assessment (EA). The finding of the EA determines whether an EIS is required. If the EA indicates that no significant impact is likely, then the agency can release a finding of no significant impact (FONSI) and carry on with the proposed action. Otherwise, the agency must then conduct a full-scale EIS. Most EAs result in a FONSI. A limited number of federal actions may avoid the EA and EIS requirements under NEPA if they meet the criteria for a categorical exclusion (CATEX). A CATEX is usually permitted when a course of action is identical or very similar to a past course of action and the impacts on the environment from the previous action can be assumed for the proposed action, or for building a structure within the footprint of an existing, larger facility or complex. For example, two recently completed sections of Interstate 69 in Kentucky were granted a CATEX from NEPA requirements as these portions of I-69 utilize existing freeways that required little more than minor spot improvements and a change of highway signage. Additionally, a CATEX can be issued during an emergency when time does not permit the preparation of an EA or EIS. An example of the latter is when the Federal Highway Administration issued a CATEX to construct the replacement bridge in the wake of the I-35W Mississippi River Bridge Collapse.
NEPA does not prohibit the federal government or its licensees/permittees from harming the environment, instead it requires that the prospective impacts be understood and disclosed in advance. The intent of NEPA is to help key decisionmakers and stakeholders balance the need to implement an action with its impacts on the surrounding human and natural environment, and provide opportunities for mitigating those impacts while keeping the cost and schedule for implementing the action under control. However, many activities require various federal permits to comply with other environmental legislation, such as the Clean Air Act, the Clean Water Act, Endangered Species Act and Section 4(f) of the Federal Highway Act to name a few. Similarly, many states and local jurisdictions have enacted environmental laws and ordinances, requiring additional state and local permits before the action can proceed. Obtaining these permits typically requires the lead agency to implement the Least Environmentally Damaging Practicable Alternative (LEDPA) to comply with federal, state, and local environmental laws that are ancillary to NEPA. In some instances, the result of NEPA analysis leads to abandonment or cancellation of the proposed action, particularly when the "No Action" alternative ends up being the LEDPA.
Layout
An EIS typically has four sections:
An Introduction including a statement of the Purpose and Need of the Proposed Action.
A description of the Affected Environment.
A Range of Alternatives to the proposed action. Alternatives are considered the "heart" of the EIS.
An analysis of the environmental impacts of each of the possible alternatives. This section covers topics such as:
Impacts to threatened or endangered species
Air and water quality impacts
Impacts to historic and cultural sites, particularly sites of significant importance to indigenous peoples.
Social and economic impacts to local communities, often including consideration of attributes such as impacts on the available housing stock, economic impacts to businesses, property values, public health, aesthetics and noise within the affected area
Cost and Schedule Analyses for each alternative, including costs and timeline to mitigate expected impacts, to determine if the proposed action can be completed at an acceptable cost and within a reasonable amount of time
While not required in the EIS, the following subjects may be included as part of the EIS or as separate documents based on agency policy.
Financial Plan for the proposed action identifying the sources of secured funding for the action. For example, the Federal Highway Administration has started requiring states to include a financial plan showing that funding has been secured for major highway projects before it will approve an EIS and issue a Record of Decision.
An Environmental mitigation plan is often requested by the Environmental Protection Agency (EPA) if substantial environmental impacts are expected from the preferred alternative.
Additional documentation to comply with state and local environmental policy laws and secure required federal, state, and local permits before the action can proceed.
Every EIS is required to analyze a No Action Alternative, in addition to the range of alternatives presented for study. The No Action Alternative identifies the expected environmental impacts in the future if existing conditions were left as is with no action taken by the lead agency. Analysis of the No Action Alternative is used to establish a baseline upon which to compare the proposed "Action" alternatives. Contrary to popular belief, the "No Action Alternative" doesn't necessarily mean that nothing will occur if that option is selected in the Record of Decision. For example, the "No Action Alternative" was selected for the I-69/Trans-Texas Corridor Tier-I Environmental Impact Statement. In that Record of Decision, the Texas Department of Transportation opted not to proceed with building its portion of I-69 as one of the Trans-Texas Corridors to be built as a new-terrain route (the Trans-Texas Corridor concept was ultimately scrapped entirely), but instead decided to proceed with converting existing US and state routes to I-69 by upgrading those roads to interstate standards.
NEPA process
The NEPA process is designed to involve the public and gather the best available information in a single place so that decision makers can be fully informed when they make their choices.
The EIS process proceeds through the following stages; a simplified sketch of the decision flow appears after the list.
Proposal: In this stage, the needs and objectives of a project have been decided, but the project has not been financed.
Categorical Exclusion (CATEX): As discussed above, the government may exempt an agency from the process. The agency can then proceed with the project and skip the remaining steps.
Environmental Assessment (EA): The proposal is analyzed in addition to the local environment with the aim to reduce the negative impacts of the development on the area.
Finding of No Significant Impact (FONSI): Occurs when no significant impacts are identified in an EA. A FONSI typically allows the lead agency to proceed without having to complete an EIS.
Environmental Impact Statement
Scoping: The first meetings are held to discuss existing laws, the available information, and the research needed. The tasks are divided up and a lead group is selected. Decision makers and all those involved with the project can attend the meetings.
Notice: The public is notified that the agency is preparing an EIS. The agency also provides the public with information regarding how they can become involved in the process. The agency announces its project proposal with a notice in the Federal Register, notices in local media, and letters to citizens and groups that it knows are likely to be interested. Citizens and groups are welcome to send in comments helping the agency identify the issues it must address in the EIS (or EA).
Draft EIS (DEIS): Based on both agency expertise and issues raised by the public, the agency prepares a Draft EIS with a full description of the affected environment, a reasonable range of alternatives, and an analysis of the impacts of each alternative.
Comment: Affected individuals then have the opportunity to provide feedback through written and public hearing statements.
Final EIS (FEIS) and Proposed Action: Based on the comments on the Draft EIS, the agency writes a Final EIS and announces its Proposed Action. The public is not invited to comment at this stage, but those who remain dissatisfied, or who feel the agency has missed a major issue, may protest the EIS to the director of the agency. The director may either ask the agency to revise the EIS, or explain to the protester why the agency considers the complaints adequately addressed.
Re-evaluation: Prepared following an approved FEIS or ROD when unforeseen changes to the proposed action or its impacts occurs, or when a substantial period of time has passed between approval of an action and the planned start of said action. Based on the significance of the changes, three outcomes may result from a re-evaluation report: (1) the action may proceed with no substantive changes to the FEIS, (2) significant impacts are expected with the change that can be adequately addressed in a Supplemental EIS (SEIS), or (3) the circumstances force a complete change in the nature and scope of the proposed action, thereby voiding the pre-existing FEIS (and ROD, if applicable), requiring the lead agency to restart the NEPA process and prepare a new EIS to encompass the changes.
Supplemental EIS (SEIS): Typically prepared after either a Final EIS or Record of Decision has been issued and new environmental impacts that were not considered in the original EIS are discovered, requiring the lead agency to re-evaluate its initial decision and consider new alternatives to avoid or mitigate the new impacts. Supplemental EISs are also prepared when the size and scope of a federal action changes, when a significant period of time has lapsed since the FEIS was completed to account for changes in the surrounding environment during that time, or when all of the proposed alternatives in an EIS are deemed to have unacceptable environmental impacts and new alternatives are proposed.
Record of Decision (ROD): Once all the protests are resolved the agency issues a Record of Decision which is its final action prior to implementation. If members of the public are still dissatisfied with the outcome, they may sue the agency in Federal court.
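The sketch below condenses the stages above into a single illustrative function; the boolean inputs are simplifications of determinations that the lead agency makes under its own regulations, and the step names follow the list above.

```python
# Simplified sketch of the NEPA decision flow described above.
def nepa_path(categorical_exclusion: bool,
              significant_impact_expected: bool,
              ea_finds_significant_impact: bool) -> list[str]:
    steps = ["Proposal"]
    if categorical_exclusion:
        return steps + ["CATEX: proceed without EA or EIS"]
    if not significant_impact_expected:
        steps.append("Environmental Assessment (EA)")
        if not ea_finds_significant_impact:
            return steps + ["FONSI: proceed without EIS"]
    # Otherwise a full EIS is required.
    steps += ["Scoping", "Notice", "Draft EIS", "Public comment",
              "Final EIS", "Record of Decision (ROD)"]
    return steps

print(nepa_path(False, False, False))  # EA leading to a FONSI
print(nepa_path(False, True, False))   # straight to a full EIS
```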
Often, the agencies responsible for preparing an EA or EIS do not compile the document directly, but outsource this work to private-sector consulting firms with expertise in the proposed action and its anticipated effects on the environment. Because of the intense level of detail required in analyzing the alternatives presented in an EIS or EA, such documents may take years or even decades to compile, and often comprise multiple volumes that can run from thousands to tens of thousands of pages.
To avoid potential conflicts in securing required permits and approvals after the ROD is issued, the lead agency will often coordinate with stakeholders at all levels, and resolve any conflicts to the greatest extent possible during the EIS process. Proceeding in this fashion helps avoid interagency conflicts and potential lawsuits after the lead agency reaches its decision.
Tiering
On exceptionally large projects, especially proposed highway, railroad, and utility corridors that cross long distances, the lead agency may use a two-tiered process prior to implementing the proposed action. In such cases, the Tier I EIS analyzes the potential socio-environmental impacts along a general corridor, but does not identify the exact location where the action would occur. A Tier I ROD is issued approving the general area where the action would be implemented. Following the Tier I ROD, the approved Tier I area is broken down into subareas, and a Tier II EIS is then prepared for each subarea, identifying the exact location where the proposed action will take place. The preparation of Tier II EISs for each subarea proceeds at its own pace, independently of the other subareas within the Tier I area. For example, parts of the proposed Interstate 69 extension in Indiana and Texas, as well as portions of the Interstate 11 corridor in Nevada and Arizona, are being studied through a two-tiered process.
Strengths
By requiring agencies to complete an EIS, the act encourages them to consider the environmental costs of a project and introduces new information into the decision-making process. The NEPA has increased the influence of environmental analysts and agencies in the federal government by increasing their involvement in the development process. Because an EIS requires expert skill and knowledge, agencies must hire environmental analysts. Unlike agencies who may have other priorities, analysts are often sympathetic to environmental issues. In addition, this feature introduces scientific procedures into the political process.
Limitations
The differences that exist between science and politics limit the accuracy of an EIS. Although analysts are members of the scientific community, they are affected by the political atmosphere. Analysts do not have the luxury of an unlimited time for research. They are also affected by the different motives behind the research of the EIS and by different perspectives of what constitutes a good analysis. In addition, government officials do not want to reveal an environmental problem from within their own agency.
Citizens often misunderstand the environmental assessment process. The public does not realize that the process is only meant to gather information relevant to the decision. Even if the statement predicts negative impacts of the project, decision makers can still proceed with the proposal.
See also
Natural environment
References
External links
Knowledge Mosaic's environmental blog, The Green Mien, provides a weekly round-up of recently released environmental impact statements.
Northwestern University Transportation Library has one of the world's largest collections of hard copy environmental impact statements.
Environmental science
Environmental law in the United States
Statements (law)
Statement
Environmental impact in the United States | Environmental impact statement | [
"Environmental_science"
] | 2,964 | [
"nan"
] |
605,557 | https://en.wikipedia.org/wiki/Espada%20Acequia | The Espada Acequia, or Piedras Creek Aqueduct, was built by Franciscan friars in 1731 in what is now San Antonio, Texas, United States. It was built to supply irrigation water to the lands near Mission San Francisco de la Espada, today part of San Antonio Missions National Historical Park. The acequia is still in use today and is a National Historic Civil Engineering Landmark and a National Historic Landmark.
Irrigation system
Mission Espada's acequia (irrigation) system can still be seen today. The main ditch, or acequia madre, continues to carry water to the mission and its former farmlands. This water is still used by residents living on these neighboring lands.
The initial survival of a new mission depended upon the planting and harvesting of crops. In south central Texas, intermittent rainfall and the need for a reliable water source made the design and installation of an acequia system a high priority. Irrigation was so important to Spanish colonial settlers that they measured cropland in suertes, the amount of land that could be watered in one day.
The use of acequias was originally brought to the arid regions of Spain by the Romans and the Moors. When Franciscan missionaries arrived in the desert Southwest they found the system worked well in the hot, dry environment. In some areas, like New Mexico, it blended in easily with the irrigation system already in use by the Puebloan Native Americans.
In order to distribute water to the missions along the San Antonio River, Franciscan missionaries oversaw the construction of seven gravity-flow ditches, dams, and at least one aqueduct—a network that irrigated approximately of land. The acequia not only conducted potable water and irrigation, but also powered a mill.
Mission Espada has survived from its beginnings to the present day as a community center that still supports a Catholic parish and religious education; however, a school originally opened by the Sisters of the Incarnate Word and Blessed Sacrament was closed in 1967.
References
External links
Buildings and structures in San Antonio
History of San Antonio
National Historic Landmarks in Texas
National Register of Historic Places in San Antonio
Historic American Buildings Survey in Texas
Historic American Engineering Record in Texas
Irrigation projects
Irrigation in the United States
Water supply infrastructure on the National Register of Historic Places
Historic Civil Engineering Landmarks
Spanish missions in Texas
Colonial United States (Spanish)
San Antonio Missions National Historical Park
1730s in Texas
1731 establishments in the Spanish Empire
Individually listed contributing properties to historic districts on the National Register in Texas
San Antonio River | Espada Acequia | [
"Engineering"
] | 506 | [
"Civil engineering",
"Irrigation projects",
"Historic Civil Engineering Landmarks"
] |
605,591 | https://en.wikipedia.org/wiki/Media%20filter | A media filter is a type of filter that uses a bed of sand, peat, shredded tires, foam, crushed glass, geo-textile fabric, anthracite, crushed granite or other material to filter water for drinking, swimming pools, aquaculture, irrigation, stormwater management, oil and gas operations, and other applications.
Each layer of media is designed to filter out specific types and sizes of particles, allowing for more efficient and effective removal of contaminants.
Design
One design brings the water in at the top of a container through a "header" which distributes the water evenly. The filter "media" start with fine sand on top, followed by gradually coarser sand in a number of layers, with gravel at the bottom in gradually larger sizes. The top sand physically removes particles from the water; the job of the subsequent layers is to support the finer layer above and provide efficient drainage.
As particles become trapped in the media, the differential pressure across the bed increases. Periodically, a backwash may be initiated to remove the solids trapped in the bed. During backwash, flow is directed in the opposite direction from normal flow. In multi-media filters, the layers of media re-stratify according to density differences before normal filtration resumes. Multimedia filters can remove particles down to 10-25 microns.
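The backwash cycle just described amounts to a threshold rule on differential pressure. A minimal sketch follows; the numbers and the linear fouling model are invented for illustration and do not represent any particular filter.

```python
# Toy model: differential pressure across the bed rises as solids load;
# backwash is triggered at a setpoint and restores the clean-bed pressure.
CLEAN_DP = 0.2           # bar, hypothetical clean-bed differential pressure
BACKWASH_SETPOINT = 1.0  # bar, hypothetical trigger point

dp = CLEAN_DP
for hour in range(1, 25):
    dp += 0.05  # assume fouling raises dP linearly with run time
    if dp >= BACKWASH_SETPOINT:
        print(f"hour {hour}: dP={dp:.2f} bar -> backwash (reverse flow)")
        dp = CLEAN_DP  # media re-stratify by density; filtration resumes
print(f"end of day: dP={dp:.2f} bar")
```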
Advantages and disadvantages
Advantages of multimedia filters
Multimedia filters use multiple layers of different filter media to achieve more effective and efficient filtration than single-media filters like sand filters.
They can remove a wider range of particle sizes and types than single-media filters, resulting in more efficient filtration and longer filter life.
They are effective at removing suspended solids, turbidity, and other contaminants from water.
They can be used for a wide range of flow rates and particle sizes. They can be easily backwashed to clean the filter media and restore filtration efficiency.
They require little to no electricity to operate.
Disadvantages of multimedia filters
Multimedia filters have a higher capital cost compared to single-media filters like sand filters.
They have a larger footprint and require more space than single-media filters.
They may not be effective at removing some types of contaminants, such as dissolved organic compounds and bacteria.
They may require pre-treatment to remove large particles or debris that could clog the filter media.
They can create waste material (backwash water) that needs to be treated or disposed of properly.
Uses
Drinking water
Media filters are used in drinking water treatment, where multimedia filters serve as a primary or secondary filtration step, removing a wider range of particle sizes and types than sand filters, including organic matter and smaller particles.
Municipal drinking water systems often use a rapid sand filter and/or a slow sand filter for purification. Silica sand is the most widely used medium in such filters. Anthracite coal, garnet sand, ilmenite, granular activated carbon, manganese green sand and crushed recycled glass are among the alternative filter media used.
Stormwater
Media filters are used to protect water quality in streams, rivers, and lakes. They can be effective at removing pollutants in stormwater such as suspended solids and phosphorus. Sand is the most common filter material. In other filters, sometimes called "organic filters," wood chips or leaf mold may be used.
Sewage and wastewater
Media filters are also used for cleaning the effluent from septic tanks and primary settlement tanks. The materials commonly used are sand, peat and natural stone fibre.
Oil and gas industry
The oil and gas industry uses media filters for various purposes in both upstream and downstream operations. Nut shell filters are commonly used as a tertiary oil removal step for treatment of produced water. Sand filters are often used to remove fine solids following biological treatment and clarification of oil refinery wastewater. Multi-media filters are used for removing suspended solids from both produced water and refinery wastewater. The materials commonly used in multi-media filters are gravel, sand, garnet, and anthracite.
See also
Biofilter
Bioretention
References
Environmental engineering
Irrigation
Water filters
Stormwater management | Media filter | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 926 | [
"Water filters",
"Water treatment",
"Stormwater management",
"Chemical engineering",
"Filters",
"Water pollution",
"Civil engineering",
"Environmental engineering"
] |
605,697 | https://en.wikipedia.org/wiki/Photohydrogen | In photochemistry, photohydrogen is hydrogen produced with the help of artificial or natural light. This is how the leaf of a tree splits water molecules into protons (hydrogen ions), electrons (to make carbohydrates) and oxygen (released into the air as a waste product). Photohydrogen may also be produced by the photodissociation of water by ultraviolet light.
Photohydrogen is sometimes discussed in the context of obtaining renewable energy from sunlight, by using microscopic organisms such as bacteria or algae. These organisms create hydrogen with the help of hydrogenase enzymes which convert protons derived from the water splitting reaction into hydrogen gas which can then be collected and used as a biofuel.
See also
Solar hydrogen panel
Photofermentation
Biological hydrogen production (Algae)
Photoelectrochemical cell
Photosynthesis
Hydrogen cycle
Hydrogen economy
References
Biofuels technology
Hydrogen production
Photochemistry | Photohydrogen | [
"Chemistry",
"Biology"
] | 187 | [
"Biofuels technology",
"nan"
] |
605,727 | https://en.wikipedia.org/wiki/Kronecker%27s%20theorem | In mathematics, Kronecker's theorem is a theorem about diophantine approximation, introduced by Leopold Kronecker (1884).
Kronecker's approximation theorem was first proved by L. Kronecker at the end of the 19th century. Since the latter half of the 20th century, it has been recognised as related to the idea of the n-torus and the Mahler measure. In terms of physical systems, it has the consequence that planets in circular orbits moving uniformly around a star will, over time, assume all alignments, unless there is an exact dependency between their orbital periods.
Statement
Kronecker's theorem is a result in diophantine approximations applying to several real numbers xi, for 1 ≤ i ≤ n, that generalises Dirichlet's approximation theorem to multiple variables.
The classical Kronecker approximation theorem is formulated as follows.
Given real n-tuples α = (α_1, …, α_n) and β = (β_1, …, β_n), the condition:

for every ε > 0 there exist integers q, p_1, …, p_n such that |qα_i − p_i − β_i| < ε for 1 ≤ i ≤ n

holds if and only if for any integers r_1, …, r_n with

r_1 α_1 + … + r_n α_n an integer,

the number r_1 β_1 + … + r_n β_n is also an integer.

In plainer language, the first condition states that the tuple β can be approximated arbitrarily well by integer multiples of the tuple α corrected by integer vectors.

For the case of n = 1, Kronecker's approximation theorem can be stated as follows: for any real numbers α and β with α irrational and any ε > 0, there exist integers p and q with q > 0 such that |qα − p − β| < ε.
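A quick numerical illustration of the n = 1 case (the specific values α = √2, β = 0.5 and ε = 10^−4 are arbitrary examples, not part of the theorem) is the following Python sketch, which finds q and p by brute force; the theorem guarantees that the search terminates for any irrational α.

import math

def kronecker_approx(alpha, beta, eps):
    # Search q = 1, 2, 3, ... for integers (q, p) with |q*alpha - p - beta| < eps.
    q = 1
    while True:
        p = round(q * alpha - beta)   # nearest integer to q*alpha - beta
        if abs(q * alpha - p - beta) < eps:
            return q, p
        q += 1

q, p = kronecker_approx(math.sqrt(2), 0.5, 1e-4)
print(q, p, abs(q * math.sqrt(2) - p - 0.5))   # error is below 1e-4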
Relation to tori
In the case of N numbers, taken as a single N-tuple and point P of the torus
T = R^N/Z^N,
the closure of the subgroup <P> generated by P will be finite, or some torus T′ contained in T. The original Kronecker's theorem (Leopold Kronecker, 1884) stated that the necessary condition for
T′ = T,
which is that the numbers xi together with 1 should be linearly independent over the rational numbers, is also sufficient. Here it is easy to see that if some linear combination of the xi and 1 with non-zero rational number coefficients is zero, then the coefficients may be taken as integers, and a character χ of the group T other than the trivial character takes the value 1 on P. By Pontryagin duality we have T′ contained in the kernel of χ, and therefore not equal to T.
In fact a thorough use of Pontryagin duality here shows that the whole Kronecker theorem describes the closure of <P> as the intersection of the kernels of the χ with
χ(P) = 1.
This gives an (antitone) Galois connection between monogenic closed subgroups of T (those with a single generator, in the topological sense) and sets of characters with kernel containing a given point. Not all closed subgroups are monogenic; for example, a subgroup that has a torus of dimension ≥ 1 as the connected component of its identity element, and that is not connected, cannot be such a subgroup.
The theorem leaves open the question of how well (uniformly) the multiples mP of P fill up the closure. In the one-dimensional case, the distribution is uniform by the equidistribution theorem.
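The qualitative dichotomy can be seen numerically. The following Python sketch (the points and the grid resolution are arbitrary illustrative choices) counts how many cells of a coarse grid on the 2-torus are visited by the first thousand multiples of a point P: for P = (√2, √3), whose coordinates together with 1 are rationally independent, nearly every cell is hit, while P = (√2, 2√2) satisfies the integer relation 2x_1 − x_2 = 0 and stays on a one-dimensional closed subgroup.

import math

def orbit(p, n=1000):
    # First n multiples of p on the torus R^2/Z^2, reduced mod 1.
    return [((k * p[0]) % 1.0, (k * p[1]) % 1.0) for k in range(1, n + 1)]

def occupied_cells(points, grid=10):
    # Count visited cells of a grid x grid partition -- a crude density measure.
    return len({(int(x * grid), int(y * grid)) for x, y in points})

print(occupied_cells(orbit((math.sqrt(2), math.sqrt(3)))))      # close to 100
print(occupied_cells(orbit((math.sqrt(2), 2 * math.sqrt(2)))))  # far fewer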
See also
Weyl's criterion
Dirichlet's approximation theorem
References
Diophantine approximation
Topological groups | Kronecker's theorem | [
"Mathematics"
] | 652 | [
"Space (mathematics)",
"Topological spaces",
"Mathematical relations",
"Topological groups",
"Diophantine approximation",
"Approximations",
"Number theory"
] |
606,123 | https://en.wikipedia.org/wiki/Photoevaporation | Photoevaporation is the process where energetic radiation ionises gas and causes it to disperse away from the ionising source. The term is typically used in an astrophysical context where ultraviolet radiation from hot stars acts on clouds of material such as molecular clouds, protoplanetary disks, or planetary atmospheres.
Molecular clouds
One of the most obvious manifestations of astrophysical photoevaporation is seen in the eroding structures of molecular clouds that luminous stars are born within.
Evaporating gaseous globules (EGGs)
Evaporating gaseous globules or EGGs were first discovered in the Eagle Nebula. These small cometary globules are being photoevaporated by the stars in the nearby cluster. EGGs are places of ongoing star-formation.
Planetary atmospheres
A planet can be stripped of its atmosphere (or parts of the atmosphere) due to high energy photons and other electromagnetic radiation. If a photon interacts with an atmospheric molecule, the molecule is accelerated and its temperature increased. If sufficient energy is provided, the molecule or atom may reach the escape velocity of the planet and "evaporate" into space. The lower the mass number of the gas, the higher the velocity obtained by interaction with a photon. Thus hydrogen is the gas which is most prone to photoevaporation.
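A back-of-the-envelope comparison makes the mass dependence concrete. The Python sketch below compares mean thermal speeds with Earth's escape velocity at an assumed exobase radius of about 500 km altitude and an assumed exosphere temperature of 1000 K (both values are illustrative assumptions); even where the mean speed is well below the escape velocity, molecules in the high-velocity tail of the Maxwell–Boltzmann distribution can escape, and hydrogen has by far the largest such fraction.

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6735e-27     # mass of a hydrogen atom, kg
G = 6.674e-11        # gravitational constant, SI
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EXO = 6.871e6      # assumed exobase radius (~500 km altitude), m
T = 1000.0           # assumed exosphere temperature, K

v_esc = math.sqrt(2 * G * M_EARTH / R_EXO)   # escape velocity at the exobase

for name, amu in [("H", 1), ("He", 4), ("N2", 28), ("O2", 32)]:
    v_th = math.sqrt(3 * K_B * T / (amu * M_H))   # rms thermal speed
    print(f"{name:>3}: v_thermal = {v_th / 1000:5.2f} km/s, "
          f"v_esc = {v_esc / 1000:5.2f} km/s, ratio = {v_th / v_esc:.2f}")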
Photoevaporation is the likely cause of the small planet radius gap.
Examples of exoplanets with an evaporating atmosphere are HD 209458 b, HD 189733 b and Gliese 3470 b. Material from a possible evaporating planet around WD J0914+1914 might be responsible for the gaseous disk around this white dwarf.
Protoplanetary disks
Protoplanetary disks can be dispersed by stellar wind and by heating due to incident electromagnetic radiation. The radiation interacts with matter and thus accelerates it outwards. This effect is only noticeable when the radiation is sufficiently strong, such as that coming from nearby O- and B-type stars, or when the central protostar commences nuclear fusion.
The disk is composed of gas and dust. The gas, consisting mostly of light elements such as hydrogen and helium, is mainly affected by the effect, causing the ratio between dust and gas to increase.
Radiation from the central star excites particles in the accretion disk. The irradiation of the disk gives rise to a stability length scale known as the gravitational radius (r_g). Outside of the gravitational radius, particles can become sufficiently excited to escape the gravity of the disk and evaporate. After 10^6–10^7 years, the viscous accretion rates fall below the photoevaporation rates at r_g. A gap then opens around r_g, and the inner disk drains onto the central star or spreads to r_g and evaporates. An inner hole extending to r_g is produced. Once an inner hole forms, the outer disk is very rapidly cleared.
The formula for the gravitational radius of the disk is

r_g = ((γ − 1) / (2γ)) · G M μ / (k_B T_g) ≈ ((γ − 1) / (2γ)) · (μ / m_H) · (M / M_⊙) · (10^4 K / T_g) · 10.7 AU

where γ is the ratio of specific heats (= 5/3 for a monatomic gas), G the universal gravitational constant, M the mass of the central star, M_⊙ the mass of the Sun, μ the mean weight of a gas particle, m_H the mass of a hydrogen atom, k_B the Boltzmann constant, T_g the temperature of the gas and AU the Astronomical Unit.

If we denote the coefficient in the above equation by the Greek letter β, then

β = (γ − 1) / (2γ) = 1 / (f + 2),

where f is the number of degrees of freedom and we have used the formula γ = (f + 2) / f.
For an atom, such as a hydrogen atom, f = 3, because an atom can move in three different, orthogonal directions. Consequently, β = 1/5. If the hydrogen atom is ionized, i.e., it is a proton, and is in a strong magnetic field, then f = 2, because the proton can move along the magnetic field and rotate around the field lines. In this case, β = 1/4. A diatomic molecule, e.g., a hydrogen molecule, has f = 5 and β = 1/7. For a non-linear triatomic molecule, such as water, f = 6 and β = 1/8. If f becomes very large, then β approaches zero. This is summarised in Table 1, where we see that different gases may have different gravitational radii.
Table 1: Gravitational radius coefficient β as a function of the degrees of freedom f.

f (degrees of freedom)   γ = (f + 2)/f   β = 1/(f + 2)
2                        2               1/4
3                        5/3             1/5
5                        7/5             1/7
6                        4/3             1/8
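The formula is straightforward to evaluate. The following Python sketch computes r_g for the degrees of freedom in Table 1, for a solar-mass star with an assumed gas temperature of 10^4 K and an assumed mean particle weight of one hydrogen mass (both values are illustrative choices, not fits to any observed disk).

G = 6.674e-11        # gravitational constant, SI
K_B = 1.380649e-23   # Boltzmann constant, J/K
M_SUN = 1.989e30     # solar mass, kg
M_H = 1.6726e-27     # hydrogen atom mass, kg
AU = 1.496e11        # astronomical unit, m

def gravitational_radius(f, M=M_SUN, mu=M_H, T=1.0e4):
    # r_g = beta * G * M * mu / (k_B * T), with beta = 1 / (f + 2).
    beta = 1.0 / (f + 2)
    return beta * G * M * mu / (K_B * T)

for f in (2, 3, 5, 6):
    print(f"f = {f}: r_g = {gravitational_radius(f) / AU:.1f} AU")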
Because of this effect, the presence of massive stars in a star-forming region is thought to have a great effect on planet formation from the disk around a young stellar object, though it is not yet clear if this effect decelerates or accelerates it.
Regions containing protoplanetary disks with clear signs of external photoevaporation
The most famous region containing photoevaporated protoplanetary disks is the Orion Nebula. They were called bright proplyds, and since then the term has been used for other regions to describe photoevaporation of protoplanetary disks. They were discovered with the Hubble Space Telescope. There might even be a planetary-mass object in the Orion Nebula that is being photoevaporated by θ¹ Ori C. Since then, HST has observed other young star clusters and found bright proplyds in the Lagoon Nebula, the Trifid Nebula, Pismis 24 and NGC 1977. After the launch of the Spitzer Space Telescope, additional observations revealed dusty cometary tails around young cluster members in NGC 2244, IC 1396 and NGC 2264. These dusty tails are also explained by photoevaporation of the proto-planetary disk. Later, similar cometary tails were found with Spitzer in W5. This study concluded that the tails have a likely lifetime of 5 Myr or less. Additional tails were found with Spitzer in NGC 1977, NGC 6193 and Collinder 69. Other bright proplyd candidates were found in the Carina Nebula with the CTIO 4m and near Sagittarius A* with the VLA. Follow-up observations of a proplyd candidate in the Carina Nebula with Hubble revealed that it is likely an evaporating gaseous globule.
Objects in NGC 3603 and later in Cygnus OB2 were proposed as intermediate massive versions of the bright proplyds found in the Orion Nebula.
References
Concepts in stellar astronomy | Photoevaporation | [
"Physics"
] | 1,235 | [
"Concepts in stellar astronomy",
"Concepts in astrophysics"
] |
606,149 | https://en.wikipedia.org/wiki/Meso%20compound | A meso compound or meso isomer is an optically inactive isomer in a set of stereoisomers, at least two of which are optically active. This means that despite containing two or more stereocenters, the molecule is not chiral. A meso compound is superposable on its mirror image (not to be confused with superimposable, as any two objects can be superimposed over one another regardless of whether they are the same). Two objects can be superposed if all aspects of the objects coincide; because a meso compound is achiral, it does not produce a "(+)" or "(−)" reading when analyzed with a polarimeter. The name is derived from the Greek mésos, meaning "middle".
For example, tartaric acid can exist as any of three stereoisomers depicted below in a Fischer projection. Of the four colored pictures at the top of the diagram, the first two represent the meso compound (the 2R,3S and 2S,3R isomers are equivalent), followed by the optically active pair of dextrotartaric acid (L-(R,R)-(+)-tartaric acid) and levotartaric acid (D-(S,S)-(−)-tartaric acid). The meso compound is bisected by an internal plane of symmetry that is not present for the non-meso isomers (indicated by an X). That is, on reflecting the meso compound through a mirror plane perpendicular to the screen, the same stereochemistry is obtained; this is not the case for the non-meso tartaric acid, which generates the other enantiomer. The meso compound must not be confused with a 50:50 racemic mixture of the two optically active compounds, although neither will rotate light in a polarimeter.
It is a requirement for two of the stereocenters in a meso compound to have at least two substituents in common (although having this characteristic does not necessarily mean that the compound is meso). For example, in 2,4-pentanediol, both the second and fourth carbon atoms, which are stereocenters, have all four substituents in common.
Since a meso isomer has a superposable mirror image, a compound with a total of n stereocenters cannot attain the theoretical maximum of 2^n stereoisomers if one of the stereoisomers is meso.
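This counting can be made concrete for tartaric acid (n = 2). The Python sketch below (a deliberate simplification that represents a configuration as a tuple of R/S labels rather than a 3-D structure) enumerates all 2^n formal configurations and merges those related by the molecule's end-to-end symmetry, which is exactly what identifies the 2R,3S and 2S,3R labels as a single meso form, leaving three stereoisomers rather than four.

from itertools import product

configs = set(product("RS", repeat=2))   # all 2**2 = 4 formal configurations

def canonical(cfg):
    # The two stereocenters carry identical substituent sets, so swapping
    # them describes the same molecule; keep one representative per class.
    return min(cfg, cfg[::-1])

distinct = {canonical(c) for c in configs}
print(len(configs), "formal configurations ->", len(distinct), "stereoisomers")
print(sorted(distinct))   # ('R','R'), ('R','S') [the meso form], ('S','S')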
A meso isomer need not have a mirror plane. It may instead have an inversion centre or a rotoreflection (improper rotation) symmetry such as S4. For example, there are two meso isomers of 1,4-difluoro-2,5-dichlorocyclohexane but neither has a mirror plane, and there are two meso isomers of 1,2,3,4-tetrafluorospiropentane (see figure).
Cyclic meso compounds
1,2-substituted cyclopropane has a meso cis-isomer (molecule has a mirror plane) and two trans-enantiomers:
The two cis stereoisomers of 1,2-substituted cyclohexanes behave like meso compounds at room temperature in most cases. At room temperature, most 1,2-disubstituted cyclohexanes undergo rapid ring flipping (exceptions being rings with bulky substituents), and as a result, the two cis stereoisomers behave chemically identically with chiral reagents. At low temperatures, however, this is not the case, as the activation energy for the ring-flip cannot be overcome, and they therefore behave like enantiomers. Also noteworthy is the fact that when a cyclohexane undergoes a ring flip, the absolute configurations of the stereocenters do not change.
References
Stereochemistry | Meso compound | [
"Physics",
"Chemistry"
] | 811 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |