| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
706,295 | https://en.wikipedia.org/wiki/Canonical%20commutation%20relation | In quantum mechanics, the canonical commutation relation is the fundamental relation between canonical conjugate quantities (quantities which are related by definition such that one is the Fourier transform of another). For example,

$$[\hat{x}, \hat{p}] = i\hbar \mathbb{1}$$

between the position operator $\hat{x}$ and momentum operator $\hat{p}$ in the $x$ direction of a point particle in one dimension, where $[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x}$ is the commutator of $\hat{x}$ and $\hat{p}$, $i$ is the imaginary unit, $\hbar$ is the reduced Planck constant $h/2\pi$, and $\mathbb{1}$ is the unit operator. In general, position and momentum are vectors of operators and their commutation relation between different components of position and momentum can be expressed as

$$[\hat{r}_i, \hat{p}_j] = i\hbar \delta_{ij} \mathbb{1},$$

where $\delta_{ij}$ is the Kronecker delta.
This relation is attributed to Werner Heisenberg, Max Born and Pascual Jordan (1925), who called it a "quantum condition" serving as a postulate of the theory; it was noted by E. Kennard (1927) to imply the Heisenberg uncertainty principle. The Stone–von Neumann theorem gives a uniqueness result for operators satisfying (an exponentiated form of) the canonical commutation relation.
Relation to classical mechanics
By contrast, in classical physics, all observables commute and the commutator would be zero. However, an analogous relation exists, which is obtained by replacing the commutator with the Poisson bracket multiplied by $i\hbar$,

$$\{x, p\} = 1.$$

This observation led Dirac to propose that the quantum counterparts $\hat{f}$, $\hat{g}$ of classical observables $f$, $g$ satisfy

$$[\hat{f}, \hat{g}] = i\hbar \widehat{\{f, g\}}.$$
In 1946, Hip Groenewold demonstrated that a general systematic correspondence between quantum commutators and Poisson brackets could not hold consistently.
However, he further appreciated that such a systematic correspondence does, in fact, exist between the quantum commutator and a deformation of the Poisson bracket, today called the Moyal bracket, and, in general, quantum operators and classical observables and distributions in phase space. He thus finally elucidated the consistent correspondence mechanism, the Wigner–Weyl transform, that underlies an alternate equivalent mathematical representation of quantum mechanics known as deformation quantization.
Derivation from Hamiltonian mechanics
According to the correspondence principle, in certain limits the quantum equations of states must approach Hamilton's equations of motion. The latter state the following relation between the generalized coordinate q (e.g. position) and the generalized momentum p:

$$\dot{q} = \frac{\partial H}{\partial p}; \qquad \dot{p} = -\frac{\partial H}{\partial q}.$$

In quantum mechanics the Hamiltonian $\hat{H}$, (generalized) coordinate $\hat{Q}$ and (generalized) momentum $\hat{P}$ are all linear operators.

The time derivative of a quantum state is represented by the operator $-i\hat{H}/\hbar$ (by the Schrödinger equation). Equivalently, since in the Schrödinger picture the operators are not explicitly time-dependent, the operators can be seen to be evolving in time (for a contrary perspective where the operators are time dependent, see Heisenberg picture) according to their commutation relation with the Hamiltonian:

$$\frac{d\hat{Q}}{dt} = \frac{i}{\hbar} [\hat{H}, \hat{Q}], \qquad \frac{d\hat{P}}{dt} = \frac{i}{\hbar} [\hat{H}, \hat{P}].$$

In order for that to reconcile in the classical limit with Hamilton's equations of motion, $[\hat{H}, \hat{Q}]$ must depend entirely on the appearance of $\hat{P}$ in the Hamiltonian and $[\hat{H}, \hat{P}]$ must depend entirely on the appearance of $\hat{Q}$ in the Hamiltonian. Further, since the Hamiltonian operator depends on the (generalized) coordinate and momentum operators, it can be viewed as a functional, and we may write (using functional derivatives):

$$[\hat{H}, \hat{Q}] = \frac{\delta \hat{H}}{\delta \hat{P}} \cdot [\hat{P}, \hat{Q}], \qquad [\hat{H}, \hat{P}] = \frac{\delta \hat{H}}{\delta \hat{Q}} \cdot [\hat{Q}, \hat{P}].$$

In order to obtain the classical limit we must then have

$$[\hat{Q}, \hat{P}] = i\hbar.$$
Weyl relations
The group generated by exponentiation of the 3-dimensional Lie algebra determined by the commutation relation $[\hat{x}, \hat{p}] = i\hbar$ is called the Heisenberg group. This group can be realized as the group of $3 \times 3$ upper triangular matrices with ones on the diagonal.
According to the standard mathematical formulation of quantum mechanics, quantum observables such as $\hat{x}$ and $\hat{p}$ should be represented as self-adjoint operators on some Hilbert space. It is relatively easy to see that two operators satisfying the above canonical commutation relations cannot both be bounded. Certainly, if $\hat{x}$ and $\hat{p}$ were trace class operators, the relation $\operatorname{Tr}(AB) = \operatorname{Tr}(BA)$ gives a nonzero number on the right and zero on the left.

Alternately, if $\hat{x}$ and $\hat{p}$ were bounded operators, note that $[\hat{x}^n, \hat{p}] = i\hbar n \hat{x}^{n-1}$, hence the operator norms would satisfy

$$2 \|\hat{p}\| \|\hat{x}^{n-1}\| \|\hat{x}\| \geq n \hbar \|\hat{x}^{n-1}\|,$$

so that, for any n,

$$2 \|\hat{p}\| \|\hat{x}\| \geq n \hbar.$$

However, n can be arbitrarily large, so at least one operator cannot be bounded, and the dimension of the underlying Hilbert space cannot be finite. If the operators satisfy the Weyl relations (an exponentiated version of the canonical commutation relations, described below) then as a consequence of the Stone–von Neumann theorem, both operators must be unbounded.

Still, these canonical commutation relations can be rendered somewhat "tamer" by writing them in terms of the (bounded) unitary operators $e^{it\hat{x}}$ and $e^{is\hat{p}}$. The resulting braiding relations for these operators are the so-called Weyl relations

$$e^{it\hat{x}}\, e^{is\hat{p}} = e^{-ist\hbar}\, e^{is\hat{p}}\, e^{it\hat{x}}.$$
These relations may be thought of as an exponentiated version of the canonical commutation relations; they reflect that translations in position and translations in momentum do not commute. One can easily reformulate the Weyl relations in terms of the representations of the Heisenberg group.
The uniqueness of the canonical commutation relations—in the form of the Weyl relations—is then guaranteed by the Stone–von Neumann theorem.
For technical reasons, the Weyl relations are not strictly equivalent to the canonical commutation relation $[\hat{x}, \hat{p}] = i\hbar$. If $\hat{x}$ and $\hat{p}$ were bounded operators, then a special case of the Baker–Campbell–Hausdorff formula would allow one to "exponentiate" the canonical commutation relations to the Weyl relations. Since, as we have noted, any operators satisfying the canonical commutation relations must be unbounded, the Baker–Campbell–Hausdorff formula does not apply without additional domain assumptions. Indeed, counterexamples exist satisfying the canonical commutation relations but not the Weyl relations. (These same operators give a counterexample to the naive form of the uncertainty principle.) These technical issues are the reason that the Stone–von Neumann theorem is formulated in terms of the Weyl relations.
A discrete version of the Weyl relations, in which the parameters s and t range over $\mathbb{Z}/n\mathbb{Z}$, can be realized on a finite-dimensional Hilbert space by means of the clock and shift matrices.
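As a concrete illustration, the discrete Weyl relation can be verified directly from the clock and shift matrices; the following is a minimal NumPy sketch (the dimension n = 5 and the variable names are arbitrary choices, not from the article):

```python
import numpy as np

n = 5
omega = np.exp(2j * np.pi / n)   # primitive n-th root of unity

# Clock matrix: diagonal of the n-th roots of unity.
C = np.diag(omega ** np.arange(n))

# Shift matrix: cyclic permutation of the standard basis, S e_k = e_{(k+1) mod n}.
S = np.roll(np.eye(n, dtype=complex), 1, axis=0)

# Discrete Weyl relation: C S = omega * S C, while C^n = S^n = identity.
assert np.allclose(C @ S, omega * (S @ C))
assert np.allclose(np.linalg.matrix_power(C, n), np.eye(n))
assert np.allclose(np.linalg.matrix_power(S, n), np.eye(n))
```

Because C and S are unitary on a finite-dimensional space, this shows how the exponentiated (Weyl-type) relations evade the unboundedness argument above, which applies only to the commutation relation itself.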
Generalizations
It can be shown that

$$[F(\hat{\mathbf{x}}), \hat{p}_i] = i\hbar \frac{\partial F}{\partial \hat{x}_i}; \qquad [\hat{x}_i, F(\hat{\mathbf{p}})] = i\hbar \frac{\partial F}{\partial \hat{p}_i}.$$

Using $\binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}$, it can be shown by mathematical induction that

$$[\hat{x}^n, \hat{p}^m] = \sum_{k=1}^{\min(m,n)} \frac{-(-i\hbar)^k \, n! \, m!}{k! \, (n-k)! \, (m-k)!} \, \hat{x}^{n-k} \hat{p}^{m-k},$$

generally known as McCoy's formula.
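A quick symbolic check of the lowest nontrivial case, $[\hat{x}^2, \hat{p}^2] = 4i\hbar\, \hat{x}\hat{p} + 2\hbar^2$, can be done in the position representation $\hat{p} = -i\hbar\, d/dx$; this is an illustrative SymPy sketch (the test function f is arbitrary):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)

def P(g):
    """Momentum operator p = -i*hbar d/dx acting on g."""
    return -sp.I * hbar * sp.diff(g, x)

def X(g):
    """Position operator (multiplication by x) acting on g."""
    return x * g

# McCoy's formula for [x^2, p^2]: 4*i*hbar*x*p + 2*hbar^2 (acting on f)
lhs = X(X(P(P(f)))) - P(P(X(X(f))))
rhs = 4 * sp.I * hbar * X(P(f)) + 2 * hbar**2 * f
assert sp.expand(lhs - rhs) == 0
```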
In addition, the simple formula

$$[\hat{x}, \hat{p}] = i\hbar,$$

valid for the quantization of the simplest classical system, can be generalized to the case of an arbitrary Lagrangian $\mathcal{L}$. We identify canonical coordinates (such as $x$ in the example above, or a field $\Phi(x)$ in the case of quantum field theory) and canonical momenta $\pi_x$ (in the example above it is $p$, or more generally, some functions involving the derivatives of the canonical coordinates with respect to time):

$$\pi_i \equiv \frac{\partial \mathcal{L}}{\partial(\partial x_i / \partial t)}.$$

This definition of the canonical momentum ensures that one of the Euler–Lagrange equations has the form

$$\frac{\partial}{\partial t} \pi_i = \frac{\partial \mathcal{L}}{\partial x_i}.$$

The canonical commutation relations then amount to

$$[\hat{x}_i, \hat{\pi}_j] = i\hbar \delta_{ij},$$

where $\delta_{ij}$ is the Kronecker delta.
Gauge invariance
Canonical quantization is applied, by definition, on canonical coordinates. However, in the presence of an electromagnetic field, the canonical momentum $p$ is not gauge invariant. The correct gauge-invariant momentum (or "kinetic momentum") is

$$p_{\text{kin}} = p - qA \quad \text{(SI units)}, \qquad p_{\text{kin}} = p - \frac{qA}{c} \quad \text{(cgs units)},$$

where $q$ is the particle's electric charge, $A$ is the vector potential, and $c$ is the speed of light. Although the quantity $p_{\text{kin}}$ is the "physical momentum", in that it is the quantity to be identified with momentum in laboratory experiments, it does not satisfy the canonical commutation relations; only the canonical momentum $p$ does that. This can be seen as follows.

The non-relativistic Hamiltonian for a quantized charged particle of mass $m$ in a classical electromagnetic field is (in cgs units)

$$H = \frac{1}{2m} \left(p - \frac{qA}{c}\right)^2 + q\phi,$$

where $A$ is the three-vector potential and $\phi$ is the scalar potential. This form of the Hamiltonian, as well as the Schrödinger equation $H\psi = i\hbar\, \partial\psi/\partial t$, the Maxwell equations and the Lorentz force law are invariant under the gauge transformation

$$A \to A' = A + \nabla \Lambda, \qquad \phi \to \phi' = \phi - \frac{1}{c} \frac{\partial \Lambda}{\partial t}, \qquad \psi \to \psi' = U\psi, \qquad H \to H' = U H U^\dagger,$$

where

$$U = \exp\left(\frac{iq\Lambda}{\hbar c}\right)$$

and $\Lambda = \Lambda(x, t)$ is the gauge function.
The angular momentum operator is

$$L = r \times p$$

and obeys the canonical quantization relations

$$[L_i, L_j] = i\hbar \epsilon_{ijk} L_k,$$

defining the Lie algebra for so(3), where $\epsilon_{ijk}$ is the Levi-Civita symbol. Under gauge transformations, the angular momentum transforms as

$$\langle \psi' | L | \psi' \rangle = \langle \psi | L | \psi \rangle + \frac{q}{c} \langle \psi | r \times \nabla \Lambda | \psi \rangle.$$

The gauge-invariant angular momentum (or "kinetic angular momentum") is given by

$$K = r \times \left(p - \frac{qA}{c}\right),$$

which has the commutation relations

$$[K_i, K_j] = i\hbar \epsilon_{ijk} \left(K_k + \frac{q}{c} x_k \left(x \cdot B\right)\right),$$

where $B = \nabla \times A$ is the magnetic field. The inequivalence of these two formulations shows up in the Zeeman effect and the Aharonov–Bohm effect.
Uncertainty relation and commutators
All such nontrivial commutation relations for pairs of operators lead to corresponding uncertainty relations, involving positive semi-definite expectation contributions by their respective commutators and anticommutators. In general, for two Hermitian operators $A$ and $B$, consider expectation values in a system in the state $\psi$, the variances around the corresponding expectation values being $(\Delta A)^2 \equiv \langle (A - \langle A \rangle)^2 \rangle$, etc.

Then

$$\Delta A \, \Delta B \geq \frac{1}{2} \sqrt{\left|\left\langle [A, B] \right\rangle\right|^2 + \left|\left\langle \{A - \langle A \rangle,\, B - \langle B \rangle\} \right\rangle\right|^2},$$

where $[A, B] \equiv AB - BA$ is the commutator of $A$ and $B$, and $\{A, B\} \equiv AB + BA$ is the anticommutator.

This follows through use of the Cauchy–Schwarz inequality, since

$$|\langle A^2 \rangle| \, |\langle B^2 \rangle| \geq |\langle AB \rangle|^2,$$

and

$$AB = \frac{[A, B] + \{A, B\}}{2};$$

and similarly for the shifted operators $A - \langle A \rangle$ and $B - \langle B \rangle$. (Cf. uncertainty principle derivations.)

Substituting for $A$ and $B$ (and taking care with the analysis) yields Heisenberg's familiar uncertainty relation for $\hat{x}$ and $\hat{p}$, as usual.
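The inequality is easy to sanity-check numerically; below is a minimal sketch (an illustration, not part of the article) with random Hermitian matrices standing in for A and B and a random normalized state:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A, B = random_hermitian(n), random_hermitian(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

def ev(O):
    """Expectation value <psi|O|psi>."""
    return psi.conj() @ O @ psi

dA = np.sqrt(ev(A @ A).real - ev(A).real**2)   # standard deviation of A
dB = np.sqrt(ev(B @ B).real - ev(B).real**2)

A0 = A - ev(A).real * np.eye(n)                # shifted operators
B0 = B - ev(B).real * np.eye(n)
comm = ev(A @ B - B @ A)                       # purely imaginary for Hermitian A, B
anti = ev(A0 @ B0 + B0 @ A0).real              # real for Hermitian A0, B0

assert dA * dB >= 0.5 * np.sqrt(abs(comm)**2 + anti**2) - 1e-12
```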
Uncertainty relation for angular momentum operators
For the angular momentum operators $L_x$, $L_y$, etc., one has that

$$[L_x, L_y] = i\hbar \epsilon_{xyz} L_z,$$

where $\epsilon_{xyz}$ is the Levi-Civita symbol and simply reverses the sign of the answer under pairwise interchange of the indices. An analogous relation holds for the spin operators.

Here, for $L_x$ and $L_y$, in angular momentum multiplets $\psi = |\ell, m\rangle$, one has, for the transverse components of the Casimir invariant $L_x^2 + L_y^2 + L_z^2$, the $z$-symmetric relations

$$\langle L_x^2 \rangle = \langle L_y^2 \rangle = \frac{\ell(\ell + 1) - m^2}{2} \hbar^2,$$

as well as $\langle L_x \rangle = \langle L_y \rangle = 0$.

Consequently, the above inequality applied to this commutation relation specifies

$$\Delta L_x \, \Delta L_y \geq \frac{\hbar}{2} |\langle L_z \rangle| = \frac{\hbar^2}{2} |m|,$$

hence

$$\sqrt{\langle L_x^2 \rangle \langle L_y^2 \rangle} \geq \frac{\hbar^2}{2} |m|,$$

and therefore

$$\ell(\ell + 1) - m^2 \geq |m|,$$

so, then, it yields useful constraints such as a lower bound on the Casimir invariant: $\ell(\ell + 1) \geq |m|(|m| + 1)$, and hence $\ell \geq |m|$, among others.
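These relations can be checked directly in the smallest nontrivial multiplet; the sketch below (an illustrative verification with ℏ set to 1, using the standard matrix representation of the ℓ = 1 multiplet) confirms both the expectation values and the bound:

```python
import numpy as np

hbar = 1.0
s = 1 / np.sqrt(2)

# Standard l = 1 angular momentum matrices in the |l, m> basis (m = 1, 0, -1).
Lx = hbar * s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = hbar * s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Lz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

# Commutation relation [Lx, Ly] = i*hbar*Lz.
assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * hbar * Lz)

l = 1
for m, ket in zip([1, 0, -1], np.eye(3)):
    ex2 = (ket @ Lx @ Lx @ ket).real                 # <Lx^2>
    ey2 = (ket @ Ly @ Ly @ ket).real                 # <Ly^2>
    expected = (l * (l + 1) - m**2) * hbar**2 / 2
    assert np.isclose(ex2, expected) and np.isclose(ey2, expected)
    # Uncertainty bound sqrt(<Lx^2><Ly^2>) >= (hbar/2)|<Lz>| (with <Lx> = <Ly> = 0).
    assert np.sqrt(ex2 * ey2) >= 0.5 * hbar * abs((ket @ Lz @ ket).real) - 1e-12
```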
See also
Canonical quantization
CCR and CAR algebras
Conformastatic spacetimes
Lie derivative
Moyal bracket
Stone–von Neumann theorem
References
Quantum mechanics
Mathematical physics
zh:對易關係 | Canonical commutation relation | [
"Physics",
"Mathematics"
] | 1,999 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics",
"Quantum mechanics"
] |
706,311 | https://en.wikipedia.org/wiki/Canonical%20coordinates | In mathematics and classical mechanics, canonical coordinates are sets of coordinates on phase space which can be used to describe a physical system at any given point in time. Canonical coordinates are used in the Hamiltonian formulation of classical mechanics. A closely related concept also appears in quantum mechanics; see the Stone–von Neumann theorem and canonical commutation relations for details.
As Hamiltonian mechanics is generalized by symplectic geometry and canonical transformations are generalized by contact transformations, so the 19th-century definition of canonical coordinates in classical mechanics may be generalized to a more abstract 20th-century definition of coordinates on the cotangent bundle of a manifold (the mathematical notion of phase space).
Definition in classical mechanics
In classical mechanics, canonical coordinates are coordinates $q^i$ and $p_i$ in phase space that are used in the Hamiltonian formalism. The canonical coordinates satisfy the fundamental Poisson bracket relations:

$$\{q^i, q^j\} = 0, \qquad \{p_i, p_j\} = 0, \qquad \{q^i, p_j\} = \delta^i_j.$$

A typical example of canonical coordinates is for $q^i$ to be the usual Cartesian coordinates, and $p_i$ to be the components of momentum. Hence in general, the $p_i$ coordinates are referred to as "conjugate momenta".
Canonical coordinates can be obtained from the generalized coordinates of the Lagrangian formalism by a Legendre transformation, or from another set of canonical coordinates by a canonical transformation.
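As an illustration of the Legendre transformation step, the following SymPy sketch (an illustrative computation for a particle in a one-dimensional potential; symbol names are arbitrary) derives the conjugate momentum and Hamiltonian from a simple Lagrangian:

```python
import sympy as sp

m, q, qdot, p = sp.symbols('m q qdot p', real=True)
V = sp.Function('V')

# Lagrangian of a particle in a potential: L = (1/2) m qdot^2 - V(q)
L = sp.Rational(1, 2) * m * qdot**2 - V(q)

# Conjugate momentum from the Lagrangian: p = dL/d(qdot)
p_of_qdot = sp.diff(L, qdot)          # -> m*qdot

# Legendre transformation: H(q, p) = p*qdot - L, eliminating qdot via p = m*qdot
H = (p * qdot - L).subs(qdot, p / m)
print(sp.simplify(H))                 # -> p**2/(2*m) + V(q)
```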
Definition on cotangent bundles
Canonical coordinates are defined as a special set of coordinates on the cotangent bundle of a manifold. They are usually written as a set of $(q^i, p_j)$ or $(x^i, p_j)$, with the x's or q's denoting the coordinates on the underlying manifold and the p's denoting the conjugate momenta, which are 1-forms in the cotangent bundle at point q in the manifold.

A common definition of canonical coordinates is any set of coordinates on the cotangent bundle that allow the canonical one-form to be written in the form

$$\sum_i p_i \, dq^i,$$

up to a total differential. A change of coordinates that preserves this form is a canonical transformation; these are a special case of symplectomorphisms, which are essentially changes of coordinates on a symplectic manifold.
In the following exposition, we assume that the manifolds are real manifolds, so that cotangent vectors acting on tangent vectors produce real numbers.
Formal development
Given a manifold $Q$, a vector field $X$ on $Q$ (a section of the tangent bundle $TQ$) can be thought of as a function acting on the cotangent bundle, by the duality between the tangent and cotangent spaces. That is, define a function

$$P_X : T^*Q \to \mathbb{R}$$

such that

$$P_X(q, p) = p(X_q)$$

holds for all cotangent vectors $p$ in $T_q^*Q$. Here, $X_q$ is a vector in $T_qQ$, the tangent space to the manifold at point $q$. The function $P_X$ is called the momentum function corresponding to $X$.

In local coordinates, the vector field $X$ at point $q$ may be written as

$$X_q = \sum_i X^i(q) \frac{\partial}{\partial q^i},$$

where the $\partial / \partial q^i$ are the coordinate frame on $TQ$. The conjugate momentum then has the expression

$$P_X(q, p) = \sum_i X^i(q) \, p_i,$$

where the $p_i$ are defined as the momentum functions corresponding to the vectors $\partial / \partial q^i$:

$$p_i = P_{\partial / \partial q^i}.$$

The $q^i$ together with the $p_j$ form a coordinate system on the cotangent bundle $T^*Q$; these coordinates are called the canonical coordinates.
Generalized coordinates
In Lagrangian mechanics, a different set of coordinates are used, called the generalized coordinates. These are commonly denoted as $(q^i, \dot{q}^i)$, with $q^i$ called the generalized position and $\dot{q}^i$ the generalized velocity. When a Hamiltonian is defined on the cotangent bundle, then the generalized coordinates are related to the canonical coordinates by means of the Hamilton–Jacobi equations.
See also
Linear discriminant analysis
Symplectic manifold
Symplectic vector field
Symplectomorphism
Kinetic momentum
Complementarity (physics)
Canonical quantization
Canonical quantum gravity
References
Ralph Abraham and Jerrold E. Marsden, Foundations of Mechanics, (1978) Benjamin-Cummings, London. See section 3.2.
Differential topology
Symplectic geometry
Hamiltonian mechanics
Lagrangian mechanics
Coordinate systems
Moment (physics) | Canonical coordinates | [
"Physics",
"Mathematics"
] | 746 | [
"Physical quantities",
"Quantity",
"Theoretical physics",
"Classical mechanics",
"Lagrangian mechanics",
"Hamiltonian mechanics",
"Topology",
"Differential topology",
"Coordinate systems",
"Dynamical systems",
"Moment (physics)"
] |
706,399 | https://en.wikipedia.org/wiki/Path-ordering | In theoretical physics, path-ordering is the procedure (or a meta-operator $\mathcal{P}$) that orders a product of operators according to the value of a chosen parameter:

$$\mathcal{P}\left[O_1(\sigma_1)\, O_2(\sigma_2) \cdots O_N(\sigma_N)\right] \equiv O_{p(1)}(\sigma_{p(1)})\, O_{p(2)}(\sigma_{p(2)}) \cdots O_{p(N)}(\sigma_{p(N)}).$$

Here p is a permutation that orders the parameters by value:

$$p : \{1, 2, \dots, N\} \to \{1, 2, \dots, N\}, \qquad \sigma_{p(1)} \leq \sigma_{p(2)} \leq \cdots \leq \sigma_{p(N)}.$$

For example:

$$\mathcal{P}\left[O_1(4)\, O_2(2)\, O_3(3)\, O_4(1)\right] = O_4(1)\, O_2(2)\, O_3(3)\, O_1(4).$$
In many fields of physics, the most common type of path-ordering is time-ordering, which is discussed in detail below.
Examples
If an operator is not simply expressed as a product, but as a function of another operator, we must first perform a Taylor expansion of this function. This is the case of the Wilson loop, which is defined as a path-ordered exponential to guarantee that the Wilson loop encodes the holonomy of the gauge connection. The parameter σ that determines the ordering is a parameter describing the contour, and because the contour is closed, the Wilson loop must be defined as a trace in order to be gauge-invariant.
Time ordering
In quantum field theory it is useful to take the time-ordered product of operators. This operation is denoted by $\mathcal{T}$. (Although $\mathcal{T}$ is often called the "time-ordering operator", strictly speaking it is neither an operator on states nor a superoperator on operators.)
For two operators A(x) and B(y) that depend on spacetime locations x and y we define:

$$\mathcal{T}\left\{A(x)\, B(y)\right\} := \begin{cases} A(x)\, B(y) & \text{if } \tau_x > \tau_y, \\ \pm\, B(y)\, A(x) & \text{if } \tau_x < \tau_y. \end{cases}$$

Here $\tau_x$ and $\tau_y$ denote the invariant scalar time-coordinates of the points x and y.

Explicitly we have

$$\mathcal{T}\left\{A(x)\, B(y)\right\} := \theta(\tau_x - \tau_y)\, A(x)\, B(y) \pm \theta(\tau_y - \tau_x)\, B(y)\, A(x),$$

where $\theta$ denotes the Heaviside step function and the $\pm$ depends on if the operators are bosonic or fermionic in nature. If bosonic, then the + sign is always chosen; if fermionic, then the sign will depend on the number of operator interchanges necessary to achieve the proper time ordering. Note that the statistical factors do not enter here.

Since the operators depend on their location in spacetime (i.e. not just time) this time-ordering operation is only coordinate independent if operators at spacelike separated points commute. This is why it is necessary to use $\tau$ rather than $t^0$, since $t^0$ usually indicates the coordinate-dependent time-like index of the spacetime point. Note that the time-ordering is usually written with the time argument increasing from right to left.

In general, for the product of n field operators $O_1(\tau_1), \ldots, O_n(\tau_n)$, the time-ordered product of operators is defined as follows:

$$\mathcal{T}\{O_1(\tau_1) \cdots O_n(\tau_n)\} = \sum_p \theta\left(\tau_{p(1)} \geq \tau_{p(2)} \geq \cdots \geq \tau_{p(n)}\right) \varepsilon(p)\, O_{p(1)}(\tau_{p(1)}) \cdots O_{p(n)}(\tau_{p(n)}),$$

where the sum runs over all p's in the symmetric group of degree n, and

$$\varepsilon(p) = \begin{cases} +1 & \text{for bosonic operators,} \\ \operatorname{sgn}(p) & \text{for fermionic operators.} \end{cases}$$

The S-matrix in quantum field theory is an example of a time-ordered product. The S-matrix, transforming the state at $t = -\infty$ to a state at $t = +\infty$, can also be thought of as a kind of "holonomy", analogous to the Wilson loop. We obtain a time-ordered expression for the following reason:
We start with this simple formula for the exponential

$$e^x = \lim_{N \to \infty} \left(1 + \frac{x}{N}\right)^N.$$

Now consider the discretized evolution operator

$$S = \cdots (1 + h_{+3})(1 + h_{+2})(1 + h_{+1})(1 + h_0)(1 + h_{-1})(1 + h_{-2}) \cdots,$$

where $1 + h_j$ is the evolution operator over an infinitesimal time interval $[j\varepsilon, (j+1)\varepsilon]$. The higher order terms can be neglected in the limit $\varepsilon \to 0$. The operator $h_j$ is defined by

$$h_j = \frac{1}{i\hbar} \int_{j\varepsilon}^{(j+1)\varepsilon} dt \int d^3x \, H(\vec{x}, t).$$

Note that the evolution operators over the "past" time intervals appear on the right side of the product. We see that the formula is analogous to the identity above satisfied by the exponential, and we may write

$$S = \mathcal{T} \exp\left(\sum_{j=-\infty}^{\infty} h_j\right) = \mathcal{T} \exp\left(\frac{1}{i\hbar} \int dt \int d^3x \, H(\vec{x}, t)\right).$$

The only subtlety we had to include was the time-ordering operator $\mathcal{T}$, because the factors in the product defining S above were time-ordered, too (and operators do not commute in general), and the operator $\mathcal{T}$ ensures that this ordering will be preserved.
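To see numerically why the ordering matters, the sketch below (an illustration, not from the article; the 2 × 2 Hamiltonian is an arbitrary choice whose values at different times do not commute) compares the time-ordered product of short-interval evolution operators with the naive exponential of the integrated Hamiltonian:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices; H(t) rotates between sigma_x and sigma_z, so [H(t1), H(t2)] != 0.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    return np.cos(t) * sx + np.sin(t) * sz

T, N = 2.0, 2000
dt = T / N
ts = (np.arange(N) + 0.5) * dt

# Time-ordered product: evolution operators over later times act on the left.
U = np.eye(2, dtype=complex)
for t in ts:
    U = expm(-1j * H(t) * dt) @ U

# Naive (unordered) exponential of the integrated Hamiltonian.
U_naive = expm(-1j * sum(H(t) * dt for t in ts))

print(np.round(U, 4))
print(np.round(U_naive, 4))   # differs from U: the time-ordering is essential
```

As dt shrinks, the ordered product converges to the time-ordered exponential, while the naive exponential converges to a different (incorrect) operator whenever the Hamiltonians at different times fail to commute.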
See also
Ordered exponential (essentially the same concept)
Dyson series
Gauge theory
S-matrix
References
Quantum field theory
Gauge theories | Path-ordering | [
"Physics"
] | 712 | [
"Quantum field theory",
"Quantum mechanics"
] |
706,884 | https://en.wikipedia.org/wiki/Waterspout | A waterspout is a rotating column of air that occurs over a body of water, usually appearing as a funnel-shaped cloud in contact with the water and a cumuliform cloud. There are two types of waterspout, each formed by distinct mechanisms. The most common type is a weak vortex known as a "fair weather" or "non-tornadic" waterspout. The other less common type is simply a classic tornado occurring over water rather than land, known as a "tornadic", "supercellular", or "mesocyclonic" waterspout, and accurately a "tornado over water". A fair weather waterspout has a five-part life cycle: formation of a dark spot on the water surface; spiral pattern on the water surface; formation of a spray ring; development of a visible condensation funnel; and ultimately, decay. Most waterspouts do not suck up water.
While waterspouts form mostly in tropical and subtropical areas, they are also reported in Europe, Western Asia (the Middle East), Australia, New Zealand, the Great Lakes, Antarctica, and on rare occasions, the Great Salt Lake. Some are also found on the East Coast of the United States, and the coast of California. Although rare, waterspouts have been observed in connection with lake-effect snow precipitation bands.
Characteristics
Climatology
Though the majority of waterspouts occur in the tropics, they can seasonally appear in temperate areas throughout the world, and are common across the western coast of Europe as well as the British Isles and several areas of the Mediterranean and Baltic Sea. They are not restricted to saltwater; many have been reported on lakes and rivers including the Great Lakes and the St. Lawrence River. They are fairly common on the Great Lakes during late summer and early fall, with a record 66+ waterspouts reported over just a seven-day period in 2003.
Waterspouts are more frequent close to the coast than farther out at sea. They are common along the southeast U.S. coast, especially off southern Florida and the Keys, and can happen over seas, bays, and lakes worldwide. Approximately 160 waterspouts are currently reported per year across Europe, with the Netherlands reporting the most at 60, followed by Spain and Italy at 25, and the United Kingdom at 15. They are most common in late summer. In the Northern Hemisphere, September has been pinpointed as the prime month of formation. Waterspouts are also frequently observed off the east coast of Australia, with several being described by Joseph Banks during the voyage of the Endeavour in 1770.
Formation
Waterspouts exist on a microscale, where their environment is less than two kilometers in width. The cloud from which they develop can be as innocuous as a moderate cumulus, or as great as a supercell. While some waterspouts are strong and tornadic in nature, most are much weaker and caused by different atmospheric dynamics. They normally develop in moisture-laden environments as their parent clouds are in the process of development, and it is theorized they spin as they move up the surface boundary from the horizontal shear near the surface, and then stretch upwards to the cloud once the low-level shear vortex aligns with a developing cumulus cloud or thunderstorm. Some weak tornadoes, known as landspouts, have been shown to develop in a similar manner.
More than one waterspout can occur simultaneously in the same vicinity. In 2012, as many as nine simultaneous waterspouts were reported on Lake Michigan in the United States. In May 2021, at least five simultaneous waterspouts were filmed near Taree, off the northern coast of New South Wales, Australia.
Types
Non-tornadic
Waterspouts that are not associated with a rotating updraft of a supercell thunderstorm are known as "non-tornadic" or "fair-weather" waterspouts. By far the most common type of waterspout, these occur in coastal waters and are associated with dark, flat-bottomed, developing convective cumulus towers. Fair-weather waterspouts develop and dissipate rapidly, having life cycles shorter than 20 minutes. They usually rate no higher than EF0 on the Enhanced Fujita scale, generally exhibiting relatively weak winds.
They are most frequently seen in tropical and sub-tropical climates, with upwards of 400 per year observed in the Florida Keys. They typically move slowly, if at all, since the cloud to which they are attached is horizontally static, being formed by vertical convective action rather than the subduction/adduction interaction between colliding fronts. Fair-weather waterspouts are very similar in both appearance and mechanics to landspouts, and largely behave as such if they move ashore.
There are five stages to a fair-weather waterspout life cycle. Initially, a prominent circular, light-colored disk appears on the surface of the water, surrounded by a larger dark area of indeterminate shape. After the formation of these colored disks on the water, a pattern of light- and dark-colored spiral bands develops from the dark spot on the water surface. Then, a dense annulus of sea spray, called a "cascade", appears around the dark spot with what appears to be an eye. Eventually, the waterspout becomes a visible funnel from the water surface to the overhead cloud. The spray vortex can rise to a height of several hundred feet or more, and often creates a visible wake and an associated wave train as it moves. Finally, the funnel and spray vortex begin to dissipate as the inflow of warm air into the vortex weakens, ending the waterspout's life cycle.
Tornadic
"Tornadic waterspouts", also accurately referred to as "tornadoes over water", are formed from mesocyclones in a manner essentially identical to land-based tornadoes in connection with severe thunderstorms, but simply occurring over water. A tornado which travels from land to a body of water would also be considered a tornadic waterspout. Since the vast majority of mesocyclonic thunderstorms in the United States occur in land-locked areas, true tornadic waterspouts are correspondingly rarer than their fair-weather counterparts in that country. However, in some areas, such as the Adriatic, Aegean and Ionian Seas, tornadic waterspouts can make up half of the total number.
Snowspout
A winter waterspout, also known as an icespout, an ice devil, or a snowspout, is a rare instance of a waterspout forming under the base of a snow squall. The term "winter waterspout" is used to differentiate between the common warm season waterspout and this rare winter season event. There are a couple of critical criteria for the formation of a winter waterspout. Very cold temperatures need to be present over a body of water, which is itself warm enough to produce fog resembling steam above the water's surface. Like the more efficient lake-effect snow events, winds focusing down the axis of long lakes enhance wind convergence and increase the likelihood of a winter waterspout developing.
The terms "snow devil" and "snownado" describe a different phenomenon: a snow vortex close to the surface with no parent cloud, similar to a dust devil.
Impacts
Human
Waterspouts have long been recognized as serious marine hazards. Stronger waterspouts pose a threat to watercraft, aircraft and people. It is recommended to keep a considerable distance from these phenomena, and to always be on alert through weather reports. The United States National Weather Service will often issue special marine warnings when waterspouts are likely or have been sighted over coastal waters, or tornado warnings when waterspouts are expected to move onshore.
Incidents of waterspouts causing severe damage and casualties are rare; however, there have been several notable examples. The Malta tornado of 1551 was the earliest recorded occurrence of a deadly waterspout. It struck the Grand Harbour of Valletta, sinking four galleys and numerous boats, and killing hundreds of people. The 1851 Sicily tornadoes were twin waterspouts that made landfall in western Sicily, ravaging the coast and countryside before ultimately dissipating back again over the sea. In August 2024, a waterspout was reported by some witnesses of the sinking of the large yacht Bayesian off the coast of Sicily and might have been the cause or an aggravating circumstance. Seven people died, while 15 of the 22 aboard were rescued.
Natural
Depending on how fast the winds from a waterspout are whipping, anything close to the surface of the water, including fish of different sizes, frogs, and even turtles, can be lifted into the air. A waterspout can sometimes suck small animals such as fish out of the water and all the way up into the cloud. Even if the waterspout stops spinning, the fish in the cloud can be carried over land, buffeted up and down and around with the cloud's winds until its currents no longer keep the fish airborne. Depending on how far they travel and how high they are taken into the atmosphere, the fish are sometimes dead by the time they rain down. People living well inland have experienced raining fish. Fish can also be sucked up from rivers, but raining fish is not a common weather phenomenon.
Research and forecasting
The Szilagyi Waterspout Index (SWI), developed by Canadian meteorologist Wade Szilagyi, is used to predict conditions favorable for waterspout development. The SWI ranges from −10 to +10, where values greater than or equal to zero represent conditions favorable for waterspout development.
The International Centre for Waterspout Research (ICWR) is a non-governmental organization of individuals from around the world who are interested in the field of waterspouts from a research, operational and safety perspective. Originally a forum for researchers and meteorologists, the ICWR has expanded interest and contribution from storm chasers, the media, the marine and aviation communities and from private individuals.
Myths
There was a commonly held belief among sailors in the 18th and 19th centuries that shooting a broadside cannon volley dispersed waterspouts. Among others, Captain Vladimir Bronevskiy claimed that it was a successful technique, having been an eyewitness to the dissipation of such a phenomenon in the Adriatic while a midshipman aboard the frigate Venus during the 1806 campaign under Admiral Senyavin.
A waterspout has been proposed as a reason for the abandonment of the Mary Celeste.
See also
Fire whirl
Funnel cloud
Steam devil
Tornadogenesis
References
External links
A series of pictures from the boat Nicorette approaching the NSW south coast tornadic waterspout.
Pictures of cold-core waterspouts over Lake Michigan on 30 September 2006. Archived from the original on 10 March 2007.
"A Winter Waterspout". Monthly Weather Review, February 1907.
Severe weather and convection
Tornado
Vortices
Weather hazards
de:Wasserhose | Waterspout | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,273 | [
"Physical phenomena",
"Vortices",
"Weather hazards",
"Weather",
"Dynamical systems",
"Fluid dynamics"
] |
706,999 | https://en.wikipedia.org/wiki/Atmospheric%20chemistry | Atmospheric chemistry is a branch of atmospheric science that studies the chemistry of the Earth's atmosphere and that of other planets. This multidisciplinary approach of research draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology, climatology and other disciplines to understand both natural and human-induced changes in atmospheric composition. Key areas of research include the behavior of trace gasses, the formation of pollutants, and the role of aerosols and greenhouse gasses. Through a combination of observations, laboratory experiments, and computer modeling, atmospheric chemists investigate the causes and consequences of atmospheric changes.
Atmospheric composition
The composition and chemistry of the Earth's atmosphere is important for several reasons, but primarily because of the interactions between the atmosphere and living organisms. Natural processes such as volcanic emissions, lightning, and bombardment by solar particles from the corona change the composition of the Earth's atmosphere. It has also been changed by human activity, and some of these changes are harmful to human health, crops and ecosystems.
Trace gas composition
Besides its major components, the Earth's atmosphere contains many trace gas species that vary significantly depending on nearby sources and sinks. These trace gasses include compounds such as CFCs/HCFCs, which are particularly damaging to the ozone layer, and H2S, which has a characteristic foul odor of rotten eggs and can be smelled in concentrations as low as 0.47 ppb. In addition to gasses, the atmosphere contains particles such as aerosol, which includes examples such as droplets, ice crystals, bacteria, and dust.
History
The first scientific studies of atmospheric composition began in the 18th century when chemists such as Joseph Priestley, Antoine Lavoisier and Henry Cavendish made the first measurements of the composition of the atmosphere.
In the late 19th and early 20th centuries, researchers shifted their interest towards trace constituents with very low concentrations. An important finding from this era was the discovery of ozone by Christian Friedrich Schönbein in 1840.
In the 20th century atmospheric science moved on from studying the composition of air to considering how the concentrations of trace gasses in the atmosphere have changed over time, and the chemical processes which create and destroy compounds in the air. Two important outcomes were the explanation by Sydney Chapman and Gordon Dobson of how the ozone layer is created and maintained, and Arie Jan Haagen-Smit's explanation of photochemical smog. Further studies on ozone issues led to the 1995 Nobel Prize in Chemistry, shared by Paul Crutzen, Mario Molina and Frank Sherwood Rowland.
In the 21st century the focus is now shifting again. Instead of concentrating on atmospheric chemistry in isolation, it is now seen as one part of the Earth system with the rest of the atmosphere, biosphere and geosphere. A driving force for this link is the relationship between chemistry and climate. The changing climate and the recovery of the ozone hole and the interaction of the composition of the atmosphere with the oceans and terrestrial ecosystems are examples of the interdependent relationships between Earth's systems. A new field of extraterrestrial atmospheric chemistry has also recently emerged. Astrochemists analyze the atmospheric compositions of our solar system and exoplanets to determine the formation of astronomical objects and find habitable conditions for Earth-like life.
Methodology
Observations, lab measurements, and modeling are the three central elements in atmospheric chemistry. Progress in atmospheric chemistry is often driven by the interactions between these components and they form an integrated whole. For example, observations may tell us that more of a chemical compound exists than previously thought possible. This will stimulate new modeling and laboratory studies which will increase our scientific understanding to a level where we can explain the observations.
Observation
Field observations of chemical systems are essential to understanding atmospheric processes and determining the accuracy of models. Atmospheric chemistry measurements are long term to observe continuous trends or short term to observe smaller variations. In situ and remote measurements can be made using observatories, satellites, field stations, and laboratories.
Routine observations of chemical composition show changes in atmospheric composition over time. Observatories such as Mauna Loa and mobile platforms such as aircraft, ships, and balloons (e.g. the UK's Facility for Airborne Atmospheric Measurements) study chemical compositions and weather dynamics. An application of long-term observations is the Keeling Curve – a series of measurements from 1958 to today which shows a steady rise in the concentration of carbon dioxide (see also ongoing measurements of atmospheric CO2). Observations of atmospheric composition are increasingly made by satellites by passive and active remote sensing, with important instruments such as GOME and MOPITT giving a global picture of air pollution and chemistry.

Surface observations have the advantage that they provide long-term records at high time resolution, but are limited in the vertical and horizontal space they provide observations from. Some surface-based instruments, e.g. LIDAR, can provide concentration profiles of chemical compounds and aerosols, but are still restricted in the horizontal region they can cover. Many observations are available online in Atmospheric Chemistry Observational Databases.
Laboratory studies
Laboratory studies help in understanding the complex interactions among Earth's systems that can be difficult to measure on a large scale. Experiments are performed in controlled environments, such as aerosol chambers, that allow for the individual evaluation of specific chemical reactions or the assessment of properties of a particular atmospheric constituent. A closely related subdiscipline is atmospheric photochemistry, which quantifies the rate at which molecules are split apart by sunlight, determines the resulting products, and obtains thermodynamic data such as Henry's law coefficients.
Laboratory measurements are essential to understanding the sources and sinks of pollutants and naturally occurring compounds. Types of analysis that are of interest include both those on gas-phase reactions, as well as heterogeneous reactions that are relevant to the formation and growth of aerosols. Commonly used instruments to measure aerosols include ambient and particulate air samplers, scanning mobility particle sizers, and mass spectrometers.
Modeling
Models are essential tools for interpreting observational data, testing hypotheses about chemical reactions, and predicting future concentrations of atmospheric chemicals. To synthesize and test theoretical understanding of atmospheric chemistry, researchers commonly use computer models, such as chemical transport models (CTMs). CTMs provide realistic descriptions of the three-dimensional transport and evolution of the atmosphere. Atmospheric models can be seen as mathematical representations that replicate the behavior of the atmosphere. These numerical models solve the differential equations governing the concentrations of chemicals in the atmosphere.
Depending on the complexity, these models can range from simple to highly detailed. Models can be zero-, one-, two-, or three-dimensional, each with various uses and advantages. Three-dimensional chemical transport models offer the most realistic simulations but require substantial computational resources. These models can be global e.g. GCM, simulating the atmospheric conditions across the Earth, or regional, e.g. RAMS focusing on specific areas with greater resolution. Global models typically have lower horizontal resolution and represent less complex chemical mechanisms but they cover a larger area, while regional models can represent a limited area with higher resolution and more detail.
A major challenge in atmospheric modeling is balancing the number of chemical compounds and reactions included in the model against the accuracy of physical processes such as transport and mixing in the atmosphere. Two of the simplest types of model are box models and puff models. For example, box models are relatively simple and may include hundreds or even thousands of chemical reactions, but they typically use a very crude representation of the atmospheric mixed layer. This makes them useful for studying specific chemical reactions, but limited in simulating real-world dynamics. In contrast, 3D models are more complex, representing a variety of physical processes such as wind, convection, and atmospheric mixing, and they provide more realistic representations of transport and mixing. However, computational limits often force simplifications, so they typically include fewer chemical reactions than box models. The trade-off between the two approaches lies in resolution and complexity.
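As an illustration of the box-model idea, the following sketch integrates a toy NO–NO2–O3 mechanism in a zero-dimensional box (a minimal example assuming SciPy; the mechanism, initial concentrations, and the rate values j ≈ 8×10⁻³ s⁻¹ and k ≈ 1.9×10⁻¹⁴ cm³ s⁻¹ are rough textbook numbers, not taken from this article):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy NO-NO2-O3 mechanism in a well-mixed box (units: molecules/cm^3, seconds).
j_no2 = 8.0e-3     # NO2 photolysis frequency, s^-1 (rough midday value)
k_no_o3 = 1.9e-14  # NO + O3 -> NO2 + O2 rate constant, cm^3 s^-1 (~298 K)

def rhs(t, y):
    no, no2, o3 = y
    r1 = j_no2 * no2        # NO2 + hv -> NO + O (O + O2 -> O3 treated as instantaneous)
    r2 = k_no_o3 * no * o3  # NO + O3 -> NO2 + O2
    return [r1 - r2, r2 - r1, r1 - r2]

y0 = [2.5e10, 2.5e10, 1.0e12]   # initial NO, NO2, O3
sol = solve_ivp(rhs, (0, 600), y0, method="LSODA", rtol=1e-8)

no, no2, o3 = sol.y[:, -1]
# Photostationary state: production and loss balance, j[NO2] ~ k[NO][O3].
print(j_no2 * no2, k_no_o3 * no * o3)
```

A real chemical transport model couples thousands of such rate equations to transport and mixing terms in every grid cell, which is where the resolution-versus-complexity trade-off described above arises.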
To simplify the creation of these complex models, some researchers use automatic code generators like Autochem or Kinetic PreProcessor. These tools help automate the model-building process by selecting relevant chemical reactions from databases based on a user-defined function of chemical constituents. Once the reactions are chosen, the code generator automatically constructs the ordinary differential equations that describe their time evolution, greatly reducing the time and effort required for model construction.
Differences between model predictions and real-world observations can arise from errors in model input parameters or from flawed representations of processes in the model. Some input parameters, such as surface emissions, are often quantified less accurately from observations than from model results. The model can be improved by adjusting poorly known parameters to better match observed data. A formal method for applying these adjustments is Bayesian optimization within an inverse modeling framework, where the results from the CTMs are inverted to optimize selected parameters. This approach has gained attention over the past decade as an effective method to interpret the large amounts of data generated by models and observations from satellites.
One important current trend is using atmospheric chemistry as part of Earth system models. These models integrate atmospheric chemistry with other Earth system components, enabling the study of complex interactions between climate, atmospheric composition, and ecosystems.
Applications
Atmospheric chemistry is a multidisciplinary field with wide-ranging applications that influence environmental policy, human health, technology development, and climate science. Examples of problems addressed in atmospheric chemistry include acid rain, ozone depletion, photochemical smog, greenhouse gasses and global warming. By developing a theoretical understanding, atmospheric chemists can test potential solutions and evaluate the effects of changes in government policy. Key applications include greenhouse gas monitoring, air quality and pollution control, weather prediction and meteorology, energy and emissions, sustainable energy development, and public health and toxicology. Green atmospheric chemistry research prioritizes the sustainable, safe, and efficient use of chemicals, which led to government regulations minimizing the use of harmful chemicals like CFCs and DDT.
Advances in remote sensing technology allow scientists to monitor atmospheric chemical composition from satellites and ground-based stations. Instruments such as the Ozone Monitoring Instrument (OMI) and Atmospheric Infrared Sounder (AIRS) provide data on pollutants, greenhouse gasses, and aerosols, enabling real-time monitoring of air quality.
Atmospheric chemistry is vital for evaluating the environmental impacts of energy production, including fossil fuels and renewable energy sources. By studying emissions, researchers can develop cleaner energy technologies and assess their effects on air quality and climate. Atmospheric chemistry also helps quantify the concentration and persistence of toxic substances in the air, including particulate matter and volatile organic compounds (VOCs), guiding public health measures and exposure assessments.
See also
Oxygen cycle
Ozone-oxygen cycle
Paleoclimatology
Scientific Assessment of Ozone Depletion
Tropospheric ozone depletion events
References
Further reading
Finlayson-Pitts, Barbara J.; Pitts, James N., Jr. (2000). Chemistry of the Upper and Lower Atmosphere. Academic Press. .
Iribarne, J. V. Cho, H. R. (1980). Atmospheric Physics, D. Reidel Publishing Company.
Seinfeld, John H.; Pandis, Spyros N. (2006). Atmospheric Chemistry and Physics: From Air Pollution to Climate Change (2nd Ed.). John Wiley and Sons, Inc. .
Warneck, Peter (2000). Chemistry of the Natural Atmosphere (2nd Ed.). Academic Press. .
Wayne, Richard P. (2000). Chemistry of Atmospheres (3rd Ed.). Oxford University Press. .
External links
WMO Scientific Assessment of Ozone Depletion: 2006
IGAC The International Global Atmospheric Chemistry Project
Paul Crutzen Interview - freeview video of Paul Crutzen Nobel Laureate for his work on decomposition of ozone, talking to Nobel Laureate Harry Kroto, the Vega Science Trust
The Cambridge Atmospheric Chemistry Database is a large constituent observational database in a common format.
Environmental Science Published for Everybody Round the Earth
NASA-JPL Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies
Kinetic and photochemical data evaluated by the IUPAC Subcommittee for Gas Kinetic Data Evaluation
Tropospheric chemistry
An illustrated elementary assessment of the composition of air
Atmospheric Chemistry
Environmental chemistry
Chemistry | Atmospheric chemistry | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,493 | [
"Environmental chemistry",
"Atmospheric dispersion modeling",
"nan",
"Environmental engineering",
"Environmental modelling"
] |
216,474 | https://en.wikipedia.org/wiki/Real%20options%20valuation | Real options valuation, also often termed real options analysis, (ROV or ROA) applies option valuation techniques to capital budgeting decisions. A real option itself, is the right—but not the obligation—to undertake certain business initiatives, such as deferring, abandoning, expanding, staging, or contracting a capital investment project. For example, real options valuation could examine the opportunity to invest in the expansion of a firm's factory and the alternative option to sell the factory.
Scope
Real options are generally distinguished from conventional financial options in that they are not typically traded as securities, and do not usually involve decisions on an underlying asset that is traded as a financial security. A further distinction is that option holders here, i.e. management, can directly influence the value of the option's underlying project; whereas this is not a consideration regarding the underlying security of a financial option. Moreover, management cannot measure uncertainty in terms of volatility, and must instead rely on their perceptions of uncertainty. Unlike financial options, management must also create or discover real options, and such creation and discovery process comprises an entrepreneurial or business task. Real options are most valuable when uncertainty is high; management has significant flexibility to change the course of the project in a favorable direction and is willing to exercise the options.
Real options analysis, as a discipline, extends from its application in corporate finance, to decision making under uncertainty in general, adapting the techniques developed for financial options to "real-life" decisions. For example, R&D managers can use real options valuation to help them deal with various uncertainties in making decisions about the allocation of resources among R&D projects. Non-business examples might be evaluating the cost of cryptocurrency mining machines, or the decision to join the work force, or rather, to forgo several years of income to attend graduate school. It, thus, forces decision makers to be explicit about the assumptions underlying their projections, and for this reason ROV is increasingly employed as a tool in business strategy formulation. This extension of real options to real-world projects often requires customized decision support systems, because otherwise the complex compound real options will become too intractable to handle.
Types of real options
The flexibility available to management – i.e. the actual "real options" – generically, will relate to project size, project timing, and the operation of the project once established. In all cases, any (non-recoverable) upfront expenditure related to this flexibility is the option premium. Real options are also commonly applied to stock valuation, as well as to various other "Applications" referenced below.
Options relating to project size
Where the project's scope is uncertain, flexibility as to the size of the relevant facilities is valuable, and constitutes optionality.
Option to expand: Here the project is built with capacity in excess of the expected level of output so that it can produce at higher rates if needed. Management then has the option (but not the obligation) to expand – i.e. exercise the option – should conditions turn out to be favourable. A project with the option to expand will cost more to establish, the excess being the option premium, but is worth more than the same without the possibility of expansion. This is equivalent to a call option.
Option to contract: The project is engineered such that output can be contracted in future should conditions turn out to be unfavourable. Forgoing these future expenditures constitutes option exercise. This is the equivalent to a put option, and again, the excess upfront expenditure is the option premium.
Option to expand or contract: Here the project is designed such that its operation can be dynamically turned on and off. Management may shut down part or all of the operation when conditions are unfavorable (a put option), and may restart operations when conditions improve (a call option). A flexible manufacturing system (FMS) is a good example of this type of option. This option is also known as a Switching option.
Options relating to project life and timing
Where there is uncertainty as to when, and how, business or other conditions will eventuate, flexibility as to the timing of the relevant project(s) is valuable, and constitutes optionality.
Growth options: perhaps the most generic in this category – these entail the call option to exercise only those projects that appear to be profitable at the time of initiation.
Initiation or deferment options: Here management has flexibility as to when to start a project. For example, in natural resource exploration a firm can delay mining a deposit until market conditions are favorable. This constitutes an American styled call option.
Delay option with a product patent: A firm with a patent right on a product has a right to develop and market the product exclusively until the expiration of the patent. The firm will market and develop the product only if the present value of the expected cash flows from the product sales exceeds the cost of development. If this does not occur, the firm can shelve the patent and not incur any further costs.
Option to abandon: Management may have the option to cease a project during its life, and, possibly, to realise its salvage value. Here, when the present value of the remaining cash flows falls below the liquidation value, the asset may be sold, and this act is effectively the exercising of a put option. This option is also known as a Termination option. Abandonment options are American styled.
Sequencing options: This option is related to the initiation option above, although entails flexibility as to the timing of more than one inter-related projects: the analysis here is as to whether it is advantageous to implement these sequentially or in parallel. Here, observing the outcomes relating to the first project, the firm can resolve some of the uncertainty relating to the venture overall. Once resolved, management has the option to proceed or not with the development of the other projects. If taken in parallel, management would have already spent the resources and the value of the option not to spend them is lost. The sequencing of projects is an important issue in corporate strategy. Related here is also the notion of Intraproject vs. Interproject options.
Options relating to project operation
Management may have flexibility relating to the product produced and/or the process used in manufacture. As in the preceding cases, this flexibility increases the value of the project, corresponding in turn, to the "premium" paid for the real option.
Output mix options: The option to produce different outputs from the same facility is known as an output mix option or product flexibility. These options are particularly valuable in industries where demand is volatile or where quantities demanded in total for a particular good are typically low, and management would wish to change to a different product quickly if required.
Input mix options: An input mix option – process flexibility – allows management to use different inputs to produce the same output as appropriate. For example, a farmer will value the option to switch between various feed sources, preferring to use the cheapest acceptable alternative. An electric utility, for example, may have the option to switch between various fuel sources to produce electricity, and therefore a flexible plant, although more expensive may actually be more valuable.
Operating scale options: Management may have the option to change the output rate per unit of time or to change the total length of production run time, for example in response to market conditions. These options are also known as Intensity options.
Examples
Investment
This simple example shows the relevance of the real option to delay investment and wait for further information.
Consider a firm that has the option to invest in a new factory. It can invest this year or next year. The question is: when should the firm invest? If the firm invests this year, it has an income stream earlier. But, if it invests next year, the firm obtains further information about the state of the economy, which can prevent it from investing with losses.
The firm knows its discounted cash flows if it invests this year: 5M. If it invests next year, the discounted cash flows are 6M with a 66.7% probability, and 3M with a 33.3% probability. Assuming a risk neutral rate of 10%, future discounted cash flows are, in present terms, 5.45M and 2.73M, respectively. The investment cost is 4M. If the firm invests next year, the present value of the investment cost is 3.63M.
Following the net present value rule for investment, the firm should invest this year because the discounted cash flows (5M) are greater than the investment costs (4M) by 1M. Yet, if the firm waits for next year, it only invests if discounted cash flows do not decrease. If discounted cash flows decrease to 3M, then investment is no longer profitable. If, they grow to 6M, then the firm invests. This implies that the firm invests next year with a 66.7% probability and earns 5.45M - 3.63M if it does invest. Thus the value to invest next year is 1.21M. Given that the value to invest next year exceeds the value to invest this year, the firm should wait for further information to prevent losses. This simple example shows how the net present value may lead the firm to take unnecessary risk, which could be prevented by real options valuation.
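To make the arithmetic explicit, here is a minimal Python sketch reproducing the comparison (the numbers are those of the example above; variable names and layout are illustrative, not from the article):

```python
# Option to delay: invest now versus wait a year for more information.
r = 0.10                      # risk-neutral discount rate
cost_now, dcf_now = 4.0, 5.0  # in millions

npv_now = dcf_now - cost_now  # invest this year: 1.00M

# Next year: DCF is 6.0 with p = 2/3 and 3.0 with p = 1/3, discounted to today.
p_up = 2 / 3
dcf_up = 6.0 / (1 + r)        # ~5.45M
cost_next = 4.0 / (1 + r)     # ~3.63M

# The firm only invests in the up state (in the down state, 3.0 < cost, it walks away).
value_wait = p_up * (dcf_up - cost_next)   # ~1.21M

print(f"invest now: {npv_now:.2f}M, wait: {value_wait:.2f}M")
# wait (1.21M) > invest now (1.00M): the option to delay has value
```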
Staged Investment
Staged investments are quite common in the pharmaceutical, mineral, and oil industries. In this example, we study a staged investment abroad, in which a firm decides whether to open one or two stores in a foreign country.

The firm does not know how well its stores are accepted in a foreign country. If their stores have high demand, the discounted cash flows per store is 10M. If their stores have low demand, the discounted cash flows per store is 5M. Assuming that the probability of both events is 50%, the expected discounted cash flows per store is 7.5M. It is also known that demand is not independent across stores: if one store has high demand, the other also has high demand. The risk neutral rate is 10%. The investment cost per store is 8M.
Should the firm invest in one store, two stores, or not invest? The net present value suggests the firm should not invest: the net present value is -0.5M per store. But is it the best alternative? Following real options valuation, it is not: the firm has the real option to open one store this year, wait a year to know its demand, and invest in the new store next year if demand is high.
By opening one store, the firm knows that the probability of high demand is 50%. The expected value today of the option of expanding next year is thus 50% * (10M - 8M) / (1 + 10%) = 0.91M. The value of opening one store this year is 7.5M - 8M = -0.5M. Thus the value of the real option to invest in one store, wait a year, and invest next year is 0.41M. Given this, the firm should opt to open one store. This simple example shows that a negative net present value does not imply that the firm should not invest.
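The staged-investment arithmetic can be checked the same way (again an illustrative sketch using the example's numbers; names are arbitrary):

```python
# Staged entry: open one store now, expand next year only if demand turns out high.
r = 0.10
cost, dcf_high, dcf_low = 8.0, 10.0, 5.0   # in millions
p_high = 0.5

npv_one_store = p_high * dcf_high + (1 - p_high) * dcf_low - cost   # -0.50M

# Value of the embedded option to open a second store if demand proves high.
expansion_option = p_high * (dcf_high - cost) / (1 + r)             # ~0.91M

total = npv_one_store + expansion_option                            # ~0.41M
print(f"NPV alone: {npv_one_store:.2f}M, with expansion option: {total:.2f}M")
```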
Valuation
Given the above, it is clear that there is an analogy between real options and financial options, and we would therefore expect options-based modelling and analysis to be applied here. At the same time, it is nevertheless important to understand why the more standard valuation techniques may not be applicable for ROV.
Applicability of standard techniques
ROV is often contrasted with more standard techniques of capital budgeting, such as discounted cash flow (DCF) analysis / net present value (NPV). Under this "standard" NPV approach, future expected cash flows are present valued under the empirical probability measure at a discount rate that reflects the embedded risk in the project; see CAPM, APT, WACC. Here, only the expected cash flows are considered, and the "flexibility" to alter corporate strategy in view of actual market realizations is "ignored"; see below. The NPV framework (implicitly) assumes that management is "passive" with regard to their Capital Investment once committed. Some analysts account for this uncertainty by (i) adjusting the discount rate, e.g. by increasing the cost of capital, or (ii) adjusting the cash flows, e.g. using certainty equivalents, or (iii) applying (subjective) "haircuts" to the forecast numbers, or (iv) via probability-weighting these as in rNPV. Even when employed, however, these latter methods do not normally properly account for changes in risk over the project's lifecycle and hence fail to appropriately adapt the risk adjustment.
By contrast, ROV assumes that management is "active" and can "continuously" respond to market changes. Real options consider "all" scenarios (or "states") and indicate the best corporate action in each of these contingent events. Because management adapts to each negative outcome by decreasing its exposure and to positive scenarios by scaling up, the firm benefits from uncertainty in the underlying market, achieving a lower variability of profits than under the commitment/NPV stance. The contingent nature of future profits in real option models is captured by employing the techniques developed for financial options in the literature on contingent claims analysis. Here the approach, known as risk-neutral valuation, consists in adjusting the probability distribution for risk consideration, while discounting at the risk-free rate. This technique is also known as the "martingale" approach, and uses a risk-neutral measure. For technical considerations here, see below. For related discussion and graphical representation see Datar–Mathews method for real option valuation.
Given these different treatments, the real options value of a project is typically higher than the NPV – and the difference will be most marked in projects with major flexibility, contingency, and volatility. As for financial options, a higher volatility of the underlying leads to a higher value. An application of real options valuation in the Philippine banking industry showed that increased levels of income volatility may adversely affect option values on the loan portfolio, when the presence of information asymmetry is considered. In this case, increased volatility may limit the value of an option. Part of the criticism, and the subsequently slow adoption, of real options valuation in practice and academia stems from the generally higher values for underlying assets these models generate. However, studies have shown that these models are reliable estimators of underlying asset value, when input values are properly identified.
Options based valuation
Although there is much similarity between the modelling of real options and financial options, ROV is distinguished from the latter, in that it takes into account uncertainty about the future evolution of the parameters that determine the value of the project, coupled with management's ability to respond to the evolution of these parameters. It is the combined effect of these that makes ROV technically more challenging than its alternatives.
When valuing the real option, the analyst must therefore consider the inputs to the valuation, the valuation method employed, and whether any technical limitations may apply. Conceptually, valuing a real option looks at the premium between inflows and outlays for a particular project. Inputs to the value of a real option (time, discount rates, volatility, cash inflows and outflows) are each affected by the terms of business and the external environmental factors in which a project exists. Terms of business, such as information regarding ownership, data collection costs, and patents, are formed in relation to the political, economic, socio-cultural, technological, environmental and legal factors that affect an industry. Just as terms of business are affected by external environmental factors, these same circumstances affect the volatility of returns, as well as the discount rate (as firm- or project-specific risk). Furthermore, the external environmental influences that affect an industry affect projections on expected inflows and outlays.
Valuation inputs
Given the similarity in valuation approach, the inputs required for modelling the real option correspond, generically, to those required for a financial option valuation. The specific application, though, is as follows:
The option's underlying is the project in question – it is modelled in terms of:
Spot price: the starting or current value of the project is required: this is usually based on management's "best guess" as to the gross value of the project's cash flows and resultant NPV;
Volatility: a measure for uncertainty as to the change in value over time is required:
the volatility in project value is generally used, usually derived via Monte Carlo simulation; sometimes the volatility of the first period's cash flows is preferred; see further under Corporate finance for a discussion relating to the estimation of NPV and project volatility.
some analysts substitute a listed security as a proxy, using either its price volatility (historical volatility), or, if options exist on this security, their implied volatility.
Dividends generated by the underlying asset: As part of a project, the dividend equates to any income which could be derived from the real assets and paid to the owner. These reduce the appreciation of the asset.
Option characteristics:
Strike price: this corresponds to any (non-recoverable) investment outlays, typically the prospective costs of the project. In general, management would proceed (i.e. the option would be in the money) given that the present value of expected cash flows exceeds this amount;
Option term: the time during which management may decide to act, or not act, corresponds to the life of the option. As above, examples include the time to expiry of a patent, or of the mineral rights for a new mine. See Option time value. Note though that given the flexibility related to timing as described, caution must be applied here.
Option style and option exercise. Management's ability to respond to changes in value is modeled at each decision point as a series of options; as above, these may comprise, inter alia:
the option to contract the project (an American styled put option);
the option to abandon the project (also an American put);
the option to expand or extend the project (both American styled call options);
switching options or composite options which may also apply to the project.
Valuation methods
The valuation methods usually employed, likewise, are adapted from techniques developed for valuing financial options. Note though that, in general, while most "real" problems allow for American style exercise at any point (many points) in the project's life and are impacted by multiple underlying variables, the standard methods are limited either with regard to dimensionality, to early exercise, or to both. In selecting a model, therefore, analysts must make a trade-off between these considerations. The model must also be flexible enough to allow for the relevant decision rule to be coded appropriately at each decision point.
Closed form, Black–Scholes-like solutions are sometimes employed. These are applicable only for European styled options or perpetual American options. Note that this application of Black–Scholes assumes constant — i.e. deterministic — costs: in cases where the project's costs, like its revenue, are also assumed stochastic, then Margrabe's formula can (should) be applied instead, here valuing the option to "exchange" expenses for revenue. (Relatedly, where the project is exposed to two (or more) uncertainties — e.g. for natural resources, price and quantity — some analysts attempt to use an overall volatility; this, though, is more correctly treated as a rainbow option, typically valued using simulation as below.)
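As an illustration of the exchange-option case, Margrabe's formula is straightforward to implement. In the sketch below, V and C stand for the present values of the project's revenues and costs; all numerical inputs are placeholders chosen for illustration:

```python
from math import log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def margrabe(V, C, sigma_V, sigma_C, rho, T):
    """Value of the option to exchange costs C for revenues V at time T."""
    sigma = sqrt(sigma_V**2 + sigma_C**2 - 2 * rho * sigma_V * sigma_C)
    d1 = (log(V / C) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return V * N(d1) - C * N(d2)

# Placeholder inputs: PV(revenues) = 100, PV(costs) = 90,
# 30% / 20% volatilities, correlation 0.3, 2-year horizon.
print(margrabe(100.0, 90.0, 0.30, 0.20, 0.3, 2.0))
```

Note that no discount rate appears: with both legs expressed as present values, the two drifts cancel, which is precisely why the formula suits the "exchange expenses for revenue" reading.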
The most commonly employed methods are binomial lattices. These are more widely used given that most real options are American styled. Additionally, and particularly, lattice-based models allow for flexibility as to exercise, where the relevant, and differing, rules may be encoded at each node. Note that lattices cannot readily handle high-dimensional problems; treating the project's costs as stochastic would add (at least) one dimension to the lattice, effectively squaring the number of ending nodes (in general, the exponent corresponds to the number of sources of uncertainty).
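A minimal sketch of such a lattice, here with a single decision rule (the option to abandon for a fixed salvage value) coded at every node; the project value is treated as if it were a traded asset, anticipating the Marketed Asset Disclaimer discussed under "Technical considerations" below, and all inputs are illustrative:

```python
from math import exp, sqrt

def lattice_abandonment(V0, salvage, r, sigma, T, steps):
    """CRR binomial lattice for a project that may be abandoned for `salvage` at any node."""
    dt = T / steps
    u = exp(sigma * sqrt(dt))
    d = 1.0 / u
    q = (exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = exp(-r * dt)
    # Values at maturity: keep the project or take the salvage value.
    values = [max(V0 * u**j * d**(steps - j), salvage) for j in range(steps + 1)]
    # Roll back, re-applying the decision rule at every node.
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (q * values[j + 1] + (1 - q) * values[j])
            values[j] = max(cont, salvage)
    return values[0]

# Illustrative inputs: project value 100, salvage 80, 5% rate, 35% volatility, 3 years.
print(lattice_abandonment(100.0, 80.0, 0.05, 0.35, 3.0, 60))
```

Other exercise rules – expansion at a cost, contraction, switching – slot into the same `max(...)` line, which is the flexibility referred to above.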
Specialised Monte Carlo methods have also been developed and are increasingly, and especially, applied to high-dimensional problems. Note that for American styled real options, this application is somewhat more complex, although recent research combines a least-squares approach with simulation, allowing for the valuation of real options that are both multidimensional and American styled.
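In the spirit of the least-squares-with-simulation research mentioned above (the Longstaff–Schwartz approach), the following sketch values an American-style abandonment right on a simulated project value; the dynamics, regression basis, and all numerical inputs are simplifying assumptions:

```python
import numpy as np

def lsm_abandonment(V0, salvage, r, sigma, T, steps=50, paths=100_000, seed=0):
    """Least-squares Monte Carlo for an American-style option to abandon for `salvage`."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    # Geometric Brownian motion paths for the project value (risk-neutral drift).
    z = rng.standard_normal((paths, steps))
    V = V0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(salvage - V[:, -1], 0.0)   # exercise value at maturity
    for t in range(steps - 2, -1, -1):
        cash *= disc
        itm = salvage - V[:, t] > 0              # regress only where exercise is worthwhile
        if itm.sum() > 3:
            coef = np.polyfit(V[itm, t], cash[itm], 2)  # quadratic continuation estimate
            continuation = np.polyval(coef, V[itm, t])
            exercise = salvage - V[itm, t]
            ex_now = exercise > continuation
            cash[np.where(itm)[0][ex_now]] = exercise[ex_now]
    return disc * float(cash.mean())

print(lsm_abandonment(100.0, 80.0, 0.05, 0.35, 3.0))
```

The regression step is what restores an (approximate) early-exercise decision to plain simulation, which is otherwise limited to European-style payoffs.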
When the Real Option can be modelled using a partial differential equation, then Finite difference methods for option pricing are sometimes applied. Although many of the early ROV articles discussed this method, its use is relatively uncommon today—particularly amongst practitioners—due to the required mathematical sophistication; these too cannot readily be used for high-dimensional problems.
Various other methods, aimed mainly at practitioners, have been developed for real option valuation. These typically use cash-flow scenarios for the projection of the future pay-off distribution, and are not based on restricting assumptions similar to those that underlie the closed form (or even numeric) solutions discussed. Recent additions include
the Datar–Mathews method (which can be understood as an extension of the net present value multi-scenario Monte Carlo model with an adjustment for risk aversion and economic decision-making),
the fuzzy pay-off method,
and the simulation with optimized exercise thresholds method.
By contrast, methods focusing on, for example, real option valuation in engineering design may be more sophisticated. These include analytics based on decision rules, which merge physical design considerations and management decisions through an intuitive "if-then-else" statement, e.g., if demand is higher than a certain production capacity level, then expand existing capacity, else do nothing; this approach can be combined with advanced mathematical optimization methods like stochastic programming and robust optimisation to find the optimal design and decision rule variables. A more recent approach reformulates the real option problem as a data-driven Markov decision process, and uses advanced machine learning like deep reinforcement learning to evaluate a wide range of possible real option and design implementation strategies, well suited for complex systems and investment projects.
These help quantify the value of flexibility engineered early on in system designs and/or irreversible investment projects. The methods help rank order flexible design solutions relative to one another, and thus enable the best real option strategies to be exercised cost effectively during operations. These methods have been applied in many use cases in aerospace, defense, energy, transport, space, and water infrastructure design and planning.
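Returning to the decision-rule analytics described above, the essence is a rule whose parameters become design variables. A minimal sketch (the threshold, costs, and demand path are all invented for illustration; a stochastic-programming or reinforcement-learning layer would tune such parameters over many demand scenarios):

```python
# Decision-rule sketch: expand capacity whenever observed demand exceeds a threshold.
def operate(demand_path, capacity=100.0, threshold=0.9, expand_step=50.0,
            price=1.0, expansion_cost=30.0):
    profit = 0.0
    for demand in demand_path:
        if demand > threshold * capacity:   # the "if-then-else" rule
            capacity += expand_step         # then: expand existing capacity
            profit -= expansion_cost
        # else: do nothing; serve what the current capacity allows
        profit += price * min(demand, capacity)
    return profit

print(operate([80, 95, 120, 150, 160]))
```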
Limitations
The relevance of Real options, even as a thought framework, may be limited due to market, organizational and / or technical considerations. When the framework is employed, therefore, the analyst must first ensure that ROV is relevant to the project in question. These considerations are as follows.
Market characteristics
As discussed above, the market and environment underlying the project must be one where "change is most evident", and the "source, trends and evolution" in product demand and supply create the "flexibility, contingency, and volatility" which result in optionality. Without this, the NPV framework would be more relevant.
Organizational considerations
Real options are "particularly important for businesses with a few key characteristics", and may be less relevant otherwise. In overview, it is important to consider the following in determining that the RO framework is applicable:
Corporate strategy has to be adaptive to contingent events. Some corporations face organizational rigidities and are unable to react to market changes; in this case, the NPV approach is appropriate.
Practically, the business must be positioned such that it has appropriate information flow, and opportunities to act. This will often be a market leader and / or a firm enjoying economies of scale and scope.
Management must understand options, be able to identify and create them, and appropriately exercise them. This contrasts with business leaders focused on maintaining the status quo and / or near-term accounting earnings.
The financial position of the business must be such that it has the ability to fund the project as, and when, required (i.e. issue shares, absorb further debt and / or use internally generated cash flow); see Financial statement analysis. Management must, correspondingly, have appropriate access to this capital.
Management must be in the position to exercise, in so far as some real options are proprietary (owned or exercisable by a single individual or a company) while others are shared (can (only) be exercised by many parties).
Technical considerations
Limitations as to the use of these models arise due to the contrast between Real Options and financial options, for which these were originally developed.
The main difference is that the underlying is often not tradable – e.g. the factory owner cannot easily sell the factory upon which he has the option. Additionally, the real option itself may also not be tradeable – e.g. the factory owner cannot sell the right to extend his factory to another party; only he can make this decision (some real options, however, can be sold, e.g., ownership of a vacant lot of land is a real option to develop that land in the future). Even where a market exists – for the underlying or for the option – in most cases there is limited (or no) market liquidity. Finally, even if the firm can actively adapt to market changes, it remains to determine the right paradigm to discount future claims.
The difficulties are then:
As above, data issues arise as far as estimating key model inputs. Here, since the value or price of the underlying cannot be (directly) observed, there will always be some (much) uncertainty as to its value (i.e. spot price) and volatility (further complicated by uncertainty as to management's actions in the future).
It is often difficult to capture the rules relating to exercise, and consequent actions by management. Further, a project may have a portfolio of embedded real options, some of which may be mutually exclusive.
Theoretical difficulties, which are more serious, may also arise.
Option pricing models are built on rational pricing logic. Here, essentially: (a) it is presupposed that one can create a "hedged portfolio" comprising one option and "delta" shares of the underlying. (b) Arbitrage arguments then allow for the option's price to be estimated today. (c) When hedging of this sort is possible, since delta hedging and risk neutral pricing are mathematically identical, then risk neutral valuation may be applied, as is the case with most option pricing models. (d) Under ROV however, the option and (usually) its underlying are clearly not traded, and forming a hedging portfolio would be difficult, if not impossible.
Standard option models: (a) Assume that the risk characteristics of the underlying do not change over the life of the option, usually expressed via a constant volatility assumption. (b) Hence a standard, risk-free rate may be applied as the discount rate at each decision point, allowing for risk neutral valuation. Under ROV, however: (a) management's actions actually change the risk characteristics of the project in question, and hence (b) the required rate of return could differ depending on what state was realised, and a premium over risk-free would be required, invalidating (technically) the risk neutrality assumption.
These issues are addressed via several interrelated assumptions:
As discussed above, the data issues are usually addressed using a simulation of the project, or a listed proxy. Various new methods – see for example those described above – also address these issues.
Also as above, specific exercise rules can often be accommodated by coding these in a bespoke binomial tree.
The theoretical issues:
To use standard option pricing models here, despite the difficulties relating to rational pricing, practitioners adopt the "fiction" that the real option and the underlying project are both traded: the so-called Marketed Asset Disclaimer (MAD) approach. Although this is a strong assumption, it is pointed out that a similar fiction in fact underpins standard NPV / DCF valuation (and using simulation as above).
To address the fact that changing characteristics invalidate the use of a constant discount rate, some analysts use the "replicating portfolio approach", as opposed to risk neutral valuation, and modify their models correspondingly. Under this approach, (a) we "replicate" the cash flows on the option by holding a risk-free bond and the underlying in the correct proportions. Then, (b) since the cash flows of the option and the portfolio will always be identical, by arbitrage arguments their values may (must) be equated today, and (c) no discounting is required. (An alternative approach instead modifies Black–Scholes itself.)
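A one-period numerical sketch of this replication argument (all figures invented for illustration):

```python
# A project worth V today moves to V_u or V_d in one period; an option on it
# pays f_u or f_d. Hold `delta` units of the project plus B in risk-free bonds
# so the portfolio matches the option's payoff in both states - then today's
# option value is the portfolio's cost, with no discounting of expected payoffs.
V, V_u, V_d = 100.0, 130.0, 80.0   # illustrative project values
f_u, f_d = 30.0, 0.0               # option payoffs in each state
rf = 0.05                          # one-period risk-free rate

delta = (f_u - f_d) / (V_u - V_d)      # 0.6 units of the project
B = (f_u - delta * V_u) / (1 + rf)     # ~ -45.71, i.e. borrowing
option_value = delta * V + B           # ~ 14.29

# Check: the portfolio replicates the option in both states.
assert abs(delta * V_u + B * (1 + rf) - f_u) < 1e-9
assert abs(delta * V_d + B * (1 + rf) - f_d) < 1e-9
print(option_value)
```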
History
Whereas business managers have been making capital investment decisions for centuries, the term "real option" is relatively new, and was coined by Professor Stewart Myers of the MIT Sloan School of Management in 1977. In 1930, Irving Fisher wrote explicitly of the "options" available to a business owner (The Theory of Interest, II.VIII). The description of such opportunities as "real options", however, followed on the development of analytical techniques for financial options, such as Black–Scholes in 1973. As such, the term "real option" is closely tied to these option methods.
Real options are today an active field of academic research. Professor Lenos Trigeorgis has been a leading name for many years, publishing several influential books and academic articles. Other pioneering academics in the field include Professors Michael Brennan, Eduardo Schwartz, Avinash Dixit and Robert Pindyck (the latter two, authoring the pioneering text in the discipline). An academic conference on real options is organized yearly (Annual International Conference on Real Options).
Amongst others, the concept was "popularized" by Michael J. Mauboussin, then chief U.S. investment strategist for Credit Suisse First Boston. He uses real options to explain the gap between how the stock market prices some businesses and the "intrinsic value" for those businesses. Trigeorgis also has broadened exposure to real options through layman articles in publications such as The Wall Street Journal. This popularization is such that ROV is now a standard offering in postgraduate finance degrees, and often, even in MBA curricula at many Business Schools.
Recently, real options have been employed in business strategy, both for valuation purposes and as a conceptual framework. The idea of treating strategic investments as options was popularized by Timothy Luehrman in two HBR articles: "In financial terms, a business strategy is much more like a series of options, than a series of static cash flows". Investment opportunities are plotted in an "option space" with dimensions "volatility" & value-to-cost ("NPVq").
Luehrman also co-authored with William Teichner a Harvard Business School case study, Arundel Partners: The Sequel Project, in 1992, which may have been the first business school case study to teach ROV. Reflecting the "mainstreaming" of ROV, Professor Robert C. Merton discussed the essential points of Arundel in his Nobel Prize Lecture in 1997. Arundel involves a group of investors that is considering acquiring the sequel rights to a portfolio of yet-to-be released feature films. In particular, the investors must determine the value of the sequel rights before any of the first films are produced. Here, the investors face two main choices. They can produce an original movie and sequel at the same time or they can wait to decide on a sequel after the original film is released. The second approach, he states, provides the option not to make a sequel in the event the original movie is not successful. This real option has economic worth and can be valued monetarily using an option-pricing model. See Option (filmmaking).
See also
Option contract
Opportunity cost
Monte Carlo methods in finance
Contingent claim valuation
Fuzzy pay-off method for real option valuation
Datar–Mathews method for real option valuation
Contingent value rights
Present value of growth opportunities
Volume risk
References
Further reading
Standard texts:
Applications:
Grenadier, Steven R. & Weiss, Allen M., 1997. "Investment in technological innovations: An option pricing approach," Journal of Financial Economics, Elsevier, vol. 44(3), pages 397–416, June.
The Impact of Real Options in Agency Problems G. Siller-Pagaza, G. Otalora, E. Cobas-Flores (2006).
External links
Theory
Intro to Real Option Valuation as a Modelling Problem, Mikael Collan
The Promise and Peril of Real Options, Prof. Aswath Damodaran, Stern School of Business
Real Options Tutorial, Prof. Marco Dias, PUC-Rio
Valuing Real Options: Frequently Made Errors, Prof. Pablo Fernandez, IESE Business School, University of Navarra
Identifying real options, Prof. Campbell R. Harvey. Duke University, Fuqua School of Business
An introduction to real options (Investment Analysts Society of Southern Africa), Prof E. Gilbert, University of Cape Town
Decision Making Under Uncertainty—Real Options to the Rescue?, Prof. Luke Miller & Chan Park, Auburn University
Real Options Whitepapers and Case-studies, Dr. Jonathan Mun
Real Options – Introduction, Portfolion Group
How Do You Assess The Value of A Company's "Real Options"?, Prof. Alfred Rappaport, Columbia University, and Michael Mauboussin
Some Important Issues Involving Real Options: An Overview, Gordon Sick and Andrea Gamba (2005).
Real Power of Real Options, Leslie and Michaels (1997), Keith Leslie and Max Michaels McKinsey Quarterly, 1997 (3) pages 4–22. Cited by Robert Merton in his Nobel Prize Acceptance Speech in 1997. McKinsey classic - Reprinted in McKinsey Anthology 2000 - On Strategy. Cited in McKinsey Anthology 2011 - Have You Tested Your Strategy Lately.
Journals
Journal of Real Options and Strategy
Calculation resources
ROV Spreadsheet Models, Prof. Aswath Damodaran, Stern School of Business
Real Options Calculator, Prof. Steven T. Hackman, Georgia Institute of Technology
Corporate finance
Options (finance)
Capital budgeting | Real options valuation | [
"Engineering"
] | 7,064 | [
"Real options",
"Engineering economics"
] |
216,650 | https://en.wikipedia.org/wiki/Ferrimagnetism | A ferrimagnetic material is a material that has populations of atoms with opposing magnetic moments, as in antiferromagnetism, but these moments are unequal in magnitude, so a spontaneous magnetization remains. This can for example occur when the populations consist of different atoms or ions (such as Fe2+ and Fe3+).
Like ferromagnetic substances, ferrimagnetic substances are attracted by magnets and can be magnetized to make permanent magnets. The oldest known magnetic substance, magnetite (Fe3O4), is ferrimagnetic, but was classified as a ferromagnet before Louis Néel discovered ferrimagnetism in 1948. Since the discovery, numerous uses have been found for ferrimagnetic materials, such as hard-drive platters and biomedical applications.
History
Until the twentieth century, all naturally occurring magnetic substances were called ferromagnets. In 1936, Louis Néel published a paper proposing the existence of a new form of cooperative magnetism he called antiferromagnetism. While working with Mn2Sb, French physicist Charles Guillaud discovered that the current theories on magnetism were not adequate to explain the behavior of the material, and made a model to explain the behavior. In 1948, Néel published a paper about a third type of cooperative magnetism, based on the assumptions in Guillaud's model. He called it ferrimagnetism. In 1970, Néel was awarded the Nobel Prize in Physics for his work on magnetism.
Physical origin
Ferrimagnetism has the same physical origins as ferromagnetism and antiferromagnetism. In ferrimagnetic materials the magnetization is also caused by a combination of dipole–dipole interactions and exchange interactions resulting from the Pauli exclusion principle. The main difference is that in ferrimagnetic materials there are different types of atoms in the material's unit cell: typically, the atoms with a smaller magnetic moment point in the opposite direction of the larger moments. This arrangement is similar to that present in antiferromagnetic materials, but in ferrimagnetic materials the net moment is nonzero because the opposed moments differ in magnitude.
Ferrimagnets have a critical temperature above which they become paramagnetic just as ferromagnets do. At this temperature (called the Curie temperature) there is a second-order phase transition, and the system can no longer maintain a spontaneous magnetization. This is because at higher temperatures the thermal motion is strong enough that it exceeds the tendency of the dipoles to align.
Derivation
There are various ways to describe ferrimagnets, the simplest of which is with mean-field theory. In mean-field theory the field acting on the atoms can be written as

H = H0 + Hm,

where H0 is the applied magnetic field, and Hm is the field caused by the interactions between the atoms. The following assumption then is

Hm = γ M.

Here M is the average magnetization of the lattice, and γ is the molecular field coefficient. When we allow M and γ to be position- and orientation-dependent, we can then write it in the form

Hi = H0 + Σk γik Mk,

where Hi is the field acting on the i-th substructure, and γik is the molecular field coefficient between the i-th and k-th substructures. For a diatomic lattice we can designate two types of sites, a and b. We can designate N the number of magnetic ions per unit volume, λ the fraction of the magnetic ions on the a sites, and μ = 1 − λ the fraction on the b sites. This then gives

Ma = λ N ma,  Mb = μ N mb,  M = Ma + Mb,

with ma and mb the average moments per ion on each sublattice. It can be shown that γab = γba and that γaa ≠ γbb unless the structures are identical. γab > 0 favors a parallel alignment of Ma and Mb, while γab < 0 favors an anti-parallel alignment. For ferrimagnets, γab < 0, so it will be convenient to take γ = |γab| as a positive quantity and write the minus sign explicitly in front of it. For the total fields on a and b this then gives

Ha = H0 + γaa Ma − γ Mb,
Hb = H0 − γ Ma + γbb Mb.

Furthermore, we will introduce the parameters α = γaa/γ and β = γbb/γ, which give the ratio between the strengths of the interactions. At last we will introduce the reduced magnetizations

σa = Ma / (λ N g μB Sa),  σb = Mb / (μ N g μB Sb),

with Si the spin of the i-th element. This then gives for the fields:

Ha = H0 + γ N g μB (α λ Sa σa − μ Sb σb),
Hb = H0 + γ N g μB (−λ Sa σa + β μ Sb σb).

The solutions to these equations (omitted here) are then given by

σa = B_Sa(g μB Sa Ha / (kB T)),
σb = B_Sb(g μB Sb Hb / (kB T)),

where B_S is the Brillouin function. The simplest case to solve now is Sa = Sb = 1/2. Since B_1/2(x) = tanh(x), this then gives the following pair of equations:

σa = tanh(xa),  σb = tanh(xb),

with xa = g μB Ha / (2 kB T) and xb = g μB Hb / (2 kB T). These equations do not have a known analytical solution, so they must be solved numerically to find the temperature dependence of M = Ma + Mb.
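Since the final pair of tanh equations must be solved numerically, a damped fixed-point iteration is the simplest route. In the sketch below the prefactor γNg²μB²/(4kB) is collapsed into a single effective temperature C, the applied field is set to zero, and all parameter values are illustrative rather than fitted to any material:

```python
import math

# Mean-field equations for a two-sublattice, S = 1/2 ferrimagnet at zero field:
#   sigma_a = tanh((C/T) * ( alpha*lam*sigma_a - mu*sigma_b))
#   sigma_b = tanh((C/T) * (-lam*sigma_a + beta*mu*sigma_b))
# C = gamma*N*g^2*mu_B^2 / (4*k_B) has units of temperature. Illustrative values:
alpha, beta = 0.5, 0.3   # intra-sublattice couplings relative to gamma
lam, mu = 0.4, 0.6       # site fractions (lam + mu = 1)
C = 500.0                # effective coupling temperature, kelvin

def solve(T, iters=5000, mix=0.5):
    sa, sb = 1.0, -1.0   # start from saturated, anti-aligned sublattices
    for _ in range(iters):
        sa_new = math.tanh((C / T) * (alpha * lam * sa - mu * sb))
        sb_new = math.tanh((C / T) * (-lam * sa + beta * mu * sb))
        sa = (1 - mix) * sa + mix * sa_new   # damped update for stability
        sb = (1 - mix) * sb + mix * sb_new
    return sa, sb

for T in (50, 150, 300, 450):
    sa, sb = solve(T)
    net = lam * sa + mu * sb   # net reduced magnetization (equal g and S assumed)
    print(f"T = {T:3d} K: sigma_a = {sa:+.3f}, sigma_b = {sb:+.3f}, net = {net:+.3f}")
```

The opposed, unequal sublattice solutions at low T give the nonzero net moment characteristic of a ferrimagnet; above the critical temperature both σ collapse to zero.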
Effects of temperature
Unlike ferromagnetism, the magnetization curves of ferrimagnetism can take many different shapes depending on the strength of the interactions and the relative abundance of atoms. The most notable instances of this property are that the direction of magnetization can reverse while heating a ferrimagnetic material from absolute zero to its critical temperature, and that the strength of magnetization can increase while heating a ferrimagnetic material to the critical temperature, both of which cannot occur for ferromagnetic materials. These temperature dependencies have also been experimentally observed in NiFe2/5Cr8/5O4 and Li1/2Fe5/4Cr5/4O4.
A temperature lower than the Curie temperature, but at which the opposing magnetic moments are equal (resulting in a net magnetic moment of zero) is called a magnetization compensation point. This compensation point is observed easily in garnets and rare-earth–transition-metal alloys (RE-TM). Furthermore, ferrimagnets may also have an angular momentum compensation point, at which the net angular momentum vanishes. This compensation point is crucial for achieving fast magnetization reversal in magnetic-memory devices.
Effect of external fields
When ferrimagnets are exposed to an external magnetic field, they display what is called magnetic hysteresis, where magnetic behavior depends on the history of the magnet. They also exhibit a saturation magnetization Ms; this magnetization is reached when the external field is strong enough to make all the moments align in the same direction. When this point is reached, the magnetization cannot increase, as there are no more moments to align. When the external field is removed, the magnetization of the ferrimagnet does not disappear, but a nonzero remanent magnetization remains. This effect is often used in applications of magnets. If an external field in the opposite direction is applied subsequently, the magnet will demagnetize further until it eventually reaches a magnetization of −Ms. This behavior results in what is called a hysteresis loop.
Properties and uses
Ferrimagnetic materials have high resistivity and have anisotropic properties. The anisotropy is actually induced by an external applied field. When this applied field aligns with the magnetic dipoles, it causes a net magnetic dipole moment and causes the magnetic dipoles to precess at a frequency controlled by the applied field, called Larmor or precession frequency. As a particular example, a microwave signal circularly polarized in the same direction as this precession strongly interacts with the magnetic dipole moments; when it is polarized in the opposite direction, the interaction is very low. When the interaction is strong, the microwave signal can pass through the material. This directional property is used in the construction of microwave devices like isolators, circulators, and gyrators. Ferrimagnetic materials are also used to produce optical isolators and circulators. Ferrimagnetic minerals in various rock types are used to study ancient geomagnetic properties of Earth and other planets. That field of study is known as paleomagnetism. In addition, it has been shown that ferrimagnets such as magnetite can be used for thermal energy storage.
Examples
The oldest known magnetic material, magnetite, is a ferrimagnetic substance. The tetrahedral and octahedral sites of its crystal structure exhibit opposite spin. Other known ferrimagnetic materials include yttrium iron garnet (YIG); cubic ferrites composed of iron oxides with other elements such as aluminum, cobalt, nickel, manganese, and zinc; and hexagonal or spinel type ferrites, including rhenium ferrite, ReFe2O4, PbFe12O19 and BaFe12O19 and pyrrhotite, Fe1−xS.
Ferrimagnetism can also occur in single-molecule magnets. A classic example is a dodecanuclear manganese molecule with an effective spin S = 10 derived from antiferromagnetic interaction on Mn(IV) metal centers with Mn(III) and Mn(II) metal centers.
See also
References
External links
Magnetic ordering
Quantum phases | Ferrimagnetism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,812 | [
"Quantum phases",
"Phases of matter",
"Quantum mechanics",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Condensed matter physics",
"Matter"
] |
216,900 | https://en.wikipedia.org/wiki/Green%20jay | The green jay (Cyanocorax luxuosus) is a species of the New World jays, found in Central America, Mexico, and South Texas. Adults are about long and variable in color across their range; they usually have blue and black heads, green wings and mantle, bluish-green tails, black bills, yellow or brown eye rings, and dark legs. The basic diet consists of arthropods, vertebrates, seeds, and fruit. The nest is usually built in a thorny bush; the female incubates the clutch of three to five eggs. This is a common species of jay with a wide range and the International Union for Conservation of Nature has rated its conservation status as being of "least concern".
Taxonomy
Seven subspecies are accepted; listed from north to south:
Cyanocorax luxuosus glaucescens – Southern Texas, northeast Mexico
Cyanocorax luxuosus luxuosus – East-central Mexico
Cyanocorax luxuosus speciosus – Western Mexico
Cyanocorax luxuosus vividus – Southwestern Mexico
Cyanocorax luxuosus maya – Yucatan Peninsula
Cyanocorax luxuosus confusus – Southeastern Mexico to western Guatemala
Cyanocorax luxuosus centralis – Honduras
It differs from the related Inca jay of the Andes most obviously in lacking the large nasal bristles that form a distinct tuft at the base of the bill in that species, and also tends to show more blue on the rear crown. Despite its separation from the Inca jay by a 1,600 km range gap, some ornithologists treat the green jay and Inca jay as conspecific, with the green jay as C. yncas luxuosus and the Inca jay as C. yncas yncas.
Description
Green jays are in length. Weight ranges from . They have feathers of yellowish-white with blue tips on the top of the head, cheeks and nape. The breast and underparts range from bright yellow in the south (e.g. C. l. maya in the Yucatan) to pale green in the north (e.g. C. l. glaucescens in Texas). The upper parts are rich green. The color of the iris depends on the subspecies, ranging from dark brownish in the north to bright yellow in the south.
Behavior
Green jays feed on a wide range of insects and other invertebrates and various cereal grains. They take ebony (Ebenopsis spp.) seeds where these occur, and also any oak species' acorns, which they will cache. Meat and human scraps add to the diet when opportunity arises. Green jays have been observed using sticks as tools to extract insects from tree bark.
Breeding
Green jays usually build a nest in a tree or in a thorny bush or thicket, and the female lays three to five eggs. Only the female incubates, but both parents take care of the young.
Voice
As with most of the typical jays, this species has a very extensive voice repertoire. The bird's most common call makes a sound, but many other unusual notes also occur. One of the most distinctive calls sounds like an alarm bell.
Distribution and habitat
The green jay occurs from southern Texas to Honduras. The similar Inca jay has a disjunct home range in the northern Andes of South America.
Status
The green jay is a common species throughout most of its wide range. It is an adaptable species and the population is thought to be increasing as clearing of forests is creating new areas of suitable habitat. No particular threats have been identified, and the International Union for Conservation of Nature has rated its conservation status as being of "least concern".
References
External links
from Belize and Venezuela at
green jay
Birds of the Rio Grande valleys
Birds of Mexico
Birds of Belize
Birds of the Yucatán Peninsula
Birds of the Northern Andes
Tool-using animals
green jay
Taxa named by René Lesson
Taxobox binomials not recognized by IUCN | Green jay | [
"Biology"
] | 806 | [
"Ethology",
"Behavior",
"Tool-using animals"
] |
217,116 | https://en.wikipedia.org/wiki/Dynamic%20equilibrium | In chemistry, a dynamic equilibrium exists once a reversible reaction occurs. Substances initially transition between the reactants and products at different rates until the forward and backward reaction rates eventually equalize, meaning there is no net change. Reactants and products are formed at such a rate that the concentration of neither changes. It is a particular example of a system in a steady state.
In physics, concerning thermodynamics, a closed system is in thermodynamic equilibrium when reactions occur at such rates that the composition of the mixture does not change with time. Reactions do in fact occur, sometimes vigorously, but to such an extent that changes in composition cannot be observed. Equilibrium constants can be expressed in terms of the rate constants for reversible reactions.
Examples
In a new bottle of soda, the concentration of carbon dioxide in the liquid phase has a particular value. If half of the liquid is poured out and the bottle is sealed, carbon dioxide will leave the liquid phase at an ever-decreasing rate, and the partial pressure of carbon dioxide in the gas phase will increase until equilibrium is reached. At that point, due to thermal motion, a molecule of CO2 may leave the liquid phase, but within a very short time another molecule of CO2 will pass from the gas to the liquid, and vice versa. At equilibrium, the rate of transfer of CO2 from the gas to the liquid phase is equal to the rate from liquid to gas. In this case, the equilibrium concentration of CO2 in the liquid is given by Henry's law, which states that the solubility of a gas in a liquid is directly proportional to the partial pressure of that gas above the liquid. This relationship is written as

P = Kc,
where K is a temperature-dependent constant, P is the partial pressure, and c is the concentration of the dissolved gas in the liquid. Thus the partial pressure of CO2 in the gas has increased until Henry's law is obeyed. The concentration of carbon dioxide in the liquid has decreased and the drink has lost some of its fizz.
Henry's law may be derived by setting the chemical potentials of carbon dioxide in the two phases to be equal to each other. Equality of chemical potential defines chemical equilibrium. Other constants for dynamic equilibrium involving phase changes include the partition coefficient and the solubility product. Raoult's law defines the equilibrium vapor pressure of an ideal solution.
Dynamic equilibrium can also exist in a single-phase system. A simple example occurs with acid-base equilibrium such as the dissociation of acetic acid in an aqueous solution.
CH3COOH <=> CH3COO- + H+
At equilibrium the concentration quotient, K, the acid dissociation constant, is constant (subject to some conditions):

K = [CH3COO-][H+] / [CH3COOH]
In this case, the forward reaction involves the liberation of some protons from acetic acid molecules and the backward reaction involves the formation of acetic acid molecules when an acetate ion accepts a proton. Equilibrium is attained when the sum of chemical potentials of the species on the left-hand side of the equilibrium expression is equal to the sum of chemical potentials of the species on the right-hand side. At the same time, the rates of forward and backward reactions are equal to each other. Equilibria involving the formation of chemical complexes are also dynamic equilibria and concentrations are governed by the stability constants of complexes.
Dynamic equilibria can also occur in the gas phase as, for example when nitrogen dioxide dimerizes.
2NO2 <=> N2O4;
In the gas phase, square brackets indicate partial pressure. Alternatively, the partial pressure of a substance may be written as P(substance).
Relationship between equilibrium and rate constants
In a simple reaction such as the isomerization:
A <=> B
there are two reactions to consider, the forward reaction in which the species A is converted into B, and the backward reaction in which B is converted into A. If both reactions are elementary reactions, then the rate of reaction is given by

d[A]/dt = −kf[A] + kb[B],

where kf is the rate constant for the forward reaction, kb is the rate constant for the backward reaction, and the square brackets denote concentration. If only A is present at the beginning, time t = 0, with a concentration [A]0, the sum of the two concentrations, [A]t and [B]t, at time t, will be equal to [A]0.

The solution to this differential equation is

[A]t = (kb + kf e^(−(kf + kb)t)) / (kf + kb) × [A]0.

As time tends towards infinity, the concentrations [A]t and [B]t tend towards constant values. Let t approach infinity, that is, t → ∞, in the expression above:

[A]∞ = kb / (kf + kb) × [A]0,  [B]∞ = kf / (kf + kb) × [A]0.

In practice, concentration changes will not be measurable after t ≫ 1/(kf + kb). Since the concentrations do not change thereafter, they are, by definition, equilibrium concentrations. Now, the equilibrium constant for the reaction is defined as

K = [B]∞ / [A]∞.

It follows that the equilibrium constant is numerically equal to the quotient of the rate constants: K = kf/kb.
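This identity is easy to confirm numerically by integrating the rate equation directly; a minimal sketch with arbitrary illustrative rate constants:

```python
# Forward-Euler integration of d[A]/dt = -kf*[A] + kb*[B], with [B] = [A]0 - [A].
# The long-time ratio [B]/[A] should equal kf/kb. Rate constants are illustrative.
kf, kb = 2.0, 0.5
A0 = 1.0
A = A0
dt = 1e-4
for _ in range(200_000):           # integrate to t = 20 >> 1/(kf + kb) = 0.4
    B = A0 - A
    A += (-kf * A + kb * B) * dt
print((A0 - A) / A, kf / kb)       # both ~ 4.0
```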
In general, there may be more than one forward reaction and more than one backward reaction. Atkins states that, for a general reaction, the overall equilibrium constant is related to the rate constants of the elementary reactions by

K = (k1/k−1) × (k2/k−2) × ⋯
See also
Equilibrium chemistry
Mechanical equilibrium
Chemical equilibrium
Radiative equilibrium
References
External links
Dynamic Equilibrium Example - Wolfram Demonstrations Project
Equilibrium chemistry
Thermodynamics | Dynamic equilibrium | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,076 | [
"Equilibrium chemistry",
"Thermodynamics",
"Dynamical systems"
] |
217,607 | https://en.wikipedia.org/wiki/Alpha%20process | The alpha process, also known as alpha capture or the alpha ladder, is one of two classes of nuclear fusion reactions by which stars convert helium into heavier elements. The other class is a cycle of reactions called the triple-alpha process, which consumes only helium, and produces carbon. The alpha process most commonly occurs in massive stars and during supernovae.
Both processes are preceded by hydrogen fusion, which produces the helium that fuels both the triple-alpha process and the alpha ladder processes. After the triple-alpha process has produced enough carbon, the alpha ladder begins and fusion reactions of increasingly heavy elements take place, in the order listed below. Each step only consumes the product of the previous reaction and helium. The later-stage reactions that are able to begin in any particular star do so while the prior-stage reactions are still under way in its outer layers.

12C + 4He → 16O (E ≈ 7.16 MeV)
16O + 4He → 20Ne (E ≈ 4.73 MeV)
20Ne + 4He → 24Mg (E ≈ 9.32 MeV)
24Mg + 4He → 28Si (E ≈ 9.98 MeV)
28Si + 4He → 32S (E ≈ 6.95 MeV)
32S + 4He → 36Ar (E ≈ 6.64 MeV)
36Ar + 4He → 40Ca (E ≈ 7.04 MeV)
40Ca + 4He → 44Ti (E ≈ 5.13 MeV)
44Ti + 4He → 48Cr (E ≈ 7.70 MeV)
48Cr + 4He → 52Fe (E ≈ 7.94 MeV)
52Fe + 4He → 56Ni (E ≈ 8.00 MeV)

The energy produced by each reaction, E, is mainly in the form of gamma rays (γ), with a small amount taken by the byproduct element as added momentum.
It is a common misconception that the above sequence ends at 56Ni (or 56Fe, which is a decay product of 56Ni) because it is the most tightly bound nuclide – i.e., the nuclide with the highest nuclear binding energy per nucleon – and production of heavier nuclei would consume energy (be endothermic) instead of releasing it (exothermic). 62Ni (nickel-62) is actually the most tightly bound nuclide in terms of binding energy (though 56Fe has a lower energy or mass per nucleon). The reaction 56Ni + 4He → 60Zn is actually exothermic, and indeed adding alphas continues to be exothermic well beyond this point, but nonetheless the sequence does effectively end at iron. The sequence stops before producing elements heavier than nickel because conditions in stellar interiors cause the competition between photodisintegration and the alpha process to favor photodisintegration around iron. This leads to more 56Ni being produced than 60Zn.
All these reactions have a very low rate at the temperatures and densities in stars and therefore do not contribute significant energy to a star's total output. They occur even less easily with elements heavier than neon due to the increasing Coulomb barrier.
Alpha process elements
Alpha process elements (or alpha elements) are so-called since their most abundant isotopes are integer multiples of four – the mass of the helium nucleus (the alpha particle). These isotopes are called alpha nuclides.
The stable alpha elements are: C, O, Ne, Mg, Si, and S.
The elements Ar and Ca are "observationally stable". They are synthesized by alpha capture prior to the silicon-fusing stage that leads to the iron peak.
Si and Ca are purely alpha process elements.
Mg can be separately consumed by proton capture reactions.
The status of oxygen (O) is contested – some authors consider it an alpha element, while others do not. O is surely an alpha element in low-metallicity Population II stars: It is produced in Type II supernovae, and its enhancement is well correlated with an enhancement of other alpha process elements.
Sometimes C and N are considered alpha process elements since, like O, they are synthesized in nuclear alpha-capture reactions, but their status is ambiguous: Each of the three elements is produced (and consumed) by the CNO cycle, which can proceed at temperatures far lower than those where the alpha-ladder processes start producing significant amounts of alpha elements (including C, N, & O). So just the presence of C, N, or O in a star does not clearly indicate that the alpha process is actually underway – hence the reluctance of some astronomers to (unconditionally) call these three "alpha elements".
Production in stars
The alpha process generally occurs in large quantities only if the star is sufficiently massive – more massive than about 10 solar masses. These stars contract as they age, increasing core temperature and density to high enough levels to enable the alpha process. Requirements increase with atomic mass, especially in later stages – sometimes referred to as silicon burning – and thus most commonly occur in supernovae. Type II supernovae mainly synthesize oxygen and the alpha-elements (Ne, Mg, Si, S, Ar, Ca, and Ti) while Type Ia supernovae mainly produce elements of the iron peak (Ti, V, Cr, Mn, Fe, Co, and Ni). Sufficiently massive stars can synthesize elements up to and including the iron peak solely from the hydrogen and helium that initially comprises the star.
Typically, the first stage of the alpha process (or alpha capture) follows from the helium-burning stage of the star once helium becomes depleted; at this point, free 12C nuclei capture helium to produce 16O. This process continues after the core finishes the helium-burning phase, as a shell around the core will continue burning helium and convecting into the core. The second stage (neon burning) starts as helium is freed by the photodisintegration of one 20Ne atom, allowing another to continue up the alpha ladder. Silicon burning is then later initiated through the photodisintegration of 28Si in a similar fashion; after this point, the 56Ni peak discussed previously is reached. The supernova shock wave produced by stellar collapse provides ideal conditions for these processes to briefly occur.
During this terminal heating involving photodisintegration and rearrangement, nuclear particles are converted to their most stable forms during the supernova and subsequent ejection through, in part, alpha processes. Starting at 44Ti and above, all the product elements are radioactive and will therefore decay into a more stable isotope; for instance, 44Ti is formed and decays into 44Ca.
Special notation for relative abundance
The abundance of total alpha elements in stars is usually expressed in terms of logarithms, with astronomers customarily using a square bracket notation:

[α/Fe] = log10(Nα/NFe)star − log10(Nα/NFe)Sun,

where Nα is the number of alpha elements per unit volume, and NFe is the number of iron nuclei per unit volume. It is for the purpose of calculating the number Nα that the question of which elements are to be considered "alpha elements" becomes contentious. Theoretical galactic evolution models predict that early in the universe there were more alpha elements relative to iron.
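The bracket notation amounts to a difference of base-10 logarithms, as a short sketch makes plain (the number densities here are hypothetical, not measurements):

```python
import math

def alpha_over_fe(n_alpha, n_fe, n_alpha_sun, n_fe_sun):
    """[alpha/Fe] bracket abundance from number densities (star vs. Sun)."""
    return math.log10(n_alpha / n_fe) - math.log10(n_alpha_sun / n_fe_sun)

# A star with twice the solar alpha-to-iron ratio sits at [alpha/Fe] ~ +0.3 dex.
print(alpha_over_fe(2.0, 1.0, 1.0, 1.0))  # ~ 0.301
```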
References
Further reading
Nuclear fusion
Nucleosynthesis | Alpha process | [
"Physics",
"Chemistry"
] | 1,262 | [
"Nuclear fission",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion"
] |
217,717 | https://en.wikipedia.org/wiki/Carbon-burning%20process | The carbon-burning process or carbon fusion is a set of nuclear fusion reactions that take place in the cores of massive stars (at least 4 at birth) that combines carbon into other elements. It requires high temperatures (> 5×108 K or 50 keV) and densities (> 3×109 kg/m3).
These figures for temperature and density are only a guide. More massive stars burn their nuclear fuel more quickly, since they have to offset greater gravitational forces to stay in (approximate) hydrostatic equilibrium. That generally means higher temperatures, although lower densities, than for less massive stars. To get the right figures for a particular mass, and a particular stage of evolution, it is necessary to use a numerical stellar model computed with computer algorithms. Such models are continually being refined based on nuclear physics experiments (which measure nuclear reaction rates) and astronomical observations (which include direct observation of mass loss, detection of nuclear products from spectrum observations after convection zones develop from the surface to fusion-burning regions – known as dredge-up events – and so bring nuclear products to the surface, and many other observations relevant to models).
Fusion reactions
The principal reactions are:
12C + 12C → 20Ne + 4He + 4.617 MeV
12C + 12C → 23Na + 1H + 2.241 MeV
12C + 12C → 23Mg + 1n − 2.599 MeV

Alternatively:

12C + 12C → 24Mg + γ + 13.933 MeV
12C + 12C → 16O + 2 4He − 0.113 MeV
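The listed energies can be reproduced from tabulated atomic mass excesses, since the Q-value of a reaction is the mass-excess sum of the reactants minus that of the products. A sketch with values rounded from standard mass tables (treat the decimals as approximate):

```python
# Atomic mass excesses in MeV, rounded from standard nuclear data tables.
DELTA = {
    "12C": 0.0,        # carbon-12 defines the atomic mass scale
    "4He": 2.425,
    "20Ne": -7.042,
    "23Na": -9.530,
    "1H": 7.289,
    "1n": 8.071,
    "23Mg": -5.473,
    "24Mg": -13.934,
    "16O": -4.737,
}

def q_value(reactants, products):
    return sum(DELTA[x] for x in reactants) - sum(DELTA[x] for x in products)

print(q_value(["12C", "12C"], ["20Ne", "4He"]))         # ~ +4.62 MeV
print(q_value(["12C", "12C"], ["23Na", "1H"]))          # ~ +2.24 MeV
print(q_value(["12C", "12C"], ["23Mg", "1n"]))          # ~ -2.60 MeV
print(q_value(["12C", "12C"], ["24Mg"]))                # ~ +13.93 MeV (photon is massless)
print(q_value(["12C", "12C"], ["16O", "4He", "4He"]))   # ~ -0.11 MeV
```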
Reaction products
This sequence of reactions can be understood by thinking of the two interacting carbon nuclei as coming together to form an excited state of the 24Mg nucleus, which then decays in one of the five ways listed above. The first two reactions are strongly exothermic, as indicated by the large positive energies released, and are the most frequent results of the interaction. The third reaction is strongly endothermic, as indicated by the large negative energy indicating that energy is absorbed rather than emitted. This makes it much less likely, yet still possible in the high-energy environment of carbon burning. But the production of a few neutrons by this reaction is important, since these neutrons can combine with heavy nuclei, present in tiny amounts in most stars, to form even heavier isotopes in the s-process.
The fourth reaction might be expected to be the most common from its large energy release, but in fact it is extremely improbable because it proceeds via electromagnetic interaction, as it produces a gamma ray photon, rather than utilising the strong force between nucleons as do the first two reactions. Nucleons look a lot bigger to each other than they do to photons of this energy. However, the 24Mg produced in this reaction is the only magnesium left in the core when the carbon-burning process ends, as 23Mg is radioactive.
The last reaction is also very unlikely since it involves three reaction products, as well as being endothermic — think of the reaction proceeding in reverse, it would require the three products all to converge at the same time, which is less likely than two-body interactions.
The protons produced by the second reaction can take part in the proton–proton chain reaction, or the CNO cycle, but they can also be captured by 23Na to form 20Ne plus a 4He nucleus. In fact, a significant fraction of the 23Na produced by the second reaction gets used up this way. In stars between 4 and 11 solar masses, the 16O already produced by helium fusion in the previous stage of stellar evolution manages to survive the carbon-burning process pretty well, despite some of it being used up by capturing 4He nuclei. So the result of carbon burning is a mixture mainly of oxygen, neon, sodium and magnesium.
The fact that the mass-energy sum of the two carbon nuclei is similar to that of an excited state of the magnesium nucleus is known as 'resonance'. Without this resonance, carbon burning would only occur at temperatures one hundred times higher.
The experimental and theoretical investigation of such resonances is still a subject of research. A similar resonance increases the probability of the triple-alpha process, which is responsible for the original production of carbon.
Neutrino losses
Neutrino losses start to become a major factor in the fusion processes in stars at the temperatures and densities of carbon burning. Though the main reactions don't involve neutrinos, the side reactions such as the proton–proton chain reaction do. But the main source of neutrinos at these high temperatures involves a process in quantum theory known as pair production. A high energy gamma ray which has a greater energy than the rest mass of two electrons (mass-energy equivalence) can interact with electromagnetic fields of the atomic nuclei in the star, and become a particle and anti-particle pair of an electron and positron.
Normally, the positron quickly annihilates with another electron, producing two photons, and this process can be safely ignored at lower temperatures. But around 1 in 10¹⁹ pair productions end with a weak interaction of the electron and positron, which replaces them with a neutrino and anti-neutrino pair. Since they move at virtually the speed of light and interact very weakly with matter, these neutrino particles usually escape the star without interacting, carrying away their mass-energy. This energy loss is comparable to the energy output from the carbon fusion.
Neutrino losses, by this and similar processes, play an increasingly important part in the evolution of the most massive stars. They force the star to burn its fuel at a higher temperature to offset them. Fusion processes are very sensitive to temperature so the star can produce more energy to retain hydrostatic equilibrium, at the cost of burning through successive nuclear fuels ever more rapidly. Fusion produces less energy per unit mass as the fuel nuclei get heavier, and the core of the star contracts and heats up when switching from one fuel to the next, so both these processes also significantly reduce the lifetime of each successive fusion-burning fuel.
Up to the helium burning stage the neutrino losses are negligible. But from the carbon burning stage onwards, the reduction in stellar lifetime due to energy lost in the form of neutrinos roughly matches the increased energy production due to fuel change and core contraction. In successive fuel changes in the most massive stars, the reduction in lifetime is dominated by the neutrino losses. For example, a star of 25 solar masses burns hydrogen in the core for 10⁷ years, helium for 10⁶ years and carbon for only 10³ years.
Stellar evolution
During helium fusion, stars build up an inert core rich in carbon and oxygen. The inert core eventually reaches sufficient mass to collapse due to gravitation, whilst the helium burning moves gradually outward. This decrease in the inert core volume raises the temperature to the carbon ignition temperature. This will raise the temperature around the core and allow helium to burn in a shell around the core. Outside this is another shell burning hydrogen. The resulting carbon burning provides energy from the core to restore the star's mechanical equilibrium. However, the balance is only short-lived; in a star of 25 solar masses, the process will use up most of the carbon in the core in only 600 years. The duration of this process varies significantly depending on the mass of the star.
Stars of below 4 solar masses never reach high enough core temperature to burn carbon, instead ending their lives as carbon-oxygen white dwarfs after shell helium flashes gently expel the outer envelope in a planetary nebula.
In stars with masses between 4 and 12 solar masses, the carbon-oxygen core is under degenerate conditions, and carbon ignition takes place in a carbon flash that lasts just milliseconds and disrupts the stellar core. In the late stages of this nuclear burning they develop a massive stellar wind, which quickly ejects the outer envelope in a planetary nebula, leaving behind an O-Ne-Na-Mg white dwarf core of about 1.1 solar masses. The core never reaches a high enough temperature for further fusion burning of elements heavier than carbon.
Stars of more than 12 solar masses start carbon burning in a non-degenerate core, and after carbon exhaustion proceed with the neon-burning process once contraction of the inert (O, Ne, Na, Mg) core raises the temperature sufficiently.
See also
Alpha process
Carbon detonation
CNO cycle
Neon-burning process
Proton–proton chain reaction
Triple-alpha process
References
Nucleosynthesis | Carbon-burning process | [
"Physics",
"Chemistry"
] | 1,886 | [
"Nuclear fission",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion"
] |
217,720 | https://en.wikipedia.org/wiki/Oxygen-burning%20process | The oxygen-burning process is a set of nuclear fusion reactions that take place in massive stars that have used up the lighter elements in their cores. Oxygen-burning is preceded by the neon-burning process and succeeded by the silicon-burning process. As the neon-burning process ends, the core of the star contracts and heats until it reaches the ignition temperature for oxygen burning. Oxygen burning reactions are similar to those of carbon burning; however, they must occur at higher temperatures and densities due to the larger Coulomb barrier of oxygen.
Reactions
Oxygen ignites in the temperature range of (1.5–2.6)×10⁹ K and in the density range of (2.6–6.7)×10¹² kg·m⁻³. The principal reactions are given below, where the branching ratios assume that the deuteron channel is open (at high temperatures):

16O + 16O → 28Si + 4He + 9.593 MeV (34%)
16O + 16O → 31P + 1H + 7.676 MeV (56%)
16O + 16O → 31S + 1n + 1.459 MeV (5%)
16O + 16O → 30Si + 2 1H + 0.381 MeV
16O + 16O → 30P + 2H − 2.409 MeV (5%)

Alternatively:

16O + 16O → 32S + γ + 16.539 MeV
16O + 16O → 24Mg + 2 4He − 0.393 MeV
Near 2×10⁹ K, the oxygen-burning reaction rate is approximately 2.8×10⁻¹² × (T9/2)³³, where T9 is the temperature in billions of kelvins. Overall, the major products of the oxygen-burning process are 28Si, 32,33,34S, 35,37Cl, 36,38Ar, 39,41K, and 40,42Ca. Of these, 28Si and 32S constitute 90% of the final composition. The oxygen fuel within the core of the star is exhausted after 0.01–5 years, depending on the star's mass and other parameters. The silicon-burning process, which follows, creates iron, but this iron cannot react further to create energy to support the star.
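The 33rd-power temperature dependence quoted above makes the burning rate extraordinarily sensitive to temperature, which a two-line evaluation shows (normalization as given; the comparison temperatures are arbitrary):

```python
# Relative oxygen-burning rate from the quoted power law, with T9 in units of 10^9 K.
def rate(T9):
    return 2.8e-12 * (T9 / 2.0) ** 33

print(rate(2.2) / rate(2.0))  # a 10% temperature rise speeds burning ~23-fold (1.1**33)
```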
During the oxygen-burning process, proceeding outward, there is an oxygen-burning shell, followed by a neon shell, a carbon shell, a helium shell, and a hydrogen shell. The oxygen-burning process is the last nuclear reaction in the star's core which does not proceed via the alpha process.
Pre-oxygen burning
Although 16O is lighter than neon, neon burning occurs before oxygen burning, because 16O is a doubly-magic nucleus and hence extremely stable. Compared to oxygen, neon is much less stable. As a result, neon burning occurs at lower temperatures than 16O + 16O. During neon burning, oxygen and magnesium accumulate in the core of the star. At the onset of oxygen burning, oxygen in the stellar core is plentiful due to the helium-burning process (4He(2α,γ)12C(α,γ)16O), carbon-burning process (12C(12C,α)20Ne, 12C(α,γ)16O), and neon-burning process (20Ne(γ,α)16O). The reaction 12C(α,γ)16O has a significant effect on the reaction rates during oxygen burning, as it produces large quantities of 16O.
Convectively bounded flames and off-center oxygen ignition
For stars with masses greater than 10.3 solar masses, oxygen ignites in the core or not at all. Similarly, for stars with a mass of less than 9 solar masses (without accretion of additional mass) oxygen ignites in the core or not at all. However, in the 9–10.3 solar mass range, oxygen ignites off-center.
For stars in this mass range, neon burning occurs in a convective envelope rather than at the core of the star. For the particular example of a 9.5-solar-mass star, the neon-burning process takes place in an envelope of approximately 0.252 solar masses (~1560 kilometers) off center. From the ignition flash, the neon convective zone extends further out to 1.1 solar masses, with a peak power around 10³⁶ W. After only a month, the power declines to about 10³⁵ W and stays at this rate for about 10 years. After this phase, the neon in the shell is depleted, resulting in greater inward pressure on the star. This raises the shell's temperature to 1.65 billion kelvins. This results in a neon-burning, convectively bound flame front that moves toward the core. The motion of the flame is what eventually leads to oxygen burning. In approximately 3 years, the flame's temperature reaches about 1.83 billion kelvins, enabling the oxygen-burning process to commence. This occurs around 9.5 years before the iron core develops. Similarly to the beginning of neon burning, off-center oxygen burning commences with another flash. The convectively burning flame then results from both neon and oxygen burning as it advances toward the core, while the oxygen-burning shell continuously shrinks in mass.
Neutrino losses
During the oxygen-burning process, energy loss due to neutrino emission becomes relevant. Due to the large energy loss, oxygen must burn at temperatures higher than a billion kelvins in order to maintain a radiation pressure strong enough to support the star against gravity. Further, electron capture reactions (which produce neutrinos) become significant when the matter density is high enough (ρ > 2×10⁷ g/cm³). Due to these factors, the timescale of oxygen burning is much shorter for heavy, dense stars.
Explosive oxygen burning
The oxygen-burning process can occur under hydrostatic and under explosive conditions. The products of explosive oxygen burning are similar to those in hydrostatic oxygen burning. However, stable oxygen burning is accompanied by a multitude of electron captures, while explosive oxygen burning is accompanied by a significantly greater presence of photodisintegration reactions. In the temperature range of (3–4)×10⁹ K, photodisintegration and oxygen fusion occur with comparable reaction rates.
Pair-instability supernovae
Very massive (140–260 solar masses) population III stars may become unstable during core oxygen burning due to pair production. This results in a thermonuclear explosion, which completely disrupts the star.
References
External links
Fusion of Carbon and Oxygen / The Astrophysics spectator, 2005
Arnett, W. D. Advanced evolution of massive stars. VI – Oxygen burning / Astrophysical Journal, vol. 194, Dec. 1, 1974, pt. 1, p. 373–383.
Nucleosynthesis | Oxygen-burning process | [
"Physics",
"Chemistry"
] | 1,587 | [
"Nuclear fission",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion"
] |
218,091 | https://en.wikipedia.org/wiki/Stream%20function | In fluid dynamics, two types of stream function are defined:
The two-dimensional (or Lagrange) stream function, introduced by Joseph Louis Lagrange in 1781, is defined for incompressible (divergence-free), two-dimensional flows.
The Stokes stream function, named after George Gabriel Stokes, is defined for incompressible, three-dimensional flows with axisymmetry.
The properties of stream functions make them useful for analyzing and graphically illustrating flows.
The remainder of this article describes the two-dimensional stream function.
Two-dimensional stream function
Assumptions
The two-dimensional stream function is based on the following assumptions:
The space domain is three-dimensional.
The flow field can be described as two-dimensional plane flow, with velocity vector u = (u(x, y, t), v(x, y, t), 0).
The velocity satisfies the continuity equation for incompressible flow: ∂u/∂x + ∂v/∂y = 0.
Although in principle the stream function does not require the use of a particular coordinate system, for convenience the description presented here uses a right-handed Cartesian coordinate system with coordinates (x, y, z).
Derivation
The test surface
Consider two points A and P in the (x, y) plane, and a curve AP, also in the plane, that connects them. Then every point on the curve has z coordinate z = 0. Let the total length of the curve be L.
Suppose a ribbon-shaped surface is created by extending the curve AP upward to the horizontal plane z = b, where b is the thickness of the flow. Then the surface has length L, width b, and area bL. Call this the test surface.
Flux through the test surface
The total volumetric flux through the test surface is
Q(x, y, t) = ∫₀^L ∫₀^b (u · n̂) dz ds,
where s is an arc-length parameter defined on the curve AP, with s = 0 at the point A and s = L at the point P.
Here n̂ is the unit vector perpendicular to the test surface, i.e., the unit tangent vector of the curve rotated by R,
where R is the rotation matrix corresponding to a 90° anticlockwise rotation about the positive z axis; in Cartesian components its rows are (0, −1, 0), (1, 0, 0), (0, 0, 1).
The integrand in the expression for Q is independent of z, so the outer integral can be evaluated to yield
Q = b ∫₀^L (u · n̂) ds.
Classical definition
Lamb and Batchelor define the stream function ψ(x, y, t), for a point P = (x, y) and a reference point A in the plane, as
ψ(x, y, t) = ∫ from A to P of (u dy − v dx).
Using the expression derived above for the total volumetric flux, Q, this can be written as
ψ = Q / b.
In words, the stream function is the volumetric flux through the test surface per unit thickness, where thickness is measured perpendicular to the plane of flow.
The point A is a reference point that defines where the stream function is identically zero. Its position is chosen more or less arbitrarily and, once chosen, typically remains fixed.
An infinitesimal shift δP = (δx, δy) in the position of the point P results in the following change of the stream function:
δψ = u δy − v δx.
From the exact differential
dψ = u dy − v dx,
so the flow velocity components in relation to the stream function must be
u = ∂ψ/∂y,  v = −∂ψ/∂x.
Notice that the stream function is linear in the velocity. Consequently if two incompressible flow fields are superimposed, then the stream function of the resultant flow field is the algebraic sum of the stream functions of the two original fields.
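A small numerical illustration of this linearity is sketched below (the component flows, names, and the sign convention u = ∂ψ/∂y, v = −∂ψ/∂x follow the definitions above; the particular flows are standard textbook choices, not from this article):

```python
import numpy as np

# Superposition demo: because u = dψ/dy and v = -dψ/dx are linear in ψ,
# the stream function of a combined flow is the sum of the components'
# stream functions. Here: uniform flow (ψ = U*y) plus an ideal point
# vortex at the origin (ψ = -Γ/(2π) * ln r).
U, Gamma = 1.0, 2.0

def psi_uniform(x, y):
    return U * y

def psi_vortex(x, y):
    return -Gamma / (2 * np.pi) * np.log(np.hypot(x, y))

def velocity(psi, x, y, h=1e-6):
    # Central differences: u = ∂ψ/∂y, v = -∂ψ/∂x
    u = (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    v = -(psi(x + h, y) - psi(x - h, y)) / (2 * h)
    return u, v

x, y = 0.5, 0.8
u1, v1 = velocity(psi_uniform, x, y)
u2, v2 = velocity(psi_vortex, x, y)
u12, v12 = velocity(lambda a, b: psi_uniform(a, b) + psi_vortex(a, b), x, y)
print(np.allclose([u1 + u2, v1 + v2], [u12, v12]))  # True: velocities add
```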
Effect of shift in position of reference point
Consider a shift in the position of the reference point, say from A to A′. Let ψ′ denote the stream function relative to the shifted reference point A′. Then the stream function is shifted by
Δψ = ψ′ − ψ,
which implies the following:
A shift in the position of the reference point effectively adds a constant (for steady flow) or a function solely of time (for nonsteady flow) to the stream function at every point .
The shift in the stream function, Δψ, is equal to the total volumetric flux, per unit thickness, through the surface that extends from point A to point A′. Consequently Δψ = 0 if and only if A and A′ lie on the same streamline.
In terms of vector rotation
The velocity can be expressed in terms of the stream function as
where R is the rotation matrix corresponding to a 90° anticlockwise rotation about the positive z axis. Solving the above equation for ∇ψ produces the equivalent form
From these forms it is immediately evident that the vectors u and ∇ψ are
perpendicular: u · ∇ψ = 0;
of the same length: |u| = |∇ψ|.
Additionally, the compactness of the rotation form facilitates manipulations (e.g., see Condition of existence).
In terms of vector potential and stream surfaces
Using the stream function, one can express the velocity in terms of the vector potential A:
u = ∇ × A,
where A = ψ ẑ, and ẑ is the unit vector pointing in the positive z direction. This can also be written as the vector cross product
u = ∇ψ × ẑ,
where we've used the vector calculus identity
∇ × (ψ ẑ) = ψ(∇ × ẑ) + ∇ψ × ẑ = ∇ψ × ẑ.
Noting that ẑ = ∇z, one can express the velocity field as
u = ∇ψ × ∇z.
This form shows that the level surfaces of ψ and the level surfaces of z (i.e., horizontal planes) form a system of orthogonal stream surfaces.
Alternative (opposite sign) definition
An alternative definition, sometimes used in meteorology and oceanography, is ψ′ = −ψ, for which u = −∂ψ′/∂y and v = ∂ψ′/∂x.
Relation to vorticity
In two-dimensional plane flow, the vorticity vector, defined as ω = ∇ × u, reduces to ω ẑ, where
∇²ψ = −ω
or, with the opposite-sign definition of the stream function,
∇²ψ = +ω.
These are forms of Poisson's equation.
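For readers who want the intermediate step, the one-line derivation under the sign convention u = ∂ψ/∂y, v = −∂ψ/∂x used above is:

```latex
\omega_z = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}
         = -\frac{\partial^2 \psi}{\partial x^2} - \frac{\partial^2 \psi}{\partial y^2}
         = -\nabla^2 \psi .
```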
Relation to streamlines
Consider two-dimensional plane flow with two infinitesimally close points P = (x, y, z) and P′ = (x + dx, y + dy, z) lying in the same horizontal plane. From calculus, the corresponding infinitesimal difference between the values of the stream function at the two points is
dψ = (∂ψ/∂x) dx + (∂ψ/∂y) dy = ∇ψ · dr.
Suppose ψ takes the same value, say C, at the two points P and P′. Then this gives
0 = ∇ψ · dr,
implying that the vector ∇ψ is normal to the surface ψ = C. Because u · ∇ψ = 0 everywhere (e.g., see In terms of vector rotation), each streamline corresponds to the intersection of a particular stream surface and a particular horizontal plane. Consequently, in three dimensions, unambiguous identification of any particular streamline requires that one specify corresponding values of both the stream function and the elevation (z coordinate).
The development here assumes the space domain is three-dimensional. The concept of stream function can also be developed in the context of a two-dimensional space domain. In that case level sets of the stream function are curves rather than surfaces, and streamlines are level curves of the stream function. Consequently, in two dimensions, unambiguous identification of any particular streamline requires that one specify the corresponding value of the stream function only.
Condition of existence
It is straightforward to show that the velocity field u of two-dimensional plane flow satisfies the curl-divergence equation
∇ × (R u) = (∇ · u) ẑ,
where R is the rotation matrix corresponding to a 90° anticlockwise rotation about the positive z axis. This equation holds regardless of whether or not the flow is incompressible.
If the flow is incompressible (i.e., ∇ · u = 0), then the curl-divergence equation gives
∇ × (R u) = 0.
Then by Stokes' theorem the line integral of R u over every closed loop vanishes:
∮ (R u) · dΓ = 0.
Hence, the line integral of R u is path-independent. Finally, by the converse of the gradient theorem, a scalar function ψ(x, y, t) exists such that
R u = ∇ψ.
Here ψ represents the stream function.
Conversely, if the stream function exists, then R u = ∇ψ. Substituting this result into the curl-divergence equation yields ∇ × ∇ψ = (∇ · u) ẑ, and since the curl of a gradient vanishes, ∇ · u = 0 (i.e., the flow is incompressible).
In summary, the stream function for two-dimensional plane flow exists if and only if the flow is incompressible.
Potential flow
For two-dimensional potential flow, streamlines are perpendicular to equipotential lines. Taken together with the velocity potential, the stream function may be used to derive a complex potential. In other words, the stream function accounts for the solenoidal part of a two-dimensional Helmholtz decomposition, while the velocity potential accounts for the irrotational part.
Summary of properties
The basic properties of two-dimensional stream functions can be summarized as follows:
The x- and y-components of the flow velocity at a given point are given by the partial derivatives of the stream function at that point.
The value of the stream function is constant along every streamline (streamlines represent the trajectories of particles in steady flow). That is, in two dimensions each streamline is a level curve of the stream function.
The difference between the stream function values at any two points gives the volumetric flux through the vertical surface that connects the two points.
Two-dimensional stream function for flows with time-invariant density
If the fluid density ρ is time-invariant at all points within the flow, i.e.,
∂ρ/∂t = 0,
then the continuity equation (e.g., see Continuity equation#Fluid dynamics) for two-dimensional plane flow becomes
∂(ρu)/∂x + ∂(ρv)/∂y = 0.
In this case the stream function ψ is defined such that
ρu = ∂ψ/∂y,  ρv = −∂ψ/∂x,
and represents the mass flux (rather than volumetric flux) per unit thickness through the test surface.
See also
Elementary flow
References
Citations
Sources
Continuum mechanics
Fluid dynamics
External links
Joukowsky Transform Interactive WebApp | Stream function | [
"Physics",
"Chemistry",
"Engineering"
] | 1,671 | [
"Continuum mechanics",
"Chemical engineering",
"Classical mechanics",
"Piping",
"Fluid dynamics"
] |
218,268 | https://en.wikipedia.org/wiki/Characteristic%20polynomial | In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any basis (that is, the characteristic polynomial does not depend on the choice of a basis). The characteristic equation, also known as the determinantal equation, is the equation obtained by equating the characteristic polynomial to zero.
In spectral graph theory, the characteristic polynomial of a graph is the characteristic polynomial of its adjacency matrix.
Motivation
In linear algebra, eigenvalues and eigenvectors play a fundamental role, since, given a linear transformation, an eigenvector is a vector whose direction is not changed by the transformation, and the corresponding eigenvalue is the measure of the resulting change of magnitude of the vector.
More precisely, suppose the transformation is represented by a square matrix A. Then an eigenvector v and the corresponding eigenvalue λ must satisfy the equation
A v = λ v,
or, equivalently (since λ v = λ I v),
(λ I − A) v = 0,
where I is the identity matrix, and v ≠ 0
(although the zero vector satisfies this equation for every λ, it is not considered an eigenvector).
It follows that the matrix λ I − A must be singular, and its determinant
det(λ I − A)
must be zero.
In other words, the eigenvalues of A are the roots of
det(λ I − A),
which is a monic polynomial in λ of degree n if A is an n×n matrix. This polynomial is the characteristic polynomial of A.
Formal definition
Consider an n×n matrix A. The characteristic polynomial of A, denoted by p_A(t), is the polynomial defined by
p_A(t) = det(t I − A),
where I denotes the n×n identity matrix.
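As a quick numerical illustration (NumPy's np.poly returns the coefficients of det(tI − A), highest degree first, matching the monic definition above; the example matrix is arbitrary):

```python
import numpy as np

# Characteristic polynomial coefficients of a 2x2 example matrix.
A = np.array([[2.0, 1.0],
              [-1.0, 0.0]])
coeffs = np.poly(A)             # array([ 1., -2.,  1.])  ->  t^2 - 2t + 1
eigenvalues = np.roots(coeffs)  # both roots equal 1, the eigenvalues of A
print(coeffs, eigenvalues)
```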
Some authors define the characteristic polynomial to be det(A − t I). That polynomial differs from the one defined here by a sign (−1)ⁿ, so it makes no difference for properties like having as roots the eigenvalues of A; however the definition above always gives a monic polynomial, whereas the alternative definition is monic only when n is even.
Examples
To compute the characteristic polynomial of the matrix
the determinant of the following is computed:
and found to be the characteristic polynomial of
Another example uses hyperbolic functions of a hyperbolic angle φ.
For the matrix take
Its characteristic polynomial is
Properties
The characteristic polynomial p_A(t) of an n×n matrix A is monic (its leading coefficient is 1) and its degree is n. The most important fact about the characteristic polynomial was already mentioned in the motivational paragraph: the eigenvalues of A are precisely the roots of p_A(t) (this also holds for the minimal polynomial of A, but its degree may be less than n). All coefficients of the characteristic polynomial are polynomial expressions in the entries of the matrix. In particular its constant coefficient (of t⁰) is (−1)ⁿ det(A), the coefficient of tⁿ is one, and the coefficient of tⁿ⁻¹ is −tr(A), where tr(A) is the trace of A. (The signs given here correspond to the formal definition given in the previous section; for the alternative definition these would instead be det(A) and (−1)ⁿ⁻¹ tr(A) respectively.)
For a 2×2 matrix A, the characteristic polynomial is thus given by
t² − tr(A) t + det(A).
Using the language of exterior algebra, the characteristic polynomial of an n×n matrix A may be expressed as
p_A(t) = Σ from k = 0 to n of (−1)ᵏ tⁿ⁻ᵏ tr(Λᵏ A),
where tr(Λᵏ A) is the trace of the kth exterior power of A, which has dimension binomial(n, k). This trace may be computed as the sum of all principal minors of A of size k. The recursive Faddeev–LeVerrier algorithm computes these coefficients more efficiently.
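A minimal Python sketch of one common formulation of the Faddeev–LeVerrier recursion follows (variable names are ours, not from the source):

```python
import numpy as np

# Faddeev-LeVerrier: returns [1, c_{n-1}, ..., c_0], the coefficients
# of det(tI - A), highest degree first.
def faddeev_leverrier(A: np.ndarray) -> np.ndarray:
    n = A.shape[0]
    coeffs = np.empty(n + 1)
    coeffs[0] = 1.0                   # monic leading coefficient
    M = np.zeros_like(A)              # M_0 = 0
    for k in range(1, n + 1):
        M = A @ M + coeffs[k - 1] * np.eye(n)   # M_k = A M_{k-1} + c_{n-k+1} I
        coeffs[k] = -np.trace(A @ M) / k        # c_{n-k} = -tr(A M_k) / k
    return coeffs

A = np.array([[2.0, 1.0], [-1.0, 0.0]])
print(faddeev_leverrier(A))   # [ 1. -2.  1.]  (agrees with np.poly(A))
```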
When the characteristic of the field of the coefficients is 0, each such trace may alternatively be computed as a single determinant, that of a k×k matrix formed from the traces of the powers of A.
The Cayley–Hamilton theorem states that replacing t by A in the characteristic polynomial (interpreting the resulting powers as matrix powers, and the constant term c as c times the identity matrix) yields the zero matrix. Informally speaking, every matrix satisfies its own characteristic equation. This statement is equivalent to saying that the minimal polynomial of A divides the characteristic polynomial of A.
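A numerical spot-check of the theorem is easy to set up (illustrative only; Horner's scheme with matrix powers, and the constant term times the identity):

```python
import numpy as np

# Evaluating the characteristic polynomial at the matrix itself should
# give the zero matrix, per Cayley-Hamilton.
A = np.array([[2.0, 1.0], [-1.0, 0.0]])
c = np.poly(A)                     # coefficients of det(tI - A)
P = np.zeros_like(A)
for coeff in c:
    P = P @ A + coeff * np.eye(2)  # Horner: P <- P*A + c_k*I
print(np.allclose(P, 0))           # True (up to round-off)
```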
Two similar matrices have the same characteristic polynomial. The converse however is not true in general: two matrices with the same characteristic polynomial need not be similar.
The matrix A and its transpose have the same characteristic polynomial. A is similar to a triangular matrix if and only if its characteristic polynomial can be completely factored into linear factors over the field of the coefficients (the same is true with the minimal polynomial instead of the characteristic polynomial). In this case A is similar to a matrix in Jordan normal form.
Characteristic polynomial of a product of two matrices
If A and B are two square n×n matrices, then the characteristic polynomials of AB and BA coincide:
p_{AB}(t) = p_{BA}(t).
When A is non-singular this result follows from the fact that AB and BA are similar:
BA = A⁻¹(AB)A.
For the case where both A and B are singular, the desired identity is an equality between polynomials in t and the coefficients of the matrices. Thus, to prove this equality, it suffices to prove that it is verified on a non-empty open subset (for the usual topology, or, more generally, for the Zariski topology) of the space of all the coefficients. As the non-singular matrices form such an open subset of the space of all matrices, this proves the result.
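This identity is easy to spot-check numerically (random matrices, illustrative only):

```python
import numpy as np

# AB and BA share a characteristic polynomial for square A, B.
rng = np.random.default_rng(0)    # seed fixed for reproducibility
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
print(np.allclose(np.poly(A @ B), np.poly(B @ A)))  # True
```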
More generally, if A is a matrix of order m×n and B is a matrix of order n×m, then AB is an m×m and BA is an n×n matrix, and one has
p_{BA}(t) = tⁿ⁻ᵐ p_{AB}(t) (for n ≥ m).
To prove this, one may suppose n ≥ m by exchanging, if needed, A and B. Then, by bordering A on the bottom by n − m rows of zeros, and B on the right by n − m columns of zeros, one gets two n×n matrices A′ and B′ such that B′A′ = BA and A′B′ is equal to AB bordered by n − m rows and columns of zeros. The result follows from the case of square matrices, by comparing the characteristic polynomials of A′B′ and B′A′.
Characteristic polynomial of Ak
If λ is an eigenvalue of a square matrix A with eigenvector v, then λᵏ is an eigenvalue of Aᵏ because
Aᵏ v = Aᵏ⁻¹ (A v) = λ Aᵏ⁻¹ v = ⋯ = λᵏ v.
The multiplicities can be shown to agree as well, and this generalizes to any polynomial p in place of tᵏ: the eigenvalues of p(A), counted with algebraic multiplicity, are the values p(λ) as λ runs over the eigenvalues of A.
That is, the algebraic multiplicity of μ in p(A) equals the sum of algebraic multiplicities of λ in A over λ such that p(λ) = μ.
In particular, tr(p(A)) = Σᵢ p(λᵢ) and det(p(A)) = Πᵢ p(λᵢ), where λ₁, …, λₙ are the eigenvalues of A counted with multiplicity.
Here a polynomial, for example p(t) = t³ + 1, is evaluated on a matrix A simply as p(A) = A³ + I.
The theorem applies to matrices and polynomials over any field or commutative ring.
However, the assumption that p_A(t) has a factorization into linear factors is not always true, unless the matrix is over an algebraically closed field such as the complex numbers.
Secular function and secular equation
Secular function
The term secular function has been used for what is now called characteristic polynomial (in some literature the term secular function is still used). The term comes from the fact that the characteristic polynomial was used to calculate secular perturbations (on a time scale of a century, that is, slow compared to annual motion) of planetary orbits, according to Lagrange's theory of oscillations.
Secular equation
Secular equation may have several meanings.
In linear algebra it is sometimes used in place of characteristic equation.
In astronomy it is the algebraic or numerical expression of the magnitude of the inequalities in a planet's motion that remain after the inequalities of a short period have been allowed for.
In molecular orbital calculations relating to the energy of the electron and its wave function it is also used instead of the characteristic equation.
For general associative algebras
The above definition of the characteristic polynomial of a matrix A with entries in a field F generalizes without any changes to the case when F is just a commutative ring. The characteristic polynomial can likewise be defined for elements of an arbitrary finite-dimensional (associative, but not necessarily commutative) algebra over a field, and the standard properties of the characteristic polynomial hold in this generality.
See also
Characteristic equation (disambiguation)
Invariants of tensors
Companion matrix
Faddeev–LeVerrier algorithm
Cayley–Hamilton theorem
Samuelson–Berkowitz algorithm
References
T.S. Blyth & E.F. Robertson (1998) Basic Linear Algebra, p 149, Springer .
John B. Fraleigh & Raymond A. Beauregard (1990) Linear Algebra 2nd edition, p 246, Addison-Wesley .
Werner Greub (1974) Linear Algebra 4th edition, pp 120–5, Springer, .
Paul C. Shields (1980) Elementary Linear Algebra 3rd edition, p 274, Worth Publishers .
Gilbert Strang (1988) Linear Algebra and Its Applications 3rd edition, p 246, Brooks/Cole .
Polynomials
Linear algebra
Tensors | Characteristic polynomial | [
"Mathematics",
"Engineering"
] | 1,655 | [
"Linear algebra",
"Polynomials",
"Tensors",
"Algebra"
] |
218,320 | https://en.wikipedia.org/wiki/Ultraviolet%20catastrophe | The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century and early 20th century classical physics that an ideal black body at thermal equilibrium would emit an unbounded quantity of energy as wavelength decreased into the ultraviolet range. The term "ultraviolet catastrophe" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law.
The phrase refers to the fact that the empirically derived Rayleigh–Jeans law, which accurately predicted experimental results at large wavelengths, failed to do so for short wavelengths, the theory diverging from empirical observations as the frequencies involved reached the ultraviolet region of the electromagnetic spectrum. The problem was later found to be due to a property of quanta as proposed by Max Planck: there could be no fraction of a discrete energy package already carrying minimal energy.
Since the first use of this term, it has also been used for other predictions of a similar nature, as in quantum electrodynamics and such cases as ultraviolet divergence.
Problem
The Rayleigh–Jeans law is an approximation to the spectral radiance of electromagnetic radiation as a function of wavelength from a black body at a given temperature through classical arguments. For wavelength λ, it is:
B_λ(T) = 2 c k_B T / λ⁴,
where B_λ is the spectral radiance, the power emitted per unit emitting area, per steradian, per unit wavelength; c is the speed of light; k_B is the Boltzmann constant; and T is the temperature in kelvins. For frequency ν, the expression is instead
B_ν(T) = 2 ν² k_B T / c².
This formula is obtained from the equipartition theorem of classical statistical mechanics, which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of k_B T.
The "ultraviolet catastrophe" is the expression of the fact that the formula misbehaves at higher frequencies; it predicts infinite energy emission because as .
An example, from Mason's A History of the Sciences,
illustrates multi-mode vibration via a piece of string. As a natural vibrator, the string will oscillate with specific modes (the standing waves of a string in harmonic resonance), dependent on the length of the string. In classical physics, a radiator of energy will act as a natural vibrator. Since each mode will have the same energy, most of the energy in a natural vibrator will be in the smaller wavelengths and higher frequencies, where most of the modes are.
According to classical electromagnetism, the number of electromagnetic modes in a 3-dimensional cavity, per unit frequency, is proportional to the square of the frequency. This implies that the radiated power per unit frequency should be proportional to frequency squared. Thus, both the power at a given frequency and the total radiated power are unlimited as higher and higher frequencies are considered: this is unphysical, as the total radiated power of a cavity is not observed to be infinite, a point that was made independently by Einstein, Lord Rayleigh, and Sir James Jeans in 1905.
Solution
In 1900, Max Planck derived the correct form for the intensity spectral distribution function by making some assumptions that were strange for the time. In particular, Planck assumed that electromagnetic radiation can be emitted or absorbed only in discrete packets, called quanta, of energy
E = hν = hc/λ,
where:
h is the Planck constant,
ν is the frequency of light,
c is the speed of light,
λ is the wavelength of light.
By applying this new energy to the partition function in statistical mechanics, Planck's assumptions led to the correct form of the spectral distribution function:
B_λ(T) = (2hc²/λ⁵) · 1/(exp(hc/(λ k_B T)) − 1),
where:
T is the absolute temperature of the body,
k_B is the Boltzmann constant,
exp denotes the exponential function.
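To see the divergence numerically, the following sketch (illustrative only; constants in SI units and the choice of temperature and wavelengths are ours) compares the two laws:

```python
import numpy as np

# Compare Rayleigh-Jeans and Planck spectral radiances (W·sr^-1·m^-3)
# at T = 5000 K. Rayleigh-Jeans blows up at short wavelength (the
# "catastrophe"); Planck's form stays finite.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
T = 5000.0

def rayleigh_jeans(lam):
    return 2.0 * c * kB * T / lam**4

def planck(lam):
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

for lam in (2000e-9, 500e-9, 100e-9):   # wavelengths in meters
    print(f"{lam*1e9:6.0f} nm  RJ: {rayleigh_jeans(lam):.3e}  Planck: {planck(lam):.3e}")
```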
In 1905, Albert Einstein solved the problem physically by postulating that Planck's quanta were real physical particles – what we now call photons, not just a mathematical fiction. He modified statistical mechanics in the style of Boltzmann to apply to an ensemble of photons. Einstein's photon had an energy proportional to its frequency and also explained an unpublished law of Stokes and the photoelectric effect. This published postulate was specifically cited by the Nobel Prize in Physics committee in their decision to award the prize for 1921 to Einstein.
See also
Wien approximation
Vacuum catastrophe
Planckian locus
References
Bibliography
Further reading
Foundational quantum physics
Physical paradoxes
Physical phenomena | Ultraviolet catastrophe | [
"Physics"
] | 882 | [
"Physical phenomena",
"Foundational quantum physics",
"Quantum mechanics"
] |
218,445 | https://en.wikipedia.org/wiki/Lean%20manufacturing | Lean manufacturing is a method of manufacturing goods aimed primarily at reducing times within the production system as well as response times from suppliers and customers. It is closely related to another concept called just-in-time manufacturing (JIT manufacturing in short). Just-in-time manufacturing tries to match production to demand by only supplying goods that have been ordered and focus on efficiency, productivity (with a commitment to continuous improvement), and reduction of "wastes" for the producer and supplier of goods. Lean manufacturing adopts the just-in-time approach and additionally focuses on reducing cycle, flow, and throughput times by further eliminating activities that do not add any value for the customer. Lean manufacturing also involves people who work outside of the manufacturing process, such as in marketing and customer service.
Lean manufacturing is particularly related to the operational model implemented in the post-war 1950s and 1960s by the Japanese automobile company Toyota called the Toyota Production System (TPS), known in the United States as "The Toyota Way". Toyota's system was erected on the two pillars of just-in-time inventory management and automated quality control. The seven "wastes" (muda in Japanese), first formulated by Toyota engineer Shigeo Shingo, are the waste of superfluous inventory of raw material and finished goods, the waste of overproduction (producing more than what is needed now), the waste of over-processing (processing or making parts beyond the standard expected by customer), the waste of transportation (unnecessary movement of people and goods inside the system), the waste of excess motion (mechanizing or automating before improving the method), the waste of waiting (inactive working periods due to job queues), and the waste of making defective products (reworking to fix avoidable defects in products and processes).
The term Lean was coined in 1988 by American businessman John Krafcik in his article "Triumph of the Lean Production System," and defined in 1996 by American researchers James Womack and Daniel Jones to consist of five key principles: "Precisely specify value by specific product, identify the value stream for each product, make value flow without interruptions, let customer pull value from the producer, and pursue perfection."
Companies employ the strategy to increase efficiency. By receiving goods only as they need them for the production process, it reduces inventory costs and wastage, and increases productivity and profit. The downside is that it requires producers to forecast demand accurately as the benefits can be nullified by minor delays in the supply chain. It may also impact negatively on workers due to added stress and inflexible conditions. A successful operation depends on a company having regular outputs, high-quality processes, and reliable suppliers.
History
Frederick Taylor and Henry Ford documented their observations relating to these topics, and Shigeo Shingo and Taiichi Ohno applied their enhanced thoughts on the subject at Toyota in the late 1940s after World War II. The resulting methods were researched in the mid-20th century and dubbed Lean by John Krafcik in 1988, and then were defined in The Machine that Changed the World and further detailed by James Womack and Daniel Jones in Lean Thinking (1996).
Japan: the origins of Lean
The adoption of just-in-time manufacturing in Japan and many other early forms of Lean can be traced back directly to the US-backed Reconstruction and Occupation of Japan following WWII. During this time, an American economist, W. Edwards Deming, and an American statistician, Walter A. Shewhart, promoted some of the earliest modern manufacturing methods and management philosophies they developed in the late '30s and early '40s. The two experts were the first to apply these newly developed statistical models to improve efficiencies in many of America's largest military manufacturers during WWII. However, Deming and Shewhart failed to convince other US manufacturers to apply these "radical" methods.
After the war, Deming was assigned to participate in the Reconstruction of Japan by General Douglas MacArthur. Deming participated as a manufacturing consultant for Japan's struggling heavy industries, which included Toyota and Mitsubishi. Unlike what they experienced in the US, Deming found the Japanese very receptive to learning and applying these new efficiency methods. Many of the manufacturing methods first introduced by Deming in Japan, and later innovated by Japanese companies, are what we now call Lean Manufacturing. Japanese manufacturers still recognize Deming for his contributions to modern Japanese efficiency practices by awarding the best manufacturers in the world the Deming Prize. In addition to Deming's critical influence in Japan, most local companies were in a position where they needed an immediate solution to the extreme situation they were living in after World War II. American supply chain specialist Gerhard Plenert has offered four quite vague reasons, paraphrased here. During Japan's post–World War II rebuilding (of economy, infrastructure, industry, political, and social-emotional stability):
Japan's lack of cash made it difficult for industry to finance the big-batch, large inventory production methods common elsewhere.
Japan lacked space to build big factories loaded with inventory.
The Japanese islands lack natural resources with which to build products.
Japan had high unemployment, which meant that labor efficiency methods were not an obvious pathway to industrial success.
Thus, the Japanese "leaned out" their processes. "They built smaller factories ... in which the only materials housed in the factory were those on which work was currently being done. In this way, inventory levels were kept low, investment in in-process inventories was at a minimum, and the investment in purchased natural resources was quickly turned around so that additional materials were purchased." Plenert goes on to explain Toyota's key role in developing this lean or just-in-time production methodology.
American industrialists recognized the threat of cheap offshore labor to American workers during the 1910s and explicitly stated the goal of what is now called lean manufacturing as a countermeasure. Henry Towne, past president of the American Society of Mechanical Engineers, wrote in the foreword to Frederick Winslow Taylor's Shop Management (1911), "We are justly proud of the high wage rates which prevail throughout our country, and jealous of any interference with them by the products of the cheaper labor of other countries. To maintain this condition, to strengthen our control of home markets, and, above all, to broaden our opportunities in foreign markets where we must compete with the products of other industrial nations, we should welcome and encourage every influence tending to increase the efficiency of our productive processes."
Continuous production improvement and incentives for such were documented in Taylor's Principles of Scientific Management (1911):
"... whenever a workman proposes an improvement, it should be the policy of the management to make a careful analysis of the new method, and if necessary conduct a series of experiments to determine accurately the relative merit of the new suggestion and of the old standard. And whenever the new method is found to be markedly superior to the old, it should be adopted as the standard for the whole establishment."
"...after a workman has had the price per piece of the work he is doing lowered two or three times as a result of his having worked harder and increased his output, he is likely entirely to lose sight of his employer's side of the case and become imbued with a grim determination to have no more cuts if soldiering [marking time, just doing what he is told] can prevent it."
Shigeo Shingo cites reading Principles of Scientific Management in 1931 and being "greatly impressed to make the study and practice of scientific management his life's work".
Shingo and Taiichi Ohno were key to the design of Toyota's manufacturing process. Previously a textile company, Toyota moved into building automobiles in 1934. Kiichiro Toyoda, the founder of Toyota Motor Corporation, directed the engine casting work and discovered many problems in their manufacturing, with wasted resources on the repair of poor-quality castings. Toyota engaged in intense study of each stage of the process. In 1936, when Toyota won its first truck contract with the Japanese government, the processes encountered new problems, to which Toyota responded by developing Kaizen improvement teams, which evolved into what has become the Toyota Production System (TPS), and subsequently The Toyota Way.
Levels of demand in the postwar economy of Japan were low; as a result, the focus of mass production on lowest cost per item via economies of scale had little application. Having visited and seen supermarkets in the United States, Ohno recognized that the scheduling of work should not be driven by sales or production targets but by actual sales. Given the financial situation during this period, over-production had to be avoided, and thus the notion of "pull" (or "build-to-order" rather than target-driven "push") came to underpin production scheduling.
Evolution in the rest of the world
Just-in-time manufacturing was introduced in Australia in the 1950s by the British Motor Corporation (Australia) at its Victoria Park plant in Sydney, from where the idea later migrated to Toyota. News about the just-in-time/Toyota production system reached other western countries from Japan in 1977 in two English-language articles: one referred to the methodology as the "Ohno system", after Taiichi Ohno, who was instrumental in its development within Toyota. The other article, by Toyota authors in an international journal, provided additional details. Finally, that and other publicity was translated into implementations, beginning in 1980 and then quickly multiplying throughout industry in the United States and other developed countries. A seminal 1980 event was a conference in Detroit at Ford World Headquarters co-sponsored by the Repetitive Manufacturing Group (RMG), which had been founded in 1979 within the American Production and Inventory Control Society (APICS) to seek advances in manufacturing. The principal speaker, Fujio Cho (later, president of Toyota Motor Corp.), in explaining the Toyota system, stirred up the audience, and led to the RMG's shifting gears from things like automation to just-in-time/Toyota production system.
At least some of the audience's stirring had to do with a perceived clash between the new just-in-time regime and manufacturing resource planning (MRP II), a computer software-based system of manufacturing planning and control which had become prominent in industry in the 1960s and 1970s. Debates in professional meetings on just-in-time vs. MRP II were followed by published articles, one of them titled, "The Rise and Fall of Just-in-Time". Less confrontational was Walt Goddard's, "Kanban Versus MRP II—Which Is Best for You?" in 1982. Four years later, Goddard had answered his own question with a book advocating just-in-time. Among the best known of MRP II's advocates was George Plossl, who authored two articles questioning just-in-time's kanban planning method and the "japanning of America". But, as with Goddard, Plossl later wrote that "JIT is a concept whose time has come".
Just-in-time/TPS implementations may be found in many case-study articles from the 1980s and beyond. An article in a 1984 issue of Inc. magazine relates how Omark Industries (chain saws, ammunition, log loaders, etc.) emerged as an extensive just-in-time implementer under its US home-grown name ZIPS (zero inventory production system). At Omark's mother plant in Portland, Oregon, after the work force had received 40 hours of ZIPS training, they were "turned loose" and things began to happen. A first step was to "arbitrarily eliminate a week's lead time [after which] things ran smoother. 'People asked that we try taking another week's worth out.' After that, ZIPS spread throughout the plant's operations 'like an amoeba.'" The article also notes that Omark's 20 other plants were similarly engaged in ZIPS, beginning with pilot projects. For example, at one of Omark's smaller plants making drill bits in Mesabi, Minnesota, "large-size drill inventory was cut by 92%, productivity increased by 30%, scrap and rework ... dropped 20%, and lead time ... from order to finished product was slashed from three weeks to three days." The Inc. article states that companies using just-in-time the most extensively include "the Big Four, Hewlett-Packard, Motorola, Westinghouse Electric, General Electric, Deere & Company, and Black and Decker".
By 1986, a case-study book on just-in-time in the U.S. was able to devote a full chapter to ZIPS at Omark, along with two chapters on just-in-time at several Hewlett-Packard plants, and single chapters for Harley-Davidson, John Deere, IBM-Raleigh, North Carolina, and California-based Apple Inc., a Toyota truck-bed plant, and New United Motor Manufacturing joint venture between Toyota and General Motors.
Two similar, contemporaneous books from the UK are more international in scope. One of the books, with both conceptual articles and case studies, includes three sections on just-in-time practices: in Japan (e.g., at Toyota, Mazda, and Tokagawa Electric); in Europe (jmg Bostrom, Lucas Electric, Cummins Engine, IBM, 3M, Datasolve Ltd., Renault, Massey Ferguson); and in the US and Australia (Repco Manufacturing-Australia, Xerox Computer, and two on Hewlett-Packard). The second book, reporting on what was billed as the First International Conference on just-in-time manufacturing, includes case studies in three companies: Repco-Australia, IBM-UK, and 3M-UK. In addition, a day two keynote address discussed just-in-time as applied "across all disciplines, ... from accounting and systems to design and production".
Rebranding as "lean"
John Krafcik coined the term Lean in his 1988 article, "Triumph of the Lean Production System". The article states: (a) Lean manufacturing plants have higher levels of productivity/quality than non-Lean and (b) "The level of plant technology seems to have little effect on operating performance" (page 51). According to the article, risks with implementing Lean can be reduced by: "developing a well-trained, flexible workforce, product designs that are easy to build with high quality, and a supportive, high-performance supplier network" (page 51).
Middle era and to the present
Three more books which include just-in-time implementations were published in 1993, 1995, and 1996, which are start-up years of the lean manufacturing/lean management movement that was launched in 1990 with publication of the book The Machine That Changed the World. That one, along with other books, articles, and case studies on lean, supplanted just-in-time terminology in the 1990s and beyond. The same period saw the rise of books and articles with similar concepts and methodologies but with alternative names, including cycle time management, time-based competition, quick-response manufacturing, flow, and pull-based production systems.
There is more to just-in-time than its usual manufacturing-centered explication. Inasmuch as manufacturing ends with order-fulfillment to distributors, retailers, and end users, and also includes remanufacturing, repair, and warranty claims, just-in-time's concepts and methods have application downstream from manufacturing itself. A 1993 book on "world-class distribution logistics" discusses kanban links from factories onward, and a manufacturer-to-retailer model developed in the U.S. in the 1980s, referred to as quick response, has morphed over time to what is called fast fashion.
Methodology
The strategic elements of lean can be quite complex, and comprise multiple elements. Four different notions of lean have been identified:
Lean as a fixed state or goal (being lean)
Lean as a continuous change process (becoming lean)
Lean as a set of tools or methods (doing lean/toolbox lean)
Lean as a philosophy (lean thinking)
Another way to avoid market risk and control supply efficiently is to cut down stock. P&G, for example, achieved its goal of cooperating with Walmart and other wholesale companies by building a stock-response system that links inventory levels directly to the supplier companies.
In 1999, Spear and Bowen identified four rules which characterize the "Toyota DNA":
All work shall be highly specified as to content, sequence, timing, and outcome.
Every customer-supplier connection must be direct, and there must be an unambiguous yes or no way to send requests and receive responses.
The pathway for every product and service must be simple and direct.
Any improvement must be made in accordance with the scientific method, under the guidance of a teacher, at the lowest possible level in the organization.
This is a fundamentally different approach from most improvement methodologies, and requires more persistence than basic application of the tools, which may partially account for its lack of popularity. The implementation of "smooth flow" exposes quality problems that already existed, and waste reduction then happens as a natural consequence, a system-wide perspective rather than focusing directly upon the wasteful practices themselves.
Takt time is the pace at which products need to be produced to meet customer demand, computed as the available production time divided by the number of units demanded. For example, if 480 minutes of production time are available per day and customers demand 240 units per day, the takt time is 2 minutes per unit. The JIT system is designed to produce products at the rate of takt time, which ensures that products are produced just in time to meet customer demand.
Sepheri provides a list of methodologies of just-in-time manufacturing that "are important but not exhaustive":
Housekeeping: physical organization and discipline.
Make it right the first time: elimination of defects.
Setup reduction: flexible changeover approaches.
Lot sizes of one: the ultimate lot size and flexibility.
Uniform plant load: leveling as a control mechanism.
Balanced flow: organizing flow scheduling throughput.
Skill diversification: multi-functional workers.
Control by visibility: communication media for activity.
Preventive maintenance: flawless running, no defects.
Fitness for use: producibility, design for process.
Compact plant layout: product-oriented design.
Streamlining movements: smoothing materials handling.
Supplier networks: extensions of the factory.
Worker involvement: small group improvement activities.
Cellular manufacturing: production methods for flow.
Pull system: signal [kanban] replenishment/resupply systems.
Key principles
Womack and Jones define Lean as "...a way to do more and more with less and less—less human effort, less equipment, less time, and less space—while coming closer and closer to providing customers exactly what they want" and then translate this into five key principles:
Value: Specify the value desired by the customer. "Form a team for each product to stick with that product during its entire production cycle", "Enter into a dialogue with the customer" (e.g. Voice of the customer)
The Value Stream: Identify the value stream for each product providing that value and challenge all of the wasted steps (generally nine out of ten) currently necessary to provide it
Flow: Make the product flow continuously through the remaining value-added steps
Pull: Introduce pull between all steps where continuous flow is possible
Perfection: Manage toward perfection so that the number of steps and the amount of time and information needed to serve the customer continually falls
Lean is founded on the concept of continuous and incremental improvements on product and process while eliminating redundant activities. "The value-adding activities are simply only those things the customer is willing to pay for; everything else is waste, and should be eliminated, simplified, reduced, or integrated".
On principle 2, waste, see seven basic waste types under The Toyota Way. Additional waste types are:
Faulty goods (manufacturing of goods or services that do not meet customer demand or specifications, Womack et al., 2003. See Lean services)
Waste of skills (Six Sigma)
Under-utilizing capabilities (Six Sigma)
Delegating tasks with inadequate training (Six Sigma)
Metrics (working to the wrong metrics or no metrics) (Mika Geoffrey, 1999)
Participation (not utilizing workers by not allowing them to contribute ideas and suggestions and be part of Participative Management) (Mika Geoffrey, 1999)
Computers (improper use of computers: not having the proper software, training on use and time spent surfing, playing games or just wasting time) (Mika Geoffrey, 1999)
Implementation
One paper suggests that an organization implementing Lean needs its own Lean plan as developed by the "Lean Leadership". This should enable Lean teams to provide suggestions for their managers, who then make the actual decisions about what to implement. Coaching is recommended when an organization starts off with Lean, to impart knowledge and skills to shop-floor staff. Improvement metrics are required for informed decision-making.
Lean philosophy and culture is as important as tools and methodologies. Management should not decide on solutions without understanding the true problem by consulting shop floor personnel.
The solution to a specific problem for a specific company may not have generalized application. The solution must fit the problem.
Value-stream mapping (VSM) and 5S are the most common approaches companies take on their first steps to Lean. Lean can be focused on specific processes, or cover the entire supply chain. Front-line workers should be involved in VSM activities. Implementing a series of small improvements incrementally along the supply chain can bring forth enhanced productivity.
Naming
Alternative terms for JIT manufacturing have been used. Motorola's choice was short-cycle manufacturing (SCM). IBM's was continuous-flow manufacturing (CFM), and demand-flow manufacturing (DFM), a term handed down from consultant John Constanza at his Institute of Technology in Colorado. Still another alternative was mentioned by Goddard, who said that "Toyota Production System is often mistakenly referred to as the 'Kanban System'", and pointed out that kanban is but one element of TPS, as well as JIT production.
The wide use of the term JIT manufacturing throughout the 1980s faded fast in the 1990s, as the new term lean manufacturing became established, as "a more recent name for JIT". As just one testament to the commonality of the two terms, the Toyota production system (TPS) has been and is widely used as a synonym for both JIT and lean manufacturing.
Objectives and benefits
Objectives and benefits of JIT manufacturing may be stated in two primary ways: first, in specific and quantitative terms, via published case studies; second, general listings and discussion.
A case-study summary from Daman Products in 1999 lists the following benefits: reduced cycle times 97%, setup times 50%, lead times from 4 to 8 weeks to 5 to 10 days, flow distance 90%. This was achieved via four focused (cellular) factories, pull scheduling, kanban, visual management, and employee empowerment.
Another study from NCR (Dundee, Scotland) in 1998, a producer of make-to-order automated teller machines, includes some of the same benefits while also focusing on JIT purchasing: In switching to JIT over a weekend in 1998, eliminated buffer inventories, reducing inventory from 47 days to 5 days, flow time from 15 days to 2 days, with 60% of purchased parts arriving JIT and 77% going dock to line, and suppliers reduced from 480 to 165.
Hewlett-Packard, one of western industry's earliest JIT implementers, provides a set of four case studies from four H-P divisions during the mid-1980s. The four divisions, Greeley, Fort Collins, Computer Systems, and Vancouver, employed some but not all of the same measures. At the time about half of H-P's 52 divisions had adopted JIT.
Application outside a manufacturing context
Lean principles have been successfully applied to various sectors and services, such as call centers and healthcare. In the former, lean's waste reduction practices have been used to reduce handle time, within and between agent variation, accent barriers, as well as attain near perfect process adherence. In the latter, several hospitals have adopted the idea of lean hospital, a concept that prioritizes the patient, thus increasing the employee commitment and motivation, as well as boosting medical quality and cost effectiveness.
Lean principles also have applications to software development and maintenance as well as other sectors of information technology (IT). More generally, the use of lean in information technology has become known as Lean IT. Lean methods are also applicable to the public sector, but most results have been achieved using a much more restricted range of techniques than lean provides.
The challenge in moving lean to services is the lack of widely available reference implementations to allow people to see how directly applying lean manufacturing tools and practices can work and the impact it does have. This makes it more difficult to build the level of belief seen as necessary for strong implementation. However, some research does relate widely recognized examples of success in retail and even airlines to the underlying principles of lean. Despite this, it remains the case that the direct manufacturing examples of 'techniques' or 'tools' need to be better 'translated' into a service context to support the more prominent approaches of implementation, which has not yet received the level of work or publicity that would give starting points for implementors. The upshot of this is that each implementation often 'feels its way' along as must the early industrial engineering practices of Toyota. This places huge importance upon sponsorship to encourage and protect these experimental developments.
Lean management is nowadays also implemented in non-manufacturing and administrative processes, where there is still huge potential for optimization and efficiency gains. Some people have advocated using STEM resources to teach children Lean thinking instead of computer science.
Lean manufacturing methodology has become a prevalent practice in public healthcare, commonly known as lean healthcare. Due to the intensively competitive environment, lean approach becomes a growing alternative in the healthcare sector to achieve optimized resource management and performance improvement.
Criticism
According to Williams, it becomes necessary to find suppliers that are close by or can supply materials quickly with limited advance notice. When ordering small quantities of materials, suppliers' minimum order policies may pose a problem, though.
Employees are at risk of precarious work when employed by factories that utilize just-in-time and flexible production techniques. A longitudinal study of US workers since 1970 indicates employers seeking to easily adjust their workforce in response to supply and demand conditions respond by creating more nonstandard work arrangements, such as contracting and temporary work.
Natural and human-made disasters will disrupt the flow of energy, goods and services. The down-stream customers of those goods and services will, in turn, not be able to produce their product or render their service because they were counting on incoming deliveries "just in time" and so have little or no inventory to work with. The disruption to the economic system will cascade to some degree depending on the nature and severity of the original disaster and may create shortages. The larger the disaster the worse the effect on just-in-time failures. Electrical power is the ultimate example of just-in-time delivery. A severe geomagnetic storm could disrupt electrical power delivery for hours to years, locally or even globally. Lack of supplies on hand to repair the electrical system would have catastrophic effects.
The COVID-19 pandemic caused disruption in JIT practices: quarantine restrictions on international trade and commercial activity interrupted supply while manufacturers lacked stockpiles to handle the disruption, and demand spiked for medical supplies such as personal protective equipment (PPE) and ventilators. Panic buying, including of various domestically manufactured (and so less vulnerable) products such as toilet paper, further disturbed regular demand. This has led to suggestions that stockpiles and diversification of suppliers should receive greater emphasis.
Critics of Lean argue that this management method has significant drawbacks, especially for the employees of companies operating under Lean. A common criticism of Lean is that it fails to take into consideration the employee's safety and well-being. Lean manufacturing is associated with an increased level of stress among employees, who have a small margin of error in a work environment which requires perfection. Lean also over-focuses on cutting waste, which may lead management to cut sectors of the company that are not essential to the company's short-term productivity but are nevertheless important to the company's legacy. Lean also over-focuses on the present, which hinders a company's plans for the future.
Critics also make negative comparison of Lean and 19th century scientific management, which had been fought by the labor movement and was considered obsolete by the 1930s. Finally, lean is criticized for lacking a standard methodology: "Lean is more a culture than a method, and there is no standard lean production model."
After years of success of Toyota's Lean Production, the consolidation of supply chain networks during rapid expansion brought Toyota to the position of being the world's biggest carmaker. In 2010, the crisis of safety-related problems in Toyota made other carmakers that had duplicated Toyota's supply chain system wary that the same recall issue might happen to them.
James Womack had warned Toyota that cooperating with single outsourced suppliers might bring unexpected problems.
Lean manufacturing is different from lean enterprise. Recent research reports the existence of several lean manufacturing processes but of few lean enterprises. One distinguishing feature opposes lean accounting and standard cost accounting. For standard cost accounting, SKUs are difficult to grasp: they involve too many assumptions and too much variance, i.e., too much indeterminacy. Manufacturers may want to consider moving away from traditional accounting and adopting lean accounting. In using lean accounting, one expected gain is activity-based cost visibility, i.e., measuring the direct and indirect costs at each step of an activity rather than traditional cost accounting that limits itself to labor and supplies.
See also
Notes
References
Billesbach, Thomas J. 1987. Applicability of Just-in-Time Techniques in the Administrative Area. Doctoral dissertation, University of Nebraska. Ann Arbor, Mich., University Microfilms International.
Goddard, W. E. 2001. JIT/TQC—identifying and solving problems. Proceedings of the 20th Electrical Electronics Insulation Conference, Boston, October 7–10, 88–91.
Goldratt, Eliyahu M. and Fox, Robert E. (1986), The Race, North River Press.
Hall, Robert W. 1983. Zero Inventories. Homewood, Ill.: Dow Jones-Irwin.
Hall, Robert W. 1987. Attaining Manufacturing Excellence: Just-in-Time, Total Quality, Total People Involvement. Homewood, Ill.: Dow Jones-Irwin.
Hay, Edward J. 1988. The Just-in-Time Breakthrough: Implementing the New Manufacturing Basics. New York: Wiley.
Ker, J. I., Wang, Y., Hajli, M. N., Song, J., Ker, C. W. (2014). Deploying Lean in Healthcare: Evaluating Information Technology Effectiveness in US Hospital Pharmacies
Lubben, R. T. 1988. Just-in-Time Manufacturing: An Aggressive Manufacturing Strategy. New York: McGraw-Hill.
MacInnes, Richard L. (2002) The Lean Enterprise Memory Jogger.
Mika, Geoffrey L. (1999) Kaizen Event Implementation Manual
Monden, Yasuhiro. 1982. Toyota Production System. Norcross, Ga: Institute of Industrial Engineers.
Ohno, Taiichi (1988), Toyota Production System: Beyond Large-Scale Production, Productivity Press.
Ohno, Taiichi (1988), Just-In-Time for Today and Tomorrow, Productivity Press.
Page, Julian (2003) Implementing Lean Manufacturing Techniques.
Schonberger, Richard J. 1982. Japanese Manufacturing Techniques: Nine Hidden Lessons in Simplicity. New York: Free Press.
Suri, R. 1986. Getting from 'just in case' to 'just in time': insights from a simple model. 6 (3) 295–304.
Suzaki, Kyoshi. 1993. The New Shop Floor Management: Empowering People for Continuous Improvement. New York: Free Press.
Voss, Chris, and David Clutterbuck. 1989. Just-in-Time: A Global Status Report. UK: IFS Publications.
Wadell, William, and Bodek, Norman (2005), The Rebirth of American Industry, PCS Press.
External links
Lean Enterprise Institute
Manufacturing
Freight transport
Inventory
Working capital management
Inventory optimization | Lean manufacturing | [
"Engineering"
] | 6,691 | [
"Lean manufacturing",
"Manufacturing",
"Mechanical engineering"
] |
218,628 | https://en.wikipedia.org/wiki/Chemical%20potential | In thermodynamics, the chemical potential of a species is the energy that can be absorbed or released due to a change of the particle number of the given species, e.g. in a chemical reaction or phase transition. The chemical potential of a species in a mixture is defined as the rate of change of free energy of a thermodynamic system with respect to the change in the number of atoms or molecules of the species that are added to the system. Thus, it is the partial derivative of the free energy with respect to the amount of the species, all other species' concentrations in the mixture remaining constant. When both temperature and pressure are held constant, and the number of particles is expressed in moles, the chemical potential is the partial molar Gibbs free energy. At chemical equilibrium or in phase equilibrium, the total sum of the product of chemical potentials and stoichiometric coefficients is zero, as the free energy is at a minimum. In a system in diffusion equilibrium, the chemical potential of any chemical species is uniformly the same everywhere throughout the system.
In semiconductor physics, the chemical potential of a system of electrons at zero absolute temperature is known as the Fermi level.
Overview
Particles tend to move from higher chemical potential to lower chemical potential because this reduces the free energy. In this way, chemical potential is a generalization of "potentials" in physics such as gravitational potential. When a ball rolls down a hill, it is moving from a higher gravitational potential (higher internal energy thus higher potential for work) to a lower gravitational potential (lower internal energy). In the same way, as molecules move, react, dissolve, melt, etc., they will always tend naturally to go from a higher chemical potential to a lower one, changing the particle number, which is the conjugate variable to chemical potential.
A simple example is a system of dilute molecules diffusing in a homogeneous environment. In this system, the molecules tend to move from areas with high concentration to areas with low concentration, until eventually the concentration is the same everywhere. The microscopic explanation for this is based on kinetic theory and the random motion of molecules. However, it is simpler to describe the process in terms of chemical potentials: for a given temperature, a molecule has a higher chemical potential in a higher-concentration area and a lower chemical potential in a lower-concentration area. Movement of molecules from higher chemical potential to lower chemical potential is accompanied by a release of free energy. Therefore, it is a spontaneous process.
Another example, not based on concentration but on phase, is an ice cube on a plate above 0 °C. An H2O molecule that is in the solid phase (ice) has a higher chemical potential than a water molecule that is in the liquid phase (water) above 0 °C. When some of the ice melts, H2O molecules convert from solid to the warmer liquid where their chemical potential is lower, so the ice cube shrinks. At the temperature of the melting point, 0 °C, the chemical potentials in water and ice are the same; the ice cube neither grows nor shrinks, and the system is in equilibrium.
A third example is illustrated by the chemical reaction of dissociation of a weak acid HA (such as acetic acid, A = CH3COO−):
HA ⇌ H+ + A−
Vinegar contains acetic acid. When acid molecules dissociate, the concentration of the undissociated acid molecules (HA) decreases and the concentrations of the product ions (H+ and A−) increase. Thus the chemical potential of HA decreases and the sum of the chemical potentials of H+ and A− increases. When the sums of chemical potential of reactants and products are equal the system is at equilibrium and there is no tendency for the reaction to proceed in either the forward or backward direction. This explains why vinegar is acidic, because acetic acid dissociates to some extent, releasing hydrogen ions into the solution.
Chemical potentials are important in many aspects of multi-phase equilibrium chemistry, including melting, boiling, evaporation, solubility, osmosis, partition coefficient, liquid-liquid extraction and chromatography. In each case the chemical potential of a given species at equilibrium is the same in all phases of the system.
In electrochemistry, ions do not always tend to go from higher to lower chemical potential, but they do always go from higher to lower electrochemical potential. The electrochemical potential completely characterizes all of the influences on an ion's motion, while the chemical potential includes everything except the electric force. (See below for more on this terminology.)
Thermodynamic definition
The chemical potential μi of species i (atomic, molecular or nuclear) is defined, as all intensive quantities are, by the phenomenological fundamental equation of thermodynamics. This holds for both reversible and irreversible infinitesimal processes:

$$ dU = T\,dS - P\,dV + \sum_i \mu_i \, dN_i $$

where dU is the infinitesimal change of internal energy U, dS the infinitesimal change of entropy S, dV the infinitesimal change of volume V for a thermodynamic system in thermal equilibrium, and dNi the infinitesimal change of particle number Ni of species i as particles are added or subtracted; T is the absolute temperature and P is the pressure. Other work terms, such as those involving electric, magnetic or gravitational fields, may be added.
From the above equation, the chemical potential is given by

$$ \mu_i = \left( \frac{\partial U}{\partial N_i} \right)_{S,V,N_{j \ne i}} $$
This is because the internal energy U is a state function, so if its differential exists, then the differential is an exact differential such as

$$ dU = \sum_k \frac{\partial U}{\partial x_k} \, dx_k $$

for independent variables x1, x2, ..., xN of U.
This expression of the chemical potential as a partial derivative of U with respect to the corresponding species particle number is inconvenient for condensed-matter systems, such as chemical solutions, as it is hard to control the volume and entropy to be constant while particles are added. A more convenient expression may be obtained by making a Legendre transformation to another thermodynamic potential: the Gibbs free energy G = U + PV − TS. Taking the differential dG = dU + d(PV) − d(TS) (applying the product rule to PV and TS) and using the above expression for dU, a differential relation for dG is obtained:

$$ dG = -S\,dT + V\,dP + \sum_i \mu_i \, dN_i $$
As a consequence, another expression for μi results:

$$ \mu_i = \left( \frac{\partial G}{\partial N_i} \right)_{T,P,N_{j \ne i}} $$
and the change in Gibbs free energy of a system that is held at constant temperature and pressure is simply

$$ dG = \sum_i \mu_i \, dN_i $$
In thermodynamic equilibrium, when the system concerned is at constant temperature and pressure but can exchange particles with its external environment, the Gibbs free energy is at its minimum for the system, that is dG = 0. It follows that

$$ \sum_i \mu_i \, dN_i = 0 $$
Use of this equality provides the means to establish the equilibrium constant for a chemical reaction.
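As a numerical illustration, here is a minimal Python sketch of that route to the equilibrium constant, assuming the standard relation ΔG° = −RT ln K; the ΔG° value below is purely hypothetical:

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def equilibrium_constant(delta_g_standard: float, temperature: float) -> float:
    """Equilibrium constant from the standard Gibbs free energy change.

    At equilibrium sum_i(mu_i dN_i) = 0; expressed with standard-state
    potentials this rearranges to delta_G_standard = -R*T*ln(K).
    """
    return math.exp(-delta_g_standard / (R * temperature))

# Hypothetical reaction with delta_G_standard = -30 kJ/mol at 298.15 K
print(equilibrium_constant(-30_000.0, 298.15))  # K ≈ 1.8e5
```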
By making further Legendre transformations from U to other thermodynamic potentials like the enthalpy H = U + PV and the Helmholtz free energy F = U − TS, expressions for the chemical potential may be obtained in terms of these:

$$ \mu_i = \left( \frac{\partial H}{\partial N_i} \right)_{S,P,N_{j \ne i}} = \left( \frac{\partial F}{\partial N_i} \right)_{T,V,N_{j \ne i}} $$
These different forms for the chemical potential are all equivalent, meaning that they have the same physical content, and may be useful in different physical situations.
Applications
The Gibbs–Duhem equation is useful because it relates individual chemical potentials. For example, in a binary mixture, at constant temperature and pressure, the chemical potentials of the two participants A and B are related by

$$ d\mu_\mathrm{B} = -\frac{n_\mathrm{A}}{n_\mathrm{B}} \, d\mu_\mathrm{A} $$

where nA is the number of moles of A and nB is the number of moles of B. Every instance of phase or chemical equilibrium is characterized by a constant. For instance, the melting of ice is characterized by a temperature, known as the melting point, at which solid and liquid phases are in equilibrium with each other. Chemical potentials can be used to explain the slopes of lines on a phase diagram by using the Clapeyron equation, which in turn can be derived from the Gibbs–Duhem equation. They are used to explain colligative properties such as melting-point depression by the application of pressure. Henry's law for the solute can be derived from Raoult's law for the solvent using chemical potentials.
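A short numerical sketch of how the Gibbs–Duhem relation fixes one chemical potential from the other in a binary mixture. For the demonstration, species A is assumed to behave ideally; integrating dμB = −(nA/nB) dμA then recovers the ideal-solution form of μB, which serves as a consistency check:

```python
import numpy as np

R, T = 8.314, 298.15  # gas constant J/(mol·K), temperature K

# Binary mixture A-B at constant T and P; x_B = 1 - x_A.
x_a = np.linspace(0.99, 0.50, 2000)
x_b = 1.0 - x_a

# Assumed input (a sketch, not data): A behaves ideally,
# mu_A = mu_A0 + R*T*ln(x_A), with mu_A0 = 0 taken as the reference.
mu_a = R * T * np.log(x_a)

# Gibbs-Duhem at constant T, P:  x_A dmu_A + x_B dmu_B = 0
dmu_a = np.diff(mu_a)
dmu_b = -(x_a[:-1] / x_b[:-1]) * dmu_a

# Integrate to get mu_B relative to its value at x_A = 0.99
mu_b = np.concatenate(([0.0], np.cumsum(dmu_b)))

# For an ideal A this must reproduce R*T*ln(x_B / x_B_initial):
expected = R * T * np.log(x_b / x_b[0])
print(np.max(np.abs(mu_b - expected)))  # small discretization error
```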
History
Chemical potential was first described by the American engineer, chemist and mathematical physicist Josiah Willard Gibbs, who defined it as follows: "If to any homogeneous mass in a state of hydrostatic stress we suppose an infinitesimal quantity of any substance to be added, the mass remaining homogeneous and its entropy and volume remaining unchanged, the increase of the energy of the mass divided by the quantity of the substance added is the potential for that substance in the mass considered."
Gibbs later noted also that for the purposes of this definition, any chemical element or combination of elements in given proportions may be considered a substance, whether capable or not of existing by itself as a homogeneous body. This freedom to choose the boundary of the system allows the chemical potential to be applied to a huge range of systems. The term can be used in thermodynamics and physics for any system undergoing change. Chemical potential is also referred to as partial molar Gibbs energy (see also partial molar property). Chemical potential is measured in units of energy/particle or, equivalently, energy/mole.
In his 1873 paper A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, Gibbs introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e. bodies being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume–entropy–internal energy graph, Gibbs was able to determine three states of equilibrium, i.e. "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential so as to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words from the aforementioned paper, Gibbs states: "If we wish to express in a single equation the necessary and sufficient condition of thermodynamic equilibrium for a substance when surrounded by a medium of constant pressure P and temperature T, this equation may be written: δ(ε − Tη + Pν) = 0, when δ refers to the variation produced by any variations in the state of the parts of the body, and (when different parts of the body are in different states) in the proportion in which the body is divided between the different states."
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body.
Electrochemical, internal, external, and total chemical potential
The abstract definition of chemical potential given above (total change in free energy per extra mole of substance) is more specifically called total chemical potential. If two locations have different total chemical potentials for a species, some of the difference may be due to potentials associated with "external" force fields (electric potential energy, gravitational potential energy, etc.), while the rest would be due to "internal" factors (density, temperature, etc.). Therefore, the total chemical potential can be split into internal chemical potential and external chemical potential:

$$ \mu_\text{tot} = \mu_\text{int} + \mu_\text{ext} $$
where

$$ \mu_\text{ext} = qV_\text{ele} + mgh + \cdots $$

i.e., the external potential is the sum of electric potential, gravitational potential, etc. (where q and m are the charge and mass of the species, Vele and h are the electric potential and height of the container, respectively, and g is the acceleration due to gravity). The internal chemical potential includes everything else besides the external potentials, such as density, temperature, and enthalpy. This formalism can be understood by assuming that the total energy of a system, U, is the sum of two parts: an internal energy, Uint, and an external energy due to the interaction of each particle with an external field, Uext = N(qVele + mgh). The definition of chemical potential applied to Uint + Uext yields the above expression for μtot.
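A minimal sketch of this split for a single ion; the charge number, mass (roughly that of a sodium ion), potential and height below are illustrative assumptions:

```python
ELEMENTARY_CHARGE = 1.602e-19  # C
G = 9.81                       # standard gravity, m/s^2

def total_chemical_potential(mu_int, charge_number, v_ele, mass, height):
    """mu_tot = mu_int + q*V_ele + m*g*h, all in joules per particle."""
    q = charge_number * ELEMENTARY_CHARGE
    return mu_int + q * v_ele + mass * G * height

# Hypothetical singly charged ion of mass ~3.8e-26 kg (about Na+),
# sitting at 0.1 V electric potential and 1 m height
mu = total_chemical_potential(mu_int=0.0, charge_number=1,
                              v_ele=0.1, mass=3.8e-26, height=1.0)
print(mu)  # ~1.6e-20 J: the electric term dwarfs the gravitational one
```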
The phrase "chemical potential" sometimes means "total chemical potential", but that is not universal. In some fields, in particular electrochemistry, semiconductor physics, and solid-state physics, the term "chemical potential" means internal chemical potential, while the term electrochemical potential is used to mean total chemical potential.
Systems of particles
Electrons in solids
Electrons in solids have a chemical potential, defined the same way as the chemical potential of a chemical species: The change in free energy when electrons are added or removed from the system. In the case of electrons, the chemical potential is usually expressed in energy per particle rather than energy per mole, and the energy per particle is conventionally given in units of electronvolt (eV).
Chemical potential plays an especially important role in solid-state physics and is closely related to the concepts of work function, Fermi energy, and Fermi level. For example, n-type silicon has a higher internal chemical potential of electrons than p-type silicon. In a p–n junction diode at equilibrium the chemical potential (internal chemical potential) varies from the p-type to the n-type side, while the total chemical potential (electrochemical potential, or, Fermi level) is constant throughout the diode.
As described above, when describing chemical potential, one has to say "relative to what". In the case of electrons in semiconductors, internal chemical potential is often specified relative to some convenient point in the band structure, e.g., the bottom of the conduction band. It may also be specified "relative to vacuum", to yield a quantity known as the work function; however, the work function varies from surface to surface even on a completely homogeneous material. Total chemical potential, on the other hand, is usually specified relative to electrical ground.
In atomic physics, the chemical potential of the electrons in an atom is sometimes said to be the negative of the atom's electronegativity. Likewise, the process of chemical potential equalization is sometimes referred to as the process of electronegativity equalization. This connection comes from the Mulliken electronegativity scale. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is seen that the Mulliken chemical potential is a finite difference approximation of the electronic energy with respect to the number of electrons N, i.e.,

$$ \mu_\text{Mulliken} = -\chi_\text{Mulliken} = -\frac{IP + EA}{2} \approx \frac{E(N+1) - E(N-1)}{2} $$
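The finite-difference form is straightforward to evaluate; the sketch below uses approximate textbook ionization-energy and electron-affinity values for chlorine:

```python
def mulliken_chemical_potential(ionization_energy_ev, electron_affinity_ev):
    """Finite-difference chemical potential of the electrons in an atom.

    mu ≈ [E(N+1) - E(N-1)] / 2 = -(IP + EA) / 2, i.e. the negative of
    the Mulliken electronegativity. Inputs in eV; output in eV.
    """
    return -(ionization_energy_ev + electron_affinity_ev) / 2.0

# Chlorine: IP ≈ 12.97 eV, EA ≈ 3.61 eV (approximate textbook values)
print(mulliken_chemical_potential(12.97, 3.61))  # ≈ -8.29 eV
```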
Sub-nuclear particles
In recent years, thermal physics has applied the definition of chemical potential to systems in particle physics and its associated processes. For example, in a quark–gluon plasma or other QCD matter, at every point in space there is a chemical potential for photons, a chemical potential for electrons, a chemical potential for baryon number, electric charge, and so forth.
In the case of photons, photons are bosons and can very easily and rapidly appear or disappear. Therefore, at thermodynamic equilibrium, the chemical potential of photons is in most physical situations always and everywhere zero. The reason is that if the chemical potential somewhere were higher than zero, photons would spontaneously disappear from that area until the chemical potential went back to zero; likewise, if the chemical potential somewhere were less than zero, photons would spontaneously appear until the chemical potential went back to zero. Since this process occurs extremely rapidly (at least in the presence of dense charged matter, or in the walls of the textbook example of a photon gas of blackbody radiation), it is safe to assume that the photon chemical potential here is never different from zero. A physical situation where the chemical potential for photons can differ from zero is a material-filled optical microcavity, with spacings between cavity mirrors in the wavelength regime. In such two-dimensional cases, photon gases with tuneable chemical potential, reminiscent of gases of material particles, can be observed.
Electric charge is different because it is intrinsically conserved, i.e. it can be neither created nor destroyed. It can, however, diffuse. The "chemical potential of electric charge" controls this diffusion: Electric charge, like anything else, will tend to diffuse from areas of higher chemical potential to areas of lower chemical potential. Other conserved quantities like baryon number are the same. In fact, each conserved quantity is associated with a chemical potential and a corresponding tendency to diffuse to equalize it out.
In the case of electrons, the behaviour depends on temperature and context. At low temperatures, with no positrons present, electrons cannot be created or destroyed. Therefore, there is an electron chemical potential that might vary in space, causing diffusion. At very high temperatures, however, electrons and positrons can spontaneously appear out of the vacuum (pair production), so the chemical potential of electrons by themselves becomes a less useful quantity than the chemical potential of the conserved quantities like (electrons minus positrons).
The chemical potentials of bosons and fermions are related to the number of particles and the temperature by Bose–Einstein statistics and Fermi–Dirac statistics, respectively.
Ideal vs. non-ideal solutions
Generally the chemical potential is given as a sum of an ideal contribution and an excess contribution:

$$ \mu_i = \mu_i^\text{ideal} + \mu_i^\text{excess} $$
In an ideal solution, the chemical potential of species i (μi) is dependent on temperature, pressure and composition. μi0(T, P) is defined as the chemical potential of pure species i. Given this definition, the chemical potential of species i in an ideal solution is

$$ \mu_i^\text{ideal} = \mu_{i0}(T,P) + RT \ln x_i $$

where R is the gas constant and xi is the mole fraction of species i contained in the solution. The chemical potential becomes negative infinity when xi → 0, but this does not lead to nonphysical results, because xi = 0 means that species i is not present in the system.
This equation assumes that μi depends only on the mole fraction (xi) contained in the solution. This neglects intermolecular interactions between species i and itself and with other species [i–(j≠i)]. This can be corrected for by factoring in the activity coefficient of species i, defined as γi. This correction yields

$$ \mu_i = \mu_{i0}(T,P) + RT \ln(\gamma_i x_i) $$
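A small sketch of the ideal and activity-corrected forms side by side; the mole fraction and the activity coefficient below are assumed purely for illustration:

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def chemical_potential(mu_pure, temperature, mole_fraction, activity_coeff=1.0):
    """mu_i = mu_i0(T, P) + R*T*ln(gamma_i * x_i), in J/mol.

    With activity_coeff = 1 this reduces to the ideal-solution form.
    """
    if mole_fraction <= 0.0:
        raise ValueError("species i is not present in the system")
    return mu_pure + R * temperature * math.log(activity_coeff * mole_fraction)

# Assumed values: x_i = 0.25 at 298.15 K; gamma_i = 1.3 for the non-ideal case
print(chemical_potential(0.0, 298.15, 0.25))       # ideal:     ≈ -3437 J/mol
print(chemical_potential(0.0, 298.15, 0.25, 1.3))  # non-ideal: ≈ -2786 J/mol
```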
See also
Chemical equilibrium
Electrochemical potential
Equilibrium chemistry
Excess chemical potential
Fugacity
Partial molar property
Thermodynamic activity
Thermodynamic equilibrium
Sources
Citations
References
External links
Physical chemistry
Potentials
Chemical thermodynamics
Thermodynamic properties
Chemical engineering thermodynamics | Chemical potential | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 3,576 | [
"Thermodynamic properties",
"Applied and interdisciplinary physics",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Thermodynamics",
"nan",
"Chemical engineering thermodynamics",
"Chemical thermodynamics",
"Physical chemistry"
] |
219,021 | https://en.wikipedia.org/wiki/Capillary%20action | Capillary action (sometimes called capillarity, capillary motion, capillary rise, capillary effect, or wicking) is the process of a liquid flowing in a narrow space without the assistance of external forces like gravity.
The effect can be seen in the drawing up of liquids between the hairs of a paint-brush, in a thin tube such as a straw, in porous materials such as paper and plaster, in some non-porous materials such as clay and liquefied carbon fiber, or in a biological cell.
It occurs because of intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of surface tension (which is caused by cohesion within the liquid) and adhesive forces between the liquid and container wall act to propel the liquid.
Etymology
Capillary comes from the Latin word capillaris, meaning "of or resembling hair". The meaning stems from the tiny, hairlike diameter of a capillary.
History
The first recorded observation of capillary action was by Leonardo da Vinci. A former student of Galileo, Niccolò Aggiunti, was said to have investigated capillary action. In 1660, capillary action was still a novelty to the Irish chemist Robert Boyle, when he reported that "some inquisitive French Men" had observed that when a capillary tube was dipped into water, the water would ascend to "some height in the Pipe". Boyle then reported an experiment in which he dipped a capillary tube into red wine and then subjected the tube to a partial vacuum. He found that the vacuum had no observable influence on the height of the liquid in the capillary, so the behavior of liquids in capillary tubes was due to some phenomenon different from that which governed mercury barometers.
Others soon followed Boyle's lead. Some (e.g., Honoré Fabri, Jacob Bernoulli) thought that liquids rose in capillaries because air could not enter capillaries as easily as liquids, so the air pressure was lower inside capillaries. Others (e.g., Isaac Vossius, Giovanni Alfonso Borelli, Louis Carré, Francis Hauksbee, Josia Weitbrecht) thought that the particles of liquid were attracted to each other and to the walls of the capillary.
Although experimental studies continued during the 18th century, a successful quantitative treatment of capillary action was not attained until 1805 by two investigators: Thomas Young of the United Kingdom and Pierre-Simon Laplace of France. They derived the Young–Laplace equation of capillary action. By 1830, the German mathematician Carl Friedrich Gauss had determined the boundary conditions governing capillary action (i.e., the conditions at the liquid-solid interface). In 1871, the British physicist Sir William Thomson (later Lord Kelvin) determined the effect of the meniscus on a liquid's vapor pressure—a relation known as the Kelvin equation. German physicist Franz Ernst Neumann (1798–1895) subsequently determined the interaction between two immiscible liquids.
Albert Einstein's first paper, which was submitted to Annalen der Physik in 1900, was on capillarity.
Phenomena and physics
Capillary penetration in porous media shares its dynamic mechanism with flow in hollow tubes, as both processes are resisted by viscous forces. Consequently, a common apparatus used to demonstrate the phenomenon is the capillary tube. When the lower end of a glass tube is placed in a liquid, such as water, a concave meniscus forms. Adhesion occurs between the fluid and the solid inner wall pulling the liquid column along until there is a sufficient mass of liquid for gravitational forces to overcome these intermolecular forces. The contact length (around the edge) between the top of the liquid column and the tube is proportional to the radius of the tube, while the weight of the liquid column is proportional to the square of the tube's radius. So, a narrow tube will draw a liquid column along further than a wider tube will, given that the inner water molecules cohere sufficiently to the outer ones.
Examples
In the built environment, evaporation limited capillary penetration is responsible for the phenomenon of rising damp in concrete and masonry, while in industry and diagnostic medicine this phenomenon is increasingly being harnessed in the field of paper-based microfluidics.
In physiology, capillary action is essential for the drainage of continuously produced tear fluid from the eye. Two canaliculi of tiny diameter are present in the inner corner of the eyelid, also called the lacrimal ducts; their openings can be seen with the naked eye within the lacrymal sacs when the eyelids are everted.
Wicking is the absorption of a liquid by a material in the manner of a candle wick.
Paper towels absorb liquid through capillary action, allowing a fluid to be transferred from a surface to the towel. The small pores of a sponge act as small capillaries, causing it to absorb a large amount of fluid. Some textile fabrics are said to use capillary action to "wick" sweat away from the skin. These are often referred to as wicking fabrics, after the capillary properties of candle and lamp wicks.
Capillary action is observed in thin layer chromatography, in which a solvent moves vertically up a plate via capillary action. In this case the pores are gaps between very small particles.
Capillary action draws ink to the tips of fountain pen nibs from a reservoir or cartridge inside the pen.
With some pairs of materials, such as mercury and glass, the intermolecular forces within the liquid exceed those between the solid and the liquid, so a convex meniscus forms and capillary action works in reverse.
In hydrology, capillary action describes the attraction of water molecules to soil particles. Capillary action is responsible for moving groundwater from wet areas of the soil to dry areas. Differences in soil potential () drive capillary action in soil.
A practical application of capillary action is the capillary action siphon. Instead of utilizing a hollow tube (as in most siphons), this device consists of a length of cord made of a fibrous material (cotton cord or string works well). After saturating the cord with water, one (weighted) end is placed in a reservoir full of water, and the other end placed in a receiving vessel. The reservoir must be higher than the receiving vessel. A related but simplified capillary siphon only consists of two hook-shaped stainless-steel rods, whose surface is hydrophilic, allowing water to wet the narrow grooves between them. Due to capillary action and gravity, water will slowly transfer from the reservoir to the receiving vessel. This simple device can be used to water houseplants when nobody is home. This property is also made use of in the lubrication of steam locomotives: wicks of worsted wool are used to draw oil from reservoirs into delivery pipes leading to the bearings.
In plants and animals
Capillary action is seen in many plants and plays a part in transpiration. Water is brought high up in trees by branching; by evaporation at the leaves, which creates depressurization; probably by osmotic pressure added at the roots; and possibly at other locations inside the plant, especially when gathering humidity with air roots.
Capillary action for uptake of water has been described in some small animals, such as Ligia exotica and Moloch horridus.
Height of a meniscus
Capillary rise of liquid in a capillary
The height h of a liquid column is given by Jurin's law:

$$ h = \frac{2\gamma \cos\theta}{\rho g r} $$
where γ is the liquid-air surface tension (force/unit length), θ is the contact angle, ρ is the density of the liquid (mass/volume), g is the local acceleration due to gravity (length/square of time), and r is the radius of the tube.
As r is in the denominator, the thinner the space in which the liquid can travel, the further up it goes. Likewise, lighter liquid and lower gravity increase the height of the column.
For a water-filled glass tube in air at standard laboratory conditions at 20 °C, γ ≈ 0.0728 N/m, ρ ≈ 1000 kg/m³, and g ≈ 9.81 m/s². Because water spreads on clean glass, the effective equilibrium contact angle is approximately zero. For these values, the height of the water column is

$$ h \approx \frac{1.48 \times 10^{-5}\ \mathrm{m^2}}{r} $$
Thus for a 2 m radius glass tube in the lab conditions given above, the water would rise an unnoticeable 0.007 mm. However, for a 2 cm radius tube, the water would rise 0.7 mm, and for a 0.2 mm radius tube, the water would rise about 74 mm.
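Jurin's law is simple to evaluate numerically; the sketch below reproduces the three heights quoted above from the laboratory values for water on clean glass:

```python
import math

def jurin_height(surface_tension, contact_angle_deg, density, radius, g=9.81):
    """Capillary rise h = 2*gamma*cos(theta) / (rho*g*r), SI units."""
    theta = math.radians(contact_angle_deg)
    return 2.0 * surface_tension * math.cos(theta) / (density * g * radius)

# Water on clean glass at 20 °C: gamma ≈ 0.0728 N/m, theta ≈ 0, rho ≈ 1000 kg/m³
for r in (2.0, 0.02, 0.0002):  # tube radii in metres
    print(f"r = {r:g} m -> h = {jurin_height(0.0728, 0.0, 1000.0, r):.1e} m")
# r = 2 m      -> h ≈ 7.4e-06 m (0.007 mm)
# r = 0.02 m   -> h ≈ 7.4e-04 m (0.7 mm)
# r = 0.0002 m -> h ≈ 7.4e-02 m (74 mm)
```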
Capillary rise of liquid between two glass plates
The product of layer thickness (d) and elevation height (h) is constant (d·h = constant); the two quantities are inversely proportional. The surface of the liquid between the planes is a hyperbola.
Liquid transport in porous media
When a dry porous medium is brought into contact with a liquid, it will absorb the liquid at a rate which decreases over time. When evaporation is considered, liquid penetration will reach a limit dependent on parameters of temperature, humidity and permeability. This process is known as evaporation-limited capillary penetration and is widely observed in common situations including fluid absorption into paper and rising damp in concrete or masonry walls. For a bar-shaped section of material with cross-sectional area A that is wetted on one end, the cumulative volume V of absorbed liquid after a time t is

$$ V = AS\sqrt{t} $$
where S is the sorptivity of the medium, in units of m·s−1/2 or mm·min−1/2. This time-dependence relation is similar to Washburn's equation for the wicking in capillaries and porous media. The quantity

$$ i = \frac{V}{A} = S\sqrt{t} $$

is called the cumulative liquid intake, with the dimension of length. The wetted length of the bar, that is, the distance between the wetted end of the bar and the so-called wet front, is dependent on the fraction f of the volume occupied by voids. This number f is the porosity of the medium; the wetted length is then

$$ x = \frac{i}{f} = \frac{S\sqrt{t}}{f} $$
Some authors use the quantity S/f as the sorptivity.
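A sketch of these relations in code; the sorptivity, cross-section, and porosity are assumed for illustration and are not taken from measured data:

```python
import math

def absorbed_volume(sorptivity, area, time):
    """Cumulative absorbed volume V = A * S * sqrt(t), SI units."""
    return area * sorptivity * math.sqrt(time)

def wetted_length(sorptivity, porosity, time):
    """Distance of the wet front from the wetted end: x = S*sqrt(t)/f."""
    return sorptivity * math.sqrt(time) / porosity

# Assumed values: S = 1.0 mm·min^-1/2, cross-section 50 cm², porosity f = 0.25
S = 1.0e-3 / math.sqrt(60.0)  # convert mm·min^-1/2 to m·s^-1/2
t = 3600.0                    # one hour, in seconds
print(absorbed_volume(S, 50e-4, t) * 1e6, "mL")  # ≈ 39 mL after one hour
print(wetted_length(S, 0.25, t) * 1e3, "mm")     # wet front ≈ 31 mm in
```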
The above description is for the case where gravity and evaporation do not play a role.
Sorptivity is a relevant property of building materials, because it affects the amount of rising damp.
See also
References
Further reading
Fluid dynamics
Hydrology
Surface science
Porous media | Capillary action | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Environmental_science"
] | 2,189 | [
"Hydrology",
"Porous media",
"Chemical engineering",
"Materials science",
"Surface science",
"Condensed matter physics",
"Environmental engineering",
"Piping",
"Fluid dynamics"
] |
219,072 | https://en.wikipedia.org/wiki/Risk%20assessment | Risk assessment determines possible mishaps, their likelihood and consequences, and the tolerances for such events.
The results of this process may be expressed in a quantitative or qualitative fashion. Risk assessment is an inherent part of a broader risk management strategy to help reduce any potential risk-related consequences.
More precisely, risk assessment identifies and analyses potential (future) events that may negatively impact individuals, assets, and/or the environment (i.e. hazard analysis). It also makes judgments "on the tolerability of the risk on the basis of a risk analysis" while considering influencing factors (i.e. risk evaluation).
Categories
Individual risk assessment
Risk assessments can be done in individual cases, including in patient and physician interactions. In the narrow sense, chemical risk assessment is the assessment of a health risk in response to environmental exposures.
The way statistics are expressed and communicated to an individual, both through words and numbers, impacts his or her interpretation of benefit and harm. For example, a fatality rate may be interpreted as less benign than the corresponding survival rate.
A systematic review of patients and doctors from 2017 found that overstatement of benefits and understatement of risks occurred more often than the alternative.
A systematic review from the Cochrane collaboration suggested "well-documented decision aids" are helpful in reducing effects of such tendencies or biases. Aids may help people come to a decision about their care based on evidence informed information that align with their values. Decision aids may also help people understand the risks more clearly, and they empower people to take an active role when making medical decisions. The systematic review did not find a difference in people who regretted their decisions between those who used decision aids and those who had the usual standard treatment.
An individual's own risk perception may be affected by psychological, ideological, religious or otherwise subjective factors, which impact the rationality of the process. Individuals tend to be less rational when risks and exposures concern themselves as opposed to others. There is also a tendency to underestimate risks that are voluntary or where the individual sees themselves as being in control, such as smoking.
Systems risk assessment
Risk assessment can also be made on a much larger systems-theory scale, for example assessing the risks of an ecosystem, of an interactively complex mechanical, electronic, nuclear, or biological system, or of a hurricane (a complex meteorological and geographical system). Systems may be defined as linear or nonlinear (complex): linear systems are predictable and relatively easy to understand given a change in input, while nonlinear systems are unpredictable when inputs are changed. As such, risk assessments of nonlinear/complex systems tend to be more challenging.
In the engineering of complex systems, sophisticated risk assessments are often made within safety engineering and reliability engineering when it concerns threats to life, natural environment, or machine functioning. The agriculture, nuclear, aerospace, oil, chemical, railroad, and military industries have a long history of dealing with risk assessment. Also, medical, hospital, social service, and food industries control risks and perform risk assessments on a continual basis. Methods for assessment of risk may differ between industries and whether it pertains to general financial decisions or environmental, ecological, or public health risk assessment.
Concept
Rapid technological change, increasing scale of industrial complexes, increased system integration, market competition, and other factors have been shown to increase societal risk in the past few decades. As such, risk assessments become increasingly critical in mitigating accidents, improving safety, and improving outcomes. Risk assessment consists of an objective evaluation of risk in which assumptions and uncertainties are clearly considered and presented. This involves identification of risk (what can happen and why), the potential consequences, the probability of occurrence, the tolerability or acceptability of the risk, and ways to mitigate or reduce the probability of the risk. Optimally, it also involves documentation of the risk assessment and its findings, implementation of mitigation methods, and review of the assessment (or risk management plan), coupled with updates when necessary. Sometimes risks can be deemed acceptable, meaning the risk "is understood and tolerated ... usually because the cost or difficulty of implementing an effective countermeasure for the associated vulnerability exceeds the expectation of loss."
Mild versus wild risk
Benoit Mandelbrot distinguished between "mild" and "wild" risk and argued that risk assessment and risk management must be fundamentally different for the two types of risk. Mild risk follows normal or near-normal probability distributions, is subject to regression to the mean and the law of large numbers, and is therefore relatively predictable. Wild risk follows fat-tailed distributions, e.g., Pareto or power-law distributions, is subject to regression to the tail (infinite mean or variance, rendering the law of large numbers invalid or ineffective), and is therefore difficult or impossible to predict. A common error in risk assessment and management is to underestimate the wildness of risk, assuming risk to be mild when in fact it is wild, which must be avoided if risk assessment and management are to be valid and reliable, according to Mandelbrot.
Mathematical conceptualization
To see the risk management process expressed mathematically, one can define expected risk as the sum over individual risks Ri, each of which can be computed as the product of a potential loss Li and its probability pi:

$$ R = \sum_i R_i = \sum_i p_i L_i $$

Even though for some pair of risks Ri = Rj, we might have pi ≪ pj; because the probability pi is small compared to pj, its estimation might be based only on a smaller number of prior events, and hence be more uncertain. On the other hand, since piLi = pjLj, Li must be larger than Lj, so decisions based on this uncertainty would be more consequential and hence warrant a different approach.
This becomes important when we consider the variance of risk,

$$ \operatorname{Var}(R_i) = p_i (1 - p_i) L_i^2 \approx p_i L_i^2 \quad \text{for } p_i \ll 1, $$

as a large Li dominates the value.
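A minimal sketch contrasting a frequent small loss with a rare large one: both contribute equally to the expected risk, but the rare loss dominates the variance. The probabilities and loss amounts are illustrative only:

```python
# Bernoulli-style loss events: each occurs with probability p, costing L.
risks = [
    {"p": 0.10,   "loss": 1_000.0},      # frequent, small loss
    {"p": 0.0001, "loss": 1_000_000.0},  # rare, large loss
]

expected = sum(r["p"] * r["loss"] for r in risks)
variance = sum(r["p"] * (1 - r["p"]) * r["loss"] ** 2 for r in risks)

print(expected)  # 200.0 -- each event contributes 100 to the expected risk
print(variance)  # ~1.0e8 -- dominated almost entirely by the rare large loss
```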
Financial decisions, such as insurance, express loss in terms of dollar amounts. When risk assessment is used for public health or environmental decisions, the loss can be quantified in a common metric such as a country's currency or some numerical measure of a location's quality of life. For public health and environmental decisions, the loss is sometimes simply a verbal description of the outcome, such as increased cancer incidence or incidence of birth defects. In that case, the "risk" is expressed as the probability of that outcome occurring.
If the risk estimate takes into account information on the number of individuals exposed, it is termed a "population risk" and is in units of expected increased cases per time period. If the risk estimate does not take into account the number of individuals exposed, it is termed an "individual risk" and is in units of incidence rate per time period. Population risks are of more use for cost/benefit analysis; individual risks are of more use for evaluating whether risks to individuals are "acceptable".
Quantitative risk assessment
In quantitative risk assessment, an annualized loss expectancy (ALE) may be used to justify the cost of implementing countermeasures to protect an asset. This may be calculated by multiplying the single loss expectancy (SLE), which is the loss of value based on a single security incident, with the annualized rate of occurrence (ARO), which is an estimate of how often a threat would be successful in exploiting a vulnerability.
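A minimal sketch of the ALE calculation; the asset value, exposure factor, and annualized rate of occurrence are hypothetical:

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """ALE = SLE * ARO, where SLE = asset value * exposure factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate

# Hypothetical: a $200,000 asset losing 25% of its value per incident,
# with one incident expected every two years (ARO = 0.5)
print(annualized_loss_expectancy(200_000, 0.25, 0.5))
# 25000.0 -> a countermeasure costing less than this per year may be justified
```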
The usefulness of quantitative risk assessment has been questioned, however. Barry Commoner, Brian Wynne and other critics have expressed concerns that risk assessment tends to be overly quantitative and reductive. For example, they argue that risk assessments ignore qualitative differences among risks. Some charge that assessments may drop out important non-quantifiable or inaccessible information, such as variations among the classes of people exposed to hazards, or social amplification. Furthermore, Commoner and O'Brien claim that quantitative approaches divert attention from precautionary or preventative measures. Others, like Nassim Nicholas Taleb consider risk managers little more than "blind users" of statistical tools and methods.
Process
Older textbooks distinguish between the term risk analysis and risk evaluation;
a risk analysis includes the following 4 steps:
Establish the context. This restricts the range of hazards to be considered. It is also necessary to identify the potential parties or assets which may be affected by the threat, and the potential consequences to them if the hazard is activated.
Hazard identification. An identification of visible and implied hazards and a determination of the qualitative nature of the potential adverse consequences of each hazard. Without a potential adverse consequence, there is no hazard.
Frequency analysis. If a consequence is dependent on dose, i.e. the amount of exposure, the relationship between dose and severity of consequence must be established, and the risk depends on the probable dose, which may depend on concentration or amplitude and duration or frequency of exposure. This is the general case for many health hazards where the mechanism of injury is toxicity or repetitive injury, particularly where the effect is cumulative.
Consequence analysis. For other hazards, the consequences may either occur or not, and the severity may be extremely variable even when the triggering conditions are the same. This is typical of many biological hazards as well as a large range of safety hazards. Exposure to a pathogen may or may not result in actual infection, and the consequences of infection may also be variable. Similarly, a fall from the same place may result in minor injury or death, depending on unpredictable details. In these cases, estimates must be made of reasonably likely consequences and the associated probability of occurrence.
A risk evaluation means that judgements are made on the tolerability of the identified risks, leading to risk acceptance. When risk analysis and risk evaluation are made at the same time, it is called risk assessment.
As of 2023, chemical risk assessment follows these 4 steps:
hazard characterization
exposure assessment
dose-response modeling
risk characterization.
There is tremendous variability in the dose-response relationship between a chemical and human health outcome in particularly susceptible subgroups, such as pregnant women, developing fetuses, children up to adolescence, people with low socioeconomic status, those with preexisting diseases, disabilities, genetic susceptibility, and those with other environmental exposures.
The process of risk assessment may be somewhat informal at the individual social level, assessing economic and household risks, or a sophisticated process at the strategic corporate level. However, in both cases, ability to anticipate future events and create effective strategies for mitigating them when deemed unacceptable is vital.
At the individual level, identifying objectives and risks, weighing their importance, and creating plans, may be all that is necessary.
At the strategic organisational level, more elaborate policies are necessary, specifying acceptable levels of risk, procedures to be followed within the organisation, priorities, and allocation of resources.
At the strategic corporate level, management involved with the project produce project level risk assessments with the assistance of the available expertise as part of the planning process and set up systems to ensure that required actions to manage the assessed risk are in place. At the dynamic level, the personnel directly involved may be required to deal with unforeseen problems in real time. The tactical decisions made at this level should be reviewed after the operation to provide feedback on the effectiveness of both the planned procedures and decisions made in response to the contingency.
Dose dependent risk
Dose-response analysis determines the relationship between dose and the type of adverse response and/or the probability or incidence of effect (dose-response assessment). The complexity of this step in many contexts derives mainly from the need to extrapolate results from experimental animals (e.g. mouse, rat) to humans, and/or from high to lower doses, including from high acute occupational levels to low chronic environmental levels. In addition, the differences between individuals due to genetics or other factors mean that the hazard may be higher for particular groups, called susceptible populations. An alternative to dose-response estimation is to determine a concentration unlikely to yield observable effects, that is, a no-effect concentration. In developing such a dose, to account for the largely unknown effects of animal-to-human extrapolations, increased variability in humans, or missing data, a prudent approach is often adopted by including safety or uncertainty factors in the estimate of the "safe" dose, typically a factor of 10 for each unknown step.
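A sketch of this prudent-dose arithmetic, with a hypothetical animal no-effect dose and two unknown extrapolation steps assumed:

```python
def safe_dose_estimate(no_effect_dose, n_unknown_steps, factor=10.0):
    """Divide the no-observed-effect dose by a safety factor
    (typically 10) for each unknown extrapolation step."""
    return no_effect_dose / (factor ** n_unknown_steps)

# Hypothetical animal NOAEL of 50 mg/kg/day with two unknown steps
# (animal-to-human extrapolation and human variability)
print(safe_dose_estimate(50.0, 2))  # 0.5 mg/kg/day
```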
Exposure quantification aims to determine the amount of a contaminant (dose) that individuals and populations will receive, either as a contact level (e.g., concentration in ambient air) or as intake (e.g., daily dose ingested from drinking water). This is done by examining the results of the discipline of exposure assessment. As location, lifestyle, and other factors likely influence the amount of contaminant that is received, a range or distribution of possible values is generated in this step. Particular care is taken to determine the exposure of the susceptible population(s).
The results of these steps are combined to produce an estimate of risk. Because of the different susceptibilities and exposures, this risk will vary within a population. An uncertainty analysis is usually included in a health risk assessment.
Dynamic risk assessment
During an emergency response, the situation and hazards are often inherently less predictable than for planned activities (non-linear). In general, if the situation and hazards are predictable (linear), standard operating procedures should deal with them adequately. In some emergencies, this may also hold true, with the preparation and trained responses being adequate to manage the situation. In these situations, the operator can manage risk without outside assistance, or with the assistance of a backup team who are prepared and available to step in at short notice.
Other emergencies occur where there is no previously planned protocol, or when an outsider group is brought in to handle the situation, and they are not specifically prepared for the scenario that exists but must deal with it without undue delay. Examples include police, fire department, disaster response, and other public service rescue teams. In these cases, ongoing risk assessment by the involved personnel can advise appropriate action to reduce risk. HM Fire Services Inspectorate has defined dynamic risk assessment (DRA) as "the continuous assessment of risk in the rapidly changing circumstances of an operational incident, in order to implement the control measures necessary to ensure an acceptable level of safety".
Dynamic risk assessment is the final stage of an integrated safety management system that can provide an appropriate response during changing circumstances. It relies on experience, training and continuing education, including effective debriefing to analyse not only what went wrong, but also what went right, and why, and to share this with other members of the team and the personnel responsible for the planning level risk assessment.
Fields of application
The application of risk assessment procedures is common in a wide range of fields, and these may have specific legal obligations, codes of practice, and standardised procedures. Some of these are listed here.
General human health
There are many resources that provide human health risk information:
The National Library of Medicine provides risk assessment and regulation information tools for a varied audience. These include:
TOXNET (databases on hazardous chemicals, environmental health, and toxic releases),
the Household Products Database (potential health effects of chemicals in over 10,000 common household products),
TOXMAP (maps of the U.S. Environmental Protection Agency Superfund and Toxics Release Inventory data).
The United States Environmental Protection Agency provides basic information about environmental health risk assessments for the public for a wide variety of possible environmental exposures.
The Environmental Protection Agency began actively using risk assessment methods to protect drinking water in the United States after the passage of the Safe Drinking Water Act of 1974. The law required the National Academy of Sciences to conduct a study on drinking water issues, and in its report, the NAS described some methodologies for doing risk assessments for chemicals that were suspected carcinogens, recommendations that top EPA officials have described as perhaps the study's most important part.
Considering the increase in junk food and its toxicity, the FDA required in 1973 that cancer-causing compounds must not be present in meat at concentrations that would cause a cancer risk greater than 1 in a million over a lifetime. The US Environmental Protection Agency provides extensive information about ecological and environmental risk assessments for the public via its risk assessment portal. The Stockholm Convention on persistent organic pollutants (POPs) supports a qualitative risk framework for public health protection from chemicals that display environmental and biological persistence, bioaccumulation, toxicity (PBT) and long-range transport; most global chemicals that meet this criterion have been previously assessed quantitatively by national and international health agencies.
For non-cancer health effects, the terms reference dose (RfD) or reference concentration (RfC) are used to describe the safe level of exposure in a dichotomous fashion. Newer ways of communicating the risk is the probabilistic risk assessment.
Small sub-populations
When risks apply mainly to small sub-populations, it can be difficult to determine when intervention is necessary. For example, there may be a risk that is very low for everyone, other than 0.1% of the population. It is necessary to determine whether this 0.1% is represented by:
all infants younger than X days or
recreational users of a particular product.
If the risk is higher for a particular sub-population because of abnormal exposure rather than susceptibility, strategies to further reduce the exposure of that subgroup are considered. If an identifiable sub-population is more susceptible due to inherent genetic or other factors, public policy choices must be made. The choices are:
to set policies for protecting the general population that are protective of such groups, e.g. for children when data exists, the Clean Air Act for populations such as asthmatics or
not to set policies, because the group is too small, or the costs too high.
Acceptable risk criteria
Acceptable risk is a risk that is understood and tolerated usually because the cost or difficulty of implementing an effective countermeasure for the associated vulnerability exceeds the expectation of loss.
The idea of not increasing lifetime risk by more than one in a million has become commonplace in public health discourse and policy. It is a heuristic measure. It provides a numerical basis for establishing a negligible increase in risk.
Environmental decision making allows some discretion for deeming individual risks potentially "acceptable" if less than one in ten thousand chance of increased lifetime risk. Low risk criteria such as these provide some protection for a case where individuals may be exposed to multiple chemicals e.g. pollutants, food additives, or other chemicals.
In practice, a true zero-risk is possible only with the suppression of the risk-causing activity.
Stringent requirements of 1 in a million may not be technologically feasible or may be so prohibitively expensive as to render the risk-causing activity unsustainable, resulting in the optimal degree of intervention being a balance between risks vs. benefit. For example, emissions from hospital incinerators result in a certain number of deaths per year. However, this risk must be balanced against the alternatives. There are public health risks, as well as economic costs, associated with all options. The risk associated with no incineration is the potential spread of infectious diseases or even no hospitals. Further investigation identifies options such as separating noninfectious from infectious wastes, or air pollution controls on a medical incinerator.
Intelligent thought about a reasonably full set of options is essential. Thus, it is not unusual for there to be an iterative process between analysis, consideration of options, and follow-up analysis.
Public health
In the context of public health, risk assessment is the process of characterizing the nature and likelihood of a harmful effect to individuals or populations from certain human activities. Health risk assessment can be mostly qualitative or can include statistical estimates of probabilities for specific populations. In most countries, the use of specific chemicals or the operations of specific facilities (e.g. power plants, manufacturing plants) is not allowed unless it can be shown that they do not increase the risk of death or illness above a specific threshold. For example, the American Food and Drug Administration (FDA) regulates food safety through risk assessment, while the EFSA does the same in EU.
An occupational risk assessment is an evaluation of how much potential danger a hazard can have to a person in a workplace environment. The assessment takes into account possible scenarios in addition to the probability of their occurrence and the results. The six types of hazards to be aware of are safety (those that can cause injury), chemicals, biological, physical, psychosocial (those that cause stress, harassment) and ergonomic (those that can cause musculoskeletal disorders). To appropriately access hazards there are two parts that must occur. Firstly, there must be an "exposure assessment" which measures the likelihood of worker contact and the level of contact. Secondly, a "risk characterization" must be made which measures the probability and severity of the possible health risks.
Human settlements
The importance of risk assessments for managing the consequences of climate change and variability is reaffirmed in the global frameworks for disaster risk reduction adopted by the member countries of the United Nations at the end of the World Conferences held in Kobe (2005) and Sendai (2015). The Sendai Framework for Disaster Risk Reduction brings attention to the local scale and encourages a holistic risk approach, which should consider all the hazards to which a community is exposed, the integration of technical-scientific knowledge with local knowledge, and the inclusion of the concept of risk in local plans to achieve a significant disaster reduction by 2030. Putting these principles into daily practice poses a challenge for many countries. The Sendai framework monitoring system highlights how little is known about the progress made from 2015 to 2019 in local disaster risk reduction.
Sub-Saharan Africa
As of 2019, in the region south of the Sahara, risk assessment is not yet an institutionalized practice. The exposure of human settlements to multiple hazards (hydrological and agricultural drought; pluvial, fluvial and coastal floods) is frequent and requires risk assessments on a regional, municipal, and sometimes individual human settlement scale. A multidisciplinary approach and the integration of local and technical-scientific knowledge are necessary from the first steps of the assessment. Local knowledge remains indispensable for understanding the hazards that threaten individual communities and the critical thresholds at which they turn into disasters, for the validation of hydraulic models, and in the decision-making process on risk reduction. On the other hand, local knowledge alone is not enough to understand the impacts of future changes and climatic variability, or to identify the areas exposed to infrequent hazards.
The availability of new technologies and open-access information (high-resolution satellite images, daily rainfall data) allows assessments today with an accuracy that only 10 years ago was unimaginable. Images taken by unmanned vehicle technologies make it possible to produce very high resolution digital elevation models and to accurately identify the receptors. Based on this information, hydraulic models allow the identification of flood areas with precision even at the scale of small settlements. Information on loss and damage and on cereal crops at the individual-settlement scale makes it possible to determine the level of multi-hazard risk on a regional scale. Multi-temporal high-resolution satellite images allow assessment of hydrological drought and of the dynamics of human settlements in the flood zone.
Risk assessment is more than an aid to informed decision making about risk reduction or acceptance. It integrates early warning systems by highlighting the hot spots where disaster prevention and preparedness are most urgent. When risk assessment considers the dynamics of exposure over time, it helps to identify risk reduction policies that are more appropriate to the local context.
Despite this potential, risk assessment is not yet integrated into local planning south of the Sahara, which, in the best of cases, uses only analyses of vulnerability to climate change and variability.
Auditing
For audits performed by an outside audit firm, risk assessment is a crucial stage before accepting an audit engagement. According to ISA 315, Understanding the Entity and its Environment and Assessing the Risks of Material Misstatement, "the auditor should perform risk assessment procedures to obtain an understanding of the entity and its environment, including its internal control". The auditor gathers evidence relating to the risk of material misstatement in the client's financial statements. The auditor then obtains initial evidence regarding the classes of transactions at the client and the operating effectiveness of the client's internal controls. Audit risk is defined as the risk that the auditor will issue a clean, unmodified opinion on the financial statements when in fact the financial statements are materially misstated and therefore do not qualify for such an opinion. As a formula, audit risk is the product of two other risks: the risk of material misstatement and detection risk. This formula can be further broken down as: inherent risk × control risk × detection risk.
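To make the multiplicative audit risk model concrete, here is a minimal sketch in Python; the component risk values are hypothetical illustrations, not figures from any auditing standard.

```python
def audit_risk(inherent: float, control: float, detection: float) -> float:
    """Audit risk model: audit risk = inherent x control x detection.

    Each component is a probability in [0, 1]. The risk of material
    misstatement is the product of inherent risk and control risk.
    """
    for name, r in (("inherent", inherent), ("control", control),
                    ("detection", detection)):
        if not 0.0 <= r <= 1.0:
            raise ValueError(f"{name} risk must be in [0, 1], got {r}")
    return inherent * control * detection

# Hypothetical values: risk of material misstatement = 0.6 * 0.5 = 0.30,
# so detection risk must be kept low to hold overall audit risk near 5%.
print(audit_risk(0.6, 0.5, 0.17))  # ~0.051
```

In practice the auditor fixes an acceptable overall audit risk and solves for the detection risk the audit procedures must achieve.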
Project management
In project management, risk assessment is an integral part of the risk management plan: it studies the probability, impact, and effect of every known risk on the project, as well as the corrective action to take should an incident implied by a risk occur. Of special consideration in this area are the relevant codes of practice enforced in the specific jurisdiction. Understanding the regulatory regime that risk management must abide by is integral to formulating safe and compliant risk assessment practices.
Information security
Information technology risk assessment can be performed by a qualitative or quantitative approach, following different methodologies. One important difference for risk assessments in information security is that the threat model must account for the fact that any adversarial system connected to the Internet can threaten any other connected system. Risk assessments may therefore need to be modified to account for threats from all adversaries, instead of just those with reasonable access, as is done in other fields.
NIST Definition: The process of identifying risks to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, other organizations, and the Nation, resulting from the operation of an information system. Part of risk management incorporates threat and vulnerability analyses and considers mitigations provided by security controls planned or in place.
There are various risk assessment methodologies and frameworks available which include NIST Risk Management Framework (RMF), Control Objectives for Information and Related Technologies (COBIT), Factor Analysis of Information Risk (FAIR), Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE), The Center for Internet Security Risk Assessment Method (CIS RAM), and The Duty of Care Risk Analysis (DoCRA) Standard, which helps define 'reasonable' security.
Cybersecurity
The Threat and Risk Assessment (TRA) process is the part of risk management concerned with risks related to cyber threats. The TRA process identifies cyber risks, assesses their severity, and may recommend activities to reduce the risks to an acceptable level.
There are different methodologies for performing TRA (e.g., the Harmonized TRA Methodology), but all utilize the following elements: identification of assets (what should be protected); identification and assessment of the threats and vulnerabilities affecting those assets; determination of the exploitability of the vulnerabilities; determination of the levels of risk associated with the vulnerabilities (the implications if the assets were damaged or lost); and recommendation of a risk mitigation program.
Megainvestment projects
Megaprojects (sometimes also called "major programs") are extremely large-scale investment projects, typically costing more than US$1 billion per project. They include bridges, tunnels, highways, railways, airports, seaports, power plants, dams, wastewater projects, coastal flood protection, oil and natural gas extraction projects, public buildings, information technology systems, aerospace projects, and defence systems. Megaprojects have been shown to be particularly risky in terms of finance, safety, and social and environmental impacts.
Software evolution
Studies have shown that early parts of the system development cycle, such as requirements and design specifications, are especially prone to error. This effect is particularly pronounced in projects involving multiple stakeholders with different points of view. Evolutionary software processes offer an iterative approach to requirements engineering to alleviate the uncertainty, ambiguity, and inconsistency inherent in software development.
Shipping industry
In July 2010, shipping companies agreed to use standardized procedures in order to assess risk in key shipboard operations. These procedures were implemented as part of the amended ISM Code.
Underwater diving
Formal risk assessment is a required component of most professional dive planning, but the format and methodology may vary. Consequences of an incident due to an identified hazard are generally chosen from a small number of standardised categories, and probability is estimated from statistical data on the rare occasions when it is available, and otherwise from a best-guess estimate based on personal experience and company policy. A simple risk matrix is often used to transform these inputs into a level of risk, generally expressed as unacceptable, marginal, or acceptable. If unacceptable, measures must be taken to reduce the risk to an acceptable level, and the outcome of the risk assessment must be accepted by the affected parties before a dive commences. Higher levels of risk may be acceptable in special circumstances, such as military or search and rescue operations when there is a chance of recovering a survivor. Diving supervisors are trained in the procedures of hazard identification and risk assessment, which form part of their planning and operational responsibility. Both health and safety hazards must be considered. Several stages may be identified: risk assessment done as part of the diving project planning, on-site risk assessment which takes into account the specific conditions of the day, and dynamic risk assessment which is ongoing during the operation by the members of the dive team, particularly the supervisor and the working diver.
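As an illustration of such a matrix, the sketch below maps consequence and probability categories to one of the three risk levels named above; the specific category names and cut-offs are hypothetical, since real matrices vary between companies.

```python
# Hypothetical 3x3 risk matrix: keys are (consequence severity,
# probability) pairs; values are the resulting risk level.
MATRIX = {
    ("minor",        "rare"):     "acceptable",
    ("minor",        "possible"): "acceptable",
    ("minor",        "likely"):   "marginal",
    ("serious",      "rare"):     "acceptable",
    ("serious",      "possible"): "marginal",
    ("serious",      "likely"):   "unacceptable",
    ("catastrophic", "rare"):     "marginal",
    ("catastrophic", "possible"): "unacceptable",
    ("catastrophic", "likely"):   "unacceptable",
}

def risk_level(consequence: str, probability: str) -> str:
    """Look up the risk level for a consequence/probability pair."""
    return MATRIX[(consequence, probability)]

print(risk_level("serious", "possible"))  # marginal -> mitigate or accept
```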
In recreational scuba diving, the extent of risk assessment expected of the diver is relatively basic and is included in the pre-dive checks. Several mnemonics have been developed by diver certification agencies to remind the diver to pay some attention to risk, but the training is rudimentary. Diving service providers are expected to provide a higher level of care for their customers, and diving instructors and divemasters are expected to assess risk on behalf of their customers, warn them of site-specific hazards, and advise on the competence considered appropriate for the planned dive. Technical divers are expected to make a more thorough assessment of risk, but as they will be making an informed choice for a recreational activity, the level of acceptable risk may be considerably higher than that permitted for occupational divers under the direction of an employer.
Outdoor and wilderness adventure
In outdoor activities including commercial outdoor education, wilderness expeditions, and outdoor recreation, risk assessment refers to the analysis of the probability and magnitude of unfavorable outcomes such as injury, illness, or property damage due to environmental and related causes, compared to the human development or other benefits of outdoor activity. This is of particular importance as school programs and others weigh the benefits of youth and adult participation in various outdoor learning activities against the inherent and other hazards present in those activities. Schools, corporate entities seeking team-building experiences, parents/guardians, and others considering outdoor experiences expect or require organizations to assess the hazards and risks of different outdoor activities—such as sailing, target shooting, hunting, mountaineering, or camping—and select activities with acceptable risk profiles.
Outdoor education, wilderness adventure, and other outdoor-related organizations should conduct, and in some jurisdictions are required to conduct, risk assessments prior to offering programs for commercial purposes.
Such organizations are given guidance on how to provide their risk assessments.
Risk assessments for led outdoor activities form only one component of a comprehensive risk management plan, as many risk assessments rely on basic linear-style thinking rather than on more modern risk management practice based on complex socio-technical systems theory.
Environment
Environmental Risk Assessment (ERA) aims to assess the effects of stressors, usually chemicals, on the local environment. A risk is an integrated assessment of the likelihood and severity of an undesired event. In ERA, the undesired event often depends on the chemical of interest and on the risk assessment scenario; it is usually a detrimental effect on organisms, populations, or ecosystems. Current ERAs usually compare an exposure to a no-effect level, such as the Predicted Environmental Concentration/Predicted No-Effect Concentration (PEC/PNEC) ratio used in Europe. Although this type of ratio is useful and often used for regulatory purposes, it is only an indication that an apparent threshold has been exceeded. New approaches are starting to be developed in ERA in order to quantify this risk and to communicate it effectively to both managers and the general public.
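A minimal sketch of the PEC/PNEC screening ratio follows; the substance names and concentration values are invented for illustration, and a real PNEC would be derived from toxicity data with assessment factors.

```python
def risk_quotient(pec_ug_per_l: float, pnec_ug_per_l: float) -> float:
    """Return PEC/PNEC; a quotient >= 1 flags a potential concern."""
    if pnec_ug_per_l <= 0:
        raise ValueError("PNEC must be positive")
    return pec_ug_per_l / pnec_ug_per_l

# Hypothetical predicted concentrations in micrograms per litre.
for substance, pec, pnec in [("substance A", 0.8, 2.0),
                             ("substance B", 5.0, 1.5)]:
    rq = risk_quotient(pec, pnec)
    verdict = "further assessment needed" if rq >= 1 else "screened out"
    print(f"{substance}: PEC/PNEC = {rq:.2f} -> {verdict}")
```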
Ecological risk assessment is complicated by the fact that many nonchemical stressors substantially influence ecosystems, communities, and individual plants and animals, as well as landscapes and regions. Defining the undesired (adverse) event is a political or policy judgment, which further complicates applying traditional risk analysis tools to ecological systems. Much of the policy debate surrounding ecological risk assessment concerns defining precisely what constitutes an adverse event.
Biodiversity
Biodiversity risk assessments evaluate risks to biological diversity, especially the risk of species extinction or the risk of ecosystem collapse. The units of assessment are biological entities (species, subspecies, or populations) or ecological entities (habitats, ecosystems, etc.), and the risks are often related to human actions and interventions (threats and pressures). Regional and national protocols have been proposed by multiple academic or governmental institutions and working groups, but global standards such as the Red List of Threatened Species and the IUCN Red List of Ecosystems have been widely adopted and are recognized or proposed as official indicators of progress toward international policy targets and goals, such as the Aichi targets and the Sustainable Development Goals.
Law
Risk assessments are used at numerous stages of the legal process and are developed to measure a wide variety of items, such as recidivism rates and potential pretrial issues, to inform probation/parole decisions, and to identify potential interventions for defendants. Clinical psychologists, forensic psychologists, and other practitioners are responsible for conducting risk assessments. Depending on the risk assessment tool, practitioners are required to gather a variety of background information on the defendant or individual being assessed. This information includes any previous criminal history and other records (e.g., demographics, education, job status, medical history), which can be obtained through direct interview with the defendant or from on-file records.
In the pretrial stage, a widely used risk assessment tool is the Public Safety Assessment (PSA), which predicts failure to appear in court, likelihood of a new criminal arrest while on pretrial release, and likelihood of a new violent criminal arrest while on pretrial release. Multiple items are observed and taken into account depending on which aspect of the PSA is the focus, and, as in all other actuarial risk assessments, each item is assigned a weight to produce a final score. Detailed information, including transparency on the items the PSA factors in and how scores are distributed, is accessible online.
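The weighted-sum scoring used by actuarial tools of this kind can be sketched as below; the item names, weights, and values are hypothetical placeholders, not the PSA's actual published items or weights.

```python
# Hypothetical actuarial scoring: each item has a weight, and the final
# score is the weighted sum of the observed item values (0/1 or counts).
WEIGHTS = {
    "prior_failure_to_appear": 2,
    "pending_charge_at_arrest": 1,
    "prior_violent_conviction": 3,
}

def actuarial_score(items: dict[str, int]) -> int:
    """Weighted sum of item values; higher scores indicate higher risk."""
    return sum(WEIGHTS[name] * value for name, value in items.items())

score = actuarial_score({
    "prior_failure_to_appear": 1,
    "pending_charge_at_arrest": 0,
    "prior_violent_conviction": 2,
})
print(score)  # 8; a tool would then map the score onto a risk scale
```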
For defendants who have been incarcerated, risk assessments are used to determine their likelihood of recidivism and to inform sentence-length decisions. Risk assessments also aid parole/probation officers in determining the level of supervision a probationer should be subjected to and what interventions could be implemented to improve the offender's risk status. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a risk assessment tool designed to measure pretrial release risk, general recidivism risk, and violent recidivism risk. Detailed information on scoring and algorithms for COMPAS is not accessible to the general public.
See also
References
Further reading
"Why We Worry About the Wrong Things: The Psychology of Risk", Time (also published as the December 4 cover title).
Impact assessment
Probability assessment
Hazard analysis
Safety engineering
Reliability engineering
Occupational safety and health
Corporate development

Compressible flow

Compressible flow (or gas dynamics) is the branch of fluid mechanics that deals with flows having significant changes in fluid density. While all flows are compressible, flows are usually treated as being incompressible when the Mach number (the ratio of the speed of the flow to the speed of sound) is smaller than 0.3 (since the density change due to velocity is about 5% in that case). The study of compressible flow is relevant to high-speed aircraft, jet engines, rocket motors, high-speed entry into a planetary atmosphere, gas pipelines, commercial applications such as abrasive blasting, and many other fields.
History
The study of gas dynamics is often associated with the flight of modern high-speed aircraft and atmospheric reentry of space-exploration vehicles; however, its origins lie with simpler machines. At the beginning of the 19th century, investigation into the behaviour of fired bullets led to improvement in the accuracy and capabilities of guns and artillery. As the century progressed, inventors such as Gustaf de Laval advanced the field, while researchers such as Ernst Mach sought to understand the physical phenomena involved through experimentation.
At the beginning of the 20th century, the focus of gas dynamics research shifted to what would eventually become the aerospace industry. Ludwig Prandtl and his students proposed important concepts ranging from the boundary layer to supersonic shock waves, supersonic wind tunnels, and supersonic nozzle design. Theodore von Kármán, a student of Prandtl, continued to improve the understanding of supersonic flow. Other notable figures (such as Meyer and Ascher Shapiro) also contributed significantly to the principles considered fundamental to the study of modern gas dynamics, as did many others.
Accompanying the improved conceptual understanding of gas dynamics in the early 20th century was a public misconception that there existed a barrier to the attainable speed of aircraft, commonly referred to as the "sound barrier." In truth, the barrier to supersonic flight was merely a technological one, although it was a stubborn barrier to overcome. Amongst other factors, conventional aerofoils saw a dramatic increase in drag coefficient when the flow approached the speed of sound. Overcoming the larger drag proved difficult with contemporary designs, thus the perception of a sound barrier. However, aircraft design progressed sufficiently to produce the Bell X-1. Piloted by Chuck Yeager, the X-1 officially achieved supersonic speed in October 1947.
Historically, two parallel paths of research have been followed in order to further gas dynamics knowledge. Experimental gas dynamics undertakes wind tunnel model experiments and experiments in shock tubes and ballistic ranges with the use of optical techniques to document the findings. Theoretical gas dynamics considers the equations of motion applied to a variable-density gas, and their solutions. Much of basic gas dynamics is analytical, but in the modern era Computational fluid dynamics applies computing power to solve the otherwise-intractable nonlinear partial differential equations of compressible flow for specific geometries and flow characteristics.
Introductory concepts
There are several important assumptions involved in the underlying theory of compressible flow. All fluids are composed of molecules, but tracking a huge number of individual molecules in a flow (for example at atmospheric pressure) is unnecessary. Instead, the continuum assumption allows us to consider a flowing gas as a continuous substance except at low densities. This assumption provides a huge simplification which is accurate for most gas-dynamic problems. Only in the low-density realm of rarefied gas dynamics does the motion of individual molecules become important.
A related assumption is the no-slip condition where the flow velocity at a solid surface is presumed equal to the velocity of the surface itself, which is a direct consequence of assuming continuum flow. The no-slip condition implies that the flow is viscous, and as a result a boundary layer forms on bodies traveling through the air at high speeds, much as it does in low-speed flow.
Most problems in incompressible flow involve only two unknowns: pressure and velocity, which are typically found by solving the two equations that describe conservation of mass and of linear momentum, with the fluid density presumed constant. In compressible flow, however, the gas density and temperature also become variables. This requires two more equations in order to solve compressible-flow problems: an equation of state for the gas and a conservation of energy equation. For the majority of gas-dynamic problems, the simple ideal gas law is the appropriate state equation. Otherwise, more complex equations of state must be considered, giving rise to the field of so-called non-ideal compressible fluid dynamics (NICFD).
Fluid dynamics problems have two overall types of references frames, called Lagrangian and Eulerian (see Joseph-Louis Lagrange and Leonhard Euler). The Lagrangian approach follows a fluid mass of fixed identity as it moves through a flowfield. The Eulerian reference frame, in contrast, does not move with the fluid. Rather it is a fixed frame or control volume that fluid flows through. The Eulerian frame is most useful in a majority of compressible flow problems, but requires that the equations of motion be written in a compatible format.
Finally, although space is known to have 3 dimensions, an important simplification can be had in describing gas dynamics mathematically if only one spatial dimension is of primary importance, hence 1-dimensional flow is assumed. This works well in duct, nozzle, and diffuser flows where the flow properties change mainly in the flow direction rather than perpendicular to the flow. However, an important class of compressible flows, including the external flow over bodies traveling at high speed, requires at least a 2-dimensional treatment. When all 3 spatial dimensions and perhaps the time dimension as well are important, we often resort to computerized solutions of the governing equations.
Mach number, wave motion, and sonic speed
The Mach number (M) is defined as the ratio of the speed of an object (or of a flow) to the speed of sound. For instance, in air at room temperature, the speed of sound is about 343 m/s (1,125 ft/s). M can range from 0 to ∞, but this broad range falls naturally into several flow regimes: subsonic, transonic, supersonic, hypersonic, and hypervelocity flow. The figure below illustrates the Mach number "spectrum" of these flow regimes.
These flow regimes are not chosen arbitrarily, but rather arise naturally from the strong mathematical background that underlies compressible flow (see the cited reference textbooks). At very slow flow speeds the speed of sound is so much faster that it is mathematically ignored, and the Mach number is irrelevant. Once the speed of the flow approaches the speed of sound, however, the Mach number becomes all-important, and shock waves begin to appear. Thus the transonic regime is described by a different (and much more complex) mathematical treatment. In the supersonic regime the flow is dominated by wave motion at oblique angles similar to the Mach angle. Above about Mach 5, these wave angles grow so small that a different mathematical approach is required, defining the hypersonic speed regime. Finally, at speeds comparable to that of planetary atmospheric entry from orbit, in the range of several km/s, the speed of sound is now comparatively so slow that it is once again mathematically ignored in the hypervelocity regime.
As an object accelerates from subsonic toward supersonic speed in a gas, different types of wave phenomena occur. To illustrate these changes, the next figure shows a stationary point (M = 0) that emits symmetric sound waves. The speed of sound is the same in all directions in a uniform fluid, so these waves are simply concentric spheres. As the sound-generating point begins to accelerate, the sound waves "bunch up" in the direction of motion and "stretch out" in the opposite direction. When the point reaches sonic speed (M = 1), it travels at the same speed as the sound waves it creates. Therefore, an infinite number of these sound waves "pile up" ahead of the point, forming a shock wave. Upon achieving supersonic flow, the particle is moving so fast that it continuously leaves its sound waves behind. When this occurs, the locus of these waves trailing behind the point creates an angle known as the Mach wave angle or Mach angle, μ:

μ = arcsin(a/V) = arcsin(1/M),

where a represents the speed of sound in the gas and V represents the velocity of the object. Although named for Austrian physicist Ernst Mach, these oblique waves were first discovered by Christian Doppler.
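A short sketch computing the Mach angle from the flow Mach number using the relation just given (the sample Mach numbers are chosen arbitrarily for illustration):

```python
import math

def mach_angle_deg(mach: float) -> float:
    """Mach angle mu = arcsin(1/M); defined only for supersonic flow."""
    if mach <= 1.0:
        raise ValueError("Mach angle is defined only for M > 1")
    return math.degrees(math.asin(1.0 / mach))

for m in (1.2, 2.0, 5.0):
    print(f"M = {m}: mu = {mach_angle_deg(m):.1f} deg")
# M = 1.2 -> 56.4 deg; M = 2.0 -> 30.0 deg; M = 5.0 -> 11.5 deg
```

Note how the wave angle shrinks as M grows, consistent with the narrowing Mach cone described above.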
One-dimensional flow
One-dimensional (1-D) flow refers to flow of gas through a duct or channel in which the flow parameters are assumed to change significantly along only one spatial dimension, namely, the duct length. In analysing the 1-D channel flow, a number of assumptions are made:
Ratio of duct length to width (L/D) is ≤ about 5 (in order to neglect friction and heat transfer),
Flow is steady,
Flow is isentropic (i.e. a reversible adiabatic process),
Ideal gas law (i.e. P = ρRT)
Converging-diverging Laval nozzles
As the speed of a flow accelerates from the subsonic to the supersonic regime, the physics of nozzle and diffuser flows is altered. Using the conservation laws of fluid dynamics and thermodynamics, the following relationship for channel flow is developed (combined mass and momentum conservation):
dP(1 − M²) = ρV² (dA/A),
where dP is the differential change in pressure, M is the Mach number, ρ is the density of the gas, V is the velocity of the flow, A is the area of the duct, and dA is the change in area of the duct. This equation states that, for subsonic flow, a converging duct (dA < 0) increases the velocity of the flow and a diverging duct (dA > 0) decreases velocity of the flow. For supersonic flow, the opposite occurs due to the change of sign of (1 − M2). A converging duct (dA < 0) now decreases the velocity of the flow and a diverging duct (dA > 0) increases the velocity of the flow. At Mach = 1, a special case occurs in which the duct area must be either a maximum or minimum. For practical purposes, only a minimum area can accelerate flows to Mach 1 and beyond. See table of sub-supersonic diffusers and nozzles.
Therefore, to accelerate a flow to Mach 1, a nozzle must be designed to converge to a minimum cross-sectional area and then expand. This type of nozzle – the converging-diverging nozzle – is called a de Laval nozzle after Gustaf de Laval, who invented it. As subsonic flow enters the converging duct and the area decreases, the flow accelerates. Upon reaching the minimum area of the duct, also known as the throat of the nozzle, the flow can reach Mach 1. If the speed of the flow is to continue to increase, its density must decrease in order to obey conservation of mass. To achieve this decrease in density, the flow must expand, and to do so, the flow must pass through a diverging duct. See image of de Laval Nozzle.
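For a quantitative view of this converging-diverging behaviour, the sketch below evaluates the standard isentropic area ratio A/A* = (1/M)[(2/(γ+1))(1 + ((γ−1)/2)M²)]^((γ+1)/(2(γ−1))) for γ = 1.4; this is a textbook relation consistent with the channel-flow equation above, not a formula quoted from this article.

```python
def area_ratio(mach: float, gamma: float = 1.4) -> float:
    """Isentropic area ratio A/A* for a given Mach number."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach**2)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

# A/A* is 1 at the throat (M = 1) and grows on both sides of it:
for m in (0.5, 1.0, 2.0):
    print(f"M = {m}: A/A* = {area_ratio(m):.3f}")
# M = 0.5 -> 1.340; M = 1.0 -> 1.000; M = 2.0 -> 1.688
```

The minimum at M = 1 is the mathematical expression of the sonic throat requirement.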
Maximum achievable velocity of a gas
Ultimately, because of the energy conservation law, a gas is limited to a certain maximum velocity based on its energy content. The maximum velocity, Vmax, that a gas can attain is:

Vmax = √(2 cp Tt),

where cp is the specific heat of the gas and Tt is the stagnation temperature of the flow.
Isentropic flow Mach number relationships
Using conservation laws and thermodynamics, a number of relationships of the form

Tt/T = 1 + ((γ − 1)/2) M²

can be obtained, where M is the Mach number and γ is the ratio of specific heats (1.4 for air). See table of isentropic flow Mach number relationships.
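A compact sketch evaluating the stagnation-to-static temperature and pressure ratios implied by that relation (standard isentropic forms, with γ = 1.4 assumed):

```python
def stagnation_ratios(mach: float, gamma: float = 1.4):
    """Return (Tt/T, Pt/P) for isentropic flow at a given Mach number."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach**2
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    return t_ratio, p_ratio

t, p = stagnation_ratios(1.0)
# Pt/P ~ 1.893 at M = 1: this is the "pressure ratio of approximately 2"
# needed to choke a nozzle, mentioned in the next section.
print(f"Tt/T = {t:.3f}, Pt/P = {p:.3f}")  # 1.200, 1.893
```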
Achieving supersonic flow
As previously mentioned, in order for a flow to become supersonic, it must pass through a duct with a minimum area, or sonic throat. Additionally, an overall pressure ratio, Pt/Pb, of approximately 2 is needed to attain Mach 1. Once the flow has reached Mach 1 at the throat, it is said to be choked. Because changes downstream can only move upstream at sonic speed, the mass flow through the nozzle cannot be affected by changes in downstream conditions once the flow is choked.
Non-isentropic 1D channel flow of a gas - normal shock waves
Normal shock waves are shock waves that are perpendicular to the local flow direction. These shock waves occur when pressure waves build up and coalesce into an extremely thin shockwave that converts kinetic energy into thermal energy. The waves thus overtake and reinforce one another, forming a finite shock wave from an infinite series of infinitesimal sound waves. Because the change of state across the shock is highly irreversible, entropy increases across the shock. When analysing a normal shock wave, one-dimensional, steady, and adiabatic flow of a perfect gas is assumed. Stagnation temperature and stagnation enthalpy are the same upstream and downstream of the shock.
Normal shock waves can be easily analysed in either of two reference frames: the standing normal shock and the moving shock. The flow before a normal shock wave must be supersonic, and the flow after a normal shock must be subsonic. The Rankine-Hugoniot equations are used to solve for the flow conditions.
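The jump conditions across a standing normal shock in a perfect gas can be evaluated as in the sketch below; these are the standard Rankine-Hugoniot relations for γ = 1.4, not formulas quoted from this article.

```python
import math

def normal_shock(m1: float, gamma: float = 1.4):
    """Downstream Mach number and static pressure ratio across a normal shock."""
    if m1 <= 1.0:
        raise ValueError("upstream flow must be supersonic")
    m2_sq = (1.0 + 0.5 * (gamma - 1.0) * m1**2) / \
            (gamma * m1**2 - 0.5 * (gamma - 1.0))
    p_ratio = 1.0 + (2.0 * gamma / (gamma + 1.0)) * (m1**2 - 1.0)
    return math.sqrt(m2_sq), p_ratio

m2, pr = normal_shock(2.0)
print(f"M2 = {m2:.3f}, p2/p1 = {pr:.3f}")  # M2 ~ 0.577, p2/p1 = 4.5
```

As the text states, the downstream Mach number is always subsonic, and the static pressure rises across the shock.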
Two-dimensional flow
Although one-dimensional flow can be directly analysed, it is merely a specialized case of two-dimensional flow. It follows that one of the defining phenomena of one-dimensional flow, a normal shock, is likewise only a special case of a larger class of oblique shocks. Further, the name "normal" is with respect to geometry rather than frequency of occurrence. Oblique shocks are much more common in applications such as: aircraft inlet design, objects in supersonic flight, and (at a more fundamental level) supersonic nozzles and diffusers. Depending on the flow conditions, an oblique shock can either be attached to the flow or detached from the flow in the form of a bow shock.
Oblique shock waves
Oblique shock waves are similar to normal shock waves, but they occur at angles less than 90° with the direction of flow. When a disturbance is introduced to the flow at a nonzero angle (δ), the flow must respond to the changing boundary conditions. Thus an oblique shock is formed, resulting in a change in the direction of the flow.
Shock polar diagram
Based on the level of flow deflection (δ), oblique shocks are characterized as either strong or weak. Strong shocks are characterized by larger deflection and more entropy loss across the shock, weak shocks by the opposite. To gain cursory insight into the differences between these shocks, a shock polar diagram can be used. With the static temperature after the shock, T*, known, the speed of sound after the shock is defined as

a* = √(γRT*),

with R as the gas constant and γ as the specific heat ratio. The Mach number can be broken into Cartesian components

Mx = Vx/a*,  My = Vy/a*,

with Vx and Vy as the x and y components of the fluid velocity V. With the Mach number before the shock given, a locus of conditions can be specified. At the maximum deflection angle δmax, the flow transitions from a strong to a weak oblique shock. With δ = 0°, a normal shock is produced at the limit of the strong oblique shock, and the Mach wave is produced at the limit of the weak shock wave.
Oblique shock reflection
Due to the inclination of the shock, after an oblique shock is created, it can interact with a boundary in three different manners, two of which are explained below.
Solid boundary
Incoming flow is first turned by angle δ with respect to the flow. This shockwave is reflected off the solid boundary, and the flow is turned by – δ to again be parallel with the boundary. Each progressive shock wave is weaker and the wave angle is increased.
Irregular reflection
An irregular reflection is much like the case described above, with the caveat that δ is larger than the maximum allowable turning angle. Thus a detached shock is formed and a more complicated reflection known as Mach reflection occurs.
Prandtl–Meyer fans
Prandtl–Meyer fans can be expressed as both compression and expansion fans. Prandtl–Meyer fans can also interact with a boundary, which may be free (flowing) or solid, and each reacts in a different way. When a shock wave hits a solid surface the resulting fan returns as one from the opposite family, while when one hits a free boundary the fan returns as a fan of the opposite type.
Prandtl–Meyer expansion fans
To this point, the only flow phenomena that have been discussed are shock waves, which slow the flow and increase its entropy. It is possible to accelerate supersonic flow in what has been termed a Prandtl–Meyer expansion fan, after Ludwig Prandtl and Theodore Meyer. The mechanism for the expansion is shown in the figure below.
As opposed to the flow encountering an inclined obstruction and forming an oblique shock, the flow expands around a convex corner and forms an expansion fan through a series of isentropic Mach waves. The expansion "fan" is composed of Mach waves that span from the initial Mach angle to the final Mach angle. Flow can expand around either a sharp or rounded corner equally, as the increase in Mach number is proportional to only the convex angle of the passage (δ). The expansion corner that produces the Prandtl–Meyer fan can be sharp (as illustrated in the figure) or rounded. If the total turning angle is the same, then the P-M flow solution is also the same.
The Prandtl–Meyer expansion can be seen as the physical explanation of the operation of the Laval nozzle. The contour of the nozzle creates a smooth and continuous series of Prandtl–Meyer expansion waves.
Prandtl–Meyer compression fans
A Prandtl–Meyer compression is the opposite phenomenon to a Prandtl–Meyer expansion. If the flow is gradually turned through an angle of δ, a compression fan can be formed. This fan is a series of Mach waves that eventually coalesce into an oblique shock. Because the flow is defined by an isentropic region (flow that travels through the fan) and an anisentropic region (flow that travels through the oblique shock), a slip line results between the two flow regions.
Applications
Supersonic wind tunnels
Supersonic wind tunnels are used for testing and research in supersonic flows, approximately over the Mach number range of 1.2 to 5. The operating principle behind the wind tunnel is that a large pressure difference is maintained upstream to downstream, driving the flow.
Wind tunnels can be divided into two categories: continuous-operating and intermittent-operating wind tunnels. Continuous-operating supersonic wind tunnels require an independent electrical power source whose capacity increases drastically with the size of the test section. Intermittent supersonic wind tunnels are less expensive in that they store energy over an extended period of time, then discharge it over a series of brief tests. The difference between these two is analogous to the comparison between a battery and a capacitor.
Blowdown type supersonic wind tunnels offer high Reynolds number, a small storage tank, and readily available dry air. However, they cause a high pressure hazard, result in difficulty holding a constant stagnation pressure, and are noisy during operation.
Indraft supersonic wind tunnels are not associated with a pressure hazard, allow a constant stagnation pressure, and are relatively quiet. Unfortunately, they have a limited range for the Reynolds number of the flow and require a large vacuum tank.
There is no dispute that knowledge is gained through research and testing in supersonic wind tunnels; however, the facilities often require vast amounts of power to maintain the large pressure ratios needed for testing conditions. For example, Arnold Engineering Development Complex has the largest supersonic wind tunnel in the world, and operating it requires the power needed to light a small city. For this reason, large wind tunnels are becoming less common at universities.
Supersonic aircraft inlets
Perhaps the most common requirement for oblique shocks is in supersonic aircraft inlets for speeds greater than about Mach 2 (the F-16 has a maximum speed of Mach 2 but doesn't need an oblique shock intake). One purpose of the inlet is to minimize losses across the shocks as the incoming supersonic air slows down to subsonic before it enters the turbojet engine. This is accomplished with one or more oblique shocks followed by a very weak normal shock, with an upstream Mach number usually less than 1.4. The airflow through the intake has to be managed correctly over a wide speed range from zero to its maximum supersonic speed. This is done by varying the position of the intake surfaces.
Although variable geometry is required to achieve acceptable performance from take-off to speeds exceeding Mach 2 there is no one method to achieve it. For example, for a maximum speed of about Mach 3, the XB-70 used rectangular inlets with adjustable ramps and the SR-71 used circular inlets with adjustable inlet cone.
See also
Incompressible flow
Conservation laws
Entropy
Equation of state
Gas kinetics
Heat capacity ratio
Isentropic nozzle flow
Lagrangian and Eulerian specification of the flow field
Prandtl–Meyer function
Thermodynamics especially "Commonly Considered Thermodynamic Processes" and "Laws of Thermodynamics"
Non-ideal compressible fluid dynamics
References
External links
NASA Beginner's Guide to Compressible Aerodynamics
Virginia Tech Compressible Flow Calculators
Fluid mechanics
Aerodynamics

Codon usage bias

Codon usage bias refers to differences in the frequency of occurrence of synonymous codons in coding DNA. A codon is a series of three nucleotides (a triplet) that encodes a specific amino acid residue in a polypeptide chain or for the termination of translation (stop codons).
There are 64 different codons (61 codons encoding for amino acids and 3 stop codons) but only 20 different translated amino acids. The overabundance in the number of codons allows many amino acids to be encoded by more than one codon. Because of such redundancy it is said that the genetic code is degenerate. The genetic codes of different organisms are often biased towards using one of the several codons that encode the same amino acid over the others—that is, a greater frequency of one will be found than expected by chance. How such biases arise is a much debated area of molecular evolution. Codon usage tables detailing genomic codon usage bias for organisms in GenBank and RefSeq can be found in the HIVE-Codon Usage Tables (HIVE-CUTs) project which contains two distinct databases, CoCoPUTs and TissueCoCoPUTs. Together, these two databases provide comprehensive, up-to-date codon, codon pair and dinucleotide usage statistics for all organisms with available sequence information and 52 human tissues, respectively.
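As a simple illustration of measuring this bias, the sketch below counts synonymous-codon frequencies for one amino acid along a coding sequence; the toy sequence is invented for the example, and a real analysis would use a full open reading frame.

```python
from collections import Counter

# Codons encoding leucine in the standard genetic code.
LEUCINE_CODONS = {"TTA", "TTG", "CTT", "CTC", "CTA", "CTG"}

def codon_counts(cds: str) -> Counter:
    """Count codons along an in-frame coding sequence."""
    return Counter(cds[i:i + 3] for i in range(0, len(cds) - 2, 3))

cds = "ATGCTGCTGTTACTGCTCTAA"  # toy in-frame sequence
counts = codon_counts(cds)
leu = {c: counts[c] for c in LEUCINE_CODONS if counts[c]}
total = sum(leu.values())
for codon, n in sorted(leu.items()):
    print(f"{codon}: {n}/{total} = {n / total:.2f}")  # usage among Leu codons
```

A biased gene shows a very uneven distribution across such synonymous codons, whereas an unbiased one uses them roughly equally.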
It is generally acknowledged that codon biases reflect the contributions of three main factors: GC-biased gene conversion that favors GC-ending codons in diploid organisms, arrival biases reflecting mutational preferences (typically favoring AT-ending codons), and natural selection for codons that are favorable in regard to translation. Optimal codons in fast-growing microorganisms, like Escherichia coli or Saccharomyces cerevisiae (baker's yeast), reflect the composition of their respective genomic transfer RNA (tRNA) pool. It is thought that optimal codons help to achieve faster translation rates and high accuracy. As a result of these factors, translational selection is expected to be stronger in highly expressed genes, as is indeed the case for the above-mentioned organisms. In other organisms that do not show high growth rates or that have small genomes, codon usage optimization is normally absent, and codon preferences are determined by the characteristic mutational biases seen in that particular genome. Examples of this are Homo sapiens (human) and Helicobacter pylori. Organisms that show an intermediate level of codon usage optimization include Drosophila melanogaster (fruit fly), Caenorhabditis elegans (nematode worm), Strongylocentrotus purpuratus (sea urchin), and Arabidopsis thaliana (thale cress). Several viral families (herpesvirus, lentivirus, papillomavirus, polyomavirus, adenovirus, and parvovirus) are known to encode structural proteins that display heavily skewed codon usage compared to the host cell. The suggestion has been made that these codon biases play a role in the temporal regulation of their late proteins.
The nature of the codon usage-tRNA optimization has been fiercely debated. It is not clear whether codon usage drives tRNA evolution or vice versa. At least one mathematical model has been developed where both codon usage and tRNA expression co-evolve in feedback fashion (i.e., codons already present in high frequencies drive up the expression of their corresponding tRNAs, and tRNAs normally expressed at high levels drive up the frequency of their corresponding codons). However, this model does not seem to yet have experimental confirmation. Another problem is that the evolution of tRNA genes has been a very inactive area of research.
Contributing factors
Different factors have been proposed to be related to codon usage bias, including gene expression level (reflecting selection for optimizing the translation process by tRNA abundance), guanine-cytosine content (GC content, reflecting horizontal gene transfer or mutational bias), guanine-cytosine skew (GC skew, reflecting strand-specific mutational bias), amino acid conservation, protein hydropathy, transcriptional selection, RNA stability, optimal growth temperature, hypersaline adaptation, and dietary nitrogen.
Evolutionary theories
Mutational bias versus selection
Although the mechanism of codon bias selection remains controversial, possible explanations for this bias fall into two general categories. One explanation revolves around the selectionist theory, in which codon bias contributes to the efficiency and/or accuracy of protein expression and therefore undergoes positive selection. The selectionist model also explains why more frequent codons are recognized by more abundant tRNA molecules, as well as the correlation between preferred codons, tRNA levels, and gene copy numbers. Although it has been shown that the rate of amino acid incorporation at more frequent codons occurs at a much higher rate than that of rare codons, the speed of translation has not been shown to be directly affected and therefore the bias towards more frequent codons may not be directly advantageous. However, the increase in translation elongation speed may still be indirectly advantageous by increasing the cellular concentration of free ribosomes and potentially the rate of initiation for messenger RNAs (mRNAs).
The second explanation for codon usage can be explained by mutational bias, a theory which posits that codon bias exists because of nonrandomness in the mutational patterns. In other words, some codons can undergo more changes and therefore result in lower equilibrium frequencies, also known as “rare” codons. Different organisms also exhibit different mutational biases, and there is growing evidence that the level of genome-wide GC content is the most significant parameter in explaining codon bias differences between organisms. Additional studies have demonstrated that codon biases can be statistically predicted in prokaryotes using only intergenic sequences, arguing against the idea of selective forces on coding regions and further supporting the mutation bias model. However, this model alone cannot fully explain why preferred codons are recognized by more abundant tRNAs.
Mutation-selection-drift balance model
To reconcile the evidence from both mutational pressures and selection, the prevailing hypothesis for codon bias can be explained by the mutation-selection-drift balance model. This hypothesis states that selection favors major codons over minor codons, but minor codons are able to persist due to mutation pressure and genetic drift. It also suggests that selection is generally weak, but that selection intensity scales to higher expression and more functional constraints of coding sequences.
Consequences of codon composition
Effect on RNA secondary structure
Because secondary structure of the 5’ end of mRNA influences translational efficiency, synonymous changes at this region on the mRNA can result in profound effects on gene expression. Codon usage in noncoding DNA regions can therefore play a major role in RNA secondary structure and downstream protein expression, which can undergo further selective pressures. In particular, strong secondary structure at the ribosome-binding site or initiation codon can inhibit translation, and mRNA folding at the 5’ end generates a large amount of variation in protein levels.
Effect on transcription or gene expression
Heterologous gene expression is used in many biotechnological applications, including protein production and metabolic engineering. Because tRNA pools vary between different organisms, the rate of transcription and translation of a particular coding sequence can be less efficient when placed in a non-native context. For an overexpressed transgene, the corresponding mRNA makes a large percent of total cellular RNA, and the presence of rare codons along the transcript can lead to inefficient use and depletion of ribosomes and ultimately reduce levels of heterologous protein production. In addition, the composition of the gene (e.g. the total number of rare codons and the presence of consecutive rare codons) may also affect translation accuracy. However, using codons that are optimized for tRNA pools in a particular host to overexpress a heterologous gene may also cause amino acid starvation and alter the equilibrium of tRNA pools. This method of adjusting codons to match host tRNA abundances, called codon optimization, has traditionally been used for expression of a heterologous gene. However, new strategies for optimization of heterologous expression consider global nucleotide content such as local mRNA folding, codon pair bias, a codon ramp, codon harmonization or codon correlations. With the number of nucleotide changes introduced, artificial gene synthesis is often necessary for the creation of such an optimized gene.
Specialized codon bias is further seen in some endogenous genes such as those involved in amino acid starvation. For example, amino acid biosynthetic enzymes preferentially use codons that are poorly adapted to normal tRNA abundances, but have codons that are adapted to tRNA pools under starvation conditions. Thus, codon usage can introduce an additional level of transcriptional regulation for appropriate gene expression under specific cellular conditions.
Effect on speed of translation elongation
Generally speaking for highly expressed genes, translation elongation rates are faster along transcripts with higher codon adaptation to tRNA pools, and slower along transcripts with rare codons. This correlation between codon translation rates and cognate tRNA concentrations provides additional modulation of translation elongation rates, which can provide several advantages to the organism. Specifically, codon usage can allow for global regulation of these rates, and rare codons may contribute to the accuracy of translation at the expense of speed.
Effect on protein folding
Protein folding in vivo is vectorial, such that the N-terminus of a protein exits the translating ribosome and becomes solvent-exposed before its more C-terminal regions. As a result, co-translational protein folding introduces several spatial and temporal constraints on the nascent polypeptide chain in its folding trajectory. Because mRNA translation rates are coupled to protein folding, and codon adaptation is linked to translation elongation, it has been hypothesized that manipulation at the sequence level may be an effective strategy to regulate or improve protein folding. Several studies have shown that pausing of translation as a result of local mRNA structure occurs for certain proteins, which may be necessary for proper folding. Furthermore, synonymous mutations have been shown to have significant consequences in the folding process of the nascent protein and can even change substrate specificity of enzymes. These studies suggest that codon usage influences the speed at which polypeptides emerge vectorially from the ribosome, which may further impact protein folding pathways throughout the available structural space.
Methods of analysis
In the field of bioinformatics and computational biology, many statistical methods have been proposed and used to analyze codon usage bias. Methods such as the 'frequency of optimal codons' (Fop), the relative codon adaptation (RCA) or the codon adaptation index (CAI) are used to predict gene expression levels, while methods such as the 'effective number of codons' (Nc) and Shannon entropy from information theory are used to measure codon usage evenness. Multivariate statistical methods, such as correspondence analysis and principal component analysis, are widely used to analyze variations in codon usage among genes. There are many computer programs to implement the statistical analyses enumerated above, including CodonW, GCUA, INCA, etc. Codon optimization has applications in designing synthetic genes and DNA vaccines. Several software packages are available online for this purpose (refer to external links).
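As a sketch of one such statistic, the code below computes a Codon Adaptation Index-style score, commonly defined as the geometric mean of per-codon relative adaptiveness weights; the weight table here is a made-up fragment for illustration, whereas real weights are derived from a reference set of highly expressed genes.

```python
import math

# Hypothetical relative-adaptiveness weights (w) for a few codons.
W = {"CTG": 1.00, "CTC": 0.40, "TTA": 0.05, "GCC": 1.00, "GCA": 0.30}

def cai(codons: list[str]) -> float:
    """Geometric mean of the w-values of the codons in a gene."""
    logs = [math.log(W[c]) for c in codons if c in W]
    return math.exp(sum(logs) / len(logs))

print(f"CAI ~ {cai(['CTG', 'CTG', 'CTC', 'GCC', 'GCA']):.3f}")  # ~0.655
```

Genes dominated by high-w (preferred) codons score near 1, which is why CAI is used as a proxy for expression level.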
References
External links
Composition Analysis Toolkit : estimating codon usage bias and its statistical significance
HIVE-Codon Usage Table database
Codon Usage Database
CodonW
GCUA - General Codon Usage Analysis
Graphical Codon Usage Analyser
JCat - Java Codon Usage Adaptation Tool
INCA - Interactive Codon Analysis software
ACUA - Automated Codon Usage Analysis Tool
OPTIMIZER - Codon usage optimization
HEG-DB - Highly Expressed Genes Database
E-CAI - Expected value of Codon Adaptation Index
CAIcal -Set of tools to assess codon usage adaptation
scRCA - Automatic determination of translational codon usage bias
Online Synonymous Codon Usage Analyses with the ade4 and seqinR packages
Genetic Algorithm Simulation for Codon Optimization
Molecular biology
Gene expression

Cryobiology

Cryobiology is the branch of biology that studies the effects of low temperatures on living things within Earth's cryosphere or in science. The word cryobiology is derived from the Greek words κρῧος [kryos], "cold", βίος [bios], "life", and λόγος [logos], "word". In practice, cryobiology is the study of biological material or systems at temperatures below normal. Materials or systems studied may include proteins, cells, tissues, organs, or whole organisms. Temperatures may range from moderately hypothermic conditions to cryogenic temperatures.
Areas of study
At least six major areas of cryobiology can be identified: 1) the study of cold-adaptation of microorganisms, plants (cold hardiness), and animals, both invertebrates and vertebrates (including hibernation); 2) cryopreservation of cells, tissues, gametes, and embryos of animal and human origin for (medical) purposes of long-term storage by cooling to temperatures below the freezing point of water, which usually requires the addition of substances that protect the cells during freezing and thawing (cryoprotectants); 3) preservation of organs under hypothermic conditions for transplantation; 4) lyophilization (freeze-drying) of pharmaceuticals; 5) cryosurgery, a (minimally) invasive approach for the destruction of unhealthy tissue using cryogenic gases/fluids; and 6) the physics of supercooling, ice nucleation/growth, and mechanical engineering aspects of heat transfer during cooling and warming, as applied to biological systems. Cryobiology would include cryonics, the low-temperature preservation of humans and mammals with the intention of future revival, although this is not part of mainstream cryobiology, as it depends heavily on speculative technology yet to be invented. Several of these areas of study rely on cryogenics, the branch of physics and engineering that studies the production and use of very low temperatures.
Cryopreservation in nature
Many living organisms are able to tolerate prolonged periods of time at temperatures below the freezing point of water. Most living organisms accumulate cryoprotectants such as antinucleating proteins, polyols, and glucose to protect themselves against frost damage by sharp ice crystals. Most plants, in particular, can safely reach temperatures of −4 °C to −12 °C.
Bacteria
Three species of bacteria, Carnobacterium pleistocenium, Chryseobacterium greenlandensis, and Herminiimonas glaciei, have reportedly been revived after surviving for thousands of years frozen in ice.
Certain bacteria, notably Pseudomonas syringae, produce specialized proteins that serve as potent ice nucleators, which they use to force ice formation on the surface of various fruits and plants at about −2 °C. The freezing causes injuries in the epithelia and makes the nutrients in the underlying plant tissues available to the bacteria. Listeria grows slowly in temperatures as low as -1.5 °C and persists for some time in frozen foods.
Plants
Many plants undergo a process called hardening which allows them to survive temperatures below 0 °C for weeks to months. Cryobiology of plants explores the cellular and molecular adaptations plants develop to survive subzero temperatures, such as antifreeze proteins (AFP) and changes in membrane composition. Cryopreservation is a critical technique in plant cryobiology, used for the long-term storage of genetic material and the preservation of endangered species by maintaining plant tissues or seeds in liquid nitrogen. Research in this area aims to enhance agricultural productivity in cold climates, improve the storage of plant genetic resources, and understand the impacts of climate change on plant biodiversity.
Animals
Invertebrates
Nematodes that survive below 0 °C include Trichostrongylus colubriformis and Panagrolaimus davidi. Cockroach nymphs (Periplaneta japonica) survive short periods of freezing at -6 to -8 °C. The red flat bark beetle (Cucujus clavipes) can survive after being frozen to -150 °C. The fungus gnat Exechia nugatoria can survive after being frozen to -50 °C, by a unique mechanism whereby ice crystals form in the body but not the head. Another freeze-tolerant beetle is Upis ceramboides. See insect winter ecology and antifreeze protein. Another invertebrate that is briefly tolerant to temperatures down to -273 °C is the tardigrade.
The larvae of Haemonchus contortus, a nematode, can survive 44 weeks frozen at -196 °C.
Vertebrates
For the wood frog (Rana sylvatica), in the winter, as much as 45% of its body may freeze and turn to ice. "Ice crystals form beneath the skin and become interspersed among the body's skeletal muscles. During the freeze, the frog's breathing, blood flow, and heartbeat cease. Freezing is made possible by specialized proteins and glucose, which prevent intracellular freezing and dehydration." The wood frog can survive up to 11 days frozen at -4 °C.
Other vertebrates that survive at body temperatures below 0 °C include painted turtles (Chrysemys picta), gray tree frogs (Hyla versicolor), moor frogs (Rana arvalis), box turtles (Terrapene carolina - 48 hours at -2 °C), spring peeper (Pseudacris crucifer), garter snakes (Thamnophis sirtalis- 24 hours at -1.5 °C), the chorus frog (Pseudacris triseriata), Siberian salamander (Salamandrella keyserlingii - 24 hours at -15.3 °C), European common lizard (Lacerta vivipara) and Antarctic fish such as Pagothenia borchgrevinki. Antifreeze proteins cloned from such fish have been used to confer frost-resistance on transgenic plants.
Hibernating Arctic ground squirrels may have abdominal temperatures as low as −2.9 °C (26.8 °F), maintaining subzero abdominal temperatures for more than three weeks at a time, although the temperatures at the head and neck remain at 0 °C or above.
Applied cryobiology
Historical background
Cryobiology history can be traced back to antiquity. As early as in 2500 BC, low temperatures were used in Egypt in medicine. The use of cold was recommended by Hippocrates to stop bleeding and swelling. With the emergence of modern science, Robert Boyle studied the effects of low temperatures on animals.
In 1949, bull semen was cryopreserved for the first time by a team of scientists led by Christopher Polge. This led to the much wider use of cryopreservation today, with many organs, tissues, and cells routinely stored at low temperatures. Large organs such as hearts are usually stored and transported, for short times only, at cool but not freezing temperatures for transplantation. Cell suspensions (like blood and semen) and thin tissue sections can sometimes be stored almost indefinitely at liquid nitrogen temperature (cryopreservation). Human sperm, eggs, and embryos are routinely stored in fertility research and treatments. Controlled-rate and slow freezing are well-established techniques pioneered in the early 1970s which enabled the first birth from a frozen human embryo (Zoe Leyland) in 1984. Since then, machines that freeze biological samples using programmable steps, or controlled rates, have been used all over the world for human, animal, and cell biology – 'freezing down' a sample to better preserve it for eventual thawing, before it is deep frozen, or cryopreserved, in liquid nitrogen. Such machines are used for freezing oocytes, skin, blood products, embryos, sperm, stem cells, and general tissue preservation in hospitals, veterinary practices, and research labs. The number of live births from 'slow frozen' embryos is some 300,000 to 400,000, or 20% of the estimated 3 million in vitro fertilized births. In 1986, Dr Christopher Chen of Australia reported the world's first pregnancy using slow-frozen oocytes, frozen in a British controlled-rate freezer.
Cryosurgery (intended and controlled tissue destruction by ice formation) was carried out by James Arnott in 1845 in an operation on a patient with cancer.
Preservation techniques
Cryobiology as an applied science is primarily concerned with low-temperature preservation. Hypothermic storage is typically above 0 °C but below normothermic (32 °C to 37 °C) mammalian temperatures. Storage by cryopreservation, on the other hand, will be in the −80 to −196 °C temperature range. Organs, and tissues are more frequently the objects of hypothermic storage, whereas single cells have been the most common objects cryopreserved.
A rule of thumb in hypothermic storage is that every 10 °C reduction in temperature is accompanied by a 50% decrease in oxygen consumption. Although hibernating animals have adapted mechanisms to avoid metabolic imbalances associated with hypothermia, hypothermic organs and tissues being maintained for transplantation require special preservation solutions to counter acidosis, depressed sodium pump activity, and increased intracellular calcium. Special organ preservation solutions such as Viaspan (University of Wisconsin solution), HTK, and Celsior have been designed for this purpose. These solutions also contain ingredients to minimize damage by free radicals, prevent edema, compensate for ATP loss, etc.
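The 10 °C rule above is the statement that the temperature coefficient Q10 of oxygen consumption is about 2. The short Python sketch below applies it; the Q10 value and the temperature drops are assumptions taken from that rule of thumb rather than measured data.

```python
def relative_metabolic_rate(delta_t, q10=2.0):
    """Metabolic rate multiplier after cooling by delta_t degrees C.

    Encodes the rule of thumb above as a Q10 relationship: each 10 degree C
    drop halves oxygen consumption (Q10 = 2 is assumed from the 50% figure).
    """
    return q10 ** (-delta_t / 10.0)

print(relative_metabolic_rate(10))   # 0.5   -> 50% of baseline demand
print(relative_metabolic_rate(30))   # 0.125 -> 12.5% of baseline demand
```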
Cryopreservation of cells is guided by the "two-factor hypothesis" of American cryobiologist Peter Mazur, which states that excessively rapid cooling kills cells by intracellular ice formation and excessively slow cooling kills cells by either electrolyte toxicity or mechanical crushing. During slow cooling, ice forms extracellularly, causing water to osmotically leave cells, thereby dehydrating them. Intracellular ice can be much more damaging than extracellular ice.
For red blood cells, the optimum cooling rate is very rapid (nearly 100 °C per second), whereas for stem cells the optimum cooling rate is very slow (1 °C per minute). Cryoprotectants, such as dimethyl sulfoxide and glycerol, are used to protect cells from freezing. A variety of cell types are protected by 10% dimethyl sulfoxide. Cryobiologists attempt to optimize cryoprotectant concentration (minimizing both ice formation and toxicity) and cooling rate. Cells may be cooled at an optimum rate to a temperature between −30 and −40 °C before being plunged into liquid nitrogen.
Slow cooling methods rely on the fact that cells contain few nucleating agents, but contain naturally occurring vitrifying substances that can prevent ice formation in cells that have been moderately dehydrated. Some cryobiologists are seeking mixtures of cryoprotectants for full vitrification (zero ice formation) in preservation of cells, tissues, and organs. Vitrification methods pose a challenge in the requirement to search for cryoprotectant mixtures that can minimize toxicity.
In humans
Human gametes and two-, four- and eight-cell embryos can survive cryopreservation at −196 °C for 10 years under well-controlled laboratory conditions.
Cryopreservation in humans with regard to infertility involves preservation of embryos, sperm, or oocytes via freezing. In vitro conception is then attempted in one of several ways: thawed sperm is introduced to 'fresh' eggs; frozen eggs are thawed and combined with sperm, and the resulting embryos are placed back into the uterus; or a thawed frozen embryo is introduced to the uterus directly. Vitrification is not yet as reliable or proven as traditional slow-freezing methods for sperm, eggs, or embryos, because eggs alone are extremely sensitive to temperature. Many researchers are also freezing ovarian tissue in conjunction with the eggs in hopes that the ovarian tissue can be transplanted back into the body, stimulating normal ovulation cycles. In 2004, Donnez of Louvain in Belgium reported the first successful birth from frozen ovarian tissue. In 1997, samples of ovarian cortex were taken from a woman with Hodgkin's lymphoma and cryopreserved in a (Planer, UK) controlled-rate freezer and then stored in liquid nitrogen. Chemotherapy was initiated after the patient had premature ovarian failure. In 2003, after freeze-thawing, orthotopic autotransplantation of ovarian cortical tissue was performed by laparoscopy, and five months after reimplantation, signs indicated recovery of regular ovulatory cycles. Eleven months after reimplantation, a viable intrauterine pregnancy was confirmed, which resulted in the first such live birth – a girl named Tamara.
Therapeutic hypothermia, e.g. during heart surgery on a "cold" heart (generated by cold perfusion without any ice formation), allows for much longer operations and improves recovery rates for patients.
Scientific societies
The Society for Cryobiology was founded in 1964 to bring together those from the biological, medical, and physical sciences who have a common interest in the effects of low temperatures on biological systems. As of 2007, the Society for Cryobiology had about 280 members from around the world, one-half of them US-based. The purpose of the Society is to promote scientific research in low temperature biology, to improve scientific understanding in this field, and to disseminate and apply this knowledge to the benefit of mankind. The Society requires of all its members the highest ethical and scientific standards in the performance of their professional activities. According to the Society's bylaws, membership may be refused to applicants whose conduct is deemed detrimental to the Society; in 1982, the bylaws were amended explicitly to exclude "any practice or application of freezing deceased persons in the anticipation of their reanimation", over the objections of some members who were cryonicists, such as Jerry Leaf. The Society organizes an annual scientific meeting dedicated to all aspects of low-temperature biology. This international meeting offers opportunities for presentation and discussion of the most up-to-date research in cryobiology, as well as reviewing specific aspects through symposia and workshops. Members are also kept informed of news and forthcoming meetings through the Society newsletter, News Notes. The 2011–2012 president of the Society for Cryobiology was John H. Crowe.
The Society for Low Temperature Biology was founded in 1964 and became a registered charity in 2003 with the purpose of promoting research into the effects of low temperatures on all types of organisms and their constituent cells, tissues, and organs. As of 2006, the society had around 130 (mostly British and European) members and holds at least one annual general meeting. The program usually includes both a symposium on a topical subject and a session of free communications on any aspect of low-temperature biology. Recent symposia have included long-term stability, preservation of aquatic organisms, cryopreservation of embryos and gametes, preservation of plants, low-temperature microscopy, vitrification (glass formation of aqueous systems during cooling), freeze drying and tissue banking. Members are informed through the Society Newsletter, which is presently published three times a year.
Journals
Cryobiology (publisher: Elsevier) is the foremost scientific publication in this area, with about 60 refereed contributions published each year. Articles concern any aspect of low-temperature biology and medicine (e.g. freezing, freeze-drying, hibernation, cold tolerance and adaptation, cryoprotective compounds, medical applications of reduced temperature, cryosurgery, hypothermia, and perfusion of organs).
Cryo Letters is an independent UK-based rapid communication journal which publishes papers on the effects produced by low temperatures on a wide variety of biophysical and biological processes, or studies involving low-temperature techniques in the investigation of biological and ecological topics.
Biopreservation and Biobanking (formerly Cell Preservation Technology) is a peer-reviewed quarterly scientific journal published by Mary Ann Liebert, Inc. dedicated to the diverse spectrum of preservation technologies including cryopreservation, dry-state (anhydrobiosis), and glassy-state and hypothermic maintenance. Cell Preservation Technology has been renamed Biopreservation and Biobanking and is the official journal of International Society for Biological and Environmental Repositories.
Problems of Cryobiology and Cryomedicine (formerly Kriobiologiya (1985–1990) and Problems of Cryobiology (1991–2012)) is published by the Institute for Problems of Cryobiology and Cryomedicine. The journal covers all topics related to low-temperature biology, medicine and engineering.
See also
Cryptobiosis
Aldehyde-stabilized cryopreservation
References
External links
Cell Preservation Technology
Cellular cryobiology and anhydrobiology
An overview of the science behind cryobiology at the Science Creative Quarterly
Phase transitions
Cryogenics
Cryonics | Cryobiology | [
"Physics",
"Chemistry",
"Biology"
] | 3,558 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Phases of matter",
"Cryogenics",
"Critical phenomena",
"Cryobiology",
"Biochemistry",
"Statistical mechanics",
"Matter"
] |
2,427,587 | https://en.wikipedia.org/wiki/Ricci%20decomposition | In the mathematical fields of Riemannian and pseudo-Riemannian geometry, the Ricci decomposition is a way of breaking up the Riemann curvature tensor of a Riemannian or pseudo-Riemannian manifold into pieces with special algebraic properties. This decomposition is of fundamental importance in Riemannian and pseudo-Riemannian geometry.
Definition of the decomposition
Let (M,g) be a Riemannian or pseudo-Riemannian n-manifold. Consider its Riemann curvature as a (0,4)-tensor field. This article will follow the sign convention
written multilinearly, this is the convention
With this convention, the Ricci tensor is a (0,2)-tensor field defined by $R_{jk}=g^{il}R_{ijkl}$ and the scalar curvature is defined by $R=g^{jk}R_{jk}$. (Note that this is the less common sign convention for the Ricci tensor; it is more standard to define it by contracting either the first and third or the second and fourth indices, which yields a Ricci tensor with the opposite sign. Under that more common convention, the signs of the Ricci tensor and scalar must be changed in the equations below.) Define the traceless Ricci tensor
$$Z_{jk} = R_{jk} - \frac{1}{n}R\,g_{jk},$$
and then define three (0,4)-tensor fields S, E, and W by
The "Ricci decomposition" is the statement
As stated, this is vacuous since it is just a reorganization of the definition of W. The importance of the decomposition is in the properties of the three new tensors S, E, and W.
Terminological note. The tensor W is called the Weyl tensor. The notation W is standard in mathematics literature, while C is more common in physics literature. The notation R is standard in both, while there is no standardized notation for S, Z, and E.
Basic properties
Properties of the pieces
Each of the tensors S, E, and W has the same algebraic symmetries as the Riemann tensor. That is:
$$T_{ijkl}=-T_{jikl}=-T_{ijlk},\qquad T_{ijkl}=T_{klij},$$
together with the first Bianchi identity
$$T_{ijkl}+T_{iklj}+T_{iljk}=0.$$
The Weyl tensor has the additional symmetry that it is completely traceless:
$$g^{il}W_{ijkl}=0.$$
Hermann Weyl showed that in dimension at least four, W has the remarkable property of measuring the deviation of a Riemannian or pseudo-Riemannian manifold from local conformal flatness; if it is zero, then M can be covered by charts relative to which g has the form $g_{ij}=e^{f}\delta_{ij}$ for some function f defined chart by chart.
(In fewer than three dimensions, every manifold is locally conformally flat, whereas in three dimensions, the Cotton tensor measures deviation from local conformal flatness.)
Properties of the decomposition
One may check that the Ricci decomposition is orthogonal in the sense that
$$\langle S,E\rangle=\langle S,W\rangle=\langle E,W\rangle=0,$$
recalling the general definition $\langle T,T'\rangle=g^{ia}g^{jb}g^{kc}g^{ld}\,T_{ijkl}T'_{abcd}$. This has the consequence, which could be proved directly, that
$$|R|^2=|S|^2+|E|^2+|W|^2.$$
This orthogonality can be represented without indices by
together with
Related formulas
One can compute the "norm formulas"
and the "trace formulas"
Mathematical explanation
Mathematically, the Ricci decomposition is the decomposition of the space of all tensors having the symmetries of the Riemann tensor into its irreducible representations for the action of the orthogonal group. Let V be an n-dimensional vector space, equipped with a metric tensor (of possibly mixed signature). Here V is modeled on the cotangent space at a point, so that a curvature tensor R (with all indices lowered) is an element of the tensor product V⊗V⊗V⊗V. The curvature tensor is skew symmetric in its first and last two entries:
$$R(x,y,z,w)=-R(y,x,z,w)=-R(x,y,w,z),$$
and obeys the interchange symmetry
$$R(x,y,z,w)=R(z,w,x,y)$$
for all x,y,z,w ∈ V∗. As a result, R is an element of the subspace $S^2\Lambda^2 V$, the second symmetric power of the second exterior power of V. A curvature tensor must also satisfy the Bianchi identity, meaning that it is in the kernel of the linear map b given by
$$b(R)(x,y,z,w)=R(x,y,z,w)+R(y,z,x,w)+R(z,x,y,w).$$
The kernel of b in $S^2\Lambda^2V$ is the space of algebraic curvature tensors. The Ricci decomposition is the decomposition of this space into irreducible factors. The Ricci contraction mapping $c: S^2\Lambda^2V \to S^2V$
is given by
$$c(R)_{jk}=g^{il}R_{ijkl}.$$
This associates a symmetric 2-form to an algebraic curvature tensor. Conversely, given a pair of symmetric 2-forms h and k, the Kulkarni–Nomizu product of h and k (written here as h ⊙ k),
$$(h\odot k)_{ijkl}=h_{ik}k_{jl}+h_{jl}k_{ik}-h_{il}k_{jk}-h_{jk}k_{il},$$
produces an algebraic curvature tensor.
If n ≥ 4, then there is an orthogonal decomposition of the space of algebraic curvature tensors into (unique) irreducible subspaces: a scalar submodule, a traceless-Ricci submodule, and a Weyl submodule,
where
the scalar submodule is $\mathbb{R}\,(g\odot g)$, where $\mathbb{R}$ is the space of real scalars;
the traceless-Ricci submodule is $S^2_0V\odot g$, where $S^2_0V$ is the space of trace-free symmetric 2-forms; and the Weyl submodule is the space of totally traceless algebraic curvature tensors, i.e. the kernel of the Ricci contraction c.
The parts S, E, and W of the Ricci decomposition of a given Riemann tensor R are the orthogonal projections of R onto these invariant factors, and correspond (respectively) to the Ricci scalar, the trace-removed Ricci tensor, and the Weyl tensor of the Riemann curvature tensor. In particular,
$$R = S + E + W$$
is an orthogonal decomposition in the sense that
$$\langle S,E\rangle=\langle S,W\rangle=\langle E,W\rangle=0.$$
This decomposition expresses the space of tensors with Riemann symmetries as a direct sum of the scalar submodule, the Ricci submodule, and the Weyl submodule, respectively. Each of these modules is an irreducible representation for the orthogonal group, and thus the Ricci decomposition is a special case of the splitting of a module for a semisimple Lie group into its irreducible factors. In dimension 4, the Weyl module decomposes further into a pair of irreducible factors for the special orthogonal group: the self-dual and anti-self-dual parts W+ and W−.
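As a concrete restatement of the projections above, the following NumPy sketch splits a (0,4) curvature tensor into its scalar, traceless-Ricci, and Weyl parts. It assumes the more common sign convention for the Ricci contraction (first and third indices), under which the classical coefficients R/(2n(n−1)) and 1/(n−2) appear; with this article's convention the signs of the first two pieces flip.

```python
import numpy as np

def kulkarni_nomizu(h, k):
    """(h ⊙ k)_{ijkl} = h_ik k_jl + h_jl k_ik - h_il k_jk - h_jk k_il."""
    return (np.einsum('ik,jl->ijkl', h, k) + np.einsum('jl,ik->ijkl', h, k)
            - np.einsum('il,jk->ijkl', h, k) - np.einsum('jk,il->ijkl', h, k))

def ricci_decomposition(g, riem):
    """Split a (0,4) curvature tensor into scalar, traceless-Ricci, Weyl parts.

    Assumes n >= 4 and the common convention Ric_{jl} = g^{ik} R_{ijkl}.
    """
    n = g.shape[0]
    g_inv = np.linalg.inv(g)
    ric = np.einsum('ik,ijkl->jl', g_inv, riem)       # Ricci tensor
    scal = np.einsum('jl,jl->', g_inv, ric)           # scalar curvature
    z = ric - (scal / n) * g                          # traceless Ricci tensor
    S = scal / (2 * n * (n - 1)) * kulkarni_nomizu(g, g)   # scalar part
    E = kulkarni_nomizu(z, g) / (n - 2)                    # traceless-Ricci part
    W = riem - S - E                                       # Weyl part
    return S, E, W
```

For a constant-curvature metric (e.g. the round sphere), z vanishes and the routine returns W = 0, consistent with local conformal flatness.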
Physical interpretation
The Ricci decomposition can be interpreted physically in Einstein's theory of general relativity, where it is sometimes called the Géhéniau-Debever decomposition. In this theory, the Einstein field equation
$$G_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu},$$
where $T_{\mu\nu}$ is the stress–energy tensor describing the amount and motion of all matter and all nongravitational field energy and momentum, states that the Ricci tensor—or equivalently, the Einstein tensor—represents that part of the gravitational field which is due to the immediate presence of nongravitational energy and momentum. The Weyl tensor represents the part of the gravitational field which can propagate as a gravitational wave through a region containing no matter or nongravitational fields. Regions of spacetime in which the Weyl tensor vanishes contain no gravitational radiation and are also conformally flat.
See also
Bel decomposition of the Riemann tensor
Conformal geometry
Petrov classification
Plebanski tensor
Ricci calculus
Schouten tensor
Trace-free Ricci tensor
References
Differential geometry
Riemannian geometry
Tensors in general relativity | Ricci decomposition | [
"Physics",
"Engineering"
] | 1,371 | [
"Tensors in general relativity",
"Tensors",
"Tensor physical quantities",
"Physical quantities"
] |
2,427,912 | https://en.wikipedia.org/wiki/False%20nearest%20neighbor%20algorithm | Within abstract algebra, the false nearest neighbor algorithm is an algorithm for estimating the embedding dimension. The concept was proposed by Kennel et al. (1992). The main idea is to examine how the number of neighbors of a point along a signal trajectory change with increasing embedding dimension. In too low an embedding dimension, many of the neighbors will be false, but in an appropriate embedding dimension or higher, the neighbors are real. With increasing dimension, the false neighbors will no longer be neighbors. Therefore, by examining how the number of neighbors change as a function of dimension, an appropriate embedding can be determined.
See also
Nearest neighbor
Time series
References
Statistical algorithms
Dynamical systems
Nonlinear time series analysis | False nearest neighbor algorithm | [
"Physics",
"Mathematics"
] | 156 | [
"Mechanics",
"Dynamical systems"
] |
2,428,476 | https://en.wikipedia.org/wiki/Mass%20balance | In physics, a mass balance, also called a material balance, is an application of conservation of mass to the analysis of physical systems. By accounting for material entering and leaving a system, mass flows can be identified which might have been unknown, or difficult to measure without this technique. The exact conservation law used in the analysis of the system depends on the context of the problem, but all revolve around mass conservation, i.e., that matter cannot disappear or be created spontaneously.
Therefore, mass balances are used widely in engineering and environmental analyses. For example, mass balance theory is used to design chemical reactors, to analyse alternative processes to produce chemicals, as well as to model pollution dispersion and other processes of physical systems. Mass balances form the foundation of process engineering design. Closely related and complementary analysis techniques include the population balance, energy balance and the somewhat more complex entropy balance. These techniques are required for thorough design and analysis of systems such as the refrigeration cycle.
In environmental monitoring, the term budget calculations is used to describe mass balance equations where they are used to evaluate the monitoring data (comparing input and output, etc.). In biology, the dynamic energy budget theory for metabolic organisation makes explicit use of mass and energy balance.
Introduction
The general form quoted for a mass balance is: the mass that enters a system must, by conservation of mass, either leave the system or accumulate within the system.
Mathematically the mass balance for a system without a chemical reaction is as follows:
$$\text{Input} = \text{Output} + \text{Accumulation}$$
Strictly speaking the above equation holds also for systems with chemical reactions if the terms in the balance equation are taken to refer to total mass, i.e. the sum of all the chemical species of the system. In the absence of a chemical reaction the amount of any chemical species flowing in and out will be the same; this gives rise to an equation for each species present in the system. However, if this is not the case then the mass balance equation must be amended to allow for the generation or depletion (consumption) of each chemical species. Some use one term in this equation to account for chemical reactions, which will be negative for depletion and positive for generation. However, the conventional form of this equation is written to account for both a positive generation term (i.e. product of reaction) and a negative consumption term (the reactants used to produce the products). Although overall one term will account for the total balance on the system, if this balance equation is to be applied to an individual species and then the entire process, both terms are necessary. This modified equation can be used not only for reactive systems, but for population balances such as arise in particle mechanics problems. The equation is given below; note that it simplifies to the earlier equation in the case that the generation term is zero.
$$\text{Input} + \text{Generation} = \text{Output} + \text{Accumulation} + \text{Consumption}$$
In the absence of a nuclear reaction the number of atoms flowing in and out must remain the same, even in the presence of a chemical reaction.
For a balance to be formed, the boundaries of the system must be clearly defined.
Mass balances can be taken over physical systems at multiple scales.
Mass balances can be simplified with the assumption of steady state, in which the accumulation term is zero.
Illustrative example
A simple example can illustrate the concept. Consider the situation in which a slurry is flowing into a settling tank to remove the solids in the tank. Solids are collected at the bottom by means of a conveyor belt partially submerged in the tank, and water exits via an overflow outlet.
In this example, there are two substances: solids and water. The water overflow outlet carries an increased concentration of water relative to solids, as compared to the slurry inlet, and the exit of the conveyor belt carries an increased concentration of solids relative to water.
Assumptions
Steady state
Non-reactive system
Analysis
Suppose that the slurry inlet composition (by mass) is 50% solid and 50% water, with a mass flow of . The tank is assumed to be operating at steady state, and as such accumulation is zero, so input and output must be equal for both the solids and water. If we know that the removal efficiency for the slurry tank is 60%, then the water outlet will contain of solids (40% times times 50% solids). If we measure the flow rate of the combined solids and water, and the water outlet is shown to be , then the amount of water exiting via the conveyor belt must be . This allows us to completely determine how the mass has been distributed in the system with only limited information and using the mass balance relations across the system boundaries. The mass balance for this system can be described in a tabular form:
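Since the original stream values of this example were lost in transcription, the following Python sketch reworks the same balance with assumed figures: a 100 kg/h feed at 50% solids, a 60% solids-removal efficiency, and a measured 60 kg/h overflow. Only the numbers are assumptions; the balance logic follows the text.

```python
# Slurry settling tank at steady state (accumulation = 0).
feed = 100.0                     # kg/h slurry, assumed
solids_in, water_in = 0.5 * feed, 0.5 * feed
efficiency = 0.60                # fraction of solids sent to the belt

solids_overflow = (1 - efficiency) * solids_in        # 20 kg/h
solids_belt = efficiency * solids_in                  # 30 kg/h
overflow_total = 60.0            # kg/h, measured combined outlet (assumed)
water_overflow = overflow_total - solids_overflow     # 40 kg/h
water_belt = water_in - water_overflow                # 10 kg/h

# Check: everything entering leaves somewhere.
assert abs(solids_belt + water_belt + overflow_total - feed) < 1e-9
print(solids_belt, water_belt, water_overflow)
```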
Mass feedback (recycle)
Mass balances can be performed across systems which have cyclic flows. In these systems output streams are fed back into the input of a unit, often for further reprocessing.
Such systems are common in grinding circuits, where grain is crushed then sieved to only allow fine particles out of the circuit and the larger particles are returned to the roller mill (grinder). However, recycle flows are by no means restricted to solid mechanics operations; they are used in liquid and gas flows, as well. One such example is in cooling towers, where water is pumped through a tower many times, with only a small quantity of water drawn off at each pass (to prevent solids build up) until it has either evaporated or exited with the drawn off water. The mass balance for water is .
The use of the recycle aids in increasing overall conversion of input products, which is useful for low per-pass conversion processes (such as the Haber process).
Differential mass balances
A mass balance can also be taken differentially. The concept is the same as for a large mass balance, but it is performed in the context of a limiting system (for example, one can consider the limiting case in time or, more commonly, volume). A differential mass balance is used to generate differential equations that can provide an effective tool for modelling and understanding the target system.
The differential mass balance is usually solved in two steps: first, a set of governing differential equations must be obtained, and then these equations must be solved, either analytically or, for less tractable problems, numerically.
The following systems are good examples of the applications of the differential mass balance:
Ideal (stirred) batch reactor
Ideal tank reactor, also named Continuous Stirred Tank Reactor (CSTR)
Ideal Plug Flow Reactor (PFR)
Ideal batch reactor
The ideal completely mixed batch reactor is a closed system. Isothermal conditions are assumed, and mixing prevents concentration gradients as reactant concentrations decrease and product concentrations increase over time. Many chemistry textbooks implicitly assume that the studied system can be described as a batch reactor when they write about reaction kinetics and chemical equilibrium.
The mass balance for a substance A becomes
$$\frac{dn_A}{dt} = r_A V,$$
where
$r_A$ denotes the rate at which substance A is produced;
$V$ is the volume (which may be constant or not);
$n_A$ is the number of moles (mol) of substance A.
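As a minimal illustration, assume a constant-volume batch reactor with first-order decay, rA = −k·cA; the rate law and the constants below are illustrative assumptions, not from the text. The balance then integrates to an exponential:

```python
import numpy as np

# Ideal batch reactor, A -> products with rA = -k*cA and constant volume,
# so dcA/dt = rA and cA(t) = cA0 * exp(-k t).
k, cA0 = 0.05, 1.0                  # 1/s, mol/L (assumed)
t = np.linspace(0, 60, 5)           # s
cA = cA0 * np.exp(-k * t)
print(np.round(cA, 3))              # smooth exponential decay
```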
In a fed-batch reactor some reactants/ingredients are added continuously or in pulses (compare making porridge by either first blending all ingredients and then letting it boil, which can be described as a batch reactor, or by first mixing only water and salt and making that boil before the other ingredients are added, which can be described as a fed-batch reactor). Mass balances for fed-batch reactors become a bit more complicated.
Reactive example
In the first example, we will show how to use a mass balance to derive a relationship between the percent excess air for the combustion of a hydrocarbon-based fuel oil and the percent oxygen in the combustion product gas. First, normal dry air contains of oxygen per mole of air, so there is one mole of O2 in of dry air. For stoichiometric combustion, the relationships between the mass of air and the mass of each combustible element in a fuel oil are:
Considering the accuracy of typical analytical procedures, an equation for the mass of air per mass of fuel at stoichiometric combustion is:
$$AFR_{stoich} = 11.51\,C + 34.28\,H + 4.31\,S - 4.32\,O,$$
where C, H, O, and S refer to the mass fraction of each element in the fuel oil, sulfur burning to SO2, and AFR refers to the air-fuel ratio in mass units.
For of fuel oil containing 86.1% C, 13.6% H, 0.2% O, and 0.1% S the stoichiometric mass of air is , so AFR = 14.56. The combustion product mass is then . At exact stoichiometry, O2 should be absent. At 15 percent excess air, the AFR = 16.75, and the mass of the combustion product gas is , which contains of excess oxygen. The combustion gas thus contains 2.84 percent O2 by mass. The relationships between percent excess air and percent O2 in the combustion gas are accurately expressed by quadratic equations, valid over the range 0–30 percent excess air:
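The Python sketch below reproduces the quoted figures from the overall mass balance. The oxygen-demand factors (2.664 kg O2 per kg C, 7.937 per kg H, about 1 per kg S) and the 23.15 wt% oxygen content of air are standard values assumed here rather than taken from the text.

```python
def excess_o2_percent(wC, wH, wO, wS, excess_air):
    """Mass % O2 in flue gas for a fuel oil, from an overall mass balance."""
    o2_stoich = 2.664 * wC + 7.937 * wH + 0.998 * wS - wO  # kg O2 / kg fuel
    afr_stoich = o2_stoich / 0.2315                         # kg air / kg fuel
    afr = afr_stoich * (1 + excess_air)
    flue_mass = 1.0 + afr                 # all mass in = mass out (no ash)
    o2_excess = excess_air * o2_stoich    # unreacted oxygen, kg / kg fuel
    return afr, 100 * o2_excess / flue_mass

afr, pct_o2 = excess_o2_percent(0.861, 0.136, 0.002, 0.001, 0.15)
print(f"AFR = {afr:.2f}, O2 in flue gas = {pct_o2:.2f} wt%")
# -> AFR = 16.75, O2 ≈ 2.85 wt% (the text quotes 2.84; rounding difference)
```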
In the second example, we will use the law of mass action to derive the expression for a chemical equilibrium constant.
Assume we have a closed reactor in which the following liquid-phase reversible reaction occurs:
$$aA + bB \rightleftharpoons cC + dD$$
The mass balance for substance A becomes
$$\frac{dn_A}{dt} = r_A V$$
As we have a liquid phase reaction we can (usually) assume a constant volume, and since $n_A = c_A V$ we get
$$\frac{dc_A}{dt} = r_A$$
or
$$r_A = \frac{dc_A}{dt}.$$
In many textbooks this is given as the definition of reaction rate without specifying the implicit assumption that we are talking about reaction rate in a closed system with only one reaction. This is an unfortunate mistake that has confused many students over the years.
According to the law of mass action the forward reaction rate can be written as
$$r_1 = k_1\, c_A^{\,a}\, c_B^{\,b}$$
and the backward reaction rate as
$$r_2 = k_2\, c_C^{\,c}\, c_D^{\,d}.$$
The rate at which substance A is produced is thus
$$r_A = a\,(r_2 - r_1)$$
and since, at equilibrium, the concentration of A is constant we get
$$k_1\, c_A^{\,a}\, c_B^{\,b} = k_2\, c_C^{\,c}\, c_D^{\,d},$$
or, rearranged
$$K = \frac{k_1}{k_2} = \frac{c_C^{\,c}\, c_D^{\,d}}{c_A^{\,a}\, c_B^{\,b}}.$$
Ideal tank reactor/continuously stirred tank reactor
The continuously mixed tank reactor is an open system with an influent stream of reactants and an effluent stream of products. A lake can be regarded as a tank reactor, and lakes with long turnover times (e.g. with low flux-to-volume ratios) can for many purposes be regarded as continuously stirred (e.g. homogeneous in all respects). The mass balance then becomes
$$\frac{dn_A}{dt} = Q_{in}\,c_{A,in} - Q_{out}\,c_{A,out} + r_A V,$$
where
$Q_{in}$ is the volumetric flow into the system;
$Q_{out}$ is the volumetric flow out of the system;
$c_{A,in}$ is the concentration of A in the inflow;
$c_{A,out}$ is the concentration of A in the outflow.
In an open system we can never reach a chemical equilibrium. We can, however, reach a steady state where all state variables (temperature, concentrations, etc.) remain constant ($\frac{dc_A}{dt} = 0$).
Example
Consider a bathtub in which there is some bathing salt dissolved. We now fill in more water, keeping the bottom plug in. What happens?
Since there is no reaction, $r_A = 0$, and since there is no outflow, $Q_{out} = 0$. The mass balance becomes
$$\frac{dn_A}{dt} = Q_{in}\,c_{A,in},$$
or, because the added water contains no salt ($c_{A,in} = 0$),
$$\frac{dn_A}{dt} = 0.$$
Using a mass balance for total volume, however, it is evident that $\frac{dV}{dt} = Q_{in}$ and that $n_A = c_A V.$ Thus we get
$$\frac{dc_A}{dt} = -\frac{Q_{in}}{V}\,c_A.$$
Note that there is no reaction and hence no reaction rate or rate law involved, and yet $\frac{dc_A}{dt} \neq 0$. We can thus draw the conclusion that reaction rate can not be defined in a general manner using $\frac{dc_A}{dt}$. One must first write down a mass balance before a link between $\frac{dc_A}{dt}$ and the reaction rate can be found. Many textbooks, however, define reaction rate as
$$r_A = \frac{dc_A}{dt}$$
without mentioning that this definition implicitly assumes that the system is closed, has a constant volume and that there is only one reaction.
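A quick numerical check of the dilution effect (the flow rate, initial volume, and initial concentration below are assumed purely for illustration):

```python
import numpy as np

# Pure water flows in at Q, no outflow: salt moles are conserved
# (dnA/dt = 0) yet the concentration falls as the volume grows.
Q, V0, cA0 = 0.5, 100.0, 2.0      # L/s, L, mol/L (assumed)
t = np.linspace(0, 600, 7)        # s
V = V0 + Q * t
cA = cA0 * V0 / V                 # nA = cA0 * V0 stays constant
print(np.round(cA, 3))            # concentration drops with no reaction
```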
Ideal plug flow reactor (PFR)
The idealized plug flow reactor is an open system resembling a tube with no mixing in the direction of flow but perfect mixing perpendicular to the direction of flow, often used for systems like rivers and water pipes if the flow is turbulent. When a mass balance is made for a tube, one first considers an infinitesimal part of the tube and makes a mass balance over it using the ideal tank reactor model. That mass balance is then integrated over the entire reactor volume to obtain the governing equation.
In numeric solutions, e.g. when using computers, the ideal tube is often translated to a series of tank reactors, as it can be shown that a PFR is equivalent to an infinite number of stirred tanks in series, but the latter is often easier to analyze, especially at steady state.
More complex problems
In reality, reactors are often non-ideal, and combinations of the reactor models above are used to describe the system. Not only chemical reaction rates, but also mass transfer rates may be important in the mathematical description of a system, especially in heterogeneous systems.
As the chemical reaction rate depends on temperature it is often necessary to make both an energy balance (often a heat balance rather than a full-fledged energy balance) as well as mass balances to fully describe the system. A different reactor model might be needed for the energy balance: A system that is closed with respect to mass might be open with respect to energy e.g. since heat may enter the system through conduction.
Commercial use
In industrial process plants, using the fact that the mass entering and leaving any portion of a process plant must balance, data validation and reconciliation algorithms may be employed to correct measured flows, provided that enough redundancy of flow measurements exist to permit statistical reconciliation and exclusion of detectably erroneous measurements. Since all real world measured values contain inherent error, the reconciled measurements provide a better basis than the measured values do for financial reporting, optimization, and regulatory reporting. Software packages exist to make this commercially feasible on a daily basis.
See also
Bioreactor
Chemical engineering
Continuity equation
Dilution (equation)
Energy accounting
Glacier mass balance
Mass flux
Material flow analysis
Material balance planning
Fluid mechanics
References
External links
Material Balance Calculations
Material Balance Fundamentals
The Material Balance for Chemical Reactors
Material and energy balance
Heat and material balance method of process control for petrochemical plants and oil refineries, United States Patent 6751527
Mass
Chemical process engineering
Transport phenomena | Mass balance | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,815 | [
"Transport phenomena",
"Scalar physical quantities",
"Physical phenomena",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Mass",
"Size",
"Chemical process engineering",
"Wikipedia categories named after physical quantities",
"Matter"
] |
2,428,570 | https://en.wikipedia.org/wiki/Immunoelectrophoresis | Immunoelectrophoresis is a general name for a number of biochemical methods for separation and characterization of proteins based on electrophoresis and reaction with antibodies. All variants of immunoelectrophoresis require immunoglobulins, also known as antibodies, reacting with the proteins to be separated or characterized. The methods were developed and used extensively during the second half of the 20th century. In somewhat chronological order: Immunoelectrophoretic analysis (one-dimensional immunoelectrophoresis ad modum Grabar), crossed immunoelectrophoresis (two-dimensional quantitative immunoelectrophoresis ad modum Clarke and Freeman or ad modum Laurell), rocket-immunoelectrophoresis (one-dimensional quantitative immunoelectrophoresis ad modum Laurell), fused rocket immunoelectrophoresis ad modum Svendsen and Harboe, affinity immunoelectrophoresis ad modum Bøg-Hansen.
Methods
Immunoelectrophoresis is a general term describing many combinations of the principles of electrophoresis and reaction of antibodies, also known as immunodiffusion.
Agarose as 1% gel slabs of about 1 mm thickness buffered at high pH (around 8.6) is traditionally preferred for electrophoresis and the reaction with antibodies. The agarose was chosen as the gel matrix because it has large pores allowing free passage and separation of proteins but provides an anchor for the immunoprecipitates of protein and specific antibodies. The high pH was chosen because antibodies are practically immobile at high pH. Electrophoresis equipment with a horizontal cooling plate was normally recommended for the electrophoresis.
Immunoprecipitates are visible in the wet agarose gel, but are stained with protein stains like Coomassie brilliant blue in the dried gel. In contrast to SDS-gel electrophoresis, the electrophoresis in agarose allows native conditions, preserving the native structure and activities of the proteins under investigation, therefore immunoelectrophoresis allows characterization of enzyme activities and ligand binding etc. in addition to electrophoretic separation.
Counterimmunoelectrophoresis is the combination of immunodiffusion with electrophoresis. In essence electrophoresis speeds up the process of moving the reactants together.
The immunoelectrophoretic analysis ad modum Grabar is the classical method of immunoelectrophoresis. Proteins are separated by electrophoresis, then antibodies are applied in a trough next to the separated proteins and immunoprecipitates are formed after a period of diffusion of the separated proteins and antibodies against each other. The introduction of the immunoelectrophoretic analysis gave a great boost to protein chemistry, some of the first results were the resolution of proteins in biological fluids and biological extracts. Among the important observations made were the great number of different proteins in serum, the existence of several immunoglobulin classes and their electrophoretic heterogeneity.
Crossed immunoelectrophoresis is also called two-dimensional quantitative immunoelectrophoresis ad modum Clarke and Freeman or ad modum Laurell. In this method the proteins are first separated during the first dimension electrophoresis, then instead of the diffusion towards the antibodies, the proteins are electrophoresed into an antibody-containing gel in the second dimension. Immunoprecipitation will take place during the second dimension electrophorsis and
the immunoprecipitates have a characteristic bell-shape, each precipitate representing one antigen, the position of the precipitate being dependent on the amount of protein as well as the amount of specific antibody in the gel, so relative quantification can be performed. The sensitivity and resolving power of crossed immunoelectrophoresis is greater than that of the classical immunoelectrophoretic analysis, and there are multiple variations of the technique useful for various purposes. Crossed immunoelectrophoresis has been used for studies of proteins in biological fluids, particularly human serum, and biological extracts.
Rocket immunoelectrophoresis is one-dimensional quantitative immunoelectrophoresis. The method has been used for quantitation of human serum proteins before automated methods became available.
Fused rocket immunoelectrophoresis is a modification of one-dimensional quantitative immunoelectrophoresis used for detailed measurement of proteins in fractions from protein separation experiments.
Affinity immunoelectrophoresis is based on changes in the electrophoretic pattern of proteins through specific interaction or complex formation with other macromolecules or ligands. Affinity immunoelectrophoresis has been used for estimation of binding constants, as for instance with lectins or for characterization of proteins with specific features like glycan content or ligand binding. Some variants of affinity immunoelectrophoresis are similar to affinity chromatography by use of immobilized ligands.
Binding of ligands. The open structure of the immunoprecipitate in the agarose gel will allow additional binding of radioactively labeled antibodies and other ligands to reveal specific proteins. Application of this possibility has been used for instance for identification of allergens through reaction with immunoglobulin E (IgE) and for identification of glycoproteins with lectins.
General comments. Two factors explain why immunoelectrophoretic methods are not widely used. First, they are rather work intensive and require some manual expertise. Second, they require rather large amounts of polyclonal antibodies. Today gel electrophoresis followed by electroblotting is the preferred method for protein characterization because of its ease of operation, its high sensitivity, and its low requirement for specific antibodies. In addition, proteins are separated by gel electrophoresis on the basis of their apparent molecular weight, which is not accomplished by immunoelectrophoresis; nevertheless, immunoelectrophoretic methods are still useful when non-reducing conditions are needed.
Applications
Counter-immunoelectrophoresis and its modification
Compared with other conventional diagnostic methods, e.g. for viral infection testing, counter-immunoelectrophoresis is a highly specific, simple, and speedy method that does not require sophisticated, expensive tools, input materials, or long-term capacity building. Despite the high informativeness of counter-immunoelectrophoresis, the results in practice can be dubious at times. As a result, counter-immunoelectrophoresis procedures can be improved by using a manufactured amphiphilic fluorescein-containing copolymer to increase the antigen-antibody interaction. The use of the fluorescein copolymer-antigen mixture improved the association with plasma antibody levels of animals immunized against hemorrhagic disease and enhanced protein concentration in the precipitated zone, according to the findings. The capability of the amphiphilic fluorescein copolymer to boost antigen-antibody association and to make the fluorescent accumulation domain visible may improve the efficiency of counter-immunoelectrophoresis for rapid diagnosis of infectious disease.
Immunomethods
The terms immunomethods and immunochemical techniques refer to a variety of immunoelectrophoresis processes whose results are identified using antibodies and immunological methodologies. The great sensitivity of immunomethods is thus a benefit weighed against the great expense of using antibodies. Many different types of agarose electrophoresis are used to see how proteins travel under diverse circumstances. Proteins are recognized after the run is complete by incubating the gels with specific antibodies, and the gels are then stained with Coomassie blue.
Radial immunodiffusion
Radial immunodiffusion is an immunoassay technique for determining the concentration of a particular protein in a mixture containing other components. Like the other methods, it is performed in an agarose gel. In this procedure, the samples are placed into round wells in the central part of the gel and diffuse through it, generating a precipitation ring whose diameter is related to the amount of protein that has diffused.
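In the quantitative (Mancini end-point) variant of radial immunodiffusion, the squared ring diameter varies linearly with antigen concentration, so a standard curve can be fitted and unknowns interpolated. The Python sketch below uses invented standards purely for illustration.

```python
import numpy as np

# Standard curve: d^2 = a + b * concentration (Mancini end-point method).
conc = np.array([0.25, 0.5, 1.0, 2.0])   # mg/mL standards (assumed)
diam = np.array([3.1, 4.0, 5.3, 7.2])    # measured ring diameters, mm (assumed)
b, a = np.polyfit(conc, diam**2, 1)      # fit slope b and intercept a

unknown_d = 6.0                          # ring diameter of the unknown, mm
print((unknown_d**2 - a) / b)            # estimated concentration, mg/mL
```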
Identification of nanomaterial interaction with C3 protein complement and 2D immunoelectrophoresis
2D immunoelectrophoresis is a potential method that can be used for a range of functions involving protein migration, such as the deep examination of protein opsonization, by running the first dimension as a function of protein molar mass and the second dimension as a function of the isoelectric point. Even though the gel contains a large number of proteins, each spot on the 2D gel will represent a particular protein with a specific molecular mass and feature.
2D immunoelectrophoresis is also a valuable tool for examining the stimulation of signal transduction pathways, which is an essential factor in researching nanoparticles before in vivo delivery, because it will impact nanoparticle longevity, destination, and bio-distribution. This method employs two-dimensional horizontal agarose protein electrophoresis to specifically identify the association of nanoparticles with the C3 protein. Proteins can be separated in the first dimension according to their molecular mass (the smaller the protein, the farther it migrates), and in the second dimension according to their abundance.
Some limitations of immunoelectrophoresis
Though immunoelectrophoresis has a number of benefits, it also has certain drawbacks. Compared with other electrophoretic methods, such as immunofixation, it is sluggish and less precise, the results can be difficult to interpret, and several small monoclonal proteins may be harder to identify. The limited availability of specific antibodies also restricts its utility as an analytical technique. Traditional (classical or conventional) immunoelectrophoresis has further drawbacks: it is time consuming (the protocol might take up to 3 days to finish) and has limited specificity and sensitivity. As a result, newer immunoelectrophoresis techniques have largely supplanted conventional immunoelectrophoresis.
References
External links
Comprehensive text edited by Niels H. Axelsen in Scandinavian Journal of Immunology, 1975 Volume 4 Supplement
https://web.archive.org/web/20070612225626/http://www.lib.mcg.edu/edu/esimmuno/ch4/immelec.htm
Immuno-Electrophoresis. Immuno-Diffusion
Biochemistry methods
Electrophoresis
Molecular biology
Protein methods
Laboratory techniques
Immunologic tests | Immunoelectrophoresis | [
"Chemistry",
"Biology"
] | 2,315 | [
"Biochemistry methods",
"Instrumental analysis",
"Protein methods",
"Protein biochemistry",
"Immunologic tests",
"Biochemical separation processes",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry",
"Electrophoresis"
] |
2,428,994 | https://en.wikipedia.org/wiki/Cockcroft%E2%80%93Walton%20generator | The Cockcroft–Walton (CW) generator, or multiplier, is an electric circuit that generates a high DC voltage from a low-voltage AC. It was named after the British and Irish physicists John Douglas Cockcroft and Ernest Thomas Sinton Walton, who in 1932 used this circuit design to power their particle accelerator, performing the first artificial nuclear disintegration in history. They used this voltage multiplier cascade for most of their research, which in 1951 won them the Nobel Prize in Physics for "Transmutation of atomic nuclei by artificially accelerated atomic particles".
The circuit was developed in 1919, by Heinrich Greinacher, a Swiss physicist. For this reason, this doubler cascade is sometimes also referred to as the Greinacher multiplier. Cockcroft–Walton circuits are still used in particle accelerators. They also are used in everyday electronic devices that require high voltages, such as X-ray machines and photocopiers.
Operation
The CW generator is a voltage multiplier that converts AC electrical power from a low voltage level to a higher DC voltage level. It is made up of a voltage multiplier ladder network of capacitors and diodes to generate high voltages. Unlike transformers, this method eliminates the requirement for the heavy core and the bulk of insulation/potting required. Using only capacitors and diodes, these voltage multipliers can step up relatively low voltages to extremely high values, while at the same time being far lighter and cheaper than transformers. The biggest advantage of such circuits is that the voltage across each stage of the cascade is equal to only twice the peak input voltage in a half-wave rectifier. In a full-wave rectifier it is three times the input voltage. It has the advantage of requiring relatively low-cost components and being easy to insulate. One can also tap the output from any stage, like in a multi-tapped transformer.
To understand the circuit operation, see the diagram of the two-stage version at right. Assume all capacitors are initially uncharged, and the circuit is powered by an alternating voltage Vi such that $V_i(t) = -V_p\sin(\omega t)$, i.e. with a peak value of Vp, which after power-on is 0 volts and starts with a negative half-cycle. After the input voltage is turned on
When the input voltage Vi is decreasing and approaching its negative peak −Vp, current flows from the bottom terminal of the source, through diode D1 and then through capacitor C1, charging it. Vi eventually reaches the negative peak −Vp, at which point C1 is charged to a voltage of Vp. Vi then starts increasing ‒ its derivative reverses sign from negative to positive. When this happens, the current reverses its direction, since the load placed on the source is almost purely capacitive and thus current leads voltage by almost 90°.
When Vi is increasing and approaching its positive peak +Vp, current flows from the top terminal of the source, through C1 (discharging it), through diode D2, and finally through capacitor C2 (charging it). Eventually, Vi reaches +Vp, and when we add to it the voltage of C1 (which is now slightly below +Vp), we get the resulting voltage of almost 2Vp ‒ this is the voltage to which C2 is charged. In this phase, diode D1 is reverse-biased, so no current flows through it.
When Vi starts decreasing again ($dV_i/dt$ is negative), current flows from the bottom terminal of the source, through C2 (discharging it), through diode D3, through C3 (charging it to a voltage of almost 2Vp), and finally through C1 (recharging it to Vp, after it was partially discharged in the previous phase). Since some voltage is dropped also on C1 and not just on C3, C3 will not be charged to 2Vp immediately, but only in later iterations. The same applies to C1 and Vp respectively. Also, in this phase, C2 discharges to a voltage below 2Vp, similarly to C1 in the previous phase. It will be recharged in the next phase.
When Vi begins to increase again, current flows from the top terminal of the source, through C1 and C3 (discharging them), through diode D4, through C4 (charging it to a voltage of almost 2Vp), and finally through C2 (recharging it). During this phase, C1 and C3 discharge below Vp and 2Vp respectively, and will be recharged in the following phase.
At any given moment, either the odd-numbered diodes are conducting, or the even-numbered ones, never both. With each change in the derivative of the input voltage (i.e. each sign change of $dV_i/dt$), current flows up to the next level in the "stack" of capacitors through the diodes. Eventually, after a sufficient number of cycles of the AC input, all capacitors will be charged. (More precisely, we should say their actual voltages will converge sufficiently close to the ideal ones ‒ there will always be some ripple from the AC input). All the capacitors are charged to a voltage of 2Vp, except for C1, which is charged to Vp. The key to the voltage multiplication is that while the capacitors are charged in parallel, they are connected to the load in series. Since C2 and C4 are in series between the output and ground, the total output voltage (under no-load conditions) is Vo = 4Vp.
This circuit can be extended to any number of stages. The no-load output voltage is twice the peak input voltage multiplied by the number of stages N, or equivalently the peak-to-peak input voltage swing (Vpp) times the number of stages:
$$V_o = 2NV_p = NV_{pp}$$
The number of stages is equal to the number of capacitors in series between the output and ground.
One way to look at the circuit is that it functions as a charge "pump", pumping electric charge in one direction, up the stack of capacitors. The CW circuit, along with other similar capacitor circuits, is often called a charge pump. For substantial loads, the charge on the capacitors is partially depleted, and the output voltage drops in proportion to the output current and inversely with the capacitance and driving frequency.
Characteristics
In practice, the CW has a number of drawbacks. As the number of stages is increased, the voltages of the higher stages begin to "sag", primarily due to the electrical impedance of the capacitors in the lower stages. And, when supplying an output current, the voltage ripple rapidly increases as the number of stages is increased (this can be corrected with an output filter, but it requires a stack of capacitors in order to withstand the high voltages involved). For these reasons, CW multipliers with large number of stages are used only where relatively low output current is required. The sag can be reduced by increasing the capacitance in the lower stages, and the ripple can be reduced by increasing the frequency of the input and by using a square waveform. By driving the CW from a high-frequency source, such as an inverter, or a combination of an inverter and HV transformer, the overall physical size and weight of the CW power supply can be substantially reduced.
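Common textbook estimates for an n-stage half-wave multiplier with equal stage capacitors C driven at frequency f are a droop of (I/fC)(2n³/3 + n²/2 − n/6) and a peak-to-peak ripple of I·n(n+1)/(2fC). The Python sketch below applies them; the component values are assumptions chosen to show why a high drive frequency helps.

```python
def cw_output(vp, n, i_load=0.0, f=50.0, c=100e-9):
    """No-load and loaded output of an n-stage half-wave CW multiplier.

    Uses the common light-load droop and ripple estimates for equal stage
    capacitors c driven at frequency f; treat them as approximations.
    """
    v_noload = 2 * n * vp
    droop = i_load / (f * c) * (2 * n**3 / 3 + n**2 / 2 - n / 6)
    ripple = i_load * n * (n + 1) / (2 * f * c)
    return v_noload - droop, ripple

# 4 stages, 325 V peak (230 V mains), 1 mA load, 50 kHz drive (assumed):
v, r = cw_output(vp=325.0, n=4, i_load=1e-3, f=50e3, c=100e-9)
print(f"output ≈ {v:.0f} V, ripple ≈ {r:.0f} V")   # ≈ 2590 V, ≈ 2 V
```

At 50 Hz mains frequency the same component values would give a droop larger than the no-load output, which is why high-frequency drive is used in practice.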
CW multipliers are typically used to develop higher voltages for relatively low-current applications, such as bias voltages ranging from tens or hundreds of volts to millions of volts for high-energy physics experiments or lightning safety testing. CW multipliers are also found, with a higher number of stages, in laser systems, high-voltage power supplies, X-ray systems, CCFL LCD backlighting, traveling-wave tube amplifiers, ion pumps, electrostatic systems, air ionisers, particle accelerators, copy machines, scientific instrumentation, oscilloscopes, television sets and cathode-ray tubes, electroshock weapons, bug zappers and many other applications that use high-voltage DC.
The Dynamitron is similar to the Cockcroft–Walton generator. However instead of being powered at one end as in the Cockcroft-Walton, the capacitive ladder is charged in parallel electrostatically by a high frequency oscillating voltage applied between two long half-cylindrical electrodes on either side of the ladder column, which induce voltage in semicircular corona rings attached to each end of the diode rectifier tubes.
Image gallery
See also
Marx generator
Voltage multiplier
Notes
Further reading
J. D. Cockcroft and E. T. S. Walton, Experiments with High Velocity Positive Ions.(I) Further Developments in the Method of Obtaining High Velocity Positive Ions, Proceedings of the Royal Society A, vol. 136, pp. 619–630, 1932.
J. D. Cockcroft and E. T. S. Walton, Experiments with High Velocity Positive Ions. II. The Disintegration of Elements by High Velocity Protons, Proceedings of the Royal Society A, vol. 137, pp. 229–242, 1932.
External links
Cockcroft–Walton Multipliers Tutorial EEVBlog at YouTube
Cockcroft Walton
Cockcroft Walton used in particle accelerators
US Department of Energy
Electrical circuits
X-rays
Collection of the Science Museum, London
Electric power conversion
History of electronic engineering
Particle accelerators | Cockcroft–Walton generator | [
"Physics",
"Engineering"
] | 1,944 | [
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Electronic engineering",
"History of electronic engineering",
"Electrical engineering",
"Electrical circuits"
] |
2,430,192 | https://en.wikipedia.org/wiki/Voltammetry | Voltammetry is a category of electroanalytical methods used in analytical chemistry and various industrial processes. In voltammetry, information about an analyte is obtained by measuring the current as the potential is varied. The analytical data for a voltammetric experiment comes in the form of a voltammogram, which plots the current produced by the analyte versus the potential of the working electrode.
Theory
Voltammetry is the study of current as a function of applied potential. Voltammetric methods involve electrochemical cells, and investigate the reactions occurring at electrode/electrolyte interfaces. The reactivity of analytes in these half-cells is used to determine their concentration. It is considered a dynamic electrochemical method as the applied potential is varied over time and the corresponding changes in current are measured. Most experiments control the potential (volts) of an electrode in contact with the analyte while measuring the resulting current (amperes).
Electrochemical cells
Electrochemical cells are used in voltammetric experiments to drive the redox reaction of the analyte. Like other electrochemical cells, two half-cells are required, one to facilitate reduction and the other oxidation. The cell consists of an analyte solution, an ionic electrolyte, and two or three electrodes, with oxidation and reduction reactions occurring at the electrode/electrolyte interfaces. As a species is oxidized (loses electrons), the electrons produced pass through an external electric circuit and generate a current, acting as an electron source for reduction. The generated currents are faradaic currents, which follow Faraday's law. As Faraday's law states that the number of moles of a substance, m, produced or consumed during an electrode process is proportional to the electric charge passed through the electrode, the faradaic currents allow analyte concentrations to be determined. Whether the analyte is reduced or oxidized depends on the analyte and the potential applied, but its reaction always occurs at the working/indicator electrode. Therefore, the working electrode potential varies as a function of the analyte concentration. A second, auxiliary electrode, called the counter electrode, completes the electric circuit. A third reference electrode provides a constant, baseline potential reading for the other two electrode potentials to be compared to. In case of microelectrodes with small dimensions, the counter electrode and the reference electrode can be combined, as the current generated and flowing through the combined electrode will be too small to affect the potential at the reference.
Three electrode system
Voltammetry experiments investigate the half-cell reactivity of an analyte by recording current as a function of applied potential.
These curves I = f(E) are called voltammograms.
The potential is varied arbitrarily, either step by step or continuously, and the resulting current value is measured as the dependent variable.
The opposite, i.e. controlling the current while measuring the potential (chronopotentiometry), is also possible but not common.
The shape of the curves depends on the speed of potential variation (nature of driving force) and whether the solution is stirred or quiescent (mass transfer).
To conduct such an experiment, at least two electrodes are required. The working electrode, which makes contact with the analyte, must apply the desired potential in a controlled way and facilitate the transfer of charge to and from the analyte. A second electrode acts as the other half of the cell. This second electrode must have a known potential against which to gauge the potential of the working electrode; furthermore, it must balance the charge added or removed by the working electrode. While this is a viable setup, it has a number of shortcomings. Most significantly, it is extremely difficult for an electrode to maintain a constant potential while passing current to counter redox events at the working electrode.
To solve this problem, the roles of supplying electrons and providing a reference potential are divided between two separate electrodes. The reference electrode is a half cell with a known reduction potential. Its only role is to act as reference for measuring and controlling the working electrode's potential and it does not pass any current. The auxiliary electrode passes the current required to balance the observed current at the working electrode. To achieve this current, the auxiliary will often swing to extreme potentials at the edges of the solvent window, where it oxidizes or reduces the solvent or supporting electrolyte. These electrodes, the working, reference, and auxiliary make up the modern three-electrode system.
There are many systems which have more electrodes, but their design principles are similar to the three-electrode system. For example, the rotating ring-disk electrode has two distinct and separate working electrodes, a disk, and a ring, which can be used to scan or hold potentials independently of each other. Both of these electrodes are balanced by a single reference and auxiliary combination for an overall four-electrode design. More complicated experiments may add working electrodes, reference, or auxiliary electrodes as required.
In practice it can be important to have a working electrode with known dimensions and surface characteristics. As a result, it is common to clean and polish working electrodes regularly. The auxiliary electrode can be almost anything as long as it doesn't react with the bulk of the analyte solution and conducts well. A common voltammetry method, polarography, uses mercury as a working electrode (e.g. the DME and HMDE) and as an auxiliary electrode. The reference is the most complex of the three electrodes; there are a variety of standards used. For non-aqueous work, IUPAC recommends the use of the ferrocene/ferrocenium couple as an internal standard. In most voltammetry experiments, a bulk electrolyte (also known as a supporting electrolyte) is used to minimize solution resistance. It is possible to run an experiment without a bulk electrolyte, but the added resistance greatly reduces the accuracy of the results. With room temperature ionic liquids, the solvent can act as the electrolyte. The supporting electrolyte also minimises migration of the charged analyte in the electric field and ensures that transport of the analyte to the electrode is diffusion-controlled.
Voltammograms
A voltammogram (see linear sweep voltammetry) is a plot of the current of an electrochemical cell as a function of the applied potential. This graph is used to determine the concentration and the standard potential of the analyte. To determine the concentration, values such as the limiting or peak current are read from the graph and applied to various mathematical models. Once the concentration is known, the standard potential can be identified using the Nernst equation.
There are three main shapes for voltammograms. The first shape depends on the diffusion layer. If the analyte is continuously stirred, the diffusion layer keeps a constant width and produces a voltammogram that reaches a constant current: the current rises from the background residual current to the limiting current (il). If the mixture is not stirred, the width of the diffusion layer grows with time and the current passes through a maximum; this peak current (ip) is the highest point on the graph. The third common shape arises when the change in current, rather than the current itself, is plotted; a maximum is still observed, but it represents the maximum change in current (ip).
Mathematical models
To determine analyte concentrations, mathematical models are required to link the applied potential and current measured over time. The Nernst equation relates electrochemical cell potential to the concentration ratio of the reduced and oxidized species in a logarithmic relationship. The Nernst equation is as follows:
$E = E^{0} - \frac{RT}{zF}\ln Q$
Where:
E: reduction potential
E0: standard potential
R: universal gas constant
T: temperature in kelvin
z: ion charge (moles of electrons)
F: Faraday constant
Q: reaction quotient
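As a numerical illustration (not part of the original article; the example values below are hypothetical), a short Python sketch evaluates the Nernst potential for a given reaction quotient:

```python
from math import log

R = 8.314      # universal gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol

def nernst_potential(e_standard, z, q, temperature=298.15):
    """Reduction potential from the Nernst equation, E = E0 - (RT/zF) ln Q."""
    return e_standard - (R * temperature) / (z * F) * log(q)

# One-electron couple with E0 = 0.77 V and a tenfold excess of reduced species:
print(nernst_potential(0.77, z=1, q=10.0))  # about 0.71 V
```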
This equation describes how changes in the applied potential alter the concentration ratio. However, the Nernst equation is limited, as it contains no time component, while voltammetric experiments vary the applied potential as a function of time. Other mathematical models, primarily the Butler–Volmer equation, the Tafel equation, and Fick's law, address the time dependence.
The Butler–Volmer equation relates concentration, potential, and current as a function of time. It describes the non-linear relationship between the electrode-electrolyte voltage difference and the electrical current. It helps make predictions about how the forward and backward redox reactions affect potential and influence the reactivity of the cell. This function includes a rate constant which accounts for the kinetics of the reaction. A compact version of the Butler–Volmer equation is as follows:
$j = j_{0}\left[\exp\!\left(\frac{\alpha_{a} z F \eta}{RT}\right) - \exp\!\left(-\frac{\alpha_{c} z F \eta}{RT}\right)\right]$
Where:
j: electrode current density, A/m2 (defined as j = I/S)
j0: exchange current density, A/m2
E: electrode potential, V
Eeq: equilibrium potential, V
T: absolute temperature, K
z: number of electrons involved in the electrode reaction
F: Faraday constant
R: universal gas constant
αc: so-called cathodic charge transfer coefficient, dimensionless
αa: so-called anodic charge transfer coefficient, dimensionless
η: activation overpotential (defined as η = E − Eeq).
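A small Python sketch of this compact form (a hedged illustration, assuming the common symmetric transfer coefficients of 0.5; the numbers are hypothetical):

```python
from math import exp

R = 8.314      # universal gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol

def butler_volmer(j0, eta, z=1, alpha_a=0.5, alpha_c=0.5, temperature=298.15):
    """Net current density j = j0 [exp(a_a z F eta / RT) - exp(-a_c z F eta / RT)]."""
    f = z * F / (R * temperature)
    return j0 * (exp(alpha_a * f * eta) - exp(-alpha_c * f * eta))

# Exchange current density 1 A/m2 at a 50 mV anodic overpotential:
print(butler_volmer(1.0, 0.05))  # positive, i.e. net anodic, current density
```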
At high overpotentials, the Butler–Volmer equation simplifies to the Tafel equation. The Tafel equation relates the electrochemical currents to the overpotential exponentially, and is used to calculate the reaction rate. The overpotential is calculated at each electrode separately, and related to the voltammogram data to determine reaction rates. The Tafel equation for a single electrode is:
$j = j_{0}\exp\!\left(\pm\frac{\eta}{A}\right)$
Where:
the plus sign under the exponent refers to an anodic reaction, and a minus sign to a cathodic reaction
η: overpotential, V
A: "Tafel slope", V
j: current density, A/m2
j0: "exchange current density", A/m2.
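Inverting the exponential form shown above (as reconstructed here, with the Tafel slope in natural-logarithm convention) recovers the overpotential from a measured current density; a minimal sketch with hypothetical values:

```python
from math import log

def tafel_overpotential(j, j0, a, anodic=True):
    """Invert j = j0 exp(+/- eta / A) for the overpotential: eta = +/- A ln(j / j0)."""
    eta = a * log(j / j0)
    return eta if anodic else -eta

# Current density 100x the exchange current density, Tafel slope 0.05 V:
print(tafel_overpotential(100.0, 1.0, 0.05))  # about 0.23 V
```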
As the redox species are oxidized and reduced at the electrodes, material accumulates at the electrode/electrolyte interface. This accumulation creates a concentration gradient between the interface and the bulk solution. Fick's first law of diffusion is used to relate the diffusion of oxidized and reduced species to the faradaic current that describes the redox processes. Fick's law is most commonly written in terms of moles, and is as follows:
$J = -D\frac{\partial \varphi}{\partial x}$
Where:
J: diffusion flux (in amount of substance per unit area per unit time)
D: diffusion coefficient or diffusivity (in area per unit time)
φ: concentration (in amount of substance per unit volume)
x: position (in length)
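As a rough illustration of how Fick's law ties diffusion to the faradaic current, the sketch below estimates a diffusion-limited current density under the Nernst diffusion-layer approximation (an assumption of this sketch: the surface concentration is taken as zero, and the layer thickness and other values are hypothetical):

```python
F = 96485.0  # Faraday constant, C/mol

def limiting_current_density(z, d, c_bulk, delta):
    """Diffusion-limited current density j = z F D c / delta, obtained from
    Fick's first law with the surface concentration taken as zero and the
    gradient approximated across a diffusion layer of thickness delta."""
    return z * F * d * c_bulk / delta

# z = 1, D = 1e-9 m2/s, c = 1 mol/m3 (1 mM), diffusion layer of 20 um:
print(limiting_current_density(1, 1e-9, 1.0, 20e-6))  # about 4.8 A/m2
```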
Types of voltammetry
History
The beginning of voltammetry was facilitated by the discovery of polarography in 1922 by the Nobel Prize–winning Czech chemist Jaroslav Heyrovský. Early voltammetric techniques had many problems, limiting their viability for everyday use in analytical chemistry. In polarography, these problems included the fact that mercury is oxidized at potentials more positive than +0.2 V, making it harder to analyze results for analytes in the positive region of potential. Another problem was the residual current obtained from the charging of the large capacitance of the electrode surface. When Heyrovský first recorded the dependence of the current flowing through the dropping mercury electrode on the applied potential in 1922, he took point-by-point measurements and plotted a current–voltage curve. This is considered to be the first polarogram. To facilitate this process, he constructed with M. Shikata what is now known as a polarograph, which enabled him to record the same curve photographically in a matter of hours. He recognized the importance of the potential and its control, and also recognized the opportunities of measuring the limiting currents. He was also an important part of the introduction of the dropping mercury electrode as a modern-day tool.
In 1942, the English electrochemist Archie Hickling (University of Leicester) built the first three-electrode potentiostat, an advancement for the field of electrochemistry. He used this potentiostat to control the voltage of an electrode. Meanwhile, in the late 1940s, the American biophysicist Kenneth Stewart Cole invented an electronic circuit which he called a voltage clamp. The voltage clamp was used to analyze the ionic conduction in nerves.
The 1960s and 1970s saw many advances in theory and instrumentation, as well as the introduction of computer-aided and computer-controlled systems. Modern polarographic and voltammetric methods on mercury electrodes came about in three sections.
The first section includes the development of the mercury electrodes. The following electrodes were produced: dropping mercury electrode, mercury streaming electrode, hanging mercury drop electrode, static mercury drop electrode, mercury film electrode, mercury amalgam electrodes, mercury microelectrodes, chemically modified mercury electrodes, controlled growth mercury electrodes, and contractible mercury drop electrodes.
There was also an advancement of the measuring techniques used. These measuring techniques include: classical DC polarography, oscillopolarography, Kaloussek's switcher, AC polarography, tast polarography, normal pulse polarography, differential pulse polarography, square-wave voltammetry, cyclic voltammetry, anodic stripping voltammetry, convolution techniques, and elimination methods.
Lastly, there was also an advancement of preconcentration techniques that produced an increase in the sensitivity of the mercury electrodes. This came about through the development of anodic stripping voltammetry, cathodic stripping voltammetry and adsorptive stripping voltammetry.
These advancements improved sensitivity and created new analytical methods, which prompted the industry to respond with the production of cheaper potentiostats, electrodes, and cells that could be effectively used in routine analytical work.
Applications
Voltammetric sensors
A number of voltammetric systems are produced commercially for the determination of species that are of interest in industry and research. These devices are sometimes called electrodes but are actually complete voltammetric cells, which are better referred to as sensors. These sensors can be employed for the analysis of organic and inorganic analytes in various matrices.
The oxygen electrode
The determination of dissolved oxygen in a variety of aqueous environments, such as sea water, blood, sewage, effluents from chemical plants, and soils is of tremendous importance to industry, biomedical and environmental research, and clinical medicine. One of the most common and convenient methods for making such measurements is with the Clark oxygen sensor, which was patented by L.C. Clark, Jr. in 1956.
See also
Current–voltage characteristic
Neopolarogram
References
Further reading
External links
http://new.ametek.com/content-manager/files/PAR/App%20Note%20E-4%20-%20Electrochemical%20Analysis%20Techniques1.pdf
Electroanalytical methods | Voltammetry | [
"Chemistry"
] | 3,075 | [
"Electroanalytical methods",
"Electroanalytical chemistry"
] |
2,430,317 | https://en.wikipedia.org/wiki/Anti-vibration%20compound | An anti-vibration compound is a temperature-resistant mixture of a liquid with fine particles, which is used to reduce oscillations in calender rolls
and to dampen vibrations in fabricated structures like machine beds and housings.
Use
Vibration may limit the performance of a calender or paper machine. It can have numerous sources such as bulk variations in the sheet, bearing problems, or misalignment of the driveshaft. Vibration manifests itself as a high frequency periodic movement of the roll body with an amplitude from less than one to several μm.
When anti-vibration compound is introduced to the center bores of the rolls, vibration is transferred from the solid roll structure to the incompressible fluid component of the anti-vibration compound. Its solid particles are less mobile due to their inertia, so the fluid is forced to oscillate around the solid components. The flow energy is absorbed in micro-eddies, by which the vibration is damped.
The benefits are a smoother running with increased operating speed and production, longer operating times of the polymer covers between re-grindings and improved product quality due to the reduction of barring.
References
Classical mechanics | Anti-vibration compound | [
"Physics"
] | 240 | [
"Mechanics",
"Classical mechanics"
] |
2,430,525 | https://en.wikipedia.org/wiki/Speech%20transmission%20index | Speech Transmission Index (STI) is a measure of speech transmission quality. The absolute measurement of speech intelligibility is a complex science. The STI measures some physical characteristics of a transmission channel (a room, electro-acoustic equipment, telephone line, etc.), and expresses the ability of the channel to carry across the characteristics of a speech signal. STI is a well-established objective measurement predictor of how the characteristics of the transmission channel affect speech intelligibility.
The influence that a transmission channel has on speech intelligibility is dependent on:
the speech level
frequency response of the channel
non-linear distortions
background noise level
quality of the sound reproduction equipment
echoes (reflections with delay > 100 ms)
the reverberation time
psychoacoustic effects (masking effects)
History
The STI was introduced by Tammo Houtgast and Herman Steeneken in 1971, and was accepted by the Acoustical Society of America in 1980. Steeneken and Houtgast decided to develop the Speech Transmission Index because they were tasked to carry out a very lengthy series of tedious speech intelligibility measurements for the Netherlands Armed Forces. Instead, they spent the time developing a much quicker objective method (which was actually the predecessor to the STI).
Houtgast and Steeneken developed the Speech Transmission Index while working at the Netherlands Organisation for Applied Scientific Research (TNO). Their team at TNO kept supporting and developing the STI, improving the model and developing hardware and software for measuring the STI, until 2010. In that year, the TNO research group responsible for the STI spun out of TNO and continued its work as a privately owned company named Embedded Acoustics. Embedded Acoustics now continues to support development of the STI, with Herman Steeneken (now formally retired from TNO) still acting as a senior consultant.
In the early years (until approx. 1985) the use of the STI was largely limited to a relatively small international community of speech researchers. The introduction of the RASTI ("Room Acoustics STI") made the STI method available to a larger population of engineers and consultants, especially when Bruel & Kjaer introduced their RASTI measuring device (which was based on the earlier RASTI system developed by Steeneken and Houtgast at TNO). RASTI was designed to be much faster than the original ("full") STI, taking less than 30 seconds instead of 15 minutes for a measuring point. However, RASTI was only intended (as the name says) for pure room acoustics, not electro-acoustics. Application of RASTI to transmission chains featuring electro-acoustic components (such as loudspeakers and microphones) became fairly common, and led to complaints about inaccurate results. The use of RASTI was even specified by some application standards (such as CAA specification 15 for aircraft cabin PA systems) for applications featuring electro-acoustics, simply because it was the only feasible method at the time. The inadequacies of RASTI were sometimes simply accepted for lack of a better alternative. TNO did produce and sell instruments for measuring full STI and various other STI derivatives, but these devices were relatively expensive, large and heavy.
Around the year 2000, the need for an alternative to RASTI that could also be applied safely to Public Address (PA) systems had become fully apparent. At TNO, Jan Verhave and Herman Steeneken started work on a new STI method, that would later become known as STIPA (STI for Public Address systems). The first device to include STIPA measurements available for sale to the general public was made by Gold-Line. At this time, STIPA measuring instruments are available from various manufacturers.
RASTI was standardized internationally in 1988, in IEC-60268-16. Since then, IEC-60268-16 has been revised three times, with the latest revision (rev. 4) appearing in 2011. Each revision included updates of the STI methodology that had become accepted in the STI research community over time, such as the inclusion of redundancy between adjacent octave bands (rev. 2), level-dependent auditory masking (rev. 3) and various methods for applying the STI to specific populations such as non-natives and the hearing impaired (rev. 4). An IEC maintenance team is currently working on rev. 5.
RASTI was declared obsolete by the IEC in June 2011, with the appearance of rev. 4 of IEC-60268-16. At that time, this simplified STI derivative was still stipulated as a standard method in some industries. STIPA is now seen as the successor to RASTI for almost every application.
Scale
STI is a numeric measure of communication channel characteristics whose value varies from 0 (bad) to 1 (excellent). On this scale, an STI of at least 0.5 is desirable for most applications.
Barnett (1995, 1999) proposed to use a reference scale, the Common Intelligibility Scale (CIS), based on a mathematical relation with STI (CIS = 1 + log (STI)).
STI predicts the likelihood of syllables, words and sentences being comprehended. For native speakers, this likelihood has been tabulated as a function of the STI value.
If non-native speakers, people with speech disorders or hard-of-hearing people are involved, other probabilities hold.
It is interesting but not astonishing that STI prediction is independent of the language spoken – not astonishing, as the ability of the channel to transport patterns of physical speech is measured.
Another method is defined for computing a physical measure that is highly correlated with the intelligibility of speech as evaluated by speech perception tests given a group of talkers and listeners. This measure is called the Speech Intelligibility Index, or SII.
Nominal qualification bands for STI
The IEC 60268-16 ed4 2011 Standard defines a qualification scale in order to provide flexibility for different applications. The values of this alpha-scale run from "U" to "A+".
Standards
STI has gained international acceptance as the quantifier of channel influence on speech intelligibility. The International Electrotechnical Commission standard IEC 60268-16, Objective rating of speech intelligibility by speech transmission index, prepared by the TC 100 Technical Committee, defines the international standard.
Further the following standards have, as part of the requirements to be fulfilled, integrated testing the STI and realisation of a minimal speech transmission index:
International Organization for Standardization (ISO) standard for sound system loudspeakers in Fire detection and fire alarm systems
National Fire Protection Association Alarm Code
British Standards Institution Fire detection and alarm systems for buildings
German Institute for Standardization Sound Systems for Emergency Purposes
STIPA
STIPA (Speech Transmission Index for Public Address Systems) is a version of the STI using a simplified method and test signal. Within the STIPA signal, each octave band is modulated simultaneously with two modulation frequencies. The modulation frequencies are spread among the octave bands in a balanced way, making it possible to obtain a reliable STI measurement based on a sparsely sampled Modulation Transfer Function matrix. Although initially designed for Public Address systems (and similar installations, such as Voice Evacuation Systems and Mass Notification Systems), STIPA can also be used for a variety of other applications. The only situation in which STIPA is currently considered inferior to full STI is in the presence of strong echoes.
A single STIPA measurement generally takes between 15 and 25 seconds, combining the speed of RASTI with (nearly) the wide scope of applicability and reliability of full STI.
Since STIPA has become widely available, and given the fact that RASTI has several disadvantages and no benefits over STIPA, RASTI is now considered obsolete.
Although the STIPA test signal does not resemble speech to the human ear, in terms of frequency content as well as intensity fluctuations it is a signal with speech-like characteristics.
Speech can be described as noise that is intensity-modulated by low-frequency signals. The STIPA signal contains such intensity modulations at 14 different modulation frequencies, spread across 7 octave bands. At the receiving end of the communication system, the depth of modulation of the received signal is measured and compared with that of the test signal in each of a number of frequency bands. Reductions in the modulation depth are associated with loss of intelligibility.
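As a sketch of this modulation-to-intelligibility mapping, the following Python function applies the classic conversion from a measured modulation transfer value to a transmission index (a simplification: the octave-band weighting and redundancy corrections of IEC 60268-16 are omitted here):

```python
from math import log10

def transmission_index(m):
    """Map a measured modulation transfer value m (0..1) to a transmission
    index: the apparent signal-to-noise ratio 10 log10(m / (1 - m)) is
    clipped to +/-15 dB and scaled linearly onto 0..1."""
    m = min(max(m, 1e-6), 1.0 - 1e-6)  # keep the ratio finite
    snr = 10.0 * log10(m / (1.0 - m))
    snr = max(-15.0, min(15.0, snr))
    return (snr + 15.0) / 30.0

# A modulation depth reduced to 0.5 gives 0 dB apparent SNR, hence TI = 0.5:
print(transmission_index(0.5))
```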
Indirect method
An alternative impulse-response method, also known as the "indirect method," assumes that the channel is linear and requires stricter synchronization of the sound source to the measurement instrument. The main benefit of the indirect method over the direct method (based on modulated test signals) is that the full MTF matrix is measured, covering all relevant modulation frequencies in all octave bands. In very large spaces (such as cathedrals), where echoes are likely to occur, the indirect method is usually preferred over the direct method (e.g. using modulated STIPA signals). In general, the indirect method is often the best option when studying speech intelligibility based on "pure room acoustics," when no electro-acoustic components are present within the transmission path.
However, the requirement that the channel must be linear implies that the indirect method cannot be used reliably in many real-life applications: whenever the transmission chain features components that might exhibit non-linear behaviour (such as loudspeakers), indirect measurements may yield incorrect results. Also, depending on the type of impulse response measurement that is used, the influence of background noise present during measurements may not be dealt with correctly. This means that the indirect method should only be used with great care when measuring Public Address systems and Voice Evacuation systems. IEC-60268-16 rev. 4 does not disallow the indirect method for such applications, but issues the following words of warning: "Critical analysis is therefore required of how the impulse response is obtained and potentially influenced by non-linearities in the transmission system, particularly as in practice, system components can be operated at the limits of their performance range." In practice, verification of the validity of the linearity assumption is often too complex for everyday use, making the (direct) STIPA method the preferred method whenever loudspeakers are involved.
Although many measuring tools based on the indirect method offer STIPA as well as "full STI" options, the sparse Modulation Transfer Function matrix inherent to STIPA offers no advantages when using the indirect method. Impulse response based STIPA measurements must not be confused with direct STIPA measurements, as the validity of the result still depends on whether or not the channel is linear.
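The modulation transfer values underlying the indirect method are conventionally obtained from the squared impulse response via Schroeder's relation; below is a minimal sketch (assumptions: a clean, noise-free measurement, so the background-noise correction term is omitted, and a synthetic decaying exponential stands in for a measured room response):

```python
import numpy as np

def mtf_from_impulse_response(h, fs, f_mod):
    """Modulation transfer value at modulation frequency f_mod (Hz) from an
    impulse response h sampled at fs (Hz), using Schroeder's relation:
        m(F) = | sum h^2(t) exp(-j 2 pi F t) | / sum h^2(t)
    Background-noise corrections are omitted in this sketch."""
    t = np.arange(len(h)) / fs
    h2 = np.asarray(h, dtype=float) ** 2
    return np.abs(np.sum(h2 * np.exp(-2j * np.pi * f_mod * t))) / np.sum(h2)

# Synthetic exponentially decaying "room" response, 1 s long at 8 kHz:
fs = 8000
h = np.exp(-np.arange(fs) / (0.2 * fs))
print(mtf_from_impulse_response(h, fs, 2.0))  # modulation depth at 2 Hz
```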
List of manufacturers of STI measuring instruments
STI measuring instruments are (and have been) made by various manufacturers. Below is a list of brands under which STI measuring instruments have been sold, in alphabetical order.
Audio Precision. Offers an STI plug-in option for use with APx500 Series audio analyzers.
Audiomatica. Offers an STI (including STIPA) tool in the CLIO 11 system that is compliant with the latest version of the standard (IEC-60268-16 rev. 4). The CLIO 12 system is capable of both indirect STI/STIPA and direct STIPA measurements.
Bedrock Audio. This is the brand under which Embedded Acoustics sells their STIPA hardware, such as the SM50.
Brüel & Kjær. Offers handheld as well as software-based solutions.
Gold Line. First to offer STIPA measuring solutions (DSP2 and DSP30), but currently not offering any tools that comply with the latest standards (IEC-60268-16 rev. 4).
HEAD acoustics. Offers STI options (including STIPA, STITEL, and RASTI) for both the Artemis Suite and ACQUA test systems.
Ivie. Offers STIPA-capable acoustic measuring tools such as the IE-45.
Norsonic. Norsonic was early to adopt STIPA and offer STIPA modules on their instruments (Nor-140). Sold by Scantek, Inc. in Columbia, Maryland.
NTi Audio. Offers STIPA modules with their AL1 and XL2 line of acoustic measuring instruments as well as a Talkbox and other peripherals. Apparent market leader at this moment (2013).
Quest. Now part of 3M, Quest produces tools such as the Quest Verifier.
Svantek. Offers an STI (including STIPA) measurement solution with their more advanced sound level meters.
TNO. Not currently marketing any products, but sold (among others) the STIDAS series of measuring instruments before.
The market for STI measuring solutions is still developing, so the above list is subject to change as manufacturers enter or leave the market. The list does not include software producers that produce STI-capable acoustic measuring and simulation software. Mobile apps for STIPA measurements (such as the ones sold by Studio Six Digital and Embedded Acoustics) are also excluded from the list.
See also
Mean opinion score
References
Jacob, K., McManus, S., Verhave, J.A., and Steeneken, H., (2002) "Development of an Accurate, Handheld, Simple-to-use Meter for the Prediction of Speech Intelligibility", Past, Present, and Future of the Speech Transmission Index, International Symposium on STI
External links
Intelligibility Conversion: %ALcons = Articulation Loss of Consonants in % to STI = Speech Transmission Index and vice versa
Background information on the STI and links to STI resources
Speech Intelligibility Papers IV
Communication
Hearing
Human voice
Sound
Waves | Speech transmission index | [
"Physics"
] | 2,826 | [
"Waves",
"Physical phenomena",
"Motion (physics)"
] |
2,431,002 | https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Straus%20conjecture | The Erdős–Straus conjecture is an unproven statement in number theory. The conjecture is that, for every integer n that is greater than or equal to 2, there exist positive integers x, y, and z for which
$\frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z}.$
In other words, the number 4/n can be written as a sum of three positive unit fractions.
The conjecture is named after Paul Erdős and Ernst G. Straus, who formulated it in 1948, but it is connected to much more ancient mathematics; sums of unit fractions, like the one in this problem, are known as Egyptian fractions, because of their use in ancient Egyptian mathematics. The Erdős–Straus conjecture is one of many conjectures by Erdős, and one of many unsolved problems in mathematics concerning Diophantine equations.
Although a solution is not known for all values of , infinitely many values in certain infinite arithmetic progressions have simple formulas for their solution, and skipping these known values can speed up searches for counterexamples. Additionally, these searches need only consider values of that are prime numbers, because any composite counterexample would have a smaller counterexample among its prime factors. Computer searches have verified the truth of the conjecture up to .
If the conjecture is reframed to allow negative unit fractions, then it is known to be true. Generalizations of the conjecture to fractions with numerator 5 or larger have also been studied.
Background and history
When a rational number is expanded into a sum of unit fractions, the expansion is called an Egyptian fraction. This way of writing fractions dates to the mathematics of ancient Egypt, in which fractions were written this way instead of in the more modern vulgar fraction form with a numerator and denominator. The Egyptians produced tables of Egyptian fractions for unit fractions multiplied by two, the numbers that in modern notation would be written 2/n, such as the Rhind Mathematical Papyrus table; in these tables, most of these expansions use either two or three terms. These tables were needed, because the obvious expansion $\frac{2}{n}=\frac{1}{n}+\frac{1}{n}$ was not allowed: the Egyptians required all of the fractions in an Egyptian fraction to be different from each other. This same requirement, that all fractions be different, is sometimes imposed in the Erdős–Straus conjecture, but it makes no significant difference to the problem, because any solution to $\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$ where the unit fractions are not distinct can be converted into a solution where they are all distinct; see below.
Although the Egyptians did not always find expansions using as few terms as possible, later mathematicians have been interested in the question of how few terms are needed. Every fraction k/n has an expansion of at most k terms, so in particular 2/n needs at most two terms, 3/n needs at most three terms, and 4/n needs at most four terms. For numerator 2, two terms are sometimes needed (and never more), and for numerator 3, three terms are sometimes needed, so for both of these numerators, the maximum number of terms that might be needed is known. However, for numerator 4, it is unknown whether four terms are sometimes needed, or whether it is possible to express all fractions of the form 4/n using only three unit fractions; this is the Erdős–Straus conjecture. Thus, the conjecture covers the first unknown case of a more general question, the problem of finding for all k the maximum number of terms needed in expansions for fractions k/n.
One way to find short (but not always shortest) expansions uses the greedy algorithm for Egyptian fractions, first described in 1202 by Fibonacci in his book Liber Abaci. This method chooses one unit fraction at a time, at each step choosing the largest possible unit fraction that would not cause the expanded sum to exceed the target number. After each step, the numerator of the fraction that still remains to be expanded decreases, so the total number of steps can never exceed the starting numerator, but sometimes it is smaller. For example, when it is applied to 3/n, the greedy algorithm will use two terms whenever n is 2 modulo 3; but there exists a two-term expansion whenever n has a factor that is 2 modulo 3, a weaker condition. For numbers of the form 4/n, the greedy algorithm will produce a four-term expansion only when n is 1 modulo 4, and an expansion with fewer terms otherwise. Thus, another way of rephrasing the Erdős–Straus conjecture asks whether there exists another method for producing Egyptian fractions, using a smaller maximum number of terms for the numbers 4/n.
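A minimal Python sketch of the greedy algorithm just described (the function name is ours):

```python
from fractions import Fraction
from math import ceil

def greedy_egyptian(numerator, denominator):
    """Fibonacci's greedy algorithm: repeatedly subtract the largest unit
    fraction that does not exceed the remainder, returning the denominators."""
    remainder = Fraction(numerator, denominator)
    denominators = []
    while remainder > 0:
        d = ceil(1 / remainder)  # smallest d with 1/d <= remainder
        denominators.append(d)
        remainder -= Fraction(1, d)
    return denominators

print(greedy_egyptian(4, 5))   # [2, 4, 20]: 4/5 = 1/2 + 1/4 + 1/20
print(greedy_egyptian(4, 17))  # four terms, as 17 is 1 modulo 4
```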
The Erdős–Straus conjecture was formulated in 1948 by Paul Erdős and Ernst G. Straus, and published by Erdős in 1950. Richard Obláth also published an early work on the conjecture, a paper written in 1948 and published in 1950, in which he extended earlier calculations of Straus and Harold N. Shapiro in order to verify the conjecture for all n up to a certain bound.
Formulation
The conjecture states that, for every integer n ≥ 2, there exist positive integers x, y, and z such that
$\frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z}.$
For instance, for n = 5, there are two solutions:
$\frac{4}{5} = \frac{1}{2} + \frac{1}{4} + \frac{1}{20} = \frac{1}{2} + \frac{1}{5} + \frac{1}{10}.$
Multiplying both sides of the equation by nxyz leads to the equivalent polynomial form $4xyz = n(xy + xz + yz)$ for the problem.
Distinct unit fractions
Some researchers additionally require that the integers x, y, and z be distinct from each other, as the Egyptians would have, while others allow them to be equal. For n ≥ 3, it does not matter whether they are required to be distinct: if there exists a solution with any three integers, then there exists a solution with distinct integers. This is because two identical unit fractions can be replaced through one of the following two expansions:
$\frac{1}{x}+\frac{1}{x}=\frac{1}{x/2}$ (even x), or $\frac{1}{x}+\frac{1}{x}=\frac{1}{(x+1)/2}+\frac{1}{x(x+1)/2}$ (odd x)
(according to whether the repeated fraction has an even or odd denominator) and this replacement can be repeated until no duplicate fractions remain. For n = 2, however, the only solutions are permutations of $\frac{4}{2}=\frac{1}{1}+\frac{1}{2}+\frac{1}{2}$.
Negative-number solutions
The Erdős–Straus conjecture requires that all three of x, y, and z be positive. This requirement is essential to the difficulty of the problem. Even without this relaxation, the Erdős–Straus conjecture is difficult only for odd values of n, and if negative values were allowed then the problem could be solved for every odd n by the following formula:
$\frac{4}{n} = \frac{1}{(n-1)/2} + \frac{1}{(n+1)/2} - \frac{1}{n(n^{2}-1)/4}.$
Computational results
If the conjecture is false, it could be proven false simply by finding a number that has no three-term representation. In order to check this, various authors have performed brute-force searches for counterexamples to the conjecture. Searches of this type have confirmed that the conjecture is true for all up to .
In such searches, it is only necessary to look for expansions for numbers 4/n where n is a prime number. This is because, whenever 4/n has a three-term expansion, so does 4/(mn) for all positive integers m. To find a solution for 4/(mn), just divide all of the unit fractions in the solution for 4/n by m:
$\frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z} \implies \frac{4}{mn} = \frac{1}{mx} + \frac{1}{my} + \frac{1}{mz}.$
If a composite number n were a counterexample to the conjecture, then every prime factor of n would also provide a counterexample, which would have been found earlier by the brute-force search. Therefore, checking the existence of a solution for composite numbers is redundant, and can be skipped by the search. Additionally, the known modular identities for the conjecture (see below) can speed these searches by skipping over other values known to have a solution. For instance, the greedy algorithm finds an expansion with three or fewer terms for every number 4/n where n is not 1 modulo 4, so the searches only need to test values of n that are 1 modulo 4. One way to make progress on this problem is to collect more modular identities, allowing computer searches to reach higher limits with fewer tests.
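A brute-force search of the kind described here can be sketched in a few lines of Python, with the bounds on x following from the ordering x ≤ y ≤ z (the function name is ours):

```python
from fractions import Fraction
from math import ceil, floor

def three_term_expansion(n):
    """Search for positive integers x <= y <= z with 4/n = 1/x + 1/y + 1/z.
    Returns one such triple, or None if none exists (a counterexample)."""
    target = Fraction(4, n)
    for x in range(n // 4 + 1, 3 * n // 4 + 1):  # target/3 <= 1/x < target
        r1 = target - Fraction(1, x)
        if r1 <= 0:
            continue
        for y in range(max(x, ceil(1 / r1)), floor(2 / r1) + 1):
            r2 = r1 - Fraction(1, y)
            if r2 > 0 and r2.numerator == 1:
                return (x, y, r2.denominator)
    return None

print(three_term_expansion(5))    # for example (2, 4, 20)
print(three_term_expansion(193))  # 193 is 1 modulo 24, the hard residue class
```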
The number of distinct solutions to the problem, as a function of , has also been found by computer searches for small and appears to grow somewhat irregularly with . Starting with , the numbers of distinct solutions with distinct denominators are
Even for larger there can sometimes be relatively few solutions; for instance there are only seven distinct solutions for .
Theoretical results
In the form $4xyz = n(xy + xz + yz)$, a polynomial equation with integer variables, the Erdős–Straus conjecture is an example of a Diophantine equation. The Hasse principle for Diophantine equations suggests that these equations should be studied using modular arithmetic. If a polynomial equation has a solution in the integers, then taking this solution modulo m, for any integer m, provides a solution in modulo-m arithmetic. In the other direction, if an equation has a solution modulo m for every prime power m, then in some cases it is possible to piece together these modular solutions, using methods related to the Chinese remainder theorem, to get a solution in the integers. The power of the Hasse principle to solve some problems is limited by the Manin obstruction, but for the Erdős–Straus conjecture this obstruction does not exist.
On the face of it this principle makes little sense for the Erdős–Straus conjecture. For every n, the equation is easily solvable modulo any prime, or prime power, but there appears to be no way to piece those solutions together to get a positive integer solution to the equation. Nevertheless, modular arithmetic, and identities based on modular arithmetic, have proven a very important tool in the study of the conjecture.
Modular identities
For values of n satisfying certain congruence relations, one can find an expansion for 4/n automatically as an instance of a polynomial identity. For instance, whenever n is 2 modulo 3, 4/n has the expansion
$\frac{4}{n} = \frac{1}{n} + \frac{1}{(n+1)/3} + \frac{1}{n(n+1)/3}.$
Here each of the three denominators n, (n+1)/3, and n(n+1)/3 is a polynomial of n, and each is an integer whenever n is 2 modulo 3. The greedy algorithm for Egyptian fractions finds a solution in three or fewer terms whenever n is not 1 or 17 mod 24, and the 17 mod 24 case is covered by the 2 mod 3 relation, so the only values of n for which these two methods do not find expansions in three or fewer terms are those congruent to 1 mod 24.
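A quick mechanical check of this identity (as reconstructed above), using exact rational arithmetic:

```python
from fractions import Fraction

# Check 4/n = 1/n + 1/((n+1)/3) + 1/(n(n+1)/3) for n = 2 (mod 3):
for n in range(2, 1000, 3):
    a, b = (n + 1) // 3, n * (n + 1) // 3
    assert Fraction(4, n) == Fraction(1, n) + Fraction(1, a) + Fraction(1, b)
print("identity holds for all tested n congruent to 2 mod 3")
```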
Polynomial identities listed by provide three-term Egyptian fractions for whenever is one of:
2 mod 3 (above),
3 mod 4,
2 or 3 mod 5,
3, 5, or 6 mod 7, or
5 mod 8.
Combinations of Mordell's identities can be used to expand 4/n for all n except possibly those that are 1, 121, 169, 289, 361, or 529 mod 840. The smallest prime that these identities do not cover is 1009. By combining larger classes of modular identities, Webb and others showed that the natural density of potential counterexamples to the conjecture is zero: as a parameter N goes to infinity, the fraction of values in the interval [1, N] that could be counterexamples tends to zero in the limit.
Nonexistence of identities
If it were possible to find solutions such as the ones above for enough different moduli, forming a complete covering system of congruences, the problem would be solved. However, as Mordell showed, a polynomial identity that provides a solution for values of n congruent to r mod m can exist only when r is not congruent to a square modulo m. (More formally, this kind of identity can exist only when r is not a quadratic residue modulo m.) For instance, 2 is a non-square mod 3, so Mordell's result allows the existence of an identity for n congruent to 2 mod 3. However, 1 is a square mod 3 (equal to the square of both 1 and 2 mod 3), so there can be no similar identity for all values of n that are congruent to 1 mod 3. More generally, as 1 is a square mod m for all m, there can be no complete covering system of modular identities for all n, because 1 will always be uncovered.
Despite Mordell's result limiting the form of modular identities for this problem, there is still some hope of using modular identities to prove the Erdős–Straus conjecture. No prime number can be a square, so by the Hasse–Minkowski theorem, whenever p is prime, there exists a larger prime q such that p is not a quadratic residue modulo q. One possible approach to proving the conjecture would be to find for each prime p a larger prime q and a congruence solving the 4/n problem for n congruent to p mod q. If this could be done, no prime p could be a counterexample to the conjecture and the conjecture would be true.
The number of solutions
Elsholtz and Tao showed that the average number of solutions to the 4/n problem (averaged over the prime numbers up to N) is upper bounded polylogarithmically in N. For some other Diophantine problems, the existence of a solution can be demonstrated through asymptotic lower bounds on the number of solutions, but this works best when the number of solutions grows at least polynomially, so the slower growth rate of Elsholtz and Tao's result makes a proof of this type less likely. Elsholtz and Tao classify solutions according to whether one or two of x, y, or z is divisible by n; for prime n, these are the only possibilities, although (on average) most solutions for composite n are of other types. Their proof uses the Bombieri–Vinogradov theorem, the Brun–Titchmarsh theorem, and a system of modular identities parameterized by pairs of coprime positive integers and odd factors of a related product. For instance, one particular choice of these parameters gives one of Mordell's identities, valid when n is 3 mod 4.
Generalizations
As with fractions of the form 4/n, it has been conjectured that every fraction 5/n (for n ≥ 2) can be expressed as a sum of three positive unit fractions. A generalized version of the conjecture states that, for any positive integer k, all but finitely many fractions k/n can be expressed as a sum of three positive unit fractions. The conjecture for fractions 5/n was made by Wacław Sierpiński in a 1956 paper, which went on to credit the full conjecture to Sierpiński's student Andrzej Schinzel.
Even if the generalized conjecture is false for some fixed value of k, the number of fractions k/n with n in the range from 1 to N that do not have three-term expansions must grow only sublinearly as a function of N. In particular, if the Erdős–Straus conjecture itself (the case k = 4) is false, then the number of counterexamples grows only sublinearly. Even more strongly, for any fixed k, only a sublinear number of values of n need more than two terms in their Egyptian fraction expansions. The generalized version of the conjecture is equivalent to the statement that the number of unexpandable fractions is not just sublinear but bounded.
When n is an odd number, by analogy to the problem of odd greedy expansions for Egyptian fractions, one may ask for solutions to $\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$ in which x, y, and z are distinct positive odd numbers. Solutions to this equation are known to always exist in certain cases.
See also
List of sums of reciprocals
Notes
References
Conjectures
Unsolved problems in number theory
Egyptian fractions
Diophantine equations
Straus conjecture | Erdős–Straus conjecture | [
"Mathematics"
] | 3,062 | [
"Unsolved problems in mathematics",
"Mathematical objects",
"Equations",
"Unsolved problems in number theory",
"Diophantine equations",
"Conjectures",
"Mathematical problems",
"Number theory"
] |
2,431,128 | https://en.wikipedia.org/wiki/Theorem%20on%20friends%20and%20strangers | The theorem on friends and strangers is a mathematical theorem in an area of mathematics called Ramsey theory.
Statement
Suppose a party has six people. Consider any two of them. They might be meeting for the first time—in which case we will call them mutual strangers; or they might have met before—in which case we will call them mutual acquaintances. The theorem says:
In any party of six people, at least three of them are (pairwise) mutual strangers or mutual acquaintances.
Conversion to a graph-theoretic setting
A proof of the theorem requires nothing but a three-step logic. It is convenient to phrase the problem in graph-theoretic language.
Suppose a graph has 6 vertices and every pair of (distinct) vertices is joined by an edge. Such a graph is called a complete graph (because there cannot be any more edges). A complete graph on n vertices is denoted by the symbol Kn.
Now take a K6. It has 15 edges in all. Let the 6 vertices stand for the 6 people in our party. Let the edges be coloured red or blue depending on whether the two people represented by the vertices connected by the edge are mutual strangers or mutual acquaintances, respectively. The theorem now asserts:
No matter how you colour the 15 edges of a K6 with red and blue, you cannot avoid having either a red triangle—that is, a triangle all of whose three sides are red, representing three pairs of mutual strangers—or a blue triangle, representing three pairs of mutual acquaintances. In other words, whatever colours you use, there will always be at least one monochromatic triangle (that is, a triangle all of whose edges have the same colour).
Proof
Choose any one vertex; call it P. There are five edges leaving P. They are each coloured red or blue. The pigeonhole principle says that at least three of them must be of the same colour; for if there are fewer than three of one colour, say red, then there are at least three that are blue.
Let A, B, C be the other ends of these three edges, all of the same colour, say blue. If any one of AB, BC, CA is blue, then that edge together with the two edges from P to the edge's endpoints forms a blue triangle. If none of AB, BC, CA is blue, then all three edges are red and we have a red triangle, namely, ABC.
Ramsey's paper
The utter simplicity of this argument, which so powerfully produces a very interesting conclusion, is what makes the theorem appealing. In 1930, in a paper entitled 'On a Problem of Formal Logic,' Frank P. Ramsey proved a very general theorem (now known as Ramsey's theorem) of which this theorem is a simple case. This theorem of Ramsey forms the foundation of the area known as Ramsey theory in combinatorics.
Boundaries to the theorem
The conclusion to the theorem does not hold if we replace the party of six people by a party of less than six. To show this, we give a coloring of K5 with red and blue that does not contain a triangle with all edges the same color. We draw K5 as a pentagon surrounding a star (a pentagram). We color the edges of the pentagon red and the edges of the star blue.
Thus, 6 is the smallest number for which we can claim the conclusion of the theorem. In Ramsey theory, we write this fact as:
R(3, 3) = 6.
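The claim R(3, 3) = 6 is small enough to verify by exhaustive search over all 2^15 edge colourings of K6 (and all 2^10 colourings of K5); a Python sketch:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j) with i < j to colour 0 or 1."""
    return any(
        coloring[(a, b)] == coloring[(b, c)] == coloring[(a, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_forced(n):
    """True if every red/blue colouring of K_n has a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colours)))
        for colours in product((0, 1), repeat=len(edges))
    )

print(every_coloring_forced(5))  # False: e.g. red pentagon plus blue pentagram
print(every_coloring_forced(6))  # True: the theorem on friends and strangers
```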
References
V. Krishnamurthy. Culture, Excitement and Relevance of Mathematics, Wiley Eastern, 1990. .
External links
Party Acquaintances at cut-the-knot (requires Java)
Ramsey theory
Theorems in discrete mathematics
Articles containing proofs | Theorem on friends and strangers | [
"Mathematics"
] | 742 | [
"Discrete mathematics",
"Mathematical theorems",
"Theorems in discrete mathematics",
"Combinatorics",
"Articles containing proofs",
"Mathematical problems",
"Ramsey theory"
] |
2,431,881 | https://en.wikipedia.org/wiki/Foaming%20agent | A foaming agent is a material such as a surfactant or a blowing agent that facilitates the formation of foam. A surfactant, when present in small amounts, reduces surface tension of a liquid (reduces the work needed to create the foam) or increases its colloidal stability by inhibiting coalescence of bubbles. A blowing agent is a gas that forms the gaseous part of the foam.
Surfactants
Sodium laureth sulfate, or sodium lauryl ether sulfate (SLES), is a detergent and surfactant found in many personal care products (soaps, shampoos, toothpastes, etc.). It is an inexpensive and effective foamer. Sodium lauryl sulfate (also known as sodium dodecyl sulfate or SDS) and ammonium lauryl sulfate (ALS) are commonly used alternatives to SLES in consumer products.
Co-surfactants
Surfactants that are less effective at foam production may have additional co-surfactants added to increase foaming, in which case the co-surfactant is referred to as the foaming agent. These are surfactants used in lower concentration in a detergent system than the primary surfactant, often from the cocamide family of surfactants. Cocamide foaming agents include the nonionic cocamide DEA and cocamidopropylamine oxide, and the zwitterionic cocamidopropyl betaine and cocamidopropyl hydroxysultaine.
Blowing agents
There are two main types of blowing agents: gases at the temperature that the foam is formed, and gases generated by chemical reaction. Carbon dioxide, pentane, and chlorofluorocarbons are examples of the former. Blowing agents that produce gas via chemical reactions include baking powder, azodicarbonamide, titanium hydride, and isocyanates (when they react with water).
See also
Antifoaming agent
Sodium coceth sulfate
Sodium lauryl sulfate
Sodium bicarbonate
Sodium laureth sulfate
Surfactants
References
Surfactants
Foams
Building materials | Foaming agent | [
"Physics",
"Chemistry",
"Engineering"
] | 433 | [
"Building engineering",
"Foams",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
2,431,954 | https://en.wikipedia.org/wiki/Residual%20gas%20analyzer | A residual gas analyzer (RGA) is a small and usually rugged mass spectrometer, typically designed for process control and contamination monitoring in vacuum systems. When constructed as a quadrupole mass analyzer, there exist two implementations, utilizing either an open ion source (OIS) or a closed ion source (CIS). RGAs may be found in high vacuum applications such as research chambers, surface science setups, accelerators, scanning microscopes, etc. RGAs are used in most cases to monitor the quality of the vacuum and easily detect minute traces of impurities in the low-pressure gas environment. These impurities can be measured down to Torr levels, possessing sub-ppm detectability in the absence of background interferences.
RGAs would also be used as sensitive in-situ leak detectors commonly using helium, isopropyl alcohol or other tracer molecules. With vacuum systems pumped down to lower than Torr—checking of the integrity of the vacuum seals and the quality of the vacuum—air leaks, virtual leaks and other contaminants at low levels may be detected before a process is initiated.
Open ion source
OIS is the most widely available type of RGA. A quadrupole residual gas analyzer ionizes a sample of the gas and sorts the resulting ions by their mass-to-charge ratio as they pass through the quadrupole, inferring partial pressures from the measured ion currents. Cylindrical and axially symmetrical, this kind of ionizer has been around since the early 1950s. The OIS type is usually mounted directly to the vacuum chamber, exposing the filament wire and anode wire cage to the surrounding vacuum chamber, allowing all molecules in the vacuum chamber to move easily through the ion source. It has a maximum operating pressure of Torr and a minimum detectable partial pressure as low as Torr when used in tandem with an electron multiplier.
OIS RGAs measure residual gas levels without affecting the gas composition of their vacuum environment, though there are performance limitations which include:
Outgassing of water from the chamber, from the OIS electrodes and most varieties of 300-series stainless steel used in the surrounding vacuum chamber due to the high temperatures of the hot-cathode source (> 1300 °C).
Electron Stimulated Desorption (ESD) is noted by peaks observed at 12, 16, 19 and 35 u rather than by electron-impact ionization of gaseous species, with the effects similar to outgassing effects. This is frequently counteracted by gold-plating the ionizer which in turn reduces the adsorption of many gases. Using platinum-clad molybdenum ionizers is an alternative.
Closed ion source
With applications requiring measurement of pressures between and Torr, the problem of ambient and process gases can be significantly reduced by replacing the OIS configuration with a CIS sampling system. Such an ionizer sits on top of the quadrupole mass filter and consists of a short, gas-tight tube with two openings for the entrance of electrons and exit of ions. The ions are formed close to a single extraction plate and exit the ionizer. Electrically insulated alumina rings seal the tube and the biased electrodes from the rest of the quadrupole mass assembly. The ions are produced by electron impact directly at the process pressure. Such a design had been applied to gas chromatography mass spectrometry instruments before its adoption in quadrupole gas analyzers. Most commercially available CIS systems operate between and Torr and offer ppm level detectability over the entire mass range for process pressures between and Torr. The upper limit is set by reduction in mean free path for ion-neutral collisions which takes place at higher pressures, and results in the scattering of ions and reduced sensitivity.
The CIS anode may be viewed as a high conductance tube connected directly to the process chamber. The pressure in the ionization area is virtually the same as the rest of the chamber. Thus the CIS ionizer produces ions by electron impact directly at the process pressure whilst the rest of the mass analyzer is kept under high vacuum. Such direct sampling provides good sensitivity and fast response times.
References
External links
A reference describing threshold ionisation with RGA
Mass spectrometry
Vacuum gauges | Residual gas analyzer | [
"Physics",
"Chemistry",
"Engineering"
] | 845 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Vacuum",
"Mass spectrometry",
"Vacuum gauges",
"Vacuum systems",
"Matter"
] |
2,432,047 | https://en.wikipedia.org/wiki/Membrane-introduction%20mass%20spectrometry | Membrane-introduction mass spectrometry (MIMS) is a method of introducing analytes into the mass spectrometer's vacuum chamber via a semi-permeable membrane. Usually a thin, gas-permeable, hydrophobic membrane is used, for example polydimethylsiloxane. Samples can be almost any fluid including water, air or sometimes even solvents. The great advantage of the method of sample introduction is its simplicity. MIMS can be used to measure a variety of analytes in real-time, with little or no sample preparation. MIMS is most useful for the measurement of small, non-polar molecules, since molecules of this type have a greater affinity for the membrane material than the sample. The advantage of this method is that complex samples that cannot diffuse through the membrane are not incorporated into the mass spectroscopic measurements, highlighting the simplicity of only analyzing (small) molecules of interest.
See also
Atmospheric pressure chemical ionization
Liquid chromatography-mass spectrometry
References
Mass spectrometry | Membrane-introduction mass spectrometry | [
"Physics",
"Chemistry"
] | 217 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Analytical chemistry stubs",
"Matter"
] |
2,432,697 | https://en.wikipedia.org/wiki/ACPI | Advanced Configuration and Power Interface (ACPI) is an open standard that operating systems can use to discover and configure computer hardware components, to perform power management (e.g. putting unused hardware components to sleep), auto configuration (e.g. Plug and Play and hot swapping), and status monitoring. It was first released in December 1996. ACPI aims to replace Advanced Power Management (APM), the MultiProcessor Specification, and the Plug and Play BIOS (PnP) Specification. ACPI brings power management under the control of the operating system, as opposed to the previous BIOS-centric system that relied on platform-specific firmware to determine power management and configuration policies. The specification is central to the Operating System-directed configuration and Power Management (OSPM) system. ACPI defines hardware abstraction interfaces between the device's firmware (e.g. BIOS, UEFI), the computer hardware components, and the operating systems.
Internally, ACPI advertises the available components and their functions to the operating system kernel using instruction lists ("methods") provided through the system firmware (UEFI or BIOS), which the kernel parses. ACPI then executes the desired operations written in ACPI Machine Language (such as the initialization of hardware components) using an embedded minimal virtual machine.
Intel, Microsoft and Toshiba originally developed the standard, while HP, Huawei and Phoenix also participated later. In October 2013, the ACPI Special Interest Group (ACPI SIG), the original developers of the ACPI standard, agreed to transfer all assets to the UEFI Forum, in which all future development takes place. The latest version of the standard, 6.5, was released in August 2022.
Architecture
The firmware-level ACPI has three main components: the ACPI tables, the ACPI BIOS, and the ACPI registers. The ACPI BIOS generates ACPI tables and loads ACPI tables into main memory. Much of the firmware ACPI functionality is provided in bytecode of ACPI Machine Language (AML), a Turing-complete, domain-specific low-level language, stored in the ACPI tables. To make use of the ACPI tables, the operating system must have an interpreter for the AML bytecode. A reference AML interpreter implementation is provided by the ACPI Component Architecture (ACPICA). At the BIOS development time, AML bytecode is compiled from the ASL (ACPI Source Language) code.
ACPI Component Architecture (ACPICA)
The ACPI Component Architecture (ACPICA), mainly written by Intel's engineers, provides an open-source platform-independent reference implementation of the operating system–related ACPI code. The ACPICA code is used by Linux, Haiku, ArcaOS and FreeBSD, which supplement it with their operating-system specific code.
History
The first revision of the ACPI specification was released in December 1996, supporting 16, 24 and 32-bit addressing spaces. It was not until August 2000 that ACPI received 64-bit address support as well as support for multiprocessor workstations and servers with revision 2.0.
In 1999, then-Microsoft CEO Bill Gates stated in an e-mail that Linux would benefit from ACPI without them having to do work, and suggested making it Windows-only.
In September 2004, revision 3.0 was released, bringing to the ACPI specification support for SATA interfaces, PCI Express bus, multiprocessor support for more than 256 processors, ambient light sensors and user-presence devices, as well as extending the thermal model beyond the previous processor-centric support.
Released in June 2009, revision 4.0 of the ACPI specification added various new features to the design; most notable are the USB 3.0 support, logical processor idling support, and x2APIC support.
Initially, ACPI was exclusive to the x86 architecture. Revision 5.0 of the ACPI specification, released in December 2011, added support for the ARM architecture. Revision 5.1 was released in July 2014.
The latest specification revision is 6.5, which was released in August 2022.
Operating systems
Microsoft's Windows 98 was the first operating system to implement ACPI, but its implementation was somewhat buggy or incomplete, although some of the problems associated with it were caused by the first-generation ACPI hardware. Other operating systems, including later versions of Windows, macOS (x86 macOS only), eComStation, ArcaOS, FreeBSD (since FreeBSD 5.0), NetBSD (since NetBSD 1.6), OpenBSD (since OpenBSD 3.8), HP-UX, OpenVMS, Linux, GNU/Hurd and PC versions of Solaris, have at least some support for ACPI. Some newer operating systems, like Windows Vista, require the computer to have an ACPI-compliant BIOS, and since Windows 8, the S0ix/Modern Standby state was implemented.
Windows operating systems use acpi.sys to access ACPI events.
The 2.4 series of the Linux kernel had only minimal support for ACPI, with better support implemented (and enabled by default) from kernel version 2.6.0 onwards. Old ACPI BIOS implementations tend to be quite buggy, and consequently are not supported by later operating systems. For example, Windows 2000, Windows XP, and Windows Server 2003 only use ACPI if the BIOS date is after January 1, 1999. Similarly, Linux kernel 2.6 may not use ACPI if the BIOS date is before January 1, 2001.
Linux-based operating systems can provide handling of ACPI events via acpid.
OSPM responsibilities
Once an OSPM-compatible operating system activates ACPI, it takes exclusive control of all aspects of power management and device configuration. The OSPM implementation must expose an ACPI-compatible environment to device drivers, which exposes certain system, device and processor states.
Power states
Global states
The ACPI Specification defines the following four global "Gx" states and six sleep "Sx" states for an ACPI-compliant computer system:
G0 (S0): Working. The system is running and usable.
G1: Sleeping. This global state is subdivided into the sleep states S1 through S4, ranging from power-on suspend (S1) through suspend-to-RAM (S3) to suspend-to-disk, or hibernation (S4).
G2 (S5): Soft Off. The system appears off and consumes almost no power, but standby power remains available so that the system can be restarted.
G3: Mechanical Off. The system's power has been fully removed, typically via a physical switch or by unplugging the power supply.
The specification also defines a Legacy state: the state of an operating system which does not support ACPI. In this state, the hardware and power are not managed via ACPI, effectively disabling ACPI.
Device states
The device states D0–D3 are device dependent:
D0 or Fully On is the operating state.
As with S0ix, Intel has D0ix states for intermediate levels on the SoC.
D1 and D2 are intermediate power-states whose definition varies by device.
D3: The D3 state is further divided into D3 Hot (has auxiliary power) and D3 Cold (no power provided):
Hot: A device can assert power management requests to transition to higher power states.
Cold or Off has the device powered off and unresponsive to its bus.
Processor states
The CPU power states C0–C3 are defined as follows:
C0 is the operating state.
C1 (often known as Halt) is a state where the processor is not executing instructions, but can return to an executing state essentially instantaneously. All ACPI-conformant processors must support this power state. Some processors, such as the Pentium 4 and AMD Athlon, also support an Enhanced C1 state (C1E or Enhanced Halt State) for lower power consumption; however, this proved to be buggy on some systems.
C2 (often known as Stop-Clock) is a state where the processor maintains all software-visible state, but may take longer to wake up. This processor state is optional.
C3 (often known as Sleep) is a state where the processor does not need to keep its cache coherent, but maintains other state. Some processors have variations on the C3 state (Deep Sleep, Deeper Sleep, etc.) that differ in how long it takes to wake the processor. This processor state is optional.
Additional states are defined by manufacturers for some processors. For example, Intel's Haswell platform has states up to C10, where it distinguishes core states and package states.
Performance state
While a device or processor operates (D0 and C0, respectively), it can be in one of several power-performance states. These states are implementation-dependent. P0 is always the highest-performance state, with P1 to Pn being successively lower-performance states. The total number of states is device or processor dependent, but can be no greater than 16.
P-states have become known as SpeedStep in Intel processors, as PowerNow! or Cool'n'Quiet in AMD processors, and as PowerSaver in VIA processors.
P0: maximum power and frequency
P1: less than P0, voltage and frequency scaled
P2: less than P1, voltage and frequency scaled
Pn: less than P(n−1), voltage and frequency scaled
Interfaces
Hardware
ACPI-compliant systems interact with hardware through either a "Function Fixed Hardware (FFH) Interface", or a platform-independent hardware programming model which relies on platform-specific ACPI Machine Language (AML) provided by the original equipment manufacturer (OEM).
Function Fixed Hardware interfaces are platform-specific features, provided by platform manufacturers for the purposes of performance and failure recovery. Standard Intel-based PCs have a fixed function interface defined by Intel, which provides a set of core functionality that reduces an ACPI-compliant system's need for full driver stacks for providing basic functionality during boot time or in the case of major system failure.
The ACPI Platform Error Interface (APEI) is a specification for reporting hardware errors (e.g., from the chipset or RAM) to the operating system.
Firmware
ACPI defines many tables that provide the interface between an ACPI-compliant operating system and system firmware (BIOS or UEFI). This includes RSDP, RSDT, XSDT, FADT, FACS, DSDT, SSDT, MADT, and MCFG, for example.
The tables allow description of system hardware in a platform-independent manner, and are presented as either fixed-formatted data structures or in AML. The main AML table is the DSDT (differentiated system description table). The AML can be decompiled by tools like Intel's iASL (open-source, part of ACPICA) for purposes like patching the tables for expanding OS compatibility.
The Root System Description Pointer (RSDP) is located in a platform-dependent manner, and describes the rest of the tables.
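To make the table layout concrete, the following is a minimal Python sketch (not part of any specification) that parses the fixed 36-byte header shared by all system description tables. On Linux, the kernel exposes the raw tables under /sys/firmware/acpi/tables, although reading them typically requires root privileges.

```python
import struct

def parse_acpi_table_header(raw: bytes) -> dict:
    """Parse the 36-byte header shared by all ACPI system description tables."""
    sig, length, rev, chksum, oem_id, oem_table_id, oem_rev, creator_id, creator_rev = \
        struct.unpack_from("<4sIBB6s8sI4sI", raw, 0)
    return {
        "signature": sig.decode("ascii"),
        "length": length,  # total table size in bytes, header included
        "revision": rev,
        # per the spec, all bytes of the table must sum to zero modulo 256
        "checksum_ok": sum(raw[:length]) % 256 == 0,
        "oem_id": oem_id.decode("ascii", "replace").strip(),
        "oem_table_id": oem_table_id.decode("ascii", "replace").strip(),
    }

# On Linux the raw tables live under /sys/firmware/acpi/tables:
with open("/sys/firmware/acpi/tables/DSDT", "rb") as f:
    print(parse_acpi_table_header(f.read()))
```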
A custom ACPI table called the Windows Platform Binary Table (WPBT) is used by Microsoft to allow vendors to add software into the Windows OS automatically. Some vendors, such as Lenovo, have been caught using this feature to install harmful software such as Superfish. Samsung shipped PCs with Windows Update disabled. Windows versions older than Windows 7 do not support this feature, but alternative techniques can be used. This behavior has been compared to rootkits.
Criticism
In November 2003, Linus Torvalds—author of the Linux kernel—described ACPI as "a complete design disaster in every way".
See also
Active State Power Management
Coreboot
Green computing
Power management keys
Server Base System Architecture (SBSA)
Wake-on-LAN
Further reading
References
External links
Everything You Need to Know About the CPU C-States Power Saving Modes
Sample EFI ASL code used by VirtualBox; EFI/ASL code itself is from the open source Intel EFI Development Kit II (TianoCore)
ACPICA
BIOS
Unified Extensible Firmware Interface
Application programming interfaces
Computer hardware standards
Open standards
Electric power
System administration | ACPI | [
"Physics",
"Technology",
"Engineering"
] | 2,437 | [
"Physical quantities",
"Computer standards",
"System administration",
"Power (physics)",
"Information systems",
"Electric power",
"Electrical engineering",
"Computer hardware standards"
] |
2,432,911 | https://en.wikipedia.org/wiki/Mass%20flow%20sensor | A mass (air) flow sensor (MAF) is a sensor used to determine the mass flow rate of air entering a fuel-injected internal combustion engine.
The air mass information is necessary for the engine control unit (ECU) to balance and deliver the correct fuel mass to the engine. Air changes its density with temperature and pressure. In automotive applications, air density varies with the ambient temperature, altitude and the use of forced induction, which means that mass flow sensors are more appropriate than volumetric flow sensors for determining the quantity of intake air in each cylinder.
There are two common types of mass airflow sensors in use on automotive engines. These are the vane meter and the hot wire. Neither design employs technology that measures air mass directly. However, with additional sensors and inputs, an engine's ECU can determine the mass flow rate of intake air.
Both approaches are used almost exclusively on electronic fuel injection (EFI) engines. Both sensor designs output a 0.0–5.0 volt or a pulse-width modulation (PWM) signal that is proportional to the air mass flow rate, and both sensors have an intake air temperature (IAT) sensor incorporated into their housings on most post-OBDII vehicles. Vehicles made before 1996 could have a MAF sensor without an IAT sensor; one example is the 1994 Infiniti Q45.
When a MAF sensor is used in conjunction with an oxygen sensor, the engine's air/fuel ratio can be controlled very accurately. The MAF sensor provides the ECU's open-loop controller with its predicted air-flow information (the measured air flow), and the oxygen sensor provides closed-loop feedback in order to make minor corrections to the predicted air mass. See also manifold absolute pressure sensor (MAP sensor). Since around 2012, some MAF sensors have included a humidity sensor.
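As a rough illustration of why the ECU needs air mass rather than volume, the sketch below estimates per-cylinder air mass from manifold pressure and intake temperature using the ideal gas law, in the style of a MAP-based speed-density calculation. It is a simplification under assumed values (a fixed volumetric efficiency, dry air); real ECU calibrations are far more elaborate.

```python
R_SPECIFIC_AIR = 287.05  # J/(kg·K), specific gas constant for dry air

def air_density(map_kpa: float, iat_celsius: float) -> float:
    """Ideal-gas air density in kg/m^3 from manifold pressure and intake temperature."""
    return (map_kpa * 1000.0) / (R_SPECIFIC_AIR * (iat_celsius + 273.15))

def cylinder_air_mass(map_kpa, iat_celsius, cyl_volume_l, volumetric_eff=0.85):
    """Approximate air mass (grams) drawn into one cylinder per intake stroke."""
    rho = air_density(map_kpa, iat_celsius)  # kg/m^3
    return rho * (cyl_volume_l / 1000.0) * volumetric_eff * 1000.0  # grams

# 100 kPa, 20 °C, 0.5 L cylinder -> roughly 0.5 g of air per intake stroke
print(cylinder_air_mass(100.0, 20.0, 0.5))
```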
Moving vane meter
The VAF (vane air flow) sensor measures the momentum of the air flow into the engine with a spring-loaded air vane (flap/door) attached to a variable resistor (potentiometer). The vane moves in proportion to the momentum of the airflow. A voltage is applied to the potentiometer and a voltage appears on the output terminal of the potentiometer proportional to the angle the vane rotates, or the movement of the vane may directly regulate the amount of fuel injected, as in the K-Jetronic system.
Many VAF sensors have an air-fuel adjustment screw, which opens or closes a small air passage on the side of the VAF sensor. This screw controls the air-fuel mixture by letting a metered amount of air flow past the air flap, thereby leaning or enriching the mixture. Turning the screw clockwise enriches the mixture; turning it counterclockwise leans it.
The vane moves because of the drag force of the air flow against it; it does not measure volume or mass directly. The drag force depends on air density (which in turn depends on air temperature), air velocity and the shape of the vane; see drag equation. Some VAF sensors include an additional intake air temperature sensor (IAT sensor) to allow the engine's ECU to calculate the density of the air and adjust fuel delivery accordingly.
The vane meter approach has some drawbacks:
it restricts airflow which limits engine output
its moving electrical or mechanical contacts can wear
finding a suitable mounting location within a confined engine compartment is problematic
the vane has to be oriented with respect to gravity.
in some manufacturers' designs, fuel pump control was also part of the VAF sensor's internal wiring.
Hot wire sensor (MAF)
A hot wire mass airflow sensor determines the mass of air flowing into the engine's air intake system. The theory of operation of the hot wire mass airflow sensor is similar to that of the hot wire anemometer (which determines air velocity). This is achieved by heating a wire suspended in the engine's air stream, like a toaster wire, by applying a constant voltage over the wire. The wire's electrical resistance increases as the wire's temperature increases, which varies the electrical current flowing through the circuit, according to Ohm's law. When air flows past the wire, the wire cools, decreasing its resistance, which in turn allows more current to flow through the circuit, since the supply voltage is a constant. As more current flows, the wire's temperature increases until the resistance reaches equilibrium again. The current increase or decrease is proportional to the mass of air flowing past the wire. The integrated electronic circuit converts the proportional measurement into a proportional voltage which is sent to the ECU.
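Hot-wire anemometry is classically calibrated with King's law, E² = A + B·Uⁿ, which relates bridge voltage E to flow velocity U. The sketch below inverts that relation for velocity; the constants A, B and n are placeholders standing in for values that would be determined by calibrating a specific wire in a specific fluid.

```python
def velocity_from_bridge_voltage(e_volts, a=1.2, b=0.8, n=0.45):
    """Invert King's law E^2 = A + B * U**n for the flow velocity U.

    A, B and n are empirical calibration constants for a given wire and
    fluid; the defaults here are illustrative placeholders only.
    """
    if e_volts ** 2 <= a:
        return 0.0  # at or below the no-flow bridge voltage
    return ((e_volts ** 2 - a) / b) ** (1.0 / n)

print(velocity_from_bridge_voltage(2.0))  # velocity in m/s for the assumed calibration
```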
If air density increases due to pressure increase or temperature drop, but the air volume remains constant, the denser air will remove more heat from the wire indicating a higher mass airflow. Unlike the vane meter's paddle sensing element, the hot wire responds directly to air density. This sensor's capabilities are well suited to support the gasoline combustion process which fundamentally responds to air mass, not air volume. (See stoichiometry.)
This sensor sometimes employs a mixture screw, but this screw is fully electronic and uses a variable resistor (potentiometer) instead of an air bypass screw. The screw needs more turns to achieve the desired results. A hot wire burn-off cleaning circuit is employed on some of these sensors. A burn-off relay applies a high current through the platinum hot wire after the vehicle is turned off for a second or so, thereby burning or vaporizing any contaminants that have stuck to the platinum hot wire element.
The hot film MAF sensor works somewhat similarly to the hot wire MAF sensor, but it usually outputs a frequency signal instead. This sensor uses a hot film-grid instead of a hot wire. It is commonly found in late 1980s and early 1990s fuel-injected vehicles. The output frequency is directly proportional to the air mass entering the engine, so as mass flow increases, so does frequency. These sensors tend to cause intermittent problems due to internal electrical failures. The use of an oscilloscope is strongly recommended to check the output frequency of these sensors. Frequency distortion is also common when the sensor starts to fail. Many technicians in the field use a tap test with very conclusive results. Not all HFM systems output a frequency; in some cases, the sensor works by outputting a regular varying voltage signal.
A micro-bridge uses the same principles but arranged on a silicon chip.
Coldwire sensor
The GM LS engine series (as well as others) use a coldwire MAF system (produced by AC Delco) that works similarly to the hot-wire MAF system; however, it uses an additional "cold" resistor to measure the ambient air and provide a reference for the "hot" resistor element used to measure the air flow.
The mesh on the MAF is used to smooth out airflow to ensure the sensors have the best chance of a steady reading. It is not used for measuring the air flow per se. In situations where owners use oiled-gauze air filters, it is possible for excess oil to coat the MAF sensor and skew its readings. Indeed, General Motors has issued a Technical Service Bulletin, indicating problems from rough idle all the way to possible transmission damage resulting from the contaminated sensors. To clean the delicate MAF sensor components, a specific MAF sensor cleaner or electronics cleaner should be used, not carburetor or brake cleaners, which can be too aggressive chemically. Instead, the liquid phase of MAF sensor cleaners and electronics cleaners is typically based on hexanes or heptanes with little to no alcohol content and use either carbon dioxide or HFC-152a as aerosol propellants. The sensors should be gently sprayed from a careful distance to avoid physically damaging them and then allowed to thoroughly dry before reinstalling. Manufacturers claim that a simple but extremely reliable test to ensure correct functionality is to tap the unit with the back of a screwdriver while the car is running, and if this causes any changes in the output frequency then the unit should be discarded and an OEM replacement installed.
Kármán vortex sensor
A Kármán vortex sensor works by disrupting the air stream with a perpendicular bow. Providing that the incoming flow is laminar, the wake consists of an oscillatory pattern of Kármán vortices. The frequency of the resulting pattern is proportional to the air velocity.
These vortices can either be read directly as a pressure pulse against a sensor, or they can be made to collide with a mirror which will then interrupt or transmit a reflected light beam to generate the pulses in response to the vortices. The first type can only be used in pull-thru air (prior to a turbo- or supercharger), while the second type could theoretically be used push- or pull-thru air (before or after a forced induction application like the previously mentioned super- or turbocharger). Instead of outputting a constant voltage modified by a resistance factor, this type of MAF outputs a frequency which must then be interpreted by the ECU. This type of MAF can be found on all DSMs (Mitsubishi Eclipse, Eagle Talon, Plymouth Laser), many Mitsubishis, some Toyotas and Lexus, and some BMWs, among others.
Membrane sensor
An emerging technology utilizes a very thin electronic membrane placed in the air stream. The membrane has a thin film temperature sensor printed on the upstream side, and one on the downstream side. A heater is integrated in the center of the membrane which maintains a constant temperature similar to the hot-wire approach. Without any airflow, the temperature profile across the membrane is uniform. When air flows across the membrane, the upstream side cools differently from the downstream side. The difference between the upstream and downstream temperature indicates the mass airflow. The thermal membrane sensor is also capable of measuring flow in both directions, which sometimes occur in pulsating situations. Technological progress allows this kind of sensor to be manufactured on the microscopic scale as microsensors using microelectromechanical systems technology. Such a microsensor reaches a significantly higher speed and sensitivity compared with macroscopic approaches. See also MEMS sensor generations.
Laminar flow elements
Laminar flow elements measure the volumetric flow of gases directly. They operate on the principle that, given laminar flow, the pressure difference across a pipe is linearly proportional to the flow rate. Laminar flow conditions are present in a gas when the Reynolds number of the gas is below the critical figure. The viscosity of the fluid must be compensated for in the result. Laminar flow elements are usually constructed from a large number of parallel pipes to achieve the required flow rating.
See also
List of auto parts
List of sensors
Manifold absolute pressure (MAP)
References
Engine sensors
Flow meters
Gas technologies
Mass | Mass flow sensor | [
"Physics",
"Chemistry",
"Mathematics",
"Technology",
"Engineering"
] | 2,221 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Mass",
"Measuring instruments",
"Size",
"Flow meters",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
19,147,875 | https://en.wikipedia.org/wiki/Electron%20localization%20function | In quantum chemistry, the electron localization function (ELF) is a measure of the likelihood of finding an electron in the neighborhood space of a reference electron located at a given point and with the same spin. Physically, this measures the extent of spatial localization of the reference electron and provides a method for the mapping of electron pair probability in multielectronic systems.
ELF's usefulness stems from the observation that it allows electron localization to be analyzed in a chemically intuitive way. For example, the shell structure of heavy atoms is obvious when plotting ELF against the radial distance from the nucleus; the ELF for radon has six clear maxima, whereas the electronic density decreases monotonically and the radially weighted density fails to show all shells. When applied to molecules, an analysis of the ELF shows a clear separation between the core and valence electrons, and also shows covalent bonds and lone pairs, in what has been called "a faithful visualization of VSEPR theory in action". Another feature of the ELF is that it is invariant with respect to transformations of the molecular orbitals.
The ELF was originally defined by Becke and Edgecombe in 1990. They first argued that a measure of the electron localization is provided by

$$D_\sigma(\mathbf{r}) = \tau_\sigma(\mathbf{r}) - \frac{1}{4}\,\frac{|\nabla\rho_\sigma(\mathbf{r})|^2}{\rho_\sigma(\mathbf{r})},$$

where $\rho_\sigma$ is the electron spin density and $\tau_\sigma$ the kinetic energy density. The second (negative) term is the bosonic kinetic energy density, so $D_\sigma$ is the contribution due to fermions. $D_\sigma$ is expected to be small in those regions of space where localized electrons are to be found. Given the arbitrariness of the magnitude of the localization measure provided by $D_\sigma$, it is compared to the corresponding value for a uniform electron gas with spin density equal to $\rho_\sigma(\mathbf{r})$, which is given by

$$D_\sigma^0(\mathbf{r}) = \frac{3}{5}\,(6\pi^2)^{2/3}\,\rho_\sigma(\mathbf{r})^{5/3}.$$
The ratio,

$$\chi_\sigma(\mathbf{r}) = \frac{D_\sigma(\mathbf{r})}{D_\sigma^0(\mathbf{r})},$$

is a dimensionless localization index that expresses electron localization relative to the uniform electron gas. In the final step, the ELF is defined in terms of $\chi_\sigma$ by mapping its values onto the range $(0, 1]$, defining the electron localization function as

$$\mathrm{ELF}(\mathbf{r}) = \frac{1}{1 + \chi_\sigma(\mathbf{r})^2},$$

with $\mathrm{ELF} = 1$ corresponding to perfect localization and $\mathrm{ELF} = \tfrac{1}{2}$ corresponding to the uniform electron gas.
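Given the density, its gradient magnitude and the kinetic energy density on a grid (for example from a DFT code), the ELF follows directly from the formulas above. A minimal NumPy sketch, assuming atomic units and a single spin channel:

```python
import numpy as np

def elf(rho, grad_rho, tau):
    """Electron localization function on a grid, for one spin channel.

    rho: spin density, grad_rho: |∇rho|, tau: kinetic energy density
    (all NumPy arrays on the same grid, atomic units assumed).
    """
    d = tau - 0.25 * grad_rho**2 / rho  # Pauli (fermionic) kinetic energy density D_sigma
    d0 = 0.6 * (6.0 * np.pi**2) ** (2.0 / 3.0) * rho ** (5.0 / 3.0)  # uniform-gas reference
    chi = d / d0                        # dimensionless localization index
    return 1.0 / (1.0 + chi**2)         # maps onto (0, 1]; 0.5 = uniform electron gas
```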
The original derivation was based on Hartree–Fock theory. For density functional theory, the approach was generalized by Andreas Savin in 1992, who also applied the formulation to examining various chemical and materials systems. In 1994, Bernard Silvi and Andreas Savin developed a method for explaining ELFs using differential topology.
The approach of electron localization, in the form of atoms in molecules (AIM), was pioneered by Richard Bader. Bader's analysis partitions the charge density in a molecule to "atoms" according to zero-flux surfaces (surfaces across which no electron flow is taking place). Bader's analysis allows many properties such as multipole moments, energies and forces, to be partitioned in a defensible and consistent manner to individual atoms within molecules.
Both the Bader approach and the ELF approach to partitioning of molecular properties have gained popularity in recent years because the fastest, accurate ab-initio calculations of molecular properties are now mostly made using density functional theory (DFT), which directly calculates the electron density. This electron density is then analyzed using Bader charge analysis or electron localization functions. One of the most popular functionals in DFT was first proposed by Becke, who also originated the electron localization function.
References
External links
Frank R. Wagner (ed.) Electron localizability: chemical bonding analysis in direct and momentum space. Max-Planck-Institut für Chemische Physik fester Stoffe, 2002. (accessed 2008-09-02).
Quantum chemistry
Chemical bonding | Electron localization function | [
"Physics",
"Chemistry",
"Materials_science"
] | 725 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
"Chemical bonding",
" and optical physics"
] |
19,158,870 | https://en.wikipedia.org/wiki/Journal%20of%20Automata%2C%20Languages%20and%20Combinatorics | The Journal of Automata, Languages and Combinatorics (JALC) is a peer-reviewed scientific journal of computer science. It was established in 1965 as the Journal of Information Processing and Cybernetics (German: Elektronische Informationsverarbeitung und Kybernetik) and obtained its current title in 1996 with volume numbering reset to 1. The main focus of the journal is on automata theory, formal language theory, and combinatorics.
Until 2015, the editor-in-chief of the journal was Jürgen Dassow of the Otto von Guericke University of Magdeburg. From 2016, the editors-in-chief are Markus Holzer and Martin Kutrib, and publication is handled by the Institute of Informatics at the University of Giessen.
Bibliographic databases indexing the journal include the ACM Guide to Computing Literature, the Digital Bibliography & Library Project, the MathSciNet database, and the Zentralblatt MATH.
Most cited articles
According to Google Scholar, the following articles have been cited most often (≥ 100 times):
Mehryar Mohri. Semiring frameworks and algorithms for shortest-distance problems. Journal of Automata, Languages and Combinatorics 7(3):321–350 (2002)
Rolf Wiehagen. Limes-Erkennung rekursiver Funktionen durch spezielle Strategien. Elektronische Informationsverarbeitung und Kybernetik 12(1/2):93–99 (1976)
Gheorghe Păun. Regular extended H systems are computationally universal. Journal of Automata, Languages and Combinatorics 1(1):27–36 (1996)
References
External links
Computer science journals
Theoretical computer science
Academic journals established in 1996
Quarterly journals
English-language journals | Journal of Automata, Languages and Combinatorics | [
"Mathematics"
] | 382 | [
"Theoretical computer science",
"Applied mathematics"
] |
5,947,843 | https://en.wikipedia.org/wiki/Welch%E2%80%93Satterthwaite%20equation | In statistics and uncertainty analysis, the Welch–Satterthwaite equation is used to calculate an approximation to the effective degrees of freedom of a linear combination of independent sample variances, also known as the pooled degrees of freedom, corresponding to the pooled variance.
For sample variances $s_i^2$, $i = 1, \dots, n$, each having $\nu_i$ degrees of freedom, one often computes the linear combination

$$\chi = \sum_{i=1}^n k_i s_i^2,$$

where each $k_i$ is a real positive number (in the common two-sample application, $k_i = 1/n_i$ with $n_i$ the sample size). In general, the probability distribution of $\chi$ cannot be expressed analytically. However, its distribution can be approximated by another chi-squared distribution, whose effective degrees of freedom are given by the Welch–Satterthwaite equation

$$\nu_\chi \approx \frac{\left(\sum_{i=1}^n k_i s_i^2\right)^2}{\sum_{i=1}^n \dfrac{(k_i s_i^2)^2}{\nu_i}}.$$
There is no assumption that the underlying population variances are equal. This is known as the Behrens–Fisher problem.
The result can be used to perform approximate statistical inference tests. The simplest application of this equation is in performing Welch's t-test.
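For the two-sample case of Welch's t-test (with $k_i = 1/n_i$), the effective degrees of freedom can be computed directly; a short Python sketch:

```python
def welch_satterthwaite_dof(s1_sq, n1, s2_sq, n2):
    """Effective degrees of freedom for Welch's t-test (k_i = 1/n_i)."""
    term1 = s1_sq / n1
    term2 = s2_sq / n2
    return (term1 + term2) ** 2 / (term1**2 / (n1 - 1) + term2**2 / (n2 - 1))

# Two samples with unequal variances and sizes; the result (~11.3) lies
# between min(n_i - 1) = 9 and the pooled n1 + n2 - 2 = 28:
print(welch_satterthwaite_dof(s1_sq=4.0, n1=10, s2_sq=1.0, n2=20))
```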
See also
Pooled variance
References
Further reading
Michael Allwood (2008) "The Satterthwaite Formula for Degrees of Freedom in the Two-Sample t-Test", AP Statistics'', Advanced Placement Program, The College Board.
Theorems in statistics
Equations
Statistical approximations | Welch–Satterthwaite equation | [
"Mathematics"
] | 247 | [
"Mathematical theorems",
"Theorems in statistics",
"Mathematical objects",
"Equations",
"Mathematical relations",
"Statistical approximations",
"Mathematical problems",
"Approximations"
] |
5,949,047 | https://en.wikipedia.org/wiki/Hjulstr%C3%B6m%20curve | The Hjulström curve, named after Filip Hjulström (1902–1982), is a graph used by hydrologists and geologists to determine whether a river will erode, transport, or deposit sediment. It was originally published in his doctoral thesis "Studies of the morphological activity of rivers as illustrated by the river Fyris." in 1935. The graph takes sediment particle size and water velocity into account.
The upper curve shows the critical erosion velocity in cm/s as a function of particle size in mm, while the lower curve shows the deposition velocity as a function of particle size. Note that the axes are logarithmic.
The plot shows several key concepts about the relationships between erosion, transportation, and deposition. For particle sizes where friction is the dominating force preventing erosion, the curves follow each other closely and the required velocity increases with particle size. However, for cohesive sediment, mostly clay but also silt, the erosion velocity increases with decreasing grain size, as the cohesive forces are relatively more important when the particles get smaller. The critical velocity for deposition, on the other hand, depends on the settling velocity, which decreases with decreasing grain size. The Hjulström curve shows that sand particles of a size around 0.1 mm require the lowest stream velocity to erode.
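For the fine-grained, deposition-controlled side of the diagram, the settling velocity of small grains is well approximated by Stokes' law. The sketch below is not the Hjulström relation itself, only the Stokes settling velocity, computed under assumed quartz-in-water defaults:

```python
def stokes_settling_velocity(d_mm, rho_s=2650.0, rho_f=1000.0,
                             mu=1.0e-3, g=9.81):
    """Settling velocity (cm/s) of a small sphere in water via Stokes' law.

    Valid only for fine grains (roughly d < 0.1 mm, Reynolds number << 1).
    Defaults assume quartz density and water viscosity at about 20 °C.
    """
    d = d_mm / 1000.0                              # mm -> m
    w = (rho_s - rho_f) * g * d**2 / (18.0 * mu)   # m/s
    return w * 100.0                               # m/s -> cm/s

print(stokes_settling_velocity(0.01))  # ~0.009 cm/s for a 10 µm grain
```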
The curve was expanded by Åke Sundborg in 1956. He significantly improved the level of detail in the cohesive part of the diagram, and added lines for different modes of transportation. The result is called the Sundborg diagram, or the Hjulström-Sundborg Diagram, in the academic literature.
This curve dates back to early 20th-century research on river geomorphology and is now mainly of historical value, although its simplicity is still attractive. Among the drawbacks of this curve are that it does not take the water depth into account and, more importantly, that it does not show that sedimentation is caused by flow deceleration and erosion by flow acceleration. The dimensionless Shields diagram, in combination with the Shields formula, is now widely accepted for predicting the initiation of sediment motion in rivers. Much work was done on river sediment-transport formulae in the second half of the 20th century, and that work should be preferred to Hjulström's curve.
See also
Sediment transport
Hjulström–Sundborg diagram (covered under Sediment transport)
References
Hydrology
Geomorphology
Curves
Geological techniques
1935 in science
Eponymous curves | Hjulström curve | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 506 | [
"Hydrology",
"Hydrology stubs",
"Environmental engineering"
] |
5,950,590 | https://en.wikipedia.org/wiki/Nautilus%20%28secure%20telephone%29 | Nautilus is a program which allows two parties to securely communicate using modems or TCP/IP. It runs from a command line and is available for the Linux and Windows operating systems. The name was based upon Jules Verne's Nautilus and its ability to overcome a Clipper ship as a play on Clipper chip.
The program was originally developed by Bill Dorsey, Andy Fingerhut, Paul Rubin, Bill Soley, and David Miller.
Nautilus is historically significant in the realm of secure communications because it was one of the first programs released as open source to the general public that used strong encryption. It was created as a response to the Clipper chip, through which the US government planned to impose a key escrow scheme on all products using the chip, allowing it to monitor "secure" communications. Once this program and the similar program PGPfone were available on the internet, the proverbial cat was out of the bag and it would have been nearly impossible to stop the use of strong encryption for telephone communications.
The project had to move its web presence by the end of May 2014 due to the decision to shut down the developer platform that hosted it.
External links
New Nautilus homepage, in use from May 1, 2014 onwards
"Can Nautilus Sink Clipper?" Article in Wired, Aug 1995
Secure telephones
Cryptographic software
VoIP software | Nautilus (secure telephone) | [
"Mathematics"
] | 288 | [
"Cryptographic software",
"Mathematical software"
] |
18,132,644 | https://en.wikipedia.org/wiki/Signomial | A signomial is an algebraic function of one or more independent variables. It is perhaps most easily thought of as an algebraic extension of multivariable polynomials—an extension that permits exponents to be arbitrary real numbers (rather than just non-negative integers) while requiring the independent variables to be strictly positive (so that division by zero and other inappropriate algebraic operations are not encountered).
Formally, a signomial is a function with domain $\mathbb{R}_{>0}^n$ which takes values

$$f(x_1, \dots, x_n) = \sum_{i=1}^M c_i \prod_{j=1}^n x_j^{a_{ij}},$$

where the coefficients $c_i$ and the exponents $a_{ij}$ are real numbers. Signomials are closed under addition, subtraction, multiplication, and scaling.
If we restrict all $c_i$ to be positive, then the function $f$ is a posynomial. Consequently, each signomial is either a posynomial, the negative of a posynomial, or the difference of two posynomials. If, in addition, all exponents $a_{ij}$ are non-negative integers, then the signomial becomes a polynomial whose domain is the positive orthant.
For example,

$$f(x_1, x_2, x_3) = 2.7\, x_1^2 x_2^{-1/3} x_3^{0.7} - 2\, x_1^{-4} x_2^{2/5}$$

is a signomial.
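Evaluating a signomial is a direct translation of the definition; a short NumPy sketch (the coefficient and exponent arrays below encode the example above):

```python
import numpy as np

def signomial(x, coeffs, exponents):
    """Evaluate f(x) = sum_i c_i * prod_j x_j**a_ij for x > 0.

    coeffs: length-M array of real c_i; exponents: M x n array of real a_ij.
    """
    x = np.asarray(x, dtype=float)
    if np.any(x <= 0):
        raise ValueError("signomials are only defined on the positive orthant")
    return float(np.sum(coeffs * np.prod(x ** np.asarray(exponents), axis=1)))

# The two-term example above: 2.7*x1^2*x2^(-1/3)*x3^0.7 - 2*x1^(-4)*x2^(2/5)
c = np.array([2.7, -2.0])
a = np.array([[ 2.0, -1/3, 0.7],
              [-4.0,  2/5, 0.0]])
print(signomial([1.0, 1.0, 1.0], c, a))  # -> 0.7
```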
The term "signomial" was introduced by Richard J. Duffin and Elmor L. Peterson in their seminal joint work on general algebraic optimization—published in the late 1960s and early 1970s. A recent introductory exposition involves optimization problems. Nonlinear optimization problems with constraints and/or objectives defined by signomials are harder to solve than those defined by only posynomials, because (unlike posynomials) signomials cannot necessarily be made convex by applying a logarithmic change of variables. Nevertheless, signomial optimization problems often provide a much more accurate mathematical representation of real-world nonlinear optimization problems.
See also
Posynomial
Geometric programming
References
External links
S. Boyd, S. J. Kim, L. Vandenberghe, and A. Hassibi, A Tutorial on Geometric Programming
Functions and mappings
Mathematical optimization | Signomial | [
"Mathematics"
] | 396 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Mathematical relations",
"Mathematical optimization"
] |
18,140,032 | https://en.wikipedia.org/wiki/Potentiometer%20%28measuring%20instrument%29 | A potentiometer is an instrument for measuring voltage or 'potential difference' by comparison of an unknown voltage with a known reference voltage. If a sensitive indicating instrument is used, very little current is drawn from the source of the unknown voltage. Since the reference voltage can be produced from an accurately calibrated voltage divider, a potentiometer can provide high precision in measurement. The method was described by Johann Christian Poggendorff around 1841 and became a standard laboratory measuring technique.
In this arrangement, a fraction of a known voltage from a resistive slide wire is compared with an unknown voltage by means of a galvanometer. The sliding contact or wiper of the potentiometer is adjusted and the galvanometer briefly connected between the sliding contact and the unknown voltage. The deflection of the galvanometer is observed and the sliding tap adjusted until the galvanometer no longer deflects from zero. At that point the galvanometer draws no current from the unknown source, and the magnitude of voltage can be calculated from the position of the sliding contact.
This null balance measuring method is still important in electrical metrology and standards work and is also used in other areas of electronics.
Measurement potentiometers are divided into four main classes listed below.
Principle of operation
The principle of a potentiometer is that the potential dropped across a segment of a wire of uniform cross-section carrying a constant current is directly proportional to its length. The potentiometer is a simple device used to measure the electrical potentials (or compare the e.m.f of a cell). One form of potentiometer is a uniform high-resistance wire attached to an insulating support, marked with a linear measuring scale. In use, an adjustable regulated voltage source E, of greater magnitude than the potential to be measured, is connected across the wire so as to pass a steady current through it.
Between the end of the wire and any point along it will be a potential proportional to the length of wire to that point. By comparing the potential at points along the wire with an unknown potential, the magnitude of the unknown potential can be determined. The instrument used for comparison must be sensitive, but need not be particularly well-calibrated or accurate so long as its deflection from zero position can be easily detected.
Constant current potentiometer
In this circuit, the ends of a uniform resistance wire R1 are connected to a regulated DC supply VS for use as a voltage divider. The potentiometer is first calibrated by positioning the wiper (arrow) at the spot on the R1 wire that corresponds to the voltage of a standard cell, so that the voltage across the tapped R2 portion of the wire equals the standard cell's emf.
A standard electrochemical cell is used whose emf is known (e.g. 1.0183 volts for a Weston standard cell).
The supply voltage VS is then adjusted until the galvanometer shows zero, indicating the voltage on R2 is equal to the standard cell voltage.
An unknown DC voltage, in series with the galvanometer, is then connected to the sliding wiper, across a variable-length section R3 of the resistance wire. The wiper is moved until no current flows into or out of the source of unknown voltage, as indicated by the galvanometer in series with the unknown voltage. The voltage across the selected R3 section of wire is then equal to the unknown voltage. The final step is to calculate the unknown voltage from the fraction of the length of the resistance wire that was connected to the unknown voltage.
The galvanometer does not need to be calibrated, as its only function is to read zero or not zero. When measuring an unknown voltage and the galvanometer reads zero, no current is drawn from the unknown voltage and so the reading is independent of the source's internal resistance, as if by a voltmeter of infinite resistance.
Because the resistance wire can be made very uniform in cross-section and resistivity, and the position of the wiper can be measured easily, this method can be used to measure unknown DC voltages greater than or less than a calibration voltage produced by a standard cell without drawing any current from the standard cell.
If the potentiometer is attached to a constant voltage DC supply such as a lead–acid battery, then a second variable resistor (not shown) can be used to calibrate the potentiometer by varying the current through the R1 resistance wire.
If the length of the R1 resistance wire is AB, where A is the (-) end and B is the (+) end, and the movable wiper is at point X at a distance AX on the R3 portion of the resistance wire when the galvanometer gives a zero reading for an unknown voltage, the distance AX is measured or read from a pre-printed scale next to the resistance wire. The unknown voltage can then be calculated as

$$V_\text{unknown} = V_S \cdot \frac{AX}{AB}.$$
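The balance calculation is a single proportion; a minimal sketch, assuming the supply VS is applied across the full uniform wire AB:

```python
def unknown_voltage(v_supply, ax_cm, ab_cm):
    """Voltage at balance on a uniform slide-wire potentiometer.

    With the supply V_S across the full wire AB, the potential across the
    AX segment at the null point equals the unknown voltage.
    """
    return v_supply * ax_cm / ab_cm

# Wire of 100 cm with 2.0 V across it; galvanometer nulls at 51.0 cm:
print(unknown_voltage(2.0, 51.0, 100.0))  # -> 1.02 V
```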
Constant resistance potentiometer
The constant resistance potentiometer is a variation of the basic idea in which a variable current is fed through a fixed resistor. These are used primarily for measurements in the millivolt and microvolt range.
Microvolt potentiometer
This is a form of the constant resistance potentiometer described above but designed to minimize the effects of contact resistance and thermal emf. This equipment is satisfactorily used down to readings of 1000 nV or so.
Thermocouple potentiometer
Another development of the standard types was the 'thermocouple potentiometer' especially adapted for temperature measurement with thermocouples. Potentiometers for use with thermocouples also measure the temperature at which the thermocouple wires are connected, so that cold-junction compensation may be applied to correct the apparent measured EMF to the standard cold-junction temperature of 0 degrees C.
Analytical chemistry
To make a potentiometric determination of an analyte in a solution, the potential of the cell is measured. This measurement must be corrected for the reference and junction potentials. It can also be used in standardisation methods. The concentration of the analyte can then be calculated from the Nernst Equation. Many varieties of this basic principle exist for quantitative measurements.
Metre bridge
A metre bridge is a simple type of potentiometer which may be used in school science laboratories to demonstrate the principle of resistance measurement by potentiometric means. A resistance wire is laid along the length of a metre rule and contact with the wire is made through a galvanometer by a slider. When the galvanometer reads zero, the ratio between the lengths of wire to the left and right of the slider is equal to the ratio between the values of a known and an unknown resistor in a parallel circuit.
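At balance, the two resistances are in the same ratio as the wire lengths on either side of the slider; a one-line sketch, with the unknown resistor assumed (by convention here) on the left arm:

```python
def unknown_resistance(r_known_ohms, l_left_cm, l_right_cm):
    """Metre-bridge balance: R_unknown / R_known = L_left / L_right.

    L_left and L_right are the wire lengths either side of the null point,
    with the unknown resistor on the left arm by the convention used here.
    """
    return r_known_ohms * l_left_cm / l_right_cm

print(unknown_resistance(10.0, 40.0, 60.0))  # ~6.67 ohms
```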
See also
Potentiometer (voltage divider)
Wheatstone bridge
References
External links
Pictures of measuring potentiometers
Electrical calibration equipment including various measurement potentiometers
Voltmeters
Electrical meters
Measuring instruments | Potentiometer (measuring instrument) | [
"Physics",
"Technology",
"Engineering"
] | 1,418 | [
"Voltmeters",
"Physical quantities",
"Measuring instruments",
"Voltage",
"Electrical meters"
] |
10,084,899 | https://en.wikipedia.org/wiki/Static%20synchronous%20compensator | In Electrical Engineering , a static synchronous compensator (STATCOM) is a shunt-connected, reactive compensation device used on transmission networks. It uses power electronics to form a voltage-source converter that can act as either a source or sink of reactive AC power to an electricity network. It is a member of the FACTS family of devices.
STATCOMs are alternatives to passive reactive power devices, such as capacitors and inductors (reactors). They have a variable reactive power output, can change their output within milliseconds, and are able to both supply and consume capacitive and inductive vars. While they can be used for voltage support and power-factor correction, their speed and capability make them better suited to dynamic situations, such as supporting the grid under fault conditions or during contingency events.
A voltage-source-based FACTS device had been desirable for some time, as it mitigates the limitations of current-source-based devices, whose reactive output decreases with system voltage. However, limitations in technology historically prevented wide adoption of STATCOMs. When gate turn-off thyristors (GTOs) became more widely available in the 1990s, with the ability to switch both on and off at higher power levels, the first STATCOMs became commercially available. These devices typically used 3-level topologies and pulse-width modulation (PWM) to synthesize voltage waveforms.
Modern STATCOMs now make use of insulated-gate bipolar transistors (IGBTs), which allow for faster switching at high power levels. 3-level topologies have begun to give way to modular multi-level converter (MMC) topologies, which allow for more levels in the voltage waveform, reducing harmonics and improving performance.
History
When AC won the War of Currents in the late 19th century, and electric grids began expanding and connecting cities and states, the need for reactive compensation became apparent. While AC offered benefits in voltage transformation and reduced current, the alternating nature of voltage and current led to additional challenges from the natural capacitance and inductance of transmission lines. Heavily loaded lines consumed reactive power due to the line's inductance, and as transmission voltage increased throughout the 20th century, the higher voltage supplied capacitive reactive power. As operating a transmission line only at its surge impedance loading (SIL) was not feasible, other means to manage the reactive power were needed.
Synchronous machines were commonly used at the time for generators and could provide some reactive power support; however, they were limited due to the increased losses this caused. They also became less effective as higher-voltage transmission lines moved loads further from sources. Fixed shunt capacitor and reactor banks filled this need by being deployed where required. In particular, shunt capacitors switched by circuit breakers provided an effective means of managing varying reactive power requirements due to changing loads. However, this was not without limitations.
Shunt capacitors and reactors are fixed devices, only able to be switched on and off. This required either a careful study of the exact size needed or accepting less-than-ideal effects on the voltage of a transmission line. The need for a more dynamic and flexible solution was met by the mercury-arc valve in the early 20th century. Similar to a vacuum tube, the mercury-arc valve was a high-powered rectifier capable of converting high AC voltages to DC. As the technology improved, inverting became possible as well, and mercury valves found use in power systems and HVDC ties. When connected to a reactor, different switching patterns could be used to vary the effective inductance, allowing for more dynamic control. Arc valves continued to dominate power electronics until the rise of solid-state semiconductors in the mid 20th century.
As semiconductors replaced vacuum tubes, the thyristor enabled the first modern FACTS device, the static VAR compensator (SVC). Effectively working as a circuit breaker that could switch on in milliseconds, it allowed capacitor banks to be switched quickly. Connected to a reactor and switched sub-cycle, it allowed the effective inductance to be varied. The thyristor also greatly improved the control system, allowing an SVC to detect and react to faults to better support the system. The thyristor dominated the FACTS and HVDC world until the late 20th century, when the IGBT began to match its power ratings.
With the IGBT, the first voltage-sourced converters and STATCOMs began to enter the FACTS world. A prototype 1 MVAr STATCOM was described in a report by Empire State Electric Energy Research Corporation in 1987. The first production 100 MVAr STATCOM, made by Westinghouse Electric, was installed at the Tennessee Valley Authority Sullivan substation in 1995 but was quickly retired due to obsolescence of its components.
Theory
The basis of a STATCOM is a voltage-source converter (VSC) connected in series with some type of reactance, either a fixed inductor or a power transformer. This allows a STATCOM to control power flow much like a transmission line, albeit without any active (real) power flow. Given an inductor connected between two AC voltages, the reactive power flow between the two points is given by:

$$Q = \frac{V_1^2 - V_1 V_2 \cos\delta}{X}$$

where
$Q$: reactive power
$V_1$: sending-end voltage
$V_2$: receiving-end voltage (so $V_1 - V_2$ is the difference in voltage magnitude)
$X$: reactance of the inductor or transformer
$\delta$: phase-angle difference between $V_1$ and $V_2$
With $\delta$ close to zero (as the STATCOM provides no real power and only consumes a small amount as losses) and $X$ fixed, reactive power flow is controlled by the difference in magnitude of the two AC voltages. From the equation, if the STATCOM creates a voltage magnitude greater than the system voltage, it supplies capacitive reactive power to the system. If the STATCOM's voltage magnitude is less, it consumes inductive reactive power from the system. As most modern VSCs are made of power electronics capable of making small voltage changes very quickly, a dynamic reactive power output is possible. This compares to a traditional fixed capacitor or inductor, which is either off (0 MVar) or at its maximum (for example, 50 MVar). A similarly sized STATCOM would range from 50 MVar capacitive to 50 MVar inductive, in steps as small as 1 MVar.
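The relation above is easy to evaluate numerically; a small sketch in per-unit quantities (the example values are illustrative, not from any real installation):

```python
import math

def statcom_reactive_power(v_statcom, v_system, x_pu, delta_rad=0.0):
    """Reactive power exchanged through the coupling reactance X (per unit).

    Positive result: the converter voltage exceeds the system voltage and
    the STATCOM supplies capacitive vars; negative: it absorbs inductive vars.
    """
    return (v_statcom**2 - v_statcom * v_system * math.cos(delta_rad)) / x_pu

# 1.02 pu converter voltage vs 1.00 pu system voltage over X = 0.1 pu:
print(statcom_reactive_power(1.02, 1.00, 0.1))  # ~0.204 pu, capacitive
```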
VSC topologies
Since a STATCOM varies its voltage magnitude to control reactive power, the topology of how the VSC is designed and connected defines how effectively and quickly it can operate. There are numerous topologies available for VSCs and power-electronic converters; the most common ones are covered below. IGBTs are listed as the power-electronic device below; however, older devices also used GTO thyristors.
Two-level converter
One of the earliest VSC topologies was the two-level converter, adapted from the three-phase bridge rectifier. Also referred to as a 6-pulse rectifier, it is able to connect the AC voltage through different IGBT paths based on switching. When used as a rectifier to convert AC to DC, this allows both the positive and negative portion of the waveform to be converted to DC. When used in a VSC for a STATCOM, a capacitor can be connected across the DC side to produce a square wave with two levels.
This alone offers no real advantage for a STATCOM, as the voltage magnitude is fixed. However, if the IGBTs can be switched fast enough, pulse-width modulation (PWM) can be used to control the voltage magnitude. By varying the durations of the pulses, the effective magnitude of the voltage waveform can be controlled. Since PWM still only produces square waves, harmonic generation is quite significant. Some harmonic reduction can be achieved by applying analytical techniques to different switching patterns; however, this is limited by controller complexity. Each level of the two-level converter also generally comprises multiple series IGBTs to create the needed final voltage, so coordination and timing between individual devices is challenging.
Three-level converter
Adding additional levels to a converter topology has the benefit of more closely approximating a true voltage sine wave, which reduces harmonic generation and improves performance. If each of the three phases of a VSC uses its own two-level converter, the phase-to-phase voltage will have three levels (while the three phases share the same switching pattern, they are shifted in time relative to one another). This allows a positive and negative peak in addition to a zero level, which adds positive and negative symmetry and eliminates even-order harmonics. Another option is to extend the two-level topology to a three-level converter.
By adding two additional IGBTs to the converter, three different levels can be created by having two IGBTs on at once. If each phase has its own three-level converter, a total of five levels can be created. This produces a very crude sine wave; however, PWM still offers lower harmonic generation (as the pulses occur on all five levels).
Three-level converters can also be combined with transformers and phase shifting to create additional levels. A transformer with two secondaries, one Wye-Wye and the other Wye-Delta, can be connected to two separate three-phase, three-level converters to double the number of levels. Additional phase-shifted windings can be used to turn the traditional 6 pulses of a three-level to 12, 24, or even 48 pulses. With this many pulses and levels, the waveform better approximates a true sine wave, and all harmonics generated are of a much higher order that can be filtered out with a low-pass filter.
Modular multi-level converter
While adding phase shifting to three-level converters improves harmonic performance, it comes at the cost of adding two, three, or even four additional STATCOMs. It also adds little to no redundancy, as the switching pattern is too complex to accommodate the loss of one STATCOM. As the idea of the three-level converter is to add levels to better approximate a voltage sine wave, another topology, called the modular multi-level converter (MMC), offers some benefits.
The MMC topology is similar to the three-level in that switching on various IGBTs will connect different capacitors to the circuit. As each IGBT "switch" has its own capacitor, voltage can be built up in discrete steps. Adding additional levels increases the number of steps, better approximating a sine wave. With enough levels, PWM is not necessary as the waveform created is close enough to a true voltage sine wave and generates very little harmonics.
The IGBT arrangement around the capacitor for each step depends on the DC needs. If a DC bus is needed (for an HVDC tie or a STATCOM with synthetic inertia) then only two IGBTs are needed per capacitor level. If a DC bus is not needed, and there are benefits to connecting the three phases into a delta arrangement to eliminate zero sequence harmonics, four IGBTs can be used to surround the capacitor to bypass or switch it in at either polarity.
Operation
As a STATCOM's VSC operation is based on changing current flow to affect voltage, its voltage-current (VI) characteristics control how it operates. The VI characteristic can be divided into two distinct parts: a sloped region between its inductive and capacitive maximums, and its maximum operating points. While in the sloped region between its maximums, the STATCOM is said to be in voltage regulation mode, where it either supplies capacitive vars to increase the voltage or consumes inductive vars to lower the voltage. The rate at which it does this is set by the slope, which functions similarly to a generator's droop speed control. This slope is programmable and can be set to a high value (to have the STATCOM regulate voltage like a traditional fixed reactive device) or to near zero, producing a very flat line and reserving the STATCOM's capacity for dynamic or transient events. The maximum slope is generally around 5%, to keep the system voltage within 5% of its nominal value.
When operating at either of its maximums, the STATCOM is said to be in a VAR control mode, where it is supplying or consuming its maximum reactive output. Unlike a traditional SVC, whose capacitive reactive output falls with the square of the voltage, a STATCOM can maintain its rated capacitive current at any voltage. This offers an advantage over SVCs, as a STATCOM's effectiveness is not dependent on the voltage drop caused by the fault. While technically capable of responding at near-zero voltage magnitudes, a STATCOM is typically set to block for voltage drops of around 0.2 pu and lower, to prevent it from causing a high over-voltage when the fault clears and the voltage returns to normal. A STATCOM may also have a transient rating, where it can provide above its maximum current for a very short time, allowing it to better help the system during larger faults. This rating depends on the specific design, but can be as high as 3.0 pu.
To control the operation of a STATCOM in voltage-control mode, a closed-loop PID regulator is typically used, which provides feedback on how changing the current flow affects the system voltage. A separate closed loop is sometimes used to determine the reference voltage with respect to the slope and any other modes a STATCOM may have. A full PID system can be used, but typically the derivative component is removed (or set very low) to prevent noise from the system or measurements from causing unwanted fluctuations.
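A minimal sketch of such a regulator, with the derivative term omitted as noted above, is shown below; the gains, time step and current limit are illustrative placeholders, not values from any real controller design:

```python
def pi_voltage_regulator(v_ref, v_meas, state, kp=5.0, ki=300.0,
                         dt=0.001, i_max=1.0):
    """One step of a PI regulator producing a STATCOM current order (pu).

    kp, ki, dt and i_max are illustrative placeholders; the derivative
    term is left out to avoid coupling measurement noise into the output.
    """
    error = v_ref - v_meas
    state["integral"] += error * dt
    i_order = kp * error + ki * state["integral"]
    return max(-i_max, min(i_max, i_order))  # clamp to the converter rating

state = {"integral": 0.0}
print(pi_voltage_regulator(1.0, 0.97, state))  # positive: capacitive current order
```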
A STATCOM may also have additional modes besides voltage regulation or VAR control, depending on specific needs of the system. Examples being active filtering of system harmonics or gain control to accommodate system strength changes due to outages of generation or loads.
Application
As a fast, dynamic, multi-quadrant source of reactive power, a STATCOM can be used for a wide variety of applications; however, it is best suited to supporting the grid under fault, transient, or contingency events. One popular use is to place a STATCOM along a transmission line to improve system power flow. Under normal operation the STATCOM does very little; however, in the event of a fault on a nearby line, the power that was being carried is forced onto other transmission lines. Ordinarily this increases voltage drop due to the increased power flow, but with a STATCOM available, it can supply reactive power to raise the voltage until either the fault is removed (if temporary) or a fixed capacitor can be switched in (if the fault is permanent). In some cases, a STATCOM can be installed at a substation to support multiple lines rather than just one, and to reduce the complexity of the protection on the line it serves.
Depending on available control function, STATCOMs can also be used for more advanced applications, such as active filtering, Power Oscillation Damping (POD), or even limited active power interactions. With growth of Distributed Energy Resources (DER) and Energy Storage, there has been research into using STATCOMs to aid or augment these uses. One area of recent research is virtual inertia: the use of an energy source on the DC side of a STATCOM to give it an inertia response similar to a synchronous condenser or generator.
STATCOM vs. SVC
Fundamentally, a STATCOM is a type of static VAR compensator (SVC), with the main difference being that a STATCOM is a voltage-sourced converter while a traditional SVC is a current-sourced converter. Historically, STATCOMs have been costlier than SVCs, in part due to the higher cost of IGBTs, but in recent years IGBT power ratings have increased, closing the gap.
The response time of a STATCOM is shorter than that of an SVC, mainly due to the fast switching times provided by the IGBTs of the voltage source converter (thyristors cannot be switched off and must be commutated). As a result, the reaction time of a STATCOM is one to two cycles vs. two to three cycles for an SVC.
The STATCOM also provides better reactive power support at low AC voltages than an SVC, since the reactive power from a STATCOM decreases only linearly with the AC voltage (the current can be maintained at the rated value even down to low AC voltage), as opposed to the SVC, whose reactive power is a function of the square of the voltage. The SVC is not used in severe undervoltage conditions (less than 0.6 pu), since leaving the capacitors on can worsen the transient overvoltage once the fault is cleared, while a STATCOM can operate down to 0.2–0.3 pu (this limit is due to possible loss of synchronicity and cooling).
The footprint of a STATCOM is smaller, as it does not need the large capacitors used by an SVC for TSCs or filters.
See also
Flexible AC Transmission Systems (FACTs)
Static VAR Compensator (SVC)
Synchronous Condenser
High-Voltage DC (HVDC)
Static synchronous series compensator (SSSC)
Unified power flow controller (UPFC)
References
Electric power
Electric power systems components
Power electronics | Static synchronous compensator | [
"Physics",
"Engineering"
] | 3,656 | [
"Physical quantities",
"Power (physics)",
"Electronic engineering",
"Electric power",
"Electrical engineering",
"Power electronics"
] |
10,086,335 | https://en.wikipedia.org/wiki/Complete%20theory | In mathematical logic, a theory is complete if it is consistent and for every closed formula in the theory's language, either that formula or its negation is provable. That is, for every sentence the theory contains the sentence or its negation but not both (that is, either or ). Recursively axiomatizable first-order theories that are consistent and rich enough to allow general mathematical reasoning to be formulated cannot be complete, as demonstrated by Gödel's first incompleteness theorem.
This sense of complete is distinct from the notion of a complete logic, which asserts that for every theory that can be formulated in the logic, all semantically valid statements are provable theorems (for an appropriate sense of "semantically valid"). Gödel's completeness theorem is about this latter kind of completeness.
Complete theories are closed under a number of conditions internally modelling the T-schema:
For a set of formulas $S$: $A \wedge B \in S$ if and only if $A \in S$ and $B \in S$,
For a set of formulas $S$: $A \vee B \in S$ if and only if $A \in S$ or $B \in S$.
Maximal consistent sets are a fundamental tool in the model theory of classical logic and modal logic. Their existence in a given case is usually a straightforward consequence of Zorn's lemma, based on the idea that a contradiction involves use of only finitely many premises. In the case of modal logics, the collection of maximal consistent sets extending a theory T (closed under the necessitation rule) can be given the structure of a model of T, called the canonical model.
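For a propositional theory over finitely many atoms, completeness can be checked by brute force: the theory is complete exactly when it is consistent and all of its models agree on every atom, so that it decides every sentence built from those atoms. A small Python sketch, representing formulas as predicates on truth assignments:

```python
from itertools import product

def models(theory, atoms):
    """All truth assignments (as dicts) satisfying every formula in the theory."""
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))
            if all(phi(dict(zip(atoms, vals))) for phi in theory)]

def is_complete(theory, atoms):
    """Complete iff consistent (some model) and all models agree on every atom."""
    ms = models(theory, atoms)
    return bool(ms) and all(len({m[a] for m in ms}) == 1 for a in atoms)

# T = {p or q, not q} decides both atoms (its unique model is p=True, q=False):
T = [lambda v: v["p"] or v["q"], lambda v: not v["q"]]
print(is_complete(T, ["p", "q"]))  # True
```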
Examples
Some examples of complete theories are:
Presburger arithmetic
Tarski's axioms for Euclidean geometry
The theory of dense linear orders without endpoints
The theory of algebraically closed fields of a given characteristic
The theory of real closed fields
Every uncountably categorical countable theory
Every countably categorical countable theory
A group of three elements
True arithmetic or any other elementary diagram
See also
Lindenbaum's lemma
Łoś–Vaught test
References
Mathematical logic
Model theory | Complete theory | [
"Mathematics"
] | 407 | [
"Mathematical logic stubs",
"Mathematical logic",
"Model theory"
] |
10,087,500 | https://en.wikipedia.org/wiki/Impedance%20parameters | Impedance parameters or Z-parameters (the elements of an impedance matrix or Z-matrix) are properties used in electrical engineering, electronic engineering, and communication systems engineering to describe the electrical behavior of linear electrical networks. They are also used to describe the small-signal (linearized) response of non-linear networks. They are members of a family of similar parameters used in electronic engineering, other examples being: S-parameters, Y-parameters, H-parameters, T-parameters or ABCD-parameters.
Z-parameters are also known as open-circuit impedance parameters, as they are calculated under open-circuit conditions, i.e., $I_x = 0$, where $x = 1, 2$ refers to the input and output currents flowing through the ports (of a two-port network in this case).
The Z-parameter matrix
A Z-parameter matrix describes the behaviour of any linear electrical network that can be regarded as a black box with a number of ports. A port in this context is a pair of electrical terminals carrying equal and opposite currents into and out of the network, and having a particular voltage between them. The Z-matrix gives no information about the behaviour of the network when the currents at any port are not balanced in this way (should this be possible), nor does it give any information about the voltage between terminals not belonging to the same port. Typically, it is intended that each external connection to the network is between the terminals of just one port, so that these limitations are appropriate.
For a generic multi-port network definition, it is assumed that each of the ports is allocated an integer n ranging from 1 to N, where N is the total number of ports. For port n, the associated Z-parameter definition is in terms of the port current and port voltage, $I_n$ and $V_n$ respectively.
For all ports the voltages may be defined in terms of the Z-parameter matrix and the currents by the following matrix equation:

$V = Z I$

where Z is an N × N matrix the elements of which can be indexed using conventional matrix notation. In general the elements of the Z-parameter matrix are complex numbers and functions of frequency. For a one-port network, the Z-matrix reduces to a single element, being the ordinary impedance measured between the two terminals. The Z-parameters are also known as the open circuit parameters because they are measured or calculated by applying current to one port and determining the resulting voltages at all the ports while the undriven ports are terminated into open circuits.
Two-port networks
The Z-parameter matrix for the two-port network is probably the most common. In this case the relationship between the port currents, port voltages and the Z-parameter matrix is given by:

$\begin{pmatrix} V_1 \\ V_2 \end{pmatrix} = \begin{pmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{pmatrix} \begin{pmatrix} I_1 \\ I_2 \end{pmatrix}$

where

$Z_{11} = \left. \frac{V_1}{I_1} \right|_{I_2 = 0} \qquad Z_{12} = \left. \frac{V_1}{I_2} \right|_{I_1 = 0}$
$Z_{21} = \left. \frac{V_2}{I_1} \right|_{I_2 = 0} \qquad Z_{22} = \left. \frac{V_2}{I_2} \right|_{I_1 = 0}$

For the general case of an N-port network,

$Z_{nm} = \left. \frac{V_n}{I_m} \right|_{I_k = 0 \text{ for } k \neq m}$
Impedance relations
The input impedance of a two-port network is given by:

$Z_{\text{in}} = Z_{11} - \frac{Z_{12} Z_{21}}{Z_{22} + Z_L}$

where ZL is the impedance of the load connected to port two.

Similarly, the output impedance is given by:

$Z_{\text{out}} = Z_{22} - \frac{Z_{12} Z_{21}}{Z_{11} + Z_S}$

where ZS is the impedance of the source connected to port one.
Relation to S-parameters
The Z-parameters of a network are related to its S-parameters by
and
where is the identity matrix, is a diagonal matrix having the square root of the characteristic impedance at each port as its non-zero elements,
and is the corresponding diagonal matrix of square roots of characteristic admittances. In these expressions the matrices represented by the bracketed factors commute and so, as shown above, may be written in either order.
Two port
In the special case of a two-port network, with the same characteristic impedance at each port, the above expressions reduce to
Where
The two-port S-parameters may be obtained from the equivalent two-port Z-parameters by means of the following expressions
where
The above expressions will generally use complex numbers for and . Note that the value of can become 0 for specific values of so the division by in the calculations of may lead to a division by 0.
Relation to Y-parameters
Conversion from Y-parameters to Z-parameters is much simpler, as the Z-parameter matrix is just the inverse of the Y-parameter matrix. For a two-port:

$Z_{11} = \frac{Y_{22}}{\Delta_Y} \qquad Z_{12} = \frac{-Y_{12}}{\Delta_Y}$
$Z_{21} = \frac{-Y_{21}}{\Delta_Y} \qquad Z_{22} = \frac{Y_{11}}{\Delta_Y}$

where

$\Delta_Y = Y_{11} Y_{22} - Y_{12} Y_{21}$

is the determinant of the Y-parameter matrix.
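As a quick numerical check of the inversion relation (the admittance values below are made up for the example):

```python
# Sketch: converting a two-port Y-parameter matrix to Z-parameters by
# matrix inversion with NumPy, and comparing against the closed-form
# two-port expressions above.
import numpy as np

Y = np.array([[0.25, -0.05],
              [-0.05, 0.10]])        # example admittances in siemens (assumed)

Z = np.linalg.inv(Y)                 # Z-parameter matrix in ohms
delta_y = np.linalg.det(Y)           # determinant of the Y-parameter matrix

Z_closed = np.array([[ Y[1, 1], -Y[0, 1]],
                     [-Y[1, 0],  Y[0, 0]]]) / delta_y

print(Z)
print(np.allclose(Z, Z_closed))      # True: both routes agree
```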
Notes
References
Bibliography
See also
Scattering parameters
Admittance parameters
Two-port network
Electrical parameters
Two-port networks
Transfer functions
de:Zweitor#Zweitorgleichungen und Parameter | Impedance parameters | [
"Engineering"
] | 897 | [
"Electrical engineering",
"Two-port networks",
"Electronic engineering",
"Electrical parameters"
] |
10,087,606 | https://en.wikipedia.org/wiki/Symmetry%20operation | In mathematics, a symmetry operation is a geometric transformation of an object that leaves the object looking the same after it has been carried out. For example, a turn rotation of a regular triangle about its center, a reflection of a square across its diagonal, a translation of the Euclidean plane, or a point reflection of a sphere through its center are all symmetry operations. Each symmetry operation is performed with respect to some symmetry element (a point, line or plane).
In the context of molecular symmetry, a symmetry operation is a permutation of atoms such that the molecule or crystal is transformed into a state indistinguishable from the starting state.
Two basic facts follow from this definition, which emphasize its usefulness.
Physical properties must be invariant with respect to symmetry operations.
Symmetry operations can be collected together in groups which are isomorphic to permutation groups.
In the context of molecular symmetry, quantum wavefunctions need not be invariant, because the operation can multiply them by a phase or mix states within a degenerate representation, without affecting any physical property.
Molecules
Identity Operation
The identity operation corresponds to doing nothing to the object. Because every molecule is indistinguishable from itself if nothing is done to it, every object possesses at least the identity operation. The identity operation is denoted by $E$ or $I$. In the identity operation, no change can be observed for the molecule. Even the most asymmetric molecule possesses the identity operation. The need for such an identity operation arises from the mathematical requirements of group theory.
Reflection through mirror planes
The reflection operation is carried out with respect to symmetry elements known as planes of symmetry or mirror planes. Each such plane is denoted as $\sigma$ (sigma). Its orientation relative to the principal axis of the molecule is indicated by a subscript. The plane must pass through the molecule and cannot be completely outside it.
If the plane of symmetry contains the principal axis of the molecule (i.e., the molecular $z$-axis), it is designated as a vertical mirror plane, which is indicated by a subscript $v$ ($\sigma_v$).
If the plane of symmetry is perpendicular to the principal axis, it is designated as a horizontal mirror plane, which is indicated by a subscript $h$ ($\sigma_h$).
If the plane of symmetry bisects the angle between two 2-fold axes perpendicular to the principal axis, it is designated as a dihedral mirror plane, which is indicated by a subscript $d$ ($\sigma_d$).
Through the reflection of each mirror plane, the molecule must be able to produce an identical image of itself.
Inversion operation
In an inversion through a centre of symmetry, $i$ (the element), we imagine taking each point in a molecule and then moving it out the same distance on the other side. In summary, the inversion operation projects each atom through the centre of inversion and out to the same distance on the opposite side. The inversion center is a point in space that lies in the geometric center of the molecule. As a result, all the cartesian coordinates of the atoms are inverted (i.e. $(x, y, z)$ to $(-x, -y, -z)$). The symbol used to represent the inversion center is $i$. When the inversion operation is carried out $n$ times, it is denoted by $i^n$, where $i^n = E$ when $n$ is even and $i^n = i$ when $n$ is odd.
Examples of molecules that have an inversion center include certain molecules with octahedral geometry (general formula AB6), square planar geometry (general formula AB4), and ethylene (C2H4). Examples of molecules without inversion centers are cyclopentadienide (C5H5−) and molecules with trigonal pyramidal geometry (general formula AB3).
Proper rotation operations or n-fold rotation
A proper rotation refers to simple rotation about an axis. Such operations are denoted by $C_n^m$, where $C_n$ is a rotation of $360°/n$ performed $m$ times. The superscript $m$ is omitted if it is equal to one. $C_1$ is a rotation through 360°, where $n = 1$. It is equivalent to the identity ($E$) operation. $C_2$ is a rotation of 180°, as $n = 2$; $C_3$ is a rotation of 120°, as $n = 3$; and so on.
Here the molecule can be rotated into equivalent positions around an axis. An example of a molecule with $C_2$ symmetry is the water (H2O) molecule. If the molecule is rotated by 180° about an axis passing through the oxygen atom, no detectable difference before and after the $C_2$ operation is observed.
The order of an axis can be regarded as the number of times that the least rotation which gives an equivalent configuration must be repeated to give a configuration identical to the original structure (i.e. a 360° or $2\pi$ rotation). An example of this is the $C_3$ proper rotation, which rotates by 120°: $C_3^1$ represents the first rotation around the $C_3$ axis by 120°, $C_3^2$ is the rotation by 240°, while $C_3^3$ is the rotation by 360°. $C_3^3$ is the identical configuration because it gives the original structure, and it is called an identity element ($E$). Therefore, $C_3$ is of order three, and is often referred to as a threefold axis.
Improper rotation operations
An improper rotation involves two operation steps: a proper rotation followed by reflection through a plane perpendicular to the rotation axis. The improper rotation is represented by the symbol $S_n$, where $n$ is the order. Since the improper rotation is the combination of a proper rotation and a reflection, $S_n$ will always exist whenever $C_n$ and a perpendicular plane exist separately. $S_1$ is usually denoted as $\sigma$, a reflection operation about a mirror plane. $S_2$ is usually denoted as $i$, an inversion operation about an inversion center. When $n$ is an even number $S_n^n = E$, but when $n$ is odd $S_n^{2n} = E$.
Rotation axes, mirror planes and inversion centres are symmetry elements, not symmetry operations. The rotation axis of the highest order is known as the principal rotation axis. It is conventional to set the Cartesian -axis of the molecule to contain the principal rotation axis.
Examples
Dichloromethane, CH2Cl2. There is a $C_2$ rotation axis which passes through the carbon atom and the midpoints between the two hydrogen atoms and the two chlorine atoms. Define the $z$ axis as co-linear with the $C_2$ axis, the $xz$ plane as containing CH2 and the $yz$ plane as containing CCl2. A $C_2$ rotation operation permutes the two hydrogen atoms and the two chlorine atoms. Reflection in the $yz$ plane permutes the hydrogen atoms while reflection in the $xz$ plane permutes the chlorine atoms. The four symmetry operations $E$, $C_2$, $\sigma(xz)$ and $\sigma(yz)$ form the point group $C_{2v}$. Note that if any two operations are carried out in succession the result is the same as if a single operation of the group had been performed.
Methane, CH4. In addition to the proper rotations of order 2 and 3 there are three mutually perpendicular $S_4$ axes which pass half-way between the C-H bonds and six mirror planes. Note that $S_4^2 = C_2$.
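The composition rules quoted in these examples can be checked numerically; the following sketch (not part of the article) represents the operations as 3×3 matrices acting on Cartesian coordinates.

```python
# Sketch: C3 and S4 as 3x3 matrices, verifying C3^3 = E and S4^2 = C2.
import numpy as np

def rotation_z(angle_deg):
    """Proper rotation about the z axis."""
    t = np.radians(angle_deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

sigma_h = np.diag([1.0, 1.0, -1.0])   # reflection through the xy plane

C3 = rotation_z(120)
C2 = rotation_z(180)
S4 = sigma_h @ rotation_z(90)         # improper rotation: rotate, then reflect

print(np.allclose(np.linalg.matrix_power(C3, 3), np.eye(3)))  # C3^3 = E
print(np.allclose(S4 @ S4, C2))                               # S4^2 = C2
```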
Crystals
In crystals, screw rotations and/or glide reflections are additionally possible. These are rotations or reflections together with partial translation. These operations may change based on the dimensions of the crystal lattice.
The Bravais lattices may be considered as representing translational symmetry operations. Combinations of operations of the crystallographic point groups with the additional symmetry operations produce the 230 crystallographic space groups.
See also
Molecular symmetry
Crystal structure
Crystallographic restriction theorem
References
F. A. Cotton Chemical applications of group theory, Wiley, 1962, 1971
Physical chemistry
Symmetry | Symmetry operation | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,415 | [
"Applied and interdisciplinary physics",
"nan",
"Geometry",
"Physical chemistry",
"Symmetry"
] |
10,088,199 | https://en.wikipedia.org/wiki/Magnetocapacitance | Magnetocapacitance is a property of some dielectric, insulating materials and metal–insulator–metal heterostructures that exhibit a change in the value of their capacitance when an external magnetic field is applied to them. Magnetocapacitance can be an intrinsic property of some dielectric materials, such as multiferroic compounds like BiMnO3, or it can be a manifestation of properties extrinsic to the dielectric but present in capacitance structures such as Pd/Al2O3/Al.
References
Condensed matter physics
Quantum electronics
Spintronics | Magnetocapacitance | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 128 | [
"Quantum electronics",
"Spintronics",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Nanotechnology",
"Matter"
] |
10,088,265 | https://en.wikipedia.org/wiki/Papkovich%E2%80%93Neuber%20solution | The Papkovich–Neuber solution is a technique for generating analytic solutions to the Newtonian incompressible Stokes equations, though it was originally developed to solve the equations of linear elasticity.
It can be shown that any Stokes flow with body force $\mathbf{f} = 0$ can be written in the form:

$\mathbf{u} = \frac{1}{2\mu} \nabla (\mathbf{x} \cdot \boldsymbol{\Phi} + \chi) - \frac{\boldsymbol{\Phi}}{\mu}, \qquad p = \nabla \cdot \boldsymbol{\Phi}$

where $\boldsymbol{\Phi}$ is a harmonic vector potential and $\chi$ is a harmonic scalar potential. The properties and ease of construction of harmonic functions makes the Papkovich–Neuber solution a powerful technique for solving the Stokes equations in a variety of domains.
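As a consistency check (a sketch using the representation as written above), harmonicity of the potentials is exactly what makes the ansatz satisfy both Stokes equations:

```latex
% Verification that u = (1/2mu) grad(x.Phi + chi) - Phi/mu, p = div Phi
% solves the Stokes equations when Phi and chi are harmonic.
\begin{align*}
\nabla^2(\mathbf{x}\cdot\boldsymbol{\Phi})
  &= 2\,\nabla\cdot\boldsymbol{\Phi}
     && (\nabla^2\boldsymbol{\Phi} = \mathbf{0}),\\
\nabla\cdot\mathbf{u}
  &= \tfrac{1}{2\mu}\nabla^2(\mathbf{x}\cdot\boldsymbol{\Phi}+\chi)
     - \tfrac{1}{\mu}\nabla\cdot\boldsymbol{\Phi} = 0
     && (\nabla^2\chi = 0),\\
\mu\,\nabla^2\mathbf{u}
  &= \tfrac{1}{2}\nabla\!\bigl(\nabla^2(\mathbf{x}\cdot\boldsymbol{\Phi})\bigr)
   = \nabla(\nabla\cdot\boldsymbol{\Phi}) = \nabla p .
\end{align*}
```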
Further reading
Fluid dynamics | Papkovich–Neuber solution | [
"Chemistry",
"Engineering"
] | 112 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
10,090,547 | https://en.wikipedia.org/wiki/Peano%20existence%20theorem | In mathematics, specifically in the study of ordinary differential equations, the Peano existence theorem, Peano theorem or Cauchy–Peano theorem, named after Giuseppe Peano and Augustin-Louis Cauchy, is a fundamental theorem which guarantees the existence of solutions to certain initial value problems.
History
Peano first published the theorem in 1886 with an incorrect proof. In 1890 he published a new correct proof using successive approximations.
Theorem
Let $D \subseteq \mathbb{R} \times \mathbb{R}$ be an open subset of $\mathbb{R} \times \mathbb{R}$ with
$f \colon D \to \mathbb{R}$
a continuous function and
$y'(x) = f(x, y(x))$
a continuous, explicit first-order differential equation defined on D, then every initial value problem
$y(x_0) = y_0$
for f with $(x_0, y_0) \in D$
has a local solution
$z \colon I \to \mathbb{R}$,
where $I$ is a neighbourhood of $x_0$ in $\mathbb{R}$,
such that $z'(x) = f(x, z(x))$ for all $x \in I$.
The solution need not be unique: one and the same initial value $(x_0, y_0)$ may give rise to many different solutions $z$.
Proof
By replacing with , with , we may assume . As is open there is a rectangle .
Because is compact and is continuous, we have and by the Stone–Weierstrass theorem there exists a sequence of Lipschitz functions converging uniformly to in . Without loss of generality, we assume for all .
We define Picard iterations as follows, where . , and . They are well-defined by induction: as
is within the domain of .
We have
where is the Lipschitz constant of . Thus for maximal difference , we have a bound , and
By induction, this implies the bound which tends to zero as for all .
The functions are equicontinuous as for we have
so by the Arzelà–Ascoli theorem they are relatively compact. In particular, for each there is a subsequence
converging uniformly to a continuous function . Taking limit
in
we conclude that . The functions are in the closure of a relatively compact set, so they are themselves relatively compact. Thus there is a subsequence converging uniformly to a continuous function . Taking limit in we conclude that , using the fact that are equicontinuous by the Arzelà–Ascoli theorem. By the fundamental theorem of calculus, in .
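The Picard iterates used in the proof are easy to experiment with numerically when the right-hand side is Lipschitz; the following sketch (an illustration, with f(x, y) = y chosen so the exact solution e^x is known) approximates each integral by the trapezoidal rule.

```python
# Sketch: Picard iteration u_{k+1}(x) = y0 + integral_0^x f(t, u_k(t)) dt
# for f(x, y) = y with y(0) = 1, whose exact solution is exp(x).
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
y0 = 1.0
f = lambda t, y: y

def picard_step(u):
    """One Picard iteration via cumulative trapezoidal integration."""
    integrand = f(x, u)
    increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)
    return y0 + np.concatenate(([0.0], np.cumsum(increments)))

u = np.full_like(x, y0)               # u_0(x) = y0
for _ in range(10):
    u = picard_step(u)

print(np.max(np.abs(u - np.exp(x))))  # small: the iterates converge uniformly
```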
Related theorems
The Peano theorem can be compared with another existence result in the same context, the Picard–Lindelöf theorem. The Picard–Lindelöf theorem both assumes more and concludes more. It requires Lipschitz continuity, while the Peano theorem requires only continuity; but it proves both existence and uniqueness where the Peano theorem proves only the existence of solutions. To illustrate, consider the ordinary differential equation
$y' = \left| y \right|^{1/2}$
on the domain
$D = \mathbb{R} \times \mathbb{R}.$
According to the Peano theorem, this equation has solutions, but the Picard–Lindelöf theorem does not apply since the right hand side is not Lipschitz continuous in any neighbourhood containing 0. Thus we can conclude existence but not uniqueness. It turns out that this ordinary differential equation has two kinds of solutions when starting at $y(0) = 0$: either $y(x) = 0$ or $y(x) = x^2/4$ (for $x \geq 0$). The transition between $y = 0$ and $y = (x - C)^2/4$ can happen at any $C \geq 0$.
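The failure of uniqueness can be seen numerically as well; this sketch (assuming the example equation above) checks that both candidate solutions satisfy the same ODE and initial value on x ≥ 0.

```python
# Sketch: two distinct solutions of y' = |y|^(1/2), y(0) = 0, on x >= 0.
import numpy as np

x = np.linspace(0.0, 2.0, 2001)
solutions = {"y = 0": np.zeros_like(x), "y = x^2/4": x**2 / 4.0}

for name, y in solutions.items():
    lhs = np.gradient(y, x)        # numerical derivative y'
    rhs = np.sqrt(np.abs(y))       # right-hand side |y|^(1/2)
    print(name, "max residual:", np.max(np.abs(lhs - rhs)))
```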
The Carathéodory existence theorem is a generalization of the Peano existence theorem with weaker conditions than continuity.
The Peano existence theorem cannot be straightforwardly extended to a general Hilbert space $H$: for an open subset $D$ of $H$, the continuity of $f$ alone is insufficient for guaranteeing the existence of solutions for the associated initial value problem.
Notes
References
Augustin-Louis Cauchy
Theorems in analysis
Ordinary differential equations | Peano existence theorem | [
"Mathematics"
] | 671 | [
"Mathematical analysis",
"Theorems in mathematical analysis",
"Mathematical theorems",
"Mathematical problems"
] |
10,092,186 | https://en.wikipedia.org/wiki/Nagata%20ring | In commutative algebra, an N-1 ring is an integral domain whose integral closure in its quotient field is a finitely generated -module. It is called a Japanese ring (or an N-2 ring) if for every finite extension of its quotient field , the integral closure of in is a finitely generated -module (or equivalently a finite -algebra). A ring is called universally Japanese if every finitely generated integral domain over it is Japanese, and is called a Nagata ring, named for Masayoshi Nagata, or a pseudo-geometric ring if it is Noetherian and universally Japanese (or, which turns out to be the same, if it is Noetherian and all of its quotients by a prime ideal are N-2 rings). A ring is called geometric if it is the local ring of an algebraic variety or a completion of such a local ring, but this concept is not used much.
Examples
Fields and rings of polynomials or power series in finitely many indeterminates over fields are examples of Japanese rings. Another important example is a Noetherian integrally closed domain (e.g. a Dedekind domain) having a perfect field of fractions. On the other hand, a principal ideal domain or even a discrete valuation ring is not necessarily Japanese.
Any quasi-excellent ring is a Nagata ring, so in particular almost all Noetherian rings that occur in algebraic geometry are Nagata rings.
The first example of a Noetherian domain that is not a Nagata ring was given by Yasuo Akizuki in 1935.
Here is an example of a discrete valuation ring that is not a Japanese ring. Choose a prime and an infinite degree field extension of a characteristic field , such that . Let the discrete valuation ring be the ring of formal power series over whose coefficients generate a finite extension of . If is any formal power series not in then the ring is not an N-1 ring (its integral closure is not a finitely generated module) so is not a Japanese ring.
If is the subring of the polynomial ring in infinitely many generators generated by the squares and cubes of all generators, and is obtained from by adjoining inverses to all elements not in any of the ideals generated by some , then is a Noetherian domain that is not an N-1 ring, in other words its integral closure in its quotient field is not a finitely generated -module. Also has a cusp singularity at every closed point, so the set of singular points is not closed.
Citations
References
Bosch, Güntzer, Remmert, Non-Archimedean Analysis, Springer 1984,
A. Grothendieck, J. Dieudonné, Eléments de géométrie algébrique, Ch. 0IV § 23, Publ. Math. IHÉS 20, (1964).
H. Matsumura, Commutative algebra , chapter 12.
Nagata, Masayoshi Local rings. Interscience Tracts in Pure and Applied Mathematics, No. 13 Interscience Publishers a division of John Wiley & Sons, New York-London 1962, reprinted by R. E. Krieger Pub. Co (1975)
External links
http://stacks.math.columbia.edu/tag/032E
Algebraic geometry
Commutative algebra | Nagata ring | [
"Mathematics"
] | 680 | [
"Fields of abstract algebra",
"Commutative algebra",
"Algebraic geometry"
] |
10,092,550 | https://en.wikipedia.org/wiki/Body%20force | In physics, a body force is a force that acts throughout the volume of a body. Forces due to gravity, electric fields and magnetic fields are examples of body forces. Body forces contrast with contact forces or surface forces which are exerted to the surface of an object.
Fictitious forces such as the centrifugal force, Euler force, and the Coriolis effect are other examples of body forces.
Definition
Qualitative
A body force is simply a type of force, and so it has the same dimensions as force, [M][L][T]−2. However, it is often convenient to talk about a body force in terms of either the force per unit volume or the force per unit mass. If the force per unit volume is of interest, it is referred to as the force density throughout the system.
A body force is distinct from a contact force in that the force does not require contact for transmission. Thus, common forces associated with pressure gradients and conductive and convective heat transmission are not body forces as they require contact between systems to exist. Radiation heat transfer, on the other hand, is a perfect example of a body force.
More examples of common body forces include;
Gravity,
Electric forces acting on an object charged throughout its volume,
Magnetic forces acting on currents within an object, such as the braking force that results from eddy currents,
Fictitious forces (or inertial forces) can be viewed as body forces. Common inertial forces are,
Centrifugal force,
Coriolis force,
Euler force (or transverse force), which occurs in a rotating reference frame when the rate of rotation of the frame is changing
However, fictitious forces are not actually forces. Rather they are corrections to Newton's second law when it is formulated in an accelerating reference frame. (Gravity can also be considered a fictitious force in the context of General Relativity.)
Quantitative
The body force density is defined so that the volume integral (throughout a volume of interest) of it gives the total force acting throughout the body:

$\mathbf{F}_{\text{body}} = \int_V \mathbf{f}(\mathbf{r}) \, dV$

where dV is an infinitesimal volume element, and f is the external body force density field acting on the system.
Acceleration
Like any other force, a body force will cause an object to accelerate. For a non-rigid object, Newton's second law applied to a small volume element is

$\mathbf{f}(\mathbf{r}) = \rho(\mathbf{r}) \, \mathbf{a}(\mathbf{r}),$

where ρ(r) is the mass density of the substance, f the force density, and a(r) is acceleration, all at point r.
The case of gravity
In the case of a body in the gravitational field on a planet surface, a(r) is nearly constant (g) and uniform. Near the Earth

$\mathbf{f} = \rho(\mathbf{r}) \, \mathbf{g}.$

In this case simply

$\mathbf{F}_{\text{body}} = m \, \mathbf{g}$

where m is the mass of the body.
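The volume-integral definition is straightforward to evaluate numerically; in this sketch (the cube size and density profile are invented for the example) the integral of ρg over the body reproduces m·g because g is uniform.

```python
# Sketch: total gravitational body force on a 1 m cube with height-dependent
# density, computed as the volume integral of f = rho * g and compared to m*g.
import numpy as np

g = np.array([0.0, 0.0, -9.81])          # m/s^2
n = 50
xs = (np.arange(n) + 0.5) / n            # cell centers on the unit cube
dV = (1.0 / n) ** 3                      # volume of each grid cell

X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
rho = 1000.0 * (1.0 + 0.1 * Z)           # assumed density profile, kg/m^3

mass = np.sum(rho) * dV
F = np.array([np.sum(rho * g_i) * dV for g_i in g])

print("mass ≈", mass, "kg")
print("F ≈", F, "N  vs  m*g =", mass * g, "N")
```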
See also
Action at a distance
Fictitious force
Force density
Non-contact force
Normal force
Surface force
References
Force | Body force | [
"Physics",
"Mathematics"
] | 577 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
10,093,384 | https://en.wikipedia.org/wiki/P-TEFb | The positive transcription elongation factor, P-TEFb, is a multiprotein complex that plays an essential role in the regulation of transcription by RNA polymerase II (Pol II) in eukaryotes. Immediately following initiation, Pol II becomes trapped in promoter proximal paused positions on the majority of human genes (Figure 1). P-TEFb is a cyclin dependent kinase that can phosphorylate the DRB sensitivity inducing factor (DSIF) and negative elongation factor (NELF), as well as the carboxyl terminal domain of the large subunit of Pol II, and this causes the transition into productive elongation leading to the synthesis of mRNAs. P-TEFb is regulated in part by a reversible association with the 7SK snRNP. Treatment of cells with the P-TEFb inhibitors DRB or flavopiridol leads to loss of mRNA production and ultimately cell death.
Discovery, composition and structure
P-TEFb was identified and purified as a factor needed for the generation of long run-off transcripts using an in vitro transcription system derived from Drosophila cells. It is a cyclin dependent kinase containing the catalytic subunit, Cdk9, and a regulatory subunit, cyclin T in Drosophila. In humans there are multiple forms of P-TEFb which contain Cdk9 and one of several cyclin subunits, cyclin T1, T2, and K. P-TEFb associates with other factors including the bromodomain protein BRD4, and is found associated with a large complex of proteins called the super elongation complex. Importantly, for the AIDS virus, HIV, P-TEFb is targeted by the HIV Tat protein which bypasses normal cellular P-TEFb control and directly brings P-TEFb to the promoter proximal paused polymerase in the HIV genome.
The structures of human P-TEFb containing Cdk9 and cyclin T1 and the HIV Tat•P-TEFb complex have been solved using X-ray crystallography. The first structure solved demonstrated that the two subunits were arranged as has been found in other cyclin dependent kinases. Three amino acid substitutions were inadvertently introduced in the subunits used for the original structure and a subsequent structure determination using the correct sequences demonstrated the same overall structure except for a few significant changes around the active site. The structure of HIV Tat bound to P-TEFb demonstrated that the viral protein forms extensive contacts with the cyclin T1 subunit (Figure 2).
Regulation of P-TEFb
Because of its central role in controlling eukaryotic gene expression, P-TEFb is subject to stringent regulation at the level of transcription of the genes encoding the subunits, translation of the subunit mRNAs, turnover of the subunits, and also by an unusual mechanism involving the 7SK snRNP. As shown in Figure 3, P-TEFb is held in the 7SK snRNP by the double-stranded RNA binding protein HEXIM (HEXIM1 or HEXIM2 in humans). HEXIM bound to 7SK RNA or any double-stranded RNA binds to P-TEFb and inhibits the kinase activity. Two other proteins are always found associated with 7SK RNA. The methylphosphate capping enzyme MEPCE puts a methyl group on the gamma phosphate of the first nucleotide of the 7SK RNA, and the La-related protein LARP7 binds to the 3' end of 7SK. When P-TEFb is extracted from the 7SK snRNP, 7SK RNA undergoes a conformation change, HEXIM is ejected and hnRNPs take the place of the factors removed. The re-sequestration of P-TEFb requires another rearrangement of the RNA, binding of HEXIM and then P-TEFb. In rapidly growing cells the 7SK snRNP is the predominant form of P-TEFb.
References
Proteins | P-TEFb | [
"Chemistry"
] | 832 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
10,093,989 | https://en.wikipedia.org/wiki/G-ring | In commutative algebra, a G-ring or Grothendieck ring is a Noetherian ring such that the map of any of its local rings to the completion is regular (defined below). Almost all Noetherian rings that occur naturally in algebraic geometry or number theory are G-rings, and it is quite hard to construct examples of Noetherian rings that are not G-rings. The concept is named after Alexander Grothendieck.
A ring that is both a G-ring and a J-2 ring is called a quasi-excellent ring, and if in addition it is universally catenary it is called an excellent ring.
Definitions
A (Noetherian) ring R containing a field k is called geometrically regular over k if for any finite extension K of k the ring R ⊗k K is a regular ring.
A homomorphism of rings from R to S is called regular if it is flat and for every p ∈ Spec(R) the fiber S ⊗R k(p) is geometrically regular over the residue field k(p) of p. (see also Popescu's theorem.)
A ring is called a local G-ring if it is a Noetherian local ring and the map to its completion (with respect to its maximal ideal) is regular.
A ring is called a G-ring if it is Noetherian and all its localizations at prime ideals are local G-rings. (It is enough to check this just for the maximal ideals, so in particular local G-rings are G-rings.)
Examples
Every field is a G-ring
Every complete Noetherian local ring is a G-ring
Every ring of convergent power series in a finite number of variables over R or C is a G-ring.
Every Dedekind domain in characteristic 0, and in particular the ring of integers, is a G-ring, but in positive characteristic there are Dedekind domains (and even discrete valuation rings) that are not G-rings.
Every localization of a G-ring is a G-ring
Every finitely generated algebra over a G-ring is a G-ring. This is a theorem due to Grothendieck.
Here is an example of a discrete valuation ring A of characteristic p > 0 which is not a G-ring. If k is any field of characteristic p with [k : k^p] = ∞ and R = k[[x]] and A is the subring of power series Σ a_i x^i such that [k^p(a_0, a_1, ...) : k^p] is finite, then the formal fiber of A over the generic point is not geometrically regular, so A is not a G-ring. Here k^p denotes the image of k under the Frobenius morphism a → a^p.
References
A. Grothendieck, J. Dieudonné, Eléments de géométrie algébrique IV Publ. Math. IHÉS 24 (1965), section 7
H. Matsumura, Commutative algebra , chapter 13.
Commutative algebra | G-ring | [
"Mathematics"
] | 637 | [
"Fields of abstract algebra",
"Commutative algebra"
] |
10,094,198 | https://en.wikipedia.org/wiki/Hardy%E2%80%93Littlewood%20maximal%20function | In mathematics, the Hardy–Littlewood maximal operator M is a significant non-linear operator used in real analysis and harmonic analysis.
Definition
The operator takes a locally integrable function f : Rd → C and returns another function Mf.
For any point x ∈ Rd, the function Mf returns the maximum of a set of reals, namely the set of average values of f for all the balls B(x, r) of any radius r at x. Formally,

$Mf(x) = \sup_{r > 0} \frac{1}{|B(x, r)|} \int_{B(x, r)} |f(y)| \, dy$

where |E| denotes the d-dimensional Lebesgue measure of a subset E ⊂ Rd.
The averages are jointly continuous in x and r, so the maximal function Mf, being the supremum over r > 0, is measurable. It is not obvious that Mf is finite almost everywhere. This is a corollary of the Hardy–Littlewood maximal inequality.
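Before turning to the maximal inequality, a discrete sketch can help build intuition (this is a 1-D finite analogue, not the continuum operator): average |f| over windows of every radius around each index and take the supremum.

```python
# Sketch: a discrete 1-D analogue of the centered Hardy–Littlewood maximal
# function, Mf(i) = max over r of the average of |f| on the window [i-r, i+r].
import numpy as np

def maximal_function(f):
    f = np.abs(np.asarray(f, dtype=float))
    n = len(f)
    Mf = np.empty(n)
    for i in range(n):
        averages = [f[max(i - r, 0): i + r + 1].mean() for r in range(n)]
        Mf[i] = max(averages)
    return Mf

f = np.zeros(21)
f[10] = 1.0                   # a discrete "point mass"
print(maximal_function(f))    # decays like 1/(distance to the mass),
                              # mirroring why Mf of an L^1 function
                              # is generally not integrable
```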
Hardy–Littlewood maximal inequality
This theorem of G. H. Hardy and J. E. Littlewood states that M is bounded as a sublinear operator from Lp(Rd) to itself for p > 1. That is, if f ∈ Lp(Rd) with p > 1 then Mf ∈ Lp(Rd), while for f ∈ L1(Rd) the maximal function Mf satisfies only a weak L1 bound. Before stating the theorem more precisely, for simplicity, let {f > t} denote the set {x | f(x) > t}. Now we have:
Theorem (Weak Type Estimate). For d ≥ 1, there is a constant Cd > 0 such that for all λ > 0 and f ∈ L1(Rd), we have:

$\left| \{ Mf > \lambda \} \right| < \frac{C_d}{\lambda} \, \| f \|_{L^1(\mathbb{R}^d)}.$
With the Hardy–Littlewood maximal inequality in hand, the following strong-type estimate is an immediate consequence of the Marcinkiewicz interpolation theorem:
Theorem (Strong Type Estimate). For d ≥ 1, 1 < p ≤ ∞, and f ∈ Lp(Rd), there is a constant Cp,d > 0 such that

$\| Mf \|_{L^p(\mathbb{R}^d)} \leq C_{p,d} \, \| f \|_{L^p(\mathbb{R}^d)}.$
In the strong type estimate the best bounds for Cp,d are unknown. However, Elias M. Stein subsequently used the Calderón–Zygmund method of rotations to prove the following:
Theorem (Dimension Independence). For 1 < p ≤ ∞ one can pick Cp,d = Cp independent of d.
Proof
While there are several proofs of this theorem, a common one is given below: For p = ∞, the inequality is trivial (since the average of a function is no larger than its essential supremum). For 1≤ p < ∞, first we shall use the following version of the Vitali covering lemma to prove the weak-type estimate. (See the article for the proof of the lemma.)
Lemma. Let X be a separable metric space and $\mathcal{F}$ a family of open balls with bounded diameter. Then $\mathcal{F}$ has a countable subfamily $\{ B_j \}$ consisting of disjoint balls such that

$\bigcup_{B \in \mathcal{F}} B \subseteq \bigcup_j 5 B_j$

where 5B is B with 5 times radius.
For every x such that Mf(x) > t, by definition, we can find a ball Bx centered at x such that

$\int_{B_x} |f| \, dm > t \, |B_x|.$
Thus {Mf > t} is a subset of the union of such balls, as x varies in {Mf > t}. This is trivial since x is contained in Bx. By the lemma, we can find, among such balls, a sequence of disjoint balls Bj such that the union of 5Bj covers {Mf > t}.
It follows:

$|\{ Mf > t \}| \leq \sum_j |5 B_j| = 5^d \sum_j |B_j| \leq \frac{5^d}{t} \int_{\mathbb{R}^d} |f| \, dm.$
This completes the proof of the weak-type estimate.
The Lp bounds for p > 1 can be deduced from the weak bound by the Marcinkiewicz interpolation theorem.
Here is how the argument goes in this particular case.
Define the function $b$ by $b(x) = f(x)$ if $|f(x)| > t/2$ and 0 otherwise.
We have then
$|f(x)| \leq |b(x)| + t/2$
and, by the definition of maximal function,
$Mf(x) \leq Mb(x) + \frac{t}{2}.$
By the weak-type estimate applied to $b$, we have:
Then
By the estimate above we have:
This completes the proof of the theorem.
Note that the constant in the proof can be improved to by using the inner regularity of the Lebesgue measure, and the finite version of the Vitali covering lemma. See the Discussion section below for more about optimizing the constant.
Applications
Some applications of the Hardy–Littlewood Maximal Inequality include proving the following results:
Lebesgue differentiation theorem
Rademacher differentiation theorem
Fatou's theorem on nontangential convergence.
Fractional integration theorem
Here we use a standard trick involving the maximal function to give a quick proof of Lebesgue differentiation theorem. (But remember that in the proof of the maximal theorem, we used the Vitali covering lemma.) Let f ∈ L1(Rn) and

$\Omega f(x) = \limsup_{r \to 0} |f_r(x) - f(x)|$

where

$f_r(x) = \frac{1}{|B(x, r)|} \int_{B(x, r)} f(y) \, dy.$
We write f = h + g where h is continuous and has compact support and g ∈ L1(Rn) with norm that can be made arbitrary small. Then
by continuity. Now, Ωg ≤ 2Mg and so, by the theorem, we have:
Now, we can let $\|g\|_{L^1} \to 0$ and conclude Ωf = 0 almost everywhere; that is, $\lim_{r \to 0} f_r(x)$ exists for almost all x. It remains to show the limit actually equals f(x). But this is easy: it is known that $f_r \to f$ in $L^1$ (approximation of the identity) and thus there is a subsequence $f_{r_k} \to f$ almost everywhere. By the uniqueness of limit, fr → f almost everywhere then.
Discussion
It is still unknown what the smallest constants Cp,d and Cd are in the above inequalities. However, a result of Elias Stein about spherical maximal functions can be used to show that, for 1 < p < ∞, we can remove the dependence of Cp,d on the dimension, that is, Cp,d = Cp for some constant Cp > 0 only depending on p. It is unknown whether there is a weak bound that is independent of dimension.
There are several common variants of the Hardy–Littlewood maximal operator which replace the averages over centered balls with averages over different families of sets. For instance, one can define the uncentered HL maximal operator (using the notation of Stein-Shakarchi)

$f^*(x) = \sup_{B \ni x} \frac{1}{|B|} \int_B |f(y)| \, dy$

where the balls B are required to merely contain x, rather than be centered at x. There is also the dyadic HL maximal operator

$M_\Delta f(x) = \sup_{Q \ni x} \frac{1}{|Q|} \int_Q |f(y)| \, dy$

where Q ranges over all dyadic cubes containing the point x. Both of these operators satisfy the HL maximal inequality.
See also
Rising sun lemma
References
John B. Garnett, Bounded Analytic Functions. Springer-Verlag, 2006
G. H. Hardy and J. E. Littlewood. A maximal theorem with function-theoretic applications. Acta Math. 54, 81–116 (1930).
Antonios D. Melas, The best constant for the centered Hardy–Littlewood maximal inequality, Annals of Mathematics, 157 (2003), 647–688
Rami Shakarchi & Elias M. Stein, Princeton Lectures in Analysis III: Real Analysis. Princeton University Press, 2005
Elias M. Stein, Maximal functions: spherical means, Proc. Natl. Acad. Sci. U.S.A. 73 (1976), 2174–2175
Elias M. Stein, Singular Integrals and Differentiability Properties of Functions. Princeton University Press, 1971
Gerald Teschl, Topics in Real and Functional Analysis (lecture notes)
Real analysis
Harmonic analysis
Types of functions | Hardy–Littlewood maximal function | [
"Mathematics"
] | 1,475 | [
"Mathematical objects",
"Functions and mappings",
"Types of functions",
"Mathematical relations"
] |
12,461,193 | https://en.wikipedia.org/wiki/PCell | PCell stands for parameterized Cell, a concept used widely in the automated design of analog integrated circuits. A PCell represents a part or a component of the circuit whose structure is dependent on one or more parameters. Hence, it is a cell which is automatically generated by electronic design automation (EDA) software based on the values of these parameters. For example, one can create a transistor PCell and then use different instances of the same with different user defined lengths and widths. Vendors of EDA software sometimes use different names for the concept of parameterized cells, e.g. T-Cell and Magic Cell.
Application
In electronic circuit designs, cells are basic units of functionality. A given cell may be placed or instantiated many times. A P-Cell is more flexible than a non-parameterized cell because different instances may have different parameter values and, therefore, different structures. For example, rather than having many different cell definitions to represent the variously sized transistors in a given design, a single PCell may take a transistor's dimensions (width and length) as parameters. Different instances of a single PCell can then represent transistors of different sizes, but otherwise similar characteristics.
The structures within an integrated circuit and the rules (design rules) governing their physical dimensions are often complex, thereby making the structures tedious to draw by hand. By using PCells a circuit designer can easily generate a large number of various structures that only differ in a few parameters, thus increasing design productivity and consistency.
Most often, PCell implies a physical PCell, i.e., a physical representation of an electronic component describing its physical structure inside an integrated circuit (IC). Although most PCells are physical PCells, device symbols in circuit schematics may also be implemented as PCells.
Underlying characteristics of all PCells are a dependence on (input) parameters and the ability to generate design data based on these parameters.
Implementation
A PCell is a piece of programming code. This code is responsible for the process of creating the proper structure of the PCell variants based on its (input) parameters. For the example of a physical PCell, this code generates (draws) the actual shapes of the mask design that comprises the circuit.
Since one piece of PCell code can create many different objects (with different parameter values), it is referred to as a PCell Master. The object/shapes/data that this code creates is called an instance of the PCell. Typically, one Master PCell produces many instances/variants. This is not only helpful during design entry and specification but also in reducing memory resources required to represent the design data.
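The master/instance split is easiest to see in code. The sketch below uses a hypothetical API (the class, layer names, and overhang rule are invented for illustration, not taken from any particular EDA tool):

```python
# Sketch: a minimal PCell "master" whose single piece of code generates
# different layout shapes for each instance from that instance's parameters.
from dataclasses import dataclass

@dataclass
class Rect:
    layer: str
    x: float
    y: float
    w: float
    h: float

class TransistorPCell:
    """PCell master: one code definition, many parameterized instances."""

    OVERHANG = 0.1  # assumed design rule: gate extension past the active area

    def generate(self, width: float, length: float):
        """Create the shapes for one instance from its parameters."""
        return [
            Rect("active", 0.0, 0.0, length + 2 * self.OVERHANG, width),
            Rect("poly", self.OVERHANG, -self.OVERHANG,
                 length, width + 2 * self.OVERHANG),
        ]

master = TransistorPCell()
small = master.generate(width=1.0, length=0.18)   # one instance
large = master.generate(width=4.0, length=0.35)   # another, same master
print(small[1], large[1])
```

Because only the parameter values are stored per instance while the master code is shared, many variants cost little more memory than one, which is the saving described above.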
Generation
Although the programming language in which a PCell is written is not of importance, SKILL or Python are most often used to write PCell's code. Alternatively, PCells can be generated using a graphical user interface (GUI) or specialized PCell design tools based on a library of predefined functions.
Further reading
Bales, M. Design Databases. In L. Scheffer, L. Lvagno, and G. Martin, editors, EDA for IC Implementation, Circuit Design, and Process Technology, volume 2 of Electronic Design Automation for Integrated Circuits Handbook, chapter 12. Taylor & Francis, 2006.
References
Electronic circuits | PCell | [
"Engineering"
] | 679 | [
"Electronic engineering",
"Electronic circuits"
] |
12,461,863 | https://en.wikipedia.org/wiki/Targeted%20temperature%20management | Targeted temperature management (TTM), previously known as therapeutic hypothermia or protective hypothermia, is an active treatment that tries to achieve and maintain a specific body temperature in a person for a specific duration of time in an effort to improve health outcomes during recovery after a period of stopped blood flow to the brain. This is done in an attempt to reduce the risk of tissue injury following lack of blood flow. Periods of poor blood flow may be due to cardiac arrest or the blockage of an artery by a clot as in the case of a stroke.
Targeted temperature management improves survival and brain function following resuscitation from cardiac arrest. Evidence supports its use following certain types of cardiac arrest in which an individual does not regain consciousness. The target temperature is often between 32 and 34 °C. Targeted temperature management following traumatic brain injury is of unclear benefit. While associated with some complications, these are generally mild.
Targeted temperature management is thought to prevent brain injury by several methods, including decreasing the brain's oxygen demand, reducing the production of neurotransmitters like glutamate, as well as reducing free radicals that might damage the brain. Body temperature may be lowered by many means, including cooling blankets, cooling helmets, cooling catheters, ice packs and ice water lavage.
Medical uses
Targeted temperature management may be used in the following conditions:
Cardiac arrest
The 2013 ILCOR and 2010 American Heart Association guidelines support the use of cooling following resuscitation from cardiac arrest. These recommendations were largely based on two trials from 2002 which showed improved survival and brain function when people were cooled to 32–34 °C after cardiac arrest.
However, more recent research suggests that there is no benefit to cooling to 33 °C when compared with less aggressive cooling only to a near-normal temperature of 36 °C; it appears cooling is effective because it prevents fever, a common complication seen after cardiac arrest. There is no difference in long-term quality of life following mild compared to more severe cooling.
In children, following cardiac arrest, cooling does not appear useful as of 2018.
A recent Cochrane Review summarized available evidence on the topic and found that targeted temperature management around 33 °C may increase the chance to prevent brain damage after cardiac arrest by 40%.
Neonatal encephalopathy
Hypothermia therapy for neonatal encephalopathy has been proven to improve outcomes for newborn infants affected by perinatal hypoxia-ischemia, hypoxic ischemic encephalopathy or birth asphyxia. A 2013 Cochrane review found that it is useful in full term babies with encephalopathy. Whole body or selective head cooling to , begun within six hours of birth and continued for 72 hours, reduces mortality and reduces cerebral palsy and neurological deficits in survivors.
Open heart surgery
Targeted temperature management is used during open-heart surgery because it decreases the metabolic needs of the brain, heart, and other organs, reducing the risk of damage to them. The patient is given medication to prevent shivering. The body is then cooled to . The heart is stopped and an external heart-lung pump maintains circulation to the patient's body. The heart is cooled further and is maintained at a temperature below for the duration of the surgery. This very cold temperature helps the heart muscle to tolerate its lack of blood supply during the surgery.
Adverse effects
Possible complications may include: infection, bleeding, dysrhythmias and high blood sugar. One review found an increased risk of pneumonia and sepsis but not the overall risk of infection. Another review found a trend towards increased bleeding but no increase in severe bleeding. Hypothermia induces a "cold diuresis" which can lead to electrolyte abnormalities – specifically hypokalemia, hypomagnesaemia, and hypophosphatemia, as well as hypovolemia.
Mechanism
The earliest rationale for the effects of hypothermia as a neuroprotectant focused on the slowing of cellular metabolism resulting from a drop in body temperature. For every one degree Celsius drop in body temperature, cellular metabolism slows by 5–7%. Accordingly, most early hypotheses suggested that hypothermia reduces the harmful effects of ischemia by decreasing the body's need for oxygen. The initial emphasis on cellular metabolism explains why the early studies almost exclusively focused on the application of deep hypothermia, as these researchers believed that the therapeutic effects of hypothermia correlated directly with the extent of temperature decline.
In the special case of infants with perinatal asphyxia, it appears that apoptosis is a prominent cause of cell death and that hypothermia therapy for neonatal encephalopathy interrupts the apoptotic pathway. In general, cell death is not directly caused by oxygen deprivation, but occurs indirectly as a result of the cascade of subsequent events. Cells need oxygen to create ATP, a molecule used by cells to store energy, and cells need ATP to regulate intracellular ion levels. ATP is used to fuel both the importation of ions necessary for cellular function and the removal of ions that are harmful to cellular function. Without oxygen, cells cannot manufacture the necessary ATP to regulate ion levels and thus cannot prevent the intracellular environment from approaching the ion concentration of the outside environment. It is not oxygen deprivation itself that precipitates cell death, but rather without oxygen the cell can not make the ATP it needs to regulate ion concentrations and maintain homeostasis.
Notably, even a small drop in temperature encourages cell membrane stability during periods of oxygen deprivation. For this reason, a drop in body temperature helps prevent an influx of unwanted ions during an ischemic insult. By making the cell membrane more impermeable, hypothermia helps prevent the cascade of reactions set off by oxygen deprivation. Even moderate dips in temperature strengthen the cellular membrane, helping to minimize any disruption to the cellular environment. It is by moderating the disruption of homeostasis caused by a blockage of blood flow that many now postulate, results in hypothermia's ability to minimize the trauma resultant from ischemic injuries.
Targeted temperature management may also help to reduce reperfusion injury, damage caused by oxidative stress when the blood supply is restored to a tissue after a period of ischemia. Various inflammatory immune responses occur during reperfusion. These inflammatory responses cause increased intracranial pressure, which leads to cell injury and in some situations, cell death. Hypothermia has been shown to help moderate intracranial pressure and therefore to minimize the harmful effects of a patient's inflammatory immune responses during reperfusion. The oxidation that occurs during reperfusion also increases free radical production. Since hypothermia reduces both intracranial pressure and free radical production, this might be yet another mechanism of action for hypothermia's therapeutic effect. Overt activation of N-methyl-D-aspartate (NMDA) receptors following brain injuries can lead to calcium entry which triggers neuronal death via the mechanisms of excitotoxicity.
Methods
There are a number of methods through which hypothermia is induced. These include: cooling catheters, cooling blankets, and application of ice applied around the body among others. As of 2013 it is unclear if one method is any better than the others. While cool intravenous fluid may be given to start the process, further methods are required to keep the person cold.
Core body temperature must be measured (either via the esophagus, rectum, bladder in those who are producing urine, or within the pulmonary artery) to guide cooling. A temperature below should be avoided, as adverse events increase significantly. The person should be kept at the goal temperature plus or minus half a degree Celsius for 24 hours. Rewarming should be done slowly with suggested speeds of per hour.
Targeted temperature management should be started as soon as possible. The goal temperature should be reached before 8 hours. Targeted temperature management remains partially effective even when initiated as long as 6 hours after collapse.
Prior to the induction of targeted temperature management, pharmacological agents to control shivering must be administered. When body temperature drops below a certain threshold—typically around —people may begin to shiver. It appears that regardless of the technique used to induce hypothermia, people begin to shiver when temperature drops below this threshold. Drugs commonly used to prevent and treat shivering in targeted temperature management include acetaminophen, buspirone, opioids including pethidine (meperidine), dexmedetomidine, fentanyl, and/or propofol. If shivering is unable to be controlled with these drugs, patients are often placed under general anesthesia and/or are given paralytic medication like vecuronium. People should be rewarmed slowly and steadily in order to avoid harmful spikes in intracranial pressure.
Cooling catheters
Cooling catheters are inserted into a femoral vein. Cooled saline solution is circulated through either a metal coated tube or a balloon in the catheter. The saline cools the person's whole body by lowering the temperature of a person's blood. Catheters reduce temperature at rates ranging from per hour. Through the use of the control unit, catheters can bring body temperature to within of the target level. Furthermore, catheters can raise temperature at a steady rate, which helps to avoid harmful rises in intracranial pressure. A number of studies have demonstrated that targeted temperature management via catheter is safe and effective.
Adverse events associated with this invasive technique include bleeding, infection, vascular puncture, and deep vein thrombosis (DVT). Infection caused by cooling catheters is particularly harmful, as resuscitated people are highly vulnerable to the complications associated with infections. Bleeding represents a significant danger, due to a decreased clotting threshold caused by hypothermia. The risk of deep vein thrombosis may be the most pressing medical complication.
Deep vein thrombosis can be characterized as a medical event whereby a blood clot forms in a deep vein, usually the femoral vein. This condition may become potentially fatal if the clot travels to the lungs and causes a pulmonary embolism. Another potential problem with cooling catheters is the potential to block access to the femoral vein, which is a site normally used for a variety of other medical procedures, including angiography of the venous system and the right side of the heart. However, most cooling catheters are triple lumen catheters, and the majority of people post-arrest will require central venous access. Unlike non-invasive methods which can be administered by nurses, the insertion of cooling catheters must be performed by a physician fully trained and familiar with the procedure. The time delay between identifying a person who might benefit from the procedure and the arrival of an interventional radiologist or other physician to perform the insertion may minimize some of the benefit of invasive methods' more rapid cooling.
Transnasal evaporative cooling
Transnasal evaporative cooling is a method of inducing the hypothermia process and provides a means of continuous cooling of a person throughout the early stages of targeted temperature management and during movement throughout the hospital environment. This technique uses two cannulae, inserted into a person's nasal cavity, to deliver a spray of coolant mist that evaporates directly underneath the brain and base of the skull. As blood passes through the cooling area, it reduces the temperature throughout the rest of the body.
The method is compact enough to be used at the point of cardiac arrest, during ambulance transport, or within the hospital proper. It is intended to reduce rapidly the person's temperature to below while targeting the brain as the first area of cooling. Research into the device has shown cooling rates of per hour in the brain (measured through infrared tympanic measurement) and per hour for core body temperature reduction.
Water blankets
With these technologies, cold water circulates through a blanket, or torso wraparound vest and leg wraps. To lower temperature with optimal speed, 70% of a person's surface area should be covered with water blankets. The treatment represents the most well studied means of controlling body temperature. Water blankets lower a person's temperature exclusively by cooling a person's skin and accordingly require no invasive procedures.
Water blankets possess several undesirable qualities. They are susceptible to leaking, which may represent an electrical hazard since they are operated in close proximity to electrically powered medical equipment. The Food and Drug Administration also has reported several cases of external cooling blankets causing significant burns to the skin of the person. Other problems with external cooling include temperature overshoot (20% of people will have overshoot), slower induction time versus internal cooling, increased compensatory response, decreased patient access, and discontinuation of cooling for invasive procedures such as cardiac catheterization.
If therapy with water blankets is given along with two litres of cold intravenous saline, people can be cooled to in 65 minutes. Most machines now come with core temperature probes. When inserted into the rectum, the core body temperature is monitored and feedback to the machine allows changes in the water blanket to achieve the desired set temperature. In the past some of the models of cooling machines have produced an overshoot in the target temperature and cooled people to levels below , resulting in increased adverse events. They have also rewarmed patients at too fast a rate, leading to spikes in intracranial pressure. Some of the new models have more software that attempt to prevent this overshoot by utilizing warmer water when the target temperature is close and preventing any overshoot. Some of the new machines now also have 3 rates of cooling and warming; a rewarming rate with one of these machines allows a patient to be rewarmed at a very slow rate of just an hour in the "automatic mode", allowing rewarming from to over 24 hours.
Cool caps
There are a number of non-invasive head cooling caps and helmets designed to target cooling at the brain. A hypothermia cap is typically made of a synthetic material such as neoprene, silicone, or polyurethane and filled with a cooling agent such as ice or gel which is either cooled to a very cold temperature, , before application or continuously cooled by an auxiliary control unit. Their most notable uses are in preventing or reducing alopecia in chemotherapy, and for preventing cerebral palsy in babies born with hypoxic ischemic encephalopathy. In the continuously cooled iteration, coolant is cooled with the aid of a compressor and pumped through the cooling cap. Circulation is regulated by means of valves and temperature sensors in the cap. If the temperature deviates or if other errors are detected, an alarm system is activated. The frozen iteration involves continuous application of caps filled with Crylon gel cooled to to the scalp before, during and after intravenous chemotherapy. As the caps warm on the head, multiple cooled caps must be kept on hand and applied every 20 to 30 minutes.
History
Hypothermia has been applied therapeutically since antiquity. The Greek physician Hippocrates, the namesake of the Hippocratic Oath, advocated the packing of wounded soldiers in snow and ice. Napoleonic surgeon Baron Dominique Jean Larrey recorded that officers who were kept closer to the fire survived less often than the minimally pampered infantrymen. In modern times, the first medical article concerning hypothermia was published in 1945. This study focused on the effects of hypothermia on patients with severe head injury. In the 1950s, hypothermia received its first medical application, being used in intracerebral aneurysm surgery to create a bloodless field. Most of the early research focused on the applications of deep hypothermia, defined as a body temperature of . Such an extreme drop in body temperature brings with it a whole host of side effects, which made the use of deep hypothermia impractical in most clinical situations.
This period also saw sporadic investigation of more mild forms of hypothermia, with mild hypothermia being defined as a body temperature of . In the 1950s, Doctor Rosomoff demonstrated in dogs the positive effects of mild hypothermia after brain ischemia and traumatic brain injury. In the 1980s further animal studies indicated the ability of mild hypothermia to act as a general neuroprotectant following a blockage of blood flow to the brain. This animal data was supported by two landmark human studies that were published simultaneously in 2002 by the New England Journal of Medicine. Both studies, one occurring in Europe and the other in Australia, demonstrated the positive effects of mild hypothermia applied following cardiac arrest. Responding to this research, in 2003 the American Heart Association (AHA) and the International Liaison Committee on Resuscitation (ILCOR) endorsed the use of targeted temperature management following cardiac arrest. Currently, a growing percentage of hospitals around the world incorporate the AHA/ILCOR guidelines and include hypothermic therapies in their standard package of care for patients with cardiac arrest. Some researchers go so far as to contend that hypothermia represents a better neuroprotectant following a blockage of blood to the brain than any known drug. Over this same period a particularly successful research effort showed that hypothermia is a highly effective treatment when applied to newborn infants following birth asphyxia. Meta-analysis of a number of large randomised controlled trials showed that hypothermia for 72 hours started within 6 hours of birth significantly increased the chance of survival without brain damage.
Research
TTM has been studied in several use scenarios where it has not usually been found to be helpful, or is still under investigation, despite theoretical grounds for its usefulness.
Stroke
There is currently no evidence supporting the use of targeted temperature management in humans for stroke, and clinical trials have not been completed. Most of the data concerning hypothermia's effectiveness in treating stroke is limited to animal studies. These studies have focused primarily on ischemic stroke as opposed to hemorrhagic stroke, as hypothermia is associated with a lower clotting threshold. In these animal studies, hypothermia proved an effective neuroprotectant. The use of hypothermia to control intracranial pressure (ICP) after an ischemic stroke was found to be both safe and practical.
Traumatic brain or spinal cord injury
Animal studies have shown the benefit of targeted temperature management in traumatic central nervous system (CNS) injuries. Clinical trials have shown mixed results with regard to the optimal temperature and delay of cooling. Achieving therapeutic hypothermic temperatures is thought to prevent secondary neurological injuries after severe CNS trauma. A systematic review of randomised controlled trials in traumatic brain injury (TBI) suggests there is no evidence that hypothermia is beneficial.
Cardiac arrest
A clinical trial in cardiac arrest patients showed that hypothermia improved neurological outcome and reduced mortality. A retrospective study of the use of hypothermia for cardiac arrest patients showed favorable neurological outcome and survival. Osborn waves on electrocardiogram (ECG) are frequent during TTM after cardiac arrest, particularly in patients treated at 33 °C. Osborn waves are not associated with increased risk of ventricular arrhythmia, and may be considered a benign physiological phenomenon, associated with lower mortality in univariable analyses.
Neurosurgery
As of 2015 hypothermia had shown no improvements in neurological outcomes or in mortality in neurosurgery.
Naegleriasis
TTM has been used in some cases of naegleriasis.
See also
Deep hypothermic circulatory arrest
References
External links
The American Society of Hypothermic Medicine
Therapeutic Hypothermia and Temperature Management
Sources
Medical treatments
Cryobiology
Neonatology | Targeted temperature management | [
"Physics",
"Chemistry",
"Biology"
] | 4,118 | [
"Biochemistry",
"Physical phenomena",
"Phase transitions",
"Cryobiology"
] |
15,211,685 | https://en.wikipedia.org/wiki/Glucose-regulated%20protein | Glucose-regulated protein is a protein in the endoplasmic reticulum in the cell.
It comes in several different molecular masses, including:
Grp78 (78 kDa)
Grp94 (94 kDa)
Grp170 (170 kDa), which is a human chaperone protein
References
Endoplasmic reticulum resident proteins | Glucose-regulated protein | [
"Chemistry"
] | 78 | [
"Biochemistry stubs",
"Protein stubs"
] |
15,213,007 | https://en.wikipedia.org/wiki/MLwiN | MLwiN is a statistical software package for fitting multilevel models. It uses both maximum likelihood estimation and Markov chain Monte Carlo (MCMC) methods. MLwiN is based on an earlier package, MLn, but with a graphical user interface (as well as other additional features).
MLwiN represents multilevel models using mathematical notation including Greek letters and multiple subscripts, so the user needs to be (or become) familiar with such notation.
For a tutorial introduction to multilevel models and their applications in medical statistics illustrated using MLwiN, see Goldstein et al.
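MLwiN itself is proprietary, but the simplest kind of model it fits, a two-level random-intercept model, can be sketched with open-source tools. The following Python/statsmodels example is illustrative only; the data file and column names (score, hours, school) are hypothetical assumptions:

```python
# A two-level model (e.g. students nested in schools) with a random
# intercept per group, fitted by maximum likelihood, analogous to the
# basic multilevel models MLwiN handles. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exam_scores.csv")  # hypothetical: score, hours, school

# Fixed effect for study hours; random intercept for each school.
model = smf.mixedlm("score ~ hours", df, groups=df["school"])
result = model.fit(reml=False)  # ML rather than REML, for model comparison
print(result.summary())
```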
References
External links
Website
Multilevel Modelling Software Reviews
Statistical software | MLwiN | [
"Mathematics"
] | 133 | [
"Statistical software",
"Mathematical software"
] |
15,213,798 | https://en.wikipedia.org/wiki/PHF8 | PHD finger protein 8 is a protein that in humans is encoded by the PHF8 gene.
Function
PHF8 belongs to the superfamily of ferrous iron- and alpha-ketoglutarate-dependent hydroxylases, and is active as a histone lysine demethylase with selectivity for the di- and monomethyl states. PHF8 induces an EMT (epithelial to mesenchymal transition)-like process by upregulating the key EMT transcription factors SNAI1 and ZEB1.
Regulation during differentiation
PHF8 expression was found to increase during endothelial differentiation and to decrease significantly during cardiac differentiation of murine embryonic stem cells.
Clinical significance
Mutations in PHF8 cause Siderius-type X-linked intellectual disability (XLMR).
In addition to moderate intellectual disability, features of the Siderius-Hamel syndrome include facial dysmorphism, cleft lip and/or cleft palate, and in some cases microcephaly. A chromosomal microdeletion on Xp11.22 encompassing all of the PHF8 and FAM120C genes and a part of the WNK3 gene was reported in two brothers with autism spectrum disorder in addition to Siderius-type XLMR and cleft lip and palate.
This catalytic activity is disrupted by clinically known mutations to PHF8, which were found to cluster in its catalytic JmjC domain. The F279S mutation of PHF8, found in 2 Finnish brothers with mild intellectual disability, facial dysmorphism and cleft lip/palate, was found to additionally prevent nuclear localisation of PHF8 overexpressed in human cells.
The catalytic activity of PHF8 depends on molecular oxygen, a fact considered important with respect to reports on increased incidence of cleft lip/palate in mice that have been exposed to hypoxia during pregnancy. In humans, fetal cleft lip and other congenital abnormalities have also been linked to maternal hypoxia, as caused by e.g. maternal smoking, heavy maternal alcohol use, or maternal hypertension treatment.
References
External links
Transcription factors
Genes on human chromosome X
Human 2OG oxygenases
EC 1.14.11 | PHF8 | [
"Chemistry",
"Biology"
] | 470 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,215,073 | https://en.wikipedia.org/wiki/HMGB3 | High-mobility group protein B3 is a protein that in humans is encoded by the HMGB3 gene.
References
Further reading
External links
PDBe-KB provides an overview of all the structure information available in the PDB for Human High mobility group protein B3 (HMGB3)
Transcription factors | HMGB3 | [
"Chemistry",
"Biology"
] | 61 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,215,486 | https://en.wikipedia.org/wiki/JARID2 | Protein Jumonji is a protein that in humans is encoded by the JARID2 gene. JARID2 is a member of the alpha-ketoglutarate-dependent hydroxylase superfamily.
Jarid2 (jumonji, AT-rich interactive domain 2) is a protein-coding gene that functions as a putative transcription factor. Distinguished as a nuclear protein necessary for mouse embryogenesis, Jarid2 is a member of the jumonji family that contains a DNA-binding domain known as the AT-rich interaction domain (ARID). In vitro studies of Jarid2 reveal that ARID, along with other functional domains, is involved in DNA binding, nuclear localization, transcriptional repression, and recruitment of Polycomb-repressive complex 2 (PRC2). Intracellular mechanisms underlying these interactions remain largely unknown.
In search of developmentally important genes, Jarid2 was previously identified by gene trap technology as an important factor necessary for organ development. During mouse organogenesis, Jarid2 is involved in the formation of the neural tube and the development of the liver, spleen, thymus and cardiovascular system. Continuous Jarid2 expression in the tissues of the heart highlights its presiding role in the development of both the embryonic and the adult heart. Mutant models of Jarid2 embryos show severe heart malformations, ventricular septal defects, noncompaction of the ventricular wall, and atrial enlargement. Homozygous mutants of Jarid2 are found to die soon after birth. Overexpression of the mouse Jarid2 gene has been reported to repress cardiomyocyte proliferation through its close interaction with retinoblastoma protein (Rb), a master cell cycle regulator. Retinoblastoma-binding protein-2 and the human SMCX protein share regions of homology conserved between mice and humans.
References
Further reading
External links
Human 2OG oxygenases
EC 1.14.11
Transcription factors
Genes mutated in mice | JARID2 | [
"Chemistry",
"Biology"
] | 409 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,217,260 | https://en.wikipedia.org/wiki/NFIL3 | Nuclear factor, interleukin 3 regulated, also known as NFIL3 or E4BP4 is a protein which in humans is encoded by the NFIL3 gene.
Function
Expression of interleukin-3 (IL-3) is restricted to activated T cells, natural killer (NK) cells, and mast cell lines. Transcription initiation depends on the activating capacity of specific protein factors, such as NFIL3, that bind to regulatory regions of the gene, usually upstream of the transcription start site.
References
Further reading
Transcription factors | NFIL3 | [
"Chemistry",
"Biology"
] | 116 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
4,510,223 | https://en.wikipedia.org/wiki/London%20moment | The London moment (after Fritz London) is a quantum-mechanical phenomenon whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis.
The term may also refer to the magnetic moment of any rotation of any superconductor, caused by the electrons lagging behind the rotation of the object, although the field strength is independent of the charge carrier density in the superconductor.
Gravity Probe B
A magnetometer determines the orientation of the generated field, which is interpolated to determine the axis of rotation. Gyroscopes of this type can be extremely accurate and stable. For example, those used in the Gravity Probe B experiment measured changes in gyroscope spin axis orientation to better than 0.5 milliarcseconds (1.4 × 10⁻⁷ degrees) over a one-year period. This is equivalent to an angular separation the width of a human hair viewed from 32 kilometers (20 miles) away.
The GP-B gyro consists of a near-perfect spherical rotating mass made of fused quartz, which provides a dielectric support for a thin layer of niobium superconducting material. To eliminate friction found in conventional bearings, the rotor assembly is centered by the electric field from six electrodes. After the initial spin-up by a jet of helium which brings the rotor to 4,000 RPM, the polished gyroscope housing is evacuated to an ultra-high vacuum to further reduce drag on the rotor. Provided the suspension electronics remain powered, the extreme rotational symmetry, lack of friction, and low drag will allow the angular momentum of the rotor to keep it spinning for about 15,000 years.
A sensitive DC SQUID magnetometer able to discriminate changes as small as one magnetic flux quantum, or about 2 × 10⁻¹⁵ Wb, is used to monitor the gyroscope. A precession, or tilt, in the orientation of the rotor causes the London moment magnetic field to shift relative to the housing. The moving field passes through a superconducting pickup loop fixed to the housing, inducing a small electric current. The current produces a voltage across a shunt resistance, which is resolved to spherical coordinates by a microprocessor. The system is designed to minimize Lorentz torque on the rotor.
Magnetic field strength
The magnetic field strength associated with a rotating superconductor is given by:

$$B = \frac{2M}{Q}\,\omega,$$

where $M$ and $Q$ are the mass and the charge of the superconducting charge carriers respectively, and $\omega$ is the angular velocity of the rotation. For the case of Cooper pairs of electrons, $M = 2m_e$ and $Q = 2e$. Despite the electrons existing in a strongly interacting environment, $m_e$ denotes here the mass of the bare electrons (as in vacuum), and not e.g. the effective mass of conducting electrons of the normal phase.
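As an illustrative numeric check (not part of the original article), the formula above gives the London field for a rotor spinning at the 4,000 RPM spin-up rate quoted in the Gravity Probe B section:

```python
# Numeric check of the London field B = (2M/Q) * omega for Cooper pairs
# (M = 2*m_e, Q = 2*e) at roughly the GP-B spin-up rate of 4,000 RPM.
import math

m_e = 9.109e-31   # electron mass, kg
e   = 1.602e-19   # elementary charge, C

omega = 4000 * 2 * math.pi / 60          # 4,000 RPM in rad/s (~419 rad/s)
B = (2 * (2 * m_e)) / (2 * e) * omega    # reduces to (2*m_e/e) * omega

print(f"London field: {B:.2e} T")        # on the order of 5e-9 tesla
```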
Etymology
Named for the physical scientist Fritz London, and moment as in magnetic moment.
See also
Barnett effect
References
Quantum mechanics | London moment | [
"Physics"
] | 576 | [
"Theoretical physics",
"Quantum mechanics"
] |
4,510,496 | https://en.wikipedia.org/wiki/Head%20gasket | In an internal combustion engine, a head gasket provides the seal between the engine block and cylinder head(s).
Its purpose is to seal the combustion gases within the cylinders and to avoid coolant or engine oil leaking into the cylinders. Leaks in the head gasket can cause poor engine running and/or overheating.
Purpose
Within a water-cooled internal combustion engine, there are three fluids which travel between the engine block and the cylinder head:
Combustion gases (unburned air/fuel mixture and exhaust gases) in each cylinder
Water-based coolant in the coolant passages
Lubricating oil in the oil galleries
Correct operation of the engine requires that each of these circuits do not leak or lose pressure at the junction of the engine block and the cylinder head. The head gasket is the seal that prevents these leaks and pressure losses.
Types
Multi-layer steel (MLS): Most modern engines are produced with MLS gaskets. These consist of two to five (typically three) thin layers of steel, interleaved with elastomer. The contact faces are usually coated with a rubber-like coating such as Viton which adheres to the engine block and cylinder head while the inner layers are optimized for resilience.
Solid copper: A solid sheet of copper, which typically requires special machining called O-ringing, in which a piece of wire is placed around the circumference of the cylinder to bite into the copper. When this is performed, copper gaskets are very durable.
Composite: An older design which is more prone to blowouts than newer designs. Composite gaskets are traditionally made from asbestos or graphite. Asbestos gaskets are increasingly rare due to health concerns.
Elastomeric: Uses a steel core plate with molded in place silicone rubber beads to seal oil and coolant passages. The bores are sealed by rolled steel fire rings in a more conventional manner. This type of gasket was used in the Rover K-series engine.
O-ring: These gaskets are typically built from steel or copper. They are reusable and if used between correctly prepared flat surfaces will yield the highest clamping pressure, due to their much lower surface area compared with other gasket types.
Gasket failure
A leak in the head gasket - often called a "blown head gasket" - can result in a leak of coolant, combustion gases, or both.
Blue smoke from the exhaust suggests that excess oil is entering the combustion chambers (although there are possible causes other than a head gasket leak). White smoke from the exhaust suggests that coolant is entering the combustion chamber.
Head gasket leaks are classified as either external or internal. External leaks are visible as oil or coolant on the outside of the engine (typically underneath). Internal leaks are when the fluids enter another circuit and may result in changes to the coolant or oil. Changes to the coolant may appear as foam (caused by hydrocarbons) in the coolant expansion tank. Coolant leaking into the oil system may result in a mayonnaise- or milkshake-like substance in the oil, often seen on the dipstick or oil filler cap. However, the presence of this substance is not conclusive proof of head gasket failure, since oil could mix with the coolant via other routes. Likewise, it is entirely possible for a head gasket to fail in such a way that oil never comes in contact with coolant. Therefore, it is not possible to conclusively determine the head gasket condition by inspecting the oil.
Coolant leakage
If coolant enters a cylinder, the burning of the air/fuel mixture is compromised, reducing the engine's performance and often causing steam (white smoke) to be visible from the exhaust. This steam can damage the catalytic converter. If a very large amount of coolant leaks into the cylinders, then the engine can suffer from hydrolock, which can cause extensive engine damage.
Combustion gas leakage
When combustion gases leak out of a cylinder, this causes a loss of compression, leading to power reduction or rough running. If the combustion gases are leaking into the cooling system, this reduces the effectiveness of the cooling system and can cause the engine to overheat. In other occurrences, the gases leak into small spaces between the gasket and either the cylinder head or the engine block, where they are trapped and then released when the engine is turned off. These gases then escape into the coolant and create air pockets. Sometimes these air pockets get trapped in the engine's coolant thermostat, causing it to stay closed and produce further overheating, thereby creating more voids between the gasket and the engine. At other times these air pockets can cause the engine to expel coolant into the overflow or expansion tank, thereby reducing the amount of coolant available for cooling.
Diagnosis and repair
Common test methods for head gasket leaks are a compression test (using a pressure gauge), a leak-down test or a chemical test that identifies hydrocarbons in the coolant fluid.
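The leak-down test mentioned above expresses cylinder leakage as the percentage of a regulated input pressure that escapes the cylinder. A minimal sketch follows; the numbers and the cut-off are illustrative assumptions, not a manufacturer specification:

```python
# Leak-down percentage: how much of the regulated input pressure escapes
# the sealed cylinder. The 20% cut-off below is an illustrative assumption.
def leak_down_percent(input_psi: float, cylinder_psi: float) -> float:
    return (input_psi - cylinder_psi) / input_psi * 100.0

reading = leak_down_percent(100.0, 72.0)
print(f"leakage: {reading:.0f}%")   # 28% here, a large leak
if reading > 20.0:                  # hypothetical threshold
    print("significant leakage: check head gasket, rings, or valves")
```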
The cost of the replacement component (i.e. the head gasket itself) is usually relatively low, however there are significant labor costs involved in the replacement process. This is because the process of removing and replacing the cylinder head is a time-consuming task.
Effect of engine knocking
Engine knocking (detonation) can be caused by poor quality fuel, an engine fault or if inappropriate fuel and/or ignition settings are trialled/chosen while engine tuning is taking place. If the detonation is severe, the cylinder pressure can increase to eight times above normal pressures, which can cause the cylinder head to lift away from the engine block, disrupting the seal between the two. Most gaskets used in standard production engines can be critically damaged by severe detonation.
See also
Internal combustion engine cooling
List of auto parts
Motor oil
Rocker cover gasket
References
Seals (mechanical)
Engine technology
Engine problems | Head gasket | [
"Physics",
"Technology"
] | 1,241 | [
"Seals (mechanical)",
"Engines",
"Engine problems",
"Engine technology",
"Materials",
"Matter"
] |
4,510,659 | https://en.wikipedia.org/wiki/Fuji%20Electric | , operating under the brand name FE, is a Japanese electrical equipment company, manufacturing pressure transmitters, flowmeters, gas analyzers, controllers, inverters, pumps, generators, ICs, motors, and power equipment.
History
Fuji Electric was established in 1923 as a capital and technology tie-up between Furukawa Electric, a spinoff from the Furukawa zaibatsu, and Siemens AG. The name "Fuji" is derived from Furukawa's "Fu" and Siemens' "Ji", since the German pronunciation of Siemens is written jiimensu in Japanese romanization. The characters used to write Mount Fuji were used as ateji.
In 1935, Fuji Electric spun off the telephone department as Fuji Tsushinki (lit. Fuji Communications Equipment, now Fujitsu).
Divisions and products
Power and social infrastructure
Nuclear power-related equipment
Solar power generation systems
Fuel cells
Energy management systems
Smart meters
Industrial infrastructure
Transmission and distribution equipment — joint venture with Schneider Electric
Industrial power supply equipment
Industrial drive systems
Heating and induction furnace equipment
Plant control and measurement systems
Radiation monitoring systems
Power electronics
Inverters/servo systems
Transportation power electronics
Uninterruptible power supply systems
Power conditioners
Power distribution and control equipment
Electronic devices
Power semiconductors
Photoconductive drums
Magnetic disks
Food and beverage distribution
Vending machines
Retail distribution systems
Currency handling equipment
Freezing and refrigerated showcases
Source
References
External links
Fuji Electric Group
List of Fuji Electric Systems Distributors
Wiki collection of bibliographic works on Fuji Electric
Electronics companies of Japan
Electrical equipment manufacturers
Electrical engineering companies of Japan
Electrical wiring and construction supplies manufacturers
Heating, ventilation, and air conditioning companies
Vending machine manufacturers
Manufacturing companies based in Tokyo
Companies listed on the Tokyo Stock Exchange
Companies in the Nikkei 225
Electronics companies established in 1923
Japanese companies established in 1923
Japanese brands
Furukawa Group
Pump manufacturers
Electric motor manufacturers | Fuji Electric | [
"Engineering"
] | 374 | [
"Electrical engineering organizations",
"Electrical equipment manufacturers"
] |
4,510,677 | https://en.wikipedia.org/wiki/Agent%20architecture | Agent architecture in computer science is a blueprint for software agents and intelligent control systems, depicting the arrangement of components. The architectures implemented by intelligent agents are referred to as cognitive architectures. The term agent is a conceptual idea that is not defined precisely. An agent consists of facts, a set of goals, and sometimes a plan library.
Types
Reactive architectures
Subsumption
Deliberative reasoning architectures
Procedural reasoning system (PRS)
Layered/hybrid architectures
3T
AuRA
Brahms
GAIuS
GRL
ICARUS
InteRRaP
TinyCog
TouringMachines
Cognitive architectures
ASMO
Soar
ACT-R
Brahms
LIDA
PreAct
Cougaar
PRODIGY
FORR
See also
Action selection
Cognitive architecture
Real-time Control System
References
Software architecture
Robot architectures | Agent architecture | [
"Engineering"
] | 149 | [
"Robot architectures",
"Robotics engineering"
] |
4,512,803 | https://en.wikipedia.org/wiki/Receptor%20activated%20solely%20by%20a%20synthetic%20ligand | A receptor activated solely by a synthetic ligand (RASSL) or designer receptor exclusively activated by designer drugs (DREADD), is a class of artificially engineered protein receptors used in the field of chemogenetics which are selectively activated by certain ligands. They are used in biomedical research, in particular in neuroscience to manipulate the activity of neurons.
Originally differentiated by the approach used to engineer them, RASSLs and DREADDs are often used interchangeably now to represent an engineered receptor-ligand system. These systems typically utilize G protein-coupled receptors (GPCR) engineered to respond exclusively to synthetic ligands, like clozapine N-oxide (CNO), and not to endogenous ligands. Several types of these receptors exists, derived from muscarinic or κ-opioid receptors.
Types of RASSLs / DREADDs
One of the first DREADDs was based on the human M3 muscarinic receptor (hM3). Only two point mutations of hM3 were required to achieve a mutant receptor with nanomolar potency for CNO, insensitivity to acetylcholine and low constitutive activity and this DREADD receptor was named hM3Dq. M1 and M5 muscarinic receptors have been mutated to create DREADDs hM1Dq and hM5Dq respectively.
The most commonly used inhibitory DREADD is hM4Di, derived from the M4 muscarinic receptor that couples with the Gi protein. Another Gi coupled human muscarinic receptor, M2, was also mutated to obtain the DREADD receptor hM2D. Another inhibitory Gi-DREADD is the kappa-opioid-receptor (KOR) DREADD (KORD) which is selectively activated by salvinorin B (SalB).
Gs-coupled DREADDs have also been developed. These receptors are also known as GsD and are chimeric receptors containing intracellular regions of the turkey erythrocyte β-adrenergic receptor substituted into the rat M3 DREADD.
RASSL / DREADD ligands
A growing number of ligands that can be used to activate RASSLs / DREADDs are commercially available.
CNO is the prototypical DREADD activator. CNO activates the excitatory Gq-coupled DREADDs hM3Dq, hM1Dq and hM5Dq, and also the inhibitory Gi-coupled DREADDs hM4Di and hM2Di. CNO also activates the Gs-coupled DREADD (GsD) and the β-arrestin-preferring DREADD rM3Darr (Rq(R165L)).
Recent findings suggest that systemically administered CNO does not readily cross the blood-brain-barrier in vivo and converts to clozapine which itself activates DREADDs. Clozapine is an atypical antipsychotic which has been indicated to show high DREADD affinity and potency. Subthreshold injections of clozapine itself can be utilised to induce preferential DREADD-mediated behaviors. Therefore, when using CNO, care must be taken in experimental design and proper controls should be incorporated.
DREADD agonist 21, also known as Compound 21, represents an alternative agonist for muscarinic-based DREADDs and an alternative to CNO. It has been reported that Compound 21 has excellent bioavailability, pharmacokinetic properties and brain penetrability, and does not undergo reverse metabolism to clozapine. Another known agonist is perlapine, a hypnotic drug approved for treating insomnia in Japan; it acts as an activator of Gq-, Gi-, and Gs-coupled DREADDs and has structural similarity to CNO. A more recent agonist of hM3Dq and hM4Di is deschloroclozapine (DCZ).
On the other hand, SalB is a potent and selective activator of KORD.
JHU37160 and JHU37152 have been marketed commercially as novel DREADD ligands, active in vivo, with high potency and affinity for hM3Dq and hM4Di DREADDs.
Dihydrochloride salts of DREADD ligands that are water-soluble (but with differing stabilities in solution) have also been commercially developed.
Mechanism
RASSLs and DREADDs are families of designer G-protein-coupled receptors (GPCRs) built specifically to allow for precise spatiotemporal control of GPCR signaling in vivo. These engineered GPCRs are unresponsive to endogenous ligands but can be activated by nanomolar concentrations of pharmacologically inert, drug-like small molecules. Currently, RASSLs exist for the interrogation of several GPCR signaling pathways, including those activated by Gs, Gi, Gq, Golf and β-arrestin. A major cause of the success of RASSLs has been the open exchange of DNA constructs and related resources.
The hM4Di-DREADD's inhibitory effects are a result of CNO stimulation and the resulting activation of G-protein-coupled inwardly rectifying potassium (GIRK) channels. This causes hyperpolarization of the targeted neuronal cell and thus attenuates subsequent activity.
Uses
This chemogenetic technique can be used for remote manipulation of cells, in particular excitable cells like neurons, both in vitro and in vivo with the administration of specific ligands. Similar techniques in this field include thermogenetics and optogenetics, the control of neurons with temperature or light, respectively.
Viral expression of DREADD proteins, both in-vivo enhancers and inhibitors of neuronal function, has been used to bidirectionally control behaviors in mice (e.g. odor discrimination). Due to their ability to modulate neuronal activity, DREADDs are used as a tool to evaluate both the neuronal pathways and the behaviors associated with drug cues and drug addiction.
History
Strader and colleagues designed the first GPCR that could be activated only by a synthetic compound, and the approach has gradually been gaining momentum. The first international RASSL meeting was scheduled for April 6, 2006. A simple example of the use of a RASSL system in behavioral genetics was provided by Mueller et al. (2005), who showed that expressing a RASSL receptor in sweet taste cells of the mouse tongue led to a strong preference for oral consumption of the synthetic ligand, whereas expressing the RASSL in bitter taste cells caused dramatic taste aversion for the same compound.
The attenuating effects of the hM4Di-DREADD were originally explored in 2007, before being confirmed in 2014.
References
Further reading
Signal transduction | Receptor activated solely by a synthetic ligand | [
"Chemistry",
"Biology"
] | 1,399 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
11,521,009 | https://en.wikipedia.org/wiki/Lema%C3%AEtre%E2%80%93Tolman%20metric | In physics, the Lemaître–Tolman metric, also known as the Lemaître–Tolman–Bondi metric or the Tolman metric, is a Lorentzian metric based on an exact solution of Einstein's field equations; it describes an isotropic and expanding (or contracting) universe which is not homogeneous, and is thus used in cosmology as an alternative to the standard Friedmann–Lemaître–Robertson–Walker metric to model the expansion of the universe. It has also been used to model a universe which has a fractal distribution of matter to explain the accelerating expansion of the universe. It was first found by Georges Lemaître in 1933 and Richard Tolman in 1934 and later investigated by Hermann Bondi in 1947.
Details
In a synchronous reference system, where $g_{00}=1$ and $g_{0\alpha}=0$, the time coordinate $t$ (we set $c=1$) is also the proper time $\tau$, and clocks at all points can be synchronized. For a dust-like medium, where the pressure is zero, dust particles move freely, i.e. along the geodesics, and thus the synchronous frame is also a comoving frame, wherein the components of the four-velocity are $u^i=(1,0,0,0)$. The solution of the field equations yields

$$ds^2 = d\tau^2 - e^{\lambda(\tau,R)}\,dR^2 - r^2(\tau,R)\,(d\theta^2+\sin^2\theta\,d\phi^2), \qquad e^{\lambda} = \frac{r'^2}{1+f(R)},$$
where $r=r(\tau,R)$ is the radius or luminosity distance, in the sense that the surface area of a sphere with radius $r$ is $4\pi r^2$, $R$ is just interpreted as the Lagrangian coordinate, and

$$\dot r^2 = f(R) + \frac{F(R)}{r}, \qquad 8\pi k\,\rho = \frac{F'}{r'\,r^2},$$
subjected to the conditions $1+f>0$ and $F>0$, where $f(R)$ and $F(R)$ are arbitrary functions, $\rho$ is the matter density, $k$ is the gravitational constant, and finally primes denote differentiation with respect to $R$. We can also assume $F'>0$ and $r'>0$, which excludes cases resulting in the crossing of material particles during their motion. To each particle there corresponds a value of $R$; the function $r(\tau,R)$ and its time derivative respectively provide its law of motion and radial velocity. An interesting property of the solution described above is that when $f$ and $F$ are plotted as functions of $R$, the form of these functions plotted for the range $0 \le R \le R_0$ is independent of how these functions will be plotted for $R > R_0$. This prediction is evidently similar to the Newtonian theory. The total mass within the sphere of Lagrangian radius $R_0$ is given by

$$m = 4\pi \int_0^{R_0} \rho\, r^2\, r'\, dR = \frac{F(R_0)}{2k},$$
which implies that the Schwarzschild radius is given by $r_g = 2km = F(R_0)$.
The function $r(\tau,R)$ can be obtained upon integration and is given in a parametric form with a parameter $\eta$, with three possibilities,

$$f>0:\quad r=\frac{F}{2f}(\cosh\eta-1), \qquad \tau_0(R)-\tau=\frac{F}{2f^{3/2}}(\sinh\eta-\eta),$$

$$f<0:\quad r=\frac{F}{-2f}(1-\cos\eta), \qquad \tau_0(R)-\tau=\frac{F}{2(-f)^{3/2}}(\eta-\sin\eta),$$

$$f=0:\quad r=\left(\frac{9F}{4}\right)^{1/3}\bigl(\tau_0(R)-\tau\bigr)^{2/3},$$
where $\tau_0(R)$ emerges as another arbitrary function. However, we know that a centrally symmetric matter distribution can be described by at most two functions, namely its density distribution and the radial velocity of the matter. This means that of the three functions $f$, $F$ and $\tau_0$, only two are independent. In fact, since no particular selection has been made for the Lagrangian coordinate yet, which can be subjected to an arbitrary transformation, we can see that only two functions are arbitrary. For the dust-like medium, there exists another solution where $r=r(\tau)$ is independent of $R$, although such a solution does not correspond to the collapse of a finite body of matter.
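As a consistency check (added here for clarity, not part of the original article), the $f=0$ branch can be substituted directly into the evolution equation $\dot r^2 = f + F/r$:

```latex
% With a \equiv (9F/4)^{1/3} and r = a\,(\tau_0-\tau)^{2/3}:
\[
\dot r = -\tfrac{2}{3}\,a\,(\tau_0-\tau)^{-1/3}
\quad\Longrightarrow\quad
\dot r^2 = \tfrac{4}{9}\,a^2\,(\tau_0-\tau)^{-2/3},
\qquad
\frac{F}{r} = \frac{F}{a}\,(\tau_0-\tau)^{-2/3}.
\]
% The two sides agree exactly when a^3 = 9F/4, which fixes the
% coefficient (9F/4)^{1/3} quoted in the parametric solution above.
```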
Schwarzschild solution
When $F=$ const., $\rho=0$ and therefore the solution corresponds to empty space with a point mass located at the center. Further, by setting $f=0$ and $\tau_0(R)=R$, the solution reduces to the Schwarzschild solution expressed in Lemaître coordinates.
Gravitational collapse
The gravitational collapse occurs when $\tau$ reaches $\tau_0(R)$, with $\tau_0'>0$. The moment $\tau=\tau_0(R)$ corresponds to the arrival of the matter denoted by its Lagrangian coordinate $R$ at the center. In all three cases, as $\tau\rightarrow\tau_0(R)$, the asymptotic behaviors are given by

$$r\approx\left(\frac{9F}{4}\right)^{1/3}(\tau_0-\tau)^{2/3},\qquad e^{\lambda/2}\approx\left(\frac{2F}{3}\right)^{1/3}\frac{\tau_0'}{\sqrt{1+f}}\,(\tau_0-\tau)^{-1/3},\qquad 8\pi k\rho\approx\frac{2F'}{3F\tau_0'}\,(\tau_0-\tau)^{-1},$$
in which the first two relations indicate that in the comoving frame, all radial distances tend to infinity and tangential distances approach zero like $(\tau_0-\tau)^{2/3}$, whereas the third relation shows that the matter density increases like $(\tau_0-\tau)^{-1}$. In the special case $\tau_0(R)=$ constant, where the time of collapse of all the material particles is the same, the asymptotic behaviors are different,

$$r\approx\left(\frac{9F}{4}\right)^{1/3}(\tau_0-\tau)^{2/3},\qquad e^{\lambda/2}\approx\frac{F'}{3F}\left(\frac{9F}{4}\right)^{1/3}\frac{(\tau_0-\tau)^{2/3}}{\sqrt{1+f}},\qquad 8\pi k\rho\approx\frac{4}{3}\,(\tau_0-\tau)^{-2}.$$
Here both the tangential and radial distances go to zero like $(\tau_0-\tau)^{2/3}$, whereas the matter density increases like $(\tau_0-\tau)^{-2}$.
See also
Lemaître coordinates
Introduction to the mathematics of general relativity
Stress–energy tensor
Metric tensor (general relativity)
Relativistic angular momentum
inhomogeneous cosmology
References
Physical cosmology
Metric tensors
Spacetime
Coordinate charts in general relativity
General relativity
Gravity
Exact solutions in general relativity | Lemaître–Tolman metric | [
"Physics",
"Astronomy",
"Mathematics",
"Engineering"
] | 803 | [
"Exact solutions in general relativity",
"Astronomical sub-disciplines",
"Tensors",
"Vector spaces",
"Coordinate systems",
"Theoretical physics",
"Mathematical objects",
"Astrophysics",
"General relativity",
"Equations",
"Space (mathematics)",
"Metric tensors",
"Theory of relativity",
"Spa... |
11,523,713 | https://en.wikipedia.org/wiki/Control%20%28management%29 | Control is a function of management that helps to check errors and take corrective actions. This is done to minimize deviation from standards and ensure that the stated goals of the organization are achieved in a desired manner.
According to modern concepts, control is a foreseeing action; earlier concepts of control were only used when errors were detected. Control in management includes setting standards, measuring actual performance, and taking corrective action in decision making.
Definition
In 1916, Henri Fayol formulated one of the first definitions of control as it pertains to management: Control of an undertaking consists of seeing that everything is being carried out in accordance with the plan which has been adopted, the orders which have been given, and the principles which have been laid down. Its objective is to point out mistakes so that they may be rectified and prevented from recurring.
According to EFL Brech: Control is checking current performance against pre-determined standards contained in the plans, with a view to ensuring adequate progress and satisfactory performance.
According to Harold Koontz: Controlling is the measurement and correction of performance to make sure that enterprise objectives and the plans devised to attain them are accomplished.
According to Stafford Beer: Management is the profession of control.
Robert J. Mockler presented a more comprehensive definition of managerial control:
Management control can be defined as a systematic effort by business management to compare performance to predetermined standards, plans, or objectives in order to determine whether performance is in line with these standards and presumably in order to take any remedial action required to see that human and other corporate resources are being used in the most effective and efficient way possible in achieving corporate objectives.
Also, control can be defined as "that function of the system that adjusts operations as needed to achieve the plan, or to maintain variations from system objectives within allowable limits". The control subsystem functions in close harmony with the operating system. The degree to which they interact depends on the nature of the operating system and its objectives. Stability concerns a system's ability to maintain a pattern of output without wide fluctuations. The rapidity of response pertains to the speed with which a system can correct variations and return to the expected output.
A political election can illustrate the concept of control and the importance of feedback. Each party organizes a campaign to get its candidate selected and outlines a plan to inform the public about both the candidate's credentials and the party's platform. As the election nears, opinion polls furnish feedback about the effectiveness of the campaign and about each candidate's chances to win. Depending on the nature of this feedback, certain adjustments in strategy and/or tactics can be made in an attempt to achieve the desired result.
From these definitions, it can be stated that there is a close link between planning and controlling. Planning is a process by which an organization's objectives and the methods to achieve the objectives are established, and controlling is a process that measures and directs the actual performance against the planned goals of the organization. Thus, goals and objectives are often referred to as Siamese twins of management.
Characteristics
Control is a continuous process
Control is a management process
Control is closely linked with planning
Control is a tool for achieving organizational activities
Control is an end-to-end process
Control compares actual performance with planned performance
Control points out the error in the execution process
Control minimizes cost
Control achieves the standard
Control saves time
Control helps management monitor performance
Control compares performance against standards
Control is action oriented
Elements
The four basic elements in a control system are:
the characteristic or condition to be controlled
the sensor
the comparator
the activator
They occur in the same sequence and maintain consistent relationships with each other in every system.
The first element is the characteristic or condition of the operating system to be measured. Specific characteristics are selected because a correlation exists between them and the system's performance. A characteristic can be the output of the system during any stage of processing (e.g. the heat energy produced by a furnace), or it may be a condition that is the result of the system (e.g. the temperature in the room which has changed because of the heat generated by the furnace). In an elementary school system, the hours a teacher works or the gain in knowledge demonstrated by the students on a national examination are examples of characteristics that may be selected for measurement, or control.
The second element of control, the sensor, is a means for measuring the characteristic. For example, in a home heating system, this device would be the thermostat, and in a quality-control system, this measurement might be performed by a visual inspection of the product.
The third element of control, the comparator, determines the need for correction by comparing what is occurring with what has been planned. Some deviation from the plan is usual and expected, but when variations are beyond those considered acceptable, corrective action is required. It involves a sort of preventative action that indicates that good control is being achieved.
The fourth element of control, the activator, is the corrective action taken to return the system to its expected output. The actual person, device, or method used to direct corrective inputs into the operating system may take a variety of forms. It may be a hydraulic controller positioned by a solenoid or electric motor in response to an electronic error signal, an employee directed to rework the parts that failed to pass quality inspection, or a school principal who decides to buy additional books to provide for an increased number of students. As long as a plan is performed within allowable limits, corrective action is not necessary; however, this seldom occurs in practice.
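A minimal sketch (illustrative only, not from the source) makes the four elements concrete for the furnace example used above; each element is an explicit function:

```python
# Illustrative sketch of the four control elements from this section:
# controlled characteristic (room temperature), sensor, comparator, activator.
import random

SETPOINT_C = 21.0     # the standard against which the room is judged
TOLERANCE_C = 0.5     # allowable deviation before correction

def sensor(room_temp_c: float) -> float:
    """Measure the controlled characteristic (with a little noise)."""
    return room_temp_c + random.uniform(-0.1, 0.1)

def comparator(measured_c: float) -> float:
    """Compare the measurement with the standard; return the deviation."""
    return SETPOINT_C - measured_c

def activator(deviation_c: float) -> bool:
    """Corrective action: run the furnace only when the room is too cold."""
    return deviation_c > TOLERANCE_C

room = 19.0
for _ in range(10):
    furnace_on = activator(comparator(sensor(room)))
    room += 0.6 if furnace_on else -0.2   # crude room-thermal model
    print(f"room: {room:.1f} C, furnace {'on' if furnace_on else 'off'}")
```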
Information is the medium of control, because the flow of sensory data and later the flow of corrective information allow a characteristic or condition of the system to be controlled.
Controlled characteristic or condition
The primary requirement of a control system is that it maintains the level and kind of output necessary to achieve the system's objectives. It is usually impractical to control every feature and condition associated with the system's output. Therefore, the choice of the controlled item (and appropriate information about it) is extremely important. There should be a direct correlation between the controlled item and the system's operation. In other words, control of the selected characteristic should have a direct relationship to the goal or objective of the system.
Sensor
After the characteristic is sensed, or measured, information pertinent to control is fed back. Exactly what information needs to be transmitted and also the language that will best facilitate the communication process and reduce the possibility of distortion in transmission must be carefully considered. Information that is to be compared with the standard, or plan, should be expressed in the same terms or language as in the original plan to facilitate decision making. Using machine methods (computers) may require extensive translation of the information. Since optimal languages for computation and for human review are not always the same, the relative ease of translation may be a significant factor in selecting the units of measurement or the language unit in the sensing element.
In many instances, the measurement may be sampled rather than providing a complete and continuous feedback of information about the operation. A sampling procedure suggests measuring some segment or portion of the operation that will represent the total.
Comparison with standard
In a social system, the norms of acceptable behavior become the standard against which so-called deviant behavior may be judged. Regulations and laws provide a more formal collection of information for society. Social norms change, but very slowly. In contrast, the standards outlined by a formal law can be changed from one day to the next through revision, discontinuation, or replacement by another.
Information about deviant behavior becomes the basis for controlling social activity. Output information is compared with the standard or norm and significant deviations are noted. In an industrial example, frequency distribution (a tabulation of the number of times a given characteristic occurs within the sample of products being checked) may be used to show the average quality, the spread, and the comparison of output with a standard.
If there is a significant and uncorrectable difference between output and plan, the system is "out of control." This means that the objectives of the system are not feasible in relation to the capabilities of the present design. Either the objectives must be reevaluated or the system redesigned to add new capacity or capability. For example, drug trafficking has been increasing in some cities at an alarming rate. The citizens must decide whether to revise the police system so as to regain control, or whether to modify the law to reflect a different norm of acceptable behavior.
Implementor
The activator unit responds to the information received from the comparator and initiates corrective action. If the system is a machine-to-machine system, the corrective inputs (decision rules) are designed into the network. When the control relates to a man-to-machine or man-to-man system, however, the individual(s) in charge must evaluate (1) the accuracy of the feedback information, (2) the significance of the variation, and (3) what corrective inputs will restore the system to a reasonable degree of stability. Once the decision has been made to direct new inputs into the system, the actual process may be relatively easy. A small amount of energy can change the operation of jet airplanes, automatic steel mills, and hydroelectric power plants. The pilot presses a button, and the landing gear of the airplane goes up or down; the operator of a steel mill pushes a lever, and a ribbon of white-hot steel races through the plant; a worker at a control board directs the flow of electrical energy throughout a regional network of stations and substations. It takes but a small amount of control energy to release or stop large quantities of input.
The comparator may be located far from the operating system, although at least some of the elements must be in close proximity to operations. For example, the measurement (the sensory element) is usually at the point of operations. The measurement information can be transmitted to a distant point for comparison with the standard (comparator), and when deviations occur, the correcting input can be released from the distant point. However, the input (activator) will be located at the operating system. This ability to control from afar means that aircraft can be flown by remote control, dangerous manufacturing processes can be operated from a safe distance, and national organizations can be directed from centralized headquarters.
Process
Step 1. Establishment of Standard.
Standards are the criteria against which actual performance will be measured. Standards are set in both quantitative and qualitative terms.
Step 2. Measurement of actual performance
Performance is measured in an objective and reliable manner. It should be checked in the same unit in which the standards are set.
Step 3. Comparing actual performance with standards.
This step involves comparing the actual performance with standards laid down in order to find the deviations. For example, performance of a salesman in terms of unit sold in a week can be easily measured against the standard output for the week.
Step 4. Analysis of the causes of deviations.
Managers must determine why standards were not met. This step also involves determining whether more control is necessary or if the standard should be changed.
Step 5. Taking corrective action.
After the reasons for deviations have been determined, managers can then develop solutions for issues with meeting the standards and make changes to processes or behaviors.
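These five steps can be expressed as a simple routine; the sales figures and weekly standard below are illustrative assumptions echoing the salesman example above:

```python
# Sketch of the five-step control process using the salesman example.
# All figures are illustrative assumptions.
WEEKLY_STANDARD_UNITS = 120                 # Step 1: establish the standard

def control_cycle(actual_units: int) -> str:
    deviation = actual_units - WEEKLY_STANDARD_UNITS   # Steps 2-3: measure, compare
    if deviation >= 0:
        return "on target: no corrective action needed"
    # Step 4: analyse the cause before acting (stubbed as a simple note).
    cause = "demand drop or training gap (needs manager review)"
    # Step 5: corrective action.
    return f"short by {-deviation} units ({cause}): adjust plan or coach"

print(control_cycle(95))
```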
Classifications
Control may be grouped according to three general classifications:
the nature of the information flow designed into the system (open- or closed-loop control)
the kind of components included in the design (man or machine control systems)
the relationship of control to the decision process (organizational or operational control).
Open- and closed-loop control
A street-lighting system controlled by a timing device is an example of an open-loop system. At a certain time each evening, a mechanical device closes the circuit and energy flows through the electric lines to light the lamps. Note, however, that the timing mechanism is an independent unit and is not measuring the objective function of the lighting system. If the lights should be needed on a dark, stormy day the timing device would not recognize this need and therefore would not activate energy inputs. Corrective properties may sometimes be built into the controller (for example, to modify the time the lights are turned on as the days grow shorter or longer), but this would not close the loop. In another instance, the sensing, comparison, or adjustment may be made through action taken by an individual who is not part of the system. For example, the lights may be turned on by someone who happens to pass by and recognizes the need for additional light.
If control is exercised as a result of the operation rather than because of outside or predetermined arrangements, it is a closed-loop system. A home thermostat is an example of a control device in a closed-loop system. When the room temperature drops below the desired point, the control mechanism closes the circuit to start the furnace and the temperature rises. The furnace is deactivated as the temperature reaches the preselected level. The significant difference between this type of system and an open-loop system is that the control device is an element of the system it serves and measures the performance of the system. In other words, all four control elements are integral to the specific system.
An essential part of a closed-loop system is feedback; that is, the output of the system is measured continually through the item controlled, and the input is modified to reduce any difference or error toward zero. Many of the patterns of information flow in organizations are found to have the nature of closed loops, which use feedback. The reason for such a condition is apparent when one recognizes that any system, if it is to achieve a predetermined goal, must have available to it at all times an indication of its degree of attainment. In general, every goal-seeking system employs feedback.
Human and machine control
The elements of control are easy to identify in machine systems. For example, the characteristic to be controlled might be some variable like speed or temperature, and the sensing device could be a speedometer or a thermometer. An expectation of precision exists because the characteristic is quantifiable and the standard and the normal variation to be expected can be described in exact terms. In automatic machine systems, inputs of information are used in a process of continual adjustment to achieve output specifications. When even a small variation from the standard occurs, the correction process begins. The automatic system is highly structured, designed to accept certain kinds of input and produce specific output, and programmed to regulate the transformation of inputs within a narrow range of variation.
For an illustration of mechanical control: as the load on a steam engine increases and the engine starts to slow down, the regulator reacts by opening a valve that releases additional inputs of steam energy. This new input returns the engine to the desired number of revolutions per minute. This type of mechanical control is crude in comparison to the more sophisticated electronic control systems in everyday use. Consider the complex missile-guidance systems that measure the actual course according to predetermined mathematical calculations and make almost instantaneous corrections to direct the missile to its target.
Machine systems can be complex because of the sophisticated technology, whereas control of people is complex because the elements of control are difficult to determine. In human control systems, the relationship between objectives and associated characteristics is often vague; the measurement of the characteristic may be extremely subjective; the expected standard is difficult to define; and the amount of new inputs required is impossible to quantify. To illustrate, let us refer once more to a formalized social system in which deviant behavior is controlled through a process of observed violation of the existing law (sensing), court hearings and trials (comparison with standard), incarceration when the accused is found guilty (correction), and release from custody after rehabilitation of the individual has occurred.
The speed limit established for freeway driving is one standard of performance that is quantifiable, but even in this instance, the degree of permissible variation and the amount of the actual variation are often a subject of disagreement between the patrolman and the suspected violator. The complexity of society is reflected in many laws and regulations, which establish the general standards for economic, political, and social operations. A citizen may not know or understand the law and consequently would not know whether or not he was guilty of a violation.
Most organized systems are some combination of man and machine; some elements of control may be performed by machine whereas others are accomplished by man. In addition, some standards may be precisely structured whereas others may be little more than general guidelines with wide variations expected in output. Man must act as the controller when measurement is subjective and judgment is required. Machines such as computers are incapable of making exceptions from the specified control criteria regardless of how much a particular case might warrant special consideration. A pilot acts in conjunction with computers and automatic pilots to fly large jets. In the event of unexpected weather changes, or possible collision with another plane, he must intercede and assume direct control.
Organizational and operational control
The concept of organizational control is implicit in the bureaucratic theory of Max Weber. Associated with this theory are such concepts as "span of control", "closeness of supervision", and "hierarchical authority". Weber's view tends to include all levels or types of organizational control as being the same. More recently, writers have tended to differentiate the control process between that which emphasizes the nature of the organizational or systems design and that which deals with daily operations. To illustrate the difference, we "evaluate" the performance of a system to see how effective and efficient the design proved to be or to discover why it failed. In contrast, we operate and "control" the system with respect to the daily inputs of material, information, and energy. In both instances, the elements of feedback are present, but organizational control tends to review and evaluate the nature and arrangement of components in the system, whereas operational control tends to adjust the daily inputs.
The direction for organizational control comes from the goals and strategic plans of the organization. General plans are translated into specific performance measures such as share of the market, earnings, return on investment, and budgets. The process of organizational control is to review and evaluate the performance of the system against these established norms. Rewards for meeting or exceeding standards may range from special recognition to salary increases or promotions. On the other hand, a failure to meet expectations may signal the need to reorganize or redesign.
In organizational control, the approach used in the program of review and evaluation depends on the reason for the evaluation — that is, is it because the system is not effective (accomplishing its objectives)? Is the system failing to achieve an expected standard of efficiency? Is the evaluation being conducted because of a breakdown or failure in operations? Is it merely a periodic audit-and-review process?
When a system has failed or is in great difficulty, special diagnostic techniques may be required to isolate the trouble areas and to identify the causes of the difficulty. It is appropriate to investigate areas that have been troublesome before or areas where some measure of performance can be quickly identified. For example, if an organization's output backlog builds rapidly, it is logical to check first to see if the problem is due to such readily obtainable measures as increased demand or to a drop in available man hours. When a more detailed analysis is necessary, a systematic procedure should be followed.
In contrast to organizational control, operational control serves to regulate the day-to-day output relative to schedules, specifications, and costs. Is the output of product or service the proper quality and is it available as scheduled? Are inventories of raw materials, goods-in-process, and finished products being purchased and produced in the desired quantities? Are the costs associated with the transformation process in line with cost estimates? Is the information needed in the transformation process available in the right form and at the right time? Is the energy resource being utilized efficiently?
The most difficult task of management concerns monitoring the behavior of individuals, comparing performance to some standard, and providing rewards or punishment as indicated. Sometimes this control over people relates entirely to their output. For example, a manager might not be concerned with the behavior of a salesman as long as sales were as high as expected. In other instances, close supervision of the salesman might be appropriate if achieving customer satisfaction were one of the sales organization's main objectives.
The larger the unit, the more likely that the control characteristic will be related to some output goal. It also follows that if it is difficult or impossible to identify the actual output of individuals, it is better to measure the performance of the entire group. This means that individuals' levels of motivation and the measurement of their performance become subjective judgments made by the supervisor. Controlling output also suggests the difficulty of controlling individuals' performance and relating this to the total system's objectives.
Problems
The perfect plan could be outlined if every possible variation of input could be anticipated and if the system would operate as predicted. This kind of planning is neither realistic, economical, nor feasible for most business systems. If it were feasible, planning requirements would be so complex that the system would be out of date before it could be operated. Therefore, we design control into systems. This requires more thought in the systems design but allows more flexibility of operations and makes it possible to operate a system using unpredictable components and undetermined input. Still, the design and effective operation of control are not without problems.
The objective of the system is to perform some specified function.
The objective of organizational control is to see that the specified function is achieved.
The objective of operational control is to ensure that variations in daily output are maintained within prescribed limits.
It is one thing to design a system that contains all of the elements of control, and quite another to make it operate true to the objectives of its design. Operating "in control" or "according to plan" does not guarantee optimum performance. For example, the plan may not make the best use of the inputs of materials, energy, or information; in other words, the system may not be designed to operate efficiently. Some of the more typical problems relating to control include the difficulty of measurement, the problem of timing information flow, and the setting of proper standards.
When objectives are not limited to quantitative output, system effectiveness is difficult to measure and perplexing to evaluate. Many of the characteristics pertaining to output do not lend themselves to quantitative measurement. This is particularly true when inputs of human energy cannot be related directly to output. The same situation applies to machines and other equipment associated with human involvement, when output is not in specific units. In evaluating man-machine or human-oriented systems, psychological and sociological factors obviously do not translate easily into quantifiable terms. For example, how does mental fatigue affect the quality or quantity of output? And, if it does, is mental fatigue a function of the lack of a challenging assignment or of the fear of a potential injury?
Subjective inputs may be translated into numerical data, but there is always the danger of an incorrect appraisal and translation, and the danger that the analyst may place undue confidence in such data once they have been quantified. Let us suppose, for example, that the decisions made by an executive are rated from 1 to 10, 10 being the perfect decision. After determining the ranking for each decision, adding the rankings, and dividing by the total number of decisions made, the average would indicate a particular executive's score in his decision-making role. On the basis of this score, judgments, which could be quite erroneous, might be made about his decision-making effectiveness. One executive with a ranking of 6.75 might be considered more effective than another with a ranking of 6.25, and yet the two managers may have made decisions under different circumstances and conditions. External factors over which neither executive had any control may account for the difference in "effectiveness".
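The scoring scheme just described is simple enough to sketch in a few lines of code; the executives, ratings, and values below are hypothetical illustrations, not data from the source:

    # Average decision ratings for two executives, per the scheme above.
    def average_rating(ratings):
        """Mean of per-decision ratings on a 1-10 scale (10 = perfect)."""
        return sum(ratings) / len(ratings)

    exec_a = [8, 7, 6, 7, 5, 8, 7, 6]   # decisions made under favorable conditions
    exec_b = [6, 7, 5, 7, 6, 7, 6, 6]   # decisions made under adverse conditions

    print(average_rating(exec_a))   # 6.75
    print(average_rating(exec_b))   # 6.25
    # The 0.5-point gap says nothing about the differing circumstances,
    # which is exactly the danger of over-trusting quantified judgments.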
Quantifying human behavior, despite the extreme difficulty, subjectivity, and imprecision involved compared with the measurement of physical characteristics, is the most prevalent and important measurement made in large systems. The behavior of individuals ultimately dictates the success or failure of every man-made system.
Information flow
Another problem of control relates to the improper timing of information introduced into the feedback channel. Improper timing can occur in both computerized and human control systems, either by mistakes in measurement or in judgment. The more rapid the system's response to an error signal, the more likely it is that the system could overadjust; yet the need for prompt action is important because any delay in providing corrective input could also be crucial. A system generating feedback inconsistent with current need will tend to fluctuate and will not adjust in the desired manner.
The most serious problem in information flow arises when the delay in feedback is exactly one-half cycle, for then the corrective action is superimposed on a variation from norm which, at that moment, is in the same direction as that of the correction. This causes the system to overcorrect, and then, if the reverse adjustment is made out of cycle, to correct too much in the other direction, and so on until the system fluctuates ("oscillates") out of control. This phenomenon is illustrated in Figure 1, "Oscillation and Feedback". If, at Point A, the trend below standard is recognized and new inputs are added, but not until Point B, the system will overreact and go beyond the allowable limits. Again, if this is recognized at Point C but inputs are not withdrawn until Point D, it will cause the system to drop below the lower limit of allowable variation.
One solution to this problem rests in anticipation: measuring not only the change but also the rate of change. The correction is then made a function of both the magnitude and the rate of the error. The difficulty also might be overcome by reducing the time lag between the measurement of the output and the adjustment to input. If a trend can be indicated, a time lead can be introduced to compensate for the time lag, bringing about consistency between the need for correction and the type and magnitude of the indicated action. It is usually more effective for an organization to maintain continuous measurement of its performance and to make small adjustments in operations constantly (this assumes a highly sensitive control system). Information feedback, consequently, should be timely and correct to be effective; that is, the information should provide an accurate indication of the status of the system.
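The timing problem can be made concrete with a toy simulation; the process model, gains, and delay below are illustrative assumptions, not part of the original discussion:

    # Toy model of delayed corrective feedback: a process drifts from its
    # standard, and corrections are computed from stale measurements.
    def simulate(delay, gain, rate_gain=0.0, steps=60, drift=0.05):
        """Return the deviation-from-standard series under delayed feedback."""
        x = [1.0] * (delay + 2)                # seed history of deviations
        for _ in range(steps):
            err = x[-1 - delay]                # error observed 'delay' steps late
            rate = err - x[-2 - delay]         # observed rate of change
            x.append(x[-1] - gain * err - rate_gain * rate + drift)
        return x

    prompt_feedback = simulate(delay=0, gain=0.9)  # settles near drift/gain
    late_feedback = simulate(delay=5, gain=0.9)    # corrections land out of
                                                   # phase; the series oscillates
    # Anticipation sets rate_gain > 0, so the correction reflects both the
    # size and the trend of the error, compensating part of the time lag.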
Setting standards
Setting the proper standards or control limits is a problem in many systems. Parents are confronted with this dilemma in expressing what they expect of their children, and business managers face the same issue in establishing standards that will be acceptable to employees. Some theorists have proposed that workers be allowed to set their own standards, on the assumption that when people establish their own goals, they are more apt to accept and achieve them.
Standards should be as precise as possible and communicated to all persons concerned. Moreover, communication alone is not sufficient; understanding is necessary. In human systems, standards tend to be poorly defined, and the allowable range of deviation from standard is also indefinite. For example, how many hours each day should a professor be expected to be available for student consultation? Or what kind of behavior should be expected of students in the classroom? Discretion and personal judgment play a large part in such systems in determining whether corrective action should be taken.
Perhaps the most difficult problem in human systems is the unresponsiveness of individuals to indicated correction. This may take the form of opposition to and subversion of control, or it may stem from a lack of defined responsibility or authority to take action. Leadership and positive motivation then become vital ingredients in achieving the proper response to input requirements.
Most control problems relate to design; thus the solution to these problems must start at that point. Automatic control systems, provided that human intervention is possible to handle exceptions, offer the greatest promise. There is a danger, however, that we may measure characteristics that do not represent effective performance (as in the case of the speaker who requested that all of the people who could not hear what he was saying should raise their hands), or that improper information may be communicated.
Importance of control
Motivates efficient employees
Promotes discipline
Helps future planning
Aids efficiency
Decreases risk
Aids coordination
Limitations
1. Difficult to set up quantitative standards: Controlling loses much of its usefulness when standards and norms cannot be expressed quantitatively. Human behaviour, job satisfaction, and employee morale do not lend themselves to quantitative measurement, which makes measuring performance and comparing it against benchmarks difficult; in such cases the appraisal rests on the manager's judgment. Qualitative characteristics can be approached only through proxy measures: the behaviour of employees, for example, cannot be measured directly, but absenteeism, the frequency of conflicts, and turnover can be taken into account, and if all of these are high, employee behaviour in the institution can be said to be poor. It is clearly not possible to set criteria for every activity, and the available surrogate measures are not completely accurate.
2. Little control over external factors: Some issues are not under the control of management or the organization. No company can control the availability of technology, the pace of new information technology, or the intensity of competition in its market, nor can a project operating under a government system halt external developments. The company therefore cannot control external factors such as government policy, technological change, and competition, and these can make things unmanageable. A manager can control internal factors (e.g. manpower and infrastructure) but not external ones (e.g. political and social change, competition), so policies must be put in place through planning to anticipate such changes rather than regulate them.
3. Resistance from employees: When a manager exercises control over subordinates, some of them may refuse to comply or may not report as directed by the manager or the company. This usually happens when control is imposed with little or no discussion, because employees see it as a restriction on their freedom. For example, workers may resist when a GPS-based system tracks their location, or may complain when kept under surveillance with the help of CCTV (closed-circuit television) cameras installed to monitor their work. An employer can impose rules and regulations but cannot force employees to embrace them. Moreover, the business environment is constantly changing, and each new regulatory framework introduced in response tends to meet the same resistance.
4. Expensive to install: Controlling is expensive because it involves a great deal of money, time, and effort; organizations need control at different management levels, along with the staff and equipment required to measure the performance of all employees and send reports to management. In some cases the cost of administering a control system can exceed the benefit it returns. An effective control system is therefore practical mainly for large companies; small organizations usually cannot afford to set one up.
5. Overcontrolling can lead to employee turnover: Managers often keep employees under repeated, close observation to monitor their behaviour on the ground, especially in the case of new members of staff and during organizational change. With too much control, employees feel their freedom is being violated; an employee who becomes sufficiently irritated by overcontrolling may move to another company that allows people to work according to their own preferences. Formal complaint procedures may give an aggrieved employee further recourse, and in any case managing such a system takes a great deal of time and effort.
See also
References
Chenhall, R., 2003. Management control system design within its organizational context: Findings from contingency-based research and directions for the future, Accounting, Organizations and Society, 28(2-3), 127-168.
External links
Business terms
Control theory
Control (social and political) | Control (management) | ["Mathematics"] | 6,832 | ["Applied mathematics", "Control theory", "Dynamical systems"] |
11,524,295 | https://en.wikipedia.org/wiki/Tetrahalomethane | Tetrahalomethanes are fully halogenated methane derivatives of general formula $\mathrm{CF}_k\mathrm{Cl}_l\mathrm{Br}_m\mathrm{I}_n\mathrm{At}_p$, where $k + l + m + n + p = 4$. Tetrahalomethanes are on the border of inorganic and organic chemistry, thus they can be assigned both inorganic and organic names by IUPAC: tetrafluoromethane - carbon tetrafluoride, tetraiodomethane - carbon tetraiodide, dichlorodifluoromethane - carbon dichloride difluoride.
Each halogen (F, Cl, Br, I, At) forms a corresponding tetrahalomethane, but their stability decreases with decreasing carbon-halogen bond energy, in the order $\mathrm{CF_4} > \mathrm{CCl_4} > \mathrm{CBr_4} > \mathrm{CI_4}$: from the exceptionally stable gaseous tetrafluoromethane, with a bond energy of 515 kJ·mol⁻¹, to solid tetraiodomethane.
Many mixed halomethanes are also known, such as $\mathrm{CBrClF_2}$.
Uses
Fluorine-, chlorine-, and sometimes bromine-substituted halomethanes were used as refrigerants, commonly known as CFCs (chlorofluorocarbons).
See also
Monohalomethane
Dihalomethane
Trihalomethane
Inorganic carbon compounds
Nonmetal halides | Tetrahalomethane | ["Chemistry"] | 269 | ["Inorganic carbon compounds", "Inorganic compounds"] |
11,525,168 | https://en.wikipedia.org/wiki/De-asphalter | A de-asphalter is a unit in a crude oil refinery or bitumen upgrader that separates asphalt from the residuum fraction of crude oil or bitumen. The primary purpose of the separation is to remove contaminants (asphaltenes, metals) from the feed that would cause rapid deactivation of catalysts in downstream processing units. In doing so, the de-asphalter is the first step in a series of processes that upgrade a low-value feedstock to high-value refined products.
The de-asphalter unit is usually placed after the vacuum distillation tower and receives feed from the bottoms (residuum) stream. It is usually a solvent de-asphalter (SDA) unit. The SDA separates the asphalt from the feedstock because light hydrocarbons will dissolve aliphatic compounds but not asphaltenes. The output from the de-asphalter unit is de-asphalted oil ("DAO") and asphalt.
DAO from propane de-asphalting has the highest quality but lowest yield, whereas using pentane may double or triple the yield from a heavy feed, but at the expense of contamination by metals and carbon residues that shorten the life of downstream cracking catalysts. If the solvent is butane the unit will be referred to as a butane de-asphalter ("BDA") and if the solvent is propane, it will be called a propane de-asphalter ("PDA") unit.
References
Study of selected petroleum refining residuals by US EPA
Lubricants and Lubrication (Second Edition)
External links
Solvent de-asphalting
Solvent de-asphalting of vacuum residuum
Asphalt used for gasification
Chemical equipment
Distillation
Petroleum technology | De-asphalter | ["Chemistry", "Engineering"] | 362 | ["Separation processes", "Chemical equipment", "Petroleum technology", "Petroleum engineering", "Distillation", "nan"] |
11,527,974 | https://en.wikipedia.org/wiki/Glycerite | A traditional glycerite is a fluid extract of an herb or other medicinal substance made using glycerin as the majority of the fluid extraction medium.
Definition
According to King's American Dispensatory (1898):
Glycerita.—Glycerites.
By this class of preparations is generally understood solutions of medicinal substances in glycerin, although in certain instances the various Pharmacopoeias deviate to an extent. The term Glycerita as here applied to fluid glycerines, or solutions of agents in glycerin, is preferable to the ordinary names, "glyceroles," "glycerates," or "glycemates," etc., and includes all fluid preparations of the kind referred to, whether for internal administration or local application.
Glycerites may consist of either vegetable source glycerin, animal source glycerin or a combination of the two. In the case of liquid herbal products (a segment of the dietary supplements industry), the general rule is to utilize vegetable glycerin only, while nutraceuticals (another segment of the dietary supplements industry) might use a combination of both vegetable and animal source derived glycerin.
Alcohol-free (as opposed to alcohol-removed) glycerite products, in which alcohol is never used or added at any time, are preferred by those desiring or requiring that no alcohol be used in making products or added thereafter. The reasons are typically for personal or religious beliefs.
Muslims, for instance, represent the largest population requiring an alcohol-free standard. Halal, the Islamic dietary law, lists alcohol as one of the explicitly forbidden (Haram) substances: anything made with, or at any time containing, alcohol is forbidden. USP-grade vegetable glycerin is acceptable for Halal certification, and in some instances a Halal standard may (but does not always) accept Kosher-certified USP-grade vegetable glycerin as meeting Halal standards (i.e., as Halal 'compliant'). Where the issue of Halal alcohol-free versus Haram alcohol-removed glycerites is concerned, even though U.S. FDA Title 21 rules forbid referring to or labeling a product as 'Alcohol-Free' if it has at any time come into contact with alcohol, the Islamic community has taken the stance that a product listed as alcohol-free does not always meet "Alcohol-Free" as defined by Halal standards or U.S. FDA Title 21 rules, since many such products may in fact have been made using alcohol as an ingredient, after which the alcohol is removed; such products would still be Haram under Islamic dietary law and in breach of U.S. FDA Title 21 labeling rules. The Islamic community is therefore encouraged to first ascertain whether a botanical glycerite is actually Halal 'Alcohol-Free' (e.g., Halal certified or Halal compliant) or is Haram 'Alcohol-Removed' with glycerin thereafter added.
References
Pharmacognosy | Glycerite | ["Chemistry"] | 657 | ["Pharmacology", "Pharmacognosy"] |
11,528,159 | https://en.wikipedia.org/wiki/NTU%20method | The number of transfer units (NTU) method is used to calculate the rate of heat transfer in heat exchangers (especially parallel flow, counter current, and cross-flow exchangers) when there is insufficient information to calculate the log mean temperature difference (LMTD). Alternatively, this method is useful for determining the expected heat exchanger effectiveness from the known geometry. In heat exchanger analysis, if the fluid inlet and outlet temperatures are specified or can be determined by a simple energy balance, the LMTD method can be used; when these temperatures are not available, the NTU or effectiveness-NTU method is used.
The effectiveness-NTU method is applicable to all flow arrangements, but for arrangements other than parallel flow, crossflow, and counterflow, the effectiveness must be obtained by numerical solution of the governing partial differential equations, as no analytical equation for the LMTD or effectiveness exists.
Defining and using heat exchanger effectiveness
To define the effectiveness of a heat exchanger, we need to find the maximum possible heat transfer that could hypothetically be achieved in a counter-flow heat exchanger of infinite length. There, one fluid would experience the maximum possible temperature difference, $T_{h,in} - T_{c,in}$, the difference between the inlet temperature of the hot stream and the inlet temperature of the cold stream. First, the specific heat capacity of each of the two fluid streams, denoted $c_p$, must be known. By definition, $c_p$ is the derivative of enthalpy with respect to temperature:

$c_p = \left(\frac{\partial h}{\partial T}\right)_p$

This information can usually be found in a thermodynamics textbook or obtained from various software packages. Additionally, the mass flowrates ($\dot{m}$) of the two streams exchanging heat must be known (here, the cold stream is denoted with subscript 'c' and the hot stream with subscript 'h'). The method proceeds by calculating the heat capacity rates (i.e. mass flow rate multiplied by specific heat capacity) $C_h$ and $C_c$ for the hot and cold fluids respectively. To determine the maximum possible heat transfer rate in the heat exchanger, the minimum heat capacity rate must be used, denoted $C_{\min} = \min(C_h, C_c)$, where each heat capacity rate is

$C = \dot{m}\, c_p$
where $\dot{m}$ is the mass flow rate and $c_p$ is the fluid's specific heat capacity at constant pressure. The maximum possible heat transfer rate is then determined by the following expression:

$q_{\max} = C_{\min}\left(T_{h,in} - T_{c,in}\right)$
Here, $q_{\max}$ is the maximum rate of heat that could be transferred between the fluids per unit time. $C_{\min}$ must be used because it is the fluid with the lowest heat capacity rate that would, in this hypothetical infinite-length exchanger, actually undergo the maximum possible temperature change. The other fluid would change temperature more slowly along the heat exchanger length. The method, at this point, is concerned only with the fluid undergoing the maximum temperature change.
The effectiveness of the heat exchanger, $\varepsilon$, is the ratio between the actual heat transfer rate and the maximum possible heat transfer rate:

$\varepsilon = \frac{q}{q_{\max}}$
where the real heat transfer rate can be determined either from the cold fluid or the hot fluid (they must provide equivalent results):

$q = C_c\left(T_{c,out} - T_{c,in}\right) = C_h\left(T_{h,in} - T_{h,out}\right)$
Effectiveness is a dimensionless quantity between 0 and 1. If we know $\varepsilon$ for a particular heat exchanger and the inlet conditions of the two flow streams, we can calculate the amount of heat being transferred between the fluids by:

$q = \varepsilon\, C_{\min}\left(T_{h,in} - T_{c,in}\right)$
Then, having determined the actual heat transfer from the effectiveness and inlet temperatures, the outlet temperatures can be determined from the equation above.
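The sequence of calculations above is short enough to express directly; the following Python sketch assumes single-phase fluids with constant specific heats, and all variable names and numbers are illustrative:

    # From a known effectiveness and the inlet states, recover the duty
    # and both outlet temperatures (constant-property, single-phase sketch).
    def outlet_temperatures(eff, m_dot_h, cp_h, T_h_in, m_dot_c, cp_c, T_c_in):
        C_h = m_dot_h * cp_h                  # hot-side heat capacity rate, W/K
        C_c = m_dot_c * cp_c                  # cold-side heat capacity rate, W/K
        C_min = min(C_h, C_c)
        q_max = C_min * (T_h_in - T_c_in)     # maximum possible duty, W
        q = eff * q_max                       # actual duty, W
        T_h_out = T_h_in - q / C_h            # energy balance, hot stream
        T_c_out = T_c_in + q / C_c            # energy balance, cold stream
        return q, T_h_out, T_c_out

    # Example: hot water (cp ~ 4180 J/kg-K) against air (cp ~ 1005 J/kg-K).
    q, T_h_out, T_c_out = outlet_temperatures(
        0.7, 0.5, 4180.0, 90.0, 1.2, 1005.0, 20.0)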
Relating Effectiveness to the Number of Transfer Units (NTU)
For any heat exchanger it can be shown that the effectiveness of the heat exchanger is related to a non-dimensional term called the "number of transfer units," or NTU:

$\varepsilon = f\!\left(\mathrm{NTU},\, \frac{C_{\min}}{C_{\max}}\right)$

For a given geometry, $\varepsilon$ can be calculated using correlations in terms of the "heat capacity ratio" $C_r = C_{\min}/C_{\max}$ and NTU:

$\mathrm{NTU} = \frac{UA}{C_{\min}}$

The product $UA$ describes heat transfer across a surface. Here, $U$ is the overall heat transfer coefficient, $A$ is the total heat transfer area, and $C_{\min}$ is the minimum heat capacity rate. To better understand where this definition of NTU comes from, consider the following heat transfer energy balance, which is an extension of the energy balance above:

$q = UA\,\Delta T_{lm} = C_{\min}\,\Delta T_{C_{\min}}$

From this energy balance, it is clear that NTU relates the temperature change of the flow with the minimum heat capacity rate to the log mean temperature difference ($\Delta T_{lm}$):

$\mathrm{NTU} = \frac{UA}{C_{\min}} = \frac{\Delta T_{C_{\min}}}{\Delta T_{lm}}$

Starting from the differential equations that describe heat transfer, several "simple" correlations between effectiveness and NTU can be derived. For brevity, the following summarizes the effectiveness-NTU correlations for some of the most common flow configurations:
For example, the effectiveness of a parallel flow heat exchanger is calculated with:

$\varepsilon = \frac{1 - \exp\left[-\mathrm{NTU}\,(1 + C_r)\right]}{1 + C_r}$
Or the effectiveness of a counter-current flow heat exchanger is calculated with (for $C_r < 1$):

$\varepsilon = \frac{1 - \exp\left[-\mathrm{NTU}\,(1 - C_r)\right]}{1 - C_r \exp\left[-\mathrm{NTU}\,(1 - C_r)\right]}$
For a balanced counter-current flow heat exchanger (balanced meaning $C_r = 1$, a desirable scenario because it reduces irreversible entropy production given sufficient heat transfer area):

$\varepsilon = \frac{\mathrm{NTU}}{1 + \mathrm{NTU}}$
A single-stream heat exchanger is a special case in which $C_r = 0$. This occurs when $C_{\max} \rightarrow \infty$ or $C_{\min} \rightarrow 0$ and may represent a situation in which a phase change (condensation or evaporation) is occurring in one of the heat exchanger fluids or when one of the heat exchanger fluids is being held at a fixed temperature. In this special case the heat exchanger behavior is independent of the flow arrangement and the effectiveness is given by:

$\varepsilon = 1 - e^{-\mathrm{NTU}}$
For a crossflow heat exchanger with both fluids unmixed, the effectiveness is:

$\varepsilon = 1 - e^{-\mathrm{NTU}} - e^{-(1 + C_r)\,\mathrm{NTU}} \sum_{n=1}^{\infty} C_r^{\,n}\, P_n(\mathrm{NTU})$

where $P_n$ is the polynomial function

$P_n(y) = \frac{1}{(n+1)!} \sum_{j=1}^{n} \frac{n+1-j}{j!}\, y^{\,n+j}$
If both fluids are mixed in the crossflow heat exchanger, then

$\varepsilon = \left(\frac{1}{1 - e^{-\mathrm{NTU}}} + \frac{C_r}{1 - e^{-C_r\,\mathrm{NTU}}} - \frac{1}{\mathrm{NTU}}\right)^{-1}$
If one of the fluids in the crossflow heat exchanger is mixed and the other is unmixed, the result depends on which one has the minimum heat capacity rate. If $C_{\min}$ corresponds to the mixed fluid, the result is

$\varepsilon = 1 - \exp\!\left(-\frac{1 - e^{-C_r\,\mathrm{NTU}}}{C_r}\right)$
whereas if $C_{\min}$ corresponds to the unmixed fluid, the solution is

$\varepsilon = \frac{1}{C_r}\left(1 - \exp\!\left[-C_r\left(1 - e^{-\mathrm{NTU}}\right)\right]\right)$
All these formulas for crossflow heat exchangers are also valid for $C_r = 1$.
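The closed-form correlations listed above translate directly into code; the following sketch implements them as given, with function and argument names of our own choosing (standard-reference formulas, not an excerpt from the source):

    import math

    # Closed-form effectiveness-NTU correlations for common arrangements.
    # c_r = C_min / C_max, in [0, 1]; c_r == 0 is the single-stream case.
    def effectiveness(ntu, c_r, arrangement):
        if c_r == 0.0:                            # phase change / fixed temperature
            return 1.0 - math.exp(-ntu)
        if arrangement == "parallel":
            return (1.0 - math.exp(-ntu * (1.0 + c_r))) / (1.0 + c_r)
        if arrangement == "counter":
            if math.isclose(c_r, 1.0):            # balanced counterflow limit
                return ntu / (1.0 + ntu)
            e = math.exp(-ntu * (1.0 - c_r))
            return (1.0 - e) / (1.0 - c_r * e)
        if arrangement == "crossflow_both_mixed":
            return 1.0 / (1.0 / (1.0 - math.exp(-ntu))
                          + c_r / (1.0 - math.exp(-c_r * ntu))
                          - 1.0 / ntu)
        if arrangement == "crossflow_cmin_mixed":
            return 1.0 - math.exp(-(1.0 - math.exp(-c_r * ntu)) / c_r)
        if arrangement == "crossflow_cmin_unmixed":
            return (1.0 - math.exp(-c_r * (1.0 - math.exp(-ntu)))) / c_r
        raise ValueError(f"unknown arrangement: {arrangement}")

    # Counterflow always beats parallel flow at equal NTU and c_r:
    assert effectiveness(2.0, 0.5, "counter") > effectiveness(2.0, 0.5, "parallel")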
Additional effectiveness-NTU analytical relationships have been derived for other flow arrangements, including shell-and-tube heat exchangers with multiple passes and different shell types, and plate heat exchangers.
Effectiveness-NTU method for gaseous mass transfer
It is common in the field of mass transfer system design and modeling to draw analogies between heat transfer and mass transfer. However, a mass transfer-analogous definition of the effectiveness-NTU method requires some additional terms. One common misconception is that gaseous mass transfer is driven by concentration gradients; in reality, it is the partial pressure of the given gas that drives mass transfer. In the same way that the heat transfer definition includes the specific heat capacity of the fluid, which describes the change in enthalpy of the fluid with respect to change in temperature and is defined as

$c_p = \left(\frac{\partial h}{\partial T}\right)_p$

a mass transfer-analogous specific mass capacity is required. This specific mass capacity should describe the change in concentration of the transferring gas relative to the partial pressure difference driving the mass transfer. This results in a definition for the specific mass capacity of gas 'x' as follows:

$c_{p_x} = \frac{\partial \omega_x}{\partial p_x}$

Here, $\omega_x$ represents the mass ratio of gas 'x' (meaning the mass of gas 'x' relative to the mass of all other non-'x' gas) and $p_x$ is the partial pressure of gas 'x'. Using the ideal gas formulation for the mass ratio gives the following definition for the specific mass capacity:

$c_{p_x} = \frac{M_x}{\bar{M}\left(P - p_x\right)}$

Here, $M_x$ is the molecular weight of gas 'x', $\bar{M}$ is the average molecular weight of all other gas constituents, and $P$ is the total pressure. With this information, the NTU for gaseous mass transfer of gas 'x' can be defined as follows:

$\mathrm{NTU} = \frac{U_m A}{\dot{m}\, c_{p_x}}$

Here, $U_m$ is the overall mass transfer coefficient, which could be determined by empirical correlations, $A$ is the surface area for mass transfer (particularly relevant in membrane-based separations), and $\dot{m}$ is the mass flowrate of bulk fluid (e.g., the mass flowrate of air in an application where water vapor is being separated from the air mixture). At this point, all of the same heat transfer effectiveness-NTU correlations will accurately predict the mass transfer performance, as long as the heat transfer terms in the definition of NTU have been replaced by the mass transfer terms, as shown above. Similarly, it follows that the definition of $C_r$ becomes

$C_r = \frac{\left(\dot{m}\, c_{p_x}\right)_{\min}}{\left(\dot{m}\, c_{p_x}\right)_{\max}}$
Effectiveness-NTU method for dehumidification applications
One particularly useful application of the effectiveness-NTU framework described above is membrane-based air dehumidification. In this case, the specific mass capacity can be defined for humid air and is termed the "specific humidity capacity":

$c_{p_w} = \frac{M_w}{M_{air}\, p_{air}}$

Here, $M_w$ is the molecular weight of water (vapor), $M_{air}$ is the average molecular weight of air, and $p_{air}$ is the partial pressure of the air alone (not including the partial pressure of water vapor in the air mixture), which can be approximated from the partial pressure of water vapor at the inlet, before dehumidification occurs: $p_{air} \approx P - p_{w,in}$. From here, all of the previously described equations can be used to determine the effectiveness of the mass exchanger.
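As a numeric sketch of the humidity-capacity estimate above (the formula follows the reconstruction given in this section and should be treated as an assumption; the state values are illustrative):

    # Specific humidity capacity at a membrane dehumidifier inlet.
    M_W = 18.015          # molecular weight of water vapor, kg/kmol
    M_AIR = 28.97         # average molecular weight of dry air, kg/kmol
    P_TOTAL = 101325.0    # total pressure, Pa
    p_v_in = 2339.0       # inlet vapor partial pressure (~20 C saturation), Pa

    p_air = P_TOTAL - p_v_in             # partial pressure of the dry air, Pa
    c_humidity = M_W / (M_AIR * p_air)   # kg water / (kg air * Pa)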
Importance of defining the specific mass capacity
It is very common, especially in dehumidification applications, to define the mass transfer driving force as the concentration difference. When deriving effectiveness-NTU correlations for membrane-based gas separations, this is valid only if the total pressures are approximately equal on both sides of the membrane (e.g., in an energy recovery ventilator for a building), since the partial pressure and concentration are then proportional. However, if the total pressures are not approximately equal on both sides of the membrane, the low-pressure side could have a higher "concentration" but a lower partial pressure of the given gas (e.g., water vapor in a dehumidification application) than the high-pressure side; using the concentration as the driving force is therefore not physically accurate.
References
Kays, W. M. & London, A. L. (1955). Compact Heat Exchangers.
Incropera, F. P. & DeWitt, D. P. (1990). Fundamentals of Heat and Mass Transfer (3rd ed.), pp. 658–660. New York: Wiley.
Heat transfer | NTU method | ["Physics", "Chemistry"] | 1,979 | ["Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics"] |
14,111,610 | https://en.wikipedia.org/wiki/ERBB3 | Receptor tyrosine-protein kinase erbB-3, also known as HER3 (human epidermal growth factor receptor 3), is a membrane bound protein that in humans is encoded by the ERBB3 gene.
ErbB3 is a member of the epidermal growth factor receptor (EGFR/ERBB) family of receptor tyrosine kinases. The kinase-impaired ErbB3 is known to form active heterodimers with other members of the ErbB family, most notably the ligand binding-impaired ErbB2.
Gene and expression
The human ERBB3 gene is located on the long arm of chromosome 12 (12q13). It spans 23,651 base pairs and encodes a protein of 1,342 amino acids.
During human development, ERBB3 is expressed in skin, bone, muscle, nervous system, heart, lungs, and intestinal epithelium. ERBB3 is expressed in normal adult human gastrointestinal tract, reproductive system, skin, nervous system, urinary tract, and endocrine system.
Structure
ErbB3, like the other members of the ErbB receptor tyrosine kinase family, consists of an extracellular domain, a transmembrane domain, and an intracellular domain. The extracellular domain contains four subdomains (I-IV). Subdomains I and III are leucine-rich and are primarily involved in ligand binding. Subdomains II and IV are cysteine-rich and most likely contribute to protein conformation and stability through the formation of disulfide bonds. Subdomain II also contains the dimerization loop required for dimer formation. The cytoplasmic domain contains a juxtamembrane segment, a kinase domain, and a C-terminal domain.
Unliganded receptor adopts a conformation that inhibits dimerization. Binding of neuregulin to the ligand binding subdomains (I and III) induces a conformational change in ErbB3 that causes the protrusion of the dimerization loop in subdomain II, activating the protein for dimerization.
Function
ErbB3 has been shown to bind the ligands heregulin and NRG-2. Ligand binding causes a change in conformation that allows for dimerization, phosphorylation, and activation of signal transduction. ErbB3 can heterodimerize with any of the other three ErbB family members. The theoretical ErbB3 homodimer would be non-functional because the kinase-impaired protein requires transphosphorylation by its binding partner to be active.
Unlike the other ErbB receptor tyrosine kinase family members which are activated through autophosphorylation upon ligand binding, ErbB3 was found to be kinase impaired, having only 1/1000 the autophosphorylation activity of EGFR and no ability to phosphorylate other proteins. Therefore, ErbB3 must act as an allosteric activator.
Interaction with ErbB2
The ErbB2-ErbB3 dimer is considered the most active of the possible ErbB dimers, in part because ErbB2 is the preferred dimerization partner of all the ErbB family members, and ErbB3 is the preferred partner of ErbB2. This heterodimer conformation allows the signaling complex to activate multiple pathways including the MAPK, PI3K/Akt, and PLCγ. There is also evidence that the ErbB2-ErbB3 heterodimer can bind and be activated by EGF-like ligands.
Activation of the PI3K/Akt pathway
The intracellular domain of ErbB3 contains 6 recognition sites for the SH2 domain of the p85 subunit of PI3K. ErbB3 binding causes the allosteric activation of p110α, the lipid kinase subunit of PI3K, a function not found in either EGFR or ErbB2.
Role in cancer
While no evidence has been found that ErbB3 overexpression, constitutive activation, or mutation alone is oncogenic, the protein as a heterodimerization partner, most critically with ErbB2, is implicated in growth, proliferation, chemotherapeutic resistance, and the promotion of invasion and metastasis.
ErbB3 is associated with targeted therapeutic resistance in numerous cancers including resistance to:
HER2 inhibitors in HER2+ breast cancers
anti-estrogen therapy in ER+ breast cancers
EGFR inhibitors in lung and head and neck cancers
hormones in prostate cancers
IGF1R inhibitors in hepatomas
BRAF inhibitors in melanoma
ErbB2 overexpression may promote the formation of active heterodimers with ErbB3 and other ErbB family members without the need for ligand binding, resulting in weak but constitutive signaling activity.
Role in normal development
ERBB3 is expressed in the mesenchyme of the endocardial cushion, which will later develop into the valves of the heart. ErbB3 null mouse embryos show severely underdeveloped atrioventricular valves, leading to death at embryonic day 13.5. Although this function of ErbB3 depends on neuregulin, it does not seem to require ErbB2, which is not expressed in the tissue.
ErbB3 also seems to be required for neural crest differentiation and the development of the sympathetic nervous system and neural crest derivatives such as Schwann cells.
See also
Epidermal growth factor receptor family
Epidermal growth factor receptor
Receptor tyrosine-kinases
References
Further reading
Tyrosine kinase receptors | ERBB3 | ["Chemistry"] | 1,164 | ["Tyrosine kinase receptors", "Signal transduction"] |
14,117,191 | https://en.wikipedia.org/wiki/SIN3A | Paired amphipathic helix protein Sin3a is a protein that in humans is encoded by the SIN3A gene.
Function
The protein encoded by this gene is a transcriptional regulatory protein. It contains paired amphipathic helix (PAH) domains, which are important for protein-protein interactions and may mediate repression by the Mad-Max complex.
Interactions
SIN3A has been shown to interact with:
CABIN1,
HBP1,
HDAC1,
HDAC9,
Histone deacetylase 2,
Host cell factor C1,
IKZF1,
ING1,
KLF11,
MNT,
MXD1,
Methyl-CpG-binding domain protein 2,
Nuclear receptor co-repressor 2,
OGT,
PHF12,
Promyelocytic leukemia protein,
RBBP4,
RBBP7,
SAP130,
SAP30,
SMARCA2,
SMARCA4,
SMARCC1,
SUDS3,
TAL1, and
Zinc finger and BTB domain-containing protein 16.
See also
Transcription coregulator
References
Further reading
External links
Gene expression
Transcription coregulators | SIN3A | ["Chemistry", "Biology"] | 233 | ["Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry"] |
14,119,736 | https://en.wikipedia.org/wiki/Cyclin%20D3 | G1/S-specific cyclin-D3 is a protein that in humans is encoded by the CCND3 gene.
Function
The protein encoded by this gene belongs to the highly conserved cyclin family, whose members are characterized by a dramatic periodicity in protein abundance through the cell cycle. Cyclins function as regulators of CDK kinases. Different cyclins exhibit distinct expression and degradation patterns which contribute to the temporal coordination of each mitotic event. This cyclin forms a complex with and functions as a regulatory subunit of CDK4 or CDK6, whose activity is required for cell cycle G1/S transition. This protein has been shown to interact with and be involved in the phosphorylation of tumor suppressor protein Rb. The CDK4 activity associated with this cyclin was reported to be necessary for cell cycle progression through G2 phase into mitosis after UV radiation.
Clinical significance
Mutations in CCND3 are implicated in cases of breast cancer.
Interactions
Cyclin D3 has been shown to interact with:
AKAP8,
CDC2L1,
CDKN1B,
CRABP2,
Cyclin-dependent kinase 4,
Cyclin-dependent kinase 6,
EIF3K, and
Retinoic acid receptor alpha.
See also
Cyclin
Cyclin D
References
Further reading
Cell cycle regulators | Cyclin D3 | ["Chemistry"] | 280 | ["Cell cycle regulators", "Signal transduction"] |
14,119,752 | https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%207 | Cyclin-dependent kinase 7, or cell division protein kinase 7, is an enzyme that in humans is encoded by the CDK7 gene.
The protein encoded by this gene is a member of the cyclin-dependent protein kinase (CDK) family. CDK family members are highly similar to the gene products of Saccharomyces cerevisiae cdc28 and Schizosaccharomyces pombe cdc2, and are known to be important regulators of cell cycle progression.
This protein forms a trimeric complex with cyclin H and MAT1, which functions as a Cdk-activating kinase (CAK). It is an essential component of the transcription factor TFIIH, that is involved in transcription initiation and DNA repair. This protein is thought to serve as a direct link between the regulation of transcription and the cell cycle.
Clinical significance in cancer
Given that CDK7 plays two important regulatory roles, it is expected that CDK7 regulation may play a role in cancerous cells. Cells from breast cancer tumors were found to have elevated levels of CDK7 and cyclin H when compared to normal breast cells. It was also found that the higher levels were generally found in ER-positive breast cancer. Together, these findings indicate that CDK7-directed therapy might make sense for some breast cancer patients. Further confirming these findings, recent research indicates that inhibition of CDK7 may be an effective therapy for HER2-positive breast cancers, even overcoming therapeutic resistance. THZ1 was tested on HER2-positive breast cancer cells and exhibited high potency for the cells regardless of their sensitivity to HER2 inhibitors. This finding was demonstrated in vivo, where inhibition of HER2 and CDK7 resulted in tumor regression in therapeutically resistant HER2+ xenograft models.
Inhibitors
The growth suppressor p53 has been shown to interact with cyclin H both in vitro and in vivo. Addition of wild-type p53 was found to strongly downregulate CAK activity, resulting in decreased phosphorylation of both CDK2 and the CTD by CDK7. Mutant p53 was unable to downregulate CDK7 activity, and mutant p21 had no effect on downregulation, indicating that p53 is responsible for the negative regulation of CDK7.
In 2017 CT7001, an oral CDK7 inhibitor, started a phase 1 clinical trial.
THZ1 is an inhibitor for CDK7 that selectively forms a covalent bond with the CDK7-cycH-MAT1 complex. This selectivity stems from forming a bond at C312, which is unique to CDK7 within the CDK family. CDK12 and CDK13 could also be inhibited using THZ1 (but at higher concentrations) because they have similar structures in the region surrounding C312. It was found that treatment of 250 nM THZ1 was sufficient to inhibit global transcription and that cancer cell lines were sensitive to much lower concentrations, opening up further research into the efficacy of using THZ1 as a component of cancer therapy, as described above.
In renal cell carcinoma (RCC), the expression of CDK7 was significantly higher in the advanced stage tumors. Besides, the overall survival was significantly shorter in patients with higher CDK7 expression in the tumors. These results suggest that CDK7 may be a potential target for overcoming RCC.
Based on molecular docking results, ligands 3, 5, 14, and 16 were screened, among 17 different pyrrolone-fused benzosuberene compounds, as potent and specific inhibitors without cross-reactivity against different CDK isoforms. Analysis of MD simulations and MM-PBSA studies revealed the binding energy profiles of all the selected complexes. The selected ligands performed better than the experimental drug candidate (roscovitine). Ligands 3 and 14 show specificity for CDK7. These ligands are expected to carry a lower risk of side effects due to their natural origin.
In urothelial carcinoma (UC), CDK7 expression is increased in bladder cancer tissues, especially in patients with chemoresistance. CDK7 inhibition-related cancer stemness suppression is a potential therapeutic strategy for both chemonaïve and chemoresistant UC.
Interactions
Cyclin-dependent kinase 7 has been shown to interact with:
Androgen receptor,
Cyclin H,
GTF2H1,
MNAT1,
P53,
SUPT5H, and
XPB.
See also
Cyclin-dependent kinase
CDK7 pathway
References
Further reading
External links
Cell cycle
Proteins
EC 2.7.11 | Cyclin-dependent kinase 7 | ["Chemistry", "Biology"] | 961 | ["Biomolecules by chemical classification", "Cellular processes", "Molecular biology", "Proteins", "Cell cycle"] |
14,119,857 | https://en.wikipedia.org/wiki/NFATC2 | Nuclear factor of activated T-cells, cytoplasmic 2 is a protein that in humans is encoded by the NFATC2 gene.
Function
This gene is a member of the nuclear factor of activated T cells (NFAT) family. The product of this gene is a DNA-binding protein with a REL-homology region (RHR) and an NFAT-homology region (NHR). This protein is present in the cytosol and only translocates to the nucleus upon T cell receptor (TCR) stimulation, where it becomes a member of the nuclear factors of activated T cells transcription complex. This complex plays a central role in inducing gene transcription during the immune response. Alternate transcriptional splice variants, encoding different isoforms, have been characterized.
Clinical significance
A translocation forming an in-frame fusion product between the EWSR1 gene and the NFATc2 gene has been described in bone tumors with a Ewing sarcoma-like clinical appearance. The translocation breakpoint led to the loss of the controlling elements of the NFATc2 protein, and the fusion of the N-terminal region of EWSR1 conferred constitutive activation of the protein.
Interactions
NFATC2 has been shown to interact with MEF2D, EP300, IRF4 and protein kinase Mζ. Prostaglandin F2alpha stimulates an NFATC2 pathway that promotes growth of skeletal muscle cells.
References
Further reading
External links
Transcription factors
Human proteins | NFATC2 | ["Chemistry", "Biology"] | 308 | ["Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction"] |
14,120,062 | https://en.wikipedia.org/wiki/ELK1 | ETS Like-1 protein Elk-1 is a protein that in humans is encoded by the ELK1 gene. Elk-1 functions as a transcription activator. It is classified as a ternary complex factor (TCF), a subclass of the ETS family, which is characterized by a common protein domain that regulates DNA binding to target sequences. Elk1 plays important roles in various contexts, including long-term memory formation, drug addiction, Alzheimer's disease, Down syndrome, breast cancer, and depression.
Structure
As depicted in Figure 1, the Elk1 protein is composed of several domains. Localized in the N-terminal region, the A domain is required for the binding of Elk1 to DNA. This region also contains a nuclear localization signal (NLS) and a nuclear export signal (NES), which are responsible for nuclear import and export, respectively. The B domain allows Elk1 to bind to a dimer of its cofactor, serum response factor (SRF). Located adjacent to the B domain, the R domain is involved in suppressing Elk1 transcriptional activity. This domain harbors the lysine residues that are likely to undergo SUMOylation, a post-translational event that strengthens the inhibition function of the R domain. The D domain plays the key role of binding to active Mitogen-activated protein kinases (MAPKs). Located in the C-terminal region of Elk1, the C domain includes the amino acids that actually become phosphorylated by MAPKs. In this region, Serine 383 and 389 are key sites that need to be phosphorylated for Elk1-mediated transcription to occur. Finally, the DEF domain is specific for the interaction of activated extracellular signal-regulated kinase (Erk), a type of MAPK, with Elk1.
Expression
Given its role as a transcription factor, Elk1 is expressed in the nuclei of non-neuronal cells. The protein is present in the cytoplasm as well as in the nucleus of mature neurons. In post-mitotic neurons, a variant of Elk1, sElk1, is expressed solely in the nucleus because it lacks the NES site present in the full-length protein. Moreover, while Elk1 is broadly expressed, actual levels vary among tissues. The rat brain, for example, is extremely rich in Elk1, but the protein is exclusively expressed in neurons.
Splice variants
Aside from the full-length protein, the Elk1 gene can yield two shortened versions of Elk1: ∆Elk1 and sElk1. Alternative splicing produces ∆Elk1. This variant lacks part of the DNA-binding domain that allows interaction with SRF. On the other hand, sElk1 has an intact region that binds to SRF, but it lacks the first 54 amino acids that contain the NES. Found only in neurons, sElk1 is created by employing an internal translation start site. Both ∆Elk1 and sElk1, truncated versions of full-length protein, are capable of binding to DNA and inducing various cellular signaling. In fact, sElk1 counteracts Elk1 in neuronal differentiation and the regulation of nerve growth factor/ERK signaling.
Signaling
The downstream target of Elk1 is the serum response element (SRE) of the c-fos proto-oncogene. To produce c-fos, a protein encoded by the Fos gene, Elk1 needs to be phosphorylated by MAPKs at its C-terminus. MAPKs are the final effectors of signal transduction pathways that begin at the plasma membrane. Phosphorylation by MAPKs results in a conformational change of Elk1. As seen in Figure 2, Raf kinase acts upstream of MAPKs to activate them by phosphorylating and, thereby activating, MEKs, or MAPK or ERK kinases. Raf itself is activated by Ras, which is linked to growth factor receptors with tyrosine kinase activity via Grb2 and Sos. Grb2 and Sos can stimulate Ras only after the binding of growth factors to their corresponding receptors. However, Raf activation does not exclusively depend on Ras. Protein kinase C, which is activated by phorbol esters, can fulfill the same function as Ras. MEK kinase (MEKK) can also activate MEKs, which then activate MAPKs, making Raf unnecessary at times. Various signal transduction pathways, therefore, funnel through MEKs and MAPKs and lead to the activation of Elk1. After stimulation of Elk1, SRF, which allows Elk1 to bind to the c-fos promoter, must be recruited. The binding of Elk1 to SRF happens due to protein-protein interaction between the B domain of Elk1 and SRF and the protein-DNA interaction via the A domain.
The aforementioned proteins are like the ingredients of a recipe for a certain signaling output. If one of these ingredients, such as SRF, is missing, then a different output occurs. In this case, lack of SRF leads to Elk1's activation of another gene. Elk1 can, thus, independently interact with an ETS binding site, as in the case of the lck proto-oncogene in Figure 2. Moreover, the spacing and relative orientation of the Elk1 binding site to the SRE is rather flexible, suggesting that SRE-regulated early genes other than c-fos could be targets of Elk1. egr-1 is an example of an Elk1 target that depends on SRE interaction. Ultimately, phosphorylation of Elk1 can result in the production of many proteins, depending on the other factors involved and their specific interactions with each other.
When studying signaling pathways, mutations can further highlight the importance of each component used to activate the downstream target. For instance, disruption of the C-terminal domain of Elk1 that MAPK phosphorylates triggers inhibition of c-fos activation. Similarly, dysfunctional SRF, which normally tethers Elk1 to the SRE, leads to Fos not being transcribed. At the same time, without Elk1, SRF cannot induce c-fos transcription after MAPK stimulation. For these reasons, Elk1 represents an essential link between signal transduction pathways and the initiation of gene transcription.
Clinical significance
Long-term memory
Formation of long-term memory may be dependent on Elk1. MEK inhibitors block Elk1 phosphorylation and, thus, impair acquired conditioned taste aversion. Moreover, avoidance learning, which involves the subject learning that a particular response leads to prevention of an aversive stimulus, is correlated with a definite increase in activation of Erk, Elk1, and c-fos in the hippocampus. This area of the brain is involved in short-term and long-term information storage. When Elk1 or SRF binding to DNA is blocked in the rat hippocampus, only sequestration of SRF interferes with long-term spatial memory. While the interaction of Elk1 with DNA may not be essential for memory formation, its specific role still needs to be explored. This is because activation of Elk1 can trigger other molecular events that do not require Elk1 to bind DNA. For example, Elk1 is involved in the phosphorylation of histones, increased interaction with SRF, and recruitment of the basal transcriptional machinery, all of which do not require direct binding of Elk1 to DNA.
Drug addiction
Elk1 activation plays a central role in drug addiction. After mice are given cocaine, a strong and momentary hyperphosphorylation of Erk and Elk1 is observed in the striatum. When these mice are then given MEK inhibitors, Elk1 phosphorylation is absent. Without active Elk1, c-fos production and cocaine-induced conditioned place preference are shown to be blocked. Moreover, acute ethanol ingestion leads to excessive phosphorylation of Elk1 in the amygdala. Silencing of Elk1 activity has also been found to decrease cellular responses to withdrawal signals and lingering treatment of opioids, one of the world's oldest known drugs. Altogether, these results highlight that Elk1 is an important component of drug addiction.
Pathophysiology
Buildup of beta amyloid (Aβ) peptides is shown to cause and/or trigger Alzheimer's disease. Aβ interferes with BDNF-induced phosphorylation of Elk1. With Elk1 activation being hindered in this pathway, the SRE-driven gene regulation leads to increased vulnerability of neurons. Elk1 also inhibits transcription of presenilin 1 (PS1), which encodes a protein that is necessary for the last step of the sequential proteolytic processing of amyloid precursor protein (APP). APP makes variants of Aβ (Aβ42/43 polypeptide). Moreover, PS1 is genetically associated with most early-onset cases of familial Alzheimer's disease. These data emphasize the intriguing link between Aβ, Elk1, and PS1.
Another condition associated with Elk1 is Down syndrome. Fetal and aged mice with this pathophysiological condition have shown a decrease in the activity of calcineurin, the major phosphatase for Elk1. These mice also have age-dependent changes in ERK activation. Moreover, expression of SUMO3, which represses Elk1 activity, increases in the adult Down syndrome patient. Therefore, Down syndrome is correlated with changes in ERK, calcineurin, and SUMO pathways, all of which act antagonistically on Elk1 activity.
Elk1 also interacts with BRCA1 splice variants, namely BRCA1a and BRCA1b. This interaction enhances BRCA1-mediated growth suppression in breast cancer cells. Elk1 may be a downstream target of BRCA1 in its growth control pathway. Recent literature reveals that c-fos promoter activity is inhibited, while overexpression of BRCA1a/1b reduces MEK-induced activation of the SRE. These results show that one mechanism of growth and tumor suppression by BRCA1a/1b proteins acts through repression of the expression of Elk1 downstream target genes like Fos.
Depression has been linked with Elk1. Decreased Erk-mediated Elk1 phosphorylation is observed in the hippocampus and prefrontal cortex of post-mortem brains of suicidal individuals. Imbalanced Erk signaling is correlated with depression and suicidal behavior. Future research will reveal the exact role of Elk1 in the pathophysiology of depression.
References
External links
Transcription factors | ELK1 | ["Chemistry", "Biology"] | 2,210 | ["Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction"] |
14,120,476 | https://en.wikipedia.org/wiki/REL | The proto-oncogene c-Rel is a protein that in humans is encoded by the REL gene. The c-Rel protein is a member of the NF-κB family of transcription factors and contains a Rel homology domain (RHD) at its N-terminus and two C-terminal transactivation domains. c-Rel is a myeloid checkpoint protein that can be targeted for treating cancer. c-Rel has an important role in B-cell survival and proliferation. The REL gene is amplified or mutated in several human B-cell lymphomas, including diffuse large B-cell lymphoma and Hodgkin's lymphoma.
References
Further reading
External links
Transcription factors | REL | ["Chemistry", "Biology"] | 159 | ["Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction"] |
14,120,660 | https://en.wikipedia.org/wiki/Homeobox%20protein%20CDX-2 | Homeobox protein CDX-2 is a protein that in humans is encoded by the CDX2 gene. The CDX-2 protein is a homeobox transcription factor expressed in the nuclei of intestinal epithelial cells, playing an essential role in the development and function of the digestive system. CDX2 is part of the ParaHox gene cluster, a group of three highly conserved developmental genes present in most vertebrate species. Together with CDX1 and CDX4, CDX2 is one of three caudal-related genes in the human genome.
Function
In common with the two other Cdx genes, CDX2 regulates several essential processes in the development and function of the lower gastrointestinal tract (from the duodenum to the anus) in vertebrates. In vertebrate embryonic development, CDX2 becomes active in endodermal cells that are posterior to the developing stomach. These cells eventually form the intestinal epithelium. The activity of CDX2 at this stage is essential for the correct formation of the intestine and the anus. CDX2 is also required for the development of the placenta.
Later in development, CDX2 is expressed in intestinal epithelial stem cells, which are cells that continuously differentiate into the cells that form the intestinal lining. This differentiation is dependent on CDX2, as illustrated by experiments where the expression of this gene was knocked-out or overexpressed in mice. Heterozygous CDX2 knock-outs have intestinal lesions caused by the differentiation of intestinal cells into gastric epithelium; this can be considered a form of homeotic transformation. Conversely, the over-expression of CDX2 leads to the formation of intestinal epithelium in the stomach.
In addition to roles in endoderm, CDX2 is also expressed in very early stages of mouse and human embryonic development, specifically marking the trophectoderm lineage of cells in the blastocyst of mouse and human. Trophectoderm cells contribute to the placenta.
Pathology
Ectopic expression of CDX2 was reported in more than 85% of human patients with acute myeloid leukemia (AML). Ectopic expression of Cdx2 in murine bone marrow induced AML in mice and upregulated Hox genes in bone marrow progenitors. CDX2 is also implicated in the pathogenesis of Barrett's esophagus, where it has been shown that components of gastroesophageal reflux such as bile acids are able to induce the expression of an intestinal differentiation program through up-regulation of NF-κB and CDX2.
Biomarker for intestinal cancer
CDX2 is also used in diagnostic surgical pathology as a marker for gastrointestinal differentiation, especially colorectal.
Possible use in stem cell research
This gene (or, more specifically, the equivalent gene in humans) has come up in the proposal by the President's Council on Bioethics, as a solution to the stem cell controversy. According to one of the plans put forth, by deactivating the gene, it would not be possible for a properly organized embryo to form, thus providing stem cells without requiring the destruction of an embryo. Other genes that have been proposed for this purpose include Hnf4, which is required for gastrulation.
Interactions
CDX2 has been shown to interact with EP300, and PAX6.
References
Further reading
External links
Transcription factors | Homeobox protein CDX-2 | ["Chemistry", "Biology"] | 737 | ["Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction"] |
14,122,543 | https://en.wikipedia.org/wiki/Burns%20temperature | The Burns temperature, $T_d$, is the temperature at which a ferroelectric material, previously in the paraelectric state, starts to exhibit randomly polarized nanoregions, which are polar precursor clusters. This behaviour is typical of several, but not all, ferroelectric materials, and has been observed in lead titanate ($\mathrm{PbTiO_3}$), potassium niobate ($\mathrm{KNbO_3}$), lead lanthanum zirconate titanate (PLZT), lead magnesium niobate (PMN), lead zinc niobate (PZN), $\mathrm{K_2Sr_4(NbO_3)_{10}}$, strontium barium niobate (SBN), and sodium bismuth titanate ($\mathrm{Na_{1/2}Bi_{1/2}TiO_3}$, NBT).
The Burns temperature, named from Gerald Burns, who studied this phenomenon with collaboration of Frank H. Dacol, has not been well understood yet.
References
Electrical phenomena | Burns temperature | [
"Physics",
"Materials_science"
] | 192 | [
"Materials science stubs",
"Physical phenomena",
"Electrical phenomena",
"Electromagnetism stubs"
] |
1,146,267 | https://en.wikipedia.org/wiki/Geometric%20quantization | In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.
Origins
One of the earliest attempts at a natural quantization was Weyl quantization, proposed by Hermann Weyl in 1927. Here, an attempt is made to associate a quantum-mechanical observable (a self-adjoint operator on a Hilbert space) with a real-valued function on classical phase space. The position and momentum in this phase space are mapped to the generators of the Heisenberg group, and the Hilbert space appears as a group representation of the Heisenberg group. In 1946, H. J. Groenewold considered the product of a pair of such observables and asked what the corresponding function would be on the classical phase space. This led him to discover the phase-space star-product of a pair of functions.
The modern theory of geometric quantization was developed by Bertram Kostant and Jean-Marie Souriau in the 1970s. One of the motivations of the theory was to understand and generalize Kirillov's orbit method in representation theory.
Types
The geometric quantization procedure falls into the following three steps: prequantization, polarization, and metaplectic correction. Prequantization produces a natural Hilbert space together with a quantization procedure for observables that exactly transforms Poisson brackets on the classical side into commutators on the quantum side. Nevertheless, the prequantum Hilbert space is generally understood to be "too big". The idea is that one should then select a Poisson-commuting set of n variables on the 2n-dimensional phase space and consider functions (or, more properly, sections) that depend only on these n variables. The n variables can be either real-valued, resulting in a position-style Hilbert space, or complex analytic, producing something like the Segal–Bargmann space.
A polarization is a coordinate-independent description of such a choice of n Poisson-commuting functions. The metaplectic correction (also known as the half-form correction) is a technical modification of the above procedure that is necessary in the case of real polarizations and often convenient for complex polarizations.
Prequantization
Suppose $(M, \omega)$ is a symplectic manifold with symplectic form $\omega$. Suppose at first that $\omega$ is exact, meaning that there is a globally defined symplectic potential $\theta$ with $d\theta = \omega$. We can consider the "prequantum Hilbert space" of square-integrable functions on $M$ (with respect to the Liouville volume measure). For each smooth function $f$ on $M$, we can define the Kostant–Souriau prequantum operator
$Q_f = -i\hbar\, X_f - \theta(X_f) + f$,
where $X_f$ is the Hamiltonian vector field associated to $f$.
More generally, suppose $\omega$ has the property that the integral of $\omega/(2\pi\hbar)$ over any closed surface is an integer. Then we can construct a line bundle $L$ with connection whose curvature 2-form is $\omega/\hbar$. In that case, the prequantum Hilbert space is the space of square-integrable sections of $L$, and we replace the formula for $Q_f$ above with
$Q_f = -i\hbar\, \nabla_{X_f} + f$,
with $\nabla$ the connection.
The prequantum operators satisfy
$[Q_f, Q_g] = i\hbar\, Q_{\{f,g\}}$
for all smooth functions $f$ and $g$ (the overall sign depends on the conventions chosen for $\omega$ and the Poisson bracket).
The construction of the preceding Hilbert space and the operators is known as prequantization.
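As a concrete check, consider prequantization of the plane $M = \mathbb{R}^2$ with $\theta = p\,dq$ (a minimal worked example, not from the original text, assuming the conventions $\omega = dq \wedge dp$, $\omega(X_f, \cdot) = df$ and $\{f,g\} = \omega(X_f, X_g)$; other sign conventions permute the signs below):

```latex
\begin{align*}
% Hamiltonian vector fields of the coordinate functions:
X_q &= -\partial_p, & X_p &= \partial_q,\\
% Kostant--Souriau operators Q_f = -i\hbar X_f - \theta(X_f) + f:
Q_q &= i\hbar\,\partial_p + q, & Q_p &= -i\hbar\,\partial_q,\\
% Commutator check against \{q,p\} = 1:
[Q_q, Q_p] &= i\hbar = i\hbar\, Q_{\{q,p\}}. &&
\end{align*}
```

Note that $Q_q$ and $Q_p$ act on functions of both $q$ and $p$; cutting the dependence down to $n$ of the $2n$ variables is exactly the job of the polarization step described next.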
Polarization
The next step in the process of geometric quantization is the choice of a polarization. A polarization is a choice, at each point of $M$, of a Lagrangian subspace of the complexified tangent space of $M$. The subspaces should form an integrable distribution, meaning that the commutator of two vector fields lying in the subspace at each point should also lie in the subspace at each point. The quantum (as opposed to prequantum) Hilbert space is the space of sections of $L$ that are covariantly constant in the direction of the polarization.
The idea is that in the quantum Hilbert space, the sections should be functions of only $n$ variables on the $2n$-dimensional classical phase space.
If $f$ is a function for which the associated Hamiltonian flow preserves the polarization, then $Q_f$ will preserve the quantum Hilbert space.
The assumption that the flow of $f$ preserves the polarization is a strong one. Typically not very many functions satisfy this assumption.
Half-form correction
The half-form correction—also known as the metaplectic correction—is a technical modification to the above procedure that is necessary in the case of real polarizations to obtain a nonzero quantum Hilbert space; it is also often useful in the complex case. The line bundle $L$ is replaced by the tensor product of $L$ with the square root of the canonical bundle of the polarization. In the case of the vertical polarization, for example, instead of considering functions of $x$ that are independent of $p$, one considers objects of the form $f(x)\sqrt{dx}$. The formula for $Q_f$ must then be supplemented by an additional Lie derivative term.
In the case of a complex polarization on the plane, for example, the half-form correction allows the quantization of the harmonic oscillator to reproduce the standard quantum mechanical formula for the energies, $(n + 1/2)\hbar\omega$, with the "$1/2$" coming courtesy of the half-forms.
Poisson manifolds
Geometric quantization of Poisson manifolds and symplectic foliations has also been developed, for instance for partially integrable and superintegrable Hamiltonian systems and for non-autonomous mechanics.
Example
In the case that the symplectic manifold is the 2-sphere, it can be realized as a coadjoint orbit in $\mathfrak{su}(2)^*$. Assuming that the area of the sphere is an integer multiple of , we can perform geometric quantization and the resulting Hilbert space carries an irreducible representation of SU(2). In the case that the area of the sphere is , we obtain the two-dimensional spin-1/2 representation.
Generalization
More generally, this technique leads to deformation quantization, where the ★-product is taken to be a deformation of the algebra of functions on a symplectic manifold or Poisson manifold. However, as a natural quantization scheme (a functor), Weyl's map is not satisfactory. For example, the Weyl map of the classical angular-momentum-squared is not just the quantum angular momentum squared operator, but it further contains a constant term 3ħ2/2. (This extra term is actually physically significant, since it accounts for the nonvanishing angular momentum of the ground-state Bohr orbit in the hydrogen atom.) As a mere representation change, however, Weyl's map underlies the alternate phase-space formulation of conventional quantum mechanics.
See also
Half-form
Lagrangian foliation
Kirillov orbit method
Quantization commutes with reduction
Notes
Citations
Sources
External links
William Ritter's review of Geometric Quantization presents a general framework for all problems in physics and fits geometric quantization into this framework
John Baez's review of Geometric Quantization is short and pedagogical
Matthias Blau's primer on Geometric Quantization, one of the very few good primers (ps format only)
A. Echeverria-Enriquez, M. Munoz-Lecanda, N. Roman-Roy, Mathematical foundations of geometric quantization.
G. Sardanashvily, Geometric quantization of symplectic foliations.
Functional analysis
Mathematical quantization | Geometric quantization | [
"Physics",
"Mathematics"
] | 1,585 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Quantum mechanics",
"Mathematical quantization",
"Mathematical relations"
] |
1,146,294 | https://en.wikipedia.org/wiki/Current%20algebra | Certain commutation relations among the current density operators in quantum field theories define an infinite-dimensional Lie algebra called a current algebra. Mathematically these are Lie algebras consisting of smooth maps from a manifold into a finite dimensional Lie algebra.
History
The original current algebra, proposed in 1964 by Murray Gell-Mann, described weak and electromagnetic currents of the strongly interacting particles, hadrons, leading to the Adler–Weisberger formula and other important physical results. The basic concept, in the era just preceding quantum chromodynamics, was that even without knowing the Lagrangian governing hadron dynamics in detail, exact kinematical information – the local symmetry – could still be encoded in an algebra of currents.
The commutators involved in current algebra amount to an infinite-dimensional extension of the Jordan map, where the quantum fields represent infinite arrays of oscillators.
Current algebraic techniques are still part of the shared background of particle physics when analyzing symmetries and indispensable in discussions of the Goldstone theorem.
Example
In a non-Abelian Yang–Mills symmetry, where $J_0^a$ and $J_{5,0}^a$ are flavor-current and axial-current 0th components (charge densities), respectively, the paradigm of a current algebra is
$[J_0^a(\mathbf{x}), J_0^b(\mathbf{y})] = i f^{abc}\, J_0^c(\mathbf{x})\,\delta(\mathbf{x} - \mathbf{y})$
and
$[J_0^a(\mathbf{x}), J_{5,0}^b(\mathbf{y})] = i f^{abc}\, J_{5,0}^c(\mathbf{x})\,\delta(\mathbf{x} - \mathbf{y}),$
together with $[J_{5,0}^a(\mathbf{x}), J_{5,0}^b(\mathbf{y})] = i f^{abc}\, J_0^c(\mathbf{x})\,\delta(\mathbf{x} - \mathbf{y})$, where $f^{abc}$ are the structure constants of the Lie algebra. To get meaningful expressions, these must be normal ordered.
The algebra resolves to a direct sum of two commuting algebras, $L$ and $R$, upon defining
$J_{L,0}^a = \tfrac{1}{2}\left(J_0^a - J_{5,0}^a\right), \qquad J_{R,0}^a = \tfrac{1}{2}\left(J_0^a + J_{5,0}^a\right),$
whereupon each of $J_{L,0}^a$ and $J_{R,0}^a$ separately satisfies the original algebra and $[J_{L,0}^a, J_{R,0}^b] = 0$.
Conformal field theory
For the case where space is a one-dimensional circle, current algebras arise naturally as a central extension of the loop algebra, known as Kac–Moody algebras or, more specifically, affine Lie algebras. In this case, the commutator and normal ordering can be given a very precise mathematical definition in terms of integration contours on the complex plane, thus avoiding some of the formal divergence difficulties commonly encountered in quantum field theory.
When the Killing form of the Lie algebra is contracted with the current commutator, one obtains the energy–momentum tensor of a two-dimensional conformal field theory. When this tensor is expanded as a Laurent series, the resulting algebra is called the Virasoro algebra. This calculation is known as the Sugawara construction.
The general case is formalized as the vertex operator algebra.
See also
Affine Lie algebra
Chiral model
Jordan map
Virasoro algebra
Vertex operator algebra
Kac–Moody algebra
Notes
References
Quantum field theory
Lie algebras | Current algebra | [
"Physics"
] | 500 | [
"Quantum field theory",
"Quantum mechanics",
"Quantum physics stubs"
] |
1,146,338 | https://en.wikipedia.org/wiki/Bogoliubov%20transformation | In theoretical physics, the Bogoliubov transformation, also known as the Bogoliubov–Valatin transformation, was independently developed in 1958 by Nikolay Bogolyubov and John George Valatin for finding solutions of BCS theory in a homogeneous system. The Bogoliubov transformation is an isomorphism of either the canonical commutation relation algebra or canonical anticommutation relation algebra. This induces an autoequivalence on the respective representations. The Bogoliubov transformation is often used to diagonalize Hamiltonians, which yields the stationary solutions of the corresponding Schrödinger equation. The Bogoliubov transformation is also important for understanding the Unruh effect, Hawking radiation, Davies-Fulling radiation (moving mirror model), pairing effects in nuclear physics, and many other topics.
The Bogoliubov transformation is often used to diagonalize Hamiltonians, with a corresponding transformation of the state function. Operator eigenvalues calculated with the diagonalized Hamiltonian on the transformed state function thus are the same as before.
Single bosonic mode example
Consider the canonical commutation relation for bosonic creation and annihilation operators in the harmonic oscillator basis,
$[a, a^\dagger] = 1.$
Define a new pair of operators
$b = u a + v a^\dagger, \qquad b^\dagger = u^* a^\dagger + v^* a,$
for complex numbers u and v, where the second equation is the Hermitian conjugate of the first.
The Bogoliubov transformation is the canonical transformation mapping the operators $a$ and $a^\dagger$ to $b$ and $b^\dagger$. To find the conditions on the constants u and v such that the transformation is canonical, the commutator is evaluated, namely,
$[b, b^\dagger] = \left(|u|^2 - |v|^2\right)\,[a, a^\dagger].$
It is then evident that $|u|^2 - |v|^2 = 1$ is the condition for which the transformation is canonical.
Since the form of this condition is suggestive of the hyperbolic identity
$\cosh^2 r - \sinh^2 r = 1,$
the constants $u$ and $v$ can be readily parametrized as
$u = e^{i\theta_1}\cosh r, \qquad v = e^{i\theta_2}\sinh r.$
This is interpreted as a linear symplectic transformation of the phase space. By comparing to the Bloch–Messiah decomposition, the two angles $\theta_1$ and $\theta_2$ correspond to the orthogonal symplectic transformations (i.e., rotations) and the squeezing factor $r$ corresponds to the diagonal transformation.
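The canonicity condition can also be checked numerically in a truncated Fock basis. The sketch below (with illustrative parameter values; the variable names are not from the text) builds the transformed mode and verifies that the commutator survives the transformation away from the truncation edge:

```python
import numpy as np

# Truncated Fock-space check of the bosonic Bogoliubov condition
# |u|^2 - |v|^2 = 1 for b = u a + v a^dagger.
N = 40                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
ad = a.conj().T                           # creation operator

r, theta = 0.3, 0.7                       # arbitrary squeezing parameters
u = np.cosh(r)
v = np.exp(1j * theta) * np.sinh(r)
assert abs(abs(u)**2 - abs(v)**2 - 1) < 1e-12   # canonicity condition

b = u * a + v * ad                        # Bogoliubov-transformed mode
comm = b @ b.conj().T - b.conj().T @ b    # should equal the identity

# [b, b^dagger] = (|u|^2 - |v|^2) [a, a^dagger], so the commutator
# reproduces the identity except near the truncation boundary.
print(np.allclose(comm[:N // 2, :N // 2], np.eye(N // 2), atol=1e-10))
```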
Applications
The most prominent application is by Nikolai Bogoliubov himself in the context of superfluidity. Other applications comprise Hamiltonians and excitations in the theory of antiferromagnetism. When calculating quantum field theory in curved spacetimes the definition of the vacuum changes, and a Bogoliubov transformation between these different vacua is possible. This is used in the derivation of Hawking radiation. Bogoliubov transforms are also used extensively in quantum optics, particularly when working with Gaussian unitaries (such as beamsplitters, phase shifters, and squeezing operations).
Fermionic mode
For the anticommutation relations
$\{a, a^\dagger\} = 1, \qquad \{a, a\} = 0,$
the Bogoliubov transformation is constrained by $|u|^2 + |v|^2 = 1$ (and, for a single mode, $uv = 0$). Therefore, the only non-trivial possibility is $u = 0,\ |v| = 1$, corresponding to particle–antiparticle interchange (or particle–hole interchange in many-body systems) with the possible inclusion of a phase shift. Thus, for a single particle, the transformation can only be implemented (1) for a Dirac fermion, where particle and antiparticle are distinct (as opposed to a Majorana fermion or chiral fermion), or (2) for multi-fermionic systems, in which there is more than one type of fermion.
Applications
The most prominent application is again by Nikolai Bogoliubov himself, this time for the BCS theory of superconductivity. The point where the necessity to perform a Bogoliubov transform becomes obvious is that in mean-field approximation the Hamiltonian of the system can be written in both cases as a sum of bilinear terms in the original creation and destruction operators, involving finite terms, i.e. one must go beyond the usual Hartree–Fock method. In particular, in the mean-field Bogoliubov–de Gennes Hamiltonian formalism with a superconducting pairing term such as , the Bogoliubov transformed operators annihilate and create quasiparticles (each with well-defined energy, momentum and spin but in a quantum superposition of electron and hole state), and have coefficients and given by eigenvectors of the Bogoliubov–de Gennes matrix. Also in nuclear physics, this method is applicable, since it may describe the "pairing energy" of nucleons in a heavy element.
Multimode example
The Hilbert space under consideration is equipped with a set of creation and annihilation operators $a_i^\dagger$ and $a_i$, and henceforth describes a higher-dimensional quantum harmonic oscillator (usually an infinite-dimensional one).
The ground state of the corresponding Hamiltonian is annihilated by all the annihilation operators:
$a_i\, |0\rangle = 0.$
All excited states are obtained as linear combinations of the ground state excited by some creation operators:
One may redefine the creation and the annihilation operators by a linear redefinition:
$b_i = \sum_j \left(u_{ij}\, a_j + v_{ij}\, a_j^\dagger\right),$
where the coefficients $u_{ij}$ and $v_{ij}$ must satisfy certain rules to guarantee that the annihilation operators $b_i$ and the creation operators $b_i^\dagger$, defined by the Hermitian conjugate equation, have the same commutators
for bosons and anticommutators for fermions.
The equation above defines the Bogoliubov transformation of the operators.
The ground state annihilated by all $b_i$ is different from the original ground state $|0\rangle$, and they can be viewed as the Bogoliubov transformations of one another using the operator–state correspondence. They can also be defined as squeezed coherent states. The BCS wave function is an example of a squeezed coherent state of fermions.
Unified matrix description
Because Bogoliubov transformations are linear recombinations of operators, it is more convenient and insightful to write them in terms of matrix transformations. If a pair of annihilators transforms as
$(\alpha, \beta)^{T} = U\,(a, b)^{T},$
where $U$ is a 2×2 matrix, then naturally the corresponding creation operators transform according to the Hermitian-conjugate relation.
For fermion operators, the requirement of commutation relations reflects in two requirements for the form of matrix
and
For boson operators, the commutation relations require
and
These conditions can be written uniformly as
where
where applies to fermions and bosons, respectively.
Diagonalizing a quadratic Hamiltonian using matrix description
The Bogoliubov transformation lets us diagonalize a quadratic Hamiltonian
by just diagonalizing its numeric coefficient matrix $H$.
In the notation above, it is important to distinguish the operator Hamiltonian $\hat{H}$ from the numeric matrix $H$.
This fact can be seen by rewriting $\hat{H}$ in terms of the transformed operators: the Hamiltonian becomes diagonal if and only if the transformation matrix diagonalizes $H$.
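As a numerical sketch of this matrix description (an assumed two-mode pairing example, not taken from the text), one can diagonalize the corresponding Bogoliubov–de Gennes matrix and confirm the fermionic condition, which here reduces to unitarity of the eigenvector matrix:

```python
import numpy as np

# Bogoliubov-de Gennes matrix for the two-mode pairing Hamiltonian
#   H = xi (c_k^+ c_k + c_{-k}^+ c_{-k}) + Delta c_k^+ c_{-k}^+ + h.c.
# written in the Nambu basis (c_k, c_{-k}^+).
xi, Delta = 0.8, 0.5
H_bdg = np.array([[xi, Delta],
                  [np.conj(Delta), -xi]])

E, W = np.linalg.eigh(H_bdg)              # eigenvalues come out as -E_k, +E_k
E_k = np.hypot(xi, abs(Delta))            # expected sqrt(xi^2 + |Delta|^2)
assert np.allclose(E, [-E_k, E_k])

# The eigenvector columns carry the (u, v) coefficients; for fermions
# the transformation must be unitary, encoding |u|^2 + |v|^2 = 1.
print(np.allclose(W @ W.conj().T, np.eye(2)))   # True
print("quasiparticle energy:", E_k)
```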
Useful properties of Bogoliubov transformations are listed below.
Other applications
Fermionic condensates
Bogoliubov transformations are a crucial mathematical tool for understanding and describing fermionic condensates. They provide a way to diagonalize the Hamiltonian of an interacting fermion system in the presence of a condensate, allowing us to identify the elementary excitations, or quasiparticles, of the system.
In a system where fermions can form pairs, the standard approach of filling single-particle energy levels (the Fermi sea) is insufficient. The presence of a condensate implies a coherent superposition of states with different particle numbers, making the usual creation and annihilation operators inadequate. The Hamiltonian of such a system typically contains terms that create or annihilate pairs of fermions, such as
$H = \sum_k \epsilon_k\, c_k^\dagger c_k + \sum_k \left(\Delta_k\, c_k^\dagger c_{-k}^\dagger + \Delta_k^*\, c_{-k} c_k\right),$
where $c_k^\dagger$ and $c_k$ are the creation and annihilation operators for a fermion with momentum $k$, $\epsilon_k$ is the single-particle energy, and $\Delta_k$ is the pairing amplitude, which characterizes the strength of the condensate. This Hamiltonian is not diagonal in terms of the original fermion operators, making it difficult to directly interpret the physical properties of the system.
Bogoliubov transformations provide a solution by introducing a new set of quasiparticle operators, $\alpha_k$ and $\alpha_k^\dagger$, which are linear combinations of the original fermion operators:
$\alpha_k = u_k\, c_k - v_k\, c_{-k}^\dagger, \qquad \alpha_{-k}^\dagger = v_k^*\, c_k + u_k^*\, c_{-k}^\dagger,$
where $u_k$ and $v_k$ are complex coefficients that satisfy the normalization condition $|u_k|^2 + |v_k|^2 = 1$. This transformation mixes particle and hole creation operators, reflecting the fact that the quasiparticles are a superposition of particles and holes due to the pairing interaction. This transformation was first introduced by N. N. Bogoliubov in his seminal work on superfluidity.
The coefficients $u_k$ and $v_k$ are chosen such that the Hamiltonian, when expressed in terms of the quasiparticle operators, becomes diagonal:
$H = E_0 + \sum_k E_k\, \alpha_k^\dagger \alpha_k,$
where $E_0$ is the ground state energy and $E_k$ is the energy of the quasiparticle with momentum $k$. The diagonalization process involves solving the Bogoliubov–de Gennes equations, which are a set of self-consistent equations for the coefficients $u_k$, $v_k$, and the pairing amplitude $\Delta_k$. A detailed discussion of the Bogoliubov–de Gennes equations can be found in de Gennes' book on superconductivity.
Physical interpretation
The Bogoliubov transformation reveals several key features of fermion condensates:
Quasiparticles: The elementary excitations of the system are not individual fermions but quasiparticles, which are coherent superpositions of particles and holes. These quasiparticles have a modified energy spectrum $E_k = \sqrt{\epsilon_k^2 + |\Delta_k|^2}$, which includes a gap of size $|\Delta|$ at zero momentum (a numeric sketch of the spectrum and coherence factors follows below). This gap represents the energy required to break a Cooper pair and is a hallmark of superconductivity and other fermionic condensate phenomena.
Ground state: The ground state of the system is not simply an empty Fermi sea but a state where all quasiparticle levels are unoccupied, i.e., $\alpha_k |\Omega\rangle = 0$ for all $k$. This state, often called the BCS state in the context of superconductivity, is a coherent superposition of states with different particle numbers and represents the macroscopic condensate.
Broken symmetry: The formation of a fermion condensate is often associated with the spontaneous breaking of a symmetry, such as the U(1) gauge symmetry in superconductors. The Bogoliubov transformation provides a way to describe the system in the broken symmetry phase. The connection between broken symmetry and Bogoliubov transformations is explored in Anderson's work on pseudo-spin and gauge invariance.
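The sketch below evaluates the standard BCS-type expressions for the gapped spectrum and the particle/hole weights of the quasiparticles (textbook formulas; the numerical values are illustrative assumptions, with $\xi_k$ the single-particle energy measured from the chemical potential):

```python
import numpy as np

# E_k = sqrt(xi_k^2 + |Delta|^2);  u_k^2, v_k^2 = (1 +/- xi_k / E_k) / 2
Delta = 1.0
xi = np.linspace(-4.0, 4.0, 9)            # sample single-particle energies

E = np.sqrt(xi**2 + Delta**2)             # gapped quasiparticle spectrum
u2 = 0.5 * (1 + xi / E)                   # particle weight of quasiparticle
v2 = 0.5 * (1 - xi / E)                   # hole weight of quasiparticle

assert np.allclose(u2 + v2, 1)            # normalization |u|^2 + |v|^2 = 1
print("minimum excitation energy:", E.min())   # equals |Delta| at xi = 0
```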
See also
Holstein–Primakoff transformation
Jordan–Wigner transformation
Jordan–Schwinger transformation
Klein transformation
References
Further reading
The whole topic, and a lot of definite applications, are treated in the following textbooks:
Quantum field theory | Bogoliubov transformation | [
"Physics"
] | 2,302 | [
"Quantum field theory",
"Quantum mechanics"
] |
1,147,005 | https://en.wikipedia.org/wiki/Ternary%20compound | In inorganic chemistry and materials chemistry, a ternary compound or ternary phase is a chemical compound containing three different elements.
While some ternary compounds are molecular, e.g. chloroform (CHCl3), more typically ternary phases refer to extended solids. The perovskites are a famous example.
Binary phases, with only two elements, have lower degrees of complexity than ternary phases. With four elements, quaternary phases are more complex.
The number of isomers of a ternary compound provide a distinction between inorganic and organic chemistry: "In inorganic chemistry one or, at most, only a few compounds composed of any two or three elements were known, whereas in organic chemistry the situation was very different."
Ternary crystalline compounds
An example is sodium phosphate, Na3PO4. The sodium ion has a charge of 1+ and the phosphate ion has a charge of 3−. Therefore, three sodium ions are needed to balance the charge of one phosphate ion. Another example of a ternary compound is calcium carbonate, CaCO3. In naming and writing the formulae for ternary compounds, the rules are similar to those for binary compounds.
Classifications of ternary crystals
According to Rustum Roy and Olaf Müller, "the chemistry of the entire mineral world informs us that chemical complexity can easily be accommodated within structural simplicity." The example of zircon is cited, where various metal atoms are replaced in the same crystal structure. "The structural entity ... remains ternary in character and is able to accommodate an enormous range of chemical elements." The great variety of ternary compounds is therefore reduced to relatively few structures: "By dealing with approximately ten ternary structural groupings we can cover the most important structures of science and technology specific to the non-metallics world. It is a remarkable instance of nature's simplexity."
Letting A and B represent cations and X an anion, these ternary groupings are organized by stoichiometric types A2BX4, ABX4, and ABX3.
A ternary compound of type A2BX4 may be in the class of olivine, the spinel group, or phenakite. Examples include , β-, and .
One of type ABX4 may be of the class of zircon, scheelite, barite or an ordered silicon dioxide derivative.
In the class of ABX3 ternary compounds, there are the structures of perovskite (structure), calcium carbonate, pyroxenes, corundum and hexagonal types.
Other ternary compounds are described as crystals of types , , , , and .
Ternary semiconductors
A particular class of ternary compounds is that of the ternary semiconductors, particularly within the III-V semiconductor family. In this type of semiconductor, the ternary can be considered to be an alloy of the two binary endpoints. Varying the composition between the endpoints allows both the lattice constant and the energy bandgap to be adjusted to produce the desired properties, for example for emitting light (as an LED) or absorbing light (as a photodetector or a photovoltaic cell). An example would be the semiconductor indium gallium arsenide, a material with a band gap dependent on the In/Ga ratio.
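As an illustration of this alloying idea, the sketch below linearly interpolates the lattice constant of In$_x$Ga$_{1-x}$As between its binary endpoints (a Vegard's-law-style first approximation; the endpoint values are approximate literature numbers and real alloys show small bowing corrections):

```python
# Approximate lattice constants of the binary endpoints, in angstroms.
A_GAAS, A_INAS = 5.6533, 6.0583

def lattice_constant(x: float) -> float:
    """Linearly interpolated lattice constant of In(x)Ga(1-x)As."""
    return (1 - x) * A_GAAS + x * A_INAS

# In(0.53)Ga(0.47)As, commonly grown lattice-matched to InP,
# comes out near InP's ~5.87 angstrom lattice constant:
print(round(lattice_constant(0.53), 3))
```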
Important examples of ternary semiconductors can also be found in other semiconductor families, such as the II-VI family (e.g., mercury cadmium telluride) or the I-II-VI2 family.
Organics
In organic chemistry, the carbohydrates and carboxylic acids are ternary compounds with carbon, oxygen, and hydrogen. Other organic ternary compounds replace oxygen with another atom to form functional groups.
The multiplicity of ternary compounds based on {C, H, O} has been noted. For example, C9H10O3 corresponds to more than 60 ternary compounds.
See also
Binary compound
Mitscherlich's law of isomorphism
Quaternary phase
References
Chemical compounds | Ternary compound | [
"Physics",
"Chemistry"
] | 812 | [
"Chemical compounds",
"Molecules",
"Matter"
] |
1,147,994 | https://en.wikipedia.org/wiki/Friedmann%20equations | The Friedmann equations, also known as the Friedmann–Lemaître (FL) equations, are a set of equations in physical cosmology that govern cosmic expansion in homogeneous and isotropic models of the universe within the context of general relativity. They were first derived by Alexander Friedmann in 1922 from Einstein's field equations of gravitation for the Friedmann–Lemaître–Robertson–Walker metric and a perfect fluid with a given mass density $\rho$ and pressure $p$. The equations for negative spatial curvature were given by Friedmann in 1924.
Assumptions
The Friedmann equations start with the simplifying assumption that the universe is spatially homogeneous and isotropic, that is, the cosmological principle; empirically, this is justified on scales larger than the order of 100 Mpc. The cosmological principle implies that the metric of the universe must be of the form
$ds^2 = -c^2\, dt^2 + a(t)^2\, ds_3^2,$
where $ds_3^2$ is a three-dimensional metric that must be one of (a) flat space, (b) a sphere of constant positive curvature or (c) a hyperbolic space with constant negative curvature. This metric is called the Friedmann–Lemaître–Robertson–Walker (FLRW) metric. The parameter $k$ discussed below takes the value 0, 1, −1, or the Gaussian curvature, in these three cases respectively. It is this fact that allows us to sensibly speak of a "scale factor" $a(t)$.
Einstein's equations now relate the evolution of this scale factor to the pressure and energy of the matter in the universe. From FLRW metric we compute Christoffel symbols, then the Ricci tensor. With the stress–energy tensor for a perfect fluid, we substitute them into Einstein's field equations and the resulting equations are described below.
Equations
There are two independent Friedmann equations for modelling a homogeneous, isotropic universe.
The first is:
$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},$
which is derived from the 00 component of the Einstein field equations. The second is:
$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3},$
which is derived from the first together with the trace of Einstein's field equations (the dimension of the two equations is time$^{-2}$).
The term Friedmann equation sometimes is used only for the first equation.
$a$ is the scale factor, and $G$, $\Lambda$, and $c$ are universal constants ($G$ is the Newtonian constant of gravitation, $\Lambda$ is the cosmological constant with dimension length$^{-2}$, and $c$ is the speed of light in vacuum). $\rho$ and $p$ are the volumetric mass density (and not the volumetric energy density) and the pressure, respectively. $k$ is constant throughout a particular solution, but may vary from one solution to another.
In the previous equations, $a$, $\rho$, and $p$ are functions of time. $\frac{k}{a^2}$ is the spatial curvature in any time-slice of the universe; it is equal to one-sixth of the spatial Ricci curvature scalar in the Friedmann model. $H \equiv \frac{\dot a}{a}$ is the Hubble parameter.
We see that in the Friedmann equations, $a(t)$ does not depend on which coordinate system we chose for spatial slices. There are two commonly used choices for $a$ and $k$ which describe the same physics:
$k = +1$, $0$ or $-1$ depending on whether the shape of the universe is a closed 3-sphere, flat (Euclidean space) or an open 3-hyperboloid, respectively. If $k = +1$, then $a$ is the radius of curvature of the universe. If $k = 0$, then $a$ may be fixed to any arbitrary positive number at one particular time. If $k = -1$, then (loosely speaking) one can say that $i \cdot a$ is the radius of curvature of the universe.
$a$ is the scale factor, which is taken to be 1 at the present time. $k$ is the current spatial curvature (when $a = 1$). If the shape of the universe is hyperspherical and $R_t$ is the radius of curvature ($R_0$ at the present), then $a = R_t / R_0$. If $k$ is positive, then the universe is hyperspherical. If $k = 0$, then the universe is flat. If $k$ is negative, then the universe is hyperbolic.
Using the first equation, the second equation can be re-expressed as
$\dot\rho = -3 H \left(\rho + \frac{p}{c^2}\right),$
which eliminates $\Lambda$ and expresses the conservation of mass–energy.
These equations are sometimes simplified by replacing
$\rho \to \rho - \frac{\Lambda c^2}{8\pi G}, \qquad p \to p + \frac{\Lambda c^4}{8\pi G}$
to give:
$H^2 = \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2}, \qquad \dot H + H^2 = \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right).$
The simplified form of the second equation is invariant under this transformation.
The Hubble parameter can change over time if other parts of the equation are time dependent (in particular the mass density, the vacuum energy, or the spatial curvature). Evaluating the Hubble parameter at the present time yields Hubble's constant which is the proportionality constant of Hubble's law. Applied to a fluid with a given equation of state, the Friedmann equations yield the time evolution and geometry of the universe as a function of the fluid density.
Density parameter
The density parameter is defined as the ratio of the actual (or observed) density to the critical density of the Friedmann universe. The relation between the actual density and the critical density determines the overall geometry of the universe; when they are equal, the geometry of the universe is flat (Euclidean). In earlier models, which did not include a cosmological constant term, critical density was initially defined as the watershed point between an expanding and a contracting Universe.
To date, the critical density is estimated to be approximately five atoms (of monatomic hydrogen) per cubic metre, whereas the average density of ordinary matter in the Universe is believed to be 0.2–0.25 atoms per cubic metre.
A much greater density comes from the unidentified dark matter, although both ordinary and dark matter contribute in favour of contraction of the universe. However, the largest part comes from so-called dark energy, which accounts for the cosmological constant term. Although the total density is equal to the critical density (exactly, up to measurement error), dark energy does not lead to contraction of the universe but rather may accelerate its expansion.
An expression for the critical density is found by assuming $\Lambda$ to be zero (as it is for all basic Friedmann universes) and setting the normalised spatial curvature, $k$, equal to zero. When the substitutions are applied to the first of the Friedmann equations, we find:
$\rho_c = \frac{3 H^2}{8\pi G}.$
The density parameter (useful for comparing different cosmological models) is then defined as:
$\Omega \equiv \frac{\rho}{\rho_c} = \frac{8\pi G \rho}{3 H^2}.$
This term originally was used as a means to determine the spatial geometry of the universe, where $\rho_c$ is the critical density for which the spatial geometry is flat (or Euclidean). Assuming a zero vacuum energy density, if $\Omega$ is larger than unity, the space sections of the universe are closed; the universe will eventually stop expanding, then collapse. If $\Omega$ is less than unity, they are open; and the universe expands forever. However, one can also subsume the spatial curvature and vacuum energy terms into a more general expression for $\Omega$, in which case this density parameter equals exactly unity. Then it is a matter of measuring the different components, usually designated by subscripts. According to the ΛCDM model, there are important components of $\Omega$ due to baryons, cold dark matter and dark energy. The spatial geometry of the universe has been measured by the WMAP spacecraft to be nearly flat. This means that the universe can be well approximated by a model where the spatial curvature parameter $k$ is zero; however, this does not necessarily imply that the universe is infinite: it might merely be that the universe is much larger than the part we see.
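A quick numeric sketch of the critical density (assuming a representative Hubble constant of 70 km/s/Mpc; the constants and the resulting figure are approximate):

```python
import numpy as np

# Critical density rho_c = 3 H^2 / (8 pi G), in hydrogen atoms per m^3.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22      # metres per megaparsec
M_H = 1.6735e-27     # mass of a hydrogen atom, kg

H0 = 70e3 / MPC      # Hubble constant converted to s^-1
rho_c = 3 * H0**2 / (8 * np.pi * G)

print(rho_c)         # ~9.2e-27 kg/m^3
print(rho_c / M_H)   # ~5.5 atoms/m^3, matching the figure quoted above
```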
The first Friedmann equation is often seen in terms of the present values of the density parameters, that is
$\frac{H^2}{H_0^2} = \Omega_{0,\mathrm{R}}\, a^{-4} + \Omega_{0,\mathrm{M}}\, a^{-3} + \Omega_{0,k}\, a^{-2} + \Omega_{0,\Lambda}.$
Here $\Omega_{0,\mathrm{R}}$ is the radiation density today (when $a = 1$), $\Omega_{0,\mathrm{M}}$ is the matter (dark plus baryonic) density today, $\Omega_{0,k}$ is the "spatial curvature density" today, and $\Omega_{0,\Lambda}$ is the cosmological constant or vacuum density today.
Useful solutions
The Friedmann equations can be solved exactly in presence of a perfect fluid with equation of state
$p = w \rho c^2,$
where $p$ is the pressure, $\rho$ is the mass density of the fluid in the comoving frame and $w$ is some constant.
In the spatially flat case ($k = 0$), the solution for the scale factor is
$a(t) = a_0\, t^{\frac{2}{3(w+1)}},$
where $a_0$ is some integration constant to be fixed by the choice of initial conditions. This family of solutions labelled by $w$ is extremely important for cosmology. For example, $w = 0$ describes a matter-dominated universe, where the pressure is negligible with respect to the mass density. From the generic solution one easily sees that in a matter-dominated universe the scale factor goes as
$a(t) \propto t^{2/3}$ (matter-dominated).
Another important example is the case of a radiation-dominated universe, namely when $w = 1/3$. This leads to
$a(t) \propto t^{1/2}$ (radiation-dominated).
Note that this solution is not valid for domination of the cosmological constant, which corresponds to $w = -1$. In this case the energy density is constant and the scale factor grows exponentially.
Solutions for other values of $w$ can be found in the literature.
Mixtures
If the matter is a mixture of two or more non-interacting fluids each with such an equation of state, then
$\dot\rho_f = -3 H \left(\rho_f + \frac{p_f}{c^2}\right)$
holds separately for each such fluid $f$. In each case,
$\dot\rho_f = -3 H (1 + w_f)\, \rho_f,$
from which we get
$\rho_f \propto a^{-3(1 + w_f)}.$
For example, one can form a linear combination of such terms
$\rho = \rho_{\mathrm{d}}\, a^{-3} + \rho_{\mathrm{r}}\, a^{-4} + \rho_{\Lambda},$
where $\rho_{\mathrm{d}}$ is the density of "dust" (ordinary matter, $w = 0$) when $a = 1$; $\rho_{\mathrm{r}}$ is the density of radiation ($w = 1/3$) when $a = 1$; and $\rho_{\Lambda}$ is the constant density of "dark energy" ($w = -1$). One then substitutes this into
$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2}$
and solves for $a$ as a function of time.
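A minimal numeric sketch of this procedure for a flat universe (the density parameters below are illustrative round numbers rather than fitted values, and time is measured in units of the Hubble time, $H_0 = 1$):

```python
import numpy as np
from scipy.integrate import solve_ivp

# First Friedmann equation for a flat universe:
#   (da/dt)^2 = a^2 (Om_r a^-4 + Om_m a^-3 + Om_L),  with H0 = 1.
OM_R, OM_M = 9e-5, 0.3
OM_L = 1.0 - OM_R - OM_M              # flatness: parameters sum to 1

def dadt(t, a):
    return a * np.sqrt(OM_R * a**-4 + OM_M * a**-3 + OM_L)

sol = solve_ivp(dadt, (1e-6, 2.0), [1e-4], dense_output=True)

# Locate the moment when a(t) = 1 (the present day):
t = np.linspace(1e-6, 2.0, 2001)
a = sol.sol(t)[0]
print("a = 1 at t ~", round(t[np.argmin(abs(a - 1.0))], 2), "Hubble times")
```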
Detailed derivation
To make the solutions more explicit, we can derive the full relationships from the first Friedmann equation:
with
Rearranging and changing to use variables and for the integration
Solutions for the dependence of the scale factor with respect to time can be found for universes dominated by each component. In each case we have also assumed that the density parameter of the dominating source of energy density is approximately 1.
For matter-dominated universes, where the density scales as $\rho \propto a^{-3}$ and matter dominates:
which recovers the aforementioned $a(t) \propto t^{2/3}$.
For radiation-dominated universes, where the density scales as $\rho \propto a^{-4}$ and radiation dominates:
For $\Lambda$-dominated universes, where the energy density is constant and $\Lambda$ dominates, and where we now change the bounds of integration accordingly:
The $\Lambda$-dominated universe solution is of particular interest because the second derivative with respect to time is positive and non-zero, in other words implying an accelerating expansion of the universe, making $\Lambda$ a candidate for dark energy:
Here, by construction, the scale factor is positive, and $\Lambda$ has been measured to be positive, forcing the acceleration to be greater than zero.
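A short worked check of this statement, using the flat, $\Lambda$-only form of the first equation:

```latex
\begin{align*}
\left(\frac{\dot a}{a}\right)^2 = \frac{\Lambda c^2}{3}
  \quad &\Longrightarrow \quad a(t) = a_0\, e^{H t},
  \qquad H = \sqrt{\tfrac{\Lambda c^2}{3}},\\
\frac{\ddot a}{a} &= H^2 = \frac{\Lambda c^2}{3} > 0
  \qquad \text{for } \Lambda > 0,
\end{align*}
```

so a positive cosmological constant forces $\ddot a > 0$, i.e. accelerating expansion.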
In popular culture
Several students at Tsinghua University (CCP leader Xi Jinping's alma mater) participating in the 2022 COVID-19 protests in China carried placards with Friedmann equations scrawled on them, interpreted by some as a play on the words "Free man". Others have interpreted the use of the equations as a call to “open up” China and stop its Zero Covid policy, as the Friedmann equations relate to the expansion, or “opening” of the universe.
See also
Mathematics of general relativity
Solutions of the Einstein field equations
Warm inflation
Sources
Further reading
Eponymous equations of physics
General relativity | Friedmann equations | [
"Physics"
] | 2,154 | [
"General relativity",
"Eponymous equations of physics",
"Equations of physics",
"Theory of relativity"
] |
1,148,092 | https://en.wikipedia.org/wiki/Primordial%20fluctuations | Primordial fluctuations are density variations in the early universe which are considered the seeds of all structure in the universe. Currently, the most widely accepted explanation for their origin is in the context of cosmic inflation. According to the inflationary paradigm, the exponential growth of the scale factor during inflation caused quantum fluctuations of the inflaton field to be stretched to macroscopic scales, and, upon leaving the horizon, to "freeze in".
At the later stages of radiation- and matter-domination, these fluctuations re-entered the horizon, and thus set the initial conditions for structure formation.
The statistical properties of the primordial fluctuations can be inferred from observations of anisotropies in the cosmic microwave background and from measurements of the distribution of matter, e.g., galaxy redshift surveys. Since the fluctuations are believed to arise from inflation, such measurements can also set constraints on parameters within inflationary theory.
Formalism
Primordial fluctuations are typically quantified by a power spectrum which gives the power of the variations as a function of spatial scale. Within this formalism, one usually considers the fractional energy density of the fluctuations, given by:
$\delta(\mathbf{x}) = \frac{\rho(\mathbf{x}) - \bar\rho}{\bar\rho},$
where $\rho$ is the energy density, $\bar\rho$ its average and $k$ the wavenumber of the fluctuations. The power spectrum $P(k)$ can then be defined via the ensemble average of the Fourier components:
$\langle \delta_{\mathbf{k}}\, \delta_{\mathbf{k}'}^{*} \rangle = (2\pi)^3\, P(k)\, \delta^{(3)}(\mathbf{k} - \mathbf{k}').$
There are both scalar and tensor modes of fluctuations.
Scalar modes
Scalar modes have the power spectrum defined as the mean squared density fluctuation for a specific wavenumber $k$, i.e., the average fluctuation amplitude at a given scale.
Many inflationary models predict that the scalar component of the fluctuations obeys a power law in which
$P_s(k) \propto k^{n_s - 1}.$
For scalar fluctuations, $n_s$ is referred to as the scalar spectral index, with $n_s = 1$ corresponding to scale invariant fluctuations (not scale invariant in $\delta$ but in the comoving curvature perturbation, for which the power is indeed invariant with $k$ when $n_s = 1$).
The scalar spectral index describes how the density fluctuations vary with scale. As the size of these fluctuations depends upon the inflaton's motion when these quantum fluctuations are becoming super-horizon sized, different inflationary potentials predict different spectral indices. These depend upon the slow roll parameters, in particular the gradient and curvature of the potential. In models where the curvature is large and positive, $n_s > 1$. On the other hand, models such as monomial potentials predict a red spectral index, $n_s < 1$. Planck provides a value of .
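A small sketch of such a power-law spectrum (the amplitude, tilt and pivot scale below are representative assumed values, not quoted from this article):

```python
import numpy as np

# Power-law scalar spectrum P_s(k) = A_s * (k / k0)^(n_s - 1),
# with an assumed amplitude/tilt and a conventional pivot scale.
A_S, N_S, K0 = 2.1e-9, 0.965, 0.05     # K0 in Mpc^-1

def P_s(k):
    """Dimensionless scalar power at wavenumber k (in Mpc^-1)."""
    return A_S * (k / K0) ** (N_S - 1)

k = np.logspace(-4, 0, 5)
print(P_s(k))    # n_s < 1: slightly more power on large scales (small k)
```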
Tensor modes
The presence of primordial tensor fluctuations is predicted by many inflationary models. As with scalar fluctuations, tensor fluctuations are expected to follow a power law and are parameterized by the tensor index $n_t$ (the tensor version of the scalar index). The ratio of the tensor to scalar power spectra is given by
$r = \frac{2\,|\Delta_t(k)|^2}{\Delta_s^2(k)},$
where the 2 arises due to the two polarizations of the tensor modes. 2015 CMB data from the Planck satellite gives a constraint of .
Adiabatic/isocurvature fluctuations
Adiabatic fluctuations are density variations in all forms of matter and energy which have equal fractional over/under densities in the number density. So for example, an adiabatic photon overdensity of a factor of two in the number density would also correspond to an electron overdensity of two. For isocurvature fluctuations, the number density variations for one component do not necessarily correspond to number density variations in other components. While it is usually assumed that the initial fluctuations are adiabatic, the possibility of isocurvature fluctuations can be considered given current cosmological data. Current cosmic microwave background data favor adiabatic fluctuations and constrain uncorrelated isocurvature cold dark matter modes to be small.
See also
Big Bang
Cosmological perturbation theory
Cosmic microwave background spectral distortions
Press–Schechter formalism
Primordial gravitational wave
Primordial black hole
References
External links
Crotty, Patrick, "Bounds on isocurvature perturbations from CMB and LSS data". Physical Review Letters.
Linde, Andrei, "Quantum Cosmology and the Structure of Inflationary Universe". Invited talk.
Peiris, Hiranya, "First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Implications for Inflation". Astrophysical Journal.
Tegmark, Max, "Cosmological parameters from SDSS and WMAP". Physical Review D.
Physical cosmology
Inflation (cosmology) | Primordial fluctuations | [
"Physics",
"Astronomy"
] | 901 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
1,148,356 | https://en.wikipedia.org/wiki/Lami%27s%20theorem | In physics, Lami's theorem is an equation relating the magnitudes of three coplanar, concurrent and non-collinear vectors, which keep an object in static equilibrium, with the angles directly opposite to the corresponding vectors. According to the theorem,
$\frac{A}{\sin\alpha} = \frac{B}{\sin\beta} = \frac{C}{\sin\gamma},$
where $A$, $B$, $C$ are the magnitudes of the three coplanar, concurrent and non-collinear vectors which keep the object in static equilibrium, and $\alpha$, $\beta$, $\gamma$ are the angles directly opposite to the vectors, thus satisfying $\alpha + \beta + \gamma = 360^\circ$.
Lami's theorem is applied in static analysis of mechanical and structural systems. The theorem is named after Bernard Lamy.
Proof
As the vectors must balance, $\vec{A} + \vec{B} + \vec{C} = \vec{0}$; hence, by placing the vectors tip to tail, the result is a triangle with sides $A$, $B$, $C$ and angles $180^\circ - \alpha$, $180^\circ - \beta$, $180^\circ - \gamma$ ($\alpha$, $\beta$, $\gamma$ being the exterior angles).
By the law of sines then
$\frac{A}{\sin(180^\circ - \alpha)} = \frac{B}{\sin(180^\circ - \beta)} = \frac{C}{\sin(180^\circ - \gamma)}.$
Then, by applying that for any angle $\theta$, $\sin(180^\circ - \theta) = \sin\theta$ (supplementary angles have the same sine), the result is
$\frac{A}{\sin\alpha} = \frac{B}{\sin\beta} = \frac{C}{\sin\gamma}.$
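The theorem is easy to verify numerically: pick any two forces, close the system with a third so that the net force vanishes, and compare the three ratios (a sketch with arbitrary values):

```python
import numpy as np

# Two arbitrary coplanar forces; the third enforces static equilibrium.
A_vec = np.array([3.0, 0.0])
B_vec = 4.0 * np.array([np.cos(2.0), np.sin(2.0)])
C_vec = -(A_vec + B_vec)

def angle_between(u, v):
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# alpha is the angle opposite A, i.e. between B and C, and so on.
alpha = angle_between(B_vec, C_vec)
beta = angle_between(C_vec, A_vec)
gamma = angle_between(A_vec, B_vec)

ratios = [np.linalg.norm(v) / np.sin(ang)
          for v, ang in zip((A_vec, B_vec, C_vec), (alpha, beta, gamma))]
print(np.allclose(ratios, ratios[0]))   # True: all three ratios agree
```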
See also
Mechanical equilibrium
Parallelogram of force
Tutte embedding
References
Further reading
R.K. Bansal (2005). "A Textbook of Engineering Mechanics". Laxmi Publications. p. 4. .
I.S. Gujral (2008). "Engineering Mechanics". Firewall Media. p. 10.
Eponymous theorems of physics
Statics | Lami's theorem | [
"Physics"
] | 259 | [
"Statics",
"Equations of physics",
"Classical mechanics",
"Eponymous theorems of physics",
"Physics theorems"
] |
1,148,456 | https://en.wikipedia.org/wiki/Symmetry%20in%20biology | Symmetry in biology refers to the symmetry observed in organisms, including plants, animals, fungi, and bacteria. External symmetry can be easily seen by just looking at an organism. For example, the face of a human being has a plane of symmetry down its centre, or a pine cone displays a clear symmetrical spiral pattern. Internal features can also show symmetry, for example the tubes in the human body (responsible for transporting gases, nutrients, and waste products) which are cylindrical and have several planes of symmetry.
Biological symmetry can be thought of as a balanced distribution of duplicate body parts or shapes within the body of an organism. Importantly, unlike in mathematics, symmetry in biology is always approximate. For example, plant leaves – while considered symmetrical – rarely match up exactly when folded in half. Symmetry is one class of patterns in nature whereby there is near-repetition of the pattern element, either by reflection or rotation.
While sponges and placozoans represent two groups of animals which do not show any symmetry (i.e. are asymmetrical), the body plans of most multicellular organisms exhibit, and are defined by, some form of symmetry. There are only a few types of symmetry which are possible in body plans. These are radial (cylindrical), bilateral, biradial and spherical symmetry. While the classification of viruses as "organisms" remains controversial, many viruses also exhibit icosahedral symmetry.
The importance of symmetry is illustrated by the fact that groups of animals have traditionally been defined by this feature in taxonomic groupings. The Radiata, animals with radial symmetry, formed one of the four branches of Georges Cuvier's classification of the animal kingdom. Meanwhile, Bilateria is a taxonomic grouping still used today to represent organisms with embryonic bilateral symmetry.
Radial symmetry
Organisms with radial symmetry show a repeating pattern around a central axis such that they can be separated into several identical pieces when cut through the central point, much like pieces of a pie. Typically, this involves repeating a body part 4, 5, 6 or 8 times around the axis – referred to as tetramerism, pentamerism, hexamerism and octamerism, respectively. Such organisms exhibit no left or right sides but do have a top and a bottom surface, or a front and a back.
Georges Cuvier classified animals with radial symmetry in the taxon Radiata (Zoophytes), which is now generally accepted to be an assemblage of different animal phyla that do not share a single common ancestor (a polyphyletic group). Most radially symmetric animals are symmetrical about an axis extending from the center of the oral surface, which contains the mouth, to the center of the opposite (aboral) end. Animals in the phyla Cnidaria and Echinodermata generally show radial symmetry, although many sea anemones and some corals within the Cnidaria have bilateral symmetry defined by a single structure, the siphonoglyph. Radial symmetry is especially suitable for sessile animals such as the sea anemone, floating animals such as jellyfish, and slow moving organisms such as starfish; whereas bilateral symmetry favours locomotion by generating a streamlined body.
Many flowers are also radially symmetric, or "actinomorphic". Roughly identical floral structures – petals, sepals, and stamens – occur at regular intervals around the axis of the flower, which is often the female reproductive organ containing the carpel, style and stigma.
Subtypes of radial symmetry
Three-fold triradial symmetry was present in Trilobozoa from the Late Ediacaran period.
Four-fold tetramerism appears in some jellyfish, such as Aurelia marginalis. This is immediately obvious when looking at the jellyfish due to the presence of four gonads, visible through its translucent body. This radial symmetry is ecologically important in allowing the jellyfish to detect and respond to stimuli (mainly food and danger) from all directions.
Flowering plants show five-fold pentamerism in many of their flowers and fruits. This is easily seen through the arrangement of five carpels (seed pockets) in an apple when cut transversely. Among animals, only the echinoderms such as sea stars, sea urchins, and sea lilies are pentamerous as adults, with five arms arranged around the mouth. Being bilaterian animals, however, they initially develop with mirror symmetry as larvae, then gain pentaradial symmetry later.
Six-fold hexamerism is found in the corals and sea anemones (class Anthozoa), which are divided into two groups based on their symmetry. The most common corals in the subclass Hexacorallia have a hexameric body plan; their polyps have six-fold internal symmetry and a number of tentacles that is a multiple of six.
Eight-fold octamerism is found in corals of the subclass Octocorallia. These have polyps with eight tentacles and octameric radial symmetry. The octopus, however, has bilateral symmetry, despite its eight arms.
Icosahedral symmetry
Icosahedral symmetry occurs in an organism which contains 60 subunits generated by 20 faces, each an equilateral triangle, and 12 corners. Within the icosahedron there is 2-fold, 3-fold and 5-fold symmetry. Many viruses, including canine parvovirus, show this form of symmetry due to the presence of an icosahedral viral shell. Such symmetry has evolved because it allows the viral particle to be built up of repetitive subunits consisting of a limited number of structural proteins (encoded by viral genes), thereby saving space in the viral genome. The icosahedral symmetry can still be maintained with more than 60 subunits, but only in multiples of 60. For example, the T=3 Tomato bushy stunt virus has 60 × 3 protein subunits (180 copies of the same structural protein). Although these viruses are often referred to as 'spherical', they do not show true mathematical spherical symmetry.
In the early 20th century, Ernst Haeckel described (Haeckel, 1904) a number of species of Radiolaria, some of whose skeletons are shaped like various regular polyhedra. Examples include Circoporus octahedrus, Circogonia icosahedra, Lithocubus geometricus and Circorrhegma dodecahedra. The shapes of these creatures should be obvious from their names. Tetrahedral symmetry is not present in Callimitra agnesae.
Spherical symmetry
Spherical symmetry is characterised by the ability to draw an endless, or great but finite, number of symmetry axes through the body. This means that spherical symmetry occurs in an organism if it is able to be cut into two identical halves through any cut that runs through the organism's center. True spherical symmetry is not found in animal body plans. Organisms which show approximate spherical symmetry include the freshwater green alga Volvox.
Bacteria are often referred to as having a 'spherical' shape. Bacteria are categorized based on their shapes into three classes: cocci (spherical-shaped), bacillus (rod-shaped) and spirochetes (spiral-shaped) cells. In reality, this is a severe over-simplification as bacterial cells can be curved, bent, flattened, oblong spheroids and many more shapes. Due to the huge number of bacteria considered to be cocci (coccus if a single cell), it is unlikely that all of these show true spherical symmetry. It is important to distinguish between the generalized use of the word 'spherical' to describe organisms at ease, and the true meaning of spherical symmetry. The same situation is seen in the description of viruses – 'spherical' viruses do not necessarily show spherical symmetry, being usually icosahedral.
Bilateral symmetry
Organisms with bilateral symmetry contain a single plane of symmetry, the sagittal plane, which divides the organism into two roughly mirror image left and right halves – approximate reflectional symmetry.
Animals with bilateral symmetry are classified into a large group called the bilateria, which contains 99% of all animals (comprising over 32 phyla and 1 million described species). All bilaterians have some asymmetrical features; for example, the human heart and liver are positioned asymmetrically despite the body having external bilateral symmetry.
The bilateral symmetry of bilaterians is a complex trait which develops due to the expression of many genes. The bilateria have two axes of polarity. The first is an anterior–posterior (AP) axis which can be visualised as an imaginary axis running from the head or mouth to the tail or other end of an organism. The second is the dorsal–ventral (DV) axis which runs perpendicular to the AP axis. During development the AP axis is always specified before the DV axis, which is known as the second embryonic axis.
The AP axis is essential in defining the polarity of bilateria and allowing the development of a front and back to give the organism direction. The front end encounters the environment before the rest of the body so sensory organs such as eyes tend to be clustered there. This is also the site where a mouth develops since it is the first part of the body to encounter food. Therefore, a distinct head, with sense organs connected to a central nervous system, tends to develop. This pattern of development (with a distinct head and tail) is called cephalization. It is also argued that the development of an AP axis is important in locomotion – bilateral symmetry gives the body an intrinsic direction and allows streamlining to reduce drag.
In addition to animals, the flowers of some plants also show bilateral symmetry. Such plants are referred to as zygomorphic and include the orchid (Orchidaceae) and pea (Fabaceae) families, and most of the figwort family (Scrophulariaceae). The leaves of plants also commonly show approximate bilateral symmetry.
Biradial symmetry
Biradial symmetry is found in organisms which show morphological features (internal or external) of both bilateral and radial symmetry. Unlike radially symmetrical organisms which can be divided equally along many planes, biradial organisms can only be cut equally along two planes. This could represent an intermediate stage in the evolution of bilateral symmetry from a radially symmetric ancestor.
The animal group with the most obvious biradial symmetry is the ctenophores. In ctenophores the two planes of symmetry are (1) the plane of the tentacles and (2) the plane of the pharynx. In addition to this group, evidence for biradial symmetry has even been found in the 'perfectly radial' freshwater polyp Hydra (a cnidarian). Biradial symmetry, especially when considering both internal and external features, is more common than originally accounted for.
Evolution of symmetry
Like all the traits of organisms, symmetry (or indeed asymmetry) evolves due to an advantage to the organism – a process of natural selection. This involves changes in the frequency of symmetry-related genes throughout time.
Evolution of symmetry in plants
Early flowering plants had radially symmetric flowers but since then many plants have evolved bilaterally symmetrical flowers. The evolution of bilateral symmetry is due to the expression of CYCLOIDEA genes. Evidence for the role of the CYCLOIDEA gene family comes from mutations in these genes which cause a reversion to radial symmetry. The CYCLOIDEA genes encode transcription factors, proteins which control the expression of other genes. This allows their expression to influence developmental pathways relating to symmetry. For example, in Antirrhinum majus, CYCLOIDEA is expressed during early development in the dorsal domain of the flower meristem and continues to be expressed later on in the dorsal petals to control their size and shape. It is believed that the evolution of specialized pollinators may play a part in the transition of radially symmetrical flowers to bilaterally symmetrical flowers.
Evolution of symmetry in animals
Symmetry is often selected for in the evolution of animals. This is unsurprising since asymmetry is often an indication of unfitness – either defects during development or injuries throughout a lifetime. This is most apparent during mating, during which females of some species select males with highly symmetrical features. Additionally, female barn swallows, a species where adults have long tail streamers, prefer to mate with males that have the most symmetrical tails.
While symmetry is known to be under selection, the evolutionary history of different types of symmetry in animals is an area of extensive debate. Traditionally it has been suggested that bilateral animals evolved from a radial ancestor. Cnidarians, a phylum containing animals with radial symmetry, are the most closely related group to the bilaterians. Cnidarians are one of two groups of early animals considered to have defined structure, the second being the ctenophores. Ctenophores show biradial symmetry leading to the suggestion that they represent an intermediate step in the evolution of bilateral symmetry from radial symmetry.
Interpretations based only on morphology are not sufficient to explain the evolution of symmetry. Two different explanations are proposed for the different symmetries in cnidarians and bilateria. The first suggestion is that an ancestral animal had no symmetry (was asymmetric) before cnidarians and bilaterians separated into different evolutionary lineages. Radial symmetry could have then evolved in cnidarians and bilateral symmetry in bilaterians. Alternatively, the second suggestion is that an ancestor of cnidarians and bilaterians had bilateral symmetry before the cnidarians evolved and became different by having radial symmetry. Both potential explanations are being explored and evidence continues to fuel the debate.
Asymmetry
Although asymmetry is typically associated with being unfit, some species have evolved to be asymmetrical as an important adaptation. Many members of the phylum Porifera (sponges) have no symmetry, though some are radially symmetric.
Symmetry breaking
The presence of these asymmetrical features requires a process of symmetry breaking during development, both in plants and animals. Symmetry breaking occurs at several different levels in order to generate the anatomical asymmetry which we observe. These levels include asymmetric gene expression, protein expression, and activity of cells.
For example, left–right asymmetry in mammals has been investigated extensively in the embryos of mice. Such studies have led to support for the nodal flow hypothesis. In a region of the embryo referred to as the node there are small hair-like structures (monocilia) that all rotate together in a particular direction. This creates a unidirectional flow of signalling molecules causing these signals to accumulate on one side of the embryo and not the other. This results in the activation of different developmental pathways on each side, and subsequent asymmetry.
Much of the investigation of the genetic basis of symmetry breaking has been done on chick embryos. In chick embryos, the left side expresses genes called NODAL and LEFTY2 that activate PITX2 to signal the development of left-side structures. The right side, by contrast, does not express PITX2 and consequently develops right-side structures.
For more information about symmetry breaking in animals please refer to the left–right asymmetry page.
Plants also show asymmetry. For example, the direction of helical growth in Arabidopsis, the most commonly studied model plant, is left-handed. The genes involved in this asymmetry are similar (closely related) to those in animal asymmetry – both LEFTY1 and LEFTY2 play a role. As in animals, symmetry breaking in plants can occur at the molecular (genes/proteins), subcellular, cellular, tissue and organ levels.
Fluctuating asymmetry
See also
Biological structures
Standard anatomical position
Anatomical terms of motion
Anatomical terms of muscle
Anatomical terms of bone
Anatomical terms of neuroanatomy
Glossary of botanical terms
Glossary of plant morphology
Glossary of leaf morphology
Glossary of entomology terms
Plant morphology
Terms of orientation
Handedness
Laterality
Proper right and proper left
Reflection symmetry
Sinistral and dextral
Direction (disambiguation)
Symmetry (disambiguation)
References
Citations
Sources
Ball, Philip (2009). Shapes. Oxford University Press.
Stewart, Ian (2007). What Shape is a Snowflake? Magical Numbers in Nature. Weidenfeld and Nicolson.
Thompson, D'Arcy (1942). On Growth and Form. Cambridge University Press.
Haeckel, Ernst (1904). Kunstformen der Natur. Available as Haeckel, E. (1998). Art Forms in Nature. Prestel US.
Symmetry
Developmental biology
Animal anatomy
Evolutionary biology
pt:Simetria#Simetria na biologia | Symmetry in biology | [
"Physics",
"Mathematics",
"Biology"
] | 3,437 | [
"Evolutionary biology",
"Behavior",
"Developmental biology",
"Reproduction",
"Geometry",
"Symmetry"
] |
1,150,115 | https://en.wikipedia.org/wiki/Germicidal%20lamp | A germicidal lamp (also known as disinfection lamp or sterilizer lamp) is an electric light that produces ultraviolet C (UVC) light. This short-wave ultraviolet light disrupts DNA base pairing, causing formation of pyrimidine dimers, and leads to the inactivation of bacteria, viruses, and protozoans. It can also be used to produce ozone for water disinfection. They are used in ultraviolet germicidal irradiation (UVGI).
There are four common types available:
Low-pressure mercury lamps
High-pressure mercury lamps
Excimer lamps
LEDs
Low-pressure mercury lamps
Low-pressure mercury lamps are very similar to fluorescent lamps, emitting at a wavelength of 253.7 nm (1182.5 THz).
The most common form of germicidal lamp looks similar to an ordinary fluorescent lamp but the tube contains no fluorescent phosphor. In addition, rather than being made of ordinary borosilicate glass, the tube is made of fused quartz or vycor 7913 glass. These two changes combine to allow the 253.7 nm ultraviolet light produced by the mercury arc to pass out of the lamp unmodified (whereas, in common fluorescent lamps, it causes the phosphor to fluoresce, producing visible light). Germicidal lamps still produce a small amount of visible light due to other mercury radiation bands.
An older design looks like an incandescent lamp but with the envelope containing a few droplets of mercury. In this design, the incandescent filament heats the mercury, producing a vapor which eventually allows an arc to be struck, short circuiting the incandescent filament.
As with all gas-discharge lamps, low- and high-pressure mercury lamps exhibit negative resistance and require the use of an external ballast to regulate the current flow. The older lamps that resembled an incandescent lamp were often operated in series with an ordinary 40 W incandescent "appliance" lamp; the incandescent lamp acted as the ballast for the germicidal lamp.
High-pressure mercury lamps
High-pressure lamps are much more similar to HID lamps than fluorescent lamps.
These lamps radiate a broad-band UVC radiation, rather than a single line. They are widely used in industrial water treatment, because they are very intense radiation sources. High-pressure lamps produce very bright bluish white light.
Excimer lamps
Excimer lamps emit narrow-band UVC and vacuum-ultraviolet radiation at a variety of wavelengths depending on the medium. They are mercury-free and reach full output quicker than a mercury lamp, and generate less heat. Excimer emission at 207 and 222 nm appears to be safer than traditional 254 nm germicidal radiation, due to greatly reduced penetration of these wavelengths in human skin.
Light-emitting diodes (LEDs)
Recent developments in light-emitting diode (LED) technology have led to the commercial availability of UVC LED sources.
UVC LEDs use semiconductor materials to produce light in a solid-state device. The wavelength of emission is tuneable by adjusting the chemistry of the semiconductor material, giving a selectivity to the emission profile of the LED across, and beyond, the germicidal wavelength band. Advances in understanding and synthesis of the AlGaN materials system led to significant increases in the output power, device lifetime, and efficiency of UVC LEDs in the early 2010s.
The reduced size of LEDs opens up options for small reactor systems, allowing point-of-use applications and integration into medical devices. The low power consumption of semiconductors also enables UV disinfection systems powered by small solar cells in remote or Third World applications.
By 2019, LEDs made up 41.4% of UV light sales, up from 19.2% in 2014. The global UV-C LED market is expected to rise from US$223m in 2017 to US$991m in 2023.
Uses
Germicidal lamps are used to sterilize workspaces and tools used in biology laboratories and medical facilities. If the quartz envelope transmits shorter wavelengths, such as the 185 nm mercury emission line, they can also be used wherever ozone is desired, for example, in the sanitizing systems of hot tubs and aquariums. They are also used by geologists to provoke fluorescence in mineral samples, aiding in their identification. In this application, the light produced by the lamp is usually filtered to remove as much visible light as possible, leaving just the UV light. Germicidal lamps are also used in waste water treatment in order to kill microorganisms.
The light produced by germicidal lamps is also used to erase EPROMs; the ultraviolet photons are sufficiently energetic to allow the electrons trapped on the transistors' floating gates to tunnel through the gate insulation, eventually removing the stored charge that represents binary ones and zeroes.
Ozone production
For most purposes, ozone production would be a detrimental side effect of lamp operation. To prevent this, most germicidal lamps are treated to absorb the 185 nm mercury emission line (the longest-wavelength mercury emission energetic enough to dissociate oxygen molecules and so form ozone).
In some cases (such as water sanitization), ozone production is precisely the point. This requires specialized lamps which do not have the surface treatment.
Safety concerns
Short-wave UV light is harmful to humans. In addition to causing sunburn and (over time) skin cancer, this light can produce extremely painful inflammation of the cornea of the eye, which may lead to temporary or permanent vision impairment. For this reason, the light produced by a germicidal lamp must be carefully shielded against direct viewing, with consideration of reflections and dispersed light. A February 2017 risk analysis of UVC lights concluded that ultraviolet light from these lamps can cause skin and eye problems.
References
External links
Gas discharge lamps
Disinfectants
Ultraviolet radiation | Germicidal lamp | [
"Physics",
"Chemistry"
] | 1,216 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Ultraviolet radiation"
] |
1,150,377 | https://en.wikipedia.org/wiki/Slurry | A slurry is a mixture of denser solids suspended in liquid, usually water. The most common use of slurry is as a means of transporting solids or separating minerals, the liquid being a carrier that is pumped on a device such as a centrifugal pump. The size of solid particles may vary from 1 micrometre up to hundreds of millimetres.
The particles may settle below a certain transport velocity and the mixture can behave like a Newtonian or non-Newtonian fluid. Depending on the mixture, the slurry may be abrasive and/or corrosive.
Examples
Examples of slurries include:
Cement slurry, a mixture of cement, water, and assorted dry and liquid additives used in the petroleum and other industries
Soil/cement slurry, also called Controlled Low-Strength Material (CLSM), flowable fill, controlled density fill, flowable mortar, plastic soil-cement, K-Krete, and other names
A mixture of thickening agent, oxidizers, and water used to form a gel explosive
A mixture of pyroclastic material, rocky debris, and water produced in a volcanic eruption and known as a lahar
A mixture of bentonite and water used to make slurry walls
Coal slurry, a mixture of coal waste and water, or crushed coal and water
Slip, a mixture of clay and water used for joining, glazing and decoration of ceramics and pottery.
Slurry oil, the highest boiling fraction distilled from the effluent of an FCC unit in an oil refinery. It contains a large amount of catalyst in the form of sediment, hence the name slurry.
A mixture of wood pulp and water used to make paper
Manure slurry, a mixture of animal waste, organic matter, and sometimes water often known simply as "slurry" in agricultural use, used as fertilizer after aging in a slurry pit
Meat slurry, a mixture of finely ground meat and water, centrifugally dewatered and used as a food ingredient.
An abrasive substance used in chemical-mechanical polishing
Slurry ice, a mixture of ice crystals, freezing point depressant, and water
A mixture of raw materials and water involved in the rawmill manufacture of Portland cement
A bolus of chewed food mixed with saliva
A mixture of epoxy glue and glass microspheres used as a filler compound around core materials in sandwich-structured composite airframes.
Calculations
Determining solids fraction
To determine the percent solids (or solids fraction) of a slurry from the density of the slurry, solids and liquid:

φ_sl = ρ_s (ρ_sl − ρ_l) / (ρ_sl (ρ_s − ρ_l))

where
φ_sl is the solids fraction of the slurry (by mass)
ρ_s is the solids density
ρ_sl is the slurry density
ρ_l is the liquid density

In aqueous slurries, as is common in mineral processing, the specific gravity of each species is typically used, and since the specific gravity of water is taken to be 1, this relation is typically written:

φ_sl = ρ_s (ρ_sl − 1) / (ρ_sl (ρ_s − 1))

even though specific gravity with units tonnes/m3 (t/m3) is used instead of the SI density unit, kg/m3.
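As a worked check of this relation, here is a minimal Python sketch; the densities are illustrative values (quartz-like solids in water), not from the source:

```python
def solids_fraction_by_mass(rho_solids, rho_slurry, rho_liquid):
    """Mass fraction of solids from the densities of solids, slurry and liquid:
    phi = rho_s * (rho_sl - rho_l) / (rho_sl * (rho_s - rho_l)).
    Any consistent density units work (kg/m^3, or t/m^3 / specific gravity)."""
    return rho_solids * (rho_slurry - rho_liquid) / (
        rho_slurry * (rho_solids - rho_liquid))

# Example: solids of SG 2.65 in water (SG 1.0), measured slurry SG 1.30
phi = solids_fraction_by_mass(2.65, 1.30, 1.0)
print(f"solids fraction by mass: {phi:.3f}")  # ~0.371
```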
Liquid mass from mass fraction of solids
To determine the mass of liquid in a sample given the mass of solids and the mass fraction:

By definition

φ_sl = M_s / M_sl

therefore

M_sl = M_s / φ_sl

and

M_sl = M_s + M_l

then

M_l = M_s / φ_sl − M_s

and therefore

M_l = M_s (1 − φ_sl) / φ_sl

where
φ_sl is the solids fraction of the slurry
M_s is the mass or mass flow of solids in the sample or stream
M_sl is the mass or mass flow of slurry in the sample or stream
M_l is the mass or mass flow of liquid in the sample or stream
Volumetric fraction from mass fraction
Equivalently, the solids fraction may be defined on a volumetric basis:

φ_sl,v = V_s / V_sl

and in a minerals processing context where the specific gravity of the liquid (water) is taken to be one:

M_l = V_l and M_s = SG_s V_s

So

M_sl = SG_s V_s + V_l

and

V_sl = V_s + V_l

Then combining with the first equation:

φ_sl,m = SG_s V_s / (SG_s V_s + V_l) = SG_s φ_sl,v / (SG_s φ_sl,v + 1 − φ_sl,v)

So

φ_sl,m (SG_s φ_sl,v + 1 − φ_sl,v) = SG_s φ_sl,v

Then since 1 − φ_sl,v = V_l / V_sl, we conclude that

φ_sl,v = φ_sl,m / (SG_s (1 − φ_sl,m) + φ_sl,m)

where
φ_sl,v is the solids fraction of the slurry on a volumetric basis
φ_sl,m is the solids fraction of the slurry on a mass basis
M_s is the mass or mass flow of solids in the sample or stream
M_sl is the mass or mass flow of slurry in the sample or stream
M_l is the mass or mass flow of liquid in the sample or stream
SG_s is the bulk specific gravity of the solids
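The two derivations above can be checked numerically with a small illustrative script; the input values are made up for demonstration and carry over from the earlier example:

```python
def liquid_mass(mass_solids, phi_mass):
    """Mass of liquid in a sample from solids mass and mass fraction:
    M_l = M_s * (1 - phi) / phi."""
    return mass_solids * (1.0 - phi_mass) / phi_mass

def volume_fraction(phi_mass, sg_solids):
    """Volume fraction from mass fraction, taking the SG of water as 1:
    phi_v = phi_m / (SG_s * (1 - phi_m) + phi_m)."""
    return phi_mass / (sg_solids * (1.0 - phi_mass) + phi_mass)

phi_m = 0.371                        # mass fraction from the earlier example
print(liquid_mass(100.0, phi_m))     # ~169.5 kg liquid per 100 kg solids
print(volume_fraction(phi_m, 2.65))  # ~0.182, consistent with slurry SG 1.30
```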
See also
Grout
Slurry pipeline
Slurry transport
Slurry wall
References
External links
Bonapace, A.C. A General Theory of the Hydraulic Transport of Solids in Full Suspension
Ming, G., Ruixiang, L., Fusheng, N., Liqun, X. (2007). Hydraulic Transport of Coarse Gravel—A Laboratory Investigation Into Flow Resistance.
Environmental engineering
Materials
Fluid mechanics | Slurry | [
"Physics",
"Chemistry",
"Engineering"
] | 941 | [
"Chemical engineering",
"Materials",
"Civil engineering",
"Environmental engineering",
"Fluid mechanics",
"Matter"
] |
1,150,897 | https://en.wikipedia.org/wiki/Radioimmunoassay | A radioimmunoassay (RIA) is an immunoassay that uses radiolabeled molecules in a stepwise formation of immune complexes. A RIA is a very sensitive in vitro assay technique used to measure concentrations of substances, usually measuring antigen concentrations (for example, hormone levels in blood) by use of antibodies.
The RIA technique is extremely sensitive and extremely specific, and although it requires specialized equipment, it remains among the least expensive methods to perform such measurements. It requires special precautions and licensing, since radioactive substances are used.
In contrast, an immunoradiometric assay (IRMA) is an immunoassay that uses radiolabeled molecules but in an immediate rather than stepwise way.
A radioallergosorbent test (RAST) is an example of radioimmunoassay. It is used to detect the causative allergen for an allergy.
Method
Classically, to perform a radioimmunoassay, a known quantity of an antigen is made radioactive, frequently by labeling it with gamma-radioactive isotopes of iodine, such as 125-I, attached to tyrosine. This radiolabeled antigen is then mixed with a known amount of antibody for that antigen, and as a result, the two specifically bind to one another. Then, a sample of serum from a patient containing an unknown quantity of that same antigen is added. This causes the unlabeled (or "cold") antigen from the serum to compete with the radiolabeled ("hot") antigen for antibody binding sites. As the concentration of "cold" antigen is increased, more of it binds to the antibody, displacing the radiolabeled variant and reducing the ratio of antibody-bound radiolabeled antigen to free radiolabeled antigen. The bound antigens are then separated, and the radioactivity of the free (unbound) antigen remaining in the supernatant is measured using a gamma counter. This value is then compared to a standardised calibration curve to work out the concentration of the unlabelled antigen in the patient serum sample. RIAs can detect a few picograms of analyte in an experimental tube if antibodies of high affinity are used.
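The final interpolation step can be sketched in a few lines of Python. The four-parameter logistic model, the standard concentrations, and the counts below are all illustrative assumptions for demonstration, not part of any standard RIA protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ec50, slope):
    """Four-parameter logistic: bound counts fall as cold antigen rises."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** slope)

# Hypothetical standards: known concentrations (ng/mL) vs. bound counts (cpm)
std_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
std_cpm = np.array([9500, 8700, 6900, 4300, 2100, 1100])

params, _ = curve_fit(four_pl, std_conc, std_cpm, p0=[10000, 500, 2.0, 1.0])

def concentration_from_cpm(cpm, top, bottom, ec50, slope):
    """Invert the fitted curve to recover an unknown concentration."""
    return ec50 * ((top - bottom) / (cpm - bottom) - 1.0) ** (1.0 / slope)

print(concentration_from_cpm(5000, *params))  # unknown patient sample, ng/mL
```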
This method can be used for any biological molecule in principle and is not restricted to serum antigens, nor is it required to use the indirect method of measuring the free antigen instead of directly measuring the captured antigen. For example, if it is undesirable or not possible to radiolabel the antigen or target molecule of interest, a RIA can be done if two different antibodies that recognize the target are available and the target is large enough (e.g., a protein) to present multiple epitopes to the antibodies. One antibody would be radiolabeled as above while the other would remain unmodified. The RIA would begin with the "cold" unlabeled antibody being allowed to interact and bind to the target molecule in solution. Preferably, this unlabeled antibody is immobilized in some way, such as coupled to an agarose bead, coated to a surface, etc. Next, the "hot" radiolabeled antibody is allowed to interact with the first antibody-target molecule complex. After extensive washing, the direct amount of radioactive antibody bound is measured and the amount of target molecule quantified by comparing it to a reference amount assayed at the same time. This method is similar in principle to the non-radioactive sandwich ELISA method.
History
This method was developed by Solomon Berson and Rosalyn Sussman Yalow at the Veterans Administration Hospital in the Bronx, New York. This revolutionary development earned Dr. Yalow the Nobel Prize for Medicine in 1977, the second woman ever to win it. In her acceptance speech, Dr. Yalow said, "The world cannot afford the loss of the talents of half its people if we are to solve the many problems which beset us." Yalow shared the Nobel Prize with Roger Guillemin, and Andrew Schally who earned the prize based on their research into "the peptide hormone production of the brain".
References
External links
Radioimmunoassay (RIA)
Biochemistry detection reactions
Endocrine procedures
Immunologic tests
Radiobiology | Radioimmunoassay | [
"Chemistry",
"Biology"
] | 906 | [
"Radiobiology",
"Biochemistry detection reactions",
"Immunologic tests",
"Biochemical reactions",
"Microbiology techniques",
"Radioactivity"
] |
92,377 | https://en.wikipedia.org/wiki/Electromagnet | An electromagnet is a type of magnet in which the magnetic field is produced by an electric current. Electromagnets usually consist of wire wound into a coil. A current through the wire creates a magnetic field which is concentrated along the center of the coil. The magnetic field disappears when the current is turned off. The wire turns are often wound around a magnetic core made from a ferromagnetic or ferrimagnetic material such as iron; the magnetic core concentrates the magnetic flux and makes a more powerful magnet.
The main advantage of an electromagnet over a permanent magnet is that the magnetic field can be quickly changed by controlling the amount of electric current in the winding. However, unlike a permanent magnet, which needs no power, an electromagnet requires a continuous supply of current to maintain the magnetic field.
Electromagnets are widely used as components of other electrical devices, such as motors, generators, electromechanical solenoids, relays, loudspeakers, hard disks, MRI machines, scientific instruments, and magnetic separation equipment. Electromagnets are also employed in industry for picking up and moving heavy iron objects such as scrap iron and steel.
History
Danish scientist Hans Christian Ørsted discovered in 1820 that electric currents create magnetic fields. In the same year, the French scientist André-Marie Ampère showed that iron can be magnetized by inserting it into an electrically fed solenoid.
British scientist William Sturgeon invented the electromagnet in 1824.
His first electromagnet was a horseshoe-shaped piece of iron that was wrapped with about 18 turns of bare copper wire. (Insulated wire did not then exist.) The iron was varnished to insulate it from the windings. When a current was passed through the coil, the iron became magnetized and attracted other pieces of iron; when the current was stopped, it lost magnetization. Sturgeon displayed its power by showing that although it only weighed seven ounces (roughly 200 grams), it could lift nine pounds (roughly 4 kilos) when the current of a single-cell power supply was applied. However, Sturgeon's magnets were weak because the uninsulated wire he used could only be wrapped in a single spaced-out layer around the core, limiting the number of turns.
Beginning in 1830, US scientist Joseph Henry systematically improved and popularised the electromagnet. By using wire insulated by silk thread and inspired by Schweigger's use of multiple turns of wire to make a galvanometer, he was able to wind multiple layers of wire onto cores, creating powerful magnets with thousands of turns of wire, including one that could support 2,063 pounds (936 kg). The first major use for electromagnets was in telegraph sounders.
The magnetic domain theory of how ferromagnetic cores work was first proposed in 1906 by French physicist Pierre-Ernest Weiss, and the detailed modern quantum mechanical theory of ferromagnetism was worked out in the 1920s by Werner Heisenberg, Lev Landau, Felix Bloch, and others.
Applications of electromagnets
A portative electromagnet is one designed to just hold material in place; an example is a lifting magnet. A tractive electromagnet applies a force and moves something.
Electromagnets are very widely used in electric and electromechanical devices, including:
Motors and generators
Transformers
Relays
Electric bells and buzzers
Loudspeakers and headphones
Actuators such as valves
Magnetic recording and data storage equipment: tape recorders, VCRs, hard disks
MRI machines
Scientific equipment such as mass spectrometers
Particle accelerators
Magnetic locks
Magnetic separation equipment used for separating magnetic from nonmagnetic material; for example, separating ferrous metal in scrap
Industrial lifting magnets
Magnetic levitation, used in maglev trains
Induction heating for cooking, manufacturing, and hyperthermia therapy
Simple solenoid
A common tractive electromagnet is a uniformly wound solenoid and plunger. The solenoid is a coil of wire, and the plunger is made of a material such as soft iron. Applying a current to the solenoid applies a force to the plunger and may make it move. The plunger stops moving when the forces upon it are balanced. For example, the forces are balanced when the plunger is centered in the solenoid.
The maximum uniform pull happens when one end of the plunger is at the middle of the solenoid. An approximation for the force is

F = C A n I / l

where C is a proportionality constant, A is the cross-sectional area of the plunger, n is the number of turns in the solenoid, I is the current through the solenoid wire, and l is the length of the solenoid. For long, slender solenoids (in units using inches, pounds force, and amperes), the value of C is around 0.009 to 0.010 psi (maximum pull pounds per square inch of plunger cross-sectional area). For example, a 12-inch-long coil (l = 12 in) with a long plunger of one square inch cross section (A = 1 in²) and 11,200 ampere-turns (n I = 11,200) had a maximum pull of 8.75 pounds (corresponding to C ≈ 0.0094 psi).
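The quoted example can be reproduced numerically; a minimal Python sketch, using the constant and units defined above:

```python
def solenoid_pull_lbf(C, area_in2, ampere_turns, length_in):
    """Approximate maximum uniform pull F = C * A * n * I / l for a long,
    slender solenoid, with C in psi, area in in^2, length in inches."""
    return C * area_in2 * ampere_turns / length_in

# 12-inch coil, 1 in^2 plunger, 11,200 ampere-turns, C ~ 0.0094 psi
print(solenoid_pull_lbf(0.0094, 1.0, 11_200, 12.0))  # ~8.77 lbf
```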
The maximum pull is increased when a magnetic stop is inserted into the solenoid. The stop becomes a magnet that will attract the plunger; it adds little to the solenoid pull when the plunger is far away but dramatically increases the pull when the plunger is close. An approximation for the pull is
In this expression, the gap length is the distance between the end of the stop and the end of the plunger. The additional proportionality constant, for units of inches, pounds, and amperes with slender solenoids, is about 2660. The first term inside the bracket represents the attraction between the stop and the plunger; the second term represents the same force as the solenoid without a stop.
Some improvements can be made on this basic design. The ends of the stop and plunger are often conical. For example, the plunger may have a pointed end that fits into a matching recess in the stop. The shape makes the solenoid's pull more uniform as a function of separation. Another improvement is to add a magnetic return path around the outside of the solenoid (an "iron-clad solenoid"). The magnetic return path, just as the stop, has little impact until the air gap is small.
Physics
An electric current flowing in a wire creates a magnetic field around the wire, due to Ampere's law (see drawing of wire with magnetic field). To concentrate the magnetic field in an electromagnet, the wire is wound into a coil with many turns of wire lying side-by-side. The magnetic field of all the turns of wire passes through the center of the coil, creating a strong magnetic field there. A coil forming the shape of a straight tube (a helix) is called a solenoid.
The direction of the magnetic field through a coil of wire can be determined by the right-hand rule. If the fingers of the right hand are curled around the coil in the direction of current flow (conventional current, flow of positive charge) through the windings, the thumb points in the direction of the field inside the coil. The side of the magnet that the field lines emerge from is defined to be the north pole.
Magnetic core
For definitions of the variables below, see box at end of article.
Much stronger magnetic fields can be produced if a magnetic core, made of a soft ferromagnetic (or ferrimagnetic) material such as iron, is placed inside the coil. A core can increase the magnetic field to thousands of times the strength of the field of the coil alone, due to the high magnetic permeability of the material. Not all electromagnets use cores, so this is called a ferromagnetic-core or iron-core electromagnet.
This phenomenon occurs because the magnetic core's material (often iron or steel) is composed of small regions called magnetic domains that act like tiny magnets (see ferromagnetism). Before the current in the electromagnet is turned on, these domains point in random directions, so their tiny magnetic fields cancel each other out, and the core has no large-scale magnetic field. When a current passes through the wire wrapped around the core, its magnetic field penetrates the core and turns the domains to align in parallel with the field. As they align, all their tiny magnetic fields add to the wire's field, which creates a large magnetic field that extends into the space around the magnet. The core concentrates the field, which passes through the core along a path of much lower reluctance than it would have through air.
The larger the current passed through the wire coil, the more the domains align, and the stronger the magnetic field is. Once all the domains are aligned, any additional current only causes a slight increase in the strength of the magnetic field. Eventually, the field strength levels off and becomes nearly constant, regardless of how much current is sent through the windings. This phenomenon is called saturation, and is the main nonlinear feature of ferromagnetic materials. For most high-permeability core steels, the maximum possible strength of the magnetic field is around 1.6 to 2 teslas (T). This is why the very strongest electromagnets, such as superconducting and very high current electromagnets, cannot use cores.
When the current in the coil is turned off, most of the domains in the core material lose alignment and return to a random state, and the electromagnetic field disappears. However, some of the alignment persists because the domains resist turning their direction of magnetization, which leaves the core magnetized as a weak permanent magnet. This phenomenon is called hysteresis and the remaining magnetic field is called remanent magnetism. The residual magnetization of the core can be removed by degaussing. In alternating current electromagnets, such as those used in motors, the core's magnetization is constantly reversed, and the remanence contributes to the motor's losses.
Ampere's law
The magnetic field of electromagnets in the general case is given by Ampere's Law:

∮ H · dl = I_enc

which says that the integral of the magnetizing field H around any closed loop is equal to the sum of the current flowing through the loop. A related equation is the Biot–Savart law, which gives the magnetic field due to each small segment of current.
Force exerted by magnetic field
The force exerted by an electromagnet on a section of core material is:

F = B² A / (2 μ₀)

where B is the magnetic field, A is the cross-sectional area of the core, and μ₀ is the permeability of free space. This equation can be derived from the energy stored in a magnetic field: energy is force times distance, and rearranging terms yields the equation above.

The 1.6 T limit on the field previously mentioned sets a limit on the maximum force per unit core area, or magnetic pressure, an iron-core electromagnet can exert; roughly:

F / A = B_sat² / (2 μ₀) ≈ 1000 kPa

for the core's saturation limit B_sat ≈ 1.6 T. In more intuitive units, it is useful to remember that at 1 T the magnetic pressure is approximately 4 atmospheres (about 4 kgf/cm²).

Given a core geometry, the magnetic field needed for a given force can be calculated from the force equation above; if the result is much more than 1.6 T, a larger core must be used.
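A minimal sketch of the magnetic-pressure arithmetic (SI units; the only constant assumed is μ₀):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def magnetic_pressure_pa(B):
    """Maxwell pull per unit area on a core face: P = B^2 / (2 * mu0)."""
    return B ** 2 / (2 * MU0)

print(magnetic_pressure_pa(1.0))  # ~3.98e5 Pa, about 4 atmospheres
print(magnetic_pressure_pa(1.6))  # ~1.02e6 Pa, about 1000 kPa
```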
However, computing the magnetic field and force exerted by ferromagnetic materials in general is difficult for two reasons. First, the strength of the field varies from point to point in a complicated way, particularly outside the core and in air gaps, where fringing fields and leakage flux must be considered. Second, the magnetic field and force are nonlinear functions of the current, depending on the nonlinear relation between and for the particular core material used. For precise calculations, computer programs that can produce a model of the magnetic field using the finite element method are employed.
Magnetic circuit
In many practical applications of electromagnets, such as motors, generators, transformers, lifting magnets, and loudspeakers, the iron core is in the form of a loop or magnetic circuit, possibly broken by a few narrow air gaps. Iron presents much less "resistance" (reluctance) to the magnetic field than air, so a stronger field can be obtained if most of the magnetic field's path is within the core. Since the magnetic field lines are closed loops, the core is usually made in the form of a loop.
Since most of the magnetic field is confined within the outlines of the core loop, this allows a simplification of the mathematical analysis. A common simplifying assumption satisfied by many electromagnets, which will be used in this section, is that the magnetic field strength is constant around the magnetic circuit (within the core and air gaps) and zero outside it. Most of the magnetic field will be concentrated in the core material. Within the core, the magnetic field (B) will be approximately uniform across any cross-section; if the core also has roughly constant area throughout its length, the field in the core will be constant.
At any air gaps between core sections, the magnetic field lines are no longer confined by the core. Here, they bulge out beyond the core geometry over the length of the gap, reducing the field strength in the gap. The bulges are called fringing fields. However, as long as the length of the gap is smaller than the cross-section dimensions of the core, the field in the gap will be approximately the same as in the core.
In addition, some of the magnetic field lines will take "short cuts" and not pass through the entire core circuit, and thus will not contribute to the force exerted by the magnet. This also includes field lines that encircle the wire windings but do not enter the core. This is called leakage flux.
The equations in this section are valid for electromagnets for which:
the magnetic circuit is a single loop of core material, possibly broken by a few air gaps;
the core has roughly the same cross-sectional area throughout its length;
any air gaps between sections of core material are not large compared with the cross-sectional dimensions of the core;
there is negligible leakage flux.
Magnetic field in magnetic circuit
The magnetic field created by an electromagnet is proportional to both the number of turns in the winding, N, and the current in the wire, I; their product, NI, is the magnetomotive force. For an electromagnet with a single magnetic circuit, of which a length L_core of the flux path lies within the core material and a length L_gap lies in air gaps, Ampere's Law reduces to:

N I = (B / μ) L_core + (B / μ₀) L_gap

where μ is the permeability of the core material and μ₀ the permeability of free space. This is a nonlinear equation, because the permeability of the core, μ, varies with the magnetic field B. For an exact solution, the value of μ at the B value used must be obtained from the core material hysteresis curve. If B is unknown, the equation must be solved by numerical methods.
However, if the magnetomotive force is well above saturation (so the core material is in saturation), the magnetic field will be approximately the material's saturation value B_sat, and will not vary much with changes in NI. For a closed magnetic circuit (no air gap), most core materials saturate at a magnetomotive force of roughly 800 ampere-turns per meter of flux path.
For most core materials, the relative permeability μ_r = μ/μ₀ is on the order of several thousand. So in the equation above, the second (air gap) term dominates. Therefore, in magnetic circuits with an air gap, B depends strongly on the length of the air gap, and the length of the flux path in the core does not matter much. Given an air gap of 1 mm, a magnetomotive force of about 796 ampere-turns is required to produce a magnetic field of 1 T.
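The 796-ampere-turn figure can be checked directly. A minimal sketch, with an assumed 0.5 m iron path and μ_r = 4000 (arbitrary but typical values) to show how little the core term matters:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def mmf_required(B, l_core, l_gap, mu_r):
    """Ampere-turns needed for field B in a single-loop magnetic circuit:
    NI = (B / mu) * L_core + (B / mu0) * L_gap."""
    return (B / (mu_r * MU0)) * l_core + (B / MU0) * l_gap

# 1 T across a 1 mm air gap; 0.5 m iron path with mu_r = 4000
print(mmf_required(1.0, 0.5, 0.001, 4000))  # ~895 A-turns; the gap alone needs ~796
```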
Closed magnetic circuit
For a closed magnetic circuit (no air gap), such as would be found in an electromagnet lifting a piece of iron bridged across its poles, the equation above becomes:

B = N I μ / L

Substituting into the force equation, the force is:

F = μ² N² I² A / (2 μ₀ L²)
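A numeric sketch of these two formulas; all parameter values below are arbitrary, chosen so that the computed field stays below saturation:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def closed_circuit_field(turns, current, flux_path, mu_r):
    """B = N * I * mu / L for a gapless magnetic circuit."""
    return turns * current * (mu_r * MU0) / flux_path

def closed_circuit_force(turns, current, area, flux_path, mu_r):
    """F = mu^2 N^2 I^2 A / (2 mu0 L^2), i.e. B^2 A / (2 mu0)."""
    B = closed_circuit_field(turns, current, flux_path, mu_r)
    return B ** 2 * area / (2 * MU0)

# 100 turns, 0.5 A, 10 cm^2 pole face, 20 cm flux path, mu_r = 2000
print(closed_circuit_field(100, 0.5, 0.2, 2000))        # ~0.63 T, below saturation
print(closed_circuit_force(100, 0.5, 1e-3, 0.2, 2000))  # ~157 N
```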
To maximize the force, a core with a short flux path and a wide cross-sectional area is preferred (this also applies to magnets with an air gap). To achieve this, in applications like lifting magnets and loudspeakers, a flat cylindrical design is often used. The winding is wrapped around a short wide cylindrical core that forms one pole, and a thick metal housing that wraps around the outside of the windings forms the other part of the magnetic circuit, bringing the magnetic field to the front to form the other pole.
Force between electromagnets
The previous methods are applicable to electromagnets with a magnetic circuit; however, they do not apply when a large part of the magnetic field path is outside the core. (A non-circuit example would be a magnet with a straight cylindrical core.) To determine the force between two electromagnets (or permanent magnets) in these cases, a special analogy called a magnetic-charge model can be used. In this model, it is assumed that the magnets have well-defined "poles" where the field lines emerge from the core, and that the magnetic field is produced by fictitious "magnetic charges" on the surface of the poles. This model assumes point-like poles (instead of surfaces), and thus it only yields a good approximation when the distance between the magnets is much larger than their diameter; thus, it is useful just for determining a force between them.
The magnetic pole strength of an electromagnet with a winding of N turns carrying current I around a core of cross-sectional area A and length L is given by

m = N I A / L

and thus the force between two poles of strength m₁ and m₂ separated by a distance r is

F = μ₀ m₁ m₂ / (4 π r²)
Each electromagnet has two poles, so the total force on magnet 1 from magnet 2 is equal to the vector sum of the forces of magnet 2's poles acting on each pole of magnet 1.
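Under the stated caveats, the magnetic-charge model reduces to two small formulas; this sketch uses arbitrary magnet parameters for illustration:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def pole_strength(turns, current, area, length):
    """Fictitious pole strength m = N * I * A / L of a cylindrical electromagnet."""
    return turns * current * area / length

def pole_force(m1, m2, r):
    """Force between two point-like poles: F = mu0 * m1 * m2 / (4 * pi * r^2)."""
    return MU0 * m1 * m2 / (4 * math.pi * r ** 2)

# Two identical magnets: 1000 turns, 2 A, 1 cm^2 core, 10 cm long, 0.5 m apart
m = pole_strength(1000, 2.0, 1e-4, 0.1)
print(pole_force(m, m, 0.5))  # one pole pair; the net force is the vector sum over all four pole pairs
```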
Side effects
There are several side effects which occur in electromagnets, which must be considered in their design. These effects generally become more significant in larger electromagnets.
Ohmic heating
The only power consumed in a direct current (DC) electromagnet under steady-state conditions is due to the resistance of the windings, and is dissipated as heat. Some large electromagnets require water cooling systems in the windings to carry off the waste heat.
Since the magnetic field is proportional to the product NI, the number of turns in the windings N and the current I can be chosen to minimize heat losses, as long as their product is constant. Since the power dissipation, P = I²R, increases with the square of the current but only increases approximately linearly with the number of windings, the power lost in the windings can be minimized by reducing I and proportionally increasing the number of turns N, or using thicker wire to reduce the resistance. For example, halving I and doubling N halves the power loss, as does doubling the area of the wire. In either case, increasing the amount of wire reduces the ohmic losses. For this reason, electromagnet windings often have a significant thickness.
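A toy calculation makes the trade-off concrete; it assumes, as a simplification, that doubling the turns of the same wire gauge doubles the winding resistance:

```python
def winding_loss(current, resistance):
    """Ohmic power dissipated in the winding: P = I^2 * R."""
    return current ** 2 * resistance

I, R, N = 10.0, 2.0, 100           # arbitrary starting point, NI = 1000
print(winding_loss(I, R))          # 200 W
# Halve I, double N (same wire gauge, so winding resistance doubles).
# NI is unchanged, so the field is unchanged, but the loss halves:
print(winding_loss(I / 2, 2 * R))  # 100 W
```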
However, the limit to increasing N or lowering the resistance is that the windings take up more space between the magnet's core pieces. If the area available for windings is filled up, adding more turns requires a smaller diameter of wire, which has higher resistance, and thus cancels the advantage of using more turns. So, in large magnets there is a minimum amount of heat loss that cannot be reduced. This increases with the square of the magnetic flux, B².
Inductive voltage spikes
An electromagnet has significant inductance, and resists changes in the current through its windings. Any sudden changes in the winding current cause large voltage spikes across the windings. This is because when the current through the magnet is increased, such as when it is turned on, energy from the circuit must be stored in the magnetic field. When it is turned off, the energy in the field is returned to the circuit.
If an ordinary switch is used to control the winding current, this can cause sparks at the terminals of the switch. This does not occur when the magnet is switched on, because the limited supply voltage causes the current through the magnet and the field energy to increase slowly. But when it is switched off, the energy in the magnetic field is suddenly returned to the circuit, causing a large voltage spike and an arc across the switch contacts, which can damage them. With small electromagnets, a capacitor is sometimes used across the contacts, which reduces arcing by temporarily storing the current. More often, a diode is used to prevent voltage spikes by providing a path for the current to recirculate through the winding until the energy is dissipated as heat. The diode is connected across the winding, oriented so it is reverse-biased during steady state operation and does not conduct. When the supply voltage is removed, the voltage spike forward-biases the diode and the reactive current continues to flow through the winding, through the diode, and back into the winding. A diode used in this way is called a freewheeling diode or flyback diode.
Large electromagnets are usually powered by variable current electronic power supplies, controlled by a microprocessor, which prevent voltage spikes by accomplishing current changes slowly, in gentle ramps. It may take several minutes to energize or deenergize a large magnet.
Lorentz forces
In powerful electromagnets, the magnetic field exerts a force on each turn of the windings, due to the Lorentz force acting on the moving charges within the wire. The Lorentz force is perpendicular to both the axis of the wire and the magnetic field. It can be visualized as a pressure between the magnetic field lines, pushing them apart. It has two effects on an electromagnet's windings:
The field lines within the axis of the coil exert a radial force on each turn of the windings, tending to push them outward in all directions. This causes a tensile stress in the wire.
The leakage field lines between each turn of the coil exert an attractive force between adjacent turns, tending to pull them together.
The Lorentz forces increase with B². In large electromagnets the windings must be firmly clamped in place, to prevent motion on power-up and power-down from causing metal fatigue in the windings. In the Bitter electromagnet design, used in very high-field research magnets, the windings are constructed as flat disks to resist the radial forces, and clamped in an axial direction to resist the axial ones.
Core losses
In alternating current (AC) electromagnets, used in transformers, inductors, and AC motors and generators, the magnetic field is constantly changing. This causes energy losses in their magnetic cores, which is dissipated as heat in the core. The losses stem from two processes: eddy currents and hysteresis losses.
Eddy currents: From Faraday's law of induction, a changing magnetic field induces circulating electric currents (eddy currents) inside nearby conductors. The energy in these currents is dissipated as heat in the electrical resistance of the conductor, so they are a cause of energy loss. Since the magnet's iron core is conductive, and most of the magnetic field is concentrated there, eddy currents in the core are the major problem. Eddy currents are closed loops of current that flow in planes perpendicular to the magnetic field. The energy dissipated is proportional to the area enclosed by the loop. To prevent them, the cores of AC electromagnets are made of stacks of thin steel sheets, or laminations, oriented parallel to the magnetic field, with an insulating coating on the surface. The insulation layers prevent eddy current from flowing between the sheets. Any remaining eddy currents must flow within the cross-section of each individual lamination, which reduces losses greatly. Another alternative is to use a ferrite core, which is a nonconductor.
Hysteresis losses: Reversing the direction of magnetization of the magnetic domains in the core material each cycle causes energy loss, because of the coercivity of the material. These are called hysteresis losses. The energy lost per cycle is proportional to the area of the hysteresis loop in the graph. To minimize this loss, magnetic cores used in transformers and other AC electromagnets are made of "soft" low coercivity materials, such as silicon steel or soft ferrite. The energy loss per cycle of the alternating current is constant for each of these processes, so the power loss increases linearly with frequency.
High-field electromagnets
Superconducting electromagnets
When a magnetic field higher than the ferromagnetic limit of 1.6 T is needed, superconducting electromagnets can be used. Instead of using ferromagnetic materials, these use superconducting windings cooled with liquid helium, which conduct current without electrical resistance. These allow enormous currents to flow, which generate intense magnetic fields. Superconducting magnets are limited by the field strength at which the winding material ceases to be superconducting. Current designs are limited to 10–20 T, with the current (2017) record of 32 T. The necessary refrigeration equipment and cryostat make them much more expensive than ordinary electromagnets. However, in high-power applications this can be offset by lower operating costs, since after startup no power is required for the windings, since no energy is lost to ohmic heating. They are used in particle accelerators and MRI machines.
Bitter electromagnets
Both iron-core and superconducting electromagnets have limits to the field they can produce. Therefore, the most powerful man-made magnetic fields have been generated by air-core non-superconducting electromagnets of a design invented by Francis Bitter in 1933, called Bitter electromagnets. Instead of wire windings, a Bitter magnet consists of a solenoid made of a stack of conducting disks, arranged so that the current moves in a helical path through them, with a hole through the center where the maximum field is created. This design has the mechanical strength to withstand the extreme Lorentz forces of the field, which increase with B². The disks are pierced with holes through which cooling water passes to carry away the heat caused by the high current. The strongest continuous field achieved solely with a resistive magnet is 41.5 T, produced by a Bitter electromagnet at the National High Magnetic Field Laboratory in Tallahassee, Florida. The previous record was 37.5 T. The strongest continuous magnetic field overall, 45 T, was achieved in June 2000 with a hybrid device consisting of a Bitter magnet inside a superconducting magnet.
The factor that limits the strength of electromagnets is the inability to dissipate the enormous waste heat, so more powerful fields, up to 100 T, have been obtained from resistive magnets by sending brief pulses of high current through them; the inactive period after each pulse allows the heat produced during the pulse to be removed before the next pulse.
Explosively pumped flux compression
The most powerful man-made magnetic fields have been created by using explosives to compress the magnetic field inside an electromagnet as it is pulsed; these are called explosively pumped flux compression generators. The implosion compresses the magnetic field to values of around 1,000 T for a few microseconds. While this method may seem very destructive, shaped charges redirect the blast outward to minimize harm to the experiment. These devices are known as destructive pulsed electromagnets. They are used in physics and materials science research to study the properties of materials at high magnetic fields.
Definition of terms
See also
Dipole magnet – the most basic form of magnet
Electromagnetism
Electropermanent magnet – a magnetically hard electromagnet arrangement
Field coil
Magnetic bearing
Pulsed field magnet
Quadrupole magnet – a combination of magnets and electromagnets used mainly to affect the motion of charged particles
References
External links
Electromagnets - The Feynman Lectures on Physics
Electromagnetism
Types of magnets | Electromagnet | [
"Physics"
] | 5,777 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
92,447 | https://en.wikipedia.org/wiki/Superoxide | In chemistry, a superoxide is a compound that contains the superoxide ion, which has the chemical formula . The systematic name of the anion is dioxide(1−). The reactive oxygen ion superoxide is particularly important as the product of the one-electron reduction of dioxygen , which occurs widely in nature. Molecular oxygen (dioxygen) is a diradical containing two unpaired electrons, and superoxide results from the addition of an electron which fills one of the two degenerate molecular orbitals, leaving a charged ionic species with a single unpaired electron and a net negative charge of −1. Both dioxygen and the superoxide anion are free radicals that exhibit paramagnetism. Superoxide was historically also known as "hyperoxide".
Salts
Superoxide forms salts with alkali metals and alkaline earth metals. The salts sodium superoxide (NaO₂), potassium superoxide (KO₂), rubidium superoxide (RbO₂) and caesium superoxide (CsO₂) are prepared by the reaction of O₂ with the respective alkali metal.
The alkali salts of O₂⁻ are orange-yellow in color and quite stable, if they are kept dry. Upon dissolution of these salts in water, however, the dissolved O₂⁻ undergoes disproportionation (dismutation) extremely rapidly (in a pH-dependent manner):

4 O₂⁻ + 2 H₂O → 3 O₂ + 4 OH⁻
This reaction (with moisture and carbon dioxide in exhaled air) is the basis of the use of potassium superoxide as an oxygen source in chemical oxygen generators, such as those used on the Space Shuttle and on submarines. Superoxides are also used in firefighters' oxygen tanks to provide a readily available source of oxygen. In this process, O₂⁻ acts as a Brønsted base, initially forming the hydroperoxyl radical (HO₂).
The superoxide anion, O₂⁻, and its protonated form, hydroperoxyl (HO₂), are in equilibrium in an aqueous solution:

HO₂ ⇌ O₂⁻ + H⁺

Given that the hydroperoxyl radical has a pKa of around 4.8, superoxide predominantly exists in the anionic form at neutral pH.
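The dominance of the anion at neutral pH follows from the Henderson–Hasselbalch relation; a minimal Python sketch using the pKa quoted above:

```python
def fraction_anionic(pH, pKa=4.8):
    """Fraction present as superoxide anion (the conjugate base) at a given pH,
    from Henderson-Hasselbalch: [O2-]/[HO2] = 10 ** (pH - pKa)."""
    ratio = 10 ** (pH - pKa)
    return ratio / (1 + ratio)

print(fraction_anionic(7.0))  # ~0.994: essentially all anion at neutral pH
print(fraction_anionic(4.8))  # 0.5 at the pKa
print(fraction_anionic(3.0))  # ~0.016: hydroperoxyl dominates in acid
```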
Potassium superoxide is soluble in dimethyl sulfoxide (facilitated by crown ethers) and is stable as long as protons are not available. Superoxide can also be generated in aprotic solvents by cyclic voltammetry.
Superoxide salts also decompose in the solid state, but this process requires heating:

2 MO₂ → M₂O₂ + O₂   (M = alkali metal)
Biology
Superoxide is common in biology, reflecting the pervasiveness of O2 and its ease of reduction. Superoxide is implicated in a number of biological processes, some with negative connotations, and some with beneficial effects.
Like hydroperoxyl, superoxide is classified as reactive oxygen species. It is generated by the immune system to kill invading microorganisms. In phagocytes, superoxide is produced in large quantities by the enzyme NADPH oxidase for use in oxygen-dependent killing mechanisms of invading pathogens. Mutations in the gene coding for the NADPH oxidase cause an immunodeficiency syndrome called chronic granulomatous disease, characterized by extreme susceptibility to infection, especially catalase-positive organisms. In turn, micro-organisms genetically engineered to lack the superoxide-scavenging enzyme superoxide dismutase (SOD) lose virulence. Superoxide is also deleterious when produced as a byproduct of mitochondrial respiration (most notably by Complex I and Complex III), as well as several other enzymes, for example xanthine oxidase, which can catalyze the transfer of electrons directly to molecular oxygen under strongly reducing conditions.
Because superoxide is toxic at high concentrations, nearly all aerobic organisms express SOD. SOD efficiently catalyzes the disproportionation of superoxide:

2 O₂⁻ + 2 H⁺ → O₂ + H₂O₂
Other proteins that can be both oxidized and reduced by superoxide (such as hemoglobin) have weak SOD-like activity. Genetic inactivation ("knockout") of SOD produces deleterious phenotypes in organisms ranging from bacteria to mice and have provided important clues as to the mechanisms of toxicity of superoxide in vivo.
Yeast lacking both mitochondrial and cytosolic SOD grow very poorly in air, but quite well under anaerobic conditions. Absence of cytosolic SOD causes a dramatic increase in mutagenesis and genomic instability. Mice lacking mitochondrial SOD (MnSOD) die around 21 days after birth due to neurodegeneration, cardiomyopathy, and lactic acidosis. Mice lacking cytosolic SOD (CuZnSOD) are viable but suffer from multiple pathologies, including reduced lifespan, liver cancer, muscle atrophy, cataracts, thymic involution, haemolytic anemia, and a very rapid age-dependent decline in female fertility.
Superoxide may contribute to the pathogenesis of many diseases (the evidence is particularly strong for radiation poisoning and hyperoxic injury), and perhaps also to aging via the oxidative damage that it inflicts on cells. While the action of superoxide in the pathogenesis of some conditions is strong (for instance, mice and rats overexpressing CuZnSOD or MnSOD are more resistant to strokes and heart attacks), the role of superoxide in aging must be regarded as unproven, for now. In model organisms (yeast, the fruit fly Drosophila, and mice), genetically knocking out CuZnSOD shortens lifespan and accelerates certain features of aging (cataracts, muscle atrophy, macular degeneration, and thymic involution). But the converse, increasing the levels of CuZnSOD, does not seem to consistently increase lifespan (except perhaps in Drosophila). The most widely accepted view is that oxidative damage (resulting from multiple causes, including superoxide) is but one of several factors limiting lifespan.
The binding of O₂ by reduced (Fe(II)) heme proteins involves the formation of an Fe(III)–superoxide complex.
Assay in biological systems
The assay of superoxide in biological systems is complicated by its short half-life. One approach that has been used in quantitative assays converts superoxide to hydrogen peroxide, which is relatively stable. Hydrogen peroxide is then assayed by a fluorimetric method. As a free radical, superoxide has a strong EPR signal, and it is possible to detect superoxide directly using this method. For practical purposes, this can be achieved only in vitro under non-physiological conditions, such as high pH (which slows the spontaneous dismutation) with the enzyme xanthine oxidase. Researchers have developed a series of tool compounds termed "spin traps" that can react with superoxide, forming a meta-stable radical (half-life 1–15 minutes), which can be more readily detected by EPR. Superoxide spin-trapping was initially carried out with DMPO, but phosphorus derivatives with improved half-lives, such as DEPPMPO and DIPPMPO, have become more widely used.
Bonding and structure
Superoxides are compounds in which the oxidation number of oxygen is −1/2. Whereas molecular oxygen (dioxygen) is a diradical containing two unpaired electrons, the addition of a second electron fills one of its two degenerate molecular orbitals, leaving a charged ionic species with a single unpaired electron and a net negative charge of −1. Both dioxygen and the superoxide anion are free radicals that exhibit paramagnetism.
The derivatives of dioxygen have characteristic O–O distances that correlate with the order of the O–O bond.
See also
Oxygen, O₂
Ozonide, O₃⁻
Peroxide, O₂²⁻
Oxide, O²⁻
Dioxygenyl, O₂⁺
Antimycin A – used in fishery management, this compound produces large quantities of this free radical.
Paraquat – used as a herbicide, this compound produces large quantities of this free radical.
Xanthine oxidase – This form of the enzyme xanthine dehydrogenase produces large amounts of superoxide.
References
Anions
Oxygen compounds
Oxyanions
Immune system
Free radicals
Reactive oxygen species | Superoxide | [
"Physics",
"Chemistry",
"Biology"
] | 1,685 | [
"Matter",
"Anions",
"Immune system",
"Free radicals",
"Senescence",
"Organ systems",
"Biomolecules",
"Ions"
] |
92,512 | https://en.wikipedia.org/wiki/Lipoprotein | A lipoprotein is a biochemical assembly whose primary function is to transport hydrophobic lipid (also known as fat) molecules in water, as in blood plasma or other extracellular fluids. They consist of a triglyceride and cholesterol center, surrounded by a phospholipid outer shell, with the hydrophilic portions oriented outward toward the surrounding water and lipophilic portions oriented inward toward the lipid center. A special kind of protein, called apolipoprotein, is embedded in the outer shell, both stabilising the complex and giving it a functional identity that determines its role.
Plasma lipoprotein particles are commonly divided into five main classes, based on size, lipid composition, and apolipoprotein content: HDL, LDL, IDL, VLDL and chylomicrons. Subgroups of these plasma particles are primary drivers or modulators of atherosclerosis.
Many enzymes, transporters, structural proteins, antigens, adhesins, and toxins are sometimes also classified as lipoproteins, since they are formed by lipids and proteins.
Scope
Transmembrane lipoproteins
Some transmembrane proteolipids, especially those found in bacteria, are referred to as lipoproteins; they are not related to the lipoprotein particles that this article is about. Such transmembrane proteins are difficult to isolate, as they bind tightly to the lipid membrane, often require lipids to display the proper structure, and can be water-insoluble. Detergents are usually required to isolate transmembrane lipoproteins from their associated biological membranes.
Plasma lipoprotein particles
Because fats are insoluble in water, they cannot be transported on their own in extracellular water, including blood plasma. Instead, they are surrounded by a hydrophilic external shell that functions as a transport vehicle. The role of lipoprotein particles is to transport fat molecules, such as triglycerides, phospholipids, and cholesterol within the extracellular water of the body to all the cells and tissues of the body. The proteins included in the external shell of these particles, called apolipoproteins, are synthesized and secreted into the extracellular water by both the small intestine and liver cells. The external shell also contains phospholipids and cholesterol.
All cells use and rely on fats and cholesterol as building blocks to create the multiple membranes that cells use both to control internal water content and internal water-soluble elements and to organize their internal structure and protein enzymatic systems. The outer shell of lipoprotein particles have the hydrophilic groups of phospholipids, cholesterol, and apolipoproteins directed outward. Such characteristics make them soluble in the salt-water-based blood pool. Triglycerides and cholesteryl esters are carried internally, shielded from the water by the outer shell. The kind of apolipoproteins contained in the outer shell determines the functional identity of the lipoprotein particles. The interaction of these apolipoproteins with enzymes in the blood, with each other, or with specific proteins on the surfaces of cells, determines whether triglycerides and cholesterol will be added to or removed from the lipoprotein transport particles.
Characterization in human plasma
Structure
Lipoproteins are complex particles that have a central hydrophobic core of non-polar lipids, primarily cholesteryl esters and triglycerides. This hydrophobic core is surrounded by a hydrophilic membrane consisting of phospholipids, free cholesterol, and apolipoproteins. Plasma lipoproteins, found in blood plasma, are typically divided into five main classes based on size, lipid composition, and apolipoprotein content: HDL, LDL, IDL, VLDL and chylomicrons.
Functions
Metabolism
The handling of lipoprotein particles in the body is referred to as lipoprotein particle metabolism. It is divided into two pathways, exogenous and endogenous, depending in large part on whether the lipoprotein particles in question are composed chiefly of dietary (exogenous) lipids or whether they originated in the liver (endogenous), through de novo synthesis of triglycerides.
The hepatocytes are the main platform for the handling of triglycerides and cholesterol; the liver can also store certain amounts of glycogen and triglycerides. While adipocytes are the main storage cells for triglycerides, they do not produce any lipoproteins.
Exogenous pathway
Bile emulsifies fats contained in the chyme; then pancreatic lipase cleaves triglyceride molecules into two fatty acids and one 2-monoacylglycerol. Enterocytes readily absorb these small molecules from the chyme. Inside the enterocytes, fatty acids and monoacylglycerides are reassembled into triglycerides. These lipids are then assembled with apolipoprotein B-48 into nascent chylomicrons, which are secreted into the lacteals in a process that depends heavily on apolipoprotein B-48. As they circulate through the lymphatic vessels, nascent chylomicrons bypass the liver circulation and are drained via the thoracic duct into the bloodstream.
In the blood stream, nascent chylomicron particles interact with HDL particles, resulting in HDL donation of apolipoprotein C-II and apolipoprotein E to the nascent chylomicron. The chylomicron at this stage is then considered mature. Via apolipoprotein C-II, mature chylomicrons activate lipoprotein lipase (LPL), an enzyme on endothelial cells lining the blood vessels. LPL catalyzes the hydrolysis of triglycerides that ultimately releases glycerol and fatty acids from the chylomicrons. Glycerol and fatty acids can then be absorbed in peripheral tissues, especially adipose and muscle, for energy and storage.
The hydrolyzed chylomicrons are now called chylomicron remnants. The chylomicron remnants continue circulating in the bloodstream until they interact via apolipoprotein E with chylomicron remnant receptors, found chiefly in the liver. This interaction causes the endocytosis of the chylomicron remnants, which are subsequently hydrolyzed within lysosomes. Lysosomal hydrolysis releases glycerol and fatty acids into the cell, which can be used for energy or stored for later use.
Endogenous pathway
The liver is the central platform for the handling of lipids: it is able to store glycogen and triglycerides in its cells, the hepatocytes. Hepatocytes are also able to create triglycerides via de novo synthesis, and they produce bile from cholesterol. The intestines are responsible for absorbing dietary cholesterol and transferring it into the blood stream.
In the hepatocytes, triglycerides and cholesteryl esters are assembled with apolipoprotein B-100 to form nascent VLDL particles. Nascent VLDL particles are released into the bloodstream via a process that depends upon apolipoprotein B-100.
In the blood stream, nascent VLDL particles encounter HDL particles; as a result, HDL particles donate apolipoprotein C-II and apolipoprotein E to the nascent VLDL particle. Once loaded with apolipoproteins C-II and E, the nascent VLDL particle is considered mature. VLDL particles circulate and encounter LPL expressed on endothelial cells. Apolipoprotein C-II activates LPL, causing hydrolysis of the VLDL particle and the release of glycerol and fatty acids. These products can be absorbed from the blood by peripheral tissues, principally adipose and muscle. The hydrolyzed VLDL particles are now called VLDL remnants or intermediate-density lipoproteins (IDLs). VLDL remnants can circulate and, via an interaction between apolipoprotein E and the remnant receptor, be absorbed by the liver, or they can be further hydrolyzed by hepatic lipase.
Hydrolysis by hepatic lipase releases glycerol and fatty acids, leaving behind IDL remnants, called low-density lipoproteins (LDL), which contain a relatively high cholesterol content. LDL circulates and is absorbed by the liver and peripheral cells. Binding of LDL to its target tissue occurs through an interaction between the LDL receptor and apolipoprotein B-100 on the LDL particle. Absorption occurs through endocytosis, and the internalized LDL particles are hydrolyzed within lysosomes, releasing lipids, chiefly cholesterol.
Possible role in oxygen transport
Plasma lipoproteins may carry oxygen gas. This property is due to the crystalline hydrophobic structure of lipids, which provides a more suitable environment for O2 solubility than an aqueous medium.
Role in inflammation
Inflammation, a biological response to stimuli such as the introduction of a pathogen, has an underlying role in numerous systemic biological functions and pathologies. It is a useful response by the immune system when the body is exposed to pathogens, such as bacteria, in locations where they would cause harm, but it can also have detrimental effects if left unregulated. Lipoproteins, specifically HDL, have been demonstrated to play important roles in the inflammatory process.
When the body is functioning under normal, stable physiological conditions, HDL has been shown to be beneficial in several ways. LDL contains apolipoprotein B (apoB), which allows LDL to bind to different tissues, such as the artery wall if the glycocalyx has been damaged by high blood sugar levels. If oxidised, the LDL can become trapped in the proteoglycans, preventing its removal by HDL cholesterol efflux. Normally functioning HDL is able to prevent the oxidation of LDL and the subsequent inflammatory processes seen after oxidation.
Lipopolysaccharide (LPS) is the major pathogenic factor in the cell wall of Gram-negative bacteria. Gram-positive bacteria have a similar component, lipoteichoic acid (LTA). HDL is able to bind LPS and LTA, creating HDL-LPS complexes that neutralize their harmful effects and clear the LPS from the body. HDL also plays significant roles in interacting with cells of the immune system, modulating the availability of cholesterol and the immune response.
Under certain abnormal physiological conditions such as systemic infection or sepsis, the major components of HDL become altered. The composition and quantity of lipids and apolipoproteins change relative to normal physiological conditions: HDL cholesterol (HDL-C), phospholipids, and apoA-I (a major apolipoprotein in HDL that has been shown to have beneficial anti-inflammatory properties) decrease, while serum amyloid A increases. This altered composition of HDL is commonly referred to as acute-phase HDL in an acute-phase inflammatory response, during which HDL can lose its ability to inhibit the oxidation of LDL. In fact, this altered composition of HDL is associated with increased mortality and worse clinical outcomes in patients with sepsis.
Classification
By density
Lipoproteins may be classified into five major groups, listed below from larger, lower-density particles to smaller, higher-density ones. (A density-based classification sketch appears at the end of this subsection.) Lipoproteins are larger and less dense when their ratio of fat to protein is higher. They are classified on the basis of electrophoresis, ultracentrifugation, and nuclear magnetic resonance spectroscopy via the Vantera Analyzer.
Chylomicrons carry triglycerides (fat) from the intestines to the liver, to skeletal muscle, and to adipose tissue.
Very-low-density lipoproteins (VLDL) carry (newly synthesised) triglycerides from the liver to adipose tissue.
Intermediate-density lipoproteins (IDL) are intermediate between VLDL and LDL. They are not usually detectable in the blood when fasting.
Low-density lipoproteins (LDL) carry 3,000 to 6,000 fat molecules (phospholipids, cholesterol, triglycerides, etc.) around the body. LDL particles are sometimes referred to as "bad" lipoprotein because concentrations of two kinds of LDL (sd-LDL and LPA) correlate with atherosclerosis progression. In healthy individuals, most LDL is large and buoyant (lb LDL).
large buoyant LDL (lb LDL) particles
small dense LDL (sd LDL) particles
Lipoprotein(a) (LPA) is an LDL-like particle in which an additional protein, apolipoprotein(a), is bound to apoB-100
High-density lipoproteins (HDL) collect fat molecules from the body's cells/tissues and take them back to the liver. HDLs are sometimes referred to as "good" lipoprotein because higher concentrations correlate with low rates of atherosclerosis progression and/or regression.
Published composition data for young healthy research subjects (~70 kg, 154 lb) represent averages across the individuals studied, with percentages given as % dry weight. However, these data are not necessarily reliable for any one individual or for the general clinical population.
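As an illustration of the density-based classification described above, here is a minimal Python sketch that maps a measured particle density to one of the five classes. The cutoff values (in g/mL) are the approximate ultracentrifugation ranges commonly quoted for human plasma lipoproteins; both the values and the names are illustrative, not clinical reference material.

```python
# Approximate ultracentrifugation density cutoffs (g/mL) for human plasma
# lipoproteins; illustrative textbook values, not clinical reference ranges.
DENSITY_CLASSES = [
    (0.95, "chylomicron"),
    (1.006, "VLDL"),
    (1.019, "IDL"),
    (1.063, "LDL"),
    (1.21, "HDL"),
]

def classify_by_density(density_g_per_ml: float) -> str:
    """Return the lipoprotein class whose density range contains the value."""
    for upper_bound, name in DENSITY_CLASSES:
        if density_g_per_ml < upper_bound:
            return name
    return "denser than lipoproteins (> 1.21 g/mL)"

print(classify_by_density(1.04))  # -> "LDL"
```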
Alpha and beta
It is also possible to classify lipoproteins as "alpha" and "beta", according to the classification of proteins in serum protein electrophoresis. This terminology is sometimes used in describing lipid disorders such as abetalipoproteinemia.
Subdivisions
Lipoproteins, such as LDL and HDL, can be further subdivided into subspecies isolated through a variety of methods and distinguished by density or by the proteins they carry. Research is ongoing, but different subspecies appear to carry different apolipoproteins, other proteins, and lipid contents, and to play different physiological roles. For example, within the HDL lipoprotein subspecies, a large number of proteins are involved in general lipid metabolism. However, HDL subspecies have also been found to contain proteins involved in hemostasis and the clotting cascade (including fibrinogen), inflammatory and immune responses (including the complement system, proteolysis inhibitors, acute-phase response proteins, and the LPS-binding protein), heme and iron metabolism, platelet regulation, and vitamin binding and general transport.
Research
High levels of lipoprotein(a) are a significant risk factor for atherosclerotic cardiovascular diseases via mechanisms associated with inflammation and thrombosis. The mechanisms linking different lipoprotein isoforms to cardiovascular disease risk, together with lipoprotein synthesis, regulation, and metabolism and the related risks of genetic diseases, are under active research as of 2022.
See also
Lipid anchored protein
Remnant cholesterol
Reverse cholesterol transport
Vertical Auto Profile
References
External links
Lipids
Physiology
Cardiology
| Lipoprotein | [
"Chemistry",
"Biology"
] | 3,309 | [
"Lipid biochemistry",
"Biomolecules by chemical classification",
"Physiology",
"Organic compounds",
"Lipids",
"Lipoproteins"
] |
92,923 | https://en.wikipedia.org/wiki/Sverdrup | In oceanography, the sverdrup (symbol: Sv) is a non-SI metric unit of volumetric flow rate, with 1 Sv equal to 1,000,000 cubic meters per second. It is equivalent to the SI derived unit cubic hectometer per second (symbol: hm3/s or hm3⋅s−1): 1 Sv is equal to 1 hm3/s. It is used almost exclusively in oceanography to measure the volumetric rate of transport of ocean currents. It is named after Harald Sverdrup.
One sverdrup is about five times the flow carried by the world's largest river, the Amazon. In the context of ocean currents, a volume of one million cubic meters may be imagined as a "slice" of ocean with dimensions 1 km × 1 km × 1 m (width × length × thickness). At this scale, these units can be more easily compared in terms of width of the current (several km), depth (hundreds of meters), and current speed (as meters per second). Thus, a hypothetical current 1 km wide, 500 m (0.5 km) deep, and moving at 2 m/s would be transporting 1 Sv of water.
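The slice arithmetic above reduces to a one-line calculation: transport equals width × depth × speed. Below is a minimal Python sketch; the function and variable names are illustrative, not from any established library.

```python
SV_IN_M3_PER_S = 1_000_000  # 1 sverdrup = 10**6 cubic meters per second

def transport_sv(width_m: float, depth_m: float, speed_m_per_s: float) -> float:
    """Volumetric transport of a rectangular 'slice' of current, in sverdrups."""
    flow_m3_per_s = width_m * depth_m * speed_m_per_s
    return flow_m3_per_s / SV_IN_M3_PER_S

# The hypothetical current from the text: 1 km wide, 500 m deep, 2 m/s.
print(transport_sv(1_000, 500, 2))  # -> 1.0 (Sv)
```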
The sverdrup is distinct from the SI sievert unit or the non-SI svedberg unit. All three use the same symbol, but they are not related.
History
The sverdrup is named in honor of the Norwegian oceanographer, meteorologist and polar explorer Harald Ulrik Sverdrup (1888–1957), who wrote the 1942 volume The Oceans, Their Physics, Chemistry, and General Biology together with Martin W. Johnson and Richard H. Fleming.
In the 1950s and early 1960s both Soviet and North American scientists contemplated the damming of the Bering Strait, thus enabling temperate Atlantic water to heat up the cold Arctic Sea and, the theory went, making Siberia and northern Canada more habitable. As part of the North American team, Canadian oceanographer Maxwell Dunbar found it "very cumbersome" to repeatedly reference millions of cubic meters per second. He casually suggested a new unit of water flow, such that "the inflow through Bering Strait is one sverdrup". At the Arctic Basin Symposium in October 1962, the unit came into general usage.
Examples
The water transport in the Gulf Stream gradually increases from about 30 Sv in the Florida Current to a maximum of roughly 150 Sv south of Newfoundland at 55° W longitude.
The Antarctic Circumpolar Current, at approximately 125 Sv, is the largest ocean current.
The entire global input of fresh water from rivers to the ocean is approximately 1.2 Sv.
References
Non-SI metric units
Oceanography
Units of flow | Sverdrup | [
"Physics",
"Mathematics",
"Environmental_science"
] | 526 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Quantity",
"Non-SI metric units",
"Units of flow",
"Units of measurement"
] |
92,943 | https://en.wikipedia.org/wiki/Digital-to-analog%20converter | In electronics, a digital-to-analog converter (DAC, D/A, D2A, or D-to-A) is a system that converts a digital signal into an analog signal. An analog-to-digital converter (ADC) performs the reverse function.
There are several DAC architectures; the suitability of a DAC for a particular application is determined by figures of merit including resolution, maximum sampling frequency, and others. Digital-to-analog conversion can degrade a signal, so a DAC should be specified whose errors are insignificant for the intended application.
DACs are commonly used in music players to convert digital data streams into analog audio signals. They are also used in televisions and mobile phones to convert digital video data into analog video signals. These two applications use DACs at opposite ends of the frequency/resolution trade-off. The audio DAC is a low-frequency, high-resolution type while the video DAC is a high-frequency low- to medium-resolution type.
Due to the complexity and the need for precisely matched components, all but the most specialized DACs are implemented as integrated circuits (ICs). These typically take the form of metal–oxide–semiconductor (MOS) mixed-signal integrated circuit chips that integrate both analog and digital circuits.
Discrete DACs (circuits constructed from multiple discrete electronic components instead of a packaged IC) would typically be extremely high-speed low-resolution power-hungry types, as used in military radar systems. Very high-speed test equipment, especially sampling oscilloscopes, may also use discrete DACs.
Overview
A DAC converts an abstract finite-precision number (usually a fixed-point binary number) into a physical quantity (e.g., a voltage or a pressure). In particular, DACs are often used to convert finite-precision time series data to a continually varying physical signal.
Provided that a signal's bandwidth meets the requirements of the Nyquist–Shannon sampling theorem (i.e., a baseband signal with bandwidth less than the Nyquist frequency) and was sampled with infinite resolution, the original signal can theoretically be reconstructed from the sampled data. However, an ADC's filtering cannot entirely eliminate all frequencies above the Nyquist frequency, and whatever remains will alias into the baseband frequency range. The ADC's digital sampling process also introduces quantization error (rounding error), which manifests as low-level noise. These errors can be kept within the requirements of the targeted application (e.g., under the limited dynamic range of human hearing for audio applications).
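To make the quantization-error point concrete, the sketch below quantizes a full-scale sine wave to N bits with an ideal uniform quantizer and measures the resulting signal-to-noise ratio, which lands near the textbook ideal of 6.02·N + 1.76 dB. This is a simplification under stated assumptions (NumPy available, no dither, no real ADC front end), not a model of any particular converter.

```python
import numpy as np

def quantization_snr_db(bits: int, num_samples: int = 100_000) -> float:
    """Quantize a full-scale sine to `bits` bits; return measured SNR in dB."""
    t = np.arange(num_samples)
    signal = np.sin(2 * np.pi * 0.001234 * t)   # full-scale sine in [-1, 1]
    step = 2.0 / 2 ** bits                      # LSB size over a 2.0 span
    quantized = np.round(signal / step) * step  # ideal uniform (mid-tread) quantizer
    noise = quantized - signal
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

for n in (8, 16):
    print(f"{n} bits: {quantization_snr_db(n):.1f} dB "
          f"(ideal {6.02 * n + 1.76:.1f} dB)")
```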
Applications
DACs and ADCs are part of an enabling technology that has contributed greatly to the digital revolution. To illustrate, consider a typical long-distance telephone call. The caller's voice is converted into an analog electrical signal by a microphone, then the analog signal is converted to a digital stream by an ADC. The digital stream is then divided into network packets where it may be sent along with other digital data, not necessarily audio. The packets are then received at the destination, but each packet may take a completely different route and may not even arrive at the destination in the correct time order. The digital voice data is then extracted from the packets and assembled into a digital data stream. A DAC converts this back into an analog electrical signal, which drives an audio amplifier, which in turn drives a speaker, which finally produces sound.
Audio
Most modern audio signals are stored in digital form (for example MP3s and CDs), and in order to be heard through speakers, they must be converted into an analog signal. DACs are therefore found in CD players, digital music players, and PC sound cards.
Specialist standalone DACs can also be found in high-end hi-fi systems. These normally take the digital output of a compatible CD player or dedicated transport (which is basically a CD player with no internal DAC) and convert the signal into an analog line-level output that can then be fed into an amplifier to drive speakers.
Similar digital-to-analog converters can be found in digital speakers such as USB speakers and in sound cards.
In voice over IP applications, the source must first be digitized for transmission, so it undergoes conversion via an ADC and is then reconstructed into analog using a DAC on the receiving party's end.
Video
Video sampling tends to work on a completely different scale, thanks to the highly nonlinear response both of cathode ray tubes (for which the vast majority of digital video foundation work was targeted) and of the human eye. A "gamma curve" is used to provide an appearance of evenly distributed brightness steps across the display's full dynamic range; hence the need for RAMDACs in computer video applications with color resolution deep enough to make hard-coding a value into the DAC for each output level of each channel impractical (e.g., an Atari ST or Sega Genesis would require 24 such values; a 24-bit video card would need 768...). Given this inherent distortion, it is not unusual for a television or video projector to truthfully claim a linear contrast ratio (difference between darkest and brightest output levels) of 1000:1 or greater, equivalent to 10 bits of audio precision, even though it may only accept signals with 8-bit precision and use an LCD panel that only represents 6 or 7 bits per channel.
Video signals from a digital source, such as a computer, must be converted to analog form if they are to be displayed on an analog monitor. As of 2007, analog inputs were more commonly used than digital, but this changed as flat-panel displays with DVI and/or HDMI connections became more widespread. A video DAC is, however, incorporated in any digital video player with analog outputs. The DAC is usually integrated with some memory (RAM), which contains conversion tables for gamma correction, contrast and brightness, to make a device called a RAMDAC.
Digital potentiometer
A device that is distantly related to the DAC is the digitally controlled potentiometer, used to control an analog signal digitally.
Mechanical
A one-bit mechanical actuator assumes two positions: one when on, another when off. The motion of several one-bit actuators can be combined and weighted with a whiffletree mechanism to produce finer steps. The IBM Selectric typewriter uses such a system.
Communications
DACs are widely used in modern communication systems enabling the generation of digitally-defined transmission signals. High-speed DACs are used for mobile communications and ultra-high-speed DACs are employed in optical communications systems.
Types
The most common types of electronic DACs are:
The pulse-width modulator where a stable current or voltage is switched into a low-pass analog filter with a duration determined by the digital input code. This technique is often used for electric motor speed control and dimming LED lamps.
Oversampling DACs or interpolating DACs, such as those employing delta-sigma modulation, use a pulse-density conversion technique with oversampling. Audio delta-sigma DACs are sold with 384 kHz sampling rates and quoted 24-bit resolution, though the effective quality is lower due to inherent noise (see the noise discussion under Figures of merit). Some consumer electronics use a type of oversampling DAC referred to as a 1-bit DAC.
The binary-weighted DAC, which contains individual electrical components for each bit of the DAC, connected to a summing point, typically an operational amplifier. Each input to the summing network has a power-of-two weighting, with the largest current or voltage assigned to the most-significant bit. This is one of the fastest conversion methods but suffers from poor accuracy because of the high precision required for each individual voltage or current.
Switched resistor DAC contains a parallel resistor network. Individual resistors are enabled or bypassed in the network based on the digital input.
Switched current source DAC, from which different current sources are selected based on the digital input.
Switched capacitor DAC contains a parallel capacitor network. Individual capacitors are connected or disconnected with switches based on the input.
The R-2R ladder DAC, which is a binary-weighted DAC that uses a repeating cascaded structure of resistor values R and 2R. This improves precision because it is relatively easy to produce equal-valued, matched resistors (a sketch of the ideal transfer function appears at the end of this section).
The successive approximation or cyclic DAC, which successively constructs the output during each cycle. Individual bits of the digital input are processed each cycle until the entire input is accounted for.
The thermometer-coded DAC, which contains an equal resistor or current-source segment for each possible value of DAC output. An 8-bit thermometer DAC would have 255 segments, and a 16-bit thermometer DAC would have 65,535 segments. This is the fastest and highest-precision DAC architecture, but at the expense of requiring many components; practical implementations require high-density IC processes.
Hybrid DACs, which use a combination of the above techniques in a single converter. Most DAC integrated circuits are of this type due to the difficulty of getting low cost, high speed and high precision in one device.
The segmented DAC, which combines the thermometer-coded principle for the most significant bits and the binary-weighted principle for the least significant bits. In this way, a compromise is obtained between precision (by the use of the thermometer-coded principle) and number of resistors or current sources (by the use of the binary-weighted principle). The full binary-weighted design means 0% segmentation, the full thermometer-coded design means 100% segmentation.
Most DACs shown in this list rely on a constant reference voltage or current to create their output value. Alternatively, a multiplying DAC takes a variable input voltage or current as a conversion reference. This puts additional design constraints on the bandwidth of the conversion circuit.
Modern high-speed DACs have an interleaved architecture, in which multiple DAC cores are used in parallel. Their output signals are combined in the analog domain to enhance the performance of the combined DAC. The combination of the signals can be performed either in the time domain or in the frequency domain.
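To complement the architectures above, the sketch below shows the arithmetic that a binary-weighted or R-2R ladder DAC implements in hardware: each set bit contributes a power-of-two fraction of the reference voltage. This is the ideal transfer function only; real parts add the mismatch and nonlinearity errors discussed under Performance, and the function name is illustrative.

```python
def ideal_dac_output(code: int, bits: int, v_ref: float) -> float:
    """Ideal binary-weighted DAC: V_out = V_ref * code / 2**bits.

    Equivalently, each set bit k (k = 0 for the LSB) adds
    V_ref * 2**k / 2**bits, with the MSB contributing V_ref / 2.
    """
    if not 0 <= code < 2 ** bits:
        raise ValueError("code out of range for the given bit width")
    return v_ref * code / 2 ** bits

# 8-bit DAC with a 5 V reference: 1 LSB is 5 / 256, about 19.5 mV.
print(ideal_dac_output(128, bits=8, v_ref=5.0))  # -> 2.5 (mid-scale)
```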
Performance
The most important characteristics of a DAC are:
Resolution The number of possible output levels the DAC is designed to reproduce. This is usually stated as the number of bits it uses, which is the binary logarithm of the number of levels. For instance, a 1-bit DAC is designed to reproduce 2 (21) levels while an 8-bit DAC is designed for 256 (28) levels. Resolution is related to the effective number of bits which is a measurement of the actual resolution attained by the DAC. Resolution determines color depth in video applications and audio bit depth in audio applications.
Maximum sampling rate The maximum speed at which the DACs circuitry can operate and still produce correct output. The Nyquist–Shannon sampling theorem defines a relationship between this and the bandwidth of the sampled signal.
Monotonicity The ability of a DAC's analog output to move only in the direction that the digital input moves (i.e., if the input increases, the output doesn't dip before asserting the correct output.) This characteristic is very important for DACs used as a low-frequency signal source or as a digitally programmable trim element.
Total harmonic distortion and noise (THD+N) A measurement of the distortion and noise introduced to the signal by the DAC. It is expressed as a percentage of the total power of unwanted harmonic distortion and noise that accompanies the desired signal.
Dynamic range A measurement of the difference between the largest and smallest signals the DAC can reproduce, expressed in decibels. This is usually related to resolution and the noise floor. (A small calculator sketch for these figures follows this list.)
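The characteristics above are tied together by a few standard formulas: an N-bit DAC has 2**N levels, one LSB spans full scale divided by 2**N, ideal dynamic range grows by roughly 6.02 dB per bit, and the Nyquist bandwidth is half the sampling rate. A minimal calculator sketch with illustrative names:

```python
def dac_summary(bits: int, full_scale_v: float, sample_rate_hz: float) -> dict:
    """Back-of-envelope figures for an ideal DAC of the given resolution."""
    levels = 2 ** bits
    return {
        "levels": levels,                            # distinct output values
        "lsb_volts": full_scale_v / levels,          # smallest output step
        "dynamic_range_db": 6.02 * bits,             # ideal, ~6 dB per bit
        "nyquist_bandwidth_hz": sample_rate_hz / 2,  # max reproducible frequency
    }

# A 16-bit audio DAC with a 2 V full scale at 48 kHz:
print(dac_summary(bits=16, full_scale_v=2.0, sample_rate_hz=48_000))
```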
Other measurements, such as phase distortion and jitter, can also be very important for some applications, some of which (e.g. wireless data transmission, composite video) may even rely on accurate production of phase-adjusted signals.
Non-linear PCM encodings (A-law / μ-law, ADPCM, NICAM) attempt to improve their effective dynamic ranges by using logarithmic step sizes between the output signal strengths represented by each data bit. This trades greater quantization distortion of loud signals for better performance of quiet signals.
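For instance, μ-law companding (standardized for 8-bit telephony) applies y = sign(x)·ln(1 + μ|x|)/ln(1 + μ) with μ = 255 before uniform quantization, so quiet signals receive disproportionately many code values. The sketch below implements the continuous μ-law curve only; real G.711 codecs use a piecewise-linear approximation of it.

```python
import math

MU = 255  # standard mu-law parameter for 8-bit telephony

def mu_law_compress(x: float) -> float:
    """Map x in [-1, 1] through the mu-law curve; quiet signals are boosted."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y: float) -> float:
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A signal at 1% of full scale occupies ~23% of the coding range:
print(round(mu_law_compress(0.01), 3))  # -> 0.228 (approximately)
```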
Figures of merit
Static performance:
Differential nonlinearity (DNL) shows how much two adjacent code analog values deviate from the ideal 1 LSB step.
Integral nonlinearity (INL) shows how much the DAC transfer characteristic deviates from an ideal one. The ideal characteristic is usually a straight line; INL shows how much the actual voltage at a given code value differs from that line, in LSBs. (A computation sketch for DNL and INL follows this list.)
Gain error
Offset error
Noise is ultimately limited by the thermal noise generated by passive components such as resistors. For audio applications and at room temperature, such noise is usually a little less than 1 μV (microvolt) of white noise. This practically limits resolution to less than 20–21 bits, even in 24-bit DACs.
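A minimal sketch of how DNL and INL can be computed from a table of measured output levels, one per input code. For simplicity the ideal characteristic is taken as the nominal 1 LSB staircase anchored at the first measured level; production test procedures often use an end-point or best-fit line instead.

```python
def dnl_inl(measured_levels: list[float], lsb: float):
    """Return per-step DNL and per-code INL, both in LSBs."""
    ideal = [measured_levels[0] + i * lsb for i in range(len(measured_levels))]
    dnl = [(measured_levels[i + 1] - measured_levels[i]) / lsb - 1.0
           for i in range(len(measured_levels) - 1)]
    inl = [(m - ideal_v) / lsb for m, ideal_v in zip(measured_levels, ideal)]
    return dnl, inl

# 3-bit example with 1 LSB = 0.125 V; the step from code 3 to 4 is narrow.
levels = [0.0, 0.125, 0.250, 0.375, 0.430, 0.625, 0.750, 0.875]
dnl, inl = dnl_inl(levels, lsb=0.125)
print([round(d, 2) for d in dnl])  # the 3->4 step shows DNL = -0.56
print([round(v, 2) for v in inl])  # code 4 shows INL = -0.56
```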
Frequency domain performance:
Spurious-free dynamic range (SFDR) indicates in dB the ratio between the powers of the converted main signal and the greatest undesired spur.
Signal-to-noise and distortion (SINAD) indicates in dB the ratio between the powers of the converted main signal and the sum of the noise and the generated harmonic spurs.
i-th harmonic distortion (HDi) indicates the power of the i-th harmonic of the converted main signal.
Total harmonic distortion (THD) is the sum of the powers of all the harmonics of the input signal.
If the maximum DNL is less than 1 LSB, then the converter is guaranteed to be monotonic. However, many monotonic converters may have a maximum DNL greater than 1 LSB.
Time domain performance:
Glitch impulse area (glitch energy)
See also
References
Further reading
S. Norsworthy, Richard Schreier, Gabor C. Temes, Delta-Sigma Data Converters.
Mingliang Liu, Demystifying Switched-Capacitor Circuits.
Behzad Razavi, Principles of Data Conversion System Design.
Phillip E. Allen, Douglas R. Holberg, CMOS Analog Circuit Design.
Robert F. Coughlin, Frederick F. Driscoll, Operational Amplifiers and Linear Integrated Circuits.
A. Anand Kumar, Fundamentals of Digital Circuits.
Tertulien Ndjountche, CMOS Analog Integrated Circuits: High-Speed and Power-Efficient Design.
External links
High-Resolution Multiplying DACs Handle AC Signals
R-2R Ladder DAC explained with circuit diagrams.
Dynamic Evaluation of High-Speed, High-Resolution D/A Converters: outlines HD, IMD, and NPR measurements; also includes a derivation of quantization noise.
Digital signal processing
Electronic circuits
Analog computers | Digital-to-analog converter | [
"Engineering"
] | 3,079 | [
"Electronic engineering",
"Electronic circuits"
] |