Geometrization conjecture
In mathematics, Thurston's geometrization conjecture (now a theorem) states that each of certain three-dimensional topological spaces has a unique geometric structure that can be associated with it. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic).
In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by William Thurston in 1982 as part of his 24 questions, and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture.
Thurston's hyperbolization theorem implies that Haken manifolds satisfy the geometrization conjecture. Thurston announced a proof in the 1980s, and since then, several complete proofs have appeared in print.
Grigori Perelman announced a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery in two papers posted to the arXiv preprint server. Perelman's papers were studied by several independent groups that produced books and online manuscripts filling in the complete details of his arguments. Verification was essentially complete in time for Perelman to be awarded the 2006 Fields Medal for his work, and in 2010 the Clay Mathematics Institute awarded him its US$1 million prize for solving the Poincaré conjecture, though Perelman declined both awards.
The Poincaré conjecture and the spherical space form conjecture are corollaries of the geometrization conjecture, although there are shorter proofs of the former that do not lead to the geometrization conjecture.
The conjecture
A 3-manifold is called closed if it is compact – without "punctures" or "missing endpoints" – and has no boundary ("edge").
Every closed 3-manifold has a prime decomposition: this means it is the connected sum ("a gluing together") of prime 3-manifolds. This reduces much of the study of 3-manifolds to the case of prime 3-manifolds: those that cannot be written as a non-trivial connected sum.
Here is a statement of Thurston's conjecture:
Every oriented prime closed 3-manifold can be cut along tori, so that the interior of each of the resulting manifolds has a geometric structure with finite volume.
There are 8 possible geometric structures in 3 dimensions. There is a unique minimal way of cutting an irreducible oriented 3-manifold along tori into pieces that are Seifert manifolds or atoroidal, called the JSJ decomposition, which is not quite the same as the decomposition in the geometrization conjecture, because some of the pieces in the JSJ decomposition might not have finite volume geometric structures. (For example, the mapping torus of an Anosov map of a torus has a finite volume solv structure, but its JSJ decomposition cuts it open along one torus to produce a product of a torus and a unit interval, and the interior of this has no finite volume geometric structure.)
For non-oriented manifolds the easiest way to state a geometrization conjecture is to first take the oriented double cover. It is also possible to work directly with non-orientable manifolds, but this gives some extra complications: it may be necessary to cut along projective planes and Klein bottles as well as spheres and tori, and manifolds with a projective plane boundary component usually have no geometric structure.
In 2 dimensions, every closed surface has a geometric structure consisting of a metric with constant curvature; it is not necessary to cut the manifold up first. Specifically, every closed surface is diffeomorphic to a quotient of S2, E2, or H2.
The eight Thurston geometries
A model geometry is a simply connected smooth manifold X together with a transitive action of a Lie group G on X with compact stabilizers.
A model geometry is called maximal if G is maximal among groups acting smoothly and transitively on X with compact stabilizers. Sometimes this condition is included in the definition of a model geometry.
A geometric structure on a manifold M is a diffeomorphism from M to X/Γ for some model geometry X, where Γ is a discrete subgroup of G acting freely on X; this is a special case of a complete (G, X)-structure. If a given manifold admits a geometric structure, then it admits one whose model is maximal.
A 3-dimensional model geometry X is relevant to the geometrization conjecture if it is maximal and if there is at least one compact manifold with a geometric structure modelled on X. Thurston classified the 8 model geometries satisfying these conditions; they are listed below and are sometimes called Thurston geometries. (There are also uncountably many model geometries without compact quotients.)
There is some connection with the Bianchi groups: the 3-dimensional Lie groups. Most Thurston geometries can be realized as a left invariant metric on a Bianchi group. However S2 × R cannot be, Euclidean space corresponds to two different Bianchi groups, and there are an uncountable number of solvable non-unimodular Bianchi groups, most of which give model geometries with no compact representatives.
Spherical geometry S3
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group O(4, R), with 2 components. The corresponding manifolds are exactly the closed 3-manifolds with finite fundamental group. Examples include the 3-sphere, the Poincaré homology sphere, and lens spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type IX. Manifolds with this geometry are all compact, orientable, and have the structure of a Seifert fiber space (often in several ways). The complete list of such manifolds is given in the article on spherical 3-manifolds. Under Ricci flow, manifolds with this geometry collapse to a point in finite time.
Euclidean geometry E3
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group R3 ⋊ O(3, R), with 2 components. Examples are the 3-torus, and more generally the mapping torus of a finite-order automorphism of the 2-torus; see torus bundle. There are exactly 10 closed 3-manifolds with this geometry, 6 orientable and 4 non-orientable. This geometry can be modeled as a left invariant metric on the Bianchi groups of type I or VII0. Finite volume manifolds with this geometry are all compact, and have the structure of a Seifert fiber space (sometimes in two ways). The complete list of such manifolds is given in the article on Seifert fiber spaces. Under Ricci flow, manifolds with Euclidean geometry remain invariant.
Hyperbolic geometry H3
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group O+(1, 3, R), with 2 components. There are enormous numbers of examples of these, and their classification is not completely understood. The example with smallest volume is the Weeks manifold. Other examples are given by the Seifert–Weber space, or "sufficiently complicated" Dehn surgeries on links, or most Haken manifolds. The geometrization conjecture implies that a closed 3-manifold is hyperbolic if and only if it is irreducible, atoroidal, and has infinite fundamental group. This geometry can be modeled as a left invariant metric on the Bianchi group of type V or VIIh≠0. Under Ricci flow, manifolds with hyperbolic geometry expand.
The geometry of S2 × R
The point stabilizer is O(2, R) × Z/2Z, and the group G is O(3, R) × R × Z/2Z, with 4 components. The four finite volume manifolds with this geometry are: S2 × S1, the mapping torus of the antipode map of S2, the connected sum of two copies of 3-dimensional projective space, and the product of S1 with two-dimensional projective space. The first two are mapping tori of the identity map and antipode map of the 2-sphere, and are the only examples of 3-manifolds that are prime but not irreducible. The third is the only example of a non-trivial connected sum with a geometric structure. This is the only model geometry that cannot be realized as a left invariant metric on a 3-dimensional Lie group. Finite volume manifolds with this geometry are all compact and have the structure of a Seifert fiber space (often in several ways). Under normalized Ricci flow manifolds with this geometry converge to a 1-dimensional manifold.
The geometry of H2 × R
The point stabilizer is O(2, R) × Z/2Z, and the group G is O+(1, 2, R) × R × Z/2Z, with 4 components. Examples include the product of a hyperbolic surface with a circle, or more generally the mapping torus of an isometry of a hyperbolic surface. Finite volume manifolds with this geometry have the structure of a Seifert fiber space if they are orientable. (If they are not orientable the natural fibration by circles is not necessarily a Seifert fibration: the problem is that some fibers may "reverse orientation"; in other words their neighborhoods look like fibered solid Klein bottles rather than solid tori.) The classification of such (oriented) manifolds is given in the article on Seifert fiber spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type III. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
The geometry of the universal cover of SL(2, R)
The universal cover of SL(2, R) is denoted SL~(2, R). It fibers over H2, and the space is sometimes called "Twisted H2 × R". The group G has 2 components. Its identity component has the structure (R × SL~(2, R))/Z. The point stabilizer is O(2, R).
Examples of these manifolds include: the manifold of unit vectors of the tangent bundle of a hyperbolic surface, and more generally the Brieskorn homology spheres (excepting the 3-sphere and the Poincaré dodecahedral space). This geometry can be modeled as a left invariant metric on the Bianchi group of type VIII or III. Finite volume manifolds with this geometry are orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
Nil geometry
This fibers over E2, and so is sometimes known as "Twisted E2 × R". It is the geometry of the Heisenberg group. The point stabilizer is O(2, R). The group G has 2 components, and is a semidirect product of the 3-dimensional Heisenberg group by the group O(2, R) of isometries of a circle. Compact manifolds with this geometry include the mapping torus of a Dehn twist of a 2-torus, or the quotient of the Heisenberg group by the "integral Heisenberg group". This geometry can be modeled as a left invariant metric on the Bianchi group of type II. Finite volume manifolds with this geometry are compact and orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow, compact manifolds with this geometry converge to R2 with the flat metric.
Sol geometry
This geometry (also called Solv geometry) fibers over the line with fiber the plane, and is the geometry of the identity component of the group G. The point stabilizer is the dihedral group of order 8. The group G has 8 components, and is the group of maps from 2-dimensional Minkowski space to itself that are either isometries or multiply the metric by −1. The identity component has a normal subgroup R2 with quotient R, where R acts on R2 with 2 (real) eigenspaces, with distinct real eigenvalues of product 1. This is the Bianchi group of type VI0, and the geometry can be modeled as a left invariant metric on this group. All finite volume manifolds with solv geometry are compact. The compact manifolds with solv geometry are either the mapping torus of an Anosov map of the 2-torus (such a map is an automorphism of the 2-torus given by an invertible 2 by 2 matrix whose eigenvalues are real and distinct, such as the matrix with rows (2, 1) and (1, 1)), or quotients of these by groups of order at most 8. The eigenvalues of the automorphism of the torus generate an order of a real quadratic field, and the solv manifolds can be classified in terms of the units and ideal classes of this order.
Under normalized Ricci flow compact manifolds with this geometry converge (rather slowly) to R1.
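As a quick numerical illustration of the eigenvalue condition above, the following sketch (using NumPy; the matrix is the standard example cited in the text) checks that the Anosov matrix has distinct real eigenvalues whose product is 1:

```python
# Eigenvalues of the standard Anosov matrix from the text. For solv
# geometry the map must have distinct real eigenvalues whose product is 1.
import numpy as np

A = np.array([[2, 1],
              [1, 1]])
eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)           # approximately [2.618, 0.382]
print(np.prod(eigenvalues))  # approximately 1.0 (= det A)
```

The two eigenvalues, (3 ± √5)/2, generate an order in the real quadratic field Q(√5), matching the classification described above.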
Uniqueness
A closed 3-manifold has a geometric structure of at most one of the 8 types above, but finite volume non-compact 3-manifolds can occasionally have more than one type of geometric structure. (Nevertheless, a manifold can have many different geometric structures of the same type; for example, a surface of genus at least 2 has a continuum of different hyperbolic metrics.) More precisely, if M is a manifold with a finite volume geometric structure, then the type of geometric structure is almost determined as follows, in terms of the fundamental group π1(M):
If π1(M) is finite then the geometric structure on M is spherical, and M is compact.
If π1(M) is virtually cyclic but not finite then the geometric structure on M is S2×R, and M is compact.
If π1(M) is virtually abelian but not virtually cyclic then the geometric structure on M is Euclidean, and M is compact.
If π1(M) is virtually nilpotent but not virtually abelian then the geometric structure on M is nil geometry, and M is compact.
If π1(M) is virtually solvable but not virtually nilpotent then the geometric structure on M is solv geometry, and M is compact.
If π1(M) has an infinite normal cyclic subgroup but is not virtually solvable then the geometric structure on M is either H2×R or the universal cover of SL(2, R). The manifold M may be either compact or non-compact. If it is compact, then the 2 geometries can be distinguished by whether or not π1(M) has a finite index subgroup that splits as a semidirect product of the normal cyclic subgroup and something else. If the manifold is non-compact, then the fundamental group cannot distinguish the two geometries, and there are examples (such as the complement of a trefoil knot) where a manifold may have a finite volume geometric structure of either type.
If π1(M) has no infinite normal cyclic subgroup and is not virtually solvable then the geometric structure on M is hyperbolic, and M may be either compact or non-compact.
Infinite volume manifolds can have many different types of geometric structure: for example, R3 can have 6 of the different geometric structures listed above, as 6 of the 8 model geometries are homeomorphic to it. Moreover if the volume does not have to be finite there are an infinite number of new geometric structures with no compact models; for example, the geometry of almost any non-unimodular 3-dimensional Lie group.
There can be more than one way to decompose a closed 3-manifold into pieces with geometric structures. For example:
Taking connected sums with several copies of S3 does not change a manifold.
The connected sum of two projective 3-spaces has a S2×R geometry, and is also the connected sum of two pieces with S3 geometry.
The product of a surface of negative curvature and a circle has a geometric structure, but can also be cut along tori to produce smaller pieces that also have geometric structures. There are many similar examples for Seifert fiber spaces.
It is possible to choose a "canonical" decomposition into pieces with geometric structure, for example by first cutting the manifold into prime pieces in a minimal way, then cutting these up using the smallest possible number of tori. However this minimal decomposition is not necessarily the one produced by Ricci flow; in fact, the Ricci flow can cut up a manifold into geometric pieces in many inequivalent ways, depending on the choice of initial metric.
History
The Fields Medal was awarded to Thurston in 1982 partially for his proof of the geometrization conjecture for Haken manifolds.
In 1982, Richard S. Hamilton showed that given a closed 3-manifold with a metric of positive Ricci curvature, the Ricci flow would collapse the manifold to a point in finite time, which proves the geometrization conjecture for this case as the metric becomes "almost round" just before the collapse. He later developed a program to prove the geometrization conjecture by Ricci flow with surgery. The idea is that the Ricci flow will in general produce singularities, but one may be able to continue the Ricci flow past the singularity by using surgery to change the topology of the manifold. Roughly speaking, the Ricci flow contracts positive curvature regions and expands negative curvature regions, so it should kill off the pieces of the manifold with the "positive curvature" geometries S3 and S2 × R, while what is left at large times should have a thick–thin decomposition into a "thick" piece with hyperbolic geometry and a "thin" graph manifold.
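For reference, Hamilton's evolution equation, which the text describes but does not write out, is the following; the normalized form shown second is one standard volume-preserving variant:

```latex
% Hamilton's Ricci flow: the metric g evolves by its Ricci curvature.
\frac{\partial g_{ij}}{\partial t} = -2\,R_{ij}
% Normalized Ricci flow (volume-preserving), where r is the average
% scalar curvature and n is the dimension:
\frac{\partial g_{ij}}{\partial t} = -2\,R_{ij} + \frac{2r}{n}\,g_{ij}
```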
In 2003, Grigori Perelman announced a proof of the geometrization conjecture by showing that the Ricci flow can indeed be continued past the singularities, and has the behavior described above.
One component of Perelman's proof was a novel collapsing theorem in Riemannian geometry. Perelman did not release any details on the proof of this result (Theorem 7.4 in the preprint 'Ricci flow with surgery on three-manifolds'). Beginning with Shioya and Yamaguchi, there are now several different proofs of Perelman's collapsing theorem, or variants thereof. Shioya and Yamaguchi's formulation was used in the first fully detailed formulations of Perelman's work.
A second route to the last part of Perelman's proof of geometrization is the method of Laurent Bessières and co-authors, which uses Thurston's hyperbolization theorem for Haken manifolds and Gromov's norm for 3-manifolds. A book by the same authors with complete details of their version of the proof has been published by the European Mathematical Society.
Higher dimensions
In four dimensions, only a rather restricted class of closed 4-manifolds admit a geometric decomposition. However, lists of maximal model geometries can still be given.
The four-dimensional maximal model geometries were classified by Richard Filipkiewicz in 1983. They number eighteen, plus one countably infinite family: their usual names are E4, Nil4, Nil3 × E1, Sol4m,n (a countably infinite family), Sol40, Sol41, H3 × E1, SL~(2, R) × E1, H2 × E2, H2 × H2, H4, H2(C) (a complex hyperbolic space), F4 (the tangent bundle of the hyperbolic plane), S2 × E2, S2 × H2, S3 × E1, S4, CP2 (the complex projective plane), and S2 × S2. No closed manifold admits the geometry F4, but there are manifolds with proper decomposition including an F4 piece.
The five-dimensional maximal model geometries were classified by Andrew Geng in 2016. There are 53 individual geometries and six infinite families. Some new phenomena not observed in lower dimensions occur, including two uncountable families of geometries and geometries with no compact quotients.
References
L. Bessières, G. Besson, M. Boileau, S. Maillot, J. Porti, Geometrisation of 3-manifolds, EMS Tracts in Mathematics, volume 13, European Mathematical Society, Zurich, 2010.
M. Boileau, Geometrization of 3-manifolds with symmetries.
F. Bonahon, Geometric structures on 3-manifolds, Handbook of Geometric Topology, Elsevier, 2002.
Allen Hatcher, Notes on Basic 3-Manifold Topology, 2000.
J. Isenberg, M. Jackson, Ricci flow of locally homogeneous geometries on a Riemannian manifold, J. Diff. Geom. 35 (1992), no. 3, 723–741.
John W. Morgan, Recent progress on the Poincaré conjecture and the classification of 3-manifolds, Bull. Amer. Math. Soc. 42 (2005), no. 1, 57–78. (Expository article that briefly explains the eight geometries and the geometrization conjecture, and gives an outline of Perelman's proof of the Poincaré conjecture.)
Peter Scott, The geometries of 3-manifolds (errata), Bull. London Math. Soc. 15 (1983), no. 5, 401–487.
William Thurston, Three-dimensional manifolds, Kleinian groups and hyperbolic geometry, Bull. Amer. Math. Soc. (N.S.) 6 (1982), no. 3, 357–381. This gives the original statement of the conjecture.
William Thurston, Three-dimensional geometry and topology, Vol. 1, edited by Silvio Levy, Princeton Mathematical Series 35, Princeton University Press, Princeton, NJ, 1997. (In-depth explanation of the eight geometries and the proof that there are only eight.)
William Thurston, The Geometry and Topology of Three-Manifolds, 1980 Princeton lecture notes on geometric structures on 3-manifolds.
External links
A public lecture on the Poincaré and geometrization conjectures, given by C. McMullen at Harvard in 2006.
Green sulfur bacteria
The green sulfur bacteria are a phylum, Chlorobiota, of obligately anaerobic photoautotrophic bacteria that metabolize sulfur.
Green sulfur bacteria are nonmotile (except Chloroherpeton thalassium, which may glide) and capable of anoxygenic photosynthesis. They live in anaerobic aquatic environments. In contrast to plants, green sulfur bacteria mainly use sulfide ions as electron donors. They are autotrophs that utilize the reverse tricarboxylic acid cycle to perform carbon fixation. They are also mixotrophs and reduce nitrogen.
Characteristics
Green sulfur bacteria are gram-negative, rod-shaped or spherical bacteria. Some types of green sulfur bacteria have gas vacuoles that allow for movement. They are photolithoautotrophs, and use light energy and reduced sulfur compounds as the electron source. Electron donors include H2, H2S and elemental sulfur (S). The major photosynthetic pigments in these bacteria are bacteriochlorophylls c or d in green species and e in brown species; they are located in the chlorosomes and plasma membranes. Chlorosomes are a unique feature that allows these bacteria to capture light in low-light conditions.
Habitat
The majority of green sulfur bacteria are mesophilic, preferring moderate temperatures, and all live in aquatic environments. They require anaerobic conditions and reduced sulfur; they are usually found in the top millimeters of sediment. They are capable of photosynthesis in low light conditions.
The Black Sea, an extremely anoxic environment, was found to house a large population of green sulfur bacteria at about 100 m depth. Due to the lack of light available in this region of the sea, most bacteria were photosynthetically inactive. The photosynthetic activity detected in the sulfide chemocline suggests that the bacteria need very little energy for cellular maintenance.
A species of green sulfur bacteria has been found living near a black smoker off the coast of Mexico at a depth of 2,500 m in the Pacific Ocean. At this depth, the bacterium, designated GSB1, lives off the dim glow of the thermal vent since no sunlight can penetrate to that depth.
Green sulfur bacteria have also been found living on coral reef colonies in Taiwan, where they make up the majority of a "green layer" on these colonies. They likely play a role in the coral system, and there could be a symbiotic relationship between the bacteria and the coral host. The coral could provide an anaerobic environment and a source of carbon for the bacteria. The bacteria, in turn, could provide nutrients and detoxify the coral by oxidizing sulfide.
One type of green sulfur bacteria, Chlorobaculum tepidum, has been found in sulfur springs. These organisms are thermophilic, unlike most other green sulfur bacteria.
Phylogeny
Taxonomy
Family Chlorobiaceae Copeland 1956 ["Chlorobacteriaceae" Geitler & Pascher 1925]
?Ancalochloris Gorlenko and Lebedeva 1971
Chlorobaculum Imhoff 2003
Chlorobium Nadson 1906
?"Chloroplana" Dubinina and Gorlenko 1975
?"Clathrochloris" Geitler 1925
Prosthecochloris Gorlenko 1970
Family "Thermochlorobacteriaceae" corrig. Liu et al. 2012 ["Chloroherpetonaceae" Bello et al. 2022]
Chloroherpeton Gibson et al. 1985
"Ca. Thermochlorobacter" Liu et al. 2012
Specific characteristics of genera
Green sulfur bacteria make up the family Chlorobiaceae. There are four genera: Chloroherpeton, Prosthecochloris, Chlorobium and Chlorobaculum. Characteristics used to distinguish between these genera include some metabolic properties, pigments, cell morphology and absorption spectra. However, these properties can be difficult to distinguish, and the taxonomic division is therefore sometimes unclear.
Generally, Chlorobium are rod-shaped or vibrioid, and some species contain gas vesicles. They can develop as single or aggregate cells. They can be green or dark brown. The green strains use the photosynthetic pigment Bchl c or d with chlorobactene carotenoids, and the brown strains use the photosynthetic pigment Bchl e with isorenieratene carotenoids. Low amounts of salt are required for growth.
Prosthecochloris are made up of vibrioid, ovoid or rod-shaped cells. They start as single cells that form non-branching appendages, referred to as prosthecae. They can also form gas vesicles. The photosynthetic pigments present include Bchl c, d or e. Furthermore, salt is necessary for growth.
Chlorobaculum develop as single cells and are generally vibrioid or rod-shaped. Some of these can form gas vesicles. The photosynthetic pigments in this genus are Bchl c, d or e. Some species require NaCl (sodium chloride) for growth. Members of this genus used to be part of the genus Chlorobium, but have formed a separate lineage.
The genus Chloroherpeton is unique because members of this genus are motile. They are flexing long rods, and can move by gliding. They are green in color and contain the photosynthetic pigment Bchl c as well as γ-carotene. Salt is required for growth.
Metabolism
Photosynthesis
The green sulfur bacteria use a Type I reaction center for photosynthesis. Type I reaction centers are the bacterial homologue of photosystem I (PSI) in plants and cyanobacteria. The GSB reaction centers contain bacteriochlorophyll a and are known as P840 reaction centers due to the excitation wavelength of 840 nm that powers the flow of electrons. In green sulfur bacteria the reaction center is associated with a large antenna complex called the chlorosome that captures and funnels light energy to the reaction center. The chlorosomes have a peak absorption in the far-red region of the spectrum, between 720 and 750 nm, because they contain bacteriochlorophylls c, d and e. A protein complex called the Fenna-Matthews-Olson complex (FMO) is physically located between the chlorosomes and the P840 RC. The FMO complex helps efficiently transfer the energy absorbed by the antenna to the reaction center.
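As a back-of-the-envelope illustration of the energies involved (a calculation added here, not from the source text), the energy of a photon at the 840 nm excitation wavelength can be computed directly:

```python
# Energy per photon at the P840 excitation wavelength (illustrative only).
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
wavelength = 840e-9   # m

energy_joules = h * c / wavelength
print(energy_joules)                 # ~2.36e-19 J
print(energy_joules / 1.602177e-19)  # ~1.48 eV per photon
```

This is below the roughly 1.8 eV of a 680 nm photon used by oxygenic photosystems, consistent with these bacteria thriving in low-energy light environments.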
PSI and Type I reaction centers are able to reduce ferredoxin (Fd), a strong reductant that can be used to fix CO2 and reduce NADP+. Once the reaction center (RC) has given an electron to Fd, it becomes an oxidizing agent (P840+) with a reduction potential of around +300 mV. While this is not positive enough to strip electrons from water to synthesize O2 (E0 = +820 mV), it can accept electrons from other sources such as H2S, thiosulfate or Fe2+ ions. This transport of electrons from donors like H2S to the acceptor Fd is called linear electron flow or linear electron transport. The oxidation of sulfide ions leads to the production of sulfur as a waste product that accumulates as globules on the extracellular side of the membrane. These globules of sulfur give green sulfur bacteria their name. When sulfide is depleted, the sulfur globules are consumed and further oxidized to sulfate. However, the pathway of sulfur oxidation is not well understood.
Instead of passing the electrons onto Fd, the Fe-S clusters in the P840 reaction center can transfer the electrons to menaquinone (MQ), which returns the electrons to P840+ via an electron transport chain (ETC). On the way back to the RC, the electrons from MQH2 pass through a cytochrome bc1 complex (similar to complex III of mitochondria) that pumps H+ ions across the membrane. The electrochemical potential of the protons across the membrane is used to synthesize ATP by the FoF1 ATP synthase. This cyclic electron transport is responsible for converting light energy into cellular energy in the form of ATP.
Sulfur metabolism
Green sulfur bacteria oxidize inorganic sulfur compounds to use as electron donors for anaerobic photosynthesis, specifically in carbon dioxide fixation. They usually prefer to utilize sulfide over other sulfur compounds as an electron donor, however they can utilize thiosulfate or H2. The intermediate is usually sulfur, which is deposited outside of the cell, and the end product is sulfate. The sulfur, which is deposited extracellularly, is in the form of sulfur globules, which can be later oxidized completely.
The mechanisms of sulfur oxidation in green sulfur bacteria are not well characterized. Some enzymes thought to be involved in sulfide oxidation include flavocytochrome c, sulfide:quinone oxidoreductase (SQR) and the Sox enzyme system. Flavocytochrome c can catalyze the transfer of electrons from sulfide to cytochromes, and these cytochromes could then move the electrons to the photosynthetic reaction center. However, not all green sulfur bacteria produce this enzyme, demonstrating that it is not required for the oxidation of sulfide. SQR also helps with electron transport but, when acting alone, has been found to produce decreased rates of sulfide oxidation in green sulfur bacteria, suggesting that there is a different, more effective mechanism. However, most green sulfur bacteria contain a homolog of the SQR gene. The oxidation of thiosulfate to sulfate could be catalyzed by the enzymes of the Sox system.
It is thought that the enzymes and genes related to sulfur metabolism were obtained via horizontal gene transfer during the evolution of green sulfur bacteria.
Carbon fixation
Green sulfur bacteria are photoautotrophs: they not only get energy from light, they can also grow using carbon dioxide as their sole source of carbon. They fix carbon dioxide using the reverse tricarboxylic acid (rTCA) cycle, in which energy is consumed to reduce carbon dioxide, rather than to oxidize organic substrates as in the forward TCA cycle, in order to synthesize pyruvate and acetate. These molecules are used as the raw materials to synthesize all the building blocks a cell needs to generate macromolecules. The rTCA cycle is highly energy efficient, enabling the bacteria to grow under low light conditions. However, it has several oxygen-sensitive enzymes that limit its efficiency in aerobic conditions.
The reactions that reverse the oxidative tricarboxylic acid cycle are catalyzed by four enzymes:
pyruvate:ferredoxin (Fd) oxidoreductase:
acetyl-CoA + CO2 + 2Fdred + 2H+ ⇌ pyruvate + CoA + 2Fdox
ATP citrate lyase:
acetyl-CoA + oxaloacetate + ADP + Pi ⇌ citrate + CoA + ATP
α-ketoglutarate:ferredoxin oxidoreductase:
succinyl-CoA + CO2 + 2Fdred + 2H+ ⇌ α-ketoglutarate + CoA + 2Fdox
fumarate reductase:
succinate + acceptor ⇌ fumarate + reduced acceptor
However, the oxidative TCA cycle (OTCA) is still present in green sulfur bacteria. The OTCA can assimilate acetate, but it appears to be incomplete in green sulfur bacteria, with key genes down-regulated during phototrophic growth.
Mixotrophy
Green sulfur bacteria are often referred to as obligate photoautotrophs as they cannot grow in the absence of light even if they are provided with organic matter. However they exhibit a form of mixotrophy where they can consume simple organic compounds in the presence of light and CO2. In the presence of CO2 or HCO3−, some green sulfur bacteria can utilize acetate or pyruvate.
Mixotrophy in green sulfur bacteria is best modeled by the representative green sulfur bacterium Chlorobaculum tepidum. Mixotrophy occurs during amino acid biosynthesis/carbon utilization and energy metabolism. The bacterium uses electrons, generated from the oxidation of sulfur, and the energy it captures from light to run the rTCA. C. tepidum also exhibits use of both pyruvate and acetate as an organic carbon source.
An example of mixotrophy in C. tepidum that combines autotrophy and heterotrophy is in its synthesis of acetyl-CoA. C. tepidum can autotrophically generate acetyl-CoA through the rTCA cycle, or it can heterotrophically generate it from the uptake of acetate. Similar mixotrophic activity occurs when pyruvate is used for amino acid biosynthesis, but mixotrophic growth using acetate yields higher growth rates.
In energy metabolism, C. tepidum relies on light reactions to produce energy (NADPH and NADH) because the pathways typically responsible for energy production (the oxidative pentose phosphate pathway and the normal TCA cycle) are only partly functional. Photons absorbed from light are used to produce NADPH and NADH, the cofactors of energy metabolism. C. tepidum also generates energy in the form of ATP using the proton motive force derived from sulfide oxidation. Energy production thus draws on both sulfide oxidation and photon absorption via bacteriochlorophylls.
Nitrogen fixation
The majority of green sulfur bacteria are diazotrophs: they can reduce nitrogen to ammonia which is then used to synthesize amino acids. Nitrogen fixation among green sulfur bacteria is generally typical of an anoxygenic phototroph, and requires the presence of light. Green sulfur bacteria exhibit activity from a Type-1 secretion system and a ferredoxin-NADP+ oxidoreductase to generate reduced iron, a trait that evolved to support nitrogen fixation. Like purple sulfur bacteria, they can regulate the activity of nitrogenase post-translationally in response to ammonia concentrations. Their possession of nif genes, even though evolutionarily distinct, may suggest their nitrogen fixation abilities arose in two different events or through a shared very distant ancestor.
Examples of green sulfur bacteria capable of nitrogen fixation include the genus Chlorobium and Pelodictyon, excluding P. phaeoclathratiforme. Prosthecochloris aestuarii and Chloroherpeton thalassium also fall into this category. Their N2 fixation is widespread and plays an important role in overall nitrogen availability for ecosystems. Green sulfur bacteria living in coral reefs, such as Prosthecochloris, are crucial in generating available nitrogen in the already nutrient-limited environment.
See also
Anoxic event
Purple sulfur bacteria
Green non-sulfur bacteria
List of bacteria genera
List of bacterial orders
Exterior derivative
On a differentiable manifold, the exterior derivative extends the concept of the differential of a function to differential forms of higher degree. The exterior derivative was first described in its current form by Élie Cartan in 1899. The resulting calculus, known as exterior calculus, allows for a natural, metric-independent generalization of Stokes' theorem, Gauss's theorem, and Green's theorem from vector calculus.
If a differential $k$-form is thought of as measuring the flux through an infinitesimal $k$-parallelotope at each point of the manifold, then its exterior derivative can be thought of as measuring the net flux through the boundary of a $(k+1)$-parallelotope at each point.
Definition
The exterior derivative of a differential form of degree $k$ (also differential $k$-form, or just $k$-form for brevity here) is a differential form of degree $k + 1$.
If $f$ is a smooth function (a $0$-form), then the exterior derivative of $f$ is the differential of $f$. That is, $df$ is the unique $1$-form such that for every smooth vector field $X$, $df(X) = d_X f$, where $d_X f$ is the directional derivative of $f$ in the direction of $X$.
The exterior product of differential forms (denoted with the same symbol $\wedge$) is defined as their pointwise exterior product.
There are a variety of equivalent definitions of the exterior derivative of a general $k$-form.
In terms of axioms
The exterior derivative is defined to be the unique $\mathbb{R}$-linear mapping from $k$-forms to $(k+1)$-forms that has the following properties:
The operator $d$ applied to the $0$-form $f$ is the differential $df$ of $f$.
If $\alpha$ and $\beta$ are two $k$-forms, then $d(a\alpha + b\beta) = a\,d\alpha + b\,d\beta$ for any field elements $a$, $b$.
If $\alpha$ is a $p$-form and $\beta$ is a $q$-form, then $d(\alpha \wedge \beta) = d\alpha \wedge \beta + (-1)^p\,\alpha \wedge d\beta$ (graded product rule).
If $\alpha$ is a $k$-form, then $d(d\alpha) = 0$ (Poincaré's lemma).
If $f$ and $g$ are two $0$-forms (functions), then from the third property applied to the quantity $f \wedge g$, which is simply the product $fg$, the familiar product rule $d(fg) = g\,df + f\,dg$ is recovered. The third property can be generalised: for instance, if $\alpha$ is a $p$-form, $\beta$ is a $q$-form and $\gamma$ is an $r$-form, then

$d(\alpha \wedge \beta \wedge \gamma) = d\alpha \wedge \beta \wedge \gamma + (-1)^p\,\alpha \wedge d\beta \wedge \gamma + (-1)^{p+q}\,\alpha \wedge \beta \wedge d\gamma.$
In terms of local coordinates
Alternatively, one can work entirely in a local coordinate system $(x^1, \ldots, x^n)$. The coordinate differentials $dx^1, \ldots, dx^n$ form a basis of the space of one-forms, each associated with a coordinate. Given a multi-index $I = (i_1, \ldots, i_k)$ with $1 \le i_p \le n$ for $1 \le p \le k$ (and denoting $dx^{i_1} \wedge \cdots \wedge dx^{i_k}$ with $dx^I$), the exterior derivative of a (simple) $k$-form

$\varphi = g\,dx^I = g\,dx^{i_1} \wedge dx^{i_2} \wedge \cdots \wedge dx^{i_k}$

over $\mathbb{R}^n$ is defined as

$d\varphi = \frac{\partial g}{\partial x^i}\,dx^i \wedge dx^I$

(using the Einstein summation convention). The definition of the exterior derivative is extended linearly to a general $k$-form (which is expressible as a linear combination of basic simple $k$-forms)

$\omega = f_I\,dx^I,$

where each of the components of the multi-index $I$ run over all the values in $\{1, \ldots, n\}$. Note that whenever $i$ equals one of the components of the multi-index $I$ then $dx^i \wedge dx^I = 0$ (see Exterior product).
The definition of the exterior derivative in local coordinates follows from the preceding definition in terms of axioms. Indeed, with the $k$-form $\varphi$ as defined above,

$d\varphi = d(g\,dx^{i_1} \wedge \cdots \wedge dx^{i_k}) = dg \wedge dx^{i_1} \wedge \cdots \wedge dx^{i_k} + g\,d(dx^{i_1} \wedge \cdots \wedge dx^{i_k}) = dg \wedge dx^{i_1} \wedge \cdots \wedge dx^{i_k} = \frac{\partial g}{\partial x^i}\,dx^i \wedge dx^{i_1} \wedge \cdots \wedge dx^{i_k}.$

Here, we have interpreted $g$ as a $0$-form, and then applied the properties of the exterior derivative (in particular, $d(dx^{i_l}) = 0$, so the second term vanishes).

This result extends directly to the general $k$-form $\omega$, as

$d\omega = \frac{\partial f_I}{\partial x^i}\,dx^i \wedge dx^I.$

In particular, for a $1$-form $\omega$, the components of $d\omega$ in local coordinates are

$(d\omega)_{ij} = \partial_i \omega_j - \partial_j \omega_i.$
Caution: There are two conventions regarding the meaning of $dx^{i_1} \wedge \cdots \wedge dx^{i_k}$. Most current authors have the convention that

$(dx^{i_1} \wedge \cdots \wedge dx^{i_k})(\partial_{i_1}, \ldots, \partial_{i_k}) = 1,$

while in older texts like Kobayashi and Nomizu or Helgason

$(dx^{i_1} \wedge \cdots \wedge dx^{i_k})(\partial_{i_1}, \ldots, \partial_{i_k}) = \frac{1}{k!}.$
In terms of invariant formula
Alternatively, an explicit formula can be given for the exterior derivative of a $k$-form $\omega$, when paired with arbitrary smooth vector fields $V_0, V_1, \ldots, V_k$:

$d\omega(V_0, \ldots, V_k) = \sum_i (-1)^i V_i\bigl(\omega(V_0, \ldots, \hat V_i, \ldots, V_k)\bigr) + \sum_{i<j} (-1)^{i+j}\,\omega\bigl([V_i, V_j], V_0, \ldots, \hat V_i, \ldots, \hat V_j, \ldots, V_k\bigr),$

where $[V_i, V_j]$ denotes the Lie bracket and a hat denotes the omission of that element:

$\omega(V_0, \ldots, \hat V_i, \ldots, V_k) = \omega(V_0, \ldots, V_{i-1}, V_{i+1}, \ldots, V_k).$

In particular, when $\omega$ is a $1$-form we have that $d\omega(X, Y) = X(\omega(Y)) - Y(\omega(X)) - \omega([X, Y])$.
Note: With the conventions of e.g., Kobayashi-Nomizu and Helgason the formula differs by a factor of $\frac{1}{k+1}$:

$d\omega(V_0, \ldots, V_k) = \frac{1}{k+1}\Bigl(\sum_i (-1)^i V_i\bigl(\omega(V_0, \ldots, \hat V_i, \ldots, V_k)\bigr) + \sum_{i<j} (-1)^{i+j}\,\omega\bigl([V_i, V_j], V_0, \ldots, \hat V_i, \ldots, \hat V_j, \ldots, V_k\bigr)\Bigr).$
Examples
Example 1. Consider $\sigma = u\,dx^1 \wedge dx^2$ over a $1$-form basis $dx^1, \ldots, dx^n$ for a scalar field $u$. The exterior derivative is:

$d\sigma = du \wedge dx^1 \wedge dx^2 = \Bigl(\sum_{i=1}^n \frac{\partial u}{\partial x^i}\,dx^i\Bigr) \wedge dx^1 \wedge dx^2 = \sum_{i=3}^n \frac{\partial u}{\partial x^i}\,dx^i \wedge dx^1 \wedge dx^2$

The last formula, where summation starts at $i = 3$, follows easily from the properties of the exterior product. Namely, $dx^i \wedge dx^i = 0$.
Example 2. Let $\sigma = u\,dx + v\,dy$ be a $1$-form defined over $\mathbb{R}^2$. By applying the above formula to each term (consider $x^1 = x$ and $x^2 = y$) we have the sum

$d\sigma = \frac{\partial u}{\partial y}\,dy \wedge dx + \frac{\partial v}{\partial x}\,dx \wedge dy = \Bigl(\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\Bigr)\,dx \wedge dy,$

since the $dx \wedge dx$ and $dy \wedge dy$ terms vanish.
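A minimal symbolic check of Example 2 using SymPy's ordinary derivatives; the coefficient functions $u$ and $v$ below are arbitrary illustrative choices:

```python
# Verify Example 2: for sigma = u dx + v dy on R^2,
# d(sigma) = (dv/dx - du/dy) dx ^ dy.
import sympy as sp

x, y = sp.symbols('x y')
u = x * y**2       # illustrative coefficient functions
v = sp.sin(x) * y

coefficient = sp.diff(v, x) - sp.diff(u, y)  # component on dx ^ dy
print(sp.simplify(coefficient))              # y*cos(x) - 2*x*y
```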
Stokes' theorem on manifolds
If $M$ is a compact smooth orientable $n$-dimensional manifold with boundary, and $\omega$ is an $(n-1)$-form on $M$, then the generalized form of Stokes' theorem states that

$\int_M d\omega = \int_{\partial M} \omega$

Intuitively, if one thinks of $M$ as being divided into infinitesimal regions, and one adds the flux through the boundaries of all the regions, the interior boundaries all cancel out, leaving the total flux through the boundary of $M$.
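Specializing this to a plane region $M \subset \mathbb{R}^2$ with $\omega = P\,dx + Q\,dy$ recovers Green's theorem mentioned in the introduction, using the computation of $d\omega$ from Example 2:

```latex
% Green's theorem as a special case of the generalized Stokes' theorem:
\int_{\partial M} \left( P\,dx + Q\,dy \right)
  = \int_M d\omega
  = \int_M \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dx \wedge dy
```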
Further properties
Closed and exact forms
A $k$-form $\omega$ is called closed if $d\omega = 0$; closed forms are the kernel of $d$. $\omega$ is called exact if $\omega = d\alpha$ for some $(k-1)$-form $\alpha$; exact forms are the image of $d$. Because $d^2 = 0$, every exact form is closed. The Poincaré lemma states that in a contractible region, the converse is true.
de Rham cohomology
Because the exterior derivative $d$ has the property that $d^2 = 0$, it can be used as the differential (coboundary) to define de Rham cohomology on a manifold. The $k$-th de Rham cohomology (group) is the vector space of closed $k$-forms modulo the exact $k$-forms; as noted in the previous section, the Poincaré lemma states that these vector spaces are trivial for a contractible region, for $k > 0$. For smooth manifolds, integration of forms gives a natural homomorphism from the de Rham cohomology to the singular cohomology over $\mathbb{R}$. The theorem of de Rham shows that this map is actually an isomorphism, a far-reaching generalization of the Poincaré lemma. As suggested by the generalized Stokes' theorem, the exterior derivative is the "dual" of the boundary map on singular simplices.
Naturality
The exterior derivative is natural in the technical sense: if $f : M \to N$ is a smooth map and $\Omega^k$ is the contravariant smooth functor that assigns to each manifold the space of $k$-forms on the manifold, then exterior differentiation commutes with pullback:

$d \circ f^* = f^* \circ d,$

so $d(f^*\omega) = f^*(d\omega)$, where $f^*$ denotes the pullback of $f$. This follows from that $f^*\omega(\cdot)$, by definition, is $\omega(f_*(\cdot))$, $f_*$ being the pushforward of $f$. Thus $d$ is a natural transformation from $\Omega^k$ to $\Omega^{k+1}$.
Exterior derivative in vector calculus
Most vector calculus operators are special cases of, or have close relationships to, the notion of exterior differentiation.
Gradient
A smooth function $f : M \to \mathbb{R}$ on a real differentiable manifold $M$ is a $0$-form. The exterior derivative of this $0$-form is the $1$-form $df$.
When an inner product $\langle \cdot, \cdot \rangle$ is defined, the gradient $\nabla f$ of a function $f$ is defined as the unique vector such that its inner product with any vector is the directional derivative of $f$ along that vector, that is such that

$\langle \nabla f, \cdot \rangle = df = \frac{\partial f}{\partial x^i}\,dx^i.$

That is,

$\nabla f = (df)^\sharp = \frac{\partial f}{\partial x^i}\,(dx^i)^\sharp,$

where $\sharp$ denotes the musical isomorphism induced by the inner product.
The $1$-form $df$ is a section of the cotangent bundle, that gives a local linear approximation to $f$ in the cotangent space at each point.
Divergence
A vector field $V = (v_1, v_2, \ldots, v_n)$ on $\mathbb{R}^n$ has a corresponding $(n-1)$-form

$\omega_V = \sum_{i=1}^n (-1)^{i-1} v_i\,dx^1 \wedge \cdots \wedge \widehat{dx^i} \wedge \cdots \wedge dx^n,$

where $\widehat{dx^i}$ denotes the omission of that element.
(For instance, when $n = 3$, i.e. in three-dimensional space, the $2$-form $\omega_V$ is locally the scalar triple product with $V$.) The integral of $\omega_V$ over a hypersurface is the flux of $V$ over that hypersurface.
The exterior derivative of this $(n-1)$-form is the $n$-form

$d\omega_V = \operatorname{div} V\,(dx^1 \wedge dx^2 \wedge \cdots \wedge dx^n).$
Curl
A vector field $V$ on $\mathbb{R}^n$ also has a corresponding $1$-form

$\eta_V = v_1\,dx^1 + v_2\,dx^2 + \cdots + v_n\,dx^n.$

Locally, $\eta_V$ is the dot product with $V$. The integral of $\eta_V$ along a path is the work done against $-V$ along that path.
When $n = 3$, in three-dimensional space, the exterior derivative of the $1$-form $\eta_V$ is the $2$-form

$d\eta_V = \omega_{\operatorname{curl} V}.$
Invariant formulations of operators in vector calculus
The standard vector calculus operators can be generalized for any pseudo-Riemannian manifold, and written in coordinate-free notation as follows:

$\operatorname{grad} f = \nabla f = (df)^\sharp$
$\operatorname{div} F = \nabla \cdot F = {\star} d ({\star} F^\flat)$
$\operatorname{curl} F = \nabla \times F = \bigl({\star} (d F^\flat)\bigr)^\sharp$
$\Delta f = {\star} d {\star} d f$

where $\star$ is the Hodge star operator, $\flat$ and $\sharp$ are the musical isomorphisms, $f$ is a scalar field and $F$ is a vector field.
Note that the expression for $\operatorname{curl}$ requires $\sharp$ to act on ${\star}(dF^\flat)$, which is a form of degree $n - 2$. A natural generalization of $\sharp$ to $k$-forms of arbitrary degree allows this expression to make sense for any $n$.
See also
Exterior covariant derivative
de Rham complex
Finite element exterior calculus
Discrete exterior calculus
Green's theorem
Lie derivative
Stokes' theorem
Fractal derivative
Flow measurement
Flow measurement is the quantification of bulk fluid movement. Flow can be measured using devices called flowmeters in various ways. The common types of flowmeters with industrial applications are listed below:
Obstruction type (differential pressure or variable area)
Inferential (turbine type)
Electromagnetic
Positive-displacement flowmeters, which accumulate a fixed volume of fluid and then count the number of times the volume is filled to measure flow.
Fluid dynamic (vortex shedding)
Anemometer
Ultrasonic flow meter
Mass flow meter (Coriolis force).
Flow measurement methods other than positive-displacement flowmeters rely on forces produced by the flowing stream as it overcomes a known constriction, to indirectly calculate flow. Flow may be measured by measuring the velocity of fluid over a known area. For very large flows, tracer methods may be used to deduce the flow rate from the change in concentration of a dye or radioisotope.
Kinds and units of measurement
Both gas and liquid flow can be measured as a volumetric flow rate or as a mass flow rate, with SI units of cubic meters per second and kilograms per second, respectively. These measurements are related by the material's density. The density of a liquid is almost independent of conditions. This is not the case for gases, whose densities depend greatly upon pressure, temperature and, to a lesser extent, composition.
When gases or liquids are transferred for their energy content, as in the sale of natural gas, the flow rate may also be expressed in terms of energy flow, such as gigajoule per hour or BTU per day. The energy flow rate is the volumetric flow rate multiplied by the energy content per unit volume or mass flow rate multiplied by the energy content per unit mass. Energy flow rate is usually derived from mass or volumetric flow rate by the use of a flow computer.
In engineering contexts, the volumetric flow rate is usually given the symbol Q, and the mass flow rate the symbol ṁ.
For a fluid having density ρ, the mass and volumetric flow rates are related by ṁ = ρQ.
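A small sketch of these relations; the density and heating value below are assumed illustrative numbers, not values from the text:

```python
# Mass flow from volumetric flow: m_dot = rho * Q.
rho_water = 998.0          # kg/m3, water near 20 degC (assumed)
q = 0.025                  # volumetric flow rate, m3/s
m_dot = rho_water * q
print(m_dot)               # 24.95 kg/s

# Energy flow rate, as used in natural gas sales: volumetric flow
# times energy content per unit volume (illustrative heating value).
q_gas_m3h = 1000.0         # standard m3/h
heating_value = 0.0383     # GJ per standard m3 (assumed typical value)
print(q_gas_m3h * heating_value)  # ~38.3 GJ/h
```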
Gas
Gases are compressible and change volume when placed under pressure, are heated or are cooled. A volume of gas under one set of pressure and temperature conditions is not equivalent to the same gas under different conditions. References will be made to "actual" flow rate through a meter and "standard" or "base" flow rate through a meter with units such as acm/h (actual cubic meters per hour), sm3/sec (standard cubic meters per second), kscm/h (thousand standard cubic meters per hour), LFM (linear feet per minute), or MMSCFD (million standard cubic feet per day).
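A sketch of converting an "actual" gas flow to "standard" conditions, assuming ideal-gas behavior (compressibility factor Z = 1) and one common choice of standard reference conditions; both assumptions vary by industry and contract:

```python
def actual_to_standard(q_actual_m3h, p_actual_kpa, t_actual_k,
                       p_std_kpa=101.325, t_std_k=288.15):
    """Convert actual volumetric gas flow (acm/h) to standard conditions
    (scm/h), assuming ideal-gas behavior (Z = 1)."""
    return q_actual_m3h * (p_actual_kpa / p_std_kpa) * (t_std_k / t_actual_k)

# 100 acm/h at 500 kPa(a) and 50 degC:
print(actual_to_standard(100.0, 500.0, 323.15))  # ~440 scm/h
```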
Gas mass flow rate can be directly measured, independent of pressure and temperature effects, with ultrasonic flow meters, thermal mass flowmeters, Coriolis mass flowmeters, or mass flow controllers.
Liquid
For liquids, various units are used depending upon the application and industry, but might include gallons (U.S. or imperial) per minute, liters per second, liters per m2 per hour, bushels per minute or, when describing river flows, cumecs (cubic meters per second) or acre-feet per day. In oceanography, a common unit to measure volume transport (for example, the volume of water transported by a current) is the sverdrup (Sv), equivalent to 10^6 m3/s.
Primary flow element
A primary flow element is a device inserted into the flowing fluid that produces a physical property that can be accurately related to flow. For example, an orifice plate produces a pressure drop that is a function of the square of the volume rate of flow through the orifice. A vortex meter primary flow element produces a series of oscillations of pressure. Generally, the physical property generated by the primary flow element is more convenient to measure than the flow itself. The properties of the primary flow element, and the fidelity of the practical installation to the assumptions made in calibration, are critical factors in the accuracy of the flow measurement.
Mechanical flowmeters
A positive displacement meter may be compared to a bucket and a stopwatch. The stopwatch is started when the flow starts and stopped when the bucket reaches its limit. The volume divided by the time gives the flow rate. For continuous measurements, we need a system of continually filling and emptying buckets to divide the flow without letting it out of the pipe. These continuously forming and collapsing volumetric displacements may take the form of pistons reciprocating in cylinders, gear teeth mating against the internal wall of a meter or through a progressive cavity created by rotating oval gears or a helical screw.
Piston meter/rotary piston
Because they are used for domestic water measurement, piston meters, also known as rotary piston or semi-positive displacement meters, are the most common flow measurement devices in the UK and are used for almost all meter sizes up to and including 40 mm (1½ in). The piston meter operates on the principle of a piston rotating within a chamber of known volume. For each rotation, an amount of water passes through the piston chamber. Through a gear mechanism and, sometimes, a magnetic drive, a needle dial and odometer type display are advanced.
Oval gear meter
An oval gear meter is a positive displacement meter that uses two or more oblong gears configured to rotate at right angles to one another, forming a T shape. Such a meter has two sides, which can be called A and B. No fluid passes through the center of the meter, where the teeth of the two gears always mesh. On one side of the meter (A), the teeth of the gears close off the fluid flow because the elongated gear on side A is protruding into the measurement chamber, while on the other side of the meter (B), a cavity holds a fixed volume of fluid in a measurement chamber. As the fluid pushes the gears, it rotates them, allowing the fluid in the measurement chamber on side B to be released into the outlet port. Meanwhile, fluid entering the inlet port will be driven into the measurement chamber of side A, which is now open. The teeth on side B will now close off the fluid from entering side B. This cycle continues as the gears rotate and fluid is metered through alternating measurement chambers. Permanent magnets in the rotating gears can transmit a signal to an electric reed switch or current transducer for flow measurement. Though claims for high performance are made, they are generally not as precise as the sliding vane design.
Gear meter
Gear meters differ from oval gear meters in that the measurement chambers are made up of the gaps between the teeth of the gears. These openings divide up the fluid stream and as the gears rotate away from the inlet port, the meter's inner wall closes off the chamber to hold the fixed amount of fluid. The outlet port is located in the area where the gears are coming back together. The fluid is forced out of the meter as the gear teeth mesh and reduce the available pockets to nearly zero volume.
Helical gear
Helical gear flowmeters get their name from the shape of their gears or rotors. These rotors resemble the shape of a helix, which is a spiral-shaped structure. As the fluid flows through the meter, it enters the compartments in the rotors, causing the rotors to rotate. The length of the rotor is sufficient that the inlet and outlet are always separated from each other thus blocking a free flow of liquid. The mating helical rotors create a progressive cavity which opens to admit fluid, seals itself off and then opens up to the downstream side to release the fluid. This happens in a continuous fashion and the flowrate is calculated from the speed of rotation.
Nutating disk meter
This is the most commonly used measurement system for measuring water supply in houses. The fluid, most commonly water, enters in one side of the meter and strikes the nutating disk, which is eccentrically mounted. The disk must then "wobble" or nutate about the vertical axis, since the bottom and the top of the disk remain in contact with the mounting chamber. A partition separates the inlet and outlet chambers. As the disk nutates, it gives direct indication of the volume of the liquid that has passed through the meter as volumetric flow is indicated by a gearing and register arrangement, which is connected to the disk. It is reliable for flow measurements within 1 percent.
Turbine flowmeter
The turbine flowmeter (better described as an axial turbine) translates the mechanical action of the turbine rotating in the liquid flow around an axis into a user-readable rate of flow (gpm, lpm, etc.). The turbine tends to have all the flow traveling around it.
The turbine wheel is set in the path of a fluid stream. The flowing fluid impinges on the turbine blades, imparting a force to the blade surface and setting the rotor in motion. When a steady rotation speed has been reached, the speed is proportional to fluid velocity.
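In practice a turbine meter's rotation is reported as an electrical pulse train, and flow is recovered with a calibration constant (the K-factor). A minimal sketch, with a hypothetical K-factor:

```python
def flow_from_pulses(pulse_count, duration_s, k_factor_pulses_per_litre):
    """Volumetric flow rate from a turbine meter pulse output.
    The K-factor (pulses per litre) comes from calibration."""
    volume_litres = pulse_count / k_factor_pulses_per_litre
    return volume_litres / duration_s   # litres per second

# 4500 pulses over 60 s with a hypothetical K-factor of 15 pulses/L:
print(flow_from_pulses(4500, 60.0, 15.0))  # 5.0 L/s
```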
Turbine flowmeters are used for the measurement of natural gas and liquid flow. Turbine meters are less accurate than displacement and jet meters at low flow rates, but the measuring element does not occupy or severely restrict the entire path of flow. The flow direction is generally straight through the meter, allowing for higher flow rates and less pressure loss than displacement-type meters. They are the meter of choice for large commercial users, fire protection, and as master meters for the water distribution system. Strainers are generally required to be installed in front of the meter to protect the measuring element from gravel or other debris that could enter the water distribution system. Turbine meters are generally available for 4 to 30 cm (1½–12 in) or higher pipe sizes. Turbine meter bodies are commonly made of stainless steel, bronze, cast iron, or ductile iron. Internal turbine elements can be plastic or non-corrosive metal alloys. They are accurate in normal working conditions but are greatly affected by the flow profile and fluid conditions.
Turbine flowmeters are commonly best suited for low-viscosity fluids, as large particulate can damage the rotor. When choosing a meter for an application in which particulate flows through the pipe, it is best to use a meter without moving parts, such as a magnetic flowmeter.
Fire meters are a specialized type of turbine meter with approvals for the high flow rates required in fire protection systems. They are often approved by Underwriters Laboratories (UL) or Factory Mutual (FM) or similar authorities for use in fire protection. Portable turbine meters may be temporarily installed to measure water used from a fire hydrant. The meters are normally made of aluminum to be lightweight, and are usually 7.5 cm (3 in) capacity. Water utilities often require them for measurement of water used in construction, pool filling, or where a permanent meter is not yet installed.
Woltman meter
The Woltman meter (invented by Reinhard Woltman in the 19th century) comprises a rotor with helical blades inserted axially in the flow, much like a ducted fan; it can be considered a type of turbine flowmeter. They are commonly referred to as helix meters, and are popular at larger sizes.
Single jet meter
A single jet meter consists of a simple impeller with radial vanes, impinged upon by a single jet. They are increasing in popularity in the UK at larger sizes and are commonplace in the EU.
Paddle wheel meter
Paddle wheel flowmeters (also known as Pelton wheel sensors) consist of three primary components: the paddle wheel sensor, the pipe fitting and the display/controller. The paddle wheel sensor consists of a freely rotating wheel/impeller with embedded magnets which are perpendicular to the flow and will rotate when inserted in the flowing medium. As the magnets in the blades spin past the sensor, the paddle wheel meter generates a frequency and voltage signal which is proportional to the flow rate. The faster the flow the higher the frequency and the voltage output.
The paddle wheel meter is designed to be inserted into a pipe fitting, either 'in-line' or insertion style. Similarly to turbine meters, the paddle wheel meter requires a minimum run of straight pipe before and after the sensor.
Flow displays and controllers are used to receive the signal from the paddle wheel meter and convert it into actual flow rate or total flow values.
Multiple jet meter
A multiple jet or multijet meter is a velocity type meter which has an impeller which rotates horizontally on a vertical shaft. The impeller element is in a housing in which multiple inlet ports direct the fluid flow at the impeller, causing it to rotate in a specific direction in proportion to the flow velocity. This meter works mechanically much like a single jet meter, except that the ports direct the flow at the impeller equally from several points around the circumference of the element, not just one point; this minimizes uneven wear on the impeller and its shaft. Thus, these types of meters are recommended to be installed horizontally with their roller index pointing skywards.
Pelton wheel
The Pelton wheel turbine (better described as a radial turbine) translates the mechanical action of the Pelton wheel rotating in the liquid flow around an axis into a user-readable rate of flow (gpm, lpm, etc.). The Pelton wheel tends to have all the flow traveling around it with the inlet flow focused on the blades by a jet. The original Pelton wheels were used for the generation of power and consisted of a radial flow turbine with "reaction cups" which not only move with the force of the water on the face but return the flow in opposite direction using this change of fluid direction to further increase the efficiency of the turbine.
Current meter
Flow through a large penstock such as used at a hydroelectric power plant can be measured by averaging the flow velocity over the entire area. Propeller-type current meters (similar to the purely mechanical Ekman current meter, but now with electronic data acquisition) can be traversed over the area of the penstock and velocities averaged to calculate total flow. This may be on the order of hundreds of cubic meters per second. The flow must be kept steady during the traverse of the current meters. Methods for testing hydroelectric turbines are given in IEC standard 41. Such flow measurements are often commercially important when testing the efficiency of large turbines.
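A sketch of the averaging idea, with made-up traverse data: point velocities are taken over equal-area subdivisions of the penstock cross-section, averaged, and multiplied by the total area (real tests use standardized weightings per IEC standard 41):

```python
# Made-up current-meter traverse of a circular penstock: one velocity
# reading per equal-area ring, averaged and scaled by the total area.
import math

radius_m = 2.0
velocities_ms = [3.2, 3.6, 3.8, 3.9, 3.7]   # m/s, equal-area rings (invented)

area_m2 = math.pi * radius_m**2
q_m3s = area_m2 * sum(velocities_ms) / len(velocities_ms)
print(q_m3s)   # ~45.7 m3/s
```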
Pressure-based meters
There are several types of flowmeter that rely on Bernoulli's principle. The pressure is measured either by using laminar plates, an orifice, a nozzle, or a Venturi tube to create an artificial constriction and then measuring the pressure loss of fluids as they pass that constriction, or by measuring static and stagnation pressures to derive the dynamic pressure.
Venturi meter
A Venturi meter constricts the flow in some fashion, and pressure sensors measure the differential pressure before and within the constriction. This method is widely used to measure flow rate in the transmission of gas through pipelines, and has been used since Roman Empire times. The coefficient of discharge of a Venturi meter ranges from 0.93 to 0.97. The first large-scale Venturi meters to measure liquid flows were developed by Clemens Herschel, who used them to measure small and large flows of water and wastewater beginning at the very end of the 19th century.
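As a rough illustration of the Bernoulli relation behind these meters, the sketch below estimates volumetric flow from the measured differential pressure. It is a minimal example, not the method of any particular instrument; the discharge coefficient, pipe and throat diameters, and fluid density are assumed values.
import math
def venturi_flow(dp, d_pipe, d_throat, rho, cd=0.95):
    # Volumetric flow (m^3/s) from differential pressure dp (Pa),
    # using Bernoulli plus continuity:
    # Q = Cd * A2 * sqrt(2*dp / (rho * (1 - (A2/A1)**2)))
    a1 = math.pi * d_pipe ** 2 / 4      # pipe cross-section, m^2
    a2 = math.pi * d_throat ** 2 / 4    # throat cross-section, m^2
    return cd * a2 * math.sqrt(2 * dp / (rho * (1 - (a2 / a1) ** 2)))
# Example: water, 100 mm pipe, 50 mm throat, 5 kPa differential
print(venturi_flow(5000.0, 0.10, 0.05, 1000.0))   # ~0.0061 m^3/s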
Orifice plate
An orifice plate is a plate with a hole through it, placed perpendicular to the flow; it constricts the flow, and measuring the pressure differential across the constriction gives the flow rate. It is basically a crude form of Venturi meter, but with higher energy losses. There are three types of orifice plate: concentric, eccentric, and segmental.
Dall tube
The Dall tube is a shortened version of a Venturi meter, with a lower pressure drop than an orifice plate. As with these flowmeters, the flow rate in a Dall tube is determined by measuring the pressure drop caused by restriction in the conduit. The pressure differential is typically measured using diaphragm pressure transducers with digital readout. Since these meters have significantly lower permanent pressure losses than orifice meters, Dall tubes are widely used for measuring the flow rate of large pipeworks. The differential pressure produced by a Dall tube is higher than that of a Venturi tube or nozzle with the same throat diameter.
Pitot tube
A pitot tube is used to measure fluid flow velocity. The tube is pointed into the flow and the difference between the stagnation pressure at the tip of the probe and the static pressure at its side is measured, yielding the dynamic pressure from which the fluid velocity is calculated using Bernoulli's equation. A volumetric rate of flow may be determined by measuring the velocity at different points in the flow and generating the velocity profile.
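The velocity calculation itself is a direct application of Bernoulli's equation; a minimal sketch, assuming incompressible flow and illustrative pressure and density values:
import math
def pitot_velocity(p_stagnation, p_static, rho):
    # Dynamic pressure q = p0 - p; Bernoulli gives v = sqrt(2*q/rho)
    return math.sqrt(2 * (p_stagnation - p_static) / rho)
# Example: 500 Pa dynamic pressure in sea-level air (rho ~ 1.225 kg/m^3)
print(pitot_velocity(101825.0, 101325.0, 1.225))   # ~28.6 m/s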
Averaging pitot tube
Averaging pitot tubes (also called impact probes) extend the theory of the pitot tube to more than one dimension. A typical averaging pitot tube consists of three or more holes (depending on the type of probe) on the measuring tip arranged in a specific pattern. More holes allow the instrument to measure the direction of the flow velocity in addition to its magnitude (after appropriate calibration). Three holes arranged in a line allow the pressure probes to measure the velocity vector in two dimensions. Introduction of more holes, e.g. five holes arranged in a "plus" formation, allows measurement of the three-dimensional velocity vector.
Cone meters
Cone meters are a newer differential pressure metering device first launched in 1985 by McCrometer in Hemet, CA. The cone meter is a generic yet robust differential pressure (DP) meter that has been shown to be resistant to the effects of asymmetric and swirling flow. While working with the same basic principles as Venturi and orifice type DP meters, cone meters do not require the same upstream and downstream piping. The cone acts as a conditioning device as well as a differential pressure producer. Upstream requirements are between 0–5 diameters, compared to up to 44 diameters for an orifice plate or 22 diameters for a Venturi. Because cone meters are generally of welded construction, it is recommended they are always calibrated prior to service. Inevitably, heat effects of welding cause distortions and other effects that prevent tabular data on discharge coefficients with respect to line size, beta ratio and operating Reynolds numbers from being collected and published. Calibrated cone meters have an uncertainty up to ±0.5%; un-calibrated cone meters have an uncertainty of ±5.0%.
Linear resistance meters
Linear resistance meters, also called laminar flowmeters, measure very low flows at which the measured differential pressure is linearly proportional to the flow and to the fluid viscosity. Such flow is called viscous drag flow or laminar flow, as opposed to the turbulent flow measured by orifice plates, Venturis and other meters mentioned in this section, and is characterized by Reynolds numbers below 2000. The primary flow element may consist of a single long capillary tube, a bundle of such tubes, or a long porous plug; such low flows create small pressure differentials but longer flow elements create higher, more easily measured differentials. These flowmeters are particularly sensitive to temperature changes affecting the fluid viscosity and the diameter of the flow element, as can be seen in the governing Hagen–Poiseuille equation.
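Since the governing relation here is the Hagen–Poiseuille equation, a single-capillary element can be sketched directly; the dimensions and viscosity below are assumed, illustrative values:
import math
def laminar_flow(dp, radius, length, mu):
    # Hagen-Poiseuille: Q = pi * dp * r**4 / (8 * mu * L), valid for
    # laminar (low Reynolds number) flow in a long circular capillary
    return math.pi * dp * radius ** 4 / (8 * mu * length)
# Example: 1 kPa across a 0.5 mm radius, 100 mm long water-filled capillary
print(laminar_flow(1000.0, 0.5e-3, 0.1, 1.0e-3))   # ~2.5e-7 m^3/s
The fourth-power dependence on the element radius is why the temperature sensitivity of the flow element diameter noted above matters so much in practice.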
Variable-area flowmeters
A "variable area meter" measures fluid flow by allowing the cross sectional area of the device to vary in response to the flow, causing some measurable effect that indicates the rate.
A rotameter is an example of a variable area meter, where a weighted "float" rises in a tapered tube as the flow rate increases; the float stops rising when the area between the float and the tube is large enough that the weight of the float is balanced by the drag of fluid flow. A kind of rotameter used for medical gases is the Thorpe tube flowmeter. Floats are made in many different shapes, with spheres and spherical ellipses being the most common. Some are designed to spin visibly in the fluid stream to aid the user in determining whether the float is stuck or not. Rotameters are available for a wide range of liquids but are most commonly used with water or air. They can be made to reliably measure flow with accuracies down to 1%.
Another type is a variable area orifice, where a spring-loaded tapered plunger is deflected by flow through an orifice. The displacement can be related to the flow rate.
Optical flowmeters
Optical flowmeters use light to determine flow rate. Small particles which accompany natural and industrial gases pass through two laser beams focused a short distance apart in the flow path in a pipe by illuminating optics. Laser light is scattered when a particle crosses the first beam. The detecting optics collects scattered light on a photodetector, which then generates a pulse signal. As the same particle crosses the second beam, the detecting optics collect scattered light on a second photodetector, which converts the incoming light into a second electrical pulse. By measuring the time interval between these pulses, the gas velocity is calculated as v = D/t, where D is the distance between the laser beams and t is the time interval.
Laser-based optical flowmeters measure the actual speed of particles, a property which is not dependent on thermal conductivity of gases, variations in gas flow or composition of gases. The operating principle enables optical laser technology to deliver highly accurate flow data, even in challenging environments which may include high temperature, low flow rates, high pressure, high humidity, pipe vibration and acoustic noise.
Optical flowmeters are very stable with no moving parts and deliver a highly repeatable measurement over the life of the product. Because distance between the two laser sheets does not change, optical flowmeters do not require periodic calibration after their initial commissioning. Optical flowmeters require only one installation point, instead of the two installation points typically required by other types of meters. A single installation point is simpler, requires less maintenance and is less prone to errors.
Commercially available optical flowmeters are capable of measuring flow from 0.1 m/s to faster than 100 m/s (1000:1 turn down ratio) and have been demonstrated to be effective for the measurement of flare gases from oil wells and refineries, a contributor to atmospheric pollution.
Open-channel flow measurement
Open channel flow describes cases where flowing liquid has a top surface open to the air; the cross-section of the flow is only determined by the shape of the channel on the lower side, and is variable depending on the depth of liquid in the channel. Techniques appropriate for a fixed cross-section of flow in a pipe are not useful in open channels. Measuring flow in waterways is an important open-channel flow application; such installations are known as stream gauges.
Level to flow
The level of the water is measured at a designated point behind a weir or in a flume using various secondary devices (bubblers, ultrasonic, float, and differential pressure are common methods). This depth is converted to a flow rate according to a theoretical formula of the form Q = KH^n, where Q is the flow rate, K is a constant, H is the water level, and n is an exponent which varies with the device used; or it is converted according to empirically derived level/flow data points (a "flow curve"). The flow rate can then be integrated over time into volumetric flow. Level to flow devices are commonly used to measure the flow of surface waters (springs, streams, and rivers), industrial discharges, and sewage. Of these, weirs are used on flow streams with low solids (typically surface waters), while flumes are used on flows containing low or high solids contents.
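A minimal sketch of the level-to-flow conversion; the coefficient and exponent below are illustrative values in the neighbourhood of a 90° V-notch weir, not the rated constants of any actual device, which come from the manufacturer or an empirical flow curve:
def level_to_flow(head, k=1.4, n=2.5):
    # Q = K * H**n, with head H in metres and Q in m^3/s
    return k * head ** n
print(level_to_flow(0.20))   # ~0.025 m^3/s at 20 cm of head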
Area/velocity
The cross-sectional area of the flow is calculated from a depth measurement and the average velocity of the flow is measured directly (Doppler and propeller methods are common). Velocity times the cross-sectional area yields a flow rate which can be integrated into volumetric flow. There are two types of area velocity flowmeter: (1) wetted; and (2) non-contact. Wetted area velocity sensors typically have to be mounted on the bottom of a channel or river and use Doppler to measure the velocity of the entrained particles. With depth and a programmed cross-section this can then provide discharge flow measurement. Non-contact devices that use laser or radar are mounted above the channel and measure the velocity from above and then use ultrasound to measure the depth of the water from above. Radar devices can only measure surface velocities, whereas laser-based devices can measure velocities below the surface.
Dye testing
A known amount of dye (or salt) per unit time is added to a flow stream. After complete mixing, the concentration is measured. The dilution rate equals the flow rate.
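The underlying tracer mass balance is simple enough to sketch directly; the injection rate and concentrations below are assumed values:
def dilution_flow(q_inject, c_inject, c_downstream, c_background=0.0):
    # Constant-rate injection mass balance:
    # q*c_inject + Q*c_background = (Q + q)*c_downstream
    return q_inject * (c_inject - c_downstream) / (c_downstream - c_background)
# 10 mL/s of dye at 20 g/L, measured downstream at 0.5 mg/L
print(dilution_flow(1.0e-5, 20.0, 0.5e-3))   # ~0.4 m^3/s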
Acoustic Doppler velocimetry
Acoustic Doppler velocimetry (ADV) is designed to record instantaneous velocity components at a single point with a relatively high frequency. Measurements are performed by measuring the velocity of particles in a remote sampling volume based upon the Doppler shift effect.
Thermal mass flowmeters
Thermal mass flowmeters generally use combinations of heated elements and temperature sensors to measure the difference between static and flowing heat transfer to a fluid and infer its flow with a knowledge of the fluid's specific heat and density. The fluid temperature is also measured and compensated for. If the density and specific heat characteristics of the fluid are constant, the meter can provide a direct mass flow readout, and does not need any additional pressure and temperature compensation over its specified range.
Technological progress has allowed the manufacture of thermal mass flowmeters on a microscopic scale as MEMS sensors; these flow devices can be used to measure flow rates in the range of nanoliters or microliters per minute.
Thermal mass flowmeter (also called thermal dispersion or thermal displacement flowmeter) technology is used for compressed air, nitrogen, helium, argon, oxygen, and natural gas. In fact, most gases can be measured as long as they are fairly clean and non-corrosive. For more aggressive gases, the meter may be made out of special alloys (e.g. Hastelloy), and pre-drying the gas also helps to minimize corrosion.
Today, thermal mass flowmeters are used to measure the flow of gases in a growing range of applications, such as chemical reactions or thermal transfer applications that are difficult for other flowmetering technologies. Some other typical applications of flow sensors can be found in the medical field, for example in CPAP devices, anesthesia equipment or respiratory devices. This is because thermal mass flowmeters monitor variations in one or more of the thermal characteristics (temperature, thermal conductivity, and/or specific heat) of gaseous media to define the mass flow rate.
The MAF sensor
In many late model automobiles, a Mass Airflow (MAF) sensor is used to accurately determine the mass flow rate of intake air used in the internal combustion engine. Many such mass flow sensors use a heated element and a downstream temperature sensor to indicate the air flowrate. Other sensors use a spring-loaded vane. In either case, the vehicle's electronic control unit interprets the sensor signals as a real-time indication of an engine's fuel requirement.
Vortex flowmeters
Another method of flow measurement involves placing a bluff body (called a shedder bar) in the path of the fluid. As the fluid passes this bar, disturbances in the flow called vortices are created. The vortices trail behind the cylinder, alternately from each side of the bluff body. This vortex trail is called the Von Kármán vortex street after von Kármán's 1912 mathematical description of the phenomenon. The frequency at which these vortices alternate sides is essentially proportional to the flow rate of the fluid. Inside, atop, or downstream of the shedder bar is a sensor for measuring the frequency of the vortex shedding. This sensor is often a piezoelectric crystal, which produces a small, but measurable, voltage pulse every time a vortex is created. Since the frequency of such a voltage pulse is also proportional to the fluid velocity, a volumetric flow rate is calculated using the cross-sectional area of the flowmeter. The frequency is measured and the flow rate is calculated by the flowmeter electronics using the equation
f = S v / L
where f is the frequency of the vortices, L the characteristic length of the bluff body, v is the velocity of the flow over the bluff body, and S is the Strouhal number, which is essentially a constant for a given body shape within its operating limits.
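A minimal sketch of the conversion from shedding frequency to volumetric flow; the Strouhal number, shedder size and pipe diameter below are assumed, illustrative values:
import math
def vortex_flow(freq, bluff_length, pipe_diameter, strouhal=0.22):
    # f = S * v / L  =>  v = f * L / S; multiply by pipe area for Q
    v = freq * bluff_length / strouhal
    return v * math.pi * pipe_diameter ** 2 / 4
print(vortex_flow(50.0, 0.01, 0.05))   # ~0.0045 m^3/s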
Sonar flow measurement
Sonar flowmeters are non-intrusive clamp-on devices that measure flow in pipes conveying slurries, corrosive fluids, multiphase fluids and flows where insertion type flowmeters are not desired. Owing to their tolerance of various flow regimes and wide turn down ratios, sonar flowmeters have been widely adopted in mining, metals processing, and upstream oil and gas industries where traditional technologies have certain limitations.
Sonar flowmeters can measure the velocity of liquids or gases non-intrusively within the pipe and then leverage this velocity measurement into a flow rate by using the cross-sectional area of the pipe and the line pressure and temperature. The principle behind this flow measurement is the use of underwater acoustics.
In underwater acoustics, to locate an object underwater, sonar uses two knowns:
The speed of sound propagation through the array (i.e., the speed of sound through seawater)
The spacing between the sensors in the sensor array
and then calculates the unknown:
The location (or angle) of the object.
Likewise, sonar flow measurement uses the same techniques and algorithms employed in underwater acoustics, but applies them to flow measurement of oil and gas wells and flow lines.
To measure flow velocity, sonar flowmeters use two knowns:
The location (or angle) of the object, which is 0 degrees since the flow is moving along the pipe, which is aligned with the sensor array
The spacing between the sensors in the sensor array
and then calculates the unknown:
The speed of propagation through the array (i.e. the flow velocity of the medium in the pipe).
Electromagnetic, ultrasonic and Coriolis flowmeters
Modern innovations in the measurement of flow rate incorporate electronic devices that can correct for varying pressure and temperature (i.e. density) conditions, non-linearities, and for the characteristics of the fluid.
Magnetic flowmeters
Magnetic flowmeters, often called "mag meters" or "electromags", use a magnetic field applied to the metering tube, which results in a potential difference proportional to the flow velocity perpendicular to the flux lines. The potential difference is sensed by electrodes aligned perpendicular to the flow and the applied magnetic field. The physical principle at work is Faraday's law of electromagnetic induction. The magnetic flowmeter requires a conducting fluid and a nonconducting pipe liner. The electrodes must not corrode in contact with the process fluid; some magnetic flowmeters have auxiliary transducers installed to clean the electrodes in place. The applied magnetic field is pulsed, which allows the flowmeter to cancel out the effect of stray voltage in the piping system.
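A minimal sketch of the Faraday relation, assuming a uniform field and velocity profile; the field strength, pipe diameter and measured EMF below are illustrative values:
import math
def magmeter_flow(emf, b_field, diameter):
    # Faraday's law for a conductive fluid: E = B * D * v,
    # so v = E / (B * D); multiply by pipe area for volumetric flow
    v = emf / (b_field * diameter)
    return v * math.pi * diameter ** 2 / 4
# 1 mV across a 100 mm pipe with a 10 mT applied field
print(magmeter_flow(1.0e-3, 0.010, 0.10))   # ~0.0079 m^3/s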
Non-contact electromagnetic flowmeters
A Lorentz force velocimetry system used for flow measurement is called a Lorentz force flowmeter (LFF). An LFF measures the integrated or bulk Lorentz force resulting from the interaction between a liquid metal in motion and an applied magnetic field. In this case, the characteristic length of the magnetic field is of the same order of magnitude as the dimensions of the channel. Note that in the case where localized magnetic fields are used, it is possible to perform local velocity measurements, and thus the term Lorentz force velocimeter is used.
Ultrasonic flowmeters (Doppler, transit time)
There are two main types of ultrasonic flowmeters: Doppler and transit time. While they both utilize ultrasound to make measurements and can be non-invasive (measure flow from outside the tube, pipe or vessel, also called clamp-on device), they measure flow by very different methods.
Ultrasonic transit time flowmeters measure the difference of the transit time of ultrasonic pulses propagating in and against the direction of flow. This time difference is a measure for the average velocity of the fluid along the path of the ultrasonic beam. By using the absolute transit times, both the averaged fluid velocity and the speed of sound can be calculated. Using the two transit times t_up and t_down, the distance L between receiving and transmitting transducers, and the inclination angle α, one can write the equations:
v = (L / (2 cos α)) · (t_up − t_down) / (t_up · t_down)
and
c = (L / 2) · (t_up + t_down) / (t_up · t_down)
where v is the average velocity of the fluid along the sound path and c is the speed of sound.
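These two relations translate directly into code; a minimal sketch, with transit times chosen to correspond to water (speed of sound about 1480 m/s) and a small flow-induced difference:
import math
def transit_time(t_up, t_down, path_length, angle_deg):
    # v = L/(2 cos a) * (t_up - t_down)/(t_up * t_down)
    # c = L/2 * (t_up + t_down)/(t_up * t_down)
    a = math.radians(angle_deg)
    v = path_length / (2 * math.cos(a)) * (t_up - t_down) / (t_up * t_down)
    c = path_length / 2 * (t_up + t_down) / (t_up * t_down)
    return v, c
print(transit_time(202.95e-6, 202.47e-6, 0.3, 45.0))   # ~(2.5 m/s, 1480 m/s)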
With wide-beam illumination transit time ultrasound can also be used to measure volume flow independent of the cross-sectional area of the vessel or tube.
Ultrasonic Doppler flowmeters measure the Doppler shift resulting from reflecting an ultrasonic beam off the particulates in flowing fluid. The frequency of the transmitted beam is affected by the movement of the particles; this frequency shift can be used to calculate the fluid velocity. For the Doppler principle to work, there must be a high enough density of sonically reflective materials such as solid particles or air bubbles suspended in the fluid. This is in direct contrast to an ultrasonic transit time flowmeter, where bubbles and solid particles reduce the accuracy of the measurement. Due to the dependency on these particles, there are limited applications for Doppler flowmeters. This technology is also known as acoustic Doppler velocimetry.
One advantage of ultrasonic flowmeters is that they can effectively measure the flow rates for a wide variety of fluids, as long as the speed of sound through that fluid is known. For example, ultrasonic flowmeters are used for the measurement of such diverse fluids as liquid natural gas (LNG) and blood. One can also calculate the expected speed of sound for a given fluid; this can be compared to the speed of sound empirically measured by an ultrasonic flowmeter for the purposes of monitoring the quality of the flowmeter's measurements. A drop in quality (change in the measured speed of sound) is an indication that the meter needs servicing.
Coriolis flowmeters
Using the Coriolis effect that causes a laterally vibrating tube to distort, a direct measurement of mass flow can be obtained in a Coriolis flowmeter. Furthermore, a direct measure of the density of the fluid is obtained. Coriolis measurement can be very accurate irrespective of the type of gas or liquid that is measured; the same measurement tube can be used for hydrogen gas and bitumen without recalibration.
Coriolis flowmeters can be used for the measurement of natural gas flow.
Laser Doppler flow measurement
A beam of laser light impinging on a moving particle will be partially scattered with a change in wavelength proportional to the particle's speed (the Doppler effect). A laser Doppler velocimeter (LDV), also called a laser Doppler anemometer (LDA), focuses a laser beam into a small volume in a flowing fluid containing small particles (naturally occurring or induced). The particles scatter the light with a Doppler shift. Analysis of this shifted wavelength can be used to directly, and with great precision, determine the speed of the particle and thus a close approximation of the fluid velocity.
A number of different techniques and device configurations are available for determining the Doppler shift. All use a photodetector (typically an avalanche photodiode) to convert the light into an electrical waveform for analysis. In most devices, the original laser light is divided into two beams. In one general LDV class, the two beams are made to intersect at their focal points where they interfere and generate a set of straight fringes. The sensor is then aligned to the flow such that the fringes are perpendicular to the flow direction. As particles pass through the fringes, the Doppler-shifted light is collected into the photodetector. In another general LDV class, one beam is used as a reference and the other is Doppler-scattered. Both beams are then collected onto the photodetector where optical heterodyne detection is used to extract the Doppler signal.
Calibration
Even though ideally the flowmeter should be unaffected by its environment, in practice this is unlikely to be the case. Often measurement errors originate from incorrect installation or other environment-dependent factors. In situ methods are used when the flowmeter is calibrated in the correct flow conditions. A flowmeter calibration yields two related statistics: a performance indicator metric and a flow rate metric.
Transit time method
For pipe flows a so-called transit time method is applied where a radiotracer is injected as a pulse into the measured flow. The transit time is defined with the help of radiation detectors placed on the outside of the pipe. The volume flow is obtained by multiplying the measured average fluid flow velocity by the inner pipe cross-section. This reference flow value is compared with the simultaneous flow value given by the flow measurement to be calibrated.
The procedure is standardised (ISO 2975/VII for liquids and BS 5857-2.4 for gases). The best accredited measurement uncertainty for liquids and gases is 0.5%.
Tracer dilution method
The radiotracer dilution method is used to calibrate open channel flow measurements. A solution with a known tracer concentration is injected at a constant known velocity into the channel flow. Downstream, where the tracer solution is thoroughly mixed over the flow cross-section, a continuous sample is taken and its tracer concentration in relation to that of the injected solution is determined. The flow reference value is determined by using the tracer balance condition between the injected tracer flow and the diluting flow.
The procedure is standardised (ISO 9555-1 and ISO 9555-2 for liquid flow in open channels). The best accredited measurement uncertainty is 1%.
See also
Anemometer
Automatic meter reading
Flowmeter error
Ford viscosity cup
Gas meter
Ultrasonic flow meter
Laser Doppler velocimetry
Primary flow element
Water meter
References
Fluid dynamics
Measurement
Medical ultrasonography | Flow measurement | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 7,868 | [
"Physical quantities",
"Chemical engineering",
"Quantity",
"Measurement",
"Size",
"Piping",
"Fluid dynamics"
] |
221,244 | https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford%20algorithm | The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph.
It is slower than Dijkstra's algorithm for the same problem, but more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers. The algorithm was first proposed by Alfonso Shimbel in 1955, but is instead named after Richard Bellman and Lester Ford Jr., who published it in 1958 and 1956, respectively. Edward F. Moore also published a variation of the algorithm in 1959, and for this reason it is also sometimes called the Bellman–Ford–Moore algorithm.
Negative edge weights are found in various applications of graphs, hence the usefulness of this algorithm.
If a graph contains a "negative cycle" (i.e. a cycle whose edges sum to a negative value) that is reachable from the source, then there is no cheapest path: any path that has a point on the negative cycle can be made cheaper by one more walk around the negative cycle. In such a case, the Bellman–Ford algorithm can detect and report the negative cycle.
Algorithm
Like Dijkstra's algorithm, Bellman–Ford proceeds by relaxation, in which approximations to the correct distance are replaced by better ones until they eventually reach the solution. In both algorithms, the approximate distance to each vertex is always an overestimate of the true distance, and is replaced by the minimum of its old value and the length of a newly found path.
However, Dijkstra's algorithm uses a priority queue to greedily select the closest vertex that has not yet been processed, and performs this relaxation process on all of its outgoing edges; by contrast, the Bellman–Ford algorithm simply relaxes all the edges, and does this |V| − 1 times, where |V| is the number of vertices in the graph.
In each of these repetitions, the number of vertices with correctly calculated distances grows, from which it follows that eventually all vertices will have their correct distances. This method allows the Bellman–Ford algorithm to be applied to a wider class of inputs than Dijkstra's algorithm. The intermediate answers depend on the order of edges relaxed, but the final answer remains the same.
Bellman–Ford runs in O(|V|·|E|) time, where |V| and |E| are the number of vertices and edges respectively.
function BellmanFord(list vertices, list edges, vertex source) is
// This implementation takes in a graph, represented as
// lists of vertices (represented as integers [0..n-1]) and edges,
// and fills two arrays (distance and predecessor) holding
// the shortest path from the source to each vertex
distance := list of size n
predecessor := list of size n
// Step 1: initialize graph
for each vertex v in vertices do
// Initialize the distance to all vertices to infinity
distance[v] := inf
// And having a null predecessor
predecessor[v] := null
// The distance from the source to itself is zero
distance[source] := 0
// Step 2: relax edges repeatedly
repeat |V|−1 times:
for each edge (u, v) with weight w in edges do
if distance[u] + w < distance[v] then
distance[v] := distance[u] + w
predecessor[v] := u
// Step 3: check for negative-weight cycles
for each edge (u, v) with weight w in edges do
if distance[u] + w < distance[v] then
predecessor[v] := u
// A negative cycle exists; find a vertex on the cycle
visited := list of size n initialized with false
visited[v] := true
while not visited[u] do
visited[u] := true
u := predecessor[u]
// u is a vertex in a negative cycle, find the cycle itself
ncycle := [u]
v := predecessor[u]
while v != u do
ncycle := concatenate([v], ncycle)
v := predecessor[v]
error "Graph contains a negative-weight cycle", ncycle
return distance, predecessor
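A compact executable version of the pseudocode above, as a sketch in Python; for brevity it raises an error on a reachable negative cycle rather than tracing the cycle itself, and it includes the early-exit improvement discussed below.
def bellman_ford(n, edges, source):
    # n vertices labelled 0..n-1; edges is a list of (u, v, w) tuples
    INF = float("inf")
    distance = [INF] * n
    predecessor = [None] * n
    distance[source] = 0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                predecessor[v] = u
                changed = True
        if not changed:              # early exit: no edge was relaxed
            break
    for u, v, w in edges:            # extra pass: negative-cycle check
        if distance[u] + w < distance[v]:
            raise ValueError("Graph contains a negative-weight cycle")
    return distance, predecessor
edges = [(0, 1, 4), (0, 2, 1), (2, 1, -2), (1, 3, 3)]
print(bellman_ford(4, edges, 0))     # distances [0, -1, 1, 2]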
Simply put, the algorithm initializes the distance to the source to 0 and all other nodes to infinity. Then for all edges, if the distance to the destination can be shortened by taking the edge, the distance is updated to the new lower value.
The core of the algorithm is a loop that scans across all edges at every iteration. For every i, at the end of the i-th iteration, from any vertex v, following the predecessor trail recorded in predecessor yields a path that has a total weight that is at most distance[v], and further, distance[v] is a lower bound to the length of any path from source to v that uses at most i edges.
Since the longest possible path without a cycle can be |V| − 1 edges, the edges must be scanned |V| − 1 times to ensure the shortest path has been found for all nodes. A final scan of all the edges is performed and if any distance is updated, then a path of length |V| edges has been found, which can only occur if at least one negative cycle exists in the graph.
The edge (u, v) that is found in step 3 must be reachable from a negative cycle, but it isn't necessarily part of the cycle itself, which is why it's necessary to follow the path of predecessors backwards until a cycle is detected. The above pseudo-code uses a Boolean array (visited) to find a vertex on the cycle, but any cycle finding algorithm can be used to find a vertex on the cycle.
A common improvement when implementing the algorithm is to return early when an iteration of step 2 fails to relax any edges, which implies all shortest paths have been found, and therefore there are no negative cycles. In that case, the complexity of the algorithm is reduced from O(|V|·|E|) to O(l·|E|), where l is the maximum length of a shortest path in the graph.
Proof of correctness
The correctness of the algorithm can be shown by induction:
Lemma. After i repetitions of the for loop,
if Distance(u) is not infinity, it is equal to the length of some path from s to u; and
if there is a path from s to u with at most i edges, then Distance(u) is at most the length of the shortest path from s to u with at most i edges.
Proof. For the base case of induction, consider i = 0 and the moment before the for loop is executed for the first time. Then, for the source vertex, source.distance = 0, which is correct. For other vertices u, u.distance = infinity, which is also correct because there is no path from source to u with 0 edges.
For the inductive case, we first prove the first part. Consider a moment when a vertex's distance is updated by
v.distance := u.distance + uv.weight. By inductive assumption, u.distance is the length of some path from source to u. Then u.distance + uv.weight is the length of the path from source to v that follows the path from source to u and then goes to v.
For the second part, consider a shortest path P (there may be more than one) from source to v with at most i edges. Let u be the last vertex before v on this path. Then, the part of the path from source to u is a shortest path from source to u with at most i-1 edges, since if it were not, then there must be some strictly shorter path from source to u with at most i-1 edges, and we could then append the edge uv to this path to obtain a path with at most i edges that is strictly shorter than P—a contradiction. By inductive assumption, u.distance after i−1 iterations is at most the length of this path from source to u. Therefore, uv.weight + u.distance is at most the length of P. In the ith iteration, v.distance gets compared with uv.weight + u.distance, and is set equal to it if uv.weight + u.distance is smaller. Therefore, after i iterations, v.distance is at most the length of P, i.e., the length of the shortest path from source to v that uses at most i edges.
If there are no negative-weight cycles, then every shortest path visits each vertex at most once, so at step 3 no further improvements can be made. Conversely, suppose no improvement can be made. Then for any cycle with vertices v[0], ..., v[k−1],
v[i].distance <= v[i-1 (mod k)].distance + v[i-1 (mod k)]v[i].weight
Summing around the cycle, the v[i].distance and v[i−1 (mod k)].distance terms cancel, leaving
0 <= sum from 1 to k of v[i-1 (mod k)]v[i].weight
I.e., every cycle has nonnegative weight.
Finding negative cycles
When the algorithm is used to find shortest paths, the existence of negative cycles is a problem, preventing the algorithm from finding a correct answer. However, since it terminates upon finding a negative cycle, the Bellman–Ford algorithm can be used for applications in which this is the target to be sought – for example in cycle-cancelling techniques in network flow analysis.
Applications in routing
A distributed variant of the Bellman–Ford algorithm is used in distance-vector routing protocols, for example the Routing Information Protocol (RIP). The algorithm is distributed because it involves a number of nodes (routers) within an Autonomous system (AS), a collection of IP networks typically owned by an ISP.
It consists of the following steps:
Each node calculates the distances between itself and all other nodes within the AS and stores this information as a table.
Each node sends its table to all neighboring nodes.
When a node receives distance tables from its neighbors, it calculates the shortest routes to all other nodes and updates its own table to reflect any changes.
The main disadvantages of the Bellman–Ford algorithm in this setting are as follows:
It does not scale well.
Changes in network topology are not reflected quickly since updates are spread node-by-node.
Count to infinity: if link or node failures render a node unreachable from some set of other nodes, those nodes may spend forever gradually increasing their estimates of the distance to it, and in the meantime there may be routing loops.
Improvements
The Bellman–Ford algorithm may be improved in practice (although not in the worst case) by the observation that, if an iteration of the main loop of the algorithm terminates without making any changes, the algorithm can be immediately terminated, as subsequent iterations will not make any more changes. With this early termination condition, the main loop may in some cases use many fewer than |V| − 1 iterations, even though the worst case of the algorithm remains unchanged. The following improvements all maintain the worst-case time complexity.
A variation of the Bellman–Ford algorithm described by Moore (1959) reduces the number of relaxation steps that need to be performed within each iteration of the algorithm. If a vertex v has a distance value that has not changed since the last time the edges out of v were relaxed, then there is no need to relax the edges out of v a second time. In this way, as the number of vertices with correct distance values grows, the number whose outgoing edges need to be relaxed in each iteration shrinks, leading to a constant-factor savings in time for dense graphs. This variation can be implemented by keeping a collection of vertices whose outgoing edges need to be relaxed, removing a vertex from this collection when its edges are relaxed, and adding to the collection any vertex whose distance value is changed by a relaxation step. In China, this algorithm was popularized by Fanding Duan, who rediscovered it in 1994, as the "shortest path faster algorithm".
Yen (1970) described another improvement to the Bellman–Ford algorithm. His improvement first assigns some arbitrary linear order on all vertices and then partitions the set of all edges into two subsets. The first subset, Ef, contains all edges (vi, vj) such that i < j; the second, Eb, contains edges (vi, vj) such that i > j. Each vertex is visited in the order v1, v2, ..., v|V|, relaxing each outgoing edge from that vertex in Ef. Each vertex is then visited in the order v|V|, v|V|−1, ..., v1, relaxing each outgoing edge from that vertex in Eb. Each iteration of the main loop of the algorithm, after the first one, adds at least two edges to the set of edges whose relaxed distances match the correct shortest path distances: one from Ef and one from Eb. This modification reduces the worst-case number of iterations of the main loop of the algorithm from |V| − 1 to |V|/2.
Another improvement, by Bannister & Eppstein (2012), replaces the arbitrary linear order of the vertices used in Yen's second improvement by a random permutation. This change makes the worst case for Yen's improvement (in which the edges of a shortest path strictly alternate between the two subsets Ef and Eb) very unlikely to happen. With a randomly permuted vertex ordering, the expected number of iterations needed in the main loop is at most |V|/3.
In 2023, Jeremy Fineman, at Georgetown University, created an improved algorithm that, with high probability, runs in Õ(|E|·|V|^(8/9)) time.
Notes
References
Original sources
Secondary sources
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Fourth Edition. MIT Press, 2022. Section 22.1: The Bellman–Ford algorithm, pp. 612–616. Problem 22–1, p. 640.
Graph algorithms
Polynomial-time problems
Articles with example C code
Articles with example pseudocode
Dynamic programming
Graph distance | Bellman–Ford algorithm | [
"Mathematics"
] | 2,825 | [
"Graph theory",
"Computational problems",
"Polynomial-time problems",
"Mathematical relations",
"Mathematical problems",
"Graph distance"
] |
221,400 | https://en.wikipedia.org/wiki/Aircraft%20flight%20dynamics | Flight dynamics is the science of air vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation in three dimensions about the vehicle's center of gravity (cg), known as pitch, roll and yaw. These are collectively known as aircraft attitude, often principally relative to the atmospheric frame in normal flight, but also relative to terrain during takeoff or landing, or when operating at low elevation. The concept of attitude is not specific to fixed-wing aircraft, but also extends to rotary aircraft such as helicopters, and dirigibles, where the flight dynamics involved in establishing and controlling attitude are entirely different.
Control systems adjust the orientation of a vehicle about its cg. A control system includes control surfaces which, when deflected, generate a moment (or couple from ailerons) about the cg which rotates the aircraft in pitch, roll, and yaw. For example, a pitching moment comes from a force applied at a distance forward or aft of the cg, causing the aircraft to pitch up or down.
A fixed-wing aircraft increases or decreases the lift generated by the wings when it pitches nose up or down by increasing or decreasing the angle of attack (AOA). The roll angle is also known as bank angle on a fixed-wing aircraft, which usually "banks" to change the horizontal direction of flight. An aircraft is streamlined from nose to tail to reduce drag, making it advantageous to keep the sideslip angle near zero, though an aircraft may be deliberately "sideslipped" to increase drag and descent rate during landing, to keep the aircraft heading the same as the runway heading during cross-wind landings, or during flight with asymmetric power.
Background
Roll, pitch and yaw refer to rotations about the respective axes starting from a defined steady flight equilibrium state. The equilibrium roll angle is known as wings level or zero bank angle.
The most common aeronautical convention defines roll as acting about the longitudinal axis, positive with the starboard (right) wing down. Yaw is about the vertical body axis, positive with the nose to starboard. Pitch is about an axis perpendicular to the longitudinal plane of symmetry, positive nose up.
Reference frames
Three right-handed, Cartesian coordinate systems see frequent use in flight dynamics. The first coordinate system has an origin fixed in the reference frame of the Earth:
Earth frame
Origin - arbitrary, fixed relative to the surface of the Earth
xE axis - positive in the direction of north
yE axis - positive in the direction of east
zE axis - positive towards the center of the Earth
In many flight dynamics applications, the Earth frame is assumed to be inertial with a flat xE,yE-plane, though the Earth frame can also be considered a spherical coordinate system with origin at the center of the Earth.
The other two reference frames are body-fixed, with origins moving along with the aircraft, typically at the center of gravity. For an aircraft that is symmetric from right-to-left, the frames can be defined as:
Body frame
Origin - airplane center of gravity
xb axis - positive out the nose of the aircraft in the plane of symmetry of the aircraft
zb axis - perpendicular to the xb axis, in the plane of symmetry of the aircraft, positive below the aircraft
yb axis - perpendicular to the xb,zb-plane, positive determined by the right-hand rule (generally, positive out the right wing)
Wind frame
Origin - airplane center of gravity
xw axis - positive in the direction of the velocity vector of the aircraft relative to the air
zw axis - perpendicular to the xw axis, in the plane of symmetry of the aircraft, positive below the aircraft
yw axis - perpendicular to the xw,zw-plane, positive determined by the right hand rule (generally, positive to the right)
Asymmetric aircraft have analogous body-fixed frames, but different conventions must be used to choose the precise directions of the x and z axes.
The Earth frame is a convenient frame to express aircraft translational and rotational kinematics. The Earth frame is also useful in that, under certain assumptions, it can be approximated as inertial. Additionally, one force acting on the aircraft, weight, is fixed in the +zE direction.
The body frame is often of interest because the origin and the axes remain fixed relative to the aircraft. This means that the relative orientation of the Earth and body frames describes the aircraft attitude. Also, the direction of the force of thrust is generally fixed in the body frame, though some aircraft can vary this direction, for example by thrust vectoring.
The wind frame is a convenient frame to express the aerodynamic forces and moments acting on an aircraft. In particular, the net aerodynamic force can be divided into components along the wind frame axes, with the drag force in the −xw direction and the lift force in the −zw direction.
In addition to defining the reference frames, the relative orientation of the reference frames can be determined. The relative orientation can be expressed in a variety of forms, including:
Rotation matrices
Direction cosines
Euler angles
Quaternions
The various Euler angles relating the three reference frames are important to flight dynamics. Many Euler angle conventions exist, but all of the rotation sequences presented below use the z-y'-x" convention. This convention corresponds to a type of Tait-Bryan angles, which are commonly referred to as Euler angles. This convention is described in detail below for the roll, pitch, and yaw Euler angles that describe the body frame orientation relative to the Earth frame. The other sets of Euler angles are described below by analogy.
Transformations (Euler angles)
From Earth frame to body frame
First, rotate the Earth frame axes xE and yE around the zE axis by the yaw angle ψ. This results in an intermediate reference frame with axes denoted x′,y′,z′, where z′ = zE.
Second, rotate the x′ and z′ axes around the y′ axis by the pitch angle θ. This results in another intermediate reference frame with axes denoted x″,y″,z″, where y″ = y′.
Finally, rotate the y" and z" axes around the x" axis by the roll angle φ. The reference frame that results after the three rotations is the body frame.
Based on the rotations and axes conventions above:
Yaw angle ψ: angle between north and the projection of the aircraft longitudinal axis onto the horizontal plane;
Pitch angle θ: angle between the aircraft longitudinal axis and horizontal;
Roll angle φ: rotation around the aircraft longitudinal axis after rotating by yaw and pitch.
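A minimal numpy sketch of the z-y′-x″ sequence described above, building the Earth-to-body direction cosine matrix from assumed yaw, pitch and roll values; a vector of Earth-frame coordinates left-multiplied by the matrix is expressed in body-frame coordinates:
import numpy as np
def earth_to_body(yaw, pitch, roll):
    # Successive frame rotations: yaw about zE, pitch about y', roll about x''
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    r_z = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])
    r_y = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    r_x = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
    return r_x @ r_y @ r_z
# With the nose pointing east (yaw 90 deg), north lies out the left wing
print(earth_to_body(np.pi / 2, 0.0, 0.0) @ np.array([1.0, 0.0, 0.0]))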
From Earth frame to wind frame
Heading angle σ: angle between north and the horizontal component of the velocity vector, which describes which direction the aircraft is moving relative to cardinal directions.
Flight path angle γ: the angle between horizontal and the velocity vector, which describes whether the aircraft is climbing or descending.
Bank angle μ: represents a rotation of the lift force around the velocity vector, which may indicate whether the airplane is turning.
When performing the rotations described above to obtain the body frame from the Earth frame, there is this analogy between angles:
σ, ψ (heading vs yaw)
γ, θ (Flight path vs pitch)
μ, φ (Bank vs Roll)
From wind frame to body frame
sideslip angle β: angle between the velocity vector and the projection of the aircraft longitudinal axis onto the xw,yw-plane, which describes whether there is a lateral component to the aircraft velocity
angle of attack α: angle between the xw,yw-plane and the aircraft longitudinal axis and, among other things, is an important variable in determining the magnitude of the force of lift
When performing the rotations described earlier to obtain the body frame from the Earth frame, there is this analogy between angles:
β, ψ (sideslip vs yaw)
α, θ (attack vs pitch)
(φ = 0) (nothing vs roll)
Analogies
Between the three reference frames there are hence these analogies:
Yaw / Heading / Sideslip (Z axis, vertical)
Pitch / Flight path / Attack angle (Y axis, wing)
Roll / Bank / nothing (X axis, nose)
Design cases
In analyzing the stability of an aircraft, it is usual to consider perturbations about a nominal steady flight state. So the analysis would be applied, for example, assuming:
Straight and level flight
Turn at constant speed
Approach and landing
Takeoff
The speed, height and trim angle of attack are different for each flight condition, in addition, the aircraft will be configured differently, e.g. at low speed flaps may be deployed and the undercarriage may be down.
Except for asymmetric designs (or symmetric designs at significant sideslip), the longitudinal equations of motion (involving pitch and lift forces) may be treated independently of the lateral motion (involving roll and yaw).
The following considers perturbations about a nominal straight and level flight path.
To keep the analysis (relatively) simple, the control surfaces are assumed fixed throughout the motion; this is stick-fixed stability. Stick-free analysis requires the further complication of taking the motion of the control surfaces into account.
Furthermore, the flight is assumed to take place in still air, and the aircraft is treated as a rigid body.
Forces of flight
Three forces act on an aircraft in flight: weight, thrust, and the aerodynamic force.
Aerodynamic force
Components of the aerodynamic force
The expression to calculate the aerodynamic force is:
F_A = ∫_Σ (−Δp n + f) dσ
where:
Δp — difference between static pressure and free current pressure
n — outer normal vector of the element of area
f — tangential stress vector exerted by the air on the body
Σ — adequate reference surface
projected on wind axes we obtain:
F_A = −(D i_w + Q j_w + L k_w), with D = C_D q S, Q = C_Q q S, L = C_L q S
where:
D — drag
Q — lateral force
L — lift
C_D, C_Q, C_L — aerodynamic coefficients
q = ½ ρ V² — dynamic pressure of the free current
S — proper reference surface (wing surface, in case of planes)
C_p — pressure coefficient
C_f — friction coefficient
C_D — drag coefficient
C_Q — lateral force coefficient
C_L — lift coefficient
It is necessary to know Cp and Cf in every point on the considered surface.
Dimensionless parameters and aerodynamic regimes
In absence of thermal effects, there are three remarkable dimensionless numbers:
Compressibility of the flow:
Mach number: M = V/a
Viscosity of the flow:
Reynolds number: Re = ρVl/μ (with l a characteristic length and μ the dynamic viscosity)
Rarefaction of the flow:
Knudsen number: Kn = λ/l
where:
a = √(γRT) — speed of sound
γ — specific heat ratio
R — gas constant by mass unity
T — absolute temperature
λ — mean free path
According to λ there are three possible rarefaction grades and their corresponding motions are called:
Continuum current (negligible rarefaction): Kn < 0.1
Transition current (moderate rarefaction): 0.1 < Kn < 10
Free molecular current (high rarefaction): Kn > 10
The motion of a body through a flow is considered, in flight dynamics, as continuum current. In the outer layer of the space that surrounds the body viscosity will be negligible. However viscosity effects will have to be considered when analysing the flow in the nearness of the boundary layer.
Depending on the compressibility of the flow, different kinds of currents can be considered:
Incompressible subsonic current: M < 0.3
Compressible subsonic current: 0.3 < M < 0.8
Transonic current: 0.8 < M < 1.2
Supersonic current: 1.2 < M < 5
Hypersonic current: M > 5
Drag coefficient equation and aerodynamic efficiency
If the geometry of the body is fixed and in case of symmetric flight (β = 0 and Q = 0), pressure and friction coefficients are functions depending on:
C_p = C_p(α, P, M, Re), C_f = C_f(α, P, M, Re)
where:
α — angle of attack
P — considered point of the surface
Under these conditions, drag and lift coefficient are functions depending exclusively on the angle of attack of the body and Mach and Reynolds numbers. Aerodynamic efficiency, defined as the relation between lift and drag coefficients, will depend on those parameters as well.
It is also possible to get the dependency of the drag coefficient with respect to the lift coefficient. This relation is known as the drag coefficient equation:
C_D = C_D(C_L, M, Re) — drag coefficient equation
The aerodynamic efficiency has a maximum value, E_max, with respect to C_L where the tangent line from the coordinate origin touches the drag coefficient equation plot.
The drag coefficient, C_D, can be decomposed in two ways. The first typical decomposition separates pressure and friction effects:
C_D = C_Dp + C_Df
There is a second typical decomposition taking into account the definition of the drag coefficient equation. This decomposition separates the effect of the lift coefficient in the equation, obtaining two terms C_D0 and C_Di:
C_D = C_D0 + C_Di
C_D0 is known as the parasitic drag coefficient and it is the base drag coefficient at zero lift. C_Di is known as the induced drag coefficient and it is produced by the body lift.
Parabolic and generic drag coefficient
A good attempt for the induced drag coefficient is to assume a parabolic dependency of the lift:
C_Di = k C_L², so that C_D = C_D0 + k C_L²
Aerodynamic efficiency is now calculated as:
E = C_L / C_D = C_L / (C_D0 + k C_L²)
If the configuration of the plane is symmetrical with respect to the XY plane, the minimum drag coefficient equals the parasitic drag of the plane.
In case the configuration is asymmetrical with respect to the XY plane, however, the minimum drag differs from the parasitic drag. In these cases, a new approximate parabolic drag equation can be traced leaving the minimum drag value at the zero-lift value.
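For the parabolic polar the tangency condition can be evaluated in closed form; a minimal sketch with assumed, sailplane-like values of C_D0 and k:
import math
def max_efficiency(cd0, k):
    # Tangency from the origin to CD = CD0 + k*CL**2 gives
    # CL* = sqrt(CD0/k) and Emax = 1/(2*sqrt(k*CD0))
    cl_star = math.sqrt(cd0 / k)
    e_max = 1.0 / (2.0 * math.sqrt(k * cd0))
    return cl_star, e_max
print(max_efficiency(0.015, 0.05))   # CL* ~ 0.55, Emax ~ 18.3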
Variation of parameters with the Mach number
The coefficient of pressure varies with Mach number by the relation given below:
C_p = C_p0 / √(1 − M_∞²)
where
Cp is the compressible pressure coefficient
Cp0 is the incompressible pressure coefficient
M∞ is the freestream Mach number.
This relation is reasonably accurate for 0.3 < M < 0.7; when M = 1 it becomes infinite, which is an impossible physical situation, and is called the Prandtl–Glauert singularity.
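The correction is a one-line computation; a minimal sketch, with an assumed incompressible pressure coefficient:
def prandtl_glauert(cp0, mach):
    # Cp = Cp0 / sqrt(1 - M**2); diverges as M -> 1
    if not 0.0 <= mach < 1.0:
        raise ValueError("correction applies only to subsonic Mach numbers")
    return cp0 / (1.0 - mach ** 2) ** 0.5
print(prandtl_glauert(-0.30, 0.6))   # ~ -0.375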
Aerodynamic force in a specified atmosphere
see Aerodynamic force
Stability
Stability is the ability of the aircraft to counteract disturbances to its flight path.
According to David P. Davies, there are six types of aircraft stability: speed stability, stick free static longitudinal stability, static lateral stability, directional stability, oscillatory stability, and spiral stability.
Speed stability
An aircraft in cruise flight is typically speed stable. If speed increases, drag increases, which will reduce the speed back to equilibrium for its configuration and thrust setting. If speed decreases, drag decreases, and the aircraft will accelerate back to its equilibrium speed where thrust equals drag.
However, in slow flight, due to lift-induced drag, as speed decreases, drag increases (and vice versa). This is known as the "back of the drag curve". The aircraft will be speed unstable, because a decrease in speed will cause a further decrease in speed.
Static stability and control
Longitudinal static stability
Longitudinal stability refers to the stability of an aircraft in pitch. For a stable aircraft, if the aircraft pitches up, the wings and tail create a pitch-down moment which tends to restore the aircraft to its original attitude. For an unstable aircraft, a disturbance in pitch will lead to an increasing pitching moment. Longitudinal static stability is the ability of an aircraft to recover from an initial disturbance. Longitudinal dynamic stability refers to the damping of these stabilizing moments, which prevents persistent or increasing oscillations in pitch.
Directional stability
Directional or weathercock stability is concerned with the static stability of the airplane about the z axis. Just as in the case of longitudinal stability it is desirable that the aircraft should tend to return to an equilibrium condition when subjected to some form of yawing disturbance. For this the slope of the yawing moment curve must be positive.
An airplane possessing this mode of stability will always point towards the relative wind, hence the name weathercock stability.
Dynamic stability and control
Longitudinal modes
It is common practice to derive a fourth order characteristic equation to describe the longitudinal motion, and then factorize it approximately into a high frequency mode and a low frequency mode. The approach adopted here uses qualitative knowledge of aircraft behavior to simplify the equations from the outset, reaching the result by a more accessible route.
The two longitudinal motions (modes) are called the short period pitch oscillation (SPPO), and the phugoid.
Short-period pitch oscillation
A short input (in control systems terminology an impulse) in pitch (generally via the elevator in a standard configuration fixed-wing aircraft) will generally lead to overshoots about the trimmed condition. The transition is characterized by a damped simple harmonic motion about the new trim. There is very little change in the trajectory over the time it takes for the oscillation to damp out.
Generally this oscillation is high frequency (hence short period) and is damped over a period of a few seconds. A real-world example would involve a pilot selecting a new climb attitude, for example 5° nose up from the original attitude. A short, sharp pull back on the control column may be used, and will generally lead to oscillations about the new trim condition. If the oscillations are poorly damped the aircraft will take a long period of time to settle at the new condition, potentially leading to Pilot-induced oscillation. If the short period mode is unstable it will generally be impossible for the pilot to safely control the aircraft for any period of time.
This damped harmonic motion is called the short period pitch oscillation; it arises from the tendency of a stable aircraft to point in the general direction of flight. It is very similar in nature to the weathercock mode of missile or rocket configurations. The motion involves mainly the pitch attitude θ (theta) and incidence α (alpha). The direction of the velocity vector, relative to inertial axes, is θ − α. The velocity vector is:
u_f = U cos(θ − α)
w_f = U sin(θ − α)
where u_f, w_f are the inertial axes components of velocity. According to Newton's Second Law, the accelerations are proportional to the forces, so the forces in inertial axes are:
X_f = m (du_f/dt)
Z_f = m (dw_f/dt)
where m is the mass.
By the nature of the motion, the speed variation is negligible over the period of the oscillation, so:
But the forces are generated by the pressure distribution on the body, and are referred to the velocity vector. Since the velocity (wind) axes set is not an inertial frame, we must resolve the fixed axes forces into wind axes. Also, we are only concerned with the force along the z-axis:
Z = −mU d(θ − α)/dt
Or:
Z = −mU (dθ/dt − dα/dt)
In words, the wind axes force is equal to the centripetal acceleration.
The moment equation is the time derivative of the angular momentum:
M = B (d²θ/dt²)
where M is the pitching moment, and B is the moment of inertia about the pitch axis.
Let q = dθ/dt, the pitch rate.
The equations of motion, with all forces and moments referred to wind axes, are therefore:
Z = −mU (q − dα/dt)
M = B (dq/dt)
We are only concerned with perturbations in forces and moments, due to perturbations in the states α and q, and their time derivatives. These are characterized by stability derivatives determined from the flight condition. The possible stability derivatives are:
Z_α — lift due to incidence; this is negative because the z-axis is downwards whilst positive incidence causes an upwards force.
Z_q — lift due to pitch rate; arises from the increase in tail incidence, hence is also negative, but small compared with Z_α.
M_α — pitching moment due to incidence: the static stability term. Static stability requires this to be negative.
M_q — pitching moment due to pitch rate: the pitch damping term; this is always negative.
Since the tail is operating in the flowfield of the wing, changes in the wing incidence cause changes in the downwash, but there is a delay for the change in wing flowfield to affect the tail lift; this is represented as a moment proportional to the rate of change of incidence, with derivative M_α̇.
The delayed downwash effect gives the tail more lift and produces a nose down moment, so M_α̇ is expected to be negative.
The equations of motion, with small perturbation forces and moments (and neglecting the small $Z_q$ contribution), become:

$\frac{d\alpha}{dt} = q + \frac{Z_\alpha}{mU}\alpha, \qquad \frac{dq}{dt} = \frac{M_\alpha}{B}\alpha + \frac{M_q}{B}q + \frac{M_{\dot\alpha}}{B}\frac{d\alpha}{dt}$

These may be manipulated to yield a second order linear differential equation in $\alpha$:

$\frac{d^2\alpha}{dt^2} - \left(\frac{Z_\alpha}{mU} + \frac{M_q}{B} + \frac{M_{\dot\alpha}}{B}\right)\frac{d\alpha}{dt} + \left(\frac{Z_\alpha}{mU}\frac{M_q}{B} - \frac{M_\alpha}{B}\right)\alpha = 0$
This represents a damped simple harmonic motion.
We should expect the product $\frac{Z_\alpha}{mU}\frac{M_q}{B}$ to be small compared with unity, so the coefficient of $\alpha$ (the 'stiffness' term) will be positive, provided $M_\alpha < \frac{M_q Z_\alpha}{mU}$. This expression is dominated by $M_\alpha$, which defines the longitudinal static stability of the aircraft; it must be negative for stability. The damping term is reduced by the downwash effect, and it is difficult to design an aircraft with both rapid natural response and heavy damping. Usually, the response is underdamped but stable.
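As an illustrative numerical sketch (not from the source), the second-order form above can be evaluated directly; every mass, speed, inertia, and derivative value below is a hypothetical round number, chosen only to exhibit the underdamped behavior described:

```python
import math

# Hypothetical aircraft data and stability derivatives (illustrative only)
m, U, B = 5000.0, 100.0, 30000.0  # mass (kg), speed (m/s), pitch inertia (kg m^2)
Z_alpha = -3.0e5      # lift due to incidence (negative, z-axis down)
M_alpha = -4.0e5      # static stability term (negative for stability)
M_q = -1.0e5          # pitch damping (always negative)
M_alphadot = -0.2e5   # downwash lag term (expected negative)

# Coefficients of  alpha'' + 2*zeta*omega_n*alpha' + omega_n**2 * alpha = 0
damping = -(Z_alpha / (m * U) + M_q / B + M_alphadot / B)
stiffness = (Z_alpha / (m * U)) * (M_q / B) - M_alpha / B

omega_n = math.sqrt(stiffness)      # natural frequency (rad/s)
zeta = damping / (2.0 * omega_n)    # damping ratio; < 1 means underdamped
period = 2.0 * math.pi / (omega_n * math.sqrt(1.0 - zeta**2))
print(f"omega_n = {omega_n:.2f} rad/s, zeta = {zeta:.2f}, period = {period:.1f} s")
```

With these numbers the mode oscillates with a period of about two seconds at a damping ratio near 0.6, consistent with the "high frequency, damped over a few seconds" description above.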
Phugoid
If the stick is held fixed, the aircraft will not maintain straight and level flight (except in the unlikely case that it happens to be perfectly trimmed for level flight at its current altitude and thrust setting), but will start to dive, level out and climb again. It will repeat this cycle until the pilot intervenes. This long period oscillation in speed and height is called the phugoid mode. This is analyzed by assuming that the SPPO performs its proper function and maintains the angle of attack near its nominal value. The two states which are mainly affected are the flight path angle $\gamma$ (gamma) and speed u. The small perturbation equations of motion are:

$mU\frac{d\gamma}{dt} = -Z_u u$

which means the centripetal force is equal to the perturbation in lift force.

For the speed, resolving along the trajectory:

$m\frac{du}{dt} = X_u u - mg\gamma$

where g is the acceleration due to gravity at the Earth's surface. The acceleration along the trajectory is equal to the net x-wise force minus the component of weight. We should not expect significant aerodynamic derivatives to depend on the flight path angle, so only $X_u$ and $Z_u$ need be considered. $X_u$ is the drag increment with increased speed; it is negative. Likewise, $Z_u$ is the lift increment due to speed increment; it is also negative because lift acts in the opposite sense to the z-axis.
The equations of motion become:

$m\frac{du}{dt} = X_u u - mg\gamma, \qquad mU\frac{d\gamma}{dt} = -Z_u u$

These may be expressed as a second order equation in flight path angle or speed perturbation:

$\frac{d^2u}{dt^2} - \frac{X_u}{m}\frac{du}{dt} - \frac{g Z_u}{mU}u = 0$

Now lift is very nearly equal to weight:

$\frac{1}{2}\rho U^2 S c_L = W$

where $\rho$ is the air density, S is the wing area, W the weight and $c_L$ is the lift coefficient (assumed constant because the incidence is constant); we have, approximately:

$Z_u = -\frac{2W}{U} = -\frac{2mg}{U}$

The period of the phugoid, T, is obtained from the coefficient of u:

$\omega^2 = -\frac{g Z_u}{mU} = \frac{2g^2}{U^2}$

Or:

$T = \frac{2\pi}{\omega} = \pi\sqrt{2}\,\frac{U}{g}$

which is about 0.45 U seconds with U in m/s; for example, at U = 100 m/s the period is about 45 seconds.
Since the lift is very much greater than the drag, the phugoid is at best lightly damped. A propeller with fixed speed would help. Heavy damping of the pitch rotation or a large rotational inertia increase the coupling between short period and phugoid modes, so that these will modify the phugoid.
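For a quick numerical check of the period estimate (a minimal sketch under the constant-lift-coefficient assumption above; the cruise speed is hypothetical):

```python
import math

G = 9.81  # acceleration due to gravity, m/s^2

def phugoid_period(speed_mps: float) -> float:
    """Phugoid period from the approximation T = pi * sqrt(2) * U / g."""
    return math.pi * math.sqrt(2.0) * speed_mps / G

# Hypothetical cruise speed of 100 m/s
print(f"{phugoid_period(100.0):.1f} s")  # about 45 s
```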
Lateral modes
With a symmetrical rocket or missile, the directional stability in yaw is the same as the pitch stability; it resembles the short period pitch oscillation, with yaw plane equivalents to the pitch plane stability derivatives. For this reason, pitch and yaw directional stability are collectively known as the "weathercock" stability of the missile.
Aircraft lack the symmetry between pitch and yaw, so that directional stability in yaw is derived from a different set of stability derivatives. The yaw plane equivalent to the short period pitch oscillation, which describes yaw plane directional stability is called Dutch roll. Unlike pitch plane motions, the lateral modes involve both roll and yaw motion.
Dutch roll
It is customary to derive the equations of motion by formal manipulation in what, to the engineer, amounts to a piece of mathematical sleight of hand. The current approach follows the pitch plane analysis in formulating the equations in terms of concepts which are reasonably familiar.
Applying an impulse via the rudder pedals should induce Dutch roll, which is the oscillation in roll and yaw, with the roll motion lagging yaw by a quarter cycle, so that the wing tips follow elliptical paths with respect to the aircraft.
The yaw plane translational equation, as in the pitch plane, equates the centripetal acceleration to the side force:

$mU\left(\frac{d\beta}{dt} + r\right) = Y$

where $\beta$ (beta) is the sideslip angle, Y the side force and r the yaw rate.
The moment equations are a bit trickier. The trim condition is with the aircraft at an angle of attack with respect to the airflow. The body x-axis does not align with the velocity vector, which is the reference direction for wind axes. In other words, wind axes are not principal axes (the mass is not distributed symmetrically about the yaw and roll axes). Consider the motion of an element of mass in position -z, x in the direction of the y-axis, i.e. into the plane of the paper.
If the roll rate is p, the velocity of the particle has components arising from both the roll and yaw rates.

The force on this particle is made up of two terms: the first is proportional to the rate of change of v, the second is due to the change in direction of this component of velocity as the body moves. The latter term gives rise to cross products of small quantities (pq, pr, qr), which are later discarded. In this analysis, they are discarded from the outset for the sake of clarity. In effect, we assume that the direction of the velocity of the particle due to the simultaneous roll and yaw rates does not change significantly throughout the motion, so the acceleration of the particle retains only the rate-of-change terms.

The yawing moment is obtained from the moment of this force about the yaw axis; there is an additional yawing moment due to the offset of the particle in the y direction. The total yawing moment is found by summing over all particles of the body:

$N = C\frac{dr}{dt} - E\frac{dp}{dt}$

where N is the yawing moment, E is a product of inertia, and C is the moment of inertia about the yaw axis. A similar reasoning yields the roll equation:

$L = A\frac{dp}{dt} - E\frac{dr}{dt}$

where L is the rolling moment and A the roll moment of inertia.
Lateral and longitudinal stability derivatives
The states are $\beta$ (sideslip), r (yaw rate) and p (roll rate), with moments N (yaw) and L (roll), and force Y (sideways). There are nine stability derivatives relevant to this motion; the following explains how they originate. However, a better intuitive understanding is gained by simply playing with a model airplane and considering how the forces on each component are affected by changes in sideslip and angular velocity:

$Y_\beta$: Side force due to sideslip (in absence of yaw).

Sideslip generates a sideforce from the fin and the fuselage. In addition, if the wing has dihedral, sideslip at a positive roll angle increases incidence on the starboard wing and reduces it on the port side, resulting in a net force component directly opposite to the sideslip direction. Sweep back of the wings has the same effect on incidence, but since the wings are not inclined in the vertical plane, backsweep alone does not affect $Y_\beta$. However, anhedral may be used with high backsweep angles in high performance aircraft to offset the wing incidence effects of sideslip. Oddly enough, this does not reverse the sign of the wing configuration's contribution to $Y_\beta$ (compared to the dihedral case).
$Y_p$: Side force due to roll rate.

Roll rate causes incidence at the fin, which generates a corresponding side force. Also, positive roll (starboard wing down) increases the lift on the starboard wing and reduces it on the port. If the wing has dihedral, this will result in a side force momentarily opposing the resultant sideslip tendency. Anhedral wing and/or stabilizer configurations can cause the sign of the side force to invert if the fin effect is swamped.
$Y_r$: Side force due to yaw rate.

Yawing generates side forces due to incidence at the rudder, fin and fuselage.
$N_\beta$: Yawing moment due to sideslip forces.

Sideslip in the absence of rudder input causes incidence on the fuselage and empennage, thus creating a yawing moment counteracted only by the directional stiffness, which would tend to point the aircraft's nose back into the wind in horizontal flight conditions. Under sideslip conditions at a given roll angle, $N_\beta$ will tend to point the nose into the sideslip direction even without rudder input, causing a downward spiraling flight.
$N_p$: Yawing moment due to roll rate.

Roll rate generates fin lift causing a yawing moment and also differentially alters the lift on the wings, thus affecting the induced drag contribution of each wing, causing a (small) yawing moment contribution. Positive roll generally causes positive $N_p$ values unless the empennage is anhedral or the fin is below the roll axis. Lateral force components resulting from dihedral or anhedral wing lift differences have little effect on $N_p$ because the wing axis is normally closely aligned with the center of gravity.
$N_r$: Yawing moment due to yaw rate.

Yaw rate input at any roll angle generates rudder, fin and fuselage force vectors which dominate the resultant yawing moment. Yawing also increases the speed of the outboard wing whilst slowing down the inboard wing, with corresponding changes in drag causing a (small) opposing yaw moment. $N_r$ opposes the inherent directional stiffness, which tends to point the aircraft's nose back into the wind, and always matches the sign of the yaw rate input.
$L_\beta$: Rolling moment due to sideslip.

A positive sideslip angle generates empennage incidence which can cause positive or negative roll moment depending on its configuration. For any non-zero sideslip angle, dihedral wings cause a rolling moment which tends to return the aircraft to the horizontal, as do back swept wings. With highly swept wings the resultant rolling moment may be excessive for all stability requirements, and anhedral could be used to offset the effect of the wing sweep induced rolling moment.
$L_r$: Rolling moment due to yaw rate.

Yaw increases the speed of the outboard wing whilst reducing the speed of the inboard one, causing a rolling moment to the inboard side. The contribution of the fin normally supports this inward rolling effect unless offset by an anhedral stabilizer above the roll axis (or a dihedral stabilizer below the roll axis).
$L_p$: Rolling moment due to roll rate.

Roll creates counter-rotational forces on both starboard and port wings whilst also generating such forces at the empennage. These opposing rolling moment effects have to be overcome by the aileron input in order to sustain the roll rate. If the roll is stopped at a non-zero roll angle, the upward rolling moment induced by the ensuing sideslip should return the aircraft to the horizontal, unless exceeded in turn by the downward rolling moment resulting from sideslip-induced yaw rate. Lateral stability could be ensured or improved by minimizing the latter effect.
Equations of motion
Since Dutch roll is a handling mode, analogous to the short period pitch oscillation, any effect it might have on the trajectory may be ignored. The body rate r is made up of the rate of change of sideslip angle and the rate of turn. Taking the latter as zero, assuming no effect on the trajectory, for the limited purpose of studying the Dutch roll:

$r = -\frac{d\beta}{dt}$
The yaw and roll equations, with the stability derivatives, become:

$C\frac{dr}{dt} - E\frac{dp}{dt} = N_\beta\beta + N_r r + N_p p$ (yaw)

$A\frac{dp}{dt} - E\frac{dr}{dt} = L_\beta\beta + L_r r + L_p p$ (roll)

The inertial moment due to the roll acceleration is considered small compared with the aerodynamic terms, so the equations become:

$C\frac{dr}{dt} = N_\beta\beta + N_r r + N_p p, \qquad A\frac{dp}{dt} = L_\beta\beta + L_r r + L_p p$

This becomes a second order equation governing either roll rate or sideslip; the equation for roll rate is identical to that for sideslip. But the roll angle $\phi$ (phi) is given by:

$\frac{d\phi}{dt} = p$

If p is a damped simple harmonic motion, so is $\phi$, but the roll must be in quadrature with the roll rate, and hence also with the sideslip. The motion consists of oscillations in roll and yaw, with the roll motion lagging 90 degrees behind the yaw. The wing tips trace out elliptical paths.
Stability requires the "stiffness" and "damping" terms to be positive. These are:
(damping)
(stiffness)
The denominator is dominated by $L_p$, the roll damping derivative, which is always negative, so the denominators of these two expressions will be positive.

Considering the "stiffness" term: it will be positive because $L_p$ is always negative and $N_\beta$ is positive by design. $L_\beta$ is usually negative, whilst $N_p$ is positive. Excessive dihedral can destabilize the Dutch roll, so configurations with highly swept wings require anhedral to offset the wing sweep contribution to $L_\beta$.

The damping term is dominated by the product of the roll damping and the yaw damping derivatives; these are both negative, so their product is positive. The Dutch roll should therefore be damped.
The motion is accompanied by slight lateral motion of the center of gravity, and a more "exact" analysis will introduce terms in the side force derivatives, etc. In view of the accuracy with which stability derivatives can be calculated, this is an unnecessary pedantry, which serves to obscure the relationship between aircraft geometry and handling, which is the fundamental objective of this article.
Roll subsidence
Jerking the stick sideways and returning it to center causes a net change in roll orientation.
The roll motion is characterized by an absence of natural stability: there are no stability derivatives which generate moments in response to the inertial roll angle. A roll disturbance induces a roll rate which is only canceled by pilot or autopilot intervention. This takes place with insignificant changes in sideslip or yaw rate, so the equation of motion reduces to:

$A\frac{dp}{dt} = L_p p$

$L_p$ is negative, so the roll rate will decay exponentially with time, as $p(t) = p(0)\,e^{(L_p/A)t}$. The roll rate reduces to zero, but there is no direct control over the roll angle.
Spiral mode
If the stick is simply held still, starting with the wings near level, an aircraft will usually have a tendency to gradually veer off to one side of the straight flightpath. This is the (slightly unstable) spiral mode.
Spiral mode trajectory
In studying the trajectory, it is the direction of the velocity vector, rather than that of the body, which is of interest. The direction of the velocity vector when projected on to the horizontal will be called the track, denoted $\mu$ (mu). The body orientation is called the heading, denoted $\psi$ (psi). The force equation of motion includes a component of weight:

$mU\frac{d\mu}{dt} = Y + mg\phi$

where g is the gravitational acceleration, and U is the speed.

Including the stability derivatives:

$mU\frac{d\mu}{dt} = Y_\beta\beta + Y_r r + Y_p p + mg\phi$

Roll rates and yaw rates are expected to be small, so the contributions of $Y_r$ and $Y_p$ will be ignored.
The sideslip and roll rate vary gradually, so their time derivatives are ignored. The yaw and roll equations reduce to:

$N_\beta\beta + N_r r + N_p p = 0$ (yaw)

$L_\beta\beta + L_r r + L_p p = 0$ (roll)

Solving for $\beta$ and p:

$\beta = \frac{N_p L_r - N_r L_p}{N_\beta L_p - N_p L_\beta}\,r, \qquad p = \frac{L_\beta N_r - N_\beta L_r}{N_\beta L_p - N_p L_\beta}\,r$
Substituting for sideslip and roll rate in the force equation (retaining the dominant weight term) results in a first order equation in roll angle:

$\frac{d\phi}{dt} = \frac{g}{U}\,\frac{L_\beta N_r - N_\beta L_r}{N_\beta L_p - N_p L_\beta}\,\phi$

This is an exponential growth or decay, depending on whether the coefficient of $\phi$ is positive or negative. The denominator is usually negative, which requires $L_\beta N_r > N_\beta L_r$ (both products are positive). This is in direct conflict with the Dutch roll stability requirement, and it is difficult to design an aircraft for which both the Dutch roll and spiral mode are inherently stable.
Since the spiral mode has a long time constant, the pilot can intervene to effectively stabilize it, but an aircraft with an unstable Dutch roll would be difficult to fly. It is usual to design the aircraft with a stable Dutch roll mode, but slightly unstable spiral mode.
See also
Acronyms and abbreviations in avionics
Aeronautics
Attitude and heading reference system
Steady flight
Aircraft flight control system
Aircraft flight mechanics
Aircraft heading
Aircraft bank
Crosswind landing
Dynamic positioning
Flight control surfaces
Helicopter dynamics
JSBSim (An open source flight dynamics software model)
Longitudinal static stability
Rigid body dynamics
Rotation matrix
Ship motions
Stability derivatives
Static margin
Weathervane effect
1902 Wright Glider
References
Notes
External links
MIXR - mixed reality simulation platform
JSBsim, An open source, platform-independent, flight dynamics & control software library in C++
Aerodynamics
Avionics
Flight control systems | Aircraft flight dynamics | [
"Chemistry",
"Technology",
"Engineering"
] | 7,128 | [
"Avionics",
"Aerodynamics",
"Aircraft instruments",
"Aerospace engineering",
"Fluid dynamics"
] |
221,430 | https://en.wikipedia.org/wiki/Bearing%20%28mechanical%29 | A bearing is a machine element that constrains relative motion to only the desired motion and reduces friction between moving parts. The design of the bearing may, for example, provide for free linear movement of the moving part or for free rotation around a fixed axis; or, it may prevent a motion by controlling the vectors of normal forces that bear on the moving parts. Most bearings facilitate the desired motion by minimizing friction. Bearings are classified broadly according to the type of operation, the motions allowed, or the directions of the loads (forces) applied to the parts.
The term "bearing" is derived from the verb "to bear"; a bearing being a machine element that allows one part to bear (i.e., to support) another. The simplest bearings are bearing surfaces, cut or formed into a part, with varying degrees of control over the form, size, roughness, and location of the surface. Other bearings are separate devices installed into a machine or machine part. The most sophisticated bearings for the most demanding applications are very precise components; their manufacture requires some of the highest standards of current technology.
Types of bearings
Rotary bearings hold rotating components such as shafts or axles within mechanical systems and transfer axial and radial loads from the source of the load to the structure supporting it. The simplest form of bearing, the plain bearing, consists of a shaft rotating in a hole. Lubrication is used to reduce friction. Lubricants come in different forms, including liquids, solids, and gases. The choice of lubricant depends on the specific application and factors such as temperature, load, and speed. In the ball bearing and roller bearing, to reduce sliding friction, rolling elements such as rollers or balls with a circular cross-section are located between the races or journals of the bearing assembly. A wide variety of bearing designs exists to allow the demands of the application to be correctly met for maximum efficiency, reliability, durability, and performance.
History
It is sometimes assumed that the invention of the rolling bearing, in the form of wooden rollers supporting – or bearing – an object being moved, predates the invention of a wheel rotating on a plain bearing; this underlies speculation that cultures such as the Ancient Egyptians used roller bearings in the form of tree trunks under sleds. There is no evidence for this sequence of technological development. The Egyptians' own drawings in the tomb of Djehutihotep show the process of moving massive stone blocks on sledges as using liquid-lubricated runners which would constitute plain bearings. There are also Egyptian drawings of plain bearings used with hand drills.
Wheeled vehicles using plain bearings emerged between about 5000 BC and 3000 BC.
A recovered example of an early rolling-element bearing is a wooden ball bearing supporting a rotating table from the remains of the Roman Nemi ships in Lake Nemi, Italy. The wrecks were dated to 40 BC.
Leonardo da Vinci incorporated drawings of ball bearings in his design for a helicopter around the year 1500; this is the first recorded use of bearings in an aerospace design. However, Agostino Ramelli is the first to have published roller and thrust bearings sketches. An issue with the ball and roller bearings is that the balls or rollers rub against each other, causing additional friction. This can be reduced by enclosing each individual ball or roller within a cage. The captured, or caged, ball bearing was originally described by Galileo in the 17th century.
The first practical caged-roller bearing was invented in the mid-1740s by horologist John Harrison for his H3 marine timekeeper. In this timepiece, the caged bearing was only used for a very limited oscillating motion, but later on, Harrison applied a similar bearing design with a true rotational movement in a contemporaneous regulator clock.
The first patent on ball bearings was awarded to Philip Vaughan, a British inventor and ironmaster in Carmarthen in 1794. His was the first modern ball-bearing design, with the ball running along a groove in the axle assembly.
Bearings played a pivotal role in the nascent Industrial Revolution, allowing the new industrial machinery to operate efficiently. For example, they were used for holding wheel and axle assemblies to greatly reduce friction compared to prior non-bearing designs.
The first patent for a radial-style ball bearing was awarded to Jules Suriray, a Parisian bicycle mechanic, on 3 August 1869. The bearings were then fitted to the winning bicycle ridden by James Moore in the world's first bicycle road race, Paris-Rouen, in November 1869.
In 1883, Friedrich Fischer, founder of FAG, developed an approach for milling and grinding balls of equal size and exact roundness by means of a suitable production machine, which set the stage for the creation of an independent bearing industry. His hometown Schweinfurt later became a world-leading center for ball bearing production.
The modern, self-aligning design of ball bearing is attributed to Sven Wingquist of the SKF ball-bearing manufacturer in 1907 when he was awarded Swedish patent No. 25406 on its design.
Henry Timken, a 19th-century visionary and innovator in carriage manufacturing, patented the tapered roller bearing in 1898. The following year he formed a company to produce his innovation. Over a century, the company grew to make bearings of all types, including specialty steel bearings and an array of related products and services.
Erich Franke invented and patented the wire race bearing in 1934. His focus was on a bearing design with a cross-section as small as possible and which could be integrated into the enclosing design. After World War II, he founded with Gerhard Heydrich the company Franke & Heydrich KG (today Franke GmbH) to push the development and production of wire race bearings.
Richard Stribeck's extensive research on ball bearing steels identified the metallurgy of the commonly used 100Cr6 (AISI 52100), showing coefficient of friction as a function of pressure.
Vee groove bearing guide wheels, a type of linear motion bearing consisting of both an external and internal 90-degree vee angle, were designed in 1968 and later patented in 1972 by Bud Wisecarver, Bishop-Wisecarver's co-founder.
In the early 1980s, Pacific Bearing's founder, Robert Schroeder, invented the first bi-material plain bearing that was interchangeable with linear ball bearings. This bearing had a metal shell (aluminum, steel or stainless steel) and a layer of Teflon-based material connected by a thin adhesive layer.
Today's ball and roller bearings are used in many applications, which include a rotating component. Examples include ultra high-speed bearings in dental drills, aerospace bearings in the Mars Rover, gearbox and wheel bearings on automobiles, flexure bearings in optical alignment systems, and air bearings used in coordinate-measuring machines.
Design
Motions
Common motions permitted by bearings are:
Radial rotation, e.g. shaft rotation;
Linear motion, e.g. drawer;
Spherical rotation, e.g. ball and socket joint;
Hinge motion, e.g. door, elbow, knee.
Materials
The first plain and rolling-element bearings were wood, closely followed by bronze. Over their history, bearings have been made of many materials, including ceramic, sapphire, glass, steel, bronze, and other metals. Plastic bearings made of nylon, polyoxymethylene, polytetrafluoroethylene, and UHMWPE, among other materials, are also in use today.
Watchmakers produce "jeweled" watches using sapphire plain bearings to reduce friction, thus allowing more precise timekeeping.
Even basic materials can have impressive durability. Wooden bearings, for instance, can still be seen today in old clocks or in water mills where the water provides cooling and lubrication.
Types
By far, the most common bearing is the plain bearing, a bearing that uses surfaces in rubbing contact, often with a lubricant such as oil or graphite. A plain bearing may or may not be a discrete device. It may be nothing more than the bearing surface of a hole with a shaft passing through it, or of a planar surface that bears another (in these cases, not a discrete device); or it may be a layer of bearing metal either fused to the substrate (semi-discrete) or in the form of a separable sleeve (discrete). With suitable lubrication, plain bearings often give acceptable accuracy, life, and friction at minimal cost. Therefore, they are very widely used.
However, there are many applications where a more suitable bearing can improve efficiency, accuracy, service intervals, reliability, speed of operation, size, weight, and costs of purchasing and operating machinery.
Thus, many types of bearings have varying shapes, materials, lubrication, principle of operation, and so on.
There are at least 6 common types of bearing, each of which operates on a different principle:
Plain bearing, consisting of a shaft rotating in a hole. There are several specific styles: bushing, journal bearing, sleeve bearing, rifle bearing, composite bearing;
Rolling-element bearings, whose performance does not depend on avoiding or reducing friction between two surfaces but employs a different principle to achieve low external friction: the rolling motion of an intermediate element in between the surfaces which bear the axial or radial load. Classified as either:
Ball bearing, in which the rolling elements are spherical balls;
Roller bearing, in which the rolling elements are cylindrical rollers, linearly-tapered (conical) rollers, rollers with a curved taper (such as so-called spherical rollers), or gears;
Jewel bearing, a plain bearing in which one of the bearing surfaces is made of an ultrahard glassy jewel material such as sapphire to reduce friction and wear;
Fluid bearing, a noncontact bearing in which the load is supported by a gas or liquid (i.e. air bearing);
Magnetic bearing, in which the load is supported by a magnetic field;
Flexure bearing, in which the motion is supported by a load element which bends.
The following table summarizes the notable characteristics of each of these bearing types.
Characteristics
Friction
Reducing friction in bearings is often important for efficiency, to reduce wear, and to facilitate extended use at high speeds and to avoid overheating and premature failure of the bearing. Essentially, a bearing can reduce friction by virtue of its shape, by its material, by introducing and containing a fluid between surfaces, or by separating the surfaces with an electromagnetic field.
Shape: gains advantage usually by using spheres or rollers, or by forming flexure bearings.
Material: exploits the nature of the bearing material used. (An example would be using plastics that have low surface friction.)
Fluid: exploits the low viscosity of a layer of fluid, such as a lubricant or as a pressurized medium to keep the two solid parts from touching, or by reducing the normal force between them.
Fields: exploits electromagnetic fields, such as magnetic fields, to keep solid parts from touching.
Air pressure: exploits air pressure to keep solid parts from touching.
Combinations of these can even be employed within the same bearing. An example is where the cage is made of plastic, and it separates the rollers/balls, which reduce friction by their shape and finish.
Loads
Bearing design varies depending on the size and directions of the forces it is required to support. Loads can be predominantly radial, axial (thrust bearings), or bending moments perpendicular to the main axis.
Speeds
Different bearing types have different operating speed limits. Speed is typically specified as maximum relative surface speed, often specified in ft/s or m/s. Rotational bearings typically describe performance in terms of the product DN, where D is the mean diameter (often in mm) of the bearing and N is the rotation rate in revolutions per minute.
Generally, there is considerable speed range overlap between bearing types. Plain bearings typically handle only lower speeds, rolling element bearings are faster, followed by fluid bearings and finally magnetic bearings which are limited ultimately by centripetal force overcoming material strength.
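A trivial sketch of the DN product just described; the bearing diameter and shaft speed below are hypothetical:

```python
def dn_value(mean_diameter_mm: float, speed_rpm: float) -> float:
    """Speed parameter DN: mean bearing diameter (mm) times rotation rate (rpm)."""
    return mean_diameter_mm * speed_rpm

# Hypothetical spindle bearing: 60 mm mean diameter at 10,000 rpm
print(dn_value(60.0, 10_000.0))  # 600000.0
```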
Play
Some applications apply bearing loads from varying directions and accept only limited play or "slop" as the applied load changes. One source of motion is gaps or "play" in the bearing. For example, a 10 mm shaft in a 12 mm hole has 2 mm play.
Allowable play varies greatly depending on the use. As an example, a wheelbarrow wheel supports radial and axial loads. Axial loads may be hundreds of newtons of force left or right, and it is typically acceptable for the wheel to wobble by as much as 10 mm under the varying load. In contrast, a lathe may position a cutting tool to ±0.002 mm using a ball lead screw held by rotating bearings. The bearings support axial loads of thousands of newtons in either direction and must hold the ball lead screw to ±0.002 mm across that range of loads.
Stiffness
Stiffness is the amount that the gap varies when the load on the bearing changes, distinct from the friction of the bearing.
A second source of motion is elasticity in the bearing itself. For example, the balls in a ball bearing are like stiff rubber and under load deform from a round to a slightly flattened shape. The race is also elastic and develops a slight dent where the ball presses on it.
The stiffness of a bearing is how the distance between the parts separated by the bearing varies with the applied load. With rolling element bearings, this is due to the strain of the ball and race. With fluid bearings, it is due to how the pressure of the fluid varies with the gap (when correctly loaded, fluid bearings are typically stiffer than rolling element bearings).
Lubrication
Some bearings use a thick grease for lubrication, which is pushed into the gaps between the bearing surfaces, also known as packing. The grease is held in place by a plastic, leather, or rubber gasket (also called a gland) that covers the inside and outside edges of the bearing race to keep the grease from escaping. Bearings may also be packed with other materials. Historically, the wheels on railroad cars used sleeve bearings packed with waste or loose scraps of cotton or wool fiber soaked in oil, then later used solid pads of cotton.
Bearings can be lubricated by a ring oiler, a metal ring that rides loosely on the central rotating shaft of the bearing. The ring hangs down into a chamber containing lubricating oil. As the bearing rotates, viscous adhesion draws oil up the ring and onto the shaft, where the oil migrates into the bearing to lubricate it. Excess oil is flung off and collects in the pool again.
A rudimentary form of lubrication is splash lubrication. Some machines contain a pool of lubricant in the bottom, with gears partially immersed in the liquid, or crank rods that can swing down into the pool as the device operates. The spinning wheels fling oil into the air around them, while the crank rods slap at the surface of the oil, splashing it randomly on the engine's interior surfaces. Some small internal combustion engines specifically contain special plastic flinger wheels which randomly scatter oil around the interior of the mechanism.
For high-speed and high-power machines, a loss of lubricant can result in rapid bearing heating and damage due to friction. Also, in dirty environments, the oil can become contaminated with dust or debris, increasing friction. In these applications, a fresh supply of lubricant can be continuously supplied to the bearing and all other contact surfaces, and the excess can be collected for filtration, cooling, and possibly reuse. Pressure oiling is commonly used in large and complex internal combustion engines in parts of the engine where directly splashed oil cannot reach, such as up into overhead valve assemblies. High-speed turbochargers also typically require a pressurized oil system to cool the bearings and keep them from burning up due to the heat from the turbine.
Composite bearings are designed with a self-lubricating polytetrafluorethylene (PTFE) liner with a laminated metal backing. The PTFE liner offers consistent, controlled friction as well as durability, whilst the metal backing ensures the composite bearing is robust and capable of withstanding high loads and stresses throughout its long life. Its design also makes it lightweight – one tenth the weight of a traditional rolling element bearing.
Mounting
There are many methods of mounting bearings, usually involving an interference fit. When press fitting or shrink fitting a bearing into a bore or onto a shaft, it is important to keep the housing bore and shaft outer diameter to very close limits, which can involve one or more counterboring operations, several facing operations, and drilling, tapping, and threading operations. Alternatively, an interference fit can also be achieved with the addition of a tolerance ring.
Service life
The service life of the bearing is affected by many factors not controlled by the bearing manufacturers: for example, bearing mounting, temperature, exposure to the external environment, lubricant cleanliness, and electrical currents through the bearing. High-frequency PWM inverters can induce electric currents in a bearing, which can be suppressed by the use of ferrite chokes. The temperature and texture of the contacting micro-surfaces determine the amount of friction generated by touching solid parts. Certain elements and fields reduce friction while permitting higher speeds. Strength and mobility help determine the load the bearing type can carry. Misalignment can play a damaging role in wear and tear, yet can be overcome by computer-aided condition monitoring and by non-rubbing bearing types, such as magnetic levitation or air-pressure bearings.
Fluid and magnetic bearings can have practically indefinite service lives. In practice, fluid bearings support high loads in hydroelectric plants that have been in nearly continuous service since about 1900 and show no signs of wear.
Rolling element bearing life is determined by load, temperature, maintenance, lubrication, material defects, contamination, handling, installation and other factors. These factors can all have a significant effect on bearing life. For example, the service life of bearings in one application was extended dramatically by changing how the bearings were stored before installation and use, as vibrations during storage caused lubricant failure even when the only load on the bearing was its own weight; the resulting damage is often false brinelling. Bearing life is statistical: several samples of a given bearing will often exhibit a bell curve of service life, with a few samples showing significantly better or worse life. Bearing life varies because microscopic structure and contamination vary greatly even where macroscopically they seem identical.
Bearings are often specified to give an "L10" (US) or "B10" (elsewhere) life, the duration by which ten percent of the bearings in that application can be expected to have failed due to classical fatigue failure (and not any other mode of failure such as lubrication starvation, wrong mounting etc.), or, alternatively, the duration at which ninety percent will still be operating. The L10/B10 life of the bearing is theoretical, and may not represent service life of the bearing. Bearings are also rated using the C0 (static loading) value. This is the basic load rating as a reference, and not an actual load value.
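The text cites the L10 rating without the calculation behind it. As a hedged illustration, the basic rating life formula standardized in ISO 281 is $L_{10} = (C/P)^p$ in millions of revolutions, with exponent p = 3 for ball bearings and 10/3 for roller bearings; the load values in this sketch are hypothetical:

```python
def l10_million_revs(c_dynamic_n: float, p_equivalent_n: float,
                     rolling_element: str = "ball") -> float:
    """Basic rating life L10 in millions of revolutions (ISO 281 form)."""
    p = 3.0 if rolling_element == "ball" else 10.0 / 3.0
    return (c_dynamic_n / p_equivalent_n) ** p

# Hypothetical ball bearing: C = 30 kN dynamic rating, P = 5 kN equivalent load
print(l10_million_revs(30e3, 5e3))  # 216.0 million revolutions
```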
For plain bearings, some materials give a much longer life than others. Some of the John Harrison clocks still operate after hundreds of years because of the lignum vitae wood employed in their construction, whereas his metal clocks are seldom run due to potential wear.
Flexure bearings rely on elastic properties of a material. Flexure bearings bend a piece of material repeatedly. Some materials fail after repeated bending, even at low loads, but careful material selection and bearing design can make flexure bearing life indefinite.
Although long bearing life is often desirable, it is sometimes not necessary. One source describes a bearing for a rocket motor oxygen pump that gave several hours' life, far in excess of the several tens of minutes needed.
Depending on the customized specifications (backing material and PTFE compounds), composite bearings can operate up to 30 years without maintenance.
For bearings which are used in oscillating applications, customized approaches to calculate L10/B10 are used.
Many bearings require periodic maintenance to prevent premature failure, but others require little maintenance. The latter include various kinds of polymer, fluid and magnetic bearings, as well as rolling-element bearings that are described with terms including sealed bearing and sealed for life. These contain seals to keep the dirt out and the grease in. They work successfully in many applications, providing maintenance-free operation. Some applications cannot use them effectively.
Nonsealed bearings often have a grease fitting, for periodic lubrication with a grease gun, or an oil cup for periodic filling with oil. Before the 1970s, sealed bearings were not encountered on most machinery, and oiling and greasing were a more common activity than they are today. For example, automotive chassis used to require "lube jobs" nearly as often as engine oil changes, but today's car chassis are mostly sealed for life. From the late 1700s through the mid-1900s, industry relied on many workers called oilers to lubricate machinery frequently with oil cans.
Factory machines today usually have lube systems, in which a central pump serves periodic charges of oil or grease from a reservoir through lube lines to the various lube points in the machine's bearing surfaces, bearing journals, pillow blocks, and so on. The timing and number of such lube cycles is controlled by the machine's computerized control, such as PLC or CNC, as well as by manual override functions when occasionally needed. This automated process is how all modern CNC machine tools and many other factory machines are lubricated. Similar lube systems are also used on nonautomated machines, in which case there is a hand pump that a machine operator is supposed to pump once daily (for machines in constant use) or once weekly. These are called one-shot systems from their chief selling point: one pull on one handle to lube the whole machine, instead of a dozen pumps of an alemite gun or oil can in a dozen different positions around the machine.
The oiling system inside a modern automotive or truck engine is similar in concept to the lube systems mentioned above, except that oil is pumped continuously. Much of this oil flows through passages drilled or cast into the engine block and cylinder heads, escaping through ports directly onto bearings and squirting elsewhere to provide an oil bath. The oil pump simply pumps constantly, and any excess pumped oil continuously escapes through a relief valve back into the sump.
Many bearings in high-cycle industrial operations need periodic lubrication and cleaning, and many require occasional adjustment, such as pre-load adjustment, to minimize the effects of wear.
Bearing life is often much better when the bearing is kept clean and well-lubricated. However, many applications make good maintenance difficult. One example is the bearings in the conveyor of a rock crusher, which are exposed continually to hard abrasive particles. Cleaning is of little use, because cleaning is expensive yet the bearing is contaminated again as soon as the conveyor resumes operation. Thus, a good maintenance program might lubricate the bearings frequently but not include any disassembly for cleaning. The frequent lubrication, by its nature, provides a limited kind of cleaning action by displacing older (grit-filled) oil or grease with a fresh charge, which itself collects grit before being displaced by the next cycle. Another example is the bearings in wind turbines, which are difficult to maintain since the nacelle is placed high up in the air in strong wind areas. In addition, the turbine does not always run and is subjected to different operating behavior in different weather conditions, which makes proper lubrication a challenge.
See also
Manufacturers:
Timken
SKF
Schaeffler Group
NSK
NTN
Koyo Seiko
MinebeaMitsumi
References
Further reading
Comprehensive review on bearings, University of Cambridge
Types of bearings, Cambridge University
"How bearings work" How Stuff Works
External links
ISO Dimensional system and bearing numbers
Kinematic Models for Design Digital Library (KMODDL) – Movies and photos of hundreds of working mechanical-systems models at Cornell University
A glossary of bearing terms
Tribology | Bearing (mechanical) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 4,950 | [
"Tribology",
"Mechanical engineering",
"Materials science",
"Surface science"
] |
221,669 | https://en.wikipedia.org/wiki/Darcy%E2%80%93Weisbach%20equation | In fluid dynamics, the Darcy–Weisbach equation is an empirical equation that relates the head loss, or pressure loss, due to friction along a given length of pipe to the average velocity of the fluid flow for an incompressible fluid. The equation is named after Henry Darcy and Julius Weisbach. Currently, there is no formula more accurate or universally applicable than the Darcy–Weisbach equation supplemented by the Moody diagram or the Colebrook equation.
The Darcy–Weisbach equation contains a dimensionless friction factor, known as the Darcy friction factor. This is also variously called the Darcy–Weisbach friction factor, friction factor, resistance coefficient, or flow coefficient.
Historical background
The Darcy–Weisbach equation, combined with the Moody chart for calculating head losses in pipes, is traditionally attributed to Henry Darcy, Julius Weisbach, and Lewis Ferry Moody. However, the development of these formulas and charts also involved other scientists and engineers over their historical development. In principle, Bernoulli's equation provides the head losses, but in terms of quantities not known a priori, such as pressure. Therefore, empirical relationships were sought to correlate the head loss with quantities like pipe diameter and fluid velocity.
Julius Weisbach was certainly not the first to introduce a formula correlating the length and diameter of a pipe to the square of the fluid velocity. Antoine Chézy (1718-1798), in fact, had published a formula in 1770 that, although referring to open channels (i.e., not under pressure), was formally identical to the one Weisbach would later introduce, provided it was reformulated in terms of the hydraulic radius. However, Chézy's formula was lost until 1800, when Gaspard de Prony (a former student of his) published an account describing his results. It is likely that Weisbach was aware of Chézy's formula through Prony's publications.
Weisbach's formula was proposed in 1845 in the form we still use today:

$h_f = f\,\frac{L}{D}\,\frac{V^2}{2g}$

where:

$h_f$: head loss.

L: length of the pipe.

D: diameter of the pipe.

V: velocity of the fluid.

g: acceleration due to gravity.

However, the friction factor f was expressed by Weisbach through an empirical formula whose two coefficients depend on the diameter and the type of pipe wall.
Weisbach's work was published in the United States of America in 1848 and soon became well known there. In contrast, it did not initially gain much traction in France, where the Prony equation, which had a polynomial form in terms of velocity (often approximated by the square of the velocity), continued to be used. Beyond the historical developments, Weisbach's formula had the objective merit of adhering to dimensional analysis, resulting in a dimensionless friction factor f. The complexity of f, dependent on the mechanics of the boundary layer and the flow regime (laminar, transitional, or turbulent), tended to obscure its dependence on the quantities in Weisbach's formula, leading many researchers to derive irrational and dimensionally inconsistent empirical formulas. It was understood not long after Weisbach's work that the friction factor f depended on the flow regime, and was independent of the Reynolds number (and thus of the velocity) only in the case of rough pipes in a fully turbulent flow regime (the Prandtl–von Kármán equation).
Pressure-loss equation
In a cylindrical pipe of uniform diameter D, flowing full, the pressure loss due to viscous effects $\Delta p$ is proportional to length L and can be characterized by the Darcy–Weisbach equation:

$\frac{\Delta p}{L} = f_D \cdot \frac{\rho}{2} \cdot \frac{\langle v\rangle^2}{D_H}$

where the pressure loss per unit length $\Delta p / L$ (SI units: Pa/m) is a function of:

$\rho$, the density of the fluid (kg/m3);

$D_H$, the hydraulic diameter of the pipe (for a pipe of circular section, this equals the internal diameter D; otherwise $D_H = 4A/P$ for a pipe of cross-sectional area A and perimeter P) (m);

$\langle v\rangle$, the mean flow velocity, experimentally measured as the volumetric flow rate Q per unit cross-sectional wetted area (m/s);

$f_D$, the Darcy friction factor (also called flow coefficient $\lambda$).
For laminar flow in a circular pipe of diameter $D_c$, the friction factor is inversely proportional to the Reynolds number alone ($f_D = 64/Re$), which itself can be expressed in terms of easily measured or published physical quantities (see section below). Making this substitution, the Darcy–Weisbach equation is rewritten as

$\frac{\Delta p}{L} = \frac{128}{\pi} \cdot \frac{\mu Q}{D_c^4}$

where

$\mu$ is the dynamic viscosity of the fluid (Pa·s = N·s/m2 = kg/(m·s));

Q is the volumetric flow rate, used here to measure flow instead of mean velocity according to $Q = \frac{\pi}{4} D_c^2 \langle v\rangle$ (m3/s).

Note that this laminar form of Darcy–Weisbach is equivalent to the Hagen–Poiseuille equation, which is analytically derived from the Navier–Stokes equations.
Head-loss formula
The head loss $\Delta h$ (or $h_f$) expresses the pressure loss due to friction in terms of the equivalent height of a column of the working fluid, so the pressure drop is

$\Delta p = \rho g \,\Delta h$

where:

$\Delta h$ = The head loss due to pipe friction over the given length of pipe (SI units: m);

g = The local acceleration due to gravity (m/s2).

It is useful to present head loss per length of pipe (dimensionless):

$S = \frac{\Delta h}{L}$

where L is the pipe length (m).

Therefore, the Darcy–Weisbach equation can also be written in terms of head loss:

$S = f_D \cdot \frac{1}{2g} \cdot \frac{\langle v\rangle^2}{D_H}$
In terms of volumetric flow
The relationship between mean flow velocity $\langle v\rangle$ and volumetric flow rate Q is

$Q = A\,\langle v\rangle$

where:

Q = The volumetric flow (m3/s),

A = The cross-sectional wetted area (m2).

In a full-flowing, circular pipe of diameter $D_c$,

$A = \frac{\pi}{4} D_c^2$

Then the Darcy–Weisbach equation in terms of Q is

$S = f_D \cdot \frac{8}{g\pi^2} \cdot \frac{Q^2}{D_c^5}$
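A minimal sketch (not from the source) evaluating the volumetric-flow form just derived; the friction factor, flow rate, and diameter are hypothetical:

```python
import math

G = 9.81  # acceleration due to gravity, m/s^2

def head_loss_per_length(f_d: float, q_m3s: float, d_m: float) -> float:
    """S = (8 f_D / (g pi^2)) * Q^2 / D^5, from the form derived above."""
    return 8.0 * f_d * q_m3s**2 / (G * math.pi**2 * d_m**5)

# Hypothetical: f_D = 0.02, Q = 0.01 m^3/s, D = 0.1 m
print(f"{head_loss_per_length(0.02, 0.01, 0.1):.4f} m per m of pipe")  # ~0.0165
```

The inverse fifth-power dependence on diameter is visible directly in the function body: doubling d_m cuts the result by a factor of 32.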
Shear-stress form
The mean wall shear stress $\tau$ in a pipe or open channel is expressed in terms of the Darcy–Weisbach friction factor as

$\tau = \frac{f_D}{8}\,\rho\,\langle v\rangle^2$
The wall shear stress has the SI unit of pascals (Pa).
Darcy friction factor
The friction factor $f_D$ is not a constant: it depends on such things as the characteristics of the pipe (diameter D and roughness height $\varepsilon$), the characteristics of the fluid (its kinematic viscosity $\nu$ [nu]), and the velocity of the fluid flow $\langle v\rangle$. It has been measured to high accuracy within certain flow regimes and may be evaluated by the use of various empirical relations, or it may be read from published charts. These charts are often referred to as Moody diagrams, after L. F. Moody, and hence the factor itself is sometimes erroneously called the Moody friction factor. It is also sometimes called the Blasius friction factor, after the approximate formula he proposed.
Figure 1 shows the value of as measured by experimenters for many different fluids, over a wide range of Reynolds numbers, and for pipes of various roughness heights. There are three broad regimes of fluid flow encountered in these data: laminar, critical, and turbulent.
Laminar regime
For laminar (smooth) flows, it is a consequence of Poiseuille's law (which stems from an exact classical solution for the fluid flow) that

$f_D = \frac{64}{Re}$

where Re is the Reynolds number

$Re = \frac{\rho \langle v\rangle D}{\mu} = \frac{\langle v\rangle D}{\nu}$

and where $\mu$ is the viscosity of the fluid and $\nu = \mu/\rho$ is known as the kinematic viscosity. In this expression for Reynolds number, the characteristic length D is taken to be the hydraulic diameter of the pipe, which, for a cylindrical pipe flowing full, equals the inside diameter. In Figures 1 and 2 of friction factor versus Reynolds number, the regime Re < 2000 demonstrates laminar flow; the friction factor is well represented by the above equation.
In effect, the friction loss in the laminar regime is more accurately characterized as being proportional to flow velocity, rather than proportional to the square of that velocity: one could regard the Darcy–Weisbach equation as not truly applicable in the laminar flow regime.
In laminar flow, friction loss arises from the transfer of momentum from the fluid in the center of the flow to the pipe wall via the viscosity of the fluid; no vortices are present in the flow. Note that the friction loss is insensitive to the pipe roughness height : the flow velocity in the neighborhood of the pipe wall is zero.
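A minimal numerical check of the laminar relation (the fluid and pipe values are hypothetical, chosen to land at Re = 1000):

```python
def reynolds(v_mps: float, d_m: float, nu_m2s: float) -> float:
    """Reynolds number Re = <v> D / nu."""
    return v_mps * d_m / nu_m2s

def f_darcy_laminar(re: float) -> float:
    """Laminar Darcy friction factor, f = 64 / Re."""
    return 64.0 / re

# Hypothetical: water-like fluid (nu = 1e-6 m^2/s) at 0.1 m/s in a 10 mm pipe
re = reynolds(0.1, 0.01, 1e-6)      # 1000.0 -> laminar
print(re, f_darcy_laminar(re))      # 1000.0 0.064
```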
Critical regime
For Reynolds numbers in the range 2000 < Re < 4000, the flow is unsteady (varies grossly with time) and varies from one section of the pipe to another (is not "fully developed"). The flow involves the incipient formation of vortices; it is not well understood.
Turbulent regime
For Reynolds number greater than 4000, the flow is turbulent; the resistance to flow follows the Darcy–Weisbach equation: it is proportional to the square of the mean flow velocity. Over a domain of many orders of magnitude of Re ($4000 < Re < 10^8$), the friction factor varies less than one order of magnitude ($0.006 < f_D < 0.06$). Within the turbulent flow regime, the nature of the flow can be further divided into a regime where the pipe wall is effectively smooth, and one where its roughness height is salient.
Smooth-pipe regime
When the pipe surface is smooth (the "smooth pipe" curve in Figure 2), the friction factor's variation with Re can be modeled by the Kármán–Prandtl resistance equation for turbulent flow in smooth pipes, with the parameters suitably adjusted:

$\frac{1}{\sqrt{f}} = 1.930\,\log_{10}\!\left(Re\,\sqrt{f}\right) - 0.537$

The numbers 1.930 and 0.537 are phenomenological; these specific values provide a fairly good fit to the data. The product $Re\,\sqrt{f}$ (called the "friction Reynolds number") can be considered, like the Reynolds number, to be a (dimensionless) parameter of the flow: at fixed values of $Re\,\sqrt{f}$, the friction factor is also fixed.

In the Kármán–Prandtl resistance equation, f can be expressed in closed form as an analytic function of Re through the use of the Lambert W function.
In this flow regime, many small vortices are responsible for the transfer of momentum between the bulk of the fluid to the pipe wall. As the friction Reynolds number increases, the profile of the fluid velocity approaches the wall asymptotically, thereby transferring more momentum to the pipe wall, as modeled in Blasius boundary layer theory.
Rough-pipe regime
When the pipe surface's roughness height $\varepsilon$ is significant (typically at high Reynolds number), the friction factor departs from the smooth pipe curve, ultimately approaching an asymptotic value ("rough pipe" regime). In this regime, the resistance to flow varies according to the square of the mean flow velocity and is insensitive to Reynolds number. Here, it is useful to employ yet another dimensionless parameter of the flow, the roughness Reynolds number

$R_* = \sqrt{\frac{f}{8}}\;\frac{\varepsilon}{D}\;Re$

where the roughness height $\varepsilon$ is scaled to the pipe diameter D.
It is illustrative to plot the roughness function against $R_*$. Figure 3 shows the roughness function versus $R_*$ for the rough pipe data of Nikuradse, Shockling, and Langelandsvik.
In this view, the data at different roughness ratio $\varepsilon/D$ fall together when plotted against $R_*$, demonstrating scaling in the variable $R_*$. The following features are present:

When the roughness height is zero, $R_*$ is identically zero: flow is always in the smooth pipe regime. The data for these points lie to the left extreme of the abscissa and are not within the frame of the graph.

For small values of $R_*$, the data lie on the straight smooth-pipe line; flow is in the smooth pipe regime.

For large values of $R_*$, the data asymptotically approach a horizontal line; they are independent of Re, the friction factor, and the relative roughness.

The intermediate range of $R_*$ constitutes a transition from one behavior to the other. The data depart from the smooth-pipe line very slowly, reach a maximum, then fall to a constant value.
Afzal's fit to these data in the transition from smooth pipe flow to rough pipe flow employs an exponential expression in $R_*$ that ensures proper behavior in the transition from the smooth pipe regime to the rough pipe regime. This function shares the same values for its term in common with the Kármán–Prandtl resistance equation, plus one parameter 0.305 or 0.34 to fit the asymptotic behavior for $R_* \to \infty$, along with one further parameter, 11, to govern the transition from smooth to rough flow. It is exhibited in Figure 3.
The friction factor for another analogous form of roughness is obtained in the same way. This function shares the same values for its term in common with the Kármán–Prandtl resistance equation, plus one parameter 0.305 or 0.34 to fit the asymptotic behavior for $R_* \to \infty$, along with one further parameter, 26, to govern the transition from smooth to rough flow.
The Colebrook–White relation fits the friction factor with a function of the form

$\frac{1}{\sqrt{f}} = -2\log_{10}\!\left(\frac{\varepsilon}{3.7\,D_h} + \frac{2.51}{Re\sqrt{f}}\right)$

This relation has the correct behavior at extreme values of $R_*$, as shown by the labeled curve in Figure 3: when $R_*$ is small, it is consistent with smooth pipe flow; when large, it is consistent with rough pipe flow. However, its performance in the transitional domain overestimates the friction factor by a substantial margin. Colebrook acknowledges the discrepancy with Nikuradse's data but argues that his relation is consistent with the measurements on commercial pipes. Indeed, such pipes are very different from those carefully prepared by Nikuradse: their surfaces are characterized by many different roughness heights and random spatial distribution of roughness points, while those of Nikuradse have surfaces with uniform roughness height, with the points extremely closely packed.
Calculating the friction factor from its parametrization
For turbulent flow, methods for finding the friction factor include using a diagram, such as the Moody chart, or solving equations such as the Colebrook–White equation (upon which the Moody chart is based), or the Swamee–Jain equation. While the Colebrook–White relation is, in the general case, an iterative method, the Swamee–Jain equation allows to be found directly for full flow in a circular pipe.
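As a hedged sketch of this procedure (not the article's own code): the explicit Swamee–Jain estimate can seed a fixed-point iteration of the implicit Colebrook–White relation. The Reynolds number and relative roughness in the example are hypothetical:

```python
import math

def swamee_jain(re: float, rel_rough: float) -> float:
    """Explicit Swamee-Jain approximation to the Darcy friction factor."""
    return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / re**0.9) ** 2

def colebrook(re: float, rel_rough: float, tol: float = 1e-12) -> float:
    """Iterate x = 1/sqrt(f) in the Colebrook-White relation to convergence."""
    x = 1.0 / math.sqrt(swamee_jain(re, rel_rough))  # starting guess
    while True:
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            return 1.0 / x_new**2
        x = x_new

# Hypothetical: Re = 1e5, relative roughness eps/D = 5e-4
print(f"{colebrook(1e5, 5e-4):.4f}")  # ~0.0203
```

The fixed-point form converges quickly here because the logarithm damps changes in the estimate; the Swamee–Jain seed is typically already within about one percent of the converged value.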
Direct calculation when friction loss is known
In typical engineering applications, there will be a set of given or known quantities. The acceleration of gravity g and the kinematic viscosity of the fluid $\nu$ are known, as are the diameter of the pipe D and its roughness height $\varepsilon$. If as well the head loss per unit length S is a known quantity, then the friction factor can be calculated directly from the chosen fitting function. Solving the Darcy–Weisbach equation for $\sqrt{f}$,

$\sqrt{f} = \frac{\sqrt{2gDS}}{\langle v\rangle}$

we can now express $Re\sqrt{f}$:

$Re\sqrt{f} = \frac{\sqrt{2gS\,D^3}}{\nu}$

Expressing the roughness Reynolds number $R_*$,

$R_* = \frac{1}{\sqrt{8}}\,\frac{\varepsilon}{D}\,Re\sqrt{f}$

we have the two parameters needed to substitute into the Colebrook–White relation, or any other function, for the friction factor f, the flow velocity $\langle v\rangle$, and the volumetric flow rate Q.
Confusion with the Fanning friction factor
The Darcy–Weisbach friction factor $f_D$ is 4 times larger than the Fanning friction factor $f_F$, so attention must be paid to note which one of these is meant in any "friction factor" chart or equation being used. Of the two, the Darcy–Weisbach factor is more commonly used by civil and mechanical engineers, and the Fanning factor by chemical engineers, but care should be taken to identify the correct factor regardless of the source of the chart or formula.

Note that

$f_D = 4\,f_F$

Most charts or tables indicate the type of friction factor, or at least provide the formula for the friction factor with laminar flow. If the formula for laminar flow is $f_F = 16/Re$, it is the Fanning factor $f_F$; and if the formula for laminar flow is $f_D = 64/Re$, it is the Darcy–Weisbach factor $f_D$.
Which friction factor is plotted in a Moody diagram may be determined by inspection if the publisher did not include the formula described above:
Observe the value of the friction factor for laminar flow at a Reynolds number of 1000.
If the value of the friction factor is 0.064, then the Darcy friction factor is plotted in the Moody diagram. Note that the nonzero digits in 0.064 are the numerator in the formula for the laminar Darcy friction factor: $f_D = \frac{64}{Re}$.

If the value of the friction factor is 0.016, then the Fanning friction factor is plotted in the Moody diagram. Note that the nonzero digits in 0.016 are the numerator in the formula for the laminar Fanning friction factor: $f_F = \frac{16}{Re}$.
The procedure above is similar for any available Reynolds number that is an integer power of ten. It is not necessary to remember the value 1000 for this procedure—only that an integer power of ten is of interest for this purpose.
History
Historically this equation arose as a variant on the Prony equation; this variant was developed by Henry Darcy of France, and further refined into the form used today by Julius Weisbach of Saxony in 1845. Initially, data on the variation of f with velocity were lacking, so the Darcy–Weisbach equation was outperformed at first by the empirical Prony equation in many cases. In later years it was eschewed in many special-case situations in favor of a variety of empirical equations valid only for certain flow regimes, notably the Hazen–Williams equation or the Manning equation, most of which were significantly easier to use in calculations. However, since the advent of the calculator, ease of calculation is no longer a major issue, and so the Darcy–Weisbach equation's generality has made it the preferred one.
Derivation by dimensional analysis
Away from the ends of the pipe, the characteristics of the flow are independent of the position along the pipe. The key quantities are then the pressure drop along the pipe per unit length, Δp/L, and the volumetric flow rate Q. The flow rate can be converted to a mean flow velocity ⟨v⟩ by dividing by the wetted area of the flow (which equals the cross-sectional area of the pipe if the pipe is full of fluid).
Pressure has dimensions of energy per unit volume; therefore the pressure drop between two points must be proportional to the dynamic pressure q = ρ⟨v⟩²/2. We also know that the pressure drop must be proportional to the length of the pipe between the two points, as the pressure drop per unit length is a constant. To turn the relationship into a dimensionless proportionality coefficient, we can divide by the hydraulic diameter of the pipe, D_H, which is also constant along the pipe. Therefore,
Δp = f_D · (L/D_H) · q.
The proportionality coefficient f_D is the dimensionless "Darcy friction factor" or "flow coefficient". This dimensionless coefficient is a combination of geometric factors such as π, the Reynolds number and (outside the laminar regime) the relative roughness of the pipe (the ratio ε/D_H of the roughness height to the hydraulic diameter).
Note that the dynamic pressure is not the kinetic energy of the fluid per unit volume, for the following reasons. Even in the case of laminar flow, where all the flow lines are parallel to the length of the pipe, the velocity of the fluid on the inner surface of the pipe is zero due to viscosity, and the velocity in the center of the pipe must therefore be larger than the average velocity obtained by dividing the volumetric flow rate by the wetted area. The average kinetic energy then involves the root-mean-square velocity, which always exceeds the mean velocity. In the case of turbulent flow, the fluid acquires random velocity components in all directions, including perpendicular to the length of the pipe, and thus turbulence contributes to the kinetic energy per unit volume but not to the average lengthwise velocity of the fluid.
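Numerically, the relation derived above gives, for some illustrative values (all assumed, in SI units):

```python
# Delta_p = f_D * (L / D_H) * q, with q = rho * v**2 / 2 (dynamic pressure)
rho, v, L, D_H, f_D = 998.0, 1.5, 50.0, 0.1, 0.02  # water, 1.5 m/s, 50 m of 100 mm pipe
q = 0.5 * rho * v**2          # dynamic pressure, Pa
dp = f_D * (L / D_H) * q      # pressure drop over the 50 m run, Pa
print(q, dp)                  # 1122.75 Pa, 11227.5 Pa (~0.11 bar)
```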
Practical application
In a hydraulic engineering application, it is typical for the volumetric flow within a pipe (that is, its productivity) and the head loss per unit length (the concomitant power consumption) to be the critically important factors. The practical consequence is that, for a fixed volumetric flow rate Q, head loss decreases with the inverse fifth power of the pipe diameter D. Doubling the diameter of a pipe of a given schedule (say, ANSI schedule 40) roughly doubles the amount of material required per unit length and thus its installed cost. Meanwhile, the head loss is decreased by a factor of 2⁵ = 32 (about a 97% reduction). Thus the energy consumed in moving a given volumetric flow of the fluid is cut down dramatically for a modest increase in capital cost.
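A quick check of the quoted factor of 32, assuming the friction factor itself changes little between the two diameters:

```python
# Head loss at fixed volumetric flow rate scales as 1/D**5
for diameter_ratio in (1.0, 1.5, 2.0):
    reduction = 1.0 / diameter_ratio**5
    print(diameter_ratio, reduction)  # 2.0 -> 0.03125 = 1/32, ~97% less head loss
```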
Advantages
The Darcy–Weisbach equation's accuracy and universal applicability make it the ideal formula for flow in pipes. The advantages of the equation are as follows:
It is based on fundamentals.
It is dimensionally consistent.
It is useful for any fluid, including oil, gas, brine, and sludges.
It can be derived analytically in the laminar flow region.
It is useful in the transition region between laminar flow and fully developed turbulent flow.
The friction factor variation is well documented.
See also
Bernoulli's principle
Darcy friction factor formulae
Euler number
Friction loss
Hazen–Williams equation
Hagen–Poiseuille equation
Water pipe
Notes
References
Further reading
External links
The History of the Darcy–Weisbach Equation
Darcy–Weisbach equation calculator
Pipe pressure drop calculator for single phase flows.
Pipe pressure drop calculator for two phase flows.
Open source pipe pressure drop calculator.
Web application with pressure drop calculations for pipes and ducts
ThermoTurb – A web application for thermal and turbulent flow analysis
Dimensionless numbers of fluid mechanics
Eponymous equations of physics
Equations of fluid dynamics
Piping | Darcy–Weisbach equation | [
"Physics",
"Chemistry",
"Engineering"
] | 4,315 | [
"Equations of fluid dynamics",
"Equations of physics",
"Building engineering",
"Chemical engineering",
"Eponymous equations of physics",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
222,154 | https://en.wikipedia.org/wiki/Blastulation | Blastulation is the stage in early animal embryonic development that produces the blastula. In mammalian development, the blastula develops into the blastocyst with a differentiated inner cell mass and an outer trophectoderm. The blastula (from Greek βλαστός ( meaning sprout)) is a hollow sphere of cells known as blastomeres surrounding an inner fluid-filled cavity called the blastocoel. Embryonic development begins with a sperm fertilizing an egg cell to become a zygote, which undergoes many cleavages to develop into a ball of cells called a morula. Only when the blastocoel is formed does the early embryo become a blastula. The blastula precedes the formation of the gastrula in which the germ layers of the embryo form.
A common feature of a vertebrate blastula is that it consists of a layer of blastomeres, known as the blastoderm, which surrounds the blastocoel. In mammals, the blastocyst contains an embryoblast (or inner cell mass) that will eventually give rise to the definitive structures of the fetus, and a trophoblast which goes on to form the extra-embryonic tissues.
During blastulation, a significant amount of activity occurs within the early embryo to establish cell polarity, cell specification, axis formation, and to regulate gene expression. In many animals, such as Drosophila and Xenopus, the mid blastula transition (MBT) is a crucial step in development during which the maternal mRNA is degraded and control over development is passed to the embryo. Many of the interactions between blastomeres are dependent on cadherin expression, particularly E-cadherin in mammals and EP-cadherin in amphibians.
The study of the blastula, and of cell specification, has many implications in stem cell research and assisted reproductive technology. In Xenopus, blastomeres behave as pluripotent stem cells which can migrate down several pathways, depending on cell signaling. By manipulating the cell signals during the blastula stage of development, various tissues can be formed. This potential can be instrumental in regenerative medicine for disease and injury cases. In vitro fertilisation involves the transfer of an embryo into a uterus for implantation.
Development
The blastula stage of early embryo development begins with the appearance of the blastocoel. The origin of the blastocoel in Xenopus has been shown to be from the first cleavage furrow, which is widened and sealed with tight junctions to create a cavity.
In many organisms the development of the embryo up to this point and for the early part of the blastula stage is controlled by maternal mRNA, so called because it was produced in the egg prior to fertilization and is therefore exclusively from the mother.
Midblastula transition
In many organisms including Xenopus and Drosophila, the midblastula transition usually occurs after a particular number of cell divisions for a given species, and is defined by the ending of the synchronous cell division cycles of the early blastula development, and the lengthening of the cell cycles by the addition of the G1 and G2 phases. Prior to this transition, cleavage occurs with only the synthesis and mitosis phases of the cell cycle. The addition of the two growth phases into the cell cycle allows for the cells to increase in size, as up to this point the blastomeres undergo reductive divisions in which the overall size of the embryo does not increase, but more cells are created. This transition begins the growth in size of the organism.
The mid-blastula transition is also characterized by a marked increase in transcription of new, non-maternal mRNA transcribed from the genome of the organism. Large amounts of the maternal mRNA are destroyed at this point, either by proteins such as SMAUG in Drosophila or by microRNA. These two processes shift the control of the embryo from the maternal mRNA to the nuclei.
Structure
A blastula (blastocyst in mammals), is a sphere of cells surrounding a fluid-filled cavity called the blastocoel. The blastocoel contains amino acids, proteins, growth factors, sugars, ions and other components which are necessary for cellular differentiation. The blastocoel also allows blastomeres to move during the process of gastrulation.
In Xenopus embryos, the blastula is composed of three different regions. The animal cap forms the roof of the blastocoel and goes on primarily to form ectodermal derivatives. The equatorial or marginal zone, which composes the walls of the blastocoel, differentiates primarily into mesodermal tissue. The vegetal mass is composed of the blastocoel floor and primarily develops into endodermal tissue.
In the mammalian blastocyst there are three lineages that give rise to later tissue development. The epiblast gives rise to the fetus itself while the trophoblast develops into part of the placenta and the primitive endoderm becomes the yolk sac. In the mouse embryo, blastocoel formation begins at the 32-cell stage. During this process, water enters the embryo, aided by an osmotic gradient which is the result of sodium–potassium pumps that produce a high sodium gradient on the basolateral side of the trophectoderm. This movement of water is facilitated by aquaporins. A seal is created by tight junctions of the epithelial cells that line the blastocoel.
Cellular adhesion
Tight junctions are very important in embryo development. In the blastula, these cadherin mediated cell interactions are essential to development of epithelium which are most important to paracellular transport, maintenance of cell polarity and the creation of a permeability seal to regulate blastocoel formation. These tight junctions arise after the polarity of epithelial cells is established which sets the foundation for further development and specification. Within the blastula, inner blastomeres are generally non-polar while epithelial cells demonstrate polarity.
Mammalian embryos undergo compaction around the 8-cell stage where E-cadherins as well as alpha and beta catenins are expressed. This process makes a ball of embryonic cells which are capable of interacting, rather than a group of diffuse and undifferentiated cells. E-cadherin adhesion defines the apico-basal axis in the developing embryo and turns the embryo from an indistinct ball of cells to a more polarized phenotype which sets the stage for further development into a fully formed blastocyst.
Xenopus membrane polarity is established with the first cell cleavage. Amphibian EP-cadherin and XB/U cadherin perform a similar role as E-cadherin in mammals establishing blastomere polarity and solidifying cell-cell interactions which are crucial for further development.
Clinical implications
Fertilization technologies
Experiments with implantation in mice show that hormonal induction, superovulation, and artificial insemination successfully produce preimplantation mouse embryos. In these mice, ninety percent of the females were induced by mechanical stimulation to undergo pregnancy and implant at least one embryo. These results are encouraging because they provide a basis for potential implantation in other mammalian species, such as humans.
Stem cells
Blastula-stage cells can behave as pluripotent stem cells in many species. Pluripotent stem cells are the starting point to produce organ specific cells that can potentially aid in repair and prevention of injury and degeneration. Combining the expression of transcription factors and locational positioning of the blastula cells can lead to the development of induced functional organs and tissues. Pluripotent Xenopus cells, when used in an in vivo strategy, were able to form into functional retinas. By transplanting them to the eye field on the neural plate, and by inducing several mis-expressions of transcription factors, the cells were committed to the retinal lineage and could guide vision based behavior in the Xenopus.
See also
Polarity in embryogenesis
Diploblasty
Triploblasty
References
Bibliography
Animal developmental biology
Cloning | Blastulation | [
"Engineering",
"Biology"
] | 1,677 | [
"Cloning",
"Genetic engineering"
] |
222,300 | https://en.wikipedia.org/wiki/Oxytocin | Oxytocin is a peptide hormone and neuropeptide normally produced in the hypothalamus and released by the posterior pituitary. Present in animals since early stages of evolution, in humans it plays roles in behavior that include social bonding, love, reproduction, childbirth, and the period after childbirth. Oxytocin is released into the bloodstream as a hormone in response to sexual activity and during childbirth. It is also available in pharmaceutical form. In either form, oxytocin stimulates uterine contractions to speed up the process of childbirth.
In its natural form, it also plays a role in maternal bonding and milk production. Production and secretion of oxytocin is controlled by a positive feedback mechanism, where its initial release stimulates production and release of further oxytocin. For example, when oxytocin is released during a contraction of the uterus at the start of childbirth, this stimulates production and release of more oxytocin and an increase in the intensity and frequency of contractions. This process compounds in intensity and frequency and continues until the triggering activity ceases. A similar process takes place during lactation and during sexual activity.
Oxytocin is derived by enzymatic splitting from the peptide precursor encoded by the human OXT gene. The deduced structure of the active nonapeptide is:
Cys – Tyr – Ile – Gln – Asn – Cys – Pro – Leu – Gly – NH2, or CYIQNCPLG-NH2.
Etymology
The term "oxytocin" derives from the Greek "ὠκυτόκος" (ōkutókos), based on ὀξύς (oxús), meaning "sharp" or "swift", and τόκος (tókos), meaning "childbirth". The adjective form is "oxytocic", which refers to medicines which stimulate uterine contractions, to speed up the process of childbirth. Colloquially, it has been referred to as the "cuddle hormone" or the "moral molecule" which have been considered misnomers.
History
The uterine-contracting properties of the principle that would later be named oxytocin were discovered by British pharmacologist Henry Hallett Dale in 1906, and its milk ejection property was described by Ott and Scott in 1910 and by Schafer and Mackenzie in 1911. In 1909 the first clinical use of oxytocin was performed by William Blair-Bell to induce childbirth in patients with complications.
By the 1920s, oxytocin and vasopressin had been isolated from pituitary tissue and given their current names. Oxytocin's molecular structure was determined in 1952. In the early 1950s, American biochemist Vincent du Vigneaud found that oxytocin is made up of nine amino acids, and he identified its amino acid sequence, the first polypeptide hormone to be sequenced. In 1953, du Vigneaud carried out the synthesis of oxytocin, the first polypeptide hormone to be synthesized. Du Vigneaud was awarded the Nobel Prize in Chemistry in 1955 for his work. Further work on different synthetic routes for oxytocin, as well as the preparation of analogues of the hormone (e.g. 4-deamido-oxytocin) was performed in the following decade by Iphigenia Photaki.
Biochemistry
Estrogen has been found to increase the secretion of oxytocin and to increase the expression of its receptor, the oxytocin receptor, in the brain. In women, a single dose of estradiol has been found to be sufficient to increase circulating oxytocin concentrations.
Biosynthesis
Oxytocin and vasopressin are the only known hormones released by the human posterior pituitary gland to act at a distance. However, oxytocin neurons make other peptides, including corticotropin-releasing hormone and dynorphin, for example, that act locally. The magnocellular neurons that make oxytocin are adjacent to magnocellular neurons that make vasopressin and are similar in many respects.
The oxytocin peptide is synthesized as an inactive precursor protein from the OXT gene. This precursor protein also includes the oxytocin carrier protein neurophysin I. The inactive precursor protein is progressively hydrolyzed into smaller fragments (one of which is neurophysin I) via a series of enzymes. The last hydrolysis that releases the active oxytocin nonapeptide is catalyzed by peptidylglycine alpha-amidating monooxygenase (PAM).
The activity of the PAM enzyme system is dependent upon vitamin C (ascorbate), which is a necessary vitamin cofactor. By chance, sodium ascorbate by itself was found to stimulate the production of oxytocin from ovarian tissue over a range of concentrations in a dose-dependent manner. Many of the same tissues (e.g. ovaries, testes, eyes, adrenals, placenta, thymus, pancreas) where PAM (and oxytocin by default) is found are also known to store higher concentrations of vitamin C.
Oxytocin is known to be metabolized by the oxytocinase, leucyl/cystinyl aminopeptidase. Other oxytocinases are also known to exist. Amastatin, bestatin (ubenimex), leupeptin, and puromycin have been found to inhibit the enzymatic degradation of oxytocin, though they also inhibit the degradation of various other peptides, such as vasopressin, met-enkephalin, and dynorphin A.
Neural sources
In the hypothalamus, oxytocin is made in magnocellular neurosecretory cells of the supraoptic and paraventricular nuclei, and is stored in Herring bodies at the axon terminals in the posterior pituitary. It is then released into the blood from the posterior lobe (neurohypophysis) of the pituitary gland. These axons (likely, but dendrites have not been ruled out) have collaterals that innervate neurons in the nucleus accumbens, a brain structure where oxytocin receptors are expressed. The endocrine effects of hormonal oxytocin, and the cognitive or behavioral effects of oxytocin neuropeptides are thought to be coordinated through its common release through these collaterals. Oxytocin is also produced by some neurons in the paraventricular nucleus that project to other parts of the brain and to the spinal cord. Depending on the species, oxytocin receptor-expressing cells are located in other areas, including the amygdala and bed nucleus of the stria terminalis.
In the pituitary gland, oxytocin is packaged in large, dense-core vesicles, where it is bound to neurophysin I as shown in the inset of the figure; neurophysin is a large peptide fragment of the larger precursor protein molecule from which oxytocin is derived by enzymatic cleavage.
Secretion of oxytocin from the neurosecretory nerve endings is regulated by the electrical activity of the oxytocin cells in the hypothalamus. These cells generate action potentials that propagate down axons to the nerve endings in the pituitary; the endings contain large numbers of oxytocin-containing vesicles, which are released by exocytosis when the nerve terminals are depolarised.
Non-neural sources
Endogenous oxytocin concentrations in the brain have been found to be as much as 1000-fold higher than peripheral levels. Outside the brain, oxytocin-containing cells have been identified in several diverse tissues, including in females in the corpus luteum and the placenta; in males in the testicles' interstitial cells of Leydig; and in both sexes in the retina, the adrenal medulla, the thymus and the pancreas. The finding of significant amounts of this classically "neurohypophysial" hormone outside the central nervous system raises many questions regarding its possible importance in these diverse tissues.
The Leydig cells in some species have been shown to possess the biosynthetic machinery to manufacture testicular oxytocin de novo, to be specific, in rats (which can synthesize vitamin C endogenously), and in guinea pigs, which, like humans, require an exogenous source of vitamin C in their diets. Oxytocin is synthesized by corpora lutea of several species, including ruminants and primates. Along with estrogen, it is involved in inducing the endometrial synthesis of prostaglandin F2α to cause regression of the corpus luteum.
Evolution
Virtually all vertebrates have an oxytocin-like nonapeptide hormone that supports reproductive functions and a vasopressin-like nonapeptide hormone involved in water regulation. The two genes are usually located close to each other (less than 15,000 bases apart) on the same chromosome, and are transcribed in opposite directions (however, in fugu, the homologs are further apart and transcribed in the same direction). The two genes are believed to result from a gene duplication event; the ancestral gene is estimated to be about 500 million years old and is found in cyclostomata (modern members of the Agnatha).
A 2023 study found that zebrafish utilize oxytocin in reaction to the fear of other fish. It found that zebrafish that have had oxytocin production removed by gene editing cannot respond to the fear of other fish. When oxytocin is injected back into the fish, they react again in a way that suggests they may have empathy in regards to this emotion. Furthermore, because the same regions of the brain are involved as in mammals, the study suggests oxytocin-based empathy may have evolved from a common ancestor many millions of years ago.
Biological function
Oxytocin has peripheral (hormonal) actions and also has actions in the brain. Its actions are mediated by specific oxytocin receptors. The oxytocin receptor is a G-protein-coupled receptor, OT-R, which requires magnesium and cholesterol and is expressed in myometrial cells. It belongs to the rhodopsin-type (class I) group of G-protein-coupled receptors.
Studies have looked at oxytocin's role in various behaviors, including orgasm, social recognition, pair bonding, anxiety, in-group bias, situational lack of honesty, autism, and maternal behaviors. Oxytocin is believed to have a significant role in social learning. There are indicators that oxytocin may help to decrease noise in the brain's auditory system, increase perception of social cues and support more targeted social behavior. It may also enhance reward responses. However, its effects may be influenced by context, such as the presence of familiar or unfamiliar individuals. In addition to its oxytocin receptor agonism, oxytocin has been found to act as a positive allosteric modulator of the μ- and κ-opioid receptors, and this may be involved in its analgesic effects.
Physiological
The peripheral actions of oxytocin mainly reflect secretion from the pituitary gland. The behavioral effects of oxytocin are thought to reflect release from centrally projecting oxytocin neurons, different from those that project to the pituitary gland, or that are collaterals from them. Oxytocin receptors are expressed by neurons in many parts of the brain and spinal cord, including the amygdala, ventromedial hypothalamus, septum, nucleus accumbens, and brainstem, although the distribution differs markedly between species. Furthermore, the distribution of these receptors changes during development and has been observed to change after parturition in the montane vole.
Milk ejection reflex/Letdown reflex: in lactating (breastfeeding) mothers, oxytocin acts at the mammary glands, causing milk to be 'let down' into lactiferous ducts, from where it can be excreted via the nipple. Suckling by the infant at the nipple is relayed by spinal nerves to the hypothalamus. The stimulation causes neurons that make oxytocin to fire action potentials in intermittent bursts; these bursts result in the secretion of pulses of oxytocin from the neurosecretory nerve terminals of the pituitary gland.
Uterine contraction: important for cervical dilation before birth, oxytocin causes contractions during the second and third stages of labor. Oxytocin release during breastfeeding causes mild but often painful contractions during the first few weeks of lactation. This also serves to assist the uterus in clotting the placental attachment point postpartum. However, in knockout mice lacking the oxytocin receptor, reproductive behavior and birth are normal.
In male rats, oxytocin may induce erections. A burst of oxytocin is released during ejaculation in several species, including human males; its suggested function is to stimulate contractions of the reproductive tract, aiding sperm release.
Human sexual response: Oxytocin levels in plasma rise during sexual stimulation and orgasm. At least two uncontrolled studies have found increases in plasma oxytocin at orgasm in both men and women. Plasma oxytocin levels are increased around the time of self-stimulated orgasm and are still higher than baseline when measured five minutes after self arousal. The authors of one of these studies speculated that oxytocin's effects on muscle contractibility may facilitate sperm and egg transport.
In a study measuring oxytocin serum levels in women before and after sexual stimulation, the author suggests it serves an important role in sexual arousal. This study found genital tract stimulation resulted in increased oxytocin immediately after orgasm. Another study reported increases of oxytocin during sexual arousal could be in response to nipple/areola, genital, and/or genital tract stimulation as confirmed in other mammals. Murphy et al. (1987), studying men, found that plasma oxytocin levels remain unchanged during sexual arousal, but that levels increase sharply after ejaculation, returning to baseline levels within 30 minutes. In contrast, vasopressin was increased during arousal but returned to baseline at the time of ejaculation. The study concludes that (in males) vasopressin is secreted during arousal, while oxytocin is only secreted after ejaculation. A more recent study of men found an increase in plasma oxytocin immediately after orgasm, but only in a portion of their sample that did not reach statistical significance. The authors noted these changes "may simply reflect contractile properties on reproductive tissue".
Due to its similarity to vasopressin, it can reduce the excretion of urine slightly, and so it can be classified as an antidiuretic. In several species, oxytocin can stimulate sodium excretion from the kidneys (natriuresis), and in humans high doses can result in low sodium levels (hyponatremia).
Cardiac effects: oxytocin and oxytocin receptors are also found in the heart in some rodents, and the hormone may play a role in the embryonal development of the heart by promoting cardiomyocyte differentiation. However, the absence of either oxytocin or its receptor in knockout mice has not been reported to produce cardiac insufficiencies.
Modulation of hypothalamic-pituitary-adrenal axis activity: oxytocin, under certain circumstances, indirectly inhibits release of adrenocorticotropic hormone and cortisol and, in those situations, may be considered an antagonist of vasopressin.
Preparing fetal neurons for delivery (in rats): crossing the placenta, maternal oxytocin reaches the fetal brain and induces a switch in the action of neurotransmitter GABA from excitatory to inhibitory on fetal cortical neurons. This silences the fetal brain for the period of delivery and reduces its vulnerability to hypoxic damage.
Feeding: a 2012 paper suggested that oxytocin neurons in the para-ventricular hypothalamus in the brain may play a key role in suppressing appetite under normal conditions and that other hypothalamic neurons may trigger eating via inhibition of these oxytocin neurons. This population of oxytocin neurons is absent in Prader-Willi syndrome, a genetic disorder that leads to uncontrollable feeding and obesity, and may play a key role in its pathophysiology. Research on the oxytocin-related neuropeptide asterotocin in starfish also showed that in echinoderms, the chemical induces muscle relaxation, and in starfish specifically caused the organisms to evert their stomach and react as though feeding on prey, even when none were present.
Psychological
Autism: Oxytocin has been implicated in the etiology of autism, with one report suggesting autism is correlated to a mutation on the oxytocin receptor gene (OXTR). Studies involving Caucasian, Finnish and Chinese Han families provide support for the relationship of OXTR with autism. Autism may also be associated with an aberrant methylation of OXTR. However, evidence has shown that intranasal administration is likely insufficient to produce behavioural effects, and the reported effects could be explained by publication bias and selective outcome reporting, impacting the reproducibility of results. In addition, current discussion has challenged the intervention itself, arguing that neurodivergent perspectives need to be considered and that the predominant focus on animal models lacks translational validity, given autism's complex social and communicative dimensions.
Protection of brain functions: Studies in rats have demonstrated that nasal application of oxytocin can alleviate impaired learning capabilities caused by restraint stress. The authors attributed this effect to an observed improvement in the hippocampal response in brain-derived neurotrophic factor (BDNF). Accordingly, oxytocin has been shown to promote neural growth in the hippocampus in rats even during swim stress or glucocorticoid administration. In a mouse model of early-onset Alzheimer's, administration of oxytocin by a gel particularly designed to make the peptide accessible to the brain delayed the cognitive decline and hippocampal atrophy of these mice. Moreover, the amyloid β-protein deposit and nerve cell apoptosis were retarded. An observed inhibitory impact of oxytocin on the inflammatory activity of the microglia was proposed to be an important factor.
Bonding
In the prairie vole, oxytocin released into the brain of the female during sexual activity is important for forming a pair bond with her sexual partner. Vasopressin appears to have a similar effect in males. Oxytocin has a role in social behaviors in many species, so it likely also does in humans. In a 2003 study, oxytocin levels in the blood of both humans and dogs rose after a five- to 24-minute petting session. This possibly plays a role in the emotional bonding between humans and dogs.
Maternal behavior: Female rats given oxytocin antagonists after giving birth do not exhibit typical maternal behavior. By contrast, virgin female sheep show maternal behavior toward foreign lambs upon cerebrospinal fluid infusion of oxytocin, which they would not do otherwise. Oxytocin is involved in the initiation of human maternal behavior, not its maintenance; for example, it is higher in mothers after they interact with unfamiliar children rather than their own.
Human ingroup bonding: Oxytocin can increase positive attitudes, such as bonding, toward individuals classified as "in-group" members, whereas other individuals become classified as "out-group" members. Oxytocin has also been implicated in lying when lying would prove beneficial to other in-group members. In a study where such a relationship was examined, it was found that when individuals were administered oxytocin, rates of dishonesty in the participants' responses increased for their in-group members when a beneficial outcome for their group was expected. Both of these examples show the tendency of individuals to act in ways that benefit those considered to be members of their social group, or in-group.
Decreased oxytocin and oxytocin receptor expression has been associated with aggressive behavior in aggressive-impulsive disorders.
Oxytocin is not only correlated with the preferences of individuals to associate with members of their own group, but it is also evident during conflicts between members of different groups. During conflict, individuals receiving nasally administered oxytocin demonstrate more frequent defense-motivated responses toward in-group members than out-group members. Further, oxytocin was correlated with participant desire to protect vulnerable in-group members, despite that individual's attachment to the conflict. Similarly, it has been demonstrated that when oxytocin is administered, individuals alter their subjective preferences in order to align with in-group ideals over out-group ideals. These studies demonstrate that oxytocin is associated with intergroup dynamics. Further, oxytocin influences the responses of individuals in a particular group to those of another group. The in-group bias is evident in smaller groups; however, it can also be extended to groups as large as one's entire country leading toward a tendency of strong national zeal. A study done in the Netherlands showed that oxytocin increased the in-group favoritism of their nation while decreasing acceptance of members of other ethnicities and foreigners. People also show more affection for their country's flag while remaining indifferent to other cultural objects when exposed to oxytocin. It has thus been hypothesized that this hormone may be a factor in xenophobic tendencies secondary to this effect. Thus, oxytocin appears to affect individuals at an international level where the in-group becomes a specific "home" country and the out-group grows to include all other countries.
Drugs
Drug interaction: According to several studies in animals, oxytocin inhibits the development of tolerance to various addictive drugs (opiates, cocaine, alcohol), and reduces withdrawal symptoms. MDMA (ecstasy) may increase feelings of love, empathy, and connection to others by stimulating oxytocin activity primarily via activation of serotonin 5-HT1A receptors, if initial studies in animals apply to humans. The anxiolytic drug buspirone may produce some of its effects via 5-HT1A receptor-induced oxytocin stimulation as well.
Addiction vulnerability: Concentrations of endogenous oxytocin can impact the effects of various drugs and one's susceptibility to substance use disorders, with higher concentrations associated with lower susceptibility. The status of the endogenous oxytocin system can enhance or reduce susceptibility to addiction through its bidirectional interaction with numerous systems, including the dopamine system, the hypothalamic–pituitary–adrenal axis and the immune system. Individual differences in the endogenous oxytocin system based on genetic predisposition, gender and environmental influences, may therefore affect addiction vulnerability. Oxytocin may be related to the place conditioning behaviors observed in habitual drug abusers.
Fear and anxiety
Oxytocin is typically remembered for the effect it has on prosocial behaviors, such as its role in facilitating trust and attachment between individuals. However, oxytocin has a more complex role than solely enhancing prosocial behaviors. There is consensus that oxytocin modulates fear and anxiety; that is, it does not directly elicit fear or anxiety. Two dominant theories explain the role of oxytocin in fear and anxiety. One theory states that oxytocin increases approach/avoidance to certain social stimuli and the second theory states that oxytocin increases the salience of certain social stimuli, causing animals (including humans) to pay closer attention to socially relevant stimuli.
Nasally administered oxytocin has been reported to reduce fear, possibly by inhibiting the amygdala (which is thought to be responsible for fear responses). Indeed, studies in rodents have shown oxytocin can efficiently inhibit fear responses by activating an inhibitory circuit within the amygdala. Some researchers have argued oxytocin has a general enhancing effect on all social emotions, since intranasal administration of oxytocin also increases envy and Schadenfreude. Individuals who receive an intranasal dose of oxytocin identify facial expressions of disgust more quickly than individuals who do not receive oxytocin. Facial expressions of disgust are evolutionarily linked to the idea of contagion. Thus, oxytocin increases the salience of cues that imply contamination, which leads to a faster response because these cues are especially relevant for survival. In another study, after administration of oxytocin, individuals displayed an enhanced ability to recognize expressions of fear compared to the individuals who received the placebo. Oxytocin modulates fear responses by enhancing the maintenance of social memories. Rats who are genetically modified to have a surplus of oxytocin receptors display a greater fear response to a previously conditioned stressor. Oxytocin enhances the aversive social memory, leading the rat to display a greater fear response when the aversive stimulus is encountered again.
Mood and depression
Oxytocin produces antidepressant-like effects in animal models of depression, and a deficit of it may be involved in the pathophysiology of depression in humans. The antidepressant-like effects of oxytocin are not blocked by a selective antagonist of the oxytocin receptor, suggesting that these effects are not mediated by the oxytocin receptor. In accordance, unlike oxytocin, the selective non-peptide oxytocin receptor agonist WAY-267,464 does not produce antidepressant-like effects, at least in the tail suspension test. In contrast to WAY-267,464, carbetocin, a close analogue of oxytocin and peptide oxytocin receptor agonist, notably does produce antidepressant-like effects in animals. As such, the antidepressant-like effects of oxytocin may be mediated by modulation of a different target, perhaps the vasopressin V1A receptor where oxytocin is known to weakly bind as an agonist.
Oxytocin mediates the antidepressant-like effects of sexual activity. Sildenafil, a drug for sexual dysfunction, enhances electrically evoked oxytocin release from the pituitary gland. Accordingly, it may have promise as an antidepressant.
Sex differences
It has been shown that oxytocin differentially affects males and females. Females who are administered oxytocin are overall faster in responding to socially relevant stimuli than males who received oxytocin. Additionally, after the administration of oxytocin, females show increased amygdala activity in response to threatening scenes; however, males do not show increased amygdala activation. This phenomenon can be explained by looking at the role of gonadal hormones, specifically estrogen, which modulate the enhanced threat processing seen in females. Estrogen has been shown to stimulate the release of oxytocin from the hypothalamus and promote receptor binding in the amygdala.
It has also been shown that testosterone directly suppresses oxytocin in mice. This has been hypothesized to have evolutionary significance. With oxytocin suppressed, activities such as hunting and attacking invaders would be less mentally difficult as oxytocin is strongly associated with empathy.
Social
Because oxytocin plays a role in social bonding, maternal behaviors and emotional connections between people, it is also informally referred to as the "love hormone". This term is not a medical or scientific name but is often used to describe oxytocin's effects on human behavior and emotions.
Affecting generosity by increasing empathy during perspective taking: In a neuroeconomics experiment, intranasal oxytocin increased generosity in the Ultimatum Game by 80%, but had no effect in the Dictator Game that measures altruism. Perspective-taking is not required in the Dictator Game, but the researchers in this experiment explicitly induced perspective-taking in the Ultimatum Game by not identifying to participants into which role they would be placed. Serious methodological questions have arisen, however, with regard to the role of oxytocin in trust and generosity. Empathy in healthy males has been shown to be increased after intranasal oxytocin. This is most likely due to the effect of oxytocin in enhancing eye gaze. There is some discussion about which aspect of empathy oxytocin might alter – for example, cognitive vs. emotional empathy. While studying wild chimpanzees, it was noted that after a chimpanzee shared food with a non-kin-related chimpanzee, the subjects' levels of oxytocin increased, as measured through their urine. In comparison to other cooperative activities between chimpanzees that were monitored, including grooming, food sharing generated higher levels of oxytocin. This comparatively higher level of oxytocin after food sharing parallels the increased level of oxytocin in nursing mothers, sharing nutrients with their kin.
Trust is increased by oxytocin. One study found that, with an oxytocin nasal spray, people placed more trust in strangers to handle their money. Disclosure of emotional events is a sign of trust in humans. When recounting a negative event, humans who receive intranasal oxytocin share more emotional details and stories with more emotional significance. Humans also find faces more trustworthy after receiving intranasal oxytocin. In a study, participants who received intranasal oxytocin viewed photographs of human faces with neutral expressions and found them to be more trustworthy than those who did not receive oxytocin. This may be because oxytocin reduces the fear of social betrayal in humans. Even after experiencing social alienation by being excluded from a conversation, humans who received oxytocin scored higher in trust on the Revised NEO Personality Inventory. Moreover, in a risky investment game, experimental subjects given nasally administered oxytocin displayed "the highest level of trust" twice as often as the control group. Subjects who were told they were interacting with a computer showed no such reaction, leading to the conclusion that oxytocin was not merely affecting risk aversion. When there is a reason to be distrustful, such as experiencing betrayal, differing reactions are associated with oxytocin receptor gene (OXTR) differences. Those with the CT haplotype experience a stronger reaction, in the form of anger, to betrayal.
Romantic attachment: In some studies, high levels of plasma oxytocin have been correlated with romantic attachment. For example, if a couple is separated for a long period of time, anxiety can increase due to the lack of physical affection. Oxytocin may aid romantically attached couples by decreasing feelings of anxiety during these separations.
Group-serving dishonesty/deception: In a carefully controlled study exploring the biological roots of immoral behavior, oxytocin was shown to promote dishonesty when the outcome favored the group to which an individual belonged instead of just the individual.
Oxytocin affects social distance between adult males and females, and may be responsible at least in part for romantic attraction and subsequent monogamous pair bonding. An oxytocin nasal spray caused men in a monogamous relationship, but not single men, to increase the distance between themselves and an attractive woman during a first encounter by 10 to 15 centimeters. The researchers suggested that oxytocin may help promote fidelity within monogamous relationships. For this reason, it is sometimes referred to as the "bonding hormone". There is some evidence that oxytocin promotes ethnocentric behavior, incorporating the trust and empathy of in-groups with their suspicion and rejection of outsiders. Furthermore, genetic differences in the oxytocin receptor gene (OXTR) have been associated with maladaptive social traits such as aggressive behavior.
Social behavior and wound healing: Oxytocin is also thought to modulate inflammation by decreasing certain cytokines. Thus, the increased release in oxytocin following positive social interactions has the potential to improve wound healing. A study by Marazziti and colleagues used heterosexual couples to investigate this possibility. They found increases in plasma oxytocin following a social interaction were correlated with faster wound healing. They hypothesized this was due to oxytocin reducing inflammation, thus allowing the wound to heal more quickly. This study provides preliminary evidence that positive social interactions may directly influence aspects of health.
According to a study published in 2014, silencing of oxytocin receptor interneurons in the medial prefrontal cortex (mPFC) of female mice resulted in loss of social interest in male mice during the sexually receptive phase of the estrous cycle. Oxytocin evokes feelings of contentment, reductions in anxiety, and feelings of calmness and security when in the company of the mate. This suggests oxytocin may be important for the inhibition of the brain regions associated with behavioral control, fear, and anxiety, thus allowing orgasm to occur. Research has also demonstrated that oxytocin can decrease anxiety and protect against stress, particularly in combination with social support. It has been found that endocannabinoid signaling mediates oxytocin-driven social reward. In a 2008 study, a lack of oxytocin in mice was associated with abnormalities in emotional behavior. Another study conducted in 2014 saw similar results with a variation in the oxytocin receptor connected with dopamine transport, finding that levels of oxytocin depend on levels of the dopamine transporter. One study explored the effects of low levels of oxytocin, and the other a possible explanation of what affects oxytocin receptors. As a lack of social skills and proper emotional behavior are common signs of autism, low levels of oxytocin could become a new marker for individuals on the autism spectrum.
Chemistry
Oxytocin is a peptide of nine amino acids (a nonapeptide) in the sequence cysteine-tyrosine-isoleucine-glutamine-asparagine-cysteine-proline-leucine-glycine-amide (Cys – Tyr – Ile – Gln – Asn – Cys – Pro – Leu – Gly – NH2, or CYIQNCPLG-NH2); its C-terminus has been converted to a primary amide and a disulfide bridge joins the cysteine moieties. Oxytocin has a molecular mass of 1007 Da, and one international unit (IU) of oxytocin is the equivalent of 1.68 μg of pure peptide.
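Combining the two figures just given is purely arithmetic (no new data); a sketch:

```python
# 1 IU = 1.68 micrograms of pure peptide; molecular mass ~1007 Da
mass_per_iu = 1.68e-6            # grams per IU
molar_mass = 1007.0              # g/mol
print(mass_per_iu / molar_mass)  # ~1.67e-9 mol: one IU is about 1.7 nmol of oxytocin
```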
While the structure of oxytocin is highly conserved in placental mammals, a novel structure of oxytocin was reported in 2011 in marmosets, tamarins, and other new world primates. Genomic sequencing of the gene for oxytocin revealed a single in-frame mutation (thymine for cytosine) which results in a single amino acid substitution at the 8-position (proline for leucine). Since this original Lee et al. paper, two other laboratories have confirmed Pro8-OT and documented additional oxytocin structural variants in this primate taxon. Vargas-Pinilla et al. sequenced the coding regions of the OXT gene in other genera in new world primates and identified the following variants in addition to Leu8- and Pro8-OT: Ala8-OT, Thr8-OT, and Val3/Pro8-OT. Ren et al. identified a variant further, Phe2-OT in howler monkeys.
Recent advances in analytical instrumental techniques highlighted the importance of liquid chromatography (LC) coupled with mass spectrometry (MS) for measuring oxytocin levels in various samples derived from biological sources. Most of these studies optimized the oxytocin quantification in electrospray ionization (ESI) positive mode, using [M+H]+ as the parent ion at mass-to-charge ratio (m/z) 1007.4 and the fragment ions as diagnostic peaks at m/z 991.0, m/z 723.2 and m/z 504.2. These important ion selections paved the way for the development of current methods of oxytocin quantification using MS instrumentation.
The structure of oxytocin is very similar to that of vasopressin. Both are nonapeptides with a single disulfide bridge, differing only by two substitutions in the amino acid sequence (phenylalanine in place of isoleucine at position 3, and arginine in place of leucine at position 8): Cys – Tyr – Phe – Gln – Asn – Cys – Pro – Arg – Gly – NH2. Oxytocin and vasopressin were isolated and their total synthesis reported in 1954, work for which Vincent du Vigneaud was awarded the 1955 Nobel Prize in Chemistry with the citation: "for his work on biochemically important sulphur compounds, especially for the first synthesis of a polypeptide hormone."
Oxytocin and vasopressin are the only known hormones released by the human posterior pituitary gland to act at a distance. However, oxytocin neurons make other peptides, including corticotropin-releasing hormone and dynorphin, for example, that act locally. The magnocellular neurosecretory cells that make oxytocin are adjacent to magnocellular neurosecretory cells that make vasopressin. These are large neuroendocrine neurons which are excitable and can generate action potentials.
In medicine
Small-molecule oxytocin receptor agonists like LIT-001 may prove to be useful in the treatment of social deficits, for instance in autism.
See also
Oxytocin (medication)
References
Further reading
External links
Analgesics
Antidiuretics
Breastfeeding
Gynaecological endocrinology
Hormones of the hypothalamus
Hormones of the pregnant female
Human female endocrine system
Interpersonal attraction
Neuropeptides
Neurotransmitters
Opioid receptor positive allosteric modulators
Orgasm
Oxytocin receptor agonists
Posterior pituitary hormones
Vasopressin receptor agonists
Orphan drugs
Nonapeptides
Happy hormones | Oxytocin | [
"Chemistry"
] | 8,176 | [
"Neurochemistry",
"Neurotransmitters"
] |
222,320 | https://en.wikipedia.org/wiki/Interphase | Interphase is the active portion of the cell cycle that includes the G1, S, and G2 phases, where the cell grows, replicates its DNA, and prepares for mitosis, respectively. Interphase was formerly called the "resting phase," but the cell in interphase is not simply dormant. Calling it so would be misleading since a cell in interphase is very busy synthesizing proteins, transcribing DNA into RNA, engulfing extracellular material, and processing signals, to name just a few activities. The cell is quiescent only in G0. Interphase is the phase of the cell cycle in which a typical cell spends most of its life. Interphase is the "daily living" or metabolic phase of the cell, in which the cell obtains nutrients and metabolizes them, grows, replicates its DNA in preparation for mitosis, and conducts other "normal" cell functions.
A common misconception is that interphase is the first stage of mitosis, but since mitosis is the division of the nucleus, prophase is actually the first stage.
In interphase, the cell gets itself ready for mitosis or meiosis. Somatic cells, or normal diploid cells of the body, go through mitosis in order to reproduce themselves through cell division, whereas diploid germ cells (i.e., primary spermatocytes and primary oocytes) go through meiosis in order to create haploid gametes (i.e., sperm and ova) for the purpose of sexual reproduction.
Stages of interphase
There are three stages of cellular interphase, with each phase ending when a cellular checkpoint checks the accuracy of the stage's completion before proceeding to the next. The stages of interphase are:
G1 (Gap 1), in which the cell grows and functions normally. During this time, a high amount of protein synthesis occurs and the cell grows (to about double its original size) – more organelles are produced and the volume of the cytoplasm increases. If the cell is not to divide again, it will enter G0.
Synthesis (S), in which the cell synthesizes its DNA and the amount of DNA is doubled but the number of chromosomes remains constant (via semiconservative replication).
G2 (Gap 2), in which the cell resumes its growth in preparation for division. The cell continues to grow until mitosis begins. In plants, chloroplasts divide during G2.
In addition, some cells that divide rarely or never enter a stage called G0 (Gap zero), which is either a stage separate from interphase or an extended G1.
The duration of time spent in interphase and in each stage of interphase is variable and depends on both the type of cell and the species of organism it belongs to. Most cells of adult mammals spend about 24 hours in interphase; this accounts for about 90%-96% of the total time involved in cell division.
Interphase includes G1, S, and G2 phases. Mitosis and cytokinesis, however, are separate from interphase.
DNA double-strand breaks can be repaired during interphase by two principal processes. The first process, non-homologous end joining (NHEJ), can join the two broken ends of DNA in the G1, S and G2 phases of interphase. The second process, homologous recombinational repair (HRR), is more accurate than NHEJ in repairing double-strand breaks. However HRR is only active during the S and G2 phases of interphase when DNA replication is either partially or fully accomplished, since HRR requires two adjacent homologous chromosomes.
Interphase within sequences of cellular processes
Interphase and the cell cycle
When G2 is completed, the cell enters a relatively brief period of nuclear and cellular division, composed of mitosis and cytokinesis, respectively. After the successful completion of mitosis and cytokinesis, both resulting daughter cells re-enter G1 of interphase.
In the cell cycle, interphase is preceded by telophase and cytokinesis of the M phase. In alternative fashion, interphase is sometimes interrupted by G0 phase, which, in some circumstances, may then end and be followed by the remaining stages of interphase. After the successful completion of the G2 checkpoint, the final checkpoint in interphase, the cell proceeds to prophase, or in plants to preprophase, which is the first stage of mitosis.
G0 phase is viewed as either an extended G1 phase where the cell is neither dividing nor preparing to divide, or as a distinct quiescent stage which occurs outside of the cell cycle.
Interphase and other cellular processes
In gamete production, interphase is succeeded by meiosis. In programmed cell death, interphase is followed or preempted by apoptosis.
See also
Prophase
Prometaphase
Metaphase
Anaphase
Telophase
Cytokinesis
Cytoskeleton
Interphase (materials) – the transition region between two materials, for example between the fibre and matrix of a composite material
References
Mitosis
Cell biology | Interphase | [
"Biology"
] | 1,108 | [
"Cell biology",
"Cellular processes",
"Mitosis"
] |
222,367 | https://en.wikipedia.org/wiki/Multivalued%20function | In mathematics, a multivalued function, multiple-valued function, many-valued function, or multifunction, is a function that has two or more values in its range for at least one point in its domain. It is a set-valued function with additional properties depending on context; some authors do not distinguish between set-valued functions and multifunctions, but English Wikipedia currently does, having a separate article for each.
A multivalued function of sets f : X → Y is a subset
Γf ⊆ X × Y.
Write f(x) for the set of those y ∈ Y with (x, y) ∈ Γf. If f is an ordinary function, it is a multivalued function by taking its graph
Γf = {(x, f(x)) : x ∈ X}.
Ordinary functions are called single-valued functions to distinguish them.
Distinction from set-valued relations
Although other authors may distinguish them differently (or not at all), Wriggers and Panagiotopoulos (2014) distinguish multivalued functions from set-valued relations (also called set-valued functions) by the fact that multivalued functions only take multiple values at finitely (or denumerably) many points, and otherwise behave like a function. Geometrically, this means that the graph of a multivalued function is necessarily a line of zero area that doesn't loop, while the graph of a set-valued relation may contain solid filled areas or loops.
Motivation
The term multivalued function originated in complex analysis, from analytic continuation. It often occurs that one knows the value of a complex analytic function f(z) in some neighbourhood of a point z = a. This is the case for functions defined by the implicit function theorem or by a Taylor series around z = a. In such a situation, one may extend the domain of the single-valued function along curves in the complex plane starting at a. In doing so, one finds that the value of the extended function at a point b depends on the chosen curve from a to b; since none of the new values is more natural than the others, all of them are incorporated into a multivalued function.
For example, let f(x) = √x be the usual square root function on positive real numbers. One may extend its domain to a neighbourhood of x = 9 in the complex plane, and then further along curves starting at x = 9, so that the values along a given curve vary continuously from f(9) = 3. Extending to negative real numbers, one gets two opposite values for the square root—for example √(−9) = ±3i—depending on whether the domain has been extended through the upper or the lower half of the complex plane. This phenomenon is very frequent, occurring for nth roots, logarithms, and inverse trigonometric functions.
To define a single-valued function from a complex multivalued function, one may distinguish one of the multiple values as the principal value, producing a single-valued function on the whole plane which is discontinuous along certain boundary curves. Alternatively, dealing with the multivalued function allows having something that is everywhere continuous, at the cost of possible value changes when one follows a closed path (monodromy). These problems are resolved in the theory of Riemann surfaces: to consider a multivalued function f as an ordinary function without discarding any values, one multiplies the domain into a many-layered covering space, a manifold which is the Riemann surface associated to f.
Inverses of functions
If f : X → Y is an ordinary function, then its inverse is the multivalued function f⁻¹ : Y → X
defined by reversing the pairs of Γf, that is, the set {(y, x) : (x, y) ∈ Γf} viewed as a subset of Y × X. When f is a differentiable function between manifolds, the inverse function theorem gives conditions for this to be single-valued locally in X.
For example, the complex logarithm log(z) is the multivalued inverse of the exponential function e^z : C → C×, with graph Γ = {(z, w) ∈ C× × C : z = e^w}.
It is not single-valued: given a single w with w = log(z), we have log(z) = w + 2πin for every integer n, since e^(w + 2πin) = e^w = z.
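A short check of this with Python's cmath (the sample point is arbitrary): every shift of the principal logarithm by an integer multiple of 2πi exponentiates back to the same z.

import cmath

z = 1 + 1j
w = cmath.log(z)  # principal value of log(z)
for n in (-2, -1, 0, 1, 2):
    # Each w + 2*pi*i*n is an equally valid value of log(z).
    print(n, cmath.exp(w + 2j * cmath.pi * n))  # prints (1+1j) up to rounding error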
Given any holomorphic function on an open subset of the complex plane C, its analytic continuation is always a multivalued function.
Concrete examples
Every real number greater than zero has two real square roots, so that square root may be considered a multivalued function. For example, we may write √4 = ±2; although zero has only one square root, √0 = 0.
Each nonzero complex number has two square roots, three cube roots, and in general n nth roots. The only nth root of 0 is 0.
The complex logarithm function is multiple-valued. The values assumed by log(a + bi) for real numbers a and b (not both zero) are log(√(a² + b²)) + i arg(a + bi) + 2πni for all integers n.
Inverse trigonometric functions are multiple-valued because trigonometric functions are periodic. We have tan(π/4) = tan(5π/4) = tan(−3π/4) = ⋯ = 1. As a consequence, arctan(1) is intuitively related to several values: π/4, 5π/4, −3π/4, and so on. We can treat arctan as a single-valued function by restricting the domain of tan x to −π/2 < x < π/2, a domain over which tan x is monotonically increasing. Thus, the range of arctan(x) becomes −π/2 < y < π/2. These values from a restricted domain are called principal values.
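A brief illustration with Python's math module (the sample angle is arbitrary): math.atan always returns the principal value, so preimages outside (−π/2, π/2) must be recovered by adding multiples of π.

import math

x = math.tan(5 * math.pi / 4)   # 5*pi/4 lies outside the principal domain; tan of it is 1
print(x)                         # 1.0 up to rounding
print(math.atan(x))              # 0.7853... = pi/4, the principal value
print(math.atan(x) + math.pi)    # 3.926... = 5*pi/4, the preimage we started from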
The antiderivative can be considered as a multivalued function. The antiderivative of a function is the set of functions whose derivative is that function. The constant of integration follows from the fact that the derivative of a constant function is 0.
Inverse hyperbolic functions over the complex domain are multiple-valued because hyperbolic functions are periodic along the imaginary axis. Over the reals, they are single-valued, except for arcosh and arsech.
These are all examples of multivalued functions that come about from non-injective functions. Since the original functions do not preserve all the information of their inputs, they are not reversible. Often, the restriction of a multivalued function is a partial inverse of the original function.
Branch points
Multivalued functions of a complex variable have branch points. For example, for the nth root and logarithm functions, 0 is a branch point; for the arctangent function, the imaginary units i and −i are branch points. Using the branch points, these functions may be redefined to be single-valued functions, by restricting the range. A suitable interval may be found through use of a branch cut, a kind of curve that connects pairs of branch points, thus reducing the multilayered Riemann surface of the function to a single layer. As in the case with real functions, the restricted range may be called the principal branch of the function.
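The effect of a branch cut can be seen numerically; in Python's cmath the principal square root is cut along the negative real axis (a standard convention), so values just above and just below the cut differ by a sign.

import cmath

eps = 1e-12
print(cmath.sqrt(-4 + eps * 1j))  # approximately +2j, approaching the cut from above
print(cmath.sqrt(-4 - eps * 1j))  # approximately -2j, approaching it from below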
Applications
In physics, multivalued functions play an increasingly important role. They form the mathematical basis for Dirac's magnetic monopoles, for the theory of defects in crystals and the resulting plasticity of materials, for vortices in superfluids and superconductors, and for phase transitions in these systems, for instance melting and quark confinement. They are the origin of gauge field structures in many branches of physics.
See also
Relation (mathematics)
Function (mathematics)
Binary relation
Set-valued function
Further reading
H. Kleinert, Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation, World Scientific (Singapore, 2008) (also available online)
H. Kleinert, Gauge Fields in Condensed Matter, Vol. I: Superflow and Vortex Lines, 1–742, Vol. II: Stresses and Defects, 743–1456, World Scientific, Singapore, 1989 (also available online: Vol. I and Vol. II)
References
Functions and mappings | Multivalued function | [
"Mathematics"
] | 1,536 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
222,405 | https://en.wikipedia.org/wiki/Linnik%27s%20theorem | Linnik's theorem in analytic number theory answers a natural question after Dirichlet's theorem on arithmetic progressions. It asserts that there exist positive c and L such that, if we denote p(a,d) the least prime in the arithmetic progression
where n runs through the positive integers and a and d are any given positive coprime integers with 1 ≤ a ≤ d − 1, then:
The theorem is named after Yuri Vladimirovich Linnik, who proved it in 1944. Although Linnik's proof showed c and L to be effectively computable, he provided no numerical values for them.
It follows from Zsigmondy's theorem that p(1,d) ≤ 2d − 1, for all d ≥ 3. It is known that p(1,p) ≤ Lp, for all primes p ≥ 5, as Lp is congruent to 1 modulo p for all prime numbers p, where Lp denotes the p-th Lucas number. Just like Mersenne numbers, Lucas numbers with prime indices have divisors of the form 2kp+1.
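For small moduli, p(a,d) can be computed directly by scanning the progression; a brute-force sketch in Python (the trial-division primality test and the example arguments are illustrative only):

def is_prime(n):
    # Trial division; adequate for small n.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def least_prime(a, d):
    # p(a, d): the least prime congruent to a modulo d, assuming gcd(a, d) = 1.
    n = a
    while not is_prime(n):
        n += d
    return n

print(least_prime(1, 9))  # 19
print(least_prime(2, 9))  # 2
print(least_prime(4, 9))  # 13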
Properties
It is known that L ≤ 2 for almost all integers d.
On the generalized Riemann hypothesis it can be shown that
p(a,d) ≤ (1 + o(1)) φ(d)² (log d)²,
where φ is Euler's totient function,
and the stronger bound
p(a,d) ≤ φ(d)² (log d)²
has been also proved.
It is also conjectured that p(a,d) < d².
Bounds for L
The constant L is called Linnik's constant and the following table shows the progress that has been made on determining its size.
Moreover, in Heath-Brown's result the constant c is effectively computable.
Notes
Theorems in analytic number theory
Theorems about prime numbers | Linnik's theorem | [
"Mathematics"
] | 337 | [
"Theorems in mathematical analysis",
"Theorems in number theory",
"Theorems in analytic number theory",
"Theorems about prime numbers"
] |
14,138,466 | https://en.wikipedia.org/wiki/JunD | Transcription factor JunD is a protein that in humans is encoded by the JUND gene.
Function
The protein encoded by this intronless gene is a member of the JUN family, and a functional component of the AP1 transcription factor complex. It has been proposed to protect cells from p53-dependent senescence and apoptosis. Alternate translation initiation site usage results in the production of different isoforms.
ΔJunD
The dominant negative mutant variant of JunD, known as ΔJunD or Delta JunD, is a potent antagonist of the ΔFosB transcript, as well as of other forms of AP-1-mediated transcriptional activity. In the nucleus accumbens, ΔJunD directly opposes many of the neurological changes that occur in addiction (i.e., those induced by ΔFosB). ΔFosB inhibitors (drugs that oppose its action) may be an effective treatment for addiction and addictive disorders. Being an unnatural genetic variant, ΔJunD has not been observed in humans.
Interactions
JunD has been shown to interact with ATF3, MEN1, DNA damage-inducible transcript 3 and BRCA1.
See also
AP-1 (transcription factor)
References
Further reading
External links
PDBe-KB provides an overview of all the structure information available in the PDB for Human Transcription factor jun-D
Transcription factors | JunD | [
"Chemistry",
"Biology"
] | 273 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,139,384 | https://en.wikipedia.org/wiki/Comparison%20of%20anaerobic%20and%20aerobic%20digestion | The following article is a comparison of aerobic and anaerobic digestion. In both aerobic and anaerobic systems the growing and reproducing microorganisms within them require a source of elemental oxygen to survive.
In an anaerobic system there is an absence of gaseous oxygen. In an anaerobic digester, gaseous oxygen is prevented from entering the system through physical containment in sealed tanks. Anaerobes access oxygen from sources other than the surrounding air. The oxygen source for these microorganisms can be the organic material itself or alternatively may be supplied by inorganic oxides from within the input material. When the oxygen source in an anaerobic system is derived from the organic material itself, the 'intermediate' end products are primarily alcohols, aldehydes, and organic acids, plus carbon dioxide. In the presence of specialised methanogens, the intermediates are converted to the 'final' end products: methane and carbon dioxide, with trace levels of hydrogen sulfide. In an anaerobic system the majority of the chemical energy contained within the starting material is released by methanogenic archaea as methane.
In an aerobic system, such as composting, the microorganisms access free, gaseous oxygen directly from the surrounding atmosphere. The end products of an aerobic process are primarily carbon dioxide and water, which are the stable, oxidised forms of carbon and hydrogen. If the biodegradable starting material contains nitrogen, phosphorus and sulfur, then the end products may also include their oxidised forms: nitrate, phosphate and sulfate. In an aerobic system the majority of the energy in the starting material is released as heat by its oxidation into carbon dioxide and water.
Composting systems typically include organisms such as fungi that are able to break down lignin and celluloses to a greater extent than anaerobic bacteria. Due to this fact it is possible, following anaerobic digestion, to compost the anaerobic digestate allowing further volume reduction and stabilisation.
References
Anaerobic digestion | Comparison of anaerobic and aerobic digestion | [
"Chemistry",
"Engineering"
] | 427 | [
"Water technology",
"Anaerobic digestion",
"Environmental engineering"
] |
14,143,143 | https://en.wikipedia.org/wiki/Kubo%20gap | In atomic physics, the kubo gap is the average spacing that exists between consecutive energy levels. The units of measure are meV or millielectron volts. It varies with an inverse relationship to the nuclearity.
As a material is viewed from the bulk scale down to the atomic scale, the Kubo gap goes from a smaller to a larger value, respectively. As the Kubo gap increases, the density of states located at the Fermi level decreases. The Kubo gap can also affect the properties of the material: controlling it can cause the system to become metallic or nonmetallic, and both the electrical conductivity and the magnetic susceptibility vary according to its relative size.
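A rough numerical illustration of the inverse dependence on nuclearity, using the commonly quoted estimate δ ≈ 4EF/3N for the Kubo gap; the Fermi energy chosen for silver and the cluster sizes are illustrative assumptions:

def kubo_gap_mev(fermi_energy_ev, nuclearity):
    # Average level spacing delta ~ 4*E_F/(3*N), converted from eV to meV.
    return 4.0 * fermi_energy_ev / (3.0 * nuclearity) * 1000.0

E_F = 5.5  # eV, approximate Fermi energy of bulk silver (assumed value)
for n in (10, 100, 1000, 10000):
    print(n, "atoms:", round(kubo_gap_mev(E_F, n), 2), "meV")
# As the nuclearity grows, the gap collapses and the cluster approaches bulk metallic behaviour.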
See also
Nanoparticle
Quantum dot
References
Atomic physics
Mesoscopic physics | Kubo gap | [
"Physics",
"Chemistry",
"Materials_science"
] | 190 | [
"Materials science stubs",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Condensed matter physics",
"Atomic",
"Condensed matter stubs",
"Mesoscopic physics",
" and optical physics"
] |
14,147,401 | https://en.wikipedia.org/wiki/Mesophase | In chemistry and chemical physics, a mesophase or mesomorphic phase is a phase of matter intermediate between solid and liquid. Gelatin is a common example of a partially ordered structure in a mesophase. Further, biological structures such as the lipid bilayers of cell membranes are examples of mesophases. Mobile ions in mesophases are either orientationally or rotationally disordered while their centers are located at the ordered sites in the crystal structure. Mesophases with long-range positional order but no orientational order are plastic crystals, whereas those with long-range orientational order but only partial or no positional order are liquid crystals.
Georges Friedel (1922) called attention to the "mesomorphic states of matter" in his scientific assessment of observations of the so-called liquid crystals. Conventionally a crystal is solid, and crystallization converts liquid to solid. The oxymoron of the liquid crystal is resolved through the notion of mesophases. The observations noted an optic axis persisting in materials that had been melted and had begun to flow. The term liquid crystal persists as a colloquialism, but its use was criticized in 1993 in The Physics of Liquid Crystals, where the mesophases are introduced from the beginning:
...certain organic materials do not show a single transition from solid to liquid, but rather a cascade of transitions involving new phases. The mechanical properties and the symmetry properties of these phases are intermediate between those of a liquid and those of a crystal. For this reason they have often been called liquid crystals. A more proper name is ‘mesomorphic phases’ (mesomorphic: intermediate form)
Further, "The classification of mesophases (first clearly set out by G. Friedel in 1922) is essentially based on symmetry."
Molecules that demonstrate mesophases are called mesogens.
In technology, molecules in which the optic axis is subject to manipulation during a mesophase have become commercial products, as they can be used to manufacture display devices known as liquid-crystal displays (LCDs). The susceptibility of the optical axis, called a director, to an electric or magnetic field provides the potential for an optical switch that obscures light or lets it pass. Methods used include the Freedericksz transition, the twisted nematic field effect and the in-plane switching effect. Since the earliest liquid-crystal displays, the buying public has embraced the low-power optical switching made possible by mesophases with a director.
Consider a solid consisting of a single molecular species and subjected to melting. Ultimately it is rendered to an isotropic state classically referred to as liquid. Mesophases occur before then when an intermediate state of order is still maintained as in the nematic, smectic, and columnar phases of liquid crystals. Mesophases thus exhibit anisotropy. LCD devices work as an optical switch which is turned off and on by an electric field applied to the mesogen with director. The response of the director to the field is expressed with viscosity parameters, as in the Ericksen-Leslie theory in continuum mechanics developed by Jerald Ericksen and Frank Matthews Leslie. LCD devices work only up to the transition temperature when the mesophase changes to the isotropic liquid phase at the so-called clearing point.
Mesophase phenomena are important in many scientific fields. The publishing arms of professional societies have academic journals as needed. For instance, the American Chemical Society has both Macromolecules and Langmuir, while Royal Society of Chemistry has Soft Matter, and American Physical Society has Physical Review E, and Elsevier has Advances in Colloid and Interface Science.
See also
Condensed matter physics
Sol-gel
Walter Noll
Notes and references
Sivaramakrishna Chandrasekhar (1992) Liquid Crystals, 2nd edition, Cambridge University Press .
David Dunmur & Tim Sluckin (2011) Soap, Science, and Flat-screen TVs: a history of liquid crystals, Oxford University Press .
J. Prost & C.E. Williams (1999) "Liquid Crystals: Between Order and Disorder", pp 289–315 in Soft Matter Physics, Mohamed Daoud & Claudine E. Williams, editors, translated by Stephen N. Lyle from La Just Argile (1995), Springer Verlag .
External links
Soft Matter World organization
Springer Verlag Partially Ordered Systems ISSN 0941-5114 .
Phases of matter
Soft matter
Phase transitions
Liquid crystals
Continuum mechanics | Mesophase | [
"Physics",
"Chemistry",
"Materials_science"
] | 931 | [
"Physical phenomena",
"Phase transitions",
"Continuum mechanics",
"Soft matter",
"Phases of matter",
"Critical phenomena",
"Classical mechanics",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
12,484,448 | https://en.wikipedia.org/wiki/Alkali%E2%80%93silica%20reaction | The alkali–silica reaction (ASR), also commonly known as concrete cancer, is a deleterious internal swelling reaction that occurs over time in concrete between the highly alkaline cement paste and the reactive amorphous (i.e., non-crystalline) silica found in many common aggregates, given sufficient moisture.
This deleterious chemical reaction causes the expansion of the altered aggregate by the formation of a soluble and viscous gel of sodium silicate (Na2SiO3, also noted Na2H2SiO4, or N-S-H (sodium silicate hydrate), depending on the adopted convention). This hygroscopic gel swells and increases in volume when absorbing water: it exerts an expansive pressure inside the siliceous aggregate, causing spalling and loss of strength of the concrete, finally leading to its failure.
ASR can lead to serious cracking in concrete, resulting in critical structural problems that can even force the demolition of a particular structure. The expansion of concrete through reaction between cement and aggregates was first studied by Thomas E. Stanton in California during the 1930s with his founding publication in 1940.
Chemistry
To attempt to simplify and to stylize a very complex set of various reactions, the whole ASR reaction, after its complete evolution (ageing process) in the presence of sufficient Ca2+ cations available in solution, could be compared to the pozzolanic reaction which would be catalysed by the undesirable presence of excessive concentrations of alkali hydroxides (NaOH and KOH) in the concrete. It is a mineral acid-base reaction between NaOH or KOH, calcium hydroxide, also known as portlandite, or (Ca(OH)2), and silicic acid (H4SiO4, or Si(OH)4). For simplification, after a complete exchange of the alkali cations with the calcium ions released by portlandite, the alkali-silica reaction in its ultimate stage leading to calcium silicate hydrate (C-S-H) could be schematically represented as follows:
Ca(OH)2 + H4SiO4 → Ca2+ + H2SiO42− + 2 H2O → CaH2SiO4
Here, the silicic acid H4SiO4, or Si(OH)4, which is equivalent to SiO2 · 2 H2O, represents hydrous or amorphous silica for the sake of simplicity in aqueous chemistry.
Indeed, the term silicic acid has traditionally been used as a synonym for silica, SiO2. Strictly speaking, silica is the anhydride of orthosilicic acid, Si(OH)4.
SiO2 (s) + 2 H2O ⇌ Si(OH)4
An ancient industrial notation referring to H2SiO3, metasilicic acid, is also often used to depict the alkali-silica reaction. However, metasilicic acid, H2SiO3, or SiO(OH)2, is a hypothetical molecule which has never been observed, even in extremely dilute solutions, because it is unstable and continues to hydrate.
Indeed, contrary to the hydration of CO2 which consumes only one water molecule and stops at H2CO3, the hydration of SiO2 consumes two water molecules and continues one step further to form H4SiO4. The difference in hydration behaviour between SiO2 and CO2 is explained by thermodynamic reasons (Gibbs free energy) and by bond energy or steric hindrance around the central atom of the molecule.
This is why the more correct geochemical notation referring to the orthosilicic acid really existing in dilute solution is preferred here. However, the main advantage of the now deprecated, but still often used, industrial notation referring to the metasilicate anion (SiO3²⁻), which also does not exist in aqueous solution, is its greater simplicity and its direct similitude in notation with the carbonate (CO3²⁻) system.
One will also note that the NaOH and KOH species (alkali hydroxides, also often simply called alkali to refer to their strongly basic character) which catalyze and accelerate the silica dissolution in the alkali-silica reaction do not explicitly appear in this simplified representation of the ultimate reaction with portlandite, because they are continuously regenerated from the cation exchange reaction with portlandite. As a consequence, they disappear from the global mass balance equation of the catalyzed reaction.
Silica dissolution mechanism
The surface of solid silica in contact with water is covered by siloxane bonds (≡Si–O–Si≡) and silanol groups (≡Si–OH) sensitive to an alkaline attack by OH⁻ ions.
The presence of these oxygen-bearing groups very prone to form hydrogen bonds with water molecules explains the affinity of silica for water and makes colloidal silica very hydrophilic.
Siloxane bonds may undergo hydrolysis and condensation reactions as schematically represented hereafter:
≡Si–O–Si≡ + H2O ↔ ≡Si–OH + HO–Si≡
=Si=O + H2O ↔ =Si(OH)2
On the other hand, silanol groups can also undergo protonation/deprotonation:
≡Si–OH ↔ ≡Si–O⁻ + H⁺.
These equilibria can be shifted towards the right side of the reaction leading to silica dissolution by increasing the concentration of the hydroxide anion (OH–), i.e., by increasing the pH of the solution.
Alkaline hydrolysis of siloxane bonds occurs by nucleophilic substitution of OH– onto a silicon atom, while another –O–Si group is leaving to preserve the tetravalent character of Si atom:
≡Si–O–Si≡ + OH⁻ → ≡Si–OH + ⁻O–Si≡
=Si=O + OH⁻ → =Si(O⁻)OH
Deprotonation of silanol groups:
≡Si–OH + OH⁻ → ≡Si–O⁻ + H2O.
In the pH range 0–7, the solubility of silica is constant, but above pH 8, the hydrolysis of siloxane bonds and the deprotonation of silanol groups increase exponentially with the pH value. This is why glass easily dissolves at high pH and does not withstand extremely basic NaOH/KOH solutions. The NaOH/KOH released during cement hydration therefore attacks and dissolves the three-dimensional network of silica present in the aggregates. Amorphous or poorly crystallized silica, such as cryptocrystalline chalcedony or chert present in flints (in chalk) or rolled river gravels, is much more soluble and sensitive to alkaline attack by OH⁻ anions than well crystallized silica such as quartz. Strained (deformed) quartz or chert exposed to freeze-thaw cycles in Canada and Nordic countries is also more sensitive to alkaline (high pH) solutions.
The species responsible for silica dissolution is the hydroxide anion (OH–). The high pH conditions are said to be alkaline and one also speaks of the alkalinity of the basic solutions. For the sake of electroneutrality, (OH–) anions need to be accompanied by positively charged cations, Na+ or K+ in NaOH or KOH solutions, respectively. Na and K both belong to the alkali metals column in the Periodic Table. When speaking of alkalis, one systematically refers to NaOH and KOH basic hydroxides, or their corresponding oxides Na2O and K2O in cement. Therefore, it is the hydroxide, or the oxide, component of the salt which is the only relevant chemical species for silica dissolution, not the alkali metal in itself. However, to determine the alkali equivalent content (Na2Oeq) in cement, because of the need to maintain electroneutrality in solids or in solution, one directly measures the contents of cement in Na and K elements and one conservatively considers that their counter ions are the hydroxide ions. As Na+ and K+ cations are hydrated species, they also contribute to retain water in alkali-silica reaction products.
Osmotic processes (Chatterji et al., 1986, 1987, 1989) and the electrical double layer (EDL) also play a fundamental role in the transport of water towards the concentrated liquid alkali gel, explaining its swelling behavior and the deleterious expansion of aggregates responsible for ASR damage in concrete.
Catalysis of ASR by dissolved NaOH or KOH
The ASR reaction significantly differs from the pozzolanic reaction by the fact that it is catalysed by soluble alkali hydroxides (NaOH / KOH) at very high pH. It can be represented as follows using the classical geochemical notation representing silica by the fully hydrated dissolved silica (Si(OH)4 or silicic acid: H4SiO4), while the older industrial notation instead considers the non-existing hemihydrated silica H2SiO3, in analogy to carbonic acid:
2 NaOH + H4SiO4 → Na2H2SiO4 + 2 H2O
The so-produced soluble alkali silicagel can then react with calcium hydroxide (portlandite) to precipitate insoluble calcium silicate hydrates (C-S-H phases) and regenerate NaOH, thus continuing the initial silica dissolution reaction:
Na2H2SiO4 + Ca(OH)2 → CaH2SiO4 + 2 NaOH
The combination of the two above-mentioned reactions gives a general reaction resembling the pozzolanic reaction, but it is important to keep in mind that this reaction is catalysed by the undesirable presence in cement, or other concrete components, of soluble alkaline hydroxides (NaOH / KOH) responsible for the dissolution of the silica (silicic acid) at high pH:
Ca(OH)2 + H4SiO4 → CaH2SiO4
Without the presence of dissolved NaOH or KOH, responsible for the high pH (~13.5) of the concrete pore water, the amorphous silica of the reactive aggregates would not be dissolved and the reaction would not evolve. Moreover, the soluble sodium or potassium silicate is very hygroscopic and swells when it absorbs water. When the sodium silicate gel forms and swells inside a porous siliceous aggregate, it first expands and occupies the free porosity. When this latter is completely filled, and if the soluble but very viscous gel cannot be easily expelled from the silica network, the hydraulic pressure rises inside the attacked aggregate and leads to its fracture. The hydro-mechanical expansion of the damaged siliceous aggregate surrounded by calcium-rich hardened cement paste is responsible for the development of a network of cracks in concrete. When the sodium silicate expelled from the aggregate encounters grains of portlandite present in the hardened cement paste, an exchange between sodium and calcium cations occurs and hydrated calcium silicate (C-S-H) precipitates with a concomitant release of NaOH. In its turn, the regenerated NaOH can react with the amorphous silica aggregate, leading to an increased production of soluble sodium silicate. When a continuous rim of C-S-H completely envelops the external surface of the attacked siliceous aggregate, it behaves as a semi-permeable barrier and hinders the expulsion of the viscous sodium silicate while allowing the NaOH / KOH to diffuse from the hardened cement paste inside the aggregate. This selective barrier of C-S-H contributes to increase the hydraulic pressure inside the aggregate and aggravates the cracking process. It is the expansion of the aggregates which damages concrete in the alkali-silica reaction.
Portlandite (Ca(OH)2) represents the main reserve of OH– anions in the solid phase as suggested by Davies and Oberholster (1988) and emphasized by Wang and Gillott (1991). As long as portlandite, or the siliceous aggregates, has not become completely exhausted, the ASR reaction will continue. The alkali hydroxides are continuously regenerated by the reaction of the sodium silicate with portlandite and thus represent the transmission belt of the ASR reaction driving it to completeness. It is thus impossible to interrupt the ASR reaction. The only way to avoid ASR in the presence of siliceous aggregates and water is to maintain the concentration of soluble alkali (NaOH and KOH) at the lowest possible level in concrete, so that the catalysis mechanism becomes negligible.
Analogy with the soda lime and concrete carbonatation
The alkali-silica reaction mechanism catalysed by a soluble strong base as NaOH or KOH in the presence of Ca(OH)2 (alkalinity buffer present in the solid phase) can be compared with the carbonatation process of soda lime. The silicic acid (H2SiO3 or SiO2) is simply replaced in the reaction by the carbonic acid (H2CO3 or CO2).
(1) CO2 + 2 NaOH → Na2CO3 + H2O (CO2 trapping by soluble NaOH)
(2) Na2CO3 + Ca(OH)2 → CaCO3 + 2 NaOH (regeneration of NaOH after reaction with lime)
sum (1+2): CO2 + Ca(OH)2 → CaCO3 + H2O (global reaction)
In the presence of water or simply ambient moisture, the strong bases, NaOH or KOH, readily dissolve in their hydration water (hygroscopic substances, deliquescence phenomenon), and this greatly facilitates the catalysis process because the reaction in aqueous solution occurs much faster than in the dry solid phase. The moist NaOH impregnates the surface and the porosity of calcium hydroxide grains with a high specific surface area. Soda lime is commonly used in closed-circuit diving rebreathers and in anesthesia systems.
The same catalytic effect of the alkali hydroxides (a function of the Na2Oeq content of cement) also contributes to the carbonatation of portlandite by atmospheric CO2 in concrete, although the rate of propagation of the reaction front is there essentially limited by the diffusion of CO2 within the less porous concrete matrix.
The soda lime carbonatation reaction can be directly translated into the ancient industrial notation of silicate (referring to the never observed metasilicic acid) simply by substituting a C atom by a Si atom in the mass balance equations (i.e., by replacing a carbonate by a metasilicate anion). This gives the following set of reactions also commonly encountered in the literature to schematically depict the continuous regeneration of NaOH in ASR:
(1) SiO2 + 2 NaOH → Na2SiO3 + H2O (SiO2 quickly dissolved by hygroscopic NaOH)
(2) Na2SiO3 + Ca(OH)2 → CaSiO3 + 2 NaOH (regeneration of NaOH after reaction with portlandite)
sum (1+2): SiO2 + Ca(OH)2 → CaSiO3 + H2O (global reaction resembling the pozzolanic reaction)
If NaOH is clearly deficient in the system under consideration (soda lime or alkali-silica reaction), it is formally possible to write the same reactions sets by simply replacing the CO32- anions by HCO3− and the SiO32- anions by HSiO3−, the principle of catalysis remaining the same, even if the number of intermediate species differs.
Main sources of OH⁻ in hardened cement paste
One can distinguish several sources of hydroxide anions (OH⁻) in hardened cement paste (HCP) from the family of Portland cement: pure ordinary Portland cement (OPC), or OPC blended with blast furnace slag (BFS) or with cementitious additions such as fly ash (FA) or silica fume (SF).
Direct sources
OH⁻ anions can be directly present in the HCP pore water or be slowly released from the solid phase (the main buffer, or solid stock) by the dissolution of Ca(OH)2 (portlandite), whose solubility increases when the high pH value starts to drop. Beside these two main sources, ion exchange reactions and the precipitation of poorly soluble calcium salts can also contribute to the release of OH⁻ into solution.
Alkali hydroxides, NaOH and KOH, arise from the direct dissolution of the Na2O and K2O oxides produced by the pyrolysis of the raw materials at high temperature (1450 °C) in the cement kiln. The presence of minerals with high Na and K contents in the raw materials can thus be problematic. The ancient wet manufacturing process of cement, consuming more energy (water evaporation) than the modern dry process, had the advantage of eliminating much of the soluble Na and K salts present in the raw material.
As previously described in the two sections dealing respectively with ASR catalysis by alkali hydroxides and soda lime carbonatation, soluble NaOH and KOH are continuously regenerated and released into solution when the soluble alkali silicate reacts with Ca(OH)2 to precipitate insoluble calcium silicate. As suggested by Davies and Oberholster (1988), the alkali-silica reaction is self-perpetuating, as the alkali hydroxides are continuously regenerated in the system. Therefore, portlandite is the main buffer of OH⁻ in the solid phase. As long as the stock of hydroxides in the solid phase is not exhausted, the alkali-silica reaction can continue to proceed until the complete disappearance of one of the reagents (Ca(OH)2 or reactive SiO2) involved in the pozzolanic reaction.
Indirect sources
There also exist other indirect sources of OH⁻, all related to the presence of soluble Na and K salts in the pore water of hardened cement paste (HCP).
The first category contains soluble Na and K salts whose corresponding anions can precipitate insoluble calcium salts, e.g., sulfate or carbonate, as illustrated hereafter.
Hereafter, an example for calcium sulfate (gypsum, anhydrite) precipitation releasing sodium hydroxide:
Na2SO4 + Ca(OH)2 → CaSO4 + 2 NaOH
or, the reaction of sodium carbonate with portlandite, also important for the catalysis of the alkali–carbonate reaction as emphasized by Fournier and Bérubé (2000) and Bérubé et al. (2005):
Na2CO3 + Ca(OH)2 → CaCO3 + 2 NaOH
However, not all Na or K soluble salts can precipitate insoluble calcium salts, such as, e.g., NaCl-based deicing salts:
2 NaCl + Ca(OH)2 ← CaCl2 + 2 NaOH
As calcium chloride is a soluble salt, the reaction cannot occur and the chemical equilibrium regresses to the left side of the reaction.
So, a question arises: can NaCl or KCl from deicing salts still possibly play a role in the alkali-silica reaction? Na⁺ and K⁺ cations in themselves cannot attack silica (the culprit is their counter-ion OH⁻), and soluble alkali chlorides cannot produce soluble alkali hydroxide by interacting with calcium hydroxide. So, does there exist another route to still produce hydroxide anions in the hardened cement paste (HCP)?
Beside portlandite, other hydrated solid phases are present in HCP. The main phases are the calcium silicate hydrates (C-S-H) (the "glue" in cement paste), calcium sulfo-aluminate phases (AFm and AFt, ettringite) and hydrogarnet. C-S-H phases are much less soluble (~10⁻⁵ M) than portlandite (CH) (~2.2 × 10⁻² M at 25 °C) and are therefore expected to play a negligible role in the release of calcium ions.
An anion-exchange reaction between chloride ions and the hydroxide anions contained in the lattice of some calcium aluminate hydrates (C-A-H), or related phases (C-A-S-H, AFm, AFt), is suspected to also contribute to the release of hydroxide anions into solution. The principal mechanism is schematically illustrated hereafter for C-A-H phases:
Cl⁻ + (C-A-H)–OH → (C-A-H)–Cl + OH⁻
As a simple, but robust, conclusion, the presence of soluble Na and K salts can also cause, by precipitation of poorly soluble calcium salts (with portlandite, CH) or by anion exchange reactions (with phases related to C-A-H), the release of OH⁻ anions into solution. Therefore, the presence of any salts of Na and K in cement pore water is undesirable, and the measurement of the Na and K elements is a good proxy (indicator) for the maximal concentration of OH⁻ in pore solution. This is why the total alkali equivalent content (Na2Oeq) of cement can simply rely on the measurements of Na and K (e.g., by ICP-AES, AAS, or XRF measurement techniques).
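The alkali equivalent is conventionally computed as Na2Oeq = %Na2O + 0.658 × %K2O, the factor 0.658 being the molar-mass ratio of Na2O to K2O; a minimal sketch in Python (the oxide percentages used are illustrative):

def na2o_equivalent(pct_na2o, pct_k2o):
    # 0.658 = M(Na2O)/M(K2O) = 61.98/94.20: expresses K2O as an equivalent mass of Na2O.
    return pct_na2o + 0.658 * pct_k2o

print(na2o_equivalent(0.20, 0.55))  # 0.56% Na2Oeq, just below the 0.60% low-alkali limit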
Alkali gel evolution and ageing
The maturation process of the fluid alkali silicagel found in exudations into less soluble solid products found in gel pastes or in efflorescences is described hereafter. Four distinct steps are considered in this progressive transformation.
1. dissolution and formation of the young alkali gel (here explicitly written in the ancient industrial metasilicate notation, based on the non-existing metasilicic acid H2SiO3, to also illustrate its frequent use in the literature):
2 NaOH + SiO2 → Na2SiO3 · H2O (young N-S-H gel)
This reaction is accompanied by hydration and swelling of the alkali gel leading to the expansion of the affected aggregates. The pH of the fresh alkali gel is very high and it has often a characteristic amber color. The high pH of young alkali gel exudations often precludes the growth of mosses at the surface of concrete crack infilling.
2. Maturation of the alkali gel: polymerisation and gelation by the sol–gel process. Condensation of silicate monomers or oligomers dispersed in a colloidal solution (sol) into a biphasic aqueous polymeric network of silicagel. Divalent Ca²⁺ cations released by calcium hydroxide (portlandite) when the pH starts to slightly drop may influence the gelation process.
3. Cation exchange with calcium hydroxide (portlandite) and precipitation of amorphous calcium silicate hydrates (C-S-H) accompanied by NaOH regeneration:
Na2SiO3 + Ca(OH)2 → CaSiO3 + 2 NaOH
Amorphous non-stoichiometric calcium silicate hydrates (C-S-H, the non-stoichiometry being denoted here by the use of dashes) can recrystallize into rosettes similar to those of gyrolite. The C-S-H formed at this stage can be considered an evolved calcium silicate hydrate.
4. Carbonation of the C-S-H leading to the precipitation of calcium carbonate and amorphous SiO2, stylized as follows:
CaSiO3 + CO2 → CaCO3 + SiO2
As long as the alkali gel (Na2SiO3) has not yet reacted with the Ca²⁺ ions released from portlandite dissolution, it remains fluid and can easily exude from broken aggregates or through open cracks in the damaged concrete structure. This can lead to visible yellow viscous liquid exudations (amber liquid droplets) at the surface of affected concrete.
When the pH slowly drops due to the progress of the silica dissolution reaction, the solubility of calcium hydroxide increases, and the alkali gel reacts with Ca²⁺ ions. Its viscosity increases due to the gelation process, and its mobility (fluidity) strongly decreases when C-S-H phases start to precipitate after reaction with calcium hydroxide (portlandite). At this moment, the calcified gel hardens, thereby hindering alkali gel transport in concrete.
When the C-S-H gel is exposed to atmospheric carbon dioxide, it undergoes rapid carbonation, and white or yellow efflorescences appear at the surface of concrete. When the relatively fluid alkali gel continues to exude below the hardened superficial gel layer, it pushes the efflorescences out of the crack surface, making them appear in relief. Because the rates of gel drying and carbonation are faster than the gel exudation velocity (the rate at which liquid gel is expelled through open cracks), fresh liquid alkali exudates are in most cases not frequently encountered at the surface of civil engineering concrete structures. Decompressed concrete cores can, however, sometimes reveal fresh yellow liquid alkali exudations (viscous amber droplets) just after their drilling.
Mechanism of concrete deterioration
The mechanism of ASR causing the deterioration of concrete can thus be described in four steps as follows:
The very basic solution (NaOH / KOH) attacks the siliceous aggregates (silicic acid dissolution at high pH), converting the poorly crystallised or amorphous silica to a soluble but very viscous alkali silicate gel (N-S-H, K-S-H).
The consumption of NaOH / KOH by the dissolution reaction of amorphous silica decreases the pH of the pore water of the hardened cement paste. This allows the dissolution of Ca(OH)2 (portlandite) and increases the concentration of Ca2+ ions into the cement pore water. Calcium ions then react with the soluble sodium silicate gel to convert it into solid calcium silicate hydrates (C-S-H). The C-S-H forms a continuous poorly permeable coating at the external surface of the aggregate.
The penetrated alkaline solution (NaOH / KOH) converts the remaining siliceous minerals into bulky soluble alkali silicate gel. The resulting expansive pressure increases in the core of the aggregate.
The accumulated pressure cracks the aggregate and the surrounding cement paste when the pressure exceeds the tolerance of the aggregate.
Structural effects of ASR
The cracking caused by ASR can have several negative impacts on concrete, including:
Expansion: The swelling nature of ASR gel increases the chance of expansion in concrete elements.
Compressive strength: The effect of ASR on compressive strength can be minor for low expansion levels, but higher at larger expansions. Swamy and Al-Asali (1986) point out that compressive strength is not a very accurate parameter to study the severity of ASR; however, the test is done because of its simplicity.
Tensile strength / Flexural capacity: Research shows that ASR cracking can significantly reduce the tensile strength of concrete, thereby reducing the flexural capacity of beams. Some research on bridge structures indicates about 85% loss of capacity as a result of ASR.
Modulus of elasticity/UPV: The effect of ASR on elastic properties of concrete and ultrasound pulse velocity (UPV) is very similar to tensile capacity. The modulus of elasticity is shown to be more sensitive to ASR than pulse velocity.
Fatigue: ASR reduces the load bearing capacity and the fatigue life of concrete.
Shear strength: ASR enhances the shear capacity of reinforced concrete with and without shear reinforcement (Ahmed et al., 2000).
Mitigation
ASR can be mitigated in new concrete by several approaches:
Limit the alkali metal content of the cement. Many standards impose limits on the "Equivalent Na2O" content of cement.
Limit the reactive silica content of the aggregate. Certain volcanic rocks are particularly susceptible to ASR because they contain volcanic glass (obsidian) and should not be used as aggregate. The use of calcium carbonate aggregates can avoid this, although, in principle, for limestone (CaCO3) the level of silica depends on its purity. Some siliceous limestones (a.o., the Kieselkalk found in Switzerland) may be cemented by amorphous or poorly crystalline silica and can be very sensitive to the ASR reaction, as also observed with some Tournaisian siliceous limestones exploited in quarries in the area of Tournai in Belgium. The use of limestone as aggregate is not a guarantee against ASR in itself. In Canada, the Spratt siliceous limestone is also particularly well known in studies dealing with ASR and is commonly used as the Canadian ASR reference aggregate.
Add very fine siliceous materials to neutralize the excessive alkalinity of cement with silicic acid by a controlled pozzolanic reaction at the early stage of the cement setting. Pozzolanic materials to add to the mix may be, e.g., pozzolan, silica fume, fly ash, or metakaolin. These react preferentially with the cement alkalis without formation of an expansive pressure, because siliceous minerals in fine particles convert to alkali silicate and then to calcium silicate without formation of semipermeable reaction rims.
Limit the external alkalis that come in contact with the system.
A prompt reaction initiated at the early stage of concrete hardening on very fine silica particles will help to suppress a slow and delayed reaction with larger siliceous aggregates in the long term. Following the same principle, the fabrication of low-pH cement also implies the addition of finely divided pozzolanic materials rich in silicic acid to the concrete mix to decrease its alkalinity. Beside initially lowering the pH value of the concrete pore water, the main working mechanism of silica fume addition is to consume portlandite (the reservoir of hydroxide (OH⁻) in the solid phase) and to decrease the porosity of the hardened cement paste by the formation of calcium silicate hydrates (C-S-H). However, silica fume has to be very finely dispersed in the concrete mix, because agglomerated flakes of compacted silica fume can themselves also induce ASR if the dispersion process is insufficient. This can be the case in laboratory studies made on cement pastes alone in the absence of aggregates. Silica fume is usually sufficiently dispersed during the mixing operations of large batches of fresh concrete by the presence of coarse and fine aggregates.
As part of a study conducted by the Federal Highway Administration, a variety of methods have been applied to field structures suffering from ASR-affected expansion and cracking. Some methods, such as the application of silanes, have shown significant promise, especially when applied to elements such as small columns and highway barriers. The topical application of lithium compounds has shown little or no promise in reducing ASR-induced expansion and cracking.
Curative treatment
There are in general no curative treatments for ASR-affected structures. Repair of damaged sections is possible, but the reaction will continue. In some cases, when a sufficient drying of thin components (walls, slabs) of a structure is possible, and is followed by the installation of a watertight membrane, the evolution of the reaction can be slowed down, and sometimes stopped, due to the lack of water needed to continue fueling the reaction. Indeed, water plays a triple role in the alkali-silica reaction: solvent for the reaction taking place, transport medium for the dissolved reacting species, and finally also a reagent consumed by the reaction itself.
However, concrete at the center of thick concrete components or structures can never dry, because water transport in saturated or unsaturated conditions is always limited by diffusion in the concrete pores (water present in liquid form or as vapor). The water diffusion time is thus proportional to the square of its transport distance. As a consequence, the water saturation degree inside thick concrete structures often remains higher than 80%, a level sufficient to provide enough water to the system and to keep the alkali-silica reaction going.
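This quadratic scaling can be illustrated with the characteristic diffusion time t ≈ x²/D; the moisture diffusivity used below is an assumed order of magnitude for concrete, not a measured property:

SECONDS_PER_YEAR = 3.156e7

def drying_time_years(distance_m, diffusivity_m2_s=1e-12):
    # Characteristic diffusion time t ~ x^2 / D, as an order-of-magnitude estimate.
    return distance_m ** 2 / diffusivity_m2_s / SECONDS_PER_YEAR

for x in (0.05, 0.5, 2.0):
    print(f"{x} m: ~{drying_time_years(x):,.0f} years")
# Doubling the drying depth quadruples the time; thick members effectively never dry.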
Massive structures such as dams pose particular problems: they cannot be easily replaced, and the swelling can block spillway gates or turbine operations. Cutting slots across the structure can relieve some pressure, and help restore geometry and function.
Heavy aggregates for nuclear shielding concrete
Two types of heavy aggregates are commonly used for nuclear shielding concrete in order to efficiently absorb gamma-rays: baryte (BaSO4, density = 4.3–4.5) and various types of iron oxides, mainly magnetite (Fe3O4, density = 5.2) and hematite (Fe2O3, density = 5.3). The reason is their high density, favorable to gamma attenuation. Both types of aggregates need to be checked for ASR, as they may contain reactive silica impurities in one form or another.
As elevated temperatures may be reached in the concrete of the primary confinement wall around nuclear reactors, particular attention has to be paid to the selection of aggregates and heavy aggregates to avoid alkali-silica reaction promoted by reactive silica impurities and accelerated by the high temperature to which the concrete is exposed.
In some hydrothermal deposits, baryte is associated with silica mineralization and can also contain reactive cristobalite, while oxy-hydroxides of Fe(III), in particular ferrihydrite, exhibit a strong affinity for dissolved silica present in water and may constitute an ultimate sink for it.
This explains how microcrystalline silica can progressively accumulate in the mineral gangue of iron oxides.
Dissolved silica (Si(OH)4), and its corresponding silicate anion (SiO(OH)3⁻), strongly sorb onto hydrous ferric oxides (HFO) and the hydrated surface of ferric oxides (>Fe–OH) by ligand exchange:
SiO(OH)3⁻ + >Fe–OH → >Fe–O–Si(OH)3 + OH⁻
In this ligand exchange reaction, a silicate anion (also often more simply written as H3SiO4⁻) makes a nucleophilic substitution onto a >Fe–OH ferrol surface group of HFO and ejects a hydroxide anion while taking its place on the ferrol group. This mechanism explains the formation of strong inner-sphere complexes of silica at the surface of iron oxy-hydroxides and iron oxides. The surface of iron oxides becomes progressively coated with silica, and a silica gangue forms at the surface of iron oxide ores. This explains why some iron ores are rich in silica and may therefore be sensitive to the alkali-silica reaction. Very low levels of reactive silica in heavy aggregates are sufficient to induce ASR. This is why heavy aggregates must be systematically tested for ASR before nuclear applications such as radiation shielding or the immobilization of strongly irradiating radioactive waste.
Another reason for concern about the possible accelerated development of ASR in the concrete of nuclear structures is the progressive amorphization of the silica contained in aggregates exposed to high neutron fluence. This process is also known as metamictization and is known to create amorphous halos in minerals like zircon rich in uranium and thorium, whose crystal structure becomes amorphous (metamict state) when submitted to intense internal bombardment by alpha particles.
The loss of mechanical properties of heavily neutron-irradiated concrete component such as the biological shield of a reactor at the end of the service life of a nuclear power plant is expected to be due to radiation-induced swelling of aggregates, which leads to volumetric expansion of the concrete.
Prevention of the risk
The only way to prevent, or to limit, the risk of ASR is to avoid one or several of the three elements in the critical triangle aggregate reactivity – cement alkali content – water:
by selecting non-reactive aggregates after testing them according to an appropriate standard test method (see next section);
by using a low-alkali (LA) cement, with a maximum alkali content expressed as Na2Oeq < 0.60% of the cement mass according to the EN 197-1 European standard for cement, or by limiting the total alkali content in concrete (e.g., less than 3 kg Na2Oeq/m³ of concrete for a CEM I cement (OPC), as illustrated in the sketch after this list). Example of standard for concrete in Belgium: NBN EN 206 and its national supplement NBN B 15-001;
by limiting the contact of underground or meteoritic water infiltrations with the concrete structure (water tight membrane, roofing, sufficient water drainage, ...). This last precaution is always advisable when possible and the only one also sometimes applicable for existing ASR-affected concrete structures.
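A minimal check of the 3 kg Na2Oeq/m³ guideline mentioned in the list above; the cement dosages and alkali contents are illustrative values:

def alkali_loading_kg_m3(cement_kg_m3, na2o_eq_pct):
    # Total alkali load contributed by the cement, in kg Na2Oeq per m3 of concrete.
    return cement_kg_m3 * na2o_eq_pct / 100.0

print(alkali_loading_kg_m3(350, 0.60))  # 2.1 kg/m3, below the 3 kg/m3 guideline
print(alkali_loading_kg_m3(450, 0.90))  # 4.05 kg/m3, above the guideline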
Methods for testing potential alkali reactivity
The American Society for Testing and Materials (ASTM International) has developed different standardized test methods for screening aggregates for their susceptibility to ASR:
ASTM C227: "Test Method for Potential Alkali Reactivity of Cement-Aggregate Combinations (Mortar-Bar Method)"
ASTM C289: "Standard Test Method for Potential Alkali-Silica Reactivity of Aggregates (Chemical Method)"
ASTM C295: "Guide for Petrographic Examination of Aggregate for Concrete"
ASTM C1260: "Test Method for Potential Reactivity of Aggregates (Mortar-Bar-Test)". It is a rapid test of aggregates: immersion of mortar bars in NaOH 1 M at 80 °C for 14 days used to quickly identify highly reactive aggregates or quasi non-reactive aggregates. Beside an elevated temperature, the C1260 method also involves the use of a large quantity/inventory of NaOH in the solution in which the mortar bar is immersed. A large pool of OH– anions is thus available to diffuse inside the mortar bar to dissolve silica present in aggregates. Consequently, this test is very severe and may exclude valuable aggregates. In case of non-decisive results, the long-term ASTM C1293 test method has to be used for a final screening. The main advantage of the ASTM C1260 test is that it allows to quickly identify extreme cases: very insensitive or very reactive aggregates.
ASTM C1293: "Test Method for Concrete Aggregates by Determination of Length Change of Concrete Due to Alkali-Silica Reaction". It is a long-term confirmation test (1 or 2 years) at 38 °C in a water-saturated moist atmosphere (inside a thermostated oven) with concrete prisms containing the aggregates to be characterised mixed with a high-alkali cement specially selected to induce ASR. The concrete prisms are not directly immersed in an alkaline solution, but wrapped with moist tissues and tightly packed inside a water-tight plastic foils.
ASTM C1567: "Standard Test Method for Determining the Potential Alkali-Silica Reactivity of Combinations of Cementitious Materials and Aggregate (Accelerated Mortar-Bar Method)"
Other concrete prism methods have also been internationally developed to detect potential alkali-reactivity of aggregates or sometimes hardened concrete cores, e.g.:
The Oberholster method on which the ASTM C1260 test is based. It is a severe short duration test with immersion of the mortar prism or concrete core in a solution of NaOH 1 M at 80 °C for 14 days.
The Duggan method starts with a first immersion of several concrete cores in distilled water at 22 °C for rehydration during 3 days. It is then followed by heating for one day in a dry oven at 82 °C and then with a succession of cycles of one day hydration followed by one day drying at 82 °C. The expansion of the concrete cores is measured till 14 or 20 days. It is a short duration test for ASR/AAR but much softer than the Oberholster test. It can also be used to measure the expansion of concrete due to delayed ettringite formation (DEF). The mechanical stresses induced by the thermal cycles create micro-cracks in the concrete matrix and so facilitate the accessibility to water of the reactive mineral phases in the treated samples.
The concrete microbar test was proposed by Grattan-Bellew et al. (2003) as a universal accelerated test for alkali-aggregate reaction.
CSA A23.1-14A and CSA A23.2-14A: Canadian CSA standard concrete prism tests for potential expansivity of cement/aggregate combinations. CSA A23.2-14A is a long-term test in which concrete prisms are stored under saturated moist conditions at a temperature of 38 °C, for a minimum of 365 days. It is the Canadian standard equivalent to ASTM C1293.
LCPC/IFSTTAR (1997) LPC-44. Alkali reaction in concrete. Residual expansion tests on hardened concrete.
RILEM AAR-3 concrete prism method (storage at 38 °C).
RILEM AAR-4 concrete prism method (storage at 60 °C).
RILEM AAR-4 alternative method (storage at 60 °C).
German concrete test method (storage at 40 °C).
Norwegian concrete prism method (storage at 38 °C).
Known affected structures
Australia
Adelaide Festival Centre car park, demolished in 2017
Centennial Hall, Adelaide (1936–2007)
Dee Why ocean pool, Dee Why, New South Wales.
King St Bridge, demolished and replaced in 2011 (crossing the Patawalonga River, Glenelg North, South Australia).
Manly Surf Pavilion, Manly, New South Wales (1939–1981).
The MCG's old Southern Stand, demolished in 1990 and replaced with the Great Southern Stand which was completed in 1992
Westpoint Blacktown car park
Belgium
Many bridges and civil engineering works of motorways, because of the improper use of highly reactive siliceous Tournaisian limestone (lower Carboniferous, Dinantian) during the 1960s and 1970s, when most of the motorways were constructed in Belgium. ASR damage started to be recognised only in the 1980s. The Tournaisian limestone may contain up to 25–30 wt. % of reactive biogenic silica originating from the spicules of siliceous sponges deposited with calcium carbonate in the marine sediments.
Pommeroeul lock in Hainaut on the canal Hensies – Pommeroeul – Condé.
Tour & Taxis car access ramp in Brussels with liquid exudations of amber alkali gel evidenced on concrete cores by SPW experts (Public Services of Wallonia).
External containment building of the Tihange 2 nuclear power plant.
Poorly conditioned radioactive waste from the Doel nuclear power plant: evaporator concentrates and spent ion-exchange resins (SIER) exuding very large quantities of liquid sodium silicagel (mainly N-S-H) out of the concrete immobilization matrix.
Canada
Alkali-aggregate reactions (AAR), both alkali-silica (ASR) and alkali-carbonate (ACR, involving dolomite) reactions, have been identified in Canada since the 1950s.
Many hydraulic dams are affected by ASR in Canada because of the wide use of reactive aggregates. Indeed, reactive frost-sensitive chert is very often found in glacio-fluvial environments from which gravels are commonly extracted in Canada. Another reason is also the presence of reactive silica in Paleozoic limestones like the siliceous Ordovician limestone (Bobcaygeon Formation) from the Spratt's quarry near Ottawa in Ontario. The Spratt's limestone aggregates (from the company "Spratt Sand and Gravel Limited") are widely used for ASR studies in Canada and worldwide as described by Rogers et al. (2000) and also recommended by RILEM (International Union of Laboratories and Experts in Construction Materials, Systems, and Structures).
Many bridges and civil engineering works of motorways.
Interchange Robert Bourassa – Charest (Québec city: interchange autoroutes 740 – 440) demolished in 2010.
Gentilly 2 nuclear power plant.
Building of the National Gallery of Canada at Ottawa.
Mactaquac Dam
France
Former Térénez bridge in Brittany, built in 1951 and replaced in 2011.
Germany
East German Deutsche Reichsbahn used numerous concrete ties in the 1970s to replace previous wooden ties. However, the gravel from the Baltic Sea caused ASR and the ties had to be replaced earlier than planned, lasting well into the 1990s.
After reunification, many Autobahns in East Germany were refurbished with concrete that turned out to have been defective and affected by ASR, necessitating expensive replacement work.
New Zealand
Fairfield Bridge in Hamilton, New Zealand. Repaired in 1991 at a cost of NZ$1.1 million.
United Kingdom
Keybridge House, South Lambeth Road, Vauxhall, London, England.
Millennium Stadium North Stand (part of the old National Stadium), Cardiff, Wales.
Merafield Bridge, A38, England. Demolished manually in 2016.
Pebble Mill Studios, Birmingham. Demolished in 2005.
Royal Devon and Exeter Hospital, Wonford. Demolished and replaced in the mid-1990s.
Steve Bull Stand, Molineux Stadium, Wolverhampton
United States
Chickamauga Dam in Tennessee.
Kauffman Stadium in Missouri.
Seabrook Station Nuclear Power Plant in Seabrook, New Hampshire.
Seminoe Dam in Wyoming.
Sixth Street Viaduct in Los Angeles. Demolished in 2016.
See also
Alkali-carbonate reaction
Alkali–aggregate reaction
Calthemite: Secondary calcium carbonate deposit growing under man-made structures
Carbonatation
Colloidal silica
Construction aggregate
Cracking pattern
Crocodile cracking: distress in asphalt pavement characterized by interconnecting or interlaced cracking in the asphalt layer
Energetically modified cement (EMC)
Gyrolite, a product of slag hydration and ASR gel ageing
Hydrated silica
Pozzolanic reaction
Silicate: see solid SiO2 hydrolysis/dissolution and Si–OH deprotonation reactions at high pH
Siliceous sponge
Soda lime: the mechanism of ASR catalysed by NaOH is analogous to the trapping mechanism of CO2 by Ca(OH)2 impregnated with NaOH
References
Further reading
External links
Building materials
Catalysis
Cement
Inorganic reactions
Concrete degradation
Fracture mechanics
Mechanical failure modes
Patterns
Pavements
Silicates | Alkali–silica reaction | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 9,788 | [
"Catalysis",
"Structural engineering",
"Mechanical failure modes",
"Fracture mechanics",
"Concrete degradation",
"Building engineering",
"Technological failures",
"Inorganic reactions",
"Materials science",
"Architecture",
"Construction",
"Materials degradation",
"Materials",
"Chemical kin... |
15,245,562 | https://en.wikipedia.org/wiki/ZNF330 | Zinc finger protein 330 is a protein that in humans is encoded by the ZNF330 gene.
References
Further reading
External links
Transcription factors | ZNF330 | [
"Chemistry",
"Biology"
] | 30 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,245,579 | https://en.wikipedia.org/wiki/KCNMB4 | Calcium-activated potassium channel subunit beta-4 is a protein that in humans is encoded by the KCNMB4 gene.
MaxiK channels are large-conductance, voltage- and calcium-sensitive potassium channels which are fundamental to the control of smooth muscle tone and neuronal excitability. MaxiK channels can be formed by 2 subunits: the pore-forming alpha subunit and the modulatory beta subunit. The protein encoded by this gene is an auxiliary beta subunit which slows activation kinetics, leads to steeper calcium sensitivity, and shifts the voltage range of current activation to more negative potentials than does the beta 1 subunit.
See also
BK channel
Voltage-gated potassium channel
References
Further reading
Ion channels | KCNMB4 | [
"Chemistry"
] | 146 | [
"Neurochemistry",
"Ion channels"
] |
15,245,634 | https://en.wikipedia.org/wiki/Basic%20leucine%20zipper%20and%20W2%20domain-containing%20protein%202 | Basic Leucine Zipper and W2 Domain-Containing Protein 2 is a protein that is encoded by the BZW2 gene. It is a eukaryotic translation factor with orthologs found in organisms as distant as bacteria. In animals, it is localized in the cytoplasm and expressed ubiquitously throughout the body. The heart, placenta, skeletal muscle, and hippocampus show higher expression. In various cancers, its upregulation tends to lead to higher severity and mortality. It has been found to interact with SARS-CoV-2.
Gene
BZW2 is known as Basic Leucine Zipper W2 Domain-Containing Protein 2, MST017, MSTP017, 5MP1, Eukaryotic Translation Factor 5, and HSPC028. It is located on chromosome 7 at p21.1 on the plus strand. The gene spans 60,389 base pairs, at coordinates 16,583,248 – 16,804,999. There are 12 exons.
Protein
There are two known isoforms of BZW2. Isoform 1 is 419 amino acids long and is the most abundant form. Isoform 2 is 225 amino acids long, containing only 11 exons and a shorter N-terminus.
The coded protein is 419 amino acids long and weighs 48.3 kDa. As described in the name, the protein contains a leucine-zipper motif. Four "L-x(6)" repeats (a leucine every seventh residue) are present near the beginning of the sequence, giving rise to the characteristic leucine-zipper helix within the 3D structure. An eIF5C domain follows the leucine motif; this domain is found in proteins that are important for the strict regulation of cellular processes.
The amino acid composition of BZW2 shows a higher proportion of lysines and a lower proportion of prolines in humans, while its orthologs show a higher proportion of glutamic acid. The human BZW2 protein has an overall charge of -3, which can go down to -9 in orthologs. There are no significant charge clusters. There is also a KELQ repeat that has remained conserved in animals.
The secondary structure contains a majority of alpha helices. All orthologs share 19 alpha helices; some orthologs additionally contain two beta sheets that are absent in humans. The tertiary structure forms a repeated fold of alpha helices, a structure that is conserved through bacteria.
Regulation
Gene-level
There are three known promoters for BZW2. It is regulated by numerous transcription factors, including estrogen receptor transcription factors (ESR2, ES3), a leucine zipper transcription factor (RRFIP1), and the Y sex-determining transcription factor (SRY). Through these transcription factors, BZW2 expression is regulated in specific organs; for example, the Y sex-determining transcription factor regulates BZW2 expression in the testis. Throughout the body, BZW2 is ubiquitously expressed within tissues, with elevated mRNA abundance in the heart, placenta, and skeletal muscle.
Transcript-level
There are four major stem loops in the 5’ untranslated and four in the 3’ untranslated region that function in transcript-level regulation.
Protein-level
BZW2 has multiple phosphorylation, acetylation, glycosylation, SUMOylation, and glycation sites for regulation. Since upregulation of BZW2 leads to disrupted cellular processes and severe cancer forms, post-translational modifications are needed to keep the gene highly regulated. The protein is localized within the cytoplasm and has no likely or confirmed nuclear or mitochondrial target peptides.
Evolution
BZW2 has a single paralog, BZW1, which is conserved up to plants. BZW2 orthologs are found as far away as a few species of bacteria; the most distant ortholog is Microbacterium arborescens. BZW2 contains an eIF5C domain, which is also present in eIF2BE, eIF4G, eIF5, and a GAP protein specific for eIF2.
Compared to cytochrome c, a slowly diverging protein, and fibrinogen, a quickly diverging protein, BZW2 has shown slow corrected divergence over time, illustrating conservation and protein importance.
Interactions
BZW2 is known to interact with:
BZW1
EIF2S2
PSTPIP1
NEK4
ORF4
SNW1
rep
EIF2S2 and ORF4 work to synthesize and replicate BZW2. PSTPIP1 and NEK4 are regulatory proteins that help in the functionality of BZW2. SNW1, a spliceosome protein, splices BZW2 mRNA variants. The protein rep is part of the SARS-CoV-2 virus and inhibits translation of BZW2.
Clinical significance
Cancer
BZW2 has been studied to determine its role in multiple cancers. Overall, the studies all showed that upregulation of BZW2 leads to more severe forms of cancer, a higher rate of mortality, and an increased likelihood of recurrence.
A 2019 study focused on the effect of BZW2 in colorectal cancer. It found that upregulation of BZW2 promoted tumor growth and had a downstream upregulation effect on c-Myc, a proto-oncogene. A second study from 2020 determined this upregulation also had a positive effect on the activation of the ERK/MAPK pathway.
In hepatocellular carcinoma, osteosarcoma, lung adenocarcinoma, and muscle-invasive bladder cancer, overexpression of BZW2 led to overactivation of the AKT/mTOR signaling pathway by increasing phosphorylation of AKT and mTOR. The AKT/mTOR pathway is an important intracellular signaling pathway that regulates the cell cycle. When the pathway activity is increased, cells proliferate at a higher rate and apoptosis decreases, leading to tumor growth.
SARS-CoV-2
BZW2 interacts with the nsp8 protein of SARS-CoV-2. nsp8 dimerizes and forms a supercomplex which works to repress the translation of BZW2.
References
Further reading
External links
Proteins | Basic leucine zipper and W2 domain-containing protein 2 | [
"Chemistry"
] | 1,341 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
15,249,674 | https://en.wikipedia.org/wiki/Bilinear%20time%E2%80%93frequency%20distribution | Bilinear time–frequency distributions, or quadratic time–frequency distributions, arise in a sub-field of signal analysis and signal processing called time–frequency signal processing, and in the statistical analysis of time series data. Such methods are used where one needs to deal with a situation where the frequency composition of a signal may be changing over time; this sub-field used to be called time–frequency signal analysis, and is now more often called time–frequency signal processing due to the progress in applying these methods to a wide range of signal-processing problems.
Background
Methods for analysing time series, in both signal analysis and time series analysis, have been developed as essentially separate methodologies applicable to, and based in, either the time or the frequency domain. A mixed approach is required in time–frequency analysis techniques, which are especially effective in analyzing non-stationary signals, whose frequency distribution and magnitude vary with time. Examples of these are acoustic signals. Classes of "quadratic time-frequency distributions" (or "bilinear time–frequency distributions") are used for time–frequency signal analysis. This class is similar in formulation to Cohen's class distribution function, which was used in 1966 in the context of quantum mechanics. This distribution function is mathematically similar to a generalized time–frequency representation which utilizes bilinear transformations. Compared with other time–frequency analysis techniques, such as the short-time Fourier transform (STFT), the bilinear transformation (or quadratic time–frequency distributions) may not have higher clarity for most practical signals, but it provides an alternative framework to investigate new definitions and new methods. While it does suffer from an inherent cross-term contamination when analyzing multi-component signals, by using carefully chosen window functions the interference can be significantly mitigated, at the expense of resolution. All these bilinear distributions are inter-convertible to each other, cf. transformation between distributions in time–frequency analysis.
Wigner–Ville distribution
The Wigner–Ville distribution is a quadratic form that measures a local time-frequency energy given by:

P_V f(u, ξ) = ∫ f(u + τ/2) f*(u − τ/2) e^(−iξτ) dτ

The Wigner–Ville distribution remains real as it is the Fourier transform of f(u + τ/2)·f*(u − τ/2), which has Hermitian symmetry in τ. It can also be written as a frequency integration by applying the Parseval formula:

P_V f(u, ξ) = (1/2π) ∫ f̂(ξ + γ/2) f̂*(ξ − γ/2) e^(iγu) dγ
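To make the definition concrete, here is a minimal numerical sketch of a discrete Wigner–Ville distribution in Python. The function name and all implementation choices are illustrative rather than canonical, and the input is assumed to be an analytic signal sampled at N points:

```python
import numpy as np

def wigner_ville(x):
    """Discrete (pseudo) Wigner-Ville distribution of an analytic signal.

    Returns an (N, N) real array W[n, k]: time index n, frequency bin k.
    Illustrative sketch, not an optimized or canonical routine.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N), dtype=complex)
    for n in range(N):
        # Largest half-lag keeping both n + tau and n - tau inside the record.
        tau_max = min(n, N - 1 - n)
        taus = np.arange(-tau_max, tau_max + 1)
        # Sampled instantaneous autocorrelation f(u + tau/2) f*(u - tau/2).
        r = x[n + taus] * np.conj(x[n - taus])
        buf = np.zeros(N, dtype=complex)
        buf[taus % N] = r           # negative lags wrap around the buffer
        W[n] = np.fft.fft(buf)      # Fourier transform over the lag variable
    return W.real                   # Hermitian symmetry in tau makes W real

# Example: a linear chirp concentrates along its instantaneous frequency.
n = np.arange(256)
chirp = np.exp(1j * np.pi * 0.001 * n ** 2)
W = wigner_ville(chirp)
```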
Proposition 1. For any f in L2(R),

(1/2π) ∬ P_V f(u, ξ) du dξ = ‖f‖²
Moyal Theorem. For f and g in L2(R),

|∫ f(t) g*(t) dt|² = (1/2π) ∬ P_V f(u, ξ) P_V g(u, ξ) du dξ
Proposition 2 (time-frequency support). If f has a compact support, then for all ξ the support of P_V f(u, ξ) along u is equal to the support of f. Similarly, if f̂ has a compact support, then for all u the support of P_V f(u, ξ) along ξ is equal to the support of f̂.
Proposition 3 (instantaneous frequency). If f(t) = a(t) e^(iφ(t)), then

φ′(u) = ( ∫ ξ P_V f(u, ξ) dξ ) / ( ∫ P_V f(u, ξ) dξ )
Interference
Let f = f1 + f2 be a composite signal. We can then write

P_V f = P_V f1 + P_V f2 + P_V[f1, f2] + P_V[f2, f1]

where

P_V[h, g](u, ξ) = ∫ h(u + τ/2) g*(u − τ/2) e^(−iξτ) dτ

is the cross Wigner–Ville distribution of two signals. The interference term

I[f1, f2] = P_V[f1, f2] + P_V[f2, f1]
is a real function that creates non-zero values at unexpected locations (close to the origin) in the (u, ξ) plane. Interference terms present in a real signal can be avoided by computing its analytic part.
Positivity and smoothing kernel
The interference terms are oscillatory since the marginal integrals vanish, and can be partially removed by smoothing with a kernel θ:

P_θ f(u, ξ) = ∬ P_V f(u′, ξ′) θ(u, u′, ξ, ξ′) du′ dξ′

The time-frequency resolution of this distribution depends on the spread of the kernel θ in the neighborhood of (u, ξ). Since the interferences take negative values, one can guarantee that all interferences are removed by imposing that

P_θ f(u, ξ) ≥ 0 for all (u, ξ) and all f in L2(R).
The spectrogram and scalogram are examples of positive time-frequency energy distributions. Let a linear transform be defined over a family of time-frequency atoms {φ_γ}. For any (u, ξ) there exists a unique atom φ_γ(u,ξ) centered in time-frequency at (u, ξ). The resulting time-frequency energy density is

P f(u, ξ) = |⟨f, φ_γ(u,ξ)⟩|²

From the Moyal formula,

P f(u, ξ) = (1/2π) ∬ P_V f(u′, ξ′) P_V φ_γ(u,ξ)(u′, ξ′) du′ dξ′

which is the time frequency averaging of a Wigner–Ville distribution. The smoothing kernel thus can be written as

θ(u, u′, ξ, ξ′) = (1/2π) P_V φ_γ(u,ξ)(u′, ξ′)

The loss of time-frequency resolution depends on the spread of the distribution P_V φ_γ(u,ξ)(u′, ξ′) in the neighborhood of (u, ξ).
Example 1
A spectrogram computed with windowed Fourier atoms g_(u,ξ)(t) = g(t − u) e^(iξt):

P_S f(u, ξ) = |⟨f, g_(u,ξ)⟩|²
For a spectrogram, the Wigner–Ville averaging is therefore a 2-dimensional convolution with P_V g, the Wigner–Ville distribution of the window g. If g is a Gaussian window, P_V g is a 2-dimensional Gaussian. This proves that averaging with a sufficiently wide Gaussian defines a positive energy density. The general class of time-frequency distributions obtained by convolving P_V f with an arbitrary kernel θ is called a Cohen's class, discussed below.
Wigner Theorem. There is no positive quadratic energy distribution Pf that satisfies the following time and frequency marginal integrals:

∫ Pf(u, ξ) dξ = 2π |f(u)|²,  ∫ Pf(u, ξ) du = |f̂(ξ)|²
Mathematical definition
The definition of Cohen's class of bilinear (or quadratic) time–frequency distributions is as follows:

C_x(t, f) = ∬ A_x(η, τ) Φ(η, τ) e^(j2π(ηt − τf)) dη dτ

where A_x(η, τ) is the ambiguity function (AF), which will be discussed later, and Φ(η, τ) is Cohen's kernel function, which is often a low-pass function and normally serves to mask out the interference. In the original Wigner representation, Φ = 1.

An equivalent definition relies on a convolution of the Wigner distribution function (WD) instead of the AF:

C_x(t, f) = ∬ W_x(θ, ν) Π(t − θ, f − ν) dθ dν

where the kernel function Π(t, f) is defined in the time-frequency domain instead of the ambiguity one. In the original Wigner representation, Π = δ(t)δ(f). The relationship between the two kernels is the same as the one between the WD and the AF, namely two successive Fourier transforms (cf. diagram),

Π(t, f) = ∬ Φ(η, τ) e^(j2π(ηt − τf)) dη dτ

or equivalently

Φ(η, τ) = ∬ Π(t, f) e^(−j2π(ηt − τf)) dt df
Ambiguity function
The class of bilinear (or quadratic) time–frequency distributions can be most easily understood in terms of the ambiguity function, an explanation of which follows.
Consider the well-known power spectral density and the signal auto-correlation function in the case of a stationary process. The relationship between these functions is the Wiener–Khinchin theorem:

S_x(f) = ∫ R_x(τ) e^(−j2πfτ) dτ,  with  R_x(τ) = E[ x(t + τ/2) x*(t − τ/2) ]

For a non-stationary signal x(t), these relations can be generalized using a time-dependent power spectral density or, equivalently, the famous Wigner distribution function of x(t) as follows:

W_x(t, f) = ∫ x(t + τ/2) x*(t − τ/2) e^(−j2πfτ) dτ
If the Fourier transform of the auto-correlation function is taken with respect to t instead of τ, we get the ambiguity function as follows:

A_x(η, τ) = ∫ x(t + τ/2) x*(t − τ/2) e^(−j2πηt) dt
The relationship between the Wigner distribution function, the auto-correlation function and the ambiguity function can then be illustrated by the following figure.
By comparing the definition of bilinear (or quadratic) time–frequency distributions with that of the Wigner distribution function, it is easily found that the latter is a special case of the former with Φ(η, τ) = 1. Alternatively, bilinear (or quadratic) time–frequency distributions can be regarded as a masked version of the Wigner distribution function if a kernel function Φ(η, τ) ≠ 1 is chosen. A properly chosen kernel function can significantly reduce the undesirable cross-term of the Wigner distribution function.
What is the benefit of the additional kernel function? The following figure shows the distribution of the auto-term and the cross-term of a multi-component signal in both the ambiguity and the Wigner distribution function.
For multi-component signals in general, the distribution of its auto-term and cross-term within its Wigner distribution function is generally not predictable, and hence the cross-term cannot be removed easily. However, as shown in the figure, for the ambiguity function, the auto-term of the multi-component signal will inherently tend to be close to the origin in the ητ-plane, and the cross-term will tend to be away from the origin. With this property, the cross-term can be filtered out effortlessly if a proper low-pass kernel function is applied in the ητ-domain. The following is an example that demonstrates how the cross-term is filtered out.
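As a hedged illustration of this filtering idea, the sketch below (reusing the `wigner_ville` function and the sample index `n` from the earlier snippet) passes the distribution through the ambiguity domain and applies a Choi–Williams-style product kernel. The axis pairing, normalizations, and the value of alpha are assumptions for demonstration only:

```python
def cohen_class(x, kernel):
    """Cohen-class TFD: mask the ambiguity function, then transform back.

    `kernel(eta, tau)` weights the (Doppler, lag) plane. The axis pairing
    and normalizations here are illustrative conventions, not canonical.
    """
    W = wigner_ville(x)                       # sketch from the section above
    N = W.shape[0]
    A = np.fft.fftshift(np.fft.fft2(W))       # ambiguity-domain image of W
    eta = np.fft.fftshift(np.fft.fftfreq(N))  # Doppler axis (normalized)
    tau = np.fft.fftshift(np.fft.fftfreq(N))  # lag axis (normalized)
    E, T = np.meshgrid(eta, tau, indexing="ij")
    A = A * kernel(E, T)                      # low-pass mask keeps auto-terms
    return np.fft.ifft2(np.fft.ifftshift(A)).real

def choi_williams_kernel(alpha):
    # Product kernel: equals 1 on the axes, decays away from the origin.
    return lambda eta, tau: np.exp(-alpha * (eta * tau) ** 2)

# Two tones produce a midway cross-term in the plain WVD; the kernel damps it.
two_tones = np.exp(2j * np.pi * 0.1 * n) + np.exp(2j * np.pi * 0.3 * n)
C = cohen_class(two_tones, choi_williams_kernel(alpha=1e6))
```

Note the design choice: because the kernel equals 1 along both axes, it leaves the marginal-like content of the distribution largely intact while suppressing cross-terms away from the origin.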
Kernel properties
The Fourier transform of θ(u, ξ) is

θ̂(τ, γ) = ∬ θ(u, ξ) e^(−i(uγ + ξτ)) du dξ

The following proposition gives necessary and sufficient conditions to ensure that P_θ satisfies marginal energy properties like those of the Wigner–Ville distribution.

Proposition: The marginal energy properties

∫ P_θ f(u, ξ) dξ = 2π |f(u)|²,  ∫ P_θ f(u, ξ) du = |f̂(ξ)|²

are satisfied for all f in L2(R) if and only if

θ̂(τ, 0) = θ̂(0, γ) = 1 for all (τ, γ).
Some time-frequency distributions
Wigner distribution function
As mentioned above, the Wigner distribution function is a member of the class of quadratic time-frequency distributions (QTFDs) with the kernel function Φ(η, τ) = 1. The definition of the Wigner distribution is as follows:

W_x(t, f) = ∫ x(t + τ/2) x*(t − τ/2) e^(−j2πfτ) dτ
Modified Wigner distribution functions
Affine invariance
We can design time-frequency energy distributions that satisfy the scaling property

P f_s(u, ξ) = P f(u/s, sξ),  where f_s(t) = (1/√s) f(t/s),

as does the Wigner–Ville distribution. If

g(t) = (1/√s) f(t/s)

then

P g(u, ξ) = P f(u/s, sξ).

This is equivalent to imposing that the kernel satisfy

θ(su, ξ/s) = θ(u, ξ),

and hence

θ(u, ξ) = θ(uξ, 1).
The Rihaczek and Choi–Williams distributions are examples of affine invariant Cohen's class distributions.
Choi–Williams distribution function
The kernel of the Choi–Williams distribution is defined as follows:

Φ(η, τ) = exp(−α(ητ)²)

where α is an adjustable parameter.
Rihaczek distribution function
The kernel of Rihaczek distribution is defined as follows:
With this particular kernel a simple calculation proves that

C_x(t, f) = x(t) x̂*(f) e^(−j2πtf)
Cone-shape distribution function
The kernel of the cone-shape distribution function is defined as follows:

Φ(η, τ) = (sin(πητ) / (πητ)) exp(−2πατ²)

where α is an adjustable parameter. See Transformation between distributions in time-frequency analysis. More such QTFDs and a full list can be found in, e.g., Cohen's text cited in the references.
Spectrum of non-stationary processes
A time-varying spectrum for non-stationary processes is defined from the expected Wigner–Ville distribution. Locally stationary processes appear in many physical systems where random fluctuations are produced by a mechanism that changes slowly in time. Such processes can be approximated locally by a stationary process. Let X(t) be a real-valued zero-mean process with covariance

R(t, s) = E[X(t) X(s)]

The covariance operator K is defined for any deterministic signal f by

Kf(t) = ∫ R(t, s) f(s) ds
For locally stationary processes, the eigenvectors of K are well approximated by the Wigner–Ville spectrum.
Wigner–Ville spectrum
The properties of the covariance are studied as a function of the midpoint u = (t + s)/2 and the lag τ = t − s:

C(u, τ) = R(u + τ/2, u − τ/2)

The process is wide-sense stationary if the covariance depends only on τ = t − s:
The eigenvectors are the complex exponentials and the corresponding eigenvalues are given by the power spectrum
For non-stationary processes, Martin and Flandrin have introduced a time-varying spectrum

P X(u, ξ) = ∫ C(u, τ) e^(−iξτ) dτ
To avoid convergence issues we suppose that X has compact support so that C(u, τ) has compact support in τ. From above we can write

P X(u, ξ) = ∫ E[X(u + τ/2) X(u − τ/2)] e^(−iξτ) dτ = E[ P_V X(u, ξ) ]

which proves that the time-varying spectrum is the expected value of the Wigner–Ville transform of the process X. Here, the Wigner–Ville stochastic integral is interpreted as a mean-square integral:

P_V X(u, ξ) = ∫ X(u + τ/2) X(u − τ/2) e^(−iξτ) dτ
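A rough way to see this numerically is to average the Wigner–Ville distributions of independent realizations of a process. The sketch below reuses the illustrative `wigner_ville` function and index `n` from earlier; the toy process, envelope, and realization count are assumptions:

```python
def wigner_ville_spectrum(realizations):
    """Estimate the time-varying spectrum as the mean WVD over realizations."""
    return np.mean([wigner_ville(x) for x in realizations], axis=0)

rng = np.random.default_rng(0)
# Toy locally stationary process: complex white noise with a slow envelope.
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * n / 256)
reals = [envelope * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
         for _ in range(50)]
S = wigner_ville_spectrum(reals)   # rows ~ time, columns ~ frequency
```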
References
L. Cohen, Time-Frequency Analysis, Prentice-Hall, New York, 1995.
B. Boashash, editor, "Time-Frequency Signal Analysis and Processing – A Comprehensive Reference", Elsevier Science, Oxford, 2003.
L. Cohen, "Time-Frequency Distributions—A Review," Proceedings of the IEEE, vol. 77, no. 7, pp. 941–981, 1989.
S. Qian and D. Chen, Joint Time-Frequency Analysis: Methods and Applications, Chap. 5, Prentice Hall, N.J., 1996.
H. Choi and W. J. Williams, "Improved time-frequency representation of multicomponent signals using exponential kernels," IEEE. Trans. Acoustics, Speech, Signal Processing, vol. 37, no. 6, pp. 862–871, June 1989.
Y. Zhao, L. E. Atlas, and R. J. Marks, "The use of cone-shape kernels for generalized time-frequency representations of nonstationary signals," IEEE Trans. Acoustics, Speech, Signal Processing, vol. 38, no. 7, pp. 1084–1091, July 1990.
B. Boashash, "Heuristic Formulation of Time-Frequency Distributions", Chapter 2, pp. 29–58, in B. Boashash, editor, Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, Elsevier Science, Oxford, 2003.
B. Boashash, "Theory of Quadratic TFDs", Chapter 3, pp. 59–82, in B. Boashash, editor, Time-Frequency Signal Analysis & Processing: A Comprehensive Reference, Elsevier, Oxford, 2003.
Signal processing
Fourier analysis
Digital signal processing
Time–frequency analysis | Bilinear time–frequency distribution | [
"Physics",
"Technology",
"Engineering"
] | 2,443 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Spectrum (physical sciences)",
"Time–frequency analysis",
"Frequency-domain analysis"
] |
15,250,506 | https://en.wikipedia.org/wiki/Allotropes%20of%20plutonium | Plutonium occurs in a variety of allotropes, even at ambient pressure. These allotropes differ widely in crystal structure and density; the α and δ allotropes differ in density by more than 25% at constant pressure.
Overview
Plutonium normally has six allotropes and forms a seventh (zeta, ζ) under high temperature and a limited pressure range. These allotropes have very similar energy levels but significantly varying densities and crystal structures. This makes plutonium very sensitive to changes in temperature, pressure, or chemistry, and allows for dramatic volume changes following phase transitions. Unlike most materials, plutonium increases in density when it melts, by 2.5%, but the liquid metal exhibits a linear decrease in density with temperature. Densities of the different allotropes vary from 16.00 g/cm3 to 19.86 g/cm3.
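Since volume at fixed mass scales as the reciprocal of density, the quoted density range implies the size of these volume changes. A small illustrative computation follows; the variable names and the pairing of the two quoted densities with specific allotropes are assumptions:

```python
# Volume at fixed mass scales as 1 / density: V2 / V1 = rho1 / rho2.
rho_low, rho_high = 16.00, 19.86   # g/cm^3, the density range quoted above

volume_expansion = rho_high / rho_low - 1.0
print(f"volume change across the density range: {volume_expansion:.1%}")  # ~24%
```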
Machining plutonium
The presence of these many allotropes makes machining plutonium very difficult, as it changes state very readily. For example, the alpha (α) phase exists at room temperature in unalloyed plutonium. It has machining characteristics similar to cast iron but changes to the beta (β) phase at slightly higher temperatures.
The reasons for the complicated phase diagram are not entirely understood; recent research has focused on constructing accurate computer models of the phase transitions. The α phase has a low-symmetry monoclinic structure, hence its brittleness, strength, compressibility, and poor conductivity.
Stabilization
Plutonium in the delta (δ) phase normally exists in the 310 °C to 452 °C range but is stable at room temperature when alloyed with a small percentage of gallium, aluminium, or cerium, enhancing workability and allowing it to be welded in weapons applications. The δ phase has more typical metallic character and is roughly as strong and malleable as aluminium. In fission weapons, the explosive shock waves used to compress a plutonium core will also cause a transition from the usual δ phase plutonium to the denser α phase, significantly helping to achieve supercriticality. The plutonium–gallium alloy is the most common δ-stabilized alloy.
Gallium, aluminium, americium, scandium and cerium can stabilize the δ phase of plutonium at room temperature. Silicon, indium, zinc and zirconium allow the formation of a metastable δ state when rapidly cooled. High amounts of hafnium, holmium and thallium also allow some retention of the δ phase at room temperature. Neptunium is the only element that can stabilize the α phase at higher temperatures. Titanium, hafnium and zirconium stabilize the β phase at room temperature when rapidly cooled.
References
Plutonium
Plutonium | Allotropes of plutonium | [
"Physics",
"Chemistry"
] | 567 | [
"Periodic table",
"Properties of chemical elements",
"Allotropes",
"Materials",
"Matter"
] |
1,757,651 | https://en.wikipedia.org/wiki/Garmin%20G1000 | The Garmin G1000 is an electronic flight instrument system (EFIS) typically composed of two display units, one serving as a primary flight display, and one as a multi-function display. Manufactured by Garmin Aviation, it serves as a replacement for most conventional flight instruments and avionics. Introduced in June 2004, the system has since become one of the most popular integrated glass cockpit solutions for general aviation and business aircraft.
Components
An aircraft with a basic Garmin G1000 installation contains two LCDs (one acting as the primary flight display and the other as the multi-function display) as well as an integrated communications panel that fits between the two. These displays are designated as a GDU, Garmin Display Unit.
Beyond that, additional features are found on newer and larger G1000 installations, such as in business jets. This includes:
A third display unit, to act as a co-pilot PFD
An alphanumeric keyboard
An integrated flight director/autopilot (without it, the G1000 interfaces with an external autopilot)
Depending on the airplane manufacturer and whether or not a GFC 700 autopilot is installed, the G1000 system will consist of either two GDU 1040 displays (no autopilot), a GDU 1040 PFD/GDU 1043 MFD (GFC 700 autopilot installed), or a GDU 1045 PFD/GDU 1045 MFD (GFC 700 autopilot installed with VNAV).
The GDU 1040 is the standard base bezel with no autopilot/flight director mode selection keys below the heading bug. The GDU 1043 has autopilot/flight director keys for all GFC 700 modes except VNAV. The GDU 1045 is essentially identical to the GDU 1043 except for the addition of an autopilot/flight director mode for VNAV. Depending on how the units are installed, an MFD failure may, or may not, affect autopilot or flight director use. If a GDU 1040 is used as a PFD in an airplane equipped with a GFC 700 autopilot, a failure of the MFD (which houses the autopilot mode selection keys) will leave the autopilot engaged, but the modes cannot be changed because no autopilot keys are present on the PFD. But, if an MFD failure occurs in an airplane with the GFC 700 autopilot and either a GDU 1043 or a GDU 1045 bezel installed as a PFD, the pilot will have full use of the autopilot through the keys on the PFD.
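The failure behavior described above amounts to a simple rule keyed to the PFD bezel model; the following sketch encodes it for illustration (the function and its interface are hypothetical, not Garmin software):

```python
def autopilot_keys_available(pfd_bezel: str, mfd_failed: bool) -> bool:
    """Whether GFC 700 mode keys remain usable, per the behavior above.

    The GDU 1043/1045 bezels carry autopilot mode keys on the PFD; the
    GDU 1040 does not. Illustrative model only, not Garmin software.
    """
    bezels_with_mode_keys = {"GDU 1043", "GDU 1045"}
    if not mfd_failed:
        return True        # the MFD's own mode keys are still available
    return pfd_bezel in bezels_with_mode_keys

assert autopilot_keys_available("GDU 1040", mfd_failed=True) is False
assert autopilot_keys_available("GDU 1045", mfd_failed=True) is True
```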
Both the PFD and MFD each have two slots for SD memory cards. The top slot is used to update the Jeppesen aviation database (also known as NavData) every 28 days, and to load software and configuration to the system. The aviation database must be current to use GPS for navigation during IFR instrument approaches. The bottom slot houses the World terrain and Jeppesen obstacle databases. While terrain information rarely changes or needs to be updated, obstacle databases can be updated every 56 days through a subscription service. The top card can be removed from the G1000 system following an update, but the bottom card must stay in both the PFD and MFD to ensure accurate terrain awareness and TAWS-B information.
Primary flight display
The primary flight display (PFD) shows the basic flight instruments, such as the attitude indicator, airspeed indicator, altimeter, heading indicator, and course deviation indicator. A small map called the "inset map" can be enabled in the corner. The buttons on the PFD are used to set the squawk code on the transponder. The PFD can also be used for entering and activating flight plans. The PFD also has a "reversionary mode" which is capable of displaying all information shown on the MFD (for example, engine gauges and navigational information). This capability is provided in case of an MFD failure.
Multi-function display
The multi-function display (MFD) typically shows a moving map on the right side, and engine instrumentation on the left. Most of the other screens in the G1000 system are accessed by turning the knob on the lower right corner of the unit. Screens available from the MFD other than the map include the setup menus, information about nearest airports and NAVAIDs, Mode S traffic reports, terrain awareness, XM radio, flight plan programming, and GPS RAIM prediction.
Implementation
The G1000 system consists of several integrated components which sample and exchange data or display information to the pilot.
GDU display
The GDU display unit acts as the primary source of flight information for the pilot. Each display can interchangeably serve as a primary flight display (PFD) or multi-function display (MFD). The wiring harness within the aircraft specifies which role each display is in by default. All of the displays within an aircraft are interconnected using a high-speed Ethernet data bus. A G1000 installation may have two GDUs (one PFD and one MFD) or three (one PFD for each pilot and an MFD). There are several different GDU models in service, which have different screen sizes (from 10 inches to 15 inches) and different bezel controls.
In normal operation, the display in front of the pilot is the PFD and will provide aircraft attitude, airspeed, altitude, vertical speed, heading, rate-of-turn, slip-and-skid, navigation, transponder, inset map view (containing map, traffic, and terrain information), and systems annunciation data. The second display, typically positioned to the right of the PFD, operates in MFD mode and provides engine instrumentation and a moving map display. The moving map can be replaced or overlaid by various other types of data, such as satellite weather, checklists, system information, waypoint information, weather sensor data, and traffic awareness information.
Both displays provide redundant information regarding communications and navigation radio frequency settings even though each display is usually only paired with one GIA Integrated Avionics Unit. In the event of a single display failure, the remaining display will adopt a combined "reversionary mode" and automatically become a PFD combined with engine instrumentation data and other functions of the MFD. A red button labeled "reversionary mode" or "display backup," located on the GMA audio panel, is also available to the pilot to select this mode manually if desired.
GMA audio panel
The GMA panel provides buttons for selecting what audio sources are heard by each member of the cockpit. It also includes a button for forcing the integrated cockpit into its fail-safe reversionary mode.
GMC/GCU remote controllers
The GMC and GCU controllers are panel-mounted modules which provide a more intuitive interface for the pilot than that provided by the GDU. The GMC controls the G1000's autopilot, while the GCU is used to enter navigational data and control the GDUs.
GIA integrated avionics unit
The GIA unit is a combined communications and navigation radio, and also serves as the primary data aggregator for the G1000 system. It provides a two-way VHF communications transceiver, a VHF navigation receiver with glideslope, a GPS receiver, and a variety of supporting processors. Each unit is paired with a GDU display, which acts as a controlling unit. The GIA 63W, found on many newer G1000 installations, is an updated version of the older GIA 63 which includes Wide Area Augmentation System support.
GDC air data computer
The GDC computer replaces the internal components of the pitot-static system in traditional aircraft instrumentation. It measures airspeed, altitude, vertical speed, and outside air temperature. This data is then provided to all the displays and integrated avionics units.
GRS attitude and heading reference system (AHRS)
The GRS system uses solid-state sensors to measure aircraft attitude, rate of turn, and slip and skid. This data is then provided to all the integrated avionics units and GDU display units. Unlike many competing systems, the AHRS can be rebooted and recalibrated in flight during turns of up to 20 degrees.
GMU magnetometer
The GMU magnetometer measures aircraft heading and is a digital version of a traditional compass. It does so by aligning itself with the magnetic flux lines of the earth.
GTX transponder
Either the GTX 32 or GTX 33 transponder can be used in the G1000 system, although the GTX 33 is far more common. The GTX 32 provides standard mode-C replies to ATC interrogations while the GTX 33 provides mode-S bidirectional communications with ATC and therefore can indicate traffic in the area as well as announce itself spontaneously via "squittering" without prior interrogation.
GEA engine/airframe unit
The GEA unit measures a large variety of engine and airframe parameters, including engine RPM, manifold pressure, oil temperature, cylinder head temperature, exhaust gas temperature, and fuel level in each tank. This data is then provided to the integrated avionics units.
GSD data aggregator
The GSD is a data aggregator system included on complex G1000 systems, such as that found on the Embraer Phenom 100. It serves as a point of connection which allows external systems to communicate with the G1000.
Backup systems
As a condition of certification, all aircraft utilizing the G1000 integrated cockpit must have a redundant airspeed indicator, altimeter, attitude indicator, and magnetic compass. In the event of a failure of the G1000 instrumentation, these backup instruments become primary.
In addition, a secondary power source is required to power the G1000 instrumentation for a limited time in the event of a failure of the aircraft's alternator and primary battery.
Certification
The Garmin G1000 is generally certified on new general aviation aircraft, including Beechcraft, Cessna, Diamond, Cirrus, Mooney, Piper, Quest (the Quest Kodiak), and Tiger. In late 2005, Garmin first announced the G1000 for the Columbia Aircraft Model 400; Columbia was later sold to Cessna. Garmin announced its first G1000 retrofit program for the Beechcraft C90 King Air in 2007. That same year the Garmin G1000 became a jet platform, as the avionics system for the Cessna Citation Mustang very light jet. Versions of the G1000 are also used in the Embraer Phenom 100 and Embraer Phenom 300, and PiperJet, as well as the Bell SLS helicopter.
Competition
The G1000 competes with the Avidyne Entegra and Chelton FlightLogic EFIS glass cockpits. However, there are significant differences with regard to the features, degree of integration, intuitive aspects of the design, and overall product utility. Note that the Chelton system is not typically found in airplanes that include the less expensive G1000 or Avidyne systems.
In 2009 Garmin introduced the Garmin G500 as a retrofit glass cockpit. The G500 has the majority of the capabilities of the G1000, other than integration with the aircraft engine system.
Advantages and drawbacks
As it has GPS, communication, and radio navigation components built directly into the system, it both consolidates components into a centralized location and, for the same reason, becomes potentially more costly to repair or replace. The system has the potential to reduce downtime as key components, such as the AHRS, ADC and PFD, are modular and easily replaced. The system's design also prevents the failure of a single component from "cascading" through other components.
The G1000 is compatible with the latest enhanced vision system (EVS) technology. Enhanced vision systems use thermal and infrared cameras to capture real-time images and help turn obscurants such as bad weather, night time, fog, dust and brownouts into usable imagery, letting pilots see 8–10 times farther than with the naked eye.
There are some safety concerns with all glass cockpits, such as the failure of the primary flight displays (PFD). The Garmin G1000 system offers a reversionary mode that will present all of the primary flight instrumentation on the remaining display. In addition, there are multiple GPS units, and electronic redundancy incorporated extensively throughout the design of the system.
Training and training resources
Flying any glass cockpit aircraft requires transition training to familiarize the pilot with the aircraft's systems. Transition training is most effective when a pilot prepares ahead of time. Most general aviation manufacturers using the G1000 system have FAA Industry Training Standards (FITS) training programs for pilots transitioning into their airplanes. FAA FITS compliant training is recommended for any pilot transitioning to the G1000 or any other glass cockpit prior to operating the aircraft in instrument meteorological conditions (IMC) or if operating a glass cockpit aircraft for the first time. Glass cockpit aircraft may not be suitable for primary training.
One of the most effective resources for preparing for G1000 transition training include the Garmin simulator software. In addition, some flight schools now have G1000 flight training devices (FTDs) that provide realistic simulation.
All of the most current Garmin G1000 pilot's guides are available from Garmin as free downloads in PDF format.
See also
References
External links
Aircraft instruments
Avionics
Garmin
Glass cockpit | Garmin G1000 | [
"Technology",
"Engineering"
] | 2,825 | [
"Glass cockpit",
"Avionics",
"Aircraft instruments",
"Measuring instruments"
] |
1,758,819 | https://en.wikipedia.org/wiki/Phosphorus%20trichloride | Phosphorus trichloride is an inorganic compound with the chemical formula PCl3. A colorless liquid when pure, it is an important industrial chemical, being used for the manufacture of phosphites and other organophosphorus compounds. It is toxic and reacts readily with water to release hydrogen chloride.
History
Phosphorus trichloride was first prepared in 1808 by the French chemists Joseph Louis Gay-Lussac and Louis Jacques Thénard by heating calomel (Hg2Cl2) with phosphorus. Later during the same year, the English chemist Humphry Davy produced phosphorus trichloride by burning phosphorus in chlorine gas.
Preparation
World production exceeds one-third of a million tonnes. Phosphorus trichloride is prepared industrially by the reaction of chlorine with white phosphorus, using phosphorus trichloride as the solvent. In this continuous process PCl3 is removed as it is formed in order to avoid the formation of PCl5.
P4 + 6 Cl2 → 4 PCl3
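For a sense of the mass balance, here is a short illustrative computation of the theoretical PCl3 yield from this equation; the input mass and the rounding are assumptions:

```python
# Theoretical yield for P4 + 6 Cl2 -> 4 PCl3 with standard atomic masses.
M_P, M_Cl = 30.974, 35.453          # g/mol
M_P4 = 4 * M_P                       # white phosphorus, ~123.9 g/mol
M_PCl3 = M_P + 3 * M_Cl              # ~137.3 g/mol

mass_P4 = 1000.0                     # g of white phosphorus (assumed input)
moles_P4 = mass_P4 / M_P4
mass_PCl3 = 4 * moles_P4 * M_PCl3    # 4 mol PCl3 per mol P4
print(f"theoretical yield: {mass_PCl3:.0f} g PCl3")   # about 4434 g
```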
Structure and spectroscopy
It has a trigonal pyramidal shape. Its 31P NMR spectrum exhibits a singlet around +220 ppm with reference to a phosphoric acid standard.
Reactions
The phosphorus in PCl3 is often considered to have the +3 oxidation state and the chlorine atoms are considered to be in the −1 oxidation state. Most of its reactivity is consistent with this description.
Oxidation
PCl3 is a precursor to other phosphorus compounds, undergoing oxidation to phosphorus pentachloride (PCl5), thiophosphoryl chloride (PSCl3), or phosphorus oxychloride (POCl3).
PCl3 as an electrophile
PCl3 reacts vigorously with water to form phosphorous acid (H3PO3) and hydrochloric acid:
PCl3 + 3 H2O → H3PO3 + 3 HCl
Phosphorus trichloride is the precursor to organophosphorus compounds. It reacts with phenol to give triphenyl phosphite:

PCl3 + 3 PhOH → P(OPh)3 + 3 HCl

Alcohols such as ethanol react similarly in the presence of a base such as a tertiary amine:

PCl3 + 3 EtOH + 3 R3N → P(OEt)3 + 3 R3NH+Cl−

With one equivalent of alcohol and in the absence of base, the first product is the alkoxyphosphorodichloridite:

PCl3 + EtOH → EtOPCl2 + HCl
In the absence of base, however, with excess alcohol, phosphorus trichloride converts to diethylphosphite:
PCl3 + 3 EtOH → (EtO)2P(O)H + 2 HCl + EtCl
Secondary amines (R2NH) form aminophosphines. For example, bis(diethylamino)chlorophosphine, (Et2N)2PCl, is obtained from diethylamine and PCl3. Thiols (RSH) form P(SR)3. An industrially relevant reaction of PCl3 with amines is phosphonomethylation, which employs formaldehyde:
R2NH + PCl3 + CH2O → (HO)2P(O)CH2NR2 + 3 HCl
The herbicide glyphosate is also produced this way.
The reaction of PCl3 with Grignard reagents and organolithium reagents is a useful method for the preparation of organic phosphines with the formula R3P (sometimes called phosphanes) such as triphenylphosphine, Ph3P.
Triphenylphosphine is produced industrially by the reaction between phosphorus trichloride, chlorobenzene, and sodium:

PCl3 + 3 PhCl + 6 Na → PPh3 + 6 NaCl, where Ph = C6H5
Under controlled conditions or especially with bulky R groups, similar reactions afford less substituted derivatives such as chlorodiisopropylphosphine.
Conversion of alcohols to alkyl chlorides
Phosphorus trichloride is commonly used to convert primary and secondary alcohols to the corresponding chlorides. As discussed above, the reaction of alcohols with phosphorus trichloride is sensitive to conditions. The mechanism for the ROH → RCl conversion involves the reaction of HCl with phosphite esters:

P(OR)3 + HCl → (RO)2P(O)H + RCl
The first step proceeds with nearly ideal stereochemistry but the final step far less so owing to an SN1 pathway.
Redox reactions
Phosphorus trichloride undergoes a variety of redox reactions:
PCl3 as a nucleophile
Phosphorus trichloride has a lone pair, and therefore can act as a Lewis base, e.g., forming a 1:1 adduct Br3B-PCl3. Metal complexes such as Ni(PCl3)4 are known, again demonstrating the ligand properties of PCl3.
This Lewis basicity is exploited in the Kinnear–Perren reaction to prepare alkylphosphonyl dichlorides (RP(O)Cl2) and alkylphosphonate esters (RP(O)(OR')2). Alkylation of phosphorus trichloride is effected in the presence of aluminium trichloride to give the alkyltrichlorophosphonium salts, which are versatile intermediates:
PCl3 + RCl + AlCl3 → [RPCl3]+[AlCl4]−

The [RPCl3]+ product can then be decomposed with water to produce an alkylphosphonic dichloride RP(=O)Cl2.
PCl3 as a ligand
PCl3, like the more popular phosphorus trifluoride, is a ligand in coordination chemistry. One example is Mo(CO)5PCl3.
Uses
PCl3 is important indirectly as a precursor to PCl5, POCl3 and PSCl3, which are used in many applications, including herbicides, insecticides, plasticisers, oil additives, and flame retardants.
For example, oxidation of PCl3 gives POCl3, which is used for the manufacture of triphenyl phosphate and tricresyl phosphate, which find application as flame retardants and plasticisers for PVC. They are also used to make insecticides such as diazinon. Phosphonates include the herbicide glyphosate.
PCl3 is the precursor to triphenylphosphine for the Wittig reaction, and phosphite esters which may be used as industrial intermediates, or used in the Horner-Wadsworth-Emmons reaction, both important methods for making alkenes. It can be used to make trioctylphosphine oxide (TOPO), used as an extraction agent, although TOPO is usually made via the corresponding phosphine.
PCl3 is also used directly as a reagent in organic synthesis. It is used to convert primary and secondary alcohols into alkyl chlorides, or carboxylic acids into acyl chlorides, although thionyl chloride generally gives better yields than PCl3.
Safety
600 ppm is lethal in just a few minutes.
25 ppm is the US NIOSH "Immediately Dangerous to Life and Health" level
0.5 ppm is the US OSHA "permissible exposure limit" over a time-weighted average of 8 hours.
0.2 ppm is the US NIOSH "recommended exposure limit" over a time-weighted average of 8 hours.
Under EU Directive 67/548/EEC, PCl3 is classified as very toxic and corrosive, and the risk phrases R14, R26/28, R35 and R48/20 are obligatory.
Industrial production of phosphorus trichloride is controlled under the Chemical Weapons Convention, where it is listed in schedule 3, as it can be used to produce mustard agents.
See also
Phosphorus pentachloride
Phosphoryl chloride
Phosphorus trifluorodichloride
References
Inorganic phosphorus compounds
Phosphorus chlorides
Phosphorus(III) compounds
Pulmonary agents | Phosphorus trichloride | [
"Chemistry"
] | 1,647 | [
"Inorganic phosphorus compounds",
"Inorganic compounds",
"Pulmonary agents",
"Chemical weapons"
] |
1,759,961 | https://en.wikipedia.org/wiki/Mannich%20reaction | In organic chemistry, the Mannich reaction is a three-component organic reaction that involves the amino alkylation of an acidic proton next to a carbonyl (C=O) functional group by formaldehyde (HCHO) and a primary or secondary amine (RNH2 or R2NH) or ammonia (NH3). The final product is a β-amino-carbonyl compound, also known as a Mannich base. Reactions between aldimines and α-methylene carbonyls are also considered Mannich reactions because these imines form between amines and aldehydes.
The reaction is named after Carl Mannich.
The Mannich reaction starts with the nucleophilic addition of an amine to a carbonyl group followed by dehydration to the Schiff base. The Schiff base is an electrophile which reacts in a second step in an electrophilic addition with an enol formed from a carbonyl compound containing an acidic alpha-proton. The Mannich reaction is a condensation reaction.
In the Mannich reaction, primary or secondary amines or ammonia react with formaldehyde to form a Schiff base. Tertiary amines lack an N–H proton and so do not react. The Schiff base can react with α-CH-acidic compounds (nucleophiles) that include carbonyl compounds, nitriles, acetylenes, aliphatic nitro compounds, α-alkyl-pyridines or imines. It is also possible to use activated phenyl groups and electron-rich heterocycles such as furan, pyrrole, and thiophene. Indole is a particularly active substrate; the reaction provides gramine derivatives.
The Mannich reaction can be considered to involve a mixed-aldol reaction, dehydration of the alcohol, and conjugate addition of an amine (Michael reaction) all happening in "one-pot". Double Mannich reactions can also occur.
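One way to check the condensation bookkeeping is to verify that atoms balance once water is removed. The sketch below does this for an assumed illustrative trio of reactants (acetone, dimethylamine, formaldehyde); the helper name and the chosen molecules are assumptions, not taken from the text above:

```python
from collections import Counter

def formula(**atoms):
    """Molecular formula as an atom-count multiset."""
    return Counter(atoms)

# Assumed illustrative trio: acetone + dimethylamine + formaldehyde give
# the Mannich base 4-(dimethylamino)butan-2-one plus water.
acetone       = formula(C=3, H=6, O=1)
dimethylamine = formula(C=2, H=7, N=1)
formaldehyde  = formula(C=1, H=2, O=1)
mannich_base  = formula(C=6, H=13, N=1, O=1)
water         = formula(H=2, O=1)

# A condensation loses water, so atoms must balance across both sides.
assert acetone + dimethylamine + formaldehyde == mannich_base + water
```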
Reaction mechanism
The mechanism of the Mannich reaction starts with the formation of an iminium ion from the amine and formaldehyde.
The compound with the carbonyl functional group (in this case a ketone) will tautomerize to the enol form, after which it attacks the iminium ion.
On methyl ketones, the enolization and the Mannich addition can occur twice, followed by a β-elimination to yield β-amino enone derivatives.
Asymmetric Mannich reactions
(S)-Proline catalyzes an asymmetric Mannich reaction. It is diastereoselective for the syn adduct, with a greater effect for larger aldehyde substituents, and enantioselective for the (S,S) adduct. A substituted proline can instead catalyze formation of the (R,S) anti adduct.
Applications
The Mannich reaction is used in many areas of organic chemistry. Examples include:
alkyl amines
peptides, nucleotides, antibiotics, and alkaloids (e.g. tropinone)
agrochemicals, such as plant growth regulators
polymers
catalysts
Formaldehyde tissue crosslinking
Pharmaceutical drugs, e.g. rolitetracycline (the Mannich product of tetracycline and pyrrolidine), fluoxetine (an antidepressant), tramadol, and tolmetin (an anti-inflammatory drug)
soap and detergents, especially with application to automotive fuel
Polyetheramines from substituted branched chain alkyl ethers.
α,β-unsaturated ketones by the thermal degradation of Mannich reaction products (e.g. methyl vinyl ketone from 1-diethylamino-butan-3-one)
See also
Betti reaction
Kabachnik–Fields reaction
Pictet–Spengler reaction
Stork enamine alkylation
Nitro-Mannich reaction
Crabbé reaction
References
External links
Carbon-carbon bond forming reactions
Multiple component reactions
Name reactions | Mannich reaction | [
"Chemistry"
] | 823 | [
"Coupling reactions",
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
1,760,159 | https://en.wikipedia.org/wiki/Mannich%20base | A Mannich base is a beta-amino-ketone, which is formed in the reaction of an amine, formaldehyde (or another aldehyde) and a carbon acid. The Mannich base is an end product of the Mannich reaction, which is a nucleophilic addition reaction of a non-enolizable aldehyde and any primary or secondary amine to produce a resonance-stabilized imine (iminium ion or imine salt). The addition of a carbanion from a CH-acidic compound (any enolizable carbonyl compound, amide, carbamate, hydantoin or urea) to the imine gives the Mannich base.
Reactivity
With primary or secondary amines, Mannich bases react with additional aldehyde and carbon acid to give larger adducts HN(CH2CH2COR)2 and N(CH2CH2COR)3. With multiple acidic hydrogen atoms on the carbon acid, higher adducts are also possible. The amine can be split off in an elimination reaction to form enals and enones.
References
Amines
Ketones | Mannich base | [
"Chemistry"
] | 233 | [
"Ketones",
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
2,439,093 | https://en.wikipedia.org/wiki/Saba%20Valadkhan | Saba Valadkhan () is an Iranian American biomedical scientist, and an Assistant Professor and RNA researcher at Case Western Reserve University in Cleveland, Ohio. In 2005, she was awarded the GE / Science Young Scientist Award for her breakthrough in understanding the mechanism of spliceosomes - "akin to finding the Holy Grail of the splicing catalysis field" - a critical area of research, given that "20 percent or 30 percent of all human genetic diseases are caused by mistakes that the spliceosome makes".
Education
Valadkhan qualified as a medical doctor at Tehran University of Medical Sciences in Iran in 1996. She moved to the United States to pursue her Ph.D. at Columbia University, New York. In 2004, she joined Case Western Reserve University in Cleveland, Ohio, as an Assistant Professor.
Doctoral research
Valadkhan studied the role of small nuclear RNAs in the human spliceosome under the supervision of Prof. James Manley. The main focus of her research is elucidating the structure and function of the catalytic core of the spliceosome by taking advantage of a novel, minimal spliceosome she recently developed. This minimal system, which consists of only two spliceosomal snRNAs, catalyzes a reaction identical to the splicing reaction. In addition to providing direct evidence for RNA catalysis in the spliceosome, and thus settling the longstanding and central question of the identity of the catalytic domain, the minimal system provides a novel and powerful tool for studying the structure and function of the spliceosome.
Awards and honours
Valadkhan was presented with the Harold Weintraub award from the Fred Hutchinson Cancer Research Center in Seattle for her doctoral thesis. She was named a Searle Scholar in 2004. She was also awarded the American Association for the Advancement of Science (AAAS) Young Scientist Grand Prize in the same year.
In 2006, she became a founding member of the Rosalind Franklin Society. She was also honoured with the Nsoroma Award from Cleveland Chapter of the National Technical Association in 2006.
See also
List of famous Iranian women
References
External links
Interview
Women molecular biologists
Year of birth missing (living people)
Living people
Iranian emigrants to the United States
Columbia University alumni
Case Western Reserve University faculty
Iranian biologists
Molecular biologists
Iranian expatriate academics
20th-century Iranian physicians
Fred Hutchinson Cancer Research Center people
20th-century Iranian scientists
20th-century Iranian women scientists
21st-century Iranian scientists
21st-century Iranian women scientists
21st-century Iranian women physicians
21st-century Iranian physicians | Saba Valadkhan | [
"Chemistry"
] | 520 | [
"Biochemists",
"Molecular biology",
"Molecular biologists"
] |
2,439,173 | https://en.wikipedia.org/wiki/Grease%20%28lubricant%29 | Grease is a solid or semisolid lubricant formed as a dispersion of thickening agents in a liquid lubricant. Grease generally consists of a soap emulsified with mineral or vegetable oil.
A common feature of greases is that they possess high initial viscosities, which, upon the application of shear, drop to give the effect of an oil-lubricated bearing of approximately the same viscosity as the base oil used in the grease. This change in viscosity is called shear thinning. "Grease" is sometimes used to describe lubricating materials that are simply soft solids or high-viscosity liquids, but these materials do not exhibit the shear-thinning properties characteristic of the classical grease. For example, petroleum jellies such as Vaseline are not generally classified as greases.
Greases are applied to mechanisms that can be lubricated only infrequently and where a lubricating oil would not stay in position. They also act as sealants to prevent the ingress of water and incompressible materials. Grease-lubricated bearings have greater frictional characteristics because of their high viscosities.
Properties
A true grease consists of an oil or other fluid lubricant that is mixed with a thickener, typically a soap, to form a solid or semisolid. Greases are usually shear-thinning or pseudo-plastic fluids, which means that the viscosity of the fluid is reduced under shear stress. After sufficient force to shear the grease has been applied, the viscosity drops and approaches that of the base lubricant, such as mineral oil. This sudden drop in shear force means that grease is considered a plastic fluid, and the reduction of shear force with time makes it thixotropic. A few greases are rheotropic, meaning they become more viscous when worked. Grease is often applied using a grease gun, which applies the grease to the part being lubricated under pressure, forcing the solid grease into the spaces in the part.
Thickeners
Soaps are the most common emulsifying agent used, and the selection of the type of soap is determined by the application. Soaps include calcium stearate, sodium stearate, lithium stearate, as well as mixtures of these components. Fatty acids derivatives other than stearates are also used, especially lithium 12-hydroxystearate. The nature of the soaps influences the temperature resistance (relating to the viscosity), water resistance, and chemical stability of the resulting grease. Calcium sulphonates and polyureas are increasingly common grease thickeners not based on metallic soaps.
Powdered solids may also be used as thickeners, especially as absorbent clays like bentonite. Fatty oil-based greases have also been prepared with other thickeners, such as tar, graphite, or mica, which also increase the durability of the grease. Silicone greases are generally thickened with silica.
Engineering assessment and analysis
Lithium-based greases are the most commonly used; sodium and lithium-based greases have a higher melting point (dropping point) than calcium-based greases but are not resistant to the action of water. Lithium-based grease has a dropping point of about 190–220 °C. However, the maximum usable temperature for lithium-based grease is 120 °C.
The amount of grease in a sample can be determined in a laboratory by extraction with a solvent followed by e.g. gravimetric determination.
Additives
Some greases are labeled "EP", which indicates "extreme pressure". Under high pressure or shock loading, normal grease can be compressed to the extent that the greased parts come into physical contact, causing friction and wear. EP greases have increased resistance to film breakdown, form sacrificial coatings on the metal surface to protect if the film does break down, or include solid lubricants such as graphite, molybdenum disulfide or hexagonal boron nitride (hBN) to provide protection even without any grease remaining.
Solid additives such as copper or ceramic powder (most often hBN) are added to some greases for static high-pressure and/or high-temperature applications, or where corrosion could prevent disassembly of components later in their service life. These compounds work as a release agent. Solid additives cannot be used in bearings because of tight tolerances; they would cause increased wear.
History
Grease from the early Egyptian or Roman eras is thought to have been prepared by combining lime with olive oil. The lime saponifies some of the triglyceride that comprises the oil to give a calcium grease. In the middle of the 19th century, soaps were intentionally added as thickeners to oils. Over the centuries, all manner of materials have been employed as greases. For example, black slugs (Arion ater) were used as axle-grease to lubricate wooden axle-trees or carts in Sweden.
Classification and standards
Jointly developed by ASTM International, the National Lubricating Grease Institute (NLGI) and SAE International, standard ASTM D4950 “standard classification and specification for automotive service greases” was first published in 1989 by ASTM International. It categorizes greases suitable for the lubrication of chassis components and wheel bearings of vehicles, based on performance requirements, using codes adopted from the NLGI's “chassis and wheel bearing service classification system”:
LA and LB: chassis lubricants (suitability up to mild and severe duty respectively)
GA, GB and GC: wheel-bearings (suitability up to mild, moderate and severe duty respectively)
A given performance category may include greases of different consistencies.
The measure of the consistency of grease is commonly expressed by its NLGI consistency number.
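For illustration, the commonly cited NLGI grades and their worked-penetration ranges can be encoded as a simple lookup. The numeric ranges below are quoted from memory of the NLGI scale, not from the text above, and should be verified against the current standard:

```python
from typing import Optional

# Commonly cited NLGI grades with ASTM worked-penetration ranges, in tenths
# of a millimetre; quoted from memory, so verify against current documents.
NLGI_PENETRATION = {
    "000": (445, 475),   # semi-fluid
    "00": (400, 430),
    "0": (355, 385),
    "1": (310, 340),
    "2": (265, 295),     # the common general-purpose consistency
    "3": (220, 250),
    "4": (175, 205),
    "5": (130, 160),
    "6": (85, 115),      # block grease
}

def nlgi_grade(penetration: int) -> Optional[str]:
    """Map a worked 60-stroke penetration value to an NLGI grade, if any."""
    for grade, (lo, hi) in NLGI_PENETRATION.items():
        if lo <= penetration <= hi:
            return grade
    return None   # value falls between grade bands

print(nlgi_grade(280))   # -> "2"
```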
The main elements of standard ASTM D4950 and NLGI's consistency classification are reproduced and described in standard SAE J310 “automotive lubricating greases” published by SAE International.
Standard ISO 6743-9 “lubricants, industrial oils and related products (class L) — classification — part 9: family X (greases)”, first released in 1987 by the International Organization for Standardization, establishes a detailed classification of greases used for the lubrication of equipment, components of machines, vehicles, etc. It assigns a single multi-part code to each grease based on its operational properties (including temperature range, effects of water, load, etc.) and its NLGI consistency number.
Other types
Silicone grease
Silicone grease is based on a silicone oil, usually thickened with amorphous fumed silica.
Fluoroether-based grease
Fluoroether-based greases are built on fluoropolymers containing C–O–C (ether) linkages with fluorine (F) bonded to the carbon. They are more flexible and often used in demanding environments due to their inertness. Fomblin by Solvay Solexis and Krytox by DuPont are prominent examples.
Laboratory grease
Apiezon, silicone-based, and fluoroether-based greases are all used commonly in laboratories for lubricating stopcocks and ground glass joints. The grease helps to prevent joints from "freezing", as well as ensuring high vacuum systems are properly sealed. Apiezon or similar hydrocarbon based greases are the cheapest, and most suitable for high vacuum applications. However, they dissolve in many organic solvents. This quality makes clean-up with pentane or hexanes trivial, but also easily leads to contamination of reaction mixtures.
Silicone-based greases are cheaper than fluoroether-based greases. They are relatively inert and generally do not affect reactions, though reaction mixtures often get contaminated (detected through NMR near δ 0). Silicone-based greases are not easily removed with solvent, but they are removed efficiently by soaking in a base bath.
Fluoroether-based greases are inert to many substances including solvents, acids, bases, and oxidizers. They are, however, expensive, and are not easily cleaned away.
Food-grade grease
Food-grade greases are those greases that may come in contact with food and as such are required to be safe to digest. Food-grade lubricant base oils are generally low-sulfur petrochemical oils, which are less easily oxidized and emulsified; poly-α-olefin base oils are also commonly used. The United States Department of Agriculture (USDA) has three food-grade designations: H1, H2 and H3. H1 lubricants are food-grade lubricants used in food-processing environments where there is the possibility of incidental food contact. H2 lubricants are industrial lubricants used on equipment and machine parts in locations with no possibility of contact. H3 lubricants are food-grade lubricants, typically edible oils, used to prevent rust on hooks, trolleys and similar equipment.
Water-soluble grease analogs
In some cases, the lubrication and high viscosity of a grease are desired in situations where non-toxic, non-oil based materials are required. Carboxymethyl cellulose, or CMC, is one popular material used to create a water-based analog of greases. CMC serves to both thicken the solution and add a lubricating effect, and often silicone-based lubricants are added for additional lubrication. The most familiar example of this type of lubricant, used as a surgical and personal lubricant, is K-Y Jelly.
Cork grease
Cork grease is a lubricant used to lubricate cork, for example in musical wind instruments. It is usually applied using small lip-balm/lip-stick like applicators.
See also
Bearing (mechanical)
Lubrication
Lubrication theory
Penetrant
Society of Tribologists and Lubrication Engineers
Timken OK Load
References
External links
U.S. Army Corps of Engineers grease definition and application guide (PDF file)
The Grocer's Encyclopedia online.
Interflon USA
Greases
Tribology | Grease (lubricant) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,154 | [
"Tribology",
"Mechanical engineering",
"Materials science",
"Surface science"
] |
2,439,942 | https://en.wikipedia.org/wiki/International%20School%20for%20Advanced%20Studies | The International School for Advanced Studies (Italian: Scuola Internazionale Superiore di Studi Avanzati; SISSA) is an international, state-supported, post-graduate-education and research institute in Trieste, Italy.
SISSA is active in the fields of mathematics, physics and neuroscience, offering both undergraduate and post-graduate courses.
Each year, about 70 PhD students are admitted to SISSA based on their scientific qualifications. SISSA also runs master's programs in the same areas, in collaboration with both Italian and other European universities.
History
SISSA was founded in 1978, as a part of the reconstruction following the Friuli earthquake of 1976. Although the city of Trieste itself did not suffer any damage, the physicist Paolo Budinich asked the Italian government to include among the reconstruction measures the institution of a new post-graduate teaching and research institute, modeled on the Scuola Normale Superiore di Pisa, and obtained its approval. The school became operative with a PhD course in theoretical physics, and Budinich himself was appointed as general director.
In 1986, Budinich handed his position over to Daniele Amati, who at the time was at the head of the theoretical division at CERN. Under his leadership, SISSA expanded its teaching and research activity towards the field of neuroscience, and instituted a new interdisciplinary laboratory aiming at connecting the humanities and scientific studies.
From 2001 to 2004, the director was the Italian geneticist Edoardo Boncinelli, who fostered the development of the existing research areas. From 2004 to 2010, the director was the Italian physicist Stefano Fantoni, whose period as director was characterized by the design and construction of the new SISSA location. Other directors were appointed in the following years, which saw the strengthening of SISSA's collaboration with other Italian and European universities in offering master's degree programs in the three areas of the School (mathematics, physics and neuroscience).
Physicist Stefano Ruffo served as the director from 2015 until 2021, when he was succeeded by Andrea Romanino.
Campus
Until July 2010, the school was located near the Miramare Park and marine reserve, about 10 kilometres from the city centre. The Miramare campus still hosts the ICTP (International Center for Theoretical Physics) and the Department of Theoretical Physics of the University of Trieste.
The campus is located in the borough of Opicina; it is accessible by bus 38 of Trieste Trasporti (TPL FVG). The campus is also equipped with a canteen, a kindergarten, a gym, as well as an open air theatre, which is used for shows, conferences and activities for the wider public.
Departments
SISSA houses research groups in the fields of Astroparticle Physics, Astrophysics, Condensed Matter, Molecular and Statistical Biophysics, Statistical Physics, Theoretical Particle Physics, Cognitive Neuroscience, Neurobiology, Molecular Biology, Applied Mathematics, Geometry, Mathematical Analysis, and Mathematical Physics.
In addition, there is the Interdisciplinary Laboratory for Natural and Humanistic Sciences (ILAS - Laboratorio Interdisciplinare Scienze Naturali e Umanistiche), which is endowed with the task of making connections between science, humanities, and the public. Since 1992 it also organizes a course in Science Communication and Scientific journalism.
SISSA also enjoys special teaching and scientific links with the International Centre for Theoretical Physics, the International Centre for Genetic Engineering and Biotechnology and the Elettra Synchrotron Light Laboratory. Ruffo signed a partnership with the International Centre for Genetic Engineering and Biotechnology to set up a new PhD program in Molecular Biology, with teaching activity organized by both institutions.
SISSA operates a 100 teraFLOPS supercomputer in partnership with the neighboring International Centre for Theoretical Physics. Moreover, it hosts a specialized library, a parallel-computing centre, several cellular-neurobiology laboratories, confocal microscopy and electron microscopy facilities and multiple cognitive-neuroscience laboratories, which are also available to faculty and students of other scientific institutions in the Trieste area.
Ranking
According to the latest aggregate data issued by ANVUR - the Italian National Agency for the Evaluation of the University and Research Systems - SISSA ranks:
first among medium-sized universities and research centers in physical science, with a 22% positive variance in the number of products compared to the Italian average;
first among small-sized universities and research centers in biological science, owing to the activity carried out in neuroscience, with a 64% positive variance;
second among small-sized universities in mathematical and computer science. With reference to the latter, the positive variance in the scientific production corresponded to 46% compared to the national average, placing SISSA 1% away from the Scuola Normale di Pisa.
Publications
SISSA publishes or sponsors several scientific journals and conference proceedings:
Journal of High Energy Physics (JHEP), with Springer, a peer-reviewed journal in particle physics
Journal of Cosmology and Astroparticle Physics (JCAP), with IOP Publishing, a peer-reviewed journal in physical cosmology and astroparticle physics
Journal of Statistical Mechanics: Theory and Experiment (JSTAT), with IOP Publishing, a peer-reviewed journal in statistical mechanics
Journal of Instrumentation (JINST), with IOP Publishing, a peer-reviewed journal in instrumentations for particle accelerators
Journal of Science Communication (JCOM), with IOP Publishing, a peer-reviewed journal in popular science
JCOM América Latina (JCOMAL), published in-house by SISSA; a Spanish- and Portuguese-language peer-reviewed journal for popular science in Latin America
Proceedings of Science (PoS), published in-house by SISSA, a non-peer-reviewed series of conference proceedings
See also
List of Italian universities
Notes
External links
SISSA Website
Trieste System
Graduate schools in Italy
Trieste
Educational institutions established in 1978
Education in Friuli-Venezia Giulia
1978 establishments in Italy
Neuroscience research centers in Italy
Theoretical physics institutes
International School for Advanced Studies | International School for Advanced Studies | [
"Physics"
] | 1,209 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
2,440,680 | https://en.wikipedia.org/wiki/Dendrotoxin | Dendrotoxins are a class of presynaptic neurotoxins produced by mamba snakes (Dendroaspis) that block particular subtypes of voltage-gated potassium channels in neurons, thereby enhancing the release of acetylcholine at neuromuscular junctions. Because of their high potency and selectivity for potassium channels, dendrotoxins have proven to be extremely useful as pharmacological tools for studying the structure and function of these ion channel proteins.
Dendrotoxins have been shown to block particular subtypes of voltage-gated potassium (K+) channels in neuronal tissue. In the nervous system, voltage-gated K+ channels control the excitability of nerves and muscles by controlling the resting membrane potential and by repolarizing the membrane during action potentials. Dendrotoxin has been shown to bind the nodes of Ranvier of motor neurons and to block the activity of these potassium channels. In this way, dendrotoxins prolong the duration of action potentials and increase acetylcholine release at the neuromuscular junction, which may result in muscle hyperexcitability and convulsive symptoms.
Dendrotoxin structure
Dendrotoxins are ~7 kDa proteins consisting of a single peptide chain of approximately 57–60 amino acids. Several homologues of alpha-dendrotoxin have been isolated, all possessing a slightly different sequence. However, the molecular architecture and folding conformation of these proteins are all very similar. Dendrotoxins possess a very short 3₁₀-helix near the N-terminus of the peptide, while a two-turn alpha-helix occurs near the C-terminus. A two-stranded antiparallel β-sheet occupies the central part of the molecular structure. These two β-strands are connected by a distorted β-turn region that is thought to be important for the binding activity of the protein. All dendrotoxins are cross-linked by three disulfide bridges, which add stability to the protein and greatly contribute to its structural conformation. The cysteine residues forming these disulfide bonds have been conserved among all members of the dendrotoxin family, and are located at C7-C57, C16-C40, and C32-C53 (numbering according to alpha-dendrotoxin).
The dendrotoxins are structurally homologous to the Kunitz-type serine protease inhibitors, including bovine pancreatic trypsin inhibitor (BPTI). Alpha-dendrotoxin and BPTI have been shown to have 35% sequence identity as well as identical disulfide bonds. Despite the structural homology between these two proteins, dendrotoxins do not appear to exhibit any measurable inhibitory protease activity like BPTI. This loss of activity appears to result from the absence of key amino acid residues that produce structural differences that hinder the key interactions necessary for the protease activity seen in BPTI.
Dendrotoxins are basic proteins that possess a net positive charge when present in neutral pH. Most of the positively charged amino acid residues of dendrotoxins are located in the lower part of the structure, creating a cationic domain on one side of the protein. Positive charge results from lysine (Lys) and arginine (Arg) residues that are concentrated in three primary regions of the protein: near the N-terminus (Arg3, Arg4, Lys5), near the C-terminus (Arg54, Arg55) and at the narrow β-turn region (Lys28, Lys29, Lys30). It is believed that these positively charged residues can play a critical role in dendrotoxin binding activity, as they can make potential interactions with the anionic sites (negatively charged amino acids) in the pore of potassium channels.
Biological activity
Pharmacology
A single dendrotoxin molecule associates reversibly with a potassium channel in order to exert its inhibitory effect. It is proposed that this interaction is mediated by electrostatic interactions between the positively charged amino acid residues in the cationic domain of dendrotoxin and the negatively charged residues in the ion channel pore. Potassium channels, similar to other cation-selective channels, are believed to have a cloud of negative charges that precede the opening to the channel pore and help conduct potassium ions through the permeation pathway. It is generally believed (though not proven) that dendrotoxin molecules bind to anionic sites near the extracellular surface of the channel and physically occlude the pore, thereby preventing ion conductance. However, Imredy and MacKinnon have proposed that delta-dendrotoxin may have an off-center binding site on its target protein, and may inhibit the channel by altering the structure of the channel, rather than physically blocking the pore.
Biologically important residues
Many studies have attempted to identify which amino acid residues are important for the binding of dendrotoxins to their potassium channel targets. Harvey et al. used residue-specific modifications to identify positively charged residues that were crucial to the blocking activity of dendrotoxin-I. They reported that acetylation of Lys5 near the N-terminal region and of Lys29 in the beta-turn region led to substantial decreases in DTX-I binding affinity. Similar results have been shown with dendrotoxin-K using site-directed mutagenesis to replace positively charged lysine and arginine residues with neutral alanines. These results, along with many others, indicate that the positively charged lysines in the N-terminal half, particularly Lys5 in the 3₁₀-helix, play a very important role in the binding of dendrotoxins to their potassium channel targets. The lysine residues in the β-turn region have provided more confounding results, appearing to be biologically critical in some dendrotoxin homologues and not necessary for others. Furthermore, mutation of the entire lysine triplet (K28-K29-K30) to Ala-Ala-Gly in alpha-DTX resulted in very little change in biological activity.
There is general agreement that the conserved lysine residue near the N-terminus (Lys5 in alpha-DTX) is crucial for the biological activity of all dendrotoxins, while additional residues, such as those in the beta-turn region, might play a role in dendrotoxin specificity by mediating the interactions of individual toxins with their individual target sites. This not only helps explain the stringent specificity of some dendrotoxins for different subtypes of voltage-gated K+ channels, but also accounts for differences in the potency of dendrotoxins for common K+ channels. For example, Wang et al. showed that the interaction of dendrotoxin-K with KV1.1 is mediated by its lysine residues in both the N-terminus and the β-turn region, while alpha-dendrotoxin appears to interact with its target solely through the N-terminus. This less expansive interactive domain may help explain why alpha-dendrotoxin is less discriminative while dendrotoxin-K is strictly selective for KV1.1.
Uses in research
Potassium channels of vertebrate neurons display a high degree of diversity that allows neurons to precisely tune their electrical signaling properties by expression of different combinations of potassium channel subunits. Furthermore, because they regulate ionic flux across biological membranes, they are important in many aspects of cellular regulation and signal transduction of different cell types. Therefore, voltage-gated potassium channels are targets for a wide range of potent biological toxins from such organisms as snakes, scorpions, sea anemones, and cone snails. Thus, venom purification has led to the isolation of peptide toxins such as the dendrotoxins, which have become useful pharmacological tools for the study of potassium channels. Because of their potency and selectivity for different subtypes of potassium channels, dendrotoxins have become useful as molecular probes for the structural and functional study of these proteins. This may help improve our understanding of the roles played by individual channel types, as well as assist in the pharmacological classification of these diverse channel types. Furthermore, the availability of radiolabelled dendrotoxins provides a tool for the screening of other sources in a search for new potassium channel toxins, such as the kalicludine class of potassium channel toxins in sea anemones. Lastly, the structural information provided by dendrotoxins may provide clues to the synthesis of therapeutic compounds that may target particular classes of potassium channels. Dendrotoxin I has also been used to help purify and characterize the K+ channel protein to which it binds via different binding assay and chromatography techniques.
References
External links
Dendroaspis
Neurotoxins
Ion channel toxins
Potassium channel blockers
Snake toxins | Dendrotoxin | [
"Chemistry"
] | 1,893 | [
"Neurochemistry",
"Neurotoxins"
] |
2,440,776 | https://en.wikipedia.org/wiki/Focal%20adhesion | In cell biology, focal adhesions (also cell–matrix adhesions or FAs) are large macromolecular assemblies through which mechanical force and regulatory signals are transmitted between the extracellular matrix (ECM) and an interacting cell. More precisely, focal adhesions are the sub-cellular structures that mediate the regulatory effects (i.e., signaling events) of a cell in response to ECM adhesion.
Focal adhesions serve as the mechanical linkages to the ECM, and as a biochemical signaling hub to concentrate and direct numerous signaling proteins at sites of integrin binding and clustering.
Structure and function
Focal adhesions are integrin-containing, multi-protein structures that form mechanical links between intracellular actin bundles and the extracellular substrate in many cell types. Focal adhesions are large, dynamic protein complexes through which the cytoskeleton of a cell connects to the ECM. They are limited to clearly defined areas of the cell, at which the plasma membrane approaches to within 15 nm of the ECM substrate. Focal adhesions are in a state of constant flux: proteins associate and dissociate with them continually as signals are transmitted to other parts of the cell, relating to anything from cell motility to the cell cycle. Focal adhesions can contain over 100 different proteins, which suggests a considerable functional diversity. More than anchoring the cell, they function as signal carriers (sensors), which inform the cell about the condition of the ECM and thus affect its behavior. In sessile cells, focal adhesions are quite stable under normal conditions, while in moving cells their stability is diminished: this is because in motile cells, focal adhesions are being constantly assembled and disassembled as the cell establishes new contacts at the leading edge, and breaks old contacts at the trailing edge of the cell. One example of their important role is in the immune system, in which white blood cells migrate along the connective endothelium following cellular signals to damaged biological tissue.
Morphology
Connection between focal adhesions and proteins of the extracellular matrix generally involves integrins. Integrins bind to extra-cellular proteins via short amino acid sequences, such as the RGD motif (found in proteins such as fibronectin, laminin, or vitronectin), or the DGEA and GFOGER motifs found in collagen. Integrins are heterodimers which are formed from one beta and one alpha subunit. These subunits are present in different forms, their corresponding ligands classify these receptors into four groups: RGD receptors, laminin receptors, leukocyte-specific receptors and collagen receptors. Within the cell, the intracellular domain of integrin binds to the cytoskeleton via adapter proteins such as talin, α-actinin, filamin, vinculin and tensin. Many other intracellular signalling proteins, such as focal adhesion kinase, bind to and associate with this integrin-adapter protein–cytoskeleton complex, and this forms the basis of a focal adhesion.
Adhesion dynamics with migrating cells
The dynamic assembly and disassembly of focal adhesions plays a central role in cell migration. During cell migration, both the composition and the morphology of the focal adhesion change. Initially, small (0.25 μm²) focal adhesions called focal complexes (FXs) are formed at the leading edge of the cell in lamellipodia: they consist of integrins and some of the adapter proteins, such as talin, paxillin and tensin. Many of these focal complexes fail to mature and are disassembled as the lamellipodia withdraw. However, some focal complexes mature into larger and stable focal adhesions, and recruit many more proteins such as zyxin. Recruitment of components to the focal adhesion occurs in an ordered, sequential manner. Once in place, a focal adhesion remains stationary with respect to the extracellular matrix, and the cell uses this as an anchor on which it can push or pull itself over the ECM. As the cell progresses along its chosen path, a given focal adhesion moves closer and closer to the trailing edge of the cell. At the trailing edge of the cell the focal adhesion must be dissolved. The mechanism of this is poorly understood and is probably instigated by a variety of different methods depending on the circumstances of the cell. One possibility is that the calcium-dependent protease calpain is involved: it has been shown that the inhibition of calpain leads to the inhibition of focal adhesion-ECM separation. Focal adhesion components are amongst the known calpain substrates, and it is possible that calpain degrades these components to aid in focal adhesion disassembly.
Actin retrograde flow
The assembly of nascent focal adhesions is highly dependent on the process of retrograde actin flow. This is the phenomenon in a migrating cell where actin filaments polymerize at the leading edge and flow back towards the cell body. This is the source of traction required for migration; the focal adhesion acts as a molecular clutch when it tethers to the ECM and impedes the retrograde movement of actin, thus generating the pulling (traction) force at the site of the adhesion that is necessary for the cell to move forward. This traction can be visualized with traction force microscopy. A common metaphor to explain actin retrograde flow is a large number of people being washed downriver, and as they do so, some of them hang on to rocks and branches along the bank to stop their downriver motion. Thus, a pulling force is generated onto the rock or branch that they are hanging on to. These forces are necessary for the successful assembly, growth, and maturation of focal adhesions.
Natural biomechanical sensor
Extracellular mechanical forces, which are exerted through focal adhesions, can activate Src kinase and stimulate the growth of the adhesions. This indicates that focal adhesions may function as mechanical sensors, and suggests that force generated from myosin fibers could contribute to maturing the focal complexes.
This gains further support from the fact that inhibition of myosin-generated forces leads to slow disassembly of focal adhesions, by changing the turnover kinetics of the focal adhesion proteins.
The relationship between forces on focal adhesions and their compositional maturation, however, remains unclear. For instance, preventing focal adhesion maturation by inhibiting myosin activity or stress fiber assembly does not prevent forces sustained by focal adhesions, nor does it prevent cells from migrating. Thus force propagation through focal adhesions may not be sensed directly by cells at all time and force scales.
Their role in mechanosensing is important for durotaxis.
See also
Actin
TES (protein)
Paxillin
References
External links
MBInfo - Focal Adhesion
MBInfo - Focal Adhesion Assembly
MBInfo - Regulation of Focal Adhesion Assembly
AdhesomeFAnetwork Database with all known focal adhesion proteins and their biochemical interactions
Intercellular Connections
Zaidel-Bar Cell Adhesion Lab
Cell biology
Cell movement
Cell signaling
Actin-based structures | Focal adhesion | [
"Biology"
] | 1,524 | [
"Cell biology"
] |
2,440,853 | https://en.wikipedia.org/wiki/Molecular%20electronic%20transition | In theoretical chemistry, molecular electronic transitions take place when electrons in a molecule are excited from one energy level to a higher energy level. The energy change associated with this transition provides information on the structure of the molecule and determines many of its properties, such as colour. The relationship between the energy involved in the electronic transition and the frequency of radiation is given by Planck's relation.
Organic molecules and other molecules
The electronic transitions in organic compounds and some other compounds can be determined by ultraviolet–visible spectroscopy, provided that transitions in the ultraviolet (UV) or visible range of the electromagnetic spectrum exist for the compound. Electrons occupying a HOMO (highest-occupied molecular orbital) of a sigma bond (σ) can get excited to the LUMO (lowest-unoccupied molecular orbital) of that bond. This process is denoted as a σ → σ* transition. Likewise, promotion of an electron from a pi-bonding orbital (π) to an antibonding pi orbital (π*) is denoted as a π → π* transition. Auxochromes with free electron pairs (denoted as "n") have their own transitions (n → σ* and n → π*), as do aromatic pi bond transitions. Sections of molecules which can undergo such detectable electron transitions can be referred to as chromophores, since such transitions absorb electromagnetic radiation (light), which may be hypothetically perceived as color somewhere in the electromagnetic spectrum. The following molecular electronic transitions exist: σ → σ*, π → π*, n → σ*, and n → π*.
In addition to these assignments, electronic transitions also have so-called bands associated with them. The following bands are defined (by A. Burawoy in 1930):
The R-band (from the German radikalartig, "radical-like");
The K-band (from the German konjugierte, "conjugated");
The B-band (from benzoic);
The E-band (from ethylenic).
For example, the absorption spectrum for ethane shows a σ → σ* transition at 135 nm and that of water an n → σ* transition at 167 nm with an extinction coefficient of 7,000. Benzene has three aromatic π → π* transitions; two E-bands at 180 and 200 nm and one B-band at 255 nm with extinction coefficients respectively 60,000, 8,000 and 215. These absorptions are not narrow bands but are generally broad because the electronic transitions are superimposed on the other molecular energy states.
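Since each transition obeys the Planck relation E = hc/λ, the absorption wavelengths above convert directly to transition energies. The following sketch (standard physical constants only; the wavelengths are those quoted in the text) performs the conversion:

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

for label, nm in [("ethane sigma->sigma*", 135),
                  ("water n->sigma*", 167),
                  ("benzene B-band", 255)]:
    energy_ev = H * C / (nm * 1e-9) / EV  # E = h*c / lambda
    print(f"{label}: {nm} nm -> {energy_ev:.1f} eV")
# 135 nm -> 9.2 eV, 167 nm -> 7.4 eV, 255 nm -> 4.9 eV
```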
Solvent shifts
The electronic transitions of molecules in solution can depend strongly on the type of solvent with additional bathochromic shifts or hypsochromic shifts.
Line spectra
Spectral lines are associated with atomic electronic transitions and polyatomic gases have their own absorption band system.
See also
Atomic electron transition
Resonance Raman spectroscopy
References
Spectroscopy
Quantum mechanics | Molecular electronic transition | [
"Physics",
"Chemistry"
] | 516 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Theoretical physics",
"Quantum mechanics",
"Spectroscopy"
] |
2,440,872 | https://en.wikipedia.org/wiki/Grad%E2%80%93Shafranov%20equation | The Grad–Shafranov equation (H. Grad and H. Rubin (1958); Vitalii Dmitrievich Shafranov (1966)) is the equilibrium equation in ideal magnetohydrodynamics (MHD) for a two dimensional plasma, for example the axisymmetric toroidal plasma in a tokamak. This equation takes the same form as the Hicks equation from fluid dynamics. This equation is a two-dimensional, nonlinear, elliptic partial differential equation obtained from the reduction of the ideal MHD equations to two dimensions, often for the case of toroidal axisymmetry (the case relevant in a tokamak). Taking as the cylindrical coordinates, the flux function is governed by the equation,where is the magnetic permeability, is the pressure, and the magnetic field and current are, respectively, given by
The nature of the equilibrium, whether it be a tokamak, reversed field pinch, etc. is largely determined by the choices of the two functions $F(\psi)$ and $p(\psi)$, as well as the boundary conditions.
Derivation (in Cartesian coordinates)
In the following, it is assumed that the system is 2-dimensional with $z$ as the invariant axis, i.e. $\partial/\partial z = 0$ for any quantity. Then the magnetic field can be written in cartesian coordinates as
$$\mathbf{B} = \left(\frac{\partial A}{\partial y},\, -\frac{\partial A}{\partial x},\, B_z(x,y)\right),$$
or more compactly,
$$\mathbf{B} = \nabla A \times \hat{\mathbf{z}} + B_z \hat{\mathbf{z}},$$
where $A(x,y)$ is the vector potential for the in-plane (x and y components) magnetic field. Note that based on this form for B we can see that A is constant along any given magnetic field line, since $\nabla A$ is everywhere perpendicular to B. (Also note that -A is the flux function mentioned above.)
Two dimensional, stationary, magnetic structures are described by the balance of pressure forces and magnetic forces, i.e.:
$$\nabla p = \mathbf{j}\times\mathbf{B},$$
where p is the plasma pressure and j is the electric current. It is known that p is a constant along any field line (again since $\nabla p$ is everywhere perpendicular to B). Additionally, the two-dimensional assumption ($\partial/\partial z = 0$) means that the z-component of the left hand side must be zero, so the z-component of the magnetic force on the right hand side must also be zero. This means that $\mathbf{j}_\perp \times \mathbf{B}_\perp = 0$, i.e. $\mathbf{j}_\perp$ is parallel to $\mathbf{B}_\perp$.
The right hand side of the previous equation can be considered in two parts:
$$\mathbf{j}\times\mathbf{B} = j_z\,(\hat{\mathbf{z}}\times\mathbf{B}_\perp) + \mathbf{j}_\perp\times B_z\hat{\mathbf{z}},$$
where the $\perp$ subscript denotes the component in the plane perpendicular to the $z$-axis. The $z$ component of the current in the above equation can be written in terms of the one-dimensional vector potential as
$$j_z = -\frac{1}{\mu_0}\nabla^2 A.$$
The in plane field is
$$\mathbf{B}_\perp = \nabla A\times\hat{\mathbf{z}},$$
and using Maxwell–Ampère's equation, the in plane current is given by
$$\mathbf{j}_\perp = \frac{1}{\mu_0}\nabla B_z\times\hat{\mathbf{z}}.$$
In order for this vector to be parallel to $\mathbf{B}_\perp$ as required, the vector $\nabla B_z$ must be perpendicular to $\mathbf{B}_\perp$, and $B_z$ must therefore, like $p$, be a field-line invariant.
Rearranging the cross products above leads to
$$\hat{\mathbf{z}}\times\mathbf{B}_\perp = \nabla A - (\hat{\mathbf{z}}\cdot\nabla A)\hat{\mathbf{z}} = \nabla A,$$
and
$$\mathbf{j}_\perp\times B_z\hat{\mathbf{z}} = \frac{B_z}{\mu_0}\,(\nabla B_z\times\hat{\mathbf{z}})\times\hat{\mathbf{z}} = -\frac{B_z}{\mu_0}\nabla B_z.$$
These results can be substituted into the expression for $\nabla p$ to yield:
$$\nabla p = -\left[\frac{1}{\mu_0}\nabla^2 A\right]\nabla A - \frac{B_z}{\mu_0}\nabla B_z.$$
Since $p$ and $B_z$ are constants along a field line, and functions only of $A$, it follows that $\nabla p = \frac{dp}{dA}\nabla A$ and $\nabla B_z = \frac{dB_z}{dA}\nabla A$. Thus, factoring out $\nabla A$ and rearranging terms yields the Grad–Shafranov equation:
$$\nabla^2 A = -\mu_0\frac{d}{dA}\left(p + \frac{B_z^2}{2\mu_0}\right).$$
Derivation in contravariant representation
This derivation is only used for tokamaks, but it can be enlightening. Following the conventions of 'The Theory of Toroidally Confined Plasmas' (Roscoe White, §1.3), and writing $\mathbf{B}$ in the contravariant basis of the axisymmetric flux coordinates $(\psi, \theta, \varphi)$:
$$\mathbf{B} = \nabla\varphi\times\nabla\psi + F(\psi)\,\nabla\varphi,$$
we have, from the Maxwell–Ampère law:
$$\mu_0\mathbf{J} = \nabla\times\mathbf{B} = \frac{dF}{d\psi}\,\nabla\psi\times\nabla\varphi - \Delta^{*}\psi\,\nabla\varphi, \qquad \Delta^{*}\psi \equiv R^{2}\,\nabla\cdot\left(\frac{\nabla\psi}{R^{2}}\right),$$
then the force balance equation:
$$\nabla p = \mathbf{J}\times\mathbf{B}.$$
Working out, we have:
$$\Delta^{*}\psi = -\mu_0 R^{2}\,\frac{dp}{d\psi} - F\,\frac{dF}{d\psi},$$
with $R$ the distance from the axis of symmetry, which is the Grad–Shafranov equation in its standard tokamak form.
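As a numerical sanity check of the reconstructed operator (an illustrative sketch, not part of the original article; grid and coefficients are arbitrary choices), one can apply second-order finite differences to a Solov'ev-type particular solution, for which $\Delta^{*}\psi$ is known in closed form:

```python
import numpy as np

# For constant source terms, taking -mu0 r^2 dp/dpsi = a r^2 and
# -(1/2) dF^2/dpsi = b, a particular solution of the Grad-Shafranov
# equation is psi(r, z) = a r^4 / 8 + b z^2 / 2, for which
# Delta* psi = psi_rr - psi_r / r + psi_zz = a r^2 + b exactly.
a, b = 1.0, 0.5
r = np.linspace(1.0, 2.0, 201)
z = np.linspace(-0.5, 0.5, 201)
dr, dz = r[1] - r[0], z[1] - z[0]
R, Z = np.meshgrid(r, z, indexing="ij")
psi = a * R**4 / 8 + b * Z**2 / 2

# second-order central differences on the interior grid points
psi_rr = (psi[2:, 1:-1] - 2 * psi[1:-1, 1:-1] + psi[:-2, 1:-1]) / dr**2
psi_r = (psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * dr)
psi_zz = (psi[1:-1, 2:] - 2 * psi[1:-1, 1:-1] + psi[1:-1, :-2]) / dz**2
lhs = psi_rr - psi_r / R[1:-1, 1:-1] + psi_zz
rhs = a * R[1:-1, 1:-1] ** 2 + b

print(np.max(np.abs(lhs - rhs)))  # small residual, shrinking as O(dr^2)
```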
References
Further reading
Grad, H., and Rubin, H. (1958) Hydromagnetic Equilibria and Force-Free Fields . Proceedings of the 2nd UN Conf. on the Peaceful Uses of Atomic Energy, Vol. 31, Geneva: IAEA p. 190.
Shafranov, V.D. (1966) Plasma equilibrium in a magnetic field, Reviews of Plasma Physics, Vol. 2, New York: Consultants Bureau, p. 103.
Woods, Leslie C. (2004) Physics of plasmas, Weinheim: WILEY-VCH Verlag GmbH & Co. KGaA, chapter 2.5.4
Haverkort, J.W. (2009) Axisymmetric Ideal MHD Tokamak Equilibria. Notes about the Grad–Shafranov equation, selected aspects of the equation and its analytical solutions.
Haverkort, J.W. (2009) Axisymmetric Ideal MHD equilibria with Toroidal Flow. Incorporation of toroidal flow, relation to kinetic and two-fluid models, and discussion of specific analytical solutions.
Magnetohydrodynamics
Elliptic partial differential equations
Eponymous equations of physics | Grad–Shafranov equation | [
"Physics",
"Chemistry"
] | 919 | [
"Magnetohydrodynamics",
"Eponymous equations of physics",
"Equations of physics",
"Fluid dynamics"
] |
2,443,027 | https://en.wikipedia.org/wiki/Supernova%20nucleosynthesis | Supernova nucleosynthesis is the nucleosynthesis of chemical elements in supernova explosions.
In sufficiently massive stars, the nucleosynthesis by fusion of lighter elements into heavier ones occurs during sequential hydrostatic burning processes called helium burning, carbon burning, oxygen burning, and silicon burning, in which the byproducts of one nuclear fuel become, after compressional heating, the fuel for the subsequent burning stage. In this context, the word "burning" refers to nuclear fusion and not a chemical reaction.
During hydrostatic burning these fuels synthesize overwhelmingly the alpha nuclides (A = 2Z), nuclei composed of integer numbers of helium-4 nuclei. Initially, two helium-4 nuclei fuse into a single beryllium-8 nucleus. The addition of another helium-4 nucleus to the beryllium yields carbon-12, followed by oxygen-16, neon-20 and so on, each time adding 2 protons and 2 neutrons to the growing nucleus. A rapid final explosive burning is caused by the sudden temperature spike owing to passage of the radially moving shock wave that was launched by the gravitational collapse of the core. W. D. Arnett and his Rice University colleagues demonstrated that the final shock burning would synthesize the non-alpha-nucleus isotopes more effectively than hydrostatic burning was able to do, suggesting that the expected shock-wave nucleosynthesis is an essential component of supernova nucleosynthesis. Together, shock-wave nucleosynthesis and hydrostatic-burning processes create most of the isotopes of the elements carbon (Z = 6), oxygen (Z = 8), and the elements with Z = 10 to 28 (from neon to nickel). As a result of the ejection of the newly synthesized isotopes of the chemical elements by supernova explosions, their abundances steadily increased within interstellar gas. That increase became evident to astronomers from the initial abundances in newly born stars exceeding those in earlier-born stars.
Elements heavier than nickel are comparatively rare owing to the decline with atomic weight of their nuclear binding energies per nucleon, but they too are created in part within supernovae. Of greatest interest historically has been their synthesis by rapid capture of neutrons during the r-process, reflecting the common belief that supernova cores are likely to provide the necessary conditions. However, newer research has proposed a promising alternative (see the r-process below). The r-process isotopes are approximately 100,000 times less abundant than the primary chemical elements fused in supernova shells above. Furthermore, other nucleosynthesis processes in supernovae are thought to be responsible also for some nucleosynthesis of other heavy elements, notably, the proton capture process known as the rp-process, the slow capture of neutrons (s-process) in the helium-burning shells and in the carbon-burning shells of massive stars, and a photodisintegration process known as the γ-process (gamma-process). The latter synthesizes the lightest, most neutron-poor, isotopes of the elements heavier than iron from preexisting heavier isotopes.
History
In 1946, Fred Hoyle proposed that elements heavier than hydrogen and helium would be produced by nucleosynthesis in the cores of massive stars. It had previously been thought that the elements we see in the modern universe had been largely produced during its formation. At this time, the nature of supernovae was unclear and Hoyle suggested that these heavy elements were distributed into space by rotational instability. In 1954, the theory of nucleosynthesis of heavy elements in massive stars was refined and combined with more understanding of supernovae to calculate the abundances of the elements from carbon to nickel. Key elements of the theory included:
the prediction of the excited state in the ¹²C nucleus that enables the triple-alpha process to burn resonantly to carbon and oxygen;
the thermonuclear sequels of carbon-burning synthesizing ²⁰Ne, ²⁴Mg and ²³Na; and
oxygen-burning synthesizing silicon, aluminum, and sulphur.
The theory predicted that silicon burning would happen as the final stage of core fusion in massive stars, although nuclear science could not then calculate exactly how. Hoyle also predicted that the collapse of the evolved cores of massive stars was "inevitable" owing to their increasing rate of energy loss by neutrinos and that the resulting explosions would produce further nucleosynthesis of heavy elements and eject them into space.
In 1957, a paper by the authors E. M. Burbidge, G. R. Burbidge, W. A. Fowler, and Hoyle expanded and refined the theory and achieved widespread acclaim. It became known as the B²FH or BBFH paper, after the initials of its authors. The earlier papers fell into obscurity for decades because the more-famous B²FH paper did not credit Hoyle's original description of nucleosynthesis in massive stars. Donald D. Clayton has attributed the obscurity also to Hoyle's 1954 paper describing its key equation only in words, and a lack of careful review by Hoyle of the B²FH draft by coauthors who had themselves not adequately studied Hoyle's paper. During his 1955 discussions in Cambridge with his co-authors in preparation of the B²FH first draft in 1956 in Pasadena, Hoyle's modesty had inhibited him from emphasizing to them the great achievements of his 1954 theory.
Thirteen years after the B²FH paper, W.D. Arnett and colleagues demonstrated that the final burning in the passing shock wave launched by collapse of the core could synthesize non-alpha-particle isotopes more effectively than hydrostatic burning could, suggesting that explosive nucleosynthesis is an essential component of supernova nucleosynthesis. A shock wave rebounded from matter collapsing onto the dense core, if strong enough to lead to mass ejection of the mantle of supernovae, would necessarily be strong enough to provide the sudden heating of the shells of massive stars needed for explosive thermonuclear burning within the mantle. Understanding how that shock wave can reach the mantle in the face of continuing infall onto the shock became the theoretical difficulty. Supernova observations assured that it must occur.
White dwarfs were proposed as possible progenitors of certain supernovae in the late 1960s, although a good understanding of the mechanism and nucleosynthesis involved did not develop until the 1980s. This showed that Type Ia supernovae ejected very large amounts of radioactive nickel and lesser amounts of other iron-peak elements, with the nickel decaying rapidly to cobalt and then iron.
Era of computer models
The papers of Hoyle (1946) and Hoyle (1954) and of B²FH (1957) were written by those scientists before the advent of the age of computers. They relied on hand calculations, deep thought, physical intuition, and familiarity with details of nuclear physics. Brilliant as these founding papers were, a cultural disconnect soon emerged with a younger generation of scientists who began to construct computer programs that would eventually yield numerical answers for the advanced evolution of stars and the nucleosynthesis within them.
Cause
A supernova is a violent explosion of a star that occurs under two principal scenarios. The first is that a white dwarf star, which is the remnant of a low-mass star that has exhausted its nuclear fuel, undergoes a thermonuclear explosion after its mass is increased beyond its Chandrasekhar limit by accreting nuclear-fuel mass from a more diffuse companion star (usually a red giant) with which it is in binary orbit. The resulting runaway nucleosynthesis completely destroys the star and ejects its mass into space. The second, and about threefold more common, scenario occurs when a massive star (12–35 times more massive than the sun), usually a supergiant at the critical time, reaches nickel-56 in its core nuclear fusion (or burning) processes. Without exothermic energy from fusion, the core of the pre-supernova massive star loses heat needed for pressure support, and collapses owing to the strong gravitational pull. The energy transfer from the core collapse causes the supernova display.
The nickel-56 isotope has one of the largest binding energies per nucleon of all isotopes, and is therefore the last isotope whose synthesis during core silicon burning releases energy by nuclear fusion, exothermically. The binding energy per nucleon declines for atomic weights heavier than A = 56, ending fusion's history of supplying thermal energy to the star. The thermal energy released when the infalling supernova mantle hits the semi-solid core is very large, about 10⁵³ ergs, about a hundred times the energy released by the supernova as the kinetic energy of its ejected mass. Dozens of research papers have been published in the attempt to describe the hydrodynamics of how that small one percent of the infalling energy is transmitted to the overlying mantle in the face of continuous infall onto the core. That uncertainty remains in the full description of core-collapse supernovae.
Nuclear fusion reactions that produce elements heavier than iron absorb nuclear energy and are said to be endothermic reactions. When such reactions dominate, the internal temperature that supports the star's outer layers drops. Because the outer envelope is no longer sufficiently supported by the radiation pressure, the star's gravity pulls its mantle rapidly inward. As the star collapses, this mantle collides violently with the growing incompressible stellar core, which has a density almost as great as an atomic nucleus, producing a shockwave that rebounds outward through the unfused material of the outer shell. The increase of temperature by the passage of that shockwave is sufficient to induce fusion in that material, often called explosive nucleosynthesis. The energy deposited by the shockwave somehow leads to the star's explosion, dispersing fusing matter in the mantle above the core into interstellar space.
Silicon burning
After a star completes the oxygen burning process, its core is composed primarily of silicon and sulfur. If it has sufficiently high mass, it further contracts until its core reaches temperatures in the range of 2.7–3.5 billion K (230–300 keV). At these temperatures, silicon and other isotopes suffer photoejection of nucleons by energetic thermal photons (γ), ejecting especially alpha particles (⁴He). The nuclear process of silicon burning differs from earlier fusion stages of nucleosynthesis in that it entails a balance between alpha-particle captures and their inverse photo ejection, which establishes the abundances of all alpha-particle elements in the following sequence, in which each alpha-particle capture shown is opposed by its inverse reaction, namely, photo ejection of an alpha particle by the abundant thermal photons:
²⁸Si + ⁴He ⇌ ³²S + γ
³²S + ⁴He ⇌ ³⁶Ar + γ
³⁶Ar + ⁴He ⇌ ⁴⁰Ca + γ
⁴⁰Ca + ⁴He ⇌ ⁴⁴Ti + γ
⁴⁴Ti + ⁴He ⇌ ⁴⁸Cr + γ
⁴⁸Cr + ⁴He ⇌ ⁵²Fe + γ
⁵²Fe + ⁴He ⇌ ⁵⁶Ni + γ
⁵⁶Ni + ⁴He ⇌ ⁶⁰Zn + γ
The alpha-particle nuclei ⁴⁴Ti and those more massive in the final five reactions listed are all radioactive, but they decay after their ejection in supernova explosions into the abundant isotopes ⁴⁴Ca, ⁴⁸Ti, ⁵²Cr, ⁵⁶Fe and ⁶⁰Ni. This post-supernova radioactivity became of great importance for the emergence of gamma-ray-line astronomy.
In these physical circumstances of rapid opposing reactions, namely alpha-particle capture and photo ejection of alpha particles, the abundances are not determined by alpha-particle-capture cross sections; rather they are determined by the values that the abundances must assume in order to balance the speeds of the rapid opposing-reaction currents. Each abundance takes on a stationary value that achieves that balance. This picture is called nuclear quasiequilibrium. Many computer calculations, for example, using the numerical rates of each reaction and of their reverse reactions have demonstrated that quasiequilibrium is not exact but does characterize well the computed abundances. Thus, the quasiequilibrium picture presents a comprehensible picture of what actually happens. It also fills in an uncertainty in Hoyle's 1954 theory. The quasiequilibrium buildup shuts off after ⁵⁶Ni because the alpha-particle captures become slower whereas the photo ejections from heavier nuclei become faster. Non-alpha-particle nuclei also participate, using a host of reactions similar to
³⁶Ar + neutron ⇌ ³⁷Ar + photon
and its inverse, which set the stationary abundances of the non-alpha-particle isotopes, where the free densities of protons and neutrons are also established by the quasiequilibrium. However, the abundance of free neutrons is also proportional to the excess of neutrons over protons in the composition of the massive star; therefore the abundance of ³⁷Ar, using it as an example, is greater in ejecta from recent massive stars than it was from those in early stars of only H and He; therefore ³⁷Cl, to which ³⁷Ar decays after the nucleosynthesis, is called a "secondary isotope".
In the interest of brevity, the next stage, an intricate photo-disintegration rearrangement, and the nuclear quasiequilibrium that it achieves, are referred to as silicon burning.
The silicon burning in the star progresses through a temporal sequence of such nuclear quasiequilibria in which the abundance of ²⁸Si slowly declines and that of ⁵⁶Ni slowly increases. This amounts to a nuclear abundance change 2 ²⁸Si → ⁵⁶Ni, which may be thought of as silicon burning into nickel ("burning" in the nuclear sense).
The entire silicon-burning sequence lasts about one day in the core of a contracting massive star and stops after ⁵⁶Ni has become the dominant abundance. The final explosive burning caused when the supernova shock passes through the silicon-burning shell lasts only seconds, but its roughly 50% increase in the temperature causes furious nuclear burning, which becomes the major contributor to nucleosynthesis in the mass range A = 28–60.
After the final ⁵⁶Ni stage, the star can no longer release energy via nuclear fusion, because a nucleus with 56 nucleons has the lowest mass per nucleon of all the elements in the sequence. The next step up in the alpha-particle chain would be ⁶⁰Zn. However, ⁶⁰Zn has slightly more mass per nucleon than ⁵⁶Ni, and thus would require a thermodynamic energy loss rather than a gain as happened in all prior stages of nuclear burning.
⁵⁶Ni (which has 28 protons) has a half-life of 6.02 days and decays via β decay to ⁵⁶Co (27 protons), which in turn has a half-life of 77.3 days as it decays to ⁵⁶Fe (26 protons). However, only minutes are available for the ⁵⁶Ni to decay within the core of a massive star.
This establishes ⁵⁶Ni as the most abundant of the radioactive nuclei created in this way. Its radioactivity energizes the late supernova light curve and creates the pathbreaking opportunity for gamma-ray-line astronomy. See SN 1987A light curve for the aftermath of that opportunity.
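The resulting energy input can be followed with the textbook Bateman solution for a two-step decay chain. The sketch below (an illustration using the half-lives quoted above, with arbitrary normalization) tracks the ⁵⁶Ni and ⁵⁶Co abundances whose decays power the light curve:

```python
import numpy as np

T_NI, T_CO = 6.02, 77.3  # half-lives in days (56Ni and 56Co)
LAM_NI, LAM_CO = np.log(2) / T_NI, np.log(2) / T_CO

def n_ni(t, n0=1.0):
    """Surviving 56Ni fraction at time t (days)."""
    return n0 * np.exp(-LAM_NI * t)

def n_co(t, n0=1.0):
    """56Co fraction: Bateman solution for parent -> daughter -> stable."""
    return n0 * LAM_NI / (LAM_CO - LAM_NI) * (np.exp(-LAM_NI * t) - np.exp(-LAM_CO * t))

for t in (10, 50, 100, 200):  # days after the explosion
    print(f"t = {t:3d} d: Ni56 = {n_ni(t):.3f}, Co56 = {n_co(t):.3f}")
# after roughly two months essentially all heating comes from 56Co decay
```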
Clayton and Meyer have recently generalized this process still further by what they have named the secondary supernova machine, attributing the increasing radioactivity that energizes late supernova displays to the storage of increasing Coulomb energy within the quasiequilibrium nuclei called out above as the quasiequilibria shift from primarily ²⁸Si to primarily ⁵⁶Ni. The visible displays are powered by the decay of that excess Coulomb energy.
During this phase of the core contraction, the potential energy of gravitational compression heats the interior to roughly three billion kelvins, which briefly maintains pressure support and opposes rapid core contraction. However, since no additional heat energy can be generated via new fusion reactions, the final unopposed contraction rapidly accelerates into a collapse lasting only a few seconds. At that point, the central portion of the star is crushed into either a neutron star or, if the star is massive enough, into a black hole.
The outer layers of the star are blown off in an explosion triggered by the outward moving supernova shock, known as a Type II supernova whose displays last days to months. The escaping portion of the supernova core may initially contain a large density of free neutrons, which may synthesize, in about one second while inside the star, roughly half of the elements in the universe that are heavier than iron via a rapid neutron-capture mechanism known as the r-process. See below.
Nuclides synthesized
Stars with initial masses less than about eight times the sun never develop a core large enough to collapse and they eventually lose their atmospheres to become white dwarfs, stable cooling spheres of carbon supported by the pressure of degenerate electrons. Nucleosynthesis within those lighter stars is therefore limited to nuclides that were fused in material located above the final white dwarf. This limits their modest yields returned to interstellar gas to carbon-13 and nitrogen-14, and to isotopes heavier than iron by slow capture of neutrons (the s-process).
A significant minority of white dwarfs will explode, however, either because they are in a binary orbit with a companion star that loses mass to the stronger gravitational field of the white dwarf, or because of a merger with another white dwarf. The result is a white dwarf which exceeds its Chandrasekhar limit and explodes as a Type Ia supernova, synthesizing about a solar mass of radioactive ⁵⁶Ni, together with smaller amounts of other iron peak elements. The subsequent radioactive decay of the nickel to iron keeps Type Ia optically very bright for weeks and creates more than half of all the iron in the universe.
Virtually all of the remainder of stellar nucleosynthesis occurs, however, in stars that are massive enough to end as core collapse supernovae. In the pre-supernova massive star this includes helium burning, carbon burning, oxygen burning and silicon burning. Much of that yield may never leave the star but instead disappears into its collapsed core. The yield that is ejected is substantially fused in last-second explosive burning caused by the shock wave launched by core collapse. Prior to core collapse, fusion of elements between silicon and iron occurs only in the largest of stars, and then in limited amounts. Thus, the nucleosynthesis of the abundant primary elements defined as those that could be synthesized in stars of initially only hydrogen and helium (left by the Big Bang), is substantially limited to core-collapse supernova nucleosynthesis.
r-process nucleosynthesis
During supernova nucleosynthesis, the r-process creates very neutron-rich heavy isotopes, which decay after the event to the first stable isotope, thereby creating the neutron-rich stable isotopes of all heavy elements. This neutron capture process occurs under conditions of high neutron density and high temperature.
In the r-process, heavy seed nuclei are bombarded with a large neutron flux to form highly unstable neutron-rich nuclei which very rapidly undergo beta decay to form more stable nuclei with higher atomic number and the same atomic mass. The neutron density is extremely high, on the order of 10²⁴ neutrons per cubic centimeter.
Initial calculations of an evolving r-process, showing the evolution of calculated results with time, also suggested that the r-process abundances are a superposition of differing neutron fluences. Small fluence produces the first r-process abundance peak near atomic weight A = 130 but no actinides, whereas large fluence produces the actinides uranium and thorium but no longer contains the A = 130 abundance peak. These processes occur in a fraction of a second to a few seconds, depending on details. Hundreds of subsequent papers published have utilized this time-dependent approach. The only modern nearby supernova, 1987A, has not revealed r-process enrichments. Modern thinking is that the r-process yield may be ejected from some supernovae but swallowed up in others as part of the residual neutron star or black hole.
Entirely new astronomical data about the r-process was discovered in 2017 when the LIGO and Virgo gravitational-wave observatories discovered a merger of two neutron stars that had previously been orbiting one another. That can happen when both massive stars in orbit with one another become core-collapse supernovae, leaving neutron-star remnants.
The localization on the sky of the source of those gravitational waves radiated by that orbital collapse and merger of the two neutron stars, creating a black hole, but with significant ejected mass of highly neutronized matter, enabled several teams to discover and study the remaining optical counterpart of the merger, finding spectroscopic evidence of r-process material thrown off by the merging neutron stars.
The bulk of this material seems to consist of two types: hot blue masses of highly radioactive r-process matter of lower-mass-range heavy nuclei (A < 140) and cooler red masses of higher-mass-number r-process nuclei (A > 140) rich in actinides (such as uranium, thorium, californium etc.). When released from the huge internal pressure of the neutron star, this neutron-rich spherical ejecta expands and radiates detected optical light for about a week. Such duration of luminosity would not be possible without heating by internal radioactive decay, which is provided by r-process nuclei near their waiting points. Two distinct mass regions (A < 140 and A > 140) for the r-process yields have been known since the first time-dependent calculations of the r-process. Because of these spectroscopic features it has been argued that r-process nucleosynthesis in the Milky Way may have been primarily ejecta from neutron-star mergers rather than from supernovae.
See also
References
Other reading
External links
Astrophysics
Nuclear physics
Nucleosynthesis
Physical cosmological concepts
Nucleosynthesis | Supernova nucleosynthesis | [
"Physics",
"Chemistry",
"Astronomy"
] | 4,604 | [
"Physical cosmological concepts",
"Supernovae",
"Nuclear fission",
"Concepts in astrophysics",
"Astronomical events",
"Astrophysics",
"Nucleosynthesis",
"Explosions",
"Nuclear physics",
"Nuclear fusion",
"Astronomical sub-disciplines"
] |
2,443,683 | https://en.wikipedia.org/wiki/B-Method | The B method is a method of software development based on B, a tool-supported formal method based on an abstract machine notation, used in the development of computer software.
Overview
B was originally developed in the 1980s by Jean-Raymond Abrial in France and the UK. B is related to the Z notation (also originated by Abrial) and supports development of programming language code from specifications. B has been used in major safety-critical system applications in Europe (such as the automatic Paris Métro lines 14 and 1 and the Ariane 5 rocket). It has robust, commercially available tool support for specification, design, proof and code generation.
Compared to Z, B is slightly more low-level and more focused on refinement to code rather than just formal specification — hence it is easier to correctly implement a specification written in B than one in Z. In particular, there is good tool support for this.
The same language is used in specification, design and programming.
Mechanisms include encapsulation and data locality.
Event-B
Subsequently, another formal method called Event-B has been developed based on the B-Method, supported by the Rodin Platform. Event-B is a formal method aimed at system-level modelling and analysis. Features of Event-B are the use of set theory for modelling, the use of refinement to represent systems at different levels of abstraction, and the use of mathematical proof for verifying consistency between these refinement levels.
The main components
The B notation is based on set theory and first-order logic, and is used to specify the successive versions of the software that cover the complete cycle of project development.
Abstract machine
In the first and most abstract version, called the Abstract Machine, the designer specifies the goal of the design.
Refinement
Then, during a refinement step, they may enrich the specification in order to clarify the goal, or make the abstract machine more concrete by adding details about data structures and algorithms that define how the goal is achieved.
The new version, called a Refinement, must be proven to be coherent and to preserve all the properties of the abstract machine.
The designer may make use of B libraries in order to model data structures or to include or import existing components.
Implementation
The refinement continues until a deterministic version is achieved: the Implementation.
During all of the development steps the same notation is used and the last version may be translated to a programming language for compilation.
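As a rough illustration of these stages (an invented toy example, not drawn from any published development), the sketch below shows an abstract machine and one refinement of it in B's Abstract Machine Notation. The machine, variable, and operation names are all hypothetical; the refinement resolves the nondeterministic choice in jump to a concrete value, which is the kind of step whose correctness the prover is asked to discharge.

```
/* Toy abstract machine: a bounded counter with one nondeterministic operation. */
MACHINE
    Counter
VARIABLES
    value
INVARIANT
    value : NAT & value <= 100
INITIALISATION
    value := 0
OPERATIONS
    increment = PRE value < 100 THEN value := value + 1 END ;
    /* Nondeterministic: jump to any admissible value. */
    jump = ANY n WHERE n : NAT & n <= 100 THEN value := n END
END

/* Refinement: the nondeterminism in jump is resolved to a concrete choice. */
REFINEMENT
    CounterR
REFINES
    Counter
VARIABLES
    value
INVARIANT
    value : NAT
INITIALISATION
    value := 0
OPERATIONS
    increment = BEGIN value := value + 1 END ;
    jump = BEGIN value := 0 END   /* 0 satisfies n : NAT & n <= 100 */
END
```

The proof obligations generated for such a refinement require, among other things, that each refined operation preserve the abstract invariant under the abstract precondition; once a fully deterministic Implementation is reached, it can be translated to a programming language as described above.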
Software
B-Toolkit
The B-Toolkit is a collection of programming tools designed to support the use of the B-Tool, a set-theory-based mathematical interpreter, for the purpose of supporting the B-Method. Development was originally undertaken by Ib Holm Sørensen and others, at BP Research and then at B-Core (UK) Limited.
The toolkit uses a custom X Window/Motif interface for GUI management and runs primarily on the Linux, Mac OS X and Solaris operating systems.
The B-Toolkit source code is now available.
Atelier B
Developed by ClearSy, Atelier B is an industrial tool that allows for the operational use of the B Method to develop defect-free proven software (formal software). Two versions are available: 1) the Community Edition, available to anyone without restriction; 2) the Maintenance Edition, for maintenance contract holders only. Atelier B has been used to develop safety-critical automation for the various subways installed throughout the world by Alstom and Siemens, and also for Common Criteria certification and the development of system models by ATMEL and STMicroelectronics.
Rodin
The Rodin Platform is a tool that supports Event-B. Rodin is based on the Eclipse IDE (integrated development environment) and provides support for refinement and mathematical proof. The platform is open source, forms part of the Eclipse framework, and is extensible using software component plug-ins. The development of Rodin has been supported by the European Union projects DEPLOY (2008–2012), RODIN (2004–2007), and ADVANCE (2011–2014).
BHDL
BHDL provides a method for the correct design of digital circuits, combining the advantages of the hardware description language VHDL with the formality of B.
APCB
APCB (the International B Conference Steering Committee) has organized meetings associated with the B-Method. It has organized ZB conferences jointly with the Z User Group, and ABZ conferences covering Abstract State Machines (ASM) and the Z notation as well as B.
Books
The B-Book: Assigning Programs to Meanings, Jean-Raymond Abrial, Cambridge University Press, 1996.
The B-Method: An Introduction, Steve Schneider, Palgrave Macmillan, Cornerstones of Computing series, 2001.
Software Engineering with B, John Wordsworth, Addison Wesley Longman, 1996.
The B Language and Method: A Guide to Practical Formal Development, Kevin Lano, Springer-Verlag, FACIT series, 1996.
Specification in B: An Introduction using the B Toolkit, Kevin Lano, World Scientific Publishing Company, Imperial College Press, 1996.
Modeling in Event-B: System and Software Engineering, Jean-Raymond Abrial, Cambridge University Press, 2010.
Conferences
The following conferences have explicitly included the B-Method and/or Event-B:
Z2B Conference, Nantes, France, 10–12 October 1995
First B Conference, Nantes, France, 25–27 November 1996
Second B Conference, Montpellier, France, 22–24 April 1998
ZB 2000, York, United Kingdom, 28 August – 2 September 2000
ZB 2002, Grenoble, France, 23–25 January 2002
ZB 2003, Turku, Finland, 4–6 June 2003
ZB 2005, Guildford, United Kingdom, 2005
B 2007, Besançon, France, 2007
B, from research to teaching, Nantes, France, 16 June 2008
B, from research to teaching, Nantes, France, 8 June 2009
B, from research to teaching, Nantes, France, 7 June 2010
ABZ 2008, British Computer Society, London, United Kingdom, 16–18 September 2008
ABZ 2010, Orford, Québec, Canada, 23–25 February 2010
ABZ 2012, Pisa, Italy, 18–22 June 2012
ABZ 2014, Toulouse, France, 2–6 June 2014
ABZ 2016, Linz, Austria, 23–27 May 2016
ABZ 2018, Southampton, United Kingdom, 2018
ABZ 2020, Ulm, Germany, 2021 (delayed due to the COVID-19 pandemic)
ABZ 2021, Ulm, Germany, 2021
See also
Formal methods
Z notation
References
External links
B Method.com – work and subjects concerning the B method, a formal method with proof
Atelier B.eu: Atelier B is a systems engineering workshop that enables the development of software proven to be free of defects
Site B Grenoble
Formal methods
Formal methods tools
Formal specification languages | B-Method | [
"Mathematics",
"Engineering"
] | 1,406 | [
"Software engineering",
"Formal methods tools",
"Formal methods",
"Mathematical software"
] |
2,443,818 | https://en.wikipedia.org/wiki/Gingerol | Gingerol ([6]-gingerol) is a phenolic phytochemical compound found in fresh ginger that activates heat receptors on the tongue. It is normally found as a pungent yellow oil in the ginger rhizome, but can also form a low-melting crystalline solid. This chemical compound is found in all members of the Zingiberaceae family and occurs in high concentrations in grains of paradise as well as in an African ginger species.
Cooking ginger transforms gingerol via a reverse aldol reaction into zingerone, which is less pungent and has a spicy-sweet aroma. When ginger is dried or mildly heated, gingerol undergoes a dehydration reaction forming shogaols, which are about twice as pungent as gingerol. This explains why dried ginger is more pungent than fresh ginger.
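As a compact summary of the two transformations just described (a sketch of the text above, with conditions simplified and co-products omitted):

```
% Thermal fate of [6]-gingerol, as described in the text
\[ \text{[6]-gingerol} \xrightarrow{\ \text{cooking (retro-aldol)}\ } \text{zingerone} \]
\[ \text{[6]-gingerol} \xrightarrow{\ \text{drying / mild heating},\ -\,\mathrm{H_2O}\ } \text{[6]-shogaol} \]
```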
Ginger also contains [8]-gingerol, [10]-gingerol, and [12]-gingerol, collectively deemed gingerols.
Physiological potential
In a pre-clinical meta-analysis of gingerol compounds, anticancer, anti-inflammatory, antifungal, antioxidant, neuroprotective, and gastroprotective properties were reported across studies in vitro and in vivo. A few in vivo studies have proposed that gingerols facilitate healthy glucose regulation in diabetics. Many studies have examined the effects of gingerols on a wide range of cancers, including leukemia and prostate, breast, skin, ovarian, lung, pancreatic, and colorectal cancers. There has been little clinical testing of the physiological effects of gingerols in humans.
While many of the chemical mechanisms associated with the effects of gingerols on cells have been thoroughly studied, few have been examined in a clinical setting. This is due to the high variability of natural phytochemicals and a lack of efficacy in research. Most herbal medicines, including those containing gingerols, fall under the restrictions of the Food and Drug Administration in the United States, and experimental methods have not held up to scrutiny, which has decreased the perceived value of phytochemical research. Herbal medicine remains untested for quality assurance, potency, and effectiveness in clinical settings, owing in part to a lack of funding for research into traditional medicine. Most research on [6]-gingerol has been conducted either in mouse subjects (in vivo) or in cultured human tissue (in vitro), and may be used in the future to explore possible applications for multi-target disease control.
An investigation of gingerol's antifungal capabilities found that an African species of ginger tested higher in both gingerol and shogaol compounds than the more commonly cultivated Indonesian relative. When tested for antifungal properties, the African ginger was active against 13 human pathogens and was three times more effective than the commercial Indonesian counterpart. Gingerol compounds are thought to work in tandem with the other phytochemicals present, including shogaols, paradols, and zingerone.
In a meta-analysis examining the effects of many different phytochemicals on prostate cancer, two studies using mice observed that [6]-gingerol compounds induced apoptosis in cancer cells by interfering with the mitochondrial membrane. Mechanisms involving the disruption of G1-phase proteins, which stops the reproduction of cancer cells, were also observed, a benefit reported in other relevant anticancer studies as well. The main mechanism by which gingerol phytochemicals act on cancer cells appears to be protein disruption. The anti-carcinogenic activity of [6]-gingerol and [6]-paradol was analysed in a study of the cellular mechanisms of mouse skin cancer, which targeted the activator proteins associated with tumor initiation. Gingerol compounds inhibited the transformation of normal cells into cancer cells by blocking AP-1 proteins, and when cancer did develop, paradol encouraged apoptosis through its cytotoxic activity.
[6]-Gingerol exhibits cell-cycle arrest capabilities, apoptotic action, and enzyme-coupled cell-signaling receptor degradation in cancer cells. Gingerol has been observed to stop proliferation by inhibiting the translation of the cyclin proteins necessary for replication during the G1 and G2 phases of cell division. To promote apoptosis in cancer cells, cytochrome c is ejected from the mitochondria, which halts ATP production and leaves the mitochondria dysfunctional. The cytochrome c assembles an apoptosome, which activates caspase-9 and initiates an executioner-caspase cascade, fragmenting nuclear DNA and promoting apoptosis. [6]-Gingerol also inhibits the anti-apoptotic Bcl-2 proteins on the surface of mitochondria, which in turn increases the ability of the pro-apoptotic Bcl-2-family proteins to initiate cell death. Cancer cells exhibit high amounts of growth-hormone activator proteins that are expressed through enzyme-coupled signaling pathways. By halting the phosphorylation carried out by PI-3-kinase, the Akt protein cannot be recruited via its PH domain, effectively deactivating the downstream signal. This leaves Bad proteins bound to the anti-apoptotic proteins, preventing those proteins from promoting cell survival; the net result is a double-negative cellular signaling pathway that promotes apoptosis.
Cultured human breast cancer cells were subjected to various concentrations of [6]-gingerol to determine the impacts on live cells. These concentration-dependent results showed no impact at 5 μM but a 16% reduction at 10 μM. [6]-Gingerol targeted three specific proteins in breast cancer cells that promote metastasis; while adhesion remained relatively unchanged, [6]-gingerol inhibited the cancer cells from invading and increasing in size. This study suggests that cancer cell growth was impacted through a reduction in the specific mRNAs that encode extracellular matrix-degrading enzymes called matrix metalloproteinases (MMPs). An examination using human cells in vitro displayed gingerol's capabilities in combating oxidative stress. The results concluded that gingerol had anti-inflammatory effects, though shogaol showed the most promising effects in combating free radicals. There was an inverse dose–response relationship: as the dosage concentration increased, the amount of free radicals in cells decreased.
Cisplatin is a chemotherapy drug that, if used in high dosages, causes renal failure, which is considered a limiting factor for this life-saving drug. [6]-Gingerol prevented the occurrence of cisplatin-induced renal failure in rats. It improved glutathione production in a dose-dependent manner: the higher the dosage, the greater the effect.
Gingerol compounds are thought to help diabetic patients because of increases in glutathione, a regulator of cellular toxins. Anti-hyperglycaemic effects were studied in diabetic and severely obese mice. Gingerol compounds increased glucose uptake in cells without the need for a synthetic insulin activator, while also decreasing fasting glucose and increasing glucose tolerance. A different study of the exact metabolic mechanisms underlying the physiological benefits of gingerol phytochemicals found increased enzyme activity (CAT) and glutathione production, along with decreased lipoprotein cholesterol and improved glucose tolerance in mice. Cardiac arrhythmia is a common complication in diabetic patients, and the anti-inflammatory effects of gingerol suppressed this risk by lowering blood glucose levels in vivo.
The anti-oxidant properties of [6]-gingerol have been considered as a defense against Alzheimer's disease. A study examined the molecular mechanisms responsible for protection against DNA fragmentation and deterioration of the mitochondrial membrane potential of cells, suggesting a neuroprotective role for gingerol. This study indicates that ginger up-regulates glutathione production in cells, including nerve cells, through its anti-oxidative properties, decreasing the risk of Alzheimer's in human neuroblastoma cells and mouse hippocampal cells.
While many studies suggest a low risk in using ginger phytochemicals to combat oxidative damage to cells, a few studies suggest potential genotoxic effects. In one study, an excessively high dose applied to human hepatoma cells resulted in DNA fragmentation, chromosomal damage, and organelle membrane instability, which could result in apoptotic behavior. Gingerol compounds show some pro-oxidant behavior when concentrations reach high levels, although under normal conditions these phytochemicals retain anti-inflammatory and antioxidant qualities. In another study, [6]-gingerol notably inhibited the metabolic rate of rats when given as an intraperitoneal injection, inducing a hypothermic reaction, though when consumed orally in excess it produced no changes in body temperature.
Biosynthesis
Both ginger (Zingiber officinale) and turmeric (Curcuma longa) had been suspected to utilize the phenylpropanoid pathway and to produce putative type III polyketide synthase products, based on research into 6-gingerol biosynthesis by Denniff and Whiting in 1976 and by Schröder in 1997. 6-Gingerol is the major gingerol in ginger rhizomes, and it possesses interesting pharmacological activities such as an analgesic effect. While the biosynthesis of 6-gingerol has not been fully elucidated, plausible pathways are presented here.
In the proposed biosynthetic pathway (Scheme 1), L-Phe (1) is used as the starting material. It is converted into cinnamic acid (2) by phenylalanine ammonia lyase (PAL), and then into p-coumaric acid (3) by cinnamate 4-hydroxylase (C4H). 4-Coumarate:CoA ligase (4CL) is then used to obtain p-coumaroyl-CoA (5). p-Coumaroyl shikimate transferase (CST) is the enzyme responsible for the bonding of shikimic acid and p-coumaroyl-CoA. The resulting complex of (5) is then selectively oxidized at C3 to an alcohol by p-coumaroyl 5-O-shikimate 3'-hydroxylase (CS3'H). Through another action of CST, shikimate is cleaved from this intermediate, yielding caffeoyl-CoA (7). To obtain the desired substitution pattern on the aromatic ring, caffeoyl-CoA O-methyltransferase (CCOMT) converts the hydroxyl group at C3 into a methoxy group, as seen in feruloyl-CoA (8). Up to this step, according to Ramirez-Ahumada et al., the enzyme activities are high. It is speculated that polyketide synthases (PKS) and reductases are involved in the final synthesis of 6-gingerol (10).
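Since Scheme 1 itself is not reproduced here, the following arrow summary restates the pathway described in the paragraph above, using the enzyme abbreviations defined there; the final step is the speculative one:

```
% Proposed biosynthesis of 6-gingerol (Scheme 1, as described in the text)
\begin{align*}
\text{L-Phe (1)} &\xrightarrow{\text{PAL}} \text{cinnamic acid (2)}
  \xrightarrow{\text{C4H}} p\text{-coumaric acid (3)}
  \xrightarrow{\text{4CL}} p\text{-coumaroyl-CoA (5)} \\
&\xrightarrow{\text{CST}} p\text{-coumaroyl shikimate}
  \xrightarrow{\text{CS3'H}} \text{caffeoyl shikimate}
  \xrightarrow{\text{CST}} \text{caffeoyl-CoA (7)} \\
&\xrightarrow{\text{CCOMT}} \text{feruloyl-CoA (8)}
  \xrightarrow{\text{PKS, reductases (speculative)}} \text{6-gingerol (10)}
\end{align*}
```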
Because it is unclear whether the methoxy group is added before or after the condensation step of the polyketide synthase, an alternative pathway is shown in Scheme 2, in which the methoxy group is introduced after the PKS activity. In this alternative pathway, the enzymes involved are likely to be cytochrome P450 hydroxylases and S-adenosyl-L-methionine-dependent O-methyltransferases (OMT). There are three possibilities for the reduction step catalyzed by the reductase: directly after the PKS activity, after the PKS and hydroxylase activities, or at the end, after the PKS, hydroxylase, and OMT activities.
References
External links
Stilbenoid, diarylheptanoid and gingerol biosynthesis pathway at genome.jp
O-methylated natural phenols
Ginger
Pungent flavors
Ketones
Secondary alcohols
5-HT3 antagonists | Gingerol | [
"Chemistry"
] | 2,514 | [
"Ketones",
"Functional groups"
] |
2,444,137 | https://en.wikipedia.org/wiki/Desogestrel | Desogestrel is a progestin medication which is used in birth control pills. It is also used in the treatment of menopausal symptoms in women. The medication is available both alone and in combination with an estrogen. It is taken by mouth.
Side effects of desogestrel include menstrual irregularities, headaches, nausea, breast tenderness, mood changes, acne, increased hair growth, and others. Desogestrel is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. It has very weak androgenic and glucocorticoid activity and no other important hormonal activity. The medication is a prodrug of etonogestrel (3-ketodesogestrel) in the body.
Desogestrel was discovered in 1972 and was introduced for medical use in Europe in 1981. It became available in the United States in 1992. Desogestrel is sometimes referred to as a "third-generation" progestin. Like norethisterone and norgestrel, desogestrel is widely available as a progestogen-only "mini pill" for birth control. Desogestrel is marketed widely throughout the world. It is available as a generic medication. In 2020, the version with ethinylestradiol was the 120th most commonly prescribed medication in the United States, with more than 5 million prescriptions.
Medical uses
Desogestrel is a progestogen and acts as an agonist of the progesterone receptor, with antigonadotropic effects. It is used in conjunction with estrogens, and has been studied in combination with testosterone. Medications containing desogestrel and estrogen are used to treat endometriosis and as a component of menopausal hormone therapy. While commonly used as a female contraceptive, desogestrel suppresses spermatogenesis and has been shown to have potential as a male contraceptive.
Desogestrel and norethisterone are the only progestins that are widely used as a progestogen-only "mini pill". Desogestrel is also the only newer-generation progestin with reduced androgenic activity that is used in such formulations.
Available forms
Desogestrel is available alone in the form of 75 μg oral tablets and at a dose of 150 μg in combination with 20 or 30 μg ethinylestradiol in oral tablets. These formulations are all indicated specifically for contraceptive purposes.
Contraindications
Contraindications of desogestrel include:
Allergy to desogestrel or any other ingredients
Active thrombosis (deep vein thrombosis or pulmonary embolism)
Jaundice or severe liver disease
Hormone-sensitive cancers (e.g., breast cancer)
Unexplained vaginal bleeding
Desogestrel is not indicated for use in pregnancy. It is not contraindicated during lactation and breastfeeding.
Side effects
Common side effects of desogestrel may include menstrual irregularities, amenorrhea, headaches, nausea, breast tenderness, and mood changes (e.g., depression), as well as weight gain, acne, and hirsutism. However, it has also been reported not to adversely affect weight. In addition, acne and hirsutism are negligible when desogestrel is combined with ethinylestradiol, and this combination can actually be used to treat such symptoms. Desogestrel can also cause changes in cholesterol levels. Uncommon side effects of desogestrel may include vaginal infection, contact lens intolerance, vomiting, hair loss, dysmenorrhea, ovarian cysts, and fatigue, while rare side effects include rash, urticaria, and erythema nodosum. Breast discharge, ectopic pregnancies, and aggravation of angioedema may also occur with desogestrel. Serious side effects of combined oral contraceptives containing desogestrel may include venous thromboembolism, arterial thromboembolism, hormone-dependent tumors (e.g., liver tumors, breast cancer), and melasma.
Overdose
No serious harmful effects have been reported with overdose of desogestrel. Symptoms may include nausea, vomiting, and, in young girls, slight vaginal bleeding. In safety studies, dosages of up to 750 μg/day desogestrel in women showed no adverse effects on laboratory and various other parameters and produced no reported subjective side effects. There is no antidote to desogestrel overdose and treatment should be based on symptoms.
Interactions
Inducers of liver enzymes can increase the metabolism of desogestrel and etonogestrel and reduce their circulating levels. This may result in contraceptive failure. Examples of liver enzyme inducers include barbiturates (e.g., phenobarbital), bosentan, carbamazepine, efavirenz, phenytoin, primidone, rifampicin, and possibly also felbamate, griseofulvin, oxcarbazepine, rifabutin, St. John's Wort, and topiramate. Many antivirals for HIV/AIDS and HCV, such as boceprevir, nelfinavir, nevirapine, ritonavir, and telaprevir, may increase or decrease levels of desogestrel and etonogestrel. CYP3A4 inhibitors including strong inhibitors like clarithromycin, itraconazole, and ketoconazole and moderate inhibitors like diltiazem, erythromycin, and fluconazole may increase levels of desogestrel and etonogestrel. Hormonal contraceptives may interfere with the metabolism of other drugs, resulting in increased levels (e.g., ciclosporine) or decreased levels (e.g., lamotrigine).
Pharmacology
Pharmacodynamics
Desogestrel is a prodrug of etonogestrel (3-ketodesogestrel), and, via this active metabolite, it has progestogenic activity, antigonadotropic effects, very weak androgenic activity, very weak glucocorticoid activity, and no other hormonal activity.
Progestogenic activity
Desogestrel is a progestogen, or an agonist of the progesterone receptor (PR). It is an inactive prodrug of etonogestrel with essentially no affinity for the PR itself (about 1% of that of promegestone). Hence, etonogestrel is exclusively responsible for the effects of desogestrel. Etonogestrel has about 150% of the affinity of promegestone and 300% of the affinity of progesterone for the PR. Desogestrel (via etonogestrel) is a very potent progestogen and inhibits ovulation at very low doses, in the low microgram range. The effective minimum dosage for inhibition of ovulation is 60 μg/day desogestrel (alone, not in combination with an estrogen). However, some studies in combination with oral estradiol have suggested that higher doses may be necessary. Desogestrel and etonogestrel are among the most potent progestogens available, along with gestodene and levonorgestrel (which have effective ovulation-inhibiting dosages of 40 μg/day and 60 μg/day, respectively). Oral desogestrel is clinically on the order of 5,000 times more potent than oral micronized progesterone (which has an effective ovulation-inhibiting dosage of more than 300 mg/day) in humans.
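The roughly 5,000-fold figure follows directly from the ovulation-inhibiting dosages quoted above (a simple dose ratio, not an independent potency measurement):

```
\[ \frac{300\ \text{mg/day (oral progesterone)}}{60\ \mu\text{g/day (oral desogestrel)}}
 = \frac{300{,}000\ \mu\text{g/day}}{60\ \mu\text{g/day}} = 5{,}000 \]
```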
Due to its progestogenic activity, desogestrel has potent functional antiestrogenic effects in certain tissues. It dose-dependently antagonizes the effects of ethinylestradiol on the vaginal epithelium, cervical mucus, and endometrium, with marked progestogenic effects occurring at a dosage of 60 μg/day. There is a rise in body temperature in some women at 30 μg/day and in all women at 60 μg/day. Desogestrel also has antigonadotropic effects, which are similarly due to its progestogenic activity. The contraceptive effects of desogestrel in women are mediated not only by prevention of ovulation via its antigonadotropic effects but also by its marked progestogenic and antiestrogenic effects on cervical mucus and the endometrium.
Aside from its progestogenic activity, desogestrel also has some off-target hormonal activity at other steroid hormone receptors (see below). However, these activities are relatively weak, and desogestrel is said to be one of the most selective and pure progestogens used in oral contraceptives.
Antigonadotropic effects
Desogestrel has antigonadotropic effects via its progestogenic activity, similarly to other progestogens. It has been found to reduce testosterone levels by 15% in women at a dosage of 125 μg/day. In addition, desogestrel has been extensively investigated as an antigonadotropin at dosages of 150 to 300 μg/day in combination with testosterone in male contraceptive regimens. One study found that 150 μg/day and 300 μg/day desogestrel alone in healthy young men suppressed luteinizing hormone (LH) levels by about 35% and 42%, respectively; follicle-stimulating hormone (FSH) levels by about 47% and 55%, respectively; and testosterone levels by about 59% and 68%, respectively. LH levels were suppressed maximally by desogestrel within 3 days, whereas 14 days were necessary for maximal suppression of FSH and testosterone levels. A previous study by the same authors found that increasing the dosage of desogestrel from 300 μg/day to 450 μg/day resulted in no further suppression of gonadotropin concentrations. The addition of a low dose of 50 or 100 mg/week intramuscular testosterone enanthate after 3 weeks increased testosterone levels and further suppressed LH and FSH levels, to the limits of assay detection (i.e., to undetectable or near-undetectable levels), in both the 150 μg/day and 300 μg/day desogestrel groups. Upon cessation of treatment, levels of LH, FSH, and testosterone all recovered to baseline values within 4 weeks.
Androgenic activity
Etonogestrel has about 20% of the affinity of metribolone and 50% of the affinity of levonorgestrel for the androgen receptor (AR) while desogestrel has no affinity for this receptor. The 5α-reduced metabolite of etonogestrel, 5α-dihydroetonogestrel (3-keto-5α-dihydrodesogestrel), also has some affinity for the AR (about 17% of that of metribolone). Desogestrel (via etonogestrel) has very low androgenic potency, about 1.9 to 7.4% of that of methyltestosterone in animal assays, and hence is considered to be a very weak androgen. Although etonogestrel has about the same affinity for the AR as norethisterone, due to the relatively increased progestogenic potency and decreased androgenic activity of etonogestrel, the drug has markedly higher selectivity for the PR over the AR than older 19-nortestosterone progestins like norethisterone and levonorgestrel. Conversely, its selectivity for the PR over the AR is similar to other newer 19-nortestosterone progestins like gestodene and norgestimate. It has been estimated that 150 μg/day desogestrel has less than one-sixth of the androgenic effect of 1 mg/day norethisterone (these being common dosages of the drugs used in combined oral contraceptives). Clinical studies with norethisterone even at very high dosages (e.g., 10 to 60 mg/day) have observed only mild androgenic effects in a minority of women including acne, increased sebum production, hirsutism, and slight virilization of female fetuses.
In accordance with its very weak androgenic activity, desogestrel has minimal effects on lipid metabolism and the blood lipid profile, although there may still be some significant changes. Desogestrel also reduces sex hormone-binding globulin (SHBG) levels by 50% when given to women alone, but when combined with 30 μg/day ethinylestradiol, which in contrast strongly activates SHBG production, there is a 200% increase in SHBG concentrations. Desogestrel may slightly reduce ethinylestradiol-induced increases in SHBG levels. However, at the dosages used in oral contraceptives and in combination with ethinylestradiol, which has potent functional antiandrogenic effects mainly due to increased SHBG levels, the androgenic activity of desogestrel is said to be essentially without any clinical relevance. Indeed, combined oral contraceptives containing ethinylestradiol and desogestrel have been found to significantly decrease free concentrations of testosterone and to possess overall antiandrogenic effects, significantly reducing symptoms of acne and hirsutism in women with hyperandrogenism.
Glucocorticoid activity
Desogestrel has no affinity for the glucocorticoid receptor, but etonogestrel has about 14% of the affinity of dexamethasone for this receptor. Hence, desogestrel and etonogestrel have weak glucocorticoid activity. At typical clinical dosages, the glucocorticoid activity of desogestrel is said to be negligible or very weak and hence not clinically relevant. However, it may nonetheless possibly influence vascular function, with some upregulation of the thrombin receptor observed with etonogestrel in vascular smooth muscle cells in vitro. This could, in theory, increase coagulation and contribute to an increased risk of venous thromboembolism and atherosclerosis. The affinity of etonogestrel for the glucocorticoid receptor is a product of its C11 methylene substitution, as substitutions at the C11 position are a common feature of corticosteroids and as levonorgestrel, which is etonogestrel without the C11 methylene group (17α-ethynyl-18-methyl-19-nortestosterone), has only 1% of the affinity of dexamethasone for the receptor and hence is considered to have negligible glucocorticoid activity.
Other activities
Desogestrel and etonogestrel have no affinity for the estrogen receptor, and hence have no estrogenic activity. However, the metabolite 3β-hydroxydesogestrel has weak affinity for the estrogen receptor (about 2% of that of estradiol), although the significance of this is uncertain.
Desogestrel and etonogestrel have no affinity for the mineralocorticoid receptor, and hence have no mineralocorticoid or antimineralocorticoid activity.
Desogestrel and etonogestrel show some albeit weak inhibition of 5α-reductase (5.7% inhibition at 0.1 μM, 34.9% inhibition at 1 μM) and cytochrome P450 enzymes (e.g., CYP3A4) ( = 5 μM) in vitro.
Desogestrel stimulates the proliferation of MCF-7 breast cancer cells in vitro, an action that is independent of the classical PRs and is instead mediated via the progesterone receptor membrane component-1 (PGRMC1). Certain other progestins act similarly in this assay, whereas progesterone acts neutrally. It is unclear if these findings may explain the different risks of breast cancer observed with progesterone and progestins in clinical studies.
Pharmacokinetics
The bioavailability of desogestrel has been found to range from 40 to 100%, with an average of 76%. This significant interindividual variability is comparable to that with norethisterone and levonorgestrel. Peak concentrations of etonogestrel occur about 1.5 hours after a dose while concentrations of desogestrel are very low and have disappeared by 3 hours after a dose. Steady-state levels of etonogestrel are achieved after about 8 to 10 days of daily administration. Accumulation of etonogestrel is thought to be related to progressive inhibition of 5α-reductase and cytochrome P450 monooxygenases (e.g., CYP3A4). The plasma protein binding of desogestrel is 99% and it is bound exclusively to albumin. Etonogestrel is bound 95 to 98% to plasma proteins. It is bound about 65 to 66% to albumin and 30 to 32% to SHBG, with 2 to 5% free in the circulation. While desogestrel is not bound to SHBG, etonogestrel has relatively high affinity for this plasma protein, 3 to 15% of that of dihydrotestosterone, although this is considerably less than that of the related progestins levonorgestrel and gestodene. Neither desogestrel nor etonogestrel is bound by corticosteroid-binding globulin.
Desogestrel is a prodrug of etonogestrel (3-ketodesogestrel) and upon ingestion is rapidly and completely transformed into this metabolite in the intestines and liver. Hydroxylation of the C3 position of desogestrel catalyzed by cytochrome P450-dependent enzymes, with 3α-hydroxydesogestrel and 3β-hydroxydesogestrel as intermediates, followed by oxidation of the C3 hydroxyl group, is responsible for the transformation. A small percentage of desogestrel is metabolized into levonorgestrel, which involves the removal of the C11 methylene group. Following further metabolism of etonogestrel, which occurs mainly by reduction of the Δ4-3-keto group (by 5α- and 5β-reductases) and hydroxylation (by monooxygenases), the major metabolite of desogestrel is 3α,5α-tetrahydroetonogestrel. Desogestrel has a very short terminal half-life of about 1.5 hours while etonogestrel has a relatively long elimination half-life of about 21 to 38 hours, reflecting the nature of desogestrel as a prodrug. Desogestrel and etonogestrel are eliminated exclusively as metabolites, about 50% in urine and 35% in feces.
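As a back-of-envelope check, assuming simple first-order elimination (which ignores the progressive enzyme inhibition noted above), steady state would be approached within about five elimination half-lives of etonogestrel:

```
\[ t_{\text{ss}} \approx 5 \times t_{1/2} \approx 5 \times (21\text{--}38\ \text{h})
 \approx 105\text{--}190\ \text{h} \approx 4\text{--}8\ \text{days} \]
```

The observed 8 to 10 days is somewhat longer, consistent with the progressive inhibition of 5α-reductase and cytochrome P450 enzymes described above.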
Chemistry
Desogestrel, also known as 3-deketo-11-methylene-17α-ethynyl-18-methyl-19-nortestosterone or as 11-methylene-17α-ethynyl-18-methylestr-4-en-17β-ol, is a synthetic estrane steroid and a derivative of testosterone. It is more specifically a derivative of norethisterone (17α-ethynyl-19-nortestosterone) and is a member of the gonane (13β-ethylgonane or 18-methylestrane) subgroup of the 19-nortestosterone family of progestins. Desogestrel is the C3 deketo analogue of etonogestrel and the C3 deketo and C11 methylene analogue of levonorgestrel.
Synthesis
A chemical synthesis of desogestrel has been published.
History
Desogestrel was synthesized in 1972 by Organon International in the Netherlands and was first described in the literature in 1975. It was developed following the discovery that C11 substitutions enhance the biological activity of norethisterone. Desogestrel was introduced for medical use in 1981 under the brand names Marvelon and Desogen in the Netherlands. Along with gestodene and norgestimate, it is sometimes referred to as a "third-generation" progestin based on the time of its introduction to the market. It was the first of the three "third-generation" progestins to be introduced. Although desogestrel was introduced in 1981 and was widely used in Europe from this time, it was not introduced in the United States until 1992.
Society and culture
Generic names
Desogestrel is the generic name of the drug across the major international naming systems. While under development, it was known as ORG-2969.
Brand names
Desogestrel is marketed under a variety of brand names throughout the world including Alenvona, Apri, Azalia, Azurette, Bekyree, Caziant, Cerazette, Cerelle, Cesia, Cyclessa, Cyred, Denise, Desogen, Desirett, Diamilla, Emoquette, Enskyce, Feanolla, Gedarel, Gracial, Hana, Isibloom, Juleber, Kalliga, Kariva, Laurina, Lovima, Marvelon, Mercilon, Mircette, Mirvala, Novynette, Ortho-Cept, Pimtrea, Reclipsen, Regulon, Simliya, Solia, Velivet, Viorele, and Volnea among others.
Availability
Desogestrel is available widely throughout the world, including in the United States, Canada, the United Kingdom, Ireland, many other European countries, Australia, New Zealand, South Africa, Latin America, Asia, and elsewhere. In the United States, it is available only in combination with ethinylestradiol as a combined oral contraceptive; it is not available alone and is not approved for any other indications.
In the UK, in July 2021, some desogestrel pills were made available to purchase over the counter, without requiring a prescription from a doctor. Pharmacists use a suitability questionnaire to determine whether the medication is suitable for the person; if it is, it can be purchased from a pharmacy or online (all online purchases require the suitability questionnaire to be completed before the medication is sent to the customer).
Controversy
In February 2007, the consumer advocacy group Public Citizen released a petition requesting that the Food and Drug Administration ban oral contraceptives containing desogestrel in the United States, citing studies going as far back as 1995 that suggest the risk of dangerous blood clots is doubled for women on such pills in comparison to other oral contraceptives. In 2009, Public Citizen released a list of recommendations that included numerous alternative, second-generation birth control pills that women could take in place of oral contraceptives containing desogestrel. Most of those second-generation medications have been on the market longer and have been shown to be as effective in preventing unwanted pregnancy, but with a lower risk of blood clots. Medications cited specifically in the petition include Apri-28, Cyclessa, Desogen, Kariva, Mircette, Ortho-Cept, Reclipsen, Velivet, and some generic pills, all of which contain desogestrel in combination with ethinylestradiol. Medications containing desogestrel as the only active ingredient (as opposed to those combining it with ethinylestradiol, as in combined oral contraceptives) do not show an increased thrombosis risk and are therefore safer than second-generation birth-control pills with regard to thrombosis.
Research
Desogestrel has been studied extensively as an antigonadotropin for use in combination with testosterone as a hormonal contraceptive in men. Such combinations have been found to be effective in producing reversible azoospermia in most men and reversible azoospermia or severe oligozoospermia in almost all men.
References
Further reading
5α-Reductase inhibitors
Ethynyl compounds
Anabolic–androgenic steroids
Drugs developed by Schering-Plough
Drugs developed by Merck & Co.
Antigonadotropins
Contraception for males
Dienes
Estranes
Glucocorticoids
Hormonal contraception
Prodrugs
Progestogens
Tertiary alcohols
Vinylidene compounds | Desogestrel | [
"Chemistry"
] | 5,177 | [
"Chemicals in medicine",
"Prodrugs"
] |
2,444,141 | https://en.wikipedia.org/wiki/Drospirenone | Drospirenone is a progestin and antiandrogen medication which is used in birth control pills to prevent pregnancy and in menopausal hormone therapy, among other uses. It is available both alone under the brand name Slynd and in combination with an estrogen under the brand name Yasmin among others. The medication is an analog of the drug spironolactone. Drospirenone is taken by mouth.
Common side effects include acne, headache, breast tenderness, weight increase, and menstrual changes. Rare side effects may include high potassium levels and blood clots (when taken as a combined estrogen–progestogen pill), among others. Drospirenone is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. It has additional antimineralocorticoid and antiandrogenic activity and no other important hormonal activity. Because of its antimineralocorticoid activity and lack of undesirable off-target activity, drospirenone is said to more closely resemble bioidentical progesterone than other progestins do.
Drospirenone was patented in 1976 and introduced for medical use in 2000. It is available widely throughout the world. The medication is sometimes referred to as a "fourth-generation" progestin. It is available as a generic medication. In 2020, a formulation of drospirenone with ethinylestradiol was the 145th most commonly prescribed medication in the United States, with more than 4 million prescriptions.
Medical uses
Drospirenone (DRSP) is used by itself as a progestogen-only birth control pill, in combination with the estrogens ethinylestradiol (EE) or estetrol (E4), with or without supplemental folic acid (vitamin B9), as a combined birth control pill, and in combination with the estrogen estradiol (E2) for use in menopausal hormone therapy. A birth control pill with low-dose ethinylestradiol is also indicated for the treatment of moderate acne, premenstrual syndrome (PMS), premenstrual dysphoric disorder (PMDD), and dysmenorrhea (painful menstruation). For use in menopausal hormone therapy, E2/DRSP is specifically approved to treat moderate to severe vasomotor symptoms (hot flashes), vaginal atrophy, and postmenopausal osteoporosis. The drospirenone component in this formulation is included specifically to prevent estrogen-induced endometrial hyperplasia. Drospirenone has also been used in combination with an estrogen as a component of hormone therapy for transgender women.
Studies have found that EE/DRSP is superior to placebo in reducing premenstrual emotional and physical symptoms while also improving quality of life. E2/DRSP has been found to increase bone mineral density and to reduce the occurrence of bone fractures in postmenopausal women. In addition, E2/DRSP has a favorable influence on cholesterol and triglyceride levels and decreases blood pressure in women with high blood pressure. Due to its antimineralocorticoid activity, drospirenone opposes estrogen-induced salt and water retention and maintains or slightly reduces body weight.
Available forms
Drospirenone is available in the following formulations, brand names, and indications:
Drospirenone 4 mg (Slynd) – progestogen-only birth control pill
Drospirenone 3 mg and estetrol 14.2 mg (Nextstellis (US)) – combined birth control pill
Ethinylestradiol 30 μg and drospirenone 3 mg (Ocella, Syeda, Yasmin, Zarah, Zumandimine) – combined birth control pill
Ethinylestradiol 20 μg and drospirenone 3 mg (Gianvi, Jasmiel, Loryna, Lo-Zumandimine, Nikki, Vestura, Yaz) – combined birth control pill, acne, PMS, PMDD, dysmenorrhea
Ethinylestradiol 30 μg, drospirenone 3 mg, and levomefolate calcium 0.451 mg (Beyaz, Tydemy) – combined birth control pill with vitamin B9 supplementation, acne, PMS
Estetrol 15 mg and drospirenone 3 mg (Nextstellis (CA)) – combined birth control pill
Estradiol 0.5 or 1 mg and drospirenone 0.25 or 0.5 mg (Angeliq) – menopausal hormone therapy (menopausal syndrome, postmenopausal osteoporosis)
Contraindications
Contraindications of drospirenone include renal impairment or chronic kidney disease, adrenal insufficiency, presence or history of cervical cancer or other progestogen-sensitive cancers, benign or malignant liver tumors or hepatic impairment, undiagnosed abnormal uterine bleeding, and hyperkalemia (high potassium levels). Renal impairment, hepatic impairment, and adrenal insufficiency are contraindicated because they increase exposure to drospirenone and/or increase the risk of hyperkalemia with drospirenone.
Side effects
Adverse effects of drospirenone alone occurring in more than 1% of women may include unscheduled menstrual bleeding (breakthrough or intracyclic) (40.3–64.4%), acne (3.8%), metrorrhagia (2.8%), headache (2.7%), breast pain (2.2%), weight gain (1.9%), dysmenorrhea (1.9%), nausea (1.8%), vaginal hemorrhage (1.7%), decreased libido (1.3%), breast tenderness (1.2%), and irregular menstruation (1.2%).
High potassium levels
Drospirenone is an antimineralocorticoid with potassium-sparing properties, though in most cases no increase of potassium levels is to be expected. In women with mild or moderate chronic kidney disease, or in combination with chronic daily use of other potassium-sparing medications (ACE inhibitors, angiotensin II receptor antagonists, potassium-sparing diuretics, heparin, antimineralocorticoids, or nonsteroidal anti-inflammatory drugs), a potassium level should be checked after two weeks of use to test for hyperkalemia. Persistent hyperkalemia that required discontinuation occurred in 2 out of around 1,000 women (0.2%) with 4 mg/day drospirenone alone in clinical trials.
Blood clots
Birth control pills containing ethinylestradiol and a progestin are associated with an increased risk of venous thromboembolism (VTE), including deep vein thrombosis (DVT) and pulmonary embolism (PE). The incidence is about 4-fold higher on average than in women not taking a birth control pill. The absolute risk of VTE with ethinylestradiol-containing birth control pills is small, in the area of 3 to 10 out of 10,000 women per year, relative to 1 to 5 out of 10,000 women per year not taking a birth control pill. The risk of VTE during pregnancy is 5 to 20 in 10,000 women per year and during the postpartum period is 40 to 65 per 10,000 women per year. The higher risk of VTE with combined birth control pills is thought to be due to the ethinylestradiol component, as ethinylestradiol has estrogenic effects on liver synthesis of coagulation factors which result in a procoagulatory state. In contrast to ethinylestradiol-containing birth control pills, neither progestogen-only birth control nor the combination of transdermal estradiol and an oral progestin in menopausal hormone therapy is associated with an increased risk of VTE.
Different progestins in ethinylestradiol-containing birth control pills have been associated with different risks of VTE. Birth control pills containing progestins such as desogestrel, gestodene, drospirenone, and cyproterone acetate have been found to have 2- to 3-fold the risk of VTE of birth control pills containing levonorgestrel in retrospective cohort and nested case–control observational studies. However, this area of research is controversial, and confounding factors may have been present in these studies. Other observational studies, specifically prospective cohort and case control studies, have found no differences in risk between different progestins, including between birth control pills containing drospirenone and birth control pills containing levonorgestrel. These kinds of observational studies have certain advantages over the aforementioned types of studies, like better ability to control for confounding factors. Systematic reviews and meta-analyses of all of the data in the mid-to-late 2010s found that birth control pills containing cyproterone acetate, desogestrel, drospirenone, or gestodene overall were associated with a risk of VTE of about 1.3- to 2.0-fold compared to that of levonorgestrel-containing birth control pills.
Androgenic progestins have been found to antagonize to some degree the effects of ethinylestradiol on coagulation. As a result, more androgenic progestins, like levonorgestrel and norethisterone, may oppose the procoagulatory effects of ethinylestradiol and result in a lower increase in risk of VTE. Conversely, this would be the case less or not at all with progestins that are less androgenic, like desogestrel and gestodene, as well as with progestins that are antiandrogenic, like drospirenone and cyproterone acetate.
In the early 2010s, the FDA updated the label for birth control pills containing drospirenone and other progestins to include warnings for stopping use prior to and after surgery, and to warn that such birth control pills may have a higher risk of blood clots.
Breast cancer
Drospirenone has been found to stimulate the proliferation and migration of breast cancer cells in preclinical research, similarly to certain other progestins. However, some evidence suggests that drospirenone may do this more weakly than certain other progestins, like medroxyprogesterone acetate. The combination of estradiol and drospirenone has been found to increase breast density, an established risk factor for breast cancer, in postmenopausal women.
Data on risk of breast cancer in women with newer progestins like drospirenone are lacking at present. Progestogen-only birth control is not generally associated with a higher risk of breast cancer. Conversely, combined birth control and menopausal hormone therapy with an estrogen and a progestogen are associated with higher risks of breast cancer.
Overdose
There have been no reports of serious adverse effects with overdose of drospirenone. Symptoms that may occur in the event of an overdose may include nausea, vomiting, and vaginal bleeding. There is no antidote for overdose of drospirenone and treatment of overdose should be based on symptoms. Since drospirenone has antimineralocorticoid activity, levels of potassium and sodium should be measured and signs of metabolic acidosis should be monitored.
Interactions
Inhibitors and inducers of the cytochrome P450 enzyme CYP3A4 may influence the levels and efficacy of drospirenone. Treatment for 10 days with 200 mg twice daily ketoconazole, a strong CYP3A4 inhibitor among other actions, has been found to result in a moderate 2.0- to 2.7-fold increase in exposure to drospirenone. Drospirenone does not appear to influence the metabolism of omeprazole (metabolized via CYP2C19), simvastatin (metabolized via CYP3A4), or midazolam (metabolized via CYP3A4), and likely does not influence the metabolism of other medications that are metabolized via these pathways. Drospirenone may interact with potassium-sparing medications such as ACE inhibitors, angiotensin II receptor antagonists, potassium-sparing diuretics, potassium supplements, heparin, antimineralocorticoids, and nonsteroidal anti-inflammatory drugs to further increase potassium levels. This may increase the risk of hyperkalemia (high potassium levels).
Pharmacology
Pharmacodynamics
Drospirenone binds with high affinity to the progesterone receptor (PR) and mineralocorticoid receptor (MR), with lower affinity to the androgen receptor (AR), and with very low affinity to the glucocorticoid receptor (GR). It is an agonist of the PR and an antagonist of the MR and AR, and hence is a progestogen, antimineralocorticoid, and antiandrogen. Drospirenone has no estrogenic activity and no appreciable glucocorticoid or antiglucocorticoid activity.
Progestogenic activity
Drospirenone is an agonist of the PR, the biological target of progestogens like progesterone. It has about 35% of the affinity of promegestone for the PR and about 19 to 70% of the affinity of progesterone for the PR. Drospirenone has antigonadotropic and functional antiestrogenic effects as a result of PR activation. The ovulation-inhibiting dosage of drospirenone is 2 to 3 mg/day. Inhibition of ovulation occurred in about 90% of women at a dose of 0.5 to 2 mg/day and in 100% of women at a dose of 3 mg/day. The total endometrial transformation dose of drospirenone is about 50 mg per cycle, whereas its daily dose is 2 mg for partial transformation and 4 to 6 mg for full transformation. The medication acts as a contraceptive by activating the PR, which suppresses the secretion of luteinizing hormone, inhibits ovulation, and alters the cervical membrane and endometrium.
Due to its antigonadotropic effects, drospirenone inhibits the secretion of the gonadotropins, luteinizing hormone (LH) and follicle-stimulating hormone (FSH), and suppresses gonadal sex hormone production, including of estradiol, progesterone, and testosterone. Drospirenone alone at 4 mg/day has been found to suppress estradiol levels in premenopausal women to about 40 to 80 pg/mL depending on the time of the cycle. No studies of the antigonadotropic effects of drospirenone or its influence on hormone levels appear to have been conducted in men. In male cynomolgus monkeys however, 4 mg/kg/day oral drospirenone strongly suppressed testosterone levels.
Antimineralocorticoid activity
Drospirenone is an antagonist of the MR, the biological target of mineralocorticoids like aldosterone, and hence is an antimineralocorticoid. It has about 100 to 500% of the affinity of aldosterone for the MR and about 50 to 230% of the affinity of progesterone for the MR. Drospirenone is about 5.5 to 11 times more potent as an antimineralocorticoid than spironolactone in animals. Accordingly, 3 to 4 mg drospirenone is said to be equivalent to about 20 to 25 mg spironolactone in terms of antimineralocorticoid activity. It has been said that the pharmacological profile of drospirenone more closely resembles that of progesterone than do those of other progestins, due to its antimineralocorticoid activity. Drospirenone is the only clinically used progestogen with prominent antimineralocorticoid activity besides progesterone. For comparison to progesterone, a 200 mg dose of oral progesterone is considered to be approximately equivalent in antimineralocorticoid effect to a 25 to 50 mg dose of spironolactone. Both drospirenone and progesterone are actually weak partial agonists of the MR in the absence of mineralocorticoids.
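The quoted dose equivalence is consistent with the stated potency ratio (a rough consistency check on the figures above, not an independent estimate):

```
\[ (3\text{--}4\ \text{mg drospirenone}) \times (5.5\text{--}11)
 \approx 17\text{--}44\ \text{mg spironolactone-equivalent} \]
```

which brackets the quoted 20 to 25 mg of spironolactone.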
Due to its antimineralocorticoid activity, drospirenone increases natriuresis, decreases water retention and blood pressure, and produces compensatory increases in plasma renin activity as well as circulating levels and urinary excretion of aldosterone. This has been shown to occur at doses of 2 to 4 mg/day. Similar effects occur during the luteal phase of the menstrual cycle due to increased progesterone levels and the resulting antagonism of the MR. Estrogens, particularly ethinylestradiol, activate liver production of angiotensinogen and increase levels of angiotensinogen and angiotensin II, thereby activating the renin–angiotensin–aldosterone system. As a result, they can produce undesirable side effects including increased sodium excretion, water retention, weight gain, and increased blood pressure. Progesterone and drospirenone counteract these undesirable effects via their antimineralocorticoid activity. Accumulating research indicates that antimineralocorticoids like drospirenone and spironolactone may also have positive effects on adipose tissue and metabolic health.
Antiandrogenic activity
Drospirenone is an antagonist of the AR, the biological target of androgens like testosterone and dihydrotestosterone (DHT). It has about 1 to 65% of the affinity of the synthetic anabolic steroid metribolone for the AR. The medication is more potent as an antiandrogen than spironolactone, but is less potent than cyproterone acetate, with about 30% of its antiandrogenic activity in animals. Progesterone displays antiandrogenic activity in some assays similarly to drospirenone, although this issue is controversial and many researchers regard progesterone as having no significant antiandrogenic activity.
Drospirenone shows antiandrogenic effects on the serum lipid profile, including higher HDL cholesterol and triglyceride levels and lower LDL cholesterol levels, at a dose of 3 mg/day in women. The medication does not inhibit the effects of ethinylestradiol on sex hormone-binding globulin (SHBG) and serum lipids, in contrast to androgenic progestins like levonorgestrel but similarly to other antiandrogenic progestins like cyproterone acetate. SHBG levels are significantly higher with ethinylestradiol and cyproterone acetate than with ethinylestradiol and drospirenone, owing to the more potent antiandrogenic activity of cyproterone acetate relative to drospirenone. Androgenic progestins like levonorgestrel have been found to inhibit the procoagulatory effects of estrogens like ethinylestradiol on hepatic synthesis of coagulation factors, whereas this may occur less or not at all with weakly androgenic progestins like desogestrel and antiandrogenic progestins like drospirenone.
Other activity
Drospirenone stimulates the proliferation of MCF-7 breast cancer cells in vitro, an action that is independent of the classical PRs and is instead mediated via the progesterone receptor membrane component-1 (PGRMC1). Certain other progestins act similarly in this assay, whereas progesterone acts neutrally. It is unclear if these findings may explain the different risks of breast cancer observed with progesterone and progestins in clinical studies.
Pharmacokinetics
Absorption
The oral bioavailability of drospirenone is between 66 and 85%. Peak levels occur 1 to 6 hours after an oral dose. Levels are about 27 ng/mL after a single 4 mg dose. There is 1.5- to 2-fold accumulation in drospirenone levels with continuous administration, with steady-state levels of drospirenone achieved after 7 to 10 days of administration. Peak levels of drospirenone at steady state with 4 mg/day drospirenone are about 41 ng/mL. With the combination of 30 μg/day ethinylestradiol and 3 mg/day drospirenone, peak levels of drospirenone after a single dose are 35 ng/mL, and levels at steady state are 60 to 87 ng/mL at peak and 20 to 25 ng/mL at trough. The pharmacokinetics of oral drospirenone are linear with a single dose across a dose range of 1 to 10 mg. Intake of drospirenone with food does not influence the absorption of drospirenone.
Distribution
The distribution half-life of drospirenone is about 1.6 to 2 hours. The apparent volume of distribution of drospirenone is approximately 4 L/kg. The plasma protein binding of drospirenone is 95 to 97%. It is bound to albumin and 3 to 5% circulates freely or unbound. Drospirenone has no affinity for sex hormone-binding globulin (SHBG) or corticosteroid-binding globulin (CBG), and hence is not bound by these plasma proteins in the circulation.
Metabolism
The metabolism of drospirenone is extensive. It is metabolized into the acid form of drospirenone by opening of its lactone ring. The medication is also metabolized by reduction of its double bond between the C4 and C5 positions and subsequent sulfation. The two major metabolites of drospirenone are drospirenone acid and 4,5-dihydrodrospirenone 3-sulfate, and are both formed independently of the cytochrome P450 system. Neither of these metabolites are known to be pharmacologically active. Drospirenone also undergoes oxidative metabolism by CYP3A4.
Elimination
Drospirenone is excreted in urine and feces, with slightly more excreted in feces than in urine. Only trace amounts of unchanged drospirenone can be found in urine and feces. At least 20 different metabolites can be identified in urine and feces. Drospirenone and its metabolites are excreted in urine about 38% as glucuronide conjugates, 47% as sulfate conjugates, and less than 10% in unconjugated form. In feces, excretion is about 17% glucuronide conjugates, 20% sulfate conjugates, and 33% unconjugated.
The elimination half-life of drospirenone is between 25 and 33 hours. The half-life of drospirenone is unchanged with repeated administration. Elimination of drospirenone is virtually complete 10 days after the last dose.
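The time course stated here is consistent with simple first-order kinetics. The following sketch (an illustration, not from the source; it assumes once-daily dosing and an idealized one-compartment model) computes the accumulation factor and washout from the quoted half-life range:

def accumulation_factor(half_life_h: float, dosing_interval_h: float = 24.0) -> float:
    """Steady-state accumulation ratio for repeated first-order dosing:
    R = 1 / (1 - 2**(-tau / t_half))."""
    return 1.0 / (1.0 - 2.0 ** (-dosing_interval_h / half_life_h))

def fraction_remaining(half_life_h: float, hours_since_last_dose: float) -> float:
    """Fraction of the last dose still present after the given time."""
    return 2.0 ** (-hours_since_last_dose / half_life_h)

for t_half in (25.0, 33.0):
    print(f"t1/2 = {t_half:.0f} h: accumulation ~{accumulation_factor(t_half):.1f}-fold, "
          f"{fraction_remaining(t_half, 240.0):.1%} remains after 10 days")
# t1/2 = 25 h: ~2.1-fold, 0.1% remains; t1/2 = 33 h: ~2.5-fold, 0.6% remains

The idealized estimate lands slightly above the reported 1.5- to 2-fold accumulation, as expected for such a simplified sketch, and confirms that well under 1% of a dose remains 10 days after the last dose.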
Chemistry
Drospirenone, also known as 1,2-dihydrospirorenone or as 17β-hydroxy-6β,7β:15β,16β-dimethylene-3-oxo-17α-pregn-4-ene-21-carboxylic acid, γ-lactone, is a synthetic steroidal 17α-spirolactone, or more simply a spirolactone. It is an analogue of other spirolactones like spironolactone, canrenone, and spirorenone. Drospirenone differs structurally from spironolactone only in that the C7α acetyl thio substitution of spironolactone has been removed and two methylene groups have been substituted in at the C6β–7β and C15β–16β positions.
Spirolactones like drospirenone and spironolactone are derivatives of progesterone, which likewise has progestogenic and antimineralocorticoid activity. Spironolactone itself has negligible progestogenic activity, and the loss of its C7α acetylthio group appears to be involved in the restoration of progestogenic activity in drospirenone: SC-5233, the analogue of spironolactone without a C7α substitution, has potent progestogenic activity, similarly to drospirenone.
History
Drospirenone was patented in 1976. Schering AG of Germany was granted several patents on the production of drospirenone, including WIPO and US patents granted in 1998 and 2000, respectively. Drospirenone was introduced for medical use, in combination with ethinylestradiol as a combined birth control pill, in 2000. It is sometimes described as a "fourth-generation" progestin based on its time of introduction. The medication was approved for use in menopausal hormone therapy in combination with estradiol in 2005. Drospirenone was introduced for use as a progestogen-only birth control pill in 2019. A combined birth control pill containing estetrol and drospirenone was approved in 2021.
Society and culture
Generic names
Drospirenone is the generic name of the drug and its INN, USAN, BAN, and JAN, while drospirénone is its DCF. Its name is a shortened form of the name 1,2-dihydrospirorenone or dihydrospirenone. Drospirenone is also known by its developmental code names SH-470 and ZK-30595 (alone), BAY 86-5300, BAY 98-7071, and SH-T-00186D (in combination with ethinylestradiol), BAY 86-4891 (in combination with estradiol), and FSN-013 (in combination with estetrol).
Brand names
Drospirenone is marketed in combination with an estrogen under a variety of brand names throughout the world. Among others, it is marketed in combination with ethinylestradiol under the brand names Yasmin and Yaz, in combination with estetrol under the brand name Nextstellis, and in combination with estradiol under the brand name Angeliq.
Availability
Drospirenone is marketed widely throughout the world.
Generation
Drospirenone has been categorized as a "fourth-generation" progestin.
Litigation
Many lawsuits have been filed against Bayer, the manufacturer of drospirenone, due to the higher risk of venous thromboembolism (VTE) that has been observed with combined birth control pills containing drospirenone and certain other progestins relative to the risk with levonorgestrel-containing combined birth control pills.
In July 2012, Bayer notified its stockholders that there were more than 12,000 such lawsuits against the company involving Yaz, Yasmin, and other birth control pills with drospirenone. They also noted that the company by then had settled 1,977 cases for US$402.6 million, for an average of US$212,000 per case, while setting aside US$610.5 million to settle the others.
As of 17 July 2015, at least 4,000 lawsuits and claims were still pending regarding VTE related to drospirenone. This was in addition to around 10,000 claims that Bayer had already settled without admitting liability, with the VTE settlements amounting to US$1.97 billion. Bayer also reached a settlement for arterial thromboembolic events, including stroke and heart attacks, for US$56.9 million.
Research
A combination of ethinylestradiol, drospirenone, and prasterone is under development by Pantarhei Bioscience as a combined birth control pill for prevention of pregnancy in women. It includes prasterone (dehydroepiandrosterone; DHEA), an oral androgen prohormone, to replace testosterone and avoid testosterone deficiency caused by suppression of testosterone by ethinylestradiol and drospirenone. As of August 2018, the formulation is in phase II/III clinical trials.
Drospirenone has been suggested for potential use as a progestin in male hormonal contraception.
Drospirenone has been studied in forms for parenteral administration.
References
Further reading
Antimineralocorticoids
Drugs developed by Bayer
Cyclopropanes
Enantiopure drugs
Enones
Hormonal contraception
Lactones
Pregnanes
Progestogens
Spiro compounds
Spirolactones
Steroidal antiandrogens | Drospirenone | [
"Chemistry"
] | 6,386 | [
"Organic compounds",
"Stereochemistry",
"Enantiopure drugs",
"Spiro compounds"
] |
2,444,143 | https://en.wikipedia.org/wiki/Norethisterone%20acetate | Norethisterone acetate (NETA), also known as norethindrone acetate and sold under the brand name Primolut-Nor among others, is a progestin medication which is used in birth control pills, menopausal hormone therapy, and for the treatment of gynecological disorders. The medication is available in low-dose and high-dose formulations and is used alone or in combination with an estrogen. It is ingested orally.
Side effects of NETA include menstrual irregularities, headaches, nausea, breast tenderness, mood changes, acne, increased hair growth, and others. NETA is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. It has weak androgenic and estrogenic activity and no other important hormonal activity. The medication is a prodrug of norethisterone in the body.
NETA was patented in 1957 and was introduced for medical use in 1964. It is sometimes referred to as a "first-generation" progestin. NETA is marketed widely throughout the world. It is available as a generic medication.
Medical uses
NETA is used as a hormonal contraceptive in combination with estrogen, in the treatment of gynecological disorders such as abnormal uterine bleeding, and as a component of menopausal hormone therapy for the treatment of menopausal symptoms.
Available forms
NETA is available in the form of tablets for use by mouth both alone and in combination with estrogens including estradiol, estradiol valerate, and ethinylestradiol. Transdermal patches providing a combination of 50 μg/day estradiol and 0.14 or 0.25 mg/day NETA are available under the brand names CombiPatch and Estalis.
NETA was previously available for use by intramuscular injection in the form of ampoules containing 20 mg NETA, 5 mg estradiol benzoate, 8 mg estradiol valerate, and 180 mg testosterone enanthate in oil solution under the brand name Ablacton to suppress lactation in postpartum women.
Contraindications
Side effects
Side effects of NETA include menstrual irregularities, headaches, nausea, breast tenderness, mood changes, acne, increased hair growth, and others.
Overdose
Interactions
Pharmacology
Pharmacodynamics
NETA is a prodrug of norethisterone in the body. Upon oral ingestion, it is rapidly converted into norethisterone by esterases during intestinal and first-pass hepatic metabolism. Hence, as a prodrug of norethisterone, NETA has essentially the same effects, acting as a potent progestogen with additional weak androgenic and estrogenic activity (the latter via its metabolite ethinylestradiol).
Progestogenic effects
In terms of dosage equivalence, norethisterone and NETA are typically used at respective dosages of 0.35 mg/day and 0.6 mg/day as progestogen-only contraceptives, and at respective dosages of 0.5–1 mg/day and 1–1.5 mg/day in combination with ethinylestradiol in combined oral contraceptives. Conversely, the two drugs have been used at about the same dosages in menopausal hormone therapy for the treatment of menopausal symptoms. NETA has about 12% higher molecular weight than norethisterone due to the presence of its C17β acetate ester. Micronization of NETA has been found to increase its potency by several-fold in animals and women. The endometrial transformation dosage of micronized NETA per cycle is 12 to 14 mg, whereas that for non-micronized NETA is 30 to 60 mg.
Estrogenic effects
NETA metabolizes into ethinylestradiol at a rate of 0.20 to 0.33% across a dose range of 10 to 40 mg. Peak levels of ethinylestradiol with a 10, 20, or 40 mg dose of NETA were 58, 178, and 231 pg/mL, respectively. For comparison, a 30 to 40 μg dose of oral ethinylestradiol typically results in a peak ethinylestradiol level of 100 to 135 pg/mL. As such, in terms of ethinylestradiol exposure, 10 to 20 mg NETA may be equivalent to 20 to 30 μg ethinylestradiol and 40 mg NETA may be similar to 50 μg ethinylestradiol. In another study however, 5 mg NETA produced an equivalent of 28 μg ethinylestradiol (0.7% conversion rate) and 10 mg NETA produced an equivalent of 62 μg ethinylestradiol (1.0% conversion rate). Due to its estrogenic activity via ethinylestradiol, high doses of NETA have been proposed for add-back in the treatment of endometriosis without estrogen supplementation. Generation of ethinylestradiol with high doses of NETA may increase the risk of venous thromboembolism but may also decrease menstrual bleeding relative to progestogen exposure alone.
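The conversion arithmetic in this paragraph can be made explicit. The sketch below is illustrative only: it applies the quoted 0.20–0.33% mass-conversion range linearly, whereas the equivalences stated above are based on measured peak levels and the conversion is not strictly linear with dose.

def ee_equivalent_ug(neta_dose_mg: float, conversion_pct: float) -> float:
    """Micrograms of ethinylestradiol (EE) formed from an oral NETA dose,
    given the percentage of the dose metabolized to EE."""
    return neta_dose_mg * 1000.0 * conversion_pct / 100.0

for dose_mg in (10, 20, 40):
    low = ee_equivalent_ug(dose_mg, 0.20)
    high = ee_equivalent_ug(dose_mg, 0.33)
    print(f"{dose_mg} mg NETA -> about {low:.0f} to {high:.0f} ug EE")
# 10 mg -> about 20 to 33 ug EE, in line with the 20 to 30 ug equivalence above;
# at 40 mg the naive linear estimate exceeds the peak-level-based equivalence.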
Antigonadotropic effects
NETA has antigonadotropic effects via its progestogenic activity and can dose-dependently suppress gonadotropin and sex hormone levels in women and men. The ovulation-inhibiting dose of NETA is about 0.5 mg/day in women. In healthy young men, NETA alone at a dose of 5 to 10 mg/day orally for 2 weeks suppressed testosterone levels from ~527 ng/dL to ~231 ng/dL (–56%).
Chemistry
NETA, also known as norethinyltestosterone acetate, as well as 17α-ethynyl-19-nortestosterone 17β-acetate or 17α-ethynylestra-4-en-17β-ol-3-one 17β-acetate, is a progestin, or synthetic progestogen, of the 19-nortestosterone group, and a synthetic estrane steroid. It is the C17β acetate ester of norethisterone. NETA is a derivative of testosterone with an ethynyl group at the C17α position, the methyl group at the C19 position removed, and an acetate ester attached at the C17β position. In addition to testosterone, it is a combined derivative of nandrolone (19-nortestosterone) and ethisterone (17α-ethynyltestosterone).
Synthesis
Chemical syntheses of NETA have been published.
History
Schering AG filed for a patent for NETA in June 1957, and the patent was issued in December 1960. The drug was first marketed, by Parke-Davis as Norlestrin in the United States, in March 1964. This was a combination formulation of 2.5 mg NETA and 50 μg ethinylestradiol and was indicated as an oral contraceptive. Other early brand names of NETA used in oral contraceptives included Minovlar and Anovlar.
Society and culture
Generic names
Norethisterone acetate is the INN, BAN, and JAN of NETA while norethindrone acetate is its USAN and USP.
Brand names
NETA is marketed under a variety of brand names throughout the world including Primolut-Nor (major), Aygestin, Gestakadin, Milligynon, Monogest, Norlutate, Primolut N, SH-420, Sovel, and Styptin among others.
Availability
United States
NETA is marketed in high-dose 5 mg oral tablets in the United States under the brand names Aygestin and Norlutate for the treatment of gynecological disorders. In addition, it is available under a large number of brand names at much lower dosages (0.1 to 1 mg) in combination with estrogens such as ethinylestradiol and estradiol as a combined oral contraceptive and for use in menopausal hormone therapy for the treatment of menopausal symptoms.
Research
NETA has been studied for use as a potential male hormonal contraceptive in combination with testosterone in men.
See also
Ethinylestradiol/norethisterone acetate
Norethisterone enanthate
References
Acetate esters
Anabolic–androgenic steroids
Antigonadotropins
Estranes
Hormonal contraception
Enones
Progestogen esters
Synthetic estrogens
Systemic hormonal preparations
Pharmacology | Norethisterone acetate | [
"Chemistry"
] | 1,882 | [
"Pharmacology",
"Medicinal chemistry"
] |
2,444,267 | https://en.wikipedia.org/wiki/Norgestrel | Norgestrel, sold under the brand name Opill among others, is a progestin which is used in birth control pills. It is often combined with the estrogen ethinylestradiol, marketed as Ovral. It is also used in menopausal hormone therapy. It is taken by mouth.
Side effects of norgestrel include menstrual irregularities, headaches, nausea, and breast tenderness. The most common side effects of norgestrel include irregular bleeding, headaches, dizziness, nausea, increased appetite, and abdominal pain, cramps, or bloating. Norgestrel is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. It has weak androgenic activity and no other important hormonal activity.
Norgestrel was patented in 1961 and came into medical use, specifically in birth control pills, in 1966. It was subsequently introduced for use in menopausal hormone therapy as well. Norgestrel is sometimes referred to as a "second-generation" progestin. It is marketed widely throughout the world. Norgestrel is available as a generic medication. In 2022, the version with ethinylestradiol was the 264th most commonly prescribed medication in the United States, with more than 1 million prescriptions. In July 2023, the US Food and Drug Administration (FDA) approved norgestrel for over-the-counter sale.
Medical uses
Norgestrel is used in combination with ethinylestradiol or quinestrol in combined birth control pills, alone in progestogen-only birth control pills, and in combination with estradiol or conjugated estrogens in menopausal hormone therapy. It has also been used as an emergency contraceptive in the Yuzpe regimen.
Side effects
Pharmacology
Pharmacodynamics
Norgestrel is a progestogen, or an agonist of the progesterone receptor. The biological activity of norgestrel lies in the levo enantiomer, levonorgestrel, whereas the dextro isomer is inactive. As such, norgestrel is identical in its hormonal activity to levonorgestrel except that it is half as potent by weight. Levonorgestrel, and by extension norgestrel, have some androgenic activity, but no estrogenic, antimineralocorticoid, or glucocorticoid activity.
The ovulation-inhibiting dose of norgestrel appears to be greater than 75 μg/day, as ovulation occurred in 50 to 75% of cycles with this dosage of norgestrel in studies. The ovulation-inhibiting dosage of levonorgestrel, which is twice as potent as norgestrel, is approximately 50 to 60 μg/day. One review lists the ovulation-inhibiting dose of norgestrel as 100 μg/day. The endometrial transformation dose of norgestrel is listed as 12 mg per cycle and the menstrual delay test dose of norgestrel is listed as 0.5 to 2 mg/day.
Pharmacokinetics
The pharmacokinetics of norgestrel have been reviewed.
Chemistry
Norgestrel, also known as rac-13-ethyl-17α-ethynyl-19-nortestosterone or as rac-13-ethyl-17α-ethynylestr-4-en-17β-ol-3-one, is a synthetic estrane steroid and a derivative of testosterone. It is a racemic mixture of stereoisomers dextronorgestrel (the C13α isomer; l-norgestrel, L-norgestrel, or (+)-norgestrel) and levonorgestrel (the C13β isomer; d-norgestrel, D-norgestrel, or (–)-norgestrel), the former of which is inactive (making norgestrel exactly half as potent as levonorgestrel). Norgestrel is more specifically a derivative of norethisterone (17α-ethynyl-19-nortestosterone) and is a member of the gonane (18-methylestrane) subgroup of the 19-nortestosterone family of progestins.
Synthesis
Chemical syntheses of norgestrel have been published.
History
Norgestrel was first introduced, as a birth control pill in combination with ethinylestradiol, under the brand name Eugynon in Germany in 1966. It was subsequently marketed as a combined birth control pill with ethinylestradiol in the United States under the brand name Ovral in 1968, and was marketed in many other countries as well.
The contraceptive efficacy of norgestrel was established in the U.S. with the original approval for prescription use in 1973.
In July 2023, the FDA approved norgestrel for over-the-counter sale. The FDA granted the approval to Laboratoire HRA Pharma which was acquired by Perrigo Company plc.
Society and culture
Generic names
Norgestrel is the generic name of the drug and its international nonproprietary name, United States Adopted Name, United States Pharmacopeia, British Approved Name, Dénomination Commune Française, Denominazione Comune Italiana, and Japanese Accepted Name. It is also known as dl-norgestrel, DL-norgestrel, or (±)-norgestrel.
Brand names
Norgestrel is marketed under a variety of brand names including Cyclacur, Cryselle, Cyclo-Progynova, Duoluton, Elinest, Eugynon, Microgynon, Lo/Ovral, Low-Ogestrel, Logynon, Microlut, Minicon, Nordette, Neogest, Opill, Ogestrel, Ovral, Ovran, Ovranette, Ovrette, Planovar, Prempak, Progyluton, and Trinordiol among others.
References
Tertiary alcohols
Ethynyl compounds
Anabolic–androgenic steroids
Estranes
Hormonal contraception
Ketones
Progestogens | Norgestrel | [
"Chemistry"
] | 1,372 | [
"Ketones",
"Functional groups"
] |
18,165,923 | https://en.wikipedia.org/wiki/MIKE%2011 | MIKE 11 is a computer program that simulates flow and water level, water quality and sediment transport in rivers, flood plains, irrigation canals, reservoirs and other inland water bodies. MIKE 11 is a 1-dimensional river model. It was developed by DHI.
MIKE 11 has long been known as a software tool with advanced interface facilities.
From the beginning, MIKE 11 was operated through an efficient interactive menu system with systematic layouts and sequencing of menus. The latest ‘Classic’ version of MIKE 11, version 3.20, was developed within that framework.
The new generation of MIKE 11 combines the features and experience from the MIKE 11 ‘Classic’ period with a powerful Windows-based user interface, including graphical editing facilities and the improved computational speed gained by full utilization of 32-bit technology.
Modules
The computational core of MIKE 11 is a hydrodynamic simulation engine, and this is complemented by a wide range of additional modules and extensions covering almost all conceivable aspects of river modeling.
HD module: provides a fully dynamic solution of the complete nonlinear 1-D Saint Venant equations, a diffusive wave approximation, a kinematic wave approximation, and the Muskingum and Muskingum-Cunge methods for simplified channel routing (a Muskingum routing sketch follows this module list). It automatically adapts to subcritical and supercritical flow, and can simulate standard hydraulic structures such as weirs, culverts, bridges, pumps, energy losses and sluice gates.
GIS Extension: an extension of ArcMap from ESRI providing features for catchment/river delineation, cross-section and Digital Elevation Model (DEM) data, pollution load estimates, flood visualisation/animation as 2D maps and results presentation/analysis using Temporal Analyst.
RR module: a rainfall runoff module, including the unit hydrograph method (UHM), a lumped conceptual continuous hydrological model and a monthly soil moisture accounting model. It includes an auto-calibration tool that estimates model parameters from statistical comparison of simulated and observed water levels/discharges.
SO module: a structure operation module. It simulates operational structures such as sluice gates, weirs, culverts, pumps, bridges with operating strategies.
DB module: a dam break module. It provides complete facilities for definition of dam geometry, breach development in time and space as well as failure mode.
AUTOCAL module: an automatic calibration tool. It allows automation of the calibration process for a wide range of parameters, including rainfall runoff parameters, Manning's number, head loss coefficients, water quality parameters etc.
AD module: an advection dispersion module. It simulates transport and spreading of conservative pollutants and constituents as well as heat with linear decay.
ST/GST module: a noncohesive sediment module. It simulates transport, erosion and deposition of non-cohesive and graded noncohesive sediments, including simulations of river morphology.
ACS module: a cohesive sediment module. It has 3-layer bed description, including quasi-2D erosion.
MIKE ECO Lab module: provides ecological modeling. It can simulate BOD/DO, ammonia, nitrate, eutrophication, heavy metals and wetlands. It includes standard templates that are well documented and have been used extensively in numerous applications worldwide. Based on predefined process templates, users can develop their own templates.
MIKE 11 Stratified module: models vertical density differences such as salinity or temperature in two-layer or multi-layered stratified water bodies.
MIKE 11 Real Time module: a simulation package and GIS front-end for setting up operational flood forecasting systems. It includes real-time updating and Kalman filtering.
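As referenced in the HD module entry above, the Muskingum scheme is one of the simplified routing options. A minimal sketch of classic Muskingum channel routing follows (an illustration, not DHI code; the hydrograph and the K, X, dt values are made up):

def muskingum_route(inflow, K=12.0, X=0.2, dt=6.0):
    """Route an inflow hydrograph through one reach with the classic
    Muskingum scheme: storage S = K*(X*I + (1-X)*O), discretized as
    O2 = C0*I2 + C1*I1 + C2*O1.  K and dt share the same time unit."""
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom
    outflow = [inflow[0]]  # assume an initial steady state, O = I
    for i_prev, i_now in zip(inflow, inflow[1:]):
        outflow.append(c0 * i_now + c1 * i_prev + c2 * outflow[-1])
    return outflow

hydrograph = [10, 20, 50, 80, 60, 40, 25, 15, 10]   # m^3/s, every 6 hours
print([round(q, 1) for q in muskingum_route(hydrograph)])

Note the usual stability requirement 2KX <= dt <= 2K(1 - X), which the example values satisfy.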
Applications
MIKE 11 has been used in hundreds of applications around the world. Its main application areas are flood analysis and alleviation design, real-time flood forecasting, dam break analysis, optimisation of reservoir and canal gate/structure operations, ecological and water quality assessments in rivers and wetlands, sediment transport and river morphology studies, and salinity intrusion in rivers and estuaries.
External links
DHI
Hydrology models
Hydraulic engineering
Environmental engineering
Physical geography | MIKE 11 | [
"Physics",
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 855 | [
"Hydrology",
"Biological models",
"Environmental modelling",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydrology models",
"Environmental engineering",
"Hydraulic engineering"
] |
18,166,009 | https://en.wikipedia.org/wiki/Lower%20critical%20solution%20temperature | The lower critical solution temperature (LCST) or lower consolute temperature is the critical temperature below which the components of a mixture are miscible in all proportions. The word lower indicates that the LCST is a lower bound to a temperature interval of partial miscibility, or miscibility for certain compositions only.
The phase behavior of polymer solutions is an important property involved in the development and design of most polymer-related processes. Partially miscible polymer solutions often exhibit two solubility boundaries, the upper critical solution temperature (UCST) and the LCST, both of which depend on the molar mass and the pressure. At temperatures below LCST, the system is completely miscible in all proportions, whereas above LCST partial liquid miscibility occurs.
In the phase diagram of the mixture components, the LCST is the shared minimum of the concave up spinodal and binodal (or coexistence) curves. It is in general pressure dependent, increasing as a function of increased pressure.
For small molecules, the existence of an LCST is much less common than the existence of an upper critical solution temperature (UCST), but some cases do exist. For example, the system triethylamine-water has an LCST of 19 °C, so that these two substances are miscible in all proportions below 19 °C but not at higher temperatures. The nicotine-water system has an LCST of 61 °C, and also a UCST of 210 °C at pressures high enough for liquid water to exist at that temperature. The components are therefore miscible in all proportions below 61 °C and above 210 °C (at high pressure), and partially miscible in the interval from 61 to 210 °C.
Polymer-solvent mixtures
Some polymer solutions have an LCST at temperatures higher than the UCST. This means that there is a temperature interval of complete miscibility, with partial miscibility at both higher and lower temperatures.
In the case of polymer solutions, the LCST also depends on polymer degree of polymerization, polydispersity and branching as well as on the polymer's composition and architecture. One of the most studied polymers whose aqueous solutions exhibit LCST is poly(N-isopropylacrylamide). Although it is widely believed that this phase transition occurs at 32 °C, the actual temperatures may differ by 5 to 10 °C (or even more) depending on the polymer concentration, molar mass of polymer chains, polymer dispersity as well as terminal moieties. Furthermore, other molecules in the polymer solution, such as salts or proteins, can alter the cloud point temperature. Another monomer whose homo- and co-polymers exhibit LCST behavior in solution is 2-(dimethylamino)ethyl methacrylate.
The LCST depends on the polymer preparation and in the case of copolymers, the monomer ratios, as well as the hydrophobic or hydrophilic nature of the polymer.
To date, over 70 examples of non-ionic polymers with an LCST in aqueous solution have been found.
Physical basis
A key physical factor which distinguishes the LCST from other mixture behavior is that the LCST phase separation is driven by unfavorable entropy of mixing. Since mixing of the two phases is spontaneous below the LCST and not above, the Gibbs free energy change (ΔG) for the mixing of these two phases is negative below the LCST and positive above, and the entropy change ΔS = –(dΔG/dT) is negative for this mixing process. This is in contrast to the more common and intuitive case in which entropy drives mixing due to the increased volume accessible to each component upon mixing.
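The sign argument can be written out explicitly (standard free-energy bookkeeping, assuming ΔH_mix and ΔS_mix are roughly temperature-independent):

$$\Delta G_{\mathrm{mix}} = \Delta H_{\mathrm{mix}} - T\,\Delta S_{\mathrm{mix}}, \qquad \Delta H_{\mathrm{mix}} < 0,\ \Delta S_{\mathrm{mix}} < 0 \;\Rightarrow\; \Delta G_{\mathrm{mix}} < 0 \iff T < \frac{\Delta H_{\mathrm{mix}}}{\Delta S_{\mathrm{mix}}} \equiv T_{\mathrm{LCST}}.$$

With both ΔH_mix and ΔS_mix negative their ratio is positive, so mixing is spontaneous only below this temperature.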
In general, the unfavorable entropy of mixing responsible for the LCST has one of two physical origins. The first is associating interactions between the two components such as strong polar interactions or hydrogen bonds, which prevent random mixing. For example, in the triethylamine-water system, the amine molecules cannot form hydrogen bonds with each other but only with water molecules, so in solution they remain associated to water molecules with loss of entropy. The mixing which occurs below 19 °C is not due to entropy but due to the enthalpy of formation of the hydrogen bonds. Sufficiently strong, geometrically-informed, associative interactions between solute and solvent(s) have been shown to be sufficient to lead to an LCST.
The second physical factor which can lead to an LCST is compressibility effects, especially in polymer-solvent systems. For nonpolar systems such as polystyrene in cyclohexane, phase separation has been observed in sealed tubes (at high pressure) at temperatures approaching the liquid-vapor critical point of the solvent. At such temperatures the solvent expands much more rapidly than the polymer, whose segments are covalently linked. Mixing therefore requires contraction of the solvent for compatibility of the polymer, resulting in a loss of entropy.
Theory
Within statistical mechanics, the LCST may be modeled theoretically via the lattice fluid model, an extension of Flory–Huggins solution theory, that incorporates vacancies, and thus accounts for variable density and compressibility effects.
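For reference, the Flory–Huggins free energy of mixing per lattice site that these models extend has the standard textbook form (φ the polymer volume fraction, N the degree of polymerization, χ the interaction parameter):

$$\frac{\Delta G_{\mathrm{mix}}}{k_B T} = \frac{\phi}{N}\ln\phi + (1-\phi)\ln(1-\phi) + \chi\,\phi(1-\phi).$$

The lattice fluid model generalizes this expression by treating vacancies as an additional component, which is what introduces the density and compressibility dependence.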
Newer extensions of the Flory-Huggins solution theory have shown that the inclusion of only geometrically-informed, associative interactions between solute and solvent are sufficient to observe the LCST.
Prediction of LCST (θ)
There are three groups of methods for correlating and predicting LCSTs. The first group proposes models that are based on a solid theoretical background using liquid–liquid or vapor–liquid experimental data. These methods require experimental data to adjust the unknown parameters, resulting in limited predictive ability. Another approach uses empirical equations that correlate θ (LCST) with physicochemical properties such as density, critical properties etc., but suffers from the disadvantage that these properties are not always available. A new approach proposed by Liu and Zhong develops linear models for the prediction of θ (LCST) using molecular connectivity indices, which depend only on the solvent and polymer structures. The latter approach has proven to be a very useful technique in quantitative structure–activity/property relationships (QSAR/QSPR) research for polymers and polymer solutions. QSAR/QSPR studies constitute an attempt to reduce the trial-and-error element in the design of compounds with desired activity/properties by establishing mathematical relationships between the activity/property of interest and measurable or computable parameters, such as topological, physicochemical, stereochemical, or electronic indices. More recently, QSPR models for the prediction of θ (LCST) using molecular (electronic, physicochemical etc.) descriptors have been published. Using validated robust QSPR models, experimental time and effort can be reduced significantly as reliable estimates of θ (LCST) for polymer solutions can be obtained before they are actually synthesized in the laboratory.
See also
Upper critical solution temperature
Coil-globule transition
References
Critical phenomena
Temperature
Chemical mixtures | Lower critical solution temperature | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,456 | [
"Scalar physical quantities",
"Temperature",
"Physical phenomena",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Critical phenomena",
"Chemical mixtures",
"Thermodynamics",
"Condensed matter physics",
"nan",
"Statistical mechanics",
"Wiki... |
18,176,029 | https://en.wikipedia.org/wiki/Sum%20rule%20in%20quantum%20mechanics | In quantum mechanics, a sum rule is a formula for transitions between energy levels, in which the sum of the transition strengths is expressed in a simple form. Sum rules are used to describe the properties of many physical systems, including solids, atoms, atomic nuclei, and nuclear constituents such as protons and neutrons.
The sum rules are derived from general principles, and are useful in situations where the behavior of individual energy levels is too complex to be described by a precise quantum-mechanical theory. In general, sum rules are derived by using Heisenberg's quantum-mechanical algebra to construct operator equalities, which are then applied to the particles or energy levels of a system.
Derivation of sum rules
Assume that the Hamiltonian $\hat{H}$ has a complete set of eigenfunctions $|n\rangle$ with eigenvalues $E_n$:
$$\hat{H}|n\rangle = E_n|n\rangle.$$
For the Hermitian operator $\hat{A}$ we define the repeated commutator $\hat{C}^{(k)}$ iteratively by:
$$\hat{C}^{(0)} \equiv \hat{A}, \qquad \hat{C}^{(k)} \equiv \left[\hat{H}, \hat{C}^{(k-1)}\right], \quad k = 1, 2, \ldots$$
The operator $\hat{C}^{(0)}$ is Hermitian since $\hat{A}$ is defined to be Hermitian. The operator $\hat{C}^{(1)}$ is anti-Hermitian:
$$\left(\hat{C}^{(1)}\right)^\dagger = \left(\hat{H}\hat{A} - \hat{A}\hat{H}\right)^\dagger = \hat{A}\hat{H} - \hat{H}\hat{A} = -\hat{C}^{(1)}.$$
By induction one finds:
$$\left(\hat{C}^{(k)}\right)^\dagger = (-1)^k\, \hat{C}^{(k)}$$
and also
$$\langle m|\hat{C}^{(k)}|n\rangle = (E_m - E_n)^k\, \langle m|\hat{A}|n\rangle.$$
For a Hermitian operator we have
$$|\langle m|\hat{A}|n\rangle|^2 = \langle m|\hat{A}|n\rangle \langle n|\hat{A}|m\rangle.$$
Using this relation we derive:
$$\langle m|\left[\hat{A}, \hat{C}^{(k)}\right]|m\rangle = \sum_n \left(1 - (-1)^k\right) (E_n - E_m)^k\, |\langle n|\hat{A}|m\rangle|^2.$$
The result can be written as
$$\langle m|\left[\hat{A}, \hat{C}^{(k)}\right]|m\rangle = \begin{cases} 0, & k \text{ even}, \\ 2 \sum_n (E_n - E_m)^k\, |\langle n|\hat{A}|m\rangle|^2, & k \text{ odd}. \end{cases}$$
For $k = 1$ this gives:
$$\langle m|\left[\hat{A}, \left[\hat{H}, \hat{A}\right]\right]|m\rangle = 2 \sum_n (E_n - E_m)\, |\langle n|\hat{A}|m\rangle|^2.$$
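The $k = 1$ identity can be checked numerically. The following sketch (an illustration, not from the source) builds truncated matrices for a one-dimensional harmonic oscillator with ħ = m = ω = 1 and $\hat{A} = \hat{x}$, for which both sides of the identity equal 1 for every state well below the truncation edge:

import numpy as np

N = 60                       # basis size; truncation spoils only the top states
n = np.arange(N)

x = np.zeros((N, N))         # position operator in the number basis
x[n[:-1], n[:-1] + 1] = np.sqrt((n[:-1] + 1) / 2.0)   # <n|x|n+1>
x += x.T                     # symmetrize to get <n+1|x|n>

H = np.diag(n + 0.5)         # harmonic-oscillator Hamiltonian, E_n = n + 1/2
E = np.diag(H)

C1 = H @ x - x @ H           # repeated commutator C^(1) = [H, x]
lhs = np.diag(x @ C1 - C1 @ x)                       # <m|[x, [H, x]]|m>
rhs = 2.0 * np.array([np.sum((E - E[m]) * x[:, m] ** 2) for m in range(N)])

print(np.round(lhs[:5], 10))  # [1. 1. 1. 1. 1.]
print(np.round(rhs[:5], 10))  # [1. 1. 1. 1. 1.]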
See also
Oscillator strength
Sum rules (quantum field theory)
QCD sum rules
References
Quantum mechanics | Sum rule in quantum mechanics | [
"Physics"
] | 252 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
18,176,426 | https://en.wikipedia.org/wiki/Hydraulic%20structure | A hydraulic structure is a structure submerged or partially submerged in any body of water, which disrupts the natural flow of water. They can be used to divert, disrupt or completely stop the flow. An example of a hydraulic structure would be a dam, which slows the normal flow rate of the river in order to power turbines. A hydraulic structure can be built in rivers, a sea, or any body of water where there is a need for a change in the natural flow of water.
Hydraulic structures may also be used to measure the flow of water. When used to measure the flow of water, hydraulic structures are defined as a class of specially shaped, static devices over or through which water is directed in such a way that under free-flow conditions at a specified location (point of measurement) a known level to flow relationship exists. Hydraulic structures of this type can generally be divided into two categories: flumes and weirs.
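As an illustration of the level-to-flow relationship for one such structure, the sketch below implements the textbook head-discharge relation for a rectangular sharp-crested weir (the discharge coefficient of 0.62 is a typical assumed value, not a calibrated one):

import math

def rectangular_weir_flow(head_m: float, crest_width_m: float,
                          discharge_coeff: float = 0.62) -> float:
    """Free-flow discharge (m^3/s) over a rectangular sharp-crested weir,
    using Q = (2/3) * Cd * sqrt(2g) * b * H**1.5."""
    g = 9.81  # gravitational acceleration, m/s^2
    return (2.0 / 3.0) * discharge_coeff * math.sqrt(2.0 * g) \
        * crest_width_m * head_m ** 1.5

# Example: 0.30 m of head over a 2.0 m wide crest
print(f"{rectangular_weir_flow(0.30, 2.0):.2f} m^3/s")  # about 0.60 m^3/s

Flumes are treated analogously, with their own empirically calibrated head-discharge relations.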
See also
Hard engineering
References
Water
Hydraulic engineering | Hydraulic structure | [
"Physics",
"Engineering",
"Environmental_science"
] | 196 | [
"Hydrology",
"Physical systems",
"Hydrology stubs",
"Hydraulics",
"Civil engineering",
"Water",
"Hydraulic engineering"
] |
18,176,668 | https://en.wikipedia.org/wiki/Coronal%20seismology | Coronal seismology is a technique of studying the plasma of the Sun's corona with the use of magnetohydrodynamic (MHD) waves and oscillations. Magnetohydrodynamics studies the dynamics of electrically conducting fluids - in this case the fluid is the coronal plasma. Observed properties of the waves (e.g. period, wavelength, amplitude, temporal and spatial signatures (what is the shape of the wave perturbation?), and characteristic scenarios of the wave evolution (is the wave damped?)), combined with a theoretical modelling of the wave phenomena (dispersion relations, evolutionary equations, etc.), may reflect physical parameters of the corona which are not accessible in situ, such as the coronal magnetic field strength and Alfvén velocity
and coronal dissipative coefficients. Originally, the method of MHD coronal seismology was suggested by Y. Uchida in 1970 for propagating waves, and B. Roberts et al. in 1984 for standing waves, but was not practically applied until the late 90s due to a lack of necessary observational resolution.
Philosophically, coronal seismology is similar to the Earth's seismology, helioseismology, and MHD spectroscopy of laboratory plasma devices. In all these approaches, waves of various kind are used to probe a medium.
The theoretical foundation of coronal seismology is the dispersion relation of MHD modes of a plasma cylinder: a plasma structure which is nonuniform in the transverse direction and extended along the magnetic field. This model works well for the description of a number of plasma structures observed in the solar corona: e.g. coronal loops, prominence fibrils, plumes, various filaments. Such a structure acts as a waveguide of MHD waves.
This discussion is adapted from Nakariakov & Verwichte (2009).
Modes
There are several distinct kinds of MHD modes which have quite different dispersive, polarisation, and propagation properties.
Kink modes
Kink (or transverse) modes, which are oblique fast magnetoacoustic (also known as magnetosonic waves) guided by the plasma structure; the mode causes the displacement of the axis of the plasma structure. These modes are weakly compressible, but could nevertheless be observed with imaging instruments as periodic standing or propagating displacements of coronal structures, e.g. coronal loops. In the long-wavelength limit, the frequency of transverse or "kink" modes is given by the following expression:
$$\omega_{\text{kink}} = C_k k, \qquad C_k = \left(\frac{\rho_0 v_{A0}^2 + \rho_e v_{Ae}^2}{\rho_0 + \rho_e}\right)^{1/2} = v_{A0}\left(\frac{2}{1 + \rho_e/\rho_0}\right)^{1/2},$$
where $k$ is the longitudinal wavenumber, $\rho_0$ and $\rho_e$ are the internal and external plasma densities, $v_{A0}$ and $v_{Ae}$ are the corresponding Alfvén speeds, and the second equality holds in a low-β plasma with the same magnetic field inside and outside the structure.
For kink modes the parameter $m$, the azimuthal wave number in a cylindrical model of a loop, is equal to 1, meaning that the cylinder is swaying with fixed ends.
Sausage modes
Sausage modes, which are also oblique fast magnetoacoustic waves guided by the plasma structure; the mode causes expansions and contractions of the plasma structure, but does not displace its axis. These modes are compressible and cause significant variation of the absolute value of the magnetic field in the oscillating structure. Trapped sausage modes exist only for wavenumbers above a long-wavelength cutoff, near which the frequency of sausage modes is given by the following expression:
$$\omega_{\text{sausage}} \approx k\, v_{Ae}, \qquad k \gtrsim k_c = \frac{j_0}{a}\left(\frac{\rho_0}{\rho_e} - 1\right)^{-1/2},$$
where $a$ is the radius of the structure and $j_0 \approx 2.40$ is the first zero of the Bessel function $J_0$.
For sausage modes the parameter $m$ is equal to 0; this would be interpreted as a "breathing" in and out, again with fixed endpoints.
Longitudinal modes
Longitudinal (or slow, or acoustic) modes, which are slow magnetoacoustic waves propagating mainly along the magnetic field in the plasma structure; these modes are essentially compressible. The magnetic field perturbation in these modes is negligible. In the long-wavelength limit, the frequency of slow modes is given by the following expression:
$$\omega = k\, c_T, \qquad c_T = \frac{c_s v_A}{\sqrt{c_s^2 + v_A^2}},$$
where we define $c_s$ as the sound speed and $v_A$ as the Alfvén velocity; $c_T$ is the so-called tube speed.
Torsional modes
Torsional (Alfvén or twist) modes are incompressible transverse perturbations of the magnetic field along certain individual magnetic surfaces. In contrast with kink modes, torsional modes cannot be observed with imaging instruments, as they do not cause the displacement of either the structure axis or its boundary.
Observations
Wave and oscillatory phenomena are observed in the hot plasma of the corona mainly in EUV, optical and microwave bands with a number of spaceborne and ground-based instruments, e.g. the Solar and Heliospheric Observatory (SOHO), the Transition Region and Coronal Explorer (TRACE), the Nobeyama Radioheliograph (NoRH, see the Nobeyama radio observatory). Phenomenologically, researchers distinguish between compressible waves in polar plumes and in legs of large coronal loops, flare-generated transverse oscillations of loops, acoustic oscillations of loops, propagating kink waves in loops and in structures above arcades (an arcade being a close collection of loops in a cylindrical structure), sausage oscillations of flaring loops, and oscillations of prominences and fibrils (see solar prominence), and this list is continuously updated.
Coronal seismology is one of the aims of the Atmospheric Imaging Assembly (AIA) instrument on the Solar Dynamics Observatory (SDO) mission.
A mission to send a spacecraft as close as 9 solar radii from the sun, Parker Solar Probe, was launched in 2018 with aims to provide in-situ measurements of the solar magnetic field, solar wind and corona. It includes a magnetometer and plasma wave sensor, allowing unprecedented observations for coronal seismology.
Conclusions
The potential of coronal seismology in the estimation of the coronal magnetic field, density scale height, "fine structure" (by which is meant the variation in structure of an inhomogeneous structure such as an inhomogeneous coronal loop) and heating has been demonstrated by different research groups. Work relating to the coronal magnetic field was mentioned earlier.
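A minimal sketch of such a field estimate from a kink oscillation follows (illustrative; the loop parameters below are typical assumed values, not values from the source). The fundamental kink mode of a loop of length L has period P ≈ 2L/C_k, which yields C_k, then the internal Alfvén speed, then the field strength:

import math

MU0 = 4e-7 * math.pi    # vacuum permeability (SI)
M_P = 1.67e-27          # proton mass, kg

def field_from_kink(period_s, loop_length_m, n0_m3, density_contrast=0.1):
    """Magnetic field (tesla) from a fundamental kink oscillation, assuming
    a fully ionized hydrogen plasma and the thin-tube kink speed."""
    c_k = 2.0 * loop_length_m / period_s                     # kink speed
    v_a0 = c_k * math.sqrt((1.0 + density_contrast) / 2.0)   # internal Alfven speed
    rho0 = n0_m3 * M_P                                       # internal mass density
    return v_a0 * math.sqrt(MU0 * rho0)

# Typical EUV loop: P = 300 s, L = 200 Mm, n0 = 1e15 m^-3, rho_e/rho_0 = 0.1
print(f"B = {field_from_kink(300.0, 2.0e8, 1.0e15) * 1e4:.0f} G")  # about 14 G

Field strengths of order 10 G are indeed what such kink-mode estimates give for quiescent active-region loops.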
It has been shown that sufficiently broadband slow magnetoacoustic waves, consistent with currently available observations in the low frequency part of the spectrum, could provide the rate of heat deposition sufficient to heat a coronal loop.
Regarding the density scale height, transverse oscillations of coronal loops that have both variable circular cross-sectional area and plasma density in the longitudinal direction have been studied theoretically. A second order ordinary differential equation has been derived describing the displacement of the loop axis. Together with boundary conditions, solving this equation determines the eigenfrequencies and eigenmodes. The coronal density scale height could then be estimated by using the observed ratio of the fundamental frequency and first overtone of loop kink oscillations. Little is known of coronal fine structure. Doppler shift oscillations in hot active region loops obtained with the Solar Ultraviolet Measurements of Emitted Radiation instrument (SUMER) aboard SOHO have been studied. The spectra were recorded along a 300 arcsec slit placed at a fixed position in the corona above the active regions. Some oscillations showed phase propagation along the slit in one or both directions with apparent speeds in the range of 8–102 km per second, together with distinctly different intensity and line width distributions along the slit. These features can be explained by the excitation of the oscillation at a footpoint of an inhomogeneous coronal loop, e.g. a loop with fine structure.
References
External links
Roberts, B., Nakariakov, V.M., "Coronal seismology – a new science", Frontiers 15, 2003
Verwichte, E., Plasma diagnostics using MHD waves
Stepanov, A.V., Zaitsev, V.V. and Nakariakov, V.M., "Coronal Seismology" Wiley-VCH 2012
Sun
Solar phenomena
Space plasmas
Waves in plasmas | Coronal seismology | [
"Physics"
] | 1,583 | [
"Waves in plasmas",
"Space plasmas",
"Physical phenomena",
"Plasma phenomena",
"Astrophysics",
"Waves",
"Solar phenomena",
"Stellar phenomena"
] |
4,525,764 | https://en.wikipedia.org/wiki/Barium%20fluoride | Barium fluoride is an inorganic compound with the formula BaF2. It is a colorless solid that occurs in nature as the rare mineral frankdicksonite. Under standard conditions it adopts the fluorite structure and at high pressure the PbCl2 (cotunnite) structure. Like CaF2, it is resilient to and insoluble in water.
Above ca. 500 °C, BaF2 is corroded by moisture, but in dry environments it can be used up to 800 °C. Prolonged exposure to moisture degrades transmission in the vacuum UV range. It is less resistant to water than calcium fluoride, but it is the most resistant of all the optical fluorides to high-energy radiation, though its far ultraviolet transmittance is lower than that of the other fluorides. It is quite hard, very sensitive to thermal shock and fractures quite easily.
Optical properties
Barium fluoride is transparent from the ultraviolet to the infrared, from 150 to 200 nm to 11–11.5 μm. It is used in windows for infrared spectroscopy, in particular in the field of fuel oil analysis. Its transmittance at 200 nm is relatively low (0.60), but at 500 nm it goes up to 0.96–0.97 and stays at that level until 9 μm, then it starts falling off (0.85 for 10 μm and 0.42 for 12 μm). The refractive index is about 1.46 from 700 nm to 5 μm.
Barium fluoride is also a common, very fast (one of the fastest) scintillators for the detection of X-rays, gamma rays or other high energy particles. One of its applications is the detection of 511 keV gamma photons in positron emission tomography. It responds also to alpha and beta particles, but, unlike most scintillators, it does not emit ultraviolet light. It can be also used for detection of high-energy (10–150 MeV) neutrons, using pulse shape discrimination techniques to separate them from simultaneously occurring gamma photons.
Barium fluoride is used as a preopacifying agent and in enamel and glazing frits production. Its other use is in the production of welding agents (an additive to some fluxes, a component of coatings for welding rods and in welding powders). It is also used in metallurgy, as a molten bath for refining aluminium.
Gas phase structure
In the vapor phase the BaF2 molecule is non-linear with an F-Ba-F angle of approximately 108°. Its nonlinearity violates VSEPR theory. Ab initio calculations indicate that contributions from d orbitals in the shell below the valence shell are responsible. Another proposal is that polarisation of the electron core of the barium atom creates an approximately tetrahedral distribution of charge that interacts with the Ba-F bonds.
References
Cited sources
External links
MSDS at Oxford University
Barium compounds
Fluorides
Optical materials
Phosphors and scintillators
Crystals
Alkaline earth metal halides
Fluorite crystal structure | Barium fluoride | [
"Physics",
"Chemistry",
"Materials_science"
] | 631 | [
"Luminescence",
"Salts",
"Crystallography",
"Materials",
"Crystals",
"Optical materials",
"Phosphors and scintillators",
"Fluorides",
"Matter"
] |
7,840,300 | https://en.wikipedia.org/wiki/Global%20Ocean%20Ecosystem%20Dynamics | Global Ocean Ecosystem Dynamics (GLOBEC) is the International Geosphere-Biosphere Programme (IGBP) core project responsible for understanding how global change will affect the abundance, diversity and productivity of marine populations. The programme was initiated by SCOR and the IOC of UNESCO in 1991 to study these populations, which comprise a major component of oceanic ecosystems.
The aim of GLOBEC is to advance our understanding of the structure and functioning of the global ocean ecosystem, its major subsystems, and its response to physical forcing so that a capability can be developed to forecast the responses of the marine ecosystem to global change.
Structure
GLOBEC encompasses an integrated suite of research activities consisting of Regional Programmes, National Activities and cross-cutting research focal activities. The GLOBEC programme has been developed by the Scientific Steering Committee (SSC) and is co-ordinated through the GLOBEC International Project Office (IPO).
Regional Programmes:
Ecosystem Structure of Subarctic Seas (ESSAS)
CLimate Impacts on Oceanic TOp Predators (CLIOTOP)
ICES Cod and Climate Change (CCC)
PICES Climate Change and Carrying Capacity (CCCC)
Southern Ocean GLOBEC (SO GLOBEC)
Small Pelagic Fish and Climate Change (SPACC)
National Programmes:
GLOBEC has several active national programmes and scientists from nearly 30 countries participate in GLOBEC activities on a national or regional level.
Focus Working Groups:
There are four GLOBEC cross-cutting research focal activities:
Focus 1. Retrospective analysis
Focus 2. Process studies
Focus 3. Prediction and modelling
Focus 4. Feedback from ecosystem changes
Publications
GLOBEC produces a report series, special contributions series and a biannual newsletter, all of which can be downloaded from the GLOBEC website. GLOBEC science has contributed to over 2000 refereed scientific publications which can be searched from a database on the GLOBEC website.
Recent GLOBEC Reports:
GLOBEC Report No.22: Report of a GLOBEC/SPACC meeting on small pelagic fish spawning habitat dynamics and the daily egg production method (DEPM), 14–16 January 2004, Concepcion, Chile.
GLOBEC Report No.21: Report of a GLOBEC/SPACC workshop on small pelagic fish spawning habitat dynamics and the daily egg production method(DEPM), 12–13 January 2004, Concepcion, Chile.
GLOBEC Report No.20: Background on the climatology, physical oceanography and ecosystems of the sub-Arctic seas. Appendix to the ESSAS Science Plan.
GLOBEC Report No.19: Ecosystem Studies of Sub-Arctic Seas (ESSAS) Science Plan.
GLOBEC Report No.18: CLimate Impacts on Oceanic TOp Predators (CLIOTOP) Science Plan and Implementation Strategy.
See also
Global Ocean Data Analysis Project (GLODAP)
Joint Global Ocean Flux Study (JGOFS)
World Ocean Atlas (WOA)
World Ocean Circulation Experiment (WOCE)
External links
GLOBEC website
Oceanography
Ecological experiments
Biology organizations
Fisheries and aquaculture research institutes | Global Ocean Ecosystem Dynamics | [
"Physics",
"Environmental_science"
] | 614 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
7,840,468 | https://en.wikipedia.org/wiki/Counter-rotating%20propellers | Counter-rotating propellers (CRP) are propellers which turn in opposite directions to each other. They are used on some twin- and multi-engine propeller-driven aircraft.
The propellers on most conventional twin-engined aircraft turn clockwise (as viewed from behind the engine). Counter-rotating propellers generally turn clockwise on the left engine and counterclockwise on the right. The advantage of such designs is that counter-rotating propellers balance the effects of torque and P-factor, meaning that such aircraft do not have a critical engine in the case of engine failure.
Drawbacks of counter-rotating propellers come from the fact that, in order to reverse the rotation of one propeller, either one propeller must have an additional reversing gearbox, or the engines themselves must be adapted to turn in opposite directions. (Meaning that there are essentially two engine designs, one with left-turning and the other with right-turning parts, which complicates manufacture and maintenance.)
History
Counter-rotating propellers have been used since the earliest days of aviation, in order to avoid the aircraft tipping sideways from the torque reaction against propellers turning in a single direction. They were fitted to the very first controlled powered aeroplane, the Wright Flyer, and to other subsequent types such as the Dunne D.1 of 1907 and the more successful Dunne D.5 of 1910.
In designing the Lockheed P-38 Lightning, the decision was made to reverse the counter-rotation such that the tops of the propeller arcs move outwards (counterclockwise on the left and clockwise on the right), away from each other. Tests on the initial XP-38 prototype demonstrated greater accuracy in gunnery with the unusual configuration.
The counter-rotating powerplants of the German World War II Junkers Ju 288 prototype series (as the Bomber B contract winning design), the Gotha Go 244 light transport, Henschel Hs 129 ground attack aircraft, Heinkel He 177A heavy bomber and Messerschmitt Me 323 transport used the same rotational "sense" as the production P-38 did – this has also been done for the modern American Bell Boeing V-22 Osprey tiltrotor VTOL military aircraft design. The following German World War II aviation engines were designed as opposing-rotation pairs for counter-rotation needs:
BMW 801A and B, and G/H subtypes
Daimler-Benz DB 604
Daimler-Benz DB 606
Daimler-Benz DB 610
Junkers Jumo 222
The aerodynamics of a propeller on one side of an aircraft change according to which way it turns, as it affects the P-factor. This can in turn affect performance under extreme conditions and therefore flight safety certification. Some modern types, such as the Airbus A400M, have counter-rotating propellers in order to meet air safety requirements under engine-out conditions.
List of aircraft with counter-rotating propellers
See also
References
Notes
Bibliography
Gunston, Bill. Jane's Aerospace Dictionary. London, England. Jane's Publishing Company Ltd, 1980.
Aircraft engines
Propellers
Aircraft configurations | Counter-rotating propellers | [
"Technology",
"Engineering"
] | 619 | [
"Aerospace engineering",
"Aircraft configurations",
"Engines",
"Aircraft engines"
] |
7,840,768 | https://en.wikipedia.org/wiki/Injective%20metric%20space | In metric geometry, an injective metric space, or equivalently a hyperconvex metric space, is a metric space with certain properties generalizing those of the real line and of L∞ distances in higher-dimensional vector spaces. These properties can be defined in two seemingly different ways: hyperconvexity involves the intersection properties of closed balls in the space, while injectivity involves the isometric embeddings of the space into larger spaces. However, it is a theorem of Aronszajn and Panitchpakdi (1956) that these two different types of definitions are equivalent.
Hyperconvexity
A metric space is said to be hyperconvex if it is convex and its closed balls have the binary Helly property. That is:
Any two points x and y can be connected by the isometric image of a line segment of length equal to the distance between the points (i.e. the space is a path space).
If F is any family of closed balls such that each pair of balls in F meets, then there exists a point common to all the balls in F.
Equivalently, a metric space X is hyperconvex if, for any set of points p_i in X and radii r_i > 0 satisfying d(p_i, p_j) ≤ r_i + r_j for each i and j, there is a point q in X that is within distance r_i of each p_i (that is, d(p_i, q) ≤ r_i for all i).
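Hyperconvexity of the plane with the L∞ (Chebyshev) distance can be made concrete: closed balls are axis-aligned squares, so the condition above reduces to one-dimensional interval intersections coordinate by coordinate. A small sketch (an illustration, not from the source):

import itertools

def chebyshev_common_point(centers, radii):
    """Given closed L-infinity balls with d(p_i, p_j) <= r_i + r_j for all
    pairs, return a point lying in every ball.  Each ball is a product of
    intervals [c_k - r, c_k + r], so it suffices to intersect intervals in
    each coordinate independently (the 1-D Helly property)."""
    for (c1, r1), (c2, r2) in itertools.combinations(zip(centers, radii), 2):
        assert max(abs(a - b) for a, b in zip(c1, c2)) <= r1 + r2
    point = []
    for k in range(len(centers[0])):
        lo = max(c[k] - r for c, r in zip(centers, radii))
        hi = min(c[k] + r for c, r in zip(centers, radii))
        assert lo <= hi   # guaranteed coordinatewise by the pairwise condition
        point.append((lo + hi) / 2.0)
    return point

centers, radii = [(0.0, 0.0), (3.0, 1.0), (1.0, 3.0)], [2.0, 2.0, 2.0]
print(chebyshev_common_point(centers, radii))   # [1.5, 1.5] lies in all three balls

The same coordinatewise argument fails for the Euclidean (L2) distance, which is why the Euclidean plane is not hyperconvex.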
Injectivity
A retraction of a metric space X is a function f mapping X to a subspace of itself, such that
for all x we have that f(f(x)) = f(x); that is, f is the identity function on its image (i.e. it is idempotent), and
for all x and y we have that d(f(x), f(y)) ≤ d(x, y); that is, f is nonexpansive.
A retract of a space X is a subspace of X that is an image of a retraction.
A metric space X is said to be injective if, whenever X is isometric to a subspace Z of a space Y, that subspace Z is a retract of Y.
Examples
Examples of hyperconvex metric spaces include
The real line
Vector spaces R^d with the L∞ distance
Manhattan distance (L1) in the plane (which is equivalent up to rotation and scaling to the L∞), but not in higher dimensions
The tight span of a metric space
Any complete real tree
Aim(X) – see Metric space aimed at its subspace
Due to the equivalence between hyperconvexity and injectivity, these spaces are all also injective.
Properties
In an injective space, the radius of the minimum ball that contains any bounded set S is equal to half the diameter of S. This follows since the balls of radius half the diameter, centered at the points of S, intersect pairwise and therefore by hyperconvexity have a common intersection; a ball of radius half the diameter centered at a point of this common intersection contains all of S. Thus, injective spaces satisfy a particularly strong form of Jung's theorem.
Every injective space is a complete space, and every metric map (or, equivalently, nonexpansive mapping, or short map) on a bounded injective space has a fixed point. A metric space is injective if and only if it is an injective object in the category of metric spaces and metric maps.
Notes
References
Correction (1957), Pacific J. Math. 7: 1729.
Metric spaces | Injective metric space | [
"Mathematics"
] | 656 | [
"Mathematical structures",
"Space (mathematics)",
"Metric spaces"
] |
7,841,648 | https://en.wikipedia.org/wiki/Kurtoxin | Kurtoxin is a toxin found in the venom of the South African scorpion Parabuthus transvaalicus. It affects the gating of voltage-gated sodium channels and calcium channels.
Sources
Many venoms have evolved among animals, and most of them are peptide in nature. Kurtoxin is found in the venom of the South African scorpion Parabuthus transvaalicus.
Chemistry
Kurtoxin is a protein containing 63 amino acid residues with a mass of 7386.1 daltons. Its formula is C324H478N94O90S8. It can be isolated from the venom of Parabuthus transvaalicus by high-performance liquid chromatography (HPLC). Kurtoxin is closely related to α-scorpion toxins, a family of toxins that slow inactivation of voltage-gated sodium channels. The complete primary amino-acid sequence of kurtoxin is: KIDGYPVDYW NCKRICWYNN KYCNDLCKGL KADSGYCWGW TLSCYCQGLP DNARIKRSGR CRA.
Target
In research on Xenopus oocytes it was found that kurtoxin affects low-threshold α1G and α1H calcium channels, but not the high-threshold α1A, α1B, α1C, and α1E Ca channels. Like other α-scorpion toxins kurtoxin was also found to interact with voltage-gated sodium channels.
In rat neurons, kurtoxin shows less selectivity among calcium channels. Here the toxin interacts with high affinity with T-type, L-type, N-type, and P-type channels.
Mode of action
Kurtoxin inhibits ion calcium channels by modifying channel gating. The effect of the toxin is voltage-dependent. In a voltage-clamp experiment it was found that calcium channels are more strongly inhibited by minor depolarization than by a strong depolarization of the cell. The peptide toxin binds close to the channel voltage sensor and thereby produces complex gating modifications specific for each channel type. In rats, kurtoxin inhibited T-type, L-type, and N-type Ca channels and facilitated P-type channels. Deactivation was accelerated in T-type and L-type channels, slowed down in P-type channels and not affected in N-type calcium channels.
Kurtoxin also has an effect on sodium channels. It slows down both activation and inactivation of the channel.
References
Chuang, R.S., Jaffe, H., Cribbs, L., Perez-Reyes, E., Swartz, K.J. (1998). Inhibition of T-type voltage-gated calcium channels by a new scorpion toxin. Nature Neuroscience 1(8), 668–674.
Sidach, S.S., Mintz, I.M. (2002). Kurtoxin, a gating modifier of neuronal high- and low threshold Ca channels. The Journal of Neuroscience, 22(6), 2023–2034.
Neurotoxins
Ion channel toxins | Kurtoxin | [
"Chemistry"
] | 654 | [
"Neurochemistry",
"Neurotoxins"
] |
7,842,233 | https://en.wikipedia.org/wiki/Aminoallyl%20nucleotide | Aminoallyl nucleotide is a nucleotide with a modified base containing an allylamine. They are used in post-labeling of nucleic acids by fluorescence detection in microarray. They are reactive with N-Hydroxysuccinimide ester group which helps attach a fluorescent dye to the primary amino group on the nucleotide. These nucleotides are known as 5-(3-aminoallyl)-nucleotides since the aminoallyl group is usually attached to carbon 5 of the pyrimidine ring of uracil or cytosine. The primary amine group in the aminoallyl moiety is aliphatic and thus more reactive compared to the amine groups that are directly attached to the rings (aromatic) of the bases. Common names of aminoallyl nucleosides are initially abbreviated with aa- or AA- to indicate aminoallyl. The 5-carbon sugar is indicated with or without the lowercase "d" indicating deoxyribose if included or ribose if not. Finally the nitrogenous base and number of phosphates are indicated (i.e. aa-UTP = aminoallyl uridine triphosphate).
History
The goal of combining fluorescence and nucleic acids has been to provide a non-isotopic, detectable tag for studying DNA or RNA. This type of labeling allows scientists to study the structure and function of DNA or RNA, and their interactions with other nucleic acids. The first base modification for fluorescent labeling was made in 1971 with 4-thiouridine and 4-thiouracil. This research, together with later work on direct and indirect labeling via analogs, enzymatic addition, and other methods, made nucleotide labeling much safer for scientists studying DNA.
As instrumentation and technologies in the field of DNA microarrays become more advanced, better reagents and techniques are needed to further scientific studies. Direct fluorescent labeling with Cy3 was shown to be insufficiently sensitive and to skew results, so aminoallyl nucleotide incorporation was adopted instead. Using aminoallyl nucleotides for indirect fluorescent labeling avoids the sensitivity issues seen in direct cyanine labeling.
Synthesis
Aminoallyl nucleosides can be synthesized via Heck coupling.
The starting material is a modified nucleoside bearing an iodine atom, added via electrophilic halogenation, at the fifth carbon of the pyrimidine ring. Reaction with allylamine and various reagents under Heck-coupling conditions removes the halogen from the base and adds the allylamine, yielding the aminoallyl nucleoside. The product is then used in molecular biology for RNA synthesis.
Other routes include one-pot syntheses with other halogens.
Reaction
The primary amine on the aminoallyl nucleotide reacts with amine-reactive dyes, such as cyanine dyes and proprietary dyes that contain a reactive leaving group such as a succinimidyl (NHS) ester. The amine groups directly attached to the ring of the base are not affected. These nucleotides are used for labeling DNA.
Uses
Aminoallyl NTPs are used for indirect DNA labeling in PCR, nick translation, primer extensions and cDNA synthesis. These labeled NTPs are helpful because they can be used in molecular biology labs that lack the capacity to handle radioactive material. For example, 5-(3-aminoallyl)-uridine triphosphates (aa-UTPs) are more effective for high-density labeling of DNA than pre-labeling the DNA. After the enzymatic addition of the NTPs, amine-reactive fluorescent dyes can be added for detection of the DNA molecule. When incorporated into DNA or RNA molecules by DNA/RNA polymerase, 5-(3-aminoallyl)-UTP provides a reactive group for the addition of other chemical groups. Thus aminoallyl-modified DNA or RNA can be labeled with any compound which has an amine-reactive group.
aa-NTPs incorporated into DNA or RNA, in combination with secondary dye-coupling reagents, can serve as probes for array analysis.
cDNA detection relies on aminoallyl labeling. Although direct labeling of dNTPs is the quickest and cheapest method of fluorescent labeling, it is disadvantageous because the sequence allows only one modified nucleotide to be used. Another disadvantage of direct labeling is the bulkiness of the labeled nucleotides; this can be overcome by indirect labeling using aminoallyl-modified nucleotides. An easy way to check for labeling success is the color: good labeling will result in a visible blue (Cy5) or red (Cy3) color in the final material.
Another process that uses aminoallyl labeling is NASBA (nucleic acid sequence-based amplification), a highly sensitive technique for amplifying RNA. In this specific case, the aaUTP-modified RNAs were tagged with the fluorescent marker Cy3. NASBA combined with aminoallyl-UTP labeling is very useful in many areas of microbial diagnostics, including environmental monitoring, biothreat detection, industrial process monitoring, and clinical microbiology. DNA microarrays also specifically utilize AA-NTPs, making microarray testing quicker and cheaper.
Post-synthesis labeling avoids the problems found in direct enzymatic incorporation of Cy-labeled dNTPs by generating probes with equal labeling effectiveness. With indirect labeling, amine-modified NTPs are incorporated during reverse transcription, RNA amplification, or PCR. Aminoallyl-NTPs are incorporated with similar efficiency as unmodified NTPs during polymerization.
Concerns with labeling:
The amine group, in aminoallyl-modified nucleotide, is reactive with dyes such as the cyanine series, or other patented dyes. A problem arises when the dyes react with buffering agents which are necessary for the proper storage of the nucleotides. However, a carbonate buffer can be used to overcome this problem.
See also
NASBA
PCR
Nick Translation
cDNA
Microarray
Fluorophore
Reverse Transcription
References
External links
Example protocol by Holly Bennet and Joe DeRisi originated at Rosetta Informatics modified by Chris Seidel.
Nucleic acids
Nucleotides
Molecular biology
Biotechnology
Synthetic biology | Aminoallyl nucleotide | [
"Chemistry",
"Engineering",
"Biology"
] | 1,345 | [
"Synthetic biology",
"Biomolecules by chemical classification",
"Biological engineering",
"Biotechnology",
"Bioinformatics",
"Molecular genetics",
"nan",
"Molecular biology",
"Biochemistry",
"Nucleic acids"
] |
7,842,592 | https://en.wikipedia.org/wiki/Kugelblitz%20%28astrophysics%29 | A kugelblitz is a theoretical astrophysical object predicted by general relativity. It is a concentration of heat, light, or radiation so intense that its energy forms an event horizon and becomes self-trapped. In other words, if enough radiation is aimed into a region of space, the concentration of energy can warp spacetime so much that it creates a black hole. This would be a black hole whose original mass–energy was in the form of radiant energy rather than matter; however, there is currently no uniformly accepted method of distinguishing black holes by origin.
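The energy scale involved can be made concrete with the Schwarzschild relation. The sketch below (constants rounded; the function name is illustrative) computes the radiant energy that would have to be confined within a given radius for an event horizon to form:

```python
# Minimal sketch: energy required to form a kugelblitz of a given radius.
# Uses the Schwarzschild relation r_s = 2*G*M/c^2 with M = E/c^2,
# i.e. E = r_s * c^4 / (2*G).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def kugelblitz_energy(radius_m: float) -> float:
    """Radiant energy that must be concentrated inside `radius_m`
    for its Schwarzschild radius to reach that size."""
    return radius_m * c**4 / (2 * G)

# Example: a 1-meter event horizon
print(f"{kugelblitz_energy(1.0):.2e} J")  # on the order of 6e43 J
```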
John Archibald Wheeler's 1955 Physical Review paper entitled "Geons" refers to the kugelblitz phenomenon and explores the idea of creating such particles (or toy models of particles) from spacetime curvature.
A study published in Physical Review Letters in 2024 argues that the formation of a kugelblitz is impossible due to dissipative quantum effects like vacuum polarization, which prevent sufficient energy buildup to create an event horizon. The study concludes that such a phenomenon cannot occur in any realistic scenario within our universe.
The kugelblitz phenomenon has been considered a possible basis for interstellar engines (drives) for future black hole starships.
In fiction
A kugelblitz is a major plot point in the third season of the American superhero television series The Umbrella Academy.
A kugelblitz is the home of a major faction in Frederik Pohl's "Gateway" novels.
See also
Bekenstein bound
Micro black hole
References
Black holes
General relativity
Light | Kugelblitz (astrophysics) | [
"Physics",
"Astronomy"
] | 314 | [
"Black holes",
"Physical phenomena",
"Spectrum (physical sciences)",
"Physical quantities",
"Unsolved problems in physics",
"Electromagnetic spectrum",
"Astrophysics",
"General relativity",
"Astronomy stubs",
"Waves",
"Astrophysics stubs",
"Stellar astronomy stubs",
"Light",
"Density",
"... |
7,844,595 | https://en.wikipedia.org/wiki/Center-of-momentum%20frame | In physics, the center-of-momentum frame (COM frame), also known as zero-momentum frame, is the inertial frame in which the total momentum of the system vanishes. It is unique up to velocity, but not origin. The center of momentum of a system is not a location, but a collection of relative momenta/velocities: a reference frame. Thus "center of momentum" is short for "center-of-momentum frame".
A special case of the center-of-momentum frame is the center-of-mass frame: an inertial frame in which the center of mass (which is a single point) remains at the origin. In all center-of-momentum frames, the center of mass is at rest, but it is not necessarily at the origin of the coordinate system. In special relativity, the COM frame is necessarily unique only when the system is isolated.
Properties
General
The center of momentum frame is defined as the inertial frame in which the sum of the linear momenta of all particles is equal to 0. Let S denote the laboratory reference system and S′ denote the center-of-momentum reference frame. Using a Galilean transformation, the particle velocity in S′ is
$v' = v - V_c,$
where
$V_c = \frac{\sum_i m_i v_i}{\sum_i m_i}$
is the velocity of the mass center. The total momentum in the center-of-momentum system then vanishes:
$\sum_i m_i v_i' = \sum_i m_i (v_i - V_c) = 0.$
Also, the total energy of the system is the minimal energy as seen from all inertial reference frames.
Special relativity
In relativity, the COM frame exists for an isolated massive system. This is a consequence of Noether's theorem. In the COM frame the total energy of the system is the rest energy, and this quantity (when divided by the factor $c^2$, where c is the speed of light) gives the invariant mass (rest mass) of the system:
$m_0 = \frac{E_0}{c^2}.$
The invariant mass of the system is given in any inertial frame by the relativistic invariant relation
$m_0^2 = \left(\frac{E}{c^2}\right)^2 - \left(\frac{p}{c}\right)^2,$
but for zero momentum the momentum term $(p/c)^2$ vanishes and thus the total energy coincides with the rest energy.
Systems that have nonzero energy but zero rest mass (such as photons moving in a single direction, or, equivalently, plane electromagnetic waves) do not have COM frames, because there is no frame in which they have zero net momentum. Due to the invariance of the speed of light, a massless system must travel at the speed of light in any frame, and always possesses a net momentum. Its energy is – for each reference frame – equal to the magnitude of momentum multiplied by the speed of light:
$E = pc.$
Two-body problem
An example of the usage of this frame is given below – in a two-body collision, not necessarily elastic (where kinetic energy is conserved). The COM frame can be used to find the momentum of the particles much more easily than in a lab frame: the frame where the measurement or calculation is done. The situation is analyzed using Galilean transformations and conservation of momentum (for generality, rather than kinetic energies alone), for two particles of mass m1 and m2, moving at initial velocities (before collision) u1 and u2 respectively. The transformations are applied to take the velocity of the frame from the velocity of each particle from the lab frame (unprimed quantities) to the COM frame (primed quantities):
$u_1' = u_1 - V, \qquad u_2' = u_2 - V,$
where V is the velocity of the COM frame. Since V is the velocity of the COM, i.e. the time derivative of the COM location R (position of the center of mass of the system):
$V = \frac{dR}{dt} = \frac{m_1 u_1 + m_2 u_2}{m_1 + m_2},$
so at the origin of the COM frame, $R' = 0$, this implies
$m_1 u_1' + m_2 u_2' = 0.$
The same results can be obtained by applying momentum conservation in the lab frame, where the momenta are p1 and p2:
$p_1 + p_2 = m_1 u_1 + m_2 u_2 = (m_1 + m_2) V,$
and in the COM frame, where it is asserted definitively that the total momenta of the particles, p1' and p2', vanish:
$p_1' + p_2' = m_1 u_1' + m_2 u_2' = 0.$
Using the COM frame equation to solve for V returns the lab frame equation above, demonstrating that any frame (including the COM frame) may be used to calculate the momenta of the particles. It has been established that the velocity of the COM frame can be removed from the calculation using the above frame, so the momenta of the particles in the COM frame can be expressed in terms of the quantities in the lab frame (i.e. the given initial values):
$p_1' = -p_2' = \frac{m_1 m_2}{m_1 + m_2}\,(u_1 - u_2).$
Notice that the relative velocity in the lab frame of particle 1 to 2 is
$\Delta v = u_1 - u_2,$
and the 2-body reduced mass is
$\mu = \frac{m_1 m_2}{m_1 + m_2},$
so the momenta of the particles compactly reduce to
$p_1' = -p_2' = \mu\,\Delta v.$
This is a substantially simpler calculation of the momenta of both particles; the reduced mass and relative velocity can be calculated from the initial velocities in the lab frame and the masses, and the momentum of one particle is simply the negative of the other. The calculation can be repeated for final velocities v1 and v2 in place of the initial velocities u1 and u2, since after the collision the velocities still satisfy the above equations:
$v_1' = v_1 - V, \qquad v_2' = v_2 - V,$
so at the origin of the COM frame, R = 0, this implies after the collision
$m_1 v_1' + m_2 v_2' = 0.$
In the lab frame, the conservation of momentum fully reads:
$m_1 u_1 + m_2 u_2 = m_1 v_1 + m_2 v_2 = (m_1 + m_2) V.$
This equation does not imply that
$m_1 u_1 = m_1 v_1 \quad \text{and} \quad m_2 u_2 = m_2 v_2;$
instead, it simply indicates that the total mass M multiplied by the velocity of the centre of mass V is the total momentum P of the system:
$P = p_1 + p_2 = m_1 v_1 + m_2 v_2 = M V.$
Similar analysis to the above obtains
$p_1' = -p_2' = \mu\,\Delta v_{\text{final}},$
where the final relative velocity in the lab frame of particle 1 to 2 is
$\Delta v_{\text{final}} = v_1 - v_2.$
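To make the bookkeeping concrete, here is a minimal Python sketch (masses, velocities, and variable names are illustrative choices) that computes the COM velocity, the reduced mass, and the COM-frame momenta for two particles, verifying that they are equal and opposite:

```python
# Two-body center-of-momentum (COM) frame, one-dimensional for simplicity.
m1, m2 = 2.0, 3.0    # masses (kg), arbitrary example values
u1, u2 = 4.0, -1.0   # lab-frame initial velocities (m/s)

V = (m1 * u1 + m2 * u2) / (m1 + m2)  # velocity of the COM frame
mu = m1 * m2 / (m1 + m2)             # 2-body reduced mass

# Velocities and momenta in the COM frame
u1p, u2p = u1 - V, u2 - V
p1p, p2p = m1 * u1p, m2 * u2p

print(f"V = {V}, mu = {mu}")                 # V = 1.0, mu = 1.2
print(f"p1' = {p1p}, p2' = {p2p}")           # 6.0 and -6.0: equal and opposite
print(f"mu * (u1 - u2) = {mu * (u1 - u2)}")  # 6.0: same magnitude as p1'
```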
See also
Laboratory frame of reference
Breit frame
References
Classical mechanics
Coordinate systems
Frames of reference
Geometric centers
Kinematics
Momentum | Center-of-momentum frame | [
"Physics",
"Mathematics",
"Technology"
] | 1,130 | [
"Symmetry",
"Physical phenomena",
"Kinematics",
"Machines",
"Physical quantities",
"Point (geometry)",
"Coordinate systems",
"Frames of reference",
"Geometric centers",
"Quantity",
"Classical mechanics",
"Physical systems",
"Motion (physics)",
"Mechanics",
"Theory of relativity",
"Mome... |
7,850,102 | https://en.wikipedia.org/wiki/Quantum%20mind | The quantum mind or quantum consciousness is a group of hypotheses proposing that local physical laws and interactions from classical mechanics, or connections between neurons alone, cannot explain consciousness. These hypotheses posit instead that quantum-mechanical phenomena such as entanglement and superposition, which cause nonlocalized quantum effects and act on features of the brain smaller than cells, may play an important part in the brain's function and could explain critical aspects of consciousness. These scientific hypotheses are as yet unvalidated, and they can overlap with quantum mysticism.
History
Eugene Wigner developed the idea that quantum mechanics has something to do with the workings of the mind. He proposed that the wave function collapses due to its interaction with consciousness. Freeman Dyson argued that "mind, as manifested by the capacity to make choices, is to some extent inherent in every electron".
Other contemporary physicists and philosophers considered these arguments unconvincing. Victor Stenger characterized quantum consciousness as a "myth" having "no scientific basis" that "should take its place along with gods, unicorns and dragons".
David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness. He argues that quantum theories of consciousness suffer from the same weakness as more conventional theories. Just as he argues that there is no particular reason why particular macroscopic physical features in the brain should give rise to consciousness, he also thinks that there is no particular reason why a particular quantum feature, such as the EM field in the brain, should give rise to consciousness either.
Approaches
Bohm
David Bohm viewed quantum theory and relativity as contradictory, which implied a more fundamental level in the universe. He claimed that both quantum theory and relativity pointed to this deeper theory, a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.
Bohm's proposed order applies both to matter and consciousness. He suggested that it could explain the relationship between them. He saw mind and matter as projections into our explicate order from the underlying implicate order. Bohm claimed that when we look at matter, we see nothing that helps us to understand consciousness.
Bohm never proposed a specific means by which his proposal could be falsified, nor a neural mechanism through which his "implicate order" could emerge in a way relevant to consciousness. He later collaborated on Karl Pribram's holonomic brain theory as a model of quantum consciousness.
David Bohm also collaborated with Basil Hiley on work that claimed mind and matter both emerge from an "implicate order". Hiley in turn worked with philosopher Paavo Pylkkänen. According to Pylkkänen, Bohm's suggestion "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level".
Penrose and Hameroff
Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as "orchestrated objective reduction" (Orch-OR). Penrose and Hameroff initially developed their ideas separately and later collaborated to produce Orch-OR in the early 1990s. They reviewed and updated their theory in 2013.
Penrose's argument stemmed from Gödel's incompleteness theorems. In his first book on consciousness, The Emperor's New Mind (1989), he argued that while a formal system cannot prove its own consistency, Gödel's unprovable results are provable by human mathematicians. Penrose took this to mean that human mathematicians are not formal proof systems and not running a computable algorithm. According to Bringsjord and Xiao, this line of reasoning is based on fallacious equivocation on the meaning of computation. In the same book, Penrose wrote: "One might speculate, however, that somewhere deep in the brain, cells are to be found of single quantum sensitivity. If this proves to be the case, then quantum mechanics will be significantly involved in brain activity."
Penrose determined that wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, he proposed a new form of wave function collapse that occurs in isolation and called it objective reduction. He suggested each quantum superposition has its own piece of spacetime curvature and that when these become separated by more than one Planck length, they become unstable and collapse. Penrose suggested that objective reduction represents neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derives.
Hameroff provided a hypothesis that microtubules would be suitable hosts for quantum behavior. Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and may contain delocalized π electrons. Tubulins have other smaller non-polar regions that contain π-electron-rich indole rings separated by about 2 nm. Hameroff proposed that these electrons are close enough to become entangled. He originally suggested that the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited. He then proposed a Frohlich condensate, a hypothetical coherent oscillation of dipolar molecules, but this too was experimentally discredited.
In other words, there is a missing link between physics and neuroscience. For instance, the proposed predominance of A-lattice microtubules, more suitable for information processing, was falsified by Kikkawa et al., who showed that all in vivo microtubules have a B lattice and a seam. The proposed existence of gap junctions between neurons and glial cells was also falsified. Orch-OR predicted that microtubule coherence reaches the synapses through dendritic lamellar bodies (DLBs), but De Zeeuw et al. proved this impossible by showing that DLBs are micrometers away from gap junctions.
In 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013 corroborates Orch-OR theory. Experiments that showed that anaesthetic drugs reduce how long microtubules can sustain suspected quantum excitations appear to support the quantum theory of consciousness.
In April 2022, the results of two related experiments at the University of Alberta and Princeton University were announced at The Science of Consciousness conference, providing further evidence to support quantum processes operating within microtubules. In a study Stuart Hameroff was part of, Jack Tuszyński of the University of Alberta demonstrated that anesthetics shorten the duration of a process called delayed luminescence, in which microtubules and tubulins re-emit trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility. In the second experiment, Gregory D. Scholes and Aarat Kalra of Princeton University used lasers to excite molecules within tubulins, causing a prolonged excitation to diffuse through microtubules further than expected, which did not occur when repeated under anesthesia. However, diffusion results have to be interpreted carefully, since even classical diffusion can be very complex due to the wide range of length scales in the fluid-filled extracellular space. Nevertheless, University of Oxford quantum physicist Vlatko Vedral said that this connection with consciousness is a really long shot.
Also in 2022, a group of Italian physicists conducted several experiments that failed to provide evidence in support of a gravity-related quantum collapse model of consciousness, weakening the possibility of a quantum explanation for consciousness.
Although these theories are stated in a scientific framework, it is difficult to separate them from scientists' personal opinions. The opinions are often based on intuition or subjective ideas about the nature of consciousness. For example, Penrose wrote:
[M]y own point of view asserts that you can't even simulate conscious activity. What's going on in conscious thinking is something you couldn't properly imitate at all by computer.... If something behaves as though it's conscious, do you say it is conscious? People argue endlessly about that. Some people would say, "Well, you've got to take the operational viewpoint; we don't know what consciousness is. How do you judge whether a person is conscious or not? Only by the way they act. You apply the same criterion to a computer or a computer-controlled robot." Other people would say, "No, you can't say it feels something merely because it behaves as though it feels something." My view is different from both those views. The robot wouldn't even behave convincingly as though it was conscious unless it really was—which I say it couldn't be, if it's entirely computationally controlled.
Penrose continues:
A lot of what the brain does you could do on a computer. I'm not saying that all the brain's action is completely different from what you do on a computer. I am claiming that the actions of consciousness are something different. I'm not saying that consciousness is beyond physics, either—although I'm saying that it's beyond the physics we know now.... My claim is that there has to be something in physics that we don't yet understand, which is very important, and which is of a noncomputational character. It's not specific to our brains; it's out there, in the physical world. But it usually plays a totally insignificant role. It would have to be in the bridge between quantum and classical levels of behavior—that is, where quantum measurement comes in.
Umezawa, Vitiello, Freeman
Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage. Giuseppe Vitiello and Walter Freeman proposed a dialog model of the mind. This dialog takes place between the classical and the quantum parts of the brain. Their quantum field theory models of brain dynamics are fundamentally different from the Penrose–Hameroff theory.
Quantum brain dynamics
As described by Harald Atmanspacher, "Since quantum theory is the most fundamental theory of matter that is currently available, it is a legitimate question to ask whether quantum theory can help us to understand consciousness."
The original motivation in the early 20th century for relating quantum theory to consciousness was essentially philosophical. It is fairly plausible that conscious free decisions (“free will”) are problematic in a perfectly deterministic world, so quantum randomness might indeed open up novel possibilities for free will. (On the other hand, randomness is problematic for goal-directed volition!)
Ricciardi and Umezawa proposed in 1967 a general theory of quanta of long-range coherent waves within and between brain cells, and showed a possible mechanism of memory storage and retrieval in terms of Nambu–Goldstone bosons.
Mari Jibu and Kunio Yasue later popularized these results under the name "quantum brain dynamics" (QBD) as the hypothesis to explain the function of the brain within the framework of quantum field theory with implications on consciousness.
Pribram
Karl Pribram's holonomic brain theory (quantum holography) invoked quantum mechanics to explain higher-order processing by the mind. He argued that his holonomic model solved the binding problem. Pribram collaborated with Bohm in his work on quantum approaches to mind and he provided evidence on how much of the processing in the brain was done in wholes. He proposed that ordered water at dendritic membrane surfaces might operate by structuring Bose–Einstein condensation supporting quantum dynamics.
Stapp
Henry Stapp proposed that quantum waves are reduced only when they interact with consciousness. He argues that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action. The collapse, therefore, takes place in the expectation that the observer associates with the state. Stapp's work drew criticism from scientists such as David Bourget and Danko Georgiev.
Catecholaminergic Neuron Electron Transport (CNET)
CNET is a hypothesized neural signaling mechanism in catecholaminergic neurons that would use quantum mechanical electron transport. The hypothesis is based in part on the observation by many independent researchers that electron tunneling occurs in ferritin, an iron storage protein that is prevalent in those neurons, at room temperature and ambient conditions. The hypothesized function of this mechanism is to assist in action selection, but the mechanism itself would be capable of integrating millions of cognitive and sensory neural signals using a physical mechanism associated with strong electron-electron interactions. Each tunneling event would involve a collapse of an electron wave function, but the collapse would be incidental to the physical effect created by strong electron-electron interactions.
CNET predicted a number of physical properties of these neurons that have been subsequently observed experimentally, such as electron tunneling in substantia nigra pars compacta (SNc) tissue and the presence of disordered arrays of ferritin in SNc tissue. The hypothesis also predicted that disordered ferritin arrays like those found in SNc tissue should be capable of supporting long-range electron transport and providing a switching or routing function, both of which have also been subsequently observed.
Another prediction of CNET was that the largest SNc neurons should mediate action selection. This prediction was contrary to earlier proposals about the function of those neurons at that time, which were based on predictive reward dopamine signaling. A team led by Dr. Pascal Kaeser of Harvard Medical School subsequently demonstrated that those neurons do in fact code movement, consistent with the earlier predictions of CNET. While the CNET mechanism has not yet been directly observed, it may be possible to do so using quantum dot fluorophores tagged to ferritin or other methods for detecting electron tunneling.
CNET is applicable to a number of different consciousness models as a binding or action selection mechanism, such as Integrated Information Theory (IIT) and Sensorimotor Theory (SMT). It is noted that many existing models of consciousness fail to specifically address action selection or binding. For example, O’Regan and Noë call binding a “pseudo problem,” but also state that “the fact that object attributes seem perceptually to be part of a single object does not require them to be ‘represented’ in any unified kind of way, for example, at a single location in the brain, or by a single process. They may be so represented, but there is no logical necessity for this.” Simply because there is no “logical necessity” for a physical phenomenon does not mean that it does not exist, or that once it is identified that it can be ignored. Likewise, global workspace theory (GWT) models appear to treat dopamine as modulatory, based on the prior understanding of those neurons from predictive reward dopamine signaling research, but GWT models could be adapted to include modeling of moment-by-moment activity in the striatum to mediate action selection, as observed by Kaiser. CNET is applicable to those neurons as a selection mechanism for that function, as otherwise that function could result in seizures from simultaneous actuation of competing sets of neurons. While CNET by itself is not a model of consciousness, it is able to integrate different models of consciousness through neural binding and action selection. However, a more complete understanding of how CNET might relate to consciousness would require a better understanding of strong electron-electron interactions in ferritin arrays, which implicates the many-body problem.
Criticism
These hypotheses of the quantum mind remain hypothetical speculation, as Penrose admits in his discussions. Until they make a prediction that is tested by experimentation, the hypotheses are not based on empirical evidence. In 2010, Lawrence Krauss was guarded in criticising Penrose's ideas. He said: "Roger Penrose has given lots of new-age crackpots ammunition... Many people are dubious that Penrose's suggestions are reasonable, because the brain is not an isolated quantum-mechanical system. To some extent it could be, because memories are stored at the molecular level, and at a molecular level quantum mechanics is significant." According to Krauss, "It is true that quantum mechanics is extremely strange, and on extremely small scales for short times, all sorts of weird things happen. And in fact, we can make weird quantum phenomena happen. But what quantum mechanics doesn't change about the universe is, if you want to change things, you still have to do something. You can't change the world by thinking about it."
The process of testing the hypotheses with experiments is fraught with conceptual/theoretical, practical, and ethical problems.
Conceptual problems
The idea that a quantum effect is necessary for consciousness to function is still in the realm of philosophy. Penrose proposes that it is necessary, but other theories of consciousness do not indicate that it is needed. For example, Daniel Dennett proposed a theory called multiple drafts model, which doesn't indicate that quantum effects are needed, in his 1991 book Consciousness Explained. A philosophical argument on either side is not a scientific proof, although philosophical analysis can indicate key differences in the types of models and show what type of experimental differences might be observed. But since there is no clear consensus among philosophers, there is no conceptual support that a quantum mind theory is needed.
A possible conceptual approach is to use quantum mechanics as an analogy to understand a different field of study like consciousness, without expecting that the laws of quantum physics will apply. An example of this approach is the idea of Schrödinger's cat. Erwin Schrödinger described how one could, in principle, create entanglement of a large-scale system by making it dependent on an elementary particle in a superposition. He proposed a scenario with a cat in a locked steel chamber, wherein the cat's survival depended on the state of a radioactive atom—whether it had decayed and emitted radiation. According to Schrödinger, the Copenhagen interpretation implies that the cat is both alive and dead until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; he intended the example to illustrate the absurdity of the existing view of quantum mechanics. But since Schrödinger's time, physicists have given other interpretations of the mathematics of quantum mechanics, some of which regard the "alive and dead" cat superposition as quite real. Schrödinger's famous thought experiment poses the question of when a system stops existing as a quantum superposition of states. In the same way, one can ask whether the act of making a decision is analogous to having a superposition of states of two decision outcomes, so that making a decision means "opening the box" to reduce the brain from a combination of states to one state. This analogy of decision-making uses a formalism derived from quantum mechanics, but does not indicate the actual mechanism by which the decision is made.
In this way, the idea is similar to quantum cognition. This field clearly distinguishes itself from the quantum mind, as it is not reliant on the hypothesis that there is something micro-physical quantum-mechanical about the brain. Quantum cognition is based on the quantum-like paradigm, generalized quantum paradigm, or quantum structure paradigm that information processing by complex systems such as the brain can be mathematically described in the framework of quantum information and quantum probability theory. This model uses quantum mechanics only as an analogy and does not propose that quantum mechanics is the physical mechanism by which it operates. For example, quantum cognition proposes that some decisions can be analyzed as if there is interference between two alternatives, but it is not a physical quantum interference effect.
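As a toy illustration of this interference formalism (the amplitudes below are arbitrary values chosen for the example, not from any study), the "quantum-like" probability of a decision differs from the classical one by a cross-term:

```python
import numpy as np

# Two alternative paths to the same decision, represented by amplitudes.
amp_a = 0.6                           # arbitrary illustrative amplitude
amp_b = 0.5 * np.exp(1j * np.pi / 3)  # arbitrary amplitude with a phase

classical = abs(amp_a)**2 + abs(amp_b)**2  # probabilities simply add
quantum = abs(amp_a + amp_b)**2            # includes an interference term

print(f"classical: {classical:.3f}, quantum-like: {quantum:.3f}")
# The difference, 2*Re(amp_a * conj(amp_b)), is the interference cross-term;
# it is a mathematical analogy only, not a physical quantum effect.
```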
Practical problems
The main theoretical argument against the quantum-mind hypothesis is the assertion that quantum states in the brain would lose coherency before they reached a scale where they could be useful for neural processing. This supposition was elaborated by Max Tegmark. His calculations indicate that quantum systems in the brain decohere at sub-picosecond timescales. No response by a brain has shown computational results or reactions on this fast of a timescale. Typical reactions are on the order of milliseconds, trillions of times longer than sub-picosecond timescales.
Daniel Dennett uses an experimental result in support of his multiple drafts model of an optical illusion that happens on a timescale of less than a second or so. In this experiment, two different-colored lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light that is flashed appears to move across to the position of the second light. Furthermore, the light seems to change color as it moves across the visual field. A green light will appear to turn red as it seems to move across to the position of a red light. Dennett asks how we could see the light change color before the second light is observed. Velmans argues that the cutaneous rabbit illusion, another illusion that happens in about a second, demonstrates that there is a delay while modelling occurs in the brain and that this delay was discovered by Libet. These slow illusions that happen at times of less than a second do not support a proposal that the brain functions on the picosecond timescale.
Penrose says:
The problem with trying to use quantum mechanics in the action of the brain is that if it were a matter of quantum nerve signals, these nerve signals would disturb the rest of the material in the brain, to the extent that the quantum coherence would get lost very quickly. You couldn't even attempt to build a quantum computer out of ordinary nerve signals, because they're just too big and in an environment that's too disorganized. Ordinary nerve signals have to be treated classically. But if you go down to the level of the microtubules, then there's an extremely good chance that you can get quantum-level activity inside them.
For my picture, I need this quantum-level activity in the microtubules; the activity has to be a large-scale thing that goes not just from one microtubule to the next but from one nerve cell to the next, across large areas of the brain. We need some kind of coherent activity of a quantum nature which is weakly coupled to the computational activity that Hameroff argues is taking place along the microtubules.
There are various avenues of attack. One is directly on the physics, on quantum theory, and there are certain experiments that people are beginning to perform, and various schemes for a modification of quantum mechanics. I don't think the experiments are sensitive enough yet to test many of these specific ideas. One could imagine experiments that might test these things, but they'd be very hard to perform.
Penrose also said in an interview:
...whatever consciousness is, it must be beyond computable physics.... It's not that consciousness depends on quantum mechanics, it's that it depends on where our current theories of quantum mechanics go wrong. It's to do with a theory that we don't know yet.
A demonstration of a quantum effect in the brain has to explain this problem or explain why it is not relevant, or that the brain somehow circumvents the problem of the loss of quantum coherency at body temperature. As Penrose proposes, it may require a new type of physical theory, something "we don't know yet."
Ethical problems
Deepak Chopra has referred to a "quantum soul" existing "apart from the body", human "access to a field of infinite possibilities", and other quantum mysticism topics such as quantum healing or quantum effects of consciousness. Seeing the human body as being undergirded by a "quantum-mechanical body" composed not of matter but of energy and information, he believes that "human aging is fluid and changeable; it can speed up, slow down, stop for a time, and even reverse itself", as determined by one's state of mind. Robert Carroll states that Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings. Chopra argues that what he calls "quantum healing" cures any manner of ailments, including cancer, through effects that he claims are based on the same principles as quantum mechanics. This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body. Chopra said: "I think quantum theory has a lot of things to say about the observer effect, about non-locality, about correlations. So I think there's a school of physicists who believe that consciousness has to be equated, or at least brought into the equation, in understanding quantum mechanics." On the other hand, he also claims that quantum effects are "just a metaphor. Just like an electron or a photon is an indivisible unit of information and energy, a thought is an indivisible unit of consciousness." In his book Quantum Healing, Chopra stated the conclusion that quantum entanglement links everything in the Universe, and therefore it must create consciousness.
According to Daniel Dennett, "On this topic, Everybody's an expert... but they think that they have a particular personal authority about the nature of their own conscious experiences that can trump any hypothesis they find unacceptable."
While quantum effects are significant in the physiology of the brain, critics of quantum mind hypotheses challenge whether the effects of known or speculated quantum phenomena in biology scale up to have significance in neuronal computation, much less the emergence of consciousness as phenomenon. Daniel Dennett said, "Quantum effects are there in your car, your watch, and your computer. But most things—most macroscopic objects—are, as it were, oblivious to quantum effects. They don't amplify them; they don't hinge on them."
See also
Artificial consciousness
Bohm interpretation of quantum mechanics
Coincidence detection in neurobiology
Critical brain hypothesis
Electromagnetic theories of consciousness
Evolutionary neuroscience
Hameroff-Penrose Orchestrated Objective Reduction
Hard problem of consciousness
Holonomic brain theory
Many-minds interpretation
Mechanism (philosophy)
Neuroplasticity
Quantum cognition
Quantum neural network
References
Further reading
McFadden, Johnjoe (2000) Quantum Evolution HarperCollins. ; . Final chapter on the quantum mind.
External links
Center for Consciousness Studies, directed by Stuart Hameroff
PhilPapers on Philosophy of Mind, edited by David Bourget and David Chalmers
Quantum Approaches to Consciousness, entry in Stanford Encyclopedia of Philosophy
Fringe science
Quantum mechanics
Theory of mind | Quantum mind | [
"Physics"
] | 5,579 | [
"Quantum mind",
"Theoretical physics",
"Quantum mechanics"
] |
418,142 | https://en.wikipedia.org/wiki/Facet | Facets are flat faces on geometric shapes. The organization of naturally occurring facets was key to early developments in crystallography, since they reflect the underlying symmetry of the crystal structure. Gemstones commonly have facets cut into them in order to improve their appearance by allowing them to reflect light.
Facet arrangements
Of the hundreds of facet arrangements that have been used, the most famous is probably the round brilliant cut, used for diamond and many colored gemstones. This first early version of what would become the modern Brilliant Cut is said to have been devised by an Italian named Peruzzi, sometime in the late 17th century. Later on, the first angles for an "ideal" cut diamond were calculated by Marcel Tolkowsky in 1919. Slight modifications have been made since then, but angles for "ideal" cut diamonds are still similar to Tolkowsky's formula. Round brilliants cut before the advent of "ideal" angles are often referred to as "Early round brilliant cut" or "Old European brilliant cut" and are considered poorly cut by today's standards, though there is still interest in them from collectors. Other historic diamond cuts include the "Old Mine Cut" which is similar to early versions of the round brilliant, but has a rectangular outline, and the "Rose Cut" which is a simple cut consisting of a flat, polished back, and varying numbers of angled facets on the crown, producing a faceted dome. Sometimes a 58th facet, called a culet is cut on the bottom of the stone to help prevent chipping of the pavilion point. Earlier brilliant cuts often have very large culets, while modern brilliant cut diamonds generally lack the culet facet, or it may be present in minute size.
Cutting facets
The art of cutting a gem is an exacting procedure performed on a faceting machine. The ideal product of facet cutting is a gemstone that displays a pleasing balance of internal reflections of light known as brilliance, strong and colorful dispersion which is commonly referred to as "fire", and brightly colored flashes of reflected light known as scintillation. Typically transparent to translucent stones are faceted, although opaque materials may occasionally be faceted as the luster of the gem will produce appealing reflections. Pleonaste (black spinel) and black diamond are examples of opaque faceted gemstones.
Facet angles
The angles used for each facet play a crucial role in the outcome of a gem. While the general facet arrangement of a particular gemstone cut may appear the same in any given gem material, the angles of each facet must be carefully adjusted to maximize the optical performance. The angles used will vary based on the refractive index of the gem material. When light passes through a gemstone and strikes a polished facet, the minimum angle possible for the facet to reflect the light back into the gemstone is called the critical angle. If the ray of light strikes a surface at less than this angle, it will leave the gem material instead of reflecting through the gem as brilliance. These lost light rays are sometimes referred to as "light leakage", and the effect caused by it is called "windowing", as the area will appear transparent and without brilliance. This is especially common in poorly cut commercial gemstones. Gemstones with higher refractive indices are generally more desirable: the critical angle decreases as the refractive index increases, allowing for greater internal reflection because light is less likely to escape.
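As a worked example, the critical angle follows from Snell's law as θc = arcsin(n_air / n_gem) for light passing from the gem into air. The sketch below uses typical handbook refractive indices (illustrative values, not from the source):

```python
import math

def critical_angle_deg(n_gem: float, n_air: float = 1.0) -> float:
    """Minimum angle of incidence (measured from the surface normal) for
    total internal reflection at a gem-to-air boundary, from Snell's law."""
    return math.degrees(math.asin(n_air / n_gem))

for name, n in [("diamond", 2.417), ("quartz", 1.544), ("glass", 1.52)]:
    print(f"{name}: n = {n}, critical angle ~ {critical_angle_deg(n):.1f} deg")
# diamond ~ 24.4 deg, quartz ~ 40.4 deg: the higher the index, the smaller
# the critical angle, so more light is reflected back as brilliance.
```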
The faceting machine
This machine uses a motor-driven plate to hold a precisely flat disk (known as a "lap") for the purpose of cutting or polishing. Diamond abrasives bonded to metal or resin are typically used for cutting laps, and a wide variety of materials are used for polishing laps in conjunction with either very fine diamond powder or oxide-based polishes. Water is typically used for cutting, while either oil or water is used for the polishing process.
The machine uses a system generally called a "mast" which consists of an angle readout, height adjustment and typically a gear (called an "index gear") with a particular number of teeth is used as a means of setting the rotational angle. The angles of rotation are evenly divided by the number of teeth present on the gear, though many machines include additional means of adjusting the rotational angle in finer increments, often called a "cheater". The stone is bonded to a (typically metal) rod known as a "dop" or "dop stick" and is held in place by part of the mast referred to as the "quill".
The modern faceting process
The dopped stone is ground at precise angles and indexes on cutting laps of progressively finer grit, and then the process is repeated a final time to polish each facet. Accurate repetition of angles in the cutting and polishing process is aided by the angle readout and index gear. The physical process of polishing is a subject of debate. One commonly accepted theory is that the fine abrasive particles of a polishing compound produce abrasions smaller than the wavelengths of light, thus making the minute scratches invisible. Since gemstones have two sides (the crown and pavilion), a device often called a "transfer jig" is used to flip the stone so that each side may be cut and polished.
Other methods
Cleaving relies on planar weaknesses of the chemical bonds in the crystal structure of a mineral. If a sharp blow is applied at the correct angle, the stone may split cleanly apart. While cleaving is sometimes used to split uncut gemstones into smaller pieces, it is never used to produce facets. Cleaving of diamonds was once common, but as the risk of damaging a stone is too high, undesirable diamond pieces often resulted. The preferred method of splitting diamonds into smaller pieces is now sawing.
An older and more primitive style of faceting machine called a jamb peg machine used wooden dop sticks of precise length and a "mast" system consisting of a plate with holes carefully placed in it. By placing the back end of the dop into one of the many holes, the stone could be introduced to the lap at precise angles. These machines took considerable skill to operate effectively.
Another method of facet cutting involves the use of cylinders to produce curved, concave facets. This technique can produce many unusual and artistic variations of the traditional faceting process.
Natural faceting
Many crystals naturally grow in faceted shapes. For instance, common table salt forms cubes and quartz forms hexagonal prisms. These characteristic shapes are a consequence of the crystal structure of the material and the surface energy, as well as the general conditions under which the crystal formed.
The Bravais lattice of the crystal structure defines a set of possible "low-energy planes", which are usually planes on which the atoms are close-packed. For instance, a cubic crystal may have low-energy planes on the faces of the cube or on the diagonals. The planes are low-energy in the sense that if the crystal is cleaved along these planes, there will be relatively few broken bonds and a relatively small increase in energy over the unbroken crystal. Equivalently, these planes have a low surface energy. The planes with the lowest energy will form the largest facets, in order to minimize the overall thermodynamic free energy of the crystal. If the surface energy as a function of the planes is known, the equilibrium shape of the crystal may be found via the Wulff construction.
Growth conditions, including the surface the crystal is growing on top of (the substrate), may change the expected shape of the crystal; for instance, if the base of the crystal is under stress from the substrate, this may favor the crystal growing taller rather than growing outwards along the substrate. The surface energy, including the relative energies of the different planes, depends on many factors including the temperature, the composition of the surroundings (e.g. humidity), and the pressure.
See also
Diamond cut
Princess cut
References
External links
Gem faceting process — Step by step pictures from rough stone to faceted gem.
Gemology
Crystallography | Facet | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,671 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
418,156 | https://en.wikipedia.org/wiki/Second%20quantization | Second quantization, also referred to as occupation number representation, is a formalism used to describe and analyze quantum many-body systems. In quantum field theory, it is known as canonical quantization, in which the fields (typically as the wave functions of matter) are thought of as field operators, in a manner similar to how the physical quantities (position, momentum, etc.) are thought of as operators in first quantization. The key ideas of this method were introduced in 1927 by Paul Dirac, and were later developed, most notably, by Pascual Jordan and Vladimir Fock.
In this approach, the quantum many-body states are represented in the Fock state basis, which are constructed by filling up each single-particle state with a certain number of identical particles. The second quantization formalism introduces the creation and annihilation operators to construct and handle the Fock states, providing useful tools to the study of the quantum many-body theory.
Quantum many-body states
The starting point of the second quantization formalism is the notion of indistinguishability of particles in quantum mechanics. Unlike in classical mechanics, where each particle is labeled by a distinct position vector $\mathbf{r}_i$ and different configurations of the set of $\mathbf{r}_i$'s correspond to different many-body states, in quantum mechanics, the particles are identical, such that exchanging two particles, i.e. $\mathbf{r}_i \leftrightarrow \mathbf{r}_j$, does not lead to a different many-body quantum state. This implies that the quantum many-body wave function must be invariant (up to a phase factor) under the exchange of two particles. According to the statistics of the particles, the many-body wave function can either be symmetric or antisymmetric under the particle exchange:
$\Psi(\ldots, \mathbf{r}_i, \ldots, \mathbf{r}_j, \ldots) = +\Psi(\ldots, \mathbf{r}_j, \ldots, \mathbf{r}_i, \ldots)$ if the particles are bosons,
$\Psi(\ldots, \mathbf{r}_i, \ldots, \mathbf{r}_j, \ldots) = -\Psi(\ldots, \mathbf{r}_j, \ldots, \mathbf{r}_i, \ldots)$ if the particles are fermions.
This exchange symmetry property imposes a constraint on the many-body wave function. Each time a particle is added or removed from the many-body system, the wave function must be properly symmetrized or anti-symmetrized to satisfy the symmetry constraint. In the first quantization formalism, this constraint is guaranteed by representing the wave function as a linear combination of permanents (for bosons) or determinants (for fermions) of single-particle states. In the second quantization formalism, the issue of symmetrization is automatically taken care of by the creation and annihilation operators, so that its notation can be much simpler.
First-quantized many-body wave function
Consider a complete set of single-particle wave functions $\psi_\alpha(\mathbf{r})$ labeled by $\alpha$ (which may be a combined index of a number of quantum numbers). The following wave function
$\Psi[\alpha_1, \ldots, \alpha_N](\mathbf{r}_1, \ldots, \mathbf{r}_N) = \psi_{\alpha_1}(\mathbf{r}_1)\,\psi_{\alpha_2}(\mathbf{r}_2) \cdots \psi_{\alpha_N}(\mathbf{r}_N)$
represents an N-particle state with the ith particle occupying the single-particle state $\alpha_i$. In the shorthand notation, the position argument of the wave function may be omitted, and it is assumed that the ith single-particle wave function describes the state of the ith particle. The wave function has not been symmetrized or anti-symmetrized, thus in general not qualified as a many-body wave function for identical particles. However, it can be brought to the symmetrized (anti-symmetrized) form by the operator $\mathcal{S}$ for the symmetrizer, and $\mathcal{A}$ for the antisymmetrizer.
For bosons, the many-body wave function must be symmetrized,
$\Psi^{S}[\alpha_1, \ldots, \alpha_N] = \mathcal{N} \sum_{\pi \in S_N} \prod_{i=1}^{N} \psi_{\alpha_{\pi(i)}}(\mathbf{r}_i),$
while for fermions, the many-body wave function must be anti-symmetrized,
$\Psi^{A}[\alpha_1, \ldots, \alpha_N] = \mathcal{N} \sum_{\pi \in S_N} \operatorname{sgn}(\pi) \prod_{i=1}^{N} \psi_{\alpha_{\pi(i)}}(\mathbf{r}_i).$
Here $\pi$ is an element in the N-body permutation group (or symmetric group) $S_N$, which performs a permutation among the state labels $\alpha_i$, and $\operatorname{sgn}(\pi)$ denotes the corresponding permutation sign. $\mathcal{N}$ is the normalization operator that normalizes the wave function. (It is the operator that applies a suitable numerical normalization factor to the symmetrized tensors of degree n; see the next section for its value.)
If one arranges the single-particle wave functions in a matrix $U$, such that the row-i column-j matrix element is $U_{ij} = \psi_{\alpha_j}(\mathbf{r}_i)$, then the boson many-body wave function can be simply written as a permanent $\operatorname{perm}(U)$, and the fermion many-body wave function as a determinant $\det(U)$ (also known as the Slater determinant).
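A minimal numerical sketch of this construction (the plane-wave orbitals, positions, and state labels below are illustrative choices, not from the source):

```python
import itertools
import numpy as np

def orbital(k):
    """Illustrative 1D plane-wave orbital psi_k(x) = exp(i k x)."""
    return lambda x: np.exp(1j * k * x)

def permanent(U):
    """Permanent via brute-force sum over permutations (fine for small N)."""
    n = U.shape[0]
    return sum(
        np.prod([U[i, p[i]] for i in range(n)])
        for p in itertools.permutations(range(n))
    )

ks = [0.0, 1.0, 2.0]  # labels of the single-particle states alpha_j
xs = [0.1, 0.5, 0.9]  # particle positions r_i
U = np.array([[orbital(k)(x) for k in ks] for x in xs])  # U[i, j] = psi_j(r_i)

psi_boson = permanent(U)        # symmetric (boson) amplitude, up to normalization
psi_fermion = np.linalg.det(U)  # antisymmetric Slater-determinant amplitude
print(psi_boson, psi_fermion)
```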
Second-quantized Fock states
First quantized wave functions involve complicated symmetrization procedures to describe physically realizable many-body states, because the language of first quantization is redundant for indistinguishable particles. In the first quantization language, the many-body state is described by answering a series of questions like "Which particle is in which state?". However these are not physical questions, because the particles are identical, and it is impossible to tell which particle is which in the first place. The seemingly different states $\psi_\alpha \otimes \psi_\beta$ and $\psi_\beta \otimes \psi_\alpha$ are actually redundant names of the same quantum many-body state. So the symmetrization (or anti-symmetrization) must be introduced to eliminate this redundancy in the first quantization description.
In the second quantization language, instead of asking "which particle is in which state", one asks "How many particles are there in each state?". Because this description does not refer to the labeling of particles, it contains no redundant information, and hence leads to a precise and simpler description of the quantum many-body state. In this approach, the many-body state is represented in the occupation number basis, and the basis state is labeled by the set of occupation numbers, denoted
$|[n_\alpha]\rangle \equiv |n_1, n_2, \ldots, n_\alpha, \ldots\rangle,$
meaning that there are $n_\alpha$ particles in the single-particle state $\alpha$ (or $|\alpha\rangle$). The occupation numbers sum to the total number of particles, i.e. $\sum_\alpha n_\alpha = N$. For fermions, the occupation number $n_\alpha$ can only be 0 or 1, due to the Pauli exclusion principle; while for bosons it can be any non-negative integer, $n_\alpha = 0, 1, 2, \ldots$.
The occupation number states are also known as Fock states. All the Fock states form a complete basis of the many-body Hilbert space, or Fock space. Any generic quantum many-body state can be expressed as a linear combination of Fock states.
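A small sketch enumerating the occupation-number basis for N particles in M modes (the mode and particle counts are arbitrary example values):

```python
from itertools import combinations, combinations_with_replacement

def boson_fock_basis(n_modes, n_particles):
    """All occupation-number tuples (n_1..n_M) with sum N, each n_a >= 0."""
    basis = []
    for occupied in combinations_with_replacement(range(n_modes), n_particles):
        occ = [0] * n_modes
        for mode in occupied:
            occ[mode] += 1
        basis.append(tuple(occ))
    return basis

def fermion_fock_basis(n_modes, n_particles):
    """All occupation-number tuples with sum N and each n_a in {0, 1}."""
    return [
        tuple(1 if m in occupied else 0 for m in range(n_modes))
        for occupied in combinations(range(n_modes), n_particles)
    ]

print(boson_fock_basis(3, 2))    # 6 states: (2,0,0), (1,1,0), (1,0,1), ...
print(fermion_fock_basis(3, 2))  # 3 states: (1,1,0), (1,0,1), (0,1,1)
```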
Note that besides providing a more efficient language, Fock space allows for a variable number of particles. As a Hilbert space, it is isomorphic to the sum of the n-particle bosonic or fermionic tensor spaces described in the previous section, including a one-dimensional zero-particle space C.
The Fock state with all occupation numbers equal to zero is called the vacuum state, denoted $|\ldots, 0_\alpha, \ldots\rangle$. The Fock state with only one non-zero occupation number is a single-mode Fock state, denoted $|n_\alpha\rangle$. In terms of the first quantized wave function, the vacuum state is the unit tensor product and can be denoted $\Psi = 1$. The single-particle state $|1_\alpha\rangle$ is reduced to its wave function $\psi_\alpha$. Other single-mode many-body (boson) states are just the tensor product of the wave function of that mode, such as $|2_\alpha\rangle \propto \psi_\alpha \otimes \psi_\alpha$ and $|n_\alpha\rangle \propto \psi_\alpha^{\otimes n_\alpha}$. For multi-mode Fock states (meaning more than one single-particle state $\alpha$ is involved), the corresponding first-quantized wave function will require proper symmetrization according to the particle statistics, e.g. $\psi_1 \psi_2 + \psi_2 \psi_1$ for a boson state $|1_1, 1_2\rangle$, and $\psi_1 \psi_2 - \psi_2 \psi_1$ for a fermion state $|1_1, 1_2\rangle$ (the tensor symbol $\otimes$ between $\psi_1$ and $\psi_2$ is omitted for simplicity). In general, the normalization factor is found to be $\left(N!\,\prod_\alpha n_\alpha!\right)^{-1/2}$, where N is the total number of particles. For fermions, this expression reduces to $(N!)^{-1/2}$ as $n_\alpha$ can only be either zero or one. So the first-quantized wave function corresponding to the Fock state reads
$\Psi^{S}[n_1, n_2, \ldots] = \frac{1}{\sqrt{N!\,\prod_\alpha n_\alpha!}} \sum_{\pi \in S_N} \prod_{i=1}^{N} \psi_{\alpha_{\pi(i)}}(\mathbf{r}_i)$
for bosons and
$\Psi^{A}[n_1, n_2, \ldots] = \frac{1}{\sqrt{N!}} \sum_{\pi \in S_N} \operatorname{sgn}(\pi) \prod_{i=1}^{N} \psi_{\alpha_{\pi(i)}}(\mathbf{r}_i)$
for fermions. Note that for fermions, $n_\alpha \in \{0, 1\}$ only, so the tensor product above is effectively just a product over all occupied single-particle states.
Creation and annihilation operators
The creation and annihilation operators are introduced to add or remove a particle from the many-body system. These operators lie at the core of the second quantization formalism, bridging the gap between the first- and the second-quantized states. Applying the creation (annihilation) operator to a first-quantized many-body wave function will insert into (delete from) the wave function a single-particle state, in a symmetrized way depending on the particle statistics. On the other hand, all the second-quantized Fock states can be constructed by applying the creation operators to the vacuum state repeatedly.
The creation and annihilation operators (for bosons) were originally constructed in the context of the quantum harmonic oscillator as the raising and lowering operators, and were then generalized to the field operators in quantum field theory. They are fundamental to quantum many-body theory, in the sense that every many-body operator (including the Hamiltonian of the many-body system and all the physical observables) can be expressed in terms of them.
Insertion and deletion operation
The creation and annihilation of a particle is implemented by the insertion and deletion of the single-particle state from the first-quantized wave function in an either symmetric or anti-symmetric manner. Let $|\alpha\rangle$ be a single-particle state, let 1 be the tensor identity (it is the generator of the zero-particle space $\mathbb{C}$ and satisfies $1\otimes|\alpha\rangle = |\alpha\rangle$ in the tensor algebra over the fundamental Hilbert space), and let $|\beta\rangle\otimes|\psi\rangle$ be a generic tensor product state. The insertion operators $\hat{\imath}_{\alpha\pm}$ and the deletion operators $\hat{\jmath}_{\alpha\pm}$ are linear operators defined by the following recursive equations:
$$\hat{\imath}_{\alpha\pm}\,1 = |\alpha\rangle, \qquad \hat{\imath}_{\alpha\pm}\left(|\beta\rangle\otimes|\psi\rangle\right) = |\alpha\rangle\otimes|\beta\rangle\otimes|\psi\rangle \pm |\beta\rangle\otimes\left(\hat{\imath}_{\alpha\pm}|\psi\rangle\right),$$
$$\hat{\jmath}_{\alpha\pm}\,1 = 0, \qquad \hat{\jmath}_{\alpha\pm}\left(|\beta\rangle\otimes|\psi\rangle\right) = \delta_{\alpha\beta}\,|\psi\rangle \pm |\beta\rangle\otimes\left(\hat{\jmath}_{\alpha\pm}|\psi\rangle\right).$$
Here $\delta_{\alpha\beta}$ is the Kronecker delta symbol, which gives 1 if $\alpha = \beta$, and 0 otherwise. The subscript $\pm$ of the insertion or deletion operators indicates whether symmetrization (+, for bosons) or anti-symmetrization (−, for fermions) is implemented.
Boson creation and annihilation operators
The boson creation (resp. annihilation) operator is usually denoted $b_\alpha^\dagger$ (resp. $b_\alpha$). The creation operator adds a boson to the single-particle state $|\alpha\rangle$, and the annihilation operator removes a boson from the single-particle state $|\alpha\rangle$. The creation and annihilation operators are Hermitian conjugates of each other, but neither of them is a Hermitian operator ($b_\alpha^\dagger \neq b_\alpha$).
Definition
The boson creation (annihilation) operator is a linear operator, whose action on an N-particle first-quantized wave function $|\psi\rangle$ is defined as
$$b_\alpha^\dagger|\psi\rangle = \frac{1}{\sqrt{N+1}}\,\hat{\imath}_{\alpha+}|\psi\rangle, \qquad b_\alpha|\psi\rangle = \frac{1}{\sqrt{N}}\,\hat{\jmath}_{\alpha+}|\psi\rangle,$$
where $\hat{\imath}_{\alpha+}$ inserts the single-particle state $|\alpha\rangle$ in the $N+1$ possible insertion positions symmetrically, and $\hat{\jmath}_{\alpha+}$ deletes the single-particle state $|\alpha\rangle$ from the $N$ possible deletion positions symmetrically.
Examples
Hereinafter the tensor symbol $\otimes$ between single-particle states is omitted for simplicity. Take the state $|1_\alpha 1_\beta\rangle = \frac{1}{\sqrt{2}}\left(|\alpha\beta\rangle + |\beta\alpha\rangle\right)$ and create one more boson in the state $|\alpha\rangle$:
$$b_\alpha^\dagger\,|1_\alpha 1_\beta\rangle = \frac{1}{\sqrt{3}}\,\hat{\imath}_{\alpha+}\,\frac{1}{\sqrt{2}}\left(|\alpha\beta\rangle + |\beta\alpha\rangle\right) = \sqrt{2}\cdot\frac{1}{\sqrt{3}}\left(|\alpha\alpha\beta\rangle + |\alpha\beta\alpha\rangle + |\beta\alpha\alpha\rangle\right) = \sqrt{2}\,|2_\alpha 1_\beta\rangle.$$
Then annihilate one boson from the state $|\alpha\rangle$:
$$b_\alpha\,|2_\alpha 1_\beta\rangle = \frac{1}{\sqrt{3}}\,\hat{\jmath}_{\alpha+}\,|2_\alpha 1_\beta\rangle = \sqrt{2}\,|1_\alpha 1_\beta\rangle.$$
Action on Fock states
Starting from the single-mode vacuum state $|0_\alpha\rangle$, applying the creation operator $b_\alpha^\dagger$ repeatedly, one finds
$$b_\alpha^\dagger\,|n_\alpha\rangle = \sqrt{n_\alpha + 1}\,|n_\alpha + 1\rangle.$$
The creation operator raises the boson occupation number by 1. Therefore, all the occupation number states can be constructed by the boson creation operator from the vacuum state:
$$|n_\alpha\rangle = \frac{1}{\sqrt{n_\alpha!}}\left(b_\alpha^\dagger\right)^{n_\alpha}|0_\alpha\rangle.$$
On the other hand, the annihilation operator lowers the boson occupation number by 1:
$$b_\alpha\,|n_\alpha\rangle = \sqrt{n_\alpha}\,|n_\alpha - 1\rangle.$$
It will also quench the vacuum state, $b_\alpha|0_\alpha\rangle = 0$, as there is no boson left in the vacuum state to be annihilated. Using the above formulae, it can be shown that
$$b_\alpha^\dagger b_\alpha\,|n_\alpha\rangle = n_\alpha\,|n_\alpha\rangle,$$
meaning that $\hat{n}_\alpha = b_\alpha^\dagger b_\alpha$ defines the boson number operator.
The above result can be generalized to any Fock state of bosons:
$$b_\alpha^\dagger\,|\ldots, n_\alpha, \ldots\rangle = \sqrt{n_\alpha + 1}\,|\ldots, n_\alpha + 1, \ldots\rangle,$$
$$b_\alpha\,|\ldots, n_\alpha, \ldots\rangle = \sqrt{n_\alpha}\,|\ldots, n_\alpha - 1, \ldots\rangle.$$
These two equations can be considered as the defining properties of boson creation and annihilation operators in the second-quantization formalism. The complicated symmetrization of the underlying first-quantized wave function is automatically taken care of by the creation and annihilation operators (when acting on the first-quantized wave function), so that the complexity is not revealed on the second-quantized level, and the second-quantization formulae are simple and neat.
Operator identities
The following operator identities follow from the action of the boson creation and annihilation operators on the Fock state:
$$\left[b_\alpha, b_\beta\right] = \left[b_\alpha^\dagger, b_\beta^\dagger\right] = 0, \qquad \left[b_\alpha, b_\beta^\dagger\right] = \delta_{\alpha\beta}.$$
These commutation relations can be considered as the algebraic definition of the boson creation and annihilation operators. The fact that the boson many-body wave function is symmetric under particle exchange is also manifested by the commutation of the boson operators.
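To make these relations concrete, here is a minimal numerical sketch (my own illustration of the standard matrix representation, not from the original text): a single bosonic mode is represented on a Fock space truncated at an assumed cutoff of n_max particles, and the defining actions and the commutator are checked directly.

```python
import math
import numpy as np

n_max = 6  # truncation of the single-mode Fock space (an assumed cutoff)

# Annihilation operator b in the occupation-number basis: b|n> = sqrt(n)|n-1>
b = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
b_dag = b.conj().T  # creation operator b-dagger, the Hermitian conjugate

# Number operator n = b†b is diagonal with eigenvalues 0, 1, 2, ...
assert np.allclose(np.diag(b_dag @ b), np.arange(n_max))

# Commutator [b, b†] = 1 holds everywhere except at the truncation edge
comm = b @ b_dag - b_dag @ b
assert np.allclose(comm[:-1, :-1], np.eye(n_max - 1))

# Build |3> from the vacuum: (b†)^3 |0> / sqrt(3!)
vac = np.zeros(n_max)
vac[0] = 1.0
state = np.linalg.matrix_power(b_dag, 3) @ vac / math.sqrt(math.factorial(3))
assert np.allclose(state, np.eye(n_max)[3])
```

The truncation is the price of a finite matrix representation; in the untruncated Fock space the commutator is exactly the identity.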
The raising and lowering operators of the quantum harmonic oscillator also satisfy the same set of commutation relations, implying that the bosons can be interpreted as the energy quanta (phonons) of an oscillator. The position and momentum operators of a harmonic oscillator (or a collection of harmonic oscillating modes) are given by Hermitian combinations of phonon creation and annihilation operators,
$$\hat{x}_\alpha = \frac{1}{\sqrt{2}}\left(b_\alpha + b_\alpha^\dagger\right), \qquad \hat{p}_\alpha = \frac{i}{\sqrt{2}}\left(b_\alpha^\dagger - b_\alpha\right),$$
which reproduce the canonical commutation relation between position and momentum operators (with $\hbar = 1$):
$$\left[\hat{x}_\alpha, \hat{p}_\beta\right] = i\,\delta_{\alpha\beta}.$$
This idea is generalized in the quantum field theory, which considers each mode of the matter field as an oscillator subject to quantum fluctuations, and the bosons are treated as the excitations (or energy quanta) of the field.
Fermion creation and annihilation operators
The fermion creation (annihilation) operator is usually denoted $c_\alpha^\dagger$ ($c_\alpha$). The creation operator adds a fermion to the single-particle state $|\alpha\rangle$, and the annihilation operator removes a fermion from the single-particle state $|\alpha\rangle$.
Definition
The fermion creation (annihilation) operator is a linear operator, whose action on an N-particle first-quantized wave function $|\psi\rangle$ is defined as
$$c_\alpha^\dagger|\psi\rangle = \frac{1}{\sqrt{N+1}}\,\hat{\imath}_{\alpha-}|\psi\rangle, \qquad c_\alpha|\psi\rangle = \frac{1}{\sqrt{N}}\,\hat{\jmath}_{\alpha-}|\psi\rangle,$$
where $\hat{\imath}_{\alpha-}$ inserts the single-particle state $|\alpha\rangle$ in the $N+1$ possible insertion positions anti-symmetrically, and $\hat{\jmath}_{\alpha-}$ deletes the single-particle state $|\alpha\rangle$ from the $N$ possible deletion positions anti-symmetrically.
It is particularly instructive to view the results of creation and annihilation operators on states of two (or more) fermions, because they demonstrate the effects of exchange. A few illustrative operations are given in the example below. The complete algebra for creation and annihilation operators on a two-fermion state can be found in Quantum Photonics.
Examples
Hereinafter the tensor symbol $\otimes$ between single-particle states is omitted for simplicity. Take the state $|1_\alpha 1_\beta\rangle = \frac{1}{\sqrt{2}}\left(|\alpha\beta\rangle - |\beta\alpha\rangle\right)$; an attempt to create one more fermion in the occupied state $|\alpha\rangle$ will quench the whole many-body wave function,
$$c_\alpha^\dagger\,|1_\alpha 1_\beta\rangle = 0.$$
Annihilate a fermion in the state $|\alpha\rangle$,
$$c_\alpha\,|1_\alpha 1_\beta\rangle = |1_\beta\rangle;$$
take instead the state $|\beta\rangle$,
$$c_\beta\,|1_\alpha 1_\beta\rangle = -|1_\alpha\rangle.$$
The minus sign (known as the fermion sign) appears due to the anti-symmetric property of the fermion wave function.
Action on Fock states
Starting from the single-mode vacuum state $|0_\alpha\rangle$, applying the fermion creation operator $c_\alpha^\dagger$,
$$c_\alpha^\dagger\,|0_\alpha\rangle = |1_\alpha\rangle, \qquad c_\alpha^\dagger\,|1_\alpha\rangle = 0.$$
If the single-particle state $|\alpha\rangle$ is empty, the creation operator will fill the state with a fermion. However, if the state is already occupied by a fermion, further application of the creation operator will quench the state, demonstrating the Pauli exclusion principle that two identical fermions cannot occupy the same state simultaneously. Nevertheless, the fermion can be removed from the occupied state by the fermion annihilation operator $c_\alpha$,
$$c_\alpha\,|1_\alpha\rangle = |0_\alpha\rangle, \qquad c_\alpha\,|0_\alpha\rangle = 0.$$
The vacuum state is quenched by the action of the annihilation operator.
Similar to the boson case, the fermion Fock state can be constructed from the vacuum state using the fermion creation operators (the ordering of the operators matters, and fixes the overall sign):
$$|n_1, n_2, \ldots\rangle = \left(c_1^\dagger\right)^{n_1}\left(c_2^\dagger\right)^{n_2}\cdots\,|0\rangle.$$
It is easy to check (by enumeration) that
$$c_\alpha^\dagger c_\alpha\,|n_\alpha\rangle = n_\alpha\,|n_\alpha\rangle,$$
meaning that $\hat{n}_\alpha = c_\alpha^\dagger c_\alpha$ defines the fermion number operator.
The above result can be generalized to any Fock state of fermions:
$$c_\alpha^\dagger\,|n_1, \ldots, n_\alpha, \ldots\rangle = (-1)^{\sum_{\beta<\alpha} n_\beta}\,(1 - n_\alpha)\,|n_1, \ldots, n_\alpha + 1, \ldots\rangle,$$
$$c_\alpha\,|n_1, \ldots, n_\alpha, \ldots\rangle = (-1)^{\sum_{\beta<\alpha} n_\beta}\,n_\alpha\,|n_1, \ldots, n_\alpha - 1, \ldots\rangle.$$
Recall that the occupation number can only take 0 or 1 for fermions. These two equations can be considered as the defining properties of fermion creation and annihilation operators in the second quantization formalism. Note that the fermion sign structure $(-1)^{\sum_{\beta<\alpha} n_\beta}$, also known as the Jordan–Wigner string, requires a predefined ordering of the single-particle states (the spin structure) and involves a counting of the fermion occupation numbers of all the preceding states; therefore the fermion creation and annihilation operators are considered non-local in some sense. This observation leads to the idea that fermions are emergent particles in long-range entangled local qubit systems.
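As a concrete check of this sign structure, the following sketch (my own illustration; the chain length and identifiers are assumptions) represents fermion modes with Pauli matrices via the Jordan–Wigner construction, where a string of Z matrices on the preceding modes supplies the factor $(-1)^{\sum_{\beta<\alpha} n_\beta}$:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
# Single-mode annihilation in the basis {|0>, |1>}: c|1> = |0>, c|0> = 0
c_local = np.array([[0.0, 1.0], [0.0, 0.0]])

def c(alpha, M):
    """Annihilation operator for mode alpha (0-indexed) among M modes: a
    Jordan-Wigner string of Z's on all preceding modes supplies the sign."""
    ops = [Z] * alpha + [c_local] + [I2] * (M - alpha - 1)
    return reduce(np.kron, ops)

M = 3
cs = [c(a, M) for a in range(M)]
# Canonical anticommutation relations: {c_a, c_b-dagger} = delta_ab, {c_a, c_b} = 0
for a in range(M):
    for b in range(M):
        acomm = cs[a] @ cs[b].conj().T + cs[b].conj().T @ cs[a]
        assert np.allclose(acomm, (a == b) * np.eye(2**M))
        assert np.allclose(cs[a] @ cs[b] + cs[b] @ cs[a], 0)
```

The asserts confirm that the Z-string is exactly what makes operators on different modes anticommute rather than commute, and that dropping it would break the fermion statistics.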
Operator identities
The following operator identities follow from the action of the fermion creation and annihilation operators on the Fock state:
$$\left\{c_\alpha, c_\beta\right\} = \left\{c_\alpha^\dagger, c_\beta^\dagger\right\} = 0, \qquad \left\{c_\alpha, c_\beta^\dagger\right\} = \delta_{\alpha\beta}.$$
These anti-commutation relations can be considered as the algebraic definition of the fermion creation and annihilation operators. The fact that the fermion many-body wave function is anti-symmetric under particle exchange is also manifested by the anti-commutation of the fermion operators.
The creation and annihilation operators are Hermitian conjugates of each other, but neither of them is a Hermitian operator ($c_\alpha^\dagger \neq c_\alpha$). The Hermitian combinations of the fermion creation and annihilation operators,
$$\gamma_{\alpha,1} = c_\alpha + c_\alpha^\dagger, \qquad \gamma_{\alpha,2} = i\left(c_\alpha^\dagger - c_\alpha\right),$$
are called Majorana fermion operators. They can be viewed as the fermionic analog of the position and momentum operators of a "fermionic" harmonic oscillator. They satisfy the anticommutation relation
$$\left\{\gamma_i, \gamma_j\right\} = 2\,\delta_{ij},$$
where $i, j$ label any Majorana fermion operators on equal footing (regardless of their origin from the real or imaginary combination of the complex fermion operators $c_\alpha$). The anticommutation relation indicates that Majorana fermion operators generate a Clifford algebra, which can be systematically represented as Pauli operators in the many-body Hilbert space.
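For a single mode, the Clifford algebra can be verified directly; in the sketch below (illustrative only, using one common sign convention) the two Majorana operators come out as the Pauli matrices X and Y:

```python
import numpy as np

# Single fermion mode in the basis {|0>, |1>}: c|1> = |0>, c|0> = 0
c = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
c_dag = c.conj().T

# Hermitian (Majorana) combinations; sign conventions vary by author
g1 = c + c_dag
g2 = 1j * (c_dag - c)

for a, ga in enumerate((g1, g2)):
    assert np.allclose(ga, ga.conj().T)  # each is Hermitian
    for b, gb in enumerate((g1, g2)):
        # Clifford algebra {gamma_a, gamma_b} = 2 delta_ab
        assert np.allclose(ga @ gb + gb @ ga, 2 * (a == b) * np.eye(2))

# In this representation gamma_1 and gamma_2 are the Pauli matrices X and Y
assert np.allclose(g1, np.array([[0, 1], [1, 0]]))
assert np.allclose(g2, np.array([[0, -1j], [1j, 0]]))
```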
Quantum field operators
Defining $a_\nu$ ($a_\nu^\dagger$) as a general annihilation (creation) operator for a single-particle state $|\nu\rangle$ that could be either fermionic or bosonic, the real-space representation of the operators defines the quantum field operators $\Psi(\mathbf{r})$ and $\Psi^\dagger(\mathbf{r})$ by
$$\Psi(\mathbf{r}) = \sum_\nu \psi_\nu(\mathbf{r})\,a_\nu, \qquad \Psi^\dagger(\mathbf{r}) = \sum_\nu \psi_\nu^*(\mathbf{r})\,a_\nu^\dagger.$$
These are second quantization operators, with coefficients $\psi_\nu(\mathbf{r})$ and $\psi_\nu^*(\mathbf{r})$ that are ordinary first-quantization wavefunctions. Thus, for example, any expectation values will be ordinary first-quantization wavefunctions. Loosely speaking, $\Psi^\dagger(\mathbf{r})$ is the sum of all possible ways to add a particle to the system at position r through any of the basis states $\psi_\nu(\mathbf{r})$, which are not necessarily plane waves.
Since $\Psi(\mathbf{r})$ and $\Psi^\dagger(\mathbf{r})$ are second quantization operators defined at every point in space, they are called quantum field operators. They obey the following fundamental commutator and anti-commutator relations:
$$\left[\Psi(\mathbf{r}), \Psi^\dagger(\mathbf{r}')\right] = \delta^3(\mathbf{r} - \mathbf{r}') \quad \text{for boson fields,}$$
$$\left\{\Psi(\mathbf{r}), \Psi^\dagger(\mathbf{r}')\right\} = \delta^3(\mathbf{r} - \mathbf{r}') \quad \text{for fermion fields.}$$
For homogeneous systems it is often desirable to transform between real space and the momentum representation; hence, the quantum field operators in the Fourier basis read
$$\Psi(\mathbf{r}) = \frac{1}{\sqrt{V}} \sum_{\mathbf{k}} e^{i\mathbf{k}\cdot\mathbf{r}}\,a_{\mathbf{k}}, \qquad \Psi^\dagger(\mathbf{r}) = \frac{1}{\sqrt{V}} \sum_{\mathbf{k}} e^{-i\mathbf{k}\cdot\mathbf{r}}\,a_{\mathbf{k}}^\dagger.$$
Comment on nomenclature
The term "second quantization", introduced by Jordan, is a misnomer that has persisted for historical reasons. At the origin of quantum field theory, it was inappositely thought that the Dirac equation described a relativistic wavefunction (hence the obsolete "Dirac sea" interpretation), rather than a classical spinor field which, when quantized (like the scalar field), yielded a fermionic quantum field (vs. a bosonic quantum field).
One is not quantizing "again", as the term "second" might suggest; the field that is being quantized is not a Schrödinger wave function that was produced as the result of quantizing a particle, but is a classical field (such as the electromagnetic field or Dirac spinor field), essentially an assembly of coupled oscillators, that was not previously quantized. One is merely quantizing each oscillator in this assembly, shifting from a semiclassical treatment of the system to a fully quantum-mechanical one.
See also
Canonical quantization
First quantization
Geometric quantization
Quantization (physics)
Schrödinger functional
Scalar field theory
References
Quantum field theory
Mathematical quantization | Second quantization | [
"Physics"
] | 3,978 | [
"Quantum field theory",
"Mathematical quantization",
"Quantum mechanics"
] |
418,237 | https://en.wikipedia.org/wiki/Mach%20wave | In fluid dynamics, a Mach wave, also known as a weak discontinuity, is a pressure wave traveling with the speed of sound caused by a slight change of pressure added to a compressible flow. These weak waves can combine in supersonic flow to become a shock wave if sufficient Mach waves are present at any location. Such a shock wave is called a Mach stem or Mach front. Thus, it is possible to have shockless compression or expansion in a supersonic flow by having the production of Mach waves sufficiently spaced (cf. isentropic compression in supersonic flows). A Mach wave is the weak limit of an oblique shock wave where time averages of flow quantities don't change (a normal shock is the other limit). If the size of the object moving at the speed of sound is near 0, then this domain of influence of the wave is called a Mach cone.
Mach angle
A Mach wave propagates across the flow at the Mach angle μ, which is the angle formed between the Mach wave wavefront and a vector that points opposite to the vector of motion. It is given by
$$\mu = \arcsin\left(\frac{1}{M}\right),$$
where M is the Mach number.
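As a quick numerical illustration of this relation (a hypothetical sketch, not part of the article):

```python
import math

def mach_angle_deg(M):
    """Mach angle mu = arcsin(1/M), defined only for supersonic flow (M >= 1)."""
    if M < 1:
        raise ValueError("Mach waves only form in supersonic flow (M >= 1)")
    return math.degrees(math.asin(1.0 / M))

for M in (1.0, 1.5, 2.0, 5.0):
    print(f"M = {M}: mu = {mach_angle_deg(M):.1f} deg")
# M = 1.0 gives 90.0 deg; the Mach cone narrows as the Mach number grows
```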
Mach waves can be used in schlieren or shadowgraph observations to determine the local Mach number of the flow. Early observations by Ernst Mach used grooves in the wall of a duct to produce Mach waves, which were then photographed by the schlieren method to obtain data about the flow in nozzles and ducts. Mach angles may also occasionally be visualized through the condensation they cause in air, for example the vapor cones around aircraft during transonic flight.
See also
Compressible flow
Prandtl–Meyer expansion fan
Shadowgraph technique
Schlieren photography
Shock wave
References
External links
Supersonic wind tunnel test demonstration (Mach 2.5) with flat plate and wedge creating an oblique shock along with numerous Mach waves(Video)
Fluid dynamics
Waves
Ernst Mach | Mach wave | [
"Physics",
"Chemistry",
"Engineering"
] | 394 | [
"Physical phenomena",
"Chemical engineering",
"Waves",
"Motion (physics)",
"Piping",
"Fluid dynamics"
] |
418,292 | https://en.wikipedia.org/wiki/Strain%20gauge | A strain gauge (also spelled strain gage) is a device used to measure strain on an object. Invented by Edward E. Simmons and Arthur C. Ruge in 1938, the most common type of strain gauge consists of an insulating flexible backing which supports a metallic foil pattern. The gauge is attached to the object by a suitable adhesive, such as cyanoacrylate. As the object is deformed, the foil is deformed, causing its electrical resistance to change. This resistance change, usually measured using a Wheatstone bridge, is related to the strain by the quantity known as the gauge factor.
History
Edward E. Simmons and Professor Arthur C. Ruge independently invented the strain gauge.
Simmons was involved in a research project by Dätwyler and Clark at Caltech between 1936 and 1938. They researched the stress-strain behavior of metals under shock loads. Simmons came up with an original way to measure the force introduced into the sample by equipping a dynamometer with fine resistance wires.
Arthur C. Ruge, a professor at MIT, on the other hand, conducted research in seismology. He tried to analyze the behavior of a model water tank installed on a vibration table. He was not able to utilize the standard optical strain measurement methods of his time due to the small scale and low strains in his model. Professor Ruge (and his assistant J. Hanns Meier) had the epiphany of measuring the resistance change caused by strain in metallic wires cemented on the thin walls of the water tank model.
The development of the strain gauge was essentially just a byproduct of other research projects. Edward E. Simmons and Professor Arthur C. Ruge developed a widely used and useful measurement tool due to the lack of an alternative at their times.
Arthur C. Ruge realized the commercial utility of the strain gauge. His employer at MIT waived all claims on the right to the invention, as it did not foresee the economic and large-scale usage potential. This prediction turned out to be false. Strain gauge applications quickly gained traction, as the devices served to indirectly detect any quantity that induces strain. Additionally, they were simple for scientists to install and caused no obstruction or property changes in the observed object that would falsify the measurement results. Probably the last and most important property was the ease of transmission of the electrical output signal.
Physical operation
A strain gauge takes advantage of the physical property of electrical conductance and its dependence on the conductor's geometry. When an electrical conductor is stretched within the limits of its elasticity such that it does not break or permanently deform, it will become narrower and longer, which increases its electrical resistance end-to-end. Conversely, when a conductor is compressed such that it does not buckle, it will broaden and shorten, which decreases its electrical resistance end-to-end. From the measured electrical resistance of the strain gauge, the amount of induced stress may be inferred.
A typical strain gauge arranges a long, thin conductive strip in a zig-zag pattern of parallel lines. This does not increase the sensitivity, since the percentage change in resistance for a given strain for the entire zig-zag is the same as for any single trace. A single linear trace would have to be extremely thin, hence liable to overheating (which would change its resistance and cause it to expand), or would need to be operated at a much lower voltage, making it difficult to measure resistance changes accurately.
Gauge factor
The gauge factor is defined as:
$$GF = \frac{\Delta R / R}{\varepsilon},$$
where
$\Delta R$ is the change in resistance caused by strain,
$R$ is the resistance of the undeformed gauge, and
$\varepsilon$ is strain.
For common metallic foil gauges, the gauge factor is usually a little over 2. For a single active gauge and three dummy resistors of the same resistance about the active gauge in a balanced Wheatstone bridge configuration, the output sensor voltage from the bridge is approximately
$$v_o \approx \frac{GF \cdot \varepsilon}{4}\,V_{EX},$$
where
$V_{EX}$ is the bridge excitation voltage.
Foil gauges typically have active areas of about 2–10 mm2 in size. With careful installation, the correct gauge, and the correct adhesive, strains up to at least 10% can be measured.
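A short sketch of the quarter-bridge relation above (the gauge factor and excitation voltage are illustrative values consistent with the typical figures quoted here, not prescribed ones):

```python
def quarter_bridge_output(strain, gauge_factor=2.0, v_excitation=5.0):
    """Approximate output of a Wheatstone quarter bridge with one active gauge:
    V_out ~ (GF * strain / 4) * V_ex, valid while GF * strain << 1."""
    return gauge_factor * strain * v_excitation / 4.0

# 1000 microstrain on a typical foil gauge (GF ~ 2) with 5 V excitation
strain = 1000e-6
print(f"{quarter_bridge_output(strain) * 1e3:.2f} mV")  # ~2.50 mV
```

The millivolt-scale result matches the typical output levels described in the next section.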
In practice
An excitation voltage is applied to input leads of the gauge network, and a voltage reading is taken from the output leads. Typical input voltages are 5 V or 12 V and typical output readings are in millivolts.
Foil strain gauges are used in many situations. Different applications place different requirements on the gauge. In most cases the orientation of the strain gauge is significant.
Gauges attached to a load cell would normally be expected to remain stable over a period of years, if not decades; while those used to measure response in a dynamic experiment may only need to remain attached to the object for a few days, be energized for less than an hour, and operate for less than a second.
Strain gauges are attached to the substrate with a special glue. The type of glue depends on the required lifetime of the measurement system. For short term measurements (up to some weeks) cyanoacrylate glue is appropriate, for long lasting installation epoxy glue is required. Usually epoxy glue requires high temperature curing (at about 80-100 °C). The preparation of the surface where the strain gauge is to be glued is of the utmost importance. The surface must be smoothed (e.g. with very fine sand paper), deoiled with solvents, the solvent traces must then be removed and the strain gauge must be glued immediately after this to avoid oxidation or pollution of the prepared area. If these steps are not followed the strain gauge binding to the surface may be unreliable and unpredictable measurement errors may be generated.
Strain gauge based technology is used commonly in the manufacture of pressure sensors. The gauges used in pressure sensors themselves are commonly made from silicon, polysilicon, metal film, thick film, and bonded foil.
Variations in temperature
Variations in temperature will cause a multitude of effects. The object will change in size by thermal expansion, which will be detected as a strain by the gauge. Resistance of the gauge will change, and resistance of the connecting wires will change.
Most strain gauges are made from a constantan alloy. Various constantan alloys and Karma alloys have been designed so that the temperature effects on the resistance of the strain gauge itself largely cancel out the resistance change of the gauge due to the thermal expansion of the object under test. Because different materials have different amounts of thermal expansion, self-temperature compensation (STC) requires selecting a particular alloy matched to the material of the object under test.
Strain gauges that are not self-temperature-compensated (such as isoelastic alloy) can be temperature compensated by use of the dummy gauge technique. A dummy gauge (identical to the active strain gauge) is installed on an unstrained sample of the same material as the test specimen. The sample with the dummy gauge is placed in thermal contact with the test specimen, adjacent to the active gauge. The dummy gauge is wired into a Wheatstone bridge on an adjacent arm to the active gauge so that the temperature effects on the active and dummy gauges cancel each other. (Murphy's law was originally coined in response to a set of gauges being incorrectly wired into a Wheatstone bridge.)
Every material reacts when it heats up or when it cools down. This will cause strain gauges to register a deformation in the material which will make it change signal. To prevent this from happening strain gauges are made so they will compensate this change due to temperature. Dependent on the material of the surface where the strain gauge is assembled on, a different expansion can be measured.
Temperature effects on the lead wires can be cancelled by using a "3-wire bridge" or a "4-wire ohm circuit" (also called a "4-wire Kelvin connection").
In any case it is a good engineering practice to keep the Wheatstone bridge voltage drive low enough to avoid the self heating of the strain gauge. The self heating of the strain gauge depends on its mechanical characteristic (large strain gauges are less prone to self heating). Low voltage drive levels of the bridge reduce the sensitivity of the overall system.
Applications
Structural health monitoring
Structural health monitoring (SHM) is used to monitor structures after their completion. To prevent failures, strain gauges are used to detect and locate damages and creep. A specific example is the monitoring of bridge cables increasing safety by detecting possible damages. Also, the bridge's behavior to unusual loads such as special heavy-duty transports can be analyzed.
Biological measurements
Measuring the strain of skin can provide a multitude of biomechanic measurements such as posture, joint rotation, respiration and swelling both in humans and other animals. Resistive foil strain gauges are seldom used for these applications, however, due to their low strain limit. Instead, soft and deformable strain gauges are often attached to a host garment, to make it simple to apply the sensor to the correct part of the body, though sometimes they are attached directly to the skin. Typically in these applications, such soft strain gauges are known as stretch sensors. For medical use, the sensors must be accurate and repeatable which typically requires the use of capacitive stretch sensors.
Predictive maintenance
Many objects and materials in industrial applications have a finite life. To improve their lifetime and cost of ownership, predictive maintenance principles are used. Strain gauges can be used to monitor the strain as an indicator of fatigue in materials to enable software systems to predict when certain components need to be replaced or serviced. Resistive foil gauges can be used to instrument stiff materials like metals, ceramics, composites and similar, whereas highly elastic strain gauges are used to monitor softer materials such as rubber, plastics, textiles and the like.
Aviation
In aviation, strain gauges are the standard approach to measuring the structural load and calculating wing deflection. Strain gauges are fixed in several locations on the aircraft. However, deflection measurement systems have been shown to measure reliable strains remotely. This reduces instrumentation weight on the aircraft and thus is replacing the strain gauge.
Repurposing
There are also applications where measuring strain is not the obvious route to the desired result. For example, strain gauges can be used to detect the presence of an intruder on certain structures, by measuring the slight change in strain of the structure that the intruder causes.
Errors and compensations
Zero Offset - If the impedance of the four gauge arms are not exactly the same after bonding the gauge to the force collector, there will be a zero offset which can be compensated by introducing a parallel resistor to one or more of the gauge arms.
Temperature coefficient of gauge factor (TCGF) is the change of sensitivity of the device to strain with change in temperature. This is generally compensated for by the introduction of a fixed resistance in the input leg, whereby the effective supplied voltage decreases with a temperature increase, compensating for the increase in sensitivity with the temperature increase. This is known as modulus compensation in transducer circuits. As the temperature rises, the load cell element becomes more elastic, and therefore under a constant load it deforms more and produces an increased output, even though the load is unchanged. The key point is that the resistor in the bridge supply must be a temperature-sensitive resistor matched both to the material to which the gauge is bonded and to the gauge element material; its value depends on both of those materials and can be calculated. In simple terms, as the output increases with temperature, the resistor value also increases, thereby reducing the net voltage to the transducer. With the correct resistor value, the output shows no change with temperature.
Zero shift with temperature - If the TCGF of each gauge is not the same, there will be a zero shift with temperature. This is also caused by anomalies in the force collector. This is usually compensated for with one or more resistors strategically placed in the compensation network.
Linearity is an error whereby the sensitivity changes across the pressure range. This is commonly a function of the force collection thickness selection for the intended pressure and the quality of the bonding.
Hysteresis is an error of return to zero after pressure excursion.
Repeatability - This error is sometimes tied-in with hysteresis but is across the pressure range.
Electromagnetic interference (EMI)-induced errors - As the output voltage of strain gauges is in the mV range, even μV if the Wheatstone bridge voltage drive is kept low to avoid self heating of the element, special care must be taken in output signal amplification to avoid amplifying also the superimposed noise. A solution which is frequently adopted is to use "carrier frequency" amplifiers, which convert the voltage variation into a frequency variation (as in voltage-controlled oscillators) and have a narrow bandwidth, thus reducing out of band EMI.
Overloading – If a strain gauge is loaded beyond its design limit (measured in microstrain) its performance degrades and can not be recovered. Normally good engineering practice suggests not to stress strain gauges beyond ±3000 microstrain.
Humidity – If the wires connecting the strain gauge to the signal conditioner are not protected against humidity, such as bare wire, corrosion can occur, leading to parasitic resistance. This can allow currents to flow between the wires and the substrate to which the strain gauge is glued, or between the two wires directly, introducing an error which competes with the current flowing through the strain gauge. For this reason, high-current, low-resistance strain gauges (120 ohm) are less prone to this type of error. To avoid this error it is sufficient to protect the strain gauges wires with insulating enamel (e.g., epoxy or polyurethane type). Strain gauges with unprotected wires may be used only in a dry laboratory environment but not in an industrial one.
In some applications, strain gauges add mass and damping to the vibration profiles of the hardware they are intended to measure. In the turbomachinery industry, one alternative to strain gauge technology in use for the measurement of vibrations on rotating hardware is the non-intrusive stress measurement system, which allows measurement of blade vibrations without any blade- or disc-mounted hardware.
Geometries of strain gauges
The following different kind of strain gauges are available in the market:
Linear strain gauges
Membrane Rosette strain gauges
Double linear strain gauges
Full bridge strain gauges
Shear strain gauges
Half bridge strain gauges
Column strain gauges
45°-Rosette (3 measuring directions)
90°-Rosette (2 measuring directions).
Other types
Strain gauge measurement devices are prone to drift problems, and their manufacture imposes precise requirements on all production steps. There are therefore multiple other ways of measuring strain.
For measurements of small strain, semiconductor strain gauges, so called piezoresistors, are often preferred over foil gauges. A semiconductor gauge usually has a larger gauge factor than a foil gauge. Semiconductor gauges tend to be more expensive, more sensitive to temperature changes, and are more fragile than foil gauges.
Nanoparticle-based strain gauges emerge as a new promising technology. These resistive sensors whose active area is made by an assembly of conductive nanoparticles, such as gold or carbon, combine a high gauge factor, a large deformation range and a small electrical consumption due to their high impedance.
In biological measurements, especially blood flow and tissue swelling, a variant called mercury-in-rubber strain gauge is used. This kind of strain gauge consists of a small amount of liquid mercury enclosed in a small rubber tube, which is applied around e.g., a toe or leg. Swelling of the body part results in stretching of the tube, making it both longer and thinner, which increases electrical resistance.
Fiber optic sensing can be employed to measure strain along an optical fiber. Measurements can be distributed along the fiber, or taken at predetermined points on the fiber. The 2010 America's Cup boats Alinghi 5 and USA-17 both employ embedded sensors of this type.
Other optical measuring techniques can be used to measure strains like electronic speckle pattern interferometry or digital image correlation.
Microscale strain gauges are widely used in microelectromechanical systems (MEMS) to measure strains such as those induced by force, acceleration, pressure or sound. As example, airbags in cars are often triggered with MEMS accelerometers. As alternative to piezo-resistant strain gauges, integrated optical ring resonators may be used to measure strain in microoptoelectromechanical systems (MOEMS).
Capacitive strain gauges use a variable capacitor to indicate the level of mechanical deformation.
Vibrating wire strain gauges are used in geotechnical and civil engineering applications. The gauge consists of a vibrating, tensioned wire. The strain is calculated by measuring the resonant frequency of the wire (an increase in tension increases the resonant frequency).
Quartz crystal strain gauges are also used in geotechnical applications. A pressure sensor, a resonant quartz crystal strain gauge with a bourdon tube force collector is the critical sensor of DART. DART detects tsunami waves from the bottom of the open ocean. It has a pressure resolution of approximately 1mm of water when measuring pressure at a depth of several kilometers.
Multi-axis force sensors could have plenty of advantages over strain gauges regarding their safety, dexterity, and collaborative perspectives. They are based on pre-stress resonant composite plates of which the measurements are performed by piezoelectric transducers. It allows for measuring 3 components of external forces. Moreover, the hardware needed is cheaper than classical strain gauges.
Non-contact strain measurements
Strain can also be measured using digital image correlation (DIC). With this technique one or two cameras are used in conjunction with DIC software to track features on the surface of components to detect small motion. The full strain map of the tested sample can be calculated, providing a display similar to a finite-element analysis. This technique is used in many industries to replace traditional strain gauges or other sensors like extensometers, string pots, LVDTs, and accelerometers. The accuracy of commercially available DIC software typically ranges around 1/100 to 1/30 of a pixel for displacement measurements, which results in strain sensitivity between 20 and 100 μm/m. The DIC technique allows quick, non-contact measurement of shape, displacements and strain, avoiding some issues of traditional contacting methods, especially with impacts, high strain, high-temperature or high-cycle fatigue testing.
Literature
In 1995 Prof. Dr.-Ing. Stefan Keil published the first edition of a detailed book about strain gauges and how to use them, called “Dehnungsmessstreifen”. Although this first edition was published only in German, it became popular outside of Germany because of the wide range of uses of strain gauges in different fields. After more than 20 years, in 2017, he published a second edition that was translated into English, making it available to more of the engineers who use strain gauges. This newest book is titled “Technology and Practical Use of Strain Gages”.
Strain gauge theory (sociology)
The term "strain gauge" can be encountered in sociology. The social strain gauge theory is an approach to understanding accusations of witchcraft and sorcery. South African anthropologist Maxwell Marwick studied these sociological phenomena in Zambia and Malawi in 1965. Accusations of witchcraft reflect strain on relationships and or the whole social structure. The theory says that the sorcery accusations were a pressure valve of society.
See also
Resistance thermometer
Compressometer
References
Sensors
Elasticity (physics) | Strain gauge | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 4,077 | [
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Measuring instruments",
"Sensors",
"Physical properties"
] |
418,403 | https://en.wikipedia.org/wiki/Volumetric%20flow%20rate | In physics and engineering, in particular fluid dynamics, the volumetric flow rate (also known as volume flow rate, or volume velocity) is the volume of fluid which passes per unit time; usually it is represented by the symbol (sometimes ). It contrasts with mass flow rate, which is the other main type of fluid flow rate. In most contexts a mention of rate of fluid flow is likely to refer to the volumetric rate. In hydrometry, the volumetric flow rate is known as discharge.
Volumetric flow rate should not be confused with volumetric flux, as defined by Darcy's law and represented by the symbol , with units of m3/(m2·s), that is, m·s−1. The integration of a flux over an area gives the volumetric flow rate.
The SI unit is cubic metres per second (m3/s). Another unit used is standard cubic centimetres per minute (SCCM). In US customary units and imperial units, volumetric flow rate is often expressed as cubic feet per second (ft3/s) or gallons per minute (either US or imperial definitions). In oceanography, the sverdrup (symbol: Sv, not to be confused with the sievert) is a non-SI metric unit of flow, with 1 Sv equal to 1,000,000 m3/s; it is equivalent to the SI derived unit cubic hectometer per second (symbol: hm3/s or hm3⋅s−1). Named after Harald Sverdrup, it is used almost exclusively in oceanography to measure the volumetric rate of transport of ocean currents.
Fundamental definition
Volumetric flow rate is defined by the limit
$$Q = \dot{V} = \lim_{\Delta t \to 0} \frac{\Delta V}{\Delta t} = \frac{\mathrm{d}V}{\mathrm{d}t},$$
that is, the flow of volume of fluid V through a surface per unit time t.
Since this is only the time derivative of volume, a scalar quantity, the volumetric flow rate is also a scalar quantity. The change in volume is the amount that flows after crossing the boundary for some time duration, not simply the initial amount of volume at the boundary minus the final amount at the boundary, since the change in volume flowing through the area would be zero for steady flow.
IUPAC prefers the notation $q_V$ and $q_m$ for volumetric flow and mass flow respectively, to distinguish them from the notation $Q$ for heat.
Alternative definition
Volumetric flow rate can also be defined by
$$Q = \mathbf{v} \cdot \mathbf{A},$$
where
$\mathbf{v}$ = flow velocity,
$\mathbf{A}$ = cross-sectional vector area/surface.
The above equation is only true for uniform or homogeneous flow velocity and a flat or planar cross section. In general, including spatially variable or non-homogeneous flow velocity and curved surfaces, the equation becomes a surface integral:
$$Q = \iint_A \mathbf{v} \cdot \mathrm{d}\mathbf{A}.$$
This is the definition used in practice. The area required to calculate the volumetric flow rate is real or imaginary, flat or curved, either as a cross-sectional area or a surface. The vector area is a combination of the magnitude of the area through which the volume passes, A, and a unit vector normal to the area, $\hat{\mathbf{n}}$. The relation is $\mathbf{A} = A\,\hat{\mathbf{n}}$.
Derivation
The reason for the dot product is as follows. The only volume flowing through the cross-section is the amount normal to the area, that is, parallel to the unit normal. This amount is
$$Q = v\,A\cos\theta,$$
where θ is the angle between the unit normal $\hat{\mathbf{n}}$ and the velocity vector of the substance elements. The amount passing through the cross-section is reduced by the factor $\cos\theta$. As θ increases less volume passes through. Substance which passes tangential to the area, that is perpendicular to the unit normal, does not pass through the area. This occurs when $\theta = \tfrac{\pi}{2}$ and so this amount of the volumetric flow rate is zero:
$$Q = v\,A\cos\left(\tfrac{\pi}{2}\right) = 0.$$
These results are equivalent to the dot product between velocity and the normal direction to the area.
Relationship with mass flow rate
When the mass flow rate is known, and the density can be assumed constant, this is an easy way to get Q:
$$Q = \frac{\dot{m}}{\rho},$$
where
$\dot{m}$ = mass flow rate (in kg/s),
$\rho$ = density (in kg/m3).
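The definitions above can be tied together in a few lines of code (a minimal sketch with illustrative numbers):

```python
import math

def volumetric_flow(speed, area, angle_rad=0.0):
    """Q = v . A = v * A * cos(theta) for uniform flow through a flat surface,
    where theta is the angle between the velocity and the surface normal."""
    return speed * area * math.cos(angle_rad)

def flow_from_mass_rate(mass_rate, density):
    """Q = mdot / rho, assuming constant density."""
    return mass_rate / density

# Water at 2 m/s through a 0.05 m^2 duct, flow normal to the cross-section
print(volumetric_flow(2.0, 0.05))                              # 0.1 m^3/s
# Tangential flow (theta = 90 deg) carries no volume through the surface
print(abs(volumetric_flow(2.0, 0.05, math.pi / 2)) < 1e-12)    # True
# 100 kg/s of water (rho = 1000 kg/m^3) gives the same 0.1 m^3/s
print(flow_from_mass_rate(100.0, 1000.0))
```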
Related quantities
In internal combustion engines, the time area integral is considered over the range of valve opening. The time lift integral is given by
where is the time per revolution, is the distance from the camshaft centreline to the cam tip, is the radius of the camshaft (that is, is the maximum lift), is the angle where opening begins, and is where the valve closes (seconds, mm, radians). This has to be factored by the width (circumference) of the valve throat. The answer is usually related to the cylinder's swept volume.
Some key examples
In cardiac physiology: the cardiac output
In hydrology: discharge
List of rivers by discharge
List of waterfalls by flow rate
Weir § Flow measurement
In dust collection systems: the air-to-cloth ratio
See also
Bulk velocity
Flow measurement
Flowmeter
Mass flow rate
Orifice plate
Poiseuille's law
Stokes flow
References
Fluid dynamics
Temporal rates
Mechanical quantities | Volumetric flow rate | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 975 | [
"Temporal quantities",
"Mechanical quantities",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Temporal rates",
"Mechanics",
"Piping",
"Fluid dynamics"
] |
419,200 | https://en.wikipedia.org/wiki/Schr%C3%B6dinger%20picture | In physics, the Schrödinger picture or Schrödinger representation is a formulation of quantum mechanics in which the state vectors evolve in time, but the operators (observables and others) are mostly constant with respect to time (an exception is the Hamiltonian which may change if the potential changes). This differs from the Heisenberg picture which keeps the states constant while the observables evolve in time, and from the interaction picture in which both the states and the observables evolve in time. The Schrödinger and Heisenberg pictures are related as active and passive transformations and commutation relations between operators are preserved in the passage between the two pictures.
In the Schrödinger picture, the state of a system evolves with time. The evolution for a closed quantum system is brought about by a unitary operator, the time evolution operator. For time evolution from a state vector $|\psi(0)\rangle$ at time 0 to a state vector $|\psi(t)\rangle$ at time t, the time-evolution operator is commonly written $U(t)$, and one has
$$|\psi(t)\rangle = U(t)\,|\psi(0)\rangle.$$
In the case where the Hamiltonian of the system does not vary with time, the time-evolution operator has the form
$$U(t) = e^{-iHt/\hbar},$$
where the exponent is evaluated via its Taylor series.
The Schrödinger picture is useful when dealing with a time-independent Hamiltonian H; that is, $\partial_t H = 0$.
Background
In elementary quantum mechanics, the state of a quantum-mechanical system is represented by a complex-valued wavefunction $\psi(x, t)$. More abstractly, the state may be represented as a state vector, or ket, $|\psi\rangle$. This ket is an element of a Hilbert space, a vector space containing all possible states of the system. A quantum-mechanical operator is a function which takes a ket $|\psi\rangle$ and returns some other ket $|\psi'\rangle$.
The differences between the Schrödinger and Heisenberg pictures of quantum mechanics revolve around how to deal with systems that evolve in time: the time-dependent nature of the system must be carried by some combination of the state vectors and the operators. For example, a quantum harmonic oscillator may be in a state $|\psi\rangle$ for which the expectation value of the momentum, $\langle\psi|\hat{p}|\psi\rangle$, oscillates sinusoidally in time. One can then ask whether this sinusoidal oscillation should be reflected in the state vector $|\psi\rangle$, the momentum operator $\hat{p}$, or both. All three of these choices are valid; the first gives the Schrödinger picture, the second the Heisenberg picture, and the third the interaction picture.
The time evolution operator
Definition
The time-evolution operator U(t, t0) is defined as the operator which acts on the ket at time t0 to produce the ket at some other time t:
$$|\psi(t)\rangle = U(t, t_0)\,|\psi(t_0)\rangle.$$
For bras,
$$\langle\psi(t)| = \langle\psi(t_0)|\,U^\dagger(t, t_0).$$
Properties
Unitarity
The time evolution operator must be unitary, because the norm of the state ket must not change with time. That is,
$$\langle\psi(t)|\psi(t)\rangle = \langle\psi(t_0)|\,U^\dagger(t, t_0)\,U(t, t_0)\,|\psi(t_0)\rangle = \langle\psi(t_0)|\psi(t_0)\rangle.$$
Therefore,
$$U^\dagger(t, t_0)\,U(t, t_0) = 1.$$
Identity
When t = t0, U is the identity operator, since
$$|\psi(t_0)\rangle = U(t_0, t_0)\,|\psi(t_0)\rangle.$$
Closure
Time evolution from t0 to t may be viewed as a two-step time evolution, first from t0 to an intermediate time t1, and then from t1 to the final time t. Therefore,
$$U(t, t_0) = U(t, t_1)\,U(t_1, t_0).$$
Differential equation for time evolution operator
We drop the t0 index in the time evolution operator with the convention that $t_0 = 0$ and write it as U(t). The Schrödinger equation is
$$i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = H\,|\psi(t)\rangle,$$
where H is the Hamiltonian. Now using the time-evolution operator U to write $|\psi(t)\rangle = U(t)\,|\psi(0)\rangle$,
$$i\hbar\,\frac{\partial}{\partial t}\,U(t)\,|\psi(0)\rangle = H\,U(t)\,|\psi(0)\rangle.$$
Since $|\psi(0)\rangle$ is a constant ket (the state ket at $t = 0$), and since the above equation is true for any constant ket in the Hilbert space, the time evolution operator must obey the equation
$$i\hbar\,\frac{\partial}{\partial t}\,U(t) = H\,U(t).$$
If the Hamiltonian is independent of time, the solution to the above equation is
$$U(t) = e^{-iHt/\hbar}.$$
Since H is an operator, this exponential expression is to be evaluated via its Taylor series:
$$e^{-iHt/\hbar} = 1 - \frac{iHt}{\hbar} - \frac{1}{2}\left(\frac{Ht}{\hbar}\right)^2 + \cdots.$$
Therefore,
$$|\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle.$$
Note that $|\psi(0)\rangle$ is an arbitrary ket. However, if the initial ket is an eigenstate of the Hamiltonian, with eigenvalue E, then
$$|\psi(t)\rangle = e^{-iEt/\hbar}\,|\psi(0)\rangle.$$
The eigenstates of the Hamiltonian are stationary states: they only pick up an overall phase factor as they evolve with time.
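A small numerical sketch of these properties (a hypothetical two-level Hamiltonian with ħ = 1; not from the article) checks unitarity, closure, and the stationary-state phase factor:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]])  # an arbitrary Hermitian Hamiltonian

def U(t):
    """Time-evolution operator U(t) = exp(-i H t / hbar)."""
    return expm(-1j * H * t / hbar)

t = 0.7
Ut = U(t)
# Unitarity: U-dagger U = 1
assert np.allclose(Ut.conj().T @ Ut, np.eye(2))
# Closure: evolution 0 -> t1 -> t equals one evolution 0 -> t
assert np.allclose(U(0.3) @ U(0.4), Ut)

# An eigenstate only picks up the phase exp(-i E t / hbar)
E, V = np.linalg.eigh(H)
psi0 = V[:, 0]
assert np.allclose(Ut @ psi0, np.exp(-1j * E[0] * t / hbar) * psi0)
```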
If the Hamiltonian is dependent on time, but the Hamiltonians at different times commute, then the time evolution operator can be written as
$$U(t) = \exp\left(-\frac{i}{\hbar}\int_0^t H(t')\,\mathrm{d}t'\right).$$
If the Hamiltonian is dependent on time, but the Hamiltonians at different times do not commute, then the time evolution operator can be written as
$$U(t) = \mathrm{T}\exp\left(-\frac{i}{\hbar}\int_0^t H(t')\,\mathrm{d}t'\right),$$
where T is the time-ordering operator; this expansion is sometimes known as the Dyson series, after Freeman Dyson.
The alternative to the Schrödinger picture is to switch to a rotating reference frame, which is itself being rotated by the propagator. Since the undulatory rotation is now being assumed by the reference frame itself, an undisturbed state function appears to be truly static. This is the Heisenberg picture.
Summary comparison of evolution in all pictures
For a time-independent Hamiltonian HS, where H0,S is the free Hamiltonian, the states and observables in the three pictures evolve as follows.
Schrödinger picture: $|\psi_S(t)\rangle = e^{-iH_S t/\hbar}\,|\psi_S(0)\rangle$; the operators $A_S$ are constant (unless explicitly time-dependent).
Heisenberg picture: the state ket is constant; $A_H(t) = e^{iH_S t/\hbar}\,A_S\,e^{-iH_S t/\hbar}$.
Interaction picture: $|\psi_I(t)\rangle = e^{iH_{0,S} t/\hbar}\,|\psi_S(t)\rangle$; $A_I(t) = e^{iH_{0,S} t/\hbar}\,A_S\,e^{-iH_{0,S} t/\hbar}$.
See also
Hamilton–Jacobi equation
Interaction picture
Heisenberg picture
Phase space formulation
POVM
Mathematical formulation of quantum mechanics
Schrödinger functional
Notes
References
Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G. M. Temmer. North Holland, John Wiley & Sons.
Merzbacher E., Quantum Mechanics (3rd ed., John Wiley 1998) p. 430–1
Online copy
R. Shankar (1994); Principles of Quantum Mechanics, Plenum Press, .
J. J. Sakurai (1993); Modern Quantum Mechanics (Revised Edition), .
Foundational quantum physics
Picture | Schrödinger picture | [
"Physics"
] | 1,130 | [
"Foundational quantum physics",
"Quantum mechanics"
] |
419,952 | https://en.wikipedia.org/wiki/Calcium%20metabolism |
Reabsorption
Intestine
Since about 15 mmol of calcium is excreted into the intestine via the bile per day, the total amount of calcium that reaches the duodenum and jejunum each day is about 40 mmol (25 mmol from the diet plus 15 mmol from the bile), of which, on average, 20 mmol is absorbed (back) into the blood. The net result is that about 5 mmol more calcium is absorbed from the gut than is excreted into it via the bile. If there is no active bone building (as in childhood), or increased need for calcium during pregnancy and lactation, the 5 mmol calcium that is absorbed from the gut makes up for urinary losses that are only partially regulated.
Kidneys
The kidneys filter 250 mmol of calcium ions a day in pro-urine (or glomerular filtrate), and resorbs 245 mmol, leading to a net average loss in the urine of about 5 mmol/d. The quantity of calcium ions excreted in the urine per day is partially under the influence of the plasma parathyroid hormone (PTH) level - high levels of PTH decreasing the rate of calcium ion excretion, and low levels increasing it. However, parathyroid hormone has a greater effect on the quantity of phosphate ions (HPO42−) excreted in the urine. Phosphates form insoluble salts in combination with calcium ions. High concentrations of HPO42− in the plasma, therefore, lower the ionized calcium level in the extra-cellular fluids. Thus, the excretion of more phosphate than calcium ions in the urine raises the plasma ionized calcium level, even though the total calcium concentration might be lowered.
The kidney influences the plasma ionized calcium concentration in yet another manner. It processes vitamin D3 into calcitriol, the active form that is most effective in promoting the intestinal absorption of calcium. This conversion of vitamin D3 into calcitriol, is also promoted by high plasma parathyroid hormone levels.
Excretion
Intestine
Most excretion of excess calcium is via the bile and feces, because the plasma calcitriol levels (which ultimately depend on the plasma calcium levels) regulate how much of the biliary calcium is reabsorbed from the intestinal contents.
Kidneys
Urinary excretion of calcium is normally about 5 mmol (200 mg) per day. This is less than the amount excreted via the feces (15 mmol/day).
Regulation
The plasma ionized calcium concentration is regulated within narrow limits (1.3–1.5 mmol/L). This is achieved by both the parafollicular cells of the thyroid gland, and the parathyroid glands constantly sensing (i.e. measuring) the concentration of calcium ions in the blood flowing through them.
High plasma level
When the concentration of calcium rises, the parafollicular cells of the thyroid gland increase their secretion of calcitonin, a polypeptide hormone, into the blood. At the same time, the parathyroid glands reduce the secretion of parathyroid hormone (PTH), also a polypeptide hormone, into the blood. The resulting high levels of calcitonin in the blood stimulate osteoblasts in bone to remove calcium from blood plasma and deposit it as bone.
The reduced levels of PTH inhibit removal of calcium from the skeleton. The low levels of PTH have several other effects: there is increased loss of calcium in the urine, but more importantly, the loss of phosphate ions through urine is inhibited. Phosphate ions will therefore be retained in the plasma where they form insoluble salts with calcium ions, thereby removing them from the ionized calcium pool in the blood. The low levels of PTH also inhibit the formation of calcitriol (not to be confused with calcitonin) from cholecalciferol (vitamin D3) by the kidneys.
The reduction in the blood calcitriol concentration acts (comparatively slowly) on the epithelial cells (enterocytes) of the duodenum, inhibiting their ability to absorb calcium from the intestinal contents. The low calcitriol levels also act on bone causing the osteoclasts to release fewer calcium ions into the blood plasma.
Low plasma level
When the plasma ionized calcium level is low or falls, the opposite happens. Calcitonin secretion is inhibited and PTH secretion is stimulated, resulting in calcium being removed from bone to rapidly correct the plasma calcium level. The high plasma PTH levels inhibit calcium loss via the urine while stimulating the excretion of phosphate ions via that route. They also stimulate the kidneys to manufacture calcitriol (a steroid hormone), which enhances the ability of the cells lining the gut to absorb calcium from the intestinal contents into the blood, by stimulating the production of calbindin in these cells. The PTH-stimulated production of calcitriol also causes calcium to be released from bone into the blood, through the release of RANKL (a cytokine, or local hormone) from the osteoblasts, which increases bone resorptive activity by the osteoclasts. These are, however, relatively slow processes.
Thus fast short term regulation of the plasma ionized calcium level primarily involves rapid movements of calcium into or out of the skeleton. Long term regulation is achieved by regulating the amount of calcium absorbed from the gut or lost via the feces.
Disorders
Hypocalcemia (low blood calcium) and hypercalcemia (high blood calcium) are both serious medical disorders. Osteoporosis, osteomalacia and rickets are bone disorders linked to calcium metabolism disorders and effects of vitamin D. Renal osteodystrophy is a consequence of chronic kidney failure related to the calcium metabolism.
A diet adequately rich in calcium may reduce calcium loss from bone with advancing (post-menopausal) age. A low dietary calcium intake may be a risk factor in the development of osteoporosis in later life; and a diet with sustained adequate amounts of calcium may reduce the risk of osteoporosis.
Research
The role that calcium might have in reducing the rates of colorectal cancer has been the subject of many studies. However, given its modest efficacy, there is no current medical recommendation to use calcium for cancer reduction.
See also
European Calcium Society
Footnotes
References
External links
Calcium at Lab Tests Online
Physiology
Calcium
Human homeostasis
Endocrine system | Calcium metabolism | [
"Biology"
] | 1,373 | [
"Endocrine system",
"Physiology",
"Human homeostasis",
"Organ systems",
"Homeostasis"
] |
11,556,937 | https://en.wikipedia.org/wiki/Bitumen%20of%20Judea | Bitumen of Judea is a sort of natural tar known from ancient times. It is a naturally occurring asphalt used since ancient times as a wood colorant, and in early photography.
Wood coloration usage
Bitumen of Judea may be used as a colorant for wood for an aged, natural and rustic appearance. It is soluble in turpentine and some other terpenes, and can be combined with oils, waxes, varnishes and glazes.
Light-sensitive properties
It is a light-sensitive material in what is accepted to be the first complete photographic process, i.e., one capable of producing durable light-fast results. The technique was developed by French scientist and inventor Nicéphore Niépce in the 1820s. In 1826 or 1827, he applied a thin coating of the tar-like material to a pewter plate and took a picture of parts of the buildings and surrounding countryside of his estate, producing what is usually described as the first photograph. It is considered to be the oldest known surviving photograph made in a camera. The plate was exposed in the camera for at least eight hours.
The bitumen, initially soluble in spirits and oils, was hardened and made insoluble (probably polymerized) in the brightest areas of the image. The unhardened part was then rinsed away with a solvent.
Niépce's primary objective was not a photoengraving or photolithography process, but rather a photo-etching process, since engraving requires the intervention of a physical rather than chemical process and lithography involves a grease and water resistance process. However, Niépce's famous image of Pope Pius VI was produced first by photo-etching and then "improved" by hand engraving. Bitumen, superbly resistant to strong acids, was in fact later widely used as a photoresist in making printing plates for mechanical printing processes. The surface of a zinc or other metal plate was coated, exposed, developed with a solvent that laid bare the unexposed areas, then etched in an acid bath, producing the required surface relief.
References
Photographic processes dating from the 19th century
Asphalt | Bitumen of Judea | [
"Physics",
"Chemistry"
] | 439 | [
"Amorphous solids",
"Asphalt",
"Unsolved problems in physics",
"Chemical mixtures"
] |
11,558,054 | https://en.wikipedia.org/wiki/Microelectrode | A microelectrode is an electrode used in electrophysiology either for recording neural signals or for the electrical stimulation of nervous tissue (they were first developed by Ida Hyde in 1921). Pulled glass pipettes with tip diameters of 0.5 μm or less are usually filled with 3 molars potassium chloride solution as the electrical conductor. When the tip penetrates a cell membrane the lipids in the membrane seal onto the glass, providing an excellent electrical connection between the tip and the interior of the cell, which is apparent because the microelectrode becomes electrically negative compared to the extracellular solution. There are also microelectrodes made with insulated metal wires, made from inert metals with high Young modulus such as tungsten, stainless steel, or platinum-iridium alloy and coated with glass or polymer insulator with exposed conductive tips. These are mostly used for recording from the external side of the cell membrane. More recent advances in lithography have produced silicon-based microelectrodes.
See also
Single-unit recording
Microelectrode array
References
Neurophysiology
Physiology
Electrophysiology
Laboratory techniques | Microelectrode | [
"Chemistry",
"Biology"
] | 238 | [
"nan",
"Physiology"
] |
11,559,418 | https://en.wikipedia.org/wiki/International%20System%20of%20Quantities | The International System of Quantities (ISQ) is a standard system of quantities used in physics and in modern science in general. It includes basic quantities such as length and mass and the relationships between those quantities. This system underlies the International System of Units (SI) but does not itself determine the units of measurement used for the quantities.
The system is formally described in a multi-part ISO standard ISO/IEC 80000 (which also defines many other quantities used in science and technology), first completed in 2009 and subsequently revised and expanded.
Base quantities
The base quantities of a given system of physical quantities are a subset of those quantities where no base quantity can be expressed in terms of the others, but where every quantity in the system can be expressed in terms of the base quantities. Within this constraint, the set of base quantities is chosen by convention. There are seven ISQ base quantities: length, mass, time, electric current, thermodynamic temperature, amount of substance, and luminous intensity. The symbols for them, as for other quantities, are written in italics.
The dimension of a physical quantity does not include magnitude or units. The conventional symbolic representation of the dimension of a base quantity is a single upper-case letter in roman (upright) sans-serif type: L, M, T, I, Θ, N and J, respectively.
Derived quantities
A derived quantity is a quantity in a system of quantities that is defined in terms of only the base quantities of that system. The ISQ defines many derived quantities and corresponding derived units.
Dimensional expression of derived quantities
The conventional symbolic representation of the dimension of a derived quantity is the product of powers of the dimensions of the base quantities according to the definition of the derived quantity. The dimension of a quantity Q is denoted by
$$\dim Q = \mathsf{L}^a\,\mathsf{M}^b\,\mathsf{T}^c\,\mathsf{I}^d\,\mathsf{\Theta}^e\,\mathsf{N}^f\,\mathsf{J}^g,$$
where the dimensional exponents a, b, c, d, e, f, g are positive, negative, or zero. The dimension symbol may be omitted if its exponent is zero. For example, in the ISQ, the quantity dimension of velocity is denoted $\mathsf{L}\mathsf{T}^{-1}$, and that of force is $\mathsf{L}\mathsf{M}\mathsf{T}^{-2}$.
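This dimensional bookkeeping is easy to mechanize. The sketch below (my own illustration, not part of the standard) represents a dimension as a map from base-dimension symbol to exponent, so that multiplying quantities adds exponents:

```python
from collections import Counter

def dim(**exponents):
    """A quantity dimension as a map from base-dimension symbol
    (L, M, T, I, Theta, N, J) to its integer exponent."""
    return Counter({k: v for k, v in exponents.items() if v != 0})

def mul(a, b):
    """Multiplying two quantities adds their dimensional exponents."""
    out = Counter(a)
    out.update(b)
    return Counter({k: v for k, v in out.items() if v != 0})

length, mass, inv_time = dim(L=1), dim(M=1), dim(T=-1)

velocity = mul(length, inv_time)             # dim v = L T^-1
force = mul(mass, mul(velocity, inv_time))   # dim F = L M T^-2
print(dict(velocity))   # {'L': 1, 'T': -1}
print(dict(force))      # {'M': 1, 'L': 1, 'T': -2}

# A ratio of two lengths is a quantity of dimension one: all exponents zero
assert mul(length, dim(L=-1)) == Counter()
```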
Dimensionless quantities
A quantity of dimension one is historically known as a dimensionless quantity (a term that is still commonly used); all its dimensional exponents are zero and its dimension symbol is 1. Such a quantity can be regarded as a derived quantity in the form of the ratio of two quantities of the same dimension. The named dimensionless units "radian" (rad) and "steradian" (sr) are acceptable for distinguishing dimensionless quantities of different kind, respectively plane angle and solid angle.
Logarithmic quantities
Level
The level of a quantity is defined as the logarithm of the ratio of the quantity with a stated reference value of that quantity. Within the ISQ it is differently defined for a root-power quantity (also known by the deprecated term field quantity) and for a power quantity. It is not defined for ratios of quantities of other kinds. Within the ISQ, all levels are treated as derived quantities of dimension 1. Several units for levels are defined by the SI and classified as "non-SI units accepted for use with the SI units".
An example of level is sound pressure level, with the unit of decibel.
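Written out concretely (these are the standard SI relationships; p₀ = 20 μPa is the conventional reference for sound pressure in air), the level of a root-power quantity F, the level of a power quantity P, and sound pressure level L_p are:

```latex
L_F = \ln\frac{F}{F_0}\ \mathrm{Np} = 20\log_{10}\frac{F}{F_0}\ \mathrm{dB},\qquad
L_P = \tfrac{1}{2}\ln\frac{P}{P_0}\ \mathrm{Np} = 10\log_{10}\frac{P}{P_0}\ \mathrm{dB},\qquad
L_p = 20\log_{10}\frac{p}{p_0}\ \mathrm{dB}.
```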
Other logarithmic quantities
Units of logarithmic frequency ratio include the octave, corresponding to a factor of 2 in frequency (precisely), and the decade, corresponding to a factor of 10.
The ISQ recognizes another logarithmic quantity, information entropy, for which the coherent unit is the natural unit of information (symbol nat).
Documentation
The system is formally described in the multi-part standard ISO/IEC 80000, first completed in 2009 and subsequently revised and expanded, which replaced the standards published in 1992, ISO 31 and ISO 1000. Working jointly, ISO and IEC have formalized parts of the ISQ by giving information and definitions concerning quantities, systems of quantities, units, quantity and unit symbols, and coherent unit systems, with particular reference to the ISQ. ISO/IEC 80000 defines physical quantities that are measured with the SI units and also includes many other quantities used in modern science and technology. The name "International System of Quantities" is used by the General Conference on Weights and Measures (CGPM) to describe the system of quantities that underlies the International System of Units.
See also
Dimensional analysis
List of physical quantities
International System of Units
SI base unit
Notes
References
Further reading
B. N. Taylor, Ambler Thompson, International System of Units (SI), National Institute of Standards and Technology, 2008 edition.
Measurement
International standards | International System of Quantities | [
"Physics",
"Mathematics"
] | 892 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"Physical properties"
] |
11,563,117 | https://en.wikipedia.org/wiki/Distinguishing%20attack | In cryptography, a distinguishing attack is any form of cryptanalysis on data encrypted by a cipher that allows an attacker to distinguish the encrypted data from random data. Modern symmetric-key ciphers are specifically designed to be immune to such an attack. In other words, modern encryption schemes are pseudorandom permutations and are designed to have ciphertext indistinguishability. If an algorithm is found that can distinguish the output from random faster than a brute force search, then that is considered a break of the cipher.
A similar concept is the known-key distinguishing attack, whereby an attacker knows the key and can find a structural property in the cipher, where the transformation from plaintext to ciphertext is not random.
Overview
To prove that a cryptographic function is secure, it is often compared to a random oracle. If a function were a random oracle, then an attacker would not be able to predict any of its output. If a function is distinguishable from a random oracle, it has non-random properties: that is, there exists a relation between different outputs, or between input and output, which can be used by an attacker, for example, to find (a part of) the input.
Example
Let T be a sequence of random bits generated by a random oracle, and S be a sequence generated by a pseudo-random bit generator. Two parties use an encryption system to encrypt a message M of length n as the bitwise XOR of M and the next n bits of T or S respectively. The output of the encryption using T is truly random. Now, if the sequence S cannot be distinguished from T, the output of the encryption with S will appear random as well. If the sequence S is distinguishable, then the encryption of M with S may reveal information about M.
Two systems S and T are said to be indistinguishable if there exists no algorithm D, connected to either S or T, able to decide whether it is connected to S or T.
A distinguishing attack is given by such an algorithm D. Broadly, it is an attack in which the attacker is given a black box containing either an instance of the system under attack with an unknown key, or a random object in the domain that the system aims to emulate; if the algorithm is able to tell whether the system or the random object is in the black box, one has an attack. For example, a distinguishing attack on a stream cipher such as RC4 might be one that determines whether a given stream of bytes is random or generated by RC4 with an unknown key.
Examples
A classic example of a distinguishing attack on a popular stream cipher was given by Itsik Mantin and Adi Shamir, who showed that the 2nd output byte of RC4 was heavily biased toward zero. In another example, Souradyuti Paul and Bart Preneel of COSIC have shown that the XOR value of the 1st and 2nd outputs of RC4 is also non-uniform. Significantly, both of the above theoretical biases can be demonstrated through computer simulation.
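Such biases are easy to reproduce empirically. The sketch below (illustrative only; the key length and trial count are arbitrary choices) implements RC4 and estimates the probability that the second output byte is zero, which Mantin and Shamir showed to be about 2/256 rather than the ideal 1/256:

```python
import os

def rc4_second_byte(key: bytes) -> int:
    # Key-scheduling algorithm (KSA).
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): return the 2nd output byte.
    i = j = out = 0
    for _ in range(2):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out = S[(S[i] + S[j]) % 256]
    return out

trials = 50_000
zeros = sum(rc4_second_byte(os.urandom(16)) == 0 for _ in range(trials))
print(f"P[2nd byte = 0] ~ {zeros / trials:.5f}  (ideal: {1/256:.5f}, biased: {2/256:.5f})")
```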
See also
Randomness test
References
External links
Source
Indifferentiability
Cryptographic attacks | Distinguishing attack | [
"Technology"
] | 639 | [
"Cryptographic attacks",
"Computer security exploits"
] |
5,974,662 | https://en.wikipedia.org/wiki/Relational%20quantum%20mechanics | Relational quantum mechanics (RQM) is an interpretation of quantum mechanics which treats the state of a quantum system as being relational, that is, the state is the relation between the observer and the system. This interpretation was first delineated by Carlo Rovelli in a 1994 preprint, and has since been expanded upon by a number of theorists. It is inspired by the key idea behind special relativity, that the details of an observation depend on the reference frame of the observer, and uses some ideas from Wheeler on quantum information.
The physical content of the theory is concerned not with objects themselves, but with the relations between them. As Rovelli puts it:
"Quantum mechanics is a theory about the physical description of physical systems relative to other systems, and this is a complete description of the world".
The essential idea behind RQM is that different observers may give different accurate accounts of the same system. For example, to one observer, a system is in a single, "collapsed" eigenstate. To a second observer, the same system is in a superposition of two or more states and the first observer is in a correlated superposition of two or more states. RQM argues that this is a complete picture of the world because the notion of "state" is always relative to some observer. There is no privileged, "real" account.
The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system.
The terms "observer" and "observed" apply to any arbitrary system, microscopic or macroscopic. The classical limit is a consequence of aggregate systems of very highly correlated subsystems.
A "measurement event" is thus described as an ordinary physical interaction where two systems become correlated to some degree with respect to each other.
Rovelli criticizes describing this as a form of "observer-dependence", which suggests that reality depends upon the presence of a conscious observer; his point is instead that reality is relational, so the state of a system can be described relative to any physical object, not necessarily a human observer.
The proponents of the relational interpretation argue that this approach resolves some of the traditional interpretational difficulties with quantum mechanics. By giving up our preconception of a global privileged state, issues around the measurement problem and local realism are resolved.
In 2020, Carlo Rovelli published an account of the main ideas of the relational interpretation in his popular book Helgoland, which was published in an English translation in 2021 as Helgoland: Making Sense of the Quantum Revolution.
History and development
Relational quantum mechanics arose from a comparison of the quandaries posed by the interpretations of quantum mechanics with those resulting from Lorentz transformations prior to the development of special relativity. Rovelli suggested that just as pre-relativistic interpretations of Lorentz's equations were complicated by incorrectly assuming an observer-independent time exists, a similarly incorrect assumption frustrates attempts to make sense of the quantum formalism. The assumption rejected by relational quantum mechanics is the existence of an observer-independent state of a system.
The idea has been expanded upon by Lee Smolin and Louis Crane, who have both applied the concept to quantum cosmology, and the interpretation has been applied to the EPR paradox, revealing not only a peaceful co-existence between quantum mechanics and special relativity, but a formal indication of a completely local character to reality.
The problem of the observer and the observed
This problem was initially discussed in detail in Everett's thesis, The Theory of the Universal Wavefunction. Consider an observer O, measuring the state of the quantum system S. We assume that O has complete information on the system, and that O can write down the wavefunction describing it. At the same time, there is another observer O′, who is interested in the state of the entire O–S system, and O′ likewise has complete information.
To analyse this system formally, we consider a system S which may take one of two states, which we shall designate |↑⟩ and |↓⟩, ket vectors in the Hilbert space H_S. Now, the observer O wishes to make a measurement on the system. At time t₁, this observer may characterize the system as follows:

|ψ⟩ = α|↑⟩ + β|↓⟩,
where |α|² and |β|² are the probabilities of finding the system in the respective states, and these add up to 1. For our purposes here, we can assume that in a single experiment the outcome is the eigenstate |↑⟩ (but this can be substituted throughout, without loss of generality, by |↓⟩). So, we may represent the sequence of events in this experiment, with observer O doing the observing, as follows:

α|↑⟩ + β|↓⟩ → |↑⟩   (from t₁ to t₂)
This is the description of the measurement event given by observer O. Now, any measurement is also a physical interaction between two or more systems. Accordingly, we can consider the tensor product Hilbert space H_S ⊗ H_O, where H_O is the Hilbert space inhabited by state vectors describing O. If the initial state of O is |init⟩, some degrees of freedom in O become correlated with the state of S after the measurement, and this correlation can take one of two values: |O↑⟩ or |O↓⟩, where the direction of the arrows in the subscripts corresponds to the outcome of the measurement that O has made on S. If we now consider the description of the measurement event by the other observer, O′, who describes the combined S + O system but does not interact with it, the following gives the description of the measurement event according to O′, from the linearity inherent in the quantum formalism:

(α|↑⟩ + β|↓⟩) ⊗ |init⟩ → α|↑⟩ ⊗ |O↑⟩ + β|↓⟩ ⊗ |O↓⟩   (from t₁ to t₂)
Thus, on the assumption (see hypothesis 2 below) that quantum mechanics is complete, the two observers O and O′ give different but equally correct accounts of the events from t₁ to t₂.
Note that the above scenario is directly linked to the Wigner's friend thought experiment, which serves as a prime example for understanding different interpretations of quantum theory.
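As an aside for readers who want to see the linear algebra, here is a small illustrative sketch (assumed amplitudes, plain NumPy; it is not part of Rovelli's papers) contrasting O's collapsed description with O′'s unitary, entangled description, and checking that both assign the same statistics to S alone:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)     # |up> for S; also reused as pointer |O_up>
down = np.array([0, 1], dtype=complex)   # |down> for S; also reused as pointer |O_down>

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)  # assumed amplitudes, |alpha|^2 + |beta|^2 = 1
joint_t1 = np.kron(alpha * up + beta * down, up)  # t1: S in superposition, pointer ready

# Description by O after obtaining "up": a collapsed eigenstate of S.
state_per_O = up

# Description by O-prime: purely linear (unitary) evolution of the joint S+O system,
# an entangled superposition rather than a collapse.
state_per_Oprime = alpha * np.kron(up, up) + beta * np.kron(down, down)

# Tracing out O from O-prime's description recovers the same statistics for S alone.
rho = np.outer(state_per_Oprime, state_per_Oprime.conj()).reshape(2, 2, 2, 2)
rho_S = np.einsum("ikjk->ij", rho)       # partial trace over O's factor
print("P(up) per O-prime:", rho_S[0, 0].real, "  |alpha|^2 =", abs(alpha) ** 2)
```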
Central principles
Observer-dependence of state
According to O, at t₂, the system S is in a determinate state, namely spin up. And, if quantum mechanics is complete, then so is this description. But, for O′, S is not uniquely determinate, but is rather entangled with the state of O; note that O′'s description of the situation at t₂ is not factorisable no matter what basis is chosen. But, if quantum mechanics is complete, then the description that O′ gives is also complete.
Thus the standard mathematical formulation of quantum mechanics allows different observers to give different accounts of the same sequence of events. There are many ways to overcome this perceived difficulty. It could be described as an epistemic limitation: observers with a full knowledge of the system, we might say, could give a complete and equivalent description of the state of affairs, but obtaining this knowledge is impossible in practice. But whom? What makes O's description better than that of O′, or vice versa? Alternatively, we could claim that quantum mechanics is not a complete theory, and that by adding more structure we could arrive at a universal description (the troubled hidden variables approach). Yet another option is to give a preferred status to a particular observer or type of observer, and assign the epithet of correctness to their description alone. This has the disadvantage of being ad hoc, since there are no clearly defined or physically intuitive criteria by which this super-observer ("who can observe all possible sets of observations by all observers over the entire universe") ought to be chosen.
RQM, however, takes the point illustrated by this problem at face value. Instead of trying to modify quantum mechanics to make it fit with prior assumptions that we might have about the world, Rovelli says that we should modify our view of the world to conform to what amounts to our best physical theory of motion. Just as forsaking the notion of absolute simultaneity helped clear up the problems associated with the interpretation of the Lorentz transformations, so many of the conundrums associated with quantum mechanics dissolve, provided that the state of a system is assumed to be observer-dependent like simultaneity in Special Relativity. This insight follows logically from the two main hypotheses which inform this interpretation:
Hypothesis 1: the equivalence of systems. There is no a priori distinction that should be drawn between quantum and macroscopic systems. All systems are, fundamentally, quantum systems.
Hypothesis 2: the completeness of quantum mechanics. There are no hidden variables or other factors which may be appropriately added to quantum mechanics, in light of current experimental evidence.
Thus, if a state is to be observer-dependent, then a description of a system would follow the form "system S is in state x with reference to observer O" or similar constructions, much like in relativity theory. In RQM it is meaningless to refer to the absolute, observer-independent state of any system.
Information and correlation
It is generally well established that any quantum mechanical measurement can be reduced to a set of yes–no questions or bits that are either 1 or 0. RQM makes use of this fact to formulate the state of a quantum system (relative to a given observer!) in terms of the physical notion of information developed by Claude Shannon. Any yes/no question can be described as a single bit of information. This should not be confused with the idea of a qubit from quantum information theory, because a qubit can be in a superposition of values, whilst the "questions" of RQM are ordinary binary variables.
Any quantum measurement is fundamentally a physical interaction between the system being measured and some form of measuring apparatus. By extension, any physical interaction may be seen to be a form of quantum measurement, as all systems are seen as quantum systems in RQM. A physical interaction is seen, by other observers unaware of the result, as establishing a correlation between the system and the observer, and this correlation is what is described and predicted by the quantum formalism.
But, Rovelli points out, this form of correlation is precisely the same as the definition of information in Shannon's theory. Specifically, an observer O observing a system S will, after measurement, have some degrees of freedom correlated with those of S, as described by another observer unaware of the result. The amount of this correlation is given by log₂ k bits, where k is the number of possible values which this correlation may take: the number of "options" there are, as described by the other observer.
Note that if the other observer is aware of the measurement result, there is only one possible value for the correlation, so they will not regard the (first observer's) measurement as producing any information, as expected.
All systems are quantum systems
All physical interactions are, at bottom, quantum interactions, and must ultimately be governed by the same rules. Thus, an interaction between two particles does not, in RQM, differ fundamentally from an interaction between a particle and some "apparatus". There is no true wave collapse, in the sense in which it occurs in some interpretations.
Because "state" is expressed in RQM as the correlation between two systems, there can be no meaning to "self-measurement". If observer measures system , 's "state" is represented as a correlation between and . itself cannot say anything with respect to its own "state", because its own "state" is defined only relative to another observer, . If the compound system does not interact with any other systems, then it will possess a clearly defined state relative to . However, because 's measurement of breaks its unitary evolution with respect to , will not be able to give a full description of the system (since it can only speak of the correlation between and itself, not its own behaviour). A complete description of the system can only be given by a further, external observer, and so forth.
Taking the model system discussed above, if O′ has full information on the S + O system, it will know the Hamiltonians of both S and O, including the interaction Hamiltonian. Thus, the system will evolve entirely unitarily (without any form of collapse) relative to O′, if O measures S. The only reason that O will perceive a "collapse" is because O has incomplete information on the system (specifically, O does not know its own Hamiltonian, and the interaction Hamiltonian for the measurement).
Consequences and implications
Coherence
In our system above, O′ may be interested in ascertaining whether or not the state of O accurately reflects the state of S. We can draw up for O′ an operator M, which is specified as:

M = (|↑⟩⟨↑| ⊗ |O↑⟩⟨O↑|) + (|↓⟩⟨↓| ⊗ |O↓⟩⟨O↓|),
with an eigenvalue of 1 meaning that O indeed accurately reflects the state of S. So there is a 0 probability of O reflecting the state of S as being |↑⟩ if it is in fact |↓⟩, and so forth. The implication of this is that at time t₂, O′ can predict with certainty that the S + O system is in some eigenstate of M, but cannot say which eigenstate it is in, unless O′ itself interacts with the system.
An apparent paradox arises when one considers the comparison, between two observers, of the specific outcome of a measurement. In the problem of the observer observed section above, let us imagine that the two experimenters want to compare results. It is obvious that if the observer O′ has the full Hamiltonians of both S and O, he will be able to say with certainty that at time t₂, O has a determinate result for S's spin, but he will not be able to say what O's result is without interaction, and hence without breaking the unitary evolution of the compound system (because he doesn't know his own Hamiltonian). The distinction between knowing "that" and knowing "what" is a common one in everyday life: everyone knows that the weather tomorrow will be like something; no-one knows exactly what it will be like.
But, let us imagine that O′ measures the spin of S, and finds it to have spin down (and note that nothing in the analysis above precludes this from happening). What happens if he talks to O, and they compare the results of their experiments? O, it will be remembered, measured spin up on the particle. This would appear to be paradoxical: the two observers, surely, will realise that they have disparate results.
However, this apparent paradox only arises as a result of the question being framed incorrectly: as long as we presuppose an "absolute" or "true" state of the world, this would, indeed, present an insurmountable obstacle for the relational interpretation. However, in a fully relational context, there is no way in which the problem can even be coherently expressed. The consistency inherent in the quantum formalism, exemplified by the "M-operator" defined above, guarantees that there will be no contradictions between records. The interaction between O′ and whatever he chooses to measure, be it the compound S + O system or O and S individually, will be a physical interaction, a quantum interaction, and so a complete description of it can only be given by a further observer O′′, who will have a similar "M-operator" guaranteeing coherency, and so on out. In other words, a situation such as that described above cannot violate any physical observation, as long as the physical content of quantum mechanics is taken to refer only to relations.
Relational networks
An interesting implication of RQM arises when we consider that interactions between material systems can only occur within the constraints prescribed by Special Relativity, namely within the intersections of the light cones of the systems: when they are spatiotemporally contiguous, in other words. Relativity tells us that objects have location only relative to other objects. By extension, a network of relations could be built up based on the properties of a set of systems, which determines which systems have properties relative to which others, and when (since properties are no longer well defined relative to a specific observer after unitary evolution breaks down for that observer). On the assumption that all interactions are local (which is backed up by the analysis of the EPR paradox presented below), one could say that the ideas of "state" and spatiotemporal contiguity are two sides of the same coin: spacetime location determines the possibility of interaction, but interactions determine spatiotemporal structure. The full extent of this relationship, however, has not yet fully been explored.
RQM and quantum cosmology
The universe is the sum total of everything in existence with any possibility of direct or indirect interaction with a local observer. A (physical) observer outside of the universe would require a physical breaking of gauge invariance, and a concomitant alteration in the mathematical structure of the gauge-invariance theory.
Similarly, RQM conceptually forbids the possibility of an external observer. Since the assignment of a quantum state requires at least two "objects" (system and observer), which must both be physical systems, there is no meaning in speaking of the "state" of the entire universe. This is because this state would have to be ascribed to a correlation between the universe and some other physical observer, but this observer in turn would have to form part of the universe. As was discussed above, it is not possible for an object to contain a complete specification of itself. Following the idea of relational networks above, an RQM-oriented cosmology would have to account for the universe as a set of partial systems providing descriptions of one another. Such a construction was developed in particular by Francesca Vidotto.
Relationship with other interpretations
The only group of interpretations of quantum mechanics with which RQM is almost completely incompatible is that of hidden variables theories. RQM shares some deep similarities with other views, but differs from them all to the extent to which the other interpretations do not accord with the "relational world" put forward by RQM.
Copenhagen interpretation
RQM is, in essence, quite similar to the Copenhagen interpretation, but with an important difference. In the Copenhagen interpretation, the macroscopic world is assumed to be intrinsically classical in nature, and wave function collapse occurs when a quantum system interacts with macroscopic apparatus. In RQM, any interaction, be it micro or macroscopic, causes the linearity of Schrödinger evolution to break down. RQM could recover a Copenhagen-like view of the world by assigning a privileged status (not dissimilar to a preferred frame in relativity) to the classical world. However, by doing this one would lose sight of the key features that RQM brings to our view of the quantum world.
Hidden-variables theories
Bohm's interpretation of QM does not sit well with RQM. One of the explicit hypotheses in the construction of RQM is that quantum mechanics is a complete theory, that is, it provides a full account of the world. Moreover, the Bohmian view seems to imply an underlying, "absolute" set of states of all systems, which is also ruled out as a consequence of RQM.
We find a similar incompatibility between RQM and suggestions such as that of Penrose, which postulate that some process (in Penrose's case, gravitational effects) violate the linear evolution of the Schrödinger equation for the system.
Relative-state formulation
The many-worlds family of interpretations (MWI) shares an important feature with RQM, that is, the relational nature of all value assignments (that is, properties). Everett, however, maintains that the universal wavefunction gives a complete description of the entire universe, while Rovelli argues that this is problematic, both because this description is not tied to a specific observer (and hence is "meaningless" in RQM), and because RQM maintains that there is no single, absolute description of the universe as a whole, but rather a net of interrelated partial descriptions.
Consistent histories approach
In the consistent histories approach to QM, instead of assigning probabilities to single values for a given system, the emphasis is given to sequences of values, in such a way as to exclude (as physically impossible) all value assignments which result in inconsistent probabilities being attributed to observed states of the system. This is done by means of ascribing values to "frameworks", and all values are hence framework-dependent.
RQM accords perfectly well with this view. However, the consistent histories approach does not give a full description of the physical meaning of framework-dependent values (that is, it does not account for how there can be "facts" if the value of any property depends on the framework chosen). By incorporating the relational view into this approach, the problem is solved: RQM provides the means by which the observer-independent, framework-dependent probabilities of various histories are reconciled with observer-dependent descriptions of the world.
EPR and quantum non-locality
RQM provides an unusual solution to the EPR paradox. Indeed, it manages to dissolve the problem altogether, inasmuch as there is no superluminal transportation of information involved in a Bell test experiment: the principle of locality is preserved inviolate for all observers.
The problem
In the EPR thought experiment, a radioactive source produces two electrons in a singlet state, meaning that the sum of the spin on the two electrons is zero. These electrons are fired off at time t₁ towards two spacelike separated observers, Alice and Bob, who can perform spin measurements, which they do at time t₂. The fact that the two electrons are a singlet means that if Alice measures z-spin up on her electron, Bob will measure z-spin down on his, and vice versa: the correlation is perfect. If Alice measures z-axis spin and Bob measures the orthogonal y-axis spin, however, the correlation will be zero. Intermediate angles give intermediate correlations in a way that, on careful analysis, proves inconsistent with the idea that each particle has a definite, independent probability of producing the observed measurements (the correlations violate Bell's inequality).
This subtle dependence of one measurement on the other holds even when measurements are made simultaneously and a great distance apart, which gives the appearance of a superluminal communication taking place between the two electrons. Put simply, how can Bob's electron "know" what Alice measured on hers, so that it can adjust its own behavior accordingly?
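To make the violated bound concrete, the sketch below (illustrative only; the measurement angles are the standard CHSH choices, not values from the article) computes singlet correlations E(a, b) = −cos(a − b) and the CHSH combination, for which quantum mechanics gives 2√2 while any local hidden-variable account is bounded by 2:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z
# Singlet state (|ud> - |du>)/sqrt(2) in the basis {uu, ud, du, dd}.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    # Spin observable along an axis at angle theta in the x-z plane.
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(theta_a, theta_b):
    # Correlation <A x B> in the singlet state; analytically -cos(theta_a - theta_b).
    return (singlet.conj() @ np.kron(spin(theta_a), spin(theta_b)) @ singlet).real

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"E(a,b) = {E(a, b):+.4f}, CHSH |S| = {abs(S):.4f}")  # E = -0.7071, |S| = 2.8284
```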
Relational solution
In RQM, an interaction between a system and an observer is necessary for the system to have clearly defined properties relative to that observer. Since the two measurement events take place at spacelike separation, they do not lie in the intersection of Alice's and Bob's light cones. Indeed, there is no observer who can instantaneously measure both electrons' spin.
The key to the RQM analysis is to remember that the results obtained on each "wing" of the experiment only become determinate for a given observer once that observer has interacted with the other observer involved. As far as Alice is concerned, the specific results obtained on Bob's wing of the experiment are indeterminate for her, although she will know that Bob has a definite result. In order to find out what result Bob has, she has to interact with him at some time in their future light cones, through ordinary classical information channels.
The question then becomes one of whether the expected correlations in results will appear: will the two particles behave in accordance with the laws of quantum mechanics? Let us denote by the idea that the observer (Alice) measures the state of the system (Alice's particle).
So, at time t₂, Alice knows the value of the spin of her particle, relative to herself. But, since the particles are in a singlet state, she knows that this spin must be opposite to that of Bob's particle,
and so if she measures her particle's spin to be ↑, she can predict that Bob's particle will have spin ↓. All this follows from standard quantum mechanics, and there is no "spooky action at a distance" yet. From the "coherence-operator" discussed above, Alice also knows that if at a later time t₃ she measures Bob's particle and then measures Bob (that is, asks him what result he got), or vice versa, the results will be consistent:
Finally, if a third observer (Charles, say) comes along and measures Alice, Bob, and their respective particles, he will find that everyone still agrees, because his own "coherence-operator" demands that
and
while knowledge that the particles were in a singlet state tells him that
Thus the relational interpretation, by shedding the notion of an "absolute state" of the system, allows for an analysis of the EPR paradox which neither violates traditional locality constraints, nor implies superluminal information transfer, since we can assume that all observers are moving at comfortable sub-light velocities. And, most importantly, the results of every observer are in full accordance with those expected by conventional quantum mechanics.
Whether or not this account of locality is successful has been a matter of debate.
Derivation
A promising feature of this interpretation is that RQM offers the possibility of being derived from a small number of axioms, or postulates based on experimental observations. Rovelli's derivation of RQM uses three fundamental postulates. However, it has been suggested that it may be possible to reformulate the third postulate into a weaker statement, or possibly even do away with it altogether. The derivation of RQM parallels, to a large extent, quantum logic. The first two postulates are motivated entirely by experimental results, while the third postulate, although it accords perfectly with what we have discovered experimentally, is introduced as a means of recovering the full Hilbert space formalism of quantum mechanics from the other two postulates. The two empirical postulates are:
Postulate 1: there is a maximum amount of relevant information that may be obtained from a quantum system.
Postulate 2: it is always possible to obtain new information from a system.
We let denote the set of all possible questions that may be "asked" of a quantum system, which we shall denote by , . We may experimentally find certain relations between these questions: , corresponding to {intersection, orthogonal sum, orthogonal complement, inclusion, and orthogonality} respectively, where .
Structure
From the first postulate, it follows that we may choose a subset of N mutually independent questions, where N is the number of bits contained in the maximum amount of information. We call such a question a complete question. The value of a complete question can be expressed as an N-tuple sequence of binary valued numerals, which has 2^N possible permutations of "0" and "1" values. There will also be more than one possible complete question. If we further assume that the relations are defined for all questions, then the set of questions is an orthomodular lattice, while all the possible unions of sets of complete questions form a Boolean algebra with the complete questions as atoms.
The second postulate governs the event of further questions being asked by an observer O of a system S, when O already has a full complement of information on the system (an answer to a complete question). We denote by p the probability that a "yes" answer to a question Q will follow the complete question Qc. If Q is independent of Qc, then p = 1/2; if the answer is fully determined by Qc, then p = 0 or 1. There is also a range of intermediate possibilities, and this case is examined below.
If the question that O wants to ask the system is another complete question, Qb, the probability p_ij of a "yes" answer (with i labelling the answer to the first complete question and j the answer to the second) has certain constraints upon it:
1. 0 ≤ p_ij ≤ 1,
2. Σ_j p_ij = 1,
3. Σ_i p_ij = 1.
The three constraints above are inspired by the most basic properties of probabilities, and are satisfied if

p_ij = |U_ij|²,

where U_ij is a unitary matrix.
Postulate 3 If and are two complete questions, then the unitary matrix associated with their probability described above satisfies the equality , for all and .
This third postulate implies that if we set a complete question as a basis vector in a complex Hilbert space, we may then represent any other question as a linear combination:
And the conventional probability rule of quantum mechanics states that if two sets of basis vectors are in the relation above, then the probability is
Dynamics
The Heisenberg picture of time evolution accords most easily with RQM. Questions may be labelled by a time parameter t, and are regarded as distinct if they are specified by the same operator but are performed at different times. Because time evolution is a symmetry in the theory (it forms a necessary part of the full formal derivation of the theory from the postulates), the set of all possible questions at time t is isomorphic to the set of all possible questions at time t′. It follows, by standard arguments in quantum logic, from the derivation above that the orthomodular lattice has the structure of the set of linear subspaces of a Hilbert space, with the relations between the questions corresponding to the relations between linear subspaces.
It follows that there must be a unitary transformation U(t) that satisfies:

Q(t) = U(t)† Q(0) U(t)

and

U(t) = exp(−iHt),

where H is the Hamiltonian, a self-adjoint operator on the Hilbert space, and the unitary matrices U(t) are an abelian group.
Problems and discussion
The question is whether RQM denies any objective reality, or, stated otherwise, whether there is only a subjectively knowable reality. Rovelli limits the scope of this claim by stating that RQM relates to the variables of a physical system and not to constant, intrinsic properties, such as the mass and charge of an electron. Indeed, mechanics in general only predicts the behavior of a physical system under various conditions. In classical mechanics this behavior is mathematically represented in a phase space with certain degrees of freedom; in quantum mechanics this is a state space, mathematically represented as a multidimensional complex Hilbert space, in which the dimensions correspond to the above variables.
Dorato, however, argues that all intrinsic properties of a physical system, including mass and charge, are only knowable in a subjective interaction between the observer and the physical system. The unspoken thought behind this is that intrinsic properties are essentially quantum mechanical properties as well.
See also
Coherence (physics)
Measurement in quantum mechanics
Measurement problem
Philosophy of information
Philosophy of physics
Quantum decoherence
Quantum entanglement
Quantum information
Quantum Zeno effect
Schrödinger's cat
Notes
References
Bitbol, M.: "An analysis of the Einstein–Podolsky–Rosen correlations in terms of events"; Physics Letters 96A, 1983: 66–70.
Crane, L.: "Clock and Category: Is Quantum Gravity Algebraic?"; Journal of Mathematical Physics 36; 1993: 6180–6193; .
Everett, H.: "The Theory of the Universal Wavefunction"; Princeton University Doctoral Dissertation; in DeWitt, B.S. & Graham, R.N. (eds.): "The Many-Worlds Interpretation of Quantum Mechanics"; Princeton University Press; 1973.
Finkelstein, D.R.: "Quantum Relativity: A Synthesis of the Ideas of Einstein and Heisenberg"; Springer-Verlag; 1996.
Floridi, L.: "Informational Realism"; Computers and Philosophy 2003 - Selected Papers from the Computer and Philosophy conference (CAP 2003), Conferences in Research and Practice in Information Technology, '37', 2004, edited by J. Weckert. and Y. Al-Saggaf, ACS, pp. 7–12. .
Laudisa, F.: "The EPR Argument in a Relational Interpretation of Quantum Mechanics"; Foundations of Physics Letters, 14 (2); 2001: pp. 119–132; .
Laudisa, F. & Rovelli, C.: "Relational Quantum Mechanics"; The Stanford Encyclopedia of Philosophy (Fall 2005 Edition), Edward N. Zalta (ed.); online article.
Pienaar, J.: "Comment on 'The Notion of Locality in Relational Quantum Mechanics'"; Foundations of Physics 49 2019; 1404–1414; .
Rovelli, C.: Helgoland; Adelphi; 2020; English translation 2021 Helgoland: Making Sense of the Quantum Revolution.
Rovelli, C. & Smerlak, M.: "Relational EPR"; Preprint: .
Rovelli, C.: "Relational Quantum Mechanics"; International Journal of Theoretical Physics 35; 1996: 1637–1678; .
Smolin, L.: "The Bekenstein Bound, Topological Quantum Field Theory and Pluralistic Quantum Field Theory"; Preprint: .
Wheeler, J. A.: "Information, physics, quantum: The search for links"; in Zurek, W., ed.: "Complexity, Entropy and the Physics of Information"; pp. 3–28; Addison-Wesley; 1990.
External links
Relational Quantum Mechanics, The Stanford Encyclopedia of Philosophy (revised edition, 2019)
Interpretations of quantum mechanics
Quantum measurement | Relational quantum mechanics | [
"Physics"
] | 6,580 | [
"Interpretations of quantum mechanics",
"Quantum measurement",
"Quantum mechanics"
] |
10,121,788 | https://en.wikipedia.org/wiki/Fuel%20fraction | In aerospace engineering, an aircraft's fuel fraction, fuel weight fraction, or a spacecraft's propellant fraction, is the weight of the fuel or propellant divided by the gross take-off weight of the craft (including propellant):

ζ = (weight of fuel) / (gross take-off weight)
The fractional result of this mathematical division is often expressed as a percent. For aircraft with external drop tanks, the term internal fuel fraction is used to exclude the weight of external tanks and fuel.
Fuel fraction is a key parameter in determining an aircraft's range, the distance it can fly without refueling.
Breguet’s aircraft range equation describes the relationship of range with airspeed, lift-to-drag ratio, specific fuel consumption, and the part of the total fuel fraction available for cruise, also known as the cruise fuel fraction, or cruise fuel weight fraction.
In this context, the Breguet range is proportional to ln(1 / (1 − ζ)), where ζ is the cruise fuel fraction.
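As an illustration (all numbers below are assumed, airliner-like values, not data from the article), the Breguet equation turns a cruise fuel fraction into a range estimate:

```python
# Illustrative sketch with assumed round numbers:
# jet-aircraft Breguet range, R = (V / c) * (L/D) * ln(1 / (1 - zeta)).
import math

def breguet_range_km(v, lift_drag, tsfc, zeta):
    """v: cruise speed [m/s]; tsfc: thrust-specific fuel consumption on a
    weight basis [1/s]; zeta: cruise fuel fraction."""
    return (v / tsfc) * lift_drag * math.log(1.0 / (1.0 - zeta)) / 1000.0

# Assumed values: 240 m/s cruise, L/D = 17, TSFC ~ 0.57 lb/(lbf h) = 1.6e-4 1/s.
for zeta in (0.10, 0.30, 0.45):
    print(f"fuel fraction {zeta:.2f}: ~{breguet_range_km(240, 17, 1.6e-4, zeta):,.0f} km")
```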
Fighter aircraft
At today's state of the art for jet fighter aircraft, fuel fractions of 29 percent and below typically yield subcruisers; 33 percent provides a quasi-supercruiser; and 35 percent and above are needed for useful supercruising missions. The U.S. F-22 Raptor's fuel fraction is 29 percent and the Eurofighter's is 31 percent, both similar to those of the subcruising F-4 Phantom II, F-15 Eagle and the Russian Mikoyan MiG-29 "Fulcrum". The Russian supersonic interceptor, the Mikoyan MiG-31 "Foxhound", has a fuel fraction of over 45 percent. The Panavia Tornado had a relatively low internal fuel fraction of 26 percent, and frequently carried drop tanks.
Civilian aircraft
Airliners have a fuel fraction of less than half their takeoff weight, ranging between 26% for medium-haul aircraft and 45% for long-haul aircraft.
General aviation
The Rutan Voyager took off on its 1986 around-the-world flight at 72 percent, the highest figure ever at the time. Steve Fossett's Virgin Atlantic GlobalFlyer could attain a fuel fraction of nearly 83 percent, meaning that it carried more than five times its empty weight in fuel.
See also
Mass ratio
References
Aerospace engineering | Fuel fraction | [
"Engineering"
] | 443 | [
"Aerospace engineering"
] |
10,122,069 | https://en.wikipedia.org/wiki/Trivial%20measure | In mathematics, specifically in measure theory, the trivial measure on any measurable space (X, Σ) is the measure μ which assigns zero measure to every measurable set: μ(A) = 0 for all A in Σ.
Properties of the trivial measure
Let μ denote the trivial measure on some measurable space (X, Σ).
A measure ν is the trivial measure μ if and only if ν(X) = 0 (see the sketch after this list).
μ is an invariant measure (and hence a quasi-invariant measure) for any measurable function f : X → X.
Suppose that X is a topological space and that Σ is the Borel σ-algebra on X.
μ trivially satisfies the condition to be a regular measure.
μ is never a strictly positive measure, regardless of (X, Σ), since every measurable set has zero measure.
Since μ(X) = 0, μ is always a finite measure, and hence a locally finite measure.
If X is a Hausdorff topological space with its Borel σ-algebra, then μ trivially satisfies the condition to be a tight measure. Hence, μ is also a Radon measure. In fact, it is the vertex of the pointed cone of all non-negative Radon measures on X.
If X is an infinite-dimensional Banach space with its Borel σ-algebra, then μ is the only measure on (X, Σ) that is locally finite and invariant under all translations of X. See the article There is no infinite-dimensional Lebesgue measure.
If X is n-dimensional Euclidean space Rn with its usual σ-algebra and n-dimensional Lebesgue measure λn, μ is a singular measure with respect to λn: simply decompose Rn as A = Rn \ {0} and B = {0} and observe that μ(A) = λn(B) = 0.
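A minimal justification of the characterization stated above, using only nonnegativity and monotonicity of measures:

```latex
\nu(X) = 0 \;\implies\; 0 \le \nu(A) \le \nu(X) = 0 \quad \text{for all } A \in \Sigma,
\qquad \text{hence } \nu(A) = 0 = \mu(A).
```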
References
Measures (measure theory) | Trivial measure | [
"Physics",
"Mathematics"
] | 394 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
10,124,187 | https://en.wikipedia.org/wiki/Isocitric%20acid | Isocitric acid is a structural isomer of citric acid. As structural isomers, the two share similar physical and chemical properties, which makes them difficult to separate. Salts and esters of isocitric acid are known as isocitrates. The isocitrate anion is a substrate of the citric acid cycle. Isocitrate is formed from citrate with the help of the enzyme aconitase, and is acted upon by isocitrate dehydrogenase.
Isocitric acid is commonly used as a marker to detect the authenticity and quality of fruit products, most often citrus juices. In authentic orange juice, for example, the ratio of citric acid to D-isocitric acid is usually less than 130. A ratio higher than this may be indicative of fruit juice adulteration.
Because it is available only in limited amounts, isocitric acid has largely been used as a biochemical reagent. However, isocitric acid has been shown to have pharmaceutical and therapeutic effects: it has been shown to effectively treat iron-deficiency anemia, and it could potentially be used to treat Parkinson's disease. The yeast Yarrowia lipolytica can be used to produce isocitric acid, and this route is inexpensive compared to other methods. Furthermore, other methods produce an unequal ratio of citric acid to isocitric acid, mostly yielding citric acid, whereas Yarrowia lipolytica gives a better yield, producing citric and isocitric acid in roughly equal amounts.
See also
Citric acid
Tartaric acid
Malic acid
References
Alpha hydroxy acids
Tricarboxylic acids
Citric acid cycle compounds
Aldols | Isocitric acid | [
"Chemistry"
] | 363 | [
"Citric acid cycle compounds",
"Biomolecules"
] |
10,124,190 | https://en.wikipedia.org/wiki/Oxalosuccinic%20acid | Oxalosuccinic acid is a substrate of the citric acid cycle. It is acted upon by isocitrate dehydrogenase. Salts and esters of oxalosuccinic acid are known as oxalosuccinates.
Oxalosuccinic acid/oxalosuccinate is an unstable six-carbon intermediate in the tricarboxylic acid cycle. It is a keto acid, formed during the oxidative decarboxylation of isocitrate to alpha-ketoglutarate, which is catalyzed by the enzyme isocitrate dehydrogenase. Isocitrate is first oxidized by the coenzyme NAD+ to form oxalosuccinate. Oxalosuccinic acid is both an alpha-keto and a beta-keto acid (an unstable combination), and it is the beta-keto property that allows the loss of carbon dioxide in the enzymatic conversion to the five-carbon molecule 2-oxoglutarate.
References
Tricarboxylic acids
Alpha-keto acids
Beta-keto acids | Oxalosuccinic acid | [
"Chemistry",
"Biology"
] | 239 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
10,125,619 | https://en.wikipedia.org/wiki/Fundamental%20matrix%20%28linear%20differential%20equation%29 | In mathematics, a fundamental matrix of a system of n homogeneous linear ordinary differential equations x′(t) = A(t) x(t) is a matrix-valued function Ψ(t) whose columns are linearly independent solutions of the system.
Then every solution to the system can be written as x(t) = Ψ(t) c, for some constant vector c (written as a column vector of height n).
A matrix-valued function Ψ is a fundamental matrix of x′ = A(t) x if and only if Ψ′(t) = A(t) Ψ(t) and Ψ(t) is a non-singular matrix for all t.
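For a constant coefficient matrix A, one fundamental matrix is the matrix exponential Ψ(t) = e^{At}. The sketch below (illustrative; the matrix A and initial condition are arbitrary choices) checks the defining property Ψ′ = AΨ numerically and uses Ψ to solve an initial value problem:

```python
# Illustrative sketch: Psi(t) = expm(A t) is a fundamental matrix for x' = A x,
# since d/dt expm(A t) = A expm(A t) and expm(A t) is always invertible.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # x' = A x, eigenvalues -1 and -2

def fundamental_matrix(t):
    return expm(A * t)

x0 = np.array([1.0, 0.0])               # initial condition x(0)
c = np.linalg.solve(fundamental_matrix(0.0), x0)   # constant vector c
x_at_1 = fundamental_matrix(1.0) @ c               # solution x(1) = Psi(1) c

# Verify Psi' = A Psi at t = 1 via a central finite difference.
h = 1e-6
dPsi = (fundamental_matrix(1 + h) - fundamental_matrix(1 - h)) / (2 * h)
print(np.allclose(dPsi, A @ fundamental_matrix(1.0), atol=1e-5))  # True
print(x_at_1)
```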
Control theory
The fundamental matrix is used to express the state-transition matrix, an essential component in the solution of a system of linear ordinary differential equations.
See also
Linear differential equation
Liouville's formula
Systems of ordinary differential equations
References
Matrices
Differential calculus | Fundamental matrix (linear differential equation) | [
"Mathematics"
] | 133 | [
"Calculus",
"Mathematical objects",
"Matrices (mathematics)",
"Differential calculus",
"Matrix stubs"
] |
10,127,300 | https://en.wikipedia.org/wiki/Creative%20Wave%20Blaster | The Wave Blaster was an add-on MIDI synthesizer for Creative's Sound Blaster 16 and Sound Blaster AWE32 families of PC sound cards. It was a sample-based, General MIDI-compliant synthesizer. For General MIDI scores, the Wave Blaster's wavetable engine produced more realistic instrumental music than the SB16's onboard Yamaha OPL3 FM synthesizer.
The Wave Blaster attached to a SB16 through a 26-pin expansion-header, eliminating the need for extra cabling between the SB16 and the Wave Blaster. The SB16 emulated an MPU-401 UART, giving existing MIDI-software the option to send MIDI-sequences directly to the attached Wave Blaster, instead of driving an external MIDI-device. The Wave Blaster's analog stereo-output fed into a dedicated line-in on the SB16, where the onboard-mixer allowed equalization, mixing, and volume adjustment.
The Wave Blaster port was adopted by other sound card manufacturers who produced both daughterboards and soundcards with the expansion-header: Diamond, Ensoniq, Guillemot, Oberheim, Orchid, Roland, TerraTec, Turtle Beach, and Yamaha. The header also appeared on devices such as the Korg NX5R MIDI sound module, the Oberheim MC-1000/MC-2000 keyboards, and the TerraTec Axon AX-100 Guitar-to-MIDI converter.
Since 2000, Wave Blaster-capable sound cards for computers have become rare. In 2005, Terratec released a new Wave Blaster daughterboard called the Wave XTable, with 16 MB of on-board sample memory comprising 500 instruments and 10 drum kits. In 2014, a new compatible card called the Dreamblaster S1 was produced by the Belgian company Serdaco. In 2015 the same company released a high-end card named Dreamblaster X1, comparable to Yamaha and Roland cards. In 2016 the DreamBlaster X2 was released, a board with both a Wave Blaster interface and a USB interface.
Wave Blaster II
Creative released the Wave Blaster II (CT1910) shortly after the original Wave Blaster. Wave Blaster II used a newer E-mu EMU8000 synthesis-engine (which later appeared in the AWE32).
By the time the SB16 reached the height of its popularity, competing MIDI daughterboards had already pushed aside the Wave Blaster. In particular, Roland's Sound Canvas daughterboards (SCD-10/15), priced higher than Creative's offering, were highly regarded for their unrivalled musical reproduction in MIDI-scored game titles. (This was due to Roland's dominance in the production side of MIDI game soundtracks; Roland's daughterboards shared the same synthesis engine and instrument sound-set as the popular Sound Canvas SC-55, a commercial MIDI module favored by game composers.) By comparison, the Wave Blaster's instruments were improperly balanced, with many instruments striking at different volume levels relative to the de facto standard, the Sound Canvas.
Reception
Computer Gaming World in 1993 praised the Wave Blaster's audio quality and stated that the card was the best wave-table synthesis device for those with a compatible sound card.
Wave Blaster connector pinout
AGnd = Analog ground
DGnd = Digital ground
Some Wave Blaster cards offer audio inputs (Yamaha DB50XG)
Some Wave Blaster cards offer TTL-MIDI output
Reset is active low
References
External links
Wave Blaster pin-out information
Wave Blaster card photos (text in Japanese)
Wave Blaster Card Collection
2014 Dreamblaster Module
2015 dreamblaster X1 review
dreamblaster X1 vs Yamaha vs Roland
IBM PC compatibles
Computer peripherals
Creative Technology products
Sound cards | Creative Wave Blaster | [
"Technology"
] | 767 | [
"Computer peripherals",
"Components"
] |
10,129,446 | https://en.wikipedia.org/wiki/Tsen%27s%20theorem | In mathematics, Tsen's theorem states that a function field K of an algebraic curve over an algebraically closed field is quasi-algebraically closed (i.e., C1). This implies that the Brauer group of any such field vanishes, and more generally that all the Galois cohomology groups H^i(K, K*) vanish for i ≥ 1. This result is used to calculate the étale cohomology groups of an algebraic curve.
The theorem was published by Chiungtze C. Tsen in 1933.
See also
Tsen rank
References
Theorems in algebraic geometry | Tsen's theorem | [
"Mathematics"
] | 125 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
10,129,831 | https://en.wikipedia.org/wiki/Explosion%20vent | An explosion vent or rupture panel is a safety device to protect equipment or buildings against excessive internal, explosion-induced pressures by means of pressure relief. An explosion vent relieves pressure from the instant its opening (static activation) pressure p_stat has been exceeded.
Several explosion vent panels can be installed on the same process vessel to be protected. Explosion vents are available in two versions: self-destructive (non-self-re-closing) and re-usable (self-re-closing).
Explosion vent construction must balance the contradictory requirements of low inertia and high strength. Inertia negatively affects an explosion vent's efficiency, while high strength is required to endure the considerable forces that move the vent's venting element in order to open the venting orifice. Unintended disintegration must not turn fragmenting parts into missiles.
The evaluation of an explosion vent's efficiency and its range of application are governed by standards such as NFPA 68 (National Fire Protection Association) and EN 14797.
During normal venting, the explosion is freely discharged, allowing flames to exit the process being protected. When the protected vessel or pipe is located indoors, ducts are generally used to safely convey the explosion outside the building. However, ductwork has disadvantages and may result in decreased venting efficiency. Flameless venting, in combination with explosion vents, can extinguish the flame from the vented explosion without the use of expensive ducting, limitations to equipment location, or more costly explosion protection.
See also
Blast damper
Dust explosion
Electrical equipment in hazardous areas
Explosion protection
Explosives safety
Inert gas
Pressure relief valve
Prestressed structure
Rupture disc
References
Explosion protection
Safety engineering
Safety equipment | Explosion vent | [
"Chemistry",
"Engineering"
] | 351 | [
"Systems engineering",
"Explosion protection",
"Safety engineering",
"Combustion engineering",
"Explosions"
] |
10,130,725 | https://en.wikipedia.org/wiki/Protein%20domain | In molecular biology, a protein domain is a region of a protein's polypeptide chain that is self-stabilizing and that folds independently from the rest. Each domain forms a compact folded three-dimensional structure. Many proteins consist of several domains, and a domain may appear in a variety of different proteins. Molecular evolution uses domains as building blocks, and these may be recombined in different arrangements to create proteins with different functions. In general, domains vary in length from about 50 up to 250 amino acids. The shortest domains, such as zinc fingers, are stabilized by metal ions or disulfide bridges. Domains often form functional units, such as the calcium-binding EF hand domain of calmodulin. Because they are independently stable, domains can be "swapped" by genetic engineering between one protein and another to make chimeric proteins.
Background
The concept of the domain was first proposed in 1973 by Wetlaufer after X-ray crystallographic studies of hen lysozyme and papain and by limited proteolysis studies of immunoglobulins. Wetlaufer defined domains as stable units of protein structure that could fold autonomously. In the past domains have been described as units of:
compact structure
function and evolution
folding.
Each definition is valid and will often overlap, i.e. a compact structural domain that is found amongst diverse proteins is likely to fold independently within its structural environment. Nature often brings several domains together to form multidomain and multifunctional proteins with a vast number of possibilities. In a multidomain protein, each domain may fulfill its own function independently, or in a concerted manner with its neighbours. Domains can either serve as modules for building up large assemblies such as virus particles or muscle fibres, or can provide specific catalytic or binding sites as found in enzymes or regulatory proteins.
Example: Pyruvate kinase
An appropriate example is pyruvate kinase (see first figure), a glycolytic enzyme that plays an important role in regulating the flux from fructose-1,6-biphosphate to pyruvate. It contains an all-β nucleotide-binding domain (in blue), an α/β-substrate binding domain (in grey) and an α/β-regulatory domain (in olive green), connected by several polypeptide linkers. Each domain in this protein occurs in diverse sets of protein families.
The central α/β-barrel substrate binding domain is one of the most common enzyme folds. It is seen in many different enzyme families catalysing completely unrelated reactions. The α/β-barrel is commonly called the TIM barrel, named after triose phosphate isomerase, which was the first such structure to be solved. It is currently classified into 26 homologous families in the CATH domain database. The TIM barrel is formed from a sequence of β-α-β motifs closed by the first and last strand hydrogen bonding together, forming an eight-stranded barrel. There is debate about the evolutionary origin of this domain. One study has suggested that a single ancestral enzyme could have diverged into several families, while another suggests that a stable TIM-barrel structure has evolved through convergent evolution.
The TIM-barrel in pyruvate kinase is 'discontinuous', meaning that more than one segment of the polypeptide is required to form the domain. This is likely to be the result of the insertion of one domain into another during the protein's evolution. It has been shown from known structures that about a quarter of structural domains are discontinuous. The inserted β-barrel regulatory domain is 'continuous', made up of a single stretch of polypeptide.
Units of protein structure
The primary structure (string of amino acids) of a protein ultimately encodes its uniquely folded three-dimensional (3D) conformation. The most important factor governing the folding of a protein into 3D structure is the distribution of polar and non-polar side chains. Folding is driven by the burial of hydrophobic side chains into the interior of the molecule so as to avoid contact with the aqueous environment. Generally, proteins have a core of hydrophobic residues surrounded by a shell of hydrophilic residues. Since the peptide bonds themselves are polar, they are neutralised by hydrogen bonding with each other when in the hydrophobic environment. This gives rise to regions of the polypeptide that form regular 3D structural patterns called secondary structure. There are two main types of secondary structure: α-helices and β-sheets.
Some simple combinations of secondary structure elements have been found to frequently occur in protein structure and are referred to as supersecondary structure or motifs. For example, the β-hairpin motif consists of two adjacent antiparallel β-strands joined by a small loop. It is present in most antiparallel β structures both as an isolated ribbon and as part of more complex β-sheets. Another common super-secondary structure is the β-α-β motif, which is frequently used to connect two parallel β-strands. The central α-helix connects the C-termini of the first strand to the N-termini of the second strand, packing its side chains against the β-sheet and therefore shielding the hydrophobic residues of the β-strands from the surface.
Covalent association of two domains represents a functional and structural advantage since there is an increase in stability when compared with the same structures non-covalently associated. Other advantages are the protection of intermediates within inter-domain enzymatic clefts that may otherwise be unstable in aqueous environments, and a fixed stoichiometric ratio of the enzymatic activity necessary for a sequential set of reactions.
Structural alignment is an important tool for determining domains.
Tertiary structure
Several motifs pack together to form compact, local, semi-independent units called domains.
The overall 3D structure of the polypeptide chain is referred to as the protein's tertiary structure. Domains are the fundamental units of tertiary structure, each domain containing an individual hydrophobic core built from secondary structural units connected by loop regions. The packing of the polypeptide is usually much tighter in the interior than the exterior of the domain producing a solid-like core and a fluid-like surface. Core residues are often conserved in a protein family, whereas the residues in loops are less conserved, unless they are involved in the protein's function. Protein tertiary structure can be divided into four main classes based on the secondary structural content of the domain.
All-α domains have a domain core built exclusively from α-helices. This class is dominated by small folds, many of which form a simple bundle with helices running up and down.
All-β domains have a core composed of antiparallel β-sheets, usually two sheets packed against each other. Various patterns can be identified in the arrangement of the strands, often giving rise to the identification of recurring motifs, for example the Greek key motif.
α+β domains are a mixture of all-α and all-β motifs. Classification of proteins into this class is difficult because of overlaps with the other three classes and therefore is not used in the CATH domain database.
α/β domains are made from a combination of β-α-β motifs that predominantly form a parallel β-sheet surrounded by amphipathic α-helices. The secondary structures are arranged in layers or barrels.
Limits on size
Domains have limits on size. The size of individual structural domains varies from 36 residues in E-selectin to 692 residues in lipoxygenase-1, but the majority, 90%, have fewer than 200 residues with an average of approximately 100 residues. Very short domains, less than 40 residues, are often stabilised by metal ions or disulfide bonds. Larger domains, greater than 300 residues, are likely to consist of multiple hydrophobic cores.
Quaternary structure
Many proteins have a quaternary structure, which consists of several polypeptide chains that associate into an oligomeric molecule. Each polypeptide chain in such a protein is called a subunit. Hemoglobin, for example, consists of two α and two β subunits. Each of the four chains has an all-α globin fold with a heme pocket.
Domain swapping
Domain swapping is a mechanism for forming oligomeric assemblies. In domain swapping, a secondary or tertiary element of a monomeric protein is replaced by the same element of another protein. Domain swapping can range from secondary structure elements to whole structural domains. It also represents a model of evolution for functional adaptation by oligomerisation, e.g. oligomeric enzymes that have their active site at subunit interfaces.
Domains as evolutionary modules
Nature is a tinkerer and not an inventor: new sequences are adapted from pre-existing sequences rather than invented. Domains are the common material used by nature to generate new sequences; they can be thought of as genetically mobile units, referred to as 'modules'. Often, the C and N termini of domains are close together in space, allowing them to easily be "slotted into" parent structures during the process of evolution. Many domain families are found in all three forms of life, Archaea, Bacteria and Eukarya. Protein modules are a subset of protein domains which are found across a range of different proteins with a particularly versatile structure. Examples can be found among extracellular proteins associated with clotting, fibrinolysis, complement, the extracellular matrix, cell surface adhesion molecules and cytokine receptors. Four concrete examples of widespread protein modules are the following domains: SH2, immunoglobulin, fibronectin type 3 and the kringle.
Molecular evolution gives rise to families of related proteins with similar sequence and structure. However, sequence similarities can be extremely low between proteins that share the same structure. Protein structures may be similar because proteins have diverged from a common ancestor. Alternatively, some folds may be more favored than others as they represent stable arrangements of secondary structures and some proteins may converge towards these folds over the course of evolution. There are currently about 110,000 experimentally determined protein 3D structures deposited within the Protein Data Bank (PDB). However, this set contains many identical or very similar structures. All proteins should be classified to structural families to understand their evolutionary relationships. Structural comparisons are best achieved at the domain level. For this reason many algorithms have been developed to automatically assign domains in proteins with known 3D structure.
The CATH domain database classifies domains into approximately 800 fold families; ten of these folds are highly populated and are referred to as 'super-folds'. Super-folds are defined as folds for which there are at least three structures without significant sequence similarity. The most populated is the α/β-barrel super-fold, as described previously.
Multidomain proteins
The majority of proteins, two-thirds in unicellular organisms and more than 80% in metazoa, are multidomain proteins. However, other studies concluded that 40% of prokaryotic proteins consist of multiple domains while eukaryotes have approximately 65% multi-domain proteins.
Many domains in eukaryotic multidomain proteins can be found as independent proteins in prokaryotes, suggesting that domains in multidomain proteins may once have existed as independent proteins. For example, vertebrates have a multi-enzyme polypeptide containing the GAR synthetase, AIR synthetase and GAR transformylase domains (GARs-AIRs-GARt; GAR: glycinamide ribonucleotide synthetase/transferase; AIR: aminoimidazole ribonucleotide synthetase). In insects, the polypeptide appears as GARs-(AIRs)2-GARt, in yeast GARs-AIRs is encoded separately from GARt, and in bacteria each domain is encoded separately.
Origin
Multidomain proteins are likely to have emerged from selective pressure during evolution to create new functions. Various proteins have diverged from common ancestors by different combinations and associations of domains. Modular units frequently move about, within and between biological systems through mechanisms of genetic shuffling:
transposition of mobile elements including horizontal transfers (between species);
gross rearrangements such as inversions, translocations, deletions and duplications;
homologous recombination;
slippage of DNA polymerase during replication.
Types of organization
The simplest multidomain organization seen in proteins is that of a single domain repeated in tandem. The domains may interact with each other (domain-domain interaction) or remain isolated, like beads on a string. The giant 30,000-residue muscle protein titin comprises about 120 fibronectin-III-type and Ig-type domains. In the serine proteases, a gene duplication event has led to the formation of a two β-barrel domain enzyme. The repeats have diverged so widely that there is no obvious sequence similarity between them. The active site is located at a cleft between the two β-barrel domains, in which functionally important residues are contributed from each domain. Genetically engineered mutants of the chymotrypsin serine protease were shown to have some proteinase activity even though their active site residues were abolished, and it has therefore been postulated that the duplication event enhanced the enzyme's activity.
Modules frequently display different connectivity relationships, as illustrated by the kinesins and ABC transporters. The kinesin motor domain can be at either end of a polypeptide chain that includes a coiled-coil region and a cargo domain. ABC transporters are built with up to four domains consisting of two unrelated modules, ATP-binding cassette and an integral membrane module, arranged in various combinations.
Not only do domains recombine, but there are many examples of a domain having been inserted into another. Sequence or structural similarities to other
domains demonstrate that homologues of inserted and parent domains can exist independently. An example is that of the 'fingers' inserted into the 'palm' domain within the polymerases of the Pol I family. Since a domain can be inserted into another, there should always be at least one continuous domain in a multidomain protein. This is the main difference between definitions of structural domains and evolutionary/functional domains. An evolutionary domain will be limited to one or two connections between domains, whereas structural domains can have unlimited connections, within a given criterion of the existence of a common core. Several structural domains could be assigned to an evolutionary domain.
A superdomain consists of two or more conserved domains of nominally independent origin, but subsequently inherited as a single structural/functional unit. This combined superdomain can occur in diverse proteins that are not related by gene duplication alone. An example of a superdomain is the protein tyrosine phosphatase–C2 domain pair in PTEN, tensin, auxilin and the membrane protein TPTE2. This superdomain is found in proteins in animals, plants and fungi. A key feature of the PTP-C2 superdomain is amino acid residue conservation in the domain interface.
Domains are autonomous folding units
Folding
Protein folding - the unsolved problem: Since the seminal work of Anfinsen in the early 1960s, the goal to completely understand the mechanism by which a polypeptide rapidly folds into its stable native conformation remains elusive. Many experimental folding studies have contributed much to our understanding, but the principles that govern protein folding are still based on those discovered in the very first studies of folding. Anfinsen showed that the native state of a protein is thermodynamically stable, the conformation being at a global minimum of its free energy.
Folding is a directed search of conformational space allowing the protein to fold on a biologically feasible time scale. The Levinthal paradox states that if an average-sized protein would sample all possible conformations before finding the one with the lowest energy, the whole process would take billions of years. Proteins typically fold within 0.1 to 1000 seconds. Therefore, the protein folding process must be directed in some way through a specific folding pathway. The forces that direct this search are likely to be a combination of local and global influences whose effects are felt at various stages of the reaction.
Advances in experimental and theoretical studies have shown that folding can be viewed in terms of energy landscapes, where folding kinetics is considered as a progressive organisation of an ensemble of partially folded structures through which a protein passes on its way to the folded structure. This has been described in terms of a folding funnel, in which an unfolded protein has a large number of conformational states available and there are fewer states available to the folded protein. A funnel implies that for protein folding there is a decrease in energy and loss of entropy with increasing tertiary structure formation. The local roughness of the funnel reflects kinetic traps, corresponding to the accumulation of misfolded intermediates. A folding chain progresses toward lower intra-chain free-energies by increasing its compactness. The chain's conformational options become increasingly narrowed ultimately toward one native structure.
Advantage of domains in protein folding
The organisation of large proteins by structural domains represents an advantage for protein folding, with each domain being able to individually fold, accelerating the folding process and reducing a potentially large combination of residue interactions. Furthermore, given the observed random distribution of hydrophobic residues in proteins, domain formation appears to be the optimal solution for a large protein to bury its hydrophobic residues while keeping the hydrophilic residues at the surface.
However, the role of inter-domain interactions in protein folding and in the energetics of stabilisation of the native structure probably differs for each protein. In T4 lysozyme, the influence of one domain on the other is so strong that the entire molecule is resistant to proteolytic cleavage. In this case, folding is a sequential process where the C-terminal domain is required to fold independently in an early step, and the other domain requires the presence of the folded C-terminal domain for folding and stabilisation.
It has been found that the folding of an isolated domain can take place at the same rate or sometimes faster than that of the integrated domain, suggesting that unfavourable interactions with the rest of the protein can occur during folding. Several arguments suggest that the slowest step in the folding of large proteins is the pairing of the folded domains. This is either because the domains are not folded entirely correctly or because the small adjustments required for their interaction are energetically unfavourable, such as the removal of water from the domain interface.
Domains and protein flexibility
Protein domain dynamics play a key role in a multitude of molecular recognition and signaling processes.
Protein domains, connected by intrinsically disordered flexible linker domains, induce long-range allostery via protein domain dynamics.
The resultant dynamic modes cannot be generally predicted from static structures of either the entire protein or individual domains. They can, however, be inferred by comparing different structures of a protein (as in the Database of Molecular Motions). They can also be suggested by sampling in extensive molecular dynamics trajectories and principal component analysis, or they can be directly observed using spectra measured by neutron spin echo spectroscopy.
Domain definition from structural co-ordinates
The importance of domains as structural building blocks and elements of evolution has brought about many automated methods for their identification and classification in proteins of known structure. Automatic procedures for reliable domain assignment are essential for the generation of the domain databases, especially as the number of known protein structures is increasing. Although the boundaries of a domain can be determined by visual inspection, construction of an automated method is not straightforward. Problems occur when faced with domains that are discontinuous or highly associated. The fact that there is no standard definition of what a domain really is has meant that domain assignments have varied enormously, with each researcher using a unique set of criteria.
A structural domain is a compact, globular sub-structure with more interactions within it than with the rest of the protein.
Therefore, a structural domain can be determined by two visual characteristics: its compactness and its extent of isolation. Measures of local compactness in proteins have been used in many of the early methods of domain assignment and in several of the more recent methods.
Methods
One of the first algorithms used a Cα-Cα distance map together with a hierarchical clustering routine that considered proteins as several small segments, 10 residues in length. The initial segments were clustered one after another based on inter-segment distances; segments with the shortest distances were clustered and considered as single segments thereafter. The stepwise clustering finally included the full protein. Go also exploited the fact that inter-domain distances are normally larger than intra-domain distances; all possible Cα-Cα distances were represented as diagonal plots in which there were distinct patterns for helices, extended strands and combinations of secondary structures.
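The distance-map idea behind these early methods is easy to illustrate. The following sketch (Python with NumPy; the coordinates, segment length and function names are illustrative assumptions, not taken from any published implementation) computes a Cα-Cα distance matrix and the mean inter-segment distances from which such a clustering would start:

import numpy as np

def ca_distance_map(coords):
    # coords: (N, 3) array of hypothetical C-alpha coordinates
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def mean_segment_distances(dist_map, seg_len=10):
    # Average inter-segment distance between consecutive seg_len-residue segments
    n_seg = dist_map.shape[0] // seg_len
    m = n_seg * seg_len
    blocks = dist_map[:m, :m].reshape(n_seg, seg_len, n_seg, seg_len)
    return blocks.mean(axis=(1, 3))

# Example: two well-separated compact blobs give a clear two-cluster pattern
rng = np.random.default_rng(0)
coords = np.vstack([rng.normal(0, 3, (60, 3)), rng.normal(30, 3, (60, 3))])
seg_dist = mean_segment_distances(ca_distance_map(coords))

Segments belonging to the same domain show small mutual distances, so hierarchical clustering on this matrix groups them together first.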
The method by Sowdhamini and Blundell clusters secondary structures in a protein based on their Cα-Cα distances and identifies domains from the pattern in their dendrograms. As the procedure does not consider the protein as a continuous chain of amino acids, there are no problems in treating discontinuous domains. Specific nodes in these dendrograms are identified as tertiary structural clusters of the protein; these include both super-secondary structures and domains. The DOMAK algorithm is used to create the 3Dee domain database. It calculates a 'split value' from the number of each type of contact when the protein is divided arbitrarily into two parts. This split value is large when the two parts of the structure are distinct.
The method of Wodak and Janin was based on the calculated interface areas between two chain segments repeatedly cleaved at various residue positions. Interface areas were calculated by comparing surface areas of the cleaved segments with that of the native structure. Potential domain boundaries can be identified at a site where the interface area was at a minimum. Other methods have used measures of solvent accessibility to calculate compactness.
The PUU algorithm incorporates a harmonic model used to approximate inter-domain dynamics. The underlying physical concept is that many rigid interactions will occur within each domain and loose interactions will occur between domains. This algorithm is used to define domains in the FSSP domain database.
Swindells (1995) developed a method, DETECTIVE, for identification of domains in protein structures based on the idea that domains have a hydrophobic interior. Deficiencies were found to occur when hydrophobic cores from different domains continue through the interface region.
RigidFinder is a method for identification of protein rigid blocks (domains and loops) from two different conformations. Rigid blocks are defined as blocks where all inter-residue distances are conserved across conformations.
The method RIBFIND developed by Pandurangan and Topf identifies rigid bodies in protein structures by performing spatial clustering of secondary structural elements in proteins. The RIBFIND rigid bodies have been used to flexibly fit protein structures into cryo-electron microscopy density maps.
A general method to identify dynamical domains, that is, protein regions that behave approximately as rigid units in the course of structural fluctuations, has been introduced by Potestio et al. and, among other applications, was also used to compare the consistency of the dynamics-based domain subdivisions with standard structure-based ones. The method, termed PiSQRD, is publicly available in the form of a webserver, which allows users to optimally subdivide single-chain or multimeric proteins into quasi-rigid domains based on the collective modes of fluctuation of the system. By default these modes are calculated through an elastic network model; alternatively, pre-calculated essential dynamical spaces can be uploaded by the user.
Example domains
Armadillo repeats: named after the β-catenin-like Armadillo protein of the fruit fly Drosophila melanogaster.
Basic leucine zipper domain (bZIP domain): found in many DNA-binding eukaryotic proteins. One part of the domain contains a region that mediates sequence-specific DNA-binding properties and the leucine zipper that is required for the dimerization of two DNA-binding regions. The DNA-binding region comprises a number of basic amino acids such as arginine and lysine.
Cadherin repeats: Cadherins function as Ca2+-dependent cell–cell adhesion proteins. Cadherin domains are extracellular regions which mediate cell-to-cell homophilic binding between cadherins on the surface of adjacent cells.
Death effector domain (DED): allows protein–protein binding by homotypic interactions (DED-DED). Caspase proteases trigger apoptosis via proteolytic cascades. Pro-caspase-8 and pro-caspase-9 bind to specific adaptor molecules via DED domains, which leads to autoactivation of caspases.
EF hand: a helix-turn-helix structural motif found in each structural domain of the signaling protein calmodulin and in the muscle protein troponin-C.
Foldon domain: A small protein domain from fibritin in T4 bacteriophage that can cause proteins to trimerize.
Immunoglobulin-like domains: found in proteins of the immunoglobulin superfamily (IgSF). They contain about 70-110 amino acids and are classified into different categories (IgV, IgC1, IgC2 and IgI) according to their size and function. They possess a characteristic fold in which two beta sheets form a "sandwich" that is stabilized by interactions between conserved cysteines and other charged amino acids. They are important for protein–protein interactions in processes of cell adhesion, cell activation, and molecular recognition. These domains are commonly found in molecules with roles in the immune system.
Phosphotyrosine-binding domain (PTB): PTB domains usually bind to phosphorylated tyrosine residues. They are often found in signal transduction proteins. PTB-domain binding specificity is determined by residues to the amino-terminal side of the phosphotyrosine. Examples: the PTB domains of both SHC and IRS-1 bind to an NPXpY sequence. PTB-containing proteins such as SHC and IRS-1 are important for insulin responses of human cells.
Pleckstrin homology domain (PH): PH domains bind phosphoinositides with high affinity. Specificities for PtdIns(3)P, PtdIns(4)P, PtdIns(3,4)P2, PtdIns(4,5)P2, and PtdIns(3,4,5)P3 have all been observed. Because phosphoinositides are sequestered to various cell membranes (due to their long lipophilic tails), PH domains usually cause recruitment of the protein in question to a membrane where the protein can exert a certain function in cell signalling, cytoskeletal reorganization or membrane trafficking.
Src homology 2 domain (SH2): SH2 domains are often found in signal transduction proteins. SH2 domains confer binding to phosphorylated tyrosine (pTyr). Named after the phosphotyrosine binding domain of the src viral oncogene, which is itself a tyrosine kinase. See also: SH3 domain.
Zinc finger DNA-binding domain (ZnF_GATA): ZnF_GATA domain-containing proteins are typically transcription factors that usually bind to the DNA sequence [AT]GATA[AG] of promoters.
Domains of unknown function
A large fraction of domains are of unknown function. A domain of unknown function (DUF) is a protein domain that has no characterized function. These families have been collected together in the Pfam database using the prefix DUF followed by a number, with examples being DUF2992 and DUF1220. There are now over 3,000 DUF families within the Pfam database representing over 20% of known families. Surprisingly, the number of DUFs in Pfam has increased from 20% (in 2010) to 22% (in 2019), mostly due to an increasing number of new genome sequences. Pfam release 32.0 (2019) contained 3,961 DUFs.
See also
Binding domain
Cofactor transferase family
PANDIT, a biological database covering protein domains
Pfam: database of protein domains
Protein
Protein structure
Protein structure prediction
Protein structure prediction software
Protein superfamily
Protein tandem repeats
Protein family
Protein subfamily
Short linear motif
Structural biology
Structural Classification of Proteins (SCOP)
CATH Protein Structure Classification database
Sequence motif
Structural motif
References
External links
Structural domain databases
Conserved Domains at the National Center for Biotechnology website
3Dee
CATH
DALI
PFAM clan browser
Sequence domain databases
InterPro
PROSITE
ProDom
SMART
NCBI Conserved Domain Database
SUPERFAMILY Library of HMMs representing superfamilies and database of (superfamily and family) annotations for all completely sequenced organisms
Functional domain databases
dcGO A comprehensive database of domain-centric ontologies on functions, phenotypes and diseases.
Protein structure
Protein families
Protein superfamilies | Protein domain | [
"Chemistry",
"Biology"
] | 6,047 | [
"Protein classification",
"Protein domains",
"Structural biology",
"Protein families",
"Protein structure",
"Protein superfamilies"
] |
10,131,478 | https://en.wikipedia.org/wiki/PDIFF | In geometric topology, PDIFF, for piecewise differentiable, is the category of piecewise-smooth manifolds and piecewise-smooth maps between them. It properly contains DIFF (the category of smooth manifolds and smooth functions between them) and PL (the category of piecewise linear manifolds and piecewise linear maps between them), and the reason it is defined is to allow one to relate these two categories. Further, piecewise functions such as splines and polygonal chains are common in mathematics, and PDIFF provides a category for discussing them.
Motivation
PDIFF is mostly a technical point: smooth maps are not piecewise linear (unless linear), and piecewise linear maps are not smooth (unless globally linear) – the intersection is linear maps, or more precisely affine maps (because not based) – so they cannot directly be related: they are separate generalizations of the notion of an affine map.
However, while a smooth manifold is not a PL manifold, it carries a canonical PL structure – it is uniquely triangulable; conversely, not every PL manifold is smoothable. For a particular smooth manifold or smooth map between smooth manifolds, this can be shown by breaking up the manifold into small enough pieces, and then linearizing the manifold or map on each piece: for example, a circle in the plane can be approximated by a triangle, but not by a 2-gon, since this latter cannot be linearly embedded.
This relation between Diff and PL requires choices, however, and is more naturally shown and understood by including both categories in a larger category, and then showing that the inclusion of PL is an equivalence: every smooth manifold and every PL manifold is a PDiff manifold. Thus, going from Diff to PDiff and PL to PDiff are natural – they are just inclusion. The map PL to PDiff, while not an equality – not every piecewise smooth function is piecewise linear – is an equivalence: one can go backwards by linearizing pieces. Thus it can for some purposes be inverted, or considered an isomorphism, which gives a map from Diff to PL. These categories all sit inside TOP, the category of topological manifolds and continuous maps between them.
In summary, PDiff is more general than Diff because it allows pieces (corners), and one cannot in general smooth corners, while PL is no less general than PDiff because one can linearize pieces (more precisely, one may need to break them up into smaller pieces and then linearize, which is allowed in PDiff).
History
That every smooth (indeed, C1) manifold has a unique PL structure was originally proven by J. H. C. Whitehead in 1940. The result is elementary and rather technical to prove in detail, so modern texts generally give only a sketch or brief outline of the proof, with detailed expository treatments available in the older geometric topology literature.
References
Geometric topology | PDIFF | [
"Mathematics"
] | 598 | [
"Topology",
"Geometric topology"
] |
10,132,968 | https://en.wikipedia.org/wiki/Fuel%20mass%20fraction | In combustion physics, fuel mass fraction is the ratio of fuel mass flow to the total mass flow of a fuel mixture. If an air flow is fuel free, the fuel mass fraction is zero; in pure fuel without trapped gases, the ratio is unity. As fuel is burned in a combustion process, the fuel mass fraction is reduced. The definition reads as
where
is the mass of the fuel in the mixture
is the total mass of the mixture
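As a worked example, a stoichiometric methane-air mixture contains roughly 17.2 kg of air per kilogram of fuel, giving $Y_\mathrm{F} = 1/(1 + 17.2) \approx 0.055$; as combustion proceeds, $Y_\mathrm{F}$ falls toward zero.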
References
Chemical physics
Combustion
Engineering ratios | Fuel mass fraction | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 96 | [
"Chemical process stubs",
"Applied and interdisciplinary physics",
"Metrics",
"Engineering ratios",
"Quantity",
"Combustion",
"nan",
"Chemical reaction stubs",
"Chemical physics"
] |
12,488,576 | https://en.wikipedia.org/wiki/Deep%20Foundations%20Institute | Deep Foundations Institute (DFI) is an international association of contractors, engineers, manufacturers, suppliers, academics and owners in the deep foundations industry.
DFI was incorporated as a 501(c)6 association in January 1976. It was founded by Jack Dougherty and Hal Hunt during the “Pile Talk” seminars and became a multidisciplinary worldwide membership organization.
Technical committees
Augered Cast-in-Place and Drilled Displacement Pile
Anchored Earth Retention
BIM and Digitalisation (Europe)
Continuous Flight Auger Pile (India)
Codes and Standards
Deep Foundations for Landslides and Slope Stabilization
Drilled Shaft
Driven Pile
Electric Power Foundation Systems
Geotechnical Characterization for Foundations (India)
Ground Improvement
Helical Piles and Tiebacks
Information Management Systems
International Grouting
Manufacturers, Suppliers and Service Providers
Micropile
Risk and Contracts
Seismic and Lateral Loads
Structural Slurry Wall and Seepage Control
Soil Mixing
Subsurface Characterization for Deep Foundations
Sustainability
Testing and Evaluation
Tunneling and Underground
Women in Deep Foundations
International chapters
DFI currently supports chapters in Europe, the Middle East and India.
DFI Europe was formed in 2005 as a DFI Regional Chapter for European DFI Members, who enjoy the benefits of joint DFI and DFI Europe membership. DFI Europe’s secretariat is located in Belgium.
DFI Middle East was formed in 2010 following a successful Piling Summit in Dubai, UAE. The Regional Chapter provides joint membership in DFI and DFI Middle East.
DFI of India was formed in 2013 following the success of the Deep Foundation Technologies for Infrastructure Development in India held in Chennai, India in 2012. The Regional Chapter provides joint membership in DFI and DFI of India.
References
External links
Deep Foundations Institute Website
DFI Corporate Member Directory of Foundation Specialists
DFI Europe Website
DFI India Website
DFI Educational Trust Webpage
Geotechnical organizations
501(c)(6) nonprofit organizations | Deep Foundations Institute | [
"Engineering"
] | 376 | [
"Geotechnical organizations",
"Civil engineering organizations"
] |
12,488,689 | https://en.wikipedia.org/wiki/Brazilian%20Journal%20of%20Chemical%20Engineering | The Brazilian Journal of Chemical Engineering publishes papers, reporting basic and applied research and innovation in the field of chemical engineering and related areas. It was first published by the Associação Brasileira de Engenharia Química, São Paulo, in 1983 as the Revista Brasileira de Engenharia, Caderno de Engenharia Química. With vol. 11 (1994), it continued as the Brazilian Journal of Chemical Engineering.
Full text of the journal is available via SciELO from vol. 14 (1997) to vol. 36 (2019).
From January 2020 on, the journal is published by Springer.
See also
Anais da ABQ
Journal of the Brazilian Chemical Society
Química Nova
Revista Brasileira de Química
References
External links
Homepage of the journal (up to December 2019)
Homepage of the Associação Brasileira de Engenharia Química (ABEQ)
Homepage of the journal (from January 2020)
Chemical engineering journals
Academic journals published by learned and professional societies of Brazil
Academic journals established in 1983
Portuguese-language journals | Brazilian Journal of Chemical Engineering | [
"Chemistry",
"Engineering"
] | 256 | [
"Chemical engineering",
"Chemical engineering journals"
] |
12,488,904 | https://en.wikipedia.org/wiki/Capua%20Leg | The Capua leg was an artificial leg, found in a grave in Capua, Italy in about 1884. Dating from 300 BC, the leg is one of the earliest known prosthetic limbs. There was no sign of an artificial foot which may have been made from a valuable metal. The limb was kept at the Royal College of Surgeons in London, but was destroyed in World War II during an air raid. A copy of the limb is held at the Science Museum, London. and another was made by 3D printing in 2021.
References
Prosthetics
Archaeological artifacts | Capua Leg | [
"Engineering",
"Biology"
] | 277 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
94,620 | https://en.wikipedia.org/wiki/Saeculum | A is a length of time roughly equal to the potential lifetime of a person or, equivalently, the complete renewal of a human population.
Background
Originally it meant the time from the moment that something happened (for example the founding of a city) until the point in time that all people who had lived at the first moment had died. At that point a new saeculum would start. According to legend, the gods had allotted a certain number of saecula to every people or civilization; the Etruscans, for example, had been given ten saecula.
By the 2nd century BC, Roman historians were using the saeculum to periodize their chronicles and track wars. At the time of the reign of emperor Augustus, the Romans decided that a saeculum was 110 years. In 17 BC, Caesar Augustus organized Ludi saeculares ("saecular games") for the first time to celebrate the "fifth saeculum of Rome". Augustus aimed to link the saeculum with imperial authority.
Emperors such as Claudius and Septimius Severus celebrated the passing of saecula with games at irregular intervals. In 248, Philip the Arab combined Ludi saeculares with the 1,000th anniversary of the founding of Rome. The new millennium that Rome entered was called the saeculum novum, a term that received a metaphysical connotation in Christianity, referring to the worldly age (hence "secular").
Roman emperors legitimised their political authority by referring to the saeculum in various media, linked to a golden age of imperial glory. In response, Christian writers began to define the saeculum as referring to 'this present world', as opposed to the expectation of eternal life in the 'world to come'. This results in the modern sense of 'secular' as 'belonging to the world and its affairs'.
The English word secular, an adjective meaning something happening once in an eon, is derived from the Latin saeculum. The descendants of Latin saeculum in the Romance languages generally mean "century" (i.e., 100 years): French siècle, Spanish siglo, Portuguese século, Italian secolo, etc.
See also
Aeon, comparable Greek concept
Century
Generation
In saecula saeculorum
New world order (politics)
Social cycle theory
Strauss–Howe generational theory
Saeculum obscurum
References
Units of time
Ageing
Latin words and phrases | Saeculum | [
"Physics",
"Mathematics"
] | 484 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
95,154 | https://en.wikipedia.org/wiki/Associative%20array | In computer science, an associative array, map, symbol table, or dictionary is an abstract data type that stores a collection of (key, value) pairs, such that each possible key appears at most once in the collection. In mathematical terms, an associative array is a function with finite domain. It supports 'lookup', 'remove', and 'insert' operations.
The dictionary problem is the classic problem of designing efficient data structures that implement associative arrays.
The two major solutions to the dictionary problem are hash tables and search trees.
It is sometimes also possible to solve the problem using directly addressed arrays, binary search trees, or other more specialized structures.
Many programming languages include associative arrays as primitive data types, while many other languages provide software libraries that support associative arrays. Content-addressable memory is a form of direct hardware-level support for associative arrays.
Associative arrays have many applications including such fundamental programming patterns as memoization and the decorator pattern.
The name does not come from the associative property known in mathematics. Rather, it arises from the association of values with keys. It is not to be confused with associative processors.
Operations
In an associative array, the association between a key and a value is often known as a "mapping"; the same word may also be used to refer to the process of creating a new association.
The operations that are usually defined for an associative array are:
Insert or put
add a new pair to the collection, mapping the key to its new value. Any existing mapping is overwritten. The arguments to this operation are the key and the value.
Remove or delete
remove a pair from the collection, unmapping a given key from its value. The argument to this operation is the key.
Lookup, find, or get
find the value (if any) that is bound to a given key. The argument to this operation is the key, and the value is returned from the operation. If no value is found, some lookup functions raise an exception, while others return a default value (such as zero, null, or a specific value passed to the constructor).
Associative arrays may also include other operations such as determining the number of mappings or constructing an iterator to loop over all the mappings. For such operations, the order in which the mappings are returned is usually implementation-defined.
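These operations map directly onto the built-in dict of Python, itself an associative array; the following minimal sketch (the key and value names are illustrative) shows all three:

phonebook = {}                          # a new, empty associative array
phonebook["alice"] = "555-0100"         # insert: map the key to a value
phonebook["alice"] = "555-0199"         # inserting again overwrites the mapping
number = phonebook.get("bob", None)     # lookup, returning a default when absent
del phonebook["alice"]                  # remove: unmap the key from its value
print(number)                           # prints: None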
A multimap generalizes an associative array by allowing multiple values to be associated with a single key. A bidirectional map is a related abstract data type in which the mappings operate in both directions: each value must be associated with a unique key, and a second lookup operation takes a value as an argument and looks up the key associated with that value.
Properties
The operations of the associative array should satisfy various properties:
lookup(k, insert(j, v, D)) = if k == j then v else lookup(k, D)
lookup(k, new()) = fail, where fail is an exception or default value
remove(k, insert(j, v, D)) = if k == j then remove(k, D) else insert(j, v, remove(k, D))
remove(k, new()) = new()
where k and j are keys, v is a value, D is an associative array, and new() creates a new, empty associative array.
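The axioms can be checked directly against any concrete implementation; for instance, a quick sanity check with a Python dict (an illustration, not a proof):

D = {}                         # new()
D["j"] = "v"                   # insert(j, v, D)
assert D.get("j") == "v"       # lookup(k, insert(j, v, D)) = v when k == j
assert D.get("k") is None      # lookup(k, new()) yields the default value
D.pop("j", None)               # remove(j, ...)
assert "j" not in D            # the mapping for j is gone
empty = {}
empty.pop("k", None)           # remove(k, new()) = new()
assert empty == {}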
Example
Suppose that the set of loans made by a library is represented in a data structure. Each book in a library may be checked out by one patron at a time. However, a single patron may be able to check out multiple books. Therefore, the information about which books are checked out to which patrons may be represented by an associative array, in which the books are the keys and the patrons are the values. Using notation from Python or JSON, the data structure would be:
{
"Pride and Prejudice": "Alice",
"Wuthering Heights": "Alice",
"Great Expectations": "John"
}
A lookup operation on the key "Great Expectations" would return "John". If John returns his book, that would cause a deletion operation, and if Pat checks out a book, that would cause an insertion operation, leading to a different state:
{
"Pride and Prejudice": "Alice",
"The Brothers Karamazov": "Pat",
"Wuthering Heights": "Alice"
}
Implementation
For dictionaries with very few mappings, it may make sense to implement the dictionary using an association list, which is a linked list of mappings. With this implementation, the time to perform the basic dictionary operations is linear in the total number of mappings. However, it is easy to implement and the constant factors in its running time are small.
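A minimal association-list sketch in Python (every operation walks the list, so each takes time linear in the number of mappings; class and method names are illustrative):

class AssocList:
    class _Node:
        def __init__(self, key, value, nxt):
            self.key, self.value, self.next = key, value, nxt

    def __init__(self):
        self.head = None

    def insert(self, key, value):
        node = self.head
        while node is not None:          # overwrite if the key already exists
            if node.key == key:
                node.value = value
                return
            node = node.next
        self.head = self._Node(key, value, self.head)  # otherwise prepend

    def lookup(self, key, default=None):
        node = self.head
        while node is not None:
            if node.key == key:
                return node.value
            node = node.next
        return default

    def remove(self, key):
        prev, node = None, self.head
        while node is not None:
            if node.key == key:
                if prev is None:
                    self.head = node.next
                else:
                    prev.next = node.next
                return
            prev, node = node, node.next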
Another very simple implementation technique, usable when the keys are restricted to a narrow range, is direct addressing into an array: the value for a given key k is stored at the array cell A[k], or if there is no mapping for k then the cell stores a special sentinel value that indicates the lack of a mapping. This technique is simple and fast, with each dictionary operation taking constant time. However, the space requirement for this structure is the size of the entire keyspace, making it impractical unless the keyspace is small.
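Direct addressing is shorter still when the keys are restricted to small integers (a sketch; the keyspace size and sentinel choice are illustrative):

KEYSPACE = 256                 # keys restricted to the range 0..255
table = [None] * KEYSPACE      # None is the sentinel meaning "no mapping"

def insert(key, value):
    table[key] = value         # constant-time store

def lookup(key):
    return table[key]          # constant-time fetch; None means absent

def remove(key):
    table[key] = None

insert(42, "answer")
assert lookup(42) == "answer"
remove(42)
assert lookup(42) is None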
The two major approaches for implementing dictionaries are a hash table or a search tree.
Hash table implementations
The most frequently used general-purpose implementation of an associative array is with a hash table: an array combined with a hash function that assigns each key to a "bucket" of the array. The basic idea behind a hash table is that accessing an element of an array via its index is a simple, constant-time operation. Therefore, the average overhead of an operation for a hash table is only the computation of the key's hash, combined with accessing the corresponding bucket within the array. As such, hash tables usually perform in O(1) time, and usually outperform alternative implementations.
Hash tables must be able to handle collisions: the mapping by the hash function of two different keys to the same bucket of the array. The two most widespread approaches to this problem are separate chaining and open addressing. In separate chaining, the array does not store the value itself but stores a pointer to another container, usually an association list, that stores all the values matching the hash. By contrast, in open addressing, if a hash collision is found, the table seeks an empty spot in an array to store the value in a deterministic manner, usually by looking at the next immediate position in the array.
Open addressing has a lower cache miss ratio than separate chaining when the table is mostly empty. However, as the table becomes filled with more elements, open addressing's performance degrades exponentially. Additionally, separate chaining uses less memory in most cases, unless the entries are very small (less than four times the size of a pointer).
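A bare-bones separate-chaining table can be sketched as follows (fixed bucket count and no resizing, so this illustrates the idea rather than a production design; a real table would grow as the load factor rises):

class ChainedHashTable:
    def __init__(self, n_buckets=64):
        self.buckets = [[] for _ in range(n_buckets)]   # one chain per bucket

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)    # overwrite an existing mapping
                return
        bucket.append((key, value))         # or chain a new pair

    def lookup(self, key, default=None):
        for k, v in self._bucket(key):      # scan only the colliding chain
            if k == key:
                return v
        return default

    def remove(self, key):
        bucket = self._bucket(key)
        bucket[:] = [(k, v) for (k, v) in bucket if k != key]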
Tree implementations
Self-balancing binary search trees
Another common approach is to implement an associative array with a self-balancing binary search tree, such as an AVL tree or a red–black tree.
Compared to hash tables, these structures have both strengths and weaknesses. The worst-case performance of self-balancing binary search trees is significantly better than that of a hash table, with a time complexity in big O notation of O(log n). This is in contrast to hash tables, whose worst-case performance involves all elements sharing a single bucket, resulting in O(n) time complexity. In addition, and like all binary search trees, self-balancing binary search trees keep their elements in order. Thus, traversing its elements follows a least-to-greatest pattern, whereas traversing a hash table can result in elements being in seemingly random order. Because they are in order, tree-based maps can also satisfy range queries (find all values between two bounds) whereas a hashmap can only find exact values. However, hash tables have a much better average-case time complexity than self-balancing binary search trees of O(1), and their worst-case performance is highly unlikely when a good hash function is used.
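The ordered behaviour can be seen even with a plain, unbalanced binary search tree (a Python sketch; a production version would add AVL or red–black rebalancing to guarantee the O(log n) bound):

class TreeMap:
    def __init__(self):
        self.root = None                    # node layout: [key, value, left, right]

    def insert(self, key, value):
        def rec(node):
            if node is None:
                return [key, value, None, None]
            if key == node[0]:
                node[1] = value             # overwrite existing mapping
            elif key < node[0]:
                node[2] = rec(node[2])
            else:
                node[3] = rec(node[3])
            return node
        self.root = rec(self.root)

    def range_query(self, lo, hi):
        # Yield (key, value) pairs with lo <= key <= hi, in sorted key order
        def rec(node):
            if node is None:
                return
            if lo < node[0]:
                yield from rec(node[2])
            if lo <= node[0] <= hi:
                yield (node[0], node[1])
            if node[0] < hi:
                yield from rec(node[3])
        yield from rec(self.root)

t = TreeMap()
for k in (5, 2, 8, 1, 9):
    t.insert(k, str(k))
print(list(t.range_query(2, 8)))   # [(2, '2'), (5, '5'), (8, '8')]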
A self-balancing binary search tree can be used to implement the buckets for a hash table that uses separate chaining. This allows for average-case constant lookup, but assures a worst-case performance of O(log n). However, this introduces extra complexity into the implementation and may cause even worse performance for smaller hash tables, where the time spent inserting into and balancing the tree is greater than the time needed to perform a linear search on all elements of a linked list or similar data structure.
Other trees
Associative arrays may also be stored in unbalanced binary search trees or in data structures specialized to a particular type of keys such as radix trees, tries, Judy arrays, or van Emde Boas trees, though the relative performance of these implementations varies. For instance, Judy trees have been found to perform less efficiently than hash tables, while carefully selected hash tables generally perform more efficiently than adaptive radix trees, with potentially greater restrictions on the data types they can handle. The advantages of these alternative structures come from their ability to handle additional associative array operations, such as finding the mapping whose key is the closest to a queried key when the query is absent in the set of mappings.
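Such nearest-key queries can be emulated on a sorted key array with binary search (a sketch using Python's standard bisect module; the data is illustrative, and the specialized trees above answer the same query without maintaining a separate sorted list):

import bisect

keys = [3, 10, 25, 40]                     # sorted keys of the mappings
values = {3: "a", 10: "b", 25: "c", 40: "d"}

def closest(query):
    i = bisect.bisect_left(keys, query)    # position where query would insert
    candidates = keys[max(0, i - 1):i + 1] # neighbours on either side
    key = min(candidates, key=lambda k: abs(k - query))
    return key, values[key]

print(closest(27))    # (25, 'c')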
Ordered dictionary
The basic definition of a dictionary does not mandate an order. To guarantee a fixed order of enumeration, ordered versions of the associative array are often used. There are two senses of an ordered dictionary:
The order of enumeration is always deterministic for a given set of keys by sorting. This is the case for tree-based implementations, one representative being the std::map container of C++.
The order of enumeration is key-independent and is instead based on the order of insertion. This is the case for the "ordered dictionary" in .NET Framework, the LinkedHashMap of Java, and the built-in dict of Python (which preserves insertion order since version 3.7).
The latter is more common. Such ordered dictionaries can be implemented using an association list, by overlaying a doubly linked list on top of a normal dictionary, or by moving the actual data out of the sparse (unordered) array and into a dense insertion-ordered one.
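The two senses are easy to contrast in Python, whose built-in dict has guaranteed insertion order since version 3.7 (a small illustration):

d = {}
d["b"] = 2
d["a"] = 1
d["c"] = 3
print(list(d))       # ['b', 'a', 'c']  (insertion order, key-independent)
print(sorted(d))     # ['a', 'b', 'c']  (sorted order, as a tree-based map yields)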
Language support
Associative arrays can be implemented in any programming language as a package and many language systems provide them as part of their standard library. In some languages, they are not only built into the standard system, but have special syntax, often using array-like subscripting.
Built-in syntactic support for associative arrays was introduced in 1969 by SNOBOL4, under the name "table". TMG offered tables with string keys and integer values. MUMPS made multi-dimensional associative arrays, optionally persistent, its key data structure. SETL supported them as one possible implementation of sets and maps. Most modern scripting languages, starting with AWK and including Rexx, Perl, PHP, Tcl, JavaScript, Maple, Python, Ruby, Wolfram Language, Go, and Lua, support associative arrays as a primary container type. In many more languages, they are available as library functions without special syntax.
In Smalltalk, Objective-C, .NET, Python, REALbasic, Swift, VBA and Delphi they are called dictionaries; in Perl, Ruby and Seed7 they are called hashes; in C++, C#, Java, Go, Clojure, Scala, OCaml, Haskell they are called maps (see map (C++) and unordered_map (C++)); in Common Lisp and Windows PowerShell, they are called hash tables (since both typically use this implementation); in Maple and Lua, they are called tables. In PHP and R, all arrays can be associative, except that the keys are limited to integers and strings. In JavaScript (see also JSON), all objects behave as associative arrays with string-valued keys, while the Map and WeakMap types take arbitrary objects as keys. In Lua, they are used as the primitive building block for all data structures. In Visual FoxPro, they are called Collections. The D language also supports associative arrays.
Permanent storage
Many programs using associative arrays will need to store that data in a more permanent form, such as a computer file. A common solution to this problem is a generalized concept known as archiving or serialization, which produces a text or binary representation of the original objects that can be written directly to a file. This is most commonly implemented in the underlying object model, like .Net or Cocoa, which includes standard functions that convert the internal data into text. The program can create a complete text representation of any group of objects by calling these methods, which are almost always already implemented in the base associative array class.
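In Python, for example, the standard json module fills this archiving role for associative arrays with string keys (a minimal round trip; the file name is illustrative):

import json

loans = {"Pride and Prejudice": "Alice", "Great Expectations": "John"}
text = json.dumps(loans)               # serialize to a text representation
with open("loans.json", "w") as f:     # write it to permanent storage
    f.write(text)
restored = json.loads(text)            # reconstruct the associative array
assert restored == loans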
For programs that use very large data sets, this sort of individual file storage is not appropriate, and a database management system (DB) is required. Some DB systems natively store associative arrays by serializing the data and then storing that serialized data and the key. Individual arrays can then be loaded or saved from the database using the key to refer to them. These key–value stores have been used for many years and have a history as long as that of the more common relational database (RDBs), but a lack of standardization, among other reasons, limited their use to certain niche roles. RDBs were used for these roles in most cases, although saving objects to a RDB can be complicated, a problem known as object-relational impedance mismatch.
After approximately 2010, the need for high-performance databases suitable for cloud computing and more closely matching the internal structure of the programs using them led to a renaissance in the key–value store market. These systems can store and retrieve associative arrays in a native fashion, which can greatly improve performance in common web-related workflows.
See also
Tuple
Function (mathematics)
References
External links
NIST's Dictionary of Algorithms and Data Structures: Associative Array
Abstract data types
Composite data types
Data types | Associative array | [
"Mathematics"
] | 3,011 | [
"Type theory",
"Mathematical structures",
"Abstract data types"
] |
19,190,191 | https://en.wikipedia.org/wiki/Assigned%20amount%20unit | An assigned amount unit was a tradable "Kyoto unit" or "carbon credit" representing an allowance to emit greenhouse gases comprising "one metric tonne of carbon dioxide equivalent, calculated using global warming potentials". Assigned amount units were issued up to the level of initial "assigned amount" of an Annex 1 Party to the Kyoto Protocol.
The "assigned amounts" were the Kyoto Protocol Annex B emission targets (or "quantified emission limitation and reduction objectives") expressed as levels of allowed emissions over the 2008–2012 commitment period.
Application
Article 17 of the Kyoto Protocol allowed emissions trading between Annex B Parties (countries). Parties that had "assigned amount units" to spare because of reductions in emissions below their Kyoto commitment set out in Article 3 and Annex B could sell those units to countries that had emissions exceeding their targets. Article 17 also required that any such emissions trading must be supplemental to domestic action for the purpose of meeting quantified emission limitation and reduction commitments.
See also
Certified Emission Reduction
Emission Reduction Unit
Removal Units
Voluntary Emissions Reduction
Flexible mechanisms
List of Kyoto Protocol signatories
References
Carbon finance
United Nations Framework Convention on Climate Change
Greenhouse gas emissions | Assigned amount unit | [
"Chemistry"
] | 234 | [
"Greenhouse gases",
"Greenhouse gas emissions"
] |
19,190,743 | https://en.wikipedia.org/wiki/Glycine%20cleavage%20system | The glycine cleavage system (GCS) is also known as the glycine decarboxylase complex or GDC. The system is a series of enzymes that are triggered in response to high concentrations of the amino acid glycine. The same set of enzymes is sometimes referred to as glycine synthase when it runs in the reverse direction to form glycine. The glycine cleavage system is composed of four proteins: the T-protein, P-protein, L-protein, and H-protein. They do not form a stable complex, so it is more appropriate to call it a "system" instead of a "complex". The H-protein is responsible for interacting with the three other proteins and acts as a shuttle for some of the intermediate products in glycine decarboxylation. In both animals and plants, the glycine cleavage system is loosely attached to the inner membrane of the mitochondria. Mutations in this enzymatic system are linked with glycine encephalopathy.
Components
Function
In plants, animals and bacteria the glycine cleavage system catalyzes the following reversible reaction:
Glycine + H4folate + NAD+ ↔ 5,10-methylene-H4folate + CO2 + NH3 + NADH + H+
In the enzymatic reaction, H-protein activates the P-protein, which catalyzes the decarboxylation of glycine and attaches the intermediate molecule to the H-protein to be shuttled to the T-protein. The H-protein forms a complex with the T-protein that uses tetrahydrofolate and yields ammonia and 5,10-methylenetetrahydrofolate. After interaction with the T-protein, the H-protein is left with two fully reduced thiol groups in the lipoate group. The glycine protein system is regenerated when the H-protein is oxidized to regenerate the disulfide bond in the active site by interaction with the L-protein, which reduces NAD+ to NADH and H+.
When coupled to serine hydroxymethyltransferase, the glycine cleavage system overall reaction becomes:
2 glycine + NAD+ + H2O → serine + CO2 + NH3 + NADH + H+
In humans and most vertebrates, the glycine cleavage system is part of the most prominent glycine and serine catabolism pathway. This is due in large part to the formation of 5,10-methylenetetrahydrofolate, which is one of the few C1 donors in biosynthesis. In this case the methyl group derived from the catabolism of glycine can be transferred to other key molecules such as purines and methionine.
This reaction, and by extension the glycine cleavage system, is required for photorespiration in C3 plants. The glycine cleavage system takes glycine, which is created from an unwanted byproduct of the Calvin cycle, and converts it to serine which can reenter the cycle. The ammonia generated by the glycine cleavage system is assimilated by the glutamine synthetase-glutamine oxoglutarate aminotransferase cycle, but costs the cell one ATP and one NADPH. The upside is that one CO2 is produced for every two O2 that are mistakenly taken up by the cell, generating some value in an otherwise energy-depleting cycle. Together the proteins involved in these reactions comprise about half the proteins in mitochondria from spinach and pea leaves. The glycine cleavage system is constantly present in the leaves of plants, but in small amounts until they are exposed to light. During peak photosynthesis, the concentration of the glycine cleavage system increases ten-fold.
In the anaerobic bacteria, Clostridium acidiurici, the glycine cleavage system runs mostly in the direction of glycine synthesis. While glycine synthesis through the cleavage system is possible due to the reversibility of the overall reaction, it is not readily seen in animals.
Clinical significance
Glycine encephalopathy, also known as non-ketotic hyperglycinemia (NKH), is a primary disorder of the glycine cleavage system, resulting from lowered function of the glycine cleavage system causing increased levels of glycine in body fluids. The disease was first clinically linked to the glycine cleavage system in 1969. Early studies showed high levels of glycine in blood, urine and cerebrospinal fluid. Initial research using carbon labeling showed decreased levels of CO2 and serine production in the liver, pointing directly to deficiencies in the glycine cleavage reaction. Further research has shown that deletions and mutations in the 5' region of the P-protein are the major genetic causes of nonketotic hyperglycinemia. In more rare cases, a missense mutation in the genetic code of the T-protein, causing the histidine in position 42 to be mutated to arginine, was also found to result in nonketotic hyperglycinemia. This specific mutation directly affected the active site of the T-protein, causing lowered efficiency of the glycine cleavage system.
See also
dihydrolipoamide dehydrogenase
lipoic acid
glycine encephalopathy
References
Cellular respiration
NADH-dependent enzymes
Enzymes of unknown structure
"Chemistry",
"Biology"
] | 1,166 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
19,192,069 | https://en.wikipedia.org/wiki/STRI%20Group | STRI, formerly the Sports Turf Research Institute, is a consultancy for the development of sports surfaces, based in St Ives, Bingley, West Yorkshire, England, providing advice on the research, design, construction and management of both natural and artificial sports fields of play around the world.
History
STRI was established in the UK in 1929 in response to The Royal and Ancient Golf Club of St Andrews wanting improved greens. Originally, the new outfit rented rooms in St Ives mansion, before moving out into new buildings on the same estate. The institute now operates globally out of three research and design hubs in the United Kingdom, Qatar and the Redlands Research Station in Queensland, Australia, servicing over 2,000 clients annually. STRI clients include sports venues, international tournaments, sports governing bodies, sports club owners and facilities managers, local authorities and schools. They provide advice and consultancy to the All England Club for each year's Wimbledon Championships, and have historically been advisors to the FIFA football World Cup.
In June 1961, Prince Philip became the patron of the institute.
STRI capabilities include R&D, design, consultancy and sustainability disciplines. The headquarters of the STRI is in St Ives, near Bingley in West Yorkshire, where it has facilities dedicated to turf research. In 2019, a new office was opened in Hong Kong, which is tied into the Chinese government's drive to build 60,000 sports pitches.
Specialities
Research & Development, Sports Surfaces Design & Construction, Product Testing & Material Analysis, Stadia Pitch Design and Management, Agronomy & Ecology, Sportsturf Consultancy, Planning, Drainage & Irrigation, Aviation, Environment, Green Spaces, Training.
From 2014 through to 2018, the STRI advised the Commonwealth War Graves Commission on turf-related matters in the run-up to the 100-year commemorations of the First World War. This included over 23,000 locations in 153 countries.
Notable sporting events
Wimbledon (1990–present)
FIFA World Cup, 2010
London Olympics 2012
References
Companies based in the City of Bradford
Grasses
Research institutes in West Yorkshire
1929 establishments in the United Kingdom
Research institutes in the United Kingdom | STRI Group | [
"Chemistry"
] | 423 | [
"Synthetic materials",
"Artificial turf"
] |
14,149,048 | https://en.wikipedia.org/wiki/FLI1 | Friend leukemia integration 1 transcription factor (FLI1), also known as transcription factor ERGB, is a protein that in humans is encoded by the FLI1 gene, which is a proto-oncogene.
Function
Fli-1 is a member of the ETS transcription factor family that was first identified in erythroleukemias induced by Friend Murine Leukemia Virus (F-MuLV). Fli-1 is activated through retroviral insertional mutagenesis in 90% of F-MuLV-induced erythroleukemias. The constitutive activation of fli-1 in erythroblasts leads to a dramatic shift in the Epo/Epo-R signal transduction pathway, blocking erythroid differentiation, activating the Ras pathway, and resulting in massive Epo-independent proliferation of erythroblasts. These results suggest that Fli-1 overexpression in erythroblasts alters their responsiveness to Epo and triggers abnormal proliferation by switching the signaling event(s) associated with terminal differentiation to proliferation.
Clinical significance
In addition to Friend erythroleukemia, proviral integration at the fli-1 locus also occurs in leukemias induced by the 10A1, Graffi, and Cas-Br-E viruses. Fli-1 aberrant expression is also associated with chromosomal abnormalities in humans. In pediatric Ewing's sarcoma, a chromosomal translocation generates a fusion of the 5' transactivation domain of EWSR1 (also known as EWS) with the 3' Ets domain of Fli-1. The resulting fusion oncoprotein, EWS/Fli-1, acts as an aberrant transcriptional activator with strong transforming capabilities. EWS/Fli-1 may regulate clinically important genes via interaction with enhancer-like GGAA-microsatellites. The importance of Fli-1 in the development of human leukemia, such as acute myelogenous leukemia (AML), has been demonstrated in studies of translocation involving the Tel transcription factor, which interacts with Fli-1 through protein-protein interactions. A recent study has demonstrated high levels of Fli-1 expression in several benign and malignant neoplasms using immunohistochemistry.
A possible association with Paris-Trousseau syndrome has been suggested.
References
Further reading
External links
Transcription factors | FLI1 | [
"Chemistry",
"Biology"
] | 524 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,149,235 | https://en.wikipedia.org/wiki/Energy%20profile%20%28chemistry%29 | In theoretical chemistry, an energy profile is a theoretical representation of a chemical reaction or process as a single energetic pathway as the reactants are transformed into products. This pathway runs along the reaction coordinate, which is a parametric curve that follows the pathway of the reaction and indicates its progress; thus, energy profiles are also called reaction coordinate diagrams. They are derived from the corresponding potential energy surface (PES), which is used in computational chemistry to model chemical reactions by relating the energy of a molecule(s) to its structure (within the Born–Oppenheimer approximation).
Qualitatively, the reaction coordinate diagrams (one-dimensional energy surfaces) have numerous applications. Chemists use reaction coordinate diagrams as both an analytical and pedagogical aid for rationalizing and illustrating kinetic and thermodynamic events. The purpose of energy profiles and surfaces is to provide a qualitative representation of how potential energy varies with molecular motion for a given reaction or process.
Potential energy surfaces
In simplest terms, a potential energy surface or PES is a mathematical or graphical representation of the relation between the energy of a molecule and its geometry. The methods for describing the potential energy are broken down into a classical mechanics interpretation (molecular mechanics) and a quantum mechanical interpretation. In the quantum mechanical interpretation an exact expression for energy can be obtained for any molecule derived from quantum principles (although an infinite basis set may be required) but ab initio calculations/methods will often use approximations to reduce computational cost. Molecular mechanics is empirically based and potential energy is described as a function of component terms that correspond to individual potential functions such as torsion, stretches, bends, Van der Waals energies, electrostatics and cross terms. Each component potential function is fit to experimental data or properties predicted by ab initio calculations. Molecular mechanics is useful in predicting equilibrium geometries and transition states as well as relative conformational stability. As a reaction occurs the atoms of the molecules involved will generally undergo some change in spatial orientation through internal motion as well as in their electronic environment. Distortions in the geometric parameters result in a deviation from the equilibrium geometry (local energy minima). These changes in geometry of a molecule or interactions between molecules are dynamic processes which call for understanding all the forces operating within the system. Since these forces can be mathematically derived as the first derivative of potential energy with respect to a displacement, it makes sense to map the potential energy of the system as a function of geometric parameters q1, q2, and so on. The potential energy at given values of the geometric parameters (q1, q2, ..., qn) is represented as a hyper-surface (when n > 2) or a surface (when n ≤ 2). Mathematically, it can be written as E = f(q1, q2, ..., qn).
For the quantum mechanical interpretation, a PES is typically defined within the Born–Oppenheimer approximation (in order to distinguish between nuclear and electronic motion and energy) which states that the nuclei are stationary relative to the electrons. In other words, the approximation allows the kinetic energy of the nuclei (or movement of the nuclei) to be neglected and therefore the nuclei repulsion is a constant value (as static point charges) and is only considered when calculating the total energy of the system. The electronic energy is then taken to depend parametrically on the nuclear coordinates, meaning a new electronic energy () must be calculated for each corresponding atomic configuration.
PES is an important concept in computational chemistry and greatly aids in geometry and transition state optimization.
Degrees of freedom
An N-atom system is defined by 3N coordinates: x, y, z for each atom. These 3N degrees of freedom can be broken down to include 3 overall translational and 3 (or 2) overall rotational degrees of freedom for a non-linear system (for a linear system). However, overall translational or rotational degrees do not affect the potential energy of the system, which only depends on its internal coordinates. Thus an N-atom system will be defined by 3N − 6 (non-linear) or 3N − 5 (linear) coordinates. These internal coordinates may be represented by simple stretch, bend, torsion coordinates, or symmetry-adapted linear combinations, or redundant coordinates, or normal modes coordinates, etc. For a system described by N internal coordinates, a separate potential energy function can be written with respect to each of these coordinates by holding the other parameters at a constant value, allowing the potential energy contribution from a particular molecular motion (or interaction) to be monitored while the other parameters are defined.
Consider a diatomic molecule AB which can be macroscopically visualized as two balls (which depict the two atoms A and B) connected through a spring which depicts the bond. As this spring (or bond) is stretched or compressed, the potential energy of the ball-spring system (AB molecule) changes and this can be mapped on a 2-dimensional plot as a function of the distance between A and B, i.e. the bond length.
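A minimal numerical sketch of this ball-and-spring picture, assuming a harmonic potential (a real bond is better described by a Morse curve, but the harmonic form suffices near equilibrium); the force constant and equilibrium length below are illustrative values, not data for any particular molecule:

```python
# One-dimensional PES of a diatomic modeled as a harmonic spring:
# E(r) = 0.5 * k * (r - r0)^2, rising on either stretch or compression.
k_force = 500.0   # force constant in N/m (assumed, single-bond order of magnitude)
r0 = 1.0e-10      # equilibrium bond length in m (assumed, ~1 angstrom)

def potential(r: float) -> float:
    """Potential energy (J) of the AB 'bond' at separation r (m)."""
    return 0.5 * k_force * (r - r0) ** 2

# Sample the curve around equilibrium; the minimum sits at r = r0.
for dr in (-0.2e-10, -0.1e-10, 0.0, 0.1e-10, 0.2e-10):
    r = r0 + dr
    print(f"r = {r * 1e10:.1f} angstrom   E = {potential(r):.2e} J")
```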
The concept can be expanded to a tri-atomic molecule such as water, where we have two bond lengths and a bond angle as variables on which the potential energy of a water molecule will depend. We can safely assume the two bonds to be equal. Thus, a PES can be drawn mapping the potential energy E of a water molecule as a function of two geometric parameters, bond length and bond angle. The lowest point on such a PES will define the equilibrium structure of a water molecule.
The same concept is applied to organic compounds like ethane, butane etc. to define their lowest energy and most stable conformations.
Characterizing a PES
The most important points on a PES are the stationary points where the surface is flat, i.e. parallel to a horizontal line corresponding to one geometric parameter, a plane corresponding to two such parameters or even a hyper-plane corresponding to more than two geometric parameters. The energy values corresponding to the transition states and the ground state of the reactants and products can be found using the potential energy function by calculating the function's critical points or the stationary points. Stationary points occur when the first partial derivative of the energy with respect to each geometric parameter is equal to zero: ∂E/∂q = 0.
Using analytical derivatives of the derived expression for energy, one can find and characterize a stationary point as a minimum, maximum or saddle point. The ground states are represented by local energy minima and the transition states by saddle points.
Minima represent stable or quasi-stable species, i.e. reactants and products with a finite lifetime. Mathematically, a minimum point is given as ∂E/∂q = 0 and ∂²E/∂q² > 0.
A point may be a local minimum when it is lower in energy compared to its surroundings only, or a global minimum, which is the lowest energy point on the entire potential energy surface.
A saddle point represents a maximum along only one direction (that of the reaction coordinate) and is a minimum along all other directions. In other words, a saddle point represents a transition state along the reaction coordinate. Mathematically, a saddle point occurs when ∂E/∂q = 0, with ∂²E/∂q² > 0 for all q except along the reaction coordinate, and ∂²E/∂q² < 0 along the reaction coordinate.
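A short numerical sketch of this classification on a one-dimensional model function; the double-well quartic below is an arbitrary stand-in for a real PES, its two minima playing the role of reactant and product and its central maximum playing the role of the transition state (on a true multi-dimensional surface that point would be a saddle rather than a simple maximum):

```python
# Classify stationary points of a model 1-D "surface" by the sign of the
# second derivative, taken numerically by central differences.
def energy(q: float) -> float:
    return q**4 - 2.0 * q**2        # stationary points at q = -1, 0, +1

def first_deriv(f, q: float, h: float = 1e-5) -> float:
    return (f(q + h) - f(q - h)) / (2.0 * h)

def second_deriv(f, q: float, h: float = 1e-5) -> float:
    return (f(q + h) - 2.0 * f(q) + f(q - h)) / h**2

for q in (-1.0, 0.0, 1.0):
    curvature = second_deriv(energy, q)
    kind = "minimum (stable species)" if curvature > 0 else "maximum (transition state)"
    print(f"q = {q:+.1f}: dE/dq = {first_deriv(energy, q):+.1e}, "
          f"d2E/dq2 = {curvature:+.2f} -> {kind}")
```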
Reaction coordinate diagrams
The intrinsic reaction coordinate (IRC), derived from the potential energy surface, is a parametric curve that connects two energy minima in the direction that traverses the minimum energy barrier (or shallowest ascent) passing through one or more saddle point(s). However, in reality if the reacting species attains enough energy it may deviate from the IRC to some extent. The energy values (points on the hyper-surface) along the reaction coordinate result in a 1-D energy surface (a line) and when plotted against the reaction coordinate (energy vs reaction coordinate) give what is called a reaction coordinate diagram (or energy profile). Another way of visualizing an energy profile is as a cross section of the hyper-surface, or surface, taken along the reaction coordinate. Figure 5 shows an example of a cross section, represented by the plane, taken along the reaction coordinate, in which the potential energy is represented as a function or composite of two geometric variables to form a 2-D energy surface. In principle, the potential energy function can depend on N variables but since an accurate visual representation of a function of 3 or more variables cannot be produced (excluding level hypersurfaces) a 2-D surface has been shown. The points on the surface that intersect the plane are then projected onto the reaction coordinate diagram (shown on the right) to produce a 1-D slice of the surface along the IRC. The reaction coordinate is described by its parameters, which are frequently given as a composite of several geometric parameters, and can change direction as the reaction progresses so long as the smallest energy barrier (or activation energy (Ea)) is traversed. The saddle point represents the highest energy point lying on the reaction coordinate connecting the reactant and product; this is known as the transition state. A reaction coordinate diagram may also have one or more transient intermediates which are shown by high energy wells connected via a transition state peak. Any chemical structure that lasts longer than the time for typical bond vibrations (10⁻¹³–10⁻¹⁴ s) can be considered an intermediate.
A reaction involving more than one elementary step has one or more intermediates being formed which, in turn, means there is more than one energy barrier to overcome. In other words, there is more than one transition state lying on the reaction pathway. As it is intuitive that pushing over an energy barrier or passing through a transition state peak would entail the highest energy, it becomes clear that it would be the slowest step in a reaction pathway. However, when more than one such barrier is to be crossed, it becomes important to recognize the highest barrier, which will determine the rate of the reaction. This step of the reaction, whose rate determines the overall rate of reaction, is known as the rate-determining step or rate-limiting step. The height of the energy barrier is always measured relative to the energy of the reactant or starting material. Different possibilities have been shown in figure 6.
Reaction coordinate diagrams also give information about the equilibrium between a reactant or a product and an intermediate. If the barrier energy for going from intermediate to product is much higher than the one for the reactant-to-intermediate transition, it can be safely concluded that a complete equilibrium is established between the reactant and intermediate. However, if the two energy barriers for the reactant-to-intermediate and intermediate-to-product transformations are nearly equal, then no complete equilibrium is established and the steady state approximation is invoked to derive the kinetic rate expressions for such a reaction.
Drawing a reaction coordinate diagram
Although a reaction coordinate diagram is essentially derived from a potential energy surface, it is not always feasible to draw one from a PES. A chemist draws a reaction coordinate diagram for a reaction based on the knowledge of the free energy or enthalpy change associated with the transformation, which helps to place the reactant and product into perspective and to determine whether any intermediate is formed or not. One guideline for drawing diagrams for complex reactions is the principle of least motion, which says that a favored reaction proceeding from a reactant to an intermediate, or from one intermediate to another or to a product, is one which has the least change in nuclear position or electronic configuration. Thus, it can be said that reactions involving dramatic changes in the position of nuclei actually occur through a series of simple chemical reactions. The Hammond postulate is another tool which assists in drawing the energy of a transition state relative to a reactant, an intermediate or a product. It states that the transition state resembles the reactant, intermediate or product that it is closest to in energy, as long as the energy difference between the transition state and the adjacent structure is not too large. This postulate helps to accurately predict the shape of a reaction coordinate diagram and also gives an insight into the molecular structure at the transition state.
Kinetic and thermodynamic considerations
A chemical reaction can be defined by two important parameters: the Gibbs free energy associated with a chemical transformation and the rate of such a transformation. These parameters are independent of each other. While the free energy change describes the stability of products relative to reactants, the rate of any reaction is defined by the energy of the transition state relative to the starting material. Depending on these parameters, a reaction can be favorable or unfavorable, fast or slow and reversible or irreversible, as shown in figure 8.
A favorable reaction is one in which the change in free energy ∆G° is negative (exergonic) or, in other words, the free energy of the product, G°product, is less than the free energy of the starting materials, G°reactant. ∆G° > 0 (endergonic) corresponds to an unfavorable reaction. The ∆G° can be written as a function of the change in enthalpy (∆H°) and the change in entropy (∆S°) as ∆G° = ∆H° – T∆S°. Practically, enthalpies, not free energies, are used to determine whether a reaction is favorable or unfavorable, because ∆H° is easier to measure and T∆S° is usually too small to be of any significance (for T < 100 °C). A reaction with ∆H° < 0 is called an exothermic reaction while one with ∆H° > 0 is endothermic.
The relative stability of reactant and product does not define the feasibility of any reaction all by itself. For any reaction to proceed, the starting material must have enough energy to cross over an energy barrier. This energy barrier is known as the activation energy (∆G≠) and the rate of reaction is dependent on the height of this barrier. A low energy barrier corresponds to a fast reaction and a high energy barrier corresponds to a slow reaction.
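One standard way to put numbers on this barrier-height/rate relationship (not given in the text above) is the Arrhenius equation, k = A·exp(−Ea/RT); the sketch below assumes an arbitrary pre-exponential factor A and room temperature:

```python
# Rate constants for three assumed barrier heights via the Arrhenius
# equation k = A * exp(-Ea / (R * T)). Every 20 kJ/mol added to the
# barrier slows the reaction by roughly three orders of magnitude.
import math

R = 8.314        # gas constant, J/(mol*K)
T = 298.15       # temperature, K (25 C)
A = 1.0e13       # assumed pre-exponential factor, 1/s

for ea_kj_mol in (40.0, 60.0, 80.0):
    k = A * math.exp(-ea_kj_mol * 1000.0 / (R * T))
    print(f"Ea = {ea_kj_mol:.0f} kJ/mol -> k = {k:.2e} 1/s")
```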
A reaction is in equilibrium when the rate of forward reaction is equal to the rate of reverse reaction. Such a reaction is said to be reversible. If the starting material and product(s) are in equilibrium then their relative abundance is decided by the difference in free energy between them. In principle, all elementary steps are reversible, but in many cases the equilibrium lies so much towards the product side that the starting material is effectively no longer observable or present in sufficient concentration to have an effect on reactivity. Practically speaking, the reaction is considered to be irreversible.
While most reversible processes will have a reasonably small K of 10³ or less, this is not a hard and fast rule, and a number of chemical processes require reversibility of even very favorable reactions. For instance, the reaction of a carboxylic acid with amines to form a salt takes place with a K of 10⁵–10⁶, and at ordinary temperatures, this process is regarded as irreversible. Yet, with sufficient heating, the reverse reaction takes place to allow formation of the tetrahedral intermediate and, ultimately, amide and water. (For an extreme example requiring reversibility of a step with K > 10¹¹, see demethylation.) A reaction can also be rendered irreversible if a subsequent, faster step takes place to consume the initial product(s), or a gas is evolved in an open system. Thus, there is no value of K that serves as a "dividing line" between reversible and irreversible processes. Instead, reversibility depends on timescale, temperature, the reaction conditions, and the overall energy landscape.
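To relate these K values to the free-energy picture above, one can use the standard relation ∆G° = −RT ln K (equivalently K = exp(−∆G°/RT)); a small sketch, assuming 25 °C:

```python
# Free-energy differences corresponding to the equilibrium constants
# quoted above, via delta_G = -R * T * ln(K).
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # temperature, K

for K in (1e3, 1e5, 1e11):
    delta_g_kj = -R * T * math.log(K) / 1000.0
    print(f"K = {K:.0e} -> delta_G = {delta_g_kj:.1f} kJ/mol")
# K = 1e3 corresponds to about -17 kJ/mol; K = 1e11 to about -63 kJ/mol.
```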
When a reactant can form two different products depending on the reaction conditions, it becomes important to choose the right conditions to favor the desired product. If a reaction is carried out at a relatively low temperature, then the product formed is the one lying across the smaller energy barrier. This is called kinetic control and the ratio of the products formed depends on the relative energy barriers leading to the products. Relative stabilities of the products do not matter. However, at higher temperatures the molecules have enough energy to cross over both energy barriers leading to the products. In such a case, the product ratio is determined solely by the energies of the products, and the energies of the barriers do not matter. This is known as thermodynamic control and it can only be achieved when the products can inter-convert and equilibrate under the reaction conditions. A reaction coordinate diagram can also be used to qualitatively illustrate kinetic and thermodynamic control in a reaction.
Applications
Following are a few examples of how to interpret reaction coordinate diagrams and use them in analyzing reactions.
Solvent Effect: In general, if the transition state for the rate-determining step corresponds to a more charged species relative to the starting material, then increasing the polarity of the solvent will increase the rate of the reaction, since a more polar solvent will be more effective at stabilizing the transition state (ΔG‡ would decrease). If the transition state structure corresponds to a less charged species, then increasing the solvent's polarity would decrease the reaction rate, since a more polar solvent would be more effective at stabilizing the starting material (ΔG° would decrease, which in turn increases ΔG‡).
SN1 vs SN2
The SN1 and SN2 mechanisms are used as an example to demonstrate how solvent effects can be indicated in reaction coordinate diagrams.
SN1: Figure 10 shows the rate-determining step for an SN1 mechanism, formation of the carbocation intermediate, and the corresponding reaction coordinate diagram. For an SN1 mechanism the transition state structure shows a partial charge density relative to the neutral ground state structure. Therefore, increasing the solvent polarity, for example from hexanes (shown as blue) to ether (shown in red), would increase the rate of the reaction. As shown in figure 9, the starting material has approximately the same stability in both solvents (therefore ΔΔG° = ΔG°polar – ΔG°non-polar is small) and the transition state is stabilized more in ether, meaning ΔΔG≠ = ΔG≠polar – ΔG≠non-polar is large.
SN2: For an SN2 mechanism, a strongly basic nucleophile (i.e. a charged nucleophile) is favorable. In figure 11 below, the rate-determining step for the Williamson ether synthesis is shown. The starting material is methyl chloride and an ethoxide ion, which has a localized negative charge, meaning it is more stable in polar solvents. The figure shows a transition state structure as the methyl chloride undergoes nucleophilic attack. In the transition state structure the charge is distributed between the Cl and the O atoms, and the more polar solvent is less effective at stabilizing the transition state structure relative to the starting materials. In other words, the energy difference between the polar and non-polar solvent is greater for the ground state (for the starting material) than for the transition state.
Catalysts: There are two types of catalysts, positive and negative. Positive catalysts increase the reaction rate and negative catalysts (or inhibitors) slow down a reaction and possibly cause the reaction not to occur at all. The purpose of a catalyst is to alter the activation energy. Figure 12 illustrates the purpose of a catalyst in that only the activation energy is changed and not the relative thermodynamic stabilities, shown in the figure as ΔH, of the products and reactants. This means that a catalyst will not alter the equilibrium concentrations of the products and reactants but will only allow the reaction to reach equilibrium faster. Figure 13 shows the catalyzed pathway occurring in multiple steps, which is a more realistic depiction of a catalyzed process. The new catalyzed pathway can occur through the same mechanism as the uncatalyzed reaction or through an alternate mechanism. An enzyme is a biological catalyst that increases the rate for many vital biochemical reactions. Figure 13 shows a common way to illustrate the effect of an enzyme on a given biochemical reaction.
See also
Gibbs free energy
Enthalpy
Entropy
Computational chemistry
Molecular mechanics
Born–Oppenheimer approximation
References
Computational chemistry | Energy profile (chemistry) | [
"Chemistry"
] | 4,011 | [
"Theoretical chemistry",
"Computational chemistry"
] |
14,151,698 | https://en.wikipedia.org/wiki/ASIX | ASIX Electronics Corp. () is a fabless semiconductor supplier with a focus on networking, communication, and connectivity applications. ASIX Electronics specializes in Ethernet-centric silicon products such as non-PCI Ethernet controller, USB 2.0 to LAN controller, and network SoC for embedded networking applications.
Corporate history
ASIX was founded in May 1995 in Hsinchu Science Park, Taiwan. In 2002, ASIX announced its first USB to MII chip. In June 2007, electronicstalk.com featured the AX11005BF, billed as the industry's smallest single-chip embedded Ethernet MCU. Electronicstalk.com describes powering embedded systems in a machine-to-machine (M2M) world in reference to the AX110xx family of chips.
ASIX Electronics introduced the industry's first:
USB 3.0 to Gigabit Ethernet controller
Non-PCI/USB 2.0 Gigabit Ethernet controller
Single chip microcontroller with TCP/IP, 10/100 Mbit Fast Ethernet MAC/PHY, and flash
Industry's smallest single-chip embedded Ethernet MCU
ASIX Electronics saw its revenues jump 59.3% sequentially to NT$31.5 million (US$957,000) in December 2006 on shipments of USB-to-Ethernet controller ICs for Nintendo's Wii consoles, according to market sources.
ASIX is listed as a vendor in the 2007 EDN Microprocessor Directory.
ASIX Electronics Corp. announced it would acquire a 100 percent stake in Zywyn Corporation for US$8 million.
Products
The current offerings are as follows:
Non-PCI/PCMCIA embedded Ethernet
High-speed USB-to-LAN
Embedded network SoC
I/O connectivity
Embedded Wireless Modules
Wii LAN Adapter
ASIX manufactures the chipset in the Wii LAN Adapter. The Wii is equipped with Wi-Fi but does not include an Ethernet port; gamers can purchase a Wii LAN Adapter sold by Nintendo and other manufacturers to give Ethernet capability to the Wii.
See also
List of companies of Taiwan
List of system-on-a-chip suppliers
Network interface controller (NIC)
Semiconductor industry in Taiwan
References
Computer companies of Taiwan
Computer hardware companies
Taiwanese companies established in 1995
Semiconductor companies of Taiwan
Fabless semiconductor companies
Companies listed on the Taiwan Stock Exchange
Electronics companies established in 1995
Networking hardware companies
Networking companies | ASIX | [
"Technology"
] | 475 | [
"Computer hardware companies",
"Computers"
] |
14,154,981 | https://en.wikipedia.org/wiki/Focused%20impedance%20measurement | Focused Impedance Measurement (FIM) is a recent technique for quantifying the electrical resistance in tissues of the human body with improved zone localization compared to conventional methods. This method was proposed and developed by Department of Biomedical Physics and Technology of University of Dhaka under the supervision of Prof. Khondkar Siddique-e-Rabbani; who first introduced the idea. FIM can be considered a bridge between Four Electrode Impedance Measurement (FEIM) and Electrical impedance Tomography (EIT), and provides a middle ground in terms of simplicity and accuracy.
Many biological parameters and processes can be detected and monitored through their effects on bioimpedance. Bioimpedance measurement can be performed with a few simple instruments and non-invasively.
Measurement of electrical impedance to obtain physiological or diagnostic information has been of interest to researchers for many years. However, the human body is geometrically and conductively uneven, with variation between individuals and phases of normal body activity, and bioimpedance results from many factors, including ion concentrations, cell geometry, extra-cellular fluids, intra-cellular fluids, and organ geometry. This makes accurate analysis of results from a small number of electrodes difficult and unreliable. Identifying zones with specific impedances can provide greater certainty regarding the factors behind the impedance.
Conventional Four Electrode or Tetra-polar Impedance Measurement (TPIM) is simple, but the zone of sensitivity is not well defined and may include organs other than those of interest, making interpretation difficult and unreliable. On the other hand, Electrical impedance tomography (EIT) offers reasonable resolution, but is complex and requires many electrodes. By placing two FEIM systems perpendicular to each other over a common zone at the center and combining the results, it is possible to obtain enhanced sensitivity over this central zone. This is the basis of FIM, which may be useful for impedance measurements of large organs like the stomach, heart, and lungs. Because FIM is much simpler than EIT, multifrequency FIM systems can be built easily.
FIM may be useful in other fields where impedance measurements are performed, like geology.
See also
Impedance cardiography
References
Medical tests
Electrophysiology
Impedance measurements
Bangladeshi inventions | Focused impedance measurement | [
"Physics"
] | 461 | [
"Impedance measurements",
"Physical quantities",
"Electrical resistance and conductance"
] |
14,155,727 | https://en.wikipedia.org/wiki/Trusted%20timestamping | Trusted timestamping is the process of securely keeping track of the creation and modification time of a document. Security here means that no one—not even the owner of the document—should be able to change it once it has been recorded provided that the timestamper's integrity is never compromised.
The administrative aspect involves setting up a publicly available, trusted timestamp management infrastructure to collect, process and renew timestamps.
History
The idea of timestamping information is centuries old. For example, when Robert Hooke discovered Hooke's law in 1660, he did not want to publish it yet, but wanted to be able to claim priority. So he published the anagram ceiiinosssttuv and later published the translation ut tensio sic vis (Latin for "as is the extension, so is the force"). Similarly, Galileo first published his discovery of the phases of Venus in the anagram form.
Sir Isaac Newton, in responding to questions from Leibniz in a letter in 1677, concealed the details of his "fluxional technique" with an anagram:
The foundations of these operations is evident enough, in fact; but because I cannot proceed with the explanation of it now, I have preferred to conceal it thus: 6accdae13eff7i3l9n4o4qrr4s8t12ux. On this foundation I have also tried to simplify the theories which concern the squaring of curves, and I have arrived at certain general Theorems.
Trusted digital timestamping was first discussed in the literature by Stuart Haber and W. Scott Stornetta.
Classification
There are many timestamping schemes with different security goals:
PKI-based – timestamp token is protected using PKI digital signature.
Linking-based schemes – timestamp is generated in such a way that it is related to other timestamps.
Distributed schemes – timestamp is generated in cooperation of multiple parties.
Transient key scheme – variant of PKI with short-living signing keys.
MAC – simple secret key-based scheme, found in ANSI ASC X9.95 Standard.
Database – document hashes are stored in trusted archive; there is online lookup service for verification.
Hybrid schemes – the linked and signed method is prevailing, see X9.95.
For systematic classification and evaluation of timestamping schemes see works by Masashi Une.
Trusted (digital) timestamping
According to the RFC 3161 standard, a trusted timestamp is a timestamp issued by a Trusted Third Party (TTP) acting as a Time Stamping Authority (TSA). It is used to prove the existence of certain data before a certain point in time (e.g. contracts, research data, medical records, ...) without the possibility that the owner can backdate the timestamps. Multiple TSAs can be used to increase reliability and reduce vulnerability.
The newer ANSI ASC X9.95 Standard for trusted timestamps augments the RFC 3161 standard with data-level security requirements to ensure data integrity against a reliable time source that is provable to any third party. This standard has been applied to authenticating digitally signed data for regulatory compliance, financial transactions, and legal evidence.
Creating a timestamp
The technique is based on digital signatures and hash functions. First a hash is calculated from the data. A hash is a sort of digital fingerprint of the original data: a string of bits that is practically impossible to duplicate with any other set of data. If the original data is changed then this will result in a completely different hash. This hash is sent to the TSA. The TSA concatenates a timestamp to the hash and calculates the hash of this concatenation. This hash is in turn digitally signed with the private key of the TSA. This signed hash + the timestamp is sent back to the requester of the timestamp who stores these with the original data (see diagram).
Since the original data cannot be calculated from the hash (because the hash function is a one way function), the TSA never gets to see the original data, which allows the use of this method for confidential data.
Checking the timestamp
Anyone trusting the timestamper can then verify that the document was not created after the date that the timestamper vouches. It can also no longer be repudiated that the requester of the timestamp was in possession of the original data at the time given by the timestamp. To prove this (see diagram) the hash of the original data is calculated, the timestamp given by the TSA is appended to it and the hash of the result of this concatenation is calculated, call this hash A.
Then the digital signature of the TSA needs to be validated. This is done by decrypting the digital signature using the public key of the TSA, producing hash B. Hash A is then compared with hash B inside the signed TSA message to confirm they are equal, proving that the timestamp and message are unaltered and were issued by the TSA. If not, then either the timestamp was altered or the timestamp was not issued by the TSA.
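A minimal, self-contained sketch of this create-and-verify flow, using only the Python standard library; since a real RFC 3161 TSA signs with an asymmetric private key, the HMAC over a TSA-held secret below is only a stand-in for the digital signature, and the token field names are hypothetical:

```python
# Illustrative trusted-timestamping round trip: only the document hash is
# sent to the TSA, which binds it to a time value and "signs" the result.
import hashlib
import hmac
import time

TSA_SECRET = b"tsa-signing-key-stand-in"   # stand-in for the TSA's private key

def tsa_issue(doc_hash: str) -> dict:
    """TSA side: concatenate a timestamp to the hash, hash again, sign."""
    timestamp = str(int(time.time()))
    bound = hashlib.sha256((doc_hash + timestamp).encode()).hexdigest()
    signature = hmac.new(TSA_SECRET, bound.encode(), hashlib.sha256).hexdigest()
    return {"timestamp": timestamp, "signature": signature}

def request_timestamp(document: bytes) -> dict:
    """Requester side: the original data never leaves the requester."""
    return tsa_issue(hashlib.sha256(document).hexdigest())

def verify(document: bytes, token: dict) -> bool:
    """Verifier side: recompute hash A and compare it to the signed value."""
    doc_hash = hashlib.sha256(document).hexdigest()
    hash_a = hashlib.sha256((doc_hash + token["timestamp"]).encode()).hexdigest()
    expected = hmac.new(TSA_SECRET, hash_a.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

token = request_timestamp(b"contract text")
assert verify(b"contract text", token)          # unaltered data: passes
assert not verify(b"tampered contract", token)  # altered data: fails
```

Note that with a true PKI signature, verification would use the TSA's public key rather than the shared secret shown here.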
Decentralized timestamping on the blockchain
With the advent of cryptocurrencies like bitcoin, it has become possible to get some level of secure timestamp accuracy in a decentralized and tamper-proof manner. Digital data can be hashed and the hash can be incorporated into a transaction stored in the blockchain, which serves as evidence of the time at which that data existed. For proof of work blockchains, the security derives from the tremendous amount of computational effort performed after the hash was submitted to the blockchain. Tampering with the timestamp would require more computational resources than the rest of the network combined, and cannot be done unnoticed in an actively defended blockchain.
However, the design and implementation of Bitcoin in particular makes its timestamps vulnerable to some degree of manipulation, allowing timestamps up to two hours in the future, and accepting new blocks with timestamps earlier than the previous block.
The decentralized timestamping approach using the blockchain has also found applications in other areas, such as in dashboard cameras, to secure the integrity of video files at the time of their recording, or to prove priority for creative content and ideas shared on social media platforms.
See also
Timestamp
Timestamping (computing)
Certificate Transparency
Cryptography
Computer security
Digital signature
Digital Postmarks
Smart contract
CAdES – CMS Advanced Electronic Signature
PAdES – PDF Advanced Electronic Signature
XAdES – XML Advanced Electronic Signature
References
External links
Internet X.509 Public Key Infrastructure Time-Stamp Protocol (TSP)
Policy Requirements for Time-Stamping Authorities (TSAs)
Decentralized Trusted Timestamping (DTT) using the Crypto Currency Bitcoin
ANSI ASC X9.95 Standard for Trusted Time Stamps
ETSI TS 101 861 V1.4.1 Electronic Signatures and Infrastructures (ESI); Time stamping profile
ETSI TS 102 023 V1.2.2 Electronic Signatures and Infrastructures (ESI); Policy requirements for time-stamping authorities
Analysis of a Secure Time Stamp Device (2001) SANS Institute
Implementation of TSP Protocol CMSC 681 Project Report, Youyong Zou
Time
Authentication methods | Trusted timestamping | [
"Physics",
"Mathematics"
] | 1,539 | [
"Physical quantities",
"Time",
"Quantity",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
14,156,018 | https://en.wikipedia.org/wiki/FHL2 | Four and a half LIM domains protein 2 also known as FHL-2 is a protein that in humans is encoded by the FHL2 gene. LIM proteins contain a highly conserved double zinc finger motif called the LIM domain.
Function
FHL-2 is thought to have a role in the assembly of extracellular membranes and may function as a link between presenilin-2 and an intracellular signaling pathway.
Family
The Four-and-a-half LIM (FHL)-only protein subfamily is one of the members of the LIM-only protein family. Protein members within the group might have originated from a common ancestor and share a high degree of similarity in their amino acid sequence. These proteins are defined by the presence of the four and a half cysteine-rich LIM homeodomain with the half-domain always located in the N-terminus. The name LIM was derived from the first letters of the transcription factors LIN-11, ISL-1 and MEC-3, from which the domain was originally characterized. No direct interactions between the LIM domain and DNA have been reported. Instead, extensive evidence points towards the functional role of FHL2 in supporting protein-protein interactions of LIM-containing proteins and their binding partners. Thus far, five members have been categorized into the FHL subfamily, which are FHL1, FHL2, FHL3, FHL4 and activator of CREM in testis (ACT, also known as FHL5) in humans. FHL1, FHL2 and FHL3 are predominantly expressed in muscle, while FHL4 and FHL5 are expressed exclusively in testis.
Gene
FHL2 is the best studied member within the subfamily. The protein is encoded by the fhl2 gene, which has been mapped to the region of human chromosome 2q12–q14. Two alternative promoters, 1a and 1b, as well as 5 transcript variants of fhl2 have been reported.
Tissue distribution
FHL2 exhibits diverse expression patterns in a cell/tissue-specific manner; it has been found in liver, kidney, lung, ovary, pancreas, prostate, stomach, colon, cortex, and, in particular, the heart. However, its expression in some immune-related tissues like the spleen, thymus and blood leukocytes has not been documented. Intriguingly, FHL2 expression and function vary significantly between different types of cancer. Such discrepancies are most likely due to the existence of the wide variety of transcription factors governing FHL2 expression.
Regulation of expression
Different transcription factors that have been reported responsible for the regulation of fhl2 expression include the well-known tumor suppressor protein p53, serum response factor (SRF), specificity protein 1 (Sp1), the pleiotropic factor IL-1β, MEF-2, and activator protein-1 (AP-1). Apart from being regulated by different transcription factors, FHL2 is itself involved extensively in regulating the expression of other genes. FHL2 exerts its transcriptional regulatory effects by functioning as an adaptor protein interacting indirectly with the targeted genes. In fact, the LIM domain is a platform for the formation of multimeric protein complexes. Therefore, FHL2 can contribute to human carcinogenesis by interacting with transcription factors of cancer-related genes and modulating the signaling pathways underlying the expression of these genes. Different types of cancer are associated with FHL2, in which it acts either as a cancer suppressor or inducer, for example in breast cancer, gastrointestinal (GI) cancers, liver cancer and prostate cancer.
Clinical significance
The expression and functions of FHL2 vary greatly depending on the cancer type. It appears that this phenomenon is highly related to the differential mechanistic transcriptional regulation of FHL2 in the various types of cancer. However, the participation of fhl2 mutations and the posttranslational modifications of fhl2 in carcinogenesis cannot be ignored. In fact, a functional mutation of fhl2 has been identified in a patient with familial dilated cardiomyopathy (DCM) and is associated with its pathogenesis. This implies that fhl2 mutations may also profoundly affect diverse cancer progressions. However, records describing the effects of fhl2 mutations on carcinogenesis are scarce.
Phosphorylation of the FHL-2 protein has no significant effects on FHL2 functioning either in vitro or in vivo. Given that the existence of posttranslational modifications of FHL2 other than phosphorylation is still unclear and that FHL2 functions almost exclusively through protein-protein interactions, research in this direction remains of interest. In particular, the mechanisms underpinning the subcellular localization of FHL2 should be a focus. FHL2 can traffic freely between the nucleus and the different cellular compartments. It also interacts with other proteinaceous binding partners belonging to different functional classes including, but not limited to, transcription factors and signal transducers. Therefore, FHL2 translocation could be important in regulating the different molecular signaling pathways which modify carcinogenesis; for example, nuclear translocation of FHL2 is related to aggressiveness and recurrence of prostate cancer. Similar evidence has also been identified in experiments using A7FIL+ cells and the NIH 3T3 cell line as the disease model.
Breast cancer
The FHL2 protein interacts with the breast cancer type 1 susceptibility gene (BRCA1), which enhances the transactivation of BRCA1. In addition, the intratumoral FHL2 level was one of the factors determining the poorer survival of breast cancer patients.
Gastrointestinal cancer
FHL2 is related to gastrointestinal cancers and, in particular, colon cancer. Fhl2 demonstrates an oncogenic property in colon cancer, inducing the differentiation of some in vitro colon cancer models. FHL2 is also crucial to colon cancer cell invasion, migration and adhesion to the extracellular matrix. The expression of FHL2 is positively regulated by transforming growth factor beta 1 (TGF-β1) stimulation, which induces epithelial-mesenchymal transition (EMT) and endows cancer cells with metastatic properties. The TGF-β1-mediated alteration of FHL2 expression level might therefore trigger colon cell invasion. Besides, the subcellular localization of FHL2 can be modulated by TGF-β1 in sporadic colon cancer, which results in the polymerization of alpha smooth muscle actin (α-SMA). This process induces fibroblasts to take up a myofibroblast phenotype and contributes to cancer invasion. FHL2 can also induce EMT and cancer cell migration by affecting the structural integrity of the membrane-associated E-cadherin-β-catenin complex.
Liver cancer
In the most common form of liver cancer, hepatocellular carcinoma (HCC), FHL2 is always downregulated in clinical samples. Therefore, fhl2 exhibits a tumor-suppressive effect on HCC. Similar to p53, overexpression of FHL2 inhibits the proliferative activity of the HCC Hep3B cell line by decreasing its cyclin D1 expression and increasing P21 and P27 expression, supporting the time-dependent cellular repair process. Of note, a database of FHL2-regulated genes in murine liver has recently been established by using microarray and bioinformatics analysis, which provides useful information concerning most of the pathways and new genes related to FHL2.
Prostate cancer
The molecular communication between the androgen receptor (AR) and FHL2 is linked to prostate cancer disease development, such as aggressiveness and biochemical recurrence (i.e., a rise in circulatory prostate-specific antigen (PSA) levels after surgical or radiation treatment). FHL2 expression is profoundly initiated by androgen through the mediation of serum response factor (SRF) and the RhoA / actin / megakaryocytic acute leukemia (MAL) signaling axis functioning upstream of SRF. On the other hand, FHL2 is a coactivator of AR and is able to modulate AR signaling by altering the effect of the aryl hydrocarbon receptor (AhR) on AR activity, with as yet unknown mechanisms. Calpain cleavage of the cytoskeletal protein filamin, which is increased in prostate cancer, could induce the nuclear translocation of FHL2, and this subsequently increases AR coactivation.
Interactions
FHL2 has been shown to interact with:
Androgen receptor,
BRCA1,
CTNNB1,
CD18,
CD29,
CD49c,
CREB1,
EIF6,
FHL3,
IGFBP5,
ITGA7,
ITGB6,
MAPK1,
PSEN2,
TRAF6,
TTN,
ZNF638, and
ZBTB16.
Notes
References
Further reading
External links
Transcription factors | FHL2 | [
"Chemistry",
"Biology"
] | 1,861 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,156,407 | https://en.wikipedia.org/wiki/GATA2 | GATA2 or GATA-binding factor 2 is a transcription factor, i.e. a nuclear protein which regulates the expression of genes. It regulates many genes that are critical for the embryonic development, self-renewal, maintenance, and functionality of blood-forming, lymphatic system-forming, and other tissue-forming stem cells. GATA2 is encoded by the GATA2 gene, a gene which often suffers germline and somatic mutations which lead to a wide range of familial and sporadic diseases, respectively. The gene and its product are targets for the treatment of these diseases.
Inactivating mutations of the GATA2 gene cause a reduction in the cellular levels of GATA2 and the development of a wide range of familial hematological, immunological, lymphatic, and/or other disorders that are grouped together into a common disease termed GATA2 deficiency. Less commonly, these disorders are associated with non-familial (i.e. sporadic or acquired) GATA2 inactivating mutations. GATA2 deficiency often begins with seemingly benign abnormalities but if untreated progresses to life-threatening opportunistic infections, virus-induced cancers, lung failure, the myelodysplastic syndrome (i.e. MDS), and/or leukemia, principally acute myeloid leukemia (AML), less commonly chronic myelomonocytic leukemia (CMML), and rarely a lymphoid leukemia.
Overexpression of the GATA2 transcription factor that is not due to mutations in the GATA2 gene appears to be a secondary factor that promotes the aggressiveness of non-familial EVI1 positive AML as well as the progression of prostate cancer.
GATA2 gene
The GATA2 gene is a member of the evolutionarily conserved GATA transcription factor gene family. All vertebrate species tested so far, including humans and mice, express 6 GATA genes, GATA1 through GATA6. The human GATA2 gene is located on the long (or "q") arm of chromosome 3 at position 21.3 (i.e. the 3q21.3 locus) and consists of 8 exons. Two sites, termed C-ZnF and N-ZnF, of the gene code for two Zinc finger structural motifs of the GATA2 transcription factor. These sites are critical for regulating the ability of the transcription factor to stimulate its target genes.
The GATA2 gene has at least five separate sites which bind nuclear factors that regulate its expression. One particularly important such site is located in intron 4. This site, termed the 9.5 kb enhancer, is located 9.5 kilobases (i.e. kb) downstream from the gene's transcript initiation site and is a critically important enhancer of the gene's expression. Regulation of GATA2 expression is highly complex. For example, in hematological stem cells, the GATA2 transcription factor itself binds to one of these sites and in doing so is part of a functionally important positive feedback autoregulation circuit wherein the transcription factor acts to promote its own production; in a second example of a positive feedback circuit, GATA2 stimulates production of interleukin 1 beta and CXCL2, which act indirectly to stimulate GATA2 expression. In an example of a negative feedback circuit, the GATA2 transcription factor indirectly causes activation of the G protein coupled receptor, GPR65, which then acts, also indirectly, to repress GATA2 gene expression. In a second example of negative feedback, the GATA2 transcription factor stimulates the expression of the GATA1 transcription factor, which in turn can displace GATA2 transcription factor from its gene-stimulating binding sites, thereby limiting GATA2's actions.
The human GATA2 gene is expressed in hematological bone marrow cells at the stem cell and later progenitor cell stages of their development. Increases and/or decreases in the gene's expression regulate the self-renewal, survival, and progression of these immature cells toward their final mature forms viz., erythrocytes, certain types of lymphocytes (i.e. B cells, NK cells, and T helper cells), monocytes, neutrophils, platelets, plasmacytoid dendritic cells, macrophages and mast cells. The gene is likewise critical for the formation of the lymphatic system, particularly for the development of its valves. The human gene is also expressed in endothelium, some non-hematological stem cells, the central nervous system, and, to lesser extents, prostate, endometrium, and certain cancerous tissues.
The Gata2 gene in mice has a structure similar to its human counterpart. Deletion of both parental Gata2 genes in mice is lethal by day 10 of embryogenesis due to a total failure in the formation of mature blood cells. Inactivation of one mouse Gata2 gene is neither lethal nor associated with most of the signs of human GATA2 deficiency; however, these animals do show a ~50% reduction in their hematopoietic stem cells along with a reduced ability to repopulate the bone marrow of mouse recipients. The latter findings, human clinical studies, and experiments on human tissues support the conclusion that in humans both parental GATA2 genes are required for sufficient numbers of hematopoietic stem cells to emerge from the hemogenic endothelium during embryogenesis and for these cells and subsequent progenitor cells to survive, self-renew, and differentiate into mature cells. As GATA2 deficient individuals age, their deficiency in hematopoietic stem cells worsens, probably as a result of factors such as infections or other stresses. In consequence, the signs and symptoms of their disease appear and/or become progressively more severe. The role of GATA2 deficiency in leading to any of the leukemia types is not understood. Likewise, the role of GATA2 overexpression in non-familial AML as well as in the development of the blast crisis in chronic myelogenous leukemia and the progression of prostate cancer is not understood.
Mutations
Scores of different types of inactivating GATA2 mutations have been associated with GATA2 deficiency; these include frameshift, point, insertion, splice site and deletion mutations scattered throughout the gene but concentrated in the region encoding the GATA2 transcription factor's C-ZnF, N-ZnF, and 9.5 kb sites. Rare cases of GATA2 deficiency involve large mutational deletions that include the 3q21.3 locus plus contiguous adjacent genes; these mutations seem more likely than other types of GATA2 mutations to cause increased susceptibilities to viral infections, developmental lymphatic disorders, and neurological disturbances.
One GATA2 mutation is a gain of function type, i.e. it is associated with an increase in the activity rather than the levels of GATA2. This mutation substitutes valine for leucine at amino acid position 359 (i.e. within the N-ZnF site) of the transcription factor and has been detected in individuals undergoing the blast crisis of chronic myelogenous leukemia.
Pathological inhibition
Analyses of individuals with AML have discovered many cases of GATA2 deficiency in which one parental GATA2 gene was not mutated but silenced by hypermethylation of its gene promoter. Further studies are required to integrate this hypermethylation-induced form of GATA2 deficiency into the diagnostic category of GATA2 deficiency.
Pathological stimulation
Non-mutational stimulation of GATA2 expression and the consequential aggressiveness in EVI1-positive AML appears to be due to the ability of EVI1, a transcription factor, to directly stimulate the expression of the GATA2 gene. The reason for the overexpression of GATA2 that begins in the early stages of prostate cancer is unclear but may involve the ability of FOXA1 to act indirectly to stimulate the expression of the GATA2 gene.
GATA2
The full length GATA2 transcription factor is a moderately sized protein consisting of 480 amino acids. Of its two zinc fingers, C-ZnF (located toward the protein's C-terminus) is responsible for binding to specific DNA sites while its N-ZnF (located toward the protein's N-terminus) is responsible for interacting with various other nuclear proteins that regulate its activity. The transcription factor also contains two transactivation domains and one negative regulatory domain which interact with other nuclear proteins to up-regulate and down-regulate, respectively, its activity. In promoting embryonic and/or adult-type haematopoiesis (i.e. maturation of hematological and immunological cells), GATA2 interacts with other transcription factors (viz., RUNX1, SCL/TAL1, GFI1, GFI1b, MYB, IKZF1, Transcription factor PU.1, LYL1) and cellular receptors (viz., MPL, GPR56). In a wide range of tissues, GATA2 similarly interacts with HDAC3, LMO2, POU1F1, POU5F1, PML, SPI1, and ZBTB16.
GATA2 binds to a specific nucleic acid sequence, viz. (T/A(GATA)A/G), on the promoter and enhancer sites of its target genes and in doing so either stimulates or suppresses the expression of these target genes. However, there are thousands of sites in human DNA with this nucleotide sequence but, for unknown reasons, GATA2 binds to <1% of these. Furthermore, all members of the GATA transcription factor family bind to this same nucleotide sequence and in doing so may in certain instances serve to interfere with GATA2 binding or even displace GATA2 that is already bound to these sites. For example, displacement of GATA2 bound to this sequence by the GATA1 transcription factor appears important for the normal development of some types of hematological stem cells. This displacement phenomenon is termed the "GATA switch". In all events, the actions of GATA2, particularly with reference to its interactions with many other gene-regulating factors, in controlling its target genes are extremely complex and not fully understood.
GATA2-related disorders
Inactivating GATA2 mutations
Familial and sporadic inactivating mutations in one of the two parental GATA2 genes causes a reduction, i.e. a haploinsufficiency, in the cellular levels of the GATA2 transcription factor. In consequence, individuals commonly develop a disease termed GATA2 deficiency. GATA2 deficiency is a grouping of various clinical presentations in which GATA2 haploinsufficiency results in the development over time of hematological, immunological, lymphatic, and/or other presentations that may begin as apparently benign abnormalities but commonly progress to life-threatening opportunistic infections, virus infection-induced cancers, the myelodysplastic syndrome, and/or leukemias, particularly AML. The various presentations of GATA2 deficiency include all cases of Monocytopenia and Mycobacterium Avium Complex/Dendritic Cell Monocyte, B and NK Lymphocyte deficiency (i.e. MonoMAC) and the Emberger syndrome as well as a significant percentage of cases of familial myelodysplastic syndrome/acute myeloid leukemia, congenital neutropenia, chronic myelomonocytic leukemia, aplastic anemia, and several other presentations.
Activating GATA2 mutation
The L359V gain of function mutation (see above section on mutation) increases the activity of the GATA2 transcription factor. The mutation occurs during the blast crisis of chronic myelogenous leukemia and is proposed to play a role in the transformation of the chronic and/or accelerated phases of this disease to its blast crisis phase.
Repression of GATA2
The repression of GATA2 expression due to methylation of promoter sites in the GATA2 gene rather than a mutation in this gene has been suggested to be an alternate cause for the GATA2 deficiency syndrome. This epigenetic gene silencing also occurs in certain types of non-small-cell lung carcinoma and is suggested to have a protective effect on progression of the disease.
Overexpression of GATA2
Elevated levels of the GATA2 transcription factor, due to overexpression of its gene GATA2, are a common finding in AML. This overexpression is associated with a poor prognosis, appears to promote progression of the disease, and is therefore proposed to be a target for therapeutic intervention. It is not due to mutation but rather is caused, at least in part, by overexpression of EVI1, a transcription factor that stimulates GATA2 expression. GATA2 overexpression also occurs in prostate cancer, where it appears to increase metastasis in the early stages of androgen-dependent disease and, in androgen-independent (i.e. castration-resistant) disease, to stimulate prostate cancer cell survival and proliferation by activating the androgen pathway through an unknown mechanism.
See also
GATA transcription factor
GATA2 deficiency
MonoMAC
Emberger syndrome
References
Further reading
External links
Transcription factors | GATA2 | [
"Chemistry",
"Biology"
] | 2,805 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,156,439 | https://en.wikipedia.org/wiki/Martinez%20beavers | The Martinez beavers are a family of North American beavers living in Alhambra Creek in downtown Martinez, California. Best known as the longtime home of famed 19th/20th-century naturalist John Muir, Martinez has become a national example of urban stream restoration utilizing beavers as ecosystem engineers.
In late 2006, a male and female beaver arrived in Alhambra Creek, proceeding to produce 4 kits over the course of the summer. After a decision by the city of Martinez to exterminate the beavers, local conservationists formed an organization called Worth a Dam and as a result of their activism, the decision was overturned. Subsequently, wildlife populations have increased in diversity along the Alhambra Creek watershed, most likely due to the dams maintained by the beavers.
Alhambra Creek
In late 2006, Alhambra Creek, which runs through the city of Martinez, was adopted by two beavers. The beavers built a dam 30 feet wide and at one time 6 feet high, and chewed through half the willows and other creekside landscaping the city had planted as part of its $9.7 million 1999 flood-improvement project (after a flood in 1997).
In November 2007, the city declared that the risk of flooding from the dam necessitated removal of the beavers. Since the California Department of Fish and Game (DFG) does not allow relocation, extermination was the only solution. Residents voiced objections, prompting a beaver vigil and rally, as well as local media interest. Within three days of the announcement of the decision to exterminate the beavers, downtown Martinez was invaded by news cameras and curious spectators. Because of the public outcry, the city obtained an exception from DFG, who pledged to pay for their successful relocation. This 11th-hour decision relieved much of the tension, but residents continued to press the city to allow the beavers to stay. In a heavily-attended city council meeting, the city was alternately praised for gaining the DFG exception and chided for not researching effective flood control measures. Concerns of downtown shopkeepers were raised, but strategies for flow management were mentioned by most. Offers of help came from the Sierra Club, the Humane Society, the Superintendent of schools and many private residents.
After this meeting, Mayor Robert Schroder formed a subcommittee dedicated to the beavers. The city hired Skip Lisle of Beaver Deceivers in Vermont to install a flow device. Resolution included installing a pipe through the beaver dam so that the pond's water level could not become excessive. The flow device, as of 2013, was controlling the water level well.
A keystone species, the beavers have transformed Alhambra Creek from a trickle into a series of dams and beaver ponds, which in turn has led to the return of steelhead trout (Oncorhynchus mykiss) and river otter (Lontra canadensis) in 2008, and mink (Neovison vison) in 2009. Examples of the impact of the beavers as a keystone species in 2010 include a green heron (Butorides virescens) catching a tule perch (Hysterocarpus traskii traskii), the first recorded sighting of the perch in Alhambra Creek, and the December arrival of a pair of hooded mergansers (Lophodytes cucullatus). The beaver parents have produced babies every year since their 2006 arrival.
In November, 2009 the Martinez City Council approved the placement of an 81-tile wildlife mural on the Escobar Street bridge. The mural was created by schoolchildren and donated by Worth a Dam to memorialize the beavers and other fauna in Alhambra Creek.
In June, 2010, after birthing and successfully weaning triplets that year (and quadruplets the previous three years), "Mom Beaver" died of infection caused by a broken tooth, as confirmed by necropsy.
In 2011, a new adult female arrived at the creek and has since given birth to three kits. In March 2011, flooding following heavy rains washed away the beaver lodge and all four dams on Alhambra Creek.
In September, 2011, Martinez officials ordered Mario Alfaro, a local artist commissioned to paint an outdoor mural celebrating the heritage of the city, to paint over the depiction of a beaver he had included in his panorama. He complied and also painted over his own name in apparent protest.
In 2014, the beaver population of the creek was seven. From 2006 to 2014, a total of 22 beavers lived in the creek at various times; of these, 8 died, 7 are still living in the creek, and the fate of the other 7 is unknown.
From 2008 to 2014, an annual Beaver Festival was held in Martinez.
History
The Martinez beavers probably originated from the Sacramento-San Joaquin River Delta. Historically, before the California Fur Rush of the late eighteenth and early nineteenth centuries, the Delta probably held the largest concentration of beaver in North America. It was California's early fur trade, more than any other single factor, that opened up the West, and the San Francisco Bay Area in particular, to world trade. The Spanish, French, English, Russians and Americans engaged in the California fur trade before 1825, harvesting prodigious quantities of beaver, river otter, marten, fisher, mink, fox, weasel, harbor and fur seals and sea otter. When the coastal and oceanic fur industry began to decline, the focus shifted to California's inland fur resources. Between 1826 and 1845 the Hudson's Bay Company sent parties out annually from Fort Astoria and Fort Vancouver into the Sacramento and the San Joaquin valleys as far south as French Camp on the San Joaquin River. These trapping expeditions must have been extremely profitable to justify the long overland trip each year. It appears that the beaver (species: Castor canadensis, subspecies: subauratus) was one of the most valued of the animals taken, and apparently was found in great abundance. Thomas McKay reported that in one year the Hudson's Bay Company took 4,000 beaver skins on the shores of San Francisco Bay. At the time, these pelts sold for $2.50 a pound or about $4 each.
The Delta area is probably where McKay was so successful, rather than the Bay itself. In 1840, explorer Captain Thomas Farnham wrote that beaver were very numerous near the mouths of the Sacramento and San Joaquin rivers and on the hundreds of small "rush-covered" islands. Farnham, who had travelled extensively in North America, said: "There is probably no spot of equal extent in the whole continent of America which contains so many of these much-sought animals."
See also
Beaver in the Sierra Nevada
References
External links
The Martinez Beavers
Beavers
Endemic fauna of California
Natural history of Contra Costa County, California
Ecological restoration
Fauna without expected TNC conservation status
Animal reintroduction | Martinez beavers | [
"Chemistry",
"Engineering"
] | 1,409 | [
"Ecological restoration",
"Environmental engineering"
] |
14,156,629 | https://en.wikipedia.org/wiki/ATF1 | Cyclic AMP-dependent transcription factor ATF-1 is a protein that in humans is encoded by the ATF1 gene.
This gene encodes an activating transcription factor, which belongs to the ATF subfamily and the bZIP (basic-region leucine zipper) family. It influences cellular physiologic processes by regulating the expression of downstream target genes related to growth, survival, and other cellular activities. The protein is phosphorylated at serine 63 in its kinase-inducible domain by several serine/threonine kinases: cAMP-dependent protein kinase A, calmodulin-dependent protein kinase I/II, mitogen- and stress-activated protein kinase, and cyclin-dependent kinase 3 (cdk-3). Its phosphorylation enhances its transactivation and transcriptional activities, and enhances cell transformation.
Clinical significance
Fusion of this gene and FUS on chromosome 16 or EWSR1 on chromosome 22 induced by translocation generates chimeric proteins in angiomatoid fibrous histiocytoma and clear cell sarcoma. This gene has a pseudogene on chromosome 6.
See also
Activating transcription factor
Interactions
ATF1 has been shown to interact with:
BRCA1,
CSNK2A2,
CSNK2A1, and
EWS.
References
Further reading
External links
Transcription factors
Oncogenes | ATF1 | [
"Chemistry",
"Biology"
] | 292 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,156,801 | https://en.wikipedia.org/wiki/PBX1 | Pre-B-cell leukemia transcription factor 1 is a protein that in humans is encoded by the PBX1 gene. The homologous protein in Drosophila is known as extradenticle, and causes changes in embryonic development.
Function
Mice studies suggest PBX1 is involved in bone generation and skeletal patterning.
Interactions
PBX1 has been shown to interact with:
HOXB1,
HOXB7,
MEIS1, and
Prep1.
Fruit fly homolog
The Drosophila melanogaster gene called extradenticle encodes a homeodomain protein that is 71% similar to the Pbx1 protein, and is considered homologous to PBX1. extradenticle is a homeodomain transcription factor expressed during embryogenesis and is related to morphological changes and development.
Reduced levels of extradenticle cause segmental transformations without affecting the functionality or location of homeotic genes. Complete removal of extradenticle, both maternally and zygotically, leads to alterations arising from the failure of expression of proteins other than extradenticle.
A monoclonal antibody study of the expression of extradenticle protein in embryonic development found that it is uniformly distributed, as well as excluded from cell nuclei, until gastrulation. During the germ band retraction stage of development, extradenticle protein begins to accumulate in the nuclei of cells in a specific pattern. Proximal areas of wing and leg imaginal discs have extradenticle present in the nucleus, while distal areas only have it in the cytoplasm.
References
Further reading
External links
Transcription factors | PBX1 | [
"Chemistry",
"Biology"
] | 330 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,157,979 | https://en.wikipedia.org/wiki/Uranyl%20formate | Uranyl formate (UO2(CHO2)2·H2O) is a salt that exists as a fine yellow free-flowing powder occasionally used in transmission electron microscopy.
It is occasionally used as a 0.5% or 1% aqueous negative stain in transmission electron microscopy (TEM) because it shows a finer grain structure than uranyl acetate. However, uranyl formate does not easily go into solution, and once dissolved, has a rather limited lifetime as a stain. It is quite sensitive to light, especially ultraviolet light, and will precipitate if exposed.
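The percentages above are weight/volume concentrations, so a 0.5% solution contains 0.5 g of uranyl formate per 100 mL. A minimal helper making that arithmetic explicit (the volumes in the example are illustrative, not a protocol):

```python
# Mass of solute for a % w/v solution: percent/100 * volume in mL gives grams.

def grams_needed(percent_wv: float, volume_ml: float) -> float:
    """Mass of solute (g) for a given % w/v concentration and final volume (mL)."""
    return percent_wv / 100.0 * volume_ml

print(grams_needed(0.5, 10.0))  # 0.05 g for 10 mL of a 0.5% stain
print(grams_needed(1.0, 10.0))  # 0.10 g for 10 mL of a 1% stain
```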
See also
Electron microscope
References
Electron microscopy stains
Uranyl compounds
Nuclear materials
Formates | Uranyl formate | [
"Physics",
"Chemistry"
] | 159 | [
"Inorganic compounds",
"Inorganic compound stubs",
"Materials",
"Nuclear materials",
"Matter"
] |
14,158,490 | https://en.wikipedia.org/wiki/Chaos%20computing | In theoretical computer science, chaos computing is the idea of using chaotic systems for computation. In particular, chaotic systems can be made to produce all types of logic gates and further allow them to be morphed into each other.
Introduction
Chaotic systems generate large numbers of patterns of behavior and are irregular because they switch between these patterns. They exhibit sensitivity to initial conditions which, in practice, means that chaotic systems can switch between patterns extremely fast.
Modern digital computers perform computations based upon digital logic operations implemented at the lowest level as logic gates. There are essentially seven basic logic functions implemented as logic gates: AND, OR, NOT, NAND, NOR, XOR and XNOR.
A chaotic morphing logic gate consists of a generic nonlinear circuit that exhibits chaotic dynamics producing various patterns. A control mechanism is used to select patterns that correspond to different logic gates. The sensitivity to initial conditions is used to switch between different patterns extremely fast (well under a computer clock cycle).
Chaotic morphing
As an example of how chaotic morphing works, consider a generic chaotic system known as the logistic map. This nonlinear map is very well studied for its chaotic behavior, and its functional representation is given by:
\[
x_{n+1} = r\, x_n (1 - x_n).
\]
In this case, the value of $x_n$ is chaotic when $r \gtrsim 3.57$, and rapidly switches between different patterns in the value of $x_n$ as one iterates $n$. A simple threshold controller can control or direct the chaotic map or system to produce one of many patterns. The controller basically sets a threshold on the map such that if the iteration ("chaotic update") of the map takes on a value $x_n$ that lies above a given threshold value $x^*$, then the output corresponds to a 1; otherwise it corresponds to a 0. One can then reverse-engineer the chaotic map to establish a lookup table of thresholds that robustly produce any of the logic gate operations. Since the system is chaotic, the various gates ("patterns") can be switched between exponentially fast.
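To make the threshold-controller idea concrete, here is a runnable sketch of a morphing gate built on the fully chaotic logistic map (r = 4). The recipe follows the description above: encode the two binary inputs as perturbations of an initial state, apply one chaotic update, and threshold the result. The specific (x0, delta, x*) parameter sets were found by hand for this illustration; they are not the published lookup-table values of Ditto, Sinha, and Murali.

```python
# Morphing logic gates from the logistic map: one parameter set per gate.

def logistic(x, r=4.0):
    """One chaotic update x -> r*x*(1-x); fully chaotic at r = 4."""
    return r * x * (1.0 - x)

# gate name -> (initial state x0, input step delta, output threshold x_star);
# hand-picked illustrative values, not the published lookup tables.
GATES = {
    "AND":  (0.0, 0.25, 0.80),
    "OR":   (0.0, 0.25, 0.50),
    "XOR":  (0.1, 0.35, 0.70),
    "NAND": (0.4, 0.20, 0.80),
}

def chaotic_gate(name, i1, i2):
    """Encode inputs into the state, do one chaotic update, threshold the output."""
    x0, delta, x_star = GATES[name]
    x = logistic(x0 + delta * (i1 + i2))
    return int(x > x_star)

if __name__ == "__main__":
    for name in GATES:
        table = [chaotic_gate(name, a, b)
                 for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
        print(f"{name:>4}: {table}")
    # AND: [0, 0, 0, 1]  OR: [0, 1, 1, 1]  XOR: [0, 1, 1, 0]  NAND: [1, 1, 1, 0]
```

Morphing between gates then amounts to switching the (x0, delta, x*) triple, which a threshold controller can do well within a clock cycle.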
ChaoGate
The ChaoGate is an implementation of a chaotic morphing logic gate developed by William Ditto, Sudeshna Sinha, and K. Murali.
A chaotic computer, made up of a lattice of ChaoGates, has been demonstrated by Chaologix Inc.
Research
Recent research has shown how chaotic computers can be recruited in fault-tolerant applications through the introduction of dynamics-based fault detection methods. It has also been demonstrated that the multidimensional dynamical states available in a single ChaoGate can be exploited to implement parallel chaos computing; as an example, this parallel architecture can lead to the construction of an SR-like memory element from one ChaoGate. As another example, it has been proved that any logic function can be constructed directly from just one ChaoGate.
Chaos allows order to be found in systems as diverse as the atmosphere, the beating heart, fluids, seismology, metallurgy, physiology, and the behavior of stock markets.
See also
Chua's circuit
Unconventional computing
References
"The 10 Coolest Technologies You’ve Never Heard Of – Chaos Computing," PC Magazine, Vol. 25, No. 13, page p. 66, August 8, 2006.
"Logic from Chaos," MIT Technology Review, June 15, 2006.
"Exploiting the controlled responses of chaotic elements to design configurable hardware," W. L. Ditto and S. Sinha, Philosophical Transactions of the Royal Society London A, 364, pp. 2483–2494 (2006) .
"Chaos Computing: ideas and implementations" William L. Ditto, K. Murali and S. Sinha, Philosophical Transactions of the Royal Society London A, (2007) .
"Experimental realization of the fundamental NOR Gate using a chaotic circuit," K. Murali, Sudeshna Sinha and William L. Ditto Phys. Rev. E 68, 016205 (2003).
"Implementation of NOR gate by a chaotic Chua’s circuit," K. Murali, Sudeshna Sinha and William L. Ditto, International Journal of Bifurcation and Chaos, Vol. 13, No. 9, pp. 1–4, (2003).
"Fault tolerance and detection in chaotic Computers" M.R. Jahed-Motlagh, B. Kia, W.L. Ditto and S. Sinha, International Journal of Bifurcation and Chaos 17, 1955-1968(2007)
"Chaos-based computation via Chua's circuit: parallel computing with application to the SR flip-flop"D. Cafagna, G. Grassi, International Symposium on Signals, Circuits and Systems, ISSCS 2005, Volume: 2, 749-752 (2005)
"Parallel computing with extended dynamical systems" S. Sinha, T. Munakata and W.L. Ditto; Physical Review E, 65 036214 [1-7](2002)
Classes of computers
Models of computation
Theoretical computer science | Chaos computing | [
"Mathematics",
"Technology"
] | 1,016 | [
"Theoretical computer science",
"Applied mathematics",
"Computer systems",
"Computers",
"Classes of computers"
] |
14,160,015 | https://en.wikipedia.org/wiki/Malgrange%E2%80%93Ehrenpreis%20theorem | In mathematics, the Malgrange–Ehrenpreis theorem states that every non-zero linear differential operator with constant coefficients has a Green's function. It was first proved independently by and
.
This means that the differential equation
\[
P(\partial)\,u = \delta,
\]
where $P$ is a polynomial in several variables and $\delta$ is the Dirac delta function, has a distributional solution $u$. It can be used to show that
\[
P(\partial)\,v = f
\]
has a solution $v$ for any compactly supported distribution $f$. The solution is not unique in general.
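As a concrete one-variable illustration of both existence and non-uniqueness (an added worked example, not part of the original statement), take $P(\partial) = d/dx$; the Heaviside step function $H$ is then a fundamental solution:

```latex
\[
  \langle H', \varphi \rangle
  = -\langle H, \varphi' \rangle
  = -\int_{0}^{\infty} \varphi'(x)\,dx
  = \varphi(0)
  = \langle \delta, \varphi \rangle
  \qquad \text{for every test function } \varphi .
\]
```

Since $d/dx$ annihilates constants, $H + c$ is also a fundamental solution for every constant $c$, illustrating the non-uniqueness.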
The analogue for differential operators whose coefficients are polynomials (rather than constants) is false: see Lewy's example.
Proofs
The original proofs of Malgrange and Ehrenpreis were non-constructive as they used the Hahn–Banach theorem. Since then several constructive proofs have been found.
There is a very short proof using the Fourier transform and the Bernstein–Sato polynomial, as follows. By taking Fourier transforms, the Malgrange–Ehrenpreis theorem is equivalent to the fact that every non-zero polynomial $P$ has a distributional inverse. By replacing $P$ by the product with its complex conjugate, one can also assume that $P$ is non-negative. For non-negative polynomials $P$, the existence of a distributional inverse follows from the existence of the Bernstein–Sato polynomial, which implies that $P^s$ can be analytically continued as a meromorphic distribution-valued function of the complex variable $s$; the constant term of the Laurent expansion of $P^s$ at $s = -1$ is then a distributional inverse of $P$.
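As a minimal illustration of this mechanism (an added standard example), take the one-variable non-negative polynomial $P(x) = x^2$. Differentiating directly gives the functional equation

```latex
\[
  \frac{d^2}{dx^2}\,\bigl(x^2\bigr)^{s+1} \;=\; (2s+2)(2s+1)\,\bigl(x^2\bigr)^{s},
\]
```

so $b(s) = (2s+2)(2s+1)$ serves as a Bernstein–Sato polynomial for $x^2$; dividing by $b(s)$ and iterating the identity continues $(x^2)^s$ meromorphically to all complex $s$, with poles confined to the arithmetic progressions generated by the roots $s = -1$ and $s = -1/2$.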
Other proofs, often giving better bounds on the growth of a solution, have also been published, and a detailed discussion of the regularity properties of the fundamental solutions is available in the literature. A short constructive proof gives an explicit formula:
\[
E = \frac{1}{\overline{P_m(2\eta)}} \sum_{j=0}^{m} a_j\, e^{\lambda_j \eta \cdot x}\, \mathcal{F}^{-1}_{\xi}\!\left( \frac{\overline{P(i\xi + \lambda_j \eta)}}{P(i\xi + \lambda_j \eta)} \right)
\]
is a fundamental solution of $P(\partial)$, i.e., $P(\partial)E = \delta$, if $P_m$ is the principal part of $P$, $\eta \in \mathbb{R}^n$ with $P_m(\eta) \neq 0$, the real numbers $\lambda_0, \lambda_1, \dots, \lambda_m$ are pairwise different, and
\[
a_j = \prod_{k=0,\, k \neq j}^{m} (\lambda_j - \lambda_k)^{-1}.
\]
References
Differential equations
Theorems in analysis
Schwartz distributions | Malgrange–Ehrenpreis theorem | [
"Mathematics"
] | 374 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical objects",
"Differential equations",
"Equations",
"Mathematical problems",
"Mathematical theorems"
] |
14,161,069 | https://en.wikipedia.org/wiki/Gallium%20manganese%20arsenide | Gallium manganese arsenide, chemical formula is a magnetic semiconductor. It is based on the world's second most commonly used semiconductor, gallium arsenide, (chemical formula ), and readily compatible with existing semiconductor technologies. Differently from other dilute magnetic semiconductors, such as the majority of those based on II-VI semiconductors, it is not paramagnetic
but ferromagnetic, and hence exhibits hysteretic magnetization behavior. This memory effect is of importance for the creation of persistent devices. In , the manganese atoms provide a magnetic moment, and each also acts as an acceptor, making it a p-type material. The presence of carriers allows the material to be used for spin-polarized currents. In contrast, many other ferromagnetic magnetic semiconductors are strongly insulating
and so do not possess free carriers. is therefore a candidate material for spintronic devices but it is likely to remain only a testbed for basic research as its Curie temperature could only be raised up to approximately 200 K.
Growth
Like other magnetic semiconductors, (Ga,Mn)As is formed by doping a standard semiconductor with magnetic elements. This is done using the growth technique of molecular beam epitaxy, whereby crystal structures can be grown with atomic-layer precision. In (Ga,Mn)As, manganese atoms substitute into gallium sites in the GaAs crystal and provide a magnetic moment. Because manganese has a low solubility in GaAs, incorporating a concentration high enough for ferromagnetism to be achieved proves challenging. In standard molecular beam epitaxy, to ensure that good structural quality is obtained, the temperature to which the substrate is heated, known as the growth temperature, is normally high, typically ~600 °C. However, if a large flux of manganese is used under these conditions, segregation occurs instead of incorporation: the manganese accumulates on the surface and forms complexes with elemental arsenic atoms.
This problem was overcome using the technique of low-temperature molecular beam epitaxy. It was found, first in (In,Mn)As and later for (Ga,Mn)As, that by utilising non-equilibrium crystal growth techniques larger dopant concentrations could be successfully incorporated. At lower temperatures, around 250 °C, there is insufficient thermal energy for surface segregation to occur, but still sufficient for a good-quality single-crystal alloy to form.
In addition to the substitutional incorporation of manganese, low-temperature molecular beam epitaxy also causes the inclusion of other impurities. The two other common impurities are interstitial manganese and arsenic antisites. The former is where a manganese atom sits between the other atoms in the zinc-blende lattice structure, and the latter is where an arsenic atom occupies a gallium site. Both impurities act as double donors, removing the holes provided by the substitutional manganese, and as such they are known as compensating defects. The interstitial manganese also bonds antiferromagnetically to substitutional manganese, removing the magnetic moment. Both these defects are detrimental to the ferromagnetic properties of (Ga,Mn)As, and so are undesired.
The temperature below which the transition from paramagnetism to ferromagnetism occurs is known as the Curie temperature, TC. Theoretical predictions based on the Zener model suggest that the Curie temperature scales with the quantity of manganese, so TC above 300K is possible if manganese doping levels as high as 10% can be achieved.
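To make the quoted scaling concrete, the sketch below assumes TC grows strictly linearly with the substitutional manganese fraction and calibrates that line to the 300K-at-10% figure above; both the linearity and the calibration are simplifying assumptions of this example, not a fit to data.

```python
# Back-of-envelope Zener-model scaling: T_C proportional to Mn fraction x,
# calibrated so that x = 0.10 gives T_C = 300 K (the estimate quoted above).

TC_PER_FRACTION = 300.0 / 0.10  # kelvin per unit Mn fraction

def curie_temperature(x_mn: float) -> float:
    """Estimated T_C (K) for substitutional Mn fraction x_mn (0..1)."""
    return TC_PER_FRACTION * x_mn

def mn_fraction_for(tc_target_k: float) -> float:
    """Mn fraction needed to reach a target T_C under the linear model."""
    return tc_target_k / TC_PER_FRACTION

print(curie_temperature(0.05))   # 150.0 -> ~150 K at 5% Mn
print(mn_fraction_for(173.0))    # ~0.058 -> ~5.8% effective Mn for 173 K
```

In practice the compensating defects discussed above reduce the effective, hole-providing manganese fraction, which is one reason real samples fall short of this linear estimate.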
After its discovery by Ohno et al., the highest reported Curie temperature in (Ga,Mn)As rose from 60K to 110K. However, despite predictions of room-temperature ferromagnetism, no improvements in TC were made for several years. As a result of this lack of progress, predictions started to be made that 110K was a fundamental limit for (Ga,Mn)As: the self-compensating nature of the defects would limit the possible hole concentrations, preventing further gains in TC.
The major breakthrough came from improvements in post-growth annealing. By using annealing temperatures comparable to the growth temperature it was possible to pass the 110K barrier. These improvements have been attributed to the removal of the highly mobile interstitial manganese. Currently, the highest reported values of TC in (Ga,Mn)As are around 173K, still well below the much-sought room temperature. As a result, measurements on this material must be done at cryogenic temperatures, currently precluding any application outside of the laboratory. Naturally, considerable effort is being spent on the search for alternative magnetic semiconductors that do not share this limitation. In addition, as molecular beam epitaxy techniques and equipment are refined and improved, it is hoped that greater control over growth conditions will allow further incremental advances in the Curie temperature of (Ga,Mn)As.
Properties
Although room-temperature ferromagnetism has not yet been achieved, magnetic semiconductor materials such as (Ga,Mn)As have shown considerable success. Thanks to the rich interplay of physics inherent to magnetic semiconductors, a variety of novel phenomena and device structures have been demonstrated. It is therefore instructive to make a critical review of these main developments.
A key result in magnetic semiconductor technology is gateable ferromagnetism, where an electric field is used to control the ferromagnetic properties. This was achieved by Ohno et al. using an insulating-gate field-effect transistor with (Ga,Mn)As as the magnetic channel. The magnetic properties were inferred from magnetization-dependent Hall measurements of the channel. Using the gate action to either deplete or accumulate holes in the channel, it was possible to change the character of the Hall response to be either that of a paramagnet or of a ferromagnet. When the temperature of the sample was close to its TC, it was possible to turn the ferromagnetism on or off by applying a gate voltage, which could change the TC by ±1K.
A similar transistor device was used to provide further examples of gateable ferromagnetism.
In this experiment the electric field was used to modify the coercive field at which magnetization reversal occurs. As a result of the dependence of the magnetic hysteresis on the gate bias, the electric field could be used to assist magnetization reversal or even to demagnetize the ferromagnetic material.
The combining of magnetic and electronic functionality demonstrated by this experiment is one of the goals of spintronics and may be expected to have a great technological impact.
Another important spintronic functionality that has been demonstrated in magnetic semiconductors is spin injection, where the high spin polarization inherent to these magnetic materials is used to transfer spin-polarized carriers into a non-magnetic material. In one demonstration, a fully epitaxial heterostructure was used in which spin-polarized holes were injected from a (Ga,Mn)As layer into an (In,Ga)As quantum well, where they combined with unpolarized electrons from an n-type substrate. A polarization of 8% was measured in the resulting electroluminescence. This is again of potential technological interest, as it shows the possibility that the spin states in non-magnetic semiconductors can be manipulated without the application of a magnetic field.
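The 8% figure is conventionally the degree of circular polarization of the electroluminescence, P = (Iσ+ − Iσ−)/(Iσ+ + Iσ−). A minimal sketch (the intensities are made-up numbers chosen to give 8%):

```python
# Degree of circular polarization from sigma+ / sigma- electroluminescence.

def polarization(i_plus: float, i_minus: float) -> float:
    """P = (I+ - I-) / (I+ + I-), in [-1, 1]."""
    return (i_plus - i_minus) / (i_plus + i_minus)

print(polarization(54.0, 46.0))  # 0.08, i.e. 8% polarization
```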
(Ga,Mn)As offers an excellent material in which to study domain wall mechanics, because the domains can have a size of the order of 100 μm. Several studies have been done in which lithographically defined lateral constrictions or other pinning points are used to manipulate domain walls. These experiments are crucial to understanding domain wall nucleation and propagation, which would be necessary for the creation of complex logic circuits based on domain wall mechanics.
Many properties of domain walls are still not fully understood; one particularly outstanding issue is the magnitude and size of the resistance associated with current passing through a domain wall. Both positive and negative values of domain wall resistance have been reported, leaving this an open area for future research.
An example of a simple device that utilizes pinned domain walls consisted of a lithographically defined narrow island connected to its leads via a pair of nanoconstrictions. While the device operated in a diffusive regime, the constrictions would pin domain walls, resulting in a giant magnetoresistance signal. When the device operates in a tunnelling regime, another magnetoresistance effect is observed, discussed below.
A further property of domain walls is current-induced domain wall motion, a reversal believed to occur as a result of the spin-transfer torque exerted by a spin-polarized current. This was demonstrated using a lateral device containing three regions which had been patterned to have different coercive fields, allowing the easy formation of a domain wall. The central region was designed to have the lowest coercivity, so that the application of current pulses could switch the orientation of its magnetization. This experiment showed that the current required to achieve this reversal in (Ga,Mn)As was two orders of magnitude lower than in metal systems. It has also been demonstrated that current-induced magnetization reversal can occur across a vertical tunnel junction.
Another novel spintronic effect, first observed in (Ga,Mn)As-based tunnel devices, is tunnelling anisotropic magnetoresistance. This effect arises from the intricate dependence of the tunnelling density of states on the magnetization, and can result in magnetoresistance of several orders of magnitude. It was demonstrated first in vertical tunnelling structures and later in lateral devices. This has established tunnelling anisotropic magnetoresistance as a generic property of ferromagnetic tunnel structures. Similarly, the dependence of the single-electron charging energy on the magnetization has resulted in the observation of another dramatic magnetoresistance effect in a (Ga,Mn)As device, the so-called Coulomb blockade anisotropic magnetoresistance.
References
Semiconductor materials
Ferromagnetic materials
Gallium compounds
Arsenides
Manganese(III) compounds
Zincblende crystal structure | Gallium manganese arsenide | [
"Physics",
"Chemistry"
] | 2,022 | [
"Semiconductor materials",
"Materials",
"Ferromagnetic materials",
"Matter"
] |