| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
18,349,854 | https://en.wikipedia.org/wiki/Darwinian%20demon | A Darwinian demon is a hypothetical organism that would result if there were no biological constraints on evolution. Such an organism would maximize all aspects of fitness simultaneously and would exist if there were no limitations from available variation or physiological constraints. It is named for the English scientist Charles Darwin, who set out the theory of evolution by natural selection in On the Origin of Species (1859). Such organisms would reproduce directly after being born, produce infinitely many offspring, and live indefinitely. Even though no such organisms exist, biologists use Darwinian demons in thought experiments to understand different life history strategies among different organisms.
The Darwinian demon (a name inspired by Maxwell's demon) appears widely in the evolutionary biology literature. The term personifies an entity able to consciously direct an organism's evolution, allowing it to maximize all fitness components at once. Some organisms, such as duckweed and queen ants, approach the ideal of a Darwinian demon but still fall short: an organism's acquisition of adaptations is constrained by trade-offs, gene flow, and a limited supply of variation.
See also
Demon (thought experiment)
Natural selection
References
Further reading
Silvertown, J. W. (2005). Demons in Eden: The Paradox of Plant Diversity. Chicago: University of Chicago Press.
Evolutionary biology
Thought experiments | Darwinian demon | [
"Biology"
] | 247 | [
"Evolutionary biology"
] |
18,351,721 | https://en.wikipedia.org/wiki/Protein%20synthesis%20inhibitor | A protein synthesis inhibitor is a compound that stops or slows the growth or proliferation of cells by disrupting the processes that lead directly to the generation of new proteins.
While a broad interpretation of this definition could be used to describe nearly any compound depending on concentration, in practice it usually refers to compounds that act at the molecular level on the translational machinery (either the ribosome itself or translation factors), taking advantage of the major differences between prokaryotic and eukaryotic ribosome structures.
Mechanism
In general, protein synthesis inhibitors work at different stages of bacterial mRNA translation into proteins, like initiation, elongation (including aminoacyl tRNA entry, proofreading, peptidyl transfer, and bacterial translocation) and termination:
Earlier stages
Rifamycin inhibits bacterial DNA transcription into mRNA by inhibiting DNA-dependent RNA polymerase by binding its beta-subunit.
alpha-Amanitin is a powerful inhibitor of eukaryotic DNA transcription machinery.
Initiation
Linezolid acts at the initiation stage, probably by preventing the formation of the initiation complex, although the mechanism is not fully understood.
Ribosome assembly
Aminoglycosides prevent ribosome assembly by binding to the bacterial 30S ribosomal subunit.
Aminoacyl tRNA entry
Tetracyclines and Tigecycline (a glycylcycline related to tetracyclines) block the A site on the ribosome, preventing the binding of aminoacyl tRNAs.
Proofreading
Aminoglycosides, among other potential mechanisms of action, interfere with the proofreading process, causing increased rate of error in synthesis with premature termination.
Peptidyl transfer
Chloramphenicol blocks the peptidyl transfer step of elongation on the 50S ribosomal subunit in both bacteria and mitochondria.
Macrolides (as well as inhibiting ribosomal translocation and other potential mechanisms) bind to the 50S ribosomal subunit, inhibiting peptidyl transfer.
Quinupristin/dalfopristin act synergistically: dalfopristin enhances the binding of quinupristin and also inhibits peptidyl transfer. Quinupristin binds to a nearby site on the 50S ribosomal subunit, preventing elongation of the polypeptide and causing incomplete chains to be released.
Geneticin, also called G418, inhibits the elongation step in both prokaryotic and eukaryotic ribosomes.
Trichothecene mycotoxins are potent and non-selective inhibitors of peptide elongation.
Ribosomal translocation
Macrolides, clindamycin and aminoglycosides (with all these three having other potential mechanisms of action as well), have evidence of inhibition of ribosomal translocation.
Fusidic acid prevents the turnover of elongation factor G (EF-G) from the ribosome.
Ricin inhibits elongation by enzymatically modifying an rRNA of the eukaryotic 60S ribosomal subunit.
Termination
Macrolides and clindamycin (both also having other potential mechanisms) cause premature dissociation of the peptidyl-tRNA from the ribosome.
Puromycin has a structure similar to that of the tyrosyl aminoacyl-tRNA. Thus, it binds to the ribosomal A site and participates in peptide bond formation, producing peptidyl-puromycin. However, it does not engage in translocation and quickly dissociates from the ribosome, causing a premature termination of polypeptide synthesis.
Streptogramins also cause premature release of the peptide chain.
Protein synthesis inhibitors of unspecified mechanism
Retapamulin
Mupirocin
Fusidic acid
Binding site
The following antibiotics bind to the 30S subunit of the ribosome:
Aminoglycosides
Tetracyclines
The following antibiotics bind to the 50S ribosomal subunit:
Chloramphenicol
Clindamycin
Linezolid (an oxazolidinone)
Macrolides
Telithromycin
Streptogramins
Retapamulin
See also
Protein biosynthesis
Bacterial translation
Eukaryotic translation
Archaeal translation
References
Protein biosynthesis
Protein synthesis inhibitor antibiotics | Protein synthesis inhibitor | [
"Chemistry"
] | 913 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
18,353,210 | https://en.wikipedia.org/wiki/Photoinduced%20phase%20transitions | Photoinduced phase transition is a technique used in solid-state physics to generate nonequilibrium phases from an equilibrium phase by illumination with high-energy photons. The resulting nonequilibrium phase is a macroscopic excited domain with new structural and electronic orders quite different from those of the starting ground state (the equilibrium phase).
References
Phase transitions | Photoinduced phase transitions | [
"Physics",
"Chemistry",
"Materials_science"
] | 76 | [
"Physical phenomena",
"Phase transitions",
"Materials science stubs",
"Condensed matter stubs",
"Phases of matter",
"Critical phenomena",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
10,289,427 | https://en.wikipedia.org/wiki/Foundational%20Model%20of%20Anatomy | The Foundational Model of Anatomy Ontology (FMA) is a reference ontology for the domain of human anatomy. It is a symbolic representation of the canonical, phenotypic structure of an organism; a spatial-structural ontology of anatomical entities and relations which form the physical organization of an organism at all salient levels of granularity.
FMA is developed and maintained by the Structural Informatics Group at the University of Washington.
Description
The FMA ontology contains approximately 75,000 classes and over 120,000 terms, with over 2.1 million relationship instances drawn from over 168 relationship types.
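As a purely illustrative sketch of how such counts might be reproduced from an OWL export of the FMA using the Python rdflib library (the filename below is a placeholder, not an official distribution path):

```python
# Hypothetical sketch: count classes and relationship (object property) types
# in a local OWL/RDF export of the FMA. "fma.owl" is a placeholder filename.
from rdflib import Graph, RDF, OWL

g = Graph()
g.parse("fma.owl", format="xml")  # load the ontology export

classes = set(g.subjects(RDF.type, OWL.Class))
relation_types = set(g.subjects(RDF.type, OWL.ObjectProperty))
print(f"{len(classes)} classes, {len(relation_types)} relationship types")
```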
See also
Terminologia Anatomica
Anatomography
References
External links
The Foundational Model of Anatomy Ontology
The Foundational Model of Anatomy Browser
FMA Ontology Browser
Bioinformatics
Ontology (information science)
Anatomical terminology | Foundational Model of Anatomy | [
"Engineering",
"Biology"
] | 168 | [
"Bioinformatics",
"Biological engineering"
] |
10,292,285 | https://en.wikipedia.org/wiki/PHLPP | The PHLPP isoforms (PH domain and Leucine rich repeat Protein Phosphatases) are a pair of protein phosphatases, PHLPP1 and PHLPP2, that are important regulators of Akt serine-threonine kinases (Akt1, Akt2, Akt3) and conventional/novel protein kinase C (PKC) isoforms. PHLPP may act as a tumor suppressor in several types of cancer due to its ability to block growth factor-induced signaling in cancer cells.
PHLPP dephosphorylates Ser-473 (the hydrophobic motif) in Akt, thus partially inactivating the kinase.
In addition, PHLPP dephosphorylates conventional and novel members of the protein kinase C family at their hydrophobic motifs, corresponding to Ser-660 in PKCβII.
Domain structure
PHLPP is a member of the PPM family of phosphatases, which require magnesium or manganese for their activity and are insensitive to most common phosphatase inhibitors, including okadaic acid. PHLPP1 and PHLPP2 have a similar domain structure, which includes a putative Ras association domain, a pleckstrin homology domain, a series of leucine-rich repeats, a PP2C phosphatase domain, and a C-terminal PDZ ligand. PHLPP1 has two splice variants, PHLPP1α and PHLPP1β, of which PHLPP1β is larger by approximately 1.5 kilobase pairs. PHLPP1α, which was the first PHLPP isoform to be characterized, lacks the N-terminal portion of the protein, including the Ras association domain. PHLPP's domain structure influences its ability to dephosphorylate its substrates. A PHLPP construct lacking the PH domain is unable to decrease PKC phosphorylation, while PHLPP lacking the PDZ ligand is unable to decrease Akt phosphorylation.
Dephosphorylation of Akt
The phosphatases in the PHLPP family, PHLPP1 and PHLPP2 have been shown to directly dephosphorylate, and therefore inactivate, distinct Akt isoforms, at one of the two critical phosphorylation sites required for activation: Serine473. PHLPP2 dephosphorylates AKT1 and AKT3, whereas PHLPP1 is specific for AKT2 and AKT3. Lack of PHLPP appears to have effects on growth factor-induced Akt phosphorylation. When both PHLPP1 and PHLPP2 are knocked down using siRNA and cells are stimulated using epidermal growth factor, peak Akt phosphorylation at both Serine473 and Threonine308 (the other site required for full Akt activation) is increased dramatically.
The Akt family of kinases
In humans, there are three genes in the Akt family: AKT1, AKT2, and AKT3. These enzymes are members of the serine/threonine-specific protein kinase family.
Akt1 is involved in cellular survival pathways and inhibition of apoptotic processes. Akt1 is also able to induce protein synthesis pathways, and is therefore a key signaling protein in the cellular pathways that lead to skeletal muscle hypertrophy, and general tissue growth. Since it can block apoptosis, and thereby promote cell survival, Akt1 has been implicated as a major factor in many types of cancer. Akt (now also called Akt1) was originally identified as the oncogene in the transforming retrovirus, AKT8.
Akt2 is important in the insulin signaling pathway. It is required to induce glucose transport.
These separate roles for Akt1 and Akt2 were demonstrated by studying mice in which either the Akt1 or the Akt2 gene was deleted, or "knocked out". In a mouse that is null for Akt1 but normal for Akt2, glucose homeostasis is unperturbed, but the animals are smaller, consistent with a role for Akt1 in growth. In contrast, mice that do not have Akt2 but have normal Akt1 have mild growth deficiency and display a diabetic phenotype (insulin resistance), again consistent with the idea that Akt2 is more specific for the insulin receptor signaling pathway.
The role of Akt3 is less clear, though it appears to be expressed predominantly in brain. It has been reported that mice lacking Akt3 have small brains.
Phosphorylation of Akt by PDK1 and PDK2
Once correctly positioned in the membrane via binding of PIP3, Akt can then be phosphorylated by its activating kinases, phosphoinositide-dependent kinase 1 (PDK1) and PDK2. Serine473, the hydrophobic motif, is phosphorylated in an mTORC2-dependent manner, leading some investigators to hypothesize that mTORC2 is the long-sought PDK2 molecule. Threonine308, the activation loop, is phosphorylated by PDK1, allowing full Akt activation. Activated Akt can then go on to activate or deactivate its myriad substrates via its kinase activity. The PHLPPs therefore antagonize PDK1 and PDK2, since they dephosphorylate the site that PDK2 phosphorylates.
Dephosphorylation of protein kinase C
PHLPP1 and 2 also dephosphorylate the hydrophobic motifs of two classes of the protein kinase C (PKC) family: the conventional PKCs and the novel PKCs. (The third class of PKCs, known as the atypicals, have a phospho-mimetic at the hydrophobic motif, rendering them insensitive to PHLPP.)
The PKC family of kinases consists of 10 isoforms, whose sensitivity to various second messengers is dictated by their domain structure. The conventional PKCs can be activated by calcium and diacylglycerol, two important mediators of G protein-coupled receptor signaling. The novel PKCs are activated by diacylglycerol but not calcium, while the atypical PKCs are activated by neither.
The PKC family, like Akt, plays roles in cell survival and motility. Most PKC isoforms are anti-apoptotic, although PKCδ (a novel PKC isoform) is pro-apoptotic in some systems.
Although PKC possesses the same phosphorylation sites as Akt, its regulation is quite different. PKC is constitutively phosphorylated, and its acute activity is regulated by binding of the enzyme to membranes. Dephosphorylation of PKC at the hydrophobic motif by PHLPP allows PKC to be dephosphorylated at two other sites (the activation loop and the turn motif). This in turn renders PKC sensitive to degradation. Thus, prolonged increases in PHLPP expression or activity inhibit PKC phosphorylation and stability, decreasing the total levels of PKC over time.
Role in cancer
Investigators have hypothesized that the PHLPP isoforms may play roles in cancer, for several reasons. First, the genetic loci coding for PHLPP1 and 2 are commonly lost in cancer. The region including PHLPP1, 18q21.33, commonly undergoes loss of heterozygosity (LOH) in colon cancers, while 16q22.3, which includes the PHLPP2 gene, undergoes LOH in breast and ovarian cancers, Wilms tumors, prostate cancer and hepatocellular carcinoma. Second, experimental overexpression of PHLPP in cancer cell lines tends to increase apoptosis and decrease proliferation, and stable colon and glioblastoma cell lines overexpressing PHLPP1 show decreased tumor formation in xenograft models. Recent studies have also shown that Bcr-Abl, the fusion protein responsible for chronic myelogenous leukemia (CML), downregulates PHLPP1 and PHLPP2 levels, and that decreasing PHLPP levels interferes with the efficacy of Bcr-Abl inhibitors, including Gleevec, in CML cell lines.
Finally, both Akt and PKC are known to be tumor promoters, suggesting that their negative regulator PHLPP may act as a tumor suppressor.
References
EC 3.1.3
Signal transduction | PHLPP | [
"Chemistry",
"Biology"
] | 1,873 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
10,297,305 | https://en.wikipedia.org/wiki/Hodge%20structure | In mathematics, a Hodge structure, named after W. V. D. Hodge, is an algebraic structure at the level of linear algebra, similar to the one that Hodge theory gives to the cohomology groups of a smooth and compact Kähler manifold. Hodge structures have been generalized for all complex varieties (even if they are singular and non-complete) in the form of mixed Hodge structures, defined by Pierre Deligne (1970). A variation of Hodge structure is a family of Hodge structures parameterized by a manifold, first studied by Phillip Griffiths (1968). All these concepts were further generalized to mixed Hodge modules over complex varieties by Morihiko Saito (1989).
Hodge structures
Definition of Hodge structures
A pure Hodge structure of integer weight n consists of an abelian group $H_{\mathbb{Z}}$ and a decomposition of its complexification $H = H_{\mathbb{Z}} \otimes \mathbb{C}$ into a direct sum of complex subspaces $H^{p,q}$, where $p + q = n$, with the property that the complex conjugate of $H^{p,q}$ is $H^{q,p}$:
$$H = \bigoplus_{p+q=n} H^{p,q}, \qquad \overline{H^{p,q}} = H^{q,p}.$$
An equivalent definition is obtained by replacing the direct sum decomposition of $H$ by the Hodge filtration, a finite decreasing filtration of $H$ by complex subspaces $F^p H$, subject to the condition
$$H = F^p H \oplus \overline{F^{\,n-p+1} H} \quad \text{for all } p.$$
The relation between these two descriptions is given as follows:
$$H^{p,q} = F^p H \cap \overline{F^{\,q} H}, \qquad F^p H = \bigoplus_{i \ge p} H^{i,\,n-i}.$$
For example, if $X$ is a compact Kähler manifold and $H_{\mathbb{Z}} = H^n(X, \mathbb{Z})$ is the $n$-th cohomology group of X with integer coefficients, then $H = H^n(X, \mathbb{C})$ is its $n$-th cohomology group with complex coefficients, and Hodge theory provides the decomposition of $H$ into a direct sum as above, so that these data define a pure Hodge structure of weight $n$. On the other hand, the Hodge–de Rham spectral sequence supplies $H^n(X, \mathbb{C})$ with the decreasing filtration by $F^p H$ as in the second definition.
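For concreteness, the simplest compact Kähler instance, an elliptic curve $E = \mathbb{C}/\Lambda$, is sketched below (a standard illustration, stated here under the definitions above):

```latex
% Weight-1 Hodge structure of an elliptic curve E = C/Lambda
H^1(E,\mathbb{Z}) \cong \mathbb{Z}^2, \qquad
H^1(E,\mathbb{C}) = H^{1,0} \oplus H^{0,1}, \qquad
H^{1,0} = \mathbb{C}\,dz, \quad H^{0,1} = \mathbb{C}\,d\bar{z}
```

Each summand is one-dimensional, complex conjugation exchanges $H^{1,0}$ and $H^{0,1}$, and the Hodge filtration is $F^1 = H^{1,0} \subset F^0 = H^1(E, \mathbb{C})$.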
For applications in algebraic geometry, namely, classification of complex projective varieties by their periods, the set of all Hodge structures of weight $n$ on $H_{\mathbb{Z}}$ is too big. Using the Riemann bilinear relations, in this case called Hodge–Riemann bilinear relations, it can be substantially simplified. A polarized Hodge structure of weight n consists of a Hodge structure $(H_{\mathbb{Z}}, H^{p,q})$ and a non-degenerate integer bilinear form $Q$ on $H_{\mathbb{Z}}$ (the polarization), which is extended to $H$ by linearity, and satisfying the conditions:
$$Q(\varphi, \psi) = (-1)^n\, Q(\psi, \varphi); \qquad Q(\varphi, \psi) = 0 \ \text{ for } \varphi \in H^{p,q},\ \psi \in H^{p',q'},\ p \neq q'; \qquad i^{\,p-q}\, Q(\varphi, \bar{\varphi}) > 0 \ \text{ for nonzero } \varphi \in H^{p,q}.$$
In terms of the Hodge filtration, these conditions imply that
$$Q(F^p, F^{\,n-p+1}) = 0, \qquad Q(C\varphi, \bar{\varphi}) > 0 \ \text{ for nonzero } \varphi,$$
where $C$ is the Weil operator on $H$, given by $C = i^{\,p-q}$ on $H^{p,q}$.
Yet another definition of a Hodge structure is based on the equivalence between the $\mathbb{Z}$-grading on a complex vector space and the action of the circle group U(1). In this definition, an action of the multiplicative group of complex numbers $\mathbb{C}^*$, viewed as a two-dimensional real algebraic torus, is given on $H$. This action must have the property that a real number $a$ acts by $a^n$. The subspace $H^{p,q}$ is the subspace on which $z \in \mathbb{C}^*$ acts as multiplication by $z^p \bar{z}^q$.
A-Hodge structure
In the theory of motives, it becomes important to allow more general coefficients for the cohomology. The definition of a Hodge structure is modified by fixing a Noetherian subring A of the field $\mathbb{R}$ of real numbers, for which $A \otimes_{\mathbb{Z}} \mathbb{Q}$ is a field. Then a pure Hodge A-structure of weight n is defined as before, replacing $\mathbb{Z}$ with A. There are natural functors of base change and restriction relating Hodge A-structures and B-structures for A a subring of B.
Mixed Hodge structures
It was noticed by Jean-Pierre Serre in the 1960s based on the Weil conjectures that even singular (possibly reducible) and non-complete algebraic varieties should admit 'virtual Betti numbers'. More precisely, one should be able to assign to any algebraic variety X a polynomial PX(t), called its virtual Poincaré polynomial, with the properties
If X is nonsingular and projective (or complete), then $P_X(t)$ is the usual Poincaré polynomial $\sum_k b_k(X)\, t^k$, where the $b_k$ are the Betti numbers
If Y is a closed algebraic subset of X and U = X \ Y, then $P_X(t) = P_Y(t) + P_U(t)$
The existence of such polynomials would follow from the existence of an analogue of Hodge structure in the cohomologies of a general (singular and non-complete) algebraic variety. The novel feature is that the nth cohomology of a general variety looks as if it contained pieces of different weights. This led Alexander Grothendieck to his conjectural theory of motives and motivated a search for an extension of Hodge theory, which culminated in the work of Pierre Deligne. He introduced the notion of a mixed Hodge structure, developed techniques for working with them, gave their construction (based on Heisuke Hironaka's resolution of singularities) and related them to the weights on l-adic cohomology, proving the last part of the Weil conjectures.
Example of curves
To motivate the definition, consider the case of a reducible complex algebraic curve X consisting of two nonsingular components, $X_1$ and $X_2$, which transversally intersect at the points $Q_1$ and $Q_2$. Further, assume that the components are not compact, but can be compactified by adding the points $P_1, \dots, P_n$. The first cohomology group of the curve X (with compact support) is dual to the first homology group, which is easier to visualize. There are three types of one-cycles in this group. First, there are elements $\alpha_i$ representing small loops around the punctures $P_i$. Then there are elements that are coming from the first homology of the compactification of each of the components. The one-cycle in $X_k \subset X$ ($k = 1, 2$) corresponding to a cycle in the compactification of this component is not canonical: these elements are determined modulo the span of the $\alpha_i$. Finally, modulo the first two types, the group is generated by a combinatorial cycle $\gamma$ which goes from $Q_1$ to $Q_2$ along a path in one component $X_1$ and comes back along a path in the other component $X_2$. This suggests that $H^1(X)$ admits an increasing filtration
$$0 \subset W_0 \subset W_1 \subset W_2 = H^1(X),$$
whose successive quotients Wn/Wn−1 originate from the cohomology of smooth complete varieties, hence admit (pure) Hodge structures, albeit of different weights. Further examples can be found in "A Naive Guide to Mixed Hodge Theory".
Definition of mixed Hodge structure
A mixed Hodge structure on an abelian group $H_{\mathbb{Z}}$ consists of a finite decreasing filtration $F^p$ on the complex vector space H (the complexification of $H_{\mathbb{Z}}$), called the Hodge filtration, and a finite increasing filtration $W_i$ on the rational vector space $H_{\mathbb{Q}} = H_{\mathbb{Z}} \otimes \mathbb{Q}$ (obtained by extending the scalars to rational numbers), called the weight filtration, subject to the requirement that the n-th associated graded quotient of $H_{\mathbb{Q}}$ with respect to the weight filtration, together with the filtration induced by F on its complexification, is a pure Hodge structure of weight n, for all integers n. Here the induced filtration on
$$\operatorname{gr}_n^W H = \bigl(W_n / W_{n-1}\bigr) \otimes \mathbb{C}$$
is defined by
$$F^p \operatorname{gr}_n^W H = \frac{\bigl(F^p \cap (W_n \otimes \mathbb{C})\bigr) + W_{n-1} \otimes \mathbb{C}}{W_{n-1} \otimes \mathbb{C}}.$$
One can define a notion of a morphism of mixed Hodge structures, which has to be compatible with the filtrations F and W and prove the following:
Theorem. Mixed Hodge structures form an abelian category. The kernels and cokernels in this category coincide with the usual kernels and cokernels in the category of vector spaces, with the induced filtrations.
The total cohomology of a compact Kähler manifold has a mixed Hodge structure, where the nth space of the weight filtration $W_n$ is the direct sum of the cohomology groups (with rational coefficients) of degree less than or equal to n. Therefore, one can think of classical Hodge theory in the compact, complex case as providing a double grading on the complex cohomology group, which defines a decreasing filtration $F^p$ and an increasing filtration $W_n$ that are compatible in a certain way. In general, the total cohomology space still has these two filtrations, but they no longer come from a direct sum decomposition. In relation with the third definition of the pure Hodge structure, one can say that a mixed Hodge structure cannot be described using the action of the group $\mathbb{C}^*$. An important insight of Deligne is that in the mixed case there is a more complicated noncommutative proalgebraic group that can be used to the same effect using Tannakian formalism.
Moreover, the category of (mixed) Hodge structures admits a good notion of tensor product, corresponding to the product of varieties, as well as related concepts of inner Hom and dual object, making it into a Tannakian category. By Tannaka–Krein philosophy, this category is equivalent to the category of finite-dimensional representations of a certain group, which Deligne and Milne have described explicitly. The description of this group was later recast in more geometrical terms, and the corresponding (much more involved) analysis for rational pure polarizable Hodge structures has also been carried out.
Mixed Hodge structure in cohomology (Deligne's theorem)
Deligne has proved that the nth cohomology group of an arbitrary algebraic variety has a canonical mixed Hodge structure. This structure is functorial, and compatible with the products of varieties (Künneth isomorphism) and the product in cohomology. For a complete nonsingular variety X this structure is pure of weight n, and the Hodge filtration can be defined through the hypercohomology of the truncated de Rham complex.
The proof roughly consists of two parts, taking care of noncompactness and singularities. Both parts use the resolution of singularities (due to Hironaka) in an essential way. In the singular case, varieties are replaced by simplicial schemes, leading to more complicated homological algebra, and a technical notion of a Hodge structure on complexes (as opposed to cohomology) is used.
Using the theory of motives, it is possible to refine the weight filtration on the cohomology with rational coefficients to one with integral coefficients.
Examples
The Tate Hodge structure $\mathbb{Z}(1)$ is the Hodge structure with underlying module given by $2\pi i\,\mathbb{Z}$ (a subgroup of $\mathbb{C}$), with $\mathbb{Z}(1) \otimes \mathbb{C} = H^{-1,-1}$. So it is pure of weight −2 by definition and it is the unique 1-dimensional pure Hodge structure of weight −2 up to isomorphisms. More generally, its nth tensor power is denoted by $\mathbb{Z}(n)$; it is 1-dimensional and pure of weight −2n.
The cohomology of a compact Kähler manifold has a Hodge structure, and the nth cohomology group is pure of weight n.
The cohomology of a complex variety (possibly singular or non-proper) has a mixed Hodge structure. This was shown by Deligne, first for smooth varieties and then in general.
For a projective variety with normal crossing singularities there is a spectral sequence with a degenerate E2-page which computes all of its mixed Hodge structures. The E1-page has explicit terms with a differential coming from a simplicial set.
Any smooth variety X admits a smooth compactification with complement a normal crossing divisor. The corresponding logarithmic forms can be used to describe the mixed Hodge structure on the cohomology of X explicitly.
The Hodge structure for a smooth projective hypersurface of degree $d$ was worked out explicitly by Griffiths in his "Period Integrals of Algebraic Manifolds" paper. If $f$ is the polynomial defining the hypersurface $X \subset \mathbb{P}^{n+1}$, then the graded Jacobian quotient ring $R(f) = \mathbb{C}[x_0, \ldots, x_{n+1}] / \operatorname{Jac}(f)$ contains all of the information of the middle cohomology of $X$. He shows that
$$H^{\,n-q,\,q}_{\mathrm{prim}}(X) \cong R(f)_{(q+1)d - n - 2}.$$
For example, consider the K3 surface given by $f = x_0^4 + x_1^4 + x_2^4 + x_3^4$, hence $d = 4$ and $n = 2$. Then, the graded Jacobian ring is
$$R = \mathbb{C}[x_0, x_1, x_2, x_3] / (x_0^3, x_1^3, x_2^3, x_3^3).$$
The isomorphism for the primitive cohomology groups then reads
$$H^{\,2-q,\,q}_{\mathrm{prim}}(X) \cong R_{4(q+1) - 4},$$
hence
$$H^{2,0}_{\mathrm{prim}} \cong R_0, \qquad H^{1,1}_{\mathrm{prim}} \cong R_4, \qquad H^{0,2}_{\mathrm{prim}} \cong R_8.$$
Notice that $R_4$ is the vector space spanned by the degree-4 monomials in which no variable occurs to a power higher than 2, which is 19-dimensional. There is an extra vector in $H^{1,1}$ given by the Lefschetz class $[L]$. From the Lefschetz hyperplane theorem and Hodge duality, the rest of the cohomology is concentrated in $H^0$ and $H^4$, each of which is 1-dimensional. Hence the Hodge diamond reads
$$\begin{matrix} & & 1 & & \\ & 0 & & 0 & \\ 1 & & 20 & & 1 \\ & 0 & & 0 & \\ & & 1 & & \end{matrix}$$
We can also use the previous isomorphism to verify the genus of a degree $d$ plane curve. Since $x^d + y^d + z^d$ defines a smooth curve and the Ehresmann fibration theorem guarantees that every other smooth plane curve of degree $d$ is diffeomorphic to it, the genus is the same for all of them. So, using the isomorphism of primitive cohomology with the graded part of the Jacobian ring, we see that
$$H^{1,0}_{\mathrm{prim}} \cong R(f)_{d-3}.$$
This implies that the dimension is
$$\binom{d - 3 + 2}{2} = \binom{d-1}{2} = \frac{(d-1)(d-2)}{2},$$
as desired.
The Hodge numbers for a complete intersection are also readily computable: there is a combinatorial formula found by Friedrich Hirzebruch.
Applications
The machinery based on the notions of Hodge structure and mixed Hodge structure forms a part of the still largely conjectural theory of motives envisaged by Alexander Grothendieck. Arithmetic information for a nonsingular algebraic variety X, encoded by the eigenvalues of Frobenius elements acting on its l-adic cohomology, has something in common with the Hodge structure arising from X considered as a complex algebraic variety. Sergei Gelfand and Yuri Manin remarked around 1988 in their Methods of Homological Algebra that, unlike Galois symmetries acting on other cohomology groups, the origin of "Hodge symmetries" is very mysterious, although formally they are expressed through the action of the fairly uncomplicated group $\mathbb{C}^*$ on the de Rham cohomology. Since then, the mystery has deepened with the discovery and mathematical formulation of mirror symmetry.
Variation of Hodge structure
A variation of Hodge structure is a family of Hodge structures
parameterized by a complex manifold X. More precisely a variation of Hodge structure of weight n on a complex manifold X consists of a locally constant sheaf S of finitely generated abelian groups on X, together with a decreasing Hodge filtration F on S ⊗ OX, subject to the following two conditions:
The filtration induces a Hodge structure of weight n on each stalk of the sheaf S
(Griffiths transversality) The natural connection on S ⊗ OX maps $F^p$ into $F^{p-1} \otimes \Omega^1_X$
Here the natural (flat) connection on S ⊗ OX is the one induced by the flat connection on S and the flat connection d on OX, where OX is the sheaf of holomorphic functions on X and $\Omega^1_X$ is the sheaf of 1-forms on X. This natural flat connection is a Gauss–Manin connection ∇ and can be described by the Picard–Fuchs equation.
A variation of mixed Hodge structure can be defined in a similar way, by adding a grading or filtration W to S. Typical examples can be found from algebraic morphisms $f : \mathbb{C}^2 \to \mathbb{C}$. For example, such a morphism may have fibers
$$f^{-1}(t)$$
which are smooth plane curves of genus 10 for generic values of $t$ and degenerate to a singular curve at special values of $t$. Then, the cohomology sheaves
$$R^i f_*(\mathbb{Z})$$
give variations of mixed Hodge structures.
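A simpler family exhibiting the same behaviour, sketched here only as an illustration (it is not the example referred to above), is the Legendre family of elliptic curves:

```latex
% Legendre family over C \ {0,1}: a weight-1 variation of Hodge structure
E_t : \; y^2 = x(x-1)(x-t), \qquad t \in \mathbb{C} \setminus \{0, 1\}
```

Over the locus of smooth fibers the sheaf $R^1 f_* \mathbb{Z}$ carries a weight-1 variation of Hodge structure, and the degenerations to nodal curves at $t = 0$ and $t = 1$ produce the mixed behaviour discussed above.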
Hodge modules
Hodge modules are a generalization of variation of Hodge structures on a complex manifold. They can be thought of informally as something like sheaves of Hodge structures on a manifold; the precise definition is rather technical and complicated. There are generalizations to mixed Hodge modules, and to manifolds with singularities.
For each smooth complex variety, there is an abelian category of mixed Hodge modules associated with it. These behave formally like the categories of sheaves over the manifolds; for example, morphisms f between manifolds induce functors $f_*$, $f^*$, $f_!$, $f^!$ between (derived categories of) mixed Hodge modules similar to the ones for sheaves.
See also
Mixed Hodge structure
Hodge conjecture
Jacobian ideal
Hodge–Tate structure, a p-adic analogue of Hodge structures.
Notes
Introductory references
(Gives tools for computing hodge numbers using sheaf cohomology)
A Naive Guide to Mixed Hodge Theory
(Gives a formula and generators for mixed Hodge numbers of affine Milnor fiber of a weighted homogenous polynomial, and also a formula for complements of weighted homogeneous polynomials in a weighted projective space.)
Survey articles
References
This constructs a mixed Hodge structure on the cohomology of a complex variety.
An annotated version of this article can be found on J. Milne's homepage.
Homological algebra
Hodge theory
Structures on manifolds | Hodge structure | [
"Mathematics",
"Engineering"
] | 3,223 | [
"Tensors",
"Mathematical structures",
"Hodge theory",
"Differential forms",
"Fields of abstract algebra",
"Category theory",
"Homological algebra"
] |
234,417 | https://en.wikipedia.org/wiki/Rebar | Rebar (short for reinforcement bar or reinforcing bar), known when massed as reinforcing steel or steel reinforcement, is a tension device added to concrete to form reinforced concrete and reinforced masonry structures to strengthen and aid the concrete under tension. Concrete is strong under compression, but has low tensile strength. Rebar usually consists of steel bars which significantly increase the tensile strength of the structure. Rebar surfaces feature a continuous series of ribs, lugs or indentations to promote a better bond with the concrete and reduce the risk of slippage.
The most common type of rebar is carbon steel, typically consisting of hot-rolled round bars with deformation patterns embossed into its surface. Steel and concrete have similar coefficients of thermal expansion, so a concrete structural member reinforced with steel will experience minimal differential stress as the temperature changes.
Other readily available types of rebar are manufactured of stainless steel, and composite bars made of glass fiber, carbon fiber, or basalt fiber. The carbon steel reinforcing bars may also be coated in zinc or an epoxy resin designed to resist the effects of corrosion, especially when used in saltwater environments. Bamboo has been shown to be a viable alternative to reinforcing steel in concrete construction. These alternative types tend to be more expensive or may have lesser mechanical properties and are thus more often used in specialty construction where their physical characteristics fulfill a specific performance requirement that carbon steel does not provide.
History
Reinforcing bars in masonry construction have been used since antiquity, with Rome using iron or wooden rods in arch construction. Iron tie rods and anchor plates were later employed across Medieval Europe, as a device to reinforce arches, vaults, and cupolas. 2,500 meters of rebar was used in the 14th-century Château de Vincennes.
During the 18th century, rebar was used to form the carcass of the Leaning Tower of Nevyansk in Russia, built on the orders of the industrialist Akinfiy Demidov. The cast iron used for the rebar was of high quality, and there is no corrosion on the bars to this day. The carcass of the tower was connected to its cast iron tented roof, crowned with one of the first known lightning rods.
However, not until the mid-19th century, with the embedding of steel bars into concrete (thus producing modern reinforced concrete), did rebar display its greatest strengths. Several people in Europe and North America developed reinforced concrete in the 1850s. These include Joseph-Louis Lambot of France, who built reinforced concrete boats in Paris (1854) and Thaddeus Hyatt of the United States, who produced and tested reinforced concrete beams. Joseph Monier of France is one of the most notable figures for the invention and popularization of reinforced concrete. As a French gardener, Monier patented reinforced concrete flowerpots in 1867, before proceeding to build reinforced concrete water tanks and bridges.
Ernest L. Ransome, an English engineer and architect who worked in the United States, made a significant contribution to the development of reinforcing bars in concrete construction. He invented twisted iron rebar, which he initially thought of while designing self-supporting sidewalks for the Masonic Hall in Stockton, California. His twisted rebar was, however, not initially appreciated and even ridiculed at the Technical Society of California, where members stated that the twisting would weaken the iron. In 1889, Ransome worked on the West Coast mainly designing bridges. One of these, the Alvord Lake Bridge in San Francisco's Golden Gate Park, was the first reinforced concrete bridge built in the United States. He used twisted rebar in this structure.
At the same time Ransome was inventing twisted steel rebar, C.A.P. Turner was designing his "mushroom system" of reinforced concrete floor slabs with smooth round rods and Julius Kahn was experimenting with an innovative rolled diamond-shaped rebar with flat-plate flanges angled upwards at 45° (patented in 1902). Kahn predicted concrete beams with this reinforcing system would bend like a Warren truss, and also thought of this rebar as shear reinforcement. Kahn's reinforcing system was built in concrete beams, joists, and columns.
The system was both praised and criticized by Kahn's engineering contemporaries: Turner voiced strong objections to this system as it could cause catastrophic failure to concrete structures. He rejected the idea that Kahn's reinforcing system in concrete beams would act as a Warren truss and also noted that this system would not provide the adequate amount of shear stress reinforcement at the ends of the simply supported beams, the place where the shear stress is greatest. Furthermore, Turner warned that Kahn's system could result in a brittle failure as it did not have longitudinal reinforcement in the beams at the columns.
This type of failure manifested in the partial collapse of the Bixby Hotel in Long Beach, California and total collapse of the Eastman Kodak Building in Rochester, New York, both during construction in 1906. It was, however, concluded that both failures were the consequences of poor-quality labor. With the increase in demand of construction standardization, innovative reinforcing systems such as Kahn's were pushed to the side in favor of the concrete reinforcing systems seen today.
Requirements for deformations on steel bar reinforcement were not standardized in US construction until about 1950. Modern requirements for deformations were established in "Tentative Specifications for the Deformations of Deformed Steel Bars for Concrete Reinforcement", ASTM A305-47T. Subsequently, changes were made that increased rib height and reduced rib spacing for certain bar sizes, and the qualification of “tentative” was removed when the updated standard ASTM A305-49 was issued in 1949. The requirements for deformations found in current specifications for steel bar reinforcing, such as ASTM A615 and ASTM A706, among others, are the same as those specified in ASTM A305-49.
Use in concrete and masonry
Concrete is a material that is very strong in compression, but relatively weak in tension. To compensate for this imbalance in concrete's behavior, rebar is cast into it to carry the tensile loads. Most steel reinforcement is divided into primary and secondary reinforcement:
Primary reinforcement refers to the steel which is employed to guarantee the resistance needed by the structure as a whole to support the design loads.
Secondary reinforcement, also known as distribution or thermal reinforcement, is employed for durability and aesthetic reasons, by providing enough localized resistance to limit cracking and resist stresses caused by effects such as temperature changes and shrinkage.
Secondary applications include rebar embedded in masonry walls, which includes bars placed either horizontally in a mortar joint (every fourth or fifth course of block) or vertically (in the horizontal voids of cement blocks and cored bricks), which are then fixed in place with grout. Masonry structures held together with grout have similar properties to concrete – high compressive resistance but a limited ability to carry tensile loads. When rebar is added they are known as "reinforced masonry".
A similar approach (embedding rebar vertically in designed voids in engineered blocks) is also used in dry-laid landscape walls, at least to pin the lowest course in place into the earth; the same technique is employed to secure the lowest course and/or deadmen in walls made of engineered concrete blocks or wooden landscape ties.
In unusual cases, steel reinforcement may be embedded and partially exposed, as in the steel tie bars that constrain and reinforce the masonry of Nevyansk Tower or ancient structures in Rome and the Vatican.
Physical characteristics
Steel has a thermal expansion coefficient nearly equal to that of modern concrete. If this were not so, it would cause problems through additional longitudinal and perpendicular stresses at temperatures different from the temperature of the setting. Although rebar has ribs that bind it mechanically to the concrete, it can still be pulled out of the concrete under high stresses, an occurrence that often accompanies a larger-scale collapse of the structure. To prevent such a failure, rebar is either deeply embedded into adjacent structural members (40–60 times the diameter), or bent and hooked at the ends to lock it around the concrete and other rebar. This first approach increases the friction locking the bar into place, while the second makes use of the high compressive strength of concrete.
Common rebar is made of unfinished tempered steel, making it susceptible to rusting. Normally the concrete cover is able to provide a pH value higher than 12 avoiding the corrosion reaction. Too little concrete cover can compromise this guard through carbonation from the surface, and salt penetration. Too much concrete cover can cause bigger crack widths which also compromises the local guard. As rust takes up greater volume than the steel from which it was formed, it causes severe internal pressure on the surrounding concrete, leading to cracking, spalling, and, ultimately, structural failure. This phenomenon is known as oxide jacking.
This is a particular problem where the concrete is exposed to salt water, as in bridges where salt is applied to roadways in winter, or in marine applications. Uncoated, corrosion-resistant low-carbon/chromium (microcomposite), silicon bronze, epoxy-coated, galvanized, or stainless steel rebars may be employed in these situations at greater initial expense, but significantly lower expense over the service life of the project.
Extra care is taken during the transport, fabrication, handling, installation, and concrete placement process when working with epoxy-coated rebar, because damage will reduce the long-term corrosion resistance of these bars. Even damaged epoxy-coated bars have shown better performance than uncoated reinforcing bars, though issues from debonding of the epoxy coating from the bars and corrosion under the epoxy film have been reported. These epoxy-coated bars are used in over 70,000 bridge decks in the US, but as of 2005 this technology was slowly being phased out in favor of stainless steel rebar because of the coating's poor long-term performance.
Requirements for deformations are found in US-standard product specifications for steel bar reinforcing, such as ASTM A615 and ASTM A706, and dictate lug spacing and height.
Fibre-reinforced plastic rebar is also used in high-corrosion environments. It is available in many forms, such as spirals for reinforcing columns, common rods, and meshes. Most commercially available rebar is made from unidirectional fibers set in a thermoset polymer resin and is often referred to as FRP.
Some special construction such as research and manufacturing facilities with very sensitive electronics may require the use of reinforcement that is non-conductive to electricity, and medical imaging equipment rooms may require non-magnetic properties to avoid interference. FRP rebar, notably glass fibre types have low electrical conductivity and are non-magnetic which is commonly used for such needs. Stainless steel rebar with low magnetic permeability is available and is sometimes used to avoid magnetic interference issues.
Reinforcing steel can also be displaced by impacts such as earthquakes, resulting in structural failure. The prime example of this is the collapse of the Cypress Street Viaduct in Oakland, California as a result of the 1989 Loma Prieta earthquake, causing 42 fatalities. The shaking of the earthquake caused rebars to burst from the concrete and buckle. Updated building designs, including more circumferential rebar, can address this type of failure.
Sizes and grades
US sizes
US/Imperial bar sizes give the diameter in units of 1/8 inch for bar sizes #2 through #8, so that #8 = 8/8 inch = 1 inch in diameter.
There are no fractional bar sizes in this system. The "#" symbol indicates the number sign, and thus "#6" is read as "number six". The use of the "#" sign is customary for US sizes, but "No." is sometimes used instead. Within the trades rebar is known by a shorthand utilizing the bar diameter as descriptor, such as "four-bar" for bar that is four-eighths (or one-half) of an inch.
The cross-sectional area of a bar, as given by πr², works out to (bar size/9.027)², which is approximated as (bar size/9)² square inches. For example, the area of #8 bar is (8/9)² = 0.79 square inches.
Bar sizes larger than #8 follow the 1/8-inch rule imperfectly and skip sizes #12–13 and #15–17 due to historical convention. In early concrete construction, bars of one inch and larger were only available in square sections, and when large format deformed round bars became available around 1957, the industry manufactured them to provide the cross-sectional area equivalent of the standard square bar sizes that were formerly used. The diameter of the equivalent large format round shape is rounded to the nearest 1/8 inch to provide the bar size. For example, #9 bar has a cross section of 1.00 square inch and therefore a diameter of 1.128 inches. The #10, #11, #14, and #18 sizes correspond to 1 1/8 inch, 1 1/4 inch, 1 1/2 inch, and 2 1/4 inch square bars, respectively.
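The following short sketch encodes the size relations stated above for bar sizes #2 through #8 (the function names are illustrative, not taken from any standard or library):

```python
# Sketch of the US customary relations stated above: nominal diameter of bar #n
# is n/8 inch for #2-#8, and the area is pi*r^2, approximated by (n/9)^2 in^2.
import math

def nominal_diameter_in(bar_size: int) -> float:
    """Nominal diameter in inches for bar sizes #2 through #8."""
    return bar_size / 8.0

def cross_section_in2(bar_size: int) -> float:
    """Cross-sectional area (pi * r^2) in square inches."""
    r = nominal_diameter_in(bar_size) / 2.0
    return math.pi * r ** 2

for n in range(2, 9):
    approx = (n / 9.0) ** 2  # the (bar size / 9)^2 rule of thumb
    print(f"#{n}: d = {nominal_diameter_in(n):.3f} in, "
          f"A = {cross_section_in2(n):.3f} in^2 (approx. {approx:.2f})")
```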
Sizes smaller than #3 are no longer recognized as standard sizes. These are most commonly manufactured as plain round undeformed rod steel but can be made with deformations. Sizes smaller than #3 are typically referred to as "wire" products and not "bar" and specified by either their nominal diameter or wire gage number. #2 bars are often informally called "pencil rod" as they are about the same size as a pencil.
When US/Imperial sized rebar are used in projects with metric units, the equivalent metric size is typically specified as the nominal diameter rounded to the nearest millimeter. These are not considered standard metric sizes, and thus is often referred to as a soft conversion or the "soft metric" size. The US/Imperial bar size system recognizes the use of true metric bar sizes (No. 10, 12, 16, 20, 25, 28, 32, 36, 40, 50 and 60 specifically) which indicates the nominal bar diameter in millimeters, as an "alternate size" specification. Substituting a true metric size for a US/Imperial size is called a hard conversion, and sometimes results in the use of a physically different sized bar.
Canadian sizes
Metric bar designations represent the nominal bar diameter in millimeters, rounded to the nearest 5 mm.
European sizes
Metric bar designations represent the nominal bar diameter in millimetres. Preferred bar sizes in Europe are specified to comply with Table 6 of the standard EN 10080, although various national standards still remain in force (e.g. BS 4449 in the United Kingdom). In Switzerland some sizes are different from European standard.
Australian sizes
Reinforcement for use in concrete construction is subject to the requirements of Australian Standards AS3600 (Concrete Structures) and AS/NZS4671 (Steel Reinforcing for Concrete). There are other standards that apply to testing, welding and galvanizing.
The designation of reinforcement is defined in AS/NZS4671 using the following formats:
Shape/ Section
D- deformed ribbed bar, R- round / plain bar, I- deformed indented bar
Ductility Class
L- low ductility, N- normal ductility, E- seismic (Earthquake) ductility
Standard grades (MPa)
250N, 300E, 500L, 500N, 500E
Examples:
D500N12 is deformed bar, 500 MPa strength, normal ductility and 12 mm nominal diameter – also known as "N12"
Bars are typically abbreviated to simply 'N' (hot-rolled deformed bar), 'R' (hot-rolled round bar), 'RW' (cold-drawn ribbed wire) or 'W' (cold-drawn round wire), as the yield strength and ductility class can be implied from the shape. For example, all commercially available wire has a yield strength of 500 MPa and low ductility, while round bars are 250 MPa and normal ductility.
New Zealand
Reinforcement for use in concrete construction is subject to the requirements of AS/NZS4671 (Steel Reinforcing for Concrete). There are other standards that apply to testing, welding and galvanizing.
Reinforcement steel bar Grade 300 & 500 Class E
India
Rebars are available in the following grades as per IS:1786-2008: Fe 415, Fe 415D, Fe 415S, Fe 500, Fe 500D, Fe 500S, Fe 550, Fe 550D and Fe 600. Rebars are quenched with water at high pressure so that the outer surface is hardened while the inner core remains soft. Rebars are ribbed so that the concrete can get a better grip. Coastal regions use galvanized rebars to prolong their life. BIS rebar sizes are 10, 12, 16, 20, 25, 28, 32, 36, 40 and 50 millimeters.
Jumbo and threaded bar sizes
Very large format rebar sizes are widely available and produced by specialty manufacturers. The tower and sign industries commonly use "jumbo" bars as anchor rods for large structures which are fabricated from slightly oversized blanks such that threads can be cut at the ends to accept standard anchor nuts. Fully threaded rebar is also produced with very coarse threads which satisfy rebar deformation standards and allow for custom nuts and couplers to be used. These customary sizes, while in common use, do not have consensus standards associated with them, and properties may vary by manufacturer.
Grades
Rebar is available in grades and specifications that vary in yield strength, ultimate tensile strength, chemical composition, and percentage of elongation.
The use of a grade by itself only indicates the minimum permissible yield strength, and it must be used in the context of a material specification in order to fully describe product requirements for rebar. Material specifications set the requirements for grades as well as additional properties such as, chemical composition, minimum elongation, physical tolerances, etc. Fabricated rebar must exceed the grade's minimum yield strength and any other material specification requirements when inspected and tested.
In US use, the grade designation is equal to the minimum yield strength of the bar in ksi (1000 psi); for example, grade 60 rebar has a minimum yield strength of 60 ksi. Rebar is most commonly manufactured in grades 40, 60, and 75 with higher strength readily available in grades 80, 100, 120 and 150. Grade 60 (420 MPa) is the most widely used rebar grade in modern US construction. Historic grades include 30, 33, 35, 36, 50 and 55, which are not in common use today.
Some grades are only manufactured for specific bar sizes; for example, under ASTM A615, Grade 40 (280 MPa) is only furnished for US bar sizes #3 through #6 (soft metric No.10 through 19). Sometimes limitations on available material grades for specific bar sizes is related to the manufacturing process used, as well as the availability of controlled quality raw materials used.
Some material specifications cover multiple grades, and in such cases it is necessary to indicate both the material specification and grade. Rebar grades are customarily noted on engineering documents, even when there are no other grade options within the material specification, in order to eliminate confusion and avoid potential quality issues such as might occur if a material substitution is made. "Gr." is the common engineering abbreviation for "grade", with variations on letter capitalization and the use of a period.
In certain cases, such as earthquake engineering and blast-resistant design where post-yield behavior is expected, it is important to be able to predict and control properties such as the maximum yield strength and minimum ratio of tensile strength to yield strength. ASTM A706 Gr. 60 is an example of a controlled property range material specification which has a minimum yield strength of 60 ksi (420 MPa), maximum yield strength of 78 ksi (540 MPa), minimum tensile strength of 80 ksi (550 MPa) and not less than 1.25 times the actual yield strength, and minimum elongation requirements that vary by bar size.
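As a minimal sketch of how the numeric criteria quoted above combine (it encodes only the ASTM A706 Gr. 60 strength limits mentioned in this paragraph; the size-dependent elongation requirements are omitted, and the function name is illustrative):

```python
# Checks the ASTM A706 Gr. 60 strength criteria quoted above:
# yield 60-78 ksi, tensile >= 80 ksi and >= 1.25 x the actual yield strength.
def meets_a706_gr60_strength(yield_ksi: float, tensile_ksi: float) -> bool:
    return (
        60.0 <= yield_ksi <= 78.0
        and tensile_ksi >= 80.0
        and tensile_ksi >= 1.25 * yield_ksi
    )

print(meets_a706_gr60_strength(66.0, 92.0))   # True
print(meets_a706_gr60_strength(82.0, 105.0))  # False: yield exceeds the 78 ksi cap
```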
In countries that use the metric system, the grade designation is typically the yield strength in megapascals (MPa), for example grade 400 (similar to US grade 60; however, metric grade 420 is actually the exact substitution for the US grade).
Common US specifications, published by ACI and ASTM, are:
American Concrete Institute: "ACI 318-14 Building Code Requirements for Structural Concrete and Commentary", (2014)
ASTM A82: Specification for Plain Steel Wire for Concrete Reinforcement
ASTM A184/A184M: Specification for Fabricated Deformed Steel Bar Mats for Concrete Reinforcement
ASTM A185: Specification for Welded Plain Steel Wire Fabric for Concrete Reinforcement
ASTM A496: Specification for Deformed Steel Wire for Concrete Reinforcement
ASTM A497: Specification for Welded Deformed Steel Wire Fabric for Concrete Reinforcement
ASTM A615/A615M: Deformed and plain carbon-steel bars for concrete reinforcement
ASTM A616/A616M: Specification for Rail-Steel Deformed and Plain Bars for Concrete Reinforcement
ASTM A617/A617M: Specification for Axle-Steel Deformed and Plain Bars for Concrete Reinforcement
ASTM A706/A706M: Low-alloy steel deformed and plain bars for concrete reinforcement
ASTM A722/A722M: Standard Specification for High-Strength Steel Bars for Prestressed Concrete
ASTM A767/A767M: Specification for Zinc-Coated (Galvanized) Steel Bars for Concrete Reinforcement
ASTM A775/A775M: Specification for Epoxy-Coated Reinforcing Steel Bars
ASTM A934/A934M: Specification for Epoxy-Coated Prefabricated Steel Reinforcing Bars
ASTM A955: Deformed and plain stainless-steel bars for concrete reinforcement (Supplementary Requirement S1 is used when specifying magnetic permeability testing)
ASTM A996: Rail-steel and axle-steel deformed bars for concrete reinforcement
ASTM A1035: Standard Specification for Deformed and Plain, Low-carbon, Chromium, Steel Bars for Concrete Reinforcement
ASTM marking designations are:
'S' billet A615
'I' rail A616
'IR' rail meeting Supplementary Requirements S1, A616
'A' axle A617
'W' Low-alloy — A706
Historically in Europe, rebar is composed of mild steel material with a yield strength of approximately 250 MPa (36 ksi). Modern rebar is composed of high-yield steel, with a yield strength more typically 500 MPa (72.5 ksi). Rebar can be supplied with various grades of ductility. The more ductile steel is capable of absorbing considerably more energy when deformed – a behavior that resists earthquake forces and is used in design. These high-yield-strength ductile steels are usually produced using the TEMPCORE process, a method of thermomechanical processing. The manufacture of reinforcing steel by re-rolling finished products (e.g. sheets or rails) is not allowed. In contrast to structural steel, rebar steel grades are not harmonized yet across Europe, each country having their own national standards. However, some standardization of specification and testing methods exist under EN 10080 and EN ISO 15630:
BS EN 10080: Steel for the reinforcement of concrete. Weldable reinforcing steel. General. (2005)
BS 4449: Steel for the reinforcement of concrete. Weldable reinforcing steel. Bar, coil and product. Specification. (2005/2009)
BS 4482: Steel wire for the reinforcement of concrete products. Specification (2005)
BS 4483: Steel fabric for the reinforcement of concrete. Specification (2005)
BS 6744: Stainless steel bars for the reinforcement of and use in concrete. Requirements and test methods. (2001/2009)
DIN 488-1: Reinforcing steels - Part 1: Grades, properties, marking (2009)
DIN 488-2: Reinforcing steels - Part 2: Reinforcing steel bars (2009)
DIN 488-3: Reinforcing steels - Part 3: Reinforcing steel in coils, steel wire (2009)
DIN 488-4: Reinforcing steels - Part 4: Welded fabric (2009)
DIN 488-5: Reinforcing steels - Part 5: Lattice girders (2009)
DIN 488-6: Reinforcing steel - Part 6: Assessment of conformity (2010)
BS EN ISO 15630-1: Steel for the reinforcement and prestressing of concrete. Test methods. Reinforcing bars, wire rod and wire. (2010)
BS EN ISO 15630-2: Steel for the reinforcement and prestressing of concrete. Test methods. Welded fabric. (2010)
Placing rebar
Rebar cages are fabricated either on or off the project site commonly with the help of hydraulic benders and shears. However, for small or custom work a tool known as a Hickey, or hand rebar bender, is sufficient. The rebars are placed by steel fixers ("rodbusters" or concrete reinforcing iron workers), with bar supports and concrete or plastic rebar spacers separating the rebar from the concrete formwork to establish concrete cover and ensure that proper embedment is achieved. The rebars in the cages are connected by spot welding, tying steel wire, sometimes using an electric rebar tier, or with mechanical connections. For tying epoxy-coated or galvanized rebars, epoxy-coated or galvanized wire is normally used, respectively.
Stirrups
Stirrups form the outer part of a rebar cage. The function of stirrups (often referred to as 'reinforcing steel links' and 'shear links') is threefold: to give the main reinforcement bars structure, to maintain the correct level of concrete cover, and to maintain an even transference of force throughout the supporting elements. Stirrups are usually rectangular in beams and circular in piers, and are placed at regular intervals along a column or beam as defined by civil or structural engineers in construction drawings.
Welding
The American Welding Society (AWS) D 1.4 sets out the practices for welding rebar in the US. Without special consideration the only rebar that is ready to weld is W grade (Low-alloy — A706). Rebar that is not produced to the ASTM A706 specification is generally not suitable for welding without calculating the "carbon-equivalent". Material with a carbon-equivalent of less than 0.55 can be welded.
ASTM A 616 & ASTM A 617 (now replaced by the combined standard A996) reinforcing bars are re-rolled rail steel and re-rolled rail axle steel with uncontrolled chemistry, phosphorus and carbon content. These materials are not common.
Rebar cages are normally tied together with wire, although spot welding of cages has been the norm in Europe for many years, and is becoming more common in the United States. High strength steels for prestressed concrete cannot be welded.
Reinforcement placement in rolls
The roll reinforcement system is a fast and cost-efficient method for placing a large quantity of reinforcement over a short period of time. Roll reinforcement is usually prepared off-site and easily unrolled on site. Roll reinforcement placement has been applied successfully in slabs (decks, foundations), wind energy mast foundations, walls, ramps, etc.
Mechanical connections
Also known as "mechanical couplers" or "mechanical splices", mechanical connections are used to connect reinforcing bars together. Mechanical couplers are an effective means to reduce rebar congestion in highly reinforced areas for cast-in-place concrete construction. These couplers are also used in precast concrete construction at the joints between members.
The structural performance criteria for mechanical connections vary between countries, codes, and industries. As a minimum requirement, codes typically specify that the rebar-to-splice connection meets or exceeds 125% of the specified yield strength of the rebar. More stringent criteria also require the development of the specified ultimate strength of the rebar. As an example, ACI 318 specifies either Type 1 (125% Fy) or Type 2 (125% Fy and 100% Fu) performance criteria.
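A minimal Python sketch, with hypothetical test values, of how those two ACI 318 acceptance criteria might be checked for a tested splice:

def splice_meets_aci318(test_strength, fy, fu, type2=False):
    # Type 1: develop at least 125% of the specified yield strength fy.
    # Type 2: additionally develop 100% of the specified tensile strength fu.
    ok = test_strength >= 1.25 * fy
    if type2:
        ok = ok and test_strength >= fu
    return ok

# Hypothetical values in MPa.
print(splice_meets_aci318(test_strength=660, fy=500, fu=650, type2=True))  # True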
For concrete structures designed with ductility in mind, it is recommended that the mechanical connections are also capable of failing in a ductile manner, typically known in the reinforcing steel industry as achieving "bar-break". As an example, Caltrans specifies a required mode of failure (i.e., "necking of the bar").
Safety
To prevent injury, the protruding ends of steel rebar are often bent over or covered with special steel-reinforced plastic caps.
Designations
Reinforcement is usually tabulated in a "reinforcement schedule" on construction drawings. This eliminates ambiguity in the notations used around the world. The following list provides examples of the notations used in the architectural, engineering, and construction industry.
Reuse and recycling
Rebar is frequently recycled, and rebar is often made entirely from recycled steel. Nucor, the largest steel producer in the United States, claims its steel bar products are made from 97% recycled steel.
References
External links
OSHA rebar impalement protection measures
Building materials
Concrete
Russian inventions
Steels
Steel objects | Rebar | [
"Physics",
"Engineering"
] | 6,127 | [
"Structural engineering",
"Building engineering",
"Steels",
"Architecture",
"Construction",
"Materials",
"Alloys",
"Concrete",
"Matter",
"Building materials"
] |
234,444 | https://en.wikipedia.org/wiki/Automorphic%20number | In mathematics, an automorphic number (sometimes referred to as a circular number) is a natural number in a given number base whose square "ends" in the same digits as the number itself.
Definition and properties
Given a number base b, a natural number n with k digits is an automorphic number if n is a fixed point of the polynomial function f(x) = x^2 over Z/(b^k)Z, the ring of integers modulo b^k. As the inverse limit of the rings Z/(b^k)Z is Z_b, the ring of b-adic integers, automorphic numbers are used to find the numerical representations of the fixed points of f(x) = x^2 over Z_b.
For example, with b = 10, there are four 10-adic fixed points of f(x) = x^2, the last 10 digits of which are:
…0000000000
…0000000001
…8212890625
…1787109376
Thus, the automorphic numbers in base 10 are 0, 1, 5, 6, 25, 76, 376, 625, 9376, 90625, 109376, 890625, 2890625, 7109376, 12890625, 87109376, 212890625, 787109376, 1787109376, 8212890625, 18212890625, 81787109376, 918212890625, 9918212890625, 40081787109376, 59918212890625, ... .
A fixed point of f(x) is a zero of the function g(x) = f(x) − x = x^2 − x. In the ring of integers modulo b, there are 2^ω(b) zeroes of g(x), where the prime omega function ω(b) is the number of distinct prime factors in b. An element x in Z/bZ is a zero of g(x) if and only if x ≡ 0 or x ≡ 1 modulo p^v for every maximal prime power p^v dividing b. Since there are two possible values for each of the ω(b) prime powers, there are 2^ω(b) zeroes of g(x), and thus 2^ω(b) fixed points of f(x). According to Hensel's lemma, if there are k zeroes or fixed points of a polynomial function modulo b, then there are k corresponding zeroes or fixed points of the same function modulo any power of b, and this remains true in the inverse limit. Thus, in any given base b there are 2^ω(b) b-adic fixed points of f(x) = x^2.
In every base, 0 and 1 are fixed points of f(x) = x^2, so 0 and 1 are automorphic numbers in every base. These solutions are called trivial automorphic numbers. If b is a prime power, then the ring of b-adic numbers has no zero-divisors other than 0, so the only fixed points of f(x) are 0 and 1. As a result, nontrivial automorphic numbers, those other than 0 and 1, only exist when the base has at least two distinct prime factors.
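Independently of the Hensel-lifting code given under "Programming example" below, the definition can be checked directly by brute force: n is automorphic in base b exactly when n^2 − n is divisible by b^k, where k is the number of base-b digits of n. A minimal Python sketch:

def num_digits(n: int, base: int) -> int:
    # Number of base-`base` digits of n (0 is treated as one digit).
    d = 1
    while n >= base:
        n //= base
        d += 1
    return d

def is_automorphic(n: int, base: int = 10) -> bool:
    # n^2 must end in the same digits as n, i.e. n^2 ≡ n (mod base^k).
    return (n * n - n) % base ** num_digits(n, base) == 0

print([n for n in range(100000) if is_automorphic(n)])
# [0, 1, 5, 6, 25, 76, 376, 625, 9376, 90625]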
Automorphic numbers in base b
All b-adic numbers are represented in base b, using A−Z to represent digit values 10 to 35.
Extensions
Automorphic numbers can be extended to any such polynomial function f(x) of degree n with b-adic coefficients. These generalised automorphic numbers form a tree.
a-automorphic numbers
An a-automorphic number occurs when the polynomial function is f(x) = ax^2.
For example, with b = 10 and a = 2, as there are two fixed points for f(x) = 2x^2 in Z/10Z (x = 0 and x = 8), according to Hensel's lemma there are two 10-adic fixed points for f(x) = 2x^2,
so the 2-automorphic numbers in base 10 are 0, 8, 88, 688, 4688...
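The same brute-force idea used above extends directly; this small Python sketch checks a·n^2 ≡ n (mod 10^k) for a = 2:

def is_a_automorphic(n: int, a: int) -> bool:
    # a * n^2 must end in the base-10 digits of n.
    k = len(str(n))
    return (a * n * n - n) % 10 ** k == 0

print([n for n in range(10000) if is_a_automorphic(n, a=2)])
# [0, 8, 88, 688, 4688]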
Trimorphic numbers
A trimorphic number or spherical number occurs when the polynomial function is f(x) = x^3. All automorphic numbers are trimorphic. The terms circular and spherical were formerly used for the slightly different case of a number whose powers all have the same last digit as the number itself.
For base b = 10, the trimorphic numbers are:
0, 1, 4, 5, 6, 9, 24, 25, 49, 51, 75, 76, 99, 125, 249, 251, 375, 376, 499, 501, 624, 625, 749, 751, 875, 999, 1249, 3751, 4375, 4999, 5001, 5625, 6249, 8751, 9375, 9376, 9999, ...
For base b = 12, the trimorphic numbers are:
0, 1, 3, 4, 5, 7, 8, 9, B, 15, 47, 53, 54, 5B, 61, 68, 69, 75, A7, B3, BB, 115, 253, 368, 369, 4A7, 5BB, 601, 715, 853, 854, 969, AA7, BBB, 14A7, 2369, 3853, 3854, 4715, 5BBB, 6001, 74A7, 8368, 8369, 9853, A715, BBBB, ...
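As with automorphic numbers, the trimorphic condition n^3 ≡ n (mod b^k) can be checked by brute force; a short Python sketch (base-12 results printed in decimal):

def is_trimorphic(n: int, base: int = 10) -> bool:
    # n^3 must end in the same base-`base` digits as n.
    k, m = 1, n
    while m >= base:
        m //= base
        k += 1
    return (n ** 3 - n) % base ** k == 0

print([n for n in range(1000) if is_trimorphic(n)])            # base 10
print([n for n in range(200) if is_trimorphic(n, base=12)])    # base 12, shown in decimal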
Programming example
def hensels_lemma(polynomial_function, base: int, power: int) -> list[int]:
    """Return the roots of polynomial_function modulo base**power,
    lifting the roots found modulo base**(power - 1) (Hensel lifting)."""
    if power <= 0:
        return [0]
    roots = hensels_lemma(polynomial_function, base, power - 1)
    new_roots = []
    for root in roots:
        for i in range(0, base):
            # Try every candidate that reduces to `root` modulo base**(power - 1).
            new_i = i * base ** (power - 1) + root
            new_root = polynomial_function(new_i) % pow(base, power)
            if new_root == 0:
                new_roots.append(new_i)
    return new_roots

base = 10
digits = 10

def automorphic_polynomial(x: int) -> int:
    # Automorphic numbers are the fixed points of x**2, i.e. the zeroes of x**2 - x.
    return x ** 2 - x

for i in range(1, digits + 1):
    print(hensels_lemma(automorphic_polynomial, base, i))
See also
Arithmetic dynamics
Kaprekar number
p-adic number
p-adic analysis
Zero-divisor
References
External links
Arithmetic dynamics
Base-dependent integer sequences
Mathematical analysis
Modular arithmetic
Number theory
P-adic numbers
Ring theory | Automorphic number | [
"Mathematics"
] | 1,233 | [
"Mathematical analysis",
"P-adic numbers",
"Discrete mathematics",
"Recreational mathematics",
"Ring theory",
"Arithmetic dynamics",
"Fields of abstract algebra",
"Arithmetic",
"Modular arithmetic",
"Number theory",
"Dynamical systems"
] |
234,564 | https://en.wikipedia.org/wiki/Bullet%20time | Bullet time (also known as frozen moment, dead time, flow motion or time slice) is a visual effect or visual impression of detaching the time and space of a camera (or viewer) from that of its visible subject. It is a depth enhanced simulation of variable-speed action and performance found in films, broadcast advertisements, and realtime graphics within video games and other special media. It is characterized by its extreme transformation of both time (slow enough to show normally imperceptible and unfilmable events, such as flying bullets), and of space (by way of the ability of the camera angle—the audience's point-of-view—to move around the scene at a normal speed while events are slowed). This is almost impossible with conventional slow motion, as the physical camera would have to move implausibly fast; the concept implies that only a "virtual camera", often illustrated within the confines of a computer-generated environment such as a virtual world or virtual reality, would be capable of "filming" bullet-time types of moments. Technical and historical variations of this effect have been referred to as time slicing, view morphing, temps mort (French: "dead time") and virtual cinematography.
The term "bullet time" was first used with reference to the 1999 film The Matrix, and later in reference to the slow motion effects in the 2001 video game Max Payne. In the years since the introduction of the term via the Matrix films it has become a commonly applied expression in popular culture.
History
The technique of using a group of still cameras to freeze motion occurred before the invention of cinema itself with preliminary work by Eadweard Muybridge on chronophotography. In The Horse in Motion (1878), Muybridge analyzed the motion of a galloping horse by using a line of cameras to photograph the animal as it ran past. Eadweard Muybridge used still cameras placed along a racetrack, and each camera was actuated by a taut string stretched across the track; as the horse galloped past, the camera shutters snapped, taking one frame at a time. Muybridge later assembled the pictures into a rudimentary animation, by having them traced onto a glass disk, rotating in a type of magic lantern with a stroboscopic shutter. This zoopraxiscope may have been an inspiration for Thomas Edison to explore the idea of motion pictures. In 1878–1879, Muybridge made dozens of studies of foreshortenings of horses and athletes with five cameras capturing the same moment from different positions. For his studies with the University of Pennsylvania, published as Animal Locomotion (1887), Muybridge also took photos from six angles at the same instant, as well as series of 12 phases from three angles.
A debt may also be owed to MIT professor Harold Edgerton, who, in the 1940s, captured now-iconic photos of bullets using xenon strobe lights to "freeze" motion.
Bullet-time as a concept was frequently developed in cel animation. One of the earliest examples is the shot at the end of the title sequence for the 1966 Japanese anime series Speed Racer: as Speed leaps from the Mach Five, he freezes in mid-jump, and then the "camera" does an arc shot from front to sideways.
In 1980, Tim Macmillan started producing pioneering film, and later video, in this field while studying for a BA at the (then named) Bath Academy of Art, using 16mm film exposed through a circular array of pinhole cameras. This was the first iteration of the "Time-Slice" motion-picture array cameras, which he developed in the early 1990s when still cameras capable of high image quality for broadcast and movie applications became available for the array. In 1997 he founded Time-Slice Films Ltd. (UK). He applied the technique to his artistic practice in a video projection, titled Dead Horse in an ironic reference to Muybridge, that was exhibited at the London Electronic Arts Gallery in 1998 and in 2000 was nominated for the Citibank Prize for photography.
Another precursor of the bullet-time technique was "Midnight Mover", a 1985 Accept video. In this video, Academy Award winning special effects director Zbigniew Rybczynski mounted thirteen 16mm film cameras on a specially constructed hexagonal rig that encircled the performers. The resulting footage was meticulously edited to create the illusion of the band members spinning in place while moving in real time. In the 1990s, a morphing-based variation on time-slicing was employed by director Michel Gondry and the visual effects company BUF Compagnie in the music video for The Rolling Stones' "Like A Rolling Stone", and in a 1996 Smirnoff commercial the effect was used to depict slow-motion bullets being dodged. Similar time-slice effects were also featured in commercials for The Gap (which was directed by M. Rolston and again produced by BUF), and in feature films such as Lost in Space (1998) and Buffalo '66 (1998) and the television program The Human Body.
It is well-established for feature films' action scenes to be depicted using slow-motion footage, for example the gunfights in The Wild Bunch (directed by Sam Peckinpah) and the heroic bloodshed films of John Woo. Subsequently, the 1998 film Blade featured a scene that used computer-generated bullets and slow-motion footage to illustrate characters' superhuman bullet-dodging reflexes. The 1999 film The Matrix combined these elements (gunfight action scenes, superhuman bullet-dodging, and time-slice effects), popularizing both the effect and the term "bullet-time". The Matrix version of the effect was created by John Gaeta and Manex Visual Effects. Rigs of still cameras were set up in patterns determined by simulations, and then shot either simultaneously (producing an effect similar to previous time-slice scenes) or sequentially (which added a temporal element to the effect). Interpolation effects, digital compositing, and computer-generated "virtual" scenery were used to improve the fluidity of the apparent camera motion. Gaeta said of The Matrix use of the effect:
For artistic inspiration for bullet time, I would credit Otomo Katsuhiro, who co-wrote and directed Akira, which definitely blew me away, along with director Michel Gondry. His music videos experimented with a different type of technique called view-morphing and it was just part of the beginning of uncovering the creative approaches toward using still cameras for special effects. Our technique was significantly different because we built it to move around objects that were themselves in motion, and we were also able to create slow-motion events that 'virtual cameras' could move around – rather than the static action in Gondry's music videos with limited camera moves.
Following The Matrix, bullet time and other slow-motion effects were featured as key gameplay mechanics in various video games. While some games like Cyclone Studios' Requiem: Avenging Angel, released in March 1999, featured slow-motion effects, Remedy Entertainment's 2001 video game Max Payne is considered to be the first true implementation of a bullet-time effect that enables the player to have added limited control (such as aiming and shooting) during the slow-motion mechanic; this mechanic was explicitly called "Bullet Time" in the game. The mechanic is also used extensively in the F.E.A.R. series, combining it with squad-based enemy design encouraging the player to use bullet time to avoid being overwhelmed.
Bullet time was used for the first time in a live music environment in October 2009 for Creed's live DVD Creed Live.
The popular science television program, Time Warp, used high speed camera techniques to examine everyday occurrences and singular talents, including breaking glass, bullet trajectories and their impact effects.
Technology
The bullet time effect was originally achieved photographically by a set of still cameras surrounding the subject. The cameras are fired sequentially, or all at the same time, depending on the desired effect. Single frames from each camera are then arranged and displayed consecutively to produce an orbiting viewpoint of an action frozen in time or as hyper-slow-motion. This technique suggests the limitless perspectives and variable frame rates possible with a virtual camera. However, if the still array process is done with real cameras, it is often limited to assigned paths.
In The Matrix, the camera path was pre-designed using computer-generated visualizations as a guide. Cameras were arranged, behind a green or blue screen, on a track and aligned through a laser targeting system, forming a complex curve through space. The cameras were then triggered at extremely close intervals, so the action continued to unfold, in extreme slow-motion, while the viewpoint moved. Additionally, the individual frames were scanned for computer processing. Using sophisticated interpolation software, extra frames could be inserted to slow down the action further and improve the fluidity of the movement (especially the frame rate of the images); frames could also be dropped to speed up the action. This approach provides greater flexibility than a purely photographic one. The same effect can also be simulated using pure CGI, motion capture and other approaches.
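A minimal Python sketch of the frame-insertion idea described above, assuming the stills from the camera rig are already loaded as same-sized arrays; a plain linear cross-fade stands in here for the far more sophisticated interpolation software used in production.

import numpy as np

def insert_interpolated_frames(frames, extra_per_gap=3):
    # frames: list of H x W x 3 arrays, one per still camera along the rig.
    # Inserting blended frames between neighbours slows the apparent camera move.
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        for t in np.linspace(0, 1, extra_per_gap + 2)[1:-1]:
            out.append(((1 - t) * a + t * b).astype(a.dtype))
    out.append(frames[-1])
    return out

# Hypothetical tiny example: five cameras' frames become a longer, slower sequence.
frames = [np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8) for _ in range(5)]
print(len(insert_interpolated_frames(frames)))  # 5 + 4 * 3 = 17 frames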
Bullet time evolved further through The Matrix series with the introduction of high-definition computer-generated approaches such as virtual cinematography and universal capture. Universal capture, a machine-vision-guided system, was the first motion-picture deployment of an array of high-definition cameras focused on a common human subject (the actor playing Neo) in order to create volumetric photography. As with the concept of bullet time, the subject could be viewed from any angle, yet the depth-based media could also be recomposed and spatially integrated within computer-generated constructs. It moved past the visual concept of a virtual camera to become an actual virtual camera. Virtual elements within the Matrix trilogy utilized state-of-the-art image-based computer rendering techniques pioneered in Paul Debevec's 1997 film The Campanile and custom-evolved for The Matrix by George Borshukov, an early collaborator of Debevec. Inspiration aside, virtual camera methodologies pioneered within the Matrix trilogy have often been credited as fundamentally contributing to the capture approaches required for emergent virtual reality and other immersive experience platforms.
For many years, it has been possible to use computer vision techniques to capture scenes and render images of novel viewpoints sufficient for bullet time type effects. More recently, these have been formalized into what is becoming known as free viewpoint television (FTV). At the time of The Matrix, FTV was not a fully mature technology. FTV is effectively the live action version of bullet time, without the slow motion.
See also
Time-lapse photography
References
Special effects
Theatrical combat
Visual effects
Slow motion
The Matrix (franchise)
1999 neologisms | Bullet time | [
"Physics"
] | 2,221 | [
"Spacetime",
"Slow motion",
"Physical quantities",
"Time"
] |
234,625 | https://en.wikipedia.org/wiki/Fenamidone | Fenamidone is a foliar fungicide used on grapes, ornamentals, potatoes, tobacco, and vegetables such as tomatoes. It exerts its fungicidal effects by acting as a Qo inhibitor.
References
Fungicides | Fenamidone | [
"Chemistry",
"Biology"
] | 51 | [
"Fungicides",
"Organic compounds",
"Biocides",
"Organic compound stubs",
"Organic chemistry stubs"
] |
234,654 | https://en.wikipedia.org/wiki/Collimated%20beam | A collimated beam of light or other electromagnetic radiation has parallel rays, and therefore will spread minimally as it propagates. A laser beam is an archetypical example. A perfectly collimated light beam, with no divergence, would not disperse with distance. However, diffraction prevents the creation of any such beam.
Light can be approximately collimated by a number of processes, for instance by means of a collimator. Perfectly collimated light is sometimes said to be focused at infinity. Thus, as the distance from a point source increases, the spherical wavefronts become flatter and closer to plane waves, which are perfectly collimated.
Other forms of electromagnetic radiation can also be collimated. In radiology, X-rays are collimated to reduce the volume of the patient's tissue that is irradiated, and to remove stray photons that reduce the quality of the x-ray image ("film fog"). In scintigraphy, a gamma ray collimator is used in front of a detector to allow only photons perpendicular to the surface to be detected.
The term collimated may also be applied to particle beams – a collimated particle beam – where shielding blocks of high-density materials (such as lead or bismuth alloys), often arranged as a sequence of absorbing collimators, are used to absorb or block particles that stray from the desired forward direction. This method of particle collimation is routinely deployed and is ubiquitous in every particle accelerator complex in the world. A less well studied additional method for producing the same forward-collimation effect is strategic nuclear polarization (magnetic polarization of nuclei), provided the requisite reactions are designed into a given experimental application.
Etymology
The word "collimate" comes from the Latin verb collimare, which originated in a misreading of collineare, "to direct in a straight line".
Sources
Lasers
Laser light from gas or crystal lasers is highly collimated because it is formed in an optical cavity between two parallel mirrors which constrain the light to a path perpendicular to the surfaces of the mirrors. In practice, gas lasers can use concave mirrors, flat mirrors, or a combination of both. The divergence of high-quality laser beams is commonly less than 1 milliradian (3.4 arcmin), and can be much less for large-diameter beams. Laser diodes emit less-collimated light due to their short cavity, and therefore higher collimation requires a collimating lens.
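For a rough sense of the scale of the divergence figures quoted above, the Python sketch below evaluates the diffraction-limited half-angle divergence of an ideal Gaussian beam, θ ≈ λ/(π·w0); the 633 nm wavelength and 0.5 mm waist radius are placeholder values for a typical helium-neon laser.

import math

def gaussian_half_angle_divergence(wavelength_m: float, waist_radius_m: float) -> float:
    # Diffraction-limited half-angle divergence of an ideal Gaussian beam, in radians.
    return wavelength_m / (math.pi * waist_radius_m)

theta = gaussian_half_angle_divergence(633e-9, 0.5e-3)
print(f"{theta * 1e3:.2f} mrad")  # about 0.40 mrad, i.e. well under 1 milliradian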
Synchrotron light
Synchrotron light is very well collimated. It is produced by bending relativistic electrons (i.e. those moving at relativistic speeds) around a circular track. When the electrons are at relativistic speeds, the resulting radiation is highly collimated, a result which does not occur at lower speeds.
Distant sources
The light from stars (other than the Sun) arrives at Earth precisely collimated, because stars are so far away they present no detectable angular size. However, due to refraction and turbulence in the Earth's atmosphere, starlight arrives slightly uncollimated at the ground with an apparent angular diameter of about 0.4 arcseconds. Direct rays of light from the Sun arrive at the Earth uncollimated by one-half degree, this being the angular diameter of the Sun as seen from Earth. During a solar eclipse, the Sun's light becomes increasingly collimated as the visible surface shrinks to a thin crescent and ultimately a small point, producing the phenomena of distinct shadows and shadow bands.
Lenses and mirrors
A perfect parabolic mirror will bring parallel rays to a focus at a single point. Conversely, a point source at the focus of a parabolic mirror will produce a beam of collimated light creating a collimator. Since the source needs to be small, such an optical system cannot produce much optical power. Spherical mirrors are easier to make than parabolic mirrors and they are often used to produce approximately collimated light. Many types of lenses can also produce collimated light from point-like sources.
Collimation and decollimation
"Collimation" refers to all the optical elements in an instrument being on their designed optical axis. It also refers to the process of adjusting an optical instrument so that all its elements are on that designed axis (in line and parallel). The unconditional aligning of binoculars is a 3-axis collimation, meaning both optical axis that provide stereoscopic vision are aligned parallel with the axis of the hinge used to select various interpupillary distance settings. With regards to a telescope, the term refers to the fact that the optical axis of each optical component should be centered and parallel, so that collimated light emerges from the eyepiece. Most amateur reflector telescopes need to be re-collimated every few years to maintain optimum performance. This can be done by simple visual methods such as looking down the optical assembly with no eyepiece to make sure the components are lined up, by using a Cheshire eyepiece, or with the assistance of a simple laser collimator or autocollimator. Collimation can also be tested using a shearing interferometer, which is often used to test laser collimation.
"Decollimation" is any mechanism or process which causes a beam with the minimum possible ray divergence to diverge or converge from parallelism. Decollimation may be deliberate for systems reasons, or may be caused by many factors, such as refractive index inhomogeneities, occlusions, scattering, deflection, diffraction, reflection, and refraction. Decollimation must be accounted for to fully treat many systems such as radio, radar, sonar, and optical communications.
See also
Autocollimation
Cross-cockpit collimated display
Schlieren photography
References
Bibliography
Pfister, J. & Kneedler, J.A. (s.d.). A guide to lasers in the OR.
Optics
Observational astronomy | Collimated beam | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,259 | [
"Applied and interdisciplinary physics",
"Optics",
"Observational astronomy",
" molecular",
"Atomic",
"Astronomical sub-disciplines",
" and optical physics"
] |
234,714 | https://en.wikipedia.org/wiki/Medical%20imaging | Medical imaging is the technique and process of imaging the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues (physiology). Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. Medical imaging also establishes a database of normal anatomy and physiology to make it possible to identify abnormalities. Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are usually considered part of pathology instead of medical imaging.
Measurement and recording techniques that are not primarily designed to produce images, such as electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and others, represent other technologies that produce data susceptible to representation as a parameter graph versus time or maps that contain data about the measurement locations. In a limited comparison, these technologies can be considered forms of medical imaging in another discipline of medical instrumentation.
As of 2010, 5 billion medical imaging studies had been conducted worldwide. Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States. Medical imaging equipment is manufactured using technology from the semiconductor industry, including CMOS integrated circuit chips, power semiconductor devices, sensors such as image sensors (particularly CMOS sensors) and biosensors, and processors such as microcontrollers, microprocessors, digital signal processors, media processors and system-on-chip devices. Annual shipments of medical imaging chips amount to 46 million units.
The term "noninvasive" is used to denote a procedure where no instrument is introduced into a patient's body, which is the case for most imaging techniques used.
Types
In the clinical context, "invisible light" medical imaging is generally equated to radiology or "clinical imaging". "Visible light" medical imaging involves digital video or still pictures that can be seen without special equipment. Dermatology and wound care are two modalities that use visible light imagery. Interpretation of medical images is generally undertaken by a physician specialising in radiology known as a radiologist; however, this may be undertaken by any healthcare professional who is trained and certified in radiological clinical evaluation. Increasingly interpretation is being undertaken by non-physicians, for example radiographers frequently train in interpretation as part of expanded practice. Diagnostic radiography designates the technical aspects of medical imaging and in particular the acquisition of medical images. The radiographer (also known as a radiologic technologist) is usually responsible for acquiring medical images of diagnostic quality; although other professionals may train in this area, notably some radiological interventions performed by radiologists are done so without a radiographer.
As a field of scientific investigation, medical imaging constitutes a sub-discipline of biomedical engineering, medical physics or medicine depending on the context: Research and development in the area of instrumentation, image acquisition (e.g., radiography), modeling and quantification are usually the preserve of biomedical engineering, medical physics, and computer science; Research into the application and interpretation of medical images is usually the preserve of radiology and the medical sub-discipline relevant to medical condition or area of medical science (neuroscience, cardiology, psychiatry, psychology, etc.) under investigation. Many of the techniques developed for medical imaging also have scientific and industrial applications.
Radiography
Two forms of radiographic images are in use in medical imaging: projection radiography and fluoroscopy, the latter being useful for catheter guidance. These 2D techniques are still in wide use despite the advance of 3D tomography, due to their low cost, high resolution, and, depending on the application, lower radiation dosages. This imaging modality uses a wide beam of X-rays for image acquisition and was the first imaging technique available in modern medicine.
Fluoroscopy produces real-time images of internal structures of the body in a similar fashion to radiography, but employs a constant input of X-rays, at a lower dose rate. Contrast media, such as barium, iodine, and air are used to visualize internal organs as they work. Fluoroscopy is also used in image-guided procedures when constant feedback during a procedure is required. An image receptor is required to convert the radiation into an image after it has passed through the area of interest. Early on, this was a fluorescing screen, which gave way to an Image Amplifier (IA) which was a large vacuum tube that had the receiving end coated with cesium iodide, and a mirror at the opposite end. Eventually the mirror was replaced with a TV camera.
Projectional radiographs, more commonly known as X-rays, are often used to determine the type and extent of a fracture as well as for detecting pathological changes in the lungs. With the use of radio-opaque contrast media, such as barium, they can also be used to visualize the structure of the stomach and intestines – this can help diagnose ulcers or certain types of colon cancer.
Magnetic resonance imaging
A magnetic resonance imaging instrument (MRI scanner), or "nuclear magnetic resonance (NMR) imaging" scanner as it was originally known, uses powerful magnets to polarize and excite hydrogen nuclei (i.e., single protons) of water molecules in human tissue, producing a detectable signal which is spatially encoded, resulting in images of the body. The MRI machine emits a radio frequency (RF) pulse at the resonant frequency of the hydrogen atoms on water molecules. Radio frequency antennas ("RF coils") send the pulse to the area of the body to be examined. The RF pulse is absorbed by protons, causing their direction with respect to the primary magnetic field to change. When the RF pulse is turned off, the protons "relax" back to alignment with the primary magnet and emit radio-waves in the process. This radio-frequency emission from the hydrogen-atoms on water is what is detected and reconstructed into an image. The resonant frequency of a spinning magnetic dipole (of which protons are one example) is called the Larmor frequency and is determined by the strength of the main magnetic field and the chemical environment of the nuclei of interest. MRI uses three electromagnetic fields: a very strong (typically 1.5 to 3 teslas) static magnetic field to polarize the hydrogen nuclei, called the primary field; gradient fields that can be modified to vary in space and time (on the order of 1 kHz) for spatial encoding, often simply called gradients; and a spatially homogeneous radio-frequency (RF) field for manipulation of the hydrogen nuclei to produce measurable signals, collected through an RF antenna.
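As a small numerical illustration of the Larmor relationship described above, f = (γ/2π)·B0, the Python sketch below uses the approximate proton value γ/2π ≈ 42.58 MHz per tesla:

PROTON_GAMMA_BAR_MHZ_PER_T = 42.58  # gamma / (2*pi) for hydrogen-1, approximate

def larmor_frequency_mhz(field_tesla: float) -> float:
    # Resonant (Larmor) frequency of protons in a static field of the given strength.
    return PROTON_GAMMA_BAR_MHZ_PER_T * field_tesla

for b0 in (1.5, 3.0):
    print(f"{b0} T -> {larmor_frequency_mhz(b0):.1f} MHz")  # about 63.9 and 127.7 MHz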
Like CT, MRI traditionally creates a two-dimensional image of a thin "slice" of the body and is therefore considered a tomographic imaging technique. Modern MRI instruments are capable of producing images in the form of 3D blocks, which may be considered a generalization of the single-slice, tomographic, concept. Unlike CT, MRI does not involve the use of ionizing radiation and is therefore not associated with the same health hazards. For example, because MRI has only been in use since the early 1980s, there are no known long-term effects of exposure to strong static fields (this is the subject of some debate; see 'Safety' in MRI) and therefore there is no limit to the number of scans to which an individual can be subjected, in contrast with X-ray and CT. However, there are well-identified health risks associated with tissue heating from exposure to the RF field and the presence of implanted devices in the body, such as pacemakers. These risks are strictly controlled as part of the design of the instrument and the scanning protocols used.
Because CT and MRI are sensitive to different tissue properties, the appearances of the images obtained with the two techniques differ markedly. In CT, X-rays must be blocked by some form of dense tissue to create an image, so the image quality when looking at soft tissues will be poor. In MRI, while any nucleus with a net nuclear spin can be used, the proton of the hydrogen atom remains the most widely used, especially in the clinical setting, because it is so ubiquitous and returns a large signal. This nucleus, present in water molecules, allows the excellent soft-tissue contrast achievable with MRI.
A number of different pulse sequences can be used for specific MRI diagnostic imaging (multiparametric MRI or mpMRI). It is possible to differentiate tissue characteristics by combining two or more of the following imaging sequences, depending on the information being sought: T1-weighted (T1-MRI), T2-weighted (T2-MRI), diffusion weighted imaging (DWI-MRI), dynamic contrast enhancement (DCE-MRI), and spectroscopy (MRI-S). For example, imaging of prostate tumors is better accomplished using T2-MRI and DWI-MRI than T2-weighted imaging alone. The number of applications of mpMRI for detecting disease in various organs continues to expand, including liver studies, breast tumors, pancreatic tumors, and assessing the effects of vascular disruption agents on cancer tumors.
Nuclear medicine
Nuclear medicine encompasses both diagnostic imaging and treatment of disease, and may also be referred to as molecular medicine or molecular imaging and therapeutics. Nuclear medicine uses certain properties of isotopes and the energetic particles emitted from radioactive material to diagnose or treat various pathologies. Different from the typical concept of anatomic radiology, nuclear medicine enables assessment of physiology. This function-based approach to medical evaluation has useful applications in most subspecialties, notably oncology, neurology, and cardiology. Gamma cameras and PET scanners are used in e.g. scintigraphy, SPECT and PET to detect regions of biologic activity that may be associated with a disease. A relatively short-lived isotope, such as 99mTc, is administered to the patient. Isotopes are often preferentially absorbed by biologically active tissue in the body, and can be used to identify tumors or fracture points in bone. Images are acquired after collimated photons are detected by a crystal that gives off a light signal, which is in turn amplified and converted into count data.
Scintigraphy ("scint") is a form of diagnostic test wherein radioisotopes are taken internally, for example, intravenously or orally. Then, gamma cameras capture and form two-dimensional images from the radiation emitted by the radiopharmaceuticals.
SPECT is a 3D tomographic technique that uses gamma camera data from many projections and can be reconstructed in different planes. A dual detector head gamma camera combined with a CT scanner, which provides localization of functional SPECT data, is termed a SPECT-CT camera, and has shown utility in advancing the field of molecular imaging. In most other medical imaging modalities, energy is passed through the body and the reaction or result is read by detectors. In SPECT imaging, the patient is injected with a radioisotope, most commonly thallium-201 (201Tl), technetium-99m (99mTc), iodine-123 (123I), or gallium-67 (67Ga). The radioactive gamma rays are emitted from the body as the natural decay process of these isotopes takes place. The emissions of the gamma rays are captured by detectors that surround the body. This essentially means that the patient is now the source of the radioactivity, rather than the medical imaging device, as with X-ray or CT.
Positron emission tomography (PET) uses coincidence detection to image functional processes. A short-lived positron-emitting isotope, such as 18F, is incorporated into an organic substance such as glucose, creating F18-fluorodeoxyglucose, which can be used as a marker of metabolic utilization. Images of activity distribution throughout the body can show rapidly growing tissue, such as a tumor, metastasis, or infection. PET images can be viewed in comparison to computed tomography scans to determine an anatomic correlate. Modern scanners may combine PET with CT (PET-CT) or MRI (PET-MRI) to optimize the image reconstruction involved with positron imaging. This is performed on the same equipment without physically moving the patient off of the gantry. The resultant hybrid of functional and anatomic imaging information is a useful tool in non-invasive diagnosis and patient management.
Fiducial markers are used in a wide range of medical imaging applications. Images of the same subject produced with two different imaging systems may be correlated (called image registration) by placing a fiducial marker in the area imaged by both systems. In this case, a marker which is visible in the images produced by both imaging modalities must be used. By this method, functional information from SPECT or positron emission tomography can be related to anatomical information provided by magnetic resonance imaging (MRI). Similarly, fiducial points established during MRI can be correlated with brain images generated by magnetoencephalography to localize the source of brain activity.
Ultrasound
Medical ultrasound uses high frequency broadband sound waves in the megahertz range that are reflected by tissue to varying degrees to produce (up to 3D) images. This is commonly associated with imaging the fetus in pregnant women. Uses of ultrasound are much broader, however. Other important uses include imaging the abdominal organs, heart, breast, muscles, tendons, arteries and veins. While it may provide less anatomical detail than techniques such as CT or MRI, it has several advantages which make it ideal in numerous situations, in particular that it studies the function of moving structures in real-time, emits no ionizing radiation, and contains speckle that can be used in elastography. Ultrasound is also used as a popular research tool for capturing raw data, which can be made available through an ultrasound research interface, for the purpose of tissue characterization and implementation of new image processing techniques. The concepts of ultrasound differ from other medical imaging modalities in the fact that it is operated by the transmission and receipt of sound waves. The high frequency sound waves are sent into the tissue and, depending on the composition of the different tissues, the signal will be attenuated and returned at separate intervals. A path of reflected sound waves in a multilayered structure can be defined by an input acoustic impedance (ultrasound sound wave) and the reflection and transmission coefficients of the relative structures. It is very safe to use and does not appear to cause any adverse effects. It is also relatively inexpensive and quick to perform. Ultrasound scanners can be taken to critically ill patients in intensive care units, avoiding the danger caused while moving the patient to the radiology department. The real-time moving image obtained can be used to guide drainage and biopsy procedures. Doppler capabilities on modern scanners allow the blood flow in arteries and veins to be assessed.
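As a small numerical illustration of how acoustic impedance governs reflection at a tissue boundary: at normal incidence the fraction of the incident intensity that is reflected is ((Z2 − Z1)/(Z2 + Z1))^2. The impedance figures in this Python sketch are rounded textbook values used only for illustration.

def intensity_reflection_coefficient(z1: float, z2: float) -> float:
    # Fraction of incident intensity reflected at a flat boundary, normal incidence.
    return ((z2 - z1) / (z2 + z1)) ** 2

# Approximate acoustic impedances in MRayl (10^6 kg m^-2 s^-1).
soft_tissue, bone, air = 1.6, 7.8, 0.0004
print(f"soft tissue / bone boundary: {intensity_reflection_coefficient(soft_tissue, bone):.2f}")
print(f"soft tissue / air boundary:  {intensity_reflection_coefficient(soft_tissue, air):.2f}")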
Elastography
Elastography is a relatively new imaging modality that maps the elastic properties of soft tissue. This modality emerged in the last two decades. Elastography is useful in medical diagnoses, as elasticity can discern healthy from unhealthy tissue for specific organs/growths. For example, cancerous tumours will often be harder than the surrounding tissue, and diseased livers are stiffer than healthy ones. There are several elastographic techniques based on the use of ultrasound, magnetic resonance imaging and tactile imaging. The wide clinical use of ultrasound elastography is a result of the implementation of technology in clinical ultrasound machines. Main branches of ultrasound elastography include Quasistatic Elastography/Strain Imaging, Shear Wave Elasticity Imaging (SWEI), Acoustic Radiation Force Impulse imaging (ARFI), Supersonic Shear Imaging (SSI), and Transient Elastography. In the last decade, a steady increase of activities in the field of elastography is observed demonstrating successful application of the technology in various areas of medical diagnostics and treatment monitoring.
Photoacoustic imaging
Photoacoustic imaging is a recently developed hybrid biomedical imaging modality based on the photoacoustic effect. It combines the advantages of optical absorption contrast with an ultrasonic spatial resolution for deep imaging in (optical) diffusive or quasi-diffusive regime. Recent studies have shown that photoacoustic imaging can be used in vivo for tumor angiogenesis monitoring, blood oxygenation mapping, functional brain imaging, and skin melanoma detection, etc.
Tomography
Tomography is the imaging by sections or sectioning. The main such methods in medical imaging are:
X-ray computed tomography (CT), or Computed Axial Tomography (CAT) scan, is a helical tomography technique (latest generation), which traditionally produces a 2D image of the structures in a thin section of the body. In CT, a beam of X-rays spins around an object being examined and is picked up by sensitive radiation detectors after having penetrated the object from multiple angles. A computer then analyses the information received from the scanner's detectors and constructs a detailed image of the object and its contents using the mathematical principles laid out in the Radon transform. It has a greater ionizing radiation dose burden than projection radiography; repeated scans must be limited to avoid health effects. CT is based on the same principles as X-ray projections, but in this case the patient is enclosed in a surrounding ring fitted with 500–1000 scintillation detectors (fourth-generation X-ray CT scanner geometry). In older generation scanners, the X-ray beam was paired with a translating source and detector. Computed tomography has almost completely replaced focal plane tomography in X-ray tomography imaging. (A small code sketch of the projection and reconstruction step follows this list.)
Positron emission tomography (PET) also used in conjunction with computed tomography, PET-CT, and magnetic resonance imaging PET-MRI.
Magnetic resonance imaging (MRI) commonly produces tomographic images of cross-sections of the body. (See separate MRI section in this article.)
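The projection and reconstruction step described for CT above can be sketched with the Radon transform utilities in scikit-image, assuming that library is available; a standard test phantom stands in for the scanned slice, and the simulated projections stand in for detector readings.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                     # stand-in for the scanned slice
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=angles)             # simulated projections, one column per angle
reconstruction = iradon(sinogram, theta=angles)   # filtered back-projection
print(image.shape, sinogram.shape, reconstruction.shape)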
Echocardiography
When ultrasound is used to image the heart it is referred to as an echocardiogram. Echocardiography allows detailed structures of the heart, including chamber size, heart function, the valves of the heart, as well as the pericardium (the sac around the heart) to be seen. Echocardiography uses 2D, 3D, and Doppler imaging to create pictures of the heart and visualize the blood flowing through each of the four heart valves. Echocardiography is widely used in an array of patients ranging from those experiencing symptoms, such as shortness of breath or chest pain, to those undergoing cancer treatments. Transthoracic ultrasound has been proven to be safe for patients of all ages, from infants to the elderly, without risk of harmful side effects or radiation, differentiating it from other imaging modalities. Echocardiography is one of the most commonly used imaging modalities in the world due to its portability and use in a variety of applications. In emergency situations, echocardiography is quick, easily accessible, and able to be performed at the bedside, making it the modality of choice for many physicians.
Functional near-infrared spectroscopy
FNIR is a relatively new non-invasive imaging technique. NIRS (near infrared spectroscopy) is used for the purpose of functional neuroimaging and has been widely accepted as a brain imaging technique.
Magnetic particle imaging
Magnetic particle imaging (MPI) is a developing diagnostic imaging technique used for tracking superparamagnetic iron oxide nanoparticles. Its primary advantages are high sensitivity and specificity, along with the lack of signal decrease with tissue depth. MPI has been used in medical research to image cardiovascular performance, neuroperfusion, and cell tracking.
In pregnancy
Medical imaging may be indicated in pregnancy because of pregnancy complications, a pre-existing disease or an acquired disease in pregnancy, or routine prenatal care. Magnetic resonance imaging (MRI) without MRI contrast agents, as well as obstetric ultrasonography, is not associated with any risk for the mother or the fetus, and these are the imaging techniques of choice for pregnant women. Projectional radiography, CT scans and nuclear medicine imaging result in some degree of ionizing radiation exposure, but, with a few exceptions, involve much lower absorbed doses than those associated with fetal harm. At higher dosages, effects can include miscarriage, birth defects and intellectual disability.
Maximizing imaging procedure use
The amount of data obtained in a single MR or CT scan is very extensive. Some of the data that radiologists discard could save patients time and money, while reducing their exposure to radiation and risk of complications from invasive procedures. Another approach for making the procedures more efficient is based on utilizing additional constraints, e.g., in some medical imaging modalities one can improve the efficiency of the data acquisition by taking into account the fact that the reconstructed density is positive.
Creation of three-dimensional images
Volume rendering techniques have been developed to enable CT, MRI and ultrasound scanning software to produce 3D images for the physician. Traditionally CT and MRI scans produced 2D static output on film. To produce 3D images, many scans are made and then combined by computers to produce a 3D model, which can then be manipulated by the physician. 3D ultrasounds are produced using a somewhat similar technique.
In diagnosing diseases of the viscera of the abdomen, ultrasound is particularly sensitive for imaging of the biliary tract, the urinary tract and the female reproductive organs (ovaries, fallopian tubes). For example, gallstones can be diagnosed from dilatation of the common bile duct and the presence of a stone in the common bile duct.
With the ability to visualize important structures in great detail, 3D visualization methods are a valuable resource for the diagnosis and surgical treatment of many pathologies. It was a key resource for the famous, but ultimately unsuccessful attempt by Singaporean surgeons to separate Iranian twins Ladan and Laleh Bijani in 2003. The 3D equipment was used previously for similar operations with great success.
Other proposed or developed techniques include:
Diffuse optical tomography
Elastography
Electrical impedance tomography
Optoacoustic imaging
Ophthalmology
A-scan
B-scan
Corneal topography
Optical coherence tomography
Scanning laser ophthalmoscopy
Some of these techniques are still at a research stage and not yet used in clinical routines.
Non-diagnostic imaging
Neuroimaging has also been used in experimental circumstances to allow people (especially disabled persons) to control outside devices, acting as a brain computer interface.
Many medical imaging software applications are used for non-diagnostic imaging, specifically because they do not have FDA approval and are not allowed to be used in clinical research for patient diagnosis. Note that many clinical research studies are not designed for patient diagnosis anyway.
Archiving and recording
Capturing the image produced by a medical imaging device, primarily in ultrasound imaging, is required for archiving and telemedicine applications. In most scenarios, a frame grabber is used in order to capture the video signal from the medical device and relay it to a computer for further processing and operations.
DICOM
The Digital Imaging and Communication in Medicine (DICOM) Standard is used globally to store, exchange, and transmit medical images. The DICOM Standard incorporates protocols for imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and radiation therapy.
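As an illustration, a DICOM object can be read and inspected with the pydicom library, assuming it is installed; the file path below is a placeholder rather than a real study.

import pydicom

ds = pydicom.dcmread("example.dcm")              # placeholder path to a DICOM file
print(ds.get("Modality"), ds.get("PatientName")) # acquisition modality and (possibly anonymised) patient name
pixels = ds.pixel_array                          # decoded image data as a NumPy array
print(pixels.shape, pixels.dtype)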
Compression of medical images
Medical imaging techniques produce very large amounts of data, especially from CT, MRI and PET modalities. As a result, storage and communications of electronic image data are prohibitive without the use of compression. JPEG 2000 image compression is used by the DICOM standard for storage and transmission of medical images. The cost and feasibility of accessing large image data sets over low or various bandwidths are further addressed by use of another DICOM standard, called JPIP, to enable efficient streaming of the JPEG 2000 compressed image data.
Medical imaging in the cloud
There has been growing trend to migrate from on-premise PACS to a cloud-based PACS. A recent article by Applied Radiology said, "As the digital-imaging realm is embraced across the healthcare enterprise, the swift transition from terabytes to petabytes of data has put radiology on the brink of information overload. Cloud computing offers the imaging department of the future the tools to manage data much more intelligently."
Use in pharmaceutical clinical trials
Medical imaging has become a major tool in clinical trials since it enables rapid diagnosis with visualization and quantitative assessment.
A typical clinical trial goes through multiple phases and can take up to eight years. Clinical endpoints or outcomes are used to determine whether the therapy is safe and effective. Once a patient reaches the endpoint, he or she is generally excluded from further experimental interaction. Trials that rely solely on clinical endpoints are very costly as they have long durations and tend to need large numbers of patients.
In contrast to clinical endpoints, surrogate endpoints have been shown to cut down the time required to confirm whether a drug has clinical benefits. Imaging biomarkers (a characteristic that is objectively measured by an imaging technique, which is used as an indicator of pharmacological response to a therapy) and surrogate endpoints have been shown to facilitate the use of small group sizes, obtaining quick results with good statistical power.
Imaging is able to reveal subtle change that is indicative of the progression of therapy that may be missed out by more subjective, traditional approaches. Statistical bias is reduced as the findings are evaluated without any direct patient contact.
Imaging techniques such as positron emission tomography (PET) and magnetic resonance imaging (MRI) are routinely used in oncology and neuroscience areas. For example, measurement of tumour shrinkage is a commonly used surrogate endpoint in solid tumour response evaluation. This allows for faster and more objective assessment of the effects of anticancer drugs. In Alzheimer's disease, MRI scans of the entire brain can accurately assess the rate of hippocampal atrophy, while PET scans can measure the brain's metabolic activity by measuring regional glucose metabolism, and beta-amyloid plaques using tracers such as Pittsburgh compound B (PiB). Historically less use has been made of quantitative medical imaging in other areas of drug development although interest is growing.
An imaging-based trial will usually be made up of three components:
A realistic imaging protocol. The protocol is an outline that standardizes (as far as practically possible) the way in which the images are acquired using the various modalities (PET, SPECT, CT, MRI). It covers the specifics in which images are to be stored, processed and evaluated.
An imaging centre that is responsible for collecting the images, performing quality control and providing tools for data storage, distribution and analysis. It is important that images acquired at different time points are displayed in a standardised format to maintain the reliability of the evaluation. Certain specialised imaging contract research organizations provide end-to-end medical imaging services, from protocol design and site management through to data quality assurance and image analysis.
Clinical sites that recruit patients to generate the images to send back to the imaging centre.
Risks and safety issues
Medical imaging can lead to patient and healthcare provider harm through exposure to ionizing radiation, iodinated contrast, magnetic fields, and other hazards.
Lead is the main material used for radiographic shielding against scattered X-rays.
In magnetic resonance imaging, there is MRI RF shielding as well as magnetic shielding to prevent external disturbance of image quality.
Privacy protection
Medical images are generally covered by laws of medical privacy. For example, in the United States the Health Insurance Portability and Accountability Act (HIPAA) sets restrictions for health care providers on utilizing protected health information, which is any individually identifiable information relating to the past, present, or future physical or mental health of any individual. While there has not been any definitive legal decision in the matter, at least one study has indicated that medical imaging may contain biometric information that can uniquely identify a person, and so may qualify as PHI.
The UK General Medical Council's ethical guidelines indicate that the Council does not require consent prior to making recordings of X-ray images. However, the same guidance indicates that the images and recordings need to be anonymised, and acknowledges that, in deciding whether a recording is anonymised, one should bear in mind that apparently insignificant details may still be capable of identifying a patient. As such, one should be particularly careful about the anonymity of recordings of an X-ray image before using or publishing them without consent in journals and other learning materials, whether they are printed or in an electronic format.
Industry
Organizations in the medical imaging industry include manufacturers of imaging equipment, freestanding radiology facilities, and hospitals.
The global market for manufactured devices was estimated at $5 billion in 2018. Notable manufacturers as of 2012 included Fujifilm, GE HealthCare, Siemens Healthineers, Philips, Shimadzu, Toshiba, Carestream Health, Hitachi, Hologic, and Esaote. In 2016, the manufacturing industry was characterized as oligopolistic and mature; new entrants included Samsung and Neusoft Medical.
In the United States, an estimate as of 2015 places the US market for imaging scans at about $100 billion, with 60% occurring in hospitals and 40% occurring in freestanding clinics, such as the RadNet chain.
Copyright
United States
As per chapter 300 of the Compendium of U.S. Copyright Office Practices, "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author" including "Medical imaging produced by X-rays, ultrasounds, magnetic resonance imaging, or other diagnostic equipment." This position differs from the broad copyright protections afforded to photographs. While the Copyright Compendium is an agency statutory interpretation and not legally binding, courts are likely to give deference to it if they find it reasonable. Yet, there is no U.S. federal case law directly addressing the issue of the copyrightability of X-ray images.
Derivatives
An extensive definition of the term derivative work is given by the United States Copyright Act in :
A "derivative work" is a work based upon one or more preexisting works, such as a translation... art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted. A work consisting of editorial revisions, annotations, elaborations, or other modifications which, as a whole, represent an original work of authorship, is a "derivative work".
The Act further provides:
The copyright in a compilation or derivative work extends only to the material contributed by the author of such work, as distinguished from the preexisting material employed in the work, and does not imply any exclusive right in the preexisting material. The copyright in such work is independent of, and does not affect or enlarge the scope, duration, ownership, or subsistence of, any copyright protection in the preexisting material.
Germany
In Germany, X-ray images as well as MRI, medical ultrasound, PET and scintigraphy images are protected by (copyright-like) related rights or neighbouring rights. This protection does not require creativity (as would be necessary for regular copyright protection) and lasts only for 50 years after image creation, if not published within 50 years, or for 50 years after the first legitimate publication. The letter of the law grants this right to the "Lichtbildner", i.e. the person who created the image. The literature seems to uniformly consider the medical doctor, dentist or veterinary physician as the rights holder, which may result from the circumstance that in Germany many X-rays are performed in ambulatory settings.
United Kingdom
Medical images created in the United Kingdom will normally be protected by copyright due to "the high level of skill, labour and judgement required to produce a good quality X-ray, particularly to show contrast between bones and various soft tissues". The Society of Radiographers believe this copyright is owned by the employer (unless the radiographer is self-employed—though even then their contract might require them to transfer ownership to the hospital). This copyright owner can grant certain permissions to whoever they wish, without giving up their ownership of the copyright. So the hospital and its employees will be given permission to use such radiographic images for the various purposes that they require for medical care. Physicians employed at the hospital will, in their contracts, be given the right to publish patient information in journal papers or books they write (providing they are made anonymous). Patients may also be granted permission to "do what they like with" their own images.
Sweden
The Cyber Law in Sweden states: "Pictures can be protected as photographic works or as photographic pictures. The former requires a higher level of originality; the latter protects all types of photographs, also the ones taken by amateurs, or within medicine or science. The protection requires some sort of photographic technique being used, which includes digital cameras as well as holograms created by laser technique. The difference between the two types of work is the term of protection, which amounts to seventy years after the death of the author of a photographic work as opposed to fifty years, from the year in which the photographic picture was taken."
Medical imaging may possibly be included in the scope of "photography", similarly to a U.S. statement that "MRI images, CT scans, and the like are analogous to photography."
See also
Biological imaging
Medical image sharing
Radiologists Without Borders
Confocal endoscopy
Explanatory notes
References
Further reading
External links
Image processing
Medical physics
Nuclear medicine | Medical imaging | [
"Physics"
] | 6,901 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
234,806 | https://en.wikipedia.org/wiki/Diazepam |
Diazepam, sold under the brand name Valium among others, is a medicine of the benzodiazepine family that acts as an anxiolytic. It is used to treat a range of conditions, including anxiety, seizures, alcohol withdrawal syndrome, muscle spasms, insomnia, and restless legs syndrome. It may also be used to cause memory loss during certain medical procedures. It can be taken orally (by mouth), as a suppository inserted into the rectum, intramuscularly (injected into muscle), intravenously (injection into a vein) or used as a nasal spray. When injected intravenously, effects begin in one to five minutes and last up to an hour. When taken by mouth, effects begin after 15 to 60 minutes.
Common side effects include sleepiness and trouble with coordination. Serious side effects are rare. They include increased risk of suicide, decreased breathing, and an increased risk of seizures if used too frequently in those with epilepsy. Occasionally, excitement or agitation may occur. Long-term use can result in tolerance, dependence, and withdrawal symptoms on dose reduction. Abrupt stopping after long-term use can be potentially dangerous. After stopping, cognitive problems may persist for six months or longer. It is not recommended during pregnancy or breastfeeding. Its mechanism of action works by increasing the effect of the neurotransmitter gamma-aminobutyric acid (GABA).
Diazepam was patented in 1959 by Hoffmann-La Roche. It has been one of the most frequently prescribed medications in the world since its launch in 1963. In the United States it was the best-selling medication between 1968 and 1982, selling more than 2 billion tablets in 1978 alone. In 2022, it was the 169th most commonly prescribed medication in the United States, with more than 3 million prescriptions. In 1985, the patent ended, and there are more than 500 brands available on the market. It is on the World Health Organization's List of Essential Medicines.
Structure, physical and chemical properties
Diazepam does not possess any chiral centers in its structure, but it exists as two conformers, designated the 'P'-conformer and the 'M'-conformer. In solution diazepam is an equimolar mixture of the two, and CD spectra in serum protein solutions have shown that the 'P'-conformer is preferred for α1-acid glycoprotein binding.
The drug diazepam occurs as a pale yellow-white crystalline powder without a distinctive smell and has a low molecular weight (MW = 284.74 g/mol). This classic aryl 1,4-benzodiazepine possesses three hydrogen bond acceptors and no hydrogen bond donors. Diazepam is moderately lipophilic, with a LogP (octanol–water partition coefficient) value of 2.82, and has a TPSA (topological polar surface area) value of 32.7 Ų. The LogP value indicates that diazepam tends to dissolve more readily in lipid-based environments, such as chloroform, acetone, ethanol and ether, than in water. The TPSA value, which represents the collective surface area of polar atoms such as oxygen and nitrogen along with their attached hydrogen atoms, implies that a segment of the molecule exhibits some degree of polarity or hydrophilicity. A TPSA value of 32.7 Ų signifies a moderate level of polarity within the compound. TPSA is especially useful in medicinal chemistry as it indicates the ability of a molecule to permeate cells; molecules with a PSA value smaller than 60–70 Ų generally permeate cells more readily. The balance between its lipophilic and hydrophilic characteristics can affect various aspects of the molecule's behavior, including its solubility, absorption, distribution, metabolism, and potential interactions within the biological system.
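The descriptors quoted above can be reproduced computationally. The following minimal sketch (not part of any pharmacopoeia; it assumes the open-source RDKit toolkit is installed and that the SMILES string shown, taken from public chemical databases, correctly encodes diazepam) estimates the molecular weight, LogP, TPSA and hydrogen-bond counts:

    # Sketch only: descriptor values depend on the software's parameterization and
    # should land close to, but not necessarily exactly on, the figures quoted above.
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    diazepam = Chem.MolFromSmiles("CN1C(=O)CN=C(c2ccccc2)c2cc(Cl)ccc21")  # assumed SMILES

    print(Descriptors.MolWt(diazepam))          # molecular weight, about 284.7 g/mol
    print(Descriptors.MolLogP(diazepam))        # Crippen LogP estimate (lipophilicity)
    print(Descriptors.TPSA(diazepam))           # topological polar surface area, in Å^2
    print(Descriptors.NumHAcceptors(diazepam))  # hydrogen-bond acceptors
    print(Descriptors.NumHDonors(diazepam))     # hydrogen-bond donors

Because such descriptors are computed from the 2D structure alone, different software packages may report slightly different LogP values than the 2.82 given above.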
Diazepam is overall a stable molecule. The British Pharmacopoeia lists it as being very slightly soluble in water, soluble in alcohol, and freely soluble in chloroform. The United States Pharmacopoeia lists diazepam as soluble 1 in 16 of ethyl alcohol, 1 in 2 of chloroform, 1 in 39 of ether, and practically insoluble in water. The pH of diazepam is neutral (i.e., pH = 7), although the injectable form contains additives such as benzoic acid/benzoate. Diazepam has a shelf life of five years for oral tablets and three years for IV/IM solutions.
Diazepam is stored at room temperature (15–30 °C). The solution for parenteral injection is kept so that it is protected from light and kept from freezing. The oral forms are stored in air-tight containers and protected from light.
Diazepam can be absorbed into plastics, so liquid preparations are not kept in plastic bottles or syringes, etc. As such, it can leach into the plastic bags and tubing used for intravenous infusions. Absorption appears to depend on several factors, such as temperature, concentration, flow rates, and tube length. Diazepam should not be administered if a precipitate has formed and does not dissolve.
Medical uses
Diazepam is mainly used to treat anxiety, insomnia, panic attacks, and symptoms of acute alcohol withdrawal. It is also used as a premedication for inducing sedation, anxiolysis, or amnesia before certain medical procedures (e.g., endoscopy). In 2020, it was approved for use in the United States as a nasal spray to interrupt seizure activity in people with epilepsy. Diazepam is the most commonly used benzodiazepine for "tapering" benzodiazepine dependence due to the drug's comparatively long half-life, allowing for more efficient dose reduction. Benzodiazepines have a relatively low toxicity in overdose.
Diazepam has several uses, including:
Treatment of anxiety, panic attacks, and states of agitation
Treatment of neurovegetative symptoms associated with vertigo
Treatment of the symptoms of alcohol, opiate, and benzodiazepine withdrawal
Short-term treatment of insomnia
Treatment of muscle spasms
Treatment of tetanus, together with other measures of intensive treatment
Adjunctive treatment of spastic muscular paresis (paraplegia/tetraplegia) caused by cerebral or spinal cord conditions such as stroke, multiple sclerosis, or spinal cord injury (long-term treatment is coupled with other rehabilitative measures)
Palliative treatment of stiff person syndrome
Pre- or postoperative sedation, anxiolysis or amnesia (e.g., before endoscopic or surgical procedures)
Treatment of complications with stimulant overdoses and psychosis, such as cocaine or methamphetamine
Used in the treatment of organophosphate poisoning and reduces the risk of seizure-induced brain and cardiac damage.
Preventive treatment of oxygen toxicity during hyperbaric oxygen therapy
Dosages are typically determined on an individual basis, depending on the condition being treated, severity of symptoms, patient body weight, and any other conditions the person may have.
Seizures
Intravenous diazepam or lorazepam are first-line treatments for status epilepticus. However, intravenous lorazepam has advantages over intravenous diazepam, including a higher rate of terminating seizures and a more prolonged anticonvulsant effect. Diazepam gel was better than placebo gel in reducing the risk of non-cessation of seizures. Diazepam is rarely used for the long-term treatment of epilepsy because tolerance to its anticonvulsant effects usually develops within six to twelve months of treatment, effectively rendering it useless for that purpose.
The anticonvulsant effects of diazepam can help in the treatment of seizures due to a drug overdose or chemical toxicity as a result of exposure to sarin, VX, or soman (or other organophosphate poisons), lindane, chloroquine, physostigmine, or pyrethroids.
Diazepam is sometimes used intermittently for the prevention of febrile seizures that may occur in children under five years of age. Recurrence rates are reduced, but side effects are common and the decision to treat febrile seizures (which are benign in nature) with medication uses these factors as part of the evaluation. Long-term use of diazepam for the management of epilepsy is not recommended; however, a subgroup of individuals with treatment-resistant epilepsy benefit from long-term benzodiazepines, and for such individuals, clorazepate has been recommended due to its slower onset of tolerance to the anticonvulsant effects.
Alcohol withdrawal
Because of its relatively long duration of action, and evidence of safety and efficacy, diazepam is preferred over other benzodiazepines for the treatment of persons experiencing moderate to severe alcohol withdrawal. An exception is when a medication must be given intramuscularly, in which case either lorazepam or midazolam is recommended.
Other
Diazepam is used for the emergency treatment of eclampsia when IV magnesium sulfate and blood-pressure control measures have failed. Benzodiazepines do not have any pain-relieving properties themselves and are generally recommended to be avoided in individuals with pain. However, benzodiazepines such as diazepam can be used for their muscle-relaxant properties to alleviate pain caused by muscle spasms and various dystonias, including blepharospasm. Tolerance often develops to the muscle relaxant effects of benzodiazepines such as diazepam. Baclofen is sometimes used as an alternative to diazepam.
Availability
Diazepam is marketed in over 500 brands throughout the world. It is supplied in oral, injectable, inhalation, and rectal forms.
The United States military employs a specialized diazepam preparation known as Convulsive Antidote, Nerve Agent (CANA). One CANA kit is typically issued to service members, along with three Mark I NAAK kits, when operating in circumstances where chemical weapons in the form of nerve agents are considered a potential hazard. Both of these kits deliver drugs using autoinjectors. They are intended for use in "buddy aid" or "self-aid" administration of the drugs in the field before decontamination and delivery of the patient to definitive medical care.
Contraindications
Use of diazepam is avoided, when possible, in individuals with:
Ataxia
Severe hypoventilation
Acute narrow-angle glaucoma
Severe hepatic deficiencies (hepatitis and liver cirrhosis decrease elimination by a factor of two)
Severe renal deficiencies (for example, patients on dialysis)
Liver disorders
Severe sleep apnea
Severe depression, particularly when accompanied by suicidal tendencies
Psychosis
Pregnancy or breast feeding
Caution required in elderly or debilitated patients
Coma or shock
Abrupt discontinuation of therapy
Acute intoxication with alcohol, narcotics, or other psychoactive substances (with the exception of hallucinogens or some stimulants, where it is occasionally used as a treatment for overdose)
History of alcohol or drug dependence
Myasthenia gravis, an autoimmune disorder causing marked fatiguability
Hypersensitivity or allergy to any drug in the benzodiazepine class
Abuse and special populations
Benzodiazepine abuse and misuse is guarded against when prescribed to those with alcohol or drug dependencies or who have psychiatric disorders.
Pediatric patients
For patients less than 18 years of age, this treatment is usually not indicated, except for the treatment of epilepsy and pre- or postoperative treatment. The smallest possible effective dose is typically used for this group of patients.
Under 6 months of age, safety and effectiveness have not been established; diazepam is not given to those in this age group.
Elderly and very ill patients can experience apnea or cardiac arrest. Concomitant use of other central nervous system depressants increases this risk. The smallest possible effective dose is generally used for this group of people. The elderly metabolise benzodiazepines much more slowly than younger adults, and are also more sensitive to the effects of benzodiazepines, even at similar blood plasma levels. Doses of diazepam are recommended to be about half of those given to younger people, and treatment is limited to a maximum of two weeks. Long-acting benzodiazepines such as diazepam are not recommended for the elderly. Diazepam can also be dangerous in geriatric patients owing to a significantly increased risk of falls.
Intravenous or intramuscular injections in hypotensive people or those in shock is administered carefully and vital signs are closely monitored.
Benzodiazepines such as diazepam are lipophilic and rapidly penetrate membranes, so they rapidly cross the placenta with significant uptake of the drug. Use of benzodiazepines including diazepam in late pregnancy, especially at high doses, can result in floppy infant syndrome. Diazepam taken late in pregnancy, during the third trimester, carries a definite risk of a severe benzodiazepine withdrawal syndrome in the neonate, with symptoms ranging from hypotonia and reluctance to suck to apnoeic spells, cyanosis, and impaired metabolic responses to cold stress. Floppy infant syndrome and sedation in the newborn may also occur. Symptoms of floppy infant syndrome and the neonatal benzodiazepine withdrawal syndrome have been reported to persist from hours to months after birth.
Adverse effects
Benzodiazepines, such as diazepam, can cause anterograde amnesia, confusion, and sedation. The elderly are more prone to diazepam's confusion, amnesia, ataxia, hangover symptoms, and falls. Long-term use of benzodiazepines, such as diazepam, induces tolerance, dependency, and withdrawal syndrome. Like other benzodiazepines, diazepam impairs short-term memory and learning new information. Diazepam and other benzodiazepines can produce anterograde amnesia, but not retrograde amnesia, which means information learned before using benzodiazepines is not impaired. Short-term benzodiazepine use does not lead to tolerance, and the elderly are more sensitive to them. Additionally, after stopping benzodiazepines, cognitive problems may last at least six months; it is unclear if these problems last for longer than six months or are permanent. Benzodiazepines may also cause or worsen depression. Infusions or repeated intravenous injections of diazepam when managing seizures, for example, may lead to drug toxicity, including respiratory depression, sedation, and hypotension. Drug tolerance may also develop to infusions of diazepam if it is given for longer than 24 hours. Sedatives and sleeping pills, including diazepam, have been associated with an increased risk of death.
In September 2020, the U.S. Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class.
Diazepam has a range of side effects common to most benzodiazepines, including:
Suppression of REM sleep and slow wave sleep
Impaired motor function
Impaired coordination
Impaired balance
Dizziness
Reflex tachycardia
Less commonly, paradoxical reactions can occur, including nervousness, irritability, excitement, worsening of seizures, insomnia, muscle cramps, changes in libido, and in some cases, rage and violence. These adverse reactions are more likely to occur in children, the elderly, and individuals with a history of a substance use disorder, such as an alcohol use disorder, or a history of aggressive behavior. In some people, diazepam may increase the propensity toward self-harming behavior and, in extreme cases, may provoke suicidal tendencies or acts. Very rarely dystonia can occur.
Diazepam may impair the ability to drive vehicles or operate machinery. The impairment is worsened by the consumption of alcohol because both act as central nervous system depressants.
During therapy, tolerance to the sedative effects usually develops, but not to the anxiolytic and myorelaxant effects.
Patients with severe attacks of apnea during sleep may experience respiratory depression (hypoventilation), leading to respiratory arrest and death.
Diazepam in doses of or more causes significant deterioration in alertness performance combined with increased feelings of sleepiness.
Tolerance and withdrawal
Diazepam, as with other benzodiazepine drugs, can cause tolerance, physical dependence, substance use disorder, and benzodiazepine withdrawal syndrome. Withdrawal from diazepam or other benzodiazepines often leads to withdrawal symptoms similar to those seen during barbiturate or alcohol withdrawal. The higher the dose and the longer the drug is taken, the greater the risk of experiencing unpleasant withdrawal symptoms.
Withdrawal symptoms can occur from standard dosages and also after short-term use, and can range from insomnia and anxiety to more serious symptoms, including seizures and psychosis. Withdrawal symptoms can sometimes resemble pre-existing conditions and be misdiagnosed. Diazepam may produce less intense withdrawal symptoms due to its long elimination half-life.
Benzodiazepine treatment is recommended to be discontinued as soon as possible by a slow and gradual dose reduction regimen. Tolerance develops to the therapeutic effects of benzodiazepines; for example, tolerance occurs to the anticonvulsant effects and as a result benzodiazepines are not generally recommended for the long-term management of epilepsy. Dose increases may overcome the effects of tolerance, but tolerance may then develop to the higher dose and adverse effects may increase. The mechanism of tolerance to benzodiazepines includes uncoupling of receptor sites, alterations in gene expression, down-regulation of receptor sites, and desensitisation of receptor sites to the effect of GABA. About one-third of individuals who take benzodiazepines for longer than four weeks become dependent and experience withdrawal syndrome on cessation.
Differences in rates of withdrawal (50–100%) vary depending on the patient sample. For example, a random sample of long-term benzodiazepine users typically finds around 50% experience few or no withdrawal symptoms, with the other 50% experiencing notable withdrawal symptoms. Certain select patient groups show a higher rate of notable withdrawal symptoms, up to 100%.
Rebound anxiety, more severe than baseline anxiety, is also a common withdrawal symptom when discontinuing diazepam or other benzodiazepines. Diazepam is therefore only recommended for short-term therapy at the lowest possible dose owing to risks of severe withdrawal problems from low doses even after gradual reduction. The risk of pharmacological dependence on diazepam is significant, and in humans tolerance to the anticonvulsant effects of diazepam occurs frequently.
Dependence
Improper or excessive use of diazepam can lead to dependence. At a particularly high risk for diazepam misuse, substance use disorder or dependence are:
People with a history of a substance use disorder or substance dependence. Diazepam increases craving for alcohol in problem alcohol consumers.
People with severe personality disorders, such as borderline personality disorder
Patients from the aforementioned groups are monitored very closely during therapy for signs of abuse and development of dependence. Therapy is recommended to be discontinued if any of these signs are noted. If dependence has developed, therapy is still discontinued gradually to avoid severe withdrawal symptoms. Long-term therapy in such instances is not recommended.
People suspected of being dependent on benzodiazepine drugs are very gradually tapered off the drug. Withdrawals can be life-threatening, particularly when excessive doses have been taken for extended periods. Therefore, equal prudence is used whether dependence has occurred in therapeutic or recreational contexts.
Diazepam is seen as a good choice for tapering for those using high doses of other benzodiazepines since it has a long half-life thus withdrawal symptoms are tolerable. The process is very slow (usually from 14 to 28 weeks) but is considered safe when done appropriately.
Overdose
An individual who has consumed too much diazepam typically displays one or more of these symptoms within approximately four hours of a suspected overdose:
Drowsiness
Mental confusion
Hypotension
Impaired motor function
Impaired reflexes
Impaired coordination
Impaired balance
Dizziness
Coma
Although not usually fatal when taken alone, a diazepam overdose is considered a medical emergency and generally requires the immediate attention of medical personnel. The antidote for an overdose of diazepam (or any other benzodiazepine) is flumazenil (Anexate). This drug is only used in cases with severe respiratory depression or cardiovascular complications. Because flumazenil is a short-acting drug, and the effects of diazepam can last for days, several doses of flumazenil may be necessary. Artificial respiration and stabilization of cardiovascular functions may also be necessary. Though not routinely indicated, activated charcoal can be used for decontamination of the stomach following a diazepam overdose. Emesis is contraindicated. Dialysis is minimally effective. Hypotension may be treated with levarterenol or metaraminol.
The oral LD50 (lethal dose in 50% of the population) of diazepam is in mice and in rats. D. J. Greenblatt and colleagues reported in 1978 on two patients who had taken and of diazepam, respectively, went into moderately deep comas, and were discharged within 48 hours without having experienced any important complications, despite having high concentrations of diazepam and its metabolites desmethyldiazepam, oxazepam, and temazepam, according to samples taken in the hospital and as follow-up.
Overdoses of diazepam with alcohol, opiates, or other depressants may be fatal.
Interactions
If diazepam is administered concomitantly with other drugs, it is recommended that attention be paid to the possible pharmacological interactions. Particular care is taken with drugs that potentiate the effects of diazepam, such as barbiturates, phenothiazines, opioids, and antidepressants.
Diazepam does not increase or decrease hepatic enzyme activity and does not alter the metabolism of other compounds. No evidence has suggested that diazepam alters its metabolism with chronic administration.
Agents with an effect on hepatic cytochrome P450 pathways or conjugation can alter the rate of diazepam metabolism. These interactions would be expected to be most significant with long-term diazepam therapy, and their clinical significance is variable.
Diazepam increases the central depressive effects of alcohol, other hypnotics/sedatives (e.g., barbiturates), other muscle relaxants, certain antidepressants, sedative antihistamines, opioids, and antipsychotics, as well as anticonvulsants such as phenobarbital, phenytoin, and carbamazepine. The euphoriant effects of opioids may be increased, leading to an increased risk of psychological dependence.
Cimetidine, omeprazole, oxcarbazepine, ticlopidine, topiramate, ketoconazole, itraconazole, disulfiram, fluvoxamine, isoniazid, erythromycin, probenecid, propranolol, imipramine, ciprofloxacin, fluoxetine, and valproic acid prolong the action of diazepam by inhibiting its elimination.
Alcohol in combination with diazepam may cause a synergistic enhancement of the hypotensive properties of benzodiazepines and alcohol.
Oral contraceptives significantly decrease the elimination of desmethyldiazepam, a major metabolite of diazepam.
Rifampin, phenytoin, carbamazepine, and phenobarbital increase the metabolism of diazepam, thus decreasing drug levels and effects. Dexamethasone and St John's wort also increase the metabolism of diazepam.
Diazepam increases the serum levels of phenobarbital.
Nefazodone can cause increased blood levels of benzodiazepines.
Cisapride may enhance the absorption, and therefore the sedative activity, of diazepam.
Small doses of theophylline may inhibit the action of diazepam.
Diazepam may block the action of levodopa (used in the treatment of Parkinson's disease).
Diazepam may alter digoxin serum concentrations.
Other drugs that may have interactions with diazepam include antipsychotics (e.g. chlorpromazine), MAO inhibitors, and ranitidine.
Because it acts on the GABA receptor, the herb valerian may produce an adverse effect.
Foods that acidify the urine can lead to faster absorption and elimination of diazepam, reducing drug levels and activity.
Foods that alkalinize the urine can lead to slower absorption and elimination of diazepam, increasing drug levels and activity.
Reports conflict as to whether food in general has any effects on the absorption and activity of orally administered diazepam.
Pharmacology
Diazepam is a long-acting "classical" benzodiazepine. Other classical benzodiazepines include chlordiazepoxide, clonazepam, lorazepam, oxazepam, nitrazepam, temazepam, flurazepam, bromazepam, and clorazepate. Diazepam has anticonvulsant properties. Benzodiazepines act via micromolar benzodiazepine binding sites as calcium channel blockers and significantly inhibit depolarization-sensitive calcium uptake in rat nerve cell preparations.
Diazepam inhibits acetylcholine release in mouse hippocampal synaptosomes. This has been found by measuring sodium-dependent high-affinity choline uptake in mouse brain cells in vitro, after pretreatment of the mice with diazepam in vivo. This may play a role in explaining diazepam's anticonvulsant properties.
Diazepam binds with high affinity to glial cells in animal cell cultures. Diazepam at high doses has been found to decrease histamine turnover in mouse brain via diazepam's action at the benzodiazepine-GABA receptor complex. Diazepam also decreases prolactin release in rats.
Mechanism of action
Benzodiazepines are positive allosteric modulators of the GABA type A receptors (GABAA). The GABAA receptors are ligand-gated chloride-selective ion channels that are activated by GABA, the major inhibitory neurotransmitter in the brain. The binding of benzodiazepines to this receptor complex promotes the binding of GABA, which in turn increases the total conduction of chloride ions across the neuronal cell membrane. This increased chloride ion influx hyperpolarizes the neuron's membrane potential. As a result, the difference between resting potential and threshold potential is increased and firing is less likely. As a result, the arousal of the cortical and limbic systems in the central nervous system is reduced.
The GABAA receptor is a heteromer composed of five subunits, the most common ones being two αs, two βs, and one γ (α2β2γ). For each subunit, many subtypes exist (α1–6, β1–3, and γ1–3). GABAA receptors containing the α1 subunit mediate the sedative, the anterograde amnesic, and partly the anticonvulsive effects of diazepam. GABAA receptors containing α2 mediate the anxiolytic actions and to a large degree the myorelaxant effects. GABAA receptors containing α3 and α5 also contribute to benzodiazepines myorelaxant actions, whereas GABAA receptors comprising the α5 subunit were shown to modulate the temporal and spatial memory effects of benzodiazepines. Diazepam is not the only drug to target these GABAA receptors. Drugs such as flumazenil also bind to GABAA to induce their effects.
Diazepam appears to act on areas of the limbic system, thalamus, and hypothalamus, inducing anxiolytic effects. Benzodiazepine drugs including diazepam increase the inhibitory processes in the cerebral cortex.
The anticonvulsant properties of diazepam and other benzodiazepines may be in part or entirely due to binding to voltage-dependent sodium channels rather than GABAA receptors. Sustained repetitive firing seems limited by benzodiazepines' effect of slowing recovery of sodium channels from inactivation.
The muscle relaxant properties of diazepam are produced via inhibition of polysynaptic pathways in the spinal cord.
Pharmacokinetics
Diazepam can be administered orally, intravenously (it is always diluted, as it is painful and damaging to veins), intramuscularly (IM), or as a suppository.
The onset of action is one to five minutes for IV administration and 15–30 minutes for IM administration. The duration of diazepam's peak pharmacological effects is 15 minutes to one hour for both routes of administration. The half-life of diazepam, in general, is 30–56 hours. Peak plasma levels occur between 30 and 90 minutes after oral administration and between 30 and 60 minutes after intramuscular administration; after rectal administration, peak plasma levels occur after 10 to 45 minutes. Diazepam is highly plasma protein-bound, with 96–99% of the absorbed drug being protein-bound. The distribution half-life of diazepam is two to 13 minutes.
Diazepam is highly lipid-soluble and is widely distributed throughout the body after administration. It easily crosses both the blood–brain barrier and the placenta, and is excreted into breast milk. After absorption, diazepam is redistributed into muscle and adipose tissue. Continual daily doses of diazepam quickly build to a high concentration in the body (mainly in adipose tissue), far above the actual dose for any given day.
Diazepam is stored preferentially in some organs, including the heart. Absorption by any administered route and the risk of accumulation is significantly increased in the neonate, and withdrawal of diazepam during pregnancy and breastfeeding is clinically justified.
Diazepam undergoes oxidative metabolism by demethylation (CYP2C9, 2C19, 2B6, 3A4, and 3A5), hydroxylation (CYP3A4 and 2C19) and glucuronidation in the liver as part of the cytochrome P450 enzyme system. It has several pharmacologically active metabolites. The main active metabolite of diazepam is desmethyldiazepam (also known as nordazepam or nordiazepam). Its other active metabolites include the minor active metabolites temazepam and oxazepam. These metabolites are conjugated with glucuronide and are excreted primarily in the urine. Because of these active metabolites, the serum values of diazepam alone are not useful in predicting the effects of the drug. Diazepam has a biphasic half-life of about one to three days and two to seven days for the active metabolite desmethyldiazepam. Most of the drug is metabolized; very little diazepam is excreted unchanged. The elimination half-life of diazepam and also the active metabolite desmethyldiazepam increases significantly in the elderly, which may result in prolonged action, as well as accumulation of the drug during repeated administration.
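The accumulation of diazepam and desmethyldiazepam during repeated dosing follows from standard first-order pharmacokinetics. As a rough illustration (a sketch under simplifying one-compartment assumptions, not dosing guidance), the steady-state accumulation ratio for a fixed dosing interval τ is R = 1/(1 − e^(−kτ)), where k = ln 2 / t½ is the elimination rate constant:

    # Illustrative only: once-daily dosing (tau = 24 h) with the parent-drug half-lives
    # quoted above (30-56 h) and a longer, assumed value within the range reported for
    # the active metabolite desmethyldiazepam.
    import math

    def accumulation_ratio(half_life_h, dosing_interval_h=24.0):
        k = math.log(2) / half_life_h                         # first-order elimination rate constant
        return 1.0 / (1.0 - math.exp(-k * dosing_interval_h))

    print(round(accumulation_ratio(30), 1))   # about 2.3 times a single dose at steady state
    print(round(accumulation_ratio(56), 1))   # about 3.9 times
    print(round(accumulation_ratio(100), 1))  # about 6.5 times

This one-compartment estimate ignores redistribution into adipose tissue and the contribution of active metabolites, so it is only a qualitative guide to why daily doses build up well above the amount taken on any single day.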
Synthesis
The synthesis of Diazepam was first achieved through a reaction pathway developed by Leo Sternbach and his team at Hoffmann-La Roche in the late 1950s.
Sternbach's method commenced with 2-amino-5-chlorobenzophenone, which undergoes cyclocondensation with glycine ethyl ester hydrochloride to construct the benzodiazepine core. This core is subsequently alkylated at the nitrogen in the 1-position using dimethyl sulfate in the presence of sodium methoxide and methanol under reflux conditions. Although the direct transformation from 2-amino-5-chlorobenzophenone to nordazepam is conceptually straightforward, an alternative approach involving treatment of 2-amino-5-chlorobenzophenone with chloroacetyl chloride, followed by treatment with ammonia and heating, yields nordazepam in higher yield and facilitates easier purification.
Detection in body fluids
Diazepam may be quantified in blood or plasma to confirm a diagnosis of poisoning in hospitalized patients, provide evidence in an impaired driving arrest, or to assist in a medicolegal death investigation. Blood or plasma diazepam concentrations are usually in a range of in persons receiving the drug therapeutically. Most commercial immunoassays for the benzodiazepine class of drugs cross-react with diazepam, but confirmation and quantitation are usually performed using chromatographic techniques.
Environmental
Diazepam is a common environmental contamination finding near human settlements.
History
Diazepam was the second benzodiazepine invented by Leo Sternbach of Hoffmann-La Roche at the company's Nutley, New Jersey, facility following chlordiazepoxide (Librium), which was approved for use in 1960. Released in 1963 as an improved version of Librium, diazepam became incredibly popular, helping Roche to become a pharmaceutical industry giant. It is 2.5 times more potent than its predecessor, which it quickly surpassed in terms of sales. After this initial success, other pharmaceutical companies began to introduce other benzodiazepine derivatives.
The benzodiazepines gained popularity among medical professionals as an improvement over barbiturates, which have a comparatively narrow therapeutic index, and are far more sedative at therapeutic doses. The benzodiazepines are also far less dangerous; death rarely results from diazepam overdose, except in cases where it is consumed with large amounts of other depressants (such as alcohol or opioids). Benzodiazepine drugs such as diazepam initially had widespread public support, but with time the view changed to one of growing criticism and calls for restrictions on their prescription.
Marketed by Roche using an advertising campaign conceived by the William Douglas McAdams Agency under the leadership of Arthur Sackler, diazepam was the top-selling pharmaceutical in the United States from 1969 to 1982, with peak annual sales in 1978 of 2.3 billion tablets. Diazepam, along with oxazepam, nitrazepam and temazepam, represents 82% of the benzodiazepine market in Australia. While psychiatrists continue to prescribe diazepam for the short-term relief of anxiety, neurology has taken the lead in prescribing diazepam for the palliative treatment of certain types of epilepsy and spastic activity, for example, forms of paresis. It is also the first line of defense for a rare disorder called stiff-person syndrome.
Society and culture
Recreational use
Diazepam is a medication with a high risk of misuse and can cause drug dependence. Urgent action by national governments has been recommended to improve prescribing patterns of benzodiazepines such as diazepam. A single dose of diazepam modulates the dopamine system in similar ways to how morphine and alcohol modulate the dopaminergic pathways.
Between 50 and 64% of rats will self-administer diazepam.
Diazepam can substitute for the behavioral effects of barbiturates in a primate study.
Diazepam has been found as an adulterant in heroin.
Diazepam drug misuse can occur either through recreational misuse where the drug is taken to achieve a high or when the drug is continued long term against medical advice.
Sometimes, it is used by stimulant users to "come down" and sleep and to help control the urge to binge. These users often escalate the dosage to between 2 and 25 times the therapeutic dose.
A large-scale study in the US, conducted by SAMHSA, using data from 2011, determined benzodiazepines were present in 28.7% of emergency department visits involving nonmedical use of pharmaceuticals. In this regard, benzodiazepines are second only to opiates, the study found in 39.2% of visits. About 29.3% of drug-related suicide attempts involve benzodiazepines, making them the most frequently represented class in drug-related suicide attempts. Males misuse benzodiazepines as commonly as females.
Diazepam was detected in 26% of cases of people suspected of driving under the influence of drugs in Sweden and its active metabolite nordazepam was detected in 28% of cases. Other benzodiazepines, zolpidem, and zopiclone also were found in high numbers. Many drivers had blood levels far exceeding the therapeutic dose range, suggesting a high degree of potential for misuse of benzodiazepines, zolpidem, and zopiclone. In Northern Ireland, in cases where drugs were detected in samples from impaired drivers who were not impaired by alcohol, benzodiazepines were found in 87% of cases. Diazepam was the most commonly detected benzodiazepine.
Legal status
Diazepam is regulated as a prescription medication:
International
Diazepam is a Schedule IV controlled drug under the Convention on Psychotropic Substances.
UK
Classified as a controlled drug, listed under Schedule IV, Part I (CD Benz POM) of the Misuse of Drugs Regulations 2001, allowing possession with a valid prescription. The Misuse of Drugs Act 1971 makes it illegal to possess the drug without a prescription, and for such purposes, it is classified as a Class C drug.
Germany
Classified as a prescription drug, or in high dosage as a restricted drug (Betäubungsmittelgesetz, Anlage III).
Australia
Diazepam is a Schedule 4 substance under the Poisons Standard (June 2018). A Schedule 4 drug is outlined in the Poisons Act 1964 as, "Substances, the use or supply of which should be by or on the order of persons permitted by State or Territory legislation to prescribe and should be available from a pharmacist on prescription".
United States
Diazepam is controlled as a Schedule IV substance.
Judicial executions
The states of California and Florida offer diazepam to condemned inmates as a pre-execution sedative as part of their lethal injection program, although the state of California has not executed a prisoner since 2006. In August 2018, Nebraska used diazepam as part of the drug combination used to execute Carey Dean Moore, the first death row inmate executed in Nebraska in over 21 years.
Veterinary uses
Diazepam is used as a short-term sedative and anxiolytic for cats and dogs, sometimes used as an appetite stimulant. It can also be used to stop seizures in dogs and cats.
References
Further reading
External links
21-Hydroxylase inhibitors
Anxiolytics
Benzodiazepines
Chemical substances for emergency medicine
Chloroarenes
Euphoriants
Drugs developed by Genentech
Drugs developed by Hoffmann-La Roche
Glycine receptor antagonists
Hallucinogen antidotes
Lactams
TSPO ligands
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Diazepam | [
"Chemistry"
] | 8,764 | [
"Chemicals in medicine",
"Chemical substances for emergency medicine"
] |
234,819 | https://en.wikipedia.org/wiki/Venipuncture | In medicine, venipuncture or venepuncture is the process of obtaining intravenous access for the purpose of venous blood sampling (also called phlebotomy) or intravenous therapy. In healthcare, this procedure is performed by medical laboratory scientists, medical practitioners, some EMTs, paramedics, phlebotomists, dialysis technicians, and other nursing staff. In veterinary medicine, the procedure is performed by veterinarians and veterinary technicians.
It is essential to follow a standard procedure for the collection of blood specimens to get accurate laboratory results. Any error in collecting the blood or filling the test tubes may lead to erroneous laboratory results.
Venipuncture is one of the most routinely performed invasive procedures and is carried out for any of five reasons:
to obtain blood for diagnostic purposes;
to monitor levels of blood components;
to administer therapeutic treatments including medications, nutrition, or chemotherapy;
to remove blood due to excess levels of iron or erythrocytes (red blood cells); or
to collect blood for later uses, mainly transfusion either in the donor or in another person.
Blood analysis is an important diagnostic tool available to clinicians within healthcare.
Blood is most commonly obtained from the superficial veins of the upper limb. The median cubital vein, which lies within the cubital fossa anterior to the elbow, is close to the surface of the skin without many large nerves positioned nearby. Other veins that can be used in the cubital fossa for venipuncture include the cephalic, basilic, and median antebrachial veins.
Minute quantities of blood may be taken by fingerstick sampling and collected from infants by means of a heelprick or from scalp veins with a winged infusion needle.
Phlebotomy (incision into a vein) is also the treatment of certain diseases such as hemochromatosis and primary and secondary polycythemia.
Complications
A 1996 study of blood donors (a larger needle is used in blood donation than in routine venipuncture) found that 1 in 6,300 donors sustained a nerve injury.
Risks and side effects can include dizziness, sweating, and a drop in heart rate and blood pressure.
Equipment
There are many ways in which blood can be drawn from a vein, and the method used depends on the person's age, the equipment available, and the type of tests required.
Most blood collection in the US, UK, Canada and Hong Kong is done with an evacuated tube system. Two common systems are Vacutainer (Becton, Dickinson and company) and Vacuette (Greiner Bio-One). The equipment consists of a plastic adapter, also known as a tube or needle holder/hub, a hypodermic needle and a vacuum tube. Under certain circumstances, a syringe may be used, often with a butterfly needle, which is a plastic catheter attached to a short needle. In the developing world, the evacuated tube system is the preferred method of drawing blood.
With evacuated or vacuum tubes
Greiner Bio-One manufactured the first ever plastic evacuated blood collection tube in 1985 under the VACUETTE brand name. Today, many companies sell vacuum tubes as the patent for this device is now in the public domain. These tubes are manufactured with a specific volume of gas removed from the sealed tube. When a needle from a hub or transfer device is inserted into the stopper, the tube's vacuum automatically pulls in the required volume of blood.
The basic Evacuated Tube System (ETS) consists of a needle, a tube holder, and the evacuated tubes. The needle is attached to the tube holder by the phlebotomist prior to collection, or may come from the manufacturer as one unit. The needle protrudes through the end of the tube holder and has a point on each end. After first cleaning the venipuncture site and applying a tourniquet, the phlebotomist uncaps the needle attached to the tube holder, inserts the needle into the vein, then slides evacuated tubes into the tube holder, where the tube's stopper is pierced by the back end of the needle. The vacuum in the tube then automatically draws the needed blood directly from the vein. Multiple vacuum tubes can be attached to and removed in turn from a single needle, allowing multiple samples to be obtained from a single procedure. This is possible due to the multiple sample sleeve, a flexible rubber fitting over the posterior end of the needle cannula which seals the needle until it is pushed out of the way. This keeps blood from freely draining out of the back of the needle inserted in the vein as each test tube is removed and the next impaled. OSHA safety regulations require that needles or tube holders come equipped with a safety device to cover the needle after the procedure to prevent accidental needle stick injury.
Fittings and adapters used to fill evacuated tubes from butterfly needle kits and syringes are also available.
There are several needle gauges for a phlebotomist to choose from. The most commonly used are as follows: a 21g (green top) needle, a 22g (black top) needle, a 21g (green label) butterfly needle, a 23g (light blue label) butterfly needle, and a 25g (orange or dark blue label) butterfly needle (however this needle is only used in pediatrics or extreme cases as it is so small that it can often result in hemolyzing the blood sample). There are also a variety of tube and bottle sizes and volumes for different test requirements.
Additives and order of draw
The test tubes in which blood is collected may contain one or more of several additives. In general, tests requiring whole blood call for blood samples collected in test tubes containing some form of the anticoagulant EDTA. EDTA chelates calcium to prevent clotting. EDTA is preferred for hematology tests because it does minimal damage to cell morphology. Sodium citrate is the anticoagulant used in specimens collected for coagulation tests. The majority of chemistry and immunology tests are performed on serum, which is produced by clotting and then separating the blood specimen via centrifuge. These specimens are collected in either a non-additive tube or one containing a clotting activator. This clotting activator can interfere with some assays, and so a plain tube is recommended in these cases, but will delay testing. Tubes containing lithium heparin or sodium heparin are also commonly used for a variety of chemistry tests, as they do not require clotting and can be centrifuged immediately after collection. A combination of sodium fluoride and potassium oxalate is used for glucose tests, as these additives both prevent clotting and stop glycolysis, so that blood glucose levels are preserved after collection. Another specialty tube is an opaque amber colored tube used to collect blood for light-sensitive analytes, such as bilirubin.
Test tubes are labeled with the additive they contain, but the stopper on each tube is also color-coded according to the additive. While colors vary between manufacturers, stopper colors generally are associated with each additive as listed below. Because additive from each tube can be left on the needle used to fill the tubes, the tubes must be drawn in a specific order to ensure that cross-contamination does not negatively affect testing of the samples if multiple tubes are drawn at once. The "order of draw" varies by collection method. For the Evacuated Tube System (ETS) collection method, the most common tubes are generally drawn in the following order, listed by additive and stopper color:
Blood culture bottles or tubes (sterile collections)
Light blue – sodium citrate (coagulation tests)
Red or gold – no additive or clot activator, with or without gel separator (serum tests)
Green – lithium or sodium heparin (plasma chemistry tests)
Lavender – EDTA (hematology tests)
Gray – sodium fluoride/potassium oxalate (glucose tests)
In children
Use of lidocaine iontophoresis is effective for reducing pain and alleviating distress during venipuncture in children. A needle-free powder lignocaine delivery system has been shown to decrease the pain of venipuncture in children. Rapid dermal anesthesia can be achieved by local anesthetic infiltration, but it may evoke anxiety in children frightened by needles or distort the skin, making vascular access more difficult and increasing the risk of needle exposure to health care workers. Dermal anesthesia can also be achieved without needles by the topical application of local anesthetics or by lidocaine iontophoresis. By contrast, noninvasive dermal anesthesia can be established in 5–15 min without distorting underlying tissues by lidocaine iontophoresis, where a direct electric current facilitates dermal penetration of positively charged lidocaine molecules when placed under the positive electrode.
One study concluded that the iontophoretic administration of lidocaine was safe and effective in providing dermal anesthesia for venipuncture in children 6–17 years old. This technique may not be applicable to all children. Future studies may provide information on the minimum effective iontophoretic dose for dermal anesthesia in children and the comparison of the anesthetic efficacy and satisfaction of lidocaine iontophoresis with topical anesthetic creams and subcutaneous infiltration.
Non-pharmacological treatments for pain associated with venipuncture in children include hypnosis and distraction. These treatments reduce self-reported pain, and when combined with cognitive-behavioural therapy (CBT) the reduction in pain is even greater. Other interventions, such as suggestion, blowing out air, and distraction with parent coaching, have not been found to be effective and did not differ from control for pain and distress.
With needle and syringe
Some health care workers prefer to use a syringe-needle technique for venipuncture. Sarstedt manufactures a blood-drawing system (S-Monovette) that uses this principle. This method can be preferred on the elderly, those with cancer, severe burns, obesity, or where the veins are unreliable or fragile. Because syringes are manually operated, the amount of suction applied may be easily controlled. This is particularly helpful when veins are small which may collapse under the suction of an evacuated tube. In children or other circumstances where the quantity of blood gained may be limited it can be helpful to know how much blood can be obtained before distributing it amongst the various additives that the laboratory will require. Another alternative is drawing blood from indwelling cannulae.
Blood cultures
There are times when a blood culture collection is required. The culture will determine if there are pathogens in the blood, which is normally sterile. When drawing blood for cultures, the skin is cleaned with a sterile solution such as Betadine rather than alcohol. This is done using sterile gloves, without wiping away the antiseptic solution, touching the puncture site, or in any way compromising the sterile process. It is vital that the procedure is performed in as sterile a manner as possible, because the persistent presence of skin commensals in blood cultures could indicate endocarditis, but such organisms are most often found as contaminants.
It is encouraged to use an abrasive method of skin preparation. This removes the upper layers of dead skin cells along with their contaminating bacteria. Povidone-iodine has traditionally been used but in the UK a 2% chlorhexidine in 70% ethanol or isopropyl alcohol solution is preferred and time must be allowed for it to dry. The tops of any containers used when drawing a blood culture should also be disinfected using a similar solution. Some labs will actively discourage iodine use where iodine is thought to degrade the rubber stopper through which blood enters the bottle, thus allowing contaminates to enter the container.
The blood is collected into special transport bottles, which are like vacuum tubes but shaped differently. The blood culture bottle contains transport media to preserve any microorganisms present while they are being transported to the laboratory for cultures. Because it is unknown whether the pathogens are anaerobic (living without oxygen) or aerobic (living with oxygen), blood is collected to test for both. The aerobic bottle is filled first, and then the anaerobic bottle is filled. However, if the collection is performed using a syringe, the anaerobic bottle is filled first. If a butterfly collection kit is used, the aerobic bottle is filled first, so that any air in the tubing is released into the oxygen-containing bottle.
Specially designed blood culture collection bottles eliminate the need for either the syringe or butterfly collection method. These specially designed bottles have long necks that fit into the evacuated tube holders that are used for regular venipuncture collection. These bottles also allow other blood specimens to be collected via evacuated tubes without an additional venipuncture.
The amount of blood that is collected is critical for the optimal recovery of microorganisms. Up to 10 mL of blood is typical, but this can vary according to the recommendations of the manufacturer of the collection bottle. Collections from infants and children are 1 to 5 mL. If too little blood is collected, the ratio of blood to nutrient broth will inhibit the growth of microorganisms. If too much blood is collected, there is the risk of a hospital-induced anemia, and the ratio of blood to nutrient broth will tilt in the opposite direction, which also is not conducive to optimal growth.
The bottles are then incubated in specialized units for 24 hours before a lab technician studies and/or tests it. This step allows the very small numbers of bacteria (potentially 1 or 2 organisms) to multiply to a level which is sufficient for identification +/-antibiotic resistance testing. Modern blood culture bottles have an indicator in the base which changes color in the presence of bacterial growth and can be read automatically by machine. (For this reason the barcoded stickers found on these bottles should not be removed as they are used by the laboratory's automated systems.)
Taking blood samples from animals
Blood samples from living laboratory animals may be collected using following methods:
Blood collection not requiring anesthesia:
Saphenous vein (rat, mice, guinea pig)
Dorsal pedal vein (rat, mice)
Blood collection requiring anesthesia (local/general anesthesia):
Tail vein (rat, mice)
Tail snip (mice)
Orbital sinus (rat, mice)
Jugular vein (rat, mice)
Temporary cannula (rat, mice)
Blood vessel cannulation (guinea pig, ferret)
Tarsal vein (guinea pig)
Marginal ear vein or artery (rabbit)
Terminal procedure:
Cardiac puncture (rat, mice, guinea pig, rabbit, ferret)
Orbital sinus (rat, mice)
Posterior vena cava (rat, mice)
The volume of the blood sample collection is very important in experimental animals. All nonterminal blood collection without replacement of fluids is limited to 10% of total circulating blood volume in healthy, normal, adult animals on a single occasion, and collection may be repeated after three to four weeks. If repeated blood samples are required at short intervals, a maximum of 0.6 ml/kg/day, or 1.0% of an animal's total blood volume, can be removed every 24 hours. The estimated blood volume in adult animals is 55 to 70 ml/kg body weight. Care should be taken with older and obese animals. If the blood collection volume exceeds 10% of total blood volume, fluid replacement may be required. Lactated Ringer's solution (LRS) is recommended as the best fluid replacement by the National Institutes of Health (NIH). If the volume of blood collection exceeds 30% of the total circulatory blood volume, adequate care should be taken so that the animal does not develop hypovolemia.
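As a worked illustration of these limits (a sketch only; the example body weights are assumptions, and institutional and veterinary guidance always takes precedence), the permissible draw volumes scale directly with the animal's estimated total blood volume:

    # Uses the figures quoted above: 55-70 ml/kg total blood volume (65 ml/kg taken here
    # as a midpoint), a 10% limit for a single draw, and 0.6 ml/kg/day for repeated draws.
    def blood_draw_limits(body_weight_kg, blood_volume_ml_per_kg=65.0):
        total_blood_ml = body_weight_kg * blood_volume_ml_per_kg
        single_draw_ml = 0.10 * total_blood_ml   # single occasion, repeatable after 3-4 weeks
        daily_repeat_ml = 0.6 * body_weight_kg   # repeated sampling limit per 24 hours
        return total_blood_ml, single_draw_ml, daily_repeat_ml

    # Example: a 25 g mouse and a 250 g rat (assumed body weights)
    for label, kg in [("mouse", 0.025), ("rat", 0.25)]:
        total, single, daily = blood_draw_limits(kg)
        print(f"{label}: ~{total:.1f} ml total blood, single draw <= {single:.2f} ml, "
              f"repeated draws <= {daily:.3f} ml per 24 h")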
Blood alcohol tests
It is generally not advisable to use isopropyl alcohol to cleanse the venipuncture site when obtaining a specimen for a blood alcohol test. This has been related largely to the potential legal implications associated with the use of alcohol-based cleaners that could theoretically impact analysis. Numerous police alcohol collection kits have been marketed that incorporate a sodium fluoride/potassium oxalate preservative and non-alcohol-based cleansing agents to ensure proper collection. Using soap and hot water or a povidone-iodine swab are advisable alternatives to isopropyl alcohol in this case.
See also
Arterial blood is taken from an artery instead of a vein
References
Articles containing video clips
Blood tests
Hematology
Phlebotomy | Venipuncture | [
"Chemistry"
] | 3,402 | [
"Blood tests",
"Chemical pathology"
] |
235,077 | https://en.wikipedia.org/wiki/Reverse%20transcription%20polymerase%20chain%20reaction | Reverse transcription polymerase chain reaction (RT-PCR) is a laboratory technique combining reverse transcription of RNA into DNA (in this context called complementary DNA or cDNA) and amplification of specific DNA targets using polymerase chain reaction (PCR). It is primarily used to measure the amount of a specific RNA. This is achieved by monitoring the amplification reaction using fluorescence, a technique called real-time PCR or quantitative PCR (qPCR). Confusion can arise because some authors use the acronym RT-PCR to denote real-time PCR. In this article, RT-PCR will denote Reverse Transcription PCR. Combined RT-PCR and qPCR are routinely used for analysis of gene expression and quantification of viral RNA in research and clinical settings.
The close association between RT-PCR and qPCR has led to metonymic use of the term qPCR to mean RT-PCR. Such use may be confusing, as RT-PCR can be used without qPCR, for example to enable molecular cloning, sequencing or simple detection of RNA. Conversely, qPCR may be used without RT-PCR, for example, to quantify the copy number of a specific piece of DNA.
Nomenclature
The combined RT-PCR and qPCR technique has been described as quantitative RT-PCR or real-time RT-PCR (sometimes even called quantitative real-time RT-PCR) and has been variously abbreviated as qRT-PCR, RT-qPCR, RRT-PCR, and rRT-PCR. In order to avoid confusion, the following abbreviations will be used consistently throughout this article:
Not all authors, especially earlier ones, use this convention and the reader should be cautious when following links. RT-PCR has been used to indicate both real-time PCR (qPCR) and reverse transcription PCR (RT-PCR).
History
Since its introduction in 1977, Northern blot has been used extensively for RNA quantification despite its shortcomings: (a) it is a time-consuming technique, (b) it requires a large quantity of RNA for detection, and (c) it is quantitatively inaccurate when RNA is in low abundance. However, since PCR was invented by Kary Mullis in 1983, RT-PCR has displaced Northern blot as the method of choice for RNA detection and quantification.
RT-PCR has risen to become the benchmark technology for the detection and/or comparison of RNA levels for several reasons: (a) it does not require post-PCR processing, (b) a wide range (>10^7-fold) of RNA abundance can be measured, and (c) it provides insight into both qualitative and quantitative data. Due to its simplicity, specificity and sensitivity, RT-PCR is used in a wide range of applications from experiments as simple as quantification of yeast cells in wine to more complex uses as diagnostic tools for detecting infectious agents such as the avian flu virus and SARS-CoV-2.
Principles
In RT-PCR, the RNA template is first converted into a complementary DNA (cDNA) using a reverse transcriptase (RT). The cDNA is then used as a template for exponential amplification using PCR. The use of RT-PCR for the detection of RNA transcript has revolutionized the study of gene expression in the following important ways:
Made it theoretically possible to detect the transcripts of practically any gene
Enabled sample amplification and eliminated the need for abundant starting material required when using northern blot analysis
Provided tolerance for RNA degradation as long as the RNA spanning the primer is intact
One-step RT-PCR vs two-step RT-PCR
The quantification of mRNA using RT-PCR can be achieved as either a one-step or a two-step reaction. The difference between the two approaches lies in the number of tubes used when performing the procedure. The two-step reaction requires that the reverse transcriptase reaction and PCR amplification be performed in separate tubes. The disadvantage of the two-step approach is susceptibility to contamination due to more frequent sample handling. On the other hand, the entire reaction from cDNA synthesis to PCR amplification occurs in a single tube in the one-step approach. The one-step approach is thought to minimize experimental variation by containing all of the enzymatic reactions in a single environment. It eliminates the step of pipetting cDNA product into the PCR reaction, which is labor-intensive and prone to contamination. The further use of inhibitor-tolerant thermostable DNA polymerases and polymerase enhancers with an optimized one-step RT-PCR condition supports the reverse transcription of RNA from unpurified or crude samples, such as whole blood and serum. However, the starting RNA templates are prone to degradation in the one-step approach, and the use of this approach is not recommended when repeated assays from the same sample are required. Additionally, the one-step approach is reported to be less accurate than the two-step approach. It is also the preferred method of analysis when using DNA-binding dyes such as SYBR Green, since the elimination of primer-dimers can be achieved through a simple change in the melting temperature. Nevertheless, the one-step approach is a relatively convenient solution for the rapid detection of target RNA directly in biosensing.
End-point RT-PCR vs real-time RT-PCR
Quantification of RT-PCR products can largely be divided into two categories: end-point and real-time. The use of end-point RT-PCR is preferred for measuring gene expression changes in a small number of samples, but real-time RT-PCR has become the gold standard method for validating quantitative results obtained from array analyses or gene expression changes on a global scale.
End-point RT-PCR
The measurement approaches of end-point RT-PCR require the detection of gene expression levels by the use of fluorescent dyes like ethidium bromide, 32P labeling of PCR products using a phosphorimager, or scintillation counting. End-point RT-PCR is commonly achieved using three different methods: relative, competitive and comparative.
Relative RT-PCR Relative quantification by RT-PCR involves the co-amplification of an internal control simultaneously with the gene of interest. The internal control is used to normalize the samples. Once normalized, a direct comparison of relative transcript abundances across multiple samples of mRNA can be made. One precaution to note is that the internal control must be chosen so that it is not affected by the experimental treatment. The expression level should be constant across all samples and with the mRNA of interest for the results to be accurate and meaningful. Because the quantification of the results is performed by comparing the linear range of the target and control amplification, it is crucial to take into consideration the starting concentrations of the target molecules and their amplification rates prior to starting the analysis. The results of the analysis are expressed as ratios of the gene signal to the internal control signal, and these values can then be used for comparison between samples to estimate relative target RNA expression.
Competitive RT-PCR The competitive RT-PCR technique is used for absolute quantification. It involves the use of a synthetic “competitor” RNA that can be distinguished from the target RNA by a small difference in size or sequence. It is important that the synthetic RNA be identical in sequence but slightly shorter than the target RNA for accurate results. Once designed and synthesized, a known amount of the competitor RNA is added to experimental samples and is co-amplified with the target using RT-PCR. Then, a concentration curve of the competitor RNA is produced, and it is used to compare the RT-PCR signals produced from the endogenous transcripts to determine the amount of target present in the sample.
Comparative RT-PCR Comparative RT-PCR is similar to the competitive RT-PCR in that the target RNA competes for amplification reagents within a single reaction with an internal standard of unrelated sequence. Once the reaction is complete, the results are compared to an external standard curve to determine the target RNA concentration. In comparison to the relative and competitive quantification methods, comparative RT-PCR is considered to be the more convenient method to use since it does not require the investigator to perform a pilot experiment; in relative RT-PCR, the exponential amplification range of the mRNA must be predetermined and in competitive RT-PCR, a synthetic competitor RNA must be synthesized.
Real-time RT-PCR
The emergence of novel fluorescent DNA labeling techniques in the past few years has enabled the analysis and detection of PCR products in real-time and has consequently led to the widespread adoption of real-time RT-PCR for the analysis of gene expression. Not only is real-time RT-PCR now the method of choice for quantification of gene expression, it is also the preferred method of obtaining results from array analyses and gene expressions on a global scale. Currently, there are four different fluorescent DNA probes available for the real-time RT-PCR detection of PCR products: SYBR Green, TaqMan, molecular beacons, and scorpion probes. All of these probes allow the detection of PCR products by generating a fluorescent signal. While the SYBR Green dye emits its fluorescent signal simply by binding to the double-stranded DNA in solution, fluorescence generation by the TaqMan probes, molecular beacons and scorpions depends on Förster Resonance Energy Transfer (FRET) coupling of the dye molecule and a quencher moiety to the oligonucleotide substrates.
SYBR Green When SYBR Green binds to the double-stranded DNA of the PCR products, it emits light upon excitation. The intensity of the fluorescence increases as the PCR products accumulate. This technique is easy to use since probe design is not necessary, given the lack of specificity of the dye's binding. However, since the dye does not discriminate between double-stranded DNA from the PCR products and that from primer-dimers, overestimation of the target concentration is a common problem. Where accurate quantification is an absolute necessity, a further assay for the validation of results must be performed. Nevertheless, among the real-time RT-PCR product detection methods, SYBR Green is the most economical and easiest to use.
TaqMan probes TaqMan probes are oligonucleotides that have a fluorescent probe attached to the 5' end and a quencher to the 3' end. During PCR amplification, these probes hybridize to the target sequences located in the amplicon, and as polymerase replicates the template with TaqMan bound, it also cleaves the fluorescent probe due to the polymerase's 5'-nuclease activity. Because the close proximity between the quencher molecule and the fluorescent probe normally prevents fluorescence from being detected through FRET, the decoupling results in an increase in fluorescence intensity proportional to the number of probe cleavage cycles. Although well-designed TaqMan probes produce accurate real-time RT-PCR results, they are expensive and time-consuming to synthesize when separate probes must be made for each mRNA target analyzed. Additionally, these probes are light sensitive and must be carefully frozen as aliquots to prevent degradation.
Molecular beacon probes Similar to the TaqMan probes, molecular beacons also make use of FRET detection with fluorescent probes attached to the 5' end and a quencher attached to the 3' end of an oligonucleotide substrate. However, whereas the TaqMan fluorescent probes are cleaved during amplification, molecular beacon probes remain intact and rebind to a new target during each reaction cycle. When free in solution, the close proximity of the fluorescent probe and the quencher molecule prevents fluorescence through FRET. However, when molecular beacon probes hybridize to a target, the fluorescent dye and the quencher are separated resulting in the emittance of light upon excitation. As is with the TaqMan probes, molecular beacons are expensive to synthesize and require separate probes for each RNA target.
Scorpion probes The scorpion probes, like molecular beacons, will not be fluorescent active in an unhybridized state, again, due to the fluorescent probe on the 5' end being quenched by the moiety on the 3' end of an oligonucleotide. With Scorpions, however, the 3' end also contains sequence that is complementary to the extension product of the primer on the 5' end. When the Scorpion extension binds to its complement on the amplicon, the Scorpion structure opens, prevents FRET, and enables the fluorescent signal to be measured.
Multiplex probes TaqMan probes, molecular beacons, and scorpions allow the concurrent measurement of several PCR products in a single tube. This is possible because each of the different fluorescent dyes can be associated with a specific emission spectrum. Not only does the use of multiplex probes save time and effort without compromising test utility, but its application in wide areas of research, such as gene deletion analysis, mutation and polymorphism analysis, quantitative analysis, and RNA detection, also makes it an invaluable technique for laboratories of many disciplines.
Two strategies are commonly employed to quantify the results obtained by real-time RT-PCR: the standard curve method and the comparative threshold method.
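As a simple illustration of the comparative threshold approach, the sketch below (Python) computes relative expression with the widely used 2^-ΔΔCq formula, normalizing a target gene to a reference gene and comparing a treated sample with a control. The Cq values are invented, and the calculation assumes the common simplification of roughly 100% amplification efficiency, so that each cycle corresponds to a doubling of product.

```python
def fold_change_ddcq(cq_target_treated, cq_ref_treated,
                     cq_target_control, cq_ref_control):
    """Relative expression by the comparative threshold (2^-ddCq) method.

    Assumes near-100% amplification efficiency for both assays, so each
    PCR cycle corresponds to a doubling of product.
    """
    dcq_treated = cq_target_treated - cq_ref_treated  # normalize to reference gene
    dcq_control = cq_target_control - cq_ref_control
    ddcq = dcq_treated - dcq_control
    return 2 ** (-ddcq)

# Hypothetical quantification cycles for a target gene and a reference gene
print(fold_change_ddcq(24.0, 18.0, 27.0, 18.2))  # ~7-fold up-regulation in the treated sample
```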
Application
The exponential amplification via reverse transcription polymerase chain reaction provides for a highly sensitive technique in which a very low copy number of RNA molecules can be detected. RT-PCR is widely used in the diagnosis of genetic diseases and, semiquantitatively, in the determination of the abundance of specific different RNA molecules within a cell or tissue as a measure of gene expression.
Research methods
RT-PCR is commonly used in research methods to measure gene expression. For example, Lin et al. used qRT-PCR to measure expression of Gal genes in yeast cells. First, Lin et al. engineered a mutation of a protein suspected to participate in the regulation of Gal genes. This mutation was hypothesized to selectively abolish Gal expression. To confirm this, gene expression levels of yeast cells containing this mutation were analyzed using qRT-PCR. The researchers were able to conclusively determine that the mutation of this regulatory protein reduced Gal expression. Northern blot analysis can be used to study such gene expression further at the RNA level.
Gene insertion
RT-PCR can also be very useful in the insertion of eukaryotic genes into prokaryotes. Because most eukaryotic genes contain introns, which are present in the genome but not in the mature mRNA, the cDNA generated from a RT-PCR reaction is the exact (without regard to the error-prone nature of reverse transcriptases) DNA sequence that would be directly translated into protein after transcription. When these genes are expressed in prokaryotic cells for the sake of protein production or purification, the RNA produced directly from transcription need not undergo splicing as the transcript contains only exons. (Prokaryotes, such as E. coli, lack the mRNA splicing mechanism of eukaryotes).
Genetic disease diagnosis
RT-PCR can be used to diagnose genetic diseases such as Lesch–Nyhan syndrome. This genetic disease is caused by a malfunction in the HPRT1 gene, which clinically leads to fatal uric acid urinary stones and symptoms similar to gout. Analyzing a pregnant mother and a fetus for mRNA expression levels of HPRT1 will reveal whether the mother is a carrier and whether the fetus is likely to develop Lesch–Nyhan syndrome.
Cancer detection
Scientists are working on ways to use RT-PCR in cancer detection to help improve prognosis, and monitor response to therapy. Circulating tumor cells produce unique mRNA transcripts depending on the type of cancer. The goal is to determine which mRNA transcripts serve as the best biomarkers for a particular cancer cell type and then analyze its expression levels with RT-PCR.
RT-PCR is also commonly used in studying viruses whose genomes are composed of RNA, such as influenza A virus, retroviruses like HIV, and SARS-CoV-2.
Challenges
Despite its major advantages, RT-PCR is not without drawbacks. The exponential growth of the reverse-transcribed complementary DNA (cDNA) during the multiple cycles of PCR produces inaccurate end-point quantification due to the difficulty in maintaining linearity. In order to provide accurate detection and quantification of RNA content in a sample, qRT-PCR was developed using fluorescence-based modification to monitor the amplification products during each cycle of PCR. The extreme sensitivity of the technique can be a double-edged sword since even the slightest DNA contamination can lead to undesirable results. A simple method for elimination of false positive results is to include anchors, or tags, in the 5' region of a gene-specific primer. Additionally, planning and design of quantification studies can be technically challenging due to the existence of numerous sources of variation, including template concentration and amplification efficiency. Spiking a known quantity of RNA into a sample, running a series of RNA dilutions to generate a standard curve, and including a no-template control (no cDNA) may be used as controls.
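One standard way to control for amplification efficiency is to run a dilution series and fit a standard curve of Cq against log10 of the input amount; the slope gives the efficiency as E = 10^(-1/slope) - 1, which is about 100% for a slope near -3.32. The sketch below (Python with NumPy) shows the calculation; the dilution amounts and Cq readings are invented for illustration.

```python
import numpy as np

def amplification_efficiency(input_amounts, cq_values):
    """Fit a qPCR standard curve and return its slope, efficiency and R^2.

    input_amounts: known template quantities of a dilution series.
    cq_values: measured quantification cycles for those dilutions.
    """
    x = np.log10(np.asarray(input_amounts, dtype=float))
    y = np.asarray(cq_values, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0  # 1.0 corresponds to 100% efficiency
    r = np.corrcoef(x, y)[0, 1]
    return slope, efficiency, r ** 2

# Hypothetical 10-fold dilution series (arbitrary units) and Cq readings
amounts = [1e5, 1e4, 1e3, 1e2, 1e1]
cqs = [15.1, 18.4, 21.8, 25.1, 28.5]
print(amplification_efficiency(amounts, cqs))  # slope ~ -3.35, efficiency ~ 0.99
```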
Protocol
RT-PCR can be carried out by the one-step RT-PCR protocol or the two-step RT-PCR protocol.
One-step RT-PCR
One-step RT-PCR subjects mRNA targets (up to 6 kb) to reverse transcription followed by PCR amplification in a single test tube. Using intact, high-quality RNA and a sequence-specific primer will produce the best results.
Once a one-step RT-PCR kit with a mix of reverse transcriptase, Taq DNA polymerase, and a proofreading polymerase is selected, and all necessary materials and equipment are obtained, a reaction mix is prepared. The reaction mix includes dNTPs, primers, template RNA, the necessary enzymes, and a buffer solution. The reaction mix is added to a PCR tube for each reaction, followed by the template RNA. The PCR tubes are then placed in a thermal cycler to begin cycling. In the first step, cDNA synthesis occurs; the second step is an initial denaturation during which the reverse transcriptase is inactivated. The remaining 40-50 cycles constitute the amplification, which includes denaturation, annealing, and elongation. When amplification is complete, the RT-PCR products can be analyzed with gel electrophoresis.
(PCR Applications Manual and Biotools)
Two-step RT-PCR
Two-step RT-PCR, as the name implies, occurs in two steps: first the reverse transcription, and then the PCR. This method is more sensitive than the one-step method. Kits are also available for two-step RT-PCR. Just as for one-step RT-PCR, use only intact, high-quality RNA for the best results. The primer for two-step PCR does not have to be sequence-specific.
Step one
First combine template RNA, primer, dNTP mix, and nuclease-free water in a PCR tube. Then, add an RNase inhibitor and reverse transcriptase to the PCR tube. Next, place the PCR tube into a thermal cycler for one cycle wherein annealing, extending, and inactivating of reverse transcriptase occurs. Finally, proceed directly to step two which is PCR, or store product on ice until PCR can be performed.
Step two
Add master mix which contains buffer, dNTP mix, MgCl2, Taq polymerase, and nuclease-free water to each PCR tube. Then add the necessary primer to the tubes. Next, place the PCR tubes in a thermal cycler for 30 cycles of the amplification program. This includes denaturation, annealing, and elongation. The products of RT-PCR can be analyzed with gel electrophoresis.
Publication guidelines
Quantitative RT-PCR assay is considered to be the gold standard for measuring the number of copies of specific cDNA targets in a sample but it is poorly standardized. As a result, while there are numerous publications utilizing the technique, many provide inadequate experimental detail and use unsuitable data analysis to draw inappropriate conclusions. Due to the inherent variability in the quality of any quantitative PCR data, not only do reviewers have a difficult time evaluating these manuscripts, but the studies also become impossible to replicate. Recognizing the need for the standardization of the reporting of experimental conditions, the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE, pronounced mykee) guidelines have been published by an international consortium of academic scientists. The MIQE guidelines describe the minimum information necessary for evaluating quantitative PCR experiments that should be required for publication to encourage better experimental practice and ensuring the relevance, accuracy, correct interpretation, and repeatability of quantitative PCR data.
Besides reporting guidelines, the MIQE stresses the need to standardize the nomenclature associated with quantitative PCR to avoid confusion; for example, the abbreviation qPCR should be used for quantitative real-time PCR, while RT-qPCR should be used for reverse transcription-qPCR, and genes used for normalization should be referred to as reference genes instead of housekeeping genes. It also proposes that commercially derived terms like TaqMan probes should not be used, but instead referred to as hydrolysis probes. Additionally, it is proposed that the quantification cycle (Cq) be used to describe the PCR cycle used for quantification instead of the threshold cycle (Ct), crossing point (Cp), and takeoff point (TOP), which refer to the same value but were coined by different manufacturers of real-time instruments.
The guideline consists of the following elements: 1) experimental design, 2) sample, 3) nucleic acid extraction, 4) reverse transcription, 5) qPCR target information, 6) oligonucleotides, 7) protocol, 8) validation, and 9) data analysis. Specific items within each element carry a label of either E (essential) or D (desirable). Those labeled E are considered critical and indispensable while those labeled D are considered peripheral yet important for best practices.
Research
In 2023, researchers developed a working prototype of an RT-LAMP lab-on-a-chip system, which provided results for SARS-CoV-2 tests within three minutes. The technology integrates microfluidic channels into printed circuit boards, which may enable low-cost mass production.
References
External links
RT-PCR protocols from Penn state University
Database of validated PCR primer sets (website critique)
Animation to illustrate RT-PCR procedure, from Cold Spring Harbor Laboratory
The Reference in qPCR – an Academic & Industrial Information Platform
Laboratory techniques
Molecular biology
Polymerase chain reaction
Biotechnology | Reverse transcription polymerase chain reaction | [
"Chemistry",
"Biology"
] | 4,868 | [
"Biochemistry methods",
"Genetics techniques",
"Polymerase chain reaction",
"Biotechnology",
"nan",
"Molecular biology",
"Biochemistry"
] |
235,169 | https://en.wikipedia.org/wiki/DNA%20paternity%20testing | DNA paternity testing is the use of DNA profiles to determine whether an individual is the biological parent of another individual. Paternity testing can be especially important when the rights and duties of the father are in issue and a child's paternity is in doubt. Tests can also determine the likelihood of someone being a biological grandparent. Though genetic testing is the most reliable standard, older methods also exist, including ABO blood group typing, analysis of various other proteins and enzymes, or using human leukocyte antigen antigens. The current techniques for paternity testing are using polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP). Paternity testing can now also be performed while the woman is still pregnant from a blood draw.
DNA testing is currently the most advanced and accurate technology to determine parentage. In a DNA paternity test, the result (called the 'probability of parentage') is 0% when the alleged parent is not biologically related to the child, and the probability of parentage is typically 99.99% when the alleged parent is biologically related to the child. However, while almost all individuals have a single and distinct set of genes, rare individuals, known as "chimeras", have at least two different sets of genes, which can result in a false negative result if their reproductive tissue has a different genetic make-up from the tissue sampled for the test.
Paternity or maternity testing for child or adult
The DNA test is performed by collecting buccal (cheek) cells found on the inside of a person's cheek using a buccal or cheek swab. These swabs have wooden or plastic stick handles with a cotton or synthetic tip. The collector rubs the inside of a person's cheek to collect as many buccal cells as possible, which are then sent to a laboratory for testing. Samples from the alleged father or mother and the child are needed.
Prenatal paternity testing for unborn child
Invasive prenatal paternity testing
It is possible to determine who the biological father of the fetus is while the woman is still pregnant through procedures called chorionic villus sampling or amniocentesis. Chorionic villus sampling retrieves placental tissue in either a transcervical or transabdominal manner. Amniocentesis retrieves amniotic fluid by inserting a needle through the pregnant mother's abdominal wall. These procedures are highly accurate because they are taking a sample directly from the fetus; however, there is a small risk for the woman to miscarry and lose the pregnancy as a result. Both CVS and amniocentesis require the pregnant woman to visit a genetic specialist known as a maternal-fetal medicine specialist who will perform the procedure.
Non-invasive prenatal paternity testing
Advances in genetic testing have led to the ability to identify the biological father while the woman is still pregnant. There is a small amount of fetal DNA (cffDNA) present in the mother's blood during pregnancy. This allows for accurate fetal DNA paternity testing during pregnancy from a blood draw with no risk of miscarriage. Studies have shown that cffDNA can first be observed as early as seven weeks gestation, and the amount of cffDNA increases as the pregnancy progresses.
DNA profiling
The DNA of an individual is the same in every somatic (nonreproductive) cell. Sexual reproduction brings the DNA of both parents together to create a unique combination of genetic material in a new cell, so the genetic material of an individual is derived from the genetic material of each parent in equal amounts; this genetic material is known as the nuclear genome of the individual, because it is found in the nucleus.
Comparing the DNA sequence of one person to that of another can prove if one of them was derived from the other, but DNA paternity tests are not currently 100% accurate. Specific sequences are examined to see if they were copied verbatim from one individual's genome; if so, then the genetic material of one individual could have been derived from that of the other (i.e. one is the parent of the other). This is called autosomal DNA testing. It is currently the gold standard in paternity testing, as it allows a comparison of the child's DNA to that of the mother and alleged father. The genetic contribution to the child from the mother can be evaluated, yielding the possible genotypes for the true father. If the alleged father cannot be excluded as the true father, statistical calculations can then be conducted to determine how likely it is that the alleged father is the true father, compared with the possibility that a random unrelated man is the father.
Besides nuclear DNA, mitochondria also have their own genetic material called mitochondrial DNA. Mitochondrial DNA comes only from the mother, without any shuffling. Proving a relationship based on comparison of the mitochondrial genome is much easier than that based on the nuclear genome. However, testing the mitochondrial genome can prove only if two individuals are related by common descent through maternal lines only from a common ancestor and is, thus, of limited value (i.e., it could not be used to test for paternity).
In testing the paternity of a male child, comparison of the Y chromosome can be used, since it is passed directly from father to son. However, similar to mitochondrial DNA, the Y chromosome is passed through the paternal line. This means that two brothers share the Y chromosome of their father. Therefore if one brother is the suspected father, his biological brother could also be the father based on Y chromosomal data alone. This is true with any male related to the suspected father on the paternal line. For this reason autosomal DNA testing would be a more precise paternity testing method.
In the US, the AABB has regulations for DNA paternity and family relationship testing, but AABB accreditation is not required. DNA test results are legally admissible if the collection and the processing follows a chain of custody. Similarly in Canada, the SCC has regulations on DNA paternity and relationship testing, but this accreditation, while recommended, is not required.
The Paternity Testing Commission of the International Society for Forensic Genetics has taken up the task of establishing the biostatistical recommendations in accordance with the ISO/IEC 17025 standards. Bio-statistical evaluations of paternity should be based on a likelihood ratio principle - yielding the Paternity Index, PI. The recommendations provide guidance on concepts of genetic hypotheses and calculation concerns needed to produce valid PIs, as well as on specific issues related to population genetics.
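As a highly simplified illustration of the likelihood-ratio principle, the sketch below (Python) computes a single-locus paternity index for a case where the obligate paternal allele is unambiguous, multiplies the indices of independent loci into a combined paternity index (CPI), and converts that to a probability of paternity using a 0.5 prior. The allele frequencies and genotypes are invented; real casework uses population-specific databases, mutation rates and corrections that this toy example ignores.

```python
def single_locus_pi(paternal_allele_freq, father_has_allele, father_homozygous):
    """Toy single-locus paternity index (PI = X / Y).

    X is the chance the tested man transmits the obligate paternal allele
    (1 if homozygous for it, 0.5 if heterozygous, 0 otherwise); Y is the
    chance a random man transmits it, approximated by its population frequency.
    """
    if not father_has_allele:
        return 0.0
    x = 1.0 if father_homozygous else 0.5
    return x / paternal_allele_freq

def probability_of_paternity(locus_pis, prior=0.5):
    """Combine per-locus indices and apply Bayes' rule with a flat prior."""
    cpi = 1.0
    for pi in locus_pis:
        cpi *= pi
    return cpi, (cpi * prior) / (cpi * prior + (1.0 - prior))

# Hypothetical three-locus case
pis = [single_locus_pi(0.10, True, False),  # PI = 5
       single_locus_pi(0.05, True, True),   # PI = 20
       single_locus_pi(0.20, True, False)]  # PI = 2.5
print(probability_of_paternity(pis))        # CPI = 250, probability ~ 0.996
```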
History
The first form of any kind of parental testing was blood typing, or matching blood types between the child and alleged parent, which became available in the 1920s, after scientists recognized that blood types, which had been discovered in the early 1900s, were genetically inherited. Under this form of testing, the blood types of the child and parents are compared, and it can be determined whether there is any possibility of a parental link. For example, two O blood type parents can produce a child only with an O blood type, and two parents with a B blood type can produce a child with either a B or an O blood type. This often led to inconclusive results, as only about 30% of the entire population could be excluded from being the possible parent under this form of testing. In the 1930s, serological testing, which tests certain proteins in the blood, became available, with a 40% exclusion rate.
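The exclusion logic of classical ABO blood typing can be captured in a few lines. The sketch below (Python) is a simplified model that ignores rare variants such as cis-AB and the Bombay phenotype; it enumerates the possible child blood types for any pair of parental phenotypes and reproduces the examples given above.

```python
from itertools import product

# Possible genotypes behind each ABO phenotype
GENOTYPES = {"A": ["AA", "AO"], "B": ["BB", "BO"], "AB": ["AB"], "O": ["OO"]}

def phenotype(genotype):
    alleles = set(genotype)
    if {"A", "B"} <= alleles:
        return "AB"
    if "A" in alleles:
        return "A"
    if "B" in alleles:
        return "B"
    return "O"

def possible_child_types(parent1, parent2):
    """All ABO phenotypes a child of these two parents could have."""
    types = set()
    for g1, g2 in product(GENOTYPES[parent1], GENOTYPES[parent2]):
        for a1, a2 in product(g1, g2):  # one allele inherited from each parent
            types.add(phenotype(a1 + a2))
    return types

print(possible_child_types("O", "O"))  # {'O'}
print(possible_child_types("B", "B"))  # {'B', 'O'}
print(possible_child_types("A", "B"))  # {'AB', 'A', 'B', 'O'}
```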
In the 1960s, accurate genetic paternity testing became a possibility when HLA typing was developed, which compares the genetic fingerprints on white blood cells between the child and alleged parent. HLA tests could be done with 80% accuracy but could not distinguish between close relatives. Genetic parental testing technology advanced further with the isolation of the first restriction enzyme in 1970. Highly accurate DNA parental testing became available in the 1980s with the development of RFLP. In the 1990s, PCR became the standard method for DNA parental testing: a simpler, faster, and more accurate method of testing than RFLP, it has an exclusion rate of 99.99% or higher.
Legal evidence
The DNA parentage test that follows strict chain of custody can generate legally admissible results that are used for child support, inheritance, social welfare benefits, immigration, or adoption purposes. To satisfy the chain-of-custody legal requirements, all tested parties have to be properly identified and their specimens collected by a third-party professional who is not related to any of the tested parties and has no interest in the outcome of the test.
The quantum of evidence needed is clear and convincing evidence: that is, more evidence than an ordinary case in civil litigation, but less than beyond a reasonable doubt required to convict a defendant in a criminal case.
In recent years, immigration authorities in various countries, such as the United States, United Kingdom, Canada, Australia, France, and others, may accept DNA parentage test results from immigration petitioners and beneficiaries in a family-based immigration case when primary documents that prove biological relationship are missing or inadequate.
In the U.S., immigration applicants bear the responsibility of arranging and paying for DNA testing. The U.S. immigration authorities require that the DNA test, if pursued, be performed by one of the laboratories accredited by the AABB (formerly American Association of Blood Banks). Similarly, in Canada, the laboratory needs to be accredited by the Standards Council of Canada.
Although paternity tests are more common than maternity tests, there may be circumstances in which the biological mother of the child is unclear: examples include cases of an adopted child attempting to reunify with his or her biological mother, potential hospital mix-ups, and in vitro fertilization where the laboratory may have implanted an unrelated embryo inside the mother.
Other factors, such as new laws regarding reproductive technologies using donated eggs and sperm and surrogate mothers, can also mean that the female giving birth is not necessarily the legal mother of the child. For example, in Canada, the federal Human Assisted Reproduction Act provides for the use of hired surrogate mothers. The legal mother of the child may be the egg donor. Similar laws are in place in the United Kingdom and Australia.
In Brazil in 2019, two male identical twins were ordered to both pay maintenance for a child fathered by one of them, because the father could not be identified with DNA.
Legal issues
Australia
Peace-of-mind parentage tests are widely available on the internet. For a parentage test (paternity or maternity) to be admissible for legal purposes, such as for changing a birth certificate, Family Law Court proceedings, visa/citizenship applications or child support claims, the process must comply with the Family Law Regulations 1984 (Cth). Further, the laboratory processing the samples must be accredited by the National Association of Testing Authorities (NATA).
Canada
Personal paternity-testing kits are available. The Standards Council of Canada regulates paternity testing in Canada whereby laboratories are ISO 17025-approved. In Canada, only a handful of labs have this approval, and it is recommended that testing is performed in these labs. Courts also have the power to order paternity tests during divorce cases.
China
In China, paternity testing is legally available to fathers who suspect their child is not theirs. Chinese law also requires a paternity test for any child born outside the one-child policy for the child to be eligible for a hukou, or family registration record. Family tie formed by adoption can also only be confirmed by a paternity test. A large number of Chinese citizens seek paternity testing each year, and this has given rise to many unlicensed illegal testing centers being set up.
France
DNA paternity testing is performed solely on the decision of a judge in the course of a judicial procedure, in order either to establish or contest paternity or to obtain or deny child support. Non-consensual private DNA paternity testing is illegal, including through laboratories in other countries, and is punishable by up to a year in prison and a €15,000 fine. The French Council of State has described the law's purpose as upholding the "French regime of filiation" and preserving "the peace of families."
Germany
Under the Gene Diagnostics Act of 2009, secret paternity testing is illegal. Any paternity testing must be conducted by a licensed physician or by an expert with a university degree in science and special education in parentage testing, and the laboratory carrying out genetic testing must be accredited according to ISO/IEC 17025. Full informed consent of both parents is required, and prenatal paternity testing is prohibited, with the exception of sexual abuse and rape cases. Any genetic testing done without the other parent's consent is punishable with a €5,000 fine. Due to an amendment of the civil law section 1598a in 2005, any man who contests paternity no longer automatically severs legal rights and obligations to the child.
Israel
A paternity test with any legal standing must be ordered by a family court. Though parents have access to "peace of mind" parental tests through overseas laboratories, family courts are under no obligation to accept them as evidence. It is also illegal to take genetic material for a parental test from a minor over 16 years of age without the minor's consent. Family courts have the power to order paternity tests against the will of the father in divorce and child support cases, as well as in other cases such as determining heirs and settling the question involving the population registry. A man seeking to prove that he is not the father of the child registered as his is entitled to a paternity test, even if the mother and natural guardian object. Paternity tests are not ordered when it is believed it could lead to the murder of the mother, and until 2007, were not ordered when there was a chance that the child of a married woman could have been fathered by a man other than her husband, thereby making the child a mamzer under Jewish law.
Philippines
DNA paternity testing for personal knowledge is legal, and home test kits are available by mail from representatives of AABB- and ISO 17025-certified laboratories. DNA Paternity Testing for official purposes, such as sustento (child support) and inheritance disputes, must follow the Rule on DNA Evidence A.M. No. 06-11-5-SC, which was promulgated by the Philippine Supreme Court on October 15, 2007. Tests are sometimes ordered by courts when proof of paternity is required.
Spain
In Spain, peace-of-mind paternity tests are a "big business," partly due to the French ban on paternity testing, with many genetic testing companies being based in Spain.
United Kingdom
In the United Kingdom, there were no restrictions on paternity tests until the Human Tissue Act 2004 came into force in September 2006. Section 45 states that it is an offence to possess without appropriate consent any human bodily material with the intent of analysing its DNA. Legally declared fathers have access to paternity-testing services under the new regulations, provided the putative parental DNA being tested is their own. Tests are sometimes ordered by courts when proof of paternity is required. In the UK, the Ministry of Justice accredits bodies that can conduct this testing. The Department of Health produced a voluntary code of practice on genetic paternity testing in 2001. This document is currently under review, and responsibility for it has been transferred to the Human Tissue Authority.
In the 2018 case of Anderson v Spencer, the Court of Appeal permitted, for the first time, DNA samples taken from a deceased person to be used for paternity testing.
United States
In the United States, paternity testing is fully legal, and fathers may test their children without the consent or knowledge of the mother. Paternity testing take-home kits are readily available for purchase, though their results are not admissible in court and are for personal knowledge only.
Only a court-ordered paternity test may be used as evidence in court proceedings. If parental testing is being submitted for legal purposes, including immigration, testing must be ordered through a lab that has AABB accreditation for relationship DNA testing.
The legal implications of a parentage result test vary by state and according to whether the putative parents are unmarried or married. If a parentage test does not meet forensic standards for the state in question, a court-ordered test may be required for the results of the test to be admissible for legal purposes. For unmarried parents, if a parent is currently receiving child support or custody, but DNA testing later proves that the man is not the father, support automatically stops. However, in many states, this testing must be performed during a narrow window of time, if a voluntary acknowledgement of parentage form has already been signed by the putative father; otherwise, the results of the test may be disregarded by law, and in many cases, a man may be required to pay child support, though the child is biologically unrelated. In a few states, if the mother is receiving the support, then that alleged father has the right to file a lawsuit to get back any money that he lost from paying support. As of 2011, in most states, unwed parents confronted with a voluntary acknowledgement of parentage form are informed of the possibility and right to request a DNA paternity test. If testing is refused by the mother, the father may not be required to sign the birth certificate or the voluntary acknowledgement of parentage form for the child. For wedded putative parents, the husband of the mother is presumed to be the father of the child. But, in most states, this presumption can be overturned by the application of a forensic paternity test; in many states, the time for overturning this presumption may be limited to the first few years of the child's life.
Reverse paternity testing
Reverse paternity determination is the ability to establish the biological father when the father himself is not available for testing. The test uses the STR alleles of the mother and her child, of other children, and of brothers of the alleged father, and deduces the father's likely genetic constitution on the basis of the laws of inheritance, building an approximate profile. This profile can stand in for the father's DNA when a direct sample is unavailable. An episode of Solved shows this test being used to determine whether a blood sample matched the victim of a kidnapping.
See also
Paternity fraud
Mosaicism and chimerism, rare genetic conditions that can result in false negative results on DNA-based tests
Non-paternity event
Lauren Lake's Paternity Court, a television series that debuted in fall 2013
Genetic:
Heritability
List of Mendelian traits in humans
References
External links
UK paternity testing regulations per the Human Tissue Authority
Applied genetics
DNA
Family law
Fathers' rights
Forensic genetics
Genetics techniques
Parenting
Testing | DNA paternity testing | [
"Engineering",
"Biology"
] | 3,890 | [
"Genetics techniques",
"Genetic engineering"
] |
235,287 | https://en.wikipedia.org/wiki/Nitric%20oxide | Nitric oxide (nitrogen oxide or nitrogen monoxide) is a colorless gas with the formula . It is one of the principal oxides of nitrogen. Nitric oxide is a free radical: it has an unpaired electron, which is sometimes denoted by a dot in its chemical formula (•N=O or •NO). Nitric oxide is also a heteronuclear diatomic molecule, a class of molecules whose study spawned early modern theories of chemical bonding.
An important intermediate in industrial chemistry, nitric oxide forms in combustion systems and can be generated by lightning in thunderstorms. In mammals, including humans, nitric oxide is a signaling molecule in many physiological and pathological processes. It was proclaimed the "Molecule of the Year" in 1992. The 1998 Nobel Prize in Physiology or Medicine was awarded for discovering nitric oxide's role as a cardiovascular signalling molecule. Its impact extends beyond biology, with applications in medicine, such as the development of sildenafil (Viagra), and in industry, including semiconductor manufacturing.
Nitric oxide should not be confused with nitrogen dioxide (NO2), a brown gas and major air pollutant, or with nitrous oxide (N2O), an anesthetic gas.
History
Nitric oxide (NO) was first identified by Joseph Priestley in the late 18th century, originally seen as merely a toxic byproduct of combustion and an environmental pollutant. Its biological significance was later uncovered in the 1980s when researchers Robert F. Furchgott, Louis J. Ignarro, and Ferid Murad discovered its critical role as a vasodilator in the cardiovascular system, a breakthrough that earned them the 1998 Nobel Prize in Physiology or Medicine.
Physical properties
Electronic configuration
The ground state electronic configuration of NO is, in united atom notation: (1σ)²(2σ*)²(3σ)²(4σ*)²(1π)⁴(5σ)²(2π*)¹.
The first two orbitals are actually pure atomic 1sO and 1sN from oxygen and nitrogen respectively and are therefore usually not noted in the united atom notation. Orbitals noted with an asterisk are antibonding. The ordering of 5σ and 1π according to their binding energies is subject to discussion. Removal of a 1π electron leads to 6 states whose energies span a range starting at a lower level than that of a 5σ electron and extending to a higher level. This is due to the different orbital momentum couplings between a 1π and a 2π electron.
The lone electron in the 2π orbital makes NO a doublet (X ²Π) in its ground state, whose degeneracy is split in the fine structure by spin-orbit coupling, giving levels with total angular momentum J = 1/2 or J = 3/2.
Dipole
The dipole moment of NO has been measured experimentally as 0.15740 D and is oriented from O to N (⁻NO⁺) due to the transfer of negative electronic charge from oxygen to nitrogen.
Reactions
With di- and triatomic molecules
Upon condensing to a liquid, nitric oxide dimerizes to dinitrogen dioxide, but the association is weak and reversible. The N–N distance in crystalline NO is 218 pm, nearly twice the N–O distance.
Since the heat of formation of •NO is endothermic, NO can be decomposed to the elements. Catalytic converters in cars exploit this reaction:
2 •NO → O2 + N2
When exposed to oxygen, nitric oxide converts into nitrogen dioxide:
2 •NO + O2 → 2 •NO2
This reaction is thought to occur via the intermediates ONOO• and the red compound ONOONO.
In water, nitric oxide reacts with oxygen to form nitrous acid (HNO2). The reaction is thought to proceed via the following stoichiometry:
4 •NO + O2 + 2 H2O → 4 HNO2
Nitric oxide reacts with fluorine, chlorine, and bromine to form the nitrosyl halides, such as nitrosyl chloride:
2 •NO + Cl2 → 2 NOCl
With NO2, also a radical, NO combines to form the intensely blue dinitrogen trioxide:
•NO + •NO2 ⇌ ON−NO2
Organic chemistry
The addition of a nitric oxide moiety to another molecule is often referred to as nitrosylation. The Traube reaction is the addition of two equivalents of nitric oxide onto an enolate, giving a diazeniumdiolate (also called a nitrosohydroxylamine). The product can undergo a subsequent retro-aldol reaction, giving an overall process similar to the haloform reaction. For example, nitric oxide reacts with acetone and an alkoxide to form a diazeniumdiolate on each α position, with subsequent loss of methyl acetate as a by-product.
This reaction, which was discovered around 1898, remains of interest in nitric oxide prodrug research. Nitric oxide can also react directly with sodium methoxide, ultimately forming sodium formate and nitrous oxide by way of an N-methoxydiazeniumdiolate.
Coordination complexes
Nitric oxide reacts with transition metals to give complexes called metal nitrosyls. The most common bonding mode of nitric oxide is the terminal linear type (M−NO). Alternatively, nitric oxide can serve as a one-electron pseudohalide. In such complexes, the M−N−O group is characterized by an angle between 120° and 140°. The NO group can also bridge between metal centers through the nitrogen atom in a variety of geometries.
Production and preparation
In commercial settings, nitric oxide is produced by the oxidation of ammonia at 750–900 °C (normally at 850 °C) with platinum as catalyst in the Ostwald process:
4 NH3 + 5 O2 → 4 •NO + 6 H2O
The uncatalyzed endothermic reaction of oxygen (O2) and nitrogen (N2), which is effected at high temperature (>2000 °C) by lightning, has not been developed into a practical commercial synthesis (see Birkeland–Eyde process):
N2 + O2 → 2 •NO
Laboratory methods
In the laboratory, nitric oxide is conveniently generated by reduction of dilute nitric acid with copper:
8 HNO3 + 3 Cu → 3 Cu(NO3)2 + 4 H2O + 2 •NO
An alternative route involves the reduction of nitrous acid in the form of sodium nitrite or potassium nitrite:
2 NaNO2 + 2 NaI + 2 H2SO4 → I2 + 2 Na2SO4 + 2 H2O + 2 •NO
2 NaNO2 + 2 FeSO4 + 3 H2SO4 → Fe2(SO4)3 + 2 NaHSO4 + 2 H2O + 2 •NO
3 KNO2 + KNO3 + Cr2O3 → 2 K2CrO4 + 4 •NO
The iron(II) sulfate route is simple and has been used in undergraduate laboratory experiments. So-called NONOate compounds are also used for nitric oxide generation.
Detection and assay
Nitric oxide concentration can be determined using a chemiluminescent reaction involving ozone. A sample containing nitric oxide is mixed with a large quantity of ozone. The nitric oxide reacts with the ozone to produce oxygen and nitrogen dioxide, accompanied with emission of light (chemiluminescence):
•NO + O3 → •NO2 + O2 + hν
which can be measured with a photodetector. The amount of light produced is proportional to the amount of nitric oxide in the sample.
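Because the emitted light is proportional to the amount of nitric oxide, instruments are calibrated against standards of known concentration and unknown samples are read off the resulting line. A minimal sketch of such a calibration is shown below (Python with NumPy); the standard concentrations and detector counts are invented numbers, and real instruments handle background and nonlinearity more carefully.

```python
import numpy as np

def calibrate_and_quantify(std_conc_ppb, std_signal, unknown_signal):
    """Fit a linear chemiluminescence calibration and read back an unknown.

    Assumes the detector response is linear in NO concentration over the
    calibrated range, as the proportionality described above implies.
    """
    slope, intercept = np.polyfit(std_conc_ppb, std_signal, 1)
    return (unknown_signal - intercept) / slope

# Hypothetical NO standards (ppb) and photodetector counts
standards = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
signals = np.array([12.0, 1030.0, 2055.0, 4090.0, 8170.0])
print(calibrate_and_quantify(standards, signals, 3100.0))  # about 151 ppb
```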
Other methods of testing include electroanalysis (amperometric approach), where ·NO reacts with an electrode to induce a current or voltage change. The detection of NO radicals in biological tissues is particularly difficult due to the short lifetime and concentration of these radicals in tissues. One of the few practical methods is spin trapping of nitric oxide with iron-dithiocarbamate complexes and subsequent detection of the mono-nitrosyl-iron complex with electron paramagnetic resonance (EPR).
A group of fluorescent dye indicators that are also available in acetylated form for intracellular measurements exist. The most common compound is 4,5-diaminofluorescein (DAF-2).
Environmental effects
Acid rain deposition
Nitric oxide reacts with the hydroperoxyl radical (HO2•) to form nitrogen dioxide (NO2), which then can react with a hydroxyl radical (HO•) to produce nitric acid (HNO3):
•NO + HO2• → •NO2 + HO•
•NO2 + HO• → HNO3
Nitric acid, along with sulfuric acid, contributes to acid rain deposition.
Ozone depletion
•NO participates in ozone layer depletion. Nitric oxide reacts with stratospheric ozone to form O2 and nitrogen dioxide:
•NO + O3 → •NO2 + O2
This reaction is also utilized to measure concentrations of •NO in control volumes.
Precursor to NO2
As seen in the acid rain deposition section, nitric oxide can transform into nitrogen dioxide (this can happen with the hydroperoxyl radical, HO2•, or diatomic oxygen, O2). Symptoms of short-term nitrogen dioxide exposure include nausea, dyspnea and headache. Long-term effects could include impaired immune and respiratory function.
Biological functions
NO is a gaseous signaling molecule. It is a key vertebrate biological messenger, playing a role in a variety of biological processes. It is a bioproduct in almost all types of organisms, including bacteria, plants, fungi, and animal cells.
Nitric oxide, an endothelium-derived relaxing factor (EDRF), is biosynthesized endogenously from L-arginine, oxygen, and NADPH by various nitric oxide synthase (NOS) enzymes. Reduction of inorganic nitrate may also make nitric oxide. One of the main enzymatic targets of nitric oxide is guanylyl cyclase. The binding of nitric oxide to the heme region of the enzyme leads to activation, in the presence of iron. Nitric oxide is highly reactive (having a lifetime of a few seconds), yet diffuses freely across membranes. These attributes make nitric oxide ideal for a transient paracrine (between adjacent cells) and autocrine (within a single cell) signaling molecule. Once nitric oxide is converted to nitrates and nitrites by oxygen and water, cell signaling is deactivated.
The endothelium (inner lining) of blood vessels uses nitric oxide to signal the surrounding smooth muscle to relax, resulting in vasodilation and increasing blood flow. Sildenafil (Viagra) is a drug that uses the nitric oxide pathway. Sildenafil does not produce nitric oxide, but enhances the signals that are downstream of the nitric oxide pathway by protecting cyclic guanosine monophosphate (cGMP) from degradation by cGMP-specific phosphodiesterase type 5 (PDE5) in the corpus cavernosum, allowing for the signal to be enhanced, and thus vasodilation. Another endogenous gaseous transmitter, hydrogen sulfide (H2S) works with NO to induce vasodilation and angiogenesis in a cooperative manner.
Nasal breathing produces nitric oxide within the body, while oral breathing does not.
Occupational safety and health
In the U.S., the Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for nitric oxide exposure in the workplace as 25 ppm (30 mg/m3) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 25 ppm (30 mg/m3) over an 8-hour workday. At levels of 100 ppm, nitric oxide is immediately dangerous to life and health.
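The ppm and mg/m3 figures above are related through the molar volume of an ideal gas: at 25 °C and 1 atm one mole of gas occupies about 24.45 L, so mg/m3 is roughly ppm multiplied by the molecular weight and divided by 24.45. The sketch below (Python) is a back-of-the-envelope conversion, not a regulatory calculation; it reproduces the approximately 30 mg/m3 equivalent of the 25 ppm limits.

```python
MOLAR_VOLUME_L = 24.45  # liters per mole of ideal gas at 25 degrees C, 1 atm
MW_NO = 30.01           # g/mol, nitric oxide

def ppm_to_mg_per_m3(ppm, molecular_weight, molar_volume=MOLAR_VOLUME_L):
    """Convert a gas concentration from ppm (by volume) to mg/m3."""
    return ppm * molecular_weight / molar_volume

def mg_per_m3_to_ppm(mg_m3, molecular_weight, molar_volume=MOLAR_VOLUME_L):
    return mg_m3 * molar_volume / molecular_weight

print(ppm_to_mg_per_m3(25, MW_NO))  # about 30.7 mg/m3, matching the stated 30 mg/m3
print(mg_per_m3_to_ppm(30, MW_NO))  # about 24.4 ppm
```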
Explosion hazard
Liquid nitric oxide is very sensitive to detonation even in the absence of fuel, and can be initiated as readily as nitroglycerin. Detonation of the endothermic liquid oxide close to its boiling point (−152 °C) generated a 100 kbar pulse and fragmented the test equipment. It is the simplest molecule that is capable of detonation in all three phases. The liquid oxide is sensitive and may explode during distillation, which has been the cause of industrial accidents. Gaseous nitric oxide detonates at about 2300 m/s, but as a solid it can reach a detonation velocity of 6100 m/s.
References
Notes
Further reading
Butler A. and Nicholson R.; "Life, death and NO." Cambridge 2003. .
van Faassen, E. E.; Vanin, A. F. (eds); "Radicals for life: The various forms of Nitric Oxide." Elsevier, Amsterdam 2007. .
Ignarro, L. J. (ed.); "Nitric oxide:biology and pathobiology." Academic Press, San Diego 2000. .
External links
International Chemical Safety Card 1311
Microscale Gas Chemistry: Experiments with Nitrogen Oxides
Your Brain Boots Up Like a Computer – new insights about the biological role of nitric oxide.
Assessing The Potential of Nitric Oxide in the Diabetic Foot
New Discoveries About Nitric Oxide Can Provide Drugs For Schizophrenia
Free radicals
Gaseous signaling molecules
GABAA receptor positive allosteric modulators
Mitochondrial toxins
Nitrogen oxides
Neurotransmitters
Nitrogen cycle
NMDA receptor antagonists
Orphan drugs
Diatomic molecules
Albanian discoveries | Nitric oxide | [
"Physics",
"Chemistry",
"Biology"
] | 2,832 | [
"Neurochemistry",
"Molecules",
"Free radicals",
"Neurotransmitters",
"Signal transduction",
"Senescence",
"Gaseous signaling molecules",
"Nitrogen cycle",
"Biomolecules",
"Diatomic molecules",
"Metabolism",
"Matter"
] |
235,343 | https://en.wikipedia.org/wiki/Potassium%20hydroxide | Potassium hydroxide is an inorganic compound with the formula KOH, and is commonly called caustic potash.
Along with sodium hydroxide (NaOH), KOH is a prototypical strong base. It has many industrial and niche applications, most of which utilize its caustic nature and its reactivity toward acids. An estimated 700,000 to 800,000 tonnes were produced in 2005. KOH is noteworthy as the precursor to most soft and liquid soaps, as well as numerous potassium-containing chemicals. It is a white solid that is dangerously corrosive.
Properties and structure
KOH exhibits high thermal stability. Because of this high stability and relatively low melting point, it is often melt-cast as pellets or rods, forms that have low surface area and convenient handling properties. These pellets become tacky in air because KOH is hygroscopic. Most commercial samples are ca. 90% pure, the remainder being water and carbonates. Its dissolution in water is strongly exothermic. Concentrated aqueous solutions are sometimes called potassium lyes. Even at high temperatures, solid KOH does not dehydrate readily.
Structure
At higher temperatures, solid KOH crystallizes in the NaCl crystal structure. The OH− group is either rapidly or randomly disordered so that it is effectively a spherical anion of radius 1.53 Å (between Cl− and F− in size). At room temperature, the OH− groups are ordered and the environment about the K+ centers is distorted, with K+···OH− distances ranging from 2.69 to 3.15 Å, depending on the orientation of the OH group. KOH forms a series of crystalline hydrates, namely the monohydrate KOH·H2O, the dihydrate KOH·2H2O and the tetrahydrate KOH·4H2O.
Reactions
Solubility and desiccating properties
About 112 g of KOH dissolve in 100 mL water at room temperature, which contrasts with 100 g/100 mL for NaOH. Because KOH has the higher molar mass (about 56 g/mol versus 40 g/mol for NaOH), on a molar basis KOH is actually slightly less soluble than NaOH (roughly 2.0 mol versus 2.5 mol per 100 mL). Lower molecular-weight alcohols such as methanol, ethanol, and propanols are also excellent solvents. They participate in an acid-base equilibrium. In the case of methanol the potassium methoxide (methylate) forms:
KOH + CH3OH ⇌ CH3OK + H2O
Because of its high affinity for water, KOH serves as a desiccant in the laboratory. It is often used to dry basic solvents, especially amines and pyridines.
As a nucleophile in organic chemistry
KOH, like NaOH, serves as a source of OH−, a highly nucleophilic anion that attacks polar bonds in both inorganic and organic materials. Aqueous KOH saponifies esters:
KOH + RCO2R' → RCO2K + R'OH
When R is a long chain, the product is called a potassium soap. This reaction is manifested by the "greasy" feel that KOH gives when touched; fats on the skin are rapidly converted to soap and glycerol.
Molten KOH is used to displace halides and other leaving groups. The reaction is especially useful for aromatic reagents to give the corresponding phenols.
Reactions with inorganic compounds
Complementary to its reactivity toward acids, KOH attacks oxides. Thus, SiO2 is attacked by KOH to give soluble potassium silicates. KOH reacts with carbon dioxide to give potassium bicarbonate:
KOH + CO2 → KHCO3
Manufacture
Historically, KOH was made by adding potassium carbonate to a strong solution of calcium hydroxide (slaked lime). The salt metathesis reaction results in precipitation of solid calcium carbonate, leaving potassium hydroxide in solution:
Ca(OH)2 + K2CO3 → CaCO3 + 2 KOH
Filtering off the precipitated calcium carbonate and boiling down the solution gives potassium hydroxide ("calcinated or caustic potash"). This method of producing potassium hydroxide remained dominant until the late 19th century, when it was largely replaced by the current method of electrolysis of potassium chloride solutions. The method is analogous to the manufacture of sodium hydroxide (see chloralkali process):
2 KCl + 2 H2O → 2 KOH + H2 + Cl2
Hydrogen gas forms as a byproduct on the cathode; concurrently, an anodic oxidation of the chloride ion takes place, forming chlorine gas as a byproduct. Separation of the anodic and cathodic spaces in the electrolysis cell is essential for this process.
Uses
KOH and NaOH can be used interchangeably for a number of applications, although in industry, NaOH is preferred because of its lower cost.
Catalyst for hydrothermal gasification process
In industry, KOH is a good catalyst for the hydrothermal gasification process, where it is used to improve the gas yield and the hydrogen content of the product. For example, production of coke (fuel) from coal often produces large amounts of coking wastewater. To degrade it, supercritical water is used to convert it to syngas containing carbon monoxide, carbon dioxide, hydrogen and methane. Pressure swing adsorption can then separate the individual gases, and power-to-gas technology can convert them to fuel. The hydrothermal gasification process can also degrade other waste, such as sewage sludge and waste from food factories.
Precursor to other potassium compounds
Many potassium salts are prepared by neutralization reactions involving KOH. The potassium salts of carbonate, cyanide, permanganate, phosphate, and various silicates are prepared by treating either the oxides or the acids with KOH. The high solubility of potassium phosphate is desirable in fertilizers.
Manufacture of soft soaps
The saponification of fats with KOH is used to prepare the corresponding "potassium soaps", which are softer than the more common sodium hydroxide-derived soaps. Because of their softness and greater solubility, potassium soaps require less water to liquefy, and can thus contain more cleaning agent than liquefied sodium soaps.
As an electrolyte
Aqueous potassium hydroxide is employed as the electrolyte in alkaline batteries based on nickel-cadmium, nickel-hydrogen, and manganese dioxide-zinc. Potassium hydroxide is preferred over sodium hydroxide because its solutions are more conductive. The nickel–metal hydride batteries in the Toyota Prius use a mixture of potassium hydroxide and sodium hydroxide. Nickel–iron batteries also use potassium hydroxide electrolyte.
Food industry
In food products, potassium hydroxide acts as a food thickener, pH control agent and food stabilizer. The FDA considers it generally safe as a direct food ingredient when used in accordance with Good Manufacturing Practices. It is known in the E number system as E525.
Niche applications
Like sodium hydroxide, potassium hydroxide attracts numerous specialized applications, virtually all of which rely on its properties as a strong chemical base with its consequent ability to degrade many materials. For example, in a process commonly referred to as "chemical cremation" or "resomation", potassium hydroxide hastens the decomposition of soft tissues, both animal and human, to leave behind only the bones and other hard tissues. Entomologists wishing to study the fine structure of insect anatomy may use a 10% aqueous solution of KOH to apply this process.
In chemical synthesis, the choice between the use of KOH and the use of NaOH is guided by the solubility or keeping quality of the resulting salt.
The corrosive properties of potassium hydroxide make it a useful ingredient in agents and preparations that clean and disinfect surfaces and materials that can themselves resist corrosion by KOH.
KOH is also used for semiconductor chip fabrication (for example anisotropic wet etching).
Potassium hydroxide is often the main active ingredient in chemical "cuticle removers" used in manicure treatments.
Because aggressive bases like KOH damage the cuticle of the hair shaft, potassium hydroxide is used to chemically assist the removal of hair from animal hides. The hides are soaked for several hours in a solution of KOH and water to prepare them for the unhairing stage of the tanning process. This same effect is also used to weaken human hair in preparation for shaving. Preshave products and some shave creams contain potassium hydroxide to force open the hair cuticle and to act as a hygroscopic agent to attract and force water into the hair shaft, causing further damage to the hair. In this weakened state, the hair is more easily cut by a razor blade.
Potassium hydroxide is used to identify some species of fungi. A 3–5% aqueous solution of KOH is applied to the flesh of a mushroom and the researcher notes whether or not the color of the flesh changes. Certain species of gilled mushrooms, boletes, polypores, and lichens are identifiable based on this color-change reaction.
Safety
Potassium hydroxide is a caustic alkali and its solutions range from irritating to skin and other tissue in low concentrations, to highly corrosive in high concentrations. Eyes are particularly vulnerable, and dust or mist is severely irritating to lungs and can cause pulmonary edema. Safety considerations are similar to those of sodium hydroxide.
The caustic effects arise from being highly alkaline, but if potassium hydroxide is neutralised with a non-toxic acid then it becomes a non-toxic potassium salt. It is approved as a food additive under the code E525.
See also
Potash
Soda lime
Saltwater soap – sailors' soap
References
External links
Newscientist article dn10104
MSDS from JTBaker
CDC - NIOSH Pocket Guide to Chemical Hazards
Deliquescent materials
Desiccants
E-number additives
Hydroxides
Photographic chemicals
Potassium compounds | Potassium hydroxide | [
"Physics",
"Chemistry"
] | 1,989 | [
"Hydroxides",
"Desiccants",
"Materials",
"Deliquescent materials",
"Bases (chemistry)",
"Matter"
] |
235,550 | https://en.wikipedia.org/wiki/Sequence%20analysis | In bioinformatics, sequence analysis is the process of subjecting a DNA, RNA or peptide sequence to any of a wide range of analytical methods to understand its features, function, structure, or evolution. It can be performed on the entire genome, transcriptome or proteome of an organism, and can also involve only selected segments or regions, like tandem repeats and transposable elements. Methodologies used include sequence alignment, searches against biological databases, and others.
Since the development of methods of high-throughput production of gene and protein sequences, the rate of addition of new sequences to the databases increased very rapidly. Such a collection of sequences does not, by itself, increase the scientist's understanding of the biology of organisms. However, comparing these new sequences to those with known functions is a key way of understanding the biology of an organism from which the new sequence comes. Thus, sequence analysis can be used to assign function to coding and non-coding regions in a biological sequence usually by comparing sequences and studying similarities and differences. Nowadays, there are many tools and techniques that provide the sequence comparisons (sequence alignment) and analyze the alignment product to understand its biology.
Sequence analysis in molecular biology includes a very wide range of processes:
The comparison of sequences to find similarity, often to infer if they are related (homologous)
Identification of intrinsic features of the sequence such as active sites, post translational modification sites, gene-structures, reading frames, distributions of introns and exons and regulatory elements
Identification of sequence differences and variations, such as point mutations and single nucleotide polymorphisms (SNPs), in order to identify genetic markers.
Revealing the evolution and genetic diversity of sequences and organisms
Identification of molecular structure from sequence alone.
History
Since the very first sequences of the insulin protein were characterized by Fred Sanger in 1951, biologists have been trying to use this knowledge to understand the function of molecules. He and his colleagues' discoveries contributed to the successful sequencing of the first DNA-based genome. The method used in this study, which is called the “Sanger method” or Sanger sequencing, was a milestone in sequencing long-strand molecules such as DNA. This method was eventually used in the human genome project. According to Michael Levitt, sequence analysis was born in the period from 1969 to 1977. In 1969 the analysis of sequences of transfer RNAs was used to infer residue interactions from correlated changes in the nucleotide sequences, giving rise to a model of the tRNA secondary structure. In 1970, Saul B. Needleman and Christian D. Wunsch published the first computer algorithm for aligning two sequences. Over this time, methods for obtaining nucleotide sequences improved greatly, leading to the publication of the first complete genome of a bacteriophage in 1977. Robert Holley and his team at Cornell University are believed to have been the first to sequence an RNA molecule.
Overview of nucleotide sequence analysis (DNA & RNA)
Nucleotide sequence analyses identify functional elements like protein binding sites, uncover genetic variations like SNPs, study gene expression patterns, and understand the genetic basis of traits. It helps to understand mechanisms that contribute to processes like replication and transcription. Some of the tasks involved are outlined below.
Quality control and preprocessing
Quality control assesses the quality of sequencing reads obtained from the sequencing technology (e.g. Illumina). It is the first step in sequence analysis to limit wrong conclusions due to poor quality data. The tools used at this stage depend on the sequencing platform. For instance, FastQC checks the quality of short reads (including RNA sequences), Nanoplot or PycoQC are used for long read sequences (e.g. Nanopore sequence reads), and MultiQC aggregates the result of FastQC in a webpage format.
Quality control provides information such as read lengths, GC content, presence of adapter sequences (for short reads), and a quality score, which is often expressed on a PHRED scale. If adapters or other artifacts from PCR amplification are present in the reads (particularly short reads), they are removed using software such as Trimmomatic or Cutadapt.
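To make the PHRED scale concrete, the short Python sketch below decodes a FASTQ quality string and converts scores to error probabilities; the quality string and the Phred+33 offset shown here are illustrative assumptions rather than output from any particular instrument.
```python
# Minimal sketch: interpreting PHRED quality scores from a FASTQ record.
# The relationship Q = -10 * log10(P) means Q20 ~ 1% error and Q30 ~ 0.1% error.

def phred_to_error_probability(q: int) -> float:
    """Convert a PHRED quality score to the probability of a wrong base call."""
    return 10 ** (-q / 10)

def ascii_to_phred(quality_string: str, offset: int = 33) -> list[int]:
    """Decode the ASCII-encoded quality line of a FASTQ record (Phred+33)."""
    return [ord(ch) - offset for ch in quality_string]

# Example quality line from a hypothetical read
quals = ascii_to_phred("IIIIHHHGG@@@!!!")
mean_q = sum(quals) / len(quals)
print(f"mean Q = {mean_q:.1f}, "
      f"per-base error at Q20 = {phred_to_error_probability(20):.3f}")
```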
Read alignment
At this step, sequencing reads whose quality have been improved are mapped to a reference genome using alignment tools like BWA for short DNA sequence reads, minimap for long read DNA sequences, and STAR for RNA sequence reads. The purpose of mapping is to find the origin of any given read based on the reference sequence. It is also important for detecting variations or phylogenetic studies.
The output from this step, that is, the aligned reads, is stored in a file format known as SAM, which contains information about the reference genome as well as the individual reads. Alternatively, the binary BAM format is preferred as it uses much less disk space.
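As a rough illustration of what a SAM record contains, the following Python sketch parses the eleven mandatory tab-separated fields and keeps confidently mapped reads; the file name and MAPQ threshold are hypothetical, and real pipelines would typically use a dedicated library rather than hand-rolled parsing.
```python
# Minimal sketch: reading the mandatory fields of SAM alignment records.
# Field layout follows the SAM specification; the file name is hypothetical.

SAM_FIELDS = ["QNAME", "FLAG", "RNAME", "POS", "MAPQ",
              "CIGAR", "RNEXT", "PNEXT", "TLEN", "SEQ", "QUAL"]

def parse_sam_line(line: str) -> dict:
    """Split one alignment line into a dict of the 11 mandatory SAM fields."""
    values = line.rstrip("\n").split("\t")
    record = dict(zip(SAM_FIELDS, values))
    record["FLAG"] = int(record["FLAG"])
    record["POS"] = int(record["POS"])
    record["MAPQ"] = int(record["MAPQ"])
    return record

with open("aligned_reads.sam") as handle:
    for line in handle:
        if line.startswith("@"):       # header lines describe the reference
            continue
        rec = parse_sam_line(line)
        if rec["FLAG"] & 0x4:          # bit 0x4 marks an unmapped read
            continue
        if rec["MAPQ"] >= 30:          # keep confidently mapped reads only
            print(rec["QNAME"], rec["RNAME"], rec["POS"])
```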
Note: This is different from sequence alignment which compares two or more whole sequences (or sequence regions) to quantify similarity or differences or to identify an unknown sequence (as discussed below).
The following analyses steps are peculiar to DNA sequences:
Variant calling
Identifying variants is a popular aspect of sequence analysis as variants often contain information of biological significance, such as explaining the mechanism of drug resistance in an infectious disease. These variants could be single nucleotide variants (SNVs), small insertions/deletions (indels), and large structural variants. The read alignments are sorted using SAMtools, after which variant callers such as GATK are used to identify differences compared to the reference sequence.
The choice of variant calling tool depends heavily on the sequencing technology used, so GATK is often used when working with short reads, while long read sequences require tools like DeepVariant and Sniffles. Tools may also differ based on organism (prokaryotes or eukaryotes), source of sequence data (cancer vs metagenomic), and variant type of interest (SNVs or structural variants). The output of variant calling is typically in vcf format, and can be filtered using allele frequencies, quality scores, or other factors based on the research question at hand.
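The sketch below illustrates, in plain Python, how the fixed columns of a VCF file can be read and filtered by quality; the file name and the QUAL cutoff are arbitrary choices for illustration, not recommendations.
```python
# Minimal sketch: filtering variants in a plain-text VCF file by quality score.
# Column order follows the VCF specification; file name and threshold are hypothetical.

def iter_vcf_records(path: str):
    """Yield (CHROM, POS, REF, ALT, QUAL) tuples from a plain-text VCF file."""
    with open(path) as handle:
        for line in handle:
            if line.startswith("#"):          # meta-information and header lines
                continue
            fields = line.rstrip("\n").split("\t")
            chrom, pos, _id, ref, alt, qual = fields[:6]
            yield chrom, int(pos), ref, alt, float(qual) if qual != "." else None

for chrom, pos, ref, alt, qual in iter_vcf_records("variants.vcf"):
    if qual is not None and qual >= 30:       # keep higher-confidence calls
        kind = "SNV" if len(ref) == 1 and len(alt) == 1 else "indel/other"
        print(chrom, pos, ref, ">", alt, kind, f"QUAL={qual:.1f}")
```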
Variant annotation
This step adds context to the variant data using curated information from peer-reviewed papers and publicly available databases like gnomAD and Ensembl. Variants can be annotated with information about genomic features, functional consequences, regulatory elements, and population frequencies using tools like ANNOVAR or SnpEff, or custom scripts and pipeline. The output from this step is an annotation file in bed or txt format.
Visualization and interpretation
Genomic data, such as read alignments, coverage plots, and variant calls, can be visualized using genome browsers like IGV (Integrative Genomics Viewer) or UCSC Genome Browser. Interpretation of the results is done in the context of the biological question or hypothesis under investigation. The output can be a graphical representation of data in the forms of Circos plots, volcano plots, etc., or other forms of report describing the observations.
DNA sequence analysis could also involve statistical modeling to infer relationships and epigenetic analysis, like identifying differential methylation regions using a tool like DSS.
The following steps are peculiar to RNA sequences:
Gene expression analysis
Mapped RNA sequences are analyzed to estimate gene expression levels using quantification tools such as HTSeq, and identify differentially expressed genes (DEGs) between experimental conditions using statistical methods like DESeq2. This is carried out to compare the expression levels of genes or isoforms between or across different samples, and infer biological relevance.
The output of gene expression analysis is typically a table with values representing the expression levels of gene IDs or names in rows and samples in the columns, as well as standard errors and p-values. The results in the table can be further visualized using volcano plots and heatmaps, where colors represent the estimated expression level. Packages like ggplot2 in R and Matplotlib in Python are often used to create the visuals. The table can also be annotated using a reference annotation file, usually in GTF or GFF format, to provide more context about the genes, such as the chromosome name, strand, and start and end positions, and aid result interpretation.
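A minimal Matplotlib sketch of a volcano plot is given below; the column names (log2FoldChange, padj) mimic common DESeq2-style output but are assumptions here, as are the significance thresholds and the file name.
```python
# Minimal sketch: a volcano plot of differential expression results with Matplotlib.
# The input table and its column names are hypothetical stand-ins for real output.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

results = pd.read_csv("differential_expression.csv")   # hypothetical results table
log2fc = results["log2FoldChange"].to_numpy()
padj = results["padj"].to_numpy()

significant = (padj < 0.05) & (np.abs(log2fc) > 1)

plt.scatter(log2fc, -np.log10(padj), s=8, c="grey", label="not significant")
plt.scatter(log2fc[significant], -np.log10(padj[significant]),
            s=8, c="red", label="padj < 0.05 and |log2FC| > 1")
plt.xlabel("log2 fold change")
plt.ylabel("-log10 adjusted p-value")
plt.legend()
plt.savefig("volcano_plot.png", dpi=150)
```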
Functional enrichment analysis
Functional enrichment analysis identifies biological processes, pathways, and functional impacts associated with differentially expressed genes obtained from the previous step. It uses tools like GOSeq and Pathview. This creates a table with information about what pathways and molecular processes are associated with the differentially expressed genes, what genes are down or upregulated, and what gene ontology terms are recurrent or over-represented.
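As one concrete example of the statistics behind over-representation analysis, the sketch below computes a hypergeometric enrichment p-value for a single gene set; all of the counts are invented for illustration.
```python
# Minimal sketch: an over-representation test for one gene set using the
# hypergeometric distribution (the counts below are hypothetical).

from scipy.stats import hypergeom

M = 20000   # genes in the background (population size)
n = 300     # background genes annotated to the pathway of interest
N = 500     # differentially expressed genes drawn from the background
k = 25      # DEGs that fall in the pathway

# P(X >= k): probability of seeing at least k pathway genes by chance
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value = {p_value:.3g}")
```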
RNA sequence analysis explores gene expression dynamics and regulatory mechanisms underlying biological processes and diseases. Interpretation of images and tables is carried out within the context of the hypotheses being investigated.
See also: Transcriptomic technologies.
Analyzing protein sequences
Proteome sequence analysis studies the complete set of proteins expressed by an organism or a cell under specific conditions. It describes protein structure, function, post-translational modifications, and interactions within biological systems. It often starts with raw mass spectrometry (MS) data from proteomics experiments, typically in mzML, mzXML, or RAW file formats.
Beyond preprocessing raw MS data to remove noise, normalize intensities, and detect peaks and converting proprietary file formats (e.g., RAW) to open-source formats (mzML, mzXML) for compatibility with downstream analysis tools, other analytical steps include peptide identification, peptide quantification, protein inference and quantification, generating quality control report, and normalization, imputation and significance testing. The choice and order of analytical steps depend on the MS method used, which can either be data dependent acquisition (DDA) or independent acquisition (DIA).
Genome browsers in sequence analysis
Genome browsers offer a non-code, user-friendly interface to visualize genomes and genomic segments, identify genomic features, and analyze the relationship between numerous genomic elements. The three primary genome browsers—Ensembl genome browser, UCSC genome browser, and the National Centre for Biotechnology Information (NCBI)—support different sequence analysis procedures, including genome assembly, genome annotation, and comparative genomics like exploring differential expression patterns and identifying conserved regions. All browsers support multiple data formats for upload and download and provide links to external tools and resources for sequence analyses, which contributes to their versatility.
Sequence alignment
There are millions of protein and nucleotide sequences known. These sequences fall into many groups of related sequences known as protein families or gene families. Relationships between these sequences are usually discovered by aligning them together and assigning this alignment a score. There are two main types of sequence alignment. Pair-wise sequence alignment only compares two sequences at a time and multiple sequence alignment compares many sequences. Two important algorithms for aligning pairs of sequences are the Needleman-Wunsch algorithm and the Smith-Waterman algorithm. Popular tools for sequence alignment include:
Pair-wise alignment - BLAST, Dot plots
Multiple alignment - ClustalW, PROBCONS, MUSCLE, MAFFT, and T-Coffee.
A common use for pairwise sequence alignment is to take a sequence of interest and compare it to all known sequences in a database to identify homologous sequences. In general, the matches in the database are ordered to show the most closely related sequences first, followed by sequences with diminishing similarity. These matches are usually reported with a measure of statistical significance such as an Expectation value.
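To illustrate the dynamic-programming idea behind the Needleman-Wunsch algorithm mentioned above, the following Python sketch computes a global alignment score for two toy sequences; the match, mismatch, and gap scores are arbitrary choices.
```python
# Minimal sketch: global pairwise alignment scoring with the Needleman-Wunsch
# dynamic-programming recurrence (simple match/mismatch/gap scores, no traceback).

def needleman_wunsch_score(a: str, b: str, match=1, mismatch=-1, gap=-2) -> int:
    """Return the optimal global alignment score for sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                    # align a[i-1] with b[j-1]
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[-1][-1]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
```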
Profile comparison
In 1987, Michael Gribskov, Andrew McLachlan, and David Eisenberg introduced the method of profile comparison for identifying distant similarities between proteins. Rather than using a single sequence, profile methods use a multiple sequence alignment to encode a profile which contains information about the conservation level of each residue. These profiles can then be used to search collections of sequences to find sequences that are related. Profiles are also known as Position Specific Scoring Matrices (PSSMs). In 1993, a probabilistic interpretation of profiles was introduced by Anders Krogh and colleagues using hidden Markov models. These models have become known as profile-HMMs.
In recent years, methods have been developed that allow the comparison of profiles directly to each other. These are known as profile-profile comparison methods.
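The sketch below shows, in simplified form, how a position-specific scoring matrix can be built from a toy multiple sequence alignment and used to score candidate sequences; the pseudocount scheme and uniform background frequencies are simplifying assumptions.
```python
# Minimal sketch: building a simple position-specific scoring matrix (PSSM)
# from a toy multiple sequence alignment, using log-odds against a uniform background.

import math
from collections import Counter

alignment = ["ACGT", "ACGA", "ACCT", "GCGT"]     # hypothetical aligned sequences
alphabet = "ACGT"
background = 1 / len(alphabet)

pssm = []
for column in zip(*alignment):                   # iterate over alignment columns
    counts = Counter(column)
    scores = {}
    for letter in alphabet:
        freq = (counts[letter] + 1) / (len(alignment) + len(alphabet))  # +1 pseudocount
        scores[letter] = math.log2(freq / background)
    pssm.append(scores)

def score_sequence(seq: str) -> float:
    """Score a candidate sequence of the same length against the profile."""
    return sum(pssm[i][letter] for i, letter in enumerate(seq))

print(round(score_sequence("ACGT"), 2), round(score_sequence("TTTT"), 2))
```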
Sequence assembly
Sequence assembly refers to the reconstruction of a DNA sequence by aligning and merging small DNA fragments. It is an integral part of modern DNA sequencing. Since presently-available DNA sequencing technologies are ill-suited for reading long sequences, large pieces of DNA (such as genomes) are often sequenced by (1) cutting the DNA into small pieces, (2) reading the small fragments, and (3) reconstituting the original DNA by merging the information on various fragments.
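A toy illustration of the merging step is sketched below: two error-free reads are joined at their longest exact suffix-prefix overlap, which is the core operation of overlap-based assemblers (real assemblers must additionally handle sequencing errors, repeats, and millions of reads).
```python
# Minimal sketch: greedy merging of two reads by their longest exact
# suffix-prefix overlap; the reads are toy sequences with no errors.

def longest_overlap(left: str, right: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `left` that equals a prefix of `right`."""
    for length in range(min(len(left), len(right)), min_len - 1, -1):
        if left[-length:] == right[:length]:
            return length
    return 0

def merge(left: str, right: str) -> str:
    """Join two reads, keeping the overlapping bases only once."""
    overlap = longest_overlap(left, right)
    return left + right[overlap:]

read_1 = "AGCTTAGCTAGGCT"
read_2 = "AGGCTTCAGGACCT"
print(merge(read_1, read_2))   # AGCTTAGCTAGGCTTCAGGACCT
```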
Sequencing multiple species at one time has recently become a major research objective. Metagenomics is the study of microbial communities obtained directly from the environment. Unlike microorganisms cultured in the lab, a wild sample usually contains dozens, sometimes even thousands, of types of microorganisms from their original habitats. Recovering the original genomes can prove to be very challenging.
Gene prediction
Gene prediction or gene finding refers to the process of identifying the regions of genomic DNA that encode genes. This includes protein-coding genes as well as RNA genes, but may also include the prediction of other functional elements such as regulatory regions. Gene prediction is one of the first and most important steps in understanding the genome of a species once it has been sequenced. In general, the prediction of bacterial genes is significantly simpler and more accurate than the prediction of genes in eukaryotic species, which usually have complex intron/exon patterns. Identifying genes in long sequences remains a problem, especially when the number of genes is unknown. Hidden Markov models can be part of the solution. Machine learning has played a significant role in predicting the sequences of transcription factors. Traditional sequence analysis focused on the statistical parameters of the nucleotide sequence itself. Another method is to identify homologous sequences based on other known gene sequences. The two methods described here focus on the sequence itself. However, the shape features of these molecules, such as DNA and proteins, have also been studied and proposed to have an equivalent, if not greater, influence on their behavior.
Protein structure prediction
The 3D structures of molecules are of major importance to their functions in nature. Since structural prediction of large molecules at an atomic level is a largely intractable problem, some biologists introduced ways to predict 3D structure at a primary sequence level. This includes the biochemical or statistical analysis of amino acid residues in local regions and structural inference from homologs (or other potentially related proteins) with known 3D structures.
There have been a large number of diverse approaches to solve the structure prediction problem. In order to determine which methods were most effective, a structure prediction competition was founded called CASP (Critical Assessment of Structure Prediction).
Computational approaches and techniques
Sequence analysis tasks are often non-trivial to resolve and require the use of relatively complex approaches, many of which are the backbone behind many existing sequence analysis tools. Of the many methods used in practice, the most popular include the following:
Dynamic programming
Artificial neural network
Hidden Markov model
Support vector machine
Clustering
Bayesian network
Regression analysis
Sequence mining
Alignment-free sequence analysis
See also
List of sequence alignment software
List of alignment visualization software
List of phylogenetics software
List of phylogenetic tree visualization software
List of protein structure prediction software
List of RNA structure prediction software
Software section in Sequence analysis in social sciences
References
Bioinformatics | Sequence analysis | [
"Engineering",
"Biology"
] | 3,234 | [
"Bioinformatics",
"Biological engineering"
] |
235,926 | https://en.wikipedia.org/wiki/DNA%20polymerase | A DNA polymerase is a member of a family of enzymes that catalyze the synthesis of DNA molecules from nucleoside triphosphates, the molecular precursors of DNA. These enzymes are essential for DNA replication and usually work in groups to create two identical DNA duplexes from a single original DNA duplex. During this process, DNA polymerase "reads" the existing DNA strands to create two new strands that match the existing ones.
These enzymes catalyze the chemical reaction
deoxynucleoside triphosphate + DNA(n) ⇌ pyrophosphate + DNA(n+1)
DNA polymerase adds nucleotides to the three prime (3')-end of a DNA strand, one nucleotide at a time. Every time a cell divides, DNA polymerases are required to duplicate the cell's DNA, so that a copy of the original DNA molecule can be passed to each daughter cell. In this way, genetic information is passed down from generation to generation.
Before replication can take place, an enzyme called helicase unwinds the DNA molecule from its tightly woven form, in the process breaking the hydrogen bonds between the nucleotide bases. This opens up or "unzips" the double-stranded DNA to give two single strands of DNA that can be used as templates for replication in the above reaction.
History
In 1956, Arthur Kornberg and colleagues discovered DNA polymerase I (Pol I), in Escherichia coli. They described the DNA replication process by which DNA polymerase copies the base sequence of a template DNA strand. Kornberg was later awarded the Nobel Prize in Physiology or Medicine in 1959 for this work. DNA polymerase II was discovered by Thomas Kornberg (the son of Arthur Kornberg) and Malcolm E. Gefter in 1970 while further elucidating the role of Pol I in E. coli DNA replication. Three more DNA polymerases have been found in E. coli, including DNA polymerase III (discovered in the 1970s) and DNA polymerases IV and V (discovered in 1999). From 1983 on, DNA polymerases have been used in the polymerase chain reaction (PCR), and from 1988 thermostable DNA polymerases were used instead, as they do not need to be added in every cycle of a PCR.
Function
The main function of DNA polymerase is to synthesize DNA from deoxyribonucleotides, the building blocks of DNA. The DNA copies are created by the pairing of nucleotides to bases present on each strand of the original DNA molecule. This pairing always occurs in specific combinations, with cytosine pairing with guanine and thymine pairing with adenine, forming two separate base pairs. By contrast, RNA polymerases synthesize RNA from ribonucleotides, using either RNA or DNA as the template.
When synthesizing new DNA, DNA polymerase can add free nucleotides only to the 3' end of the newly forming strand. This results in elongation of the newly forming strand in a 5'–3' direction.
It is important to note that the directionality of the newly forming strand (the daughter strand) is opposite to the direction in which DNA polymerase moves along the template strand. Since DNA polymerase requires a free 3' OH group for initiation of synthesis, it can synthesize in only one direction by extending the 3' end of the preexisting nucleotide chain. Hence, DNA polymerase moves along the template strand in a 3'–5' direction, and the daughter strand is formed in a 5'–3' direction. This difference enables the resultant double-strand DNA formed to be composed of two DNA strands that are antiparallel to each other.
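The directionality and base-pairing rules can be illustrated with a small Python sketch that builds the daughter strand 5'→3' from a template strand read 3'→5'; the sequence is hypothetical and the sketch ignores primers, the enzyme itself, and proofreading.
```python
# Minimal sketch of template-directed synthesis: given a template strand read
# in the 3'->5' direction, the daughter strand is built 5'->3' by Watson-Crick
# pairing (A with T, G with C). The sequence is hypothetical.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize_daughter(template_3_to_5: str) -> str:
    """Return the daughter strand, written 5'->3', for a template given 3'->5'."""
    return "".join(COMPLEMENT[base] for base in template_3_to_5)

template = "TACGGGCAT"                    # template strand, read 3' -> 5'
daughter = synthesize_daughter(template)  # daughter strand, written 5' -> 3'
print(daughter)                           # ATGCCCGTA
```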
The function of DNA polymerase is not quite perfect, with the enzyme making about one mistake for every billion base pairs copied. Error correction is a property of some, but not all DNA polymerases. This process corrects mistakes in newly synthesized DNA. When an incorrect base pair is recognized, DNA polymerase moves backwards by one base pair of DNA. The 3'–5' exonuclease activity of the enzyme allows the incorrect base pair to be excised (this activity is known as proofreading). Following base excision, the polymerase can re-insert the correct base and replication can continue forwards. This preserves the integrity of the original DNA strand that is passed onto the daughter cells.
Fidelity is very important in DNA replication. Mismatches in DNA base pairing can potentially result in dysfunctional proteins and could lead to cancer. Many DNA polymerases contain an exonuclease domain, which detects base pair mismatches and removes the incorrect nucleotide so that it can be replaced by the correct one. The shape of, and the interactions accommodating, the Watson–Crick base pair are what primarily contribute to the detection of errors. Hydrogen bonds play a key role in base pair binding and interaction. The loss of an interaction, which occurs at a mismatch, is said to shift the balance of template–primer binding from the polymerase domain to the exonuclease domain. In addition, incorporation of a wrong nucleotide slows DNA polymerization. This delay gives time for the DNA to be switched from the polymerase site to the exonuclease site. Different conformational changes and losses of interaction occur at different mismatches. In a purine:pyrimidine mismatch there is a displacement of the pyrimidine towards the major groove and the purine towards the minor groove. Relative to the shape of DNA polymerase's binding pocket, steric clashes occur between the purine and residues in the minor groove, and important van der Waals and electrostatic interactions are lost by the pyrimidine. Pyrimidine:pyrimidine and purine:purine mismatches present less notable changes since the bases are displaced towards the major groove, and less steric hindrance is experienced. However, although the different mismatches result in different steric properties, DNA polymerase is still able to detect and differentiate them uniformly and maintain fidelity in DNA replication. DNA polymerization is also critical for many mutagenesis processes and is widely employed in biotechnologies.
Structure
The known DNA polymerases have highly conserved structure, which means that their overall catalytic subunits vary very little from species to species, independent of their domain structures. Conserved structures usually indicate important, irreplaceable functions of the cell, the maintenance of which provides evolutionary advantages. The shape can be described as resembling a right hand with thumb, finger, and palm domains. The palm domain appears to function in catalyzing the transfer of phosphoryl groups in the phosphoryl transfer reaction. DNA is bound to the palm when the enzyme is active. This reaction is believed to be catalyzed by a two-metal-ion mechanism. The finger domain functions to bind the nucleoside triphosphates with the template base. The thumb domain plays a potential role in the processivity, translocation, and positioning of the DNA.
Processivity
DNA polymerase's rapid catalysis is due to its processive nature. Processivity is a characteristic of enzymes that function on polymeric substrates. In the case of DNA polymerase, the degree of processivity refers to the average number of nucleotides added each time the enzyme binds a template. The average DNA polymerase requires about one second to locate and bind a primer/template junction. Once it is bound, a nonprocessive DNA polymerase adds nucleotides at a rate of one nucleotide per second. Processive DNA polymerases, however, add multiple nucleotides per second, drastically increasing the rate of DNA synthesis. The degree of processivity is directly proportional to the rate of DNA synthesis. The rate of DNA synthesis in a living cell was first determined as the rate of phage T4 DNA elongation in phage-infected E. coli. During the period of exponential DNA increase at 37 °C, the rate was 749 nucleotides per second.
DNA polymerase's ability to slide along the DNA template allows increased processivity. There is a dramatic increase in processivity at the replication fork. This increase is facilitated by the DNA polymerase's association with proteins known as the sliding DNA clamp. The clamps are multiple protein subunits associated in the shape of a ring. Using the hydrolysis of ATP, a class of proteins known as the sliding clamp loading proteins open up the ring structure of the sliding DNA clamps allowing binding to and release from the DNA strand. Protein–protein interaction with the clamp prevents DNA polymerase from diffusing from the DNA template, thereby ensuring that the enzyme binds the same primer/template junction and continues replication. DNA polymerase changes conformation, increasing affinity to the clamp when associated with it and decreasing affinity when it completes the replication of a stretch of DNA to allow release from the clamp.
DNA polymerase processivity has been studied with in vitro single-molecule experiments (namely, optical tweezers and magnetic tweezers), which have revealed synergies between DNA polymerases, other molecules of the replisome (helicases and SSBs), and the DNA replication fork. These results have led to the development of synergetic kinetic models for DNA replication that describe the resulting increase in DNA polymerase processivity.
Variation across species
Based on sequence homology, DNA polymerases can be further subdivided into seven different families: A, B, C, D, X, Y, and RT.
Some viruses also encode special DNA polymerases, such as Hepatitis B virus DNA polymerase. These may selectively replicate viral DNA through a variety of mechanisms. Retroviruses encode an unusual DNA polymerase called reverse transcriptase, which is an RNA-dependent DNA polymerase (RdDp). It polymerizes DNA from a template of RNA.
Prokaryotic polymerase
Prokaryotic polymerases exist in two forms: core polymerase and holoenzyme. Core polymerase synthesizes DNA from the DNA template but it cannot initiate the synthesis alone or accurately. Holoenzyme accurately initiates synthesis.
Pol I
Prokaryotic family A polymerases include the DNA polymerase I (Pol I) enzyme, which is encoded by the polA gene and ubiquitous among prokaryotes. This repair polymerase is involved in excision repair with both 3'–5' and 5'–3' exonuclease activity and processing of Okazaki fragments generated during lagging strand synthesis. Pol I is the most abundant polymerase, accounting for >95% of polymerase activity in E. coli; yet cells lacking Pol I have been found suggesting Pol I activity can be replaced by the other four polymerases. Pol I adds ~15-20 nucleotides per second, thus showing poor processivity. Instead, Pol I starts adding nucleotides at the RNA primer:template junction known as the origin of replication (ori). Approximately 400 bp downstream from the origin, the Pol III holoenzyme is assembled and takes over replication at a highly processive speed and nature.
Taq polymerase is a heat-stable enzyme of this family that lacks proofreading ability.
Pol II
DNA polymerase II is a family B polymerase encoded by the polB gene. Pol II has 3'–5' exonuclease activity and participates in DNA repair, replication restart to bypass lesions, and its cell presence can jump from ~30-50 copies per cell to ~200–300 during SOS induction. Pol II is also thought to be a backup to Pol III as it can interact with holoenzyme proteins and assume a high level of processivity. The main role of Pol II is thought to be the ability to direct polymerase activity at the replication fork and help stalled Pol III bypass terminal mismatches.
Pfu DNA polymerase is a heat-stable enzyme of this family found in the hyperthermophilic archaeon Pyrococcus furiosus. Detailed classification divides family B in archaea into B1, B2, B3, in which B2 is a group of pseudoenzymes. Pfu belongs to family B3. Others PolBs found in archaea are part of "Casposons", Cas1-dependent transposons. Some viruses (including Φ29 DNA polymerase) and mitochondrial plasmids carry polB as well.
Pol III
DNA polymerase III holoenzyme is the primary enzyme involved in DNA replication in E. coli and belongs to family C polymerases. It consists of three assemblies: the pol III core, the beta sliding clamp processivity factor, and the clamp-loading complex. The core consists of three subunits: α, the polymerase activity hub, ɛ, exonucleolytic proofreader, and θ, which may act as a stabilizer for ɛ. The beta sliding clamp processivity factor is also present in duplicate, one for each core, to create a clamp that encloses DNA allowing for high processivity. The third assembly is a seven-subunit (τ2γδδχψ) clamp loader complex.
The old textbook "trombone model" depicts an elongation complex with two equivalents of the core enzyme at each replication fork (RF), one for each strand, the lagging and leading. However, recent evidence from single-molecule studies indicates an average of three stoichiometric equivalents of core enzyme at each RF for both Pol III and its counterpart in B. subtilis, PolC. In-cell fluorescent microscopy has revealed that leading strand synthesis may not be completely continuous, and Pol III* (i.e., the holoenzyme α, ε, τ, δ and χ subunits without the β2 sliding clamp) has a high frequency of dissociation from active RFs. In these studies, the replication fork turnover rate was about 10 s for Pol III*, 47 s for the β2 sliding clamp, and 15 min for the DnaB helicase. This suggests that the DnaB helicase may remain stably associated at RFs and serve as a nucleation point for the competent holoenzyme. In vitro single-molecule studies have shown that Pol III* has a high rate of RF turnover when in excess, but remains stably associated with replication forks when concentration is limiting. Another single-molecule study showed that DnaB helicase activity and strand elongation can proceed with decoupled, stochastic kinetics.
Pol IV
In E. coli, DNA polymerase IV (Pol IV) is an error-prone DNA polymerase involved in non-targeted mutagenesis. Pol IV is a Family Y polymerase expressed by the dinB gene that is switched on via SOS induction caused by stalled polymerases at the replication fork. During SOS induction, Pol IV production is increased tenfold and one of the functions during this time is to interfere with Pol III holoenzyme processivity. This creates a checkpoint, stops replication, and allows time to repair DNA lesions via the appropriate repair pathway. Another function of Pol IV is to perform translesion synthesis at the stalled replication fork, for example bypassing N2-deoxyguanine adducts at a faster rate than traversing undamaged DNA. Cells lacking the dinB gene have a higher rate of mutagenesis caused by DNA-damaging agents.
Pol V
DNA polymerase V (Pol V) is a Y-family DNA polymerase that is involved in SOS response and translesion synthesis DNA repair mechanisms. Transcription of Pol V via the umuDC genes is highly regulated to produce only Pol V when damaged DNA is present in the cell generating an SOS response. Stalled polymerases cause RecA to bind to the ssDNA, which causes the LexA protein to autodigest. LexA then loses its ability to repress the transcription of the umuDC operon. The same RecA-ssDNA nucleoprotein posttranslationally modifies the UmuD protein into UmuD' protein. UmuD and UmuD' form a heterodimer that interacts with UmuC, which in turn activates UmuC's polymerase catalytic activity on damaged DNA. In E. coli, a polymerase "tool belt" model for switching pol III with pol IV at a stalled replication fork, where both polymerases bind simultaneously to the β-clamp, has been proposed. However, the involvement of more than one TLS polymerase working in succession to bypass a lesion has not yet been shown in E. coli. Moreover, Pol IV can catalyze both insertion and extension with high efficiency, whereas pol V is considered the major SOS TLS polymerase. One example is the bypass of an intra-strand guanine–thymine cross-link, where the difference in the mutational signatures of the two polymerases showed that Pol IV and Pol V compete for TLS of the intra-strand crosslink.
Family D
In 1998, the family D of DNA polymerase was discovered in Pyrococcus furiosus and Methanococcus jannaschii. The PolD complex is a heterodimer of two chains, each encoded by DP1 (small proofreading) and DP2 (large catalytic). Unlike other DNA polymerases, the structure and mechanism of the DP2 catalytic core resemble that of multi-subunit RNA polymerases. The DP1-DP2 interface resembles that of Eukaryotic Class B polymerase zinc finger and its small subunit. DP1, a Mre11-like exonuclease, is likely the precursor of small subunit of Pol α and ε, providing proofreading capabilities now lost in Eukaryotes. Its N-terminal HSH domain is similar to AAA proteins, especially Pol III subunit δ and RuvB, in structure. DP2 has a Class II KH domain. Pyrococcus abyssi polD is more heat-stable and more accurate than Taq polymerase, but has not yet been commercialized. It has been proposed that family D DNA polymerase was the first to evolve in cellular organisms and that the replicative polymerase of the Last Universal Cellular Ancestor (LUCA) belonged to family D.
Eukaryotic DNA polymerase
Polymerases β, λ, σ, μ (beta, lambda, sigma, mu) and TdT
Family X polymerases contain the well-known eukaryotic polymerase pol β (beta), as well as other eukaryotic polymerases such as Pol σ (sigma), Pol λ (lambda), Pol μ (mu), and Terminal deoxynucleotidyl transferase (TdT). Family X polymerases are found mainly in vertebrates, and a few are found in plants and fungi. These polymerases have highly conserved regions that include two helix-hairpin-helix motifs that are imperative in the DNA-polymerase interactions. One motif is located in the 8 kDa domain that interacts with downstream DNA and one motif is located in the thumb domain that interacts with the primer strand. Pol β, encoded by POLB gene, is required for short-patch base excision repair, a DNA repair pathway that is essential for repairing alkylated or oxidized bases as well as abasic sites. Pol λ and Pol μ, encoded by the POLL and POLM genes respectively, are involved in non-homologous end-joining, a mechanism for rejoining DNA double-strand breaks due to hydrogen peroxide and ionizing radiation, respectively. TdT is expressed only in lymphoid tissue, and adds "n nucleotides" to double-strand breaks formed during V(D)J recombination to promote immunological diversity.
Polymerases α, δ and ε (alpha, delta, and epsilon)
Pol α (alpha), Pol δ (delta), and Pol ε (epsilon) are members of Family B Polymerases and are the main polymerases involved with nuclear DNA replication. Pol α complex (pol α-DNA primase complex) consists of four subunits: the catalytic subunit POLA1, the regulatory subunit POLA2, and the small and the large primase subunits PRIM1 and PRIM2 respectively. Once primase has created the RNA primer, Pol α starts replication elongating the primer with ~20 nucleotides. Due to its high processivity, Pol δ takes over the leading and lagging strand synthesis from Pol α. Pol δ is expressed by genes POLD1, creating the catalytic subunit, POLD2, POLD3, and POLD4 creating the other subunits that interact with Proliferating Cell Nuclear Antigen (PCNA), which is a DNA clamp that allows Pol δ to possess processivity. Pol ε is encoded by the POLE1, the catalytic subunit, POLE2, and POLE3 gene. It has been reported that the function of Pol ε is to extend the leading strand during replication, while Pol δ primarily replicates the lagging strand; however, recent evidence suggested that Pol δ might have a role in replicating the leading strand of DNA as well. Pol ε's C-terminus "polymerase relic" region, despite being unnecessary for polymerase activity, is thought to be essential to cell vitality. The C-terminus region is thought to provide a checkpoint before entering anaphase, provide stability to the holoenzyme, and add proteins to the holoenzyme necessary for initiation of replication. Pol ε has a larger "palm" domain that provides high processivity independently of PCNA.
Compared to other Family B polymerases, the DEDD exonuclease family responsible for proofreading is inactivated in Pol α. Pol ε is unique in that it has two zinc finger domains and an inactive copy of another family B polymerase in its C-terminal. The presence of this zinc finger has implications in the origins of Eukaryota, which in this case is placed into the Asgard group with archaeal B3 polymerase.
Polymerases η, ι and κ (eta, iota, and kappa)
Pol η (eta), Pol ι (iota), and Pol κ (kappa) are Family Y DNA polymerases involved in DNA repair by translesion synthesis and encoded by the genes POLH, POLI, and POLK respectively. Members of Family Y have five common motifs to aid in binding the substrate and primer terminus and they all include the typical right hand thumb, palm and finger domains with added domains like little finger (LF), polymerase-associated domain (PAD), or wrist. The active site, however, differs between family members due to the different lesions being repaired. Polymerases in Family Y are low-fidelity polymerases, but have been proven to do more good than harm as mutations that affect the polymerase can cause various diseases, such as skin cancer and Xeroderma Pigmentosum Variant (XPS). The importance of these polymerases is evidenced by the fact that the gene encoding DNA polymerase η is referred to as XPV, because loss of this gene results in the disease Xeroderma Pigmentosum Variant. Pol η is particularly important for allowing accurate translesion synthesis of DNA damage resulting from ultraviolet radiation. The functionality of Pol κ is not completely understood, but researchers have found two probable functions. Pol κ is thought to act as an extender or an inserter of a specific base at certain DNA lesions. All three translesion synthesis polymerases, along with Rev1, are recruited to damaged lesions via stalled replicative DNA polymerases. There are two pathways of damage repair leading researchers to conclude that the chosen pathway depends on which strand contains the damage, the leading or lagging strand.
Polymerases Rev1 and ζ (zeta)
Pol ζ, another B family polymerase, is made of two subunits: Rev3, the catalytic subunit, and Rev7 (MAD2L2), which increases the catalytic function of the polymerase, and is involved in translesion synthesis. Pol ζ lacks 3' to 5' exonuclease activity and is unique in that it can extend primers with terminal mismatches. Rev1 has three regions of interest: the BRCT domain, the ubiquitin-binding domain, and the C-terminal domain. It has dCMP transferase ability, which adds deoxycytidine opposite lesions that would stall the replicative polymerases Pol δ and Pol ε. These stalled polymerases activate ubiquitin complexes that in turn disassociate replication polymerases and recruit Pol ζ and Rev1. Together Pol ζ and Rev1 add deoxycytidine and Pol ζ extends past the lesion. Through a yet undetermined process, Pol ζ disassociates and replication polymerases reassociate and continue replication. Pol ζ and Rev1 are not required for replication, but loss of the REV3 gene in budding yeast can cause increased sensitivity to DNA-damaging agents due to collapse of replication forks where replication polymerases have stalled.
Telomerase
Telomerase is a ribonucleoprotein which functions to replicate the ends of linear chromosomes, since normal DNA polymerase cannot replicate the ends, or telomeres. The single-strand 3' overhang of the double-strand chromosome with the sequence 5'-TTAGGG-3' recruits telomerase. Telomerase acts like other DNA polymerases by extending the 3' end, but, unlike other DNA polymerases, telomerase does not require a template. The TERT subunit, an example of a reverse transcriptase, uses the RNA subunit to form the primer–template junction that allows telomerase to extend the 3' end of chromosome ends. The gradual decrease in size of telomeres as the result of many replications over a lifetime is thought to be associated with the effects of aging.
Polymerases γ, θ and ν (gamma, theta and nu)
Pol γ (gamma), Pol θ (theta), and Pol ν (nu) are Family A polymerases. Pol γ, encoded by the POLG gene, was long thought to be the only mitochondrial polymerase. However, recent research shows that at least Pol β (beta), a Family X polymerase, is also present in mitochondria. Any mutation that leads to limited or non-functioning Pol γ has a significant effect on mtDNA and is the most common cause of autosomal inherited mitochondrial disorders. Pol γ contains a C-terminus polymerase domain and an N-terminus 3'–5' exonuclease domain that are connected via the linker region, which binds the accessory subunit. The accessory subunit binds DNA and is required for processivity of Pol γ. Point mutation A467T in the linker region is responsible for more than one-third of all Pol γ-associated mitochondrial disorders. While many homologs of Pol θ, encoded by the POLQ gene, are found in eukaryotes, its function is not clearly understood. The sequence of amino acids in the C-terminus is what classifies Pol θ as Family A polymerase, although the error rate for Pol θ is more closely related to Family Y polymerases. Pol θ extends mismatched primer termini and can bypass abasic sites by adding a nucleotide. It also has Deoxyribophosphodiesterase (dRPase) activity in the polymerase domain and can show ATPase activity in close proximity to ssDNA. Pol ν (nu) is considered to be the least effective of the polymerase enzymes. However, DNA polymerase nu plays an active role in homology repair during cellular responses to crosslinks, fulfilling its role in a complex with helicase.
Plants use two Family A polymerases to copy both the mitochondrial and plastid genomes. They are more similar to bacterial Pol I than they are to mammalian Pol γ.
Reverse transcriptase
Retroviruses encode an unusual DNA polymerase called reverse transcriptase, which is an RNA-dependent DNA polymerase (RdDp) that synthesizes DNA from a template of RNA. The reverse transcriptase family contain both DNA polymerase functionality and RNase H functionality, which degrades RNA base-paired to DNA. An example of a retrovirus is HIV. Reverse transcriptase is commonly employed in amplification of RNA for research purposes. Using an RNA template, PCR can utilize reverse transcriptase, creating a DNA template. This new DNA template can then be used for typical PCR amplification. The products of such an experiment are thus amplified PCR products from RNA.
Each HIV retrovirus particle contains two RNA genomes, but, after an infection, each virus generates only one provirus. After infection, reverse transcription is accompanied by template switching between the two genome copies (copy choice recombination). From 5 to 14 recombination events per genome occur at each replication cycle. Template switching (recombination) appears to be necessary for maintaining genome integrity and as a repair mechanism for salvaging damaged genomes.
Bacteriophage T4 DNA polymerase
Bacteriophage (phage) T4 encodes a DNA polymerase that catalyzes DNA synthesis in a 5' to 3' direction. The phage polymerase also has an exonuclease activity that acts in a 3' to 5' direction, and this activity is employed in the proofreading and editing of newly inserted bases. A phage mutant with a temperature sensitive DNA polymerase, when grown at permissive temperatures, was observed to undergo recombination at frequencies that are about two-fold higher than that of wild-type phage.
It was proposed that a mutational alteration in the phage DNA polymerase can stimulate template strand switching (copy choice recombination) during replication.
See also
Biological machines
DNA sequencing
Enzyme catalysis
Genetic recombination
Molecular cloning
Polymerase chain reaction
Protein domain dynamics
Reverse transcription
RNA polymerase
Taq DNA polymerase
References
Further reading
External links
Unusual repair mechanism in DNA polymerase lambda, Ohio State University, July 25, 2006.
A great animation of DNA Polymerase from WEHI at 1:45 minutes in
3D macromolecular structures of DNA polymerase from the EM Data Bank(EMDB)
EC 2.7.7
DNA replication
DNA
Enzymes | DNA polymerase | [
"Biology"
] | 6,400 | [
"Genetics techniques",
"DNA replication",
"Molecular genetics"
] |
236,801 | https://en.wikipedia.org/wiki/Markov%20chain%20Monte%20Carlo | In statistics, Markov chain Monte Carlo (MCMC) is a class of algorithms used to draw samples from a probability distribution. Given a probability distribution, one can construct a Markov chain whose elements' distribution approximates it – that is, the Markov chain's equilibrium distribution matches the target distribution. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution.
Markov chain Monte Carlo methods are used to study probability distributions that are too complex or too highly dimensional to study with analytic techniques alone. Various algorithms exist for constructing such Markov chains, including the Metropolis–Hastings algorithm.
Applications
MCMC methods are primarily used for calculating numerical approximations of multi-dimensional integrals, for example in Bayesian statistics, computational physics, clinical research, computational biology, and computational linguistics.
In Bayesian statistics, Markov chain Monte Carlo methods are typically used to calculate moments and credible intervals of posterior probability distributions. The use of MCMC methods makes it possible to compute large hierarchical models that require integrations over hundreds to thousands of unknown parameters.
In rare event sampling, they are also used for generating samples that gradually populate the rare failure region.
General explanation
Markov chain Monte Carlo methods create samples from a continuous random variable, with probability density proportional to a known function. These samples can be used to evaluate an integral over that variable, as its expected value or variance.
Practically, an ensemble of chains is generally developed, starting from a set of points arbitrarily chosen and sufficiently distant from each other. These chains are stochastic processes of "walkers" which move around randomly according to an algorithm that looks for places with a reasonably high contribution to the integral to move into next, assigning them higher probabilities.
Random walk Monte Carlo methods are a kind of random simulation or Monte Carlo method. However, whereas the random samples of the integrand used in a conventional Monte Carlo integration are statistically independent, those used in MCMC are autocorrelated. Correlation among samples introduces the need to use the Markov chain central limit theorem when estimating the error of mean values.
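As an illustration of why autocorrelation matters, the sketch below estimates the effective sample size of an autocorrelated chain; the chain is a synthetic AR(1) process standing in for real MCMC output, and the simple positive-lag truncation rule is one of several common conventions.
```python
# Minimal sketch: estimating the effective sample size (ESS) of an autocorrelated
# chain, ESS = N / (1 + 2 * sum of positive-lag autocorrelations). The chain here
# is a synthetic AR(1) process, not output from a real sampler.

import numpy as np

rng = np.random.default_rng(0)
n, rho = 10_000, 0.9
chain = np.empty(n)
chain[0] = rng.normal()
for t in range(1, n):                      # AR(1): strongly autocorrelated draws
    chain[t] = rho * chain[t - 1] + rng.normal()

centered = chain - chain.mean()
acf = np.correlate(centered, centered, mode="full")[n - 1:] / (centered @ centered)

tau = 1.0
for k in range(1, n):
    if acf[k] <= 0:                        # truncate at the first non-positive lag
        break
    tau += 2 * acf[k]

print(f"nominal N = {n}, effective sample size ~ {n / tau:.0f}")
```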
These algorithms create Markov chains such that they have an equilibrium distribution which is proportional to the function given.
Reducing correlation
While MCMC methods were created to address multi-dimensional problems better than generic Monte Carlo algorithms, when the number of dimensions rises they too tend to suffer the curse of dimensionality: regions of higher probability tend to stretch and get lost in an increasing volume of space that contributes little to the integral. One way to address this problem could be shortening the steps of the walker, so that it does not continuously try to exit the highest probability region, though this way the process would be highly autocorrelated and expensive (i.e. many steps would be required for an accurate result). More sophisticated methods such as Hamiltonian Monte Carlo and the Wang and Landau algorithm use various ways of reducing this autocorrelation, while managing to keep the process in the regions that give a higher contribution to the integral. These algorithms usually rely on a more complicated theory and are harder to implement, but they usually converge faster.
Examples
Random walk
Metropolis–Hastings algorithm: This method generates a Markov chain using a proposal density for new steps and a method for rejecting some of the proposed moves. It is a general framework that includes as special cases the first and simpler Metropolis algorithm, as well as many of the more recent alternatives listed below (a minimal code sketch is given after this list).
Gibbs sampling: When the target distribution is multi-dimensional, the Gibbs sampling algorithm updates each coordinate from its full conditional distribution given the other coordinates. Gibbs sampling can be viewed as a special case of the Metropolis–Hastings algorithm with acceptance rate uniformly equal to 1. When drawing from the full conditional distributions is not straightforward, other samplers-within-Gibbs are used. Gibbs sampling is popular partly because it does not require any 'tuning'. The structure of Gibbs sampling closely resembles that of coordinate ascent variational inference, in that both algorithms use the full conditional distributions in their updating procedure (a sketch is likewise given after this list).
Metropolis-adjusted Langevin algorithm and other methods that rely on the gradient (and possibly second derivative) of the log target density to propose steps that are more likely to be in the direction of higher probability density.
Hamiltonian (or hybrid) Monte Carlo (HMC): Tries to avoid random walk behaviour by introducing an auxiliary momentum vector and implementing Hamiltonian dynamics in which the potential energy function is the negative logarithm of the target density. The momentum samples are discarded after sampling. The result of hybrid Monte Carlo is that proposals move across the sample space in larger steps; they are therefore less correlated and converge to the target distribution more rapidly.
Pseudo-marginal Metropolis–Hastings: This method replaces the evaluation of the density of the target distribution with an unbiased estimate and is useful when the target density is not available analytically, e.g. latent variable models.
Slice sampling: This method depends on the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. It alternates uniform sampling in the vertical direction with uniform sampling from the horizontal 'slice' defined by the current vertical position.
Multiple-try Metropolis: This method is a variation of the Metropolis–Hastings algorithm that allows multiple trials at each point. By making it possible to take larger steps at each iteration, it helps address the curse of dimensionality.
Reversible-jump: This method is a variant of the Metropolis–Hastings algorithm that allows proposals that change the dimensionality of the space. Markov chain Monte Carlo methods that change dimensionality have long been used in statistical physics applications, where for some problems a distribution that is a grand canonical ensemble is used (e.g., when the number of molecules in a box is variable). But the reversible-jump variant is useful when doing Markov chain Monte Carlo or Gibbs sampling over nonparametric Bayesian models such as those involving the Dirichlet process or Chinese restaurant process, where the number of mixing components/clusters/etc. is automatically inferred from the data.
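The Metropolis–Hastings and Gibbs entries above can be illustrated with minimal Python sketches. The first is a random-walk Metropolis sampler (a special case of Metropolis–Hastings with a symmetric proposal); the target density, proposal scale, and chain length are illustrative choices, not prescribed by the method:

```python
import math
import random

def metropolis(log_target, x0, n_steps, step_size=1.0):
    """Random-walk Metropolis: propose a Gaussian step and accept it with
    probability min(1, target(proposal) / target(current))."""
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step_size)
        # Work with log densities for numerical stability; the normalizing
        # constant of the target cancels in the acceptance ratio.
        accept_prob = math.exp(min(0.0, log_target(proposal) - log_target(x)))
        if random.random() < accept_prob:
            x = proposal        # accept the proposed move
        samples.append(x)       # on rejection the chain stays at x
    return samples

# Illustrative target: an unnormalized standard normal log-density.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=10_000)
```

The second is a Gibbs sampler for a toy two-dimensional target. The target is taken to be a bivariate standard normal with correlation rho, chosen only because its full conditional distributions are known in closed form:

```python
import random

def gibbs_bivariate_normal(rho, n_steps, x0=0.0, y0=0.0):
    """Gibbs sampling for a bivariate standard normal with correlation rho.
    Each coordinate is redrawn from its full conditional distribution,
    which here is normal with mean rho * (other coordinate) and
    variance 1 - rho**2."""
    x, y = x0, y0
    sd = (1.0 - rho * rho) ** 0.5
    samples = []
    for _ in range(n_steps):
        x = random.gauss(rho * y, sd)   # draw x given the current y
        y = random.gauss(rho * x, sd)   # draw y given the updated x
        samples.append((x, y))
    return samples

chain = gibbs_bivariate_normal(rho=0.8, n_steps=5_000)
```

Neither sketch includes burn-in removal or thinning, which practical implementations typically add.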
Interacting particle methods
Interacting MCMC methodologies are a class of mean-field particle methods for obtaining random samples from a sequence of probability distributions with an increasing level of sampling complexity. These probabilistic models include path space state models with increasing time horizon, posterior distributions w.r.t. sequence of partial observations, increasing constraint level sets for conditional distributions, decreasing temperature schedules associated with some Boltzmann–Gibbs distributions, and many others. In principle, any Markov chain Monte Carlo sampler can be turned into an interacting Markov chain Monte Carlo sampler. These interacting Markov chain Monte Carlo samplers can be interpreted as a way to run in parallel a sequence of Markov chain Monte Carlo samplers. For instance, interacting simulated annealing algorithms are based on independent Metropolis–Hastings moves interacting sequentially with a selection-resampling type mechanism. In contrast to traditional Markov chain Monte Carlo methods, the precision parameter of this class of interacting Markov chain Monte Carlo samplers is only related to the number of interacting Markov chain Monte Carlo samplers. These advanced particle methodologies belong to the class of Feynman–Kac particle models, also called Sequential Monte Carlo or particle filter methods in Bayesian inference and signal processing communities. Interacting Markov chain Monte Carlo methods can also be interpreted as a mutation-selection genetic particle algorithm with Markov chain Monte Carlo mutations.
Quasi-Monte Carlo
The quasi-Monte Carlo method is an analog to the normal Monte Carlo method that uses low-discrepancy sequences instead of random numbers. It yields an integration error that decays faster than that of true random sampling, as quantified by the Koksma–Hlawka inequality. Empirically it allows the reduction of both estimation error and convergence time by an order of magnitude. Markov chain quasi-Monte Carlo methods such as the Array–RQMC method combine randomized quasi-Monte Carlo and Markov chain simulation by simulating n chains simultaneously in a way that better approximates the true distribution of the chain than with ordinary MCMC. In empirical experiments, the variance of the average of a function of the state sometimes converges at rate O(n⁻²) or even faster, instead of the Monte Carlo rate O(n⁻¹).
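A small sketch of the difference between pseudorandom and low-discrepancy points is given below, using a base-2 van der Corput sequence to estimate a one-dimensional integral. The integrand and sample size are arbitrary illustrative choices, and this is plain quasi-Monte Carlo, not the Array–RQMC method described above:

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence,
    obtained by mirroring the base-b digits of i about the radix point."""
    points = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            denom *= base
            k, digit = divmod(k, base)
            x += digit / denom
        points.append(x)
    return points

def estimate(points, f):
    """Average of f over the sample points, estimating its integral on [0, 1]."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x                          # exact integral over [0, 1] is 1/3
n = 1_000
plain_mc = estimate([random.random() for _ in range(n)], f)
quasi_mc = estimate(van_der_corput(n), f)    # typically much closer to 1/3
```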
Convergence
Usually it is not hard to construct a Markov chain with the desired properties. The more difficult problem is to determine how many steps are needed to converge to the stationary distribution within an acceptable error. A good chain will have rapid mixing: the stationary distribution is reached quickly starting from an arbitrary position. A standard empirical method to assess convergence is to run several independent simulated Markov chains and check that the ratio of inter-chain to intra-chain variances for all the parameters sampled is close to 1.
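A sketch of this empirical check is given below. It computes a simplified version of the between-chain versus within-chain variance comparison; the widely used potential scale reduction factor of Gelman and Rubin includes further corrections, so this is an illustration of the idea rather than a reference implementation:

```python
def scale_reduction(chains):
    """Compare the spread of several independent chains: values near 1 suggest
    the chains are sampling the same distribution.  `chains` is a list of
    equal-length lists of scalar samples of one parameter."""
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    n = len(chains[0])
    within = mean([var(c) for c in chains])       # average within-chain variance
    between = var([mean(c) for c in chains])      # variance of the chain means
    pooled = (n - 1) / n * within + between       # pooled variance estimate
    return (pooled / within) ** 0.5               # close to 1 when chains agree
```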
Typically, Markov chain Monte Carlo sampling can only approximate the target distribution, as there is always some residual effect of the starting position. More sophisticated Markov chain Monte Carlo-based algorithms such as coupling from the past can produce exact samples, at the cost of additional computation and an unbounded (though finite in expectation) running time.
Many random walk Monte Carlo methods move around the equilibrium distribution in relatively small steps, with no tendency for the steps to proceed in the same direction. These methods are easy to implement and analyze, but unfortunately it can take a long time for the walker to explore all of the space. The walker will often double back and cover ground already covered.
Further consideration of convergence is at the Markov chain central limit theorem. See the references for a discussion of the theory related to convergence and stationarity of the Metropolis–Hastings algorithm.
Software
Several software programs provide MCMC sampling capabilities, for example:
ParaMonte parallel Monte Carlo software available in multiple programming languages including C, C++, Fortran, MATLAB, and Python.
Packages that use dialects of the BUGS model language:
WinBUGS / OpenBUGS / MultiBUGS
JAGS
MCSim
Julia language with packages like
Turing.jl
DynamicHMC.jl
AffineInvariantMCMC.jl
Gen.jl
and the ones in StanJulia repository.
Python (programming language) with the packages:
Blackjax
emcee
NumPyro
PyMC
R (programming language) with the packages adaptMCMC, atmcmc, BRugs, mcmc, MCMCpack, ramcmc, rjags, rstan, etc.
Stan
TensorFlow Probability (probabilistic programming library built on TensorFlow)
Korali high-performance framework for Bayesian UQ, optimization, and reinforcement learning.
MacMCMC — Full-featured application (freeware) for MacOS, with advanced functionality, available at causaScientia
See also
Coupling from the past
Integrated nested Laplace approximations
Markov chain central limit theorem
Metropolis-adjusted Langevin algorithm
References
Citations
Sources
Christophe Andrieu, Nando De Freitas, Arnaud Doucet and Michael I. Jordan An Introduction to MCMC for Machine Learning, 2003
Carlin, Brad; Chib, Siddhartha (1995). "Bayesian Model Choice via Markov Chain Monte Carlo Methods". Journal of the Royal Statistical Society, Series B, 57(3), 473–484.
(See Chapter 11.)
Further reading
Monte Carlo methods
Computational statistics
Markov models
Bayesian estimation | Markov chain Monte Carlo | [
"Physics",
"Mathematics"
] | 2,312 | [
"Monte Carlo methods",
"Computational statistics",
"Computational mathematics",
"Computational physics"
] |
237,037 | https://en.wikipedia.org/wiki/Cartesian%20closed%20category | In category theory, a category is Cartesian closed if, roughly speaking, any morphism defined on a product of two objects can be naturally identified with a morphism defined on one of the factors. These categories are particularly important in mathematical logic and the theory of programming, in that their internal language is the simply typed lambda calculus. They are generalized by closed monoidal categories, whose internal language, linear type systems, are suitable for both quantum and classical computation.
Etymology
Named after René Descartes (1596–1650), French philosopher, mathematician, and scientist, whose formulation of analytic geometry gave rise to the concept of Cartesian product, which was later generalized to the notion of categorical product.
Definition
The category C is called Cartesian closed iff it satisfies the following three properties:
It has a terminal object.
Any two objects X and Y of C have a product X ×Y in C.
Any two objects Y and Z of C have an exponential ZY in C.
The first two conditions can be combined to the single requirement that any finite (possibly empty) family of objects of C admit a product in C, because of the natural associativity of the categorical product and because the empty product in a category is the terminal object of that category.
The third condition is equivalent to the requirement that the functor – ×Y (i.e. the functor from C to C that maps objects X to X ×Y and morphisms φ to φ×idY) has a right adjoint, usually denoted –Y, for all objects Y in C.
For locally small categories, this can be expressed by the existence of a bijection between the hom-sets
Hom(X × Y, Z) ≅ Hom(X, ZY),
which is natural in X, Y, and Z.
Take care to note that a Cartesian closed category need not have finite limits; only finite products are guaranteed.
If a category has the property that all its slice categories are Cartesian closed, then it is called locally cartesian closed. Note that if C is locally Cartesian closed, it need not actually be Cartesian closed; that happens if and only if C has a terminal object.
Basic constructions
Evaluation
For each object Y, the counit of the exponential adjunction is a natural transformation
ev : ZY × Y → Z,
called the (internal) evaluation map. More generally, we can construct the partial application map as the composite
In the particular case of the category Set, these reduce to the ordinary operations:
Composition
Evaluating the exponential in one argument at a morphism p : X → Y gives morphisms
corresponding to the operation of composition with p. Alternate notations for the operation pZ include p* and p∘-. Alternate notations for the operation Zp include p* and -∘p.
Evaluation maps can be chained as
the corresponding arrow under the exponential adjunction
is called the (internal) composition map.
In the particular case of the category Set, this is the ordinary composition operation:
Sections
For a morphism p:X → Y, suppose the following pullback square exists, which defines the subobject of XY corresponding to maps whose composite with p is the identity:
where the arrow on the right is pY and the arrow on the bottom corresponds to the identity on Y. Then ΓY(p) is called the object of sections of p. It is often abbreviated as ΓY(X).
If ΓY(p) exists for every morphism p with codomain Y, then it can be assembled into a functor ΓY : C/Y → C on the slice category, which is right adjoint to a variant of the product functor:
The exponential by Y can be expressed in terms of sections:
Examples
Examples of Cartesian closed categories include:
The category Set of all sets, with functions as morphisms, is Cartesian closed. The product X×Y is the Cartesian product of X and Y, and ZY is the set of all functions from Y to Z. The adjointness is expressed by the following fact: the function f : X×Y → Z is naturally identified with the curried function g : X → ZY defined by g(x)(y) = f(x,y) for all x in X and y in Y.
The subcategory of finite sets, with functions as morphisms, is also Cartesian closed for the same reason.
If G is a group, then the category of all G-sets is Cartesian closed. If Y and Z are two G-sets, then ZY is the set of all functions from Y to Z with G action defined by (g.F)(y) = g.F(g−1.y) for all g in G, F:Y → Z and y in Y.
The subcategory of finite G-sets is also Cartesian closed.
The category Cat of all small categories (with functors as morphisms) is Cartesian closed; the exponential CD is given by the functor category consisting of all functors from D to C, with natural transformations as morphisms.
If C is a small category, then the functor category SetC consisting of all covariant functors from C into the category of sets, with natural transformations as morphisms, is Cartesian closed. If F and G are two functors from C to Set, then the exponential FG is the functor whose value on the object X of C is given by the set of all natural transformations from Hom(X, –) × G to F.
The earlier example of G-sets can be seen as a special case of functor categories: every group can be considered as a one-object category, and G-sets are nothing but functors from this category to Set.
The category of all directed graphs is Cartesian closed; this is a functor category as explained under functor category.
In particular, the category of simplicial sets (which are functors X : Δop → Set) is Cartesian closed.
Even more generally, every elementary topos is Cartesian closed.
In algebraic topology, Cartesian closed categories are particularly easy to work with. Neither the category of topological spaces with continuous maps nor the category of smooth manifolds with smooth maps is Cartesian closed. Substitute categories have therefore been considered: the category of compactly generated Hausdorff spaces is Cartesian closed, as is the category of Frölicher spaces.
In order theory, complete partial orders (cpos) have a natural topology, the Scott topology, whose continuous maps do form a Cartesian closed category (that is, the objects are the cpos, and the morphisms are the Scott continuous maps). Both currying and apply are continuous functions in the Scott topology, and currying, together with apply, provide the adjoint.
A Heyting algebra is a Cartesian closed (bounded) lattice. An important example arises from topological spaces. If X is a topological space, then the open sets in X form the objects of a category O(X) for which there is a unique morphism from U to V if U is a subset of V and no morphism otherwise. This poset is a Cartesian closed category: the "product" of U and V is the intersection of U and V and the exponential UV is the interior of U ∪ (X ∖ V).
A category with a zero object is Cartesian closed if and only if it is equivalent to a category with only one object and one identity morphism. Indeed, if 0 is an initial object and 1 is a final object and we have 0 ≅ 1, then Hom(X, Y) ≅ Hom(1 × X, Y) ≅ Hom(0 × X, Y) ≅ Hom(0, YX), which has only one element.
In particular, any non-trivial category with a zero object, such as an abelian category, is not Cartesian closed. So the category of modules over a ring is not Cartesian closed. However, the functor tensor product with a fixed module does have a right adjoint. The tensor product is not a categorical product, so this does not contradict the above. We obtain instead that the category of modules is monoidal closed.
Examples of locally Cartesian closed categories include:
Every elementary topos is locally Cartesian closed. This example includes Set, FinSet, G-sets for a group G, as well as SetC for small categories C.
The category LH whose objects are topological spaces and whose morphisms are local homeomorphisms is locally Cartesian closed, since LH/X is equivalent to the category of sheaves on X. However, LH does not have a terminal object, and thus is not Cartesian closed.
If C has pullbacks and for every arrow p : X → Y, the functor p* : C/Y → C/X given by taking pullbacks has a right adjoint, then C is locally Cartesian closed.
If C is locally Cartesian closed, then all of its slice categories C/X are also locally Cartesian closed.
Non-examples of locally Cartesian closed categories include:
Cat is not locally Cartesian closed.
Applications
In Cartesian closed categories, a "function of two variables" (a morphism f : X×Y → Z) can always be represented as a "function of one variable" (the morphism λf : X → ZY). In computer science applications, this is known as currying; it has led to the realization that simply-typed lambda calculus can be interpreted in any Cartesian closed category.
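This correspondence can be illustrated informally in a programming language (Python below); the two helper functions mirror the bijection between morphisms X × Y → Z and morphisms X → ZY in the category of sets, and their names are chosen here purely for illustration:

```python
def curry(f):
    """Turn a two-argument function f : X x Y -> Z into a function X -> (Y -> Z)."""
    return lambda x: (lambda y: f(x, y))

def uncurry(g):
    """Inverse direction: turn g : X -> (Y -> Z) back into a two-argument function."""
    return lambda x, y: g(x)(y)

add = lambda x, y: x + y
add_curried = curry(add)       # add_curried(2) is "add 2", itself a function
assert add_curried(2)(3) == add(2, 3) == uncurry(add_curried)(2, 3)
```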
The Curry–Howard–Lambek correspondence provides a deep isomorphism between intuitionistic logic, simply-typed lambda calculus and Cartesian closed categories.
Certain Cartesian closed categories, the topoi, have been proposed as a general setting for mathematics, instead of traditional set theory.
Computer scientist John Backus has advocated a variable-free notation, or Function-level programming, which in retrospect bears some similarity to the internal language of Cartesian closed categories. CAML is more consciously modelled on Cartesian closed categories.
Dependent sum and product
Let C be a locally Cartesian closed category. Then C has all pullbacks, because the pullback of two arrows with codomain Z is given by the product in C/Z.
For every arrow p : X → Y, let P denote the corresponding object of C/Y. Taking pullbacks along p gives a functor p* : C/Y → C/X which has both a left and a right adjoint.
The left adjoint is called the dependent sum and is given by composition with p.
The right adjoint is called the dependent product.
The exponential by P in C/Y can be expressed in terms of the dependent product by the formula .
The reason for these names is because, when interpreting P as a dependent type , the functors and correspond to the type formations and respectively.
Equational theory
In every Cartesian closed category (using exponential notation), (XY)Z and (XZ)Y are isomorphic for all objects X, Y and Z. We write this as the "equation"
(xy)z = (xz)y.
One may ask what other such equations are valid in all Cartesian closed categories. It turns out that all of them follow logically from the following axioms:
x×(y×z) = (x×y)×z
x×y = y×x
x×1 = x (here 1 denotes the terminal object of C)
1x = 1
x1 = x
(x×y)z = xz×yz
(xy)z = x(y×z)
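The identities above can be checked concretely in the category of finite sets, where the exponential ZY is the set of all functions from Y to Z. The sketch below verifies (xy)z = x(y×z) by counting functions; the particular sets chosen are arbitrary:

```python
from itertools import product

def functions(domain, codomain):
    """All functions from a finite domain to a finite codomain, each represented
    as a dict; for finite sets this is exactly the exponential object."""
    domain, codomain = list(domain), list(codomain)
    return [dict(zip(domain, values))
            for values in product(codomain, repeat=len(domain))]

X, Y, Z = range(2), range(3), range(4)

# (X^Y)^Z should have as many elements as X^(Y x Z).
lhs = len(functions(Z, functions(Y, X)))
rhs = len(functions(list(product(Y, Z)), X))
assert lhs == rhs == 2 ** (3 * 4)
```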
Bicartesian closed categories
Bicartesian closed categories extend Cartesian closed categories with binary coproducts and an initial object, with products distributing over coproducts. Their equational theory is extended with the following axioms, yielding something similar to Tarski's high school axioms but with a zero:
x + y = y + x
(x + y) + z = x + (y + z)
x×(y + z) = x×y + x×z
x(y + z) = xy×xz
0 + x = x
x×0 = 0
x0 = 1
Note however that the above list is not complete; type isomorphism in the free BCCC is not finitely axiomatizable, and its decidability is still an open problem.
References
External links
Closed categories
Lambda calculus | Cartesian closed category | [
"Mathematics"
] | 2,537 | [
"Closed categories",
"Mathematical structures",
"Category theory"
] |
237,132 | https://en.wikipedia.org/wiki/Ribozyme | Ribozymes (ribonucleic acid enzymes) are RNA molecules that have the ability to catalyze specific biochemical reactions, including RNA splicing in gene expression, similar to the action of protein enzymes. The 1982 discovery of ribozymes demonstrated that RNA can be both genetic material (like DNA) and a biological catalyst (like protein enzymes), and contributed to the RNA world hypothesis, which suggests that RNA may have been important in the evolution of prebiotic self-replicating systems.
The most common activities of natural or in vitro evolved ribozymes are the cleavage (or ligation) of RNA and DNA and peptide bond formation. For example, the smallest ribozyme known (GUGGC-3') can aminoacylate a GCCU-3' sequence in the presence of PheAMP. Within the ribosome, ribozymes function as part of the large subunit ribosomal RNA to link amino acids during protein synthesis. They also participate in a variety of RNA processing reactions, including RNA splicing, viral replication, and transfer RNA biosynthesis. Examples of ribozymes include the hammerhead ribozyme, the VS ribozyme, leadzyme, and the hairpin ribozyme.
Researchers who are investigating the origins of life through the RNA world hypothesis have been working on discovering a ribozyme with the capacity to self-replicate, which would require it to have the ability to catalytically synthesize polymers of RNA. This should be able to happen in prebiotically plausible conditions with high rates of copying accuracy to prevent degradation of information but also allowing for the occurrence of occasional errors during the copying process to allow for Darwinian evolution to proceed.
Attempts have been made to develop ribozymes as therapeutic agents, as enzymes which target defined RNA sequences for cleavage, as biosensors, and for applications in functional genomics and gene discovery.
Discovery
Before the discovery of ribozymes, enzymes—which were defined solely as catalytic proteins—were the only known biological catalysts. In 1967, Carl Woese, Francis Crick, and Leslie Orgel were the first to suggest that RNA could act as a catalyst. This idea was based upon the discovery that RNA can form complex secondary structures. The first ribozymes were found in the intron of an RNA transcript, which removed itself from the transcript, as well as in the RNA component of the RNase P complex, which is involved in the maturation of pre-tRNAs. In 1989, Thomas R. Cech and Sidney Altman shared the Nobel Prize in chemistry for their "discovery of catalytic properties of RNA". The term ribozyme was first introduced by Kelly Kruger et al. in a paper published in Cell in 1982.
It had been a firmly established belief in biology that catalysis was reserved for proteins. However, the idea of RNA catalysis is motivated in part by the old question regarding the origin of life: Which comes first, enzymes that do the work of the cell or nucleic acids that carry the information required to produce the enzymes? The concept of "ribonucleic acids as catalysts" circumvents this problem. RNA, in essence, can be both the chicken and the egg.
In the 1980s, Thomas Cech, at the University of Colorado Boulder, was studying the excision of introns in a ribosomal RNA gene in Tetrahymena thermophila. While trying to purify the enzyme responsible for the splicing reaction, he found that the intron could be spliced out in the absence of any added cell extract. As much as they tried, Cech and his colleagues could not identify any protein associated with the splicing reaction. After much work, Cech proposed that the intron sequence portion of the RNA could break and reform phosphodiester bonds. At about the same time, Sidney Altman, a professor at Yale University, was studying the way tRNA molecules are processed in the cell when he and his colleagues isolated an enzyme called RNase-P, which is responsible for conversion of a precursor tRNA into the active tRNA. Much to their surprise, they found that RNase-P contained RNA in addition to protein and that RNA was an essential component of the active enzyme. This was such a foreign idea that they had difficulty publishing their findings. The following year, Altman demonstrated that RNA can act as a catalyst by showing that the RNase-P RNA subunit could catalyze the cleavage of precursor tRNA into active tRNA in the absence of any protein component.
Since Cech's and Altman's discovery, other investigators have discovered other examples of self-cleaving RNA or catalytic RNA molecules. Many ribozymes have either a hairpin – or hammerhead – shaped active center and a unique secondary structure that allows them to cleave other RNA molecules at specific sequences. It is now possible to make ribozymes that will specifically cleave any RNA molecule. These RNA catalysts may have pharmaceutical applications. For example, a ribozyme has been designed to cleave the RNA of HIV. If such a ribozyme were made by a cell, all incoming virus particles would have their RNA genome cleaved by the ribozyme, which would prevent infection.
Structure and mechanism
Despite having only four choices for each monomer unit (nucleotides), compared to the 20 amino acid side chains found in proteins, ribozymes have diverse structures and mechanisms. In many cases they are able to mimic the mechanism used by their protein counterparts. For example, in self-cleaving ribozymes, an in-line SN2 reaction is carried out using the 2’ hydroxyl group as a nucleophile attacking the bridging phosphate and causing the 5’ oxygen of the N+1 base to act as a leaving group. In comparison, RNase A, a protein that catalyzes the same reaction, uses a coordinating histidine and lysine to act as a base to attack the phosphate backbone.
Like many protein enzymes, metal binding is also critical to the function of many ribozymes. Often these interactions use both the phosphate backbone and the base of the nucleotide, causing drastic conformational changes. There are two mechanism classes for the cleavage of a phosphodiester backbone in the presence of metal. In the first mechanism, the internal 2’-OH group attacks the phosphorus center in an SN2 mechanism. Metal ions promote this reaction by first coordinating the phosphate oxygen and later stabilizing the oxyanion. The second mechanism also follows an SN2 displacement, but the nucleophile comes from water or exogenous hydroxyl groups rather than RNA itself. The smallest ribozyme is UUU, which can promote the cleavage between G and A of the GAAA tetranucleotide via the first mechanism in the presence of Mn2+. The reason why this trinucleotide (rather than the complementary tetramer) catalyzes this reaction may be because the UUU-AAA pairing is the weakest and most flexible trinucleotide among the 64 conformations, which provides the binding site for Mn2+.
Phosphoryl transfer can also be catalyzed without metal ions. For example, pancreatic ribonuclease A and hepatitis delta virus (HDV) ribozymes can catalyze the cleavage of RNA backbone through acid-base catalysis without metal ions. Hairpin ribozyme can also catalyze the self-cleavage of RNA without metal ions, but the mechanism for this is still unclear.
Ribozyme can also catalyze the formation of peptide bond between adjacent amino acids by lowering the activation entropy.
Activities
Although ribozymes are quite rare in most cells, their roles are sometimes essential to life. For example, the functional part of the ribosome, the biological machine that translates RNA into proteins, is fundamentally a ribozyme, composed of RNA tertiary structural motifs that are often coordinated to metal ions such as Mg2+ as cofactors. In a model system, there is no requirement for divalent cations in a five-nucleotide RNA catalyzing trans-phenylalanation of a four-nucleotide substrate with 3 base pairs complementary with the catalyst, where the catalyst/substrate were devised by truncation of the C3 ribozyme.
The best-studied ribozymes are probably those that cut themselves or other RNAs, as in the original discovery by Cech and Altman. However, ribozymes can be designed to catalyze a range of reactions, many of which may occur in life but have not been discovered in cells.
RNA may catalyze folding of the pathological protein conformation of a prion in a manner similar to that of a chaperonin.
Ribozymes and the origin of life
RNA can also act as a hereditary molecule, which encouraged Walter Gilbert to propose that in the distant past, the cell used RNA as both the genetic material and the structural and catalytic molecule rather than dividing these functions between DNA and protein as they are today; this hypothesis is known as the "RNA world hypothesis" of the origin of life. Since nucleotides and RNA (and thus ribozymes) can arise by inorganic chemicals, they are candidates for the first enzymes, and in fact, the first "replicators" (i.e., information-containing macro-molecules that replicate themselves). An example of a self-replicating ribozyme that ligates two substrates to generate an exact copy of itself was described in 2002.
The discovery of the catalytic activity of RNA solved the "chicken and egg" paradox of the origin of life, solving the problem of origin of peptide and nucleic acid central dogma. According to this scenario, at the origin of life, all enzymatic activity and genetic information encoding was done by one molecule: RNA.
Ribozymes have been produced in the laboratory that are capable of catalyzing the synthesis of other RNA molecules from activated monomers under very specific conditions, these molecules being known as RNA polymerase ribozymes. The first RNA polymerase ribozyme was reported in 1996, and was capable of synthesizing RNA polymers up to 6 nucleotides in length. Mutagenesis and selection has been performed on an RNA ligase ribozyme from a large pool of random RNA sequences, resulting in isolation of the improved "Round-18" polymerase ribozyme in 2001 which could catalyze RNA polymers now up to 14 nucleotides in length. Upon application of further selection on the Round-18 ribozyme, the B6.61 ribozyme was generated and was able to add up to 20 nucleotides to a primer template in 24 hours, until it decomposes by cleavage of its phosphodiester bonds.
The rate at which ribozymes can polymerize an RNA sequence multiples substantially when it takes place within a micelle.
The next ribozyme discovered was the "tC19Z" ribozyme, which can add up to 95 nucleotides with a fidelity of 0.0083 mutations/nucleotide. Next, the "tC9Y" ribozyme was discovered by researchers and was further able to synthesize RNA strands up to 206 nucleotides long in the eutectic phase conditions at below-zero temperature, conditions previously shown to promote ribozyme polymerase activity.
The RNA polymerase ribozyme (RPR) called tC9-4M was able to polymerize RNA chains longer than itself (i.e. longer than 177 nt) in magnesium ion concentrations close to physiological levels, whereas earlier RPRs required prebiotically implausible concentrations of up to 200 mM. The only factor required for it to achieve this was the presence of a very simple amino acid polymer called lysine decapeptide.
The most complex RPR synthesized by that point was called 24-3, which was newly capable of polymerizing the sequences of a substantial variety of nucleotide sequences and navigating through complex secondary structures of RNA substrates inaccessible to previous ribozymes. In fact, this experiment was the first to use a ribozyme to synthesize a tRNA molecule. Starting with the 24-3 ribozyme, Tjhung et al. applied another fourteen rounds of selection to obtain an RNA polymerase ribozyme by in vitro evolution termed '38-6' that has an unprecedented level of activity in copying complex RNA molecules. However, this ribozyme is unable to copy itself and its RNA products have a high mutation rate. In a subsequent study, the researchers began with the 38-6 ribozyme and applied another 14 rounds of selection to generate the '52-2' ribozyme, which compared to 38-6, was again many times more active and could begin generating detectable and functional levels of the class I ligase, although it was still limited in its fidelity and functionality in comparison to copying of the same template by proteins such as the T7 RNA polymerase.
An RPR called t5(+1) adds triplet nucleotides at a time instead of just one nucleotide at a time. This heterodimeric RPR can navigate secondary structures inaccessible to 24-3, including hairpins. In the initial pool of RNA variants derived only from a previously synthesized RPR known as the Z RPR, two sequences separately emerged and evolved to be mutualistically dependent on each other. The Type 1 RNA evolved to be catalytically inactive, but complexing with the Type 5 RNA boosted its polymerization ability and enabled intermolecular interactions with the RNA template substrate, obviating the need to tether the template directly to the RNA sequence of the RPR, which was a limitation of earlier studies. Not only did t5(+1) not need tethering to the template, but a primer was not needed either, as t5(+1) had the ability to polymerize a template in both the 3' → 5' and 5' → 3' directions.
A highly evolved RNA polymerase ribozyme was able to function as a reverse transcriptase, that is, it can synthesize a DNA copy using an RNA template. Such an activity is considered to have been crucial for the transition from RNA to DNA genomes during the early history of life on earth. Reverse transcription capability could have arisen as a secondary function of an early RNA-dependent RNA polymerase ribozyme.
An RNA sequence that folds into a ribozyme is capable of invading duplexed RNA, rearranging into an open holopolymerase complex, and then searching for a specific RNA promoter sequence, and upon recognition rearrange again into a processive form that polymerizes a complementary strand of the sequence. This ribozyme is capable of extending duplexed RNA by up to 107 nucleotides, and does so without needing to tether the sequence being polymerized.
Artificial ribozymes
Since the discovery of ribozymes that exist in living organisms, there has been interest in the study of new synthetic ribozymes made in the laboratory. For example, artificially produced self-cleaving RNAs with good enzymatic activity have been produced. Tang and Breaker isolated self-cleaving RNAs by in vitro selection of RNAs originating from random-sequence RNAs. Some of the synthetic ribozymes that were produced had novel structures, while some were similar to the naturally occurring hammerhead ribozyme.
In 2015, researchers at Northwestern University and the University of Illinois Chicago engineered a tethered ribosome that works nearly as well as the authentic cellular component that produces all the proteins and enzymes within the cell. Called Ribosome-T, or Ribo-T, the artificial ribosome was created by Michael Jewett and Alexander Mankin. The techniques used to create artificial ribozymes involve directed evolution. This approach takes advantage of RNA's dual nature as both a catalyst and an informational polymer, making it easy for an investigator to produce vast populations of RNA catalysts using polymerase enzymes. The ribozymes are mutated by reverse transcribing them with reverse transcriptase into various cDNA and amplified with error-prone PCR. The selection parameters in these experiments often differ. One approach for selecting a ligase ribozyme involves using biotin tags, which are covalently linked to the substrate. If a molecule possesses the desired ligase activity, a streptavidin matrix can be used to recover the active molecules.
Lincoln and Joyce used in vitro evolution to develop ribozyme ligases capable of self-replication in about an hour, via the joining of pre-synthesized highly complementary oligonucleotides.
Although not true catalysts, the creation of artificial self-cleaving riboswitches, termed aptazymes, has also been an active area of research. Riboswitches are regulatory RNA motifs that change their structure in response to a small molecule ligand to regulate translation. While there are many known natural riboswitches that bind a wide array of metabolites and other small organic molecules, only one ribozyme based on a riboswitch has been described: glmS. Early work in characterizing self-cleaving riboswitches was focused on using theophylline as the ligand. In these studies, an RNA hairpin is formed which blocks the ribosome binding site, thus inhibiting translation. In the presence of the ligand, in these cases theophylline, the regulatory RNA region is cleaved off, allowing the ribosome to bind and translate the target gene. Much of this RNA engineering work was based on rational design and previously determined RNA structures rather than directed evolution as in the above examples. More recent work has broadened the ligands used in ribozyme riboswitches to include thiamine pyrophosphate. Fluorescence-activated cell sorting has also been used to engineer aptazymes.
Applications
Ribozymes have been proposed and developed for the treatment of disease through gene therapy. One major challenge of using RNA-based enzymes as a therapeutic is the short half-life of the catalytic RNA molecules in the body. To combat this, the 2’ position on the ribose is modified to improve RNA stability. One area of ribozyme gene therapy has been the inhibition of RNA-based viruses.
A type of synthetic ribozyme directed against HIV RNA called gene shears has been developed and has entered clinical testing for HIV infection.
Similarly, ribozymes have been designed to target the hepatitis C virus RNA, SARS coronavirus (SARS-CoV), Adenovirus and influenza A and B virus RNA. The ribozyme is able to cleave the conserved regions of the virus's genome, which has been shown to reduce the virus in mammalian cell culture. Despite these efforts by researchers, these projects have remained in the preclinical stage.
Known ribozymes
Well-validated naturally occurring ribozyme classes:
GIR1 branching ribozyme
glmS ribozyme
Group I self-splicing intron
Group II self-splicing intron – Spliceosome is likely derived from Group II self-splicing ribozymes.
Hairpin ribozyme
Hammerhead ribozyme
HDV ribozyme
rRNA – Found in all living cells and links amino acids to form proteins.
RNase P
Twister ribozyme
Twister sister ribozyme
VS ribozyme
Pistol ribozyme
Hatchet ribozyme
Viroids
See also
Deoxyribozyme
Spiegelman Monster
Catalysis
Enzyme
RNA world hypothesis
Peptide nucleic acid
Nucleic acid analogues
PAH world hypothesis
SELEX
OLE RNA
Notes and references
Further reading
External links
Tom Cech's Short Talk: "Discovering Ribozymes"
RNA
Catalysts
Biomolecules
Metabolism
Chemical kinetics
RNA splicing | Ribozyme | [
"Chemistry",
"Biology"
] | 4,175 | [
"Catalysis",
"Catalysts",
"Chemical reaction engineering",
"Natural products",
"Biochemistry",
"Organic compounds",
"Cellular processes",
"Biomolecules",
"Molecular biology",
"Structural biology",
"Chemical kinetics",
"Ribozymes",
"Metabolism"
] |
237,207 | https://en.wikipedia.org/wiki/Ultimate%20tensile%20strength | Ultimate tensile strength (also called UTS, tensile strength, TS, ultimate strength or in notation) is the maximum stress that a material can withstand while being stretched or pulled before breaking. In brittle materials, the ultimate tensile strength is close to the yield point, whereas in ductile materials, the ultimate tensile strength can be higher.
The ultimate tensile strength is usually found by performing a tensile test and recording the engineering stress versus strain. The highest point of the stress–strain curve is the ultimate tensile strength and has units of stress. The equivalent point for the case of compression, instead of tension, is called the compressive strength.
Tensile strengths are rarely of any consequence in the design of ductile members, but they are important with brittle members. They are tabulated for common materials such as alloys, composite materials, ceramics, plastics, and wood.
Definition
The ultimate tensile strength of a material is an intensive property; therefore its value does not depend on the size of the test specimen. However, depending on the material, it may be dependent on other factors, such as the preparation of the specimen, the presence or otherwise of surface defects, and the temperature of the test environment and material.
Some materials break very sharply, without plastic deformation, in what is called a brittle failure. Others, which are more ductile, including most metals, experience some plastic deformation and possibly necking before fracture.
Tensile strength is defined as a stress, which is measured as force per unit area. For some non-homogeneous materials (or for assembled components) it can be reported just as a force or as a force per unit width. In the International System of Units (SI), the unit is the pascal (Pa) (or a multiple thereof, often megapascals (MPa), using the SI prefix mega); or, equivalently to pascals, newtons per square metre (N/m2). A United States customary unit is pounds per square inch (lb/in2 or psi). Kilopounds per square inch (ksi, or sometimes kpsi) is equal to 1000 psi, and is commonly used in the United States, when measuring tensile strengths.
Ductile materials
Many materials can display linear elastic behavior, defined by a linear stress–strain relationship, as shown in figure 1 up to point 3. The elastic behavior of materials often extends into a non-linear region, represented in figure 1 by point 2 (the "yield strength"), up to which deformations are completely recoverable upon removal of the load; that is, a specimen loaded elastically in tension will elongate, but will return to its original shape and size when unloaded. Beyond this elastic region, for ductile materials, such as steel, deformations are plastic. A plastically deformed specimen does not completely return to its original size and shape when unloaded. For many applications, plastic deformation is unacceptable, and is used as the design limitation.
After the yield point, ductile metals undergo a period of strain hardening, in which the stress increases again with increasing strain, and they begin to neck, as the cross-sectional area of the specimen decreases due to plastic flow. In a sufficiently ductile material, when necking becomes substantial, it causes a reversal of the engineering stress–strain curve (curve A, figure 2); this is because the engineering stress is calculated assuming the original cross-sectional area before necking. The reversal point is the maximum stress on the engineering stress–strain curve, and the engineering stress coordinate of this point is the ultimate tensile strength, given by point 1.
Ultimate tensile strength is not used in the design of ductile static members because design practices dictate the use of the yield stress. It is, however, used for quality control, because of the ease of testing. It is also used to roughly determine material types for unknown samples.
The ultimate tensile strength is a common engineering parameter to design members made of brittle material because such materials have no yield point.
Testing
Typically, the testing involves taking a small sample with a fixed cross-sectional area, and then pulling it with a tensometer at a constant strain (change in gauge length divided by initial gauge length) rate until the sample breaks.
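The reduction of raw test data to an ultimate tensile strength can be sketched as follows; the specimen dimensions and force-extension readings below are made-up illustrative numbers, not data from any real test:

```python
import math

def engineering_stress_strain(forces_n, extensions_m, area_m2, gauge_length_m):
    """Convert force/extension readings into engineering stress (Pa) and strain,
    both referred to the original cross-section and gauge length."""
    stresses = [f / area_m2 for f in forces_n]
    strains = [dl / gauge_length_m for dl in extensions_m]
    return stresses, strains

# Hypothetical specimen: 10 mm diameter, 50 mm gauge length.
area = math.pi * (0.010 / 2) ** 2                # original cross-sectional area, m^2
forces = [0.0, 20e3, 35e3, 40e3, 38e3]           # load readings, N
extensions = [0.0, 0.05e-3, 0.5e-3, 2e-3, 4e-3]  # elongation readings, m
stress, strain = engineering_stress_strain(forces, extensions, area, 0.050)

uts_pa = max(stress)      # the ultimate tensile strength is the curve's maximum
print(f"UTS is approximately {uts_pa / 1e6:.0f} MPa")
```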
When testing some metals, indentation hardness correlates linearly with tensile strength. This important relation permits economically important nondestructive testing of bulk metal deliveries with lightweight, even portable equipment, such as hand-held Rockwell hardness testers. This practical correlation helps quality assurance in metalworking industries to extend well beyond the laboratory and universal testing machines.
Typical tensile strengths
Many of the values depend on manufacturing process and purity or composition.
Multiwalled carbon nanotubes have the highest tensile strength of any material yet measured, with one measurement of 63 GPa, still well below one theoretical value of 300 GPa. The first nanotube ropes (20 mm in length) whose tensile strength was published (in 2000) had a strength of 3.6 GPa. The density depends on the manufacturing method, and the lowest value is 0.037 or 0.55 (solid).
The strength of spider silk is highly variable. It depends on many factors including the kind of silk (every spider can produce several kinds for different purposes), the species, the age of the silk, the temperature, the humidity, the rate at which stress is applied during testing, the length of time stress is applied, and the way the silk is gathered (forced silking or natural spinning). The value shown in the table, 1,000 MPa, is roughly representative of the results from a few studies involving several different species of spider; however, specific results varied greatly.
Human hair strength varies by genetics, environmental factors, and chemical treatments.
Typical properties of annealed elements
See also
Flexural strength
Strength of materials
Tensile structure
Toughness
Failure
Tension (physics)
Young's modulus
References
Further reading
Giancoli, Douglas, Physics for Scientists & Engineers Third Edition (2000). Upper Saddle River: Prentice Hall.
T Follett, Life without metals
George E. Dieter, Mechanical Metallurgy (1988). McGraw-Hill, UK
Materials science
Elasticity (physics) | Ultimate tensile strength | [
"Physics",
"Materials_science",
"Engineering"
] | 1,268 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Elasticity (physics)",
"Deformation (mechanics)",
"Materials science",
"nan",
"Physical properties"
] |
237,213 | https://en.wikipedia.org/wiki/Quotient%20space%20%28topology%29 | In topology and related areas of mathematics, the quotient space of a topological space under a given equivalence relation is a new topological space constructed by endowing the quotient set of the original topological space with the quotient topology, that is, with the finest topology that makes continuous the canonical projection map (the function that maps points to their equivalence classes). In other words, a subset of a quotient space is open if and only if its preimage under the canonical projection map is open in the original topological space.
Intuitively speaking, the points of each equivalence class are identified or "glued together" to form a new topological space. For example, identifying the points of a sphere that belong to the same diameter produces the projective plane as a quotient space.
Definition
Let X be a topological space, and let ~ be an equivalence relation on X. The quotient set Y = X/~ is the set of equivalence classes of elements of X. The equivalence class of x is denoted [x].
The construction of Y defines a canonical surjection q : X → Y. As discussed below, q is a quotient mapping, commonly called the canonical quotient map, or canonical projection map, associated to ~.
The quotient space under ~ is the set Y equipped with the quotient topology, whose open sets are those subsets U ⊆ Y whose preimage q⁻¹(U) is open in X. In other words, U is open in the quotient topology on Y if and only if q⁻¹(U) is open in X. Similarly, a subset S ⊆ Y is closed if and only if q⁻¹(S) is closed in X.
The quotient topology is the final topology on the quotient set, with respect to the map q.
Quotient map
A map f : X → Y is a quotient map (sometimes called an identification map) if it is surjective and Y is equipped with the final topology induced by f. The latter condition admits two more-elementary formulations: a subset V ⊆ Y is open (closed) if and only if f⁻¹(V) is open (resp. closed). Every quotient map is continuous but not every continuous map is a quotient map.
Saturated sets
A subset of is called saturated (with respect to ) if it is of the form for some set which is true if and only if
The assignment establishes a one-to-one correspondence (whose inverse is ) between subsets of and saturated subsets of
With this terminology, a surjection is a quotient map if and only if for every subset of is open in if and only if is open in
In particular, open subsets of that are saturated have no impact on whether the function is a quotient map (or, indeed, continuous: a function is continuous if and only if, for every saturated such that is open in the set is open in
Indeed, if is a topology on and is any map, then the set of all that are saturated subsets of forms a topology on If is also a topological space then is a quotient map (respectively, continuous) if and only if the same is true of
Quotient space of fibers characterization
Given an equivalence relation on denote the equivalence class of a point by and let denote the set of equivalence classes. The map that sends points to their equivalence classes (that is, it is defined by for every ) is called . It is a surjective map and for all if and only if consequently, for all In particular, this shows that the set of equivalence class is exactly the set of fibers of the canonical map
If is a topological space then giving the quotient topology induced by will make it into a quotient space and make into a quotient map.
Up to a homeomorphism, this construction is representative of all quotient spaces; the precise meaning of this is now explained.
Let be a surjection between topological spaces (not yet assumed to be continuous or a quotient map) and declare for all that if and only if Then is an equivalence relation on such that for every which implies that (defined by ) is a singleton set; denote the unique element in by (so by definition, ).
The assignment defines a bijection between the fibers of and points in
Define the map as above (by ) and give the quotient topology induced by (which makes a quotient map). These maps are related by:
From this and the fact that is a quotient map, it follows that is continuous if and only if this is true of Furthermore, is a quotient map if and only if is a homeomorphism (or equivalently, if and only if both and its inverse are continuous).
Related definitions
A hereditarily quotient map is a surjective map with the property that for every subset of the codomain, the restriction of the map to the preimage of that subset is also a quotient map.
There exist quotient maps that are not hereditarily quotient.
Examples
Gluing. Topologists talk of gluing points together. If is a topological space, gluing the points and in means considering the quotient space obtained from the equivalence relation if and only if or (or ).
Consider the unit square [0,1] × [0,1] and the equivalence relation ~ generated by the requirement that all boundary points be equivalent, thus identifying all boundary points to a single equivalence class. Then the quotient space is homeomorphic to the sphere S².
Adjunction space. More generally, suppose X is a space and A is a subspace of X. One can identify all points in A to a single equivalence class and leave points outside of A equivalent only to themselves. The resulting quotient space is denoted X/A. The 2-sphere is then homeomorphic to a closed disc with its boundary identified to a single point.
Consider the set R of real numbers with the ordinary topology, and write x ~ y if and only if x − y is an integer. Then the quotient space R/~ is homeomorphic to the unit circle S¹ via the homeomorphism which sends the equivalence class of x to exp(2πix).
A generalization of the previous example is the following: Suppose a topological group G acts continuously on a space X. One can form an equivalence relation on X by saying points are equivalent if and only if they lie in the same orbit. The quotient space under this relation is called the orbit space, denoted X/G. In the previous example, G = Z acts on R by translation. The orbit space R/Z is homeomorphic to S¹.
Note: The notation R/Z is somewhat ambiguous. If Z is understood to be a group acting on R via addition, then the quotient is the circle. However, if Z is thought of as a topological subspace of R (that is identified to a single point), then the quotient is a countably infinite bouquet of circles joined at a single point.
This next example shows that it is in general not true that if is a quotient map then every convergent sequence (respectively, every convergent net) in has a lift (by ) to a convergent sequence (or convergent net) in Let and Let and let be the quotient map so that and for every The map defined by is well-defined (because ) and a homeomorphism. Let and let be any sequences (or more generally, any nets) valued in such that in Then the sequence converges to in but there does not exist any convergent lift of this sequence by the quotient map (that is, there is no sequence in that both converges to some and satisfies for every ). This counterexample can be generalized to nets by letting be any directed set, and making into a net by declaring that for any holds if and only if both (1) and (2) if then the -indexed net defined by letting equal and equal to has no lift (by ) to a convergent -indexed net in
Properties
Quotient maps are characterized among surjective maps by the following property: if is any topological space and is any function, then is continuous if and only if is continuous.
The quotient space together with the quotient map is characterized by the following universal property: if is a continuous map such that implies for all then there exists a unique continuous map such that In other words, the following diagram commutes:
One says that the map descends to the quotient to express this, that is, that it factorizes through the quotient space. The continuous maps defined on X/~ are, therefore, precisely those maps which arise from continuous maps defined on X that respect the equivalence relation (in the sense that they send equivalent elements to the same image). This criterion is copiously used when studying quotient spaces.
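As an informal numerical illustration of a map descending to the quotient, consider again the real line with x ~ y when x − y is an integer. The map below is constant on equivalence classes, so it induces a well-defined continuous map on the quotient circle; the sample points and tolerance are arbitrary:

```python
import math

def f(x):
    """A continuous map from the reals to the plane that respects the relation
    x ~ y iff x - y is an integer, and therefore descends to the quotient."""
    return (math.cos(2 * math.pi * x), math.sin(2 * math.pi * x))

# Equivalent points are sent to the same image (up to floating-point error),
# which is exactly the condition needed for the induced map to be well defined.
for x in [0.25, -1.3, 7.9]:
    for n in [-2, 0, 5]:
        ax, ay = f(x)
        bx, by = f(x + n)
        assert abs(ax - bx) < 1e-9 and abs(ay - by) < 1e-9
```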
Given a continuous surjection it is useful to have criteria by which one can determine if is a quotient map. Two sufficient criteria are that be open or closed. Note that these conditions are only sufficient, not necessary. It is easy to construct examples of quotient maps that are neither open nor closed. For topological groups, the quotient map is open.
Compatibility with other topological notions
Separation
In general, quotient spaces are ill-behaved with respect to separation axioms. The separation properties of need not be inherited by and may have separation properties not shared by
is a T1 space if and only if every equivalence class of is closed in
If the quotient map is open, then is a Hausdorff space if and only if ~ is a closed subset of the product space
Connectedness
If a space is connected or path connected, then so are all its quotient spaces.
A quotient space of a simply connected or contractible space need not share those properties.
Compactness
If a space is compact, then so are all its quotient spaces.
A quotient space of a locally compact space need not be locally compact.
Dimension
The topological dimension of a quotient space can be more (as well as less) than the dimension of the original space; space-filling curves provide such examples.
See also
Topology
Algebra
Notes
References
Theory of continuous functions
General topology
Group actions (mathematics)
Space (topology)
Topology | Quotient space (topology) | [
"Physics",
"Mathematics"
] | 1,986 | [
"General topology",
"Group actions",
"Theory of continuous functions",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Symmetry"
] |
3,415,287 | https://en.wikipedia.org/wiki/Conformal%20gravity | Conformal gravity refers to gravity theories that are invariant under conformal transformations in the Riemannian geometry sense; more accurately, they are invariant under Weyl transformations where is the metric tensor and is a function on spacetime.
Weyl-squared theories
The simplest theory in this category has the square of the Weyl tensor as the Lagrangian
where Cabcd is the Weyl tensor. This is to be contrasted with the usual Einstein–Hilbert action, where the Lagrangian is just the Ricci scalar. The equation of motion obtained by varying the metric is the vanishing of the Bach tensor,
which is built from second derivatives of the Weyl tensor together with a term involving the Ricci tensor. Conformally flat metrics are solutions of this equation.
Since these theories lead to fourth-order equations for the fluctuations around a fixed background, they are not manifestly unitary. It has therefore been generally believed that they could not be consistently quantized. This is now disputed.
Four-derivative theories
Conformal gravity is an example of a 4-derivative theory. This means that each term in the wave equation can contain up to four derivatives. There are pros and cons of 4-derivative theories. The pros are that the quantized version of the theory is more convergent and renormalisable. The cons are that there may be issues with causality. A simpler example of a 4-derivative wave equation is the scalar 4-derivative wave equation:
The solution for this in a central field of force is:
The first two terms are the same as a normal wave equation. Because this equation is a simpler approximation to conformal gravity, m corresponds to the mass of the central source. The last two terms are unique to 4-derivative wave equations. It has been suggested that small values be assigned to them to account for the galactic acceleration constant (also known as dark matter) and the dark energy constant. The solution equivalent to the Schwarzschild solution in general relativity for a spherical source for conformal gravity has a metric with:
The metric differs from that of general relativity: the term 6bc is very small, and so can be ignored. The problem is that now c is the total mass-energy of the source, while b is the integral of the density times the squared distance to the source. So this is a completely different potential from general relativity, and not just a small modification.
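As a consistency check on the radial solution of the 4-derivative scalar equation quoted above, the sketch below verifies with SymPy that each of its four terms is annihilated by the squared (spherically symmetric) Laplacian; the coefficient names are placeholders rather than values taken from the theory.

```python
# Sketch: check that phi(r) = a - 2*m/r + b*r + c*r**2 solves the 4-derivative
# (biharmonic) equation  Laplacian(Laplacian(phi)) = 0  away from the source.
# The symbols a, m, b, c are illustrative placeholders.
import sympy as sp

r = sp.symbols("r", positive=True)
a, m, b, c = sp.symbols("a m b c")

def radial_laplacian(f):
    """3D Laplacian of a spherically symmetric function f(r)."""
    return sp.diff(r**2 * sp.diff(f, r), r) / r**2

phi = a - 2 * m / r + b * r + c * r**2
print(sp.simplify(radial_laplacian(radial_laplacian(phi))))  # prints 0
```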
The main issue with conformal gravity theories, as well as any theory with higher derivatives, is the typical presence of ghosts, which point to instabilities of the quantum version of the theory, although there might be a solution to the ghost problem.
An alternative approach is to treat the gravitational constant as a symmetry-broken scalar field, in which case one would consider a small correction to Newtonian gravity like this (where the additional term is considered to be a small correction):
in which case the general solution is the same as in the Newtonian case, except that there can be an additional term:
where there is an additional component varying sinusoidally over space. The wavelength of this variation could be on the order of an atomic width. Thus there appear to be several stable potentials around a gravitational force in this model.
Conformal unification to the Standard Model
By adding a suitable gravitational term to the Standard Model action in curved spacetime, the theory develops a local conformal (Weyl) invariance. The conformal gauge is fixed by choosing a reference mass scale based on the gravitational constant. This approach generates the masses for the vector bosons and matter fields similar to the Higgs mechanism without traditional spontaneous symmetry breaking.
See also
Conformal supergravity
Hoyle–Narlikar theory of gravity
References
Further reading
Falsification of Mannheim's conformal gravity at CERN
Mannheim's rebuttal of above at arXiv.
Conformal geometry
Lagrangian mechanics
Spacetime
Theories of gravity | Conformal gravity | [
"Physics",
"Mathematics"
] | 770 | [
"Vector spaces",
"Theoretical physics",
"Theories of gravity",
"Lagrangian mechanics",
"Classical mechanics",
"Space (mathematics)",
"Theory of relativity",
"Spacetime",
"Dynamical systems"
] |
3,415,348 | https://en.wikipedia.org/wiki/Sgoldstino | A sgoldstino is any of the spin-0 superpartners of the goldstino in relativistic quantum field theories with spontaneously broken supersymmetry. The term sgoldstino was first used in 1998.
In 2016, Petersson and Torre hypothesized that a sgoldstino particle might be responsible for the 750 GeV diphoton excess observed by Large Hadron Collider experiments.
References
Supersymmetric quantum field theory
Bosons
Hypothetical elementary particles
Subatomic particles with spin 0 | Sgoldstino | [
"Physics"
] | 113 | [
"Matter",
"Supersymmetric quantum field theory",
"Unsolved problems in physics",
"Bosons",
"Subatomic particles",
"Particle physics",
"Particle physics stubs",
"Hypothetical elementary particles",
"Supersymmetry",
"Physics beyond the Standard Model",
"Symmetry"
] |
3,415,504 | https://en.wikipedia.org/wiki/Antarctic%20Impulsive%20Transient%20Antenna | The Antarctic Impulsive Transient Antenna (ANITA) experiment has been designed to study ultra-high-energy (UHE) cosmic neutrinos by detecting the radio pulses emitted by their interactions with the Antarctic ice sheet. This is to be accomplished using an array of radio antennas suspended from a helium balloon flying at a height of about 37,000 meters.
The neutrinos, with energies on the order of 10^18 eV, produce radio pulses in the ice because of the Askaryan effect. It is thought that these high-energy cosmic neutrinos result from interaction of ultra-high-energy (10^20 eV) cosmic rays with the photons of the cosmic microwave background radiation. It is thus hoped that the ANITA experiment can help to explain the origin of these cosmic rays.
Experimental time frame
ANITA-I launched from McMurdo, Antarctica in the summer of 2006–07.
The array should travel around the continent with the circumpolar winds for approximately a month before being recovered by the CSBF. Each successive mission (if funded) would be at two-year intervals. ANITA-II, a modified instrument with 40 antennas, launched from McMurdo Station in the summer of 2008–2009. ANITA-III, expected to improve sensitivity by a factor of 5–10, launched in December 2014.
ANITA-IV launched in December 2016, with a lighter overall build, tunable notch filters and an improved trigger system.
Funding
ANITA is a collaboration of multiple universities, led by UH Manoa and funded through grants by NASA and the U.S. Department of Energy.
Results
ANITA flew four times between 2006 and 2016 and set the most competitive limits on the ultrahigh-energy diffuse neutrino flux above several tens of exa-electronvolt (EeV). In addition to its constraints on the diffuse neutrino flux, each ANITA flight has observed dozens of ultrahigh-energy cosmic rays via the geomagnetic radio emission from cosmic-ray-induced extensive air showers which ANITA typically observes in reflection off the surface of the ice.
ANITA-I and ANITA-III also each detected anomalous radio signatures that were observationally consistent with upgoing extensive air showers emerging from the surface. Upgoing extensive air showers are predicted to be created by the decay of upgoing tau leptons generated via incident tau neutrinos during their propagation through the Earth. However, the angles at which these events were observed are in tension with Standard Model neutrino properties, as the Earth should strongly attenuate the neutrino flux at these steep emergence angles. A follow-up study by the IceCube experiment, which searches for neutrinos with significantly less energy than ANITA, could not detect any significant source of neutrinos from the location of these events. As of 2016, these events remain unexplained.
The fourth flight of ANITA, ANITA-IV, also detected four events that were observationally consistent with upgoing tau-induced extensive air showers. Unlike the events from ANITA-I and ANITA-III that were observed at steep angles below the horizon, the ANITA-IV events were observed very close to the horizon, where tau-induced events are most likely to occur.
Collaborators
The current ANITA collaboration team includes members from the University of Hawaii at Manoa; University of California, Los Angeles; Ohio State University; The University of Delaware; The University of Kansas; Washington University in St. Louis; the NASA Jet Propulsion Laboratory; University College London; University of Chicago; National Taiwan University; and the California Polytechnic State University.
See also
IceCube Neutrino Observatory
Radio Ice Cerenkov Experiment
Neutrino telescope
Encounters at the End of the World
References
External links
University of California article
University of Hawaii article
Science and technology in Antarctica
Neutrino astronomy
Balloon-borne experiments
Astronomical experiments in the Antarctic | Antarctic Impulsive Transient Antenna | [
"Astronomy"
] | 779 | [
"Neutrino astronomy",
"Astronomical sub-disciplines"
] |
3,418,832 | https://en.wikipedia.org/wiki/Inner%20sphere%20electron%20transfer | Inner sphere electron transfer (IS ET) or bonded electron transfer is a redox chemical reaction that proceeds via a covalent linkage—a strong electronic interaction—between the oxidant and the reductant reactants. In inner sphere electron transfer, a ligand bridges the two metal redox centers during the electron transfer event. Inner sphere reactions are inhibited by large ligands, which prevent the formation of the crucial bridged intermediate. Thus, inner sphere ET is rare in biological systems, where redox sites are often shielded by bulky proteins. Inner sphere ET is usually used to describe reactions involving transition metal complexes and most of this article is written from this perspective. However, redox centers can consist of organic groups rather than metal centers.
The bridging ligand could be virtually any entity that can convey electrons. Typically, such a ligand has more than one lone electron pair, such that it can serve as an electron donor to both the reductant and the oxidant. Common bridging ligands include the halides and the pseudohalides such as hydroxide and thiocyanate. More complex bridging ligands are also well known including oxalate, malonate, and pyrazine. Prior to ET, the bridged complex must form, and such processes are often highly reversible. Electron transfer occurs through the bridge once it is established. In some cases, the stable bridged structure may exist in the ground state; in other cases, the bridged structure may be a transiently-formed intermediate, or else as a transition state during the reaction.
The alternative to inner sphere electron transfer is outer sphere electron transfer. In any transition metal redox process, the mechanism can be assumed to be outer sphere unless the conditions of the inner sphere are met. Inner sphere electron transfer is generally enthalpically more favorable than outer sphere electron transfer due to a larger degree of interaction between the metal centers involved, however, inner sphere electron transfer is usually entropically less favorable since the two sites involved must become more ordered (come together via a bridge) than in outer sphere electron transfer.
Taube's experiment
The discoverer of the inner sphere mechanism was Henry Taube, who was awarded the Nobel Prize in Chemistry in 1983 for his pioneering studies. A particularly historic finding is summarized in the abstract of the seminal publication.
"When Co(NH3)5Cl++ is reduced by Cr++ in M [meaning 1 M] HClO4, 1 Cl− appears attached to Cr for each Cr(III) which is formed or Co(III) reduced. When the reaction is carried on in a medium containing radioactive Cl, the mixing of the Cl− attached to Cr(III) with that in solution is less than 0.5%. This experiment shows that transfer of Cl to the reducing agent from the oxidizing agent is direct…"
The paper and the excerpt above can be described with the following equation:
[CoCl(NH3)5]2+ + [Cr(H2O)6]2+ → [Co(NH3)5(H2O)]2+ + [CrCl(H2O)5]2+
The point of interest is that the chloride that was originally bonded to the cobalt, the oxidant, becomes bonded to chromium, which in its +3 oxidation state, forms kinetically inert bonds to its ligands. This observation implies the intermediacy of the bimetallic complex [Co(NH3)5(μ-Cl)Cr(H2O)5]4+, wherein "μ-Cl" indicates that the chloride bridges between the Cr and Co atoms, serving as a ligand for both. This chloride serves as a conduit for electron flow from Cr(II) to Co(III), forming Cr(III) and Co(II).
See also
Inner sphere complex
Outer sphere electron transfer
Solvated electron
References
Physical chemistry
Electron | Inner sphere electron transfer | [
"Physics",
"Chemistry"
] | 814 | [
"Electron",
"Molecular physics",
"Applied and interdisciplinary physics",
"nan",
"Physical chemistry"
] |
3,419,098 | https://en.wikipedia.org/wiki/Outer%20sphere%20electron%20transfer | Outer sphere refers to an electron transfer (ET) event that occurs between chemical species that remain separate and intact before, during, and after the ET event. In contrast, for inner sphere electron transfer the participating redox sites undergoing ET become connected by a chemical bridge. Because the ET in outer sphere electron transfer occurs between two non-connected species, the electron is forced to move through space from one redox center to the other.
Marcus theory
The main theory describing the rates of outer sphere electron transfer was developed by Rudolph A. Marcus in the 1950s, for which he was awarded the Nobel Prize in Chemistry in 1992. A major aspect of Marcus theory is the dependence of the electron transfer rate on the thermodynamic driving force (difference in the redox potentials of the electron-exchanging sites). For most reactions, the rates increase with increased driving force. A second aspect is that the rate of outer sphere electron-transfer depends inversely on the "reorganizational energy." Reorganization energy describes the changes in bond lengths and angles that are required for the oxidant and reductant to switch their oxidation states. This energy is assessed by measurements of the self-exchange rates (see below).
Outer sphere electron transfer is the most common type of electron transfer, especially in biochemistry, where redox centers are separated by several (up to about 11) angstroms by intervening protein. In biochemistry, there are two main types of outer sphere ET: ET between two separate biological molecules or fixed distance electron transfer, in which the electron transfers within a single biomolecule (e.g., intraprotein).
Examples
Self-exchange
Outer sphere electron transfer can occur between chemical species that are identical except for their oxidation state. This process is termed self-exchange. An example is the degenerate reaction between the tetrahedral ions permanganate and manganate:
[MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]−
For octahedral metal complexes, the rate constant for self-exchange reactions correlates with changes in the population of the eg orbitals, the population of which most strongly affects the length of metal-ligand bonds:
For the [Co(bipy)3]+/[Co(bipy)3]2+ pair, self exchange proceeds at 10^9 M−1s−1. In this case, the electron configuration changes from Co(I): (t2g)6(eg)2 to Co(II): (t2g)5(eg)2.
For the [Co(bipy)3]2+/[Co(bipy)3]3+ pair, self exchange proceeds at 18 M−1s−1. In this case, the electron configuration changes from Co(II): (t2g)5(eg)2 to Co(III): (t2g)6(eg)0.
Iron-sulfur proteins
Outer sphere ET is the basis of the biological function of the iron-sulfur proteins. The Fe centers are typically further coordinated by cysteinyl ligands. The [Fe4S4] electron-transfer proteins ([Fe4S4] ferredoxins) may be further subdivided into low-potential (bacterial-type) and high-potential (HiPIP) ferredoxins. Low- and high-potential ferredoxins are related by the following redox scheme:
Because of the small structural differences between the individual redox states, ET is rapid between these clusters.
See also
Inner sphere electron transfer
References
Physical chemistry
Electron | Outer sphere electron transfer | [
"Physics",
"Chemistry"
] | 749 | [
"Electron",
"Molecular physics",
"Applied and interdisciplinary physics",
"nan",
"Physical chemistry"
] |
3,419,165 | https://en.wikipedia.org/wiki/Marcus%20theory | In theoretical chemistry, Marcus theory is a theory originally developed by Rudolph A. Marcus, starting in 1956, to explain the rates of electron transfer reactions – the rate at which an electron can move or jump from one chemical species (called the electron donor) to another (called the electron acceptor). It was originally formulated to address outer sphere electron transfer reactions, in which the two chemical species only change in their charge with an electron jumping (e.g. the oxidation of an ion like Fe2+/Fe3+), but do not undergo large structural changes. It was extended to include inner sphere electron transfer contributions, in which a change of distances or geometry in the solvation or coordination shells of the two chemical species is taken into account (the Fe-O distances in [Fe(H2O)6]2+ and [Fe(H2O)6]3+ are different).
For electron transfer reactions without making or breaking bonds Marcus theory takes the place of Eyring's transition state theory which has been derived for reactions with structural changes. Both theories lead to rate equations of the same exponential form. However, whereas in Eyring theory the reaction partners become strongly coupled in the course of the reaction to form a structurally defined activated complex, in Marcus theory they are weakly coupled and retain their individuality. It is the thermally induced reorganization of the surroundings, the solvent (outer sphere) and the solvent sheath or the ligands (inner sphere) which create the geometrically favourable situation prior to and independent of the electron jump.
The original classical Marcus theory for outer sphere electron transfer reactions demonstrates the importance of the solvent and leads the way to the calculation of the Gibbs free energy of activation, using the polarization properties of the solvent, the size of the reactants, the transfer distance and the Gibbs free energy of the redox reaction. The most startling result of Marcus' theory was the "inverted region": whereas the reaction rates usually become higher with increasing exergonicity of the reaction, electron transfer should, according to Marcus theory, become slower in the very negative domain. Scientists searched the inverted region for proof of a slower electron transfer rate for 30 years until it was unequivocally verified experimentally in 1984.
R. A. Marcus received the Nobel Prize in Chemistry in 1992 for this theory. Marcus theory is used to describe a number of important processes in chemistry and biology, including photosynthesis, corrosion, certain types of chemiluminescence, charge separation in some types of solar cells and more. Besides the inner and outer sphere applications, Marcus theory has been extended to address heterogeneous electron transfer.
Outer vs inner ET
In a redox reaction an electron donor D must diffuse to the acceptor A, forming a precursor complex, which is labile but allows electron transfer to give successor complex. The pair then dissociates. For a one electron transfer the reaction is
D + A ⇌ [D⋯A] ⇌ [D+⋯A−] → D+ + A−
(forward/reverse rate constants: k12/k21 for the first step, k23/k32 for the second, and k30 for the final separation)
(D and A may already carry charges). Here k12, k21 and k30 are diffusion constants, and k23 and k32 are rate constants of activated reactions. The total reaction may be diffusion controlled (the electron transfer step is faster than diffusion, and every encounter leads to reaction) or activation controlled (the "equilibrium of association" is reached, the electron transfer step is slow, and the separation of the successor complex is fast). The ligand shells around A and D are retained. This process is called outer sphere electron transfer. Outer sphere ET is the main focus of traditional Marcus theory. The other kind of redox reaction is inner sphere, where A and D are covalently linked by a bridging ligand. Rates for such ET reactions depend on ligand exchange rates.
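A textbook steady-state treatment of this scheme (not spelled out in the article, and neglecting the back electron transfer k32) makes the two kinetic regimes explicit:

```latex
% Steady-state approximation on the precursor complex [D...A], with k32 neglected.
\[
  k_{\mathrm{obs}} \;=\; \frac{k_{12}\, k_{23}}{k_{21} + k_{23}}
  \;\approx\;
  \begin{cases}
    k_{12} & \text{if } k_{23} \gg k_{21} \quad \text{(diffusion controlled)}\\[4pt]
    \dfrac{k_{12}}{k_{21}}\, k_{23} & \text{if } k_{23} \ll k_{21} \quad \text{(activation controlled)}
  \end{cases}
\]
```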
The problem
In outer sphere redox reactions no bonds are formed or broken; only an electron transfer (ET) takes place. A quite simple example is the Fe2+/Fe3+ redox reaction, the self-exchange reaction which is known to always occur in an aqueous solution containing both aquo complexes [Fe(H2O)6]2+ and [Fe(H2O)6]3+. This self-exchange occurs with a Gibbs free energy of reaction ΔG0 = 0.
From the reaction rate's temperature dependence an activation energy is determined, and this activation energy is interpreted as the energy of the transition state in a reaction diagram. The latter is drawn, according to Arrhenius and Eyring, as an energy diagram with the reaction coordinate as the abscissa. The reaction coordinate describes the minimum energy path from the reactants to the products, and the points of this coordinate are combinations of distances and angles between and in the reactants in the course of the formation and/or cleavage of bonds. The maximum of the energy diagram, the transition state, is characterized by a specific configuration of the atoms. Moreover, in Eyring's TST a quite specific change of the nuclear coordinates is responsible for crossing the maximum point, a vibration in this direction is consequently treated as a translation.
For outer sphere redox reactions there cannot be such a reaction path, but nevertheless one does observe an activation energy. The rate equation for activation-controlled reactions has the same exponential form as the Eyring equation,
k = A exp(−ΔG‡/(kBT)).
Here ΔG‡ is the Gibbs free energy of the formation of the transition state, the exponential term represents the probability of its formation, and A contains the probability of crossing from the precursor to the successor complex.
The Marcus model
The consequence of an electron transfer is the rearrangement of charges, and this greatly influences the solvent environment. The dipolar solvent molecules rearrange in the direction of the field of the charges (this is called orientation polarization), and the atoms and electrons in the solvent molecules are also slightly displaced (atomic and electron polarization, respectively). It is this solvent polarization which determines the free energy of activation and thus the reaction rate.
Substitution, elimination and isomerization reactions differ from the outer sphere redox reaction not only in the structural changes outlined above, but also in the fact that the movements of the nuclei and the shift of charges (charge transfer, CT) on the reactions path take place in a continuous and concerted way: nuclear configurations and charge distribution are always "in equilibrium". This is illustrated by the SN2 substitution of the saponification of an alkyl halide where the rear side attack of the OH− ion pushes out a halide ion and where a transition state with a five-coordinated carbon atom must be visualized. The system of the reactants becomes coupled so tightly during the reaction that they form the activated complex as an integral entity. The solvent here has a minor effect.
By contrast, in outer sphere redox reactions the displacement of nuclei in the reactants is small; here the solvent has the dominant role. Donor-acceptor coupling is weak; both keep their identity during the reaction. Therefore, the electron, being an elementary particle, can only "jump" as a whole (electron transfer, ET). If the electron jumps, the transfer is much faster than the movement of the large solvent molecules, with the consequence that the nuclear positions of the reaction partners and the solvent molecules are the same before and after the electron jump (Franck–Condon principle). The jump of the electron is governed by quantum mechanical rules; it is only possible if the energy of the ET system also does not change "during" the jump.
The arrangement of solvent molecules depends on the charge distribution on the reactants. If the solvent configuration must be the same before and after the jump and the energy may not change, then the solvent cannot be in the solvation state of the precursor nor in that of the successor complex as they are different, it has to be somewhere in between. For the self-exchange reaction for symmetry reasons an arrangement of the solvent molecules exactly in the middle of those of precursor and successor complex would meet the conditions. This means that the solvent arrangement with half of the electron on both donor and acceptor would be the correct environment for jumping. Also, in this state the energy of precursor and successor in their solvent environment would be the same.
However, the electron as an elementary particle cannot be divided, it resides either on the donor or the acceptor and arranges the solvent molecules accordingly in an equilibrium. The "transition state", on the other hand, requires a solvent configuration which would result from the transfer of half an electron, which is impossible. This means that real charge distribution and required solvent polarization are not in an "equilibrium". Yet it is possible that the solvent takes a configuration corresponding to the "transition state", even if the electron sits on the donor or acceptor. This, however, requires energy. This energy may be provided by the thermal energy of the solvent and thermal fluctuations can produce the correct polarization state. Once this has been reached the electron can jump. The creation of the correct solvent arrangement and the electron jump are decoupled and do not happen in a synchronous process. Thus the energy of the transition state is mostly polarization energy of the solvent.
Marcus theory
The macroscopic system: two conducting spheres
On the basis of his reasoning R.A. Marcus developed a classical theory with the aim of calculating the polarization energy of the said non-equilibrium state. From thermodynamics it is well known that the energy of such a state can be determined if a reversible path to that state is found. Marcus was successful in finding such a path via two reversible charging steps for the preparation of the "transition state" from the precursor complex.
Four elements are essential for the model on which the theory is based:
Marcus employs a classical, purely electrostatic model. The charge (many elementary charges) may be transferred in any portion from one body to another.
Marcus separates the fast electron polarisation Pe and the slow atom and orientation polarisation Pu of the solvent on grounds of their time constants differing several orders of magnitude.
Marcus separates the inner sphere (reactant + tightly bound solvent molecules, in complexes + ligands) and the outer sphere (free solvent)
In this model Marcus confines himself to calculating the outer sphere energy of the non-equilibrium polarization of the "transition state". The outer sphere energy is often much larger than the inner sphere contribution because of the far reaching electrostatic forces (compare the Debye–Hückel theory of electrochemistry).
Marcus' tool is the theory of dielectric polarization in solvents. He solved the problem in a general way for a transfer of charge between two bodies of arbitrary shape with arbitrary surface and volume charge. For the self-exchange reaction, the redox pair (e.g. [Fe(H2O)6]3+ / [Fe(H2O)6]2+) is substituted by two macroscopic conducting spheres at a defined distance carrying specified charges. Between these spheres a certain amount of charge is reversibly exchanged.
In the first step the energy WI of the transfer of a specific amount of charge is calculated, e.g. for the system in a state when both spheres carry half of the amount of charge which is to be transferred. This state of the system can be reached by transferring the respective charge from the donor sphere to the vacuum and then back to the acceptor sphere. Then the spheres in this state of charge give rise to a defined electric field in the solvent which creates the total solvent polarization Pu + Pe. By the same token this polarization of the solvent interacts with the charges.
In a second step the energy WII of the reversible (back) transfer of the charge to the first sphere, again via the vacuum, is calculated. However, the atom and orientation polarization Pu is kept fixed, only the electron polarization Pe may adjust to the field of the new charge distribution and the fixed Pu. After this second step the system is in the desired state with an electron polarization corresponding to the starting point of the redox reaction and an atom and orientation polarization corresponding to the "transition state". The energy WI + WII of this state is, thermodynamically speaking, a Gibbs free energy G.
Of course, in this classical model the transfer of any arbitrary amount of charge Δe is possible. So the energy of the non-equilibrium state, and consequently the polarization energy of the solvent, can be probed as a function of Δe. Thus Marcus has lumped together, in a very elegant way, the coordinates of all solvent molecules into a single coordinate of solvent polarization Δp which is determined by the amount of transferred charge Δe. So he reached a simplification of the energy representation to only two dimensions: G = f(Δe). The result for two conducting spheres in a solvent is the formula of Marcus
G(Δe) = (Δe)² (1/(2r1) + 1/(2r2) − 1/R)(1/εopt − 1/εs),
where r1 and r2 are the radii of the spheres and R is their separation, εs and εopt are the static and high-frequency (optical) dielectric constants of the solvent, and Δe is the amount of charge transferred. The graph of G vs. Δe is a parabola (Fig. 1). In Marcus theory the energy belonging to the transfer of a unit charge (Δe = 1) is called the (outer sphere) reorganization energy λo, i.e. the energy of a state where the polarization would correspond to the transfer of a unit amount of charge, but the real charge distribution is that before the transfer. In terms of exchange direction the system is symmetric.
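A minimal numerical sketch of this formula is given below; the radii, separation and dielectric constants are illustrative assumptions (two 3.5 Å spheres in contact in water), not values from the text, and the expression is written in SI units with the Coulomb constant made explicit.

```python
# Sketch: evaluate Marcus' two-sphere expression for the outer-sphere
# reorganization energy lambda_o (the value of G at Delta e = 1).
# All numerical inputs below are assumed, illustrative values.
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
K_COULOMB = 8.9875517923e9      # 1/(4*pi*eps0), N m^2 / C^2

def lambda_outer(r1, r2, R, eps_opt, eps_s, delta_e=1.0):
    """Outer-sphere reorganization energy in joules for a charge of delta_e electrons."""
    geometric = 1.0 / (2 * r1) + 1.0 / (2 * r2) - 1.0 / R    # 1/m
    pekar = 1.0 / eps_opt - 1.0 / eps_s                       # solvent (Pekar) factor
    return (delta_e * E_CHARGE) ** 2 * K_COULOMB * geometric * pekar

ANGSTROM = 1e-10
lam = lambda_outer(3.5 * ANGSTROM, 3.5 * ANGSTROM, 7.0 * ANGSTROM,
                   eps_opt=1.78, eps_s=78.0)
print(f"lambda_o ~ {lam / E_CHARGE:.2f} eV")   # on the order of 1 eV for water
```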
The microscopic system: the donor-acceptor pair
Shrinking the two-sphere model to the molecular level creates the problem that in the self-exchange reaction the charge can no longer be transferred in arbitrary amounts, but only as a single electron. However, the polarization still is determined by the total ensemble of the solvent molecules and therefore can still be treated classically, i.e. the polarization energy is not subject to quantum limitations. Therefore, the energy of solvent reorganization can be calculated as being due to a hypothetical transfer and back transfer of a partial elementary charge according to the Marcus formula. Thus the reorganization energy for chemical redox reactions, which is a Gibbs free energy, is also a parabolic function of the Δe of this hypothetical transfer. For the self-exchange reaction, where for symmetry reasons Δe = 0.5, the Gibbs free energy of activation is ΔG(0)‡ = λo/4 (see Fig. 1 and Fig. 2, intersection of the parabolas i and f, f(0), respectively).
Up to now all was physics, now some chemistry enters. The self exchange reaction is a very specific redox reaction, most of the redox reactions are between different partners e.g.
[FeII(CN)6]4− + [IrIVCl6]2− ⇌ [FeIII(CN)6]3− + [IrIIICl6]3−
and they have positive (endergonic) or negative (exergonic) Gibbs free energies of reaction ΔG0.
As Marcus' calculations refer exclusively to the electrostatic properties in the solvent (outer sphere), λo and ΔG0 are independent of one another and therefore can simply be added up. This means that the Marcus parabolas in systems with different ΔG0 are shifted just up or down in the G vs. Δe diagram (Fig. 2). Variation of ΔG0 can be effected in experiments by offering different acceptors to the same donor.
A simple calculation of the intersection point of the parabolas i and f gives the Gibbs free energy of activation
ΔG‡ = (λo + ΔG0)² / (4λo),
where λo is the outer sphere reorganization energy and ΔG0 is the Gibbs free energy of the reaction. The intersection of those parabolas represents an activation energy and not the energy of a transition state of fixed configuration of all nuclei in the system, as is the case in the substitution and other reactions mentioned. The transition state of the latter reactions has to meet structural and energetic conditions; redox reactions have only to comply with the energy requirement. Whereas the geometry of the transition state in the other reactions is the same for all pairs of reactants, for redox pairs many polarization environments may meet the energetic conditions.
Marcus' formula shows a quadratic dependence of the Gibbs free energy of activation on the Gibbs free energy of reaction. It is general knowledge from the host of chemical experience that reactions usually are the faster the more negative ΔG0 is. In many cases even a linear free energy relation is found. According to the Marcus formula the rates increase also when the reactions are more exergonic, however only as long as ΔG0 is positive or slightly negative. It is surprising that for redox reactions according to the Marcus formula the activation energy should increase for very exergonic reactions, i.e. in the cases when ΔG0 is negative and its absolute value is greater than that of λo. This realm of Gibbs free energy of reaction is called the "Marcus inverted region". In Fig. 2 it becomes obvious that the intersection of the parabolas i and f moves upwards in the left part of the graph when ΔG0 continues to become more negative, and this means increasing activation energy. Thus the total graph of the rate constant k vs. ΔG0 should have a maximum.
The maximum of the ET rate is expected at ΔG0 = −λo. Here ΔG‡ = 0 (Fig. 2), which means that the electron may jump in the precursor complex at its equilibrium polarization. No thermal activation is necessary: the reaction is barrierless. In the inverted region the polarization corresponds to the difficult-to-imagine notion of a charge distribution where the donor has received and the acceptor given off charge. Of course, in the real world this does not happen; it is not a real charge distribution which creates this critical polarization, but the thermal fluctuation in the solvent. This polarization necessary for transfer in the inverted region can be created – with some probability – as well as any other one. The electron is just waiting for it to jump.
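A short numerical illustration (with λo = 1.0 eV assumed purely for illustration) shows the normal region, the barrierless point and the inverted region side by side:

```latex
% Illustrative numbers, lambda_o = 1.0 eV (assumed):
\[
  \Delta G^0 = -0.5~\mathrm{eV}: \quad
  \Delta G^{\ddagger} = \frac{(1.0 - 0.5)^2}{4 \cdot 1.0}~\mathrm{eV} = 0.0625~\mathrm{eV}
  \quad \text{(normal region)}
\]
\[
  \Delta G^0 = -1.0~\mathrm{eV}: \quad \Delta G^{\ddagger} = 0 \quad \text{(barrierless)}
\]
\[
  \Delta G^0 = -2.0~\mathrm{eV}: \quad
  \Delta G^{\ddagger} = \frac{(1.0 - 2.0)^2}{4 \cdot 1.0}~\mathrm{eV} = 0.25~\mathrm{eV}
  \quad \text{(inverted region)}
\]
```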
Inner sphere electron transfer
In the outer sphere model the donor or acceptor and the tightly bound solvation shells or the complex' ligands were considered to form rigid structures which do not change in the course of electron transfer. However, the distances in the inner sphere are dependent on the charge of donor and acceptor, e.g. the central ion-ligand distances are different in complexes carrying different charges, and again the Franck–Condon principle must be obeyed: for the electron jump to occur, the nuclei have to have a configuration which is identical in both the precursor and the successor complexes, and which is of course highly distorted. In this case the energy requirement is fulfilled automatically.
In this inner sphere case the Arrhenius concept holds: the transition state of definite geometric structure is reached along a geometrical reaction coordinate determined by nuclear motions. No further nuclear motion is necessary to form the successor complex, just the electron jumps, which makes a difference from TST theory. The reaction coordinate for the inner sphere energy is governed by vibrations, and these differ in the oxidized and reduced species.
For the self-exchange system Fe2+/Fe3+ only the symmetrical breathing vibration of the six water molecules around the iron ions is considered. Assuming harmonic conditions, this vibration has frequencies νD and νA, the force constants are fD and fA, and the energies are of the form E = 3f(q − q0)²,
where q0 is the equilibrium normal coordinate and q − q0 the displacement along the normal coordinate; the factor 3 stems from the 6 H2O molecules (6 × 1/2 = 3). As for the outer-sphere reorganization energy, the potential energy curve is quadratic, here, however, as a consequence of vibrations.
The equilibrium normal coordinates differ in Fe(H2O)62+ and Fe(H2O)63+. By thermal excitation of the breathing vibration a geometry can be reached which is common to both donor and acceptor, i.e. the potential energy curves of the breathing vibrations of D and A intersect here. This is the situation where the electron may jump. The energy of this transition state is the inner sphere reorganization energy λin.
For the self-exchange reaction the metal-water distance in the transition state can be calculated
This gives the inner sphere reorganisation energy
It is fortunate that the expressions for the energies of outer and inner reorganization have the same quadratic form. Inner sphere and outer sphere reorganization energies are independent, so they can be added to give λ = λin + λo and inserted into the Arrhenius equation
Here, A can be seen to represent the probability of the electron jump, exp(−λin/4kBT) that of reaching the transition state of the inner sphere, and exp(−λo/4kBT) that of the outer sphere adjustment.
For unsymmetrical (cross) reactions like
[Fe(H2O)6]2+ + [Co(H2O)6]3+ ⇌ [Fe(H2O)6]3+ + [Co(H2O)6]2+
the expression for the Gibbs free energy of activation ΔG‡ can also be derived, but it is more complicated. These reactions have a free reaction enthalpy ΔG0 which is independent of the reorganization energy and is determined by the different redox potentials of the iron and cobalt couples. Consequently, the quadratic Marcus equation holds also for the inner sphere reorganization energy, including the prediction of an inverted region. One may visualize this as follows: (a) in the normal region both the initial state and the final state have to have stretched bonds, (b) in the ΔG‡ = 0 case the equilibrium configuration of the initial state is the stretched configuration of the final state, and (c) in the inverted region the initial state has compressed bonds whereas the final state has largely stretched bonds.
Similar considerations hold for metal complexes where the ligands are larger than solvent molecules and also for ligand bridged polynuclear complexes.
The probability of the electron jump
The strength of the electronic coupling of the donor and acceptor decides whether the electron transfer reaction is adiabatic or non-adiabatic. In the non-adiabatic case the coupling is weak, i.e. HAB in Fig. 3 is small compared to the reorganization energy and donor and acceptor retain their identity. The system has a certain probability to jump from the initial to the final potential energy curves. In the adiabatic case the coupling is considerable, the gap of 2 HAB is larger and the system stays on the lower potential energy curve.
Marcus theory as laid out above represents the non-adiabatic case. Consequently, the semi-classical Landau–Zener theory can be applied, which gives the probability of interconversion of donor and acceptor for a single passage of the system through the region of the intersection of the potential energy curves,
Pif = 1 − exp(−2π Hif² / (ħ v |si − sf|)),
where Hif is the interaction energy at the intersection, v the velocity of the system through the intersection region, and si and sf the slopes there.
Working this out, one arrives at the basic equation of Marcus theory,
ket = (2π/ħ) |HAB|² (1/√(4πλkBT)) exp(−(λ + ΔG0)² / (4λkBT)),
where ket is the rate constant for electron transfer, HAB is the electronic coupling between the initial and final states, λ is the reorganization energy (both inner and outer sphere), and ΔG0 is the total Gibbs free energy change for the electron transfer reaction (kB is the Boltzmann constant and T is the absolute temperature).
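The sketch below evaluates this expression and shows the inverted region numerically; the electronic coupling, reorganization energy and the grid of ΔG0 values are assumed, illustrative numbers rather than data from the text.

```python
# Sketch: nonadiabatic Marcus rate vs. driving force, with assumed parameters.
# The rate rises, peaks near -DeltaG0 = lambda, then falls in the inverted region.
import math

HBAR = 6.582119569e-16   # reduced Planck constant, eV s
KB = 8.617333262e-5      # Boltzmann constant, eV / K

def marcus_rate(delta_g0, lam, h_ab, temperature=298.0):
    """Marcus electron-transfer rate constant (s^-1); all energies in eV."""
    kbt = KB * temperature
    prefactor = (2.0 * math.pi / HBAR) * h_ab ** 2 / math.sqrt(4.0 * math.pi * lam * kbt)
    return prefactor * math.exp(-((lam + delta_g0) ** 2) / (4.0 * lam * kbt))

lam, h_ab = 1.0, 0.01    # eV, assumed values
for dg0 in (-0.2, -0.6, -1.0, -1.4, -1.8):
    print(f"DeltaG0 = {dg0:+.1f} eV  ->  k_et = {marcus_rate(dg0, lam, h_ab):.2e} s^-1")
# The printed rates are largest at DeltaG0 = -lambda and decrease on either side.
```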
Thus Marcus's theory builds on the traditional Arrhenius equation for the rates of chemical reactions in two ways:
1. It provides a formula for the activation energy, based on a parameter called the reorganization energy, as well as the Gibbs free energy. The reorganization energy is defined as the energy required to "reorganize" the system structure from initial to final coordinates, without making the charge transfer.
2. It provides a formula for the pre-exponential factor in the Arrhenius equation, based on the electronic coupling between the initial and final state of the electron transfer reaction (i.e., the overlap of the electronic wave functions of the two states).
Experimental results
Marcus published his theory in 1956. For many years there was an intensive search for the inverted region which would be a proof of the theory. But all experiments with series of reactions of more and more negative ΔG0 revealed only an increase of the reaction rate up to the diffusion limit, i.e. to a value indicating that every encounter leads to electron transfer, and that limit held also for very negative ΔG0 values (Rehm-Weller behaviour). It took about 30 years until the inverted region was unequivocally substantiated by Miller, Calcaterra and Closs for an intramolecular electron transfer in a molecule where donor and acceptor are kept at a constant distance by means of a stiff spacer (Fig. 4).
A posteriori one may presume that in systems where the reaction partners may diffuse freely the optimum distance for the electron jump may be sought, i.e. the distance for which ΔG‡ = 0 and ΔG0 = −λo. Since λo is dependent on R, λo increases for larger R and the opening of the parabola becomes smaller. It is formally always possible to close the parabola in Fig. 2 to such an extent that the f-parabola intersects the i-parabola in the apex. Then ΔG‡ = 0 always, and the rate k reaches the maximum diffusional value for all very negative ΔG0. There are, however, other concepts for the phenomenon, e.g. the participation of excited states or that the decrease of the rate constants would be so far in the inverted region that it escapes measurement.
R. A. Marcus and his coworkers have further developed the theory outlined here in several aspects. They have included inter alia statistical aspects and quantum effects, they have applied the theory to chemiluminescence and electrode reactions. R. A. Marcus received the Nobel Prize in Chemistry in 1992, and his Nobel Lecture gives an extensive view of his work.
See also
Hammond's postulate
Solvated electron
Free-energy relationship
References
Marcus's key papers
Physical organic chemistry
Physical chemistry | Marcus theory | [
"Physics",
"Chemistry"
] | 5,350 | [
"Physical chemistry",
"nan",
"Applied and interdisciplinary physics",
"Physical organic chemistry"
] |
3,422,027 | https://en.wikipedia.org/wiki/Field%20capacity | Field capacity is the amount of soil moisture or water content held in the soil after excess water has drained away and the rate of downward movement has decreased. This usually occurs two to three days after rain or irrigation in pervious soils of uniform structure and texture. The nominal definition of field capacity (expressed symbolically as θfc) is the bulk water content retained in soil at −33 kPa (or −0.33 bar) of hydraulic head or suction pressure. The term originated with Israelsen and West, and with Frank Veihmeyer and Arthur Hendrickson.
Veihmeyer and Hendrickson realized the limitation in this measurement and commented that it is affected by so many factors that, precisely, it is not a constant (for a particular soil), yet it does serve as a practical measure of soil water-holding capacity. Field capacity improves on the concept of moisture equivalent by Lyman Briggs. Veihmeyer & Hendrickson proposed this concept as an attempt to improve water-use efficiency for farmers in California in 1949.
Field capacity is characterized by measuring water content after wetting a soil profile, covering it (to prevent evaporation), and monitoring the change in soil moisture in the profile. A relatively low rate of change indicates when macropore drainage ceases, which is called field capacity; it is also termed the drained upper limit (DUL).
Lorenzo A. Richards and Weaver found that the water content held by soil at a potential of −33 kPa (or −0.33 bar) correlates closely with field capacity (−10 kPa for sandy soils).
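For orientation, the sketch below estimates the water content at −33 kPa from a van Genuchten retention curve; both the retention model and the loam-like parameters are common textbook assumptions, not values given in the article.

```python
# Sketch: estimate theta at -33 kPa (an operational proxy for field capacity)
# from a van Genuchten water retention curve with assumed loam-like parameters.
def van_genuchten_theta(suction_kpa, theta_r, theta_s, alpha_per_kpa, n):
    """Volumetric water content at the given suction (kPa, positive)."""
    m = 1.0 - 1.0 / n
    effective_saturation = (1.0 + (alpha_per_kpa * suction_kpa) ** n) ** (-m)
    return theta_r + (theta_s - theta_r) * effective_saturation

theta_fc = van_genuchten_theta(suction_kpa=33.0, theta_r=0.078, theta_s=0.43,
                               alpha_per_kpa=0.36, n=1.56)   # assumed parameters
print(f"Estimated water content at -33 kPa: {theta_fc:.2f} m3/m3")
```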
Criticism
This concept has been criticized. Field capacity is a static measurement: in the field, it depends upon the initial water content, the depth of wetting before the commencement of redistribution, and the rate of change in water content over time. These conditions are not unique to a given soil.
See also
Available water capacity
Integral energy
Nonlimiting water range
Pedotransfer function
Permanent wilting point
Water potential
Water retention curve
References
Soil physics
Hydrology | Field capacity | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 407 | [
"Environmental engineering",
"Hydrology",
"Applied and interdisciplinary physics",
"Soil physics"
] |
3,422,049 | https://en.wikipedia.org/wiki/Moisture%20equivalent | Moisture equivalent was proposed by Lyman Briggs and McLane (1910) as a measure of field capacity for fine-textured soil materials.
Moisture equivalent is defined as the percentage of water which a soil can retain in opposition to a centrifugal force 1000 times that of gravity. It is measured by saturating a sample of soil 1 cm thick, and subjecting it to a centrifugal force of 1000 times gravity for 30 min. The gravimetric water content after this treatment is its moisture equivalent.
This concept is no longer used in soil physics, replaced by field capacity.
Lyman Briggs and Homer LeRoy Shantz (1912) found that:
Moisture Equivalent = 0.02 sand + 0.22 silt + 1.05 clay
Note: volume of water stored in root zone is equal to the depth of water in root zone (Vw=Dw)
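The Briggs–Shantz regression quoted above can be applied directly once the sand, silt and clay percentages of a soil are known; the example composition below is invented for illustration.

```python
# Sketch: the Briggs-Shantz (1912) regression for moisture equivalent,
# with texture fractions given in percent.  Example values are made up.
def moisture_equivalent(sand_pct, silt_pct, clay_pct):
    """Moisture equivalent (percent water by mass) from soil texture."""
    return 0.02 * sand_pct + 0.22 * silt_pct + 1.05 * clay_pct

print(moisture_equivalent(40, 40, 20))   # 0.8 + 8.8 + 21.0 = 30.6
```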
See also
Available water capacity
Field capacity
Nonlimiting water range
Pedotransfer function
Permanent wilting point
References
Soil physics
Equivalent units | Moisture equivalent | [
"Physics",
"Mathematics"
] | 204 | [
"Equivalent quantities",
"Applied and interdisciplinary physics",
"Quantity",
"Soil physics",
"Equivalent units",
"Units of measurement"
] |
3,422,168 | https://en.wikipedia.org/wiki/Traian%20Lalescu | Traian Lalescu (; 12 July 1882 – 15 June 1929) was a Romanian mathematician. His main focus was on integral equations and he contributed to work in the areas of functional equations, trigonometric series, mathematical physics, geometry, mechanics, algebra, and the history of mathematics.
Life
He was born in Bucharest. His father, also named Traian, was originally from Cornea, Caraș-Severin and worked as a superintendent at the Creditul Agricol Bank. Lalescu went to the Carol I High School in Craiova, continuing high school in Roman, and graduating from the Boarding High School in Iași. After entering the University of Iași, he completed his undergraduate studies in 1903 at the University of Bucharest.
He earned his Ph.D. in Mathematics from the University of Paris in 1908. His dissertation, Sur les équations de Volterra, was written under the direction of Émile Picard. That same year, he presented his work at the International Congress of Mathematicians in Rome. In 1911, he published Introduction to the Theory of Integral Equations, the first book ever on the subject of integral equations.
After returning to Romania in 1909, he first taught Mathematics at the Ion Maiorescu Gymnasium in Giurgiu. He then taught until 1912 at the Gheorghe Șincai High School and the Cantemir Vodă High School in Bucharest. From 1909 to 1910, he was a teaching assistant at the School of Bridges and Roads, in the department of graphic statistics. A year later, he was appointed full-time professor of analytical geometry, succeeding Spiru Haret; he lectured at the School (which would later become the Polytechnic University of Bucharest) until his death. In 1916, he became the first president of Sportul Studențesc, the university's football club. Also that year, he was appointed tenured professor of algebra and number theory at the University of Bucharest, a position he held until his death. In 1920, Lalescu became a professor and the inaugural rector of the Polytechnic University of Timișoara; for a year, he would commute by train for 20 hours between Timișoara and Bucharest to teach his classes. In 1921, he founded the football club Politehnica Timișoara.
His wife, Ecaterina, was a former student of his; they had four children—two sons and two daughters: Nicolae, Mariana, Florica, and Traian. She died in childbirth in 1921, at age 28. In 1920, Lalescu was elected to the Parliament of Romania as deputy for Orșova, and then re-elected twice as deputy for Caransebeș. He presented in parliament a well-received report on the budget project for 1925. In the fall of 1927, he caught double pneumonia; in 1928, he went for a vacation in Nice and for treatment in Paris, but he succumbed to the disease the next year, at age 46. In 1991, he was elected posthumously honorary member of the Romanian Academy.
The Lalescu sequence
In a 1900 issue of a mathematics journal, Lalescu proposed the study of the sequence
L_n = ((n+1)!)^(1/(n+1)) − (n!)^(1/n).
It turns out that the Lalescu sequence is decreasing and bounded below by 0, and thus converges. Its limit is given by
lim (n→∞) L_n = 1/e.
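A quick numerical check (not part of the original problem statement) confirms that the terms decrease toward 1/e; logarithms of factorials are used to avoid overflow for large n.

```python
# Sketch: numerically verify that the Lalescu sequence approaches 1/e.
import math

def lalescu_term(n):
    """L_n = ((n+1)!)**(1/(n+1)) - (n!)**(1/n), computed via log-factorials."""
    root_next = math.exp(math.lgamma(n + 2) / (n + 1))   # ((n+1)!)^(1/(n+1))
    root_curr = math.exp(math.lgamma(n + 1) / n)          # (n!)^(1/n)
    return root_next - root_curr

for n in (1, 10, 100, 10000):
    print(n, lalescu_term(n))
print("1/e =", 1 / math.e)   # the terms decrease toward 0.36787944...
```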
Legacy
There are several institutions bearing his name, including Colegiul Național de Informatică Traian Lalescu in Hunedoara and Liceul Teoretic Traian Lalescu in Reșița. There are also streets named after him in Craiova, Oradea, Reșița, and Timișoara. The National Mathematics Contest Traian Lalescu for undergraduate students is also named after him.
A statue of Lalescu, carved in 1930 by Cornel Medrea, is situated in front of the Faculty of Mechanical Engineering, in Timișoara and another statue of Lalescu is situated inside the University of Bucharest.
Work
T. Lalesco, Introduction à la théorie des équations intégrales. Avec une préface de É. Picard, Paris: A. Hermann et Fils, 1912. VII + 152 pp. JFM entry
Traian Lalescu, Introducere la teoria ecuațiilor integrale, Editura Academiei Republicii Populare Romîne, 1956. 134 pp. (A reprint of the first edition [Bucharest, 1911], with a bibliography taken from the French translation [Paris, 1912]).
References
External links
"Representative Figures of the Romanian Science and Technology"
"Traian Lalescu", from Colegiul Național de Informatică Traian Lalescu, Hunedoara
"Cine a fost Traian Lalescu?", from Liceul Teoretic Traian Lalescu, Reșița
"Monumentul lui Traian Lalescu (1930)", at infotim.ro
A Class of Applications of AM-GM Inequality (From a 2004 Putnam Competition Problem to Lalescu’s Sequence) by Wladimir G. Boskoff and Bogdan Suceava, Australian Math. Society Gazette, 33 (2006), No.1, 51-56.
1882 births
1929 deaths
Scientists from Bucharest
20th-century Romanian mathematicians
Mathematical analysts
Romanian schoolteachers
Romanian textbook writers
Rectors of Politehnica University of Timișoara
University and college founders
Academic staff of the University of Bucharest
Academic staff of the Politehnica University of Bucharest
Carol I National College alumni
Costache Negruzzi National College alumni
Alexandru Ioan Cuza University alumni
University of Bucharest alumni
University of Paris alumni
Members of the Chamber of Deputies (Romania)
Romanian expatriates in France
Deaths from pneumonia in Romania
Members of the Romanian Academy elected posthumously | Traian Lalescu | [
"Mathematics"
] | 1,165 | [
"Mathematical analysis",
"Mathematical analysts"
] |
3,422,451 | https://en.wikipedia.org/wiki/Pyramidal%20alkene | Pyramidal alkenes are alkenes in which the two carbon atoms making up the double bond are not coplanar with their four substituents. This deformation results from geometric constraints. Pyramidal alkenes are of interest chiefly because much can be learned from them about the nature of chemical bonding.
Energetics
Twisting to a 90° dihedral angle between two of the groups on the carbons requires less energy than the strength of a pi bond, and the bond still holds. The carbons of the double bond become pyramidal, which allows preserving some p orbital alignment—and hence pi bonding. The other two attached groups remain at a larger dihedral angle. This contradicts a common textbook assertion that the two carbons retain their planar nature when twisting, in which case the p orbitals would rotate enough away from each other to be unable to sustain a pi bond. In a 90°-twisted alkene, the p orbitals are only misaligned by 42° and the strain energy is only around 40 kcal/mol. In contrast, a fully broken pi bond has an energetic cost of around 65 kcal/mol.
Examples
In cycloheptene (1.1) the cis isomer is an ordinary unstrained molecule, but the seven-membered ring is too small to accommodate a trans-configured alkene group, resulting in strain and twisting of the double bond. The p-orbital misalignment is minimized by a degree of pyramidalization. In the related anti-Bredt molecules, it is not pyramidalization but twisting that dominates.
Pyramidalized cage alkenes also exist where symmetrical bending of the substituents predominates without p-orbital misalignment.
The pyramidalization angle φ (b) is defined as the angle between the plane defined by one of the doubly bonded carbons and its two substituents and the extension of the double bond and is calculated as:
the butterfly bending angle or folding angle ψ (c) is defined as the angle between two planes and can be obtained by averaging the two torsional angles R1C=CR3 and R2C=CR4.
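The pyramidalization angle can be computed directly from atomic coordinates as the angle between the extension of the C=C bond and the substituent plane; the coordinates in the sketch below are invented for illustration and do not describe any of the compounds discussed here.

```python
# Sketch: pyramidalization angle phi of one alkene carbon from 3D coordinates,
# taken as the angle between the C=C bond extension and the plane through that
# carbon and its two substituents.  Coordinates below are purely illustrative.
import numpy as np

def pyramidalization_angle(c_this, c_other, sub1, sub2):
    """Angle (degrees) between the C=C extension at c_this and the plane (c_this, sub1, sub2)."""
    plane_normal = np.cross(sub1 - c_this, sub2 - c_this)
    extension = c_this - c_other                      # bond direction extended beyond c_this
    sin_phi = abs(np.dot(extension, plane_normal)) / (
        np.linalg.norm(extension) * np.linalg.norm(plane_normal))
    return float(np.degrees(np.arcsin(sin_phi)))

c2 = np.array([0.00, 0.0, 0.0])
c1 = np.array([1.33, 0.0, 0.0])
r1 = np.array([2.00, 0.9, 0.2])    # substituents displaced slightly out of plane
r2 = np.array([2.00, -0.9, 0.2])
print(f"phi = {pyramidalization_angle(c1, c2, r1, r2):.1f} degrees")  # about 17 degrees here
```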
In alkenes 1.2 and 1.3 these angles are determined with X-ray crystallography as respectively 32.4°/22.7° and 27.3°/35.6°. Although stable, these alkenes are very reactive compared to ordinary alkenes. They are liable to dimerization, creating cyclobutane rings, or react with oxygen to give epoxides.
The compound tetradehydrodianthracene, also with a 35° pyramidalization angle, is synthesized in a photochemical cycloaddition of bromoanthracene followed by elimination of hydrogen bromide.
This compound is very reactive in Diels–Alder reactions due to through-space interactions between the two alkene groups. This enhanced reactivity enabled in turn the synthesis of the first-ever Möbius aromatic compound.
In one study, the strained alkene 4.4 was synthesized with the highest pyramidalization angles yet, 33.5° and 34.3°. This compound is the double Diels–Alder adduct of the diiodocyclophane 4.1 and anthracene 4.3, formed by reaction in the presence of potassium tert-butoxide in refluxing dibutyl ether through a diaryne intermediate 4.2. This is a stable compound but will slowly react with oxygen to form an epoxide when left standing as a chloroform solution.
In one study, isolation of a pyramidal alkene is not even possible by matrix isolation at extremely low temperatures unless stabilized by metal coordination:
A reaction of the diiodide 5.1 in Figure 5 with sodium amalgam in the presence of ethylenebis(triphenylphosphine)platinum(0) does not give the intermediate alkene 5.2 but rather the platinum-stabilized complex 5.3. The sigma bond in this compound is destroyed by reaction with ethanol.
References
Alkenes
Chemical bonding | Pyramidal alkene | [
"Physics",
"Chemistry",
"Materials_science"
] | 857 | [
"Organic compounds",
"Alkenes",
"Condensed matter physics",
"nan",
"Chemical bonding"
] |
14,286,201 | https://en.wikipedia.org/wiki/Phenylalanine%E2%80%94tRNA%20ligase | In enzymology, a phenylalanine—tRNA ligase () is an enzyme that catalyzes the chemical reaction
ATP + L-phenylalanine + tRNAPhe ⇌ AMP + diphosphate + L-phenylalanyl-tRNAPhe
The 3 substrates of this enzyme are ATP, L-phenylalanine, and tRNAPhe, whereas its 3 products are AMP, diphosphate, and L-phenylalanyl-tRNAPhe.
This enzyme belongs to the family of ligases, to be specific those forming carbon-oxygen bonds in aminoacyl-tRNA and related compounds. The systematic name of this enzyme class is L-phenylalanine:tRNAPhe ligase (AMP-forming). Other names in common use include phenylalanyl-tRNA synthetase, phenylalanyl-transfer ribonucleate synthetase, phenylalanine-tRNA synthetase, phenylalanyl-transfer RNA synthetase, phenylalanyl-tRNA ligase, phenylalanyl-transfer RNA ligase, L-phenylalanyl-tRNA synthetase, and phenylalanine translase. This enzyme participates in phenylalanine, tyrosine and tryptophan biosynthesis and aminoacyl-tRNA biosynthesis.
Phenylalanine-tRNA synthetase (PheRS) is known to be among the most complex enzymes of the aaRS (Aminoacyl-tRNA synthetase) family. Bacterial and mitochondrial PheRSs share a ferredoxin-fold anticodon binding (FDX-ACB) domain, which represents a canonical double split alpha+beta motif having no insertions. The FDX-ACB domain displays a typical RNA recognition fold (RRM) formed by the four-stranded antiparallel beta sheet, with two helices packed against it.
Structural studies
As of late 2007, 10 structures had been solved for this class of enzymes and deposited in the Protein Data Bank (PDB).
References
Further reading
Protein domains
EC 6.1.1
Enzymes of known structure | Phenylalanine—tRNA ligase | [
"Biology"
] | 477 | [
"Protein domains",
"Protein classification"
] |
14,286,331 | https://en.wikipedia.org/wiki/PDGFB | Platelet-derived growth factor subunit B is a protein that in humans is encoded by the PDGFB gene.
Function
The protein encoded by this gene is a member of the platelet-derived growth factor family. The four members of this family are mitogenic factors for cells of mesenchymal origin and are characterized by a motif of eight cysteines. This gene product can exist either as a homodimer (PDGF-BB) or as a heterodimer with the platelet-derived growth factor alpha (PDGFA) polypeptide (PDGF-AB), where the dimers are connected by disulfide bonds.
Clinical significance
Mutations in this gene are associated with meningioma. Reciprocal translocations between chromosomes 22 and 17 (at the sites where the PDGFB and COL1A1 genes are respectively located), or alternatively an abnormal small supernumerary ring chromosome, merge these two genes to form a COL1A1-PDGFB fusion gene. This fusion gene greatly overproduces PDGFB and is considered responsible for causing the development and/or progression of three closely related fibroblastic and myofibroblastic tumors of the skin: giant cell fibroblastoma, dermatofibrosarcoma protuberans, and dermatofibrosarcoma protuberans, sarcomatous.
Two splice variants have been identified for the PDGFB gene.
See also
Platelet-derived growth factor
References
Further reading
Growth factors | PDGFB | [
"Chemistry"
] | 316 | [
"Growth factors",
"Signal transduction"
] |
14,289,175 | https://en.wikipedia.org/wiki/Nonhypotenuse%20number | In mathematics, a nonhypotenuse number is a natural number whose square cannot be written as the sum of two nonzero squares. The name stems from the fact that an edge of length equal to a nonhypotenuse number cannot form the hypotenuse of a right angle triangle with integer sides.
The numbers 1, 2, 3, and 4 are all nonhypotenuse numbers. The number 5, however, is not a nonhypotenuse number, as 5² = 3² + 4².
The first fifty nonhypotenuse numbers are:
1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 14, 16, 18, 19, 21, 22, 23, 24, 27, 28, 31, 32, 33, 36, 38, 42, 43, 44, 46, 47, 48, 49, 54, 56, 57, 59, 62, 63, 64, 66, 67, 69, 71, 72, 76, 77, 79, 81, 83, 84
Although nonhypotenuse numbers are common among small integers, they become more and more sparse for larger numbers. Yet, there are infinitely many nonhypotenuse numbers, and the number of nonhypotenuse numbers not exceeding a value x scales asymptotically with x/√(log x).
The nonhypotenuse numbers are those numbers that have no prime factors of the form 4k+1. Equivalently, they are the numbers that cannot be expressed in the form K(m² + n²), where K, m, and n are all positive integers with m > n. A number whose prime factors are not all of the form 4k+1 cannot be the hypotenuse of a primitive integer right triangle (one for which the sides do not have a nontrivial common divisor), but may still be the hypotenuse of a non-primitive triangle.
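The prime-factor characterization above translates directly into a short test. The following sketch (the function name and the range printed are illustrative, not part of any standard library) uses trial division and flags a number as nonhypotenuse exactly when no prime factor is congruent to 1 modulo 4; run as written, it reproduces the opening terms of the sequence given earlier.

def is_nonhypotenuse(n: int) -> bool:
    # Return True if n has no prime factor of the form 4k + 1.
    d = 2
    while d * d <= n:
        while n % d == 0:
            if d % 4 == 1:
                return False   # found a prime factor congruent to 1 mod 4
            n //= d
        d += 1
    # whatever remains is 1 or a single prime; reject it if that prime is 4k + 1
    return n == 1 or n % 4 != 1

print([n for n in range(1, 29) if is_nonhypotenuse(n)])
# [1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 14, 16, 18, 19, 21, 22, 23, 24, 27, 28]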
The nonhypotenuse numbers have been applied to prove the existence of addition chains that compute the first square numbers using only additions.
See also
Pythagorean theorem
Landau-Ramanujan constant
Fermat's theorem on sums of two squares
References
External links
Integer sequences | Nonhypotenuse number | [
"Mathematics"
] | 447 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
14,289,209 | https://en.wikipedia.org/wiki/Ommaya%20reservoir | An Ommaya reservoir is an intraventricular catheter system that can be used for the aspiration of cerebrospinal fluid or for the delivery of drugs (e.g. chemotherapy) into the cerebrospinal fluid. It consists of a catheter in one lateral ventricle attached to a reservoir implanted under the scalp. It is used to treat brain tumors, leukemia/lymphoma or leptomeningeal disease by intrathecal drug administration. In the palliative care of terminal cancer, an Ommaya reservoir can be inserted for intracerebroventricular injection (ICV) of morphine.
It was originally invented in 1963 by Ayub K. Ommaya, a Pakistani-American neurosurgeon.
In January 2017, researchers at the University of Texas Southwestern Medical Center used an Ommaya reservoir to measure the intracranial pressure that is regularly observed in astronauts in zero-gravity conditions.
References
Science and technology in Pakistan
Medical equipment
Drug delivery devices
Pakistani inventions
History of science and technology in Pakistan
Neurosurgical procedures | Ommaya reservoir | [
"Chemistry",
"Biology"
] | 228 | [
"Pharmacology",
"Drug delivery devices",
"Medical equipment",
"Medical technology"
] |
14,290,308 | https://en.wikipedia.org/wiki/Retinoid%20X%20receptor%20gamma | Retinoid X receptor gamma (RXR-gamma), also known as NR2B3 (nuclear receptor subfamily 2, group B, member 3) is a nuclear receptor that in humans is encoded by the RXRG gene.
Function
This gene encodes a member of the retinoid X receptor (RXR) family of nuclear receptors which are involved in mediating the antiproliferative effects of retinoic acid (RA). This receptor forms heterodimers with the retinoic acid, thyroid hormone, and vitamin D receptors, increasing both DNA binding and transcriptional function on their respective response elements. This gene is expressed at significantly lower levels in non-small cell lung cancer cells. Alternate transcriptional splice variants, encoding different isoforms, have been characterized.
See also
Retinoid X receptor
Interactions
Retinoid X receptor gamma has been shown to interact with ITGB3BP.
References
Further reading
Intracellular receptors
Transcription factors | Retinoid X receptor gamma | [
"Chemistry",
"Biology"
] | 198 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,291,150 | https://en.wikipedia.org/wiki/Unified%20Power%20Format | Unified Power Format (UPF) is the popular name of the Institute of Electrical and Electronics Engineers (IEEE) standard for specifying power intent in power optimization of electronic design automation. The IEEE 1801-2009 release of the standard was based on a donation from the Accellera organization. The current release is IEEE 1801-2018.
History
A Unified Power Format technical committee was formed by the Accellera organization, chaired by Stephen Bailey of Mentor Graphics.
As a reaction to the Power Forward Initiative the group was proposed in July 2006 and met on September 13, 2006.
It submitted its first draft in January 2007, and a version 1.0 was approved to be published on February 26, 2007.
Joe Daniels was technical editor.
Files written to this standard annotate an electric design with the power and power control intent of that design. Elements of that annotation include:
Power Supplies: supply nets, supply sets, power states
Power Control: power switches
Additional Protection: level shifters and isolation
Memory retention during times of limited power: retention strategies and supply set power states
Refinable descriptions of the potential power applied to the electronic system: power states, transitions, a set of simstate, power/ground pin type (pg_type) and function attributes of nets, and the -update argument to support the progressive refinement of the power intent.
The standard describes extensions to the Tool Command Language (Tcl): commands and arguments for annotating a design hierarchy which has been read into a tool.
Semantics for inferring additional elements in the design from the intent are provided in the standard.
Digital designers, IP Block providers, Physical Designers, and Verification engineers make use of this standard language to communicate their design intent and implementation with respect to the variable power of an electronic system.
The Design Automation Standards Committee (DASC) of the IEEE Standards Association sponsored working group 1801, with the project authorization approved on May 7, 2007.
Goals included:
clarify the semantics of the intent - this provides portability of design intent across many vendors tools
Add support for incremental refinement - Platinum source (constraints) from IP vendors, Golden source (configuration) from IP integrators, and Silicon source (implementation choices) from those that realize the instantiations.
Add support for bottom up and top down design
add documentation of the support for wildcard and regular expression selection of design instances
clarify the differences between ports and pins
provide for convergence capability from both UPF and Common Power Format of the Silicon Integration Initiative (Si2)
The IEEE group was initially called the "Low Power Study Group". Proposed standards have the letter "P" in front of them (such as P1801), which is removed and replaced with a dash and year when the standard is ratified.
Accellera's UPF 1.0 was donated to the IEEE as a basis of this standard in June 2006.
After reviewing 14 drafts, on March 27, 2009, the "Standard for Design and Verification of Low Power Integrated Circuits" was published as IEEE Std 1801-2009. It is sometimes called UPF 2.0.
Bailey was also chairman of the IEEE group.
Another notable supporter of the standard was Synopsys.
A follow-on project planned to develop a list of frequently asked questions (FAQ) about the specification.
References
External links
IEEE 1801-2018 - free download of the standard.
Power standards
IEEE DASC standards
Electronics standards | Unified Power Format | [
"Engineering"
] | 694 | [
"Electrical engineering",
"Power standards"
] |
14,291,494 | https://en.wikipedia.org/wiki/EPAS1 | Endothelial PAS domain-containing protein 1 (EPAS1, also known as hypoxia-inducible factor-2alpha (HIF-2α)) is a protein that is encoded by the EPAS1 gene in mammals. It is a type of hypoxia-inducible factor, a group of transcription factors involved in the physiological response to oxygen concentration. The gene is active under hypoxic conditions. It is also important in the development of the heart, and for maintaining the catecholamine balance required for protection of the heart. Mutation often leads to neuroendocrine tumors.
However, several characterized alleles of EPAS1 contribute to high-altitude adaptation in humans. One such allele, which has been inherited from Denisovan archaic hominins, is known to confer increased athletic performance in some people, and has therefore been referred to as the "super athlete gene".
Function
The EPAS1 gene encodes one subunit of a transcription factor involved in the induction of genes regulated by oxygen, and which is induced as oxygen concentration falls (hypoxia). The protein contains a basic helix-loop-helix protein dimerization domain as well as a domain found in signal transduction proteins which respond to oxygen levels. EPAS1 is involved in the development of the embryonic heart and is expressed in endothelial cells that line the walls of blood vessels in the umbilical cord.
EPAS1 is also essential for the maintenance of catecholamine homeostasis and protection against heart failure during early embryonic development. Catecholamines regulated by EPAS1 include epinephrine and norepinephrine. It is critical that the production of catecholamines remain in homeostatic conditions so that both the delicate fetal heart and the adult heart do not overexert themselves and induce heart failure. Catecholamine production in the embryo is related to control of cardiac output by increasing the fetal heart rate.
Alleles
A high percentage of Tibetans carry an allele of EPAS1 that improves oxygen transport. The beneficial allele is also found in the extinct Denisovan genome, suggesting that it arose in them and entered the modern human population through hybridization.
The Himalayan wolf and the Tibetan mastiff have inherited an altitude-adaptive allele of the gene from interbreeding with a ghost population of an unknown wolf-like canid. The EPAS1 allele is known to confer an adaptive advantage to animals living at high-altitudes.
Clinical significance
Mutations in the EPAS1 gene are related to early-onset neuroendocrine tumors such as paragangliomas, somatostatinomas and/or pheochromocytomas. The mutations are commonly somatic missense mutations that locate in the primary hydroxylation site of HIF-2α, which disrupt the protein hydroxylation/degradation mechanism, and leads to protein stabilization and pseudohypoxic signaling. In addition, these neuroendocrine tumors release erythropoietin (EPO) into circulating blood, and lead to polycythemia.
Mutations in this gene are associated with erythrocytosis familial type 4, pulmonary hypertension, and chronic mountain sickness. There is also evidence that certain variants of this gene provide protection for people living at high altitude such as in Tibet. The effect is most profound among the Tibetans living in the Himalayas at an altitude of about 4,000 metres above sea level, the environment of which is intolerable to other human populations due to 40% less atmospheric oxygen.
A study by UC Berkeley identified more than 30 genetic factors that make Tibetans' bodies well-suited for high-altitudes, including EPAS1. Tibetans suffer no health problems associated with altitude sickness, but instead produce low levels of blood pigment (haemoglobin) sufficient for less oxygen, more elaborate blood vessels, have lower infant mortality, and are heavier at birth.
EPAS1 is useful in high altitudes as a short term adaptive response. However, EPAS1 can also cause excessive production of red blood cells leading to chronic mountain sickness that can lead to death and inhibited reproductive abilities. Some mutations that increase its expression are associated with increased hypertension and stroke at low altitude, with symptoms similar to mountain sickness. Populations living permanently at high altitudes experience selection on EPAS1 for mutations which reduce the negative fitness consequences of excessive red blood cell production.
Interactions
EPAS1 has been shown to interact with aryl hydrocarbon receptor nuclear translocator and ARNTL.
References
Further reading
External links
Transcription factors
PAS-domain-containing proteins | EPAS1 | [
"Chemistry",
"Biology"
] | 959 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
19,347,878 | https://en.wikipedia.org/wiki/Liquid-hydrogen%20trailer | A liquid-hydrogen trailer is a trailer designed to carry cryogenic liquid hydrogen (LH2) on roads being pulled by a powered vehicle. The largest such vehicles are similar to railroad tanktainers which are also designed to carry liquefied loads. Liquid-hydrogen trailers tend to be large; they are insulated. Some are semi-trailers.
History
The U-1 semi-trailer was a liquid-hydrogen trailer designed in the 1950s to carry cryogenic liquid hydrogen (LH2) on roads being pulled by a powered vehicle. It was constructed by the Cambridge Corporation and had a capacity of with a hydrogen loss rate of approximately 2 percent per day. The U-1 was a single-axle semi-trailer. The specifications for its successor the U-2, a double axle semi-trailer, were issued on 15 March 1957.
Size and volume
Liquid hydrogen trailers are referenced by their size or volume capacity. Liquid-hydrogen trailers typically have capacities ranging from gross volume.
See also
Compressed-hydrogen tube trailer
Hydrogen economy
Hydrogen infrastructure
Liquid-hydrogen tank car
Liquid-hydrogen tanktainer
Trailer (vehicle)
References
External links
Liquid Hydrogen Transport by Truck
Trailers
Hydrogen infrastructure
Industrial gases
Cryogenics | Liquid-hydrogen trailer | [
"Physics",
"Chemistry"
] | 241 | [
"Chemical process engineering",
"Applied and interdisciplinary physics",
"Cryogenics",
"Industrial gases"
] |
19,348,382 | https://en.wikipedia.org/wiki/Microcosm%20%28experimental%20ecosystem%29 | Microcosms are artificial, simplified ecosystems that are used to simulate and predict the behaviour of natural ecosystems under controlled conditions. Open or closed microcosms provide an experimental area for ecologists to study natural ecological processes. Microcosm studies can be very useful to study the effects of disturbance or to determine the ecological role of key species. A Winogradsky column is an example of a microbial microcosm.
See also
Closed ecological system
Ecologist Howard T. Odum was a pioneer in his use of small closed and open ecosystems in classroom teaching.
Biosphere 2 - Controversial project with a 1.27 ha artificial closed ecological system in Oracle, Arizona (USA).
References
Ecological Microcosms
Ecosystems
Biological systems | Microcosm (experimental ecosystem) | [
"Biology"
] | 147 | [
"Symbiosis",
"Ecosystems",
"nan"
] |
19,349,420 | https://en.wikipedia.org/wiki/Apsidal%20precession | In celestial mechanics, apsidal precession (or apsidal advance) is the precession (gradual rotation) of the line connecting the apsides (line of apsides) of an astronomical body's orbit. The apsides are the orbital points farthest (apoapsis) and closest (periapsis) from its primary body. The apsidal precession is the first time derivative of the argument of periapsis, one of the six main orbital elements of an orbit. Apsidal precession is considered positive when the orbit's axis rotates in the same direction as the orbital motion. An apsidal period is the time interval required for an orbit to precess through 360°, which takes the Earth about 112,000 years and the Moon about 8.85 years.
History
The ancient Greek astronomer Hipparchus noted the apsidal precession of the Moon's orbit (as the revolution of the Moon's apogee with a period of approximately 8.85 years); it is corrected for in the Antikythera Mechanism (circa 80 BCE) (with the supposed value of 8.88 years per full cycle, correct to within 0.34% of current measurements). The precession of the solar apsides (as a motion distinct from the precession of the equinoxes), was first quantified in the second century by Ptolemy of Alexandria. He also calculated the effect of precession on movement of the heavenly bodies. The apsidal precessions of the Earth and other planets are the result of a plethora of phenomena, of which a part remained difficult to account for until the 20th century when the last unidentified part of Mercury's precession was precisely explained.
Calculation
A variety of factors can lead to periastron precession such as general relativity, stellar quadrupole moments, mutual star–planet tidal deformations, and perturbations from other planets.
For Mercury, the perihelion precession rate due to general relativistic effects is 43″ (arcseconds) per century. By comparison, the precession due to perturbations from the other planets in the Solar System is 532″ per century, whereas the oblateness of the Sun (quadrupole moment) causes a negligible contribution of 0.025″ per century.
From classical mechanics, if stars and planets are considered to be purely spherical masses, then they will obey a simple inverse-square law, relating force to distance and hence execute closed elliptical orbits according to Bertrand's theorem. Non-spherical mass effects are caused by the application of external potential(s): the centrifugal potential of spinning bodies causes flattening between the poles and the gravity of a nearby mass raises tidal bulges. Rotational and net tidal bulges create gravitational quadrupole fields () that lead to orbital precession.
Total apsidal precession for isolated very hot Jupiters is, considering only lowest order effects, and broadly in order of importance
with planetary tidal bulge being the dominant term, exceeding the effects of general relativity and the stellar quadrupole by more than an order of magnitude. The good resulting approximation of the tidal bulge is useful for understanding the interiors of such planets. For the shortest-period planets, the planetary interior induces precession of a few degrees per year. It is up to 19.9° per year for WASP-12b.
Newton's theorem of revolving orbits
Newton derived an early theorem which attempted to explain apsidal precession. This theorem is historically notable, but it was never widely used and it proposed forces which have been found not to exist, making the theorem invalid. This theorem of revolving orbits remained largely unknown and undeveloped for over three centuries until 1995. Newton proposed that variations in the angular motion of a particle can be accounted for by the addition of a force that varies as the inverse cube of distance, without affecting the radial motion of a particle. Using a forerunner of the Taylor series, Newton generalized his theorem to all force laws provided that the deviations from circular orbits are small, which is valid for most planets in the Solar System. However, his theorem did not account for the apsidal precession of the Moon without giving up the inverse-square law of Newton's law of universal gravitation. Additionally, the rate of apsidal precession calculated via Newton's theorem of revolving orbits is not as accurate as it is for newer methods such as by perturbation theory.
General relativity
An apsidal precession of the planet Mercury was noted by Urbain Le Verrier in the mid-19th century and accounted for by Einstein's general theory of relativity.
In the 1910s, several astronomers calculated the precession of perihelion according to special relativity. They typically obtained a value that is only 1/6 of the correct value, about 7″ per century.
Einstein showed that for a planet, the major semi-axis of its orbit being a, the eccentricity of the orbit e and the period of revolution T, then the apsidal precession due to relativistic effects, during one period of revolution in radians, is
ε = 24π³a² / (T²c²(1 − e²))
where c is the speed of light. In the case of Mercury, half of the greater axis is about 5.79×10¹⁰ m, the eccentricity of its orbit is 0.206 and the period of revolution 87.97 days or 7.6×10⁶ s. From these and the speed of light (which is ~3.00×10⁸ m/s), it can be calculated that the apsidal precession during one period of revolution is ε = 5.03×10⁻⁷ radians (2.88×10⁻⁵ degrees or 0.104″). In one hundred years, Mercury makes approximately 415 revolutions around the Sun, and thus in that time, the apsidal perihelion due to relativistic effects is approximately 43″, which corresponds almost exactly to the previously unexplained part of the measured value.
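As a quick numerical check (the constants below are rounded textbook values rather than quantities taken from a specific ephemeris), the formula above can be evaluated directly:

import math

a = 5.79e10          # Mercury's semi-major axis in metres
e = 0.206            # orbital eccentricity, as quoted above
T = 87.97 * 86400    # orbital period of 87.97 days, in seconds
c = 2.998e8          # speed of light in m/s

eps = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))
print(math.degrees(eps) * 3600)        # ~0.104 arcseconds per revolution
print(math.degrees(eps) * 3600 * 415)  # ~43 arcseconds per century (about 415 revolutions)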
Long-term climate
Earth's apsidal precession slowly increases its argument of periapsis; it takes about 112,000 years for the ellipse to revolve once relative to the fixed stars. Earth's polar axis, and hence the solstices and equinoxes, precess with a period of about 26,000 years in relation to the fixed stars. These two forms of 'precession' combine so that it takes, on average, roughly 21,000 years for the ellipse to revolve once relative to the vernal equinox, that is, for the perihelion to return to the same date (given a calendar that tracks the seasons perfectly).
This interaction between the anomalistic and tropical cycle is important in the long-term climate variations on Earth, called the Milankovitch cycles. Milankovitch cycles are central to understanding the effects of apsidal precession. An equivalent is also known on Mars.
The figure on the right illustrates the effects of precession on the northern hemisphere seasons, relative to perihelion and aphelion. Notice that the areas swept during a specific season change through time. Orbital mechanics require that the length of the seasons be proportional to the swept areas of the seasonal quadrants, so when the orbital eccentricity is extreme, the seasons on the far side of the orbit may be substantially longer in duration.
See also
Axial precession
Nodal precession
Hypotrochoid
Rosetta (orbit)
Spirograph
Notes
Orbits
Precession | Apsidal precession | [
"Physics"
] | 1,548 | [
"Wikipedia categories named after physical quantities",
"Physical quantities",
"Precession"
] |
19,354,017 | https://en.wikipedia.org/wiki/HD%20134335 | HD 134335 is a giant star in the northern constellation of Boötes. As a sixth magnitude star, it is dimly visible to the naked eye under favorable viewing conditions. It is located at a distance of approximately 478 light years based on parallax measurements, and is drifting closer with a heliocentric radial velocity of −18 km/s. It may approach as close as in about 7.6 million years.
The stellar classification of HD 134335 is K1III, matching a K-type giant star that has exhausted the supply of hydrogen at its core and expanded. It is radiating 127 times the luminosity of the Sun from its photosphere at an effective temperature of 4,409 K.
References
External links
HR 5640
CCDM J15086 +2507
Image HD 134335
Boötes
134335
Double stars
074096
K-type giants
5640
Durchmusterung objects | HD 134335 | [
"Astronomy"
] | 192 | [
"Boötes",
"Constellations"
] |
20,530,400 | https://en.wikipedia.org/wiki/Chiral%20Lewis%20acid | Chiral Lewis acids (CLAs) are a type of Lewis acid catalyst. These acids affect the chirality of the substrate as they react with it. In such reactions, synthesis favors the formation of a specific enantiomer or diastereomer. The method is an enantioselective asymmetric synthesis reaction. Since they affect chirality, they produce optically active products from optically inactive or mixed starting materials. This type of preferential formation of one enantiomer or diastereomer over the other is formally known as asymmetric induction. In this kind of Lewis acid, the electron-accepting atom is typically a metal, such as indium, zinc, lithium, aluminium, titanium, or boron. The chiral-altering ligands employed for synthesizing these acids often have multiple Lewis basic sites (often a diol or a dinitrogen structure) that allow the formation of a ring structure involving the metal atom.
Achiral Lewis acids have been used for decades to promote the synthesis of racemic mixtures in myriad different reactions. Since the 1960s, chemists have used Chiral Lewis acids to induce enantioselective reactions. This is useful when the desired product is a specific enantiomer, as is common in drug synthesis. Common reaction types include Diels–Alder reactions, the ene reaction, [2+2] cycloaddition reactions, hydrocyanation of aldehydes, and most notably, Sharpless epoxidations.
Theory
The enantioselectivity of CLAs derives from their ability to perturb the free energy barrier along the reaction coordinate pathway that leads to either the R- or S- enantiomer. Ground state diastereomers and enantiomers are of equal energy in the ground state, and when reacted with an achiral Lewis acid, their diastereomeric intermediates, transition states, and products are also of equal energy. This leads to the production of racemic mixtures. However, when a CLA is used in the same reaction, the energetic barrier of formation of one diastereomer is less than that of another; the reaction is under kinetic control. If the difference in the energy barriers between the diastereomeric transition states is of sufficient magnitude, then a high enantiomeric excess of one isomer is observed.
Asymmetric synthesis
Diels–Alder reaction
Diels–Alder reactions occur between a conjugated diene and an alkene (commonly known as the dienophile). This cycloaddition process allows for the stereoselective formation of cyclohexene rings capable of possessing as many as four contiguous stereogenic centers.
Diels–Alder reactions can lead to the formation of a variety of structural isomers and stereoisomers. Molecular orbital theory considers that the endo transition state, instead of the exo transition state, is favored (endo addition rule). Also, augmented secondary orbital interactions have been postulated as the source of enhanced endo diastereoselection.
Usually, CLAs are employed to activate the dienophile. A typical CLA catalyst is derived from a Mg2+ center made chiral by attachment of a binol- phosphate ester. CLAs have been applied to a number of intramolecular Diels–Alder reactions.
A complex derived from diethylaluminium chloride and a “vaulted” biaryl ligand below catalyzes the enantioselective Diels–Alder reaction between cyclopentadiene and methacrolein. The chiral ligand is recovered quantitatively by silica gel chromatography.
The chiral (acyloxy) borane (CAB) complex is effective in catalyzing a number of aldehyde Diels–Alder reactions. NMR spectroscopic experiments have indicated close proximity of the aldehyde and the aryl ring. Pi stacking between the aryl group and aldehyde has been suggested as an organizational feature that imparts high enantioselectivity to the cycloaddition.
Bronsted acid-assisted chiral Lewis acid (BLA) catalyzes a number of diene-aldehyde cycloaddition reactions.
Aldol reaction
In the aldol reaction, the diastereoselectivity of the product is often dictated by the geometry of the enolate. The Zimmerman–Traxler model predicts that the Z enolate will give syn products, and that E enolates will give anti products. Reactions catalyzed by tin-based CLAs allow products to deviate from this pattern.
The transition structures for reactions with both the R and S catalyst enantiomers are:
Baylis–Hillman reaction
The Baylis–Hillman reaction is a route for C-C bond formation between an alpha, beta-unsaturated carbonyl and an aldehyde, which requires a nucleophilic catalyst, usually a tertiary amine, for a Michael-type addition and elimination. The stereoselectivity of these reactions is usually poor. Lanthanum(III)-containing CLAs have been demonstrated to improve stereoselectivity. Similarly, a chiral amine may also be used to achieve stereoselectivity.
The product obtained by the reaction using the chiral catalyst was obtained in good yield with excellent enantioselectivity.
Ene reaction
Chiral Lewis acids have proven useful in the ene reaction. When catalyzed by an achiral Lewis acid, the reaction normally provides good diastereoselectivity.
Good enantioselectivity has been observed when a chiral Lewis acid catalyst is used.
The enantioselectivity is believed to be due to the steric interactions between the methyl and phenyl group, which makes the transition structure of the iso product considerably more favorable.
Achiral Lewis acids in stereoselective synthesis
In some cases, an achiral Lewis acid may provide good stereoselectivity. Kimura et al. demonstrated the regio- and diastereoselective coupling of 1,3-dienes with aldehydes using a nickel catalyst.
References
Stereochemistry
Acids | Chiral Lewis acid | [
"Physics",
"Chemistry"
] | 1,297 | [
"Acids",
"Stereochemistry",
"Space",
"nan",
"Spacetime"
] |
20,536,158 | https://en.wikipedia.org/wiki/Aeronautical%20Engineering%20Review | Aeronautical Engineering Review was a journal published by the Institute of the Aeronautical Sciences.
History
The Institute of the Aeronautical Sciences started publishing a journal in 1933. It was titled the Journal of the Aeronautical Sciences. It became a monthly publication in 1935. The journal contained a section called "News from the Institute", which contained meeting notices, announcements, and obituaries. By 1944 this information was transferred to the Aeronautical Engineering Review.
References
Aerospace engineering journals
Defunct journals of the United States
Academic journals established in 1933 | Aeronautical Engineering Review | [
"Engineering"
] | 98 | [
"Aerospace engineering journals",
"Aerospace engineering"
] |
20,536,726 | https://en.wikipedia.org/wiki/Ocean%20dynamics | Ocean dynamics define and describe the flow of water within the oceans. Ocean temperature and motion fields can be separated into three distinct layers: mixed (surface) layer, upper ocean (above the thermocline), and deep ocean.
Ocean dynamics has traditionally been investigated by sampling from instruments in situ.
The mixed layer is nearest to the surface and can vary in thickness from 10 to 500 meters. This layer has properties such as temperature, salinity and dissolved oxygen which are uniform with depth reflecting a history of active turbulence (the atmosphere has an analogous planetary boundary layer). Turbulence is high in the mixed layer. However, it becomes zero at the base of the mixed layer. Turbulence again increases below the base of the mixed layer due to shear instabilities. At extratropical latitudes this layer is deepest in late winter as a result of surface cooling and winter storms and quite shallow in summer. Its dynamics is governed by turbulent mixing as well as Ekman transport, exchanges with the overlying atmosphere, and horizontal advection.
The upper ocean, characterized by warm temperatures and active motion, varies in depth from 100 m or less in the tropics and eastern oceans to in excess of 800 meters in the western subtropical oceans. This layer exchanges properties such as heat and freshwater with the atmosphere on timescales of a few years. Below the mixed layer the upper ocean is generally governed by the hydrostatic and geostrophic relationships. Exceptions include the deep tropics and coastal regions.
The deep ocean is both cold and dark with generally weak velocities (although limited areas of the deep ocean are known to have significant recirculations). The deep ocean is supplied with water from the upper ocean in only a few limited geographical regions: the subpolar North Atlantic and several sinking regions around the Antarctic. Because of the weak supply of water to the deep ocean the average residence time of water in the deep ocean is measured in hundreds of years. In this layer as well the hydrostatic and geostrophic relationships are generally valid and mixing is generally quite weak.
Primitive equations
Ocean dynamics are governed by Newton's equations of motion expressed as the Navier-Stokes equations for a fluid element located at (x,y,z) on the surface of our rotating planet and moving at velocity (u,v,w) relative to that surface:
the zonal momentum equation:
the meridional momentum equation:
the vertical momentum equation (assumes the ocean is in hydrostatic balance):
the continuity equation (assumes the ocean is incompressible):
the temperature equation:
the salinity equation:
Here "u" is zonal velocity, "v" is meridional velocity, "w" is vertical velocity, "p" is pressure, "ρ" is density, "T" is temperature, "S" is salinity, "g" is acceleration due to gravity, "τ" is wind stress, and "f" is the Coriolis parameter. "Q" is the heat input to the ocean, while "P-E" is the freshwater input to the ocean.
Mixed layer dynamics
Mixed layer dynamics are quite complicated; however, in some regions some simplifications are possible. The wind-driven horizontal transport in the mixed layer is approximately described by Ekman Layer dynamics in which vertical diffusion of momentum balances the Coriolis effect and wind stress. This Ekman transport is superimposed on geostrophic flow associated with horizontal gradients of density.
Upper ocean dynamics
Horizontal convergences and divergences within the mixed layer due, for example, to Ekman transport convergence imposes a requirement that ocean below the mixed layer must move fluid particles vertically. But one of the implications of the geostrophic relationship is that the magnitude of horizontal motion must greatly exceed the magnitude of vertical motion. Thus the weak vertical velocities associated with Ekman transport convergence (measured in meters per day) cause horizontal motion with speeds of 10 centimeters per second or more. The mathematical relationship between vertical and horizontal velocities can be derived by expressing the idea of conservation of angular momentum for a fluid on a rotating sphere. This relationship (with a couple of additional approximations) is known to oceanographers as the Sverdrup relation. Among its implications is the result that the horizontal convergence of Ekman transport observed to occur in the subtropical North Atlantic and Pacific forces southward flow throughout the interior of these two oceans. Western boundary currents (the Gulf Stream and Kuroshio) exist in order to return water to higher latitude.
References
Ocean currents
Fluid dynamics
Marine energy
Water waves
Oceanographical terminology | Ocean dynamics | [
"Physics",
"Chemistry",
"Engineering"
] | 942 | [
"Ocean currents",
"Physical phenomena",
"Water waves",
"Chemical engineering",
"Waves",
"Piping",
"Fluid dynamics"
] |
735,430 | https://en.wikipedia.org/wiki/Timestamp | A timestamp is a sequence of characters or encoded information identifying when a certain event occurred, usually giving date and time of day, sometimes accurate to a small fraction of a second. Timestamps do not have to be based on some absolute notion of time, however. They can have any epoch, can be relative to any arbitrary time, such as the power-on time of a system, or to some arbitrary time in the past.
A distinction is sometimes made between the terms datestamp, timestamp and date-timestamp:
Datestamp or DS: A date, for example -- according to ISO 8601
Timestamp or TS: A time of day, for example :: using 24-hour clock
Date-timestamp or DTS: Date and time, for example --, ::
History
The term "timestamp" derives from rubber stamps used in offices to stamp the current date, and sometimes time, in ink on paper documents, to record when the document was received. Common examples of this type of timestamp are a postmark on a letter or the "in" and "out" times on a time card.
With the advent of digital data systems, the term has expanded to refer to digital date and time information attached to digital data. For example, computer files contain timestamps that tell when the file was last modified, and digital cameras add timestamps to the pictures they take, recording the date and time the picture was taken.
Digital timestamps
This data is usually presented in a consistent format, allowing for easy comparison of two different records and tracking progress over time; the practice of recording timestamps in a consistent manner along with the actual data is called timestamping.
Timestamps are typically used for logging events or in a sequence of events (SOE), in which case each event in the log or SOE is marked with a timestamp.
Practically all computer file systems store one or more timestamps in the per-file metadata.
In particular, most modern operating systems support the POSIX stat (system call), so each file has three timestamps associated with it:
time of last access (atime: ls -lu),
time of last modification (mtime: ls -l), and
time of last status change (ctime: ls -lc).
Some file archivers and some version control software, when they copy a file from some remote computer to the local computer, adjust the timestamps of the local file to show the date/time in the past when that file was created or modified on that remote computer, rather than the date/time when that file was copied to the local computer.
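A minimal sketch of reading these per-file timestamps with Python's standard library (the path "example.txt" is a placeholder):

import os
from datetime import datetime, timezone

st = os.stat("example.txt")   # per-file metadata, including the three POSIX timestamps
for label, ts in (("atime", st.st_atime), ("mtime", st.st_mtime), ("ctime", st.st_ctime)):
    # render each timestamp in ISO 8601 form, in UTC
    print(label, datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())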
Recorded timestamps are often dirty, that is, inaccurate or inconsistent. Without cleaning up inaccurate timestamps, time-related applications such as provenance analysis or pattern queries are not reliable. To evaluate the correctness of timestamps, temporal constraints can be applied, declaring distance limits between timestamps.
Standardization
ISO 8601 standardizes the representation of dates and times. These standard representations are often used to construct timestamp values.
Examples
Examples of date-timestamps:
Thurs 12/31/2009 1:35 p.m. — mixed-endian date, big-endian 12-hour clock
Thurs 31.12.2009 13:35 — same time as the above, different format with little-endian date and big-endian 24-hour clock
2005-10-30 T 10:45 UTC — ISO 8601 international order (big-endian), with time zone
2007-11-09 T 11:20 UTC — same format as the above, hence easy to compare and perform alphanumeric sorting
Sat Jul 23 02:16:57 2005
2009-10-31T01:48:52Z — ISO 8601
2009-10-31 01:48:52Z — "Internet time" per RFC 3339, based on ISO 8601
1256953732 — Unix time, equivalent to 2009-10-31 T 01:48:52Z
1969-07-21 T 02:56 UTC
07:38, 11 December 2012 (UTC)
1985-102 T 10:15 UTC — year 1985, day number 102, i.e., 1985 April 12
1985-W15-5 T 10:15 UTC — year 1985, week number 15, weekday 5 = 1985 April 12
20180203073000 — used in Wayback Machine memento URLs, equals 3 February 2018 07:30:00
Examples of datestamps:
2025-05-25 — ISO 8601 international representation of 2025 May 25
Examples of timestamps:
17:30:23 — time of day in an afternoon
123478382 ns — the number of nanoseconds since boot
17 minutes — an arbitrary minute counter that increments every 1 minute since its last manual "reset" event
Sequence number:
21 — a unitless counter that indicates only the relative order of events; this is event #21, which comes after 20 and before 22
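Several of the formats above describe the same instant and can be converted into one another programmatically. A small sketch using Python's standard library, applied to the Unix time value listed above:

from datetime import datetime, timezone

t = 1256953732                                  # Unix time from the examples above
dt = datetime.fromtimestamp(t, tz=timezone.utc)
print(dt.strftime("%Y-%m-%dT%H:%M:%SZ"))        # 2009-10-31T01:48:52Z (ISO 8601 / RFC 3339 style)
print(int(dt.timestamp()))                      # back to 1256953732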
See also
Advanced electronic signature
Bates numbering
Decentralized trusted timestamping on the blockchain
Linked timestamping
Timestamping (computing)
Timestamp-based concurrency control
Trusted timestamping
References
Time | Timestamp | [
"Physics",
"Mathematics"
] | 1,095 | [
"Physical quantities",
"Time",
"Quantity",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
735,611 | https://en.wikipedia.org/wiki/Network%20analysis%20%28electrical%20circuits%29 | In electrical engineering and electronics, a network is a collection of interconnected components. Network analysis is the process of finding the voltages across, and the currents through, all network components. There are many techniques for calculating these values; however, for the most part, the techniques assume linear components. Except where stated, the methods described in this article are applicable only to linear network analysis.
Definitions
Equivalent circuits
A useful procedure in network analysis is to simplify the network by reducing the number of components. This can be done by replacing physical components with other notional components that have the same effect. A particular technique might directly reduce the number of components, for instance by combining impedances in series. On the other hand, it might merely change the form into one in which the components can be reduced in a later operation. For instance, one might transform a voltage generator into a current generator using Norton's theorem in order to be able to later combine the internal resistance of the generator with a parallel impedance load.
A resistive circuit is a circuit containing only resistors, ideal current sources, and ideal voltage sources. If the sources are constant (DC) sources, the result is a DC circuit. Analysis of a circuit consists of solving for the voltages and currents present in the circuit. The solution principles outlined here also apply to phasor analysis of AC circuits.
Two circuits are said to be equivalent with respect to a pair of terminals if the voltage across the terminals and current through the terminals for one network have the same relationship as the voltage and current at the terminals of the other network.
If V1 = V2 implies i1 = i2 for all (real) values of V1, then with respect to terminals ab and xy, circuit 1 and circuit 2 are equivalent.
The above is a sufficient definition for a one-port network. For more than one port, then it must be defined that the currents and voltages between all pairs of corresponding ports must bear the same relationship. For instance, star and delta networks are effectively three port networks and hence require three simultaneous equations to fully specify their equivalence.
Impedances in series and in parallel
Some two-terminal networks of impedances can eventually be reduced to a single impedance by successive applications of impedances in series or impedances in parallel.
Impedances in series: Zeq = Z1 + Z2 + ⋯ + Zn
Impedances in parallel: 1/Zeq = 1/Z1 + 1/Z2 + ⋯ + 1/Zn
The above simplified for only two impedances in parallel: Zeq = Z1Z2 / (Z1 + Z2)
Delta-wye transformation
A network of impedances with more than two terminals cannot be reduced to a single impedance equivalent circuit. An n-terminal network can, at best, be reduced to n impedances (at worst n(n − 1)/2). For a three terminal network, the three impedances can be expressed as a three node delta (Δ) network or four node star (Y) network. These two networks are equivalent and the transformations between them are given below. A general network with an arbitrary number of nodes cannot be reduced to the minimum number of impedances using only series and parallel combinations. In general, Y-Δ and Δ-Y transformations must also be used. For some networks the extension of Y-Δ to star-polygon transformations may also be required.
For equivalence, the impedances between any pair of terminals must be the same for both networks, resulting in a set of three simultaneous equations. The equations below are expressed as resistances but apply equally to the general case with impedances.
Delta-to-star transformation equations
With Rab, Rbc and Rca denoting the delta impedances between the corresponding terminal pairs, and Ra, Rb and Rc the star impedances connected to terminals a, b and c:
Ra = Rab·Rca / (Rab + Rbc + Rca),  Rb = Rab·Rbc / (Rab + Rbc + Rca),  Rc = Rbc·Rca / (Rab + Rbc + Rca)
Star-to-delta transformation equations
Rab = (Ra·Rb + Rb·Rc + Rc·Ra) / Rc,  Rbc = (Ra·Rb + Rb·Rc + Rc·Ra) / Ra,  Rca = (Ra·Rb + Rb·Rc + Rc·Ra) / Rb
General form of network node elimination
The star-to-delta and series-resistor transformations are special cases of the general resistor network node elimination algorithm. Any node connected by N resistors (R1 … RN) to nodes 1 … N can be replaced by N(N − 1)/2 resistors interconnecting the remaining N nodes. The resistance between any two nodes x and y is given by:
Rxy = Rx·Ry·(1/R1 + 1/R2 + ⋯ + 1/RN)
For a star-to-delta (N = 3) this reduces to:
Rxy = (Rx·Ry + Ry·Rz + Rz·Rx) / Rz
For a series reduction (N = 2) this reduces to:
Rxy = Rx + Ry
For a dangling resistor (N = 1) it results in the elimination of the resistor because N(N − 1)/2 = 0.
Source transformation
A generator with an internal impedance (i.e. non-ideal generator) can be represented as either an ideal voltage generator or an ideal current generator plus the impedance. These two forms are equivalent and the transformations are given below. If the two networks are equivalent with respect to terminals ab, then Vab and Iab must be identical for both networks. Thus,
Vs = Z·Is
or equivalently
Is = Vs / Z
Norton's theorem states that any two-terminal linear network can be reduced to an ideal current generator and a parallel impedance.
Thévenin's theorem states that any two-terminal linear network can be reduced to an ideal voltage generator plus a series impedance.
Simple networks
Some very simple networks can be analysed without the need to apply the more systematic approaches.
Voltage division of series components
Consider n impedances that are connected in series. The voltage across any impedance Zi is
Vi = Zi·V / (Z1 + Z2 + ⋯ + Zn)
where V is the total voltage across the series combination.
Current division of parallel components
Consider n admittances that are connected in parallel. The current through any admittance Yi is
Ii = Yi·I / (Y1 + Y2 + ⋯ + Yn)
for i = 1, 2, …, n, where I is the total current entering the parallel combination.
Special case: Current division of two parallel components
I1 = Z2·I / (Z1 + Z2),  I2 = Z1·I / (Z1 + Z2)
Nodal analysis
Nodal analysis uses the concept of a node voltage and considers the node voltages to be the unknown variables. For all nodes, except a chosen reference node, the node voltage is defined as the voltage drop from the node to the reference node. Therefore, there are N-1 node voltages for a circuit with N nodes.
In principle, nodal analysis uses Kirchhoff's current law (KCL) at N-1 nodes to get N-1 independent equations. Since equations generated with KCL are in terms of currents going in and out of nodes, these currents, if their values are not known, need to be represented by the unknown variables (node voltages). For some elements (such as resistors and capacitors) getting the element currents in terms of node voltages is trivial.
For some common elements where this is not possible, specialized methods are developed. For example, a concept called supernode is used for circuits with independent voltage sources.
Label all nodes in the circuit. Arbitrarily select any node as reference.
Define a voltage variable from every remaining node to the reference. These voltage variables must be defined as voltage rises with respect to the reference node.
Write a KCL equation for every node except the reference.
Solve the resulting system of equations.
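A minimal numerical sketch of these steps for a hypothetical circuit, with component values chosen only for illustration: a 1 A current source feeds node 1, R1 = 100 Ω runs from node 1 to the reference, R2 = 200 Ω connects nodes 1 and 2, and R3 = 300 Ω runs from node 2 to the reference.

import numpy as np

R1, R2, R3 = 100.0, 200.0, 300.0
# KCL at nodes 1 and 2, written as G @ V = I with the node voltages as unknowns
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])
I = np.array([1.0, 0.0])        # currents injected into nodes 1 and 2
V = np.linalg.solve(G, I)       # node voltages relative to the reference
print(V)                        # approximately [83.33, 50.0] volts

Solving the two KCL equations simultaneously is the final step of the procedure; for larger networks only the size of G and I changes.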
Mesh analysis
Mesh — a loop that does not contain an inner loop.
Count the number of “window panes” in the circuit. Assign a mesh current to each window pane.
Write a KVL equation for every mesh whose current is unknown.
Solve the resulting equations
Superposition
In this method, the effect of each generator in turn is calculated. All the generators other than the one being considered are removed and either short-circuited in the case of voltage generators or open-circuited in the case of current generators. The total current through or the total voltage across a particular branch is then calculated by summing all the individual currents or voltages.
There is an underlying assumption to this method that the total current or voltage is a linear superposition of its parts. Therefore, the method cannot be used if non-linear components are present. Superposition of powers cannot be used to find total power consumed by elements even in linear circuits. Power varies according to the square of total voltage or current and the square of the sum is not generally equal to the sum of the squares. Total power in an element can be found by applying superposition to the voltages and current independently and then calculating power from the total voltage and current.
Choice of method
Choice of method is to some extent a matter of taste. If the network is particularly simple or only a specific current or voltage is required then ad-hoc application of some simple equivalent circuits may yield the answer without recourse to the more systematic methods.
Nodal analysis: The number of voltage variables, and hence simultaneous equations to solve, equals the number of nodes minus one. Every voltage source connected to the reference node reduces the number of unknowns and equations by one.
Mesh analysis: The number of current variables, and hence simultaneous equations to solve, equals the number of meshes. Every current source in a mesh reduces the number of unknowns by one. Mesh analysis can only be used with networks which can be drawn as a planar network, that is, with no crossing components.
Superposition is possibly the most conceptually simple method but rapidly leads to a large number of equations and messy impedance combinations as the network becomes larger.
Effective medium approximations: For a network consisting of a high density of random resistors, an exact solution for each individual element may be impractical or impossible. Instead, the effective resistance and current distribution properties can be modelled in terms of graph measures and geometrical properties of networks.
Transfer function
A transfer function expresses the relationship between an input and an output of a network. For resistive networks, this will always be a simple real number or an expression which boils down to a real number. Resistive networks are represented by a system of simultaneous algebraic equations. However, in the general case of linear networks, the network is represented by a system of simultaneous linear differential equations. In network analysis, rather than use the differential equations directly, it is usual practice to carry out a Laplace transform on them first and then express the result in terms of the Laplace parameter s, which in general is complex. This is described as working in the s-domain. Working with the equations directly would be described as working in the time (or t) domain because the results would be expressed as time varying quantities. The Laplace transform is the mathematical method of transforming between the s-domain and the t-domain.
This approach is standard in control theory and is useful for determining stability of a system, for instance, in an amplifier with feedback.
Two terminal component transfer functions
For two terminal components the transfer function, or more generally for non-linear elements, the constitutive equation, is the relationship between the current input to the device and the resulting voltage across it. The transfer function, Z(s), will thus have units of impedance, ohms. For the three passive components found in electrical networks, the transfer functions are, for the resistor Z(s) = R, for the inductor Z(s) = sL, and for the capacitor Z(s) = 1/(sC).
For a network to which only steady ac signals are applied, s is replaced with jω and the more familiar values from ac network theory result.
Finally, for a network to which only steady dc is applied, s is replaced with zero and dc network theory applies.
Two port network transfer function
Transfer functions, in general, in control theory are given the symbol H(s). Most commonly in electronics, transfer function is defined as the ratio of output voltage to input voltage and given the symbol A(s), or more commonly (because analysis is invariably done in terms of sine wave response), A(jω), so that
A(jω) = Vout(jω) / Vin(jω)
The A standing for attenuation, or amplification, depending on context. In general, this will be a complex function of jω, which can be derived from an analysis of the impedances in the network and their individual transfer functions. Sometimes the analyst is only interested in the magnitude of the gain and not the phase angle. In this case the complex numbers can be eliminated from the transfer function and it might then be written as
A(ω) = |Vout(ω) / Vin(ω)|
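As a worked illustration (the circuit is a generic example, not one discussed elsewhere in this article), a series resistor R feeding a shunt capacitor C, with the output taken across the capacitor, has

A(j\omega) = \frac{V_{out}}{V_{in}} = \frac{1/(j\omega C)}{R + 1/(j\omega C)} = \frac{1}{1 + j\omega RC},
\qquad |A(j\omega)| = \frac{1}{\sqrt{1 + (\omega RC)^2}}

so the magnitude of the gain falls to 1/√2 of its low-frequency value at ω = 1/RC.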
Two port parameters
The concept of a two-port network can be useful in network analysis as a black box approach to analysis. The behaviour of the two-port network in a larger network can be entirely characterised without necessarily stating anything about the internal structure. However, to do this it is necessary to have more information than just the A(jω) described above. It can be shown that four such parameters are required to fully characterise the two-port network. These could be the forward transfer function, the input impedance, the reverse transfer function (i.e., the voltage appearing at the input when a voltage is applied to the output) and the output impedance. There are many others (see the main article for a full listing), one of these expresses all four parameters as impedances. It is usual to express the four parameters as a matrix;
[V1; V2] = [z11(jω), z12(jω); z21(jω), z22(jω)] · [I1; I2]
The matrix may be abbreviated to a representative element;
[z(jω)]
or just
z(jω)
These concepts are capable of being extended to networks of more than two ports. However, this is rarely done in reality because, in many practical cases, ports are considered either purely input or purely output. If reverse direction transfer functions are ignored, a multi-port network can always be decomposed into a number of two-port networks.
Distributed components
Where a network is composed of discrete components, analysis using two-port networks is a matter of choice, not essential. The network can always alternatively be analysed in terms of its individual component transfer functions. However, if a network contains distributed components, such as in the case of a transmission line, then it is not possible to analyse in terms of individual components since they do not exist. The most common approach to this is to model the line as a two-port network and characterise it using two-port parameters (or something equivalent to them). Another example of this technique is modelling the carriers crossing the base region in a high frequency transistor. The base region has to be modelled as distributed resistance and capacitance rather than lumped components.
Image analysis
Transmission lines and certain types of filter design use the image method to determine their transfer parameters. In this method, the behaviour of an infinitely long cascade connected chain of identical networks is considered. The input and output impedances and the forward and reverse transmission functions are then calculated for this infinitely long chain. Although the theoretical values so obtained can never be exactly realised in practice, in many cases they serve as a very good approximation for the behaviour of a finite chain as long as it is not too short.
Time-based network analysis with simulation
Most analysis methods calculate the voltage and current values for static networks, which are circuits consisting of memoryless components only but have difficulties with complex dynamic networks. In general, the equations that describe the behaviour of a dynamic circuit are in the form of a differential-algebraic system of equations (DAEs). DAEs are challenging to solve and the methods for doing so are not yet fully understood and developed (as of 2010). Also, there is no general theorem that guarantees solutions to DAEs will exist and be unique. In special cases, the equations of the dynamic circuit will be in the form of an ordinary differential equations (ODE), which are easier to solve, since numerical methods for solving ODEs have a rich history, dating back to the late 1800s. One strategy for adapting ODE solution methods to DAEs is called direct discretization and is the method of choice in circuit simulation.
Simulation-based methods for time-based network analysis solve a circuit that is posed as an initial value problem (IVP). That is, the values of the components with memories (for example, the voltages on capacitors and currents through inductors) are given at an initial point of time , and the analysis is done for the time . Since finding numerical results for the infinite number of time points from to is not possible, this time period is discretized into discrete time instances, and the numerical solution is found for every instance. The time between the time instances is called the time step and can be fixed throughout the whole simulation or may be adaptive.
In an IVP, when finding a solution for time tn, the solution for time tn−1 is already known. Then, temporal discretization is used to replace the derivatives with differences, such as dx/dt ≈ (x(tn) − x(tn−1)) / h for the backward Euler method, where h is the time step.
If all circuit components were linear or the circuit was linearized beforehand, the equation system at this point is a system of linear equations and is solved with numerical linear algebra methods. Otherwise, it is a nonlinear algebraic equation system and is solved with nonlinear numerical methods such as Root-finding algorithms.
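A minimal sketch of direct discretization with a fixed time step, applied to a single RC charging circuit dv/dt = (Vs − v)/(RC); the component values are hypothetical, and because the circuit is linear the implicit backward-Euler update has been rearranged for v by hand rather than with a root-finding step:

R, C, Vs = 1e3, 1e-6, 5.0      # 1 kΩ, 1 µF, 5 V source; time constant R*C = 1 ms
h = 1e-5                       # fixed time step of 10 µs
v = 0.0                        # initial capacitor voltage at t0
for n in range(500):           # march 5 ms forward, one step per iteration
    # backward Euler: v_n = v_{n-1} + h*(Vs - v_n)/(R*C), rearranged for v_n
    v = (v + h * Vs / (R * C)) / (1 + h / (R * C))
print(v)                       # approaches 5 V once t is several time constants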
Comparison to other methods
Simulation methods are much more applicable than Laplace transform based methods, such as transfer functions, which only work for simple dynamic networks with capacitors and inductors. Also, the input signals to the network cannot be arbitrarily defined for Laplace transform based methods.
Non-linear networks
Most electronic designs are, in reality, non-linear. There are very few that do not include some semiconductor devices. These are invariably non-linear, the transfer function of an ideal semiconductor p-n junction is given by the very non-linear relationship;
i = Io·(e^(v/VT) − 1)
where;
i and v are the instantaneous current and voltage.
Io is an arbitrary parameter called the reverse leakage current whose value depends on the construction of the device.
VT is a parameter proportional to temperature called the thermal voltage and equal to about 25mV at room temperature.
There are many other ways that non-linearity can appear in a network. All methods utilising linear superposition will fail when non-linear components are present. There are several options for dealing with non-linearity depending on the type of circuit and the information the analyst wishes to obtain.
Constitutive equations
The diode equation above is an example of an element constitutive equation of the general form,
f(v, i) = 0
This can be thought of as a non-linear resistor. The corresponding constitutive equations for non-linear inductors and capacitors are respectively
f(φ, i) = 0 and f(q, v) = 0
where f is any arbitrary function, φ is the stored magnetic flux and q is the stored charge.
Existence, uniqueness and stability
An important consideration in non-linear analysis is the question of uniqueness. For a network composed of linear components there will always be one, and only one, unique solution for a given set of boundary conditions. This is not always the case in non-linear circuits. For instance, a linear resistor with a fixed current applied to it has only one solution for the voltage across it. On the other hand, the non-linear tunnel diode has up to three solutions for the voltage for a given current. That is, a particular solution for the voltage across the diode is not unique; there may be others that are equally valid. In some cases there may not be a solution at all: the question of existence of solutions must be considered.
Another important consideration is the question of stability. A particular solution may exist, but it may not be stable, rapidly departing from that point at the slightest stimulation. It can be shown that a network that is absolutely stable for all conditions must have one, and only one, solution for each set of conditions.
Methods
Boolean analysis of switching networks
A switching device is one where the non-linearity is utilised to produce two opposite states. CMOS devices in digital circuits, for instance, have their output connected to either the positive or the negative supply rail and are never found at anything in between except during a transient period when the device is switching. Here the non-linearity is designed to be extreme, and the analyst can take advantage of that fact. These kinds of networks can be analysed using Boolean algebra by assigning the two states ("on"/"off", "positive"/"negative" or whatever states are being used) to the Boolean constants "0" and "1".
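As a minimal illustration of this style of analysis, the sketch below models a two-input CMOS NAND gate purely with Boolean values; the gate choice and names are arbitrary examples rather than a model of a particular device.

```python
from itertools import product

def nand(a: bool, b: bool) -> bool:
    # Idealized switching behaviour: output is low only when both inputs are high.
    return not (a and b)

# Enumerate the truth table, ignoring transients and exact voltage levels.
for a, b in product([False, True], repeat=2):
    print(f"a={int(a)} b={int(b)} -> out={int(nand(a, b))}")
```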
The transients are ignored in this analysis, along with any slight discrepancy between the state of the device and the nominal state assigned to a Boolean value. For instance, Boolean "1" may be assigned to the state of +5V. The output of the device may be +4.5V but the analyst still considers this to be Boolean "1". Device manufacturers will usually specify a range of values in their data sheets that are to be considered undefined (i.e. the result will be unpredictable).
The transients are not entirely uninteresting to the analyst. The maximum rate of switching is determined by the speed of transition from one state to the other. Happily for the analyst, for many devices most of the transition occurs in the linear portion of the device's transfer function and linear analysis can be applied to obtain at least an approximate answer.
It is mathematically possible to derive Boolean algebras that have more than two states. There is not too much use found for these in electronics, although three-state devices are passingly common.
Separation of bias and signal analyses
This technique is used where the operation of the circuit is to be essentially linear, but the devices used to implement it are non-linear. A transistor amplifier is an example of this kind of network. The essence of this technique is to separate the analysis into two parts. Firstly, the dc biases are analysed using some non-linear method. This establishes the quiescent operating point of the circuit. Secondly, the small signal characteristics of the circuit are analysed using linear network analysis. Examples of methods that can be used for both these stages are given below.
Graphical method of dc analysis
In a great many circuit designs, the dc bias is fed to a non-linear component via a resistor (or possibly a network of resistors). Since resistors are linear components, it is particularly easy to determine the quiescent operating point of the non-linear device from a graph of its transfer function. The method is as follows: from linear network analysis the output transfer function (that is output voltage against output current) is calculated for the network of resistor(s) and the generator driving them. This will be a straight line (called the load line) and can readily be superimposed on the transfer function plot of the non-linear device. The point where the lines cross is the quiescent operating point.
Perhaps the easiest practical method is to calculate the (linear) network open circuit voltage and short circuit current and plot these on the transfer function of the non-linear device. The straight line joining these two points is the transfer function of the network.
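The following sketch carries out the same construction numerically for an assumed Thevenin source and the ideal diode equation given earlier: it computes the open-circuit voltage and short-circuit current that fix the load line, then searches a voltage grid for the point where the load line meets the diode curve. All values are illustrative assumptions.

```python
import numpy as np

# Assumed bias network (Thevenin equivalent) and diode parameters.
Vth, Rth = 5.0, 1e3        # Thevenin voltage (V) and resistance (ohms)
Io, Vt = 1e-12, 0.025      # reverse leakage current (A) and thermal voltage (V)

V_oc = Vth                 # open-circuit voltage of the linear network
I_sc = Vth / Rth           # short-circuit current of the linear network
print(f"load line endpoints: ({V_oc:.1f} V, 0 mA) and (0 V, {I_sc*1e3:.1f} mA)")

# Search only below 1 V; above that the diode current already far exceeds I_sc.
v = np.linspace(0.0, 1.0, 100001)
i_line = (Vth - v) / Rth               # load line of the linear network
i_diode = Io * (np.exp(v / Vt) - 1.0)  # diode transfer function
k = np.argmin(np.abs(i_line - i_diode))
print(f"quiescent operating point: {v[k]:.3f} V, {i_diode[k]*1e3:.2f} mA")
```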
In reality, the designer of the circuit would proceed in the reverse direction to that described. Starting from a plot provided in the manufacturer's data sheet for the non-linear device, the designer would choose the desired operating point and then calculate the linear component values required to achieve it.
It is still possible to use this method if the device being biased has its bias fed through another device which is itself non-linear, a diode for instance. In this case, however, the plot of the network transfer function onto the device being biased would no longer be a straight line, and the construction is consequently more tedious to carry out.
Small signal equivalent circuit
This method can be used where the deviation of the input and output signals in a network stays within a substantially linear portion of the non-linear device's transfer function, or else is so small that the curve of the transfer function can be considered linear. Under a set of these specific conditions, the non-linear device can be represented by an equivalent linear network. It must be remembered that this equivalent circuit is entirely notional and only valid for the small signal deviations. It is entirely inapplicable to the dc biasing of the device.
For a simple two-terminal device, the small signal equivalent circuit may be no more than two components: a resistance equal to the slope of the v/i curve at the operating point (called the dynamic resistance) and tangent to the curve, and a generator, because this tangent will not, in general, pass through the origin. With more terminals, more complicated equivalent circuits are required.
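For instance, for the ideal diode equation given earlier, differentiating and assuming the quiescent current $I_Q$ is much larger than $I_o$ gives a dynamic resistance of approximately

$$r_d = \left(\frac{di}{dv}\right)^{-1} \approx \frac{V_T}{I_Q}, \qquad \text{e.g. } r_d \approx \frac{25\,\mathrm{mV}}{1\,\mathrm{mA}} = 25\,\Omega \ \text{at } I_Q = 1\,\mathrm{mA}.$$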
A popular form of specifying the small signal equivalent circuit amongst transistor manufacturers is to use the two-port network parameters known as [h] parameters. These are a matrix of four parameters as with the [z] parameters but in the case of the [h] parameters they are a hybrid mixture of impedances, admittances, current gains and voltage gains. In this model the three terminal transistor is considered to be a two port network, one of its terminals being common to both ports. The [h] parameters are quite different depending on which terminal is chosen as the common one. The most important parameter for transistors is usually the forward current gain, h21, in the common emitter configuration. This is designated hfe on data sheets.
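As a rough numerical sketch of how the [h] parameters are used, the snippet below computes the small-signal input impedance and gains of a common-emitter stage from an assumed set of typical parameter values; the values and the neglect of the source impedance are assumptions made for illustration only.

```python
# Assumed typical common-emitter h-parameters and load resistance.
h_ie, h_re, h_fe, h_oe = 2.0e3, 2.0e-4, 150.0, 20e-6   # ohms, -, -, siemens
R_L = 4.7e3                                            # ohms (assumed)

# Two-port relations: v1 = h_ie*i1 + h_re*v2 ; i2 = h_fe*i1 + h_oe*v2 ; load gives v2 = -R_L*i2.
denom = 1.0 + h_oe * R_L
Zin = h_ie - h_re * h_fe * R_L / denom       # input impedance with the load connected
Av = -h_fe * R_L / (denom * Zin)             # small-signal voltage gain v2/v1
Ai = h_fe / denom                            # small-signal current gain i2/i1

print(f"Zin = {Zin:.0f} ohms, Av = {Av:.0f}, Ai = {Ai:.0f}")
```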
The small signal equivalent circuit in terms of two-port parameters leads to the concept of dependent generators. That is, the value of a voltage or current generator depends linearly on a voltage or current elsewhere in the circuit. For instance the [z] parameter model leads to dependent voltage generators as shown in this diagram;
There will always be dependent generators in a two-port parameter equivalent circuit. This applies to the [h] parameters as well as to the [z] and any other kind. These dependencies must be preserved when developing the equations in a larger linear network analysis.
Piecewise linear method
In this method, the transfer function of the non-linear device is broken up into regions. Each of these regions is approximated by a straight line. Thus, the transfer function will be linear up to a particular point where there will be a discontinuity. Past this point the transfer function will again be linear but with a different slope.
A well known application of this method is the approximation of the transfer function of a pn junction diode. The transfer function of an ideal diode has been given at the top of this (non-linear) section. However, this formula is rarely used in network analysis, a piecewise approximation being used instead. It can be seen that the diode current rapidly diminishes to -Io as the voltage falls. This current, for most purposes, is so small it can be ignored. With increasing voltage, the current increases exponentially. The diode is modelled as an open circuit up to the knee of the exponential curve, then past this point as a resistor equal to the bulk resistance of the semiconducting material.
The commonly accepted values for the transition point voltage are 0.7V for silicon devices and 0.3V for germanium devices. An even simpler model of the diode, sometimes used in switching applications, is short circuit for forward voltages and open circuit for reverse voltages.
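A minimal sketch of such a piecewise linear model follows; the 0.7 V knee and the bulk resistance are illustrative assumed values.

```python
def diode_current_piecewise(v: float, v_knee: float = 0.7, r_bulk: float = 10.0) -> float:
    """Piecewise linear diode: open circuit below the knee, resistor r_bulk above it."""
    if v <= v_knee:
        return 0.0                      # region 1: effectively an open circuit
    return (v - v_knee) / r_bulk        # region 2: straight line with slope 1/r_bulk

for v in (0.2, 0.6, 0.8, 1.0):
    print(f"v = {v:.1f} V -> i = {diode_current_piecewise(v)*1e3:.1f} mA")
```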
The model of a forward biased pn junction having an approximately constant 0.7V is also a much used approximation for transistor base-emitter junction voltage in amplifier design.
The piecewise method is similar to the small signal method in that linear network analysis techniques can only be applied if the signal stays within certain bounds. If the signal crosses a discontinuity point then the model is no longer valid for linear analysis purposes. The model does have the advantage over small signal however, in that it is equally applicable to signal and dc bias. These can therefore both be analysed in the same operations and will be linearly superimposable.
Time-varying components
In linear analysis, the components of the network are assumed to be unchanging, but in some circuits this does not apply, such as sweep oscillators, voltage controlled amplifiers, and variable equalisers. In many circumstances the change in component value is periodic. A non-linear component excited with a periodic signal, for instance, can be represented as a periodically varying linear component. Sidney Darlington disclosed a method of analysing such periodic time varying circuits. He developed canonical circuit forms which are analogous to the canonical forms of Ronald M. Foster and Wilhelm Cauer used for analysing linear circuits.
Vector circuit theory
Generalization of circuit theory based on scalar quantities to vectorial currents is a necessity for newly evolving circuits such as spin circuits. Generalized circuit variables consist of four components: scalar current and vector spin current in x, y, and z directions. The voltages and currents each become vector quantities with conductance described as a 4x4 spin conductance matrix.
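As an illustrative sketch only (the numerical values and the diagonal form of the matrix are assumptions, not data for any real material or the formulation of any specific paper), the generalized Ohm's law then relates four-component current and voltage vectors through such a matrix:

```python
import numpy as np

# Assumed 4x4 spin conductance matrix: one charge channel and three spin channels.
g_charge, g_spin = 1.0e-3, 0.4e-3          # siemens (illustrative values)
G = np.diag([g_charge, g_spin, g_spin, g_spin])

# Generalized voltage vector: [charge voltage, spin voltages along x, y, z].
V = np.array([0.5, 0.1, 0.0, -0.05])
I = G @ V                                   # generalized Ohm's law, I = G V
print("charge current:", I[0], "A")
print("spin currents (x, y, z):", I[1:], "A")
```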
See also
Bartlett's bisection theorem
Kirchhoff's circuit laws
Millman's theorem
Modified nodal analysis
Ohm's law
Reciprocity (electrical networks)
Tellegen's theorem
Symbolic circuit analysis
References
External links
The Feynman Lectures on Physics Vol. II Ch. 22: AC Circuits
Electrical engineering
Electronic design | Network analysis (electrical circuits) | [
"Engineering"
] | 5,718 | [
"Electronic design",
"Electrical engineering",
"Electronic engineering",
"Design"
] |
735,965 | https://en.wikipedia.org/wiki/Valence%20bond%20theory | In chemistry, valence bond (VB) theory is one of the two basic theories, along with molecular orbital (MO) theory, that were developed to use the methods of quantum mechanics to explain chemical bonding. It focuses on how the atomic orbitals of the dissociated atoms combine to give individual chemical bonds when a molecule is formed. In contrast, molecular orbital theory has orbitals that cover the whole molecule.
History
In 1916, G. N. Lewis proposed that a chemical bond forms by the interaction of two shared bonding electrons, with the representation of molecules as Lewis structures. The chemist Charles Rugeley Bury suggested in 1921 that eight and eighteen electrons in a shell form stable configurations. Bury proposed that the electron configurations in transitional elements depended upon the valence electrons in their outer shell. In 1916, Kossel put forth his theory of the ionic chemical bond (octet rule), also independently advanced in the same year by Gilbert N. Lewis. Walther Kossel put forward a theory similar to Lewis's, but his model assumed complete transfers of electrons between atoms, and was thus a model of ionic bonding. Both Lewis and Kossel structured their bonding models on that of Abegg's rule (1904).
Although there is no mathematical formula either in chemistry or quantum mechanics for the arrangement of electrons in the atom, the hydrogen atom can be described by the Schrödinger equation and by matrix mechanics, derived in 1926 and 1925 respectively. However, it was only in 1927 that the Heitler–London theory was formulated, which for the first time enabled the calculation of bonding properties of the hydrogen molecule H2 based on quantum mechanical considerations. Specifically, Walter Heitler determined how to use Schrödinger's wave equation (1926) to show how two hydrogen atom wavefunctions join together, with plus, minus, and exchange terms, to form a covalent bond. He then called up his associate Fritz London and they worked out the details of the theory over the course of the night. Later, Linus Pauling used the pair bonding ideas of Lewis together with Heitler–London theory to develop two other key concepts in VB theory: resonance (1928) and orbital hybridization (1930). According to Charles Coulson, author of the noted 1952 book Valence, this period marks the start of "modern valence bond theory", as contrasted with older valence bond theories, which are essentially electronic theories of valence couched in pre-wave-mechanical terms.
Linus Pauling published in 1931 his landmark paper on valence bond theory: "On the Nature of the Chemical Bond". Building on this article, Pauling's 1939 textbook The Nature of the Chemical Bond would become what some have called the bible of modern chemistry. This book helped experimental chemists to understand the impact of quantum theory on chemistry. However, the later edition in 1959 failed to adequately address the problems that appeared to be better understood by molecular orbital theory. The impact of valence bond theory declined during the 1960s and 1970s as molecular orbital theory grew in usefulness as it was implemented in large digital computer programs. Since the 1980s, the more difficult problems of implementing valence bond theory into computer programs have been solved largely, and valence bond theory has seen a resurgence.
Theory
According to this theory a covalent bond is formed between two atoms by the overlap of half filled valence atomic orbitals of each atom containing one unpaired electron. Valence Bond theory describes chemical bonding better than Lewis Theory, which states that atoms share or transfer electrons so that they achieve the octet rule. It does not take into account orbital interactions or bond angles, and treats all covalent bonds equally. A valence bond structure resembles a Lewis structure, but when a molecule cannot be fully represented by a single Lewis structure, multiple valence bond structures are used. Each of these VB structures represents a specific Lewis structure. This combination of valence bond structures is the main point of resonance theory. Valence bond theory considers that the overlapping atomic orbitals of the participating atoms form a chemical bond. Because of the overlapping, it is most probable that electrons should be in the bond region. Valence bond theory views bonds as weakly coupled orbitals (small overlap). Valence bond theory is typically easier to employ in ground state molecules. The core orbitals and electrons remain essentially unchanged during the formation of bonds.
The overlapping atomic orbitals can differ. The two types of overlapping orbitals are sigma and pi. Sigma bonds occur when the orbitals of two shared electrons overlap head-to-head, with the electron density most concentrated between nuclei. Pi bonds occur when two orbitals overlap when they are parallel. For example, a bond between two s-orbital electrons is a sigma bond, because two spheres are always coaxial. In terms of bond order, single bonds have one sigma bond, double bonds consist of one sigma bond and one pi bond, and triple bonds contain one sigma bond and two pi bonds. However, the atomic orbitals for bonding may be hybrids. Hybridization is a model that describes how atomic orbitals combine to form new orbitals that better match the geometry of molecules. Atomic orbitals that are similar in energy combine to make hybrid orbitals. For example, the carbon in methane (CH4) undergoes sp3 hybridization to form four equivalent orbitals, resulting in a tetrahedral shape. Different types of hybridization, such as sp, sp2, and sp3, correspond to specific molecular geometries (linear, trigonal planar, and tetrahedral), influencing the bond angles observed in molecules. Hybrid orbitals provide additional directionality to sigma bonds, accurately explaining molecular geometries.
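In one common sign convention, the four equivalent sp3 hybrids are the following orthonormal combinations of the 2s and 2p orbitals, each pointing toward a corner of a tetrahedron:

$$
\begin{aligned}
h_1 &= \tfrac{1}{2}\left(s + p_x + p_y + p_z\right), &\quad h_2 &= \tfrac{1}{2}\left(s + p_x - p_y - p_z\right),\\
h_3 &= \tfrac{1}{2}\left(s - p_x + p_y - p_z\right), &\quad h_4 &= \tfrac{1}{2}\left(s - p_x - p_y + p_z\right).
\end{aligned}
$$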
Comparison with MO theory
Valence bond theory complements molecular orbital theory, which does not adhere to the valence bond idea that electron pairs are localized between two specific atoms in a molecule but that they are distributed in sets of molecular orbitals which can extend over the entire molecule. Although both theories describe chemical bonding, molecular orbital theory generally offers a clearer and more reliable framework for predicting magnetic and ionization properties. In particular, MO theory can effectively account for paramagnetism arising from unpaired electrons, whereas VBT struggles. Valence bond theory views aromatic properties of molecules as due to spin coupling of the π orbitals. This is essentially still the old idea of resonance between Friedrich August Kekulé von Stradonitz and James Dewar structures. In contrast, molecular orbital theory views aromaticity as delocalization of the π-electrons. Valence bond treatments are restricted to relatively small molecules, largely due to the lack of orthogonality between valence bond orbitals and between valence bond structures, while molecular orbitals are orthogonal. Additionally, valence bond theory cannot explain electronic transitions and spectroscopic properties as effectively as MO theory. Furthermore, while VBT employs hybridization to explain bonding, it can oversimplify complex bonding situations, limiting its applicability in more intricate molecular geometries such as transition metal compounds. On the other hand, valence bond theory provides a much more accurate picture of the reorganization of electronic charge that takes place when bonds are broken and formed during the course of a chemical reaction. In particular, valence bond theory correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms, while simple molecular orbital theory predicts dissociation into a mixture of atoms and ions. For example, the molecular orbital function for dihydrogen is an equal mixture of the covalent and ionic valence bond structures and so predicts incorrectly that the molecule would dissociate into an equal mixture of hydrogen atoms and hydrogen positive and negative ions.
Computational approaches
Modern valence bond theory replaces the overlapping atomic orbitals by overlapping valence bond orbitals that are expanded over a large number of basis functions, either centered each on one atom to give a classical valence bond picture, or centered on all atoms in the molecule. The resulting energies are more competitive with energies from calculations where electron correlation is introduced based on a Hartree–Fock reference wavefunction. The most recent text is by Shaik and Hiberty.
Applications
An important aspect of the valence bond theory is the condition of maximum overlap, which leads to the formation of the strongest possible bonds. This theory is used to explain the covalent bond formation in many molecules.
For example, in the case of the F2 molecule, the F−F bond is formed by the overlap of pz orbitals of the two F atoms, each containing an unpaired electron. Since the nature of the overlapping orbitals are different in H2 and F2 molecules, the bond strength and bond lengths differ between H2 and F2 molecules.
In methane (CH4), the carbon atom undergoes sp3 hybridization, allowing it to form four equivalent sigma bonds with hydrogen atoms, resulting in a tetrahedral geometry. Hybridization also explains the equal C-H bond strengths.
In an HF molecule the covalent bond is formed by the overlap of the 1s orbital of H and the 2pz orbital of F, each containing an unpaired electron. Mutual sharing of electrons between H and F results in a covalent bond in HF.
See also
Modern valence bond theory
Molecular orbital theory
Valence bond programs
References
Chemistry theories
Quantum chemistry
Chemical bonding
General chemistry | Valence bond theory | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,906 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
"Chemical bonding",
" and optical physics"
] |
736,407 | https://en.wikipedia.org/wiki/Opioid%20receptor | Opioid receptors are a group of inhibitory G protein-coupled receptors with opioids as ligands. The endogenous opioids are dynorphins, enkephalins, endorphins, endomorphins and nociceptin. The opioid receptors are ~40% identical to somatostatin receptors (SSTRs). Opioid receptors are distributed widely in the brain, in the spinal cord, on peripheral neurons, and digestive tract.
Discovery
By the mid-1960s, it had become apparent from pharmacologic studies that opioids were likely to exert their actions at specific receptor sites, and that there were likely to be multiple such sites. Early studies had indicated that opiates appeared to accumulate in the brain. The receptors were first identified as specific molecules through the use of binding studies, in which opiates that had been labeled with radioisotopes were found to bind to brain membrane homogenates. The first such study was published in 1971, using 3H-levorphanol. In 1973, Candace Pert and Solomon H. Snyder published the first detailed binding study of what would turn out to be the μ opioid receptor, using 3H-naloxone. That study has been widely credited as the first definitive finding of an opioid receptor, although two other studies followed shortly after.
Purification
Purification of the receptor further verified its existence. The first attempt to purify the receptor involved the use of a novel opioid antagonist called chlornaltrexamine that was demonstrated to bind to the opioid receptor. Caruso later purified the detergent-extracted component of rat brain membrane that eluted with the specifically bound 3H-chlornaltrexamine.
Major subtypes
There are four major subtypes of opioid receptors. OGFr was originally discovered and named as a new opioid receptor zeta (ζ). However it was subsequently found that it shares little sequence similarity with the other opioid receptors, and has quite different function.
(I). Name based on order of discovery
Evolution
The opioid receptor (OR) family originated from two duplication events of a single ancestral opioid receptor early in vertebrate evolution. Phylogenetic analysis demonstrates that the family of opioid receptors was already present at the origin of jawed vertebrates over 450 million years ago. In humans, this paralogon, resulting from a double tetraploidization event, left the receptor genes located on chromosomes 1, 6, 8, and 20. Tetraploidization events often result in the loss of one or more of the duplicated genes, but in this case, nearly all species retain all four opioid receptors, indicating biological significance of these systems. Stefano traced the co-evolution of opioid receptors and the immune system, noting that these receptors helped earlier animals to survive pain and inflammation shock in aggressive environments.
The receptor families delta, kappa, and mu demonstrate 55–58% identity to one another, and a 48–49% homology to the nociceptin receptor. Taken together, this indicates that the NOP receptor gene, OPRL1, has the same evolutionary origin as the other receptor genes, but a higher mutation rate.
Although opioid receptor families share many similarities, their structural differences lead to functional difference. Thus, mu-opioid receptors induce relaxation, trust, satisfaction, and analgesia. This system may also help mediate stable, emotionally committed relationships. Experiments with juvenile guinea pigs showed that social attachment is mediated by the opioid system. The evolutionary role of opioid signaling in these behaviors was confirmed in dogs, chicks, and rats. Opioid receptors also have a role in mating behaviors. However, mu-opioid receptors do not just control social behavior because they also make individuals feel relaxed in a wide range of other situations.
Kappa- and delta-opioid receptors may be less associated with relaxation and analgesia because kappa-opioid receptor suppresses mu-opioid receptor activation, and delta-opioid receptor interacts differently with agonists and antagonists. Kappa-opioid receptors are involved in chronic anxiety's perceptual mobilization, whereas delta-opioid receptors induce action initiation, impulsivity, and behavioural mobilization. These differences led some researchers to suggest that up- or down-regulations within the three opioid receptor families are the basis of different dispositional emotionality seen in psychiatric disorders.
Human-specific opioid-modulated cognitive features are not attributable to coding differences for receptors or ligands, which share 99% similarity with primates, but to regulatory changes in expression levels.
Nomenclature
The receptors were named using the first letter of the first ligand that was found to bind to them. Morphine was the first chemical shown to bind to "mu" receptors. The first letter of the drug morphine is m, rendered as the corresponding Greek letter μ. In similar manner, a drug known as ketocyclazocine was first shown to attach itself to "κ" (kappa) receptors, while the "δ" (delta) receptor was named after the mouse vas deferens tissue in which the receptor was first characterised. An additional opioid receptor was later identified and cloned based on homology with the cDNA. This receptor is known as the nociceptin receptor or ORL1 (opiate receptor-like 1).
The opioid receptor types are nearly 70% identical, with the differences located at the N and C termini. The μ receptor is perhaps the most important. It is thought that the G protein binds to the third intracellular loop of all opioid receptors. Both in mice and humans, the genes for the various receptor subtypes are located on separate chromosomes.
Separate opioid receptor subtypes have been identified in human tissue. Research has so far failed to identify the genetic evidence of the subtypes, and it is thought that they arise from post-translational modification of cloned receptor types.
An IUPHAR subcommittee has recommended that appropriate terminology for the 3 classical (μ, δ, κ) receptors, and the non-classical (nociceptin) receptor, should be MOP ("Mu OPiate receptor"), DOP, KOP and NOP respectively.
Additional receptors
Sigma (σ) receptors were once considered to be opioid receptors due to the antitussive actions of many opioid drugs' being mediated via σ receptors, and the first selective σ agonists being derivatives of opioid drugs (e.g., allylnormetazocine). However, σ receptors were found to not be activated by endogenous opioid peptides, and are quite different from the other opioid receptors in both function and gene sequence, so they are now not usually classified with the opioid receptors.
The existence of further opioid receptors (or receptor subtypes) has also been suggested because of pharmacological evidence of actions produced by endogenous opioid peptides, but shown not to be mediated through any of the four known opioid receptor subtypes. The existence of receptor subtypes or additional receptors other than the classical opioid receptors (μ, δ, κ) has been based on limited evidence, since only three genes for the three main receptors have been identified. The only one of these additional receptors to have been definitively identified is the zeta (ζ) opioid receptor, which has been shown to be a cellular growth factor modulator with met-enkephalin being the endogenous ligand. This receptor is now most commonly referred to as the opioid growth factor receptor (OGFr).
Epsilon (ε) opioid receptor
Another postulated opioid receptor is the ε opioid receptor. The existence of this receptor was suspected after the endogenous opioid peptide beta-endorphin was shown to produce additional actions that did not seem to be mediated through any of the known opioid receptors. Activation of this receptor produces strong analgesia and release of met-enkephalin; a number of widely used opioid agonists, such as the μ agonist etorphine and the κ agonist bremazocine, have been shown to act as agonists for this effect (even in the presence of antagonists to their more well known targets), while buprenorphine has been shown to act as an epsilon antagonist. Several selective agonists and antagonists are now available for the putative epsilon receptor; however, efforts to locate a gene for this receptor have been unsuccessful, and epsilon-mediated effects were absent in μ/δ/κ "triple knockout" mice, suggesting the epsilon receptor is likely to be either a splice variant derived from alternate post-translational modification, or a heteromer derived from hybridization of two or more of the known opioid receptors.
Mechanism of activation
Opioid receptors are a type of G protein–coupled receptor (GPCR). These receptors are distributed throughout the central nervous system and within the peripheral tissue of neural and non-neural origin. They are also located in high concentrations in the Periaqueductal gray, Locus coeruleus, and the Rostral ventromedial medulla. The receptors consist of an extracellular amino acid N-terminus, seven trans-membrane helices, three extracellular loops, three intracellular loops, and an intracellular carboxyl C-terminus. Three GPCR extracellular loops provide a compartment where signaling molecules can attach to generate a response. Heterotrimeric G proteins contain three different sub-units, which include an alpha (α) subunit, a beta (β) subunit, and a gamma (γ) sub-unit. The gamma and beta sub-units are permanently bound together, producing a single Gβγ sub-unit. Heterotrimeric G proteins act as ‘molecular switches’, which play a key role in signal transduction, because they relay information from activated receptors to appropriate effector proteins. All G protein α sub-units contain palmitate, which is a 16-carbon saturated fatty acid that is attached near the N-terminus through a labile, reversible thioester linkage to a cysteine amino acid. It is this palmitoylation that allows the G protein to interact with membrane phospholipids due to the hydrophobic nature of the alpha sub-units. The gamma sub-unit is also lipid modified and can attach to the plasma membrane as well. These properties of the two sub-units allow the opioid receptor's G protein to permanently interact with the membrane via lipid anchors.
When an agonistic ligand binds to the opioid receptor, a conformational change occurs, and the GDP molecule is released from the Gα sub-unit. This mechanism is complex, and is a major stage of the signal transduction pathway. When the GDP molecule is attached, the Gα sub-unit is in its inactive state, and the nucleotide-binding pocket is closed off inside the protein complex. However, upon ligand binding, the receptor switches to an active conformation, and this is driven by intermolecular rearrangement between the trans-membrane helices. The receptor activation releases an ‘ionic lock’ which holds together the cytoplasmic sides of transmembrane helices three and six, causing them to rotate. This conformational change exposes the intracellular receptor domains at the cytosolic side, which further leads to the activation of the G protein. When the GDP molecule dissociates from the Gα sub-unit, a GTP molecule binds to the free nucleotide-binding pocket, and the G protein becomes active. A Gα(GTP) complex is formed, which has a weaker affinity for the Gβγ sub-unit than the Gα(GDP) complex, causing the Gα sub-unit to separate from the Gβγ sub-unit, forming two sections of the G protein. The sub-units are now free to interact with effector proteins; however, they are still attached to the plasma membrane by lipid anchors. After binding, the active G protein sub-units diffuse within the membrane and act on various intracellular effector pathways. This includes inhibiting neuronal adenylate cyclase activity, as well as increasing membrane hyper-polarisation. When the adenylyl cyclase enzyme complex is stimulated, it results in the formation of Cyclic Adenosine 3', 5'-Monophosphate (cAMP), from Adenosine 5' Triphosphate (ATP). cAMP acts as a secondary messenger, as it moves from the plasma membrane into the cell and relays the signal.
cAMP binds to, and activates, cAMP-dependent protein kinase A (PKA), which is located intracellularly in the neuron. The PKA consists of a holoenzyme - it is a compound which becomes active due to the combination of an enzyme with a coenzyme. The PKA enzyme also contains two catalytic PKA-Cα subunits and a regulatory PKA-R subunit dimer. The PKA holoenzyme is inactive under normal conditions; however, when cAMP molecules that are produced earlier in the signal transduction mechanism combine with the enzyme, PKA undergoes a conformational change. This activates it, giving it the ability to catalyse substrate phosphorylation. CREB (cAMP response element binding protein) belongs to a family of transcription factors and is positioned in the nucleus of the neuron. When the PKA is activated, it phosphorylates the CREB protein (adds a high energy phosphate group) and activates it. The CREB protein binds to cAMP response elements CRE, and can either increase or decrease the transcription of certain genes. The cAMP/PKA/CREB signalling pathway described above is crucial in memory formation and pain modulation. It is also significant in the induction and maintenance of long-term potentiation, which is a phenomenon that underlies synaptic plasticity - the ability of synapses to strengthen or weaken over time.
Voltage-dependent calcium channels (VDCCs) are key in the depolarisation of neurons and play a major role in promoting the release of neurotransmitters. When agonists bind to opioid receptors, G proteins activate and dissociate into their constituent Gα and Gβγ sub-units. The Gβγ sub-unit binds to the intracellular loop between the two trans-membrane helices of the VDCC. When the sub-unit binds to the voltage-dependent calcium channel, it produces a voltage-dependent block, which inhibits the channel, preventing the flow of calcium ions into the neuron. Embedded in the cell membrane is also the G protein-coupled inwardly-rectifying potassium channel. When a Gβγ or Gα(GTP) molecule binds to the C-terminus of the potassium channel, it becomes active, and potassium ions are pumped out of the neuron. The activation of the potassium channel and subsequent deactivation of the calcium channel causes membrane hyperpolarization. This is when there is a change in the membrane's potential, so that it becomes more negative. The reduction in calcium ions causes a reduction in neurotransmitter release because calcium is essential for this event to occur. This means that neurotransmitters such as glutamate and substance P cannot be released from the presynaptic terminal of the neurons. These neurotransmitters are vital in the transmission of pain, so opioid receptor activation reduces the release of these substances, thus creating a strong analgesic effect.
Pathology
Some forms of mutations in δ-opioid receptors have resulted in constant receptor activation.
Protein–protein interactions
Receptor heteromers
δ-κ
δ-μ
κ-μ
μ-ORL1
δ-CB1
μ-CB1
κ-CB1
δ-α2A
δ-β2
κ-β2
μ-α2A
δ-CXCR4
δ-SNSR4
κ-APJ
μ-CCR5
μ1D-GRPR
μ-mGlu5
μ-5-HT1A
μ-NK1
μ-sst2A
See also
List of opioids
Opioid antagonist
Opioidergic
References
Further reading
External links
Opioid receptors | Opioid receptor | [
"Chemistry"
] | 3,453 | [
"Opioid receptors",
"Signal transduction"
] |
736,618 | https://en.wikipedia.org/wiki/Reeh%E2%80%93Schlieder%20theorem | The Reeh–Schlieder theorem is a result in relativistic local quantum field theory published by Helmut Reeh and Siegfried Schlieder in 1961.
The theorem states that the vacuum state is a cyclic vector for the field algebra $\mathcal{A}(\mathcal{O})$ corresponding to any open set $\mathcal{O}$ in Minkowski space. That is, any state can be approximated to arbitrary precision by acting on the vacuum with an operator selected from the local algebra $\mathcal{A}(\mathcal{O})$, even for states that contain excitations arbitrarily far away in space. In this sense, states created by applying elements of the local algebra to the vacuum state are not localized to the region $\mathcal{O}$.
For practical purposes, however, local operators still generate quasi-local states. More precisely, the long range effects of the operators of the
local algebra will diminish rapidly with distance, as seen by the cluster properties of the Wightman functions. And with increasing distance, creating a unit vector localized outside the region requires operators of ever increasing operator norm.
This theorem is also cited in connection with quantum entanglement. But it is subject to some doubt whether the Reeh–Schlieder theorem can usefully be seen as the quantum field theory analog to quantum entanglement, since the exponentially-increasing energy needed for long range actions will prohibit any macroscopic effects. However, Benni Reznik showed that vacuum entanglement can be distilled into EPR pairs used in quantum information tasks.
It is known that the Reeh–Schlieder property applies not just to the vacuum but in fact to any state with bounded energy. If some finite number N of space-like separated regions is chosen, the multipartite entanglement can be analyzed in the typical quantum information setting of N abstract quantum systems, each with a Hilbert space possessing a countable basis, and the corresponding structure has been called superentanglement.
See also
Newton–Wigner localization
References
External links
Siegfried Schlieder, Some remarks about the localization of states in a quantum field theory, Comm. Math. Phys. 1, no. 4 (1965), 265–280 online at Project Euclid
hep-th/0001154 Christian Jaekel, "The Reeh–Schlieder property for ground states"
"Reeh–Schlieder property in a separable Hilbert space"
https://scholar.harvard.edu/files/ghazalddowen/files/ghazal_owen_ee_in_qft-converted.pdf - provides a succinct summary and describes its relation to entanglement
Axiomatic quantum field theory
Theorems in quantum mechanics | Reeh–Schlieder theorem | [
"Physics",
"Mathematics"
] | 545 | [
"Theorems in quantum mechanics",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Quantum physics stubs",
"Physics theorems"
] |
737,164 | https://en.wikipedia.org/wiki/Large%20diffeomorphism | In mathematics and theoretical physics, a large diffeomorphism is an equivalence class of diffeomorphisms under the equivalence relation where diffeomorphisms that can be continuously connected to each other are in the same equivalence class.
For example, a two-dimensional real torus has an SL(2,Z) group of large diffeomorphisms by which the one-cycles of the torus are transformed into their integer linear combinations. This group of large diffeomorphisms is called the modular group.
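Concretely, an element of this group acts on the pair of one-cycles $(a, b)$ of the torus by an integer matrix of determinant one; for example, in one common convention the two standard generators act as

$$
T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}: \ (a, b) \mapsto (a + b,\ b), \qquad
S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}: \ (a, b) \mapsto (-b,\ a).
$$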
More generally, for a surface S, the structure of self-homeomorphisms up to homotopy is known as the mapping class group. It is known (for compact, orientable S) that this is isomorphic with the automorphism group of the fundamental group of S. This is consistent with the genus 1 case, stated above, if one takes into account that then the fundamental group is Z2, on which the modular group acts as automorphisms (as a subgroup of index 2 in all automorphisms, since the orientation may also be reversed, by a transformation with determinant −1).
See also
Large gauge transformation
Diffeomorphisms
Theoretical physics | Large diffeomorphism | [
"Physics",
"Mathematics"
] | 253 | [
"Topology stubs",
"Theoretical physics",
"Topology",
"Theoretical physics stubs"
] |
737,618 | https://en.wikipedia.org/wiki/GABA%20receptor | The GABA receptors are a class of receptors that respond to the neurotransmitter gamma-aminobutyric acid (GABA), the chief inhibitory compound in the mature vertebrate central nervous system. There are two classes of GABA receptors: GABAA and GABAB. GABAA receptors are ligand-gated ion channels (also known as ionotropic receptors); whereas GABAB receptors are G protein-coupled receptors, also called metabotropic receptors.
Ligand-gated ion channels
GABAA receptor
It has long been recognized that the fast inhibitory response of neurons to GABA, which is blocked by bicuculline and picrotoxin, is due to direct activation of an anion channel. This channel was subsequently termed the GABAA receptor. Fast-responding GABA receptors are members of a family of Cys-loop ligand-gated ion channels. Members of this superfamily, which includes nicotinic acetylcholine receptors, GABAA receptors, glycine and 5-HT3 receptors, possess a characteristic loop formed by a disulfide bond between two cysteine residues.
In ionotropic GABAA receptors, binding of GABA molecules to their binding sites in the extracellular part of the receptor triggers opening of a chloride ion-selective pore. The increased chloride conductance drives the membrane potential towards the reversal potential of the Cl¯ ion which is about –75 mV in neurons, inhibiting the firing of new action potentials. This mechanism is responsible for the sedative effects of GABAA allosteric agonists. In addition, activation of GABA receptors lead to the so-called shunting inhibition, which reduces the excitability of the cell independent of the changes in membrane potential.
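The quoted reversal potential follows from the Nernst equation; the sketch below recomputes it for assumed typical mammalian chloride concentrations (the exact concentrations vary by cell type and developmental stage, so the values here are illustrative only).

```python
import math

R = 8.314        # gas constant, J/(mol*K)
T = 310.0        # body temperature, K
F = 96485.0      # Faraday constant, C/mol
z = -1           # valence of the chloride ion

cl_out = 120e-3  # extracellular Cl- concentration, mol/L (assumed typical value)
cl_in = 7e-3     # intracellular Cl- concentration, mol/L (assumed typical value)

# Nernst equation: E = (R*T)/(z*F) * ln([out]/[in])
E_cl = (R * T) / (z * F) * math.log(cl_out / cl_in)
print(f"Cl- reversal potential = {E_cl * 1e3:.0f} mV")   # about -76 mV
```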
There have been numerous reports of excitatory GABAA receptors. According to the excitatory GABA theory, this phenomenon is due to increased intracellular concentration of Cl¯ ions either during development of the nervous system or in certain cell populations. After this period of development, a chloride pump is upregulated and inserted into the cell membrane, pumping Cl− ions into the extracellular space of the tissue. Further openings via GABA binding to the receptor then produce inhibitory responses. Over-excitation of this receptor induces receptor remodeling and the eventual invagination of the GABA receptor. As a result, further GABA binding becomes inhibited and inhibitory postsynaptic potentials are no longer relevant.
However, the excitatory GABA theory has been questioned as potentially being an artefact of experimental conditions, with most data acquired in in-vitro brain slice experiments susceptible to un-physiological milieu such as deficient energy metabolism and neuronal damage. The controversy arose when a number of studies have shown that GABA in neonatal brain slices becomes inhibitory if glucose in perfusate is supplemented with ketone bodies, pyruvate, or lactate, or that the excitatory GABA was an artefact of neuronal damage. Subsequent studies from originators and proponents of the excitatory GABA theory have questioned these results, but the truth remained elusive until the real effects of GABA could be reliably elucidated in intact living brain. Since then, using technology such as in-vivo electrophysiology/imaging and optogenetics, two in-vivo studies have reported the effect of GABA on neonatal brain, and both have shown that GABA is indeed overall inhibitory, with its activation in the developing rodent brain not resulting in network activation, and instead leading to a decrease of activity.
GABA receptors influence neural function by coordinating with glutamatergic processes.
GABAA-ρ receptor
A subclass of ionotropic GABA receptors, insensitive to typical allosteric modulators of GABAA receptor channels such as benzodiazepines and barbiturates, was designated GABAC receptor. Native responses of the GABAC receptor type occur in retinal bipolar or horizontal cells across vertebrate species.
GABAC receptors are exclusively composed of ρ (rho) subunits that are related to GABAA receptor subunits. Although the term "GABAC receptor" is frequently used, GABAC may be viewed as a variant within the GABAA receptor family. Others have argued that the differences between GABAC and GABAA receptors are large enough to justify maintaining the distinction between these two subclasses of GABA receptors. However, since GABAC receptors are closely related in sequence, structure, and function to GABAA receptors and since other GABAA receptors besides those containing ρ subunits appear to exhibit GABAC pharmacology, the Nomenclature Committee of the IUPHAR has recommended that the GABAC term no longer be used and these ρ receptors should be designated as the ρ subfamily of the GABAA receptors (GABAA-ρ).
G protein-coupled receptors
GABAB receptor
A slow response to GABA is mediated by GABAB receptors, originally defined on the basis of pharmacological properties.
In studies focused on the control of neurotransmitter release, it was noted that a GABA receptor was responsible for modulating evoked release in a variety of isolated tissue preparations. This ability of GABA to inhibit neurotransmitter release from these preparations was not blocked by bicuculline, was not mimicked by isoguvacine, and was not dependent on Cl¯, all of which are characteristic of the GABAA receptor. The most striking discovery was the finding that baclofen (β-parachlorophenyl GABA), a clinically employed muscle relaxant, mimicked, in a stereoselective manner, the effect of GABA.
Later ligand-binding studies provided direct evidence of binding sites for baclofen on central neuronal membranes. cDNA cloning confirmed that the GABAB receptor belongs to the family of G-protein coupled receptors. Additional information on GABAB receptors has been reviewed elsewhere.
GABA receptor gene polymorphisms
Two separate genes on two chromosomes control GABA synthesis - glutamate decarboxylase and alpha-ketoglutarate decarboxylase genes - though not much research has been done to explain this polygenic phenomenon. GABA receptor genes have been studied more in depth, and many have hypothesized about the deleterious effects of polymorphisms in these receptor genes. The most common single nucleotide polymorphisms (SNPs) occurring in GABA receptor genes rho 1, 2, and 3 (GABBR1, GABBR2, and GABBR3) have been more recently explored in the literature, in addition to the potential effects of these polymorphisms. However, some research has provided evidence that these polymorphisms, caused by single base pair variations, may be harmful.
It was discovered that the minor allele of a single nucleotide polymorphism at GABBR1 known as rs1186902 is significantly associated with a later age of onset for migraines, but for the other SNPs, no differences were discovered between genetic and allelic variations in the control vs. migraine participants. Similarly, in a study examining SNPs in rho 1, 2, and 3, and their implication in essential tremor, a nervous system disorder, it was discovered that there were no differences in the frequencies of the allelic variants of polymorphisms for control vs. essential tremor participants. On the other hand, research examining the effect of SNPs in participants with restless leg syndrome found an "association between GABRR3rs832032 polymorphism and the risk for RLS, and a modifier effect of GABRA4 rs2229940 on the age of onset of RLS" - the latter of which is a modifier gene polymorphism. The most common GABA receptor SNPs do not correlate with deleterious health effects in many cases, but do in a few.
One significant example of a deleterious mutation is the major association between several GABA receptor gene polymorphisms and schizophrenia. Because GABA is integral to the release of inhibitory neurotransmitters which produce a calming effect and play a role in reducing anxiety, stress, and fear, it is not surprising that polymorphisms in these genes result in more consequences relating to mental health than to physical health. In an analysis of 19 SNPs on various GABA receptor genes, five SNPs in the GABBR2 group were found to be significantly associated with schizophrenia, producing unexpected haplotype frequencies not found in the studies mentioned previously.
Several studies have verified an association between alcohol use disorder and the rs279858 polymorphism on the GABRA2 gene, and higher negative alcohol effects scores for individuals who were homozygous at six SNPs. Furthermore, a study examining polymorphisms in the GABA receptor beta 2 subunit gene found an association with schizophrenia and bipolar disorder, and examined three SNPs and their effects on disease frequency and treatment dosage. A major finding of this study was that functional psychosis should be conceptualized as a scale of phenotypes rather than distinct categories.
See also
GABA agonist
GABA antagonist
References
External links
IUPHAR GPCR Database - GABAB receptors
Ionotropic receptors
G protein-coupled receptors | GABA receptor | [
"Chemistry"
] | 1,979 | [
"G protein-coupled receptors",
"Ionotropic receptors",
"Signal transduction"
] |
738,085 | https://en.wikipedia.org/wiki/Angiotensin%20II%20receptor | The angiotensin II receptors, (ATR1) and (ATR2), are a class of G protein-coupled receptors with angiotensin II as their ligands. They are important in the renin–angiotensin system: they are responsible for the signal transduction of the vasoconstricting stimulus of the main effector hormone, angiotensin II.
Structure
The AT1 and AT2 receptors share a sequence identity of ~30%, but have a similar affinity for angiotensin II, which is their main ligand.
Members
Overview table
AT1
The AT1 receptor is the best elucidated angiotensin receptor.
Location within the body
The AT1 subtype is found in the heart, blood vessels, kidney, adrenal cortex, lung and circumventricular organs of brain, basal ganglia, brainstem and mediates the vasoconstrictor effects.
Mechanism
The angiotensin receptor is activated by the vasoconstricting peptide angiotensin II. The activated receptor in turn couples to Gq/11 and Gi/o and thus activates phospholipase C and increases the cytosolic Ca2+ concentrations, which in turn triggers cellular responses such as stimulation of protein kinase C. Activated receptor also inhibits adenylate cyclase and activates various tyrosine kinases.
Effects
Effects mediated by the AT1 receptor include vasoconstriction, aldosterone synthesis and secretion, increased vasopressin secretion, cardiac hypertrophy, augmentation of peripheral noradrenergic activity, vascular smooth muscle cells proliferation, decreased renal blood flow, renal renin inhibition, renal tubular sodium reuptake, modulation of central sympathetic nervous system activity, cardiac contractility, central osmocontrol and extracellular matrix formation.
AT2
AT2 receptors are more plentiful in the fetus and neonate. The AT2 receptor remains enigmatic and controversial – is probably involved in vascular growth. Effects mediated by the AT2 receptor are suggested to include inhibition of cell growth, fetal tissue development, modulation of extracellular matrix, neuronal regeneration, apoptosis, cellular differentiation, and maybe vasodilation and left ventricular hypertrophy. In humans the AT2 subtype is found in molecular layer of the cerebellum. In the mouse is found in the adrenal gland, amygdaloid nuclei and, in small numbers, in the paraventricular nucleus of the hypothalamus and the locus coeruleus.
AT3 and AT4
Other poorly characterized subtypes include the AT3 and AT4 receptors. The AT4 receptor is activated by the angiotensin II metabolite angiotensin IV, and may play a role in regulation of the CNS extracellular matrix, as well as modulation of oxytocin release.
See also
Angiotensin II receptor antagonist
References
External links
G protein-coupled receptors | Angiotensin II receptor | [
"Chemistry"
] | 624 | [
"G protein-coupled receptors",
"Signal transduction"
] |
738,191 | https://en.wikipedia.org/wiki/Cholecystokinin%20receptor | Cholecystokinin receptors or CCK receptors are a group of G-protein coupled receptors which bind the peptide hormones cholecystokinin (CCK) and gastrin. There are two different subtypes CCKA and CCKB which are ~50% homologous: Various cholecystokinin antagonists have been developed and are used in research, although the only drug of this class that has been widely marketed to date is the anti-ulcer drug proglumide.
References
External links
G protein-coupled receptors | Cholecystokinin receptor | [
"Chemistry"
] | 116 | [
"G protein-coupled receptors",
"Signal transduction"
] |
4,613,848 | https://en.wikipedia.org/wiki/Firestop | A firestop or fire-stopping is a form of passive fire protection that is used to seal around openings and between joints in a fire-resistance-rated wall or floor assembly. Firestops are designed to maintain the fire-resistance rating of a wall or floor assembly intended to impede the spread of fire and smoke.
Description
Firestops prevent unprotected horizontal and vertical penetrations in a fire-resistance-rated wall or floor assembly from creating a route by which fire and smoke can spread that would otherwise have been fire resisting construction, e.g. where a pipe passes through a firewall.
Fire stopping is also to seal around gaps between fire resisting constructions, e.g. the linear gap between a wall and the floor above, in order for construction to form a complete barrier to fire and smoke spread.
Opening types
Firestops are used in:
Electrical, mechanical, and structural penetrations
Unpenetrated openings (such as openings for future use)
Re-entries of existing firestops
Control or sway joints in fire-resistance-rated wall or floor assemblies
Junctions between fire-resistance-rated wall or floor assemblies
Head-of-wall (HOW) joints, where non-load-bearing wall assemblies meet floor assemblies
Numeric characters are used to identify what penetrant, if any, can be found within the present system and help identify what UL-tested system was used.
Classifications for penetrations and the barriers they penetrate are categorized by a standardized letter-number system that has been adopted by all firestop product manufacturers. A typical system would consist of several letters, followed by a series of numbers indicating the type of penetrant that is passing through the particular barrier (e.g. FB-5533).
Materials
Components include intumescents, cementitious mortars, silicone, firestop pillows, mineral fibers, and rubber compounds.
Maintenance
Firestops should be maintained in accordance with the certification listing. Construction documentation sometimes includes an inventory of all firestops in a building, with drawings indicating their location and certification listings. Using this, a building owner can meet the fire code relating to fire barriers. Improper repairs may otherwise result, which would violate the fire code and could allow a fire to travel between areas intended by code to be separated during a fire.
Ratings
Firestop materials are not rated per se. They receive a fire rating by combining materials in an arrangement specific to the item (a pipe or cable, for example) penetrating the fire-rated wall or floor and the construction arrangement of the fire-rated wall or floor. A two-hour-rated pipe-penetration firestop may consist of a layer of caulking over packed mineral wool. The arrangement, not the caulking, provides the two-hour rating. The individual firestop materials and the overall firestop assembly are listed.
Testing and certification
Certification listings include those available from:
Underwriters Laboratories
Underwriters Laboratories of Canada
Deutsches Institut für Bautechnik (Germany)
FIRAS scheme- Warrington Fire (UK)
Efectis (Netherlands, France, and Norway)
FM Global
Regulations and compliance
When the installed configuration does not comply with the appropriate certification listing, the fire-resistance rating may be lower than expected. Each opening in a fire-resistance-rated wall or floor in a building must have a certification listing. There are thousands of listings from various certification and testing laboratories. The Canadian and United States Underwriters Laboratories publish books listing firestop manufacturers who have contracted with them for testing and certification.
Inadequate firestopping
No firestopping
Older buildings often lack firestops. A thorough inspection can identify all vertical and horizontal fire barriers and their fire ratings, and all breaches in these barriers (which can be sealed with approved methods).
Non-listed attempts
Firestops created by contractors or building maintenance personnel which are not listed are not credited with an adequate fire resistance rating for building-code compliance purposes. They are usually short-term, cost-cutting measures at the expense of fire safety and code compliance. One common error is citing a listing for a product that applies to a different use; for example, an insulation listed only for a certain flame-spread rating is unacceptable for firestopping purposes.
See also
Fire blocking
Firestop pillow
Penetrant
Penetration (firestop)
Endothermic
Annulus (firestop)
Product certification
Certification mark
Silicone foam
Packing (firestopping)
Sleeve (construction)
Heat sink
References
External links
Gütegemeinschaft Brandschutz im Ausbau German passive fire protection association
International Firestop Council An International association of firestop manufacturers, consultants, inspectors, and contractors
Efectis Test Laboratory
UL and International Firestop Council (IFC) video Close enough is not good enough: A demonstration of Proper vs Improper Firestopping
UL Essay On Firestops
Deutsches Institut für Bautechnik (DIBt)
iBMB a part of Technische Universität Braunschweig
Underwriters' Laboratories of Canada (ULC)
Underwriters Laboratories
Building materials
Passive fire protection | Firestop | [
"Physics",
"Engineering"
] | 1,016 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
4,614,945 | https://en.wikipedia.org/wiki/Aminoacyl-tRNA | Aminoacyl-tRNA (also aa-tRNA or charged tRNA) is tRNA to which its cognate amino acid is chemically bonded (charged). The aa-tRNA, along with particular elongation factors, deliver the amino acid to the ribosome for incorporation into the polypeptide chain that is being produced during translation.
Alone, an amino acid is not the substrate necessary to allow for the formation of peptide bonds within a growing polypeptide chain. Instead, amino acids must be "charged" or aminoacylated with a tRNA to form their respective aa-tRNA. Every amino acid has its own specific aminoacyl-tRNA synthetase, which is utilized to chemically bind to the tRNA that it is specific to, or in other words, "cognate" to. The pairing of a tRNA with its cognate amino acid is crucial, as it ensures that only the particular amino acid matching the anticodon of the tRNA, and in turn matching the codon of the mRNA, is used during protein synthesis.
In order to prevent translational errors, in which the wrong amino acid is incorporated into the polypeptide chain, evolution has provided for proofreading functionalities of aa-tRNA synthetases; these mechanisms ensure the proper pairing of an amino acid to its cognate tRNA. Amino acids that are misacylated (attached to a non-cognate tRNA) undergo hydrolysis through the deacylation mechanisms possessed by aa-tRNA synthetases.
Due to the degeneracy of the genetic code, multiple tRNAs will have the same amino acid but different anticodons. These different tRNAs are called isoacceptors. Under certain circumstances, non-cognate amino acids will be charged, resulting in mischarged or misaminoacylated tRNA. These mischarged tRNAs must be hydrolyzed in order to prevent incorrect protein synthesis.
While aa-tRNA serves primarily as the intermediate link between the mRNA coding strand and the encoded polypeptide chain during protein synthesis, it is also found that aa-tRNA have functions in several other biosynthetic pathways. aa-tRNAs are found to function as substrates in biosynthetic pathways for cell walls, antibiotics, lipids, and protein degradation.
It is understood that aa-tRNAs may function as donors of amino acids necessary for the modification of lipids and the biosynthesis of antibiotics. For example, microbial biosynthetic gene clusters may utilize aa-tRNAs in the synthesis of non-ribosomal peptides and other amino acid-containing metabolites.
Synthesis
Aminoacyl-tRNA is produced in two steps. First, the adenylation of the amino acid, which forms aminoacyl-AMP:
Amino Acid + ATP → Aminoacyl-AMP + PPi
Second, the amino acid residue is transferred to the tRNA:
Aminoacyl-AMP + tRNA → Aminoacyl-tRNA + AMP
The overall net reaction is:
Amino Acid + ATP + tRNA → Aminoacyl-tRNA + AMP + PPi
The net reaction is energetically favorable only because the pyrophosphate (PPi) is later hydrolyzed. The hydrolysis of pyrophosphate to two molecules of inorganic phosphate (Pi) reaction is highly energetically favorable and drives the other two reactions. Together, these highly exergonic reactions take place inside the aminoacyl-tRNA synthetase specific for that amino acid.
Stability and hydrolysis
Research into the stability of aa-tRNAs illustrates that the acyl (or ester) linkage is the most important conferring factor, as opposed to the sequence of the tRNA itself. This linkage is an ester bond that chemically binds the carboxyl group of an amino acid to the terminal 3'-OH group of its cognate tRNA. It has been discovered that the amino acid moiety of a given aa-tRNA provides for its structural integrity; the tRNA moiety dictates, for the most part, how and when the amino acid will be incorporated into a growing polypeptide chain.
The different aa-tRNAs have varying pseudo-first-order rate constants for the hydrolysis of the ester bond between the amino acid and tRNA. Such observations are due primarily to steric effects. Steric hindrance is provided by specific side-chain groups of amino acids, which aids in inhibiting intermolecular attacks on the ester carbonyl; these attacks are responsible for hydrolyzing the ester bond.
Branched-chain aliphatic amino acids (valine and isoleucine) generate the most stable aminoacyl-tRNAs upon synthesis, with notably longer half-lives than those with low hydrolytic stability (for example, proline). The steric hindrance of valine and isoleucine is generated by the methyl group on the β-carbon of the side chain. Overall, the chemical nature of the bound amino acid is responsible for determining the stability of the aa-tRNA.
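For a pseudo-first-order hydrolysis process, the surviving fraction of charged tRNA and the half-life follow the standard first-order relation (a general kinetic identity rather than data specific to any particular aa-tRNA):
[aa-tRNA]t = [aa-tRNA]0 · e^(−kobs·t), with half-life t1/2 = ln 2 / kobs
so an aa-tRNA with a smaller observed hydrolysis rate constant kobs (such as valyl- or isoleucyl-tRNA, as noted above) has a proportionally longer half-life.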
Increased ionic strength resulting from sodium, potassium, and magnesium salts has been shown to destabilize the aa-tRNA acyl bond. Increased pH also destabilizes the bond and changes the ionization of the α-carbon amino group of the amino acid. The charged amino group can destabilize the aa-tRNA bond via the inductive effect. The elongation factor EF-Tu has been shown to stabilize the bond by preventing weak acyl linkages from being hydrolyzed.
All together, the actual stability of the ester bond influences the susceptibility of the aa-tRNA to hydrolysis within the body at physiological pH and ion concentrations. It is thermodynamically favorable that the aminoacylation process yield a stable aa-tRNA molecule, thus providing for the acceleration and productivity of polypeptide synthesis.
Drug targeting
Certain antibiotics, such as tetracyclines, prevent the aminoacyl-tRNA from binding to the ribosomal subunit in prokaryotes. It is understood that tetracyclines inhibit the attachment of aa-tRNA within the acceptor (A) site of prokaryotic ribosomes during translation. Tetracyclines are considered broad-spectrum antibiotic agents; these drugs exhibit capabilities of inhibiting the growth of both gram-positive and gram-negative bacteria, as well as other atypical microorganisms.
Furthermore, the TetM protein () is found to allow aminoacyl-tRNA molecules to bind to the ribosomal acceptor site even in the presence of tetracycline concentrations that would typically inhibit such binding. The TetM protein is regarded as a ribosomal protection protein, exhibiting GTPase activity that is dependent upon ribosomes. Research has demonstrated that in the presence of TetM proteins, tetracyclines are released from ribosomes. This allows aa-tRNA binding to the A site of ribosomes, as it is no longer precluded by tetracycline molecules. TetO is 75% similar to TetM, and both have some 45% similarity with EF-G. The structure of TetM in complex with the E. coli ribosome has been resolved.
See also
Aminoacyl tRNA synthetase
References
Protein biosynthesis | Aminoacyl-tRNA | [
"Chemistry"
] | 1,558 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
4,615,464 | https://en.wikipedia.org/wiki/Meta-learning%20%28computer%20science%29 | Meta-learning is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017, the term had not found a standard interpretation; however, the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, and hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn.
Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias. This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one domain, but not on the next. This poses strong restrictions on the use of machine learning or data mining techniques, since the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood.
By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta-learning approaches bear a strong resemblance to the critique of metaheuristic, a possibly related problem. A good analogy to meta-learning, and the inspiration for Jürgen Schmidhuber's early work (1987) and Yoshua Bengio et al.'s work (1991), considers that genetic evolution learns the learning procedure encoded in genes and executed in each individual's brain. In an open-ended hierarchical meta-learning system using genetic programming, better evolutionary methods can be learned by meta evolution, which itself can be improved by meta meta evolution, etc.
Definition
A proposed definition for a meta-learning system combines three requirements:
The system must include a learning subsystem.
Experience is gained by exploiting meta knowledge extracted
in a previous learning episode on a single dataset, or
from different domains.
Learning bias must be chosen dynamically.
Bias refers to the assumptions that influence the choice of explanatory hypotheses and not the notion of bias represented in the bias-variance dilemma. Meta-learning is concerned with two aspects of learning bias.
Declarative bias specifies the representation of the space of hypotheses, and affects the size of the search space (e.g., represent hypotheses using linear functions only).
Procedural bias imposes constraints on the ordering of the inductive hypotheses (e.g., preferring smaller hypotheses).
Common approaches
There are three common approaches:
using (cyclic) networks with external or internal memory (model-based)
learning effective distance metrics (metrics-based)
explicitly optimizing model parameters for fast learning (optimization-based).
Model-Based
Model-based meta-learning models update their parameters rapidly with a few training steps, which can be achieved by their internal architecture or controlled by another meta-learner model.
Memory-Augmented Neural Networks
A Memory-Augmented Neural Network, or MANN for short, is claimed to be able to encode new information quickly and thus to adapt to new tasks after only a few examples.
Meta Networks
Meta Networks (MetaNet) learns a meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization.
Metric-Based
The core idea in metric-based meta-learning is similar to nearest-neighbour algorithms, in which the weights are generated by a kernel function. It aims to learn a metric or distance function over objects. The notion of a good metric is problem-dependent: it should represent the relationship between inputs in the task space and facilitate problem solving.
Convolutional Siamese Neural Network
A Siamese neural network is composed of two twin networks whose outputs are jointly trained, with a function on top that learns the relationship between pairs of input data samples. The two networks are identical, sharing the same weights and network parameters.
Matching Networks
Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
Relation Network
The Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting.
Prototypical Networks
Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve satisfactory results.
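As an illustration, the following is a minimal sketch of the prototype computation and nearest-prototype classification step, assuming the support and query examples have already been embedded by some network; the NumPy interface, toy data and squared-Euclidean distance are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def prototypical_predict(support_emb, support_labels, query_emb):
    """Classify query embeddings by distance to per-class prototypes."""
    classes = np.unique(support_labels)
    # Prototype = mean embedding of each class's support examples.
    prototypes = np.stack([support_emb[support_labels == c].mean(axis=0)
                           for c in classes])
    # Squared Euclidean distance from every query to every prototype.
    d2 = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]

# Toy 3-way episode with 2-D embeddings (in practice these would come from
# a learned embedding network, which is omitted here).
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 4.9], [0.0, 5.0], [0.1, 5.0]])
labels = np.array([0, 0, 1, 1, 2, 2])
queries = np.array([[0.05, 0.1], [4.8, 5.1]])
print(prototypical_predict(support, labels, queries))  # expected: [0 1]
```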
Optimization-Based
Optimization-based meta-learning algorithms adjust the optimization algorithm itself so that the model can learn well from only a few examples.
LSTM Meta-Learner
An LSTM-based meta-learner learns the exact optimization algorithm used to train another learner (a neural network classifier) in the few-shot regime. The parametrization allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training.
Temporal Discreteness
Model-Agnostic Meta-Learning (MAML) is a fairly general optimization algorithm, compatible with any model that learns through gradient descent.
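A minimal sketch of the MAML-style inner/outer loop on a toy one-dimensional regression family, using analytic gradients and the first-order approximation (the outer gradient is evaluated at the adapted parameters, ignoring second-order terms); the task family, step sizes and model are illustrative assumptions, not the published experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # Gradient of the mean squared error for the linear model y_hat = w * x.
    return 2.0 * np.mean((w * x - y) * x)

w = 0.0                    # meta-initialisation being learned
alpha, beta = 0.1, 0.01    # inner and outer learning rates

for step in range(2000):
    meta_grad = 0.0
    for _ in range(5):                                   # a batch of tasks
        a = rng.uniform(0.5, 2.0)                        # task-specific slope
        x_s = rng.uniform(-1.0, 1.0, 10); y_s = a * x_s  # support set
        x_q = rng.uniform(-1.0, 1.0, 10); y_q = a * x_q  # query set
        w_task = w - alpha * loss_grad(w, x_s, y_s)      # one inner adaptation step
        # First-order approximation: outer gradient taken at the adapted parameters.
        meta_grad += loss_grad(w_task, x_q, y_q)
    w -= beta * meta_grad / 5                            # meta-update of the initialisation
```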
Reptile
Reptile is a remarkably simple meta-learning optimization algorithm which, like MAML, relies on meta-optimization through gradient descent and is model-agnostic.
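For comparison, a minimal sketch of the Reptile update on the same kind of toy regression family: rather than computing an outer gradient, the meta-parameters are moved a fraction of the way towards the task-adapted parameters (the task family, step sizes and model are again illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

def loss_grad(w, x, y):
    # Gradient of the mean squared error for the linear model y_hat = w * x.
    return 2.0 * np.mean((w * x - y) * x)

w = 0.0                       # meta-initialisation being learned
alpha, epsilon = 0.1, 0.05    # inner learning rate and meta step size

for step in range(2000):
    a = rng.uniform(0.5, 2.0)                 # sample a task (slope)
    x = rng.uniform(-1.0, 1.0, 10)
    y = a * x
    w_task = w
    for _ in range(5):                        # several inner SGD steps on the task
        w_task -= alpha * loss_grad(w_task, x, y)
    # Reptile meta-update: interpolate towards the task-adapted weights.
    w += epsilon * (w_task - w)
```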
Examples
Some approaches which have been viewed as instances of meta-learning:
Recurrent neural networks (RNNs) are universal computers. In 1993, Jürgen Schmidhuber showed how "self-referential" RNNs can in principle learn by backpropagation to run their own weight change algorithm, which may be quite different from backpropagation. In 2001, Sepp Hochreiter & A.S. Younger & P.R. Conwell built a successful supervised meta-learner based on Long short-term memory RNNs. It learned through backpropagation a learning algorithm for quadratic functions that is much faster than backpropagation. Researchers at Deepmind (Marcin Andrychowicz et al.) extended this approach to optimization in 2017.
In the 1990s, Meta Reinforcement Learning or Meta RL was achieved in Schmidhuber's research group through self-modifying policies written in a universal programming language that contains special instructions for changing the policy itself. There is a single lifelong trial. The goal of the RL agent is to maximize reward. It learns to accelerate reward intake by continually improving its own learning algorithm which is part of the "self-referential" policy.
An extreme type of Meta Reinforcement Learning is embodied by the Gödel machine, a theoretical construct which can inspect and modify any part of its own software which also contains a general theorem prover. It can achieve recursive self-improvement in a provably optimal way.
Model-Agnostic Meta-Learning (MAML) was introduced in 2017 by Chelsea Finn et al. Given a sequence of tasks, the parameters of a given model are trained such that few iterations of gradient descent with few training data from a new task will lead to good generalization performance on that task. MAML "trains the model to be easy to fine-tune." MAML was successfully applied to few-shot image classification benchmarks and to policy-gradient-based reinforcement learning.
Variational Bayes-Adaptive Deep RL (VariBAD) was introduced in 2019. While MAML is optimization-based, VariBAD is a model-based method for meta reinforcement learning, and leverages a variational autoencoder to capture the task information in an internal memory, thus conditioning its decision making on the task.
When addressing a set of tasks, most meta learning approaches optimize the average score across all tasks. Hence, certain tasks may be sacrificed in favor of the average score, which is often unacceptable in real-world applications. By contrast, Robust Meta Reinforcement Learning (RoML) focuses on improving low-score tasks, increasing robustness to the selection of task. RoML works as a meta-algorithm, as it can be applied on top of other meta learning algorithms (such as MAML and VariBAD) to increase their robustness. It is applicable to both supervised meta learning and meta reinforcement learning.
Discovering meta-knowledge works by inducing knowledge (e.g. rules) that expresses how each learning method will perform on different learning problems. The metadata is formed by characteristics of the data (general, statistical, information-theoretic,... ) in the learning problem, and characteristics of the learning algorithm (type, parameter settings, performance measures,...). Another learning algorithm then learns how the data characteristics relate to the algorithm characteristics. Given a new learning problem, the data characteristics are measured, and the performance of different learning algorithms are predicted. Hence, one can predict the algorithms best suited for the new problem.
Stacked generalisation works by combining multiple (different) learning algorithms. The metadata is formed by the predictions of those different algorithms. Another learning algorithm learns from this metadata to predict which combinations of algorithms give generally good results. Given a new learning problem, the predictions of the selected set of algorithms are combined (e.g. by (weighted) voting) to provide the final prediction. Since each algorithm is deemed to work on a subset of problems, a combination is hoped to be more flexible and able to make good predictions.
Boosting is related to stacked generalisation, but uses the same algorithm multiple times, where the examples in the training data get different weights over each run. This yields different predictions, each focused on rightly predicting a subset of the data, and combining those predictions leads to better (but more expensive) results.
Dynamic bias selection works by altering the inductive bias of a learning algorithm to match the given problem. This is done by altering key aspects of the learning algorithm, such as the hypothesis representation, heuristic formulae, or parameters. Many different approaches exist.
Inductive transfer studies how the learning process can be improved over time. Metadata consists of knowledge about previous learning episodes and is used to efficiently develop an effective hypothesis for a new task. A related approach is called learning to learn, in which the goal is to use acquired knowledge from one domain to help learning in other domains.
Other approaches using metadata to improve automatic learning are learning classifier systems, case-based reasoning and constraint satisfaction.
Some initial, theoretical work has been initiated to use Applied Behavioral Analysis as a foundation for agent-mediated meta-learning about the performances of human learners, and adjust the instructional course of an artificial agent.
AutoML such as Google Brain's "AI building AI" project, which according to Google briefly exceeded existing ImageNet benchmarks in 2017.
References
External links
Metalearning article in Scholarpedia
Video courses about Meta-Learning with step-by-step explanation of MAML, Prototypical Networks, and Relation Networks.
Machine learning | Meta-learning (computer science) | [
"Engineering"
] | 2,299 | [
"Artificial intelligence engineering",
"Machine learning"
] |
4,616,444 | https://en.wikipedia.org/wiki/Hydride%20vapour-phase%20epitaxy | Hydride vapour-phase epitaxy (HVPE) is an epitaxial growth technique often employed to produce semiconductors such as GaN, GaAs, InP and their related compounds, in which hydrogen chloride is reacted at elevated temperature with the group-III metals to produce gaseous metal chlorides, which then react with ammonia to produce the group-III nitrides. Carrier gasses commonly used include ammonia, hydrogen and various chlorides.
HVPE technology can significantly reduce the cost of production compared to the most common method, metalorganic chemical vapour deposition (MOCVD). Cost reduction is achieved through significantly lower NH3 consumption, cheaper source materials than in MOCVD, and lower capital equipment costs, owing to the high growth rate.
Developed in the 1960s, it was the first epitaxial method used for the fabrication of single GaN crystals.
Hydride vapour-phase epitaxy (HVPE) is the only III–V and III–N semiconductor crystal growth process working close to equilibrium. This means that the condensation reactions exhibit fast kinetics: one observes immediate reactivity to an increase of the vapour-phase supersaturation towards condensation. This property is due to the use of chloride vapour precursors GaCl and InCl, whose dechlorination frequency is high enough that there is no kinetic delay. A wide range of growth rates, from 1 to 100 micrometers per hour, can then be set as a function of the vapour-phase supersaturation. Another HVPE feature is that growth is governed by surface kinetics: adsorption of gaseous precursors, decomposition of ad-species, desorption of decomposition products, and surface diffusion towards kink sites. This property is of benefit when it comes to selective growth on patterned substrates for the synthesis of objects and structures exhibiting a 3D morphology. The morphology is only dependent on the intrinsic growth anisotropy of crystals. By setting experimental growth parameters of temperature and composition of the vapour phase, one can control this anisotropy, which can be very high as growth rates can be varied by an order of magnitude. Structures with various novel aspect ratios can therefore be shaped. The accurate control of growth morphology was used for the making of GaN quasi-substrates, arrays of GaAs and GaN structures on the micrometer and submicrometer scales, and GaAs tips for local spin injection. The fast dechlorination property is also used for the VLS growth of GaAs and GaN nanowires of exceptional length.
References
Chemical vapor deposition
Thin film deposition
Semiconductor device fabrication | Hydride vapour-phase epitaxy | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 545 | [
"Microtechnology",
"Thin film deposition",
"Coatings",
"Thin films",
"Semiconductor device fabrication",
"Chemical vapor deposition",
"Planes (geometry)",
"Solid state engineering"
] |
4,616,703 | https://en.wikipedia.org/wiki/Halosere | A halosere is an ecological succession in saline water environments. An example of a halosere is a salt marsh.
In a river estuary, large amounts of silt are deposited by the ebbing tides, as well as inflowing rivers.
Plants in halosere
The earliest plant colonizers are algae and Zostera (eelgrass), which can tolerate submergence by the tide for most of the 12-hour cycle and which trap mud, causing it to accumulate.
Two other colonizer plants are Salicornia and Spartina, which are both halophytes. Halophytes are plants that can tolerate saline conditions; they grow on the intertidal mudflats with a maximum of four hours' exposure to air every 12 hours. On a large scale, halophytes have colonized the halosere on the banks of the Great Salt Lake in Utah. Halosere vegetation can also be found in the salt marshes of the Wadden Sea islands and the zone towards the dunes.
River estuaries
Haloseres in river estuaries consist of mudflats and the so-called sward zone. Halosere sward zones can be found in the Llanrhidian marsh on the Gower Peninsula.
See also
Seral community
References
Ecological succession
Wetlands
Estuaries | Halosere | [
"Environmental_science"
] | 276 | [
"Hydrology",
"Wetlands"
] |
7,961,605 | https://en.wikipedia.org/wiki/CoreASM | CoreASM is an open source project (licensed under Academic Free License version 3.0) that focuses on the design of a lean executable ASM (Abstract State Machines) language, in combination with a supporting tool environment for high-level design, experimental validation, and formal verification (where appropriate) of abstract system models.
Abstract state machines are known for their versatility in modeling of algorithms, architectures, languages, protocols, and virtually all kinds of sequential, parallel, and distributed systems. The ASM formalism has been studied extensively by researchers in academia and industry for more than 15 years with the intention to bridge the gap between formal and pragmatic approaches.
Model-based systems engineering can benefit from abstract executable specifications as a tool for design exploration and experimental validation through simulation and testing. Building on experiences with two generations of ASM tools, a novel executable ASM language, called CoreASM, is being developed (see CoreASM homepage).
The CoreASM language emphasizes freedom of experimentation, and supports the evolutionary nature of design as a product of creativity. It is particularly suited to exploring the problem space for the purpose of writing an initial specification. The CoreASM language allows writing of highly abstract and concise specifications by minimizing the need for encoding in mapping the problem space to a formal model, and by allowing explicit declaration of the parts of the specification that are purposely left abstract. The principle of minimality, in combination with robustness of the underlying mathematical framework, improves modifiability of specifications, while effectively supporting the highly iterative nature of specification and design.
References
R. Farahbod, V. Gervasi, U. Glässer and M. Memon. Design Exploration and Experimental Validation of Abstract Requirements, Proceedings of the 12th International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ'06), June 2006, Luxembourg, Grand-Duchy of Luxembourg, Essener Informatik Beitrage, .
R. Farahbod, V. Gervasi, U. Glässer, and M. Memon. Design and Specification of the CoreASM Execution Engine, Part 1: the Kernel. Technical Report SFU-CMPT-TR-2006-09, Simon Fraser University, May 2006.
R. Farahbod, V. Gervasi, and U. Glässer. CoreASM: An extensible ASM execution engine. In D. Beauquier, E. Börger and A. Slissenko (Eds.), Proc. 12th International Workshop on Abstract State Machines, Paris, March 2005, pages 153–165
... further references and documentation
External links
The CoreASM Project at GitHub
The CoreASM wiki
Abstract State Machines homepage
Formal specification languages
Formal methods tools
Software using the Academic Free License | CoreASM | [
"Mathematics"
] | 583 | [
"Formal methods tools",
"Mathematical software"
] |
7,962,469 | https://en.wikipedia.org/wiki/Ultra-linear | Ultra-linear electronic circuits are those used to couple a tetrode or pentode vacuum-tube (also called "electron-valve") to a load (e.g. to a loudspeaker).
'Ultra-linear' is a special case of 'distributed loading', a circuit technique patented by Alan Blumlein in 1937 (Patent No. 496,883), although the name 'distributed loading' is probably due to Mullard. In 1938 he applied for the US patent 2218902. The particular advantages of ultra-linear operation, and the name itself, were published by David Hafler and Herbert Keroes in the early 1950s through articles in the American magazine "Audio Engineering". The special case of 'ultra linear' operation is sometimes confused with the more general principle of distributed loading.
Operation
A pentode or tetrode vacuum-tube (valve) configured as a common-cathode amplifier (where the output signal appears on the plate) may be operated as:
a pentode or tetrode, in which the screen-grid is connected to a stable DC voltage so there are no signal variations on the screen-grid (i.e. the screen-grid has 0% of the plate's output signal impressed on it), or
a triode, in which the screen-grid is connected to the plate (i.e. the screen-grid has 100% of the plate's output signal voltage impressed on it), or
a blend of triode and pentode, in which the screen-grid has a percentage (between 0% and 100%) of the plate's output signal impressed on it. This is the basis of the distributed load circuit, and is usually achieved by incorporating a suitable "tap" on the primary winding of the output transformer that the vacuum-tube (valve) is connected to.
The impression of any portion of the output signal onto the screen-grid can be seen as a form of feedback, which alters the behaviour of the electron stream passing from cathode to anode.
Advantages
By judicious choice of the screen-grid percentage-tap, the benefits of both triode and pentode vacuum-tubes can be realised. Over a very narrow range of percentage-tapping, distortion is found to fall to an unusually low value—sometimes less than for either triode or pentode operation—while power efficiency is only slightly reduced compared with full pentode operation. The optimum percentage-tap to achieve ultra-linear operation depends mainly on the type of valve used; a commonly seen percentage is 43% (of the number of transformer primary turns on the plate-circuit) which applies to the KT88, although many other valve types have optimum values close to this. A value of 20% was recommended for 6V6GTs. Mullard circuits such as the 5-20 also used 20% distributed loading (but did not achieve ultra-linear operation), while LEAK amplifiers used 50%.
The characteristics of the circuit which make distributed loading suitable for audio power amplifiers, when compared to a triode, beam tetrode or pentode based amplifier, are:
The output impedance is lowered to be about half that achieved with a triode.
Distortion is lowered to approach that achieved with a triode tube, but may be even less for ultra-linear operation.
The power output is higher than from a triode, approaching that delivered by a pentode.
The power output is more constant as distributed loading is a combination of a transconductance amplifier and a voltage amplifier.
The distributed load circuit may be applied to either push-pull or single-ended amplifier circuits.
Note that the term 'ultra linear' was expressly reserved only for the condition of optimum tapping point. As Hafler and Keroes wrote:
"Our patent claims cover the use of any primary tap in this circuit arrangement. However, we have restricted the use of the term "Ultra Linear" to the conditions where the dynamic plate characteristic curves are most linear".
Related circuits
The "QUAD II" amplifier from QUAD uses a circuit in which the cathode has a portion of the output signal applied to it, and was referred to as "distributed load" by Peter Walker of QUAD. In the United States, McIntosh Laboratories used this technique extensively in their vacuum-tube power amplifiers. Audio Research Corp have also used a similar circuit.
References
Vacuum tubes
Electronic amplifiers | Ultra-linear | [
"Physics",
"Technology"
] | 925 | [
"Vacuum tubes",
"Vacuum",
"Electronic amplifiers",
"Amplifiers",
"Matter"
] |
7,968,026 | https://en.wikipedia.org/wiki/Centrifugal%20evaporator | A centrifugal evaporator is a device used in chemical and biochemical laboratories for the efficient and gentle evaporation of solvents from many samples at the same time, including samples contained in microtitre plates. If only one sample requires evaporation, a rotary evaporator is most often used. The most advanced modern centrifugal evaporators not only concentrate many samples at the same time, they eliminate solvent bumping and can handle solvents with boiling points of up to 220 °C. This is more than adequate for the modern high throughput laboratory.
History
The centrifugal evaporator dates from the second half of the 1800s. Patent US158764 was granted in 1875 to Conrad Wendel and William Florich for an improvement in centrifugal evaporators.
Design
A centrifugal evaporator often comprises a vacuum pump connected to a centrifuge chamber in which the samples are placed. Many systems also have a cold trap or solvent condenser placed in line between the vacuum pump and the centrifuge chamber to collect the evaporated solvents. The most efficient systems also have a cold trap on the pump exhaust. There are many further developments available from manufacturers to speed up the process, and to provide protection for delicate samples.
Mechanism
The system works by lowering the pressure in the centrifuge system - as the pressure drops so does the boiling point of the solvent(s) in the system. When the pressure is sufficiently low that the boiling points of the solvents are below the temperature of the sample holder, then they will boil. This enables solvent to be rapidly removed while the samples themselves are not heated to damaging temperatures. High performance systems can remove very high boiling solvents such as dimethyl sulfoxide (DMSO) or N-methyl-2-pyrrolidone (NMP) while keeping sample temperatures below 40 °C at all times.
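As a rough illustration of how reduced pressure lowers the boiling point, the sketch below estimates the boiling temperature of water at a given chamber pressure using the Antoine equation; the constants are commonly tabulated values for water (valid roughly between 1 and 100 °C) and the example pressures are assumptions for illustration, not figures from any particular evaporator.

```python
import math

# Commonly tabulated Antoine constants for water (P in mmHg, T in deg C),
# assumed here for illustration; valid roughly from 1 to 100 deg C.
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_mmhg):
    """Temperature at which water's vapour pressure equals the chamber pressure."""
    # Antoine equation: log10(P) = A - B / (C + T), rearranged for T.
    return B / (A - math.log10(pressure_mmhg)) - C

print(round(boiling_point_c(760), 1))  # ~100.0 deg C at atmospheric pressure
print(round(boiling_point_c(20), 1))   # ~22 deg C at 20 mmHg (about 27 mbar)
```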
The centrifugal force generated by spinning the centrifuge rotor creates a pressure gradient within the solvent contained in the tubes or vials; this means that the samples boil from the top down, helping to prevent "bumping". The most advanced systems apply the vacuum slowly and run the rotor at accelerations of around 500 times gravity; this approach has been shown to prevent bumping and was patented by Genevac in the late 1990s.
References
External links
Decanter Centrifuge
evaporator
Evaporators
Laboratory equipment | Centrifugal evaporator | [
"Chemistry",
"Engineering"
] | 505 | [
"Centrifugation",
"Chemical equipment",
"Distillation",
"Evaporators",
"Centrifuges"
] |
7,970,283 | https://en.wikipedia.org/wiki/Sequential%20probability%20ratio%20test | The sequential probability ratio test (SPRT) is a specific sequential hypothesis test, developed by Abraham Wald and later proven to be optimal by Wald and Jacob Wolfowitz. Neyman and Pearson's 1933 result inspired Wald to reformulate it as a sequential analysis problem. The Neyman-Pearson lemma, by contrast, offers a rule of thumb for when all the data is collected (and its likelihood ratio known).
While originally developed for use in quality control studies in the realm of manufacturing, SPRT has been formulated for use in the computerized testing of human examinees as a termination criterion.
Theory
As in classical hypothesis testing, SPRT starts with a pair of hypotheses, say H_0 and H_1 for the null hypothesis and alternative hypothesis respectively. They must be specified as follows:
H_0: p = p_0
H_1: p = p_1
The next step is to calculate the cumulative sum of the log-likelihood ratio, log Λ_i (where Λ_i is the likelihood ratio of H_1 against H_0 for the i-th observation), as new data arrive: with S_0 = 0, then, for i = 1, 2, ...,
S_i = S_(i−1) + log Λ_i
The stopping rule is a simple thresholding scheme:
a < S_i < b: continue monitoring (critical inequality)
S_i ≥ b: Accept H_1
S_i ≤ a: Accept H_0
where a and b (with a < 0 < b) depend on the desired type I and type II errors, α and β. They may be chosen as follows:
a ≈ log( β / (1 − α) )
and
b ≈ log( (1 − β) / α )
In other words, α and β must be decided beforehand in order to set the thresholds appropriately. The numerical values will depend on the application. The reason these are only approximations is that, in the discrete case, the signal may cross a threshold between samples. Thus, depending on the penalty of making an error and the sampling frequency, one might set the thresholds more aggressively. The exact bounds are correct in the continuous case.
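A minimal sketch of the resulting procedure, assuming the per-observation log-likelihood ratio is supplied as a function; the names and interface are illustrative, not part of Wald's formulation.

```python
import math

def sprt(observations, log_lr, alpha, beta):
    """Run Wald's SPRT over a stream of observations.

    observations : iterable of data points
    log_lr       : function x -> log[f1(x) / f0(x)], the per-sample
                   log-likelihood ratio of H_1 against H_0
    alpha, beta  : desired type I and type II error probabilities
    Returns ("H0" or "H1", samples used), or ("undecided", n) if the
    data run out before either boundary is crossed.
    """
    a = math.log(beta / (1.0 - alpha))   # lower boundary: accept H_0
    b = math.log((1.0 - beta) / alpha)   # upper boundary: accept H_1
    s, n = 0.0, 0
    for x in observations:
        n += 1
        s += log_lr(x)
        if s >= b:
            return "H1", n
        if s <= a:
            return "H0", n
    return "undecided", n
```

Because the boundaries use Wald's approximations, the realised error rates can differ slightly from the nominal α and β, as noted above.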
Example
A textbook example is parameter estimation of a probability distribution function. Consider the exponential distribution:
f_θ(x) = θ^(−1) e^(−x/θ), x, θ > 0
The hypotheses are
H_0: θ = θ_0 and H_1: θ = θ_1, with θ_1 > θ_0.
Then the log-likelihood function (LLF) for one sample is
log Λ(x) = log[ (θ_0/θ_1) e^(x/θ_0 − x/θ_1) ] = log(θ_0/θ_1) + (1/θ_0 − 1/θ_1) x
The cumulative sum of the LLFs for all x_i is
S_n = n log(θ_0/θ_1) + (1/θ_0 − 1/θ_1) Σ x_i
Accordingly, the stopping rule is:
a < n log(θ_0/θ_1) + (1/θ_0 − 1/θ_1) Σ x_i < b
After re-arranging we finally find
(a + n log(θ_1/θ_0)) / (1/θ_0 − 1/θ_1) < Σ x_i < (b + n log(θ_1/θ_0)) / (1/θ_0 − 1/θ_1)
The thresholds are simply two parallel lines with slope log(θ_1/θ_0) / (1/θ_0 − 1/θ_1) when Σ x_i is plotted against the sample number n. Sampling should stop when the sum of the samples makes an excursion outside the continue-sampling region.
Applications
Manufacturing
The test is done on the proportion metric, and tests that a variable p is equal to one of two desired points, p1 or p2. The region between these two points is known as the indifference region (IR). For example, suppose you are performing a quality control study on a factory lot of widgets. Management would like the lot to have 3% or less defective widgets, but 1% or less is the ideal lot that would pass with flying colors. In this example, p1 = 0.01 and p2 = 0.03 and the region between them is the IR because management considers these lots to be marginal and is OK with them being classified either way. Widgets would be sampled one at a time from the lot (sequential analysis) until the test determines, within an acceptable error level, that the lot is ideal or should be rejected.
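A hedged sketch of this widget-inspection example, testing H_0: p = 0.01 against H_1: p = 0.03 on a stream of pass/fail inspections; the simulated true defect rate, error levels and random seed are illustrative assumptions.

```python
import math
import random

p1, p2 = 0.01, 0.03        # ideal and marginal defect rates; the IR lies between them
alpha = beta = 0.05        # illustrative type I and type II error levels
a = math.log(beta / (1 - alpha))   # lower boundary: accept H_0 (pass the lot)
b = math.log((1 - beta) / alpha)   # upper boundary: accept H_1 (reject the lot)

def log_lr(defective):
    """Per-widget log-likelihood ratio of H_1 against H_0 for one inspection."""
    return math.log(p2 / p1) if defective else math.log((1 - p2) / (1 - p1))

random.seed(42)
true_p = 0.01              # simulate sampling widgets from an ideal lot
s, n = 0.0, 0
while a < s < b:
    n += 1
    s += log_lr(random.random() < true_p)

print("reject lot" if s >= b else "pass lot", "after inspecting", n, "widgets")
```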
Testing of human examinees
The SPRT is currently the predominant method of classifying examinees in a variable-length computerized classification test (CCT). The two parameters p1 and p2 are specified by determining a cutscore (threshold) for examinees on the proportion correct metric, and selecting a point above and below that cutscore. For instance, suppose the cutscore is set at 70% for a test. We could select p1 = 0.65 and p2 = 0.75. The test then evaluates the likelihood that an examinee's true score on that metric is equal to one of those two points. If the examinee is determined to be at 75%, they pass, and they fail if they are determined to be at 65%.
These points are not specified completely arbitrarily. A cutscore should always be set with a legally defensible method, such as a modified Angoff procedure. Again, the indifference region represents the region of scores that the test designer is OK with going either way (pass or fail). The upper parameter p2 is conceptually the highest level that the test designer is willing to accept for a Fail (because everyone below it has a good chance of failing), and the lower parameter p1 is the lowest level that the test designer is willing to accept for a pass (because everyone above it has a decent chance of passing). While this definition may seem to be a relatively small burden, consider the high-stakes case of a licensing test for medical doctors: at just what point should we consider somebody to be at one of these two levels?
While the SPRT was first applied to testing in the days of classical test theory, as is applied in the previous paragraph, Reckase (1983) suggested that item response theory be used to determine the p1 and p2 parameters. The cutscore and indifference region are defined on the latent ability (theta) metric, and translated onto the proportion metric for computation. Research on CCT since then has applied this methodology for several reasons:
Large item banks tend to be calibrated with IRT
This allows more accurate specification of the parameters
By using the item response function for each item, the parameters are easily allowed to vary between items.
Detection of anomalous medical outcomes
Spiegelhalter et al. have shown that SPRT can be used to monitor the performance of doctors, surgeons and other medical practitioners in such a way as to give early warning of potentially anomalous results. In their 2003 paper, they showed how it could have helped identify Harold Shipman as a murderer well before he was actually identified.
Extensions
MaxSPRT
More recently, in 2011, an extension of the SPRT method called Maximized Sequential Probability Ratio Test (MaxSPRT) was introduced. The salient feature of MaxSPRT is the allowance of a composite, one-sided alternative hypothesis, and the introduction of an upper stopping boundary. The method has been used in several medical research studies.
See also
CUSUM
Computerized classification test
Wald test
Likelihood-ratio test
References
Further reading
Holger Wilker: Sequential-Statistik in der Praxis, BoD, Norderstedt 2012, .
External links
Wald's Sequential Probability Ratio Test for R by Stéphane Bottine
Wald's Sequential Probability Ratio Test for Python by Zhenning Yu
Statistical tests
Sequential methods
Mathematical psychology | Sequential probability ratio test | [
"Mathematics"
] | 1,292 | [
"Applied mathematics",
"Mathematical psychology"
] |
2,492,288 | https://en.wikipedia.org/wiki/Afghanite | Afghanite, (Na,K)22Ca10[Si24Al24O96](SO4)6Cl6, is a hydrous sodium, calcium, potassium, sulfate, chloride, carbonate alumino-silicate mineral. Afghanite is a feldspathoid of the cancrinite group and typically occurs with sodalite group minerals. It forms blue to colorless, typically massive crystals in the trigonal crystal system. The lowering of the symmetry from typical (for cancrinite group) hexagonal one is due to ordering of Si and Al. It has a Mohs hardness of 5.5 to 6 and a specific gravity of 2.55 to 2.65. It has refractive index values of nω = 1.523 and nε = 1.529. It has one direction of perfect cleavage and exhibits conchoidal fracture. It fluoresces a bright orange.
It was discovered in 1968 in the Lapis-lazuli Mine, Sar-i Sang, Badakhshan Province, Afghanistan and takes its name from that country. It has also been described from localities in Germany, Italy, the Pamir Mountains of Tajikistan, near Lake Baikal in Siberia, New York and Newfoundland. It occurs as veinlets in lazurite crystals in the Afghan location and in altered limestone xenoliths within pumice in Pitigliano, Tuscany, Italy.
It is used as a gemstone.
See also
List of gemstones
List of minerals
References
Sodium minerals
Potassium minerals
Calcium minerals
Aluminium minerals
Feldspathoid
Trigonal minerals
Minerals in space group 159
Gemstones
Minerals described in 1968 | Afghanite | [
"Physics"
] | 340 | [
"Materials",
"Gemstones",
"Matter"
] |
2,492,332 | https://en.wikipedia.org/wiki/Restriction%20site | Restriction sites, or restriction recognition sites, are located on a DNA molecule containing specific (4-8 base pairs in length) sequences of nucleotides, which are recognized by restriction enzymes. These are generally palindromic sequences (because restriction enzymes usually bind as homodimers), and a particular restriction enzyme may cut the sequence between two nucleotides within its recognition site, or somewhere nearby.
Function
For example, the common restriction enzyme EcoRI recognizes the palindromic sequence GAATTC and cuts between the G and the A on both the top and bottom strands. This leaves an overhang of AATT (an end-portion of a DNA strand with no attached complement), known as a sticky end, on each cut end. The overhang can then be used to ligate in (see DNA ligase) a piece of DNA with a complementary overhang (another EcoRI-cut piece, for example).
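A minimal sketch of locating EcoRI recognition sites in a sequence and splitting the top strand into fragments at the cut position; the example sequence is hypothetical.

```python
def ecori_digest(seq):
    """Cut a DNA sequence at every EcoRI site (G^AATTC), returning the
    top-strand fragments; each internal cut leaves a 5' AATT overhang."""
    site, cut_offset = "GAATTC", 1          # EcoRI cuts between the G and the first A
    fragments, start, i = [], 0, 0
    while True:
        i = seq.find(site, i)
        if i == -1:
            break
        fragments.append(seq[start:i + cut_offset])
        start = i + cut_offset
        i += 1
    fragments.append(seq[start:])
    return fragments

# Hypothetical sequence containing a single EcoRI site
print(ecori_digest("ATCGGAATTCGGTA"))       # ['ATCGG', 'AATTCGGTA']
```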
Some restriction enzymes cut DNA at a restriction site in a manner which leaves no overhang, called a blunt end. Blunt ends are much less likely to be ligated by a DNA ligase because a blunt end does not have the overhanging bases that can anneal to a complementary overhang. Sticky ends, however, are more likely to be joined successfully with the help of a DNA ligase because of their exposed, unpaired nucleotides. For example, a sticky end trailing with AATTG is more likely to be ligated than a blunt end in which both the 5' and 3' DNA strands are already paired; in the blunt-ended case the AATTG would already be paired with its complement (TTAAC), which reduces the functionality of the DNA ligase enzyme.
Applications
Restriction sites can be used for multiple applications in molecular biology such as identifying restriction fragment length polymorphisms (RFLPs). Restriction sites are also important consideration to be aware of when designing plasmids.
Databases
Several databases exist for restriction sites and enzymes, of which the largest noncommercial database is REBASE. Recently, it has been shown that statistically significant nullomers (i.e. short absent motifs that would otherwise be expected to occur) in virus genomes are restriction sites, indicating that viruses have probably eliminated these motifs to facilitate invasion of bacterial hosts. The Nullomers Database contains a comprehensive catalogue of minimal absent motifs, many of which may be as-yet-unknown restriction motifs.
See also
List of restriction enzyme cutting sites
References
Genetics techniques
Molecular biology
Restriction enzymes | Restriction site | [
"Chemistry",
"Engineering",
"Biology"
] | 514 | [
"Genetics techniques",
"Genetic engineering",
"Molecular biology",
"Biochemistry",
"Restriction enzymes"
] |
2,493,007 | https://en.wikipedia.org/wiki/Kilogram-force%20per%20square%20centimetre | A kilogram-force per square centimetre (kgf/cm2), often just kilogram per square centimetre (kg/cm2), or kilopond per square centimetre (kp/cm2) is a deprecated unit of pressure using metric units. It is not a part of the International System of Units (SI), the modern metric system. 1 kgf/cm2 equals 98.0665 kPa (kilopascals) or 0.980665 bar—2% less than a bar. It is also known as a technical atmosphere (symbol: at).
Use of the kilogram-force per square centimetre continues primarily due to older pressure measurement devices still in use.
The unit provides an intuitive sense of how a body's mass, under roughly standard gravity, applies force over a surface area, i.e. kilogram-force per square (centi-)metre.
In SI units, the unit is converted to the SI derived unit pascal (Pa), which is defined as one newton per square metre (N/m2). A newton is equal to 1 kg⋅m/s2, and a kilogram-force is 9.80665 N, meaning that 1 kgf/cm2 equals 98.0665 kilopascals (kPa).
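Written out explicitly (using 1 cm2 = 0.0001 m2), the conversion is:
1 kgf/cm2 = 9.80665 N / 0.0001 m2 = 98,066.5 Pa = 98.0665 kPa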
In some older publications, kilogram-force per square centimetre is abbreviated ksc instead of kg/cm2.
1 at = 98.0665 kPa ≈ 0.96784 standard atmospheres
Ambiguity of at
The symbol "at" clashes with that of the katal (symbol: "kat"), the SI unit of catalytic activity; a kilotechnical atmosphere would have the symbol "kat", indistinguishable from the symbol for the katal. It also clashes with that of the non-SI unit, the attotonne, but that unit would more likely be rendered as the equivalent SI unit, the picogram.
References
Units of pressure
Non-SI metric units | Kilogram-force per square centimetre | [
"Mathematics"
] | 458 | [
"Non-SI metric units",
"Quantity",
"Units of measurement",
"Units of pressure"
] |
2,494,190 | https://en.wikipedia.org/wiki/RSPB%20Dearne%20Valley%20Old%20Moor | RSPB Dearne Valley Old Moor is a wetlands nature reserve in the Dearne Valley near Barnsley, South Yorkshire, run by the Royal Society for the Protection of Birds (RSPB). It lies at the junction of the A633 and A6195 roads and is bordered by the Trans Pennine Trail long-distance path. Following the end of coal mining locally, the Dearne Valley had become a derelict post-industrial area, and the removal of soil to cover an adjacent polluted site enabled the creation of the wetlands at Old Moor.
Old Moor is managed to benefit bitterns, breeding waders such as lapwings, redshanks and avocets, and wintering golden plovers. A calling male little bittern was present in the summers of 2015 and 2016.
Barnsley Metropolitan Borough Council created the reserve, which opened in 1998, but the RSPB took over management of the site in 2003 and developed it further, with funding from several sources including the National Lottery Heritage Fund. The reserve, along with others nearby, forms part of a landscape-scale project to create wildlife habitat in the Dearne Valley. It is an 'Urban Gateway' site with facilities intended to attract visitors, particularly families. In 2018, the reserve had about 100,000 visits. The reserve may benefit in the future from new habitat creation beyond the reserve and improved accessibility, although there is also a potential threat to the reserve from climate change and flooding.
Landscape
Most of the Dearne Valley area lies on the coal measures, comprising Carboniferous sandstone and slate with seams of coal. The valleys contain fertile alluvium deposited by their rivers, and the sandstone forms rolling ridges cut by the broad floodplains.
The area has been settled continuously since prehistoric times, with villages developing on the drier sandstone ridges above the flood plain from at least the late Saxon period. Mining is recorded from at least the 13th century, and probably back to Roman Britain, and the area became heavily industrialised in the 18th century with the arrival of the Dearne and Dove Canal. This connected Barnsley to the River Don and beyond, aiding the intensive exploitation of the locality's coal, sandstone and iron ore. Over the next two centuries, especially following the arrival of the railway in 1840, the area became dominated by its heavy industries.
The name Old Moor may derive from an archaic meaning of moor, referring to a marshy area that was more difficult to cultivate than the alluvium of the flood plain. It had been enclosed as a farm by 1757, when it was owned by the Marquess of Rockingham.
History
The Dearne Valley was formerly a major coal mining area, with several accessible seams of high-quality coal, and in 1950s more than 32,000 colliers worked in its 30 pits. The coal industry dominated the area, and its waste rendered the River Dearne lifeless, although a few isolated wetland areas remained, monitored by local birdwatchers.
The miners' strike of 1984 was the first sign of a national programme of pit closures in the UK that led to all the Dearne Valley mines being closed by 1993, with the loss of 11,000 jobs in the industry. The site of the former Wath Manvers Colliery, including a coking plant and marshalling yard, was left as the largest derelict site in western Europe. The ground was heavily polluted and needed to be restored by covering it with clean soil deep enough for trees and shrubs to become established. To achieve this, material was removed from the adjacent Old Moor, thereby creating a new wetland at that site.
The Wildfowl & Wetlands Trust (WWT) were originally intended to run the proposed reserve, and planned a large lake for wintering wildfowl. The Royal Society for the Protection of Birds (RSPB) suggested adding reed beds to help the then-struggling bittern population; only 11 males were present in the UK at one point in the 1990s. The WWT was at that time also working on its London Wetland Centre, and pulled out of the Old Moor project since it lacked the resources to cope with two large projects.
The creation of the reserve fell to Barnsley Metropolitan Borough Council, which offered the site to the RSPB in 1997. At that time, the bird charity was more interested in preserving established habitats than creating new sites, and declined to take on Old Moor. The reserve eventually opened in 1998 as part of the regeneration of the Dearne Valley, and was then developed further with the help of a lottery grant of nearly £800,000 in 2002.
By 2000, the reserve had only 10,000 visitors annually, and was making a financial loss before being taken over by the RSPB in 2003. The RSPB had changed its position since its refusal in 1997, with a greater emphasis nationally on engaging the public, and more opportunities to work with the Environment Agency to create and manage new wetlands. With help from the Environment Agency, local councils and others, the RSPB tripled its land holding in the area over the next ten years, while other conservation bodies also created and improved reserves.
Cooperation between conservation organisations and other agencies led to the formation of the Dearne Valley Landscape Partnership (DVLP) in 2014. This is the main coordinating body for the partners in the Dearne Valley scheme, which include the Department for Environment, Food and Rural Affairs (DEFRA), local councils, Natural England, The Environment Agency, the Forestry Commission, Yorkshire Water, the RSPB and several local conservation charities. The DVLP is supported by the Heritage Lottery Fund and its administration is the responsibility of Barnsley Council's Museums and Heritage Service. The partnership's remit includes industrial heritage sites as well as the local environment, and its funding for 2014–2019 was £2.4 million, of which £1.8 million was from the National Lottery.
In October 2020, Old Moor was one of the sites from which the BBC Television programme Autumnwatch was broadcast, hosting presenter Gillian Burke.
Access and facilities
Old Moor is accessed from Manvers Way (A633) just east of the A6195 Dearne Valley Parkway junction, not far from the M1 motorway. The nearest railway stations are at Wombwell and Swinton. Buses are infrequent, but cyclists can access Old Moor by a bridge to the reserve car park from the Trans Pennine Trail long-distance path, which runs along the southern edge of the reserve.
The reserve has a visitor centre, created by Barnsley Council from existing farm buildings, which includes a shop, educational facilities, a café and toilets, picnic and play areas and nature trails. The visitor centre and its café are open daily from 9.30 am–4.00 pm all year, except for 25 and 26 December, but staying open until 5 pm from April to October. Entrance is free for RSPB members, although there are entry charges for other visitors.
Old Moor was planned as an "Urban Gateway" RSPB site, with its playground, café balcony and children's discovery zone intended to attract visitors. It has nine bird hides and viewing screens, and a sunken hide with a reflection pool for the benefit of photographers. A track leads out to the reed bed, in addition to the main track. As of 2018, the reserve had about 100,000 visits per year, with around 3,500 children annually making use of the RSPB's on-site education programmes.
The site uses wood pellets and chippings to fuel a 100 kW biomass converter which provides hot water and heating for five buildings on the site.
Management
The main focus on management throughout the Dearne Valley complex is on its key habitats: wet grassland, open water and reed bed. Although the first reeds were planted at Old Moor in 1996, their establishment has been slow because the topsoil had been stripped off leaving only hard sterile clay subsoil for planting. Bringing fertile mud from Blacktoft Sands RSPB reserve has helped, although the reeds still stand in ribbons rather than solid blocks. The reed beds are cut when mature to encourage new growth, and are divided into four sections which can be separately drained.
Wet grassland is kept short for breeding waders through grazing by cattle or Konik horses, and by mowing. Ditches are cleared in rotation, and islands are flooded in winter, if possible, to suppress vegetation. Surviving plants are then cut down, and the soil is rotavated to break up the hard clay and deter invasive New Zealand pygmyweed. As a man-made site, Old Moor has a complex water-management system that allows water levels to be controlled in separate compartments of the wetland. In general, water levels are kept high in winter, then lowered to expose the islands for breeding and passage waders.
The Dearne Valley is one of 12 Nature Improvement Areas (NIAs) created as part of the UK Government's response to Sir John Lawton's 2010 report "Making Space for Nature", which proposed managing conservation on a landscape scale. Plans to manage the Dearne Valley on a landscape-wide basis involve coordination with other wetland reserves. Five smaller sites are already managed by the RSPB; these are Bolton Ings and Gypsy Marsh close to Old Moor, and Adwick Washlands, Wombwell Ings and Edderthorpe Flash within a few miles. Other reserves are the Garganey Trust's Broomhill Flash and the Yorkshire Wildlife Trust's (YWT) Denaby Ings. The YWT also manages Barnsley council-owned Carlton Marsh, and the Environment Agency is restoring marshes at Houghton Washland. Other parcels of land are being acquired by the various conservation charities as they become available. The Dearne Valley reserves have no statutory protection, but as of 2019, the process to become a Site of Special Scientific Interest (SSSI) is under way.
Fauna and flora
Birds
Since the 1990s the RSPB has been attempting to create improved habitats for the formerly endangered UK bittern population, with major reed bed creation at their Ham Wall and Lakenheath Fen reserves being a key part of the bittern recovery programme initiated in 1994 as part of the United Kingdom Biodiversity Action Plan. At Old Moor, in addition to the creation of new reed beds, 23,000 small fish were introduced between 2010 and 2016, mainly of species such as rudd and eels that are preferred as food by bitterns. This project increased the fish biomass more than twenty-fold to .
Breeding waders include lapwings, redshanks, snipe and avocets, the last species having bred on the reserve since 2011. Predation of wader chicks by foxes has been a problem, so deep ditches and electric fences are being introduced to exclude mammals. Black-headed gull numbers have increased from 183 breeding pairs in 2006 to 2,385 pairs in 2017, and have been joined by Mediterranean gulls, eight being present in 2018. Old Moor is an important wintering site for golden plovers, although numbers have dropped from 6,000–8,000 to just a few hundred in about twenty years.
The post-industrial landscaping and planting in the area have created a suitable habitat for the species containing willow, alder and clumps of bramble close to water and linked by linear features such as railways, canals and streams. Cetti's warbler and bearded tit have recently colonised the reserve, and up to three pairs of barn owls breed there.
A calling male little bittern summered in 2015 and 2016, and appeared for a few days in 2017. Other recent rarities include a Baird's sandpiper in 2016, a thrush nightingale and gull-billed tern in 2015, and a black stork in 2014.
Since the early 2020s there have been regular spoonbill sightings, and it is hoped that they will breed at the reserve in the future.
Other animals and plants
Lesser noctule bats and water voles figure among the scarcer mammals found on the reserve, and otters have returned to the now-clean rivers. Other mammal species targeted for monitoring during the creation process include the brown hare and the pipistrelle.
The alder leaf beetle, formerly believed extinct in the UK, has colonised the Dearne and other local river catchments, probably introduced when the pollution-tolerant Italian alder was planted on restored land. Other uncommon insects found at Old Moor include the great silver water beetle, the longhorn beetle Pyrrhidium sanguineum, the dingy skipper butterfly, and a day-flying moth, the six-belted clearwing. Nationally scarce nocturnal moths include the cream-bordered green pea and chocolate-tip, while the red-eyed damselfly and red-veined and black darters are notable among the Odonata.
Several rare flies have been recorded, including three species, Parochthiphila coronata, Calamoncosis aspistylina and Neoascia interrupta, otherwise known in the UK only from a few sites in the East Anglian fenland. An unusual plant gall found on creeping bent was caused by the nematode Subanguina graminophila.
Scarce plants include yellow vetchling and hairy bird's-foot trefoil. Marsh orchids flower in grassy areas in the summer, and the same species, along with the bee orchid, has colonised the verges of the adjacent Manvers Way. Other scarce plants found in the area include hairlike pondweed, pond water-crowfoot and greater pond sedge.
Threats and opportunities
The Dearne Valley is a natural washland with a capacity of , and as such it can normally absorb overflow from its river. The floods of 2007 overwhelmed the storage capacity and covered the whole of Old Moor to hide-roof level, only the visitor centre being untouched. In the longer term, the reserve might be adversely affected by climate change, perhaps leading to alterations in the populations of woodland species.
More positive effects may arise as the local environment improves, with habitat creation occurring beyond the reserve and better accessibility. A survey by the DVLP showed that 44% of respondents said that they liked to visit the local wildlife reserves, with another 17% mentioning waterways and lakes. When asked what they liked about the Dearne Valley area, 35% of replies said nature and wildlife.
The success of Old Moor has led to the creation of similar RSPB reserves close to urban areas at Rainham Marshes east of London, Newport Wetlands in South Wales, and RSPB Saltholme on Teesside.
References
Cited texts
External links
Official site
Constructed wetlands
Old Moor
RSPB visitor centres in England
Tourist attractions in Barnsley
Nature reserves in South Yorkshire
Wetlands of England | RSPB Dearne Valley Old Moor | [
"Chemistry",
"Engineering",
"Biology"
] | 3,004 | [
"Bioremediation",
"Constructed wetlands",
"Environmental engineering"
] |
2,495,757 | https://en.wikipedia.org/wiki/Gas%20flare | A gas flare, alternatively known as a flare stack, flare boom, ground flare, or flare pit, is a gas combustion device used in places such as petroleum refineries, chemical plants and natural gas processing plants, oil or gas extraction sites having oil wells, gas wells, offshore oil and gas rigs and landfills.
In industrial plants, flare stacks are primarily used for burning off flammable gas released by safety valves during unplanned overpressuring of plant equipment. During plant or partial plant startups and shutdowns, they are also often used for the planned combustion of gases over relatively short periods.
At oil and gas extraction sites, gas flares are similarly used for a variety of startup, maintenance, testing, safety, and emergency purposes. In a practice known as production flaring, they may also be used to dispose of large amounts of unwanted associated petroleum gas, possibly throughout the life of an oil well.
Overall flare system in industrial plants
When industrial plant equipment items are overpressured, the pressure relief valve is an essential safety device that automatically releases gases and sometimes liquids. Those pressure relief valves are required by industrial design codes and standards as well as by law.
The released gases and liquids are routed through large piping systems called flare headers to a vertical elevated flare. The released gases are burned as they exit the flare stacks. The size and brightness of the resulting flame depend upon the flammable material's flow rate in joules per hour (or Btu per hour).
Most industrial plant flares have a vapor–liquid separator (also known as a knockout drum) upstream of the flare to remove any large amounts of liquid that may accompany the relieved gases.
Steam is very often injected into the flame to reduce the formation of black smoke. When too much steam is added, a condition known as "oversteaming" can occur resulting in reduced combustion efficiency and higher emissions. To keep the flare system functional, a small amount of gas is continuously burned, like a pilot light, so that the system is always ready for its primary purpose as an overpressure safety system.
The adjacent flow diagram depicts the typical components of an overall industrial flare stack system:
A knockout drum to remove any oil or water from the relieved gases. There may be several knockout drums: high-pressure and low-pressure drums taking relief flow from high-pressure and low-pressure equipment, and a cold relief drum which is segregated from the wet relief system because of the risk of freezing.
A water seal drum to prevent any flashback of the flame from the top of the flare stack.
An alternative gas recovery system for use during partial plant startups and shutdowns as well as other times when required. The recovered gas is routed into the fuel gas system of the overall industrial plant.
A steam injection system to provide an external momentum force used for efficient mixing of air with the relieved gas, which promotes smokeless burning.
A pilot flame (with its ignition system) that burns all the time so that it is available to ignite relieved gases when needed.
The flare stack, including a flashback prevention section at the upper part of the stack.
The schematic shows a pipe flare tip. The flare tip can have several configurations:
a simple pipe flare
a sonic tip – upstream pressure > 5 bar
a multi nozzle tip, sonic or subsonic
a Coandă tip – a profiled tip using the Coandă effect to entrain air into the gas to improve combustion.
Flare stack height
The height of a flare stack, or the reach of a flare boom, is determined by the thermal radiation that is permissible or tolerable for equipment or personnel to be exposed to. For continuous exposure of personnel wearing appropriate industrial clothing a maximum radiation level of 1.58 kW/m2 (500 Btu/hr.ft²) is recommended. Higher radiation levels are permissible but for reduced exposure times:
4.73 kW/m2 (1500 Btu/hr.ft²) would limit exposure to 3 to 4 minutes
6.31 kW/m2 (2000 Btu/hr.ft²) would limit exposure to 30 seconds.
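These figures can be turned into a rough spacing or stack-height estimate with the simple point-source model often used in preliminary flare sizing, in which the radiated intensity falls off with the square of the distance from the flame. The sketch below is only an illustration: the heat release rate and the fraction of heat radiated are made-up values, and real designs use more detailed methods (for example those in API Standard 521).

```python
# Illustrative point-source estimate: K = F * Q / (4 * pi * D^2), solved for D.
# Q: heat release rate (W), F: fraction of heat radiated, K: allowable intensity (W/m2).
import math

def distance_for_radiation_limit(heat_release_w, fraction_radiated, limit_w_per_m2):
    """Distance (m) from the flame at which the intensity falls to the given limit."""
    return math.sqrt(fraction_radiated * heat_release_w / (4 * math.pi * limit_w_per_m2))

# Made-up example: a 100 MW flare radiating 30% of its heat.
print(distance_for_radiation_limit(100e6, 0.30, 1580))  # ~39 m for the 1.58 kW/m2 limit
print(distance_for_radiation_limit(100e6, 0.30, 4730))  # ~22 m for the 4.73 kW/m2 limit
```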
Ground flares
Ground flares are designed to hide the flame from sight and to reduce thermal radiation and noise. They comprise a steel box or cylinder lined with refractory material. They are open at the top and have openings around the base to allow combustion air to enter. They may have an array of multiple flare tips to provide turndown capability and to spread the flame across the cross-section of the flare. They are generally used onshore in environmentally sensitive areas and have been used offshore on floating production storage and offloading installations (FPSOs).
Crude oil production flares
When crude oil is extracted and produced from oil wells, raw natural gas associated with the oil is brought to the surface as well. Especially in areas of the world lacking pipelines and other gas transportation infrastructure, vast amounts of such associated gas are commonly flared as waste or unusable gas. The flaring of associated gas may occur at the top of a vertical flare stack, or it may occur in a ground-level flare in an earthen pit. Preferably, associated gas is reinjected into the reservoir, which saves it for future use while maintaining higher well pressure and crude oil producibility.
Advances in satellite monitoring, along with voluntary reporting, have revealed that about 150 × 10⁹ cubic meters (5.3 × 10¹² cubic feet) of associated gas have been flared globally each year since at least the mid-1990s until 2020. In 2011, that was equivalent to about 25 percent of the annual natural gas consumption in the United States or about 30 per cent of the annual gas consumption in the European Union. At market, this quantity of gas (at a nominal value of $5.62 per 1000 cubic feet) would be worth US$29.8 billion.
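As a rough sanity check, the quoted volume and unit price can be combined directly; the short calculation below (illustrative only) reproduces the order of magnitude of the stated figures.

```python
# Flared volume of 150 billion cubic metres valued at $5.62 per 1000 cubic feet.
CUBIC_FEET_PER_CUBIC_METRE = 35.3147

volume_ft3 = 150e9 * CUBIC_FEET_PER_CUBIC_METRE        # ~5.3e12 cubic feet
value_usd = volume_ft3 / 1000 * 5.62                   # price is quoted per 1000 ft3
print(f"{volume_ft3:.2e} ft3, about ${value_usd / 1e9:.1f} billion")  # ~US$29.8 billion
```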
Additionally, the waste is a significant source of carbon dioxide (CO2) and other greenhouse gas emissions.
Biogas flares
An important source of anthropogenic methane comes from the treatment and storage of organic waste material including waste water, animal waste and landfill. Gas flares are used in any process that results in the generation and collection of biogas. As a result, gas flares are a standard component of an installation for controlling the production of biogas. They are installed on landfill sites, waste water treatment plant and anaerobic digestion plant that use agriculturally or domestically produced organic waste to produce methane for use as a fuel or for heating.
Gas flares on biogas collection systems are used if the gas production rates are not sufficient to warrant use in any industrial process. However, where the gas production rate is sufficient for direct use in an industrial process that could be classified as part of the circular economy (such as the generation of electricity, the production of natural gas quality biogas for vehicle fuel or for heating in buildings, drying refuse-derived fuel, or leachate treatment), gas flares are used as a back-up system during down-time for maintenance or breakdown of the generation equipment. In this latter case, generation of biogas cannot normally be interrupted, and a gas flare is employed to maintain the internal pressure on the biological process.
There are two types of gas flare used for controlling biogas, open or enclosed. Open flares burn at a lower temperature, less than 1000 °C, and are generally cheaper than enclosed flares, which burn at a higher combustion temperature and are usually supplied to conform to a specific residence time of 0.3 s within the chimney to ensure complete destruction of the toxic elements contained within the biogas. Flare specifications usually demand that enclosed flares operate at >1000 °C and <1200 °C, in order to ensure a 98% destruction efficiency and avoid the formation of NOx.
Environmental impacts
The natural gas that is not combusted by a flare is vented into the atmosphere as methane. Methane's estimated global warming potential is 28-36 times greater than that of CO2 over the course of a century, and 84-87 times greater over two decades. Natural gas flaring produces CO2 and many other compounds, depending on the chemical composition of the natural gas and on how well the natural gas burns in the flare. Therefore, to the extent that gas flares convert methane to CO2 before it is released into the atmosphere, they reduce the amount of global warming that would otherwise occur.
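The trade-off between venting and flaring can be made concrete with a back-of-the-envelope comparison, assuming complete combustion (CH4, molar mass 16 g/mol, burns to CO2, 44 g/mol) and the 100-year warming potentials quoted above; the sketch below is illustrative only and ignores incomplete combustion and other co-emitted species.

```python
# Compare the CO2-equivalent of venting 1 kg of methane with flaring it.
GWP_CH4_100YR = (28, 36)              # 100-year global warming potential range quoted above
CO2_PER_KG_CH4_BURNED = 44.0 / 16.0   # = 2.75 kg CO2 per kg CH4, ideal combustion

vented = GWP_CH4_100YR                # 28-36 kg CO2-equivalent per kg CH4 released
flared = CO2_PER_KG_CH4_BURNED        # ~2.75 kg CO2 per kg CH4 burned
print(vented, round(flared, 2))
# Under these assumptions, flaring emits roughly a tenth of the CO2-equivalent of
# venting the same gas, which is why unlit or poorly burning flares matter so much.
```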
Flaring emissions contributed to 270 Mt (megatonnes) of CO2 in 2017 and reducing flaring emissions is thought to be an important component in curbing global warming. An increasing number of governments and industries have pledged to eliminate or reduce flaring. The Global Methane Pledge signed at COP26, in which 111 nations committed to reducing methane emissions by at least 30 percent from 2020 levels by 2030, is also playing a role in raising the global focus on methane.
Additional noxious fumes emitted by flaring may include aromatic hydrocarbons (benzene, toluene, xylenes) and benzo(a)pyrene, which are known to be carcinogenic. A 2013 study found that gas flares contributed over 40% of the black carbon deposited in the Arctic.
Flaring can affect wildlife by attracting birds and insects to the flame. Approximately 7,500 migrating songbirds were attracted to and killed by the flare at the liquefied natural gas terminal in Saint John, New Brunswick, Canada on September 13, 2013. Similar incidents have occurred at flares on offshore oil and gas installations. Moths are known to be attracted to lights. A brochure published by the Secretariat of the Convention on Biological Diversity describing the Global Taxonomy Initiative describes a situation where "a taxonomist working in a tropical forest noticed that a gas flare at an oil refinery was attracting and killing hundreds of these [hawk or sphinx] moths. Over the course of the months and years that the refinery was running a vast number of moths must have been killed, suggesting that plants could not be pollinated over a large area of forest".
Adverse health effects
Flares release several different chemicals including: benzene, particulates, nitrogen oxides, heavy metals, black carbon, and carbon monoxide. Several of these pollutants correlate with preterm birth and reduced newborn birth weight. According to one study from 2020, pregnant women living near flaring natural gas and oil wells have reportedly experienced a 50% greater premature birth rate. Flares may emit methane and other volatile organic compounds as well as sulfur dioxide and other sulfur compounds, which are known to exacerbate asthma and other respiratory disease.
A 2021 study found that a 1% increase in flared natural gas increases the respiratory-related hospitalization rate by 0.73%.
See also
Blowdown stack
Flue-gas stack
Gas venting
References
Further reading
Flare and Vent Disposal Systems on PetroWiki
Media
Fuels
Oil refining
Air pollution
Air pollution control systems
Volatile organic compound abatement
Gas technologies | Gas flare | [
"Chemistry"
] | 2,226 | [
"Petroleum technology",
"Oil refining",
"Fuels",
"Chemical energy sources"
] |
2,495,995 | https://en.wikipedia.org/wiki/Universal%20motor | The universal motor is a type of electric motor that can operate on either AC or DC power and uses an electromagnet as its stator to create its magnetic field. It is a commutated series-wound motor where the stator's field coils are connected in series with the rotor windings through a commutator. It is often referred to as an AC series motor. The universal motor is very similar to a DC series motor in construction, but is modified slightly to allow the motor to operate properly on AC power. This type of electric motor can operate well on AC because the current in both the field coils and the armature (and the resultant magnetic fields) will alternate (reverse polarity) synchronously with the supply. Hence the resulting mechanical force will occur in a consistent direction of rotation, independent of the direction of applied voltage, but determined by the commutator and polarity of the field coils.
Universal motors have high starting torque, can run at high speed, and are lightweight and compact. They are commonly used in portable power tools and equipment, as well as many household appliances. They are relatively easy to control, electromechanically using tapped coils, or electronically. However, the commutator has brushes that wear, so they are less suitable for equipment that is in continuous use. In addition, partly because of the commutator, universal motors are typically very noisy, both acoustically and electromagnetically.
Working
Not all series-wound motors operate well on AC current.
If an ordinary series-wound DC motor were connected to an AC supply, it would run very poorly. The universal motor is modified in several ways to allow for proper AC supply operation. There is a compensating winding typically added, along with laminated pole pieces, as opposed to the solid pole pieces found in DC motors. A universal motor's armature typically has far more coils and plates than a DC motor, and hence fewer windings per coil. This reduces the inductance.
Efficiency
Even when used with AC power these types of motors are able to run at a rotation frequency well above that of the mains supply, and because most electric motor properties improve with speed, this means they can be lightweight and powerful. However, universal motors are usually relatively inefficient: around 30% for smaller motors and up to 70–75% for larger ones.
Torque–speed characteristics
Series-wound electric motors respond to increased load by slowing down; the current increases and the torque rises in proportion to the square of the current because the same current flows in both the armature and the field windings. If the motor is stalled, the current is limited only by the total resistance of the windings and the torque can be very high, and there is a danger of the windings becoming overheated. The counter-EMF aids the armature resistance to limit the current through the armature. When power is first applied to a motor, the armature does not rotate. At that instant, the counter-EMF is zero and the only factor limiting the armature current is the armature resistance. Usually the armature resistance of a motor is low; therefore the current through the armature would be very large when the power is applied. Therefore the need can arise for an additional resistance in series with the armature to limit the current until the motor rotation can build up the counter-EMF. As the motor rotation builds up, the resistance is gradually cut out. The speed-torque characteristic is an almost perfectly straight line between the stall torque and the no-load speed. This suits large inertial loads as the speed will drop until the motor slowly starts to rotate and these motors have a very high stalling torque.
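The behaviour described above can be captured by a very simple idealised model in which the field flux is proportional to the armature current, so that the back-EMF grows with both current and speed and the torque varies as the square of the current. The sketch below is only an illustration: the supply voltage, winding resistance, and motor constant are made-up values, and magnetic saturation and losses are ignored.

```python
# Idealised series (universal) motor: V = I*R + k*I*w, so I = V / (R + k*w), T = k*I^2.
V = 230.0   # supply voltage (V), illustrative
R = 2.0     # total armature + field resistance (ohm), illustrative
k = 0.05    # motor constant linking flux to current (illustrative units)

def current(w):                  # w: angular speed in rad/s
    return V / (R + k * w)       # the counter-EMF term k*I*w limits the current

def torque(w):
    return k * current(w) ** 2   # torque rises with the square of the current

for w in (0.0, 100.0, 500.0, 1000.0):
    print(f"w = {w:6.0f} rad/s   I = {current(w):6.1f} A   T = {torque(w):7.2f} N*m")
# At w = 0 (stall) the current is limited only by R, which is why the text notes
# that an external starting resistance may be needed until the counter-EMF builds up.
```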
As the speed increases, the inductance of the rotor means that the ideal commutating point changes. Small motors typically have fixed commutation. While some larger universal motors have rotatable commutation, this is rare. Instead larger universal motors often have compensation windings in series with the motor, or sometimes inductively coupled, and placed at ninety electrical degrees to the main field axis. These reduce the reactance of the armature, and improve the commutation.
One useful property of having the field windings in series with the armature winding is that as the speed increases the counter EMF naturally reduces the voltage across, and current through, the field windings, giving field weakening at high speeds. This means that the motor has no theoretical maximum speed for any particular applied voltage. Universal motors can be and generally are run at high speeds, 4,000–16,000 RPM, and can go over 20,000 RPM. By way of contrast, AC synchronous and squirrel-cage induction motors cannot turn a shaft faster than allowed by the power line frequency. In countries with 60 Hz AC supply, this speed is limited to 3,600 RPM.
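For comparison, the speed ceiling of synchronous and squirrel-cage induction machines follows directly from the supply frequency and the number of poles, N = 120·f/p RPM; the two-pole case gives the 3,600 RPM figure quoted above. A one-line check:

```python
def synchronous_speed_rpm(frequency_hz, poles=2):
    return 120 * frequency_hz / poles

print(synchronous_speed_rpm(60))   # 3600.0 RPM on a 60 Hz supply (2-pole machine)
print(synchronous_speed_rpm(50))   # 3000.0 RPM on a 50 Hz supply
# A universal motor is not tied to the line frequency in this way, which is how it
# reaches the much higher speeds quoted above.
```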
Motor damage may occur from over-speeding (running at a rotational speed in excess of design limits) if the unit is operated with no significant mechanical load. On larger motors, sudden loss of load is to be avoided, and the possibility of such an occurrence is incorporated into the motor's protection and control schemes. In some smaller applications, a fan blade attached to the shaft often acts as an artificial load to limit the motor speed to a safe level, as well as a means to circulate cooling airflow over the armature and field windings. If there were no mechanical limits placed on a universal motor it could theoretically speed out of control in the same way any series-wound DC motor can.
An advantage of the universal motor is that AC supplies may be used on motors which have some characteristics more common in DC motors, specifically high starting torque and very compact design if high running speeds are used.
Disadvantages
A negative aspect is the maintenance and short life problems caused by the commutator, as well as electromagnetic interference (EMI) issues due to any sparking. Because of the relatively high maintenance commutator brushes, universal motors are best-suited for devices such as food mixers and power tools which are used only intermittently, and often have high starting-torque demands.
Another negative aspect is that these motors may only be used where mostly-clean air is present at all times. Due to the dramatically increased risk of overheating, totally-enclosed fan-cooled universal motors would be impractical, though some have been made. Such a motor would need a large fan to circulate enough air, decreasing efficiency since the motor must use more energy to cool itself. The impracticality comes from the resulting size, weight, and thermal management issues which open motors have none of.
Universal motors are also very noisy compared to other types of AC and DC motors.
Speed control
Continuous speed control of a universal motor running on AC is easily obtained by use of a thyristor circuit, while multiple taps on the field coil provide (imprecise) stepped speed control. Household blenders that advertise many speeds frequently combine a field coil with several taps and a diode that can be inserted in series with the motor (causing the motor to run on half-wave rectified AC).
Variations
Shunt winding
Universal motors are series wound. Shunt winding was used experimentally, in the late 19th century, but was impractical owing to problems with commutation. Various schemes of embedded resistance, inductance, and antiphase cross-coupling were attempted to reduce this. Universal motors, including shunt wound, were favoured as AC motors at this time as they were self-starting. When self-starting induction motors and automatic starters became available, these replaced the larger universal motors (above 1 hp) and the shunt wound.
Repulsion-start
In the past, repulsion-start wound-rotor motors provided high starting torque, but with added complexity. Their rotors were similar to those of universal motors, but their brushes were connected only to each other. Transformer action induced current into the rotor. Brush position relative to field poles meant that starting torque was developed by rotor repulsion from the field poles. A centrifugal mechanism, when close to running speed, connected all commutator bars together to create the equivalent of a squirrel-cage rotor. As well, when close to approximately 80 per cent of its run speed, these motors can run as induction motors.
Applications
Domestic appliances
Operating at normal power line frequencies, universal motors are not often found in a range less than . Their high speed makes them useful for appliances such as blenders, vacuum cleaners, and hair dryers where high speed and light weight are desirable. They are also commonly used in portable power tools, such as drills, sanders, circular saws, and jigsaws, where the motor's characteristics work well. An added benefit for power tools used by welders is that classic engine-driven welding machines may be a pure DC generator, and their auxiliary power receptacles will still be DC, even though a typical NEMA 5-15 household configuration. The DC power is fine for typical jobsite (outmoded) incandescent lighting and the universal motors in some drills and grinders. Many vacuum cleaner and weed trimmer motors exceed , while many Dremel and similar miniature grinders exceed .
Universal motors also lend themselves to electronic speed control and, as such, were an ideal choice for domestic washing machines. The motor can be used to agitate the drum (both forward and in reverse) by switching the field winding with respect to the armature. The motor can also be run up to the high speeds required for the spin cycle. Nowadays, variable-frequency drive motors are more commonly used instead.
Rail traction
Universal motors also formed the basis of the traditional railway traction motor in electric railways. In this application, the use of AC to power a motor originally designed to run on DC would lead to efficiency losses due to eddy current heating of their magnetic components, particularly the motor field pole-pieces that, for DC, would have used solid (un-laminated) iron. Although the heating effects are reduced by using laminated pole-pieces, as used for the cores of transformers and by the use of laminations of high-permeability electrical steel, one solution available at the start of the 20th century was for the motors to be operated from very-low-frequency AC supplies, with 25 Hz and 16⅔ Hz operation being common.
Starter motor
Starters of combustion engines are usually universal motors, with the advantage of being small and having high torque at low speed. Some starters have permanent magnets, others have one of the four poles wound with a shunt coil rather than series-wound coils.
References
AC motors
Electric motors | Universal motor | [
"Technology",
"Engineering"
] | 2,206 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
2,496,000 | https://en.wikipedia.org/wiki/Bispectrum | In mathematics, in the area of statistical analysis, the bispectrum is a statistic used to search for nonlinear interactions.
Definitions
The Fourier transform of the second-order cumulant, i.e., the autocorrelation function, is the traditional power spectrum.
The Fourier transform of C3(t1, t2) (third-order cumulant-generating function) is called the bispectrum or bispectral density.
Calculation
Applying the convolution theorem allows fast calculation of the bispectrum: B(f1, f2) = X(f1) X(f2) X*(f1 + f2), where X(f) denotes the Fourier transform of the signal and X* its complex conjugate.
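A direct estimator based on this product of Fourier transforms can be written in a few lines; the NumPy sketch below is an illustration rather than an established library routine, and the segment length and test signal are arbitrary choices.

```python
import numpy as np

def bispectrum(x, nfft=128):
    """Rough direct estimate: average of X(f1) * X(f2) * conj(X(f1 + f2)) over segments."""
    segments = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    B = np.zeros((nfft, nfft), dtype=complex)
    f = np.arange(nfft)
    for s in segments:
        X = np.fft.fft(s - np.mean(s))
        # index (f1 + f2) modulo nfft selects the conjugated term
        B += X[f, None] * X[None, f] * np.conj(X[(f[:, None] + f[None, :]) % nfft])
    return B / len(segments)

# Two tones whose frequencies (and phases) sum to a third tone - quadratic phase
# coupling - produce a peak in the bispectrum at that frequency pair.
t = np.arange(4096)
f1, f2 = 12 / 128, 20 / 128
x = (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
     + 0.5 * np.cos(2 * np.pi * (f1 + f2) * t))
print(np.abs(bispectrum(x)).max())
```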
Applications
Bispectrum and bicoherence may be applied to the case of non-linear interactions of a continuous spectrum of propagating waves in one dimension.
Bispectral measurements have been carried out for EEG signals monitoring. It was also shown that bispectra characterize differences between families of musical instruments.
In seismology, signals rarely have adequate duration for making sensible bispectral estimates from time averages.
Bispectral analysis describes observations made at two wavelengths. It is often used by scientists to analyze elemental makeup of a planetary atmosphere by analyzing the amount of light reflected and received through various color filters. By combining and removing two filters, much can be gleaned from only two filters. Through modern computerized interpolation, a third virtual filter can be created to recreate true color photographs that, while not particularly useful for scientific analysis, are popular for public display in textbooks and fund raising campaigns.
Bispectral analysis can also be used to analyze interactions between wave patterns and tides on Earth.
A form of bispectral analysis called the bispectral index is applied to EEG waveforms to monitor depth of anesthesia.
Biphase (the phase of the polyspectrum) can be used for the detection of phase couplings and for noise reduction in polyharmonic (particularly speech) signal analysis.
A physical interpretation
The bispectrum reflects the energy budget of interactions, as it can be interpreted as a covariance defined between the energy-supplying and energy-receiving parties of waves involved in a nonlinear interaction. On the other hand, bicoherence has been proven to be the corresponding correlation coefficient. Just as correlation cannot sufficiently demonstrate the presence of causality, spectrum and bicoherence also cannot sufficiently substantiate the existence of a nonlinear interaction.
Generalizations
Bispectra fall in the category of higher-order spectra, or polyspectra and provide supplementary information to the power spectrum. The third order polyspectrum (bispectrum) is the easiest to compute, and hence the most popular.
A statistic defined analogously is the bispectral coherency or bicoherence.
Trispectrum
The Fourier transform of C4 (t1, t2, t3) (fourth-order cumulant-generating function) is called the trispectrum or trispectral density.
The trispectrum T(f1,f2,f3) falls into the category of higher-order spectra, or polyspectra, and provides supplementary information to the power spectrum. The trispectrum is a three-dimensional construct. The symmetries of the trispectrum allow a much reduced support set to be defined, contained within the following vertices, where 1 is the Nyquist frequency. (0,0,0) (1/2,1/2,-1/2) (1/3,1/3,0) (1/2,0,0) (1/4,1/4,1/4). The plane containing the points (1/6,1/6,1/6) (1/4,1/4,0) (1/2,0,0) divides this volume into an inner and an outer region. A stationary signal will have zero strength (statistically) in the outer region. The trispectrum support is divided into regions by the plane identified above and by the (f1,f2) plane. Each region has different requirements in terms of the bandwidth of signal required for non-zero values.
In the same way that the bispectrum identifies contributions to a signal's skewness as a function of frequency triples, the trispectrum identifies contributions to a signal's kurtosis as a function of frequency quadruplets.
The trispectrum has been used to investigate the domains of applicability of maximum kurtosis phase estimation used in the deconvolution of seismic data to find layer structure.
References
Further reading
HOSA - Higher Order Spectral Analysis Toolbox: A MATLAB toolbox for spectral and polyspectral analysis, and time-frequency distributions. The documentation explains polyspectra in great detail.
Complex analysis
Integral transforms
Fourier analysis
Time series
Nonlinear time series analysis
Statistical signal processing | Bispectrum | [
"Engineering"
] | 1,020 | [
"Statistical signal processing",
"Engineering statistics"
] |
2,497,242 | https://en.wikipedia.org/wiki/Mexican%20burrowing%20toad | The Mexican burrowing toad (Rhinophrynus dorsalis) is the single living representative of the family Rhinophrynidae. It is a unique species in its taxonomy and morphology, with special adaptations to assist them in digging burrows where they spend most of their time. These adaptations include a small pointed snout and face, keratinized structures and a lack of webbing on front limbs, and specialized tongue morphology to assist in feeding on ants and termites underground. The body is nearly equal in width and length. It is a dark brown to black color with a red-orange stripe on its back along with splotches of color on its body. The generic name Rhinophrynus means 'nose-toad', from rhino- (), the combining form of the Ancient Greek (, 'nose') and (, 'toad').
The Mexican burrowing toad diverged from other amphibians over 190 million years ago and has been evolving independently for a longer period of time than the evolutionary differences between mammals like humans, fruit bats, polar bears and killer whales. Its closest sister group is Pipidae, or the aquatic clawed frogs.
Description
The Mexican burrowing toad has a unique appearance that makes it easy to distinguish from other organisms. This species’ body is flat with a width and length that are almost equal. It is covered in loosely fitting wrinkled skin which becomes taut and shiny when the frog's body swells up during its mating call. Its head is small and triangular, and projects out of its body in a small point with very small eyes. They have no neck and no visible ear holes or tympanum. Its legs are short and muscular, and are structured for burrowing as indicated by its name. Its feet also have adaptations for burrowing, mainly nail-like keratinized structures at the end of each digit. Its front feet lack webbing between the digits to free them up for burrowing, and its back feet are short and extensively webbed.
The toad's coloration ranges from a dark brown to black. A bright red-orange stripe runs on its back from head to tail, and the body is covered with other red-orange splotches in varying patterns. Its underside is gray to dark brown and does not have the red splotches like the rest of the body. The toad is sexually dimorphic, with females being larger than males. Adults of the Mexican burrowing toad grow to be between 75 and 88 mm (snout-vent length) or about 3.0 to 3.3 inches.
Feeding specializations
The toad's snout is covered in an armor of small keratinous spines, and its lips are sealed by secretions from glands under the mandible. Its lips have a double closure along their maxillary arch, which are enhanced by the glands under the jaw. Morphological studies reveal that the frog has a type of tongue protrusion that is distinct from that of other frogs. Many other frog species project their tongues by a lingual flip, a behavior where the tongue is strongly flipped through the lips and out of the mouth. In this species however, the tongue stiffens and protrudes out of the mouth by moving the jaw backwards. This mechanism is specialized for capturing small insect prey in burrows.
Distribution
The Mexican burrowing toad is found in tropical and subtropical dry broadleaf forest, savannas, and thorn scrub (e.g. Tamaulipan mezquital) in the lowlands of Central America, Mexico, and extreme south Texas, USA. Due to its wide range, the species is categorized as least concern by the IUCN, but some local and regional populations are protected and listed as threatened by various governments within its distribution.
Rhinophrynus dorsalis occurs in the Lower Rio Grande Valley of south Texas, USA, ranging southward through the costal lowlands of the Gulf of Mexico and Caribbean Sea in eastern Mexico including much of the Yucatán Peninsula, into northern Guatemala, Belize, extreme northwest Honduras, and an isolated record from northeast Nicaragua. Another geographically isolated population occurs in the lowlands of the Pacific coast, from extreme southern Michoacán, Mexico, southward into coastal areas of Guatemala, El Salvador, Nicaragua, Honduras, and northwest Costa Rica.
Evolutionary history
The oldest fossil of the genus is Rhinophrynus canadensis known from the late Eocene of Saskatchewan, Canada. Other fossils are known from the Oligocene of Florida.
Habitat
Its natural habitats include forest, savanna, shrubland, grassland, and inland wetlands. It primarily inhabits lowland areas of tropical dry and moist forests. It is generally associated with areas which are seasonally flooded because it relies on temporary ponds for breeding. It usually remains underground in the dry season following the breeding period. Its eggs and larvae develop in temporary pools formed by heavy rains, and the adults remain in fairly small areas.
Behavior
This species is nocturnal. These frogs make burrows to survive the dry season without suffering from lack of water. They use their strong short limbs and nails to dig into the soil and create burrows. The frog can survive long periods of drought inside this burrow. When the frog is making its vocalization or when it is alarmed its body becomes inflated and resembles a balloon, with its already short head and limbs almost disappearing. This mechanism is not deeply studied, and may require more research to determine its physiology.
Diet
The Mexican burrowing toad primarily subsists on ants and termites that they forage underground. Their features are specialized for underground foraging, especially the way the tongue is used by shifting it forward rather than the lingual flip seen in other frogs. This mode is unique among anurans, and is highly specialized for capturing small insects in burrows.
Reproduction and life cycle
This species has a characteristically short and explosive breeding period, often lasting only one to three days. This explosive breeding, combined with the ecological conditions of dry seasonal forests, has influenced the evolution of their courtship behavior and male-female interactions. There is size sexual dimorphism in this species, with females being larger than males, and male-male contests are largely absent during the short breeding period. Due to the absence of male-male competition and territoriality, females select their mates based on the frequency and tonality of advertisement calls. The characteristics of the advertisement call can give females insight into male size, which affects mate choice, with larger females opting for larger mates over smaller ones.
Breeding
Breeding in this species occurs after heavy rains in small temporary pools. Based on Costa Rican populations, clutch sizes range from 2,000 to 8,000 eggs. The Mexican burrowing toads are considered explosive breeders, and reproduce in a way where many individuals exit burrows at the same time to gather at temporary pools of water for breeding to occur. The males then float on the surface of the water and inflate their bodies while making a characteristic call that attracts females. The toad's mating period is between one and three days, one of the shortest seasons among amphibians, and after this period they burrow back into the ground and remain there until the next breeding season.
Sexual maturity
Sexual maturity in the Mexican burrowing toad is determined by examining testes size in males and ovarian stages in females. The presence of enlarged testes and a larger body size is used to determine maturity in males, and various ovarian characteristics including oviduct size and shape are used in females. Females are most likely to be carrying eggs during May and June, but reproduction can occur in October and January as well. In one study the clutch size ranged from 1,000 to around 8,000 eggs, with larger females carrying proportionally more eggs.
R. dorsalis will live underground for most of the year and emerges with heavy rains. The males then float on the surface of water and call to females which results in amplexus. After mating and laying eggs in the water, the environment dries and they will burrow back into the ground. Tadpoles hatch in a few days and transform into adults after one to three months of metamorphosis.
Female/Male interactions
Competition between males for females relies primarily on acoustic communication, with males depending on the impressive calls they make to attract females. The short breeding season imposes constraints on their courtship behavior and breeding formations. Because the breeding season is so short, there is more incentive to spend time breeding rather than competing with other males. Therefore, there are few antagonistic interactions between the males of this species and female choice is based on acoustic displays rather than physical competition or territory defense.
Males produce two types of mating calls during the breeding season to attract females. These calls are the pre-advertisement and advertisement calls. In one observational study of the reproductive behavior of the Mexican burrowing toad, the pre-advertisement call was often produced just before the advertisement call. The advertisement call is a single tone with a rising pitch and a duration of about 1.36 seconds. The pre-advertisement call was a single short sound without modulation, and was of higher frequency than the advertisement calls.
The calls attract females, after which the male and female will participate in amplexus. In all observed mating pairs of R. dorsalis, females mated with males smaller than themselves, but large females often mated with the larger males present. Females will inflate their bodies during the breeding season, which allows them to reduce the ability of smaller males to maintain amplexus.
Conservation
The population trend of the Mexican burrowing toad is described as stable and as of 2019 it is listed as being of least concern by the IUCN. In Mexico and Central America it is widespread and locally abundant in many areas within its range. The species is protected by Mexican law under the Special Protection category. In the state of Texas, USA, it is listed as a threatened species due to the extensive areas of its habitat that have been converted for agricultural uses and urban development in its limited distribution there.
References
Rhinophrynidae
Frogs of North America
Amphibians of North America
Amphibians of Costa Rica
Amphibians of El Salvador
Amphibians of Honduras
Amphibians of Mexico
Amphibians of Nicaragua
Fauna of the Rio Grande valleys
Amphibians of the United States
Amphibians of Guatemala
Amphibians described in 1841
Taxa named by André Marie Constant Duméril
Taxa named by Gabriel Bibron
EDGE species
Fauna of the Yucatán Peninsula
Fauna of the Southern Pacific dry forests | Mexican burrowing toad | [
"Biology"
] | 2,098 | [
"EDGE species",
"Biodiversity"
] |
2,497,263 | https://en.wikipedia.org/wiki/Kinetic%20fractionation | Kinetic fractionation is an isotopic fractionation process that separates stable isotopes from each other by their mass during unidirectional processes. Biological processes are generally unidirectional and are very good examples of "kinetic" isotope reactions. All organisms preferentially use lighter isotopes, because "energy costs" are lower, resulting in a significant fractionation between the substrate (heavier) and the biologically mediated product (lighter). For example, photosynthesis preferentially takes up the light isotope of carbon C during assimilation of atmospheric CO. This kinetic isotope fractionation explains why plant material (and thus fossil fuels, which are derived from plants) is typically depleted in C by 25 per mil (2.5%) relative to most inorganic carbon on Earth.
A naturally occurring example of non-biological kinetic fractionation occurs during the evaporation of seawater to form clouds under conditions in which some part of the transport is unidirectional, such as evaporation into very dry air. In this case, lighter water molecules (i.e., those with ¹⁶O) evaporate slightly more easily than heavier water molecules with ¹⁸O; this difference will be greater than it would be if the evaporation was taking place under equilibrium conditions (with bidirectional transport).
During this process the oxygen isotopes are fractionated: the clouds become enriched in ¹⁶O, and the seawater becomes enriched in ¹⁸O. Whereas equilibrium fractionation makes the vapor about 10 per mil (1%) depleted in ¹⁸O relative to the liquid water, kinetic fractionation enhances this fractionation and often makes vapor that is about 15 per mil (1.5%) depleted.
The heavy isotope of hydrogen in water, deuterium (²H), is much less sensitive to kinetic fractionation than the oxygen isotopes, relative to the very large equilibrium fractionation of deuterium. Therefore kinetic fractionation does not deplete ²H nearly as much, in a relative sense, as ¹⁸O. This gives rise to an excess of deuterium in vapor and rainfall, relative to seawater. The value of this "deuterium excess", as it is called, is about +10 per mil (1%) in most meteoric waters and its non-zero value is a direct manifestation of kinetic isotope fractionation.
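The deuterium excess mentioned above is conventionally computed as d = δ²H − 8·δ¹⁸O, with both delta values in per mil relative to the VSMOW standard. The short sketch below illustrates the calculation with made-up but typical meteoric-water values.

```python
def deuterium_excess(delta_2h_permil, delta_18o_permil):
    """d-excess = delta-2H - 8 * delta-18O (per mil)."""
    return delta_2h_permil - 8.0 * delta_18o_permil

print(deuterium_excess(-70.0, -10.0))   # +10 per mil: the typical meteoric value
print(deuterium_excess(-80.0, -10.0))   #   0 per mil: no kinetic contribution
# A positive d-excess reflects the stronger kinetic fractionation of 18O (relative
# to 2H) during evaporation, as described in the text.
```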
A generalized treatment of kinetic isotopic effects is via the GEBIK and GEBIF equations describing transient kinetic isotope effects.
Other types of fractionation
Equilibrium fractionation
Mass-independent fractionation
Transient kinetic isotope fractionation
See also
Isotopic enrichment
Isotopic ratio
Kinetic isotope effect
Hydrogen isotope biogeochemistry
References
Fractionation
Environmental isotopes | Kinetic fractionation | [
"Chemistry"
] | 596 | [
"Fractionation",
"Separation processes",
"Environmental isotopes",
"Isotope stubs",
"Isotopes",
"Nuclear chemistry stubs",
"Geochemistry stubs"
] |
2,497,795 | https://en.wikipedia.org/wiki/Acoustic%20metric | In acoustics and fluid dynamics, an acoustic metric (also known as a sonic metric) is a metric that describes the signal-carrying properties of a given particulate medium.
(Generally, in mathematical physics, a metric describes the arrangement of relative distances within a surface or volume, usually measured by signals passing through the region – essentially describing the intrinsic geometry of the region.)
A simple fluid example
For simplicity, we will assume that the underlying background geometry is Euclidean, and that this space is filled with an isotropic inviscid fluid at zero temperature (e.g. a superfluid). This fluid is described by a density field ρ and a velocity field v. The speed of sound at any given point depends upon the compressibility, which in turn depends upon the density at that point: it requires much work to compress anything more into an already compacted space. This can be specified by the "speed of sound field" c. Now, the combination of both isotropy and Galilean covariance tells us that the permissible velocities u of the sound waves at a given point x have to satisfy ‖u − v‖ = c.
This restriction can also arise if we imagine that sound is like "light" moving through a spacetime described by an effective metric tensor called the acoustic metric.
The acoustic metric is a line element of the form
ds² = gμν dxμ dxν.
"Light" moving with a velocity u = dx/dt (not the 4-velocity) has to satisfy
ds² = 0.
If
ds² = α[ −(c² − ‖v‖²) dt² − 2 v · dx dt + dx · dx ],
where α is some conformal factor which is yet to be determined (see Weyl rescaling), we get the desired velocity restriction. α may be some function of the density, for example.
Acoustic horizons
An acoustic metric can give rise to "acoustic horizons" (also known as "sonic horizons"), analogous to the event horizons in the spacetime metric of general relativity. However, unlike the spacetime metric, in which the invariant speed is the absolute upper limit on the propagation of all causal effects, the invariant speed in an acoustic metric is not the upper limit on propagation speeds. For example, the speed of sound is less than the speed of light. As a result, the horizons in acoustic metrics are not perfectly analogous to those associated with the spacetime metric. It is possible for certain physical effects to propagate back across an acoustic horizon. Such propagation is sometimes considered to be analogous to Hawking radiation, although the latter arises from quantum field effects in curved spacetime.
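The location of an acoustic horizon in a simple flow can be illustrated numerically: in one dimension the horizon sits where the flow speed first exceeds the local sound speed, so that sound can no longer propagate upstream. The profiles in the sketch below are arbitrary toy choices, not a physical model.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1001)     # position along the flow (arbitrary units)
v = 0.8 * x                          # toy velocity profile: flow accelerating downstream
c = np.full_like(x, 5.0)             # toy sound-speed profile: uniform

supersonic = np.abs(v) > c           # where sound can no longer move upstream
horizon_index = int(np.argmax(supersonic))   # first supersonic point
print(x[horizon_index])              # ~6.26: the position of the sonic horizon
```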
See also
Acoustics
Analog models of gravity
Gravastar
Hawking radiation
Quantum gravity
Superfluid vacuum theory
References
Considers information leakage through a transsonic horizon as an "analogue" of Hawking radiation in black hole problems.
Indirect radiation effects in the physics of acoustic horizon explored as a case of Hawking radiation.
Huge review article of "toy models" of gravitation, 2005, currently on v2, 152 pages, 435 references, alphabetical by author.
External links
Acoustic black holes on arxiv.org
Acoustics
Quantum gravity | Acoustic metric | [
"Physics"
] | 597 | [
"Unsolved problems in physics",
"Classical mechanics",
"Acoustics",
"Quantum gravity",
"Physics beyond the Standard Model"
] |
2,497,815 | https://en.wikipedia.org/wiki/Complex%20differential%20form | In mathematics, a complex differential form is a differential form on a manifold (usually a complex manifold) which is permitted to have complex coefficients.
Complex forms have broad applications in differential geometry. On complex manifolds, they are fundamental and serve as the basis for much of algebraic geometry, Kähler geometry, and Hodge theory. Over non-complex manifolds, they also play a role in the study of almost complex structures, the theory of spinors, and CR structures.
Typically, complex forms are considered because of some desirable decomposition that the forms admit. On a complex manifold, for instance, any complex k-form can be decomposed uniquely into a sum of so-called (p, q)-forms: roughly, wedges of p differentials of the holomorphic coordinates with q differentials of their complex conjugates. The ensemble of (p, q)-forms becomes the primitive object of study, and determines a finer geometrical structure on the manifold than the k-forms. Even finer structures exist, for example, in cases where Hodge theory applies.
Differential forms on a complex manifold
Suppose that M is a complex manifold of complex dimension n. Then there is a local coordinate system consisting of n complex-valued functions z1, ..., zn such that the coordinate transitions from one patch to another are holomorphic functions of these variables. The space of complex forms carries a rich structure, depending fundamentally on the fact that these transition functions are holomorphic, rather than just smooth.
One-forms
We begin with the case of one-forms. First decompose the complex coordinates into their real and imaginary parts: zj = xj + iyj for each j. Letting
dzj = dxj + i dyj,  dz̄j = dxj − i dyj,
one sees that any differential form with complex coefficients can be written uniquely as a sum
∑j (fj dzj + gj dz̄j).
Let Ω1,0 be the space of complex differential forms containing only dzj's and Ω0,1 be the space of forms containing only dz̄j's. One can show, by the Cauchy–Riemann equations, that the spaces Ω1,0 and Ω0,1 are stable under holomorphic coordinate changes. In other words, if one makes a different choice wi of holomorphic coordinate system, then elements of Ω1,0 transform tensorially, as do elements of Ω0,1. Thus the spaces Ω0,1 and Ω1,0 determine complex vector bundles on the complex manifold.
Higher-degree forms
The wedge product of complex differential forms is defined in the same way as with real forms. Let p and q be a pair of non-negative integers ≤ n. The space Ωp,q of (p, q)-forms is defined by taking linear combinations of the wedge products of p elements from Ω1,0 and q elements from Ω0,1. Symbolically,
Ωp,q = Ω1,0 ∧ … ∧ Ω1,0 ∧ Ω0,1 ∧ … ∧ Ω0,1,
where there are p factors of Ω1,0 and q factors of Ω0,1. Just as with the two spaces of 1-forms, these are stable under holomorphic changes of coordinates, and so determine vector bundles.
If Ek is the space of all complex differential forms of total degree k, then each element of Ek can be expressed in a unique way as a linear combination of elements from among the spaces Ωp,q with p + q = k. More succinctly, there is a direct sum decomposition
Ek = Ωk,0 ⊕ Ωk−1,1 ⊕ … ⊕ Ω1,k−1 ⊕ Ω0,k = ⨁p+q=k Ωp,q.
Because this direct sum decomposition is stable under holomorphic coordinate changes, it also determines a vector bundle decomposition.
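A simple pointwise consequence of this decomposition is a dimension count: the fibre of Ωp,q has dimension C(n, p)·C(n, q), and summing these over p + q = k recovers C(2n, k), the fibre dimension of Ek (this is Vandermonde's identity). The short sketch below checks the identity numerically.

```python
from math import comb

def decomposition_dimension_check(n, k):
    """Does the sum over p+q=k of C(n,p)*C(n,q) equal C(2n,k)?"""
    total = sum(comb(n, p) * comb(n, k - p)
                for p in range(max(0, k - n), min(k, n) + 1))
    return total == comb(2 * n, k)

print(all(decomposition_dimension_check(n, k)
          for n in range(1, 6) for k in range(0, 2 * n + 1)))   # True
```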
In particular, for each k and each p and q with p + q = k, there is a canonical projection of vector bundles
πp,q : Ek → Ωp,q.
The Dolbeault operators
The usual exterior derivative defines a mapping of sections d : Ek → Ek+1 via
d(Ωp,q) ⊆ ⨁r+s=p+q+1 Ωr,s.
The exterior derivative does not in itself reflect the more rigid complex structure of the manifold.
Using d and the projections defined in the previous subsection, it is possible to define the Dolbeault operators:
∂ = πp+1,q ∘ d : Ωp,q → Ωp+1,q,  ∂̄ = πp,q+1 ∘ d : Ωp,q → Ωp,q+1.
To describe these operators in local coordinates, let
α = ∑|I|=p, |J|=q fIJ dzI ∧ dz̄J ∈ Ωp,q,
where I and J are multi-indices. Then
∂α = ∑|I|, |J| ∑ℓ (∂fIJ/∂zℓ) dzℓ ∧ dzI ∧ dz̄J,
∂̄α = ∑|I|, |J| ∑ℓ (∂fIJ/∂z̄ℓ) dz̄ℓ ∧ dzI ∧ dz̄J.
The following properties are seen to hold:
d = ∂ + ∂̄,  ∂∂ = 0,  ∂̄∂̄ = 0,  ∂∂̄ + ∂̄∂ = 0.
These operators and their properties form the basis for Dolbeault cohomology and many aspects of Hodge theory.
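On functions (the case p = q = 0) the Dolbeault operators reduce to the Wirtinger derivatives ∂/∂z = (∂/∂x − i ∂/∂y)/2 and ∂/∂z̄ = (∂/∂x + i ∂/∂y)/2, and ∂̄ annihilates exactly the holomorphic functions. The SymPy sketch below is an illustration of this special case only.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

def d_z(f):      # Wirtinger derivative d/dz acting on a function of x, y
    return sp.simplify((sp.diff(f, x) - sp.I * sp.diff(f, y)) / 2)

def d_zbar(f):   # Wirtinger derivative d/dzbar acting on a function of x, y
    return sp.simplify((sp.diff(f, x) + sp.I * sp.diff(f, y)) / 2)

f = sp.expand(z**2)                  # holomorphic: z^2 = x^2 - y^2 + 2ixy
g = sp.expand(z * sp.conjugate(z))   # not holomorphic: |z|^2 = x^2 + y^2

print(d_zbar(f))   # 0            -> del-bar kills holomorphic functions
print(d_z(f))      # 2*x + 2*I*y, i.e. 2z, as expected
print(d_zbar(g))   # x + I*y,     i.e. z: g is not holomorphic
```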
On a star-shaped domain of a complex manifold the Dolbeault operators have dual homotopy operators that result from splitting of the homotopy operator for d. This is the content of the Poincaré lemma on a complex manifold.
The Poincaré lemma for and can be improved further to the local -lemma, which shows that every -exact complex differential form is actually -exact. On compact Kähler manifolds a global form of the local -lemma holds, known as the -lemma. It is a consequence of Hodge theory, and states that a complex differential form which is globally -exact (in other words, whose class in de Rham cohomology is zero) is globally -exact.
Holomorphic forms
For each p, a holomorphic p-form is a holomorphic section of the bundle Ωp,0. In local coordinates, then, a holomorphic p-form can be written in the form
α = ∑|I|=p fI dzI,
where the fI are holomorphic functions. Equivalently, and due to the independence of the complex conjugate, the (p, 0)-form α is holomorphic if and only if
∂̄α = 0.
The sheaf of holomorphic p-forms is often written Ωp, although this can sometimes lead to confusion so many authors tend to adopt an alternative notation.
See also
Dolbeault complex
Frölicher spectral sequence
Differential of the first kind
References
Complex manifolds
Differential forms | Complex differential form | [
"Engineering"
] | 1,123 | [
"Tensors",
"Differential forms"
] |
2,497,853 | https://en.wikipedia.org/wiki/Sudatorium | In architecture, a sudatorium is a vaulted sweating-room (sudor, "sweat") or steam bath (Latin: sudationes, steam) of the Roman baths or thermae. The Roman architectural writer Vitruvius (v. 2) refers to it as concamerata sudatio. It is similar to a laconicum, or dry heat bath, with the addition of water to produce steam.
In order to obtain the great heat required, the whole wall was lined with vertical terracotta flue pipes of rectangular section, placed side by side, through which hot air and smoke from the suspensura passed to an exit in the roof.
When Arabs and Turks overran the Eastern Roman Empire, they adopted and developed this feature in their baths or hammams.
References
Ancient Roman baths
Rooms | Sudatorium | [
"Engineering"
] | 173 | [
"Rooms",
"Architecture"
] |
17,147,022 | https://en.wikipedia.org/wiki/European%20Nuclear%20Society | Since being founded in 1975, the European Nuclear Society (ENS) has grown to become the largest society in Europe for science, engineering and research in support of the nuclear industry. ENS's membership consists of national nuclear societies from 22 European countries, and additionally, Israel. Within the membership there are also stakeholder representatives for nuclear technology and research businesses, with around 60 corporate members.
ENS exists to promote the advancement of peaceful uses of nuclear energy on an international level, encouraging networking between countries and facilitating meetings to support global communication on scientific and technical affairs. ENS also supports education and training in engineering, promotes international standardisation in the nuclear industry, coordinates the activities of the member organisations and develops the expertise and capability needed for the future of the industry.
One of ENS's activities is organising conferences and workshops, providing a platform for international forums to exchange knowledge, experience, ideas and scientific developments.
The current president of the European Nuclear Society is Noël Camarcat.
The ENS is member of the International Nuclear Societies Council (INSC).
ENS Young Generation Network (YGN)
The ENS Young Generation Network (YGN) has been active across the society's member countries since 1995, when ENS supported a proposal from Jan Runermark, the then President-elect of ENS, to spread the Young Generation Network (YGN) to all its member countries. Five objectives ensure that YGN members are working towards a common goal; these are:
Attracting more young people: Recruiting and educating young people to be skilled members of the nuclear industry
Training new leaders: Exchanging knowledge between generations
Thinking nationally: Participation of young people in the national nuclear societies
Thinking internationally: Bringing together all national YGNs at European level
Opening up nuclear conferences: Ensuring events are relevant and topical for young people to attend.
YGN membership is available to anybody working in the nuclear industry, as well as fields of nuclear academia and research.
European Nuclear Young Generation Forum (ENYGF)
The European Nuclear Young Generation Forum (ENYGF) is a biennial international event, held since 2005 by the Young Generation Network (YGN) as part of the European Nuclear Society. The forum alternates with the International Youth Nuclear Congress (IYNC) and is held in a different location in Europe each time.
The aim of the event is to provide a platform for learning and networking for young professionals in all areas of nuclear application. It provides a chance to enhance international communication as well as sharing technical advances and knowledge, learning from experience and discussing best practice as well as considering social and political aspects of the nuclear industry.
The forum involves:
Formal lectures and presentations
Workshops - which have previously included Women in Nuclear (WiN)
Technical tours
Keynote speakers
Social and cultural events
Each forum has a number of central focuses, around which the speakers, lectures, presentations and workshops are based. In 2011, the forum in Prague focused on the topics of nuclear safety and severe accidents, education and training, new build projects and ITER and fusion. At the 2015 forum in Paris the main focus points were nuclear efficiency and nuclear and the environment.
Previous ENYGF events
Following the success of IYNC conferences which began in the year 2000, the ENS-YGN decided to create the ENYGF 2005 and host the inaugural event in the city Zagreb, Croatia. Following this, the ENS-YGN elected cities to host the event every two years, the host locations to date have been:
ENYGF 2005, Zagreb, Republic of Croatia
ENYGF 2007, Amsterdam, Netherlands
ENYGF 2009, Córdoba, Spain
ENYGF 2011, Prague, Czech Republic
ENYGF 2013, Stockholm, Sweden
ENYGF 2015, Paris, France
ENYGF 2017, Manchester, United Kingdom
ENYGF 2019, Ghent, Belgium
ENYGF 2021, Tarragona, Spain
ENYGF 2023, Krakow, Poland
The events are organized by an executive committee from the selected country. This executive committee can acquire assistance from delegates of other countries who chose to collaborate. All the committee members have a common goal which is to further the ENS-YGN mission and help to create a global community of nuclear professionals.
See also
American Nuclear Society
European Atomic Forum
Institute of Nuclear Materials Management
Nuclear Institute
References
International nuclear energy organizations
International scientific organizations based in Europe
Nuclear organizations
Nuclear power in Belgium
Organisations based in Brussels | European Nuclear Society | [
"Engineering"
] | 881 | [
"International nuclear energy organizations",
"Nuclear organizations",
"Energy organizations"
] |
17,149,192 | https://en.wikipedia.org/wiki/Philosophy%20of%20design | Philosophy of design is the study of definitions of design, and the assumptions, foundations, and implications of design. The field, which is mostly a sub-discipline of aesthetics, is defined by an interest in a set of problems, or an interest in central or foundational concerns in design. In addition to these central problems for design as a whole, many philosophers of design consider these problems as they apply to particular disciplines (e.g. philosophy of art).
Although most practitioners are philosophers specialized in aesthetics (i.e., aestheticians), several prominent designers and artists have contributed to the field. For an introduction to the philosophy of design see the article by Per Galle at the Royal Danish Academy of Art.
Notable philosophers and theorists
Philosophers of design, or philosophers relevant to the philosophical study of design:
References
Philosophy of technology
Design studies
Science and technology studies
Media studies
D
Design | Philosophy of design | [
"Technology",
"Engineering"
] | 179 | [
"Design studies",
"Philosophy of technology",
"Design",
"Science and technology studies"
] |
17,153,924 | https://en.wikipedia.org/wiki/Elastic%20pendulum | In physics and mathematics, in the area of dynamical systems, an elastic pendulum (also called spring pendulum or swinging spring) is a physical system where a piece of mass is connected to a spring so that the resulting motion contains elements of both a simple pendulum and a one-dimensional spring-mass system. For specific energy values, the system demonstrates all the hallmarks of chaotic behavior and is sensitive to initial conditions.At very low and very high energy, there also appears to be regular motion. The motion of an elastic pendulum is governed by a set of coupled ordinary differential equations.This behavior suggests a complex interplay between energy states and system dynamics.
Analysis and interpretation
The system is much more complex than a simple pendulum, as the properties of the spring add an extra dimension of freedom to the system. For example, when the spring compresses, the shorter radius causes the spring to move faster due to the conservation of angular momentum. It is also possible that the spring has a range that is overtaken by the motion of the pendulum, making it practically neutral to the motion of the pendulum.
Lagrangian
The spring has the rest length $l_0$ and can be stretched by a length $x$. The angle of oscillation of the pendulum is $\theta$.
The Lagrangian is:
$L = T - V$
where $T$ is the kinetic energy and $V$ is the potential energy.
Hooke's law gives the potential energy of the spring itself:
$V_k = \frac{1}{2}kx^2$
where $k$ is the spring constant.
The potential energy from gravity, on the other hand, is determined by the height of the mass. For a given angle and displacement, the potential energy is:
$V_g = -gm(l_0 + x)\cos\theta$
where $g$ is the gravitational acceleration.
The kinetic energy is given by:
$T = \frac{1}{2}mv^2$
where $v$ is the velocity of the mass. To relate $v$ to the other variables, the velocity is written as a combination of a movement along and perpendicular to the spring:
$T = \frac{1}{2}m\left(\dot{x}^2 + (l_0 + x)^2\dot{\theta}^2\right)$
So the Lagrangian becomes:
$L = \frac{1}{2}m\left(\dot{x}^2 + (l_0 + x)^2\dot{\theta}^2\right) - \frac{1}{2}kx^2 + gm(l_0 + x)\cos\theta$
Equations of motion
With two degrees of freedom, for $x$ and $\theta$, the equations of motion can be found using two Euler-Lagrange equations:
$\frac{\partial L}{\partial x} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{x}} = 0, \qquad \frac{\partial L}{\partial \theta} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{\theta}} = 0$
For $x$:
$m\ddot{x} = m(l_0 + x)\dot{\theta}^2 - kx + mg\cos\theta$
isolated:
$\ddot{x} = (l_0 + x)\dot{\theta}^2 - \frac{k}{m}x + g\cos\theta$
And for $\theta$:
$m(l_0 + x)^2\ddot{\theta} + 2m(l_0 + x)\dot{x}\dot{\theta} + mg(l_0 + x)\sin\theta = 0$
isolated:
$\ddot{\theta} = -\frac{g\sin\theta + 2\dot{x}\dot{\theta}}{l_0 + x}$
These can be further simplified by scaling length and time . Expressing the system in terms of and results in nondimensional equations of motion. The one remaining dimensionless parameter characterizes the system.
The elastic pendulum is now described with two coupled ordinary differential equations. These can be solved numerically. Furthermore, one can use analytical methods to study the intriguing phenomenon of order-chaos-order in this system for various values of the parameter and initial conditions and .
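A minimal numerical sketch of these equations (Python with SciPy; the parameter values and initial conditions are illustrative assumptions, not taken from the text above):

```python
# Elastic pendulum: integrate the equations of motion derived above.
# m, k, l0, g and the initial state are assumed illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

m, k, l0, g = 1.0, 40.0, 1.0, 9.81

def rhs(t, y):
    # y = [x, x_dot, theta, theta_dot]; x is the spring extension
    x, xd, th, thd = y
    xdd = (l0 + x) * thd**2 - (k / m) * x + g * np.cos(th)
    thdd = -(g * np.sin(th) + 2.0 * xd * thd) / (l0 + x)
    return [xd, xdd, thd, thdd]

# start with the spring unstretched and the pendulum displaced by 0.5 rad
sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0, 0.5, 0.0], max_step=1e-2)
print(sol.y[:, -1])  # state (x, x_dot, theta, theta_dot) at t = 20 s
```

Scanning such trajectories over a range of total energies is the usual way the order-chaos-order behaviour mentioned above is explored numerically.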
See also
Double pendulum
Duffing oscillator
Pendulum (mathematics)
Spring-mass system
References
Further reading
External links
Holovatsky V., Holovatska Y. (2019) "Oscillations of an elastic pendulum" (interactive animation), Wolfram Demonstrations Project, published February 19, 2019.
Holovatsky V., Holovatskyi I., Holovatska Ya., Struk Ya. Oscillations of the resonant elastic pendulum. Physics and Educational Technology, 2023, 1, 10–17, https://doi.org/10.32782/pet-2023-1-2 http://journals.vnu.volyn.ua/index.php/physics/article/view/1093
Chaotic maps
Dynamical systems
Mathematical physics
Pendulums | Elastic pendulum | [
"Physics",
"Mathematics"
] | 674 | [
"Functions and mappings",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Mechanics",
"Mathematical relations",
"Chaotic maps",
"Mathematical physics",
"Dynamical systems"
] |
17,156,914 | https://en.wikipedia.org/wiki/Computational%20particle%20physics | Computational particle physics refers to the methods and computing tools developed in and used by particle physics research. Like computational chemistry or computational biology, it is, for particle physics both a specific branch and an interdisciplinary field relying on computer science, theoretical and experimental particle physics and mathematics.
The main fields of computational particle physics are: lattice field theory (numerical computations), automatic calculation of particle interaction or decay (computer algebra) and event generators (stochastic methods).
Computing tools
Computer algebra: Many of the computer algebra languages were developed initially to help particle physics calculations: Reduce, Mathematica, Schoonschip, Form, GiNaC.
Data Grid: The largest planned use of the grid systems will be for the analysis of LHC-produced data. Large software packages, such as the LHC Computing Grid (LCG), have been developed to support this application. A similar effort in the wider e-Science community is the GridPP collaboration, a consortium of particle physicists from UK institutions and CERN.
Data Analysis Tools: These tools are motivated by the fact that particle physics experiments and simulations often create large datasets, e.g. see references.
Software Libraries: Many software libraries are used for particle physics computations. Also important are packages that simulate particle physics interactions using Monte Carlo simulation techniques (i.e. event generators).
History
Particle physics played a role in the early history of the internet; the World-Wide Web was created by Tim Berners-Lee when working at CERN in 1991.
Computer Algebra
Note: This section contains an excerpt from 'Computer Algebra in Particle Physics' by Stefan Weinzierl
Particle physics is an important field of application for computer algebra and exploits the capabilities of Computer Algebra Systems (CAS). This leads to valuable feedback for the development of CAS. Looking at the history of computer algebra systems, the first programs date back to the 1960s. The first systems were almost entirely based on LISP ("LISt Programming language"). LISP is an interpreted language and, as the name already indicates, designed for the manipulation of lists. Its importance for symbolic computer programs in the early days has been compared to the importance of FORTRAN for numerical programs in the same period. Already in this first period, the program REDUCE had some special features for the application to high energy physics. An exception to the LISP-based programs was SCHOONSHIP, written in assembler language by Martinus J. G. Veltman and specially designed for applications in particle physics. The use of assembler code led to an incredibly fast program (compared to the interpreted programs of that time) and allowed the calculation of more complex scattering processes in high energy physics. It has been claimed that the program's importance was recognized in 1999 by the award of half of the Nobel Prize in Physics to Veltman. The program MACSYMA also deserves explicit mention, since it triggered important developments with regard to algorithms. In the 1980s new computer algebra systems started to be written in C. This enabled better exploitation of the resources of the computer (compared to the interpreted language LISP) while maintaining portability (which would not have been possible in assembler language). This period also marked the appearance of the first commercial computer algebra systems, among which Mathematica and Maple are the best known examples. In addition, a few dedicated programs appeared; an example relevant to particle physics is the program FORM by J. Vermaseren, a (portable) successor to SCHOONSHIP. More recently, issues of the maintainability of large projects became more and more important and the overall programming paradigm changed from procedural programming to object-oriented design. In terms of programming languages this was reflected by a move from C to C++. Following this change of paradigm, the library GiNaC was developed, which allows symbolic calculations in C++.
Code generation for computer algebra can also be used in this area.
Lattice field theory
Lattice field theory was created by Kenneth Wilson in 1974. Simulation techniques were later developed from statistical mechanics.
Since the early 1980s, lattice QCD (LQCD) researchers have pioneered the use of massively parallel computers in large scientific applications, using virtually all available computing systems including traditional mainframes, large PC clusters, and high-performance systems. In addition, lattice QCD has been used as a benchmark for high-performance computing, starting with the IBM Blue Gene supercomputer.
Eventually national and regional QCD grids were created: LATFOR (continental Europe), UKQCD and USQCD. The ILDG (International Lattice Data Grid) is an international venture comprising grids from the UK, the US, Australia, Japan and Germany, and was formed in 2002.
See also
Les Houches Accords
CHEP Conference
Computational physics
References
External links
Brown University. Computational High Energy Physics (CHEP) group page
International Research Network for Computational Particle Physics . Center for Computational Sciences, Univ. of Tsukuba, Japan.
History of computing at CERN
Computational fields of study | Computational particle physics | [
"Physics",
"Technology"
] | 1,027 | [
"Computational fields of study",
"Computational particle physics",
"Computational physics",
"Computing and society",
"Particle physics"
] |
17,157,285 | https://en.wikipedia.org/wiki/Biocrystallization | Biocrystallization is the formation of crystals from organic macromolecules by living organisms. This may be a stress response, a normal part of metabolism such as processes that dispose of waste compounds, or a pathology. Template mediated crystallization is qualitatively different from in vitro crystallization. Inhibitors of biocrystallization are of interest in drug design efforts against lithiasis and against pathogens that feed on blood, since many of these organisms use this process to safely dispose of heme.
DNA
Under severe stress conditions the bacteria Escherichia coli protects its DNA from damage by sequestering it within a crystalline structure. This process is mediated by the stress response protein Dps and allows the bacteria to survive varied assaults such as oxidative stress, heat shock, ultraviolet light, gamma radiation and extremes of pH.
Heme
Blood-feeding organisms digest hemoglobin and release high quantities of free toxic heme. To avoid destruction by this molecule, these parasites biocrystallize heme to form hemozoin. To date, the only definitively characterized product of hematin disposal is the pigment hemozoin. Hemozoin is by definition not a mineral and therefore not formed by biomineralization. Heme biocrystallization has been found in blood-feeding organisms of great medical importance including Plasmodium, Rhodnius and Schistosoma. Heme biocrystallization is inhibited by quinoline antimalarials such as chloroquine.
Targeting heme biocrystallization remains one of the most promising avenues for antimalarial drug development because the drug target is highly specific to the malarial parasite, and outside the genetic control of the parasite.
Lithiasis
Lithiasis (formation of stones) is a global human health problem. Stones can form in both urinary and gastrointestinal tracts. Related to the formation of stones is the formation of crystals; this can occur in joints (e.g. gout) and in the viscera.
See also
Biomineralization
Diatomaceous earth
Magnetotactic bacteria
Prion
References
External links
Order in stress – Lessons from the inanimate world
Metabolism
Cell biology
Chemical pathology
Biomineralization | Biocrystallization | [
"Chemistry",
"Biology"
] | 461 | [
"Cell biology",
"Biomineralization",
"Bioinorganic chemistry",
"Cellular processes",
"Biochemistry",
"Chemical pathology",
"Metabolism"
] |
1,193,525 | https://en.wikipedia.org/wiki/Normal%20order | In quantum field theory a product of quantum fields, or equivalently their creation and annihilation operators, is usually said to be normal ordered (also called Wick order) when all creation operators are to the left of all annihilation operators in the product. The process of putting a product into normal order is called normal ordering (also called Wick ordering). The terms antinormal order and antinormal ordering are analogously defined, where the annihilation operators are placed to the left of the creation operators.
Normal ordering of a product of quantum fields or creation and annihilation operators can also be defined in many other ways. Which definition is most appropriate depends on the expectation values needed for a given calculation. Most of this article uses the most common definition of normal ordering as given above, which is appropriate when taking expectation values using the vacuum state of the creation and annihilation operators.
The process of normal ordering is particularly important for a quantum mechanical Hamiltonian. When quantizing a classical Hamiltonian there is some freedom when choosing the operator order, and these choices lead to differences in the ground state energy. That's why the process can also be used to eliminate the infinite vacuum energy of a quantum field.
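As a minimal illustration of this point (a standard single-mode example, stated here for concreteness rather than taken from the elided formulas below), consider a harmonic oscillator of frequency $\omega$ with ladder operators $\hat{a}$ and $\hat{a}^\dagger$:

```latex
H = \hbar\omega\left(\hat{a}^\dagger \hat{a} + \tfrac{1}{2}\right),
\qquad
{:\,H\,:} = \hbar\omega\,\hat{a}^\dagger \hat{a},
\qquad
\langle 0|{:\,H\,:}|0\rangle = 0
\quad\text{while}\quad
\langle 0|H|0\rangle = \tfrac{1}{2}\hbar\omega .
```

A free quantum field behaves the same way mode by mode, which is why normal ordering the Hamiltonian removes the (infinite) sum of zero-point energies.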
Notation
If $\hat{O}$ denotes an arbitrary product of creation and/or annihilation operators (or equivalently, quantum fields), then the normal ordered form of $\hat{O}$ is denoted by ${:\,\hat{O}\,:}$.
An alternative notation is $\mathcal{N}(\hat{O})$.
Note that normal ordering is a concept that only makes sense for products of operators. Attempting to apply normal ordering to a sum of operators is not useful as normal ordering is not a linear operation.
Bosons
Bosons are particles which satisfy Bose–Einstein statistics. We will now examine the normal ordering of bosonic creation and annihilation operator products.
Single bosons
If we start with only one type of boson there are two operators of interest:
$\hat{b}^\dagger$: the boson's creation operator.
$\hat{b}$: the boson's annihilation operator.
These satisfy the commutator relationship
$\left[\hat{b}, \hat{b}^\dagger\right] \equiv \hat{b}\hat{b}^\dagger - \hat{b}^\dagger\hat{b} = 1$
where $[A, B]$ denotes the commutator. We may rewrite the last one as: $\hat{b}\hat{b}^\dagger = \hat{b}^\dagger\hat{b} + 1.$
Examples
1. We'll consider the simplest case first. This is the normal ordering of $\hat{b}^\dagger\hat{b}$:
${:\,\hat{b}^\dagger\hat{b}\,:} = \hat{b}^\dagger\hat{b}$
The expression has not been changed because it is already in normal order - the creation operator $\hat{b}^\dagger$ is already to the left of the annihilation operator $\hat{b}$.
2. A more interesting example is the normal ordering of $\hat{b}\hat{b}^\dagger$:
${:\,\hat{b}\hat{b}^\dagger\,:} = \hat{b}^\dagger\hat{b}$
Here the normal ordering operation has reordered the terms by placing $\hat{b}^\dagger$ to the left of $\hat{b}$.
These two results can be combined with the commutation relation obeyed by $\hat{b}$ and $\hat{b}^\dagger$ to get
$\hat{b}\hat{b}^\dagger = \hat{b}^\dagger\hat{b} + 1 = {:\,\hat{b}\hat{b}^\dagger\,:} + 1$
or
$\hat{b}\hat{b}^\dagger - {:\,\hat{b}\hat{b}^\dagger\,:} = 1.$
This equation is used in defining the contractions used in Wick's theorem.
3. An example with multiple operators is:
4. A simple example shows that normal ordering cannot be extended by linearity from the monomials to all operators in a self-consistent way. Assume that we can apply the commutation relations to obtain:
${:\,\hat{b}\hat{b}^\dagger\,:} = {:\,\hat{b}^\dagger\hat{b} + 1\,:}$
Then, by linearity,
${:\,\hat{b}^\dagger\hat{b} + 1\,:} = {:\,\hat{b}^\dagger\hat{b}\,:} + 1 = \hat{b}^\dagger\hat{b} + 1 \neq \hat{b}^\dagger\hat{b} = {:\,\hat{b}\hat{b}^\dagger\,:},$
a contradiction.
The implication is that normal ordering is not a linear function on operators, but on the free algebra generated by the operators, i.e. the operators do not satisfy the canonical commutation relations while inside the normal ordering (or any other ordering operator like time-ordering, etc).
Multiple bosons
If we now consider different bosons there are operators:
: the boson's creation operator.
: the boson's annihilation operator.
Here .
These satisfy the commutation relations:
where and denotes the Kronecker delta.
These may be rewritten as:
Examples
1. For two different bosons () we have
2. For three different bosons () we have
Notice that since (by the commutation relations) the order in which we write the annihilation operators does not matter.
Bosonic operator functions
Normal ordering of bosonic operator functions , with occupation number operator , can be accomplished using (falling) factorial powers and Newton series instead of Taylor series:
It is easy to show
that factorial powers are equal to normal-ordered (raw) powers and are therefore normal ordered by construction,
such that the Newton series expansion
of an operator function , with -th forward difference at , is always normal ordered. Here, the eigenvalue equation relates and .
As a consequence, the normal-ordered Taylor series of an arbitrary function is equal to the Newton series of an associated function , fulfilling
if the series coefficients of the Taylor series of , with continuous , match the coefficients of the Newton series of , with integer ,
with -th partial derivative at .
The functions and are related through the so-called normal-order transform according to
which can be expressed in terms of the Mellin transform , see for details.
Fermions
Fermions are particles which satisfy Fermi–Dirac statistics. We will now examine the normal ordering of fermionic creation and annihilation operator products.
Single fermions
For a single fermion there are two operators of interest:
$\hat{f}^\dagger$: the fermion's creation operator.
$\hat{f}$: the fermion's annihilation operator.
These satisfy the anticommutator relationships
$\left\{\hat{f}, \hat{f}^\dagger\right\} \equiv \hat{f}\hat{f}^\dagger + \hat{f}^\dagger\hat{f} = 1, \qquad \left\{\hat{f}, \hat{f}\right\} = \left\{\hat{f}^\dagger, \hat{f}^\dagger\right\} = 0$
where $\{A, B\}$ denotes the anticommutator. These may be rewritten as
$\hat{f}\hat{f}^\dagger = 1 - \hat{f}^\dagger\hat{f}, \qquad \hat{f}^2 = \left(\hat{f}^\dagger\right)^2 = 0.$
To define the normal ordering of a product of fermionic creation and annihilation operators we must take into account the number of interchanges between neighbouring operators. We get a minus sign for each such interchange.
Examples
1. We again start with the simplest cases:
${:\,\hat{f}^\dagger\hat{f}\,:} = \hat{f}^\dagger\hat{f}$
This expression is already in normal order so nothing is changed. In the reverse case, we introduce a minus sign because we have to change the order of two operators:
${:\,\hat{f}\hat{f}^\dagger\,:} = -\hat{f}^\dagger\hat{f}$
These can be combined, along with the anticommutation relations, to show
$\hat{f}\hat{f}^\dagger = -\hat{f}^\dagger\hat{f} + 1 = {:\,\hat{f}\hat{f}^\dagger\,:} + 1$
or
$\hat{f}\hat{f}^\dagger - {:\,\hat{f}\hat{f}^\dagger\,:} = 1.$
This equation, which is in the same form as the bosonic case above, is used in defining the contractions used in Wick's theorem.
2. The normal order of any more complicated cases gives zero because there will be at least one creation or annihilation operator appearing twice. For example:
Multiple fermions
For different fermions there are operators:
: the fermion's creation operator.
: the fermion's annihilation operator.
Here .
These satisfy the anti-commutation relations:
where and denotes the Kronecker delta.
These may be rewritten as:
When calculating the normal order of products of fermion operators we must take into account the number of interchanges of neighbouring operators required to rearrange the expression. It is as if we pretend the creation and annihilation operators anticommute and then we reorder the expression to ensure the creation operators are on the left and the annihilation operators are on the right - all the time taking account of the anticommutation relations.
Examples
1. For two different fermions () we have
Here the expression is already normal ordered so nothing changes.
Here we introduce a minus sign because we have interchanged the order of two operators.
Note that the order in which we write the operators here, unlike in the bosonic case, does matter.
2. For three different fermions () we have
Notice that since (by the anticommutation relations) the order in which we write the operators does matter in this case.
Similarly we have
Uses in quantum field theory
The vacuum expectation value of a normal ordered product of creation and annihilation operators is zero. This is because, denoting the vacuum state by $|0\rangle$, the creation and annihilation operators satisfy
$\langle 0|\hat{a}^\dagger = 0 \qquad \text{and} \qquad \hat{a}|0\rangle = 0$
(here $\hat{a}^\dagger$ and $\hat{a}$ are creation and annihilation operators (either bosonic or fermionic)).
Let $\hat{A}$ denote a non-empty product of creation and annihilation operators. Although this may satisfy
$\langle 0|\hat{A}|0\rangle \neq 0,$
we have
$\langle 0|{:\,\hat{A}\,:}|0\rangle = 0.$
Normal ordered operators are particularly useful when defining a quantum mechanical Hamiltonian. If the Hamiltonian of a theory is in normal order then the ground state energy will be zero:
$\langle 0|\hat{H}|0\rangle = 0.$
Free fields
With two free fields φ and χ,
${:\,\phi(x)\chi(y)\,:} = \phi(x)\chi(y) - \langle 0|\phi(x)\chi(y)|0\rangle$
where $|0\rangle$ is again the vacuum state. Each of the two terms on the right hand side typically blows up in the limit as y approaches x but the difference between them has a well-defined limit. This allows us to define :φ(x)χ(x):.
Wick's theorem
Wick's theorem states the relationship between the time ordered product of fields and a sum of
normal ordered products. This may be expressed for even $n$ as
where the summation is over all the distinct ways in which one may pair up fields. The result for odd $n$ looks the same
except for the last line which reads
This theorem provides a simple method for computing vacuum expectation values of time ordered products of operators and was the motivation behind the introduction of normal ordering.
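To make the pairing prescription concrete, here is the statement written out for two and four bosonic fields (a standard form of the theorem, given in generic notation since the article's own symbols are elided above; fermionic contractions additionally acquire signs from operator reordering), writing $\langle\phi_i\phi_j\rangle \equiv \langle 0|T\{\phi(x_i)\phi(x_j)\}|0\rangle$ for the contraction:

```latex
T\{\phi_1\phi_2\} = {:\phi_1\phi_2:} + \langle\phi_1\phi_2\rangle ,
\\[4pt]
\begin{aligned}
T\{\phi_1\phi_2\phi_3\phi_4\} = {:\phi_1\phi_2\phi_3\phi_4:}
 &+ \langle\phi_1\phi_2\rangle\,{:\phi_3\phi_4:}
  + \langle\phi_1\phi_3\rangle\,{:\phi_2\phi_4:}
  + \langle\phi_1\phi_4\rangle\,{:\phi_2\phi_3:} \\
 &+ \langle\phi_2\phi_3\rangle\,{:\phi_1\phi_4:}
  + \langle\phi_2\phi_4\rangle\,{:\phi_1\phi_3:}
  + \langle\phi_3\phi_4\rangle\,{:\phi_1\phi_2:} \\
 &+ \langle\phi_1\phi_2\rangle\langle\phi_3\phi_4\rangle
  + \langle\phi_1\phi_3\rangle\langle\phi_2\phi_4\rangle
  + \langle\phi_1\phi_4\rangle\langle\phi_2\phi_3\rangle .
\end{aligned}
```

Taking the vacuum expectation value kills every term containing a normal-ordered factor, leaving only the fully contracted terms; this is how propagator pairings in Feynman-diagram calculations arise.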
Alternative definitions
The most general definition of normal ordering involves splitting all quantum fields into two parts (for example see Evans and Steer 1996)
.
In a product of fields, the fields are split into the two parts and the parts are moved so as to be always to the left of all the parts. In the usual case considered in the rest of the article, the contains only creation operators, while the contains only annihilation operators. As this is a mathematical identity, one can split fields in any way one likes. However, for this to be a useful procedure one demands that the normal ordered product of any combination of fields has zero expectation value
It is also important for practical calculations that all the commutators (anti-commutator for fermionic fields) of all and are all c-numbers. These two properties means that we can apply Wick's theorem in the usual way, turning expectation values of time-ordered products of fields into products of c-number pairs, the contractions. In this generalised setting, the contraction is defined to be the difference between the time-ordered product and the normal ordered product of a pair of fields.
The simplest example is found in the context of thermal quantum field theory (Evans and Steer 1996). In this case the expectation values of interest are statistical ensembles, traces over all states weighted by . For instance, for a single bosonic quantum harmonic oscillator we have that the thermal expectation value of the number operator is simply the Bose–Einstein distribution
So here the number operator is normal ordered in the usual sense used in the rest of the article yet its thermal expectation values are non-zero. Applying Wick's theorem and doing calculation with the usual normal ordering in this thermal context is possible but computationally impractical. The solution is to define a different ordering, such that the and are linear combinations of the original annihilation and creations operators. The combinations are chosen to ensure that the thermal expectation values of normal ordered products are always zero so the split chosen will depend on the temperature.
References
F. Mandl, G. Shaw, Quantum Field Theory, John Wiley & Sons, 1984.
S. Weinberg, The Quantum Theory of Fields (Volume I) Cambridge University Press (1995)
T.S. Evans, D.A. Steer, Wick's theorem at finite temperature, Nucl. Phys B 474, 481-496 (1996) arXiv:hep-ph/9601268
Quantum field theory | Normal order | [
"Physics"
] | 2,256 | [
"Quantum field theory",
"Quantum mechanics"
] |
1,193,823 | https://en.wikipedia.org/wiki/Jacobian%20conjecture | In mathematics, the Jacobian conjecture is a famous unsolved problem concerning polynomials in several variables. It states that if a polynomial function from an n-dimensional space to itself has Jacobian determinant which is a non-zero constant, then the function has a polynomial inverse. It was first conjectured in 1939 by Ott-Heinrich Keller, and widely publicized by Shreeram Abhyankar, as an example of a difficult question in algebraic geometry that can be understood using little beyond a knowledge of calculus.
The Jacobian conjecture is notorious for the large number of attempted proofs that turned out to contain subtle errors. As of 2018, there are no plausible claims to have proved it. Even the two-variable case has resisted all efforts. There are currently no known compelling reasons for believing the conjecture to be true, and according to van den Essen there are some suspicions that the conjecture is in fact false for large numbers of variables (though, equally, there is no compelling evidence to support these suspicions). The Jacobian conjecture is number 16 in Stephen Smale's 1998 list of Mathematical Problems for the Next Century.
The Jacobian determinant
Let N > 1 be a fixed integer and consider polynomials f1, ..., fN in variables X1, ..., XN with coefficients in a field k. Then we define a vector-valued function F: kN → kN by setting:
F(X1, ..., XN) = (f1(X1, ...,XN),..., fN(X1,...,XN)).
Any map F: kN → kN arising in this way is called a polynomial mapping.
The Jacobian determinant of F, denoted by JF, is defined as the determinant of the N × N Jacobian matrix consisting of the partial derivatives of fi with respect to Xj:
then JF is itself a polynomial function of the N variables X1, ..., XN.
Formulation of the conjecture
It follows from the multivariable chain rule that if F has a polynomial inverse function G: kN → kN, then JF has a polynomial reciprocal, so is a nonzero constant. The Jacobian conjecture is the following partial converse:
Jacobian conjecture: Let k have characteristic 0. If JF is a non-zero constant, then F has an inverse function G: kN → kN which is regular, meaning its components are polynomials.
According to van den Essen, the problem was first conjectured by Keller in 1939 for the limited case of two variables and integer coefficients.
The obvious analogue of the Jacobian conjecture fails if k has characteristic p > 0 even for one variable. The characteristic of a field, if it is not zero, must be prime, so it is at least 2. The polynomial $x - x^p$ has derivative $1 - px^{p-1}$, which is 1 (because px is 0), but it has no inverse function. However, it has been suggested that the Jacobian conjecture can be extended to characteristic p > 0 by adding the hypothesis that p does not divide the degree of the field extension $k(X)/k(F)$.
The existence of a polynomial inverse is obvious if F is simply a set of functions linear in the variables, because then the inverse will also be a set of linear functions. A simple non-linear example is given by
so that the Jacobian determinant is
In this case the inverse exists as the polynomials
But if we modify F slightly, to
then the determinant is
which is not constant, and the Jacobian conjecture does not apply.
The function still has an inverse:
but the expression for x is not a polynomial.
The condition JF ≠ 0 is related to the inverse function theorem in multivariable calculus. In fact for smooth functions (and so in particular for polynomials) a smooth local inverse function to F exists at every point where JF is non-zero. For example, the map x → x + x3 has a smooth global inverse, but the inverse is not polynomial.
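As a concrete sanity check of the definitions above, the following SymPy sketch verifies a degree-2 example (the map F(x, y) = (x + y², y) is chosen here for illustration and is not necessarily the example elided above): its Jacobian determinant is the constant 1 and it has a polynomial inverse.

```python
# Illustrative polynomial map with constant Jacobian determinant and polynomial inverse.
# The map F(x, y) = (x + y**2, y) is an assumed example, not taken from the article text.
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
F = sp.Matrix([x + y**2, y])    # polynomial map F : k^2 -> k^2
J = F.jacobian([x, y])          # 2 x 2 Jacobian matrix
print(J.det())                  # -> 1, a non-zero constant

G = sp.Matrix([u - v**2, v])    # candidate polynomial inverse G
composed = G.subs({u: F[0], v: F[1]})   # G(F(x, y))
print(sp.simplify(composed.T))  # -> Matrix([[x, y]]), so G inverts F
```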
Results
Stuart Sui-Sheng Wang proved the Jacobian conjecture for polynomials of degree 2. Hyman Bass, Edwin Connell, and David Wright showed that the general case follows from the special case where the polynomials are of degree 3, or even more specifically, of cubic homogeneous type, meaning of the form F = (X1 + H1, ..., Xn + Hn), where each Hi is either zero or a homogeneous cubic. Ludwik Drużkowski showed that one may further assume that the map is of cubic linear type, meaning that the nonzero Hi are cubes of homogeneous linear polynomials. It seems that Drużkowski's reduction is one of the most promising ways to go forward. These reductions introduce additional variables and so are not available for fixed N.
Edwin Connell and Lou van den Dries proved that if the Jacobian conjecture is false, then it has a counterexample with integer coefficients and Jacobian determinant 1. In consequence, the Jacobian conjecture is true either for all fields of characteristic 0 or for none. For fixed dimension N, it is true if it holds for at least one algebraically closed field of characteristic 0.
Let k[X] denote the polynomial ring and k[F] denote the k-subalgebra generated by f1, ..., fn. For a given F, the Jacobian conjecture is true if, and only if, . Keller (1939) proved the birational case, that is, where the two fields k(X) and k(F) are equal. The case where k(X) is a Galois extension of k(F) was proved by Andrew Campbell for complex maps and in general by Michael Razar and, independently, by David Wright. Tzuong-Tsieng Moh checked the conjecture for polynomials of degree at most 100 in two variables.
Michiel de Bondt and Arno van den Essen and Ludwik Drużkowski independently showed that it is enough to prove the Jacobian Conjecture for complex maps of cubic homogeneous type with a symmetric Jacobian matrix, and further showed that the conjecture holds for maps of cubic linear type with a symmetric Jacobian matrix, over any field of characteristic 0.
The strong real Jacobian conjecture was that a real polynomial map with a nowhere vanishing Jacobian determinant has a smooth global inverse. That is equivalent to asking whether such a map is topologically a proper map, in which case it is a covering map of a simply connected manifold, hence invertible. Sergey Pinchuk constructed two variable counterexamples of total degree 35 and higher.
It is well known that the Dixmier conjecture implies the Jacobian conjecture. Conversely, it is shown by Yoshifumi Tsuchimoto and independently by Alexei Belov-Kanel and Maxim Kontsevich that the Jacobian conjecture for 2N variables implies the Dixmier conjecture in N dimensions. A self-contained and purely algebraic proof of the last implication is also given by Kossivi Adjamagbo and Arno van den Essen who also proved in the same paper that these two conjectures are equivalent to the Poisson conjecture.
See also
List of unsolved problems in mathematics
References
External links
Web page of Tzuong-Tsieng Moh on the conjecture
Polynomials
Algebraic geometry
Conjectures
Unsolved problems in geometry | Jacobian conjecture | [
"Mathematics"
] | 1,495 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Polynomials",
"Unsolved problems in geometry",
"Conjectures",
"Fields of abstract algebra",
"Algebraic geometry",
"Mathematical problems",
"Algebra"
] |
1,193,903 | https://en.wikipedia.org/wiki/Uncontrolled%20decompression | An uncontrolled decompression is an undesired drop in the pressure of a sealed system, such as a pressurised aircraft cabin or hyperbaric chamber, that typically results from human error, structural failure, or impact, causing the pressurised vessel to vent into its surroundings or fail to pressurize at all.
Such decompression may be classed as explosive, rapid, or slow:
Explosive decompression (ED) is violent and too fast for air to escape safely from the lungs and other air-filled cavities in the body such as the sinuses and eustachian tubes, typically resulting in severe to fatal barotrauma.
Rapid decompression may be slow enough to allow cavities to vent but may still cause serious barotrauma or discomfort.
Slow or gradual decompression occurs so slowly that it may not be sensed before hypoxia sets in.
Description
The term uncontrolled decompression here refers to the unplanned depressurisation of vessels that are occupied by people; for example, a pressurised aircraft cabin at high altitude, a spacecraft, or a hyperbaric chamber. For the catastrophic failure of other pressure vessels used to contain gas, liquids, or reactants under pressure, the term explosion is more commonly used, or other specialised terms such as BLEVE may apply to particular situations.
Decompression can occur due to structural failure of the pressure vessel, or failure of the compression system itself. The speed and violence of the decompression is affected by the size of the pressure vessel, the differential pressure between the inside and outside of the vessel, and the size of the leak hole.
The US Federal Aviation Administration recognizes three distinct types of decompression events in aircraft: explosive, rapid, and gradual decompression.
Explosive decompression
Explosive decompression occurs typically in less than 0.1 to 0.5 seconds, a change in cabin pressure faster than the lungs can decompress. Normally, the time required to release air from the lungs without restrictions, such as masks, is 0.2 seconds. The risk of lung trauma is very high, as is the danger from any unsecured objects that can become projectiles because of the explosive force, which may be likened to a bomb detonation.
Immediately after an explosive decompression, a heavy fog may fill the aircraft cabin as the air cools, raising the relative humidity and causing sudden condensation. Military pilots with oxygen masks must pressure-breathe, whereby the lungs fill with air when relaxed, and effort has to be exerted to expel the air again.
Rapid decompression
Rapid decompression typically takes more than 0.1 to 0.5 seconds, allowing the lungs to decompress more quickly than the cabin. The risk of lung damage is still present, but significantly reduced compared with explosive decompression.
Gradual decompression
Slow, or gradual, decompression occurs slowly enough to go unnoticed and might only be detected by instruments. This type of decompression may also come about from a failure to pressurize the cabin as an aircraft climbs to altitude. An example of this is the 2005 Helios Airways Flight 522 crash, in which maintenance personnel left the pressurization system in manual mode and the pilots did not check the pressurization settings. As a result, the pilots, along with most of the passengers and cabin crew, lost consciousness due to hypoxia (lack of oxygen). The plane continued to fly on the autopilot system and eventually crashed due to fuel exhaustion after departing from its flight path.
Decompression injuries
The following physical injuries may be associated with decompression incidents:
Hypoxia is the most serious risk associated with decompression, especially as it may go undetected or incapacitate the aircrew.
Barotrauma: an inability to equalize pressure in internal air spaces such as the middle ear or gastrointestinal tract, or more serious injury such as a burst lung.
Decompression sickness.
Altitude sickness.
Frostbite or hypothermia from exposure to freezing cold air at high altitude.
Physical trauma caused by the violence of explosive decompression, which can turn people and loose objects into projectiles.
At least two confirmed cases have been documented of a person being blown through an airplane passenger window. The first occurred in 1973 when debris from an engine failure struck a window roughly midway in the fuselage. Despite efforts to pull the passenger back into the airplane, the occupant was forced entirely through the cabin window. The passenger's skeletal remains were eventually found by a construction crew, and were positively identified two years later. The second incident occurred on April 17, 2018, when a woman on Southwest Airlines Flight 1380 was partially blown through an airplane passenger window that had broken from a similar engine failure. Although the other passengers were able to pull her back inside, she later died from her injuries. In both incidents, the plane landed safely with the sole fatality being the person seated next to the window involved.
According to NASA scientist Geoffrey A. Landis, the effect depends on the size of the hole, which can be expanded by debris that is blown through it; "it would take about 100 seconds for pressure to equalise through a roughly hole in the fuselage of a Boeing 747." Anyone blocking the hole would have half a ton of force pushing them towards it, but this force reduces rapidly with distance from the hole.
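The "half a ton" figure can be reproduced with a rough back-of-the-envelope estimate (the hole size quoted by Landis is elided above, so a 30 cm diameter and a typical cruise pressure differential are assumed here purely for illustration):

```python
# Rough estimate of the force on a body blocking a fuselage hole.
# Both numbers below are assumptions chosen for illustration only.
import math

diameter = 0.30                        # m, assumed hole diameter
delta_p = 55e3                         # Pa, typical cabin-to-ambient differential at cruise
area = math.pi * (diameter / 2) ** 2   # ~0.071 m^2
force = delta_p * area                 # ~3.9 kN
print(f"force across the hole: {force / 9.81:.0f} kgf")  # roughly 400 kgf, about half a ton
```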
Implications for aircraft design
Modern aircraft are specifically designed with longitudinal and circumferential reinforcing ribs in order to prevent localised damage from tearing the whole fuselage open during a decompression incident. However, decompression events have nevertheless proved fatal for aircraft in other ways. In 1974, explosive decompression onboard Turkish Airlines Flight 981 caused the floor to collapse, severing vital flight control cables in the process. The FAA issued an Airworthiness Directive the following year requiring manufacturers of wide-body aircraft to strengthen floors so that they could withstand the effects of in-flight decompression caused by an opening of up to in the lower deck cargo compartment. Manufacturers were able to comply with the Directive either by strengthening the floors and/or installing relief vents called "dado panels" between the passenger cabin and the cargo compartment.
Cabin doors are designed to make it nearly impossible to lose pressurization through opening a cabin door in flight, either accidentally or intentionally. The plug door design ensures that when the pressure inside the cabin exceeds the pressure outside, the doors are forced shut and will not open until the pressure is equalized. Cabin doors, including the emergency exits, but not all cargo doors, open inwards, or must first be pulled inwards and then rotated before they can be pushed out through the door frame because at least one dimension of the door is larger than the door frame. Pressurization prevented the doors of Saudia Flight 163 from being opened on the ground after the aircraft made a successful emergency landing, resulting in the deaths of all 287 passengers and 14 crew members from fire and smoke.
Prior to 1996, approximately 6,000 large commercial transport airplanes were type certified to fly up to , without being required to meet special conditions related to flight at high altitude. In 1996, the FAA adopted Amendment 25–87, which imposed additional high-altitude cabin-pressure specifications for new designs of aircraft types. For aircraft certified to operate above 25,000 feet (FL 250; 7,600 m), it "must be designed so that occupants will not be exposed to cabin pressure altitudes in excess of 15,000 feet (4,600 m) after any probable failure condition in the pressurization system." In the event of a decompression which results from "any failure condition not shown to be extremely improbable," the aircraft must be designed so that occupants will not be exposed to a cabin altitude exceeding 25,000 feet (7,600 m) for more than 2 minutes, nor exceeding an altitude of 40,000 feet (12,000 m) at any time. In practice, that new FAR amendment imposes an operational ceiling of 40,000 feet on the majority of newly designed commercial aircraft.
In 2004, Airbus successfully petitioned the FAA to allow cabin pressure of the A380 to reach in the event of a decompression incident and to exceed for one minute. This special exemption allows the A380 to operate at a higher altitude than other newly designed civilian aircraft, which have not yet been granted a similar exemption.
International standards
The Depressurization Exposure Integral (DEI) is a quantitative model that is used by the FAA to enforce compliance with decompression-related design directives. The model relies on the fact that the pressure that the subject is exposed to and the duration of that exposure are the two most important variables at play in a decompression event.
Other national and international standards for explosive decompression testing include:
MIL-STD-810, 202
RTCA/DO-160
NORSOK M710
API 17K and 17J
NACE TM0192 and TM0297
TOTALELFFINA SP TCS 142 Appendix H
Notable decompression accidents and incidents
Decompression incidents are not uncommon on military and civilian aircraft, with approximately 40–50 rapid decompression events occurring worldwide annually. However, in most cases the problem is manageable, injuries or structural damage rare and the incident not considered notable. One notable, recent case was Southwest Airlines Flight 1380 in 2018, where an uncontained engine failure ruptured a window, causing a passenger to be partially blown out.
Decompression incidents do not occur solely in aircraft; the Byford Dolphin accident is an example of violent explosive decompression of a saturation diving system on an oil rig. A decompression event is often the result of a failure caused by another problem (such as an explosion or mid-air collision), but the decompression event may worsen the initial issue.
Myths
A bullet through a window may cause explosive decompression
In 2004, the TV show MythBusters examined whether explosive decompression occurs when a bullet is fired through the fuselage of an airplane informally by way of several tests using a decommissioned pressurised DC-9. A single shot through the side or the window did not have any effect – it took actual explosives to cause explosive decompression – suggesting that the fuselage is designed to prevent people from being blown out. Professional pilot David Lombardo states that a bullet hole would have no perceived effect on cabin pressure as the hole would be smaller than the opening of the aircraft's outflow valve.
NASA scientist Geoffrey A. Landis points out, though, that the impact depends on the size of the hole, which can be expanded by debris that is blown through it, and that anyone sitting next to such a hole would have about half a ton of force pulling them towards it. As described above, at least two confirmed cases have been documented of a person being blown wholly or partially through an airplane passenger window broken by debris from an engine failure, one in 1973 and one aboard Southwest Airlines Flight 1380 in 2018; in both incidents the plane landed safely and the sole fatality was the person seated next to the window involved. Fictional accounts of this include a scene in Goldfinger, when James Bond kills the eponymous villain by blowing him out of a passenger window, and Die Another Day, when an errant gunshot shatters a window on a cargo plane, creating a hole that rapidly expands and causes multiple enemy officials, henchmen and the main villain to be sucked out to their deaths.
Exposure to a vacuum causes the body to explode
This persistent myth is based on a failure to distinguish between two types of decompression and their exaggerated portrayal in some fictional works. The first type of decompression deals with changing from normal atmospheric pressure (one atmosphere) to a vacuum (zero atmosphere) which is usually centered around space exploration. The second type of decompression changes from exceptionally high pressure (many atmospheres) to normal atmospheric pressure (one atmosphere) as may occur in deep-sea diving.
The first type is more common as pressure reduction from normal atmospheric pressure to a vacuum can be found in both space exploration and high-altitude aviation. Research and experience have shown that while exposure to a vacuum causes swelling, human skin is tough enough to withstand the drop of one atmosphere. The most serious risk from vacuum exposure is hypoxia, in which the body is starved of oxygen, leading to unconsciousness within a few seconds. Rapid uncontrolled decompression can be much more dangerous than vacuum exposure itself. Even if the victim does not hold their breath, venting through the windpipe may be too slow to prevent the fatal rupture of the delicate alveoli of the lungs. Eardrums and sinuses may also be ruptured by rapid decompression, and soft tissues may be affected by bruises seeping blood. If the victim somehow survived, the stress and shock would accelerate oxygen consumption, leading to hypoxia at a rapid rate. At the extremely low pressures encountered at altitudes above about , the boiling point of water becomes less than normal body temperature. This measure of altitude is known as the Armstrong limit, which is the practical limit to survivable altitude without pressurization. Fictional accounts of bodies exploding due to exposure from a vacuum include, among others, several incidents in the movie Outland, while in the movie Total Recall, characters appear to suffer effects of ebullism and blood boiling when exposed to the atmosphere of Mars.
The second type is rare since it involves a pressure drop over several atmospheres, which would require the person to have been placed in a pressure vessel. The only likely situation in which this might occur is during decompression after deep-sea diving. A pressure drop as small as 100 Torr (13 kPa), which produces no symptoms if it is gradual, may be fatal if it occurs suddenly. One such incident occurred in 1983 in the North Sea, where violent explosive decompression from nine atmospheres to one caused four divers to die instantly from massive and lethal barotrauma. Dramatized fictional accounts of this include a scene from the film Licence to Kill, when a character's head explodes after his hyperbaric chamber is rapidly depressurized, and another in the film DeepStar Six, wherein rapid depressurization causes a character to hemorrhage profusely before exploding in a similar fashion.
See also
Notes
References
External links
Human Exposure to Vacuum
Will an astronaut explode if he takes off his helmet?
Mechanical failure modes
Aviation accidents and incidents
Aviation medicine
Underwater diving medicine | Uncontrolled decompression | [
"Materials_science",
"Technology",
"Engineering"
] | 3,146 | [
"Structural engineering",
"Mechanical failure modes",
"Mechanical failure",
"Technological failures"
] |
1,194,086 | https://en.wikipedia.org/wiki/Spectral%20signature | Spectral signature is the variation of reflectance or emittance of a material with respect to wavelengths (i.e., reflectance/emittance as a function of wavelength). The spectral signature of stars indicates the composition of the stellar atmosphere. The spectral signature of an object is a function of the incidental EM wavelength and material interaction with that section of the electromagnetic spectrum.
The measurements can be made with various instruments, including a task-specific spectrometer, although the most common method is separation of the red, green, blue and near-infrared portions of the EM spectrum as acquired by digital cameras. Calibration spectral signatures, collected under specific illumination conditions, are used to apply corrections to airborne or satellite digital imagery.
The user of one kind of spectroscope looks through it at a tube of ionized gas. The user sees specific lines of colour falling on a graduated scale. Each substance will have its own unique pattern of spectral lines.
Most remote sensing applications process digital images to extract spectral signatures at each pixel and use them to divide the image into groups of similar pixels (segmentation) using different approaches. As a last step, they assign a class to each group (classification) by comparing with known spectral signatures. Depending on pixel resolution, a pixel can represent many spectral signatures "mixed" together, which is why much remote sensing analysis is devoted to "unmixing" such mixtures. Ultimately, correctly matching the spectral signature recorded by an image pixel with the spectral signature of a known material leads to accurate classification in remote sensing.
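A minimal sketch of that last step (hypothetical band values and a simple nearest-signature rule; operational systems typically use more elaborate classifiers such as spectral angle mapping or maximum likelihood):

```python
# Classify a pixel by comparing its spectral signature with known references.
# The reference reflectances below are made-up values for illustration only.
import numpy as np

# mean reflectance in (red, green, blue, near-infrared) bands
references = {
    "water":      np.array([0.05, 0.06, 0.08, 0.02]),
    "vegetation": np.array([0.08, 0.15, 0.06, 0.60]),
    "bare soil":  np.array([0.30, 0.28, 0.25, 0.40]),
}

def classify(pixel):
    # pick the class whose reference signature is closest in Euclidean distance
    return min(references, key=lambda name: np.linalg.norm(pixel - references[name]))

print(classify(np.array([0.07, 0.14, 0.07, 0.55])))  # -> vegetation
```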
See also
Spectroscopy
Spectral imaging
Hyperspectral imaging
Multispectral image
References
Spectroscopy | Spectral signature | [
"Physics",
"Chemistry"
] | 325 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
1,194,140 | https://en.wikipedia.org/wiki/Slow%20sand%20filter | Slow sand filters are used in water purification for treating raw water to produce a potable product. They are typically deep, can be rectangular or cylindrical in cross section and are used primarily to treat surface water. The length and breadth of the tanks are determined by the flow rate desired for the filters, which typically have a loading rate of per square metre per hour.
Slow sand filters differ from all other filters used to treat drinking water in that they work by using a complex biofilm that grows naturally on the surface of the sand. The sand itself does not perform any filtration function but simply acts as a substrate, unlike its counterparts for ultraviolet and pressurized treatments. Although they are often preferred technology in many developing countries because of their low energy requirements and robust performance, they are also used to treat water in some developed countries, such as the UK, where they are used to treat water supplied to London. Slow sand filters now are also being tested for pathogen control of nutrient solutions in hydroponic systems.
History
The first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in Paisley, Scotland, John Gibb, installed an experimental filter created by engineer Robert Thom, selling his unwanted surplus to the public. This method was refined in the following two decades by engineers working for private water companies, and it culminated in the first treated public water supply in the world, installed by engineer James Simpson for the Chelsea Waterworks Company in London in 1829. This installation provided filtered water for every resident of the area, and the network design was widely copied throughout the United Kingdom in the ensuing decades.
The practice of water treatment soon became mainstream, and the virtues of the system were made starkly apparent after the investigations of the physician John Snow during the 1854 Broad Street cholera outbreak. Snow was sceptical of the then-dominant miasma theory that stated that diseases were caused by noxious "bad airs". Although the germ theory of disease had not yet been developed, Snow's observations led him to discount the prevailing theory. His 1855 essay On the Mode of Communication of Cholera conclusively demonstrated the role of the water supply in spreading the cholera epidemic in Soho, with the use of a dot distribution map and statistical proof to illustrate the connection between the quality of the water source and cholera cases. His data convinced the local council to disable the water pump, which promptly ended the outbreak.
The Metropolis Water Act introduced the regulation of the water supply companies in London, including minimum standards of water quality for the first time. The Act "made provision for securing the supply to the Metropolis of pure and wholesome water", and required that all water be "effectually filtered" from 31 December 1855. This was followed up with legislation for the mandatory inspection of water quality, including comprehensive chemical analyses, in 1858. This legislation set a worldwide precedent for similar state public health interventions across Europe. The Metropolitan Commission of Sewers was formed at the same time, water filtration was adopted throughout the country, and new water intakes on the Thames were established above Teddington Lock.
Water treatment came to the United States in 1872 when Poughkeepsie, New York, opened the first slow sand filtration plant, dramatically reducing instances of cholera and typhoid fever which had been seriously impacting the local community. Poughkeepsie's design criteria were used throughout the country as a model for other municipalities. Poughkeepsie's original treatment facility operated continuously for 87 years before being replaced in 1959.
Method of operation
Slow sand filters work through the formation of a gelatinous layer (or biofilm) called the hypogeal layer or Schmutzdecke in the top few millimetres of the fine sand layer. The Schmutzdecke is formed in the first 10–20 days of operation and consists of bacteria, fungi, protozoa, rotifera and a range of aquatic insect larvae. As an epigeal biofilm ages, more algae tend to develop and larger aquatic organisms may be present including some bryozoa, snails and Annelid worms. The surface biofilm is the layer that provides the effective purification in potable water treatment, the underlying sand providing the support medium for this biological treatment layer. As water passes through the hypogeal layer, particles of foreign matter are trapped in the mucilaginous matrix and soluble organic material is adsorbed. The contaminants are metabolised by the bacteria, fungi and protozoa. The water produced from an exemplary slow sand filter is of excellent quality with 90–99% bacterial cell count reduction. Typically, in the UK slow sand filters have a bed depth of 0.3 to 0.6 metres comprising 0.2 to 0.4 mm sand. The throughput is 0.25 m/h.
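For a rough sense of the bed area such a loading rate implies, here is a back-of-the-envelope sizing sketch using the 0.25 m/h throughput quoted above (the demand figure is an assumption chosen for illustration):

```python
# Rough slow sand filter sizing from a design demand and hydraulic loading rate.
# The daily demand below is an assumed illustrative figure.
loading_rate = 0.25      # m^3 of water per m^2 of bed per hour (from the text above)
daily_demand = 120.0     # m^3/day, e.g. a small community (assumed)
operating_hours = 24.0   # slow sand filters run continuously

required_area = daily_demand / (loading_rate * operating_hours)
print(f"required bed area: {required_area:.0f} m^2")  # -> 20 m^2, before any redundancy
```

In practice the total area would be split over at least two beds so that one can be taken out of service for cleaning, as noted below.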
Slow sand filters slowly lose their performance as the biofilm thickens and thereby reduces the rate of flow through the filter. Eventually, it is necessary to refurbish the filter. Two methods are commonly used to do this. In the first, the top few millimetres of fine sand is scraped off to expose a new layer of clean sand. Water is then decanted back into the filter and re-circulated for a few hours to allow a new biofilm to develop. The filter is then filled to full volume and brought back into service. The second method, sometimes called wet harrowing, involves lowering the water level to just above the hypogeal layer, stirring the sand; thus precipitating any solids held in that layer and allowing the remaining water to wash through the sand. The filter column is then filled to full capacity and brought back into service. Wet harrowing can allow the filter to be brought back into service more quickly.
Features
Slow sand filters have a number of unique qualities:
Unlike other filtration methods, slow sand filters use biological processes to clean the water, and are non-pressurized systems. Slow sand filters do not require chemicals or electricity to operate.
Cleaning is traditionally done by use of a mechanical scraper, which is usually driven into the filter bed once the bed has been dried out. However, some slow sand filter operators use a method called "wet harrowing", where the sand is scraped while still under water, and the water used for cleaning is drained to waste.
For municipal systems there usually is a certain degree of redundancy, since it is desirable for the maximum required throughput of water to be achievable with one or more beds out of service.
Slow sand filters require relatively low turbidity levels to operate efficiently. In summer conditions with high microbial activity and in conditions when the raw water is turbid, blinding of the filters due to bioclogging occurs more quickly and pre-treatment is recommended.
Unlike other water filtration technologies that produce water on demand, slow sand filters produce water at a slow, constant flow rate and are usually used in conjunction with a storage tank for peak usage. This slow rate is necessary for healthy development of the biological processes in the filter.
While many municipal water treatment works will have 12 or more beds in use at any one time, smaller communities or households may only have one or two filter beds.
In the base of each bed is a series of herringbone drains that are covered with a layer of pebbles which in turn is covered with coarse gravel. Further layers of sand are placed on top followed by a thick layer of fine sand. The whole depth of filter material may be more than 1 metre in depth, the majority of which will be fine sand material. On top of the sand bed sits a supernatant layer of unpurified water.
Advantages
As they require little or no mechanical power, chemicals or replaceable parts, and they require minimal operator training and only periodic maintenance, they are often an appropriate technology for poor and isolated areas.
Slow sand filters, due to their simple design, may be created DIY. DIY-slow sand filters have been used by organisations like Tearfund in Democratic Republic of Congo and other countries to aid the poor.
Slow sand filters are recognized by the World Health Organization, Oxfam, and the United States Environmental Protection Agency as being a superior technology for the treatment of surface water sources in small water systems. According to the World Health Organization, "Under suitable circumstances, slow sand filtration may be not only the cheapest and simplest but also the most efficient method of water treatment."
Disadvantages
Due to the low filtration rate, slow sand filters require extensive land area for a large municipal system. Many municipal systems in the U.S. initially used slow sand filters, but as cities grew, demand for drinking water increased and high-turbidity source waters had to be treated, so they subsequently installed rapid sand filters.
See also
Rapid sand filter
Notes
References
"UN High Commissioner for Refugees (UNHCR) Water Manual for Refugee Situations", Geneva, November 1992. Slow sand filters recommendations listed on, p. 38.
"Small System Compliance Technology List for The Surface Water Treatment Rule", United States Environmental Protection Agency, EPA 815-R-97-002 August 1997. Slow sand filtration is listed on, p. 24.
Water filters
Appropriate technology
Environmental soil science
DIY culture
Sand | Slow sand filter | [
"Chemistry",
"Environmental_science"
] | 1,903 | [
"Water treatment",
"Water filters",
"Environmental soil science",
"Filters"
] |
1,194,622 | https://en.wikipedia.org/wiki/General%20Data%20Format%20for%20Biomedical%20Signals | The General Data Format for Biomedical Signals is a scientific and medical data file format. The aim of GDF is to combine and integrate the best features of all biosignal file formats into a single file format.
The original GDF specification was introduced in 2005 as a new data format to overcome some of the limitations of the European Data Format for Biosignals (EDF). GDF was also designed to unify a number of file formats which had been designed for very specific applications (for example, in ECG research and EEG analysis). The original specification included a binary header and used an event table. An updated specification (GDF v2) was released in 2011 and added fields for additional subject-specific information (gender, age, etc.) and utilized several standard codes for storing physical units and other properties. In 2015, the Austrian Standardization Institute made GDF an official Austrian Standard (https://shop.austrian-standards.at/action/en/public/details/553360/OENORM_K_2204_2015_11_15), and the revision number was updated to v3.
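As a minimal illustration of the binary header mentioned above, the sketch below reads only the 8-byte version identification string with which a GDF file begins (for example "GDF 2.20"). The file name is a placeholder, and nothing beyond the version field is parsed; for real work the BioSig or libGDF implementations mentioned below should be used instead.

```python
# Minimal sketch: inspect the leading 8-byte version field of a GDF file.
# "recording.gdf" is a placeholder path; the rest of the fixed header is not parsed here.
def read_gdf_version(path):
    with open(path, "rb") as f:
        version_bytes = f.read(8)                  # e.g. b"GDF 2.20"
    version = version_bytes.decode("ascii", errors="replace")
    if not version.startswith("GDF"):
        raise ValueError(f"Not a GDF file: version field was {version!r}")
    return version

# Example usage:
# print(read_gdf_version("recording.gdf"))
```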
The GDF format is often used in brain–computer interface research. However, since GDF provides a superset of features from many different file formats, it could be also used for many other domains.
The free and open source software BioSig library provides implementations for reading and writing of GDF in GNU Octave/MATLAB and C/C++. A lightweight C++ library called libGDF is also available and implements version 2 of the GDF format.
See also
List of file formats
External links
GDF v2.0 specification
OeNORM K2204:2015
References
Bioinformatics
Standards for electronic health records
Computer file formats | General Data Format for Biomedical Signals | [
"Engineering",
"Biology"
] | 370 | [
"Bioinformatics",
"Biological engineering"
] |
1,194,729 | https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor%20theorem | In mathematics, the Heine–Cantor theorem states that a continuous function between two metric spaces is uniformly continuous if its domain is compact.
The theorem is named after Eduard Heine and Georg Cantor.
An important special case of the Heine–Cantor theorem is that every continuous function from a closed bounded interval to the real numbers is uniformly continuous.
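A standard formal statement of the theorem, given here for reference in the usual epsilon–delta formulation, reads:

```latex
\textbf{Theorem (Heine--Cantor).}
Let $(M, d_M)$ and $(N, d_N)$ be metric spaces with $M$ compact, and let
$f \colon M \to N$ be continuous. Then $f$ is uniformly continuous, i.e.
\[
  \forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x, y \in M :
  \quad d_M(x, y) < \delta \;\Longrightarrow\; d_N\bigl(f(x), f(y)\bigr) < \varepsilon .
\]
```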
For an alternative proof in the case of a closed interval, see the article Non-standard calculus.
See also
Cauchy-continuous function
External links
Theory of continuous functions
Metric geometry
Theorems in analysis
Articles containing proofs | Heine–Cantor theorem | [
"Mathematics"
] | 112 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theory of continuous functions",
"Topology",
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems"
] |
1,194,789 | https://en.wikipedia.org/wiki/Flexible%20organic%20light-emitting%20diode | A flexible organic light-emitting diode (FOLED) is a type of organic light-emitting diode (OLED) incorporating a flexible plastic substrate on which the electroluminescent organic semiconductor is deposited. This enables the device to be bent or rolled while still operating. Currently the focus of research in industrial and academic groups, flexible OLEDs form one method of fabricating a rollable display.
Technical details and applications
An OLED emits light due to the electroluminescence of thin films of organic semiconductors approximately 100 nm thick. Regular OLEDs are usually fabricated on a glass substrate, but by replacing glass with a flexible plastic such as polyethylene terephthalate (PET) among others, OLEDs can be made both bendable and lightweight.
Such materials may not be suitable for comparable devices based on inorganic semiconductors due to the need for lattice matching and the high temperature fabrication procedure involved.
In contrast, flexible OLED devices can be fabricated by deposition of the organic layer onto the substrate using a method derived from inkjet printing, allowing the inexpensive and roll-to-roll fabrication of printed electronics.
Flexible OLEDs may be used in the production of rollable displays, electronic paper, or bendable displays which can be integrated into clothing, wallpaper or other curved surfaces. Prototype displays have been exhibited by companies such as Sony, which are capable of being rolled around the width of a pencil.
Disadvantages
Both the flexible substrate itself and the process of bending the device introduce stress into the materials. There may be residual stress from the deposition of layers onto a flexible substrate and thermal stresses due to the different coefficients of thermal expansion of materials in the device, in addition to the external stress from the bending of the device.
Stress introduced into the organic layers may lower the efficiency or brightness of the device as it is deformed, or cause complete breakdown of the device altogether. Indium tin oxide (ITO), the material most commonly used as the transparent anode, is brittle. Fracture of the anode can occur which can increase the sheet resistance of the ITO or disrupt the layered structure of the OLED. Although ITO is the most common and best understood anode material used in OLEDs, research has been undertaken into alternative materials that are better suited for flexible applications including carbon nanotubes.
Encapsulation is another challenge for flexible OLED devices. The materials in an OLED are sensitive to air and moisture which lead to degradation of the materials themselves as well as quenching of excited states within the molecule. The common method of encapsulation for regular OLEDs is to seal the organic layer between glass. Flexible encapsulation methods are generally not as effective a barrier to air and moisture as glass, and current research aims to improve the encapsulation of flexible organic light emitting diodes.
See also
Flexible electronics
Organic light-emitting diode
Phosphorescent organic light-emitting diode
Rollable display
References
External links
Are Foldable Laptops the Future?
Conductive polymers
Display technology
Electronic engineering
Flexible electronics
Molecular electronics
Optical diodes
Organic electronics | Flexible organic light-emitting diode | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 633 | [
"Molecular physics",
"Computer engineering",
"Molecular electronics",
"Electronic engineering",
"Flexible electronics",
"Display technology",
"Nanotechnology",
"Electrical engineering",
"Conductive polymers"
] |
1,196,185 | https://en.wikipedia.org/wiki/Pre-intuitionism | In the philosophy of mathematics, the pre-intuitionists is the name given by L. E. J. Brouwer to several influential mathematicians who shared similar opinions on the nature of mathematics. The term was introduced by Brouwer in his 1951 lectures at Cambridge where he described the differences between his philosophy of intuitionism and its predecessors:
Of a totally different orientation [from the "Old Formalist School" of Dedekind, Cantor, Peano, Zermelo, and Couturat, etc.] was the Pre-Intuitionist School, mainly led by Poincaré, Borel and Lebesgue. These thinkers seem to have maintained a modified observational standpoint for the introduction of natural numbers, for the principle of complete induction [...] For these, even for such theorems as were deduced by means of classical logic, they postulated an existence and exactness independent of language and logic and regarded its non-contradictority as certain, even without logical proof. For the continuum, however, they seem not to have sought an origin strictly extraneous to language and logic.
The introduction of natural numbers
The pre-intuitionists, as defined by L. E. J. Brouwer, differed from the formalist standpoint in several ways, particularly in regard to the introduction of natural numbers, or how the natural numbers are defined/denoted. For Poincaré, the definition of a mathematical entity is the construction of the entity itself and not an expression of an underlying essence or existence.
This is to say that no mathematical object exists without human construction of it, both in mind and language.
The principle of complete induction
This sense of definition allowed Poincaré to argue with Bertrand Russell over Giuseppe Peano's axiomatic theory of natural numbers.
Peano's fifth axiom states:
Allow that: zero has a property P;
and: if every natural number less than a number x has the property P, then x also has the property P.
Therefore: every natural number has the property P.
This is the principle of complete induction, which establishes the property of induction as necessary to the system. Since Peano's axiom is as infinite as the natural numbers, it is difficult to prove that the property of P does belong to any x and also x + 1. What one can do is say that, if after some number n of trials that show a property P conserved in x and x + 1, then we may infer that it will still hold to be true after n + 1 trials. But this is itself induction. And hence the argument begs the question.
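Written out symbolically, the principle under discussion, in the strong form paraphrased above, can be rendered as follows; this formalization is supplied for clarity and is not Poincaré's own notation:

```latex
% Complete (strong) induction as paraphrased above:
P(0) \;\land\; \forall x \,\Bigl[\bigl(\forall y < x : P(y)\bigr) \rightarrow P(x)\Bigr]
\;\longrightarrow\; \forall x \, P(x)
```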
From this Poincaré argues that if we fail to establish the consistency of Peano's axioms for natural numbers without falling into circularity, then the principle of complete induction is not provable by general logic.
Thus arithmetic, and mathematics in general, is not analytic but synthetic. Logicism is thus rebuked and intuition is held up. What Poincaré and the Pre-Intuitionists shared was the perception of a difference between logic and mathematics that is not a matter of language alone, but of knowledge itself.
Arguments over the excluded middle
It was for this assertion, among others, that Poincaré was considered to be similar to the intuitionists. For Brouwer though, the Pre-Intuitionists failed to go as far as necessary in divesting mathematics from metaphysics, for they still used principium tertii exclusi (the "law of excluded middle").
The principle of the excluded middle does lead to some strange situations. For instance, statements about the future such as "There will be a naval battle tomorrow" do not seem to be either true or false, yet. So there is some question whether statements must be either true or false in some situations. To an intuitionist this seems to rank the law of excluded middle as just as unrigorous as Peano's vicious circle.
Yet to the Pre-Intuitionists this is mixing apples and oranges. For them mathematics was one thing (a muddled invention of the human mind, i.e., synthetic), and logic was another (analytic).
Other pre-intuitionists
The above examples only include the works of Poincaré, and yet Brouwer named other mathematicians as Pre-Intuitionists too; Borel and Lebesgue. Other mathematicians such as Hermann Weyl (who eventually became disenchanted with intuitionism, feeling that it places excessive strictures on mathematical progress) and Leopold Kronecker also played a role—though they are not cited by Brouwer in his definitive speech.
In fact Kronecker might be the most famous of the Pre-Intuitionists for his singular and oft quoted phrase, "God made the natural numbers; all else is the work of man."
Kronecker goes in almost the opposite direction from Poincaré, believing in the natural numbers but not the law of the excluded middle. He was the first mathematician to express doubt on non-constructive existence proofs that state that something must exist because it can be shown that it is "impossible" for it not to.
See also
Conventionalism
Notes
References
Logical Meanderings – a brief article by Jan Sraathof on Brouwer's various attacks on arguments of the Pre-Intuitionists about the Principle of the Excluded Third.
Proof And Intuition – an article on the many varieties of knowledge as they relate to the Intuitionist and Logicist.
Brouwer's Cambridge Lectures on Intuitionism – wherein Brouwer talks about the Pre-Intuitionist School and addresses what he sees as its many shortcomings.
Theories of deduction
History of mathematics | Pre-intuitionism | [
"Mathematics"
] | 1,161 | [
"Theories of deduction"
] |
1,196,909 | https://en.wikipedia.org/wiki/Mohr%E2%80%93Mascheroni%20theorem | In mathematics, the Mohr–Mascheroni theorem states that any geometric construction that can be performed by a compass and straightedge can be performed by a compass alone.
It must be understood that "any geometric construction" refers to figures that contain no straight lines, as it is clearly impossible to draw a straight line without a straightedge. It is understood that a line is determined provided that two distinct points on that line are given or constructed, even though no visual representation of the line will be present. The theorem can be stated more precisely as:
Any Euclidean construction, insofar as the given and required elements are points (or circles), may be completed with the compass alone if it can be completed with both the compass and the straightedge together.
Though the use of a straightedge can make a construction significantly easier, the theorem shows that any set of points that fully defines a constructed figure can be determined with compass alone, and the only reason to use a straightedge is for the aesthetics of seeing straight lines, which for the purposes of construction is functionally unnecessary.
History
The result was originally published by Georg Mohr in 1672, but his proof languished in obscurity until 1928. The theorem was independently discovered by Lorenzo Mascheroni in 1797 and it was known as Mascheroni's Theorem until Mohr's work was rediscovered.
Several proofs of the result are known. Mascheroni's proof of 1797 was generally based on the idea of using reflection in a line as the major tool. Mohr's solution was different. In 1890, August Adler published a proof using the inversion transformation.
An algebraic approach uses the isomorphism between the Euclidean plane and the real coordinate space . In this way, a stronger version of the theorem was proven in 1990. It also shows the dependence of the theorem on Archimedes' axiom (which cannot be formulated in a first-order language).
Constructive proof
Outline
To prove the theorem, each of the basic constructions of compass and straightedge need to be proven to be possible by using a compass alone, as these are the foundations of, or elementary steps for, all other constructions. These are:
Creating the line through two existing points
Creating the circle through one point with centre another point
Creating the point which is the intersection of two existing, non-parallel lines
Creating the one or two points in the intersection of a line and a circle (if they intersect)
Creating the one or two points in the intersection of two circles (if they intersect).
#1 - A line through two points
It is understood that a straight line cannot be drawn without a straightedge. A line is considered to be given by any two points, as any such pair define a unique line. In keeping with the intent of the theorem which we aim to prove, the actual line need not be drawn but for aesthetic reasons.
#2 - A circle through one point with defined center
This can be done with a compass alone. A straightedge is not required for this.
#5 - Intersection of two circles
This construction can also be done directly with a compass.
#3, #4 - The other constructions
Thus, to prove the theorem, only compass-only constructions for #3 and #4 need to be given.
Notation and remarks
The following notation will be used throughout this article. A circle whose center is located at point and that passes through point will be denoted by . A circle with center and radius specified by a number, , or a line segment will be denoted by or , respectively.
In general constructions there are often several variations that will produce the same result. The choices made in such a variant can be made without loss of generality. However, when a construction is being used to prove that something can be done, it is not necessary to describe all these various choices and, for the sake of clarity of exposition, only one variant will be given below. However, many constructions come in different forms depending on whether or not they use circle inversion and these alternatives will be given if possible.
It is also important to note that some of the constructions below proving the Mohr–Mascheroni theorem require the arbitrary placement of points in space, such as finding the center of a circle when not already provided (see construction below). In some construction paradigms - such as in the geometric definition of the constructible number - the arbitrary placement of points may be prohibited. In such a paradigm, however, alternative constructions exist so that arbitrary point placement is unnecessary. It is also worth pointing out that no circle could be constructed without the compass, so in practice there is no reason for a center point not to exist.
Some preliminary constructions
To prove the above constructions #3 and #4, which are included below, a few necessary intermediary constructions are also explained below since they are used and referenced frequently. These are also compass-only constructions. All constructions below rely on #1,#2,#5, and any other construction that is listed prior to it.
Compass equivalence theorem (circle translation)
The ability to translate, or copy, a circle to a new center is vital in these proofs and fundamental to establishing the veracity of the theorem. The creation of a new circle with the same radius as the first, but centered at a different point, is the key feature distinguishing the collapsing compass from the modern, rigid compass. With the rigid compass this is a triviality, but with the collapsing compass it is a question of construction possibility. The equivalence of a collapsing compass and a rigid compass was proved by Euclid (Book I Proposition 2 of The Elements) using straightedge and collapsing compass when he, essentially, constructs a copy of a circle with a different center. This equivalence can also be established with (collapsing) compass alone, a proof of which can be found in the main article.
Reflecting a point across a line
Given a line segment and a point not on the line determined by that segment, construct the image of upon reflection across this line.
Construct two circles: one centered at and one centered at , both passing through .
, the other point of intersection of the two circles, is the reflection of across the line .
If (that is, there is a unique point of intersection of the two circles), then is its own reflection and lies on the line (contrary to the assumption), and the two circles are internally tangential.
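The construction can be checked numerically. The sketch below uses illustrative coordinates (not taken from the article): it intersects the two circles centred at the segment endpoints and passing through the given point, and compares the second intersection with the analytic reflection of that point across the line.

```python
import numpy as np

# Reflecting point C across the line through A and B by compass alone:
# the circles centred at A and at B, both through C, meet at C and at its mirror image.
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 1.0]), np.array([1.0, 3.0])

def circle_intersections(p, rp, q, rq):
    """Intersection points of circles (centre p, radius rp) and (centre q, radius rq)."""
    d = np.linalg.norm(q - p)
    a = (rp**2 - rq**2 + d**2) / (2 * d)          # distance from p to the chord midpoint
    h = np.sqrt(max(rp**2 - a**2, 0.0))           # half-length of the common chord
    mid = p + a * (q - p) / d
    perp = np.array([-(q - p)[1], (q - p)[0]]) / d
    return mid + h * perp, mid - h * perp

r_a, r_b = np.linalg.norm(C - A), np.linalg.norm(C - B)
p1, p2 = circle_intersections(A, r_a, B, r_b)
mirror = p1 if np.linalg.norm(p2 - C) < 1e-9 else p2   # the intersection that is not C

# Analytic reflection of C across line AB, for comparison.
u = (B - A) / np.linalg.norm(B - A)
reflected = 2 * (A + np.dot(C - A, u) * u) - C
print(mirror, reflected)   # the two points coincide
```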
Extending the length of a line segment
Given a line segment find a point on the line such that is the midpoint of line segment .
Construct point as the intersection of circles and . (∆ABD is an equilateral triangle.)
Construct point as the intersection of circles and . (∆DBE is an equilateral triangle.)
Finally, construct point as the intersection of circles and . (∆EBC is an equilateral triangle, and the three angles at show that are collinear.)
This construction can be repeated as often as necessary to find a point so that the length of line segment = ⋅ length of line segment for any positive integer .
Inversion in a circle
Given a circle , for some radius (in black) and a point construct the point that is the inverse of in the circle. Naturally there is no inversion for a point .
Draw a circle (in red).
Assume that the red circle intersects the black circle at and
if the circles do not intersect in two points see below for an alternative construction.
if the circles intersect in only one point, , it is possible to invert simply by doubling the length of (quadrupling the length of ).
Reflect the circle center across the line :
Construct two new circles and (in light blue).
The light blue circles intersect at and at another point .
Point is the desired inverse of in the black circle.
Point is such that the radius of is to as is to the radius; or .
In the event that the above construction fails (that is, the red circle and the black circle do not intersect in two points), find a point on the line so that the length of line segment is a positive integral multiple, say , of the length of and is greater than (this is possible by Archimedes' axiom). Find the inverse of in circle as above (the red and black circles must now intersect in two points). The point is now obtained by extending so that = .
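For reference, the standard defining relation of inversion (stated here in generic notation, independent of the point labels used in the construction): the inverse P' of a point P distinct from the centre O, with respect to the circle of centre O and radius r, lies on the ray OP and satisfies

```latex
|OP| \cdot |OP'| \;=\; r^{2},
\qquad\text{equivalently}\qquad
P' \;=\; O + \frac{r^{2}}{\lVert P - O \rVert^{2}}\,(P - O).
```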
Determining the center of a circle through three points
Given three non-collinear points , and , find the center of the circle they determine.
Construct point , the inverse of in the circle .
Reflect in the line to the point .
is the inverse of in the circle .
Intersection of two non-parallel lines (construction #3)
Given non-parallel lines and , find their point of intersection, .
Select circle of arbitrary radius whose center does not lie on either line.
Invert points and in circle to points and respectively.
The line is inverted to the circle passing through , and . Find the center of this circle.
Invert points and in circle to points and respectively.
The line is inverted to the circle passing through , and . Find the center of this circle.
Let be the intersection of circles and .
is the inverse of in the circle .
Intersection of a line and a circle (construction #4)
The compass-only construction of the intersection points of a line and a circle breaks into two cases depending upon whether the center of the circle is or is not collinear with the line.
Circle center is not collinear with the line
Assume that center of the circle does not lie on the line.
Given a circle (in black) and a line . We wish to construct the points of intersection, and , between them (if they exist).
Construct the point , which is the reflection of point across line . (See above.)
Under the assumption of this case, .
Construct a circle (in red). (See above, compass equivalence.)
The intersections of circle and the new red circle are points and .
If the two circles are (externally) tangential then .
Internal tangency is not possible.
If the two circles do not intersect then neither does the circle with the line.
Points and are the intersection points of circle and the line .
If then the line is tangential to the circle .
An alternate construction, using circle inversion can also be given.
Given a circle and a line . We wish to construct the points of intersection, and , between them (if they exist).
Invert points and in circle to points and respectively.
Under the assumption of this case, points , , and are not collinear.
Find the center of the circle passing through points , , and .
Construct circle , which represents the inversion of the line into circle .
and are the intersection points of circles and .
If the two circles are (internally) tangential then , and the line is also tangential.
Circle center is collinear with the line
Given the circle whose center lies on the line , find the points and , the intersection points of the circle and the line.
Construct point as the other intersection of circles and .
Construct point as the intersection of circles and . ( is the fourth vertex of parallelogram .)
Construct point as the intersection of circles and . ( is the fourth vertex of parallelogram .)
Construct point as an intersection of circles and . ( lies on .)
Points and are the intersections of circles and .
Thus it has been shown that all of the basic construction one can perform with a straightedge and compass can be done with a compass alone, provided that it is understood that a line cannot be literally drawn but merely defined by two points.
Other types of restricted construction
Restrictions involving the compass
Renaissance mathematicians Lodovico Ferrari, Gerolamo Cardano and Niccolò Fontana Tartaglia and others were able to show in the 16th century that any ruler-and-compass construction could be accomplished with a straightedge and a fixed-width compass (i.e. a rusty compass).
The compass equivalency theorem shows that in all the constructions mentioned above, the familiar modern compass with its fixable aperture, which can be used to transfer distances, may be replaced with a "collapsible compass", a compass that collapses whenever it is lifted from a page, so that it may not be directly used to transfer distances. Indeed, Euclid's original constructions use a collapsible compass. It is possible to translate any circle in the plane with a collapsing compass using no more than three additional applications of the compass over that of a rigid compass.
A variation on the compass, a neusis tool which does not actually exist except as an abstraction, has also been studied. Known as the cyclos, the device draws circles similarly to the compass, but does so not by defining a radius or providing a center, but by two points defining a diameter, or by three non-collinear points defining the arc. In either case, a single application of the tool is used, by definition, to draw a complete circle. The cyclos tool has been shown to be equivalent to a compass.
Restrictions excluding the compass
Motivated by Mascheroni's result, in 1822 Jean Victor Poncelet conjectured a variation on the same theme. His work paved the way for the field of projective geometry, wherein he proposed that any construction possible by straightedge and compass could be done with straightedge alone. However, the one stipulation is that no less than a single circle with its center identified must be provided. This statement, now known as the Poncelet–Steiner theorem, was proved by Jakob Steiner eleven years later.
A proof later provided in 1904 by Francesco Severi relaxes the requirement that one full circle be provided, and shows that any small arc of the circle, so long as the center is still provided, is still sufficient.
Additionally, the center itself may be omitted instead of portions of the arc, if it is substituted for something else sufficient, such as a second concentric circle, a second intersecting circle, or a third circle in the plane. Alternatively, a second circle which is neither intersecting nor concentric is sufficient, provided that a point on either the centerline through them or the radical axis between them is given, or two parallel lines exist in the plane. A single circle without its center can also be sufficient under the right circumstances. Other unique conditions may exist.
See also
Napoleon's problem
Geometrography
Inversive geometry
Projective geometry
Notes
References
Further reading
External links
Construction with the Compass Only
Compass and straightedge constructions
Theorems in plane geometry | Mohr–Mascheroni theorem | [
"Mathematics"
] | 2,952 | [
"Euclidean plane geometry",
"Theorems in plane geometry",
"Theorems in geometry",
"Straightedge and compass constructions",
"Planes (geometry)"
] |
6,061,729 | https://en.wikipedia.org/wiki/Fluid%E2%80%93structure%20interaction | Fluid–structure interaction (FSI) is the interaction of some movable or deformable structure with an internal or surrounding fluid flow. Fluid–structure interactions can be stable or oscillatory. In oscillatory interactions, the strain induced in the solid structure causes it to move such that the source of strain is reduced, and the structure returns to its former state only for the process to repeat.
Examples
Fluid–structure interactions are a crucial consideration in the design of many engineering systems, e.g. automobiles, aircraft, spacecraft, engines and bridges. Failing to consider the effects of oscillatory interactions can be catastrophic, especially in structures comprising materials susceptible to fatigue. The first Tacoma Narrows Bridge, which collapsed in 1940, is probably one of the most infamous examples of large-scale failure. Aircraft wings and turbine blades can break due to FSI oscillations. A reed produces sound because the system of equations governing its dynamics has oscillatory solutions. The dynamics of reed valves used in two-stroke engines and compressors are governed by FSI. The act of "blowing a raspberry" is another such example. The interaction between tribological machine components, such as bearings and gears, and lubricant is also an example of FSI. The lubricant flows between the contacting solid components and causes elastic deformation in them during this process. Fluid–structure interactions also occur in moving containers, where liquid oscillations due to the container motion impose substantial forces and moments on the container structure that affect the stability of the container transport system in a highly adverse manner. Another prominent example is the start-up of a rocket engine, e.g. the Space Shuttle main engine (SSME), where FSI can lead to considerable unsteady side loads on the nozzle structure. In addition to pressure-driven effects, FSI can also have a large influence on surface temperatures on supersonic and hypersonic vehicles.
Fluid–structure interactions also play a major role in appropriate modeling of blood flow. Blood vessels act as compliant tubes that change size dynamically when there are changes to blood pressure and velocity of flow. Failure to take this property of blood vessels into account can lead to a significant overestimation of the resulting wall shear stress (WSS). This effect is especially important to take into account when analyzing aneurysms. It has become common practice to use computational fluid dynamics to analyze patient-specific models. The neck of an aneurysm is the most susceptible to changes in WSS. If the aneurysmal wall becomes weak enough, it is at risk of rupturing when WSS becomes too high. FSI models yield an overall lower WSS compared to non-compliant models. This is significant because incorrect modeling of aneurysms could lead to doctors deciding to perform invasive surgery on patients who were not at a high risk of rupture. While FSI offers better analysis, it comes at the cost of greatly increased computational time. Non-compliant models have a computational time of a few hours, while FSI models could take up to 7 days to finish running. This makes FSI models most useful for preventative measures for aneurysms caught early, but unusable for emergency situations where the aneurysm may have already ruptured.
Analysis
Fluid–structure interaction problems and multiphysics problems in general are often too complex to solve analytically and so they have to be analyzed by means of experiments or numerical simulation. Research in the fields of computational fluid dynamics and computational structural dynamics is still ongoing but the maturity of these fields enables numerical simulation of fluid-structure interaction. Two main approaches exist for the simulation of fluid–structure interaction problems:
Monolithic approach: the equations governing the flow and the displacement of the structure are solved simultaneously, with a single solver
Partitioned approach: the equations governing the flow and the displacement of the structure are solved separately, with two distinct solvers
The monolithic approach requires a code developed for this particular combination of physical problems whereas the partitioned approach preserves software modularity because an existing flow solver and structural solver are coupled. Moreover, the partitioned approach facilitates solution of the flow equations and the structural equations with different, possibly more efficient techniques which have been developed specifically for either flow equations or structural equations. On the other hand, development of stable and accurate coupling algorithm is required in partitioned simulations. In conclusion, the partitioned approach allows reusing existing software which is an attractive advantage. However, stability of the coupling method needs to be taken into consideration. This is especially difficult, if the mass of the moving structure is small in comparison to the mass of fluid which is displaced by the structure movement.
In addition, the treatment of meshes introduces other classifications of FSI analysis. For example, one can classify them as conforming mesh methods and non-conforming mesh methods. Other classifications can be mesh-based methods and meshless methods.
Numerical simulation
The Newton–Raphson method or a different fixed-point iteration can be used to solve FSI problems. Methods based on Newton–Raphson iteration are used in both the monolithic and the partitioned approach. These methods solve the nonlinear flow equations and the structural equations in the entire fluid and solid domain with the Newton–Raphson method. The system of linear equations within the Newton–Raphson iteration can be solved without knowledge of the Jacobian with a matrix-free iterative method, using a finite difference approximation of the Jacobian-vector product.
Whereas Newton–Raphson methods solve the flow and structural problem for the state in the entire fluid and solid domain, it is also possible to reformulate an FSI problem as a system with only the degrees of freedom in the interface’s position as unknowns. This domain decomposition condenses the error of the FSI problem into a subspace related to the interface. The FSI problem can hence be written as either a root finding problem or a fixed point problem, with the interface’s position as unknowns.
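In symbols, one standard way of writing this interface reformulation (the operator names are chosen here for illustration, not taken from the text) is the following: let F denote the flow solver, mapping an interface displacement d to the load it exerts on the structure, and S the structural solver, mapping that load back to an interface displacement. The coupled problem restricted to the interface then reads:

```latex
% Fixed-point form and equivalent root-finding form of the interface problem.
d \;=\; \mathcal{S}\bigl(\mathcal{F}(d)\bigr)
\qquad\Longleftrightarrow\qquad
R(d) \;:=\; \mathcal{S}\bigl(\mathcal{F}(d)\bigr) - d \;=\; 0 .
```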
Interface Newton–Raphson methods solve this root-finding problem with Newton–Raphson iterations, e.g. with an approximation of the Jacobian from a linear reduced-physics model. The interface quasi-Newton method with approximation for the inverse of the Jacobian from a least-squares model couples a black-box flow solver and structural solver by means of the information that has been gathered during the coupling iterations. This technique is based on the interface block quasi-Newton technique with an approximation for the Jacobians from least-squares models which reformulates the FSI problem as a system of equations with both the interface’s position and the stress distribution on the interface as unknowns. This system is solved with block quasi-Newton iterations of the Gauss–Seidel type and the Jacobians of the flow solver and structural solver are approximated by means of least-squares models.
The fixed-point problem can be solved with fixed-point iterations, also called (block) Gauss–Seidel iterations, which means that the flow problem and structural problem are solved successively until the change is smaller than the convergence criterion. However, the iterations converge slowly if at all, especially when the interaction between the fluid and the structure is strong due to a high fluid/structure density ratio or the incompressibility of the fluid. The convergence of the fixed point iterations can be stabilized and accelerated by Aitken relaxation and steepest descent relaxation, which adapt the relaxation factor in each iteration based on the previous iterations.
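As an illustration of the fixed-point (Gauss–Seidel) coupling with Aitken relaxation described above, the sketch below couples two toy solvers through a scalar interface state. Here solve_fluid and solve_structure are hypothetical stand-ins for real flow and structural solvers, and the linear models inside them are arbitrary assumptions chosen only so that the loop has a well-defined fixed point.

```python
# Toy single-physics "solvers": in a real partitioned FSI code these would be
# calls into a flow solver and a structural solver.
def solve_fluid(interface_disp):
    return 0.8 * interface_disp + 1.0      # interface load from interface displacement (toy model)

def solve_structure(interface_load):
    return 0.5 * interface_load            # interface displacement from interface load (toy model)

def coupled_step(x0, tol=1e-10, max_iter=100):
    """Gauss-Seidel fixed-point iteration on the interface state with Aitken relaxation."""
    x, omega, r_old = x0, 0.5, None        # initial state, relaxation factor, previous residual
    for _ in range(max_iter):
        x_tilde = solve_structure(solve_fluid(x))   # one Gauss-Seidel sweep: fluid then structure
        r = x_tilde - x                             # interface residual
        if abs(r) < tol:
            return x_tilde
        if r_old is not None:
            omega = -omega * r_old / (r - r_old)    # scalar Aitken update of the relaxation factor
        x, r_old = x + omega * r, r                 # relaxed update of the interface state
    return x

print(coupled_step(0.0))   # fixed point of x = 0.5*(0.8*x + 1), i.e. x = 5/6, about 0.8333
```

In a real code the interface state is a vector of nodal displacements and loads, and the Aitken factor is computed from inner products of the residual vectors, but the structure of the loop is the same.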
If the interaction between the fluid and the structure is weak, only one fixed-point iteration is required within each time step. These so-called staggered or loosely coupled methods do not enforce the equilibrium on the fluid–structure interface within a time step but they are suitable for the simulation of aeroelasticity with a heavy and rather stiff structure.
Several studies have analyzed the stability of partitioned algorithms for the simulation of fluid-structure interaction.
See also
Immersed boundary method
Smoothed particle hydrodynamics
Stochastic Eulerian Lagrangian method
Computational fluid dynamics
Fluid mechanics, fluid dynamics
Structural mechanics, structural dynamics
CFD Online page about FSI
NASA page about a tail flutter test
YouTube movie about flutter of glider wings
Hydroelasticity
Slosh dynamics
Open source codes
solids4Foam, a toolbox for OpenFOAM with capabilities for solid mechanics and fluid solid interactions
oomph-lib
Elmer FSI page
CBC.solve Biomedical Solvers
preCICE Coupling Library
SPHinXsys multi-physics library It provides C++ APIs for physical accurate simulation and aims to model coupled industrial dynamic systems including fluid, solid, multi-body dynamics and beyond with SPH (smoothed particle hydrodynamics), a meshless computational method using particle discretization.
Academic Codes
Stochastic Immersed Boundary Methods in 3D, P. Atzberger, UCSB
Immersed Boundary Method for Adaptive Meshes in 3D, B. Griffith, NYU.
Immersed Boundary Method for Uniform Meshes in 2D, A. Fogelson, Utah
IFLS, IFL, TU Braunschweig
Commercial Codes
Abaqus Multiphysics Coupling
AcuSolve FSI applications
ADINA FSI homepage
Ansys' FSI homepage
Altair RADIOSS
Autodesk Simulation CFD
Simcenter STAR-CCM+ from Siemens Digital Industries Software
CoLyX - FSI and mesh-morphing from EVEN - Evolutionary Engineering AG
Fluidyn-MP FSI Multiphysics Coupling
COMSOL FSI homepage
MpCCI homepage
MSC Software MD Nastran
MSC Software Dytran
FINE/Oofelie FSI: Fully integrated and strongly coupled for better convergence
LS-DYNA Home Page
Fluidyn-MP FSI: Fluid-Structure Interaction
CompassFEM Tdyn
CompassFEM SeaFEM
Cradle SC/Tetra CFD Software
PARACHUTES FSI HomePage
References
Further reading
Modarres-Sadeghi, Yahya: Introduction to Fluid-Structure Interactions, 2021, Springer Nature, 978-3-030-85882-7, http://dx.doi.org/10.1007/978-3-030-85884-1
Introduces the subject of Fluid-Structure Interactions (FSI) to students and professionals and discusses the major ideas in FSI with the goal of providing the fundamental understanding to the readers who possess limited or no understanding of the subject.
Fluid mechanics
Fluid dynamics | Fluid–structure interaction | [
"Chemistry",
"Engineering"
] | 2,156 | [
"Chemical engineering",
"Civil engineering",
"Piping",
"Fluid mechanics",
"Fluid dynamics"
] |
6,063,848 | https://en.wikipedia.org/wiki/Sandbox%20%28software%20development%29 | A sandbox is a testing environment that isolates untested code changes and outright experimentation from the production environment or repository in the context of software development, including web development, automation, revision control, configuration management (see also change management), and patch management.
Sandboxing protects "live" servers and their data, vetted source code distributions, and other collections of code, data and/or content, proprietary or public, from changes that could be damaging to a mission-critical system or which could simply be difficult to revert, regardless of the intent of the author of those changes. Sandboxes replicate at least the minimal functionality needed to accurately test the programs or other code under development (e.g. usage of the same environment variables as, or access to an identical database to that used by, the stable prior implementation intended to be modified; there are many other possibilities, as the specific functionality needs vary widely with the nature of the code and the application[s] for which it is intended).
The concept of sandboxing is built into revision control software such as Git, CVS and Subversion (SVN), in which developers "check out" a copy of the source code tree, or a branch thereof, to examine and work on. After the developer has fully tested the code changes in their own sandbox, the changes would be checked back into and merged with the repository and thereby made available to other developers or end users of the software.
By further analogy, the term "sandbox" can also be applied in computing and networking to other temporary or indefinite isolation areas, such as security sandboxes and search engine sandboxes (both of which have highly specific meanings), that prevent incoming data from affecting a "live" system (or aspects thereof) unless/until defined requirements or criteria have been met.
Sandboxing (see also ' soft launching') is often considered a best practice when making any changes to a system, regardless of whether that change is considered 'development', a modification of configuration state, or updating the system.
In web services
The term sandbox is commonly used for the development of web services to refer to a mirrored production environment for use by external developers. Typically, a third-party developer will develop and create an application that will use a web service from the sandbox, which is used to allow a third-party team to validate their code before migrating it to the production environment. Microsoft, Google, Amazon, Salesforce, PayPal, eBay, and Yahoo, among others, provide such services.
In wikis
Wikis also typically employ a shared sandbox model of testing, though it is intended principally for learning and outright experimentation with features rather than for testing of alterations to existing content (the wiki analog of source code). An edit preview mode is usually used instead to test specific changes made to the texts or layout of wiki pages.
See also
Comparison of online source code playgrounds
OS-level virtualization
Pastebin
Sandbox (computer security)
Sandbox effect (search engines)
Sandbox (video game editor)
Sandbox game
References
Virtualization
Sdp | Sandbox (software development) | [
"Engineering"
] | 639 | [
"Computer networks engineering",
"Virtualization"
] |
10,303,676 | https://en.wikipedia.org/wiki/Oxoguanine%20glycosylase | 8-Oxoguanine glycosylase, also known as OGG1, is a DNA glycosylase enzyme that, in humans, is encoded by the OGG1 gene. It is involved in base excision repair. It is found in bacterial, archaeal and eukaryotic species.
Function
OGG1 is the primary enzyme responsible for the excision of 8-oxoguanine (8-oxoG), a mutagenic base byproduct that occurs as a result of exposure to reactive oxygen species (ROS). OGG1 is a bifunctional glycosylase, as it is able to both cleave the glycosidic bond of the mutagenic lesion and cause a strand break in the DNA backbone. Alternative splicing of the C-terminal region of this gene classifies splice variants into two major groups, type 1 and type 2, depending on the last exon of the sequence. Type 1 alternative splice variants end with exon 7 and type 2 end with exon 8. One set of spliced forms are designated 1a, 1b, 2a to 2e. All variants have the N-terminal region in common. Many alternative splice variants for this gene have been described, but the full-length nature for every variant has not been determined. In eukaryotes, the N-terminus of this gene contains a mitochondrial targeting signal, essential for mitochondrial localization. However, OGG1-1a also has a nuclear location signal at its C-terminal end that suppresses mitochondrial targeting and causes OGG1-1a to localize to the nucleus. The main form of OGG1 that localizes to the mitochondria is OGG1-2a. A conserved N-terminal domain contributes residues to the 8-oxoguanine binding pocket. This domain is organised into a single copy of a TBP-like fold.
Despite the presumed importance of this enzyme, mice lacking Ogg1 have been generated and found to have a normal lifespan, and Ogg1 knockout mice have a higher probability of developing cancer, whereas MTH1 gene disruption concomitantly suppresses lung cancer development in Ogg1-/- mice. Mice lacking Ogg1 have been shown to be prone to increased body weight and obesity, as well as high-fat-diet-induced insulin resistance. There is some controversy as to whether deletion of Ogg1 actually leads to increased 8-Oxo-2'-deoxyguanosine (8-oxo-dG) levels: high performance liquid chromatography with electrochemical detection (HPLC-ECD) assay suggests the deletion can lead to an up to 6-fold higher level of 8-oxo-dG in nuclear DNA and a 20-fold higher level in mitochondrial DNA, whereas DNA-fapy glycosylase assay indicates no change in 8-oxo-dG levels.
Increased oxidant stress temporarily inactivates OGG1, which recruits transcription factors such as NFkB and thereby activates expression of inflammatory genes.
OGG1 deficiency and increased 8-oxo-dG in mice
Mice without a functional OGG1 gene have about a 5-fold increased level of 8-oxo-dG in their livers compared to mice with wild-type OGG1. Mice defective in OGG1 also have an increased risk for cancer. Kunisada et al. irradiated mice without a functional OGG1 gene (OGG1 knock-out mice) and wild-type mice three times a week for 40 weeks with UVB light at a relatively low dose (not enough to cause skin redness). Both types of mice had high levels of 8-oxo-dG in their epidermal cells three hours after irradiation. After 24 hours, over half of the initial amount of 8-oxo-dG was absent from the epidermal cells of the wild-type mice, but 8-oxo-dG remained elevated in the epidermal cells of the OGG1 knock-out mice. The irradiated OGG1 knock-out mice went on to develop more than twice the incidence of skin tumors compared to irradiated wild-type mice, and the rate of malignancy within the tumors was higher in the OGG1 knock-out mice (73%) than in the wild-type mice (50%).
As reviewed by Valavanidis et al., increased levels of 8-oxo-dG in a tissue can serve as a biomarker of oxidative stress. They also noted that increased levels of 8-oxo-dG are frequently found during carcinogenesis.
In the figure showing examples of mouse colonic epithelium, the colonic epithelium from a mouse on a normal diet was found to have a low level of 8-oxo-dG in its colonic crypts (panel A). However, a mouse likely undergoing colonic tumorigenesis (due to deoxycholate added to its diet) was found to have a high level of 8-oxo-dG in its colonic epithelium (panel B). Deoxycholate increases intracellular production of reactive oxygen resulting in increased oxidative stress, and this can lead to tumorigenesis and carcinogenesis.
Epigenetic control
In a breast cancer study, the methylation level of the OGG1 promoter was found to be negatively correlated with expression level of OGG1 messenger RNA. This means that hypermethylation was associated with low expression of OGG1 and hypomethylation was correlated with over-expression of OGG1. Thus, OGG1 expression is under epigenetic control. Breast cancers with methylation levels of the OGG1 promoter that were more than two standard deviations either above or below the normal were each associated with reduced patient survival.
In cancers
OGG1 is the primary enzyme responsible for the excision of 8-oxo-dG. Even when OGG1 expression is normal, the presence of 8-oxo-dG is mutagenic, since OGG1 is not 100% effective. Yasui et al. examined the fate of 8-oxo-dG when this oxidized derivative of deoxyguanosine was inserted into a specific gene in 800 cells in culture. After replication of the cells, 8-oxo-dG was restored to G in 86% of the clones, probably reflecting accurate OGG1 base excision repair or translesion synthesis without mutation. G:C to T:A transversions occurred in 5.9% of the clones, single base deletions in 2.1% and G:C to C:G transversions in 1.2%. Together, these mutations were the most common, totalling 9.2% of the 14% of mutations generated at the site of the 8-oxo-dG insertion. Among the other mutations in the 800 clones analyzed, there were also 3 larger deletions, of sizes 6, 33 and 135 base pairs. Thus 8-oxo-dG can directly cause mutations, some of which may contribute to carcinogenesis.
If OGG1 expression is reduced in cells, increased mutagenesis, and therefore increased carcinogenesis, would be expected. The table below lists some cancers associated with reduced expression of OGG1.
OGG1 or OGG activity in blood, and cancer
OGG1 methylation levels in blood cells were measured in a prospective study of 582 US military veterans, median age 72, and followed for 13 years. High OGG1 methylation at a particular promoter region was associated with increased risk for any cancer, and in particular for risk of prostate cancer.
Enzymatic activity excising 8-oxoguanine from DNA (OGG activity) was reduced in peripheral blood mononuclear cells (PBMCs), and in paired lung tissue, from patients with non–small cell lung cancer. OGG activity was also reduced in PBMCs of patients with head and neck squamous cell carcinoma (HNSCC).
An important effect on cancer is expected to derive from the drastic enhancement of gene expression for certain immunity genes, which OGG1 regulates.
Interactions
Oxoguanine glycosylase has been shown to interact with XRCC1 and PKC alpha.
Pathology
OGG1 may be associated with cancer risk in BRCA1 and BRCA2 mutation carriers.
See also
MUTYH
NTHL1
NEIL1
References
Further reading
External links
Protein families | Oxoguanine glycosylase | [
"Biology"
] | 1,812 | [
"Protein families",
"Protein classification"
] |
10,306,384 | https://en.wikipedia.org/wiki/Replication%20%28microscopy%29 | Replication, in metallography, is the use of thin plastic films to nondestructively duplicate the microstructure of a component. The film is then examined at high magnifications.
Replication is a method of copying the topography of a surface by casting or impressing material onto the surface. It is the commonly used technique to duplicate surfaces that are inaccessible in metrology to other forms of nondestructive testing. Replicas can be used in biology as well:
The replicas may be imaged in the light microscope or coated with heavy metals, the replicating film melted away, and the heavy metal replica imaged in a Transmission Electron Microscope (TEM).
The same materials, cellulose acetate films, are used for creating replicas of biological materials such as bacteria.
Metallurgy
Nondestructive testing
Field Metallurgical Replication (FMR), in field metallography, is the use of metallurgical preparation on surfaces in the field, by polishing to a mirror image, along with application of acetate or other thin plastic films designed to nondestructively duplicate the microstructure of a part or structure in-situ. The FMR replica is then transferred to a glass slide for examination by optical microscopy, electron microscopy, and other methods. | Replication (microscopy) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 268 | [
"Metallurgy",
"Materials science",
"Nondestructive testing",
"Materials testing",
"nan"
] |
10,307,157 | https://en.wikipedia.org/wiki/Steel%20casing%20pipe | Steel casing pipe, also known as encasement pipe, is most commonly used in underground construction to protect utility lines of various types from getting damaged. Such damage might occur due to the elements of nature or human activity.
Steel casing pipe is used in different types of horizontal underground boring, where the pipe is jacked into an augered hole in segments and then connected together by welding or by threaded and coupled ends, or other proprietary pipe connectors such as interference-fit interlocking push-on joints. The steel casing pipe can also be set up and welded into a "ribbon" and then directionally pulled through a previously drilled hole under highways, railroads, lakes and rivers.
Uses
Steel casing pipe protects one or many of various types of utilities such as water mains, gas pipes, electrical power cables, fiber-optic cables, etc. The utility lines that are run through the steel casing pipe are most commonly mounted and spaced within the steel casing pipe by using "casing spacers" that are made of various materials, including stainless steel or carbon steel and the more economical plastic versions. The ends of a steel casing pipe "run" are normally sealed with "casing end seals", which can be of the "pull-on" or "wrap-around" rubber varieties. Steel casing pipe is also used in the construction of deep foundations.
Specification
Steel casing pipe generally has no specific specifications, other than the need for the material to be extremely straight and round. In some areas A.S.T.M. specifications may be required by project engineers. The specification most commonly called for is A.S.T.M. 139 Grade B. This specification gives parameters for minimum yield and tensile strength of the steel pipe being used for casing, and tolerances of straightness and concentricity.
Steel casing pipe is often specified as ASTM A-252 which is a structural grade material that does not require hydrostatic testing and the inspection requirements are not stringent and it usually costs less than other grades such as A-53, A-139 or API 5L. Used natural gas line pipe is also used as casing on many projects because it is often reclaimed in very good condition and can offer a significant cost savings when compared to new steel pipe. Used pipe is most likely to not have any testing data associated with it and is generally used when the only required specification is a given diameter and wall thickness of steel casing pipe.
See also
Microtunneling
Trenchless technology
References
Piping | Steel casing pipe | [
"Chemistry",
"Engineering"
] | 519 | [
"Piping",
"Chemical engineering",
"Mechanical engineering",
"Building engineering"
] |
10,311,578 | https://en.wikipedia.org/wiki/Davisson%E2%80%93Germer%20Prize | The Davisson–Germer Prize in Atomic or Surface Physics is an annual prize that has been awarded by the American Physical Society since 1965. The recipient is chosen for "outstanding work in atomic physics or surface physics". The prize is named after Clinton Davisson and Lester Germer, who first measured electron diffraction, and as of 2007 it is valued at $5,000.
Recipients
2023: Feng Liu
2022: David S. Weiss
2021: Michael F. Crommie
2020: Klaas Bergmann
2019: Randall M. Feenstra
2018:
2017: and Stephen Kevan
2016: Randall G. Hulet
2015: and
2014: Nora Berrah
2013: Geraldine L. Richmond
2012: Jean Dalibard
2011: Joachim Stohr
2010: Chris H. Greene
2009: and Krishnan Raghavachari
2008:
2007:
2006:
2005: Ernst G. Bauer
2004:
2003: Rudolf M. Tromp
2002: Gerald Gabrielse
2001: Donald M. Eigler
2000: William Happer
1999: Steven Gwon Sheng Louie
1998: Sheldon Datz
1997: Jerry D. Tersoff
1996:
1995: Max G. Lagally
1994: Carl Weiman [sic]
1993:
1992:
1991:
1990: David Wineland
1989:
1988: John L. Hall
1987:
1986: Daniel Kleppner
1985: J. Gregory Dash
1984: and
1983: E. W. Plummer
1982: Llewellyn H. Thomas
1981: Robert Gomer
1980: Alexander Dalgarno
1979: and Donald R. Hamann
1978: Vernon Hughes
1977: Walter Kohn and
1976: Ugo Fano
1975: and Homer D. Hagstrum
1974: Norman Ramsey
1972: Erwin Wilhelm Müller
1970: Hans Dehmelt
1967: Horace Richard Crane
1965:
Source:
See also
List of physics awards
References
Awards of the American Physical Society
Atomic physics
Surface science | Davisson–Germer Prize | [
"Physics",
"Chemistry",
"Materials_science"
] | 390 | [
"Quantum mechanics",
"Surface science",
"Atomic physics",
" molecular",
"Condensed matter physics",
"Atomic",
" and optical physics"
] |