Dataset schema (column: type, observed range):
id: int64, 39 to 79M
url: string, length 32 to 168
text: string, length 7 to 145k
source: string, length 2 to 105
categories: list, length 1 to 6
token_count: int64, 3 to 32.2k
subcategories: list, length 0 to 27
9,036,004
https://en.wikipedia.org/wiki/Molindone
Molindone, sold under the brand name Moban, is an antipsychotic medication which is used in the United States in the treatment of schizophrenia. It is taken by mouth. Side effects of molindone include extrapyramidal symptoms and tardive dyskinesia, among others. Molindone is thought to work by blocking the effects of dopamine in the brain, leading to diminished symptoms of psychosis. The drug is sometimes described as a typical antipsychotic and sometimes as an atypical antipsychotic. Chemically, molindone is an indole and is structurally distinct from many other antipsychotics. Molindone was first described by 1966 and was introduced for medical use in 1974. It remains marketed only in the United States. The drug has been repurposed and is being developed for potential treatment of aggression in children and adolescents with attention deficit hyperactivity disorder (ADHD). Medical uses Molindone is used in the treatment of schizophrenia. Available forms Molindone is available in the form of 5, 10, 25, and 50 mg oral tablets. Adverse effects The side effect profile of molindone is similar to that of other typical antipsychotics. This includes extrapyramidal symptoms and tardive dyskinesia. Unlike most antipsychotics, however, molindone use is associated with decreased appetite and weight loss rather than with weight gain. Molindone may have less potential for sedation than certain other antipsychotics owing to its lack of antihistamine activity. It has little or no anticholinergic activity and may be less likely than certain other antipsychotics to cause orthostatic hypotension. Pharmacology Pharmacodynamics Molindone is known to act as a potent antagonist of the dopamine D2 receptor (Ki = 84–140 nM) and of the serotonin 5-HT2B receptor (Ki = 410 nM). It is far less potent as an antagonist of the dopamine D1, D3, and D5 receptors (Ki = 3,200–8,300 nM) and of the serotonin 5-HT2A receptor (Ki = 14,000 nM). The drug does not significantly bind to or inhibit the α-adrenergic receptors, nor does it affect various other receptors, such as the serotonin 5-HT1A, 5-HT2C, 5-HT6, and 5-HT7 receptors. Likewise, molindone has essentially no affinity for the muscarinic acetylcholine receptors and has very little affinity for the histamine H1 receptor or the α1-adrenergic receptor. However, it has been found to have intermediate affinity for the α2-adrenergic receptor. The metabolites of molindone appear to be largely inactive in vitro. The preceding findings suggest that molindone is pharmacologically distinct from most atypical antipsychotics, which act as potent antagonists of both the D2 and 5-HT2A receptors. Additional binding data on molindone are also available and in some cases have found contrasting results relative to the above findings, for instance high affinity for the dopamine D3 receptor. Molindone is described as an antipsychotic, sedative, and major tranquilizer. In animals, it reduces spontaneous locomotor activity, inhibits conditioned avoidance responses, produces catalepsy and hypothermia, and limits aggression in monkeys. Like other antipsychotics, molindone antagonizes the effects of the dopamine-releasing agent amphetamine and the dopamine receptor agonist apomorphine. In contrast to many antipsychotics, however, molindone shows antidepressant-like effects in animals, for example reversing ptosis induced by the dopamine-depleting agent tetrabenazine, potentiating 5-hydroxytryptophan (5-HTP)-induced tremors, and potentiating certain effects of levodopa (L-DOPA). 
It shows little anticholinergic activity in animals, and its lack of histamine H1 receptor antagonism suggests less potential for sedation and weight gain than certain other antipsychotics. The drug shows antiemetic effects in animals. Molindone has been reported to inhibit monoamine oxidase both in vitro and in vivo. However, very high concentrations (~100,000 nM) and high doses (10 and 40 mg/kg) are required for monoamine oxidase inhibition. Its inhibition of monoamine oxidase is irreversible and is selective for monoamine oxidase A (MAO-A). The drug is much more potent in inhibiting monoamine oxidase in vivo than in vitro, suggesting that an active metabolite may be responsible for its monoamine oxidase inhibition. The MAO-A inhibition of molindone may be responsible for its antidepressant-like effects in animals. It is unclear whether the monoamine oxidase inhibition of molindone observed in preclinical research occurs therapeutically in humans or is clinically significant. It has no affinity for the muscarinic acetylcholine receptors. Pharmacokinetics The elimination half-life of molindone is approximately 2 hours. This half-life is much shorter than that of most other antipsychotics. Concentrations of molindone are negligible 12 hours after the last dose, even when it is used at high doses; 12 hours corresponds to six half-lives, by which point less than 2% of the peak concentration remains. Lithium has been found to prolong the half-life of molindone by at least 4-fold. In spite of the preceding findings, the duration of action of molindone is 24 to 36 hours. It has been suggested that the antipsychotic effects of molindone may be mediated by active metabolites rather than by molindone itself. Chemistry Molindone is an indole derivative, or dihydroindole, and is structurally distinct from many other antipsychotics. Analogues Some structurally related compounds include L-741,626, losindole, and piquindone. Other indole-containing antipsychotics include ciclindole, flucindole, roxindole, sertindole, and tepirindole. Synthesis Condensation of oximinoketone 2 (from nitrosation of 3-pentanone) with cyclohexane-1,3-dione (1) in the presence of zinc and acetic acid leads directly to the partly reduced indole derivative 6. The transformation may be rationalized by assuming, as the first step, reduction of 2 to the corresponding α-aminoketone. Conjugate addition of the amine to 1 followed by elimination of hydroxide (as water) would give ene-aminoketone 3. This enamine may be assumed to be in tautomeric equilibrium with imine 4. Aldol condensation of the side-chain carbonyl group with the doubly activated ring methylene group would then result in cyclization to pyrrole 5; simple tautomeric transformation would then give the observed product. Mannich reaction of 6 with formaldehyde and morpholine gives the tranquilizer molindone (7). History Molindone was first described in the literature by 1966. It was first approved for medical use, to treat schizophrenia, in 1974 in the United States. Society and culture Availability Molindone has been marketed in the United States, Finland, and Hong Kong. In 2000, it was available only in these three countries. By 2017, molindone continued to be marketed only in the United States. The drug was discontinued by its original supplier, Endo Pharmaceuticals, on January 13, 2010. After being produced by Core Pharma from 2015 to 2017 and then discontinued, molindone became available again from Epic Pharma in December 2018. Research Depression and anxiety Molindone has been studied in the treatment of depression and anxiety. 
Some antidepressant and anxiolytic effects have been observed in small and older clinical studies, but findings on effectiveness were mixed. Aggression in children and adolescents Molindone was found to reduce aggressive symptoms, including agitation, hostility, and uncooperativeness, in adults with schizophrenia in the 1970s. Many other antipsychotics have also shown clinical anti-aggressive effects. Subsequently, molindone was found to be potentially effective in the treatment of hospitalized aggressive children with conduct disorder in a 1980s clinical trial comparing it with thioridazine. This study eventually led to molindone being developed, much later, for the treatment of impulsive aggression in youth. Low-dose extended-release molindone (developmental code name SPN-810) is under development for the treatment of impulsive aggression in children and adolescents with attention deficit hyperactivity disorder (ADHD). As of May 2024, it is in phase 3 clinical trials for this indication. Negative effectiveness findings in a phase 3 trial have been reported. The exact mechanism of action of molindone for this indication is unknown, but it has been proposed to be related to dopamine D2 and serotonin 5-HT2B receptor antagonism. References 4-Morpholinyl compounds 5-HT2B antagonists Antipsychotics D2 antagonists Indoles Ketones Monoamine oxidase inhibitors
Molindone
[ "Chemistry" ]
1,997
[ "Ketones", "Functional groups" ]
1,078,637
https://en.wikipedia.org/wiki/Herbrand%E2%80%93Ribet%20theorem
In mathematics, the Herbrand–Ribet theorem is a result on the class group of certain number fields. It is a strengthening of Ernst Kummer's theorem to the effect that the prime p divides the class number of the cyclotomic field of p-th roots of unity if and only if p divides the numerator of the n-th Bernoulli number Bn for some n, 0 < n < p − 1. The Herbrand–Ribet theorem specifies what, in particular, it means when p divides such a Bn. Statement The Galois group Δ of the cyclotomic field of p-th roots of unity for an odd prime p, Q(ζ) with ζ^p = 1, consists of the p − 1 group elements σ_a, where $\sigma_a(\zeta) = \zeta^a$. As a consequence of Fermat's little theorem, in the ring of p-adic integers $\mathbb{Z}_p$ we have p − 1 roots of unity, each of which is congruent mod p to some number in the range 1 to p − 1; we can therefore define a Dirichlet character ω (the Teichmüller character) with values in $\mathbb{Z}_p$ by requiring that for n relatively prime to p, ω(n) be congruent to n modulo p. The p-part of the class group is a $\mathbb{Z}_p$-module (since it is p-primary), hence a module over the group ring $\mathbb{Z}_p[\Delta]$. We now define idempotent elements of the group ring for each n from 1 to p − 1, as $$\varepsilon_n = \frac{1}{p-1}\sum_{a=1}^{p-1} \omega(a)^n \sigma_a^{-1}.$$ It is easy to see that $\varepsilon_n \varepsilon_m = \delta_{mn}\,\varepsilon_n$ and $\sum_{n=1}^{p-1} \varepsilon_n = 1$, where $\delta_{mn}$ is the Kronecker delta. This allows us to break up the p-part of the ideal class group G of Q(ζ) by means of the idempotents; if G is the p-primary part of the ideal class group, then, letting $G_n = \varepsilon_n(G)$, we have $G = \bigoplus_n G_n$. The Herbrand–Ribet theorem states that for odd n, Gn is nontrivial if and only if p divides the Bernoulli number Bp−n. The theorem makes no assertion about even values of n, but there is no known p for which Gn is nontrivial for any even n: triviality for all p would be a consequence of Vandiver's conjecture. Proofs The part saying p divides Bp−n if Gn is not trivial is due to Jacques Herbrand. The converse, that if p divides Bp−n then Gn is not trivial, is due to Kenneth Ribet, and is considerably more difficult. By class field theory, this can only be true if there is an unramified extension of the field of p-th roots of unity by a cyclic extension of degree p which behaves in the specified way under the action of Δ; Ribet proved this by actually constructing such an extension using methods in the theory of modular forms. A more elementary proof of Ribet's converse to Herbrand's theorem, a consequence of the theory of Euler systems, can be found in Washington's book. Generalizations Ribet's methods were developed further by Barry Mazur and Andrew Wiles in order to prove the main conjecture of Iwasawa theory, a corollary of which is a strengthening of the Herbrand–Ribet theorem: the power of p dividing Bp−n is exactly the power of p dividing the order of Gn. See also Iwasawa theory Stickelberger's theorem Kummer–Vandiver conjecture Ankeny–Artin–Chowla congruence, similar for class numbers of real quadratic fields Bernoulli number § The Kummer theorems Notes Cyclotomic fields Theorems in algebraic number theory
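A small computational illustration of the Kummer criterion underlying the theorem (a sketch, not from the article): if an odd prime p divides the numerator of B_k for an even k with 0 < k < p − 1, then by Herbrand–Ribet the eigenspace G_{p−k} is nontrivial. The search can be done with exact rational arithmetic; the sketch below assumes SymPy's bernoulli function, which returns exact rationals (exact but slow for large p).

```python
from sympy import bernoulli

def irregular_indices(p):
    """Even k with 0 < k < p - 1 such that p divides the numerator of B_k.

    By Herbrand-Ribet, each such k corresponds to a nontrivial eigenspace
    G_{p-k} of the p-part of the class group of Q(zeta_p).
    """
    hits = []
    for k in range(2, p - 1, 2):      # odd-index Bernoulli numbers (k > 1) vanish
        if bernoulli(k).p % p == 0:   # .p is the exact integer numerator
            hits.append(k)
    return hits

print(irregular_indices(37))    # [32]: 37 is the smallest irregular prime
print(irregular_indices(691))   # includes 12, since B_12 = -691/2730
```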
Herbrand–Ribet theorem
[ "Mathematics" ]
751
[ "Theorems in algebraic number theory", "Theorems in number theory" ]
1,078,762
https://en.wikipedia.org/wiki/Morpholino
A Morpholino, also known as a Morpholino oligomer and as a phosphorodiamidate Morpholino oligomer (PMO), is a type of oligomer molecule (colloquially, an oligo) used in molecular biology to modify gene expression. Its molecular structure contains DNA bases attached to a backbone of methylenemorpholine rings linked through phosphorodiamidate groups. Morpholinos block access of other molecules to small (~25 base) specific sequences of the base-pairing surfaces of ribonucleic acid (RNA). Morpholinos are used as research tools for reverse genetics by knocking down gene function. This article discusses only the Morpholino antisense oligomers, which are nucleic acid analogs. The word "Morpholino" can occur in other chemical names, referring to chemicals containing a six-membered morpholine ring. To help avoid confusion with other morpholine-containing molecules, when describing oligos "Morpholino" is often capitalized as a trade name, but this usage is not consistent across the scientific literature. Morpholino oligos are sometimes referred to as PMOs (for phosphorodiamidate morpholino oligomer), especially in the medical literature. Vivo-Morpholinos and PPMOs are modified forms of Morpholinos with chemical groups covalently attached to facilitate entry into cells. Gene knockdown is achieved by reducing the expression of a particular gene in a cell. In the case of protein-coding genes, this usually leads to a reduction in the quantity of the corresponding protein in the cell. Knocking down gene expression is a method for learning about the function of a particular protein; in a similar manner, causing a specific exon to be spliced out of the RNA transcript encoding a protein can help to determine the function of the protein moiety encoded by that exon or can sometimes knock down the protein activity altogether. These molecules have been applied to studies in several model organisms, including mice, zebrafish, frogs and sea urchins. Morpholinos can also modify the splicing of pre-mRNA or inhibit the maturation and activity of miRNA. Techniques for targeting Morpholinos to RNAs and delivering Morpholinos into cells have been reviewed in a journal article and in book form. Morpholinos are in development as pharmaceutical therapeutics targeting pathogenic organisms such as bacteria and viruses, as well as genetic diseases. The Morpholino-based drug eteplirsen from Sarepta Therapeutics received accelerated approval from the US Food and Drug Administration in September 2016 for the treatment of some mutations causing Duchenne muscular dystrophy, although the approval process was mired in controversy. Other Morpholino-based drugs, golodirsen, viltolarsen, and casimersen (also for Duchenne muscular dystrophy), were approved by the FDA in 2019–2021. History Morpholino oligos were conceived by Summerton (Gene Tools) at AntiVirals Inc. (now Sarepta Therapeutics) and originally developed in collaboration with Weller. Structure Morpholinos are synthetic molecules that are the product of a redesign of natural nucleic acid structure. Usually 25 bases in length, they bind to complementary sequences of RNA or single-stranded DNA by standard nucleic acid base-pairing. In terms of structure, the difference between Morpholinos and DNA is that, while Morpholinos have standard nucleic acid bases, those bases are bound to methylenemorpholine rings linked through phosphorodiamidate groups instead of phosphates. (A figure in the original article compares the structures of an RNA strand and a Morpholino strand.) 
Replacement of anionic phosphates with the uncharged phosphorodiamidate groups eliminates ionization in the usual physiological pH range, so Morpholinos in organisms or cells are uncharged molecules. The entire backbone of a Morpholino is made from these modified subunits. Function Morpholinos do not trigger the degradation of their target RNA molecules, unlike many antisense structural types (e.g., phosphorothioates, siRNA). Instead, Morpholinos act by "steric blocking": binding to a target sequence within an RNA and inhibiting molecules that might otherwise interact with the RNA. Morpholino oligos are often used to investigate the role of a specific mRNA transcript in an embryo. Developmental biologists inject Morpholino oligos into eggs or embryos of zebrafish, African clawed frog (Xenopus), sea urchin and killifish (F. heteroclitus), producing morphant embryos, or electroporate Morpholinos into chick embryos at later developmental stages. With appropriate cytosolic delivery systems, Morpholinos are effective in cell culture. Vivo-Morpholinos, in which the oligo is covalently linked to a delivery dendrimer, enter cells when administered systemically in adult animals or in tissue cultures. Normal gene expression in eukaryotes In eukaryotic organisms, pre-mRNA is transcribed in the nucleus, introns are spliced out, and then the mature mRNA is exported from the nucleus to the cytoplasm. The small subunit of the ribosome usually starts by binding at the 5' end of the mRNA and is joined there by various other eukaryotic initiation factors, forming the initiation complex. The initiation complex scans along the mRNA strand until it reaches a start codon, and then the large subunit of the ribosome attaches to the small subunit and translation of a protein begins. This entire process is referred to as gene expression; it is the process by which the information in a gene, encoded as a sequence of bases in DNA, is converted into the structure of a protein. A Morpholino can modify splicing, block translation, or block other functional sites on RNA, depending on the Morpholino's base sequence. Blocking translation Bound to the 5'-untranslated region of messenger RNA (mRNA), Morpholinos can interfere with progression of the ribosomal initiation complex from the 5' cap to the start codon. This prevents translation of the coding region of the targeted transcript (called "knocking down" gene expression). This is useful experimentally when an investigator wishes to know the function of a particular protein; Morpholinos provide a convenient means of knocking down expression of the protein and learning how that knockdown changes the cells or organism. Some Morpholinos knock down expression so effectively that, after degradation of preexisting proteins, the targeted proteins become undetectable by Western blot. In 2016, a synthetic peptide-conjugated PMO (PPMO) was found to inhibit the expression of New Delhi metallo-beta-lactamase, an enzyme that many drug-resistant bacteria use to destroy carbapenems. Modifying pre-mRNA splicing Morpholinos can interfere with pre-mRNA processing steps either by preventing splice-directing small nuclear ribonucleoprotein (snRNP) complexes from binding to their targets at the borders of introns on a strand of pre-mRNA, or by blocking the nucleophilic adenine base and preventing it from forming the splice lariat structure, or by interfering with the binding of splice regulatory proteins such as splice silencers and splice enhancers. 
Preventing the binding of snRNP U1 (at the donor site) or U2/U5 (at the polypyrimidine moiety and acceptor site) can cause modified splicing, commonly excluding exons from the mature mRNA. Targeting some splice targets results in intron inclusion, while activation of cryptic splice sites can lead to partial inclusions or exclusions. Targets of U11/U12 snRNPs can also be blocked. Splice modification can be conveniently assayed by reverse-transcriptase polymerase chain reaction (RT-PCR) and is seen as a band shift after gel electrophoresis of RT-PCR products. Other applications: blocking other mRNA sites and use as probes Morpholinos have been used to block miRNA activity and maturation. Fluorescein-tagged Morpholinos combined with fluorescein-specific antibodies can be used as probes for in-situ hybridization to miRNAs. Morpholinos can block ribozyme activity. U2 and U12 snRNP functions have been inhibited by Morpholinos. Morpholinos targeted to "slippery" mRNA sequences within protein coding regions can induce translational frameshifts. Morpholinos can block RNA editing, poly(A) tailing and translocation sequences. Morpholino activities against this variety of targets suggest that Morpholinos can be used as a general-purpose tool for blocking interactions of proteins or nucleic acids with mRNA. Specificity, stability and non-antisense effects Morpholinos have become a standard knockdown tool in animal embryonic systems, which have a broader range of gene expression than adult cells and can be strongly affected by an off-target interaction. Following initial injections into frog or fish embryos at the single-cell or few-cell stages, Morpholino effects can be measured up to five days later, after most of the processes of organogenesis and differentiation are past, with observed phenotypes consistent with target-gene knockdown. Control oligos with irrelevant sequences usually produce no change in embryonic phenotype, evidence of the Morpholino oligo's sequence specificity and lack of non-antisense effects. The dose required for a knockdown can be reduced by coinjection of several Morpholino oligos targeting the same mRNA, which is an effective strategy for reducing or eliminating dose-dependent off-target RNA interactions. mRNA rescue experiments can sometimes restore the wild-type phenotype to the embryos and provide evidence for the specificity of a Morpholino. In an mRNA rescue, a Morpholino is co-injected with an mRNA that codes for the protein targeted by the Morpholino. However, the rescue mRNA has a modified 5'-UTR (untranslated region) so that the rescue mRNA contains no target for the Morpholino. The rescue mRNA's coding region encodes the protein of interest. Translation of the rescue mRNA replaces production of the protein that was knocked down by the Morpholino. Since the rescue mRNA would not affect phenotypic changes caused by the Morpholino's off-target gene expression modulation, this return to the wild-type phenotype is further evidence of Morpholino specificity. In some cases, ectopic expression of the rescue RNA makes recovery of the wild-type phenotype impossible. In embryos, Morpholinos can be tested in null mutants to check for unexpected RNA interactions, then used in a wild-type embryo to reveal the acute knockdown phenotype. The knockdown phenotype is often more extreme than the mutant phenotype; in the mutant, effects of losing the null gene can be concealed by genetic compensation. Because of their completely unnatural backbones, Morpholinos are not recognized by cellular proteins. 
Nucleases do not degrade Morpholinos, nor are they degraded in serum or in cells. Up to 18% of Morpholinos appear to induce nontarget-related phenotypes, including cell death in the central nervous system and somite tissues of zebrafish embryos. Most of these effects are due to activation of p53-mediated apoptosis and can be suppressed by co-injection of an anti-p53 Morpholino along with the experimental Morpholino. Moreover, the p53-mediated apoptotic effect of a Morpholino knockdown has been phenocopied using another antisense structural type, showing the p53-mediated apoptosis to be a consequence of the loss of the targeted protein and not a consequence of the knockdown oligo type. It appears that these effects are sequence-specific: in most cases, if a Morpholino is associated with non-target effects, a 4-base mismatch version of that Morpholino will not trigger these effects. A cause for concern in the use of Morpholinos is the potential for "off-target" effects. Whether an observed morphant phenotype is due to the intended knockdown or an interaction with an off-target RNA can often be addressed in embryos by running another experiment to confirm that the observed morphant phenotype results from the knockdown of the expected target. This can be done by recapitulating the morphant phenotype with a second, non-overlapping Morpholino targeting the same mRNA, by confirmation of the observed phenotypes by comparing with a mutant strain (though compensation will obscure a phenotype in some mutants), by testing the Morpholino in a null mutant background to detect additional phenotypic changes, or by dominant-negative methods. As mentioned above, rescue of observed phenotypes by coinjecting a rescue mRNA is, when feasible, a reliable test of specificity of a Morpholino. Delivery For a Morpholino to be effective, it must be delivered past the cell membrane into the cytosol of a cell. Once in the cytosol, Morpholinos freely diffuse between the cytosol and nucleus, as demonstrated by the nuclear splice-modifying activity of Morpholinos observed after microinjection into the cytosol of cells. Different methods are used for delivery into embryos, into cultured cells or into adult animals. A microinjection apparatus is usually used for delivery into an embryo, with injections most commonly performed at the single-cell or few-cell stage; an alternative method for embryonic delivery is electroporation, which can deliver oligos into tissues of later embryonic stages. Common techniques for delivery into cultured cells include the Endo-Porter peptide (which causes the Morpholino to be released from endosomes), the Special Delivery system (no longer commercially available, which used a Morpholino-DNA heteroduplex and an ethoxylated polyethylenimine delivery reagent), electroporation, and scrape loading. Delivery into adult tissues is usually difficult, though there are a few systems allowing useful uptake of unmodified Morpholino oligos (including uptake into muscle cells with Duchenne muscular dystrophy or the vascular endothelial cells stressed during balloon angioplasty). Though they permeate through intercellular spaces in tissues effectively, unconjugated PMOs have limited distribution into the cytosol and nuclear spaces within healthy tissues following IV administration. 
Systemic delivery into many cells in adult organisms can be accomplished by using covalent conjugates of Morpholino oligos with cell-penetrating peptides, and, while toxicity has been associated with moderate doses of the peptide conjugates, they have been used in vivo for effective oligo delivery at doses below those causing observed toxicity. An octa-guanidinium dendrimer attached to the end of a Morpholino can deliver the modified oligo (called a Vivo-Morpholino) from the blood to the cytosol. Delivery-enabled Morpholinos, such as peptide conjugates and Vivo-Morpholinos, show promise as therapeutics for viral and genetic diseases. See also Oligonucleotide synthesis Nucleic acid analogue References Further reading Wiley-Liss, Inc. Special Issue: Morpholino Gene Knockdowns of genesis, Volume 30, Issue 3, pages 89–200 (July 2001). This is a special issue of Genesis that consists of a series of peer-reviewed short papers using Morpholino knockdowns of gene function in various animal and tissue culture systems. "Peptide Nucleic Acids, Morpholinos and Related Antisense Biomolecules", eds. Janson & During (Springer, 2007). Genetics techniques Phosphoramidates Molecular genetics Nucleic acids Morpholines Gene expression Biotechnology
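As a small computational illustration of the antisense-targeting principle described above (a sketch, not from the article; the target sequence and helper function are hypothetical): a Morpholino is designed as the reverse complement of its ~25-base RNA target, for instance a region spanning a transcript's start codon.

```python
# Hypothetical design helper: a Morpholino sequence is the reverse complement
# of its ~25-base RNA target (Morpholinos carry DNA bases, so A pairs with U).
COMPLEMENT = {"A": "T", "U": "A", "T": "A", "G": "C", "C": "G"}

def morpholino_for(target_rna: str) -> str:
    """Return the antisense (reverse-complement) oligo, written 5' to 3'."""
    return "".join(COMPLEMENT[base] for base in reversed(target_rna.upper()))

# Illustrative 25-base target spanning a start codon (AUG); not a real gene.
target = "GCCGCCACCAUGGCUUCUAGCAAGG"
oligo = morpholino_for(target)
print(oligo)
assert len(oligo) == len(target) == 25
```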
Morpholino
[ "Chemistry", "Engineering", "Biology" ]
3,445
[ "Genetics techniques", "Biomolecules by chemical classification", "Gene expression", "Genetic engineering", "Biotechnology", "Molecular genetics", "Cellular processes", "nan", "Molecular biology", "Biochemistry", "Nucleic acids" ]
1,079,448
https://en.wikipedia.org/wiki/Dirac%20operator
In mathematics and quantum mechanics, a Dirac operator is a differential operator that is a formal square root, or half-iterate, of a second-order operator such as a Laplacian. The original case which concerned Paul Dirac was to factorise formally an operator for Minkowski space, to get a form of quantum theory compatible with special relativity; to get the relevant Laplacian as a product of first-order operators he introduced spinors. It was first published by Dirac in 1928. Formal definition In general, let D be a first-order differential operator acting on a vector bundle V over a Riemannian manifold M. If $D^2 = \Delta$, where ∆ is the Laplacian of V, then D is called a Dirac operator. In high-energy physics, this requirement is often relaxed: only the second-order part of D2 must equal the Laplacian. Examples Example 1 $D = -i\partial_x$ is a Dirac operator on the tangent bundle over a line. Example 2 Consider a simple bundle of notable importance in physics: the configuration space of a particle with spin confined to a plane, which is also the base manifold. It is represented by a wavefunction $\psi(x, y) = \begin{pmatrix} \chi(x, y) \\ \eta(x, y) \end{pmatrix}$, where x and y are the usual coordinate functions on R2. χ specifies the probability amplitude for the particle to be in the spin-up state, and similarly for η. The so-called spin-Dirac operator can then be written $D = -i\sigma_x \partial_x - i\sigma_y \partial_y$, where σi are the Pauli matrices. Note that the anticommutation relations for the Pauli matrices make the proof of the above defining property trivial. Those relations define the notion of a Clifford algebra. Solutions to the Dirac equation for spinor fields are often called harmonic spinors. Example 3 Feynman's Dirac operator describes the propagation of a free fermion in three dimensions and is elegantly written using the Feynman slash notation. In introductory textbooks to quantum field theory, this will appear in the form $D = c\,\vec{\alpha}\cdot(-i\hbar\nabla) + mc^2\beta$, where $\vec{\alpha} = (\alpha_1, \alpha_2, \alpha_3)$ are the off-diagonal Dirac matrices $\alpha_i = \beta\gamma_i$, with $\beta = \gamma_0$; the remaining constants are c, the speed of light, $\hbar$, the Planck constant, and m, the mass of a fermion (for example, an electron). It acts on a four-component wave function $\psi(x)$ in the Sobolev space of smooth, square-integrable functions. It can be extended to a self-adjoint operator on that domain. The square, in this case, is not the Laplacian, but instead (after setting $\hbar = c = 1$) $D^2 = \Delta + m^2$. Example 4 Another Dirac operator arises in Clifford analysis. In Euclidean n-space this is $D = \sum_{j=1}^{n} e_j \frac{\partial}{\partial x_j}$, where {ej: j = 1, ..., n} is an orthonormal basis for Euclidean n-space, and Rn is considered to be embedded in a Clifford algebra. This is a special case of the Atiyah–Singer–Dirac operator acting on sections of a spinor bundle. Example 5 For a spin manifold, M, the Atiyah–Singer–Dirac operator is locally defined as follows: For $x \in M$ and e1(x), ..., ej(x) a local orthonormal basis for the tangent space of M at x, the Atiyah–Singer–Dirac operator is $D = \sum_{j=1}^{n} e_j(x)\,\tilde{\Gamma}_{e_j(x)}$, where $\tilde{\Gamma}$ is the spin connection, a lifting of the Levi-Civita connection on M to the spinor bundle over M. The square in this case is not the Laplacian, but instead $D^2 = \Delta + R/4$, where R is the scalar curvature of the connection. Example 6 On a Riemannian manifold of dimension n with Levi-Civita connection $\nabla$ and an orthonormal basis $\{e_a\}$, we can define the exterior derivative d and the coderivative δ from the connection and the basis (d as the antisymmetrized covariant derivative $d = e^a \wedge \nabla_{e_a}$, δ as its formal adjoint). Then we can define a Dirac–Kähler operator D as follows: $D = d - \delta$. The operator acts on sections of the Clifford bundle in general, and it can be restricted to the spinor bundle, an ideal of the Clifford bundle, only if the projection operator on the ideal is parallel. 
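As a worked check of the defining property in Example 2 (a sketch, using the sign convention in which the Laplacian of the bundle is the positive operator $\Delta = -(\partial_x^2 + \partial_y^2)$):

$$D^2 = \left(-i\sigma_x \partial_x - i\sigma_y \partial_y\right)^2 = -\sigma_x^2\,\partial_x^2 - \sigma_y^2\,\partial_y^2 - \{\sigma_x, \sigma_y\}\,\partial_x \partial_y = -(\partial_x^2 + \partial_y^2)\,\mathbb{1}_2,$$

since $\sigma_x^2 = \sigma_y^2 = \mathbb{1}_2$, the anticommutator $\{\sigma_x, \sigma_y\}$ vanishes, and the partial derivatives commute; this is exactly the two-dimensional Laplacian acting on each spinor component, as the definition requires.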
Generalisations In Clifford analysis, the operator $D\colon f(x_1, \ldots, x_k) \mapsto (\partial_{x_1} f, \ldots, \partial_{x_k} f)$ acting on spinor-valued functions is sometimes called the Dirac operator in k Clifford variables. In this notation, S is the space of spinors, the $x_i$ are n-dimensional variables, and $\partial_{x_i}$ is the Dirac operator in the i-th variable. This is a common generalization of the Dirac operator (k = 1) and the Dolbeault operator (n = 2, k arbitrary). It is an invariant differential operator. The resolution of D is known only in some special cases. See also AKNS hierarchy Dirac equation Clifford algebra Clifford analysis Connection Dolbeault operator Heat kernel Spinor bundle References Differential operators Quantum mechanics Mathematical physics
Dirac operator
[ "Physics", "Mathematics" ]
935
[ "Mathematical analysis", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Differential operators", "Quantum operators", "Mathematical physics" ]
1,079,466
https://en.wikipedia.org/wiki/F%C3%B6rster%20resonance%20energy%20transfer
Förster resonance energy transfer (FRET), fluorescence resonance energy transfer, resonance energy transfer (RET) or electronic energy transfer (EET) is a mechanism describing energy transfer between two light-sensitive molecules (chromophores). A donor chromophore, initially in its electronic excited state, may transfer energy to an acceptor chromophore through nonradiative dipole–dipole coupling. The efficiency of this energy transfer is inversely proportional to the sixth power of the distance between donor and acceptor, making FRET extremely sensitive to small changes in distance. Measurements of FRET efficiency can be used to determine if two fluorophores are within a certain distance of each other. Such measurements are used as a research tool in fields including biology and chemistry. FRET is analogous to near-field communication, in that the radius of interaction is much smaller than the wavelength of light emitted. In the near-field region, the excited chromophore emits a virtual photon that is instantly absorbed by a receiving chromophore. These virtual photons are undetectable, since their existence violates the conservation of energy and momentum, and hence FRET is known as a radiationless mechanism. Quantum electrodynamical calculations have been used to determine that radiationless FRET and radiative energy transfer are the short- and long-range asymptotes of a single unified mechanism. Terminology Förster resonance energy transfer is named after the German scientist Theodor Förster. When both chromophores are fluorescent, the term "fluorescence resonance energy transfer" is often used instead, although the energy is not actually transferred by fluorescence. Because the phenomenon is always a nonradiative transfer of energy (even when occurring between two fluorescent chromophores), the name "Förster resonance energy transfer" is preferred to "fluorescence resonance energy transfer" in order to avoid an erroneous interpretation; the latter nevertheless enjoys common usage in the scientific literature. FRET is not restricted to fluorescence and occurs in connection with phosphorescence as well. Theoretical basis The FRET efficiency ($E$) is the quantum yield of the energy-transfer transition, i.e. the probability of an energy-transfer event occurring per donor excitation event: $$E = \frac{k_{ET}}{k_f + k_{ET} + \sum_i k_i},$$ where $k_f$ is the radiative decay rate of the donor, $k_{ET}$ is the rate of energy transfer, and the $k_i$ are the rates of any other de-excitation pathways, excluding energy transfers to other acceptors. The FRET efficiency depends on many physical parameters that can be grouped as: 1) the distance between the donor and the acceptor (typically in the range of 1–10 nm), 2) the spectral overlap of the donor emission spectrum and the acceptor absorption spectrum, and 3) the relative orientation of the donor emission dipole moment and the acceptor absorption dipole moment. $E$ depends on the donor-to-acceptor separation distance $r$ with an inverse 6th-power law due to the dipole–dipole coupling mechanism: $$E = \frac{1}{1 + (r/R_0)^6},$$ with $R_0$ being the Förster distance of this pair of donor and acceptor, i.e. the distance at which the energy-transfer efficiency is 50%. 
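Before continuing with the Förster distance itself, a minimal numerical sketch of the sixth-power law just stated (Python; the value R0 = 5.0 nm is an assumed, typical figure, not from the article):

```python
def fret_efficiency(r_nm: float, R0_nm: float = 5.0) -> float:
    """FRET efficiency E = 1 / (1 + (r/R0)^6) at separation r (same units as R0)."""
    return 1.0 / (1.0 + (r_nm / R0_nm) ** 6)

for r in (2.5, 5.0, 7.5, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
# r = R0 gives E = 0.5 by definition; E falls off steeply around R0,
# which is what makes FRET a sensitive 'spectroscopic ruler'.
```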
The Förster distance depends on the overlap integral of the donor emission spectrum with the acceptor absorption spectrum and on their mutual molecular orientation, as expressed by the following equation, all in SI units: $$R_0^6 = \frac{9\,Q_0\,(\ln 10)\,\kappa^2\,J}{128\,\pi^5\,n^4\,N_A},$$ where $Q_0$ is the fluorescence quantum yield of the donor in the absence of the acceptor, $\kappa^2$ is the dipole orientation factor, $n$ is the refractive index of the medium, $N_A$ is the Avogadro constant, and $J$ is the spectral overlap integral, calculated as $$J = \int f_D(\lambda)\,\epsilon_A(\lambda)\,\lambda^4\,d\lambda,$$ where $f_D$ is the donor emission spectrum normalized to an area of 1, and $\epsilon_A$ is the acceptor molar extinction coefficient, normally obtained from an absorption spectrum. The orientation factor κ is given by $$\kappa = \hat{\mu}_A \cdot \hat{\mu}_D - 3(\hat{\mu}_D \cdot \hat{R})(\hat{\mu}_A \cdot \hat{R}),$$ where $\hat{\mu}_i$ denotes the normalized transition dipole moment of the respective fluorophore, and $\hat{R}$ denotes the normalized inter-fluorophore displacement. $\kappa^2 = 2/3$ is often assumed. This value is obtained when both dyes are freely rotating and can be considered to be isotropically oriented during the excited-state lifetime. If either dye is fixed or not free to rotate, then $\kappa^2 = 2/3$ will not be a valid assumption. In most cases, however, even modest reorientation of the dyes results in enough orientational averaging that $\kappa^2 = 2/3$ does not result in a large error in the estimated energy-transfer distance, due to the sixth-power dependence of $R_0$ on $\kappa^2$. Even when $\kappa^2$ is quite different from 2/3, the error can be associated with a shift in $R_0$, and thus determinations of changes in relative distance for a particular system are still valid. Fluorescent proteins do not reorient on a timescale that is faster than their fluorescence lifetime; in this case $0 \le \kappa^2 \le 4$. The units of the data are usually not SI units, and using the original units to calculate the Förster distance is often more convenient. For example, the wavelength is often in nm and the extinction coefficient in M−1 cm−1, where M is molar concentration (mol/L); $J$ obtained from these units then has the unit M−1 cm−1 nm4. To express $R_0$ in Å (10−10 m), the equation is adjusted to $$R_0^6 = 8.79 \times 10^{-5}\,\kappa^2\,n^{-4}\,Q_0\,J \quad (\text{in Å}^6).$$ For time-dependent analyses of FRET, the rate of energy transfer ($k_{ET}$) can be used directly instead: $$k_{ET} = \frac{1}{\tau_D}\left(\frac{R_0}{r}\right)^6,$$ where $\tau_D$ is the donor's fluorescence lifetime in the absence of the acceptor. The FRET efficiency relates to the quantum yield and the fluorescence lifetime of the donor molecule as follows: $$E = 1 - \tau'_D/\tau_D,$$ where $\tau'_D$ and $\tau_D$ are the donor fluorescence lifetimes in the presence and absence of an acceptor, respectively, or as $$E = 1 - F'_D/F_D,$$ where $F'_D$ and $F_D$ are the donor fluorescence intensities with and without an acceptor, respectively. Experimental confirmation of the FRET theory The inverse sixth-power distance dependence of Förster resonance energy transfer was experimentally confirmed by Wilchek, Edelhoch and Brand using tryptophyl peptides. Stryer, Haugland and Yguerabide also experimentally demonstrated the theoretical dependence of Förster resonance energy transfer on the overlap integral by using a fused indolosteroid as a donor and a ketone as an acceptor. However, many discrepancies between specialized experiments and the theory have been observed in complicated environments, where the orientations and quantum yields of the molecules are difficult to estimate. Methods to measure FRET efficiency In fluorescence microscopy, fluorescence confocal laser scanning microscopy, as well as in molecular biology, FRET is a useful tool to quantify molecular dynamics in biophysics and biochemistry, such as protein–protein interactions, protein–DNA interactions, DNA–DNA interactions, and protein conformational changes. 
For monitoring the complex formation between two molecules, one of them is labeled with a donor and the other with an acceptor. The FRET efficiency is measured and used to identify interactions between the labeled complexes. There are several ways of measuring the FRET efficiency by monitoring changes in the fluorescence emitted by the donor or the acceptor. Sensitized emission One method of measuring FRET efficiency is to measure the variation in acceptor emission intensity. When the donor and acceptor are in proximity (1–10 nm) due to the interaction of the two molecules, the acceptor emission will increase because of the intermolecular FRET from the donor to the acceptor. For monitoring protein conformational changes, the target protein is labeled with a donor and an acceptor at two loci. When a twist or bend of the protein brings a change in the distance or relative orientation of the donor and acceptor, a FRET change is observed. If a molecular interaction or a protein conformational change is dependent on ligand binding, this FRET technique is applicable to fluorescent indicators for ligand detection. Photobleaching FRET FRET efficiencies can also be inferred from the photobleaching rates of the donor in the presence and absence of an acceptor. This method can be performed on most fluorescence microscopes; one simply shines the excitation light (of a frequency that will excite the donor but not the acceptor significantly) on specimens with and without the acceptor fluorophore and monitors the donor fluorescence (typically separated from acceptor fluorescence using a bandpass filter) over time. The timescale is that of photobleaching, which is seconds to minutes, with the fluorescence in each curve being given by $$\text{const} + A\,e^{-t/\tau_{pb}},$$ where $\tau_{pb}$ is the photobleaching decay time constant and depends on whether the acceptor is present or not. Since photobleaching consists in the permanent inactivation of excited fluorophores, resonance energy transfer from an excited donor to an acceptor fluorophore prevents the photobleaching of that donor fluorophore, and thus high FRET efficiency leads to a longer photobleaching decay time constant: $$E = 1 - \tau_{pb}/\tau'_{pb},$$ where $\tau'_{pb}$ and $\tau_{pb}$ are the photobleaching decay time constants of the donor in the presence and in the absence of the acceptor, respectively. (Notice that the fraction is the reciprocal of that used for lifetime measurements.) This technique was introduced by Jovin in 1989. Its use of an entire curve of points to extract the time constants can give it accuracy advantages over the other methods. Also, the fact that time measurements are over seconds rather than nanoseconds makes it easier than fluorescence lifetime measurements, and because photobleaching decay rates do not generally depend on donor concentration (unless acceptor saturation is an issue), the careful control of concentrations needed for intensity measurements is not needed. It is, however, important to keep the illumination the same for the with- and without-acceptor measurements, as photobleaching increases markedly with more intense incident light. Lifetime measurements FRET efficiency can also be determined from the change in the fluorescence lifetime of the donor. The lifetime of the donor will decrease in the presence of the acceptor. Lifetime measurements of the FRET donor are used in fluorescence-lifetime imaging microscopy (FLIM). 
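A short simulated illustration of the photobleaching relation above (a sketch; the time constants, amplitudes, and noise levels are invented for the example, and the fitting uses SciPy's curve_fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, offset):
    """Single-exponential photobleaching model: const + A exp(-t/tau)."""
    return offset + amplitude * np.exp(-t / tau)

def fit_tau(t, intensity):
    popt, _ = curve_fit(decay, t, intensity,
                        p0=(intensity.max(), t.max() / 2, intensity.min()))
    return popt[1]  # fitted photobleaching time constant tau_pb

# Simulated donor-intensity traces on the seconds-to-minutes timescale.
t = np.linspace(0, 300, 600)  # s
rng = np.random.default_rng(0)
trace_without = decay(t, 100, 60.0, 5) + rng.normal(0, 1, t.size)
trace_with = decay(t, 100, 120.0, 5) + rng.normal(0, 1, t.size)  # FRET slows bleaching

tau_without = fit_tau(t, trace_without)
tau_with = fit_tau(t, trace_with)
E = 1 - tau_without / tau_with  # reciprocal of the ratio used for lifetimes
print(f"tau(-A) = {tau_without:.1f} s, tau(+A) = {tau_with:.1f} s, E = {E:.2f}")
# With these assumed constants, E is about 1 - 60/120 = 0.5.
```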
Single-molecule FRET (smFRET) smFRET is a group of methods using various microscopic techniques to measure a pair of donor and acceptor fluorophores that are excited and detected at the single-molecule level. In contrast to "ensemble FRET" or "bulk FRET", which provides the FRET signal of a high number of molecules, single-molecule FRET is able to resolve the FRET signal of each individual molecule. The variation of the smFRET signal is useful to reveal kinetic information that an ensemble measurement cannot provide, especially when the system is at equilibrium. Heterogeneity among different molecules can also be observed. This method has been applied in many measurements of biomolecular dynamics such as DNA/RNA/protein folding/unfolding and other conformational changes, and intermolecular dynamics such as reaction, binding, adsorption, and desorption that are particularly useful in chemical sensing, bioassays, and biosensing. Fluorophores used for FRET CFP-YFP pairs One common pair of fluorophores for biological use is a cyan fluorescent protein (CFP) – yellow fluorescent protein (YFP) pair. Both are color variants of green fluorescent protein (GFP). Labeling with organic fluorescent dyes requires purification, chemical modification, and intracellular injection of a host protein. GFP variants can be attached to a host protein by genetic engineering, which can be more convenient. Additionally, a fusion of CFP and YFP ("tandem-dimer") linked by a protease cleavage sequence can be used as a cleavage assay. BRET A limitation of FRET performed with fluorophore donors is the requirement for external illumination to initiate the fluorescence transfer, which can lead to background noise in the results from direct excitation of the acceptor or to photobleaching. To avoid this drawback, bioluminescence resonance energy transfer (or BRET) has been developed. This technique uses a bioluminescent luciferase (typically the luciferase from Renilla reniformis) rather than CFP to produce an initial photon emission compatible with YFP. BRET has also been implemented using a different luciferase enzyme, engineered from the deep-sea shrimp Oplophorus gracilirostris. This luciferase is smaller (19 kD) and brighter than the more commonly used luciferase from Renilla reniformis, and has been named NanoLuc or NanoKAZ. Promega has developed a patented substrate for NanoLuc called furimazine, though other valuable coelenterazine substrates for NanoLuc have also been published. A split-protein version of NanoLuc developed by Promega has also been used as a BRET donor in experiments measuring protein–protein interactions. Homo-FRET In general, "FRET" refers to situations where the donor and acceptor proteins (or "fluorophores") are of two different types. In many biological situations, however, researchers might need to examine the interactions between two or more proteins of the same type, or indeed the same protein with itself, for example if the protein folds or forms part of a polymer chain of proteins, or for other questions of quantification in biological cells or in vitro experiments. Obviously, spectral differences will not be the tool used to detect and measure FRET, as both the acceptor and donor protein emit light with the same wavelengths. 
Yet researchers can detect differences in the polarisation between the light which excites the fluorophores and the light which is emitted, in a technique called FRET anisotropy imaging; the level of quantified anisotropy (difference in polarisation between the excitation and emission beams) then becomes an indicative guide to how many FRET events have happened. In the field of nano-photonics, FRET can be detrimental if it funnels excitonic energy to defect sites, but it is also essential to charge collection in organic and quantum-dot-sensitized solar cells, and various FRET-enabled strategies have been proposed for different opto-electronic devices. It is then essential to understand how isolated nano-emitters behave when they are stacked in a dense layer. Nanoplatelets are especially promising candidates for strong homo-FRET exciton diffusion because of their strong in-plane dipole coupling and low Stokes shift. Fluorescence microscopy study of such single chains demonstrated that energy transfer by FRET between neighboring platelets causes energy to diffuse over a typical 500-nm length (about 80 nano-emitters), and the transfer time between platelets is on the order of 1 ps. Others A variety of compounds besides fluorescent proteins are also used as FRET fluorophores. Applications The applications of fluorescence resonance energy transfer (FRET) have expanded tremendously in the last 25 years, and the technique has become a staple in many biological and biophysical fields. FRET can be used as a spectroscopic ruler to measure distance and detect molecular interactions in a number of systems and has applications in biology and biochemistry. Proteins FRET is often used to detect and track interactions between proteins. Additionally, FRET can be used to measure distances between domains in a single protein by tagging different regions of the protein with fluorophores and measuring emission to determine distance. This provides information about protein conformation, including secondary structures and protein folding. This extends to tracking functional changes in protein structure, such as conformational changes associated with myosin activity. Applied in vivo, FRET has been used to detect the location and interactions of cellular structures including integrins and membrane proteins. Membranes FRET can be used to observe membrane fluidity, movement and dispersal of membrane proteins, membrane lipid-protein and protein-protein interactions, and successful mixing of different membranes. FRET is also used to study formation and properties of membrane domains and lipid rafts in cell membranes and to determine surface density in membranes. Chemosensor FRET-based probes can detect the presence of various molecules: the probe's structure is affected by small-molecule binding or activity, which can turn the FRET system on or off. This is often used to detect anions, cations, small uncharged molecules, and some larger biomacromolecules as well. Similarly, FRET systems have been designed to detect changes in the cellular environment due to such factors as pH, hypoxia, or mitochondrial membrane potential. Signaling pathways Another use for FRET is in the study of metabolic or signaling pathways. For example, FRET and BRET have been used in various experiments to characterize G-protein-coupled receptor activation and consequent signaling mechanisms. Other examples include the use of FRET to analyze such diverse processes as bacterial chemotaxis and caspase activity in apoptosis. 
Protein and nucleic acid folding kinetics The folding dynamics of proteins, DNA, RNA, and other polymers have been measured using FRET. Usually, these systems are at equilibrium, and their kinetics are hidden. However, the kinetics can be measured by single-molecule FRET with proper placement of the acceptor and donor dyes on the molecules. See single-molecule FRET for a more detailed description. Other applications In addition to the common uses previously mentioned, FRET and BRET are also effective in the study of biochemical reaction kinetics. FRET is increasingly used for monitoring pH-dependent assembly and disassembly and is valuable in the analysis of nucleic acid encapsulation. This technique can be used to determine factors affecting various types of nanoparticle formation, as well as the mechanisms and effects of nanomedicines. Other methods A different, but related, mechanism is Dexter electron transfer. An alternative method of detecting protein–protein proximity is bimolecular fluorescence complementation (BiFC), where two parts of a fluorescent protein are each fused to other proteins. When these two parts meet, they form a fluorophore on a timescale of minutes or hours. See also Dexter electron transfer Förster coupling Surface energy transfer Time-resolved fluorescence energy transfer References External links FRET Imaging (Tutorial of Becker & Hickl, website) Imaging Fluorescence Biochemistry methods Biophysics Cell imaging Optical phenomena Protein–protein interaction assays Fluorescence techniques Cell biology Laboratory techniques Molecular biology techniques Energy transfer
Förster resonance energy transfer
[ "Physics", "Chemistry", "Biology" ]
3,742
[ "Biochemistry methods", "Physical phenomena", "Luminescence", "Applied and interdisciplinary physics", "Fluorescence", "Protein–protein interaction assays", "Cell biology", "Optical phenomena", "Molecular biology techniques", "Biophysics", "nan", "Microscopy", "Biochemistry", "Cell imaging...
1,080,811
https://en.wikipedia.org/wiki/Physics%20of%20skiing
The physics of skiing refers to the analysis of the forces acting on a person while skiing. The motion of a skier is determined by the physical principles of the conservation of energy and the frictional forces acting on the body. For example, in downhill skiing, as the skier is accelerated down the hill by the force of gravity, their gravitational potential energy is converted to kinetic energy, the energy of motion. In the ideal case, all of the potential energy would be converted into kinetic energy; in reality, some of the energy is lost to heat due to friction. One type of friction acting on the skier is the kinetic friction between the skis and snow. The force of friction acts in the direction opposite to the direction of motion, resulting in a lower velocity and hence less kinetic energy. The kinetic friction can be reduced by applying wax to the bottom of the skis, which reduces the coefficient of friction. Different types of wax are manufactured for different temperature ranges because the snow quality changes depending on the current weather conditions and the thermal history of the snow. The shape and construction material of a ski can also greatly impact the forces acting on a skier. Skis designed for use in powder conditions are very different from skis designed for use on groomed trails. These design differences can be attributed to the differences in snow quality. An illustration of how snow quality can differ follows. In an area which experiences fluctuations in temperature around 0 °C, the freezing point of water, both rain and snowfall are possible. Wet snow or wet ground can freeze into a slippery sheet of ice. In an area which consistently experiences temperatures below 0 °C, snowfall leads to accumulation of snow on the ground. When fresh, this snow is fluffy and powder-like. This type of snow has a lot of air space. Over time, this snow will become more compact, and the lower layers of snow will become denser than the top layer. Skiers can use this type of information to improve their skiing experience by choosing the appropriate skis and wax, or by choosing to stay home. Search and rescue teams and backcountry users rely on an understanding of snow to navigate the dangers present in the outdoors. The second type of frictional force acting on a skier is drag, typically referred to as "air resistance". The drag force is proportional to the cross-sectional area of the body (e.g. the skier), to the square of its velocity relative to the fluid through which it travels (e.g. air), and to the density of that fluid. To go faster, a skier can try to reduce the cross-sectional area of their body. Downhill skiers can adopt more aerodynamic positions such as tucking. Alpine ski racers wear skin-tight race suits. The general area of physics which addresses these forces is known as fluid dynamics; a numerical sketch of the drag relation is given after this section. References External links Math models of the physics of skiing Paper on carving The square-cube law in aircraft design Mechanics Biomechanics Skiing
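A minimal numerical sketch of the drag relation discussed above, using the standard quadratic drag law F_d = (1/2) ρ C_d A v² (all parameter values below are assumed, illustrative figures, not measurements from the article):

```python
RHO_AIR = 1.2  # kg/m^3, air density near sea level (assumed)
C_D = 1.0      # assumed drag coefficient for a skier

def drag_force(v_mps: float, area_m2: float, c_d: float = C_D,
               rho: float = RHO_AIR) -> float:
    """Drag force in newtons: F_d = 0.5 * rho * C_d * A * v**2."""
    return 0.5 * rho * c_d * area_m2 * v_mps ** 2

v = 30.0  # m/s, roughly 108 km/h downhill speed
for label, area in (("upright", 0.8), ("tucked", 0.4)):
    print(f"{label:8s}: F_d = {drag_force(v, area):6.1f} N")
# Halving the frontal area halves the drag at a given speed, which is why
# racers tuck; drag also grows with the square of the speed.
```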
Physics of skiing
[ "Physics", "Engineering" ]
599
[ "Biomechanics", "Mechanics", "Mechanical engineering" ]
1,081,291
https://en.wikipedia.org/wiki/The%20Tao%20of%20Physics
The Tao of Physics: An Exploration of the Parallels Between Modern Physics and Eastern Mysticism is a 1975 book by physicist Fritjof Capra. A bestseller in the United States, it has been translated into 23 languages. Capra summarized his motivation for writing the book: “Science does not need mysticism and mysticism does not need science. But man needs both.” Origin According to the preface of the first edition, reprinted in subsequent editions, Capra struggled to reconcile theoretical physics and Eastern mysticism and was at first "helped on my way by 'power plants'" or psychedelics, with the first experience "so overwhelming that I burst into tears, at the same time, not unlike Castaneda, pouring out my impressions to a piece of paper". (p. 12, 4th ed.) Capra later discussed his ideas with Werner Heisenberg in 1972, as he mentioned in the following interview excerpt: I had several discussions with Heisenberg. I lived in England then [circa 1972], and I visited him several times in Munich and showed him the whole manuscript chapter by chapter. He was very interested and very open, and he told me something that I think is not known publicly because he never published it. He said that he was well aware of these parallels. While he was working on quantum theory he went to India to lecture and was a guest of Tagore. He talked a lot with Tagore about Indian philosophy. Heisenberg told me that these talks had helped him a lot with his work in physics, because they showed him that all these new ideas in quantum physics were in fact not all that crazy. He realized there was, in fact, a whole culture that subscribed to very similar ideas. Heisenberg said that this was a great help for him. Niels Bohr had a similar experience when he went to China. Bohr adopted the yin-yang symbol as part of his coat of arms when he was knighted in 1947; the book claims that this was a result of orientalist influences. The Tao of Physics was followed by other books of the same genre, such as The Hidden Connection, The Turning Point and The Web of Life, in which Capra extended the argument of how Eastern mysticism and the scientific findings of today relate, and how Eastern mysticism might also have the linguistic and philosophical tools required to address some of the biggest remaining scientific challenges. Afterword to the third edition In the afterword to the third edition (published in 1982, pp. 360–368 of the 1991 edition), Capra offers six suggestions for a new paradigm in science. Consider the part and the whole as more symmetrically conditioning one another. Replace thinking in terms of structure with thinking in terms of process. Replace 'objective science' with 'epistemic science', where the approach to deciding what counts as knowledge adapts to the subject studied. Replace the idea of knowledge as buildings based on foundations with an idea of knowledge as networks. Abandon the quest for truth in favour of a quest for better approximations. Abandon the idea of domination of nature in favour of one of cooperation and nonviolence. Capra reconnects this new paradigm to the theories of living and self-organizing systems that have emerged from cybernetics. Here he quotes Ilya Prigogine, Gregory Bateson, Humberto Maturana and Francisco Varela (p. 372 of the 1991 edition). 
Acclaim and criticism According to Capra, Werner Heisenberg was in agreement with the main idea of the book: I showed the manuscript to him chapter by chapter, briefly summarizing the content of each chapter and emphasizing especially the topics related to his own work. Heisenberg was most interested in the entire manuscript and very open to hearing my ideas. I told him that I saw two basic themes running through all the theories of modern physics, which were also the two basic themes of all mystical traditions – the fundamental interrelatedness and interdependence of all phenomena and the intrinsically dynamic nature of reality. Heisenberg agreed with me as far as physics was concerned and he also told me that he was well aware of the emphasis on interconnectedness in Eastern thought. However, he had been unaware of the dynamic aspect of the Eastern world view and was intrigued when I showed him with numerous examples from my manuscript that the principal Sanskrit terms used in Hindu and Buddhist philosophy – brahman, rta, lila, karma, samsara, etc. – had dynamic connotations. At the end of my rather long presentation of the manuscript Heisenberg said simply: "Basically, I am in complete agreement with you." The book was a best-seller in the United States. It received a positive review from New York magazine: A brilliant best-seller.... Lucidly analyzes the tenets of Hinduism, Buddhism, and Taoism to show their striking parallels with the latest discoveries in cyclotrons. Victor N. Mansfield, a professor of physics and astronomy at Colgate University who wrote many papers and books of his own connecting physics to Buddhism and also to Jungian psychology, complimented The Tao of Physics in Physics Today: "Fritjof Capra, in The Tao of Physics, seeks ... an integration of the mathematical world view of modern physics and the mystical visions of Buddha and Krishna. Where others have failed miserably in trying to unite these seemingly different world views, Capra, a high-energy theorist, has succeeded admirably. I strongly recommend the book to both layman and scientist." However, it is not without its critics. Jeremy Bernstein, a professor of physics at the Stevens Institute of Technology, chastised The Tao of Physics: At the heart of the matter is Mr. Capra's methodology – his use of what seem to me to be accidental similarities of language as if these were somehow evidence of deeply rooted connections. Thus I agree with Capra when he writes, "Science does not need mysticism and mysticism does not need science but man needs both." What no one needs, in my opinion, is this superficial and profoundly misleading book. Leon M. Lederman, a Nobel Prize-winning physicist and Director Emeritus of Fermilab, criticized both The Tao of Physics and Gary Zukav's The Dancing Wu Li Masters in his 1993 book The God Particle: If the Universe Is the Answer, What Is the Question? Starting with reasonable descriptions of quantum physics, he constructs elaborate extensions, totally bereft of the understanding of how carefully experiment and theory are woven together and how much blood, sweat, and tears go into each painful advance. Philosopher of science Eric Scerri criticizes both Capra and Zukav and similar books. 
Peter Woit, a mathematical physicist at Columbia University, criticized Capra for continuing to build his case for physics-mysticism parallels on the bootstrap model of strong-force interactions set out at the end of the book, long after the Standard Model had become thoroughly accepted by physicists as a better model: The Tao of Physics was completed in December 1974, and the implications of the November Revolution one month earlier that led to the dramatic confirmations of the standard-model quantum field theory clearly had not sunk in for Capra (like many others at that time). What is harder to understand is that the book has now gone through several editions, and in each of them Capra has left intact the now out-of-date physics, including new forewords and afterwords that with a straight face deny what has happened. The foreword to the second edition of 1983 claims, "It has been very gratifying for me that none of these recent developments has invalidated anything I wrote seven years ago. In fact, most of them were anticipated in the original edition," a statement far from any relation to the reality that in 1983 the standard model was nearly universally accepted in the physics community, and the bootstrap theory was a dead idea ... Even now, Capra's book, with its nutty denials of what has happened in particle theory, can be found selling well at every major bookstore. It has been joined by some other books on the same topic, most notably Gary Zukav's The Dancing Wu-Li Masters. The bootstrap philosophy, despite its complete failure as a physical theory, lives on as part of an embarrassing New Age cult, with its followers refusing to acknowledge what has happened. In a 2019 commemoration in honour of physicist Geoffrey Chew, one of bootstrap's "fathers", Capra replied to criticisms such as Woit's: However, the standard model does not include gravity, and hence fails to integrate all known particles and forces into a single mathematical framework. The currently most popular candidate for such a framework is string theory, which pictures all particles as different vibrations of mathematical "strings" in an abstract 9-dimensional space. The mathematical elegance of string theory is compelling, but the theory has serious deficiencies. If these difficulties persist, and if a theory of "quantum gravity" continues to remain elusive, the bootstrap idea may well be revived someday, in some mathematical formulation or other. Editions The Tao of Physics, Fritjof Capra, Shambhala Publications, 1975; Shambhala, 2nd edition 1983; Bantam reprint 1985; Shambhala, 3rd edition 1991; Shambhala, 4th edition 2000; Shambhala, 5th edition 2010; Audio Renaissance, 1990 audio cassette tape; Audio Renaissance, 2004 audio compact disc (abridged) See also Quantum mysticism Quantum Reality The Dancing Wu Li Masters The Turning Point War of the Worldviews Notes References The Holographic Paradigm and Other Paradoxes, edited by Ken Wilber, Boulder, Colorado: Shambhala, 1982. Siu, R. G. H., The Tao of Science: An Essay on Western Knowledge and Eastern Wisdom, Cambridge, Massachusetts: MIT Press, 1957. 1975 non-fiction books American non-fiction books Books by Fritjof Capra English-language non-fiction books Books about philosophy of physics Quantum mysticism Shambhala Publications books Taoist philosophy
The Tao of Physics
[ "Physics" ]
2,070
[ "Quantum mechanics", "Quantum mysticism" ]
1,082,550
https://en.wikipedia.org/wiki/Kronecker%E2%80%93Weber%20theorem
In algebraic number theory, it can be shown that every cyclotomic field is an abelian extension of the rational number field Q, having Galois group of the form (Z/nZ)×. The Kronecker–Weber theorem provides a partial converse: every finite abelian extension of Q is contained within some cyclotomic field. In other words, every algebraic integer whose Galois group is abelian can be expressed as a sum of roots of unity with rational coefficients. For example, √2 = ζ8 + ζ8^−1 and i = ζ4, where ζn = e^(2πi/n) denotes a primitive n-th root of unity. The theorem is named after Leopold Kronecker and Heinrich Martin Weber. Field-theoretic formulation The Kronecker–Weber theorem can be stated in terms of fields and field extensions. Precisely, the Kronecker–Weber theorem states: every finite abelian extension of the rational numbers Q is a subfield of a cyclotomic field. That is, whenever an algebraic number field has a Galois group over Q that is an abelian group, the field is a subfield of a field obtained by adjoining a root of unity to the rational numbers. For a given abelian extension K of Q there is a minimal cyclotomic field that contains it. The theorem allows one to define the conductor of K as the smallest integer n such that K lies inside the field generated by the n-th roots of unity. For example, the quadratic fields have as conductor the absolute value of their discriminant, a fact generalised in class field theory. History The theorem was first stated by Kronecker in 1853, though his argument was not complete for extensions of degree a power of 2. Weber published a proof in 1886, but this had some gaps and errors that were later pointed out and corrected. The first complete proof was given by Hilbert in 1896. Generalizations The local Kronecker–Weber theorem states that any abelian extension of a local field can be constructed using cyclotomic extensions and Lubin–Tate extensions; several different proofs of it have been given. Hilbert's twelfth problem asks for generalizations of the Kronecker–Weber theorem to base fields other than the rational numbers, and asks for the analogues of the roots of unity for those fields. A different approach to abelian extensions is given by class field theory. References External links Class field theory Cyclotomic fields Theorems in algebraic number theory
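As a quick numerical sanity check of the example above, the following short Python sketch verifies that ζ8 + ζ8^−1 equals √2 to floating-point accuracy; it is only an illustration, not part of any proof.

import cmath
import math

# zeta8 = e^(2*pi*i/8), a primitive 8th root of unity
zeta8 = cmath.exp(2j * math.pi / 8)

# zeta8 + zeta8^(-1) = 2*cos(pi/4) = sqrt(2): a real number written
# as a sum of roots of unity, as the Kronecker-Weber example promises
value = zeta8 + zeta8**-1
print(value.real, math.sqrt(2))            # both print 1.4142135623730951
assert abs(value - math.sqrt(2)) < 1e-12   # the imaginary parts cancel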
Kronecker–Weber theorem
[ "Mathematics" ]
461
[ "Theorems in algebraic number theory", "Theorems in number theory" ]
67,903
https://en.wikipedia.org/wiki/Squaring%20the%20square
Squaring the square is the problem of tiling an integral square using only other integral squares. (An integral square is a square whose sides have integer length.) The name was coined in a humorous analogy with squaring the circle. Squaring the square is an easy task unless additional conditions are set. The most studied restriction is that the squaring be perfect, meaning the sizes of the smaller squares are all different. A related problem is squaring the plane, which can be done even with the restriction that each natural number occurs exactly once as a size of a square in the tiling. The order of a squared square is its number of constituent squares. Perfect squared squares A "perfect" squared square is a square such that each of the smaller squares has a different size. Perfect squared squares were studied by R. L. Brooks, C. A. B. Smith, A. H. Stone and W. T. Tutte (writing under the collective pseudonym "Blanche Descartes") at Cambridge University between 1936 and 1938. They transformed the square tiling into an equivalent electrical circuit – they called it a "Smith diagram" – by considering the squares as resistors that connected to their neighbors at their top and bottom edges, and then applied Kirchhoff's circuit laws and circuit decomposition techniques to that circuit. The first perfect squared squares they found were of order 69. The first perfect squared square to be published, a compound one of side 4205 and order 55, was found by Roland Sprague in 1939. Martin Gardner published an extensive article written by W. T. Tutte about the early history of squaring the square in his Mathematical Games column of November 1958. Simple squared squares A "simple" squared square is one where no subset of more than one of the squares forms a rectangle or square. When a squared square has a square or rectangular subset, it is "compound". In 1978, A. J. W. Duijvestijn discovered a simple perfect squared square of side 112 with the smallest number of squares using a computer search. His tiling uses 21 squares, and has been proved to be minimal. This squared square forms the logo of the Trinity Mathematical Society. It also appears on the cover of the Journal of Combinatorial Theory. Duijvestijn also found two simple perfect squared squares of side 110, but each comprising 22 squares. Theophilus Harding Willcocks, an amateur mathematician and fairy chess composer, found another. In 1999, I. Gambini proved that these three are the smallest perfect squared squares in terms of side length. The perfect compound squared square with the fewest squares was discovered by T. H. Willcocks in 1946 and has 24 squares; however, it was not until 1982 that Duijvestijn, Pasquale Joseph Federico and P. Leeuw mathematically proved it to be the lowest-order example.
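Duijvestijn's order-21 tiling can be sanity-checked arithmetically: its 21 component squares must have distinct sizes and their areas must sum to 112². Below is a minimal Python sketch of that check; the size list is the one usually quoted in the literature for this tiling and should be treated as an assumption of this sketch.

# Side lengths commonly quoted for Duijvestijn's simple perfect
# squared square of order 21 and side 112.
sizes = [2, 4, 6, 7, 8, 9, 11, 15, 16, 17, 18, 19,
         24, 25, 27, 29, 33, 35, 37, 42, 50]
side = 112

assert len(sizes) == 21                           # order 21
assert len(set(sizes)) == 21                      # perfect: all sizes distinct
assert sum(s * s for s in sizes) == side * side   # areas cover 112^2 exactly
print("area check passed:", sum(s * s for s in sizes), "=", side * side)

Matching total area is only a necessary condition, of course; establishing that the 21 squares actually fit together was the substance of Duijvestijn's computer search.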
Mrs. Perkins's quilt When the constraint of all the squares being different sizes is relaxed, a squared square such that the side lengths of the smaller squares do not have a common divisor larger than 1 is called a "Mrs. Perkins's quilt". In other words, the greatest common divisor of all the smaller side lengths should be 1. The Mrs. Perkins's quilt problem asks for a Mrs. Perkins's quilt with the fewest pieces for a given n × n square. The number of pieces required grows logarithmically with n, bounded below and above by constant multiples of log n. Computer searches have found exact solutions for small values of n (small enough to need up to 18 pieces), and the minimum number of pieces has been tabulated for these cases. No more than two different sizes For any integer n other than 2, 3, and 5, it is possible to dissect a square into n squares of one or two different sizes. Squaring the plane In 1975, Solomon Golomb raised the question whether the whole plane can be tiled by squares, one of each integer edge-length, which he called the heterogeneous tiling conjecture. This problem was later publicized by Martin Gardner in his Scientific American column and appeared in several books, but it defied solution for over 30 years. In Tilings and Patterns, published in 1987, Branko Grünbaum and G. C. Shephard describe a way of tiling the plane by integral squares by recursively taking any perfect squared square and enlarging it so that the formerly smallest tile has the size of the original squared square, then replacing this tile with a copy of the original squared square. The recursive scaling process increases the sizes of the squares exponentially – skipping most integers – a feature which they note was true of all perfect integral tilings of the plane known at that time. In 2008 James Henle and Frederick Henle proved Golomb's heterogeneous tiling conjecture: there exists a tiling of the plane by squares, one of each integer size. Their proof is constructive and proceeds by "puffing up" an L-shaped region formed by two side-by-side and horizontally flush squares of different sizes to a perfect tiling of a larger rectangular region, then adjoining the square of the smallest size not yet used to get another, larger L-shaped region. The squares added during the puffing up procedure have sizes that have not yet appeared in the construction and the procedure is set up so that the resulting rectangular regions are expanding in all four directions, which leads to a tiling of the whole plane. Cubing the cube Cubing the cube is the analogue in three dimensions of squaring the square: that is, given a cube C, the problem of dividing it into finitely many smaller cubes, no two congruent. Unlike the case of squaring the square, a hard yet solvable problem, there is no perfect cubed cube and, more generally, no dissection of a rectangular cuboid C into a finite number of unequal cubes. To prove this, we start with the following claim: for any perfect dissection of a rectangle in squares, the smallest square in this dissection does not lie on an edge of the rectangle. Indeed, each corner square has a smaller adjacent edge square, and the smallest edge square is adjacent to smaller squares not on the edge. Now suppose that there is a perfect dissection of a rectangular cuboid in cubes. Make a face of C its horizontal base. The base is divided into a perfect squared rectangle R by the cubes which rest on it. The smallest square s1 in R is surrounded by larger, and therefore higher, cubes. Hence the upper face of the cube on s1 is divided into a perfect squared square by the cubes which rest on it. Let s2 be the smallest square in this dissection. By the claim above, this is surrounded on all 4 sides by squares which are larger than s2 and therefore higher. The sequence of squares s1, s2, ... is infinite and the corresponding cubes are infinite in number. This contradicts our original supposition. If a 4-dimensional hypercube could be perfectly hypercubed then its 'faces' would be perfect cubed cubes; this is impossible. Similarly, there is no solution for all cubes of higher dimensions. 
See also Square packing in a square Dividing a square into similar rectangles References External links Perfect squared squares: Eindhoven University of Technology, Faculty of Mathematics and Computing Science http://www.squaring.net/ http://www.maa.org/editorial/mathgames/mathgames_12_01_03.html http://www.math.uwaterloo.ca/navigation/ideas/articles/honsberger2/index.shtml https://web.archive.org/web/20030419012114/http://www.math.niu.edu/~rusin/known-math/98/square_dissect Nowhere-neat squared squares: http://karlscherer.com/ Mrs. Perkins's quilt: Mrs. Perkins's Quilt on MathWorld Discrete geometry Mathematical problems Recreational mathematics Rectangular subdivisions
Squaring the square
[ "Physics", "Mathematics" ]
1,680
[ "Discrete mathematics", "Tessellation", "Recreational mathematics", "Discrete geometry", "Rectangular subdivisions", "Mathematical problems", "Symmetry" ]
67,911
https://en.wikipedia.org/wiki/Busy%20beaver
In theoretical computer science, the busy beaver game aims to find a terminating program of a given size that (depending on definition) either produces the most output possible, or runs for the longest number of steps. Since an endlessly looping program producing infinite output or running for infinite time is easily conceived, such programs are excluded from the game. Rather than traditional programming languages, the programs used in the game are n-state Turing machines, one of the first mathematical models of computation. Turing machines consist of an infinite tape and a finite set of states which serve as the program's "source code". Producing the most output is defined as writing the largest number of 1s on the tape, also referred to as achieving the highest score, and running for the longest time is defined as taking the greatest number of steps to halt. The n-state busy beaver game consists of finding the longest-running or highest-scoring Turing machine which has n states and eventually halts. Such machines are assumed to start on a blank tape, and the tape is assumed to contain only zeros and ones (a binary Turing machine). The objective of the game is to program a set of transitions between states aiming for the highest score or longest running time while making sure the machine will halt eventually. An nth busy beaver, BB-n, or simply "busy beaver", is a Turing machine that wins the n-state busy beaver game. Depending on definition, it either attains the highest score, or runs for the longest time, among all other possible n-state competing Turing machines. The functions determining the highest score or longest running time of the n-state busy beavers by each definition are Σ(n) and S(n) respectively. Deciding the running time or score of the nth busy beaver is incomputable. In fact, both the functions Σ(n) and S(n) eventually become larger than any computable function. This has implications in computability theory, the halting problem, and complexity theory. The concept of a busy beaver was first introduced by Tibor Radó in his 1962 paper, "On Non-Computable Functions". One of the most interesting aspects of the busy beaver game is that, if it were possible to compute the functions Σ(n) and S(n) for all n, then this would resolve all mathematical conjectures which can be encoded in the form "does <this Turing machine> halt". For example, a 27-state Turing machine could check Goldbach's conjecture for each number and halt on a counterexample: if this machine had not halted after running for S(27) steps, then it must run forever, resolving the conjecture. Many other problems, including the Riemann hypothesis (744 states) and the consistency of ZF set theory (745 states), can be expressed in a similar form, where at most a countably infinite number of cases need to be checked. Technical definition The n-state busy beaver game (or BB-n game), introduced in Tibor Radó's 1962 paper, involves a class of Turing machines, each member of which is required to meet the following design specifications: The machine has n "operational" states plus a Halt state, where n is a positive integer, and one of the n states is distinguished as the starting state. (Typically, the states are labelled by 1, 2, ..., n, with state 1 as the starting state, or by A, B, C, ..., with state A as the starting state.) The machine uses a single two-way infinite (or unbounded) tape. The tape alphabet is {0, 1}, with 0 serving as the blank symbol.
The machine's transition function takes two inputs: the current non-Halt state, the symbol in the current tape cell, and produces three outputs: a symbol to write over the symbol in the current tape cell (it may be the same symbol as the symbol overwritten), a direction to move (left or right; that is, shift to the tape cell one place to the left or right of the current cell), and a state to transition into (which may be the Halt state). "Running" the machine consists of starting in the starting state, with the current tape cell being any cell of a blank (all-0) tape, and then iterating the transition function until the Halt state is entered (if ever). If, and only if, the machine eventually halts, then the number of 1s finally remaining on the tape is called the machine's score. The n-state busy beaver (BB-n) game is therefore a contest, depending on definition, to find an n-state Turing machine having the largest possible score or running time. Example The rules for one 1-state Turing machine might be: In state 1, if the current symbol is 0, write a 1, move one space to the right, and transition to state 1 In state 1, if the current symbol is 1, write a 0, move one space to the right, and transition to HALT This Turing machine would move to the right, swapping the value of all the bits it passes. Since the starting tape is all 0s, it would make an unending string of ones. This machine would not be a busy beaver contender because it runs forever on a blank tape. Functions In his original 1962 paper, Radó defined two functions related to the busy beaver game: the score function Σ(n) and the shifts function S(n). Both take a number of Turing machine states and output the maximum score attainable by a Turing machine of that number of states by some measure. The score function Σ(n) gives the maximum number of 1s an n-state Turing machine can output before halting, while the shifts function S(n) gives the maximum number of shifts (or equivalently steps, because each step includes a shift) that an n-state Turing machine can undergo before halting. He proved that both of these functions were noncomputable, because they each grew faster than any computable function. The function BB(n) has been defined to be either of these functions, so that notation is not used in this article. A number of other uncomputable functions can also be defined based on measuring the performance of Turing machines in other ways than time or maximal number of ones. For example: The function num(n) is defined to be the maximum number of contiguous ones a halting Turing machine can write on a blank tape. In other words, this is the largest unary number a Turing machine of n states can write on a tape. The function space(n) is defined to be the maximal number of tape squares a halting Turing machine can read (i.e., visit) before halting. This includes the starting square, but not a square that the machine only reaches after the halt transition (if the halt transition is annotated with a move direction), because that square does not influence the machine's behaviour. This is the maximal space complexity of an n-state Turing machine. These four functions together stand in the relation S(n) ≥ space(n) ≥ Σ(n) ≥ num(n). More functions can also be defined by operating the game on different computing machines, such as 3-symbol Turing machines, non-deterministic Turing machines, the lambda calculus or even arbitrary programming languages.
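To make these definitions concrete, here is a minimal Python sketch that simulates the 2-state busy beaver tabulated later in this article and reports its shift count and score (6 steps, four 1s). The dictionary encoding of the transition table is an implementation choice of this sketch, not Radó's notation.

# Transition table: (state, symbol) -> (symbol to write, move, next state).
# This is the 2-state busy beaver given later in the article.
bb2 = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}

def run(machine, start='A', halt='H', max_steps=10**6):
    tape, head, state, steps = {}, 0, start, 0
    while state != halt and steps < max_steps:
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write   # write the new symbol
        head += move         # shift left (-1) or right (+1)
        steps += 1
    score = sum(tape.values())   # number of 1s left on the tape
    return steps, score

print(run(bb2))   # -> (6, 4): six shifts, four 1s

Enumerating all such transition tables and taking maxima over the halting machines is, in principle, how the small exact values of Σ(n) and S(n) cited below were established.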
Score function Σ The score function quantifies the maximum score attainable by a busy beaver on a given measure. It is a noncomputable function, because it grows asymptotically faster than any computable function. The score function, Σ, is defined so that Σ(n) is the maximum attainable score (the maximum number of 1s finally on the tape) among all halting 2-symbol n-state Turing machines of the above-described type, when started on a blank tape. It is clear that Σ is a well-defined function: for every n, there are at most finitely many n-state Turing machines as above, up to isomorphism, hence at most finitely many possible running times. According to the score-based definition, any n-state 2-symbol Turing machine M for which σ(M) = Σ(n) (i.e., which attains the maximum score) is called a busy beaver. For each n, there exist at least 4(n − 1)! n-state busy beavers. (Given any n-state busy beaver, another is obtained by merely changing the shift direction in a halting transition, a third by reversing all shift directions uniformly, and a fourth by reversing the halt direction of the all-swapped busy beaver. Furthermore, a permutation of all states except Start and Halt produces a machine that attains the same score. Theoretically, there could be more than one kind of transition leading to the halting state, but in practice it would be wasteful, because there is only one sequence of state transitions producing the sought-after result.) Non-computability Radó's 1962 paper proved that if f is any computable function, then Σ(n) > f(n) for all sufficiently large n, and hence that Σ is not a computable function. Moreover, this implies that it is undecidable by a general algorithm whether an arbitrary Turing machine is a busy beaver. (Such an algorithm cannot exist, because its existence would allow Σ to be computed, which is a proven impossibility. In particular, such an algorithm could be used to construct another algorithm that would compute Σ as follows: for any given n, each of the finitely many n-state 2-symbol Turing machines would be tested until an n-state busy beaver is found; this busy beaver machine would then be simulated to determine its score, which is by definition Σ(n).) Even though Σ(n) is an uncomputable function, there are some small n for which it is possible to obtain its values and prove that they are correct. It is not hard to show that Σ(0) = 0, Σ(1) = 1, Σ(2) = 4, and with progressively more difficulty it can be shown that Σ(3) = 6, Σ(4) = 13 and Σ(5) = 4098. Σ(n) has not yet been determined for any instance of n > 5, although lower bounds have been established (see the Known values section below). Complexity and unprovability of Σ A variant of Kolmogorov complexity is defined as follows: The complexity of a number n is the smallest number of states needed for a BB-class Turing machine that halts with a single block of n consecutive 1s on an initially blank tape. The corresponding variant of Chaitin's incompleteness theorem states that, in the context of a given axiomatic system for the natural numbers, there exists a number k such that no specific number can be proven to have complexity greater than k, and hence that no specific upper bound can be proven for Σ(k) (the latter is because "the complexity of n is greater than k" would be proven if "n > Σ(k)" were proven). As mentioned in the cited reference, for any axiomatic system of "ordinary mathematics" the least value k for which this is true is far less than 10⇈10; consequently, in the context of ordinary mathematics, neither the value nor any upper bound of Σ(10⇈10) can be proven. 
(Gödel's first incompleteness theorem is illustrated by this result: in an axiomatic system of ordinary mathematics, there are true but unprovable sentences bounding particular values of Σ, and indeed infinitely many such sentences.) Maximum shifts function S In addition to the function Σ, Radó [1962] introduced another extreme function for Turing machines, the maximum shifts function, S, defined as follows: s(M) = the number of shifts M makes before halting, for any halting Turing machine M, and S(n) = the largest number of shifts made by any halting n-state 2-symbol Turing machine. Because normal Turing machines are required to have a shift in each and every transition or "step" (including any transition to a Halt state), the max-shifts function is at the same time a max-steps function. Radó showed that S is noncomputable for the same reason that Σ is noncomputable — it grows faster than any computable function. He proved this simply by noting that for each n, S(n) ≥ Σ(n). Each shift may write a 0 or a 1 on the tape, while Σ counts a subset of the shifts that wrote a 1, namely the ones that hadn't been overwritten by the time the Turing machine halted; consequently, S grows at least as fast as Σ, which had already been proved to grow faster than any computable function. The following connection between Σ and S was used by Lin & Radó [Computer Studies of Turing Machine Problems, 1965] to prove that Σ(3) = 6 and that S(3) = 21: For a given n, if S(n) is known then all n-state Turing machines can (in principle) be run for up to S(n) steps, at which point any machine that hasn't yet halted will never halt. At that point, by observing which machines have halted with the most 1s on the tape (i.e., the busy beavers), one obtains from their tapes the value of Σ(n). The approach used by Lin & Radó for the case of n = 3 was to conjecture that S(3) = 21 (after unsuccessfully conjecturing 18), then to simulate all the essentially different 3-state machines (82,944 machines) for up to 21 steps. They found 26,073 machines that halted, including one that halted only after 21 steps. By analyzing the behavior of the machines that had not halted within 21 steps, they succeeded in showing that none of those machines would ever halt, most of them following a certain pattern. This proved the conjecture that S(3) = 21, and also determined that Σ(3) = 6, which was attained by several machines, all halting after 11 to 14 steps. In 2016, Adam Yedidia and Scott Aaronson obtained the first (explicit) upper bound on the minimum n for which S(n) is unprovable in ZFC. To do so they constructed a 7910-state Turing machine whose behavior cannot be proven based on the usual axioms of set theory (Zermelo–Fraenkel set theory with the axiom of choice), under reasonable consistency hypotheses (stationary Ramsey property). Stefan O'Rear then reduced it to 1919 states, with the dependency on the stationary Ramsey property eliminated, and later to 748 states. In July 2023, Riebel reduced it to 745 states. Proof for uncomputability of S(n) and Σ(n) Suppose that S(n) is a computable function and let EvalS denote a TM evaluating S(n). Given a tape with n 1s it will produce S(n) 1s on the tape and then halt. Let Clean denote a Turing machine cleaning the sequence of 1s initially written on the tape. Let Double denote a Turing machine evaluating the function n + n. Given a tape with n 1s it will produce 2n 1s on the tape and then halt. Let us create the composition Double | EvalS | Clean and let n0 be the number of states of this machine. 
Let Create_n0 denote a Turing machine creating n0 1s on an initially blank tape. This machine may be constructed in a trivial manner to have n0 states (the state i writes 1, moves the head right and switches to state i + 1, except the state n0, which halts). Let N denote the sum n0 + n0. Let BadS denote the composition Create_n0 | Double | EvalS | Clean. Notice that this machine has N states. Starting with an initially blank tape, it first creates a sequence of n0 1s and then doubles it, producing a sequence of N 1s. Then BadS will produce S(N) 1s on the tape, and at last it will clear all 1s and then halt. But the phase of cleaning will continue at least S(N) steps, so the running time of BadS is strictly greater than S(N), which contradicts the definition of the function S(n). The uncomputability of Σ(n) may be proved in a similar way. In the above proof, one must exchange the machine EvalS with EvalΣ and Clean with Increment — a simple TM searching for the first 0 on the tape and replacing it with 1. The uncomputability of S(n) can also be established by reference to the blank tape halting problem. The blank tape halting problem is the problem of deciding for any Turing machine whether or not it will halt when started on an empty tape. The blank tape halting problem is equivalent to the standard halting problem and so it is also uncomputable. If S(n) were computable, then we could solve the blank tape halting problem simply by running any given Turing machine with n states for S(n) steps; if it has still not halted, it never will. So, since the blank tape halting problem is not computable, it follows that S(n) must likewise be uncomputable. Uncomputability of space(n) and num(n) Both the space(n) and num(n) functions are uncomputable. This can be shown for space(n) by noting that every tape square a Turing machine writes a one to, it must also visit; in other words, Σ(n) ≤ space(n). The num(n) function can be shown to be incomputable by proving, for example, that num(3n + 3) ≥ space(n): this can be done by designing a (3n + 3)-state Turing machine which simulates the n-state space champion, and then uses it to write at least space(n) contiguous ones to the tape. Generalizations Analogs of the shift function can be simply defined in any programming language, given that the programs can be described by bit-strings, and a program's number of steps can be counted. For example, the busy beaver game can also be generalized to two dimensions using Turing machines on two-dimensional tapes, or to Turing machines that are allowed to stay in the same place as well as move to the left and right. Alternatively, a "busy beaver function" for diverse models of computation can be defined with Kolmogorov complexity. This is done by taking BB(n) to be the largest integer m such that K(m) ≤ n, where K(m) is the length of the shortest program in the chosen model that outputs m: BB(n) is thereby the largest integer a program with length n or less can output in that model. The longest running 6-state, 2-symbol machine which has the additional property of reversing the tape value at each step produces a very large number of 1s after a correspondingly large number of steps, yielding lower bounds on the analogous functions SRTM(6) and ΣRTM(6) for the Reversal Turing Machine (RTM) class. Likewise we could define an analog to the Σ function for register machines as the largest number which can be present in any register on halting, for a given number of instructions. Different numbers of symbols A simple generalization is the extension to Turing machines with m symbols instead of just 2 (0 and 1). For example, a ternary Turing machine with m = 3 symbols would have the symbols 0, 1, and 2. 
The generalization to Turing machines with n states and m symbols defines the following generalized busy beaver functions: Σ(n, m): the largest number of non-zeros printable by an n-state, m-symbol machine started on an initially blank tape before halting, and S(n, m): the largest number of steps taken by an n-state, m-symbol machine started on an initially blank tape before halting. For example, the longest-running 3-state 3-symbol machine found so far runs 119,112,334,170,342,540 steps before halting. Nondeterministic Turing machines The problem can be extended to nondeterministic Turing machines by looking for the system with the most states across all branches or the branch with the largest number of steps. The question of whether a given NDTM will halt is still computationally irreducible, and the computation required to find an NDTM busy beaver is significantly greater than the deterministic case, since there are multiple branches that need to be considered. For a 2-state, 2-color system with p cases or rules, the maximum number of steps before halting and the maximum number of unique states created by the NDTM have been tabulated. Applications Open mathematical problems In addition to posing a rather challenging mathematical game, the busy beaver functions Σ(n) and S(n) offer an entirely new approach to solving pure mathematics problems. Many open problems in mathematics could in theory, but not in practice, be solved in a systematic way given the value of S(n) for a sufficiently large n. Theoretically speaking, the value of S(n) encodes the answer to all mathematical conjectures that can be checked in infinite time by a Turing machine with less than or equal to n states. Consider any conjecture that could be disproven via a counterexample among a countable number of cases (e.g. Goldbach's conjecture). Write a computer program that sequentially tests this conjecture for increasing values. In the case of Goldbach's conjecture, we would consider every even number ≥ 4 sequentially and test whether or not it is the sum of two prime numbers (a concrete sketch of such a checker appears below). Suppose this program is simulated on an n-state Turing machine. If it finds a counterexample (an even number ≥ 4 that is not the sum of two primes in our example), it halts and indicates that. However, if the conjecture is true, then our program will never halt. (This program halts only if it finds a counterexample.) Now, this program is simulated by an n-state Turing machine, so if we know S(n) we can decide (in a finite amount of time) whether or not it will ever halt by simply running the machine that many steps. And if, after S(n) steps, the machine does not halt, we know that it never will and thus that there are no counterexamples to the given conjecture (i.e., no even numbers that are not the sum of two primes). This would prove the conjecture to be true. Thus specific values (or upper bounds) for S(n) could be, in theory, used to systematically solve many open problems in mathematics. However, current results on the busy beaver problem suggest that this will not be practical for two reasons: It is extremely hard to prove values for the busy beaver function (and the max shift function). Every known exact value of S(n) was proven by enumerating every n-state Turing machine and proving whether or not each halts. One would have to calculate S(n) by some less direct method for it to actually be useful. The values of S(n) and the other busy beaver functions get very large, very quickly. While the value of S(5) is only around 47 million, the value of S(6) is more than 10⇈15, which is equal to 10^10^...^10 with a stack of 15 tens. This number has 10⇈14 digits and is unreasonable to use in a computation. The value of S(27), which is the number of steps the current program for the Goldbach conjecture would need to be run to give a conclusive answer, is incomprehensibly huge, and not remotely possible to write down, much less run a machine for, in the observable universe.
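The counterexample-searching program described above can be sketched directly. This minimal Python version (the trial-division primality test and loop structure are illustrative choices of this sketch, not the 27-state machine itself) halts only if it finds a Goldbach counterexample:

def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample_search():
    n = 4
    while True:
        # Does some prime p <= n/2 have a prime complement n - p?
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1)):
            return n      # halt: counterexample found
        n += 2            # otherwise test the next even number, forever

# If Goldbach's conjecture is true, this call never returns:
# goldbach_counterexample_search()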
Consistency of theories Another property of S(n) is that no arithmetically sound, computably axiomatized theory can prove all of the function's values. Specifically, given a computable and arithmetically sound theory T, there is a number nT such that for all n ≥ nT, no statement of the form "S(n) = k" can be proved in T. This implies that for each theory there is a specific largest value of S(n) that it can prove. This is true because for every such theory T, a Turing machine with nT states can be designed to enumerate every possible proof in T. If the theory is inconsistent, then all false statements are provable, and the Turing machine can be given the condition to halt if and only if it finds a proof of, for example, "0 = 1". Any theory that proves the value of S(nT) proves its own consistency, violating Gödel's second incompleteness theorem. This can be used to place various theories on a scale, for example the various large cardinal axioms in ZFC: if each theory T is assigned such a number nT, theories with larger values of nT prove the consistency of those below them, placing all such theories on a countably infinite scale. Notable examples A 745-state binary Turing machine has been constructed that halts if and only if ZFC is inconsistent. A 744-state Turing machine has been constructed that halts if, and only if, the Riemann hypothesis is false. A 43-state Turing machine was constructed that halts if, and only if, Goldbach's conjecture is false. This was further reduced to a 25-state machine, and later formally proved and verified in the Lean 4 theorem proving language. A 15-state Turing machine has been constructed that halts if and only if the following conjecture formulated by Paul Erdős in 1979 is false: for all n > 8 there is at least one digit 2 in the base 3 representation of 2^n. Universal Turing machines Exploring the relationship between computational universality and the dynamic behavior of Busy Beaver Turing machines, a conjecture was proposed in 2012 suggesting that Busy Beaver machines were natural candidates for Turing universality, as they display complex characteristics, known for (1) their maximal computational complexity within size constraints, (2) their ability to perform non-trivial calculations before halting, and (3) the difficulty in finding and proving these machines; these features suggest that Busy Beaver machines possess the necessary complexity for universality. Known results Lower bounds Green machines In 1964 Milton Green developed a lower bound for the 1s-counting variant of the Busy Beaver function that was published in the proceedings of the 1964 IEEE symposium on switching circuit theory and logical design. Heiner Marxen and Jürgen Buntrock described it as "a non-trivial (not primitive recursive) lower bound". This lower bound can be calculated but is too complex to state as a single expression in terms of n. This was done with a set of Turing machines, each of which demonstrated the lower bound for a certain n. When n = 8 the method gives Σ(8) ≥ 3 × (7 × 3^92 − 1) / 2 ≈ 8.248 × 10^44. 
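Taking the n = 8 bound as stated above (the closed form is quoted from the literature and should be treated as an assumption here), it can be evaluated exactly with Python's arbitrary-precision integers; this two-line sketch merely confirms the quoted order of magnitude.

# Green's lower bound for Sigma(8), as quoted above: 3*(7*3^92 - 1)/2
bound = 3 * (7 * 3**92 - 1) // 2
print(len(str(bound)), "digits:", f"{bound:.3e}")  # 45 digits, about 8.248e44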
In contrast, the best current (as of 2024) lower bound on Σ(6) is 10⇈15, where ⇈ denotes Knuth's up-arrow notation: this represents 10^10^...^10, an exponentiated chain of 15 tens. The value of Σ(6) is probably much larger still than that. Specifically, the lower bound was shown with a series of recursive Turing machines, each of which was made of a smaller one with two additional states that repeatedly applied the smaller machine to the input tape. Defining the value of the N-state busy-beaver competitor on a tape containing k ones to be B_N(k) (the ultimate output of each machine being its value on a blank tape, which contains 0 ones), the construction yields recursion relations connecting B_N to B_(N−2). These lead to two formulas, one for odd and one for even N, for calculating the lower bound given by the Nth machine, and the resulting lower bound can also be related to the Ackermann function. Relationships between busy beaver functions Trivially, S(n) ≥ Σ(n) because a machine that writes Σ(n) ones must take at least Σ(n) steps to do so. It is possible to give a number of upper bounds on the time S(n) in terms of the number of ones Σ(n); such bounds have been given by Rado, by Buro, and by Julstrom and Zwick. By defining num(n) to be the maximum number of ones an n-state Turing machine is allowed to output contiguously, rather than in any position (the largest unary number it can output), it is possible to bound S(n) in terms of num evaluated at a slightly larger number of states (Ben-Amram, et al., 1996). Ben-Amram and Petersen, 2002, also give an asymptotically improved bound of this kind, valid for all n beyond some constant. Exact values and lower and upper bounds The following table lists the exact values and some known lower bounds for S(n), Σ(n), and several other busy beaver functions. In this table, 2-symbol Turing machines are used. Entries listed as "?" are at least as large as other entries to the left (because all n-state machines are also (n+1)-state machines), and no larger than entries above them (because S(n) ≥ space(n) ≥ Σ(n) ≥ num(n)). So, space(6) is known to be greater than 10⇈15, as space(n) ≥ Σ(n) and Σ(6) > 10⇈15. 47,176,870 is an upper bound for space(5), because S(5) = 47,176,870 and S(n) ≥ space(n). 4098 is an upper bound for num(5), because Σ(5) = 4098 and Σ(n) ≥ num(n). The last entry listed as "?" is num(6), because Σ(6) > 10⇈15, but Σ(n) ≥ num(n). The 5-state busy beaver was discovered by Heiner Marxen and Jürgen Buntrock in 1989, but only proved to be the winning fifth busy beaver — stylized as BB(5) — in 2024 using a proof in Coq. List of busy beavers These are tables of rules for Turing machines that generate Σ(1) and S(1), Σ(2) and S(2), Σ(3) (but not S(3)), Σ(4) and S(4), Σ(5) and S(5), and the best known lower bound for Σ(6) and S(6). In the tables, columns represent the current state and rows represent the current symbol read from the tape. Each table entry is a string of three characters, indicating the symbol to write onto the tape, the direction to move, and the new state (in that order). The halt state is shown as H. Each machine begins in state A with an infinite tape that contains all 0s. Thus, the initial symbol read from the tape is a 0. In the results below, the tape is shown as it stands when the machine halts. {| class="wikitable" |+ 1-state, 2-symbol busy beaver ! width="20px" | ! A |- ! 0 | 1RH |- ! 1 | (not used) |} Result: 0 0 1 0 0 (1 step, one "1" total) {| class="wikitable" |+ 2-state, 2-symbol busy beaver ! width="20px" | ! A ! B |- ! 0 | 1RB | 1LA |- ! 1 | 1LB | 1RH |} Result: 0 0 1 1 1 1 0 0 (6 steps, four "1"s total) {| class="wikitable" |+ 3-state, 2-symbol busy beaver ! width="20px" | !
A ! B ! C |- ! 0 | 1RB | 0RC | 1LC |- ! 1 | 1RH | 1RB | 1LA |} Result: 0 0 1 1 1 1 1 1 0 0 (14 steps, six "1"s total). This is one of several nonequivalent machines giving six 1s. Unlike the previous machines, this one is a busy beaver for Σ, but not for S. (S(3) = 21, and the 21-step machine attains only five 1s.) {| class="wikitable" |+ 4-state, 2-symbol busy beaver ! width="20px" | ! A ! B ! C ! D |- ! 0 | 1RB | 1LA | 1RH | 1RD |- ! 1 | 1LB | 0LC | 1LD | 0RA |} Result: 0 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 (107 steps, thirteen "1"s total) {| class="wikitable" |+ 5-state, 2-symbol busy beaver ! width="20px" | ! A ! B ! C ! D ! E |- ! 0 | 1RB | 1RC | 1RD | 1LA | 1RH |- ! 1 | 1LC | 1RB | 0LE | 1LD | 0LA |} Result: 4098 "1"s with 8191 "0"s interspersed in 47,176,870 steps. Qualitatively, the evolution of this solution resembles that of some cellular automata. {| class="wikitable" |+ current 6-state, 2-symbol best contender ! width="20px" | ! A ! B ! C ! D ! E ! F |- ! 0 | 1RB | 1RC | 1LC | 0LE | 1LF | 0RC |- ! 1 | 0LD | 0RF | 1LA | 1RH | 0RB | 0RE |} Result: 1 1 1 1 ... 1 1 1 ("10" followed by more than 10↑↑15 contiguous "1"s in more than 10↑↑15 steps, where 10↑↑15 = 10^10^...^10, an exponential tower of 15 tens). Visualizations The rules for each busy beaver (maximizing Σ) can be represented visually, with orange squares corresponding to a "1" on the tape, and white corresponding to "0". The position of the head is indicated by a black ovoid, with the orientation of the head representing the state. Individual tapes are laid out horizontally, with time progressing from top to bottom. The halt state is represented by a rule which maps one state to itself (head doesn't move). See also Rayo's number Turmite Notes References This is where Radó first defined the busy beaver problem and proved that it was uncomputable and grew faster than any computable function. The results of this paper had already appeared in part in Lin's 1963 doctoral dissertation, under Radó's guidance. Lin & Radó prove that Σ(3) = 6 and S(3) = 21 by proving that all 3-state 2-symbol Turing Machines which don't halt within 21 steps will never halt. (Most are proven automatically by a computer program; however, 40 are proven by human inspection.) Brady proves that Σ(4) = 13 and S(4) = 107. Brady defines two new categories for non-halting 3-state 2-symbol Turing Machines: Christmas Trees and Counters. He uses a computer program to prove that all but 27 machines which run over 107 steps are variants of Christmas Trees and Counters which can be proven to run infinitely. The last 27 machines (referred to as holdouts) are proven by personal inspection by Brady himself not to halt. Machlin and Stout describe the busy beaver problem and many techniques used for finding busy beavers (which they apply to Turing Machines with 4 states and 2 symbols, thus verifying Brady's proof). They suggest how to estimate a variant of Chaitin's halting probability (Ω). Marxen and Buntrock demonstrate that Σ(5) ≥ 4098 and S(5) ≥ 47,176,870 and describe in detail the method they used to find these machines and prove many others will never halt. Green recursively constructs machines for any number of states and provides the recursive function that computes their score (computes σ), thus providing a lower bound for Σ. This function's growth is comparable to that of Ackermann's function. Busy beaver programs are described by Alexander Dewdney in Scientific American, August 1984, pages 19–23, also March 1985 p. 23 and April 1985 p. 30. 
Wherein Brady (of 4-state fame) describes some history of the beast and calls its pursuit "The Busy Beaver Game". He describes other games (e.g. cellular automata and Conway's Game of Life). Of particular interest is "The Busy Beaver Game in Two Dimensions" (p. 247). With 19 references. Cf. Chapter 9, Turing Machines. A difficult book, meant for electrical engineers and technical specialists. Discusses recursion, partial-recursion with reference to Turing Machines, halting problem. A reference in Booth attributes the busy beaver to Rado. Booth also defines Rado's busy beaver problem in "home problems" 3, 4, 5, 6 of Chapter 9, p. 396. Problem 3 is to "show that the busy beaver problem is unsolvable... for all values of n." Bounds between functions Σ and S. Improved bounds. This article contains a complete classification of the 2-state, 3-symbol Turing machines, and thus a proof for the (2, 3) busy beaver: Σ(2, 3) = 9 and S(2, 3) = 38. This is a description of the ideas, the algorithms and their implementation, and of the experiments examining 5-state and 6-state Turing machines by parallel runs on 31 four-core computers, and finally the best results for the 6-state TM. External links The page of Heiner Marxen, who, with Jürgen Buntrock, found the above-mentioned records for 5- and 6-state Turing machines. Pascal Michel's Historical survey of busy beaver results which also contains best results and some analysis. Definition of the class RTM - Reversal Turing Machines, a simple and strong subclass of the TMs. "The Busy Beaver Problem: A New Millennium Attack" (archived) at the Rensselaer RAIR Lab. This effort found several new records and established several values for the quadruple formalization. Daniel Briggs' website archive and forum for solving the 5-state, 2-symbol busy beaver problem, based on Skelet (Georgi Georgiev) nonregular machines list. Aaronson, Scott (1999), Who can name the bigger number? Busy Beaver Turing Machines - Computerphile, YouTube Pascal Michel. The Busy Beaver Competition: a historical survey. 70 pages. 2017. <hal-00396880v5> Computability theory Theory of computation Large integers Metaphors referring to animals
Busy beaver
[ "Mathematics" ]
8,093
[ "Computability theory", "Mathematical logic" ]
67,958
https://en.wikipedia.org/wiki/Copernicium
Copernicium is a synthetic chemical element; it has symbol Cn and atomic number 112. Its known isotopes are extremely radioactive, and have only been created in a laboratory. The most stable known isotope, copernicium-285, has a half-life of approximately 30 seconds. Copernicium was first created in February 1996 by the GSI Helmholtz Centre for Heavy Ion Research near Darmstadt, Germany. It was named after the astronomer Nicolaus Copernicus on the 537th anniversary of his birth. In the periodic table of the elements, copernicium is a d-block transactinide element and a group 12 element. During reactions with gold, it has been shown to be an extremely volatile element, so much so that it is possibly a gas or a volatile liquid at standard temperature and pressure. Copernicium is calculated to have several properties that differ from its lighter homologues in group 12, zinc, cadmium and mercury; due to relativistic effects, it may give up its 6d electrons instead of its 7s ones, and it may have more similarities to the noble gases such as radon rather than its group 12 homologues. Calculations indicate that copernicium may show the oxidation state +4, while mercury shows it in only one compound of disputed existence and zinc and cadmium do not show it at all. It has also been predicted to be more difficult to oxidize copernicium from its neutral state than the other group 12 elements. Predictions vary on whether solid copernicium would be a metal, semiconductor, or insulator. Copernicium is one of the heaviest elements whose chemical properties have been experimentally investigated. History Discovery Copernicium was first created on February 9, 1996, at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany, by Sigurd Hofmann, Victor Ninov et al. This element was created by firing accelerated zinc-70 nuclei at a target made of lead-208 nuclei in a heavy ion accelerator. A single atom of copernicium was produced with a mass number of 277. (A second was originally reported, but was found to have been based on data fabricated by Ninov, and was thus retracted.) 208Pb + 70Zn → 278Cn* → 277Cn + n In May 2000, the GSI successfully repeated the experiment to synthesize a further atom of copernicium-277. This reaction was repeated at RIKEN using the Search for a Super-Heavy Element Using a Gas-Filled Recoil Separator set-up in 2004 and 2013 to synthesize three further atoms and confirm the decay data reported by the GSI team. This reaction had also previously been tried in 1971 at the Joint Institute for Nuclear Research in Dubna, Russia to aim for 276Cn (produced in the 2n channel), but without success. The IUPAC/IUPAP Joint Working Party (JWP) assessed the claim of copernicium's discovery by the GSI team in 2001 and 2003. In both cases, they found that there was insufficient evidence to support the claim. This was primarily related to the contradicting decay data for the known nuclide rutherfordium-261. However, between 2001 and 2005, the GSI team studied the reaction 248Cm(26Mg,5n)269Hs, and were able to confirm the decay data for hassium-269 and rutherfordium-261. It was found that the existing data on rutherfordium-261 was for an isomer, now designated rutherfordium-261m. In May 2009, the JWP reported on the claims of discovery of element 112 again and officially recognized the GSI team as the discoverers of element 112. This decision was based on the confirmation of the decay properties of daughter nuclei as well as the confirmatory experiments at RIKEN. 
Work had also been done at the Joint Institute for Nuclear Research in Dubna, Russia from 1998 to synthesise the heavier isotope 283Cn in the hot fusion reaction 238U(48Ca,3n)283Cn; most observed atoms of 283Cn decayed by spontaneous fission, although an alpha decay branch to 279Ds was detected. While initial experiments aimed to assign the produced nuclide with its observed long half-life of 3 minutes based on its chemical behaviour, this was found to be not mercury-like as would have been expected (copernicium being under mercury in the periodic table), and indeed now it appears that the long-lived activity might not have been from 283Cn at all, but its electron capture daughter 283Rg instead, with a shorter 4-second half-life associated with 283Cn. (Another possibility is assignment to a metastable isomeric state, 283mCn.) While later cross-bombardments in the 242Pu+48Ca and 245Cm+48Ca reactions succeeded in confirming the properties of 283Cn and its parents 287Fl and 291Lv, and played a major role in the acceptance of the discoveries of flerovium and livermorium (elements 114 and 116) by the JWP in 2011, this work originated subsequent to the GSI's work on 277Cn and priority was assigned to the GSI. Naming Using Mendeleev's nomenclature for unnamed and undiscovered elements, copernicium should be known as eka-mercury. In 1979, IUPAC published recommendations according to which the element was to be called ununbium (with the corresponding symbol of Uub), a systematic element name as a placeholder, until the element was discovered (and the discovery then confirmed) and a permanent name was decided on. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who either called it "element 112", with the symbol of E112, (112), or even simply 112. After acknowledging the GSI team's discovery, the IUPAC asked them to suggest a permanent name for element 112. On 14 July 2009, they proposed copernicium with the element symbol Cp, after Nicolaus Copernicus "to honor an outstanding scientist, who changed our view of the world". During the standard six-month discussion period among the scientific community about the naming, it was pointed out that the symbol Cp was previously associated with the name cassiopeium (cassiopium), now known as lutetium (Lu). Moreover, Cp is frequently used today to mean the cyclopentadienyl ligand (C5H5). Primarily because cassiopeium (Cp) was (until 1949) accepted by IUPAC as an alternative allowed name for lutetium, the IUPAC disallowed the use of Cp as a future symbol, prompting the GSI team to put forward the symbol Cn as an alternative. On 19 February 2010, the 537th anniversary of Copernicus' birth, IUPAC officially accepted the proposed name and symbol. Isotopes Copernicium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Eight different isotopes have been reported with mass numbers 277 and 280–286, and one unconfirmed metastable isomer in 285Cn has been reported. Most of these decay predominantly through alpha decay, but some undergo spontaneous fission, and copernicium-283 may have an electron capture branch. The isotope copernicium-283 was instrumental in the confirmation of the discoveries of the elements flerovium and livermorium. 
Half-lives All confirmed copernicium isotopes are extremely unstable and radioactive; in general, heavier isotopes are more stable than the lighter ones, and isotopes with an odd neutron number have relatively longer half-lives due to additional hindrance against spontaneous fission. The most stable known isotope, 285Cn, has a half-life of 30 seconds; 283Cn has a half-life of 4 seconds, and the unconfirmed 285mCn and 286Cn have half-lives of about 15 and 8.45 seconds respectively. Other isotopes have half-lives shorter than one second. 281Cn and 284Cn both have half-lives on the order of 0.1 seconds, and the remaining isotopes have half-lives shorter than one millisecond. It is predicted that the heavy isotopes 291Cn and 293Cn may have half-lives longer than a few decades, for they are predicted to lie near the center of the theoretical island of stability, and may have been produced in the r-process and be detectable in cosmic rays, though they would be only about 10⁻¹² times as abundant as lead. The lightest isotopes of copernicium have been synthesized by direct fusion between two lighter nuclei and as decay products (except for 277Cn, which is not known to be a decay product), while the heavier isotopes are only known to be produced by decay of heavier nuclei. The heaviest isotope produced by direct fusion is 283Cn; the three heavier isotopes, 284Cn, 285Cn, and 286Cn, have only been observed as decay products of elements with larger atomic numbers. In 1999, American scientists at the University of California, Berkeley, announced that they had succeeded in synthesizing three atoms of 293Og. These parent nuclei were reported to have successively emitted three alpha particles to form copernicium-281 nuclei, which were claimed to have undergone alpha decay, emitting alpha particles with decay energy 10.68 MeV and half-life 0.90 ms, but their claim was retracted in 2001 as it had been based on data fabricated by Ninov. This isotope was in fact produced in 2010 by the same team; the new data contradicted the previous fabricated data. The missing isotopes 278Cn and 279Cn are too heavy to be produced by cold fusion and too light to be produced by hot fusion. They might be reached from above by the decay of heavier elements produced by hot fusion, and indeed 280Cn and 281Cn were produced this way. The isotopes 286Cn and 287Cn could be produced by charged-particle evaporation, in the reaction 244Pu(48Ca,αxn) with x equalling 1 or 2. Predicted properties Very few properties of copernicium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that copernicium (and its parents) decays very quickly. A few individual chemical properties have been measured, as well as the boiling point, but properties of the copernicium metal remain generally unknown and for the most part, only predictions are available. Chemical Copernicium is the tenth and last member of the 6d series and is the heaviest group 12 element in the periodic table, below zinc, cadmium and mercury. It is predicted to differ significantly from the lighter group 12 elements. The valence s-subshells of the group 12 elements and period 7 elements are expected to be relativistically contracted most strongly at copernicium. This and the closed-shell configuration of copernicium result in it probably being a very noble metal. A standard reduction potential of +2.1 V is predicted for the Cn2+/Cn couple.
Copernicium's predicted first ionization energy of 1155 kJ/mol almost matches that of the noble gas xenon at 1170.4 kJ/mol. Copernicium's metallic bonds should also be very weak, possibly making it extremely volatile like the noble gases, and potentially making it gaseous at room temperature. However, it should be able to form metal–metal bonds with copper, palladium, platinum, silver, and gold; these bonds are predicted to be only about 15–20 kJ/mol weaker than the analogous bonds with mercury. In opposition to the earlier suggestion, high-accuracy ab initio calculations predicted that the chemistry of singly-valent copernicium resembles that of mercury rather than that of the noble gases. The latter result can be explained by the huge spin–orbit interaction which significantly lowers the energy of the vacant 7p1/2 state of copernicium. Once copernicium is ionized, its chemistry may present several differences from those of zinc, cadmium, and mercury. Due to the stabilization of 7s electronic orbitals and destabilization of 6d ones caused by relativistic effects, Cn2+ is likely to have a [Rn]5f146d87s2 electronic configuration, using the 6d orbitals before the 7s one, unlike its homologues. The fact that the 6d electrons participate more readily in chemical bonding means that once copernicium is ionized, it may behave more like a transition metal than its lighter homologues, especially in the possible +4 oxidation state. In aqueous solutions, copernicium may form the +2 and perhaps +4 oxidation states. The diatomic ion Hg2²⁺, featuring mercury in the +1 oxidation state, is well known, but the analogous Cn2²⁺ ion is predicted to be unstable or even non-existent. Copernicium(II) fluoride, CnF2, should be more unstable than the analogous mercury compound, mercury(II) fluoride (HgF2), and may even decompose spontaneously into its constituent elements. As the most electronegative reactive element, fluorine may be the only element able to oxidise copernicium even further to the +4 and even +6 oxidation states in CnF4 and CnF6; the latter may require matrix-isolation conditions to be detected, as in the disputed detection of HgF4. CnF4 should be more stable than CnF2. In polar solvents, copernicium is predicted to preferentially form the CnF5− and CnF3− anions rather than the analogous neutral fluorides (CnF4 and CnF2, respectively), although the analogous bromide or iodide ions may be more stable towards hydrolysis in aqueous solution. The anions CnCl4²− and CnBr4²− should also be able to exist in aqueous solution. The formation of thermodynamically stable copernicium(II) and (IV) fluorides would be analogous to the chemistry of xenon. Analogous to mercury(II) cyanide (Hg(CN)2), copernicium is expected to form a stable cyanide, Cn(CN)2. Physical and atomic Copernicium should be a dense metal, with a density of 14.0 g/cm3 in the liquid state at 300 K; this is similar to the known density of mercury, which is 13.534 g/cm3. (Solid copernicium at the same temperature should have a higher density of 14.7 g/cm3.) This results from the effects of copernicium's higher atomic weight being cancelled out by its larger interatomic distances compared to mercury. Some calculations predicted copernicium to be a gas at room temperature due to its closed-shell electron configuration, which would make it the first gaseous metal in the periodic table. A 2019 calculation agrees with these predictions on the role of relativistic effects, but suggests that copernicium will be a volatile liquid bound by dispersion forces under standard conditions.
Its melting point is estimated at 283 ± 11 K and its boiling point at 340 ± 10 K, the latter in agreement with the experimentally estimated value of about 357 K (84 °C). The atomic radius of copernicium is expected to be around 147 pm. Due to the relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Cn+ and Cn2+ ions are predicted to give up 6d electrons instead of 7s electrons, which is the opposite of the behavior of its lighter homologues. In addition to the relativistic contraction and binding of the 7s subshell, the 6d5/2 orbital is expected to be destabilized due to spin–orbit coupling, making it behave similarly to the 7s orbital in terms of size, shape, and energy. Predictions of the expected band structure of copernicium are varied. Calculations in 2007 expected that copernicium may be a semiconductor with a band gap of around 0.2 eV, crystallizing in the hexagonal close-packed crystal structure. However, calculations in 2017 and 2018 suggested that copernicium should be a noble metal at standard conditions with a body-centered cubic crystal structure: it should hence have no band gap, like mercury, although the density of states at the Fermi level is expected to be lower for copernicium than for mercury. 2019 calculations then suggested that in fact copernicium has a large band gap of 6.4 ± 0.2 eV, which should be similar to that of the noble gas radon (predicted as 7.1 eV) and would make it an insulator; bulk copernicium is predicted by these calculations to be bound mostly by dispersion forces, like the noble gases. Like mercury, radon, and flerovium, but not oganesson (eka-radon), copernicium is calculated to have no electron affinity. Experimental atomic gas phase chemistry Interest in copernicium's chemistry was sparked by predictions that it would have the largest relativistic effects in the whole of period 7 and group 12, and indeed among all 118 known elements. Copernicium is expected to have the ground state electron configuration [Rn] 5f14 6d10 7s2 and thus should belong to group 12 of the periodic table, according to the Aufbau principle. As such, it should behave as the heavier homologue of mercury and form strong binary compounds with noble metals like gold. Experiments probing the reactivity of copernicium have focused on the adsorption of atoms of element 112 onto a gold surface held at varying temperatures, in order to calculate an adsorption enthalpy. Owing to relativistic stabilization of the 7s electrons, it has also been suggested that copernicium might show radon-like properties. Experiments were performed with the simultaneous formation of mercury and radon radioisotopes, allowing a comparison of adsorption characteristics. The first chemical experiments on copernicium were conducted using the 238U(48Ca,3n)283Cn reaction. Detection was by spontaneous fission of the claimed parent isotope with a half-life of 5 minutes. Analysis of the data indicated that copernicium was more volatile than mercury and had noble gas properties. However, the confusion regarding the synthesis of copernicium-283 has cast some doubt on these experimental results. Given this uncertainty, in April–May 2006 at the JINR, a FLNR–PSI team conducted experiments probing the synthesis of this isotope as a daughter in the nuclear reaction 242Pu(48Ca,3n)287Fl. (The 242Pu + 48Ca fusion reaction has a slightly larger cross-section than the 238U + 48Ca reaction, so the best way to produce copernicium for chemical experimentation is as an overshoot product, the daughter of flerovium.)
In this experiment, two atoms of copernicium-283 were unambiguously identified and the adsorption properties were interpreted to show that copernicium is a more volatile homologue of mercury, due to formation of a weak metal–metal bond with gold. This agrees with general indications from some relativistic calculations that copernicium is "more or less" homologous to mercury. However, it was pointed out in 2019 that this result may simply be due to strong dispersion interactions. In April 2007, this experiment was repeated and a further three atoms of copernicium-283 were positively identified. The adsorption property was confirmed and indicated that copernicium has adsorption properties in agreement with being the heaviest member of group 12. These experiments also allowed the first experimental estimation of copernicium's boiling point: 84 °C, so that it may be a gas at standard conditions. Because the lighter group 12 elements often occur as chalcogenide ores, experiments were conducted in 2015 to deposit copernicium atoms on a selenium surface to form copernicium selenide, CnSe. Reaction of copernicium atoms with trigonal selenium to form a selenide was observed, with −ΔHadsCn(t-Se) > 48 kJ/mol, with the kinetic hindrance towards selenide formation being lower for copernicium than for mercury. This was unexpected, as the stability of the group 12 selenides tends to decrease down the group from ZnSe to HgSe.
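As a numerical footnote to the half-lives discussed earlier, radioactive decay follows N(t) = N0 · 2^(−t/T½). A short, hedged Python sketch follows; the 30-second half-life of 285Cn comes from the text, while the chosen observation times are invented for illustration:

```python
# Exponential decay: fraction of nuclei surviving after time t,
# given a half-life T_half. N(t) = N0 * 2**(-t / T_half).

def surviving_fraction(t_seconds: float, t_half_seconds: float) -> float:
    """Fraction of the original nuclei remaining after t_seconds."""
    return 2.0 ** (-t_seconds / t_half_seconds)

T_HALF_CN285 = 30.0  # seconds, for 285Cn (figure taken from the text)

# After one minute (two half-lives), a quarter of the atoms remain:
print(surviving_fraction(60.0, T_HALF_CN285))   # 0.25
# After five minutes (ten half-lives), almost nothing is left:
print(surviving_fraction(300.0, T_HALF_CN285))  # ~0.001
```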
Copernicium
[ "Physics", "Chemistry" ]
4,386
[ "Matter", "Chemical elements", "Synthetic materials", "Synthetic elements", "Nuclear physics", "Atoms", "Radioactivity" ]
68,121
https://en.wikipedia.org/wiki/Kolmogorov%20space
In topology and related branches of mathematics, a topological space X is a T0 space or Kolmogorov space (named after Andrey Kolmogorov) if for every pair of distinct points of X, at least one of them has a neighborhood not containing the other. In a T0 space, all points are topologically distinguishable. This condition, called the T0 condition, is the weakest of the separation axioms. Nearly all topological spaces normally studied in mathematics are T0 spaces. In particular, all T1 spaces, i.e., all spaces in which for every pair of distinct points, each has a neighborhood not containing the other, are T0 spaces. This includes all T2 (or Hausdorff) spaces, i.e., all topological spaces in which distinct points have disjoint neighbourhoods. In another direction, every sober space (which may not be T1) is T0; this includes the underlying topological space of any scheme. Given any topological space one can construct a T0 space by identifying topologically indistinguishable points. T0 spaces that are not T1 spaces are exactly those spaces for which the specialization preorder is a nontrivial partial order. Such spaces naturally occur in computer science, specifically in denotational semantics. Definition A T0 space is a topological space in which every pair of distinct points is topologically distinguishable. That is, for any two different points x and y there is an open set that contains one of these points and not the other. More precisely, the topological space X is Kolmogorov or T0 if and only if: if x, y ∈ X and x ≠ y, there exists an open set O such that either (x ∈ O and y ∉ O) or (y ∈ O and x ∉ O). Note that topologically distinguishable points are automatically distinct. On the other hand, if the singleton sets {x} and {y} are separated then the points x and y must be topologically distinguishable. That is, separated ⇒ topologically distinguishable ⇒ distinct. The property of being topologically distinguishable is, in general, stronger than being distinct but weaker than being separated. In a T0 space, the second arrow above also reverses; points are distinct if and only if they are distinguishable. This is how the T0 axiom fits in with the rest of the separation axioms. Examples and counterexamples Nearly all topological spaces normally studied in mathematics are T0. In particular, all Hausdorff (T2) spaces, T1 spaces and sober spaces are T0. Spaces that are not T0 A set with more than one element, with the trivial topology. No points are distinguishable. The set R2 where the open sets are the Cartesian product of an open set in R and R itself, i.e., the product topology of R with the usual topology and R with the trivial topology; points (a,b) and (a,c) are not distinguishable. The space of all measurable functions f from the real line R to the complex plane C such that the Lebesgue integral of |f(x)|2 over the entire real line is finite. Two functions which are equal almost everywhere are indistinguishable. See also below. Spaces that are T0 but not T1 The Zariski topology on Spec(R), the prime spectrum of a commutative ring R, is always T0 but generally not T1. The non-closed points correspond to prime ideals which are not maximal. They are important to the understanding of schemes. The particular point topology on any set with at least two elements is T0 but not T1 since the particular point is not closed (its closure is the whole space). An important special case is the Sierpiński space which is the particular point topology on the set {0,1}. The excluded point topology on any set with at least two elements is T0 but not T1. The only closed point is the excluded point.
The Alexandrov topology on a partially ordered set is T0 but will not be T1 unless the order is discrete (agrees with equality). Every finite T0 space is of this type. This also includes the particular point and excluded point topologies as special cases. The right order topology on a totally ordered set is a related example. The overlapping interval topology is similar to the particular point topology since every non-empty open set includes 0. Quite generally, a topological space X will be T0 if and only if the specialization preorder on X is a partial order. However, X will be T1 if and only if the order is discrete (i.e. agrees with equality). So a space will be T0 but not T1 if and only if the specialization preorder on X is a non-discrete partial order. Operating with T0 spaces Commonly studied topological spaces are all T0. Indeed, when mathematicians in many fields, notably analysis, naturally run across non-T0 spaces, they usually replace them with T0 spaces, in a manner to be described below. To motivate the ideas involved, consider a well-known example. The space L2(R) is meant to be the space of all measurable functions f from the real line R to the complex plane C such that the Lebesgue integral of |f(x)|2 over the entire real line is finite. This space should become a normed vector space by defining the norm ||f|| to be the square root of that integral. The problem is that this is not really a norm, only a seminorm, because there are functions other than the zero function whose (semi)norms are zero. The standard solution is to define L2(R) to be a set of equivalence classes of functions instead of a set of functions directly. This constructs a quotient space of the original seminormed vector space, and this quotient is a normed vector space. It inherits several convenient properties from the seminormed space; see below. In general, when dealing with a fixed topology T on a set X, it is helpful if that topology is T0. On the other hand, when X is fixed but T is allowed to vary within certain boundaries, to force T to be T0 may be inconvenient, since non-T0 topologies are often important special cases. Thus, it can be important to understand both T0 and non-T0 versions of the various conditions that can be placed on a topological space. The Kolmogorov quotient Topological indistinguishability of points is an equivalence relation. No matter what topological space X might be to begin with, the quotient space under this equivalence relation is always T0. This quotient space is called the Kolmogorov quotient of X, which we will denote KQ(X). Of course, if X was T0 to begin with, then KQ(X) and X are naturally homeomorphic. Categorically, Kolmogorov spaces are a reflective subcategory of topological spaces, and the Kolmogorov quotient is the reflector. Topological spaces X and Y are Kolmogorov equivalent when their Kolmogorov quotients are homeomorphic. Many properties of topological spaces are preserved by this equivalence; that is, if X and Y are Kolmogorov equivalent, then X has such a property if and only if Y does. On the other hand, most of the other properties of topological spaces imply T0-ness; that is, if X has such a property, then X must be T0. Only a few properties, such as being an indiscrete space, are exceptions to this rule of thumb. Even better, many structures defined on topological spaces can be transferred between X and KQ(X). 
The result is that, if you have a non-T0 topological space with a certain structure or property, then you can usually form a T0 space with the same structures and properties by taking the Kolmogorov quotient. The example of L2(R) displays these features. From the point of view of topology, the seminormed vector space that we started with has a lot of extra structure; for example, it is a vector space, and it has a seminorm, and these define a pseudometric and a uniform structure that are compatible with the topology. Also, there are several properties of these structures; for example, the seminorm satisfies the parallelogram identity and the uniform structure is complete. The space is not T0 since any two functions in L2(R) that are equal almost everywhere are indistinguishable with this topology. When we form the Kolmogorov quotient, the actual L2(R), these structures and properties are preserved. Thus, L2(R) is also a complete seminormed vector space satisfying the parallelogram identity. But we actually get a bit more, since the space is now T0. A seminorm is a norm if and only if the underlying topology is T0, so L2(R) is actually a complete normed vector space satisfying the parallelogram identity—otherwise known as a Hilbert space. And it is a Hilbert space that mathematicians (and physicists, in quantum mechanics) generally want to study. Note that the notation L2(R) usually denotes the Kolmogorov quotient, the set of equivalence classes of square integrable functions that differ on sets of measure zero, rather than simply the vector space of square integrable functions that the notation suggests. Removing T0 Although norms were historically defined first, people came up with the definition of seminorm as well, which is a sort of non-T0 version of a norm. In general, it is possible to define non-T0 versions of both properties and structures of topological spaces. First, consider a property of topological spaces, such as being Hausdorff. One can then define another property of topological spaces by defining the space X to satisfy the property if and only if the Kolmogorov quotient KQ(X) is Hausdorff. This is a sensible, albeit less famous, property; in this case, such a space X is called preregular. (There even turns out to be a more direct definition of preregularity). Now consider a structure that can be placed on topological spaces, such as a metric. We can define a new structure on topological spaces by letting an example of the structure on X be simply a metric on KQ(X). This is a sensible structure on X; it is a pseudometric. (Again, there is a more direct definition of pseudometric.) In this way, there is a natural way to remove T0-ness from the requirements for a property or structure. It is generally easier to study spaces that are T0, but it may also be easier to allow structures that aren't T0 to get a fuller picture. The T0 requirement can be added or removed arbitrarily using the concept of Kolmogorov quotient.
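For finite spaces, both the T0 condition and the Kolmogorov quotient described above are easy to compute directly: two points are identified exactly when they belong to the same open sets. A hedged Python sketch, purely illustrative and using the Sierpiński space and trivial topology mentioned earlier as test cases:

```python
from itertools import combinations

# Finite toy model: a topology is a list of open sets over a finite set of
# points. Two points are topologically distinguishable iff some open set
# contains exactly one of them. Illustrative sketch only.

def is_t0(points, open_sets):
    """True iff every pair of distinct points is topologically distinguishable."""
    return all(
        any((x in o) != (y in o) for o in open_sets)
        for x, y in combinations(points, 2)
    )

def kolmogorov_quotient(points, open_sets):
    """Identify topologically indistinguishable points; the result is T0."""
    def signature(p):  # the family of open sets containing p
        return frozenset(i for i, o in enumerate(open_sets) if p in o)
    classes = {}
    for p in points:
        classes.setdefault(signature(p), set()).add(p)
    new_points = [frozenset(c) for c in classes.values()]
    # An open set in the quotient consists of the classes it contains.
    new_opens = [{c for c in new_points if c <= o} for o in open_sets]
    return new_points, new_opens

# Sierpinski space (particular point topology on {0, 1}) is T0:
print(is_t0({0, 1}, [set(), {1}, {0, 1}]))      # True

# The two-point trivial topology is not T0; its quotient is a one-point space:
print(is_t0({0, 1}, [set(), {0, 1}]))           # False
pts, opens = kolmogorov_quotient({0, 1}, [set(), {0, 1}])
print(pts)                                      # [frozenset({0, 1})]
print(is_t0(pts, opens))                        # True: the quotient is always T0
```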
Kolmogorov space
[ "Mathematics" ]
2,342
[ "Properties of topological spaces", "Topological spaces", "Topology", "Space (mathematics)" ]
68,206
https://en.wikipedia.org/wiki/Central%20dogma%20of%20molecular%20biology
The central dogma of molecular biology deals with the flow of genetic information within a biological system. It is often stated as "DNA makes RNA, and RNA makes protein", although this is not its original meaning. It was first stated by Francis Crick in 1957 and published in 1958. He re-stated it in a Nature paper published in 1970: "The central dogma of molecular biology deals with the detailed residue-by-residue transfer of sequential information. It states that such information cannot be transferred back from protein to either protein or nucleic acid." A second version of the central dogma is popular but incorrect. This is the simplistic DNA → RNA → protein pathway published by James Watson in the first edition of The Molecular Biology of the Gene (1965). Watson's version differs from Crick's because Watson describes a two-step (DNA → RNA and RNA → protein) process as the central dogma. While the dogma as originally stated by Crick remains valid today, Watson's version does not. Biological sequence information The biopolymers DNA, RNA and (poly)peptides are linear heteropolymers (i.e.: each monomer is connected to at most two other monomers). The sequence of their monomers effectively encodes information. The transfers of information from one molecule to another are faithful, deterministic transfers, wherein one biopolymer's sequence is used as a template for the construction of another biopolymer with a sequence that is entirely dependent on the original biopolymer's sequence. When DNA is transcribed to RNA, its complement is paired to it. DNA codes are transferred to RNA codes in a complementary fashion. The encoding of proteins is done in groups of three nucleotides, known as codons. The standard codon table applies to humans and mammals, but some other lifeforms (including human mitochondria) use different translations. General transfers of biological sequential information DNA replication In the sense that DNA replication must occur if genetic material is to be provided for the progeny of any cell, whether somatic or reproductive, the copying from DNA to DNA arguably is the fundamental step in information transfer. A complex group of proteins called the replisome performs the replication of the information from the parent strand to the complementary daughter strand. Transcription Transcription is the process by which the information contained in a section of DNA is replicated in the form of a newly assembled piece of messenger RNA (mRNA). The process is facilitated by RNA polymerase together with transcription factors. In eukaryotic cells the primary transcript is pre-mRNA. Pre-mRNA must be processed for translation to proceed. Processing includes the addition of a 5' cap and a poly-A tail to the pre-mRNA chain, followed by splicing. Alternative splicing occurs when appropriate, increasing the diversity of proteins that a single gene can produce. The product of the entire transcription process (that began with the production of the pre-mRNA chain) is a mature mRNA chain. Translation The mature mRNA finds its way to a ribosome, where it gets translated. In prokaryotic cells, which have no nuclear compartment, the processes of transcription and translation may be linked together without clear separation. In eukaryotic cells, the site of transcription (the cell nucleus) is usually separated from the site of translation (the cytoplasm), so the mRNA must be transported out of the nucleus into the cytoplasm, where it can be bound by ribosomes.
The ribosome reads the mRNA triplet codons, usually beginning with an AUG (adenine–uracil–guanine), or initiator methionine codon downstream of the ribosome binding site. Complexes of initiation factors and elongation factors bring aminoacylated transfer RNAs (tRNAs) into the ribosome-mRNA complex, matching the codon in the mRNA to the anti-codon on the tRNA. Each tRNA bears the appropriate amino acid residue to add to the polypeptide chain being synthesised. As the amino acids get linked into the growing peptide chain, the chain begins folding into the correct conformation. Translation ends with a stop codon which may be a UAA, UGA, or UAG triplet. The mRNA does not contain all the information for specifying the nature of the mature protein. The nascent polypeptide chain released from the ribosome commonly requires additional processing before the final product emerges. For one thing, the correct folding process is complex and vitally important. For most proteins it requires other chaperone proteins to control the form of the product. Some proteins then excise internal segments from their own peptide chains, splicing the free ends that border the gap; in such processes the inside "discarded" sections are called inteins. Other proteins must be split into multiple sections without splicing. Some polypeptide chains need to be cross-linked, and others must be attached to cofactors such as haem (heme) before they become functional. Additional transfers of biological sequential information Reverse transcription Reverse transcription is the transfer of information from RNA to DNA (the reverse of normal transcription). This is known to occur in the case of retroviruses, such as HIV, as well as in eukaryotes, in the case of retrotransposons and telomere synthesis. It is the process by which genetic information from RNA gets transcribed into new DNA. The family of enzymes involved in this process is called reverse transcriptase. RNA replication RNA replication is the copying of one RNA to another. Many viruses replicate this way. The enzymes that copy RNA to new RNA, called RNA-dependent RNA polymerases, are also found in many eukaryotes where they are involved in RNA silencing. RNA editing, in which an RNA sequence is altered by a complex of proteins and a "guide RNA", could also be seen as an RNA-to-RNA transfer. Activities unrelated to the central dogma The central dogma of molecular biology states that once sequential information has passed from nucleic acid to protein it cannot flow back from protein to nucleic acid. Some people believe that the following activities conflict with the central dogma. Post-translational modification After protein amino acid sequences have been translated from nucleic acid chains, they can be edited by appropriate enzymes. This is a form of protein affecting protein sequence, not of protein transferring information to nucleic acid. Nonribosomal peptide synthesis Some proteins are synthesized by nonribosomal peptide synthetases, which can be large protein complexes, each specializing in synthesizing only one type of peptide. Nonribosomal peptides often have cyclic and/or branched structures and can contain non-proteinogenic amino acids; both of these factors differentiate them from ribosome-synthesized proteins. Some antibiotics are examples of nonribosomal peptides.
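To make the transcription and translation steps described above concrete, here is a toy Python sketch of "DNA makes RNA, and RNA makes protein". It is illustrative only: the template sequence is invented, only a tiny subset of the standard codon table is included, and real transcription machinery reads the template strand directionally (3'→5'), which this simplification ignores:

```python
# Toy model of "DNA makes RNA, and RNA makes protein" (illustrative only).
# Transcription: complement a DNA template strand into mRNA
# (A -> U, T -> A, G -> C, C -> G; RNA uses uracil instead of thymine).
# Translation: scan for the AUG start codon, then read triplets until a
# stop codon (UAA, UAG or UGA), as described in the text.

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

CODON_TABLE = {  # tiny subset of the standard codon table
    "AUG": "M",  # methionine (start)
    "UUC": "F",  # phenylalanine
    "AAA": "K",  # lysine
    "GGC": "G",  # glycine
}
STOP_CODONS = {"UAA", "UAG", "UGA"}

def transcribe(template_strand: str) -> str:
    """mRNA complementary to a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

def translate(mrna: str) -> str:
    """Peptide from the first AUG up to (not including) a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return ""  # no initiator codon found
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon in STOP_CODONS:
            break
        peptide.append(CODON_TABLE.get(codon, "?"))  # '?' = not in toy table
    return "".join(peptide)

# Invented template strand, for illustration only:
mrna = transcribe("CCTACAAGTTTCCGATT")
print(mrna)             # GGAUGUUCAAAGGCUAA
print(translate(mrna))  # MFKG
```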
Inteins An intein is a "parasitic" segment of a protein that is able to excise itself from the chain of amino acids as they emerge from the ribosome and rejoin the remaining portions with a peptide bond in such a manner that the main protein "backbone" does not fall apart. This is a case of a protein changing its own primary sequence from the sequence originally encoded by the DNA of a gene. Additionally, most inteins contain a homing endonuclease or HEG domain which is capable of finding a copy of the parent gene that does not include the intein nucleotide sequence. On contact with the intein-free copy, the HEG domain initiates the DNA double-stranded break repair mechanism. This process causes the intein sequence to be copied from the original source gene to the intein-free gene. This is an example of protein directly editing DNA sequence, as well as increasing the sequence's heritable propagation. Prions Prions are proteins of particular amino acid sequences in particular conformations. They propagate themselves in host cells by making conformational changes in other molecules of protein with the same amino acid sequence, but with a different conformation that is functionally important or detrimental to the organism. Once the protein has been transconformed to the prion folding, it changes function. In turn, it can convey information into new cells and reconfigure more functional molecules of that sequence into the alternate prion form. In some types of prion in fungi this change is continuous and direct; the information flow is Protein → Protein. Some scientists such as Alain E. Bussard and Eugene Koonin have argued that prion-mediated inheritance violates the central dogma of molecular biology. However, Rosalind Ridley in Molecular Pathology of the Prions (2001) has written that "The prion hypothesis is not heretical to the central dogma of molecular biology—that the information necessary to manufacture proteins is encoded in the nucleotide sequence of nucleic acid—because it does not claim that proteins replicate. Rather, it claims that there is a source of information within protein molecules that contributes to their biological function, and that this information can be passed on to other molecules." Use of the term dogma In his autobiography, What Mad Pursuit, Crick wrote about his choice of the word dogma and some of the problems it caused him: "I called this idea the central dogma, for two reasons, I suspect. I had already used the obvious word hypothesis in the sequence hypothesis, and in addition I wanted to suggest that this new assumption was more central and more powerful. ... As it turned out, the use of the word dogma caused almost more trouble than it was worth. Many years later Jacques Monod pointed out to me that I did not appear to understand the correct use of the word dogma, which is a belief that cannot be doubted. I did apprehend this in a vague sort of way but since I thought that all religious beliefs were without foundation, I used the word the way I myself thought about it, not as most of the world does, and simply applied it to a grand hypothesis that, however plausible, had little direct experimental support." Similarly, Horace Freeland Judson records in The Eighth Day of Creation: "My mind was, that a dogma was an idea for which there was no reasonable evidence. You see?!" And Crick gave a roar of delight. "I just didn't know what dogma meant. And I could just as well have called it the 'Central Hypothesis,' or — you know. Which is what I meant to say.
Dogma was just a catch phrase." Comparison with the Weismann barrier The Weismann barrier, proposed by August Weismann in 1892, distinguishes between the "immortal" germ cell lineages (the germ plasm) which produce gametes and the "disposable" somatic cells. Hereditary information moves only from germline cells to somatic cells (that is, somatic mutations are not inherited). Proposed before the discovery of the role or structure of DNA, the Weismann barrier does not predict the central dogma, but it does anticipate the central dogma's gene-centric view of life, albeit in non-molecular terms. Further reading Baker, Harry F. (2001). Molecular Pathology of the Prions (Methods in Molecular Medicine). Humana Press.
Central dogma of molecular biology
[ "Chemistry", "Biology" ]
2,463
[ "Biochemistry", "Molecular genetics", "Cellular processes", "Molecular biology" ]
68,316
https://en.wikipedia.org/wiki/Heat%20pump
A heat pump is a device that uses energy (usually electricity) to transfer heat from a colder place to a warmer place. Specifically, the heat pump transfers thermal energy using a refrigeration cycle, cooling the cool space and warming the warm space. In winter a heat pump can move heat from the cool outdoors to warm a house; the pump may also be designed to move heat from the house to the warmer outdoors in summer. As they transfer heat rather than generating it, heat pumps are more energy-efficient than heating by gas boiler, and are also good for cooling a home. A gaseous refrigerant is compressed so its pressure and temperature rise. When operating as a heater in cold weather, the warmed gas flows to a heat exchanger in the indoor space where some of its thermal energy is transferred to that indoor space, causing the gas to condense to its liquid state. The liquefied refrigerant flows to a heat exchanger in the outdoor space where the pressure falls, the liquid evaporates and the temperature of the gas falls. It is now colder than the temperature of the outdoor space being used as a heat source. It can again take up energy from the heat source, be compressed and repeat the cycle. Air source heat pumps are the most common models, while other types include ground source heat pumps, water source heat pumps and exhaust air heat pumps. Large-scale heat pumps are also used in district heating systems. The efficiency of a heat pump is expressed as a coefficient of performance (COP), or seasonal coefficient of performance (SCOP). The higher the number, the more efficient a heat pump is. For example, an air-to-water heat pump that produces 6 kW at a SCOP of 4.62 will give over 4 kW of energy into a heating system for every kilowatt of energy that the heat pump uses itself to operate. When used for space heating, heat pumps are typically more energy-efficient than electric resistance and other heaters. Because of their high efficiency and the increasing share of fossil-free sources in electrical grids, heat pumps are playing a role in climate change mitigation. Consuming 1 kWh of electricity, they can transfer 1 to 4.5 kWh of thermal energy into a building. The carbon footprint of heat pumps depends on how electricity is generated, but they usually reduce emissions. Heat pumps could satisfy over 80% of global space and water heating needs with a lower carbon footprint than gas-fired condensing boilers: however, in 2021 they only met 10%. Principle of operation Heat flows spontaneously from a region of higher temperature to a region of lower temperature. Heat does not flow spontaneously from lower temperature to higher, but it can be made to flow in this direction if work is performed. The work required to transfer a given amount of heat is usually much less than the amount of heat; this is the motivation for using heat pumps in applications such as the heating of water and the interior of buildings. The amount of work required to drive an amount of heat Q from a lower-temperature reservoir such as ambient air to a higher-temperature reservoir such as the interior of a building is W = Q / COP, where W is the work performed on the working fluid by the heat pump's compressor, Q is the heat transferred from the lower-temperature reservoir to the higher-temperature reservoir, and COP is the instantaneous coefficient of performance for the heat pump at the temperatures prevailing in the reservoirs at one instant.
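For concreteness, a small Python sketch of this relation follows (illustrative only; the helper for the ideal reversed-Carnot COP uses the standard textbook bound, with COP defined here, as in the text, as heat drawn from the cold reservoir per unit work, and the same temperatures as the worked example that follows):

```python
# Work needed to move heat Q at a given coefficient of performance: W = Q / COP.
# carnot_cop gives the ideal upper bound for a reversed Carnot cycle,
# defined as heat drawn from the cold reservoir per unit of work input.

def work_required(heat_joules: float, cop: float) -> float:
    return heat_joules / cop

def carnot_cop(t_cold_kelvin: float, t_hot_kelvin: float) -> float:
    return t_cold_kelvin / (t_hot_kelvin - t_cold_kelvin)

cop = carnot_cop(270.0, 280.0)
print(cop)                       # 27.0
print(work_required(27.0, cop))  # 1.0 J of work moves 27 J of heat

# A warmer interior means a smaller COP, so more work per joule of heat:
print(carnot_cop(270.0, 300.0))  # 9.0
```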
The coefficient of performance of a heat pump is greater than one so the work required is less than the heat transferred, making a heat pump a more efficient form of heating than electrical resistance heating. As the temperature of the higher-temperature reservoir increases in response to the heat flowing into it, the coefficient of performance decreases, causing an increasing amount of work to be required for each unit of heat being transferred. The coefficient of performance, and the work required by a heat pump can be calculated easily by considering an ideal heat pump operating on the reversed Carnot cycle: if the low-temperature reservoir is at a temperature of 270 K and the interior of the building is at 280 K, the relevant coefficient of performance is 27. This means only 1 joule of work is required to transfer 27 joules of heat from a reservoir at 270 K to another at 280 K. The one joule of work ultimately ends up as thermal energy in the interior of the building so for each 27 joules of heat that are removed from the low-temperature reservoir, 28 joules of heat are added to the building interior, making the heat pump even more attractive from an efficiency perspective. As the temperature of the interior of the building rises progressively to 300 K, the coefficient of performance falls progressively to 9. This means each joule of work is responsible for transferring 9 joules of heat out of the low-temperature reservoir and into the building. Again, the 1 joule of work ultimately ends up as thermal energy in the interior of the building so 10 joules of heat are added to the building interior. This is the theoretical amount of heat pumped but in practice it will be less for various reasons, for example if the outside unit has been installed where there is not enough airflow. More data sharing with owners and academics—perhaps from heat meters—could improve efficiency in the long run. History Milestones: 1748 William Cullen demonstrates artificial refrigeration. 1834 Jacob Perkins patents a design for a practical refrigerator using dimethyl ether. 1852 Lord Kelvin describes the theory underlying heat pumps. 1855–1857 Peter von Rittinger develops and builds the first heat pump. 1877 In the period before 1875, heat pumps were for the time being pursued for vapour compression evaporation (open heat pump process) in salt works with their obvious advantages for saving wood and coal. In 1857, Peter von Rittinger was the first to try to implement the idea of vapor compression in a small pilot plant. Presumably inspired by Rittinger's experiments in Ebensee, Antoine-Paul Piccard from the University of Lausanne and the engineer J. H. Weibel from the Weibel–Briquet company in Geneva built the world's first really functioning vapor compression system with a two-stage piston compressor. In 1877 this first heat pump in Switzerland was installed in the Bex salt works. 1928 Aurel Stodola constructs a closed-loop heat pump (water source from Lake Geneva) which provides heating for the Geneva city hall to this day. 1937–1945 During the First World War, fuel prices were very high in Switzerland but it had plenty of hydropower. In the period before and especially during the Second World War, when neutral Switzerland was completely surrounded by fascist-ruled countries, the coal shortage became alarming again. Thanks to their leading position in energy technology, the Swiss companies Sulzer, Escher Wyss and Brown Boveri built and put in operation around 35 heat pumps between 1937 and 1945.
The main heat sources were lake water, river water, groundwater, and waste heat. Particularly noteworthy are the six historic heat pumps from the city of Zurich with heat outputs from 100 kW to 6 MW. An international milestone is the heat pump built by Escher Wyss in 1937/38 to replace the wood stoves in the City Hall of Zurich. To avoid noise and vibrations, a recently developed rotary piston compressor was used. This historic heat pump heated the town hall for 63 years until 2001. Only then was it replaced by a new, more efficient heat pump. 1945 John Sumner, City Electrical Engineer for Norwich, installs an experimental water-source heat pump fed central heating system, using a nearby river to heat new Council administrative buildings. It had a seasonal efficiency ratio of 3.42, average thermal delivery of 147 kW, and peak output of 234 kW. 1948 Robert C. Webber is credited as developing and building the first ground-source heat pump. 1951 First large scale installation—the Royal Festival Hall in London is opened with a town gas-powered reversible water-source heat pump, fed by the Thames, for both winter heating and summer cooling needs. 2019 The Kigali Amendment to phase out harmful refrigerants takes effect. Types Air-source Ground source Heat recovery ventilation Exhaust air heat pumps extract heat from the exhaust air of a building and require mechanical ventilation. Two classes exist: Exhaust air-air heat pumps transfer heat to intake air. Exhaust air-water heat pumps transfer heat to a heating circuit that includes a tank of domestic hot water. Solar-assisted Water-source A water-source heat pump works in a similar manner to a ground-source heat pump, except that it takes heat from a body of water rather than the ground. The body of water does, however, need to be large enough to be able to withstand the cooling effect of the unit without freezing or creating an adverse effect for wildlife. The largest water-source heat pump was installed in the Danish town of Esbjerg in 2023. Others A thermoacoustic heat pump operates as a thermoacoustic heat engine without refrigerant but instead uses a standing wave in a sealed chamber driven by a loudspeaker to achieve a temperature difference across the chamber. Electrocaloric heat pumps are solid state. Applications The International Energy Agency estimated that, as of 2021, heat pumps installed in buildings have a combined capacity of more than 1000 GW. They are used for heating, ventilation, and air conditioning (HVAC) and may also provide domestic hot water and tumble clothes drying. The purchase costs are supported in various countries by consumer rebates. Space heating and sometimes also cooling In HVAC applications, a heat pump is typically a vapor-compression refrigeration device that includes a reversing valve and optimized heat exchangers so that the direction of heat flow (thermal energy movement) may be reversed. The reversing valve switches the direction of refrigerant through the cycle and therefore the heat pump may deliver either heating or cooling to a building. Because the two heat exchangers, the condenser and evaporator, must swap functions, they are optimized to perform adequately in both modes. Therefore, the Seasonal Energy Efficiency Rating (SEER in the US) or European seasonal energy efficiency ratio of a reversible heat pump is typically slightly less than those of two separately optimized machines. For equipment to receive the US Energy Star rating, it must have a rating of at least 14 SEER. 
Pumps with ratings of 18 SEER or above are considered highly efficient. The highest efficiency heat pumps manufactured are up to 24 SEER. Heating seasonal performance factor (in the US) or Seasonal Performance Factor (in Europe) are ratings of heating performance. The SPF is the total heat output per annum divided by the total electricity consumed per annum; in other words, the average heating COP over the year. Window mounted heat pump Window mounted heat pumps run on standard 120 V AC outlets and provide heating, cooling, and humidity control. They are more efficient, with lower noise levels, condensation management, and a smaller footprint, than window mounted air conditioners that just do cooling. Water heating In water heating applications, heat pumps may be used to heat or preheat water for swimming pools, homes or industry. Usually heat is extracted from outdoor air and transferred to an indoor water tank. District heating Large (megawatt-scale) heat pumps are used for district heating. However, about 90% of district heat is from fossil fuels. In Europe, heat pumps account for a mere 1% of heat supply in district heating networks but several countries have targets to decarbonise their networks between 2030 and 2040. Possible sources of heat for such applications are sewage water, ambient water (e.g. sea, lake and river water), industrial waste heat, geothermal energy, flue gas, waste heat from district cooling and heat from solar seasonal thermal energy storage. Large-scale heat pumps for district heating combined with thermal energy storage offer high flexibility for the integration of variable renewable energy. Therefore, they are regarded as a key technology for limiting climate change by phasing out fossil fuels. They are also a crucial element of systems which can both heat and cool districts. Industrial heating There is great potential to reduce the energy consumption and related greenhouse gas emissions in industry by application of industrial heat pumps, for example for process heat. Short payback periods of less than 2 years are possible, while achieving a high reduction of emissions (in some cases more than 50%). Industrial heat pumps can heat up to 200 °C, and can meet the heating demands of many light industries. In Europe alone, 15 GW of heat pumps could be installed in 3,000 facilities in the paper, food and chemicals industries. Performance The performance of a heat pump is determined by the ability of the pump to extract heat from a low temperature environment (the source) and deliver it to a higher temperature environment (the sink). Performance varies, depending on installation details, temperature differences, site elevation, location on site, pipe runs, flow rates, and maintenance. In general, heat pumps work most efficiently (that is, the heat output produced for a given energy input) when the difference between the heat source and the heat sink is small. When using a heat pump for space or water heating, therefore, the heat pump will be most efficient in mild conditions, and decline in efficiency on very cold days. Performance metrics supplied to consumers attempt to take this variation into account. Common performance metrics are the SEER (in cooling mode) and seasonal coefficient of performance (SCOP) (commonly used just for heating), although SCOP can be used for both modes of operation. Larger values of either metric indicate better performance.
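A small numeric sketch of these metrics (illustrative only: the meter readings are invented, and the 0.293 factor is the standard BTU/(h·W) to W/W conversion also quoted in the next paragraph):

```python
# Seasonal performance factor: annual heat delivered / annual electricity used.
# SEER is quoted in BTU/(h*W); multiplying by 0.293 converts it to a
# dimensionless W/W figure comparable to a COP.

def spf(annual_heat_kwh: float, annual_electricity_kwh: float) -> float:
    return annual_heat_kwh / annual_electricity_kwh

def seer_to_cop(seer_btu_per_hw: float) -> float:
    return seer_btu_per_hw * 0.293

# Invented meter readings, for illustration only:
print(spf(12_000, 3_000))  # 4.0 -- average heating COP over the year
print(seer_to_cop(18))     # ~5.3 W of cooling per W of electricity
```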
When comparing the performance of heat pumps, the term performance is preferred to efficiency, with coefficient of performance (COP) being used to describe the ratio of useful heat movement per work input. An electrical resistance heater has a COP of 1.0, which is considerably lower than that of a well-designed heat pump, which will typically have a COP of 3 to 5 with an external temperature of 10 °C and an internal temperature of 20 °C. Because the ground is a constant temperature source, a ground-source heat pump is not subjected to large temperature fluctuations, and therefore is the most energy-efficient type of heat pump. The "seasonal coefficient of performance" (SCOP) is a measure of the aggregate energy efficiency over a period of one year which is dependent on regional climate. One framework for this calculation is given by the Commission Regulation (EU) No. 813/2013. A heat pump's operating performance in cooling mode is characterized in the US by either its energy efficiency ratio (EER) or seasonal energy efficiency ratio (SEER), both of which have units of BTU/(h·W) (note that 1 BTU/(h·W) = 0.293 W/W) and larger values indicate better performance. Carbon footprint The carbon footprint of heat pumps depends on their individual efficiency and how electricity is produced. An increasing share of low-carbon energy sources such as wind and solar will lower the impact on the climate. In most settings, heat pumps will reduce emissions compared to heating systems powered by fossil fuels. In regions accounting for 70% of world energy consumption, the emissions savings of heat pumps compared with a high-efficiency gas boiler are on average above 45% and reach 80% in countries with cleaner electricity mixes. These values can be improved by 10 percentage points, respectively, with alternative refrigerants. In the United States, 70% of houses could reduce emissions by installing a heat pump. The rising share of renewable electricity generation in many countries is set to increase the emissions savings from heat pumps over time. Heating systems powered by green hydrogen are also low-carbon and may become competitors, but are much less efficient due to the energy loss associated with hydrogen conversion, transport and use. In addition, not enough green hydrogen is expected to be available before the 2030s or 2040s. Operation Vapor-compression uses a circulating refrigerant as the medium, which absorbs heat from one space, is compressed (thereby increasing its temperature), and then releases that heat in another space. The system normally has seven main components: a compressor, a reservoir, a reversing valve which selects between heating and cooling mode, two thermal expansion valves (one used when in heating mode and the other when used in cooling mode) and two heat exchangers, one associated with the external heat source/sink and the other with the interior. In heating mode the external heat exchanger is the evaporator and the internal one is the condenser; in cooling mode the roles are reversed. Circulating refrigerant enters the compressor in the thermodynamic state known as a saturated vapor and is compressed to a higher pressure, resulting in a higher temperature as well. The hot, compressed vapor is then in the thermodynamic state known as a superheated vapor and it is at a temperature and pressure at which it can be condensed with either cooling water or cooling air flowing across the coil or tubes.
In heating mode this heat is used to heat the building using the internal heat exchanger, and in cooling mode this heat is rejected via the external heat exchanger. The condensed, liquid refrigerant, in the thermodynamic state known as a saturated liquid, is next routed through an expansion valve where it undergoes an abrupt reduction in pressure. That pressure reduction results in the adiabatic flash evaporation of a part of the liquid refrigerant. The auto-refrigeration effect of the adiabatic flash evaporation lowers the temperature of the liquid-and-vapor refrigerant mixture to the point where it is colder than the temperature of the enclosed space to be refrigerated. The cold mixture is then routed through the coil or tubes in the evaporator. A fan circulates the warm air in the enclosed space across the coil or tubes carrying the cold refrigerant liquid and vapor mixture. That warm air evaporates the liquid part of the cold refrigerant mixture. At the same time, the circulating air is cooled and thus lowers the temperature of the enclosed space to the desired temperature. The evaporator is where the circulating refrigerant absorbs and removes heat which is subsequently rejected in the condenser and transferred elsewhere by the water or air used in the condenser. To complete the refrigeration cycle, the refrigerant vapor from the evaporator is again a saturated vapor and is routed back into the compressor. Over time, the evaporator may collect ice or water from ambient humidity. The ice is melted through a defrost cycle. An internal heat exchanger is either used to heat/cool the interior air directly or to heat water that is then circulated through radiators or an underfloor heating circuit to either heat or cool the buildings. Improvement of coefficient of performance by subcooling Heat input can be improved if the refrigerant enters the evaporator with a lower vapor content. This can be achieved by cooling the liquid refrigerant after condensation. The gaseous refrigerant condenses on the heat exchange surface of the condenser. To achieve a heat flow from the gaseous flow center to the wall of the condenser, the temperature of the liquid refrigerant must be lower than the condensation temperature. Additional subcooling can be achieved by heat exchange between relatively warm liquid refrigerant leaving the condenser and the cooler refrigerant vapor emerging from the evaporator. The enthalpy difference required for the subcooling leads to the superheating of the vapor drawn into the compressor. When the increase in cooling achieved by subcooling is greater than the compressor drive input required to overcome the additional pressure losses, such a heat exchange improves the coefficient of performance. One disadvantage of the subcooling of liquids is that the difference between the condensing temperature and the heat-sink temperature must be larger. This leads to a moderately high pressure difference between condensing and evaporating pressure, whereby the compressor energy increases. Refrigerant choice Pure refrigerants can be divided into organic substances (hydrocarbons (HCs), chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), hydrofluorocarbons (HFCs), hydrofluoroolefins (HFOs), and HCFOs), and inorganic substances (ammonia (NH3), carbon dioxide (CO2), and water (H2O)). Their boiling points are usually below −25 °C. In the past 200 years, the standards and requirements for new refrigerants have changed.
Nowadays low global warming potential (GWP) is required, in addition to all the previous requirements for safety, practicality, material compatibility, appropriate atmospheric life, and compatibility with high-efficiency products. By 2022, devices using refrigerants with a very low GWP still had a small market share but are expected to play an increasing role due to enforced regulations, as most countries have now ratified the Kigali Amendment to ban HFCs. Isobutane (R600A) and propane (R290) are far less harmful to the environment than conventional hydrofluorocarbons (HFC) and are already being used in air-source heat pumps. Propane may be the most suitable for high temperature heat pumps. Ammonia (R717) and carbon dioxide (R-744) also have a low GWP; smaller heat pumps using these refrigerants are not widely available, and research and development of them continues. A 2024 report said that refrigerants with high GWP are vulnerable to further international restrictions. Until the 1990s, heat pumps, along with fridges and other related products, used chlorofluorocarbons (CFCs) as refrigerants, which caused major damage to the ozone layer when released into the atmosphere. Use of these chemicals was banned or severely restricted by the Montreal Protocol of August 1987. Replacements, including R-134a and R-410A, are hydrofluorocarbons (HFC) with similar thermodynamic properties and insignificant ozone depletion potential (ODP), but they had problematic GWP. HFCs are powerful greenhouse gases which contribute to climate change. Dimethyl ether (DME) also gained in popularity as a refrigerant in combination with R404a. More recent refrigerants include difluoromethane (R32) with a lower GWP, but still over 600. Devices with R-290 refrigerant (propane) are expected to play a key role in the future. The 100-year GWP of propane, at 0.02, is extremely low, approximately 7000 times lower than that of R-32. However, the flammability of propane requires additional safety measures: the maximum safe charges have been set significantly lower than for lower flammability refrigerants (only allowing approximately 13.5 times less refrigerant in the system than R-32). This means that R-290 is not suitable for all situations or locations. Nonetheless, by 2022, an increasing number of devices with R-290 were offered for domestic use, especially in Europe. At the same time, HFC refrigerants still dominate the market. Recent government mandates have seen the phase-out of R-22 refrigerant. Replacements such as R-32 and R-410A are being promoted as environmentally friendly but still have a high GWP. A heat pump typically uses 3 kg of refrigerant. With R-32 this amount still has a 20-year impact equivalent to 7 tons of CO2, which corresponds to two years of natural gas heating in an average household. Refrigerants with a high ODP have already been phased out. Government incentives Financial incentives aim to protect consumers from high fossil gas costs and to reduce greenhouse gas emissions, and are currently available in more than 30 countries around the world, covering more than 70% of global heating demand in 2021. Australia Food processors, brewers, petfood producers and other industrial energy users are exploring whether it is feasible to use renewable energy to produce industrial-grade heat. Process heating accounts for the largest share of onsite energy use in Australian manufacturing, with lower-temperature operations like food production particularly well-suited to transition to renewables.
To help producers understand how they could benefit from making the switch, the Australian Renewable Energy Agency (ARENA) provided funding to the Australian Alliance for Energy Productivity (A2EP) to undertake pre-feasibility studies at a range of sites around Australia, with the most promising locations advancing to full feasibility studies. In an effort to incentivize energy efficiency and reduce environmental impact, the Australian states of Victoria, New South Wales, and Queensland have implemented rebate programs targeting the upgrade of existing hot water systems. These programs specifically encourage the transition from traditional gas or electric systems to heat pump based systems.

Canada
As of 2022, the Canada Greener Homes Grant provided up to $5,000 for upgrades (including certain heat pumps) and $600 for energy efficiency evaluations.

China
Purchase subsidies in rural areas in the 2010s reduced the burning of coal for heating, which had been causing ill health. A 2024 report by the International Energy Agency (IEA), "The Future of Heat Pumps in China", highlights that China, as the world's largest market for heat pumps in buildings, plays a critical role in the global industry. The country accounts for over one-quarter of global sales, with a 12% increase in 2023 alone, despite a global sales dip of 3% the same year. As of 2022, heat pumps are used in approximately 8% of all heating equipment sales for buildings in China, and they are increasingly becoming the norm in central and southern regions for both heating and cooling. Despite their higher upfront costs and relatively low awareness, heat pumps are favored for their energy efficiency, consuming a third to a fifth as much energy as electric heaters or fossil fuel-based solutions. Currently, decentralized heat pumps installed in Chinese buildings represent a quarter of the global installed capacity, with a total capacity exceeding 250 GW, which covers around 4% of the heating needs in buildings. Under the Announced Pledges Scenario (APS), which aligns with China's carbon neutrality goals, the capacity is expected to reach 1,400 GW by 2050, meeting 25% of heating needs. This scenario would require the installation of about 100 GW of heat pumps annually until 2050. Furthermore, the heat pump sector in China employs over 300,000 people, with employment numbers expected to double by 2050, underscoring the importance of vocational training for industry growth. This robust development in the heat pump market is set to play a significant role in reducing direct emissions in buildings by 30% and cutting PM2.5 emissions from residential heating by nearly 80% by 2030.

European Union
To speed up the deployment rate of heat pumps, the European Commission launched the Heat Pump Accelerator Platform in November 2024. It will encourage industry experts, policymakers, and stakeholders to collaborate, share best practices and ideas, and jointly discuss measures that promote sustainable heating solutions.

United Kingdom
Until 2027, fixed heat pumps carry no Value Added Tax (VAT). The installation cost of a heat pump is higher than that of a gas boiler, but with the "Boiler Upgrade Scheme" government grant, and assuming electricity and gas prices remain similar, their lifetime costs would be similar on average. However, the lifetime cost relative to a gas boiler varies considerably depending on several factors, such as the quality of the heat pump installation and the tariff used.
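The lifetime-cost comparison just described can be sketched numerically. Every figure below (prices, efficiencies, grant level, heat demand) is an assumption chosen only to illustrate the arithmetic; real outcomes depend heavily on tariffs and installation quality, as noted above.

```python
# Rough lifetime cost = upfront cost - grant + years * annual fuel spend.
# All inputs are illustrative assumptions, not quoted UK figures.

LIFETIME_YEARS = 15
ANNUAL_HEAT_KWH = 12_000          # useful heat demand of an average home (assumed)

def lifetime_cost(capex, grant, price_per_kwh, efficiency):
    fuel_kwh = ANNUAL_HEAT_KWH / efficiency        # energy purchased per year
    return capex - grant + LIFETIME_YEARS * fuel_kwh * price_per_kwh

gas_boiler = lifetime_cost(capex=3_000, grant=0,
                           price_per_kwh=0.07, efficiency=0.90)
heat_pump = lifetime_cost(capex=12_000, grant=7_500,        # grant level assumed
                          price_per_kwh=0.24, efficiency=3.5)  # SCOP 3.5 assumed

print(f"Gas boiler, {LIFETIME_YEARS} years: ~{gas_boiler:,.0f}")
print(f"Heat pump,  {LIFETIME_YEARS} years: ~{heat_pump:,.0f}")
```

With these particular assumptions the two totals come out within a few percent of each other, which is the sense in which lifetime costs can be "similar on average" even though the upfront costs differ by a factor of several.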
In 2024, England was criticised for still allowing new homes to be built with gas boilers, unlike some other countries where this is banned.

United States
The High-Efficiency Electric Home Rebate Program was created in 2022 to award grants to state energy offices and Indian Tribes in order to establish state-wide high-efficiency electric-home rebates. American households are eligible for a tax credit of up to $2,000 to cover the costs of buying and installing a heat pump. Starting in 2023, low- and moderate-income households are eligible for a heat-pump rebate of up to $8,000. In 2022, more heat pumps were sold in the United States than natural gas furnaces. In November 2023, the Biden administration allocated $169 million from the Inflation Reduction Act to speed up production of heat pumps. It used the Defense Production Act to do so, because, according to the administration, energy that is better for the climate is also better for national security.
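As a closing worked example for the refrigerant figures quoted earlier: the climate impact of a refrigerant charge is simply charge mass times GWP. The 20-year GWP values below are approximate, rounded assessment-style values and should be treated as assumptions.

```python
# CO2-equivalent of a refrigerant charge over a 20-year horizon.
CHARGE_KG = 3.0                       # typical heat-pump charge, per the text
GWP_20YR = {"R-32": 2690, "R-290 (propane)": 0.07}   # approximate values

for refrigerant, gwp in GWP_20YR.items():
    tonnes_co2e = CHARGE_KG * gwp / 1000.0
    print(f"{refrigerant}: {tonnes_co2e:.2f} t CO2-eq")
```

For R-32 this gives roughly 8 tonnes of CO2-equivalent, the same order as the "7 tons" figure cited above, while the propane charge is climatically negligible; the trade-off is propane's flammability and the resulting charge limits.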
Heat pump
[ "Engineering" ]
5,940
[ "Building engineering", "Civil engineering", "Architecture" ]
68,326
https://en.wikipedia.org/wiki/Extended%20periodic%20table
An extended periodic table theorizes about chemical elements beyond those currently known and proven. The element with the highest atomic number known is oganesson (Z = 118), which completes the seventh period (row) in the periodic table. All elements in the eighth period and beyond thus remain purely hypothetical. Elements beyond 118 will be placed in additional periods when discovered, laid out (as with the existing periods) to illustrate periodically recurring trends in the properties of the elements. Any additional periods are expected to contain more elements than the seventh period, as they are calculated to have an additional so-called g-block, containing at least 18 elements with partially filled g-orbitals in each period. An eight-period table containing this block was suggested by Glenn T. Seaborg in 1969. The first element of the g-block may have atomic number 121, and thus would have the systematic name unbiunium. Despite many searches, no elements in this region have been synthesized or discovered in nature. According to the orbital approximation in quantum mechanical descriptions of atomic structure, the g-block would correspond to elements with partially filled g-orbitals, but spin–orbit coupling effects reduce the validity of the orbital approximation substantially for elements of high atomic number. Seaborg's version of the extended period had the heavier elements following the pattern set by lighter elements, as it did not take into account relativistic effects. Models that take relativistic effects into account predict that the pattern will be broken. Pekka Pyykkö and Burkhard Fricke used computer modeling to calculate the positions of elements up to Z = 172, and found that several were displaced from the Madelung rule. As a result of uncertainty and variability in predictions of chemical and physical properties of elements beyond 120, there is currently no consensus on their placement in the extended periodic table. Elements in this region are likely to be highly unstable with respect to radioactive decay and undergo alpha decay or spontaneous fission with extremely short half-lives, though element 126 is hypothesized to be within an island of stability that is resistant to fission but not to alpha decay. Other islands of stability beyond the known elements may also be possible, including one theorised around element 164, though the extent of stabilizing effects from closed nuclear shells is uncertain. It is not clear how many elements beyond the expected island of stability are physically possible, whether period 8 is complete, or if there is a period 9. The International Union of Pure and Applied Chemistry (IUPAC) defines an element to exist if its lifetime is longer than 10−14 seconds (0.01 picoseconds, or 10 femtoseconds), which is the time it takes for the nucleus to form an electron cloud. As early as 1940, it was noted that a simplistic interpretation of the relativistic Dirac equation runs into problems with electron orbitals at Z > 1/α ≈ 137, suggesting that neutral atoms cannot exist beyond element 137, and that a periodic table of elements based on electron orbitals therefore breaks down at this point. On the other hand, a more rigorous analysis calculates the analogous limit to be Z ≈ 168–172 where the 1s subshell dives into the Dirac sea, and that it is instead not neutral atoms that cannot exist beyond this point, but bare nuclei, thus posing no obstacle to the further extension of the periodic system. 
Atoms beyond this critical atomic number are called supercritical atoms.

History
Elements beyond the actinides were first proposed to exist as early as 1895, when the Danish chemist Hans Peter Jørgen Julius Thomsen predicted that thorium and uranium formed part of a 32-element period which would end at a chemically inactive element with atomic weight 292 (not far from the 294 for the only known isotope of oganesson). In 1913, the Swedish physicist Johannes Rydberg similarly predicted that the next noble gas after radon would have atomic number 118, and purely formally derived even heavier congeners of radon at Z = 168, 218, 290, 362, and 460, exactly where the Aufbau principle would predict them to be. In 1922, Niels Bohr predicted the electronic structure of this next noble gas at Z = 118, and suggested that the reason why elements beyond uranium were not seen in nature was because they were too unstable. The German physicist and engineer Richard Swinne published a review paper in 1926 containing predictions on the transuranic elements (he may have coined the term) in which he anticipated modern predictions of an island of stability: he had first hypothesised in 1914 that half-lives should not decrease strictly with atomic number, but suggested instead that there might be some longer-lived elements at Z = 98–102 and Z = 108–110, and speculated that such elements might exist in the Earth's core, in iron meteorites, or in the ice caps of Greenland where they had been locked up from their supposed cosmic origin. By 1955, these elements were called superheavy elements. The first predictions on properties of undiscovered superheavy elements were made in 1957, when the concept of nuclear shells was first explored and an island of stability was theorized to exist around element 126. In 1967, more rigorous calculations were performed, and the island of stability was theorized to be centered at the then-undiscovered flerovium (element 114); this and other subsequent studies motivated many researchers to search for superheavy elements in nature or attempt to synthesize them at accelerators. Many searches for superheavy elements were conducted in the 1970s, all with negative results. To date, synthesis has been attempted for every element up to and including unbiseptium (Z = 127), except unbitrium (Z = 123), with the heaviest successfully synthesized element being oganesson in 2002 and the most recent discovery being that of tennessine in 2010. As some superheavy elements were predicted to lie beyond the seven-period periodic table, an additional eighth period containing these elements was first proposed by Glenn T. Seaborg in 1969. This model continued the pattern in established elements and introduced a new g-block and superactinide series beginning at element 121, raising the number of elements in period 8 compared to known periods. These early calculations failed to consider relativistic effects that break down periodic trends and render simple extrapolation impossible, however. In 1971, Fricke calculated the periodic table up to Z = 172, and discovered that some elements indeed had different properties that break the established pattern, and a 2010 calculation by Pekka Pyykkö also noted that several elements might behave differently than expected. It is unknown how far the periodic table might extend beyond the known 118 elements, as heavier elements are predicted to be increasingly unstable. Glenn T.
Seaborg suggested that practically speaking, the end of the periodic table might come as early as around Z = 120 due to nuclear instability.

Predicted structures of an extended periodic table
There is currently no consensus on the placement of elements beyond atomic number 120 in the periodic table. All hypothetical elements are given an International Union of Pure and Applied Chemistry (IUPAC) systematic element name, for use until the element has been discovered, confirmed, and an official name is approved. These names are typically not used in the literature, and the elements are instead referred to by their atomic numbers; hence, element 164 is usually not called "unhexquadium" or "Uhq" (the systematic name and symbol), but rather "element 164" with symbol "164", "(164)", or "E164".

Aufbau principle
At element 118, the orbitals 1s, 2s, 2p, 3s, 3p, 3d, 4s, 4p, 4d, 4f, 5s, 5p, 5d, 5f, 6s, 6p, 6d, 7s and 7p are assumed to be filled, with the remaining orbitals unfilled. A simple extrapolation from the Aufbau principle would predict the eighth row to fill orbitals in the order 8s, 5g, 6f, 7d, 8p; but after element 120, the proximity of the electron shells makes placement in a simple table problematic.

Fricke
Not all models show the higher elements following the pattern established by lighter elements. Burkhard Fricke et al., who carried out calculations up to element 184 in an article published in 1971, also found some elements to be displaced from the Madelung energy-ordering rule as a result of overlapping orbitals; this is caused by the increasing role of relativistic effects in heavy elements. (They describe chemical properties up to element 184, but only draw a table to element 172.) Fricke et al.'s format is more focused on formal electron configurations than likely chemical behaviour. They place elements 156–164 in groups 4–12 because formally their configurations should be 7d² through 7d¹⁰. However, they differ from the previous d-elements in that the 8s shell is not available for chemical bonding: instead, the 9s shell is. Thus element 164 with 7d¹⁰9s⁰ is noted by Fricke et al. to be analogous to palladium with 4d¹⁰5s⁰, and they consider elements 157–172 to have chemical analogies to groups 3–18 (though they are ambivalent on whether elements 165 and 166 are more like group 1 and 2 elements or more like group 11 and 12 elements, respectively). Thus, elements 157–164 are placed in their table in a group that the authors do not think is chemically most analogous.

Nefedov
Nefedov, Trzhaskovskaya, and Yarzhemskii carried out calculations up to element 164 (results published in 2006). They considered elements 158 through 164 to be homologues of groups 4 through 10, and not 6 through 12, noting similarities of electron configurations to the period 5 transition metals (e.g. element 159 7d⁴9s¹ vs Nb 4d⁴5s¹, element 160 7d⁵9s¹ vs Mo 4d⁵5s¹, element 162 7d⁷9s¹ vs Ru 4d⁷5s¹, element 163 7d⁸9s¹ vs Rh 4d⁸5s¹, element 164 7d¹⁰9s⁰ vs Pd 4d¹⁰5s⁰). They thus agree with Fricke et al. on the chemically most analogous groups, but differ from them in that Nefedov et al. actually place the elements in the chemically most analogous groups. Rg and Cn are given an asterisk to reflect differing configurations from Au and Hg (in the original publication they are drawn as being displaced in the third dimension). In fact Cn probably has an analogous configuration to Hg, and the difference in configuration between Pt and Ds is not marked.
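The naive Madelung ordering that these relativistic calculations deviate from is easy to generate mechanically; the displacements found by Fricke and others are measured against exactly this baseline. A minimal sketch (the function and its names are ours, not from any of the cited papers):

```python
# Generate subshells in Madelung (n + l) order: sort by n + l, then by n.
L_LABELS = "spdfghi"   # l = 0..6; 'g' orbitals first appear in period 8

def madelung_order(max_n):
    subshells = [(n, l) for n in range(1, max_n + 1)
                        for l in range(min(n, len(L_LABELS)))]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

labels = [f"{n}{L_LABELS[l]}" for n, l in madelung_order(10)]
period8 = labels[labels.index("8s"):labels.index("8p") + 1]
print(" < ".join(period8))   # prints: 8s < 5g < 6f < 7d < 8p
```

This reproduces the simple-extrapolation order quoted in the Aufbau section above; relativistic effects then shuffle and overlap these levels, which is why Fricke, Nefedov, and Pyykkö arrive at different placements.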
Pyykkö
Pekka Pyykkö used computer modeling to calculate the positions of elements up to Z = 172 and their possible chemical properties in an article published in 2011. He reproduced the orbital order of Fricke et al., and proposed a refinement of their table by formally assigning slots to elements 121–164 based on ionic configurations. In order to bookkeep the electrons, Pyykkö places some elements out of order: thus 139 and 140 are placed in groups 13 and 14 to reflect that the 8p1/2 shell needs to fill, and he distinguishes separate 5g, 8p1/2, and 6f series. Fricke et al. and Nefedov et al. do not attempt to break up these series.

Kulsha
Computational chemist Andrey Kulsha has suggested two forms of the extended periodic table up to element 172 that build on and refine Nefedov et al.'s versions up to 164 with reference to Pyykkö's calculations. Based on their likely chemical properties, elements 157–172 are placed by both forms as eighth-period congeners of yttrium through xenon in the fifth period; this extends Nefedov et al.'s placement of 157–164 under yttrium through palladium, and agrees with the chemical analogies given by Fricke et al. Kulsha suggested two ways to deal with elements 121–156, which lack precise analogues among earlier elements. In his first form (2011, after Pyykkö's paper was published), elements 121–138 and 139–156 are placed as two separate rows (together called "ultransition elements"), related by the addition of a 5g¹⁸ subshell into the core, as according to Pyykkö's calculations of oxidation states they should, respectively, mimic lanthanides and actinides. In his second suggestion (2016), elements 121–142 form a g-block (as they have 5g activity), while elements 143–156 form an f-block placed under actinium through nobelium. Thus, period 8 emerges with 54 elements, and the next noble element after 118 is 172.

Smits et al.
In 2023 Smits, Düllmann, Indelicato, Nazarewicz, and Schwerdtfeger made another attempt to place elements from 119 to 170 in the periodic table based on their electron configurations. The configurations of a few elements (121–124 and 168) did not allow them to be placed unambiguously. Element 145 appears twice, some places have double occupancy, and others are empty.

Searches for undiscovered elements
Synthesis attempts
Attempts have been made to synthesise the period 8 elements up to unbiseptium, except unbitrium. All such attempts have been unsuccessful. An attempt to synthesise ununennium, the first period 8 element, is ongoing.

Ununennium (E119)
The synthesis of element 119 (ununennium) was first attempted in 1985 by bombarding a target of einsteinium-254 with calcium-48 ions at the superHILAC accelerator at Berkeley, California:

254Es + 48Ca → 302119* → no atoms

No atoms were identified, leading to a limiting cross section of 300 nb. Later calculations suggest that the cross section of the 3n reaction (which would result in 299119 and three neutrons as products) would actually be six hundred thousand times lower than this upper bound, at 0.5 pb. From April to September 2012, an attempt to synthesize the isotopes 295119 and 296119 was made by bombarding a target of berkelium-249 with titanium-50 at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. Based on the theoretically predicted cross section, it was expected that an ununennium atom would be synthesized within five months of the beginning of the experiment.
Moreover, as berkelium-249 decays to californium-249 (the next element) with a short half-life of 327 days, this allowed elements 119 and 120 to be searched for simultaneously.

249Bk + 50Ti → 299119* → no atoms

The experiment was originally planned to continue to November 2012, but was stopped early to make use of the 249Bk target to confirm the synthesis of tennessine (thus changing the projectiles to 48Ca). This reaction of 249Bk + 50Ti was predicted to be the most favorable practical reaction for formation of element 119, as it is rather asymmetrical, though also somewhat cold. (254Es + 48Ca would be superior, but preparing milligram quantities of 254Es for a target is difficult.) Nevertheless, the necessary change from the "silver bullet" 48Ca to 50Ti divides the expected yield of element 119 by about twenty, as the yield is strongly dependent on the asymmetry of the fusion reaction. Due to the predicted short half-lives, the GSI team used new "fast" electronics capable of registering decay events within microseconds. No atoms of element 119 were identified, implying a limiting cross section of 70 fb. The predicted actual cross section is around 40 fb, which is at the limits of current technology. The team at RIKEN in Wakō, Japan began bombarding curium-248 targets with a vanadium-51 beam in January 2018 to search for element 119. Curium was chosen as a target, rather than heavier berkelium or californium, as these heavier targets are difficult to prepare. The 248Cm targets were provided by Oak Ridge National Laboratory. RIKEN developed a high-intensity vanadium beam. The experiment began at a cyclotron while RIKEN upgraded its linear accelerators; the upgrade was completed in 2020. Bombardment may be continued with both machines until the first event is observed; the experiment is currently running intermittently for at least 100 days a year. The RIKEN team's efforts are being financed by the Emperor of Japan. The team at the JINR plans to attempt synthesis of element 119 in the future, probably via the 243Am + 54Cr reaction, but a precise timeframe has not been publicly released.

Unbinilium (E120)
Following their success in obtaining oganesson by the reaction between 249Cf and 48Ca in 2006, the team at the Joint Institute for Nuclear Research (JINR) in Dubna started similar experiments in March–April 2007, in hope of creating element 120 (unbinilium) from nuclei of 58Fe and 244Pu. Isotopes of unbinilium are predicted to have alpha decay half-lives of the order of microseconds. Initial analysis revealed that no atoms of element 120 were produced, providing a limit of 400 fb for the cross section at the energy studied.

244Pu + 58Fe → 302120* → no atoms

The Russian team planned to upgrade their facilities before attempting the reaction again. In April 2007, the team at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany, attempted to create element 120 using uranium-238 and nickel-64:

238U + 64Ni → 302120* → no atoms

No atoms were detected, providing a limit of 1.6 pb for the cross section at the energy provided. The GSI repeated the experiment with higher sensitivity in three separate runs in April–May 2007, January–March 2008, and September–October 2008, all with negative results, reaching a cross section limit of 90 fb.
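To see why such femtobarn-level cross sections translate into months of beam time, one can fold a cross section with a beam intensity and a target thickness. The beam and target numbers below are assumptions of typical magnitude only, not the actual parameters of the GSI or RIKEN experiments:

```python
# Order-of-magnitude estimate: production rate = beam rate x target areal
# density x cross section. All experimental parameters here are assumed.

FB_TO_CM2 = 1e-39                     # 1 femtobarn in cm^2

def atoms_per_day(beam_rate, target_areal_density, sigma_fb):
    """beam_rate in ions/s, areal density in atoms/cm^2, sigma in fb."""
    rate_per_s = beam_rate * target_areal_density * sigma_fb * FB_TO_CM2
    return rate_per_s * 86_400

# ~1 particle-microampere beam on a ~0.2 mg/cm^2 actinide target (assumed):
rate = atoms_per_day(beam_rate=6e12, target_areal_density=5e17, sigma_fb=40)
print(f"{rate:.4f} atoms/day  ->  one atom every {1 / rate:.0f} days")
# With sigma ~ 40 fb this gives roughly one atom every ~100 days, which is
# why searches at the 10-100 fb level need months of continuous beam time.
```

The same arithmetic run at 0.1 fb, the level Kratz predicted for element 120, gives one atom per century of beam time, which is the sense in which such syntheses would "require new methods".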
In June–July 2010, and again in 2011, after upgrading their equipment to allow the use of more radioactive targets, scientists at the GSI attempted the more asymmetrical fusion reaction of curium-248 and chromium-54:

248Cm + 54Cr → 302120* → no atoms

It was expected that the change in reaction would quintuple the probability of synthesizing element 120, as the yield of such reactions is strongly dependent on their asymmetry. Three correlated signals were observed that matched the predicted alpha decay energies of 299120 and its daughter 295Og, as well as the experimentally known decay energy of its granddaughter 291Lv. However, the lifetimes of these possible decays were much longer than expected, and the results could not be confirmed. In August–October 2011, a different team at the GSI using the TASCA facility tried a new, even more asymmetrical reaction:

249Cf + 50Ti → 299120* → no atoms

This was also tried unsuccessfully the next year during the aforementioned attempt to make element 119 in the 249Bk+50Ti reaction, as 249Bk decays to 249Cf. Because of its asymmetry, the reaction between 249Cf and 50Ti was predicted to be the most favorable practical reaction for synthesizing unbinilium, although it is also somewhat cold. No unbinilium atoms were identified, implying a limiting cross-section of 200 fb. Jens Volker Kratz predicted the actual maximum cross-section for producing element 120 by any of these reactions to be around 0.1 fb; in comparison, the world record for the smallest cross section of a successful reaction was 30 fb for the reaction 209Bi(70Zn,n)278Nh, and Kratz predicted a maximum cross-section of 20 fb for producing the neighbouring element 119. If these predictions are accurate, then synthesizing element 119 would be at the limits of current technology, and synthesizing element 120 would require new methods. In May 2021, the JINR announced plans to investigate the 249Cf+50Ti reaction in their new facility. However, the 249Cf target would have had to be made by the Oak Ridge National Laboratory in the United States, and after the Russian invasion of Ukraine began in February 2022, collaboration between the JINR and other institutes completely ceased due to sanctions. Consequently, the JINR now plans to try the 248Cm+54Cr reaction instead. A preparatory experiment for the use of 54Cr projectiles was conducted in late 2023, successfully synthesising 288Lv in the 238U+54Cr reaction, and the hope is for experiments to synthesise element 120 to begin by 2025. Starting from 2022, plans have also been made to use the 88-inch cyclotron at the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California, United States to attempt to make new elements using 50Ti projectiles. First, the 244Pu+50Ti reaction was tested, successfully creating two atoms of 290Lv in 2024. Since this was successful, an attempt to make element 120 in the 249Cf+50Ti reaction is planned to begin in 2025. The Lawrence Livermore National Laboratory (LLNL), which previously collaborated with the JINR, will collaborate with the LBNL on this project.

Unbiunium (E121)
The synthesis of element 121 (unbiunium) was first attempted in 1977 by bombarding a target of uranium-238 with copper-65 ions at the Gesellschaft für Schwerionenforschung in Darmstadt, Germany:

238U + 65Cu → 303121* → no atoms

No atoms were identified.

Unbibium (E122)
The first attempts to synthesize element 122 (unbibium) were performed in 1972 by Flerov et al.
at the Joint Institute for Nuclear Research (JINR), using heavy-ion induced hot fusion reactions with zinc projectiles:

238U + 66,68Zn → 304,306122* → no atoms

These experiments were motivated by early predictions on the existence of an island of stability at N = 184 and Z > 120. No atoms were detected and a yield limit of 5 nb (5,000 pb) was measured. Current results (see flerovium) have shown that the sensitivity of these experiments was too low by at least 3 orders of magnitude. In 2000, the Gesellschaft für Schwerionenforschung (GSI) Helmholtz Center for Heavy Ion Research performed a very similar experiment with much higher sensitivity:

238U + 70Zn → 308122* → no atoms

These results indicate that the synthesis of such heavier elements remains a significant challenge and further improvements of beam intensity and experimental efficiency are required. The sensitivity should be increased to 1 fb in the future for better quality results. Another unsuccessful attempt to synthesize element 122 was carried out in 1978 at the GSI Helmholtz Center, where a natural erbium target was bombarded with xenon-136 ions:

162,164,166,167,168,170Er + 136Xe → 298,300,302,303,304,306122* → no atoms

In particular, the reaction between 170Er and 136Xe was expected to yield alpha-emitters with half-lives of microseconds that would decay down to isotopes of flerovium with half-lives perhaps increasing up to several hours, as flerovium is predicted to lie near the center of the island of stability. After twelve hours of irradiation, nothing was found in this reaction. Following a similar unsuccessful attempt to synthesize element 121 from 238U and 65Cu, it was concluded that half-lives of superheavy nuclei must be less than one microsecond or the cross sections are very small. More recent research into the synthesis of superheavy elements suggests that both conclusions are true. The two attempts in the 1970s to synthesize element 122 were both propelled by the research investigating whether superheavy elements could potentially be naturally occurring. Several experiments studying the fission characteristics of various superheavy compound nuclei such as 306122* were performed between 2000 and 2004 at the Flerov Laboratory of Nuclear Reactions. Two nuclear reactions were used, namely 248Cm + 58Fe and 242Pu + 64Ni. The results reveal how superheavy nuclei fission predominantly by expelling closed-shell nuclei such as 132Sn (Z = 50, N = 82). It was also found that the yield for the fusion-fission pathway was similar between 48Ca and 58Fe projectiles, suggesting a possible future use of 58Fe projectiles in superheavy element formation.

Unbiquadium (E124)
Scientists at GANIL (Grand Accélérateur National d'Ions Lourds) attempted to measure the direct and delayed fission of compound nuclei of elements with Z = 114, 120, and 124 in order to probe shell effects in this region and to pinpoint the next spherical proton shell. This is because having complete nuclear shells (or, equivalently, having a magic number of protons or neutrons) would confer more stability on the nuclei of such superheavy elements, thus moving closer to the island of stability. In 2006, with full results published in 2008, the team provided results from a reaction involving the bombardment of a natural germanium target with uranium ions:

238U + 70,72,73,74,76Ge → 308,310,311,312,314124* → fission

The team reported that they had been able to identify compound nuclei fissioning with half-lives > 10⁻¹⁸ s.
This result suggests a strong stabilizing effect at Z = 124 and points to the next proton shell at Z > 120, not at Z = 114 as previously thought. A compound nucleus is a loose combination of nucleons that have not yet arranged themselves into nuclear shells. It has no internal structure and is held together only by the collision forces between the target and projectile nuclei. It is estimated that it requires around 10⁻¹⁴ s for the nucleons to arrange themselves into nuclear shells, at which point the compound nucleus becomes a nuclide, and this number is used by IUPAC as the minimum half-life a claimed isotope must have to potentially be recognised as being discovered. Thus, the GANIL experiments do not count as a discovery of element 124. The fission of the compound nucleus 312124* was also studied in 2006 at the tandem ALPI heavy-ion accelerator at the Laboratori Nazionali di Legnaro (Legnaro National Laboratories) in Italy. Similarly to previous experiments conducted at the JINR (Joint Institute for Nuclear Research), fission fragments clustered around doubly magic nuclei such as 132Sn (Z = 50, N = 82), revealing a tendency for superheavy nuclei to expel such doubly magic nuclei in fission. The average number of neutrons per fission from the 312124 compound nucleus (relative to lighter systems) was also found to increase, confirming that the trend of heavier nuclei emitting more neutrons during fission continues into the superheavy mass region.

Unbipentium (E125)
The first and only attempt to synthesize element 125 (unbipentium) was conducted in Dubna in 1970–1971 using zinc ions and an americium-243 target:

243Am + 66,68Zn → 309,311125* → no atoms

No atoms were detected, and a cross section limit of 5 nb was determined. This experiment was motivated by the possibility of greater stability for nuclei around Z ~ 126 and N ~ 184, though more recent research suggests the island of stability may instead lie at a lower atomic number (such as copernicium, Z = 112), and the synthesis of heavier elements such as element 125 will require more sensitive experiments.

Unbihexium (E126)
The first and only attempt to synthesize element 126 (unbihexium), which was unsuccessful, was performed in 1971 at CERN (European Organization for Nuclear Research) by René Bimbot and John M. Alexander using the hot fusion reaction:

232Th + 84Kr → 316126* → no atoms

High-energy (13–15 MeV) alpha particles were observed and taken as possible evidence for the synthesis of element 126. Subsequent unsuccessful experiments with higher sensitivity suggest that the 10 mb sensitivity of this experiment was too low; hence, the formation of element 126 nuclei in this reaction is highly unlikely.

Unbiseptium (E127)
The first and only attempt to synthesize element 127 (unbiseptium), which was unsuccessful, was performed in 1978 at the UNILAC accelerator at the GSI Helmholtz Center, where a natural tantalum target was bombarded with xenon-136 ions:

180,181Ta + 136Xe → 316,317127* → no atoms

Searches in nature
A study in 1976 by a group of American researchers from several universities proposed that primordial superheavy elements, mainly livermorium and elements 124, 126, and 127, could be a cause of unexplained radiation damage (particularly radiohalos) in minerals. This prompted many researchers to search for them in nature from 1976 to 1983.
A group led by Tom Cahill, a professor at the University of California at Davis, claimed in 1976 that they had detected alpha particles and X-rays with the right energies to cause the damage observed, supporting the presence of these elements. In particular, the presence of long-lived (on the order of 10⁹ years) nuclei of elements 124 and 126, along with their decay products, at an abundance of 10⁻¹¹ relative to their possible congeners uranium and plutonium, was conjectured. Others claimed that none had been detected, and questioned the proposed characteristics of primordial superheavy nuclei. In particular, they cited that any such superheavy nuclei must have a closed neutron shell at N = 184 or N = 228, and this necessary condition for enhanced stability only exists in neutron-deficient isotopes of livermorium or neutron-rich isotopes of the other elements, which would not be beta-stable, unlike most naturally occurring isotopes. This activity was also proposed to be caused by nuclear transmutations in natural cerium, raising further ambiguity upon this claimed observation of superheavy elements. On April 24, 2008, a group led by Amnon Marinov at the Hebrew University of Jerusalem claimed to have found single atoms of 292122 in naturally occurring thorium deposits at an abundance of between 10⁻¹¹ and 10⁻¹² relative to thorium. The claim of Marinov et al. was criticized by a part of the scientific community. Marinov claimed that he had submitted the article to the journals Nature and Nature Physics but both turned it down without sending it for peer review. The 292122 atoms were claimed to be superdeformed or hyperdeformed isomers, with a half-life of at least 100 million years. A criticism of the technique, previously used in purportedly identifying lighter thorium isotopes by mass spectrometry, was published in Physical Review C in 2008; a rebuttal by the Marinov group was published in Physical Review C after the comment. A repeat of the thorium experiment using the superior method of Accelerator Mass Spectrometry (AMS) failed to confirm the results, despite a 100-fold better sensitivity. This result throws considerable doubt on the results of the Marinov collaboration with regard to their claims of long-lived isotopes of thorium, roentgenium, and element 122. It is still possible that traces of unbibium might only exist in some thorium samples, although this is unlikely. The possible extent of primordial superheavy elements on Earth today is uncertain. Even if they are confirmed to have caused the radiation damage long ago, they might now have decayed to mere traces, or even be completely gone. It is also uncertain if such superheavy nuclei may be produced naturally at all, as spontaneous fission is expected to terminate the r-process responsible for heavy element formation between mass number 270 and 290, well before elements beyond 120 may be formed. A recent hypothesis tries to explain the spectrum of Przybylski's Star by naturally occurring flerovium and element 120.

Predicted properties of eighth-period elements
Element 118, oganesson, is the heaviest element that has been synthesized. The next two elements, elements 119 and 120, should form an 8s series and be an alkali and alkaline earth metal, respectively. Beyond element 120, the superactinide series is expected to begin, in which the 8s electrons and the filling of the 8p1/2, 7d3/2, 6f, and 5g subshells determine the chemistry of these elements.
Complete and accurate CCSD calculations are not available for elements beyond 122 because of the extreme complexity of the situation: the 5g, 6f, and 7d orbitals should have about the same energy level, and in the region of element 160, the 9s, 8p3/2, and 9p1/2 orbitals should also be about equal in energy. This will cause the electron shells to mix, so that the block concept no longer applies very well, and will also result in novel chemical properties that will make positioning some of these elements in a periodic table very difficult.

Chemical and physical properties
Elements 119 and 120
{| class="wikitable"
|+ Some predicted properties of elements 119 and 120
! Property !! 119 !! 120
|-
! Standard atomic weight
| [322] || [325]
|-
! Group
| 1 || 2
|-
! Valence electron configuration
| 8s¹ || 8s²
|-
! Stable oxidation states
| 1, 3 || 2, 4
|-
! First ionization energy
| 463.1 kJ/mol || 563.3 kJ/mol
|-
! Metallic radius
| 260 pm || 200 pm
|-
! Density
| 3 g/cm³ || 7 g/cm³
|-
! Melting point
| ||
|-
! Boiling point
| ||
|}
The first two elements of period 8 will be ununennium and unbinilium, elements 119 and 120. Their electron configurations should have the 8s orbital filled. This orbital is relativistically stabilized and contracted; thus, elements 119 and 120 should be more like rubidium and strontium than their immediate neighbours above, francium and radium. Another effect of the relativistic contraction of the 8s orbital is that the atomic radii of these two elements should be about the same as those of francium and radium. They should behave like normal alkali and alkaline earth metals (albeit less reactive than their immediate vertical neighbours), normally forming +1 and +2 oxidation states, respectively, but the relativistic destabilization of the 7p3/2 subshell and the relatively low ionization energies of the 7p3/2 electrons should make higher oxidation states like +3 and +4 (respectively) possible as well.

Superactinides
The superactinides may range from elements 121 through 157, which can be classified as the 5g and 6f elements of the eighth period, together with the first 7d element. In the superactinide series, the 7d, 8p, 6f and 5g shells should all fill simultaneously. This creates very complicated situations, so much so that complete and accurate CCSD calculations have been done only for elements 121 and 122. The first superactinide, unbiunium (element 121), should be similar to lanthanum and actinium: its main oxidation state should be +3, although the closeness of the valence subshells' energy levels may permit higher oxidation states, just as in elements 119 and 120. Relativistic stabilization of the 8p subshell should result in a ground-state 8s²8p¹ valence electron configuration for element 121, in contrast to the ds² configurations of lanthanum and actinium; nevertheless, this anomalous configuration does not appear to affect its calculated chemistry, which remains similar to that of actinium. Its first ionization energy is predicted to be 429.4 kJ/mol, which would be lower than those of all known elements except for the alkali metals potassium, rubidium, caesium, and francium: this value is even lower than that of the period 8 alkali metal ununennium (463.1 kJ/mol). Similarly, the next superactinide, unbibium (element 122), may be similar to cerium and thorium, with a main oxidation state of +4, but would have a ground-state 7d¹8s²8p¹ or 8s²8p² valence electron configuration, unlike thorium's 6d²7s² configuration.
Hence, its first ionization energy would be smaller than thorium's (Th: 6.3 eV; element 122: 5.6 eV) because of the greater ease of ionizing unbibium's 8p electron than thorium's 6d electron. The collapse of the 5g orbital itself is delayed until around element 125; the electron configurations of the 119-electron isoelectronic series are expected to be [Og]8s¹ for elements 119 through 122, [Og]6f¹ for elements 123 and 124, and [Og]5g¹ for element 125 onwards. In the first few superactinides, the binding energies of the added electrons are predicted to be small enough that they can lose all their valence electrons; for example, unbihexium (element 126) could easily form a +8 oxidation state, and even higher oxidation states for the next few elements may be possible. Element 126 is also predicted to display a variety of other oxidation states: recent calculations have suggested a stable monofluoride 126F may be possible, resulting from a bonding interaction between the 5g orbital on element 126 and the 2p orbital on fluorine. Other predicted oxidation states include +2, +4, and +6; +4 is expected to be the most usual oxidation state of unbihexium. The superactinides from unbipentium (element 125) to unbiennium (element 129) are predicted to exhibit a +6 oxidation state and form hexafluorides, though 125F6 and 126F6 are predicted to be relatively weakly bound. The bond dissociation energies are expected to greatly increase at element 127 and even more so at element 129. This suggests a shift from strong ionic character in fluorides of element 125 to more covalent character, involving the 8p orbital, in fluorides of element 129. The bonding in these superactinide hexafluorides is mostly between the highest 8p subshell of the superactinide and the 2p subshell of fluorine, unlike how uranium uses its 5f and 6d orbitals for bonding in uranium hexafluoride. Despite the ability of early superactinides to reach high oxidation states, it has been calculated that the 5g electrons will be most difficult to ionize; the 125⁶⁺ and 126⁷⁺ ions are expected to bear a 5g¹ configuration, similar to the 5f¹ configuration of the Np⁶⁺ ion. Similar behavior is observed in the low chemical activity of the 4f electrons in lanthanides; this is a consequence of the 5g orbitals being small and deeply buried in the electron cloud. The presence of electrons in g-orbitals, which do not exist in the ground state electron configuration of any currently known element, should allow presently unknown hybrid orbitals to form and influence the chemistry of the superactinides in new ways, although the absence of g electrons in known elements makes predicting superactinide chemistry more difficult.

{| class="wikitable"
|+ Some predicted compounds of the superactinides (X = a halogen)
!
! 121 !! 122 !! 123 !! 124 !! 125 !! 126 !! 127 !! 128 !! 129 !! 132 !! 142 !! 143 !! 144 !! 145 !! 146 !! 148 !! 153 !! 154 !! 155 !! 156 !! 157
|-
! Compound
| 121X3 || 122X4 || 123X5 || 124X6 || 125F, 125F6 || 126F, 126F6, 126O4 || 127F6 || 128F6 || 129F, 129F6 || || 142X4, 142X6 || 143F6 || 144X6, 144F8, 144O4 || 145F6 || || 148O6 || || || || ||
|-
! Analogs
| LaX3, AcX3 || CeX4, ThX4 || || || || || || || || || ThF4 || || UF6, PuF8, PuO4 || || || UO6 || || || || ||
|-
! Oxidation states
| 3 || 4 || 5 || 6 || 1, 6, 7 || 1, 2, 4, 6, 8 || 6 || 6 || 1, 6 || 6 || 4, 6 || 6, 8 || 3, 4, 5, 6, 8 || 6 || 8 || 12 || 3 || 0, 2 || 3, 5 || 2 || 3
|}
In the later superactinides, the oxidation states should become lower.
By element 132, the predominant and most stable oxidation state will be only +6; this is further reduced to +3 and +4 by element 144, and at the end of the superactinide series it will be only +2 (and possibly even 0), because the 6f shell, which is being filled at that point, is deep inside the electron cloud and the 8s and 8p electrons are bound too strongly to be chemically active. The 5g shell should be filled at element 144 and the 6f shell at around element 154, and at this region of the superactinides the 8p electrons are bound so strongly that they are no longer active chemically, so that only a few electrons can participate in chemical reactions. Calculations by Fricke et al. predict that at element 154, the 6f shell is full and there are no d- or other electron wave functions outside the chemically inactive 8s and 8p1/2 shells. This may cause element 154 to be rather unreactive, with noble gas-like properties. Calculations by Pyykkö nonetheless expect that at element 155, the 6f shell is still chemically ionizable: 155³⁺ should have a full 6f shell, and the fourth ionization potential should be between those of terbium and dysprosium, both of which are known in the +4 state. Similarly to the lanthanide and actinide contractions, there should be a superactinide contraction in the superactinide series where the ionic radii of the superactinides are smaller than expected. In the lanthanides, the contraction is about 4.4 pm per element; in the actinides, it is about 3 pm per element. The contraction is larger in the lanthanides than in the actinides due to the greater localization of the 4f wave function as compared to the 5f wave function. Comparisons with the wave functions of the outer electrons of the lanthanides, actinides, and superactinides lead to a prediction of a contraction of about 2 pm per element in the superactinides; although this is smaller than the contractions in the lanthanides and actinides, its total effect is larger due to the fact that 32 electrons are filled in the deeply buried 5g and 6f shells, instead of just 14 electrons being filled in the 4f and 5f shells in the lanthanides and actinides, respectively. Pekka Pyykkö divides these superactinides into three series: a 5g series (elements 121 to 138), an 8p1/2 series (elements 139 to 140), and a 6f series (elements 141 to 155), also noting that there would be a great deal of overlapping between energy levels and that the 6f, 7d, or 8p1/2 orbitals could well also be occupied in the early superactinide atoms or ions. He also expects that they would behave more like "superlanthanides", in the sense that the 5g electrons would mostly be chemically inactive, similarly to how only one or two 4f electrons in each lanthanide are ever ionized in chemical compounds. He also predicted that the possible oxidation states of the superactinides might rise very high in the 6f series, to values such as +12 in element 148. Andrey Kulsha has called the elements 121 to 156 "ultransition" elements and has proposed to split them into two series of eighteen each, one from elements 121 to 138 and another from elements 139 to 156. The first would be analogous to the lanthanides, with oxidation states mainly ranging from +4 to +6, as the filling of the 5g shell dominates and neighbouring elements are very similar to each other, creating an analogy to uranium, neptunium, and plutonium.
The second would be analogous to the actinides: at the beginning (around elements in the 140s) very high oxidation states would be expected, as the 6f shell rises above the 7d one, but after that the typical oxidation states would lower, and in elements from the 150s onwards the 8p electrons would stop being chemically active. Because the two rows are separated by the addition of a complete 5g¹⁸ subshell, they could be considered analogues of each other as well. As an example from the late superactinides, element 156 is expected to exhibit mainly the +2 oxidation state, on account of its electron configuration with easily removed 7d electrons over a stable [Og]5g¹⁸6f¹⁴8s²8p² core. It can thus be considered a heavier congener of nobelium, which likewise has a pair of easily removed 7s electrons over a stable [Rn]5f¹⁴ core, and is usually in the +2 state (strong oxidisers are required to obtain nobelium in the +3 state). Its first ionization energy should be about 400 kJ/mol and its metallic radius approximately 170 picometers. With a relative atomic mass of around 445 u, it should be a very heavy metal with a density of around 26 g/cm³.

Elements 157 to 166
The 7d transition metals in period 8 are expected to be elements 157 to 166. Although the 8s and 8p1/2 electrons are bound so strongly in these elements that they should not be able to take part in any chemical reactions, the 9s and 9p1/2 levels are expected to be readily available for hybridization. These 7d elements should be similar to the 4d elements yttrium through cadmium. In particular, element 164 with a 7d¹⁰9s⁰ electron configuration shows clear analogies with palladium with its 4d¹⁰5s⁰ electron configuration. The noble metals of this series of transition metals are not expected to be as noble as their lighter homologues, due to the absence of an outer s shell for shielding and also because the 7d shell is strongly split into two subshells due to relativistic effects. This causes the first ionization energies of the 7d transition metals to be smaller than those of their lighter congeners. Theoretical interest in the chemistry of unhexquadium is largely motivated by theoretical predictions that it, especially the isotopes 472164 and 482164 (with 164 protons and 308 or 318 neutrons), would be at the center of a hypothetical second island of stability (the first being centered on copernicium, particularly the isotopes 291Cn, 293Cn, and 296Cn, which are expected to have half-lives of centuries or millennia). Calculations predict that the 7d electrons of element 164 (unhexquadium) should participate very readily in chemical reactions, so that it should be able to show stable +6 and +4 oxidation states in addition to the normal +2 state in aqueous solutions with strong ligands. Element 164 should thus be able to form compounds like 164(CO)4 and 164(PF3)4 (both tetrahedral like the corresponding palladium compounds), as well as a linear dicyanide complex, which is very different behavior from that of lead, of which element 164 would be a heavier homologue if not for relativistic effects. Nevertheless, the divalent state would be the main one in aqueous solution (although the +4 and +6 states would be possible with stronger ligands), and unhexquadium(II) should behave more similarly to lead than unhexquadium(IV) and unhexquadium(VI). Element 164 is expected to be a soft Lewis acid and to have an Ahrland softness parameter close to 4 eV. It should be at most moderately reactive, having a first ionization energy that should be around 685 kJ/mol, comparable to that of molybdenum.
Due to the lanthanide, actinide, and superactinide contractions, element 164 should have a metallic radius of only 158 pm, very close to that of the much lighter magnesium, despite its expected atomic weight of around 474 u, which is about 19.5 times the atomic weight of magnesium. This small radius and high weight cause it to be expected to have an extremely high density of around 46 g·cm⁻³, over twice that of osmium, currently the most dense element known, at 22.61 g·cm⁻³; element 164 should be the second most dense element in the first 172 elements in the periodic table, with only its neighbor unhextrium (element 163) being more dense (at 47 g·cm⁻³). Metallic element 164 should have a very large cohesive energy (enthalpy of crystallization) due to its covalent bonds, most probably resulting in a high melting point. In the metallic state, element 164 should be quite noble and analogous to palladium and platinum. Fricke et al. suggested some formal similarities to oganesson, as both elements have closed-shell configurations and similar ionisation energies, although they note that while oganesson would be a very bad noble gas, element 164 would be a good noble metal.

Elements 165 (unhexpentium) and 166 (unhexhexium), the last two 7d metals, should behave similarly to alkali and alkaline earth metals when in the +1 and +2 oxidation states, respectively. The 9s electrons should have ionization energies comparable to those of the 3s electrons of sodium and magnesium, due to relativistic effects causing the 9s electrons to be much more strongly bound than non-relativistic calculations would predict. Elements 165 and 166 should normally exhibit the +1 and +2 oxidation states, respectively, although the ionization energies of the 7d electrons are low enough to allow higher oxidation states like +3 for element 165. The oxidation state +4 for element 166 is less likely, creating a situation similar to the lighter elements in groups 11 and 12 (particularly gold and mercury). As with mercury but not copernicium, ionization of element 166 to 166²⁺ is expected to result in a 7d¹⁰ configuration, corresponding to the loss of the s-electrons but not the d-electrons, making it more analogous to the lighter "less relativistic" group 12 elements zinc, cadmium, and mercury.

{| class="wikitable"
|+ Some predicted properties of elements 156–166 (the metallic radii and densities are first approximations; the most analogous group is given first, followed by other similar groups)
! Property !! 156 !! 157 !! 158 !! 159 !! 160 !! 161 !! 162 !! 163 !! 164 !! 165 !! 166
|-
! Standard atomic weight
| [445] || [448] || [452] || [456] || [459] || [463] || [466] || [470] || [474] || [477] || [481]
|-
! Group
| Yb group || 3 || 4 || 5 || 6 || 7 || 8 || 9 || 10 || 11 (1) || 12 (2)
|-
! Valence electron configuration
| 7d² || 7d³ || 7d⁴ || 7d⁵ || 7d⁶ || 7d⁷ || 7d⁸ || 7d⁹ || 7d¹⁰ || 7d¹⁰9s¹ || 7d¹⁰9s²
|-
! Stable oxidation states
| 2 || 3 || 4 || 1, 5 || 2, 6 || 3, 7 || 4, 8 || 5 || 0, 2, 4, 6 || 1, 3 || 2
|-
! First ionization energy
| 400 kJ/mol || 450 kJ/mol || 520 kJ/mol || 340 kJ/mol || 420 kJ/mol || 470 kJ/mol || 560 kJ/mol || 620 kJ/mol || 690 kJ/mol || 520 kJ/mol || 630 kJ/mol
|-
! Metallic radius
| 170 pm || 163 pm || 157 pm || 152 pm || 148 pm || 148 pm || 149 pm || 152 pm || 158 pm || 250 pm || 200 pm
|-
! Density
| 26 g/cm³ || 28 g/cm³ || 30 g/cm³ || 33 g/cm³ || 36 g/cm³ || 40 g/cm³ || 45 g/cm³ || 47 g/cm³ || 46 g/cm³ || 7 g/cm³ || 11 g/cm³
|}

Elements 167 to 172
The next six elements on the periodic table are expected to be the last main-group elements in their period, and are likely to be similar to the 5p elements indium through xenon. In elements 167 to 172, the 9p1/2 and 8p3/2 shells will be filled. Their energy eigenvalues are so close together that they behave as one combined p-subshell, similar to the non-relativistic 2p and 3p subshells. Thus, the inert-pair effect does not occur, and the most common oxidation states of elements 167 to 170 are expected to be +3, +4, +5, and +6, respectively. Element 171 (unseptunium) is expected to show some similarities to the halogens, showing various oxidation states ranging from −1 to +7, although its physical properties are expected to be closer to those of a metal. Its electron affinity is expected to be 3.0 eV, allowing it to form H171, analogous to a hydrogen halide. The 171⁻ ion is expected to be a soft base, comparable to iodide (I⁻). Element 172 (unseptbium) is expected to be a noble gas with chemical behaviour similar to that of xenon, as their ionization energies should be very similar (Xe, 1170.4 kJ/mol; element 172, 1090 kJ/mol). The only main difference between them is that element 172, unlike xenon, is expected to be a liquid or a solid at standard temperature and pressure due to its much higher atomic weight. Unseptbium is expected to be a strong Lewis acid, forming fluorides and oxides, similarly to its lighter congener xenon. Because of some analogy of elements 165–172 to periods 2 and 3, Fricke et al. considered them to form a ninth period of the periodic table, while the eighth period was taken by them to end at the noble metal element 164. This ninth period would be similar to the second and third period in having no transition metals. That being said, the analogy is incomplete for elements 165 and 166; although they do start a new s-shell (9s), this is above a d-shell, making them chemically more similar to groups 11 and 12.

{| class="wikitable"
|+ Some predicted properties of elements 167–172 (the metallic or covalent radii and densities are first approximations)
! Property !! 167 !! 168 !! 169 !! 170 !! 171 !! 172
|-
! Standard atomic weight
| [485] || [489] || [493] || [496] || [500] || [504]
|-
! Group
| 13 || 14 || 15 || 16 || 17 || 18
|-
! Valence electron configuration
| 9s²9p¹ || 9s²9p² || 9s²9p²8p¹ || 9s²9p²8p² || 9s²9p²8p³ || 9s²9p²8p⁴
|-
! Stable oxidation states
| 3 || 4 || 5 || 6 || −1, 3, 7 || 0, 4, 6, 8
|-
! First ionization energy
| 620 kJ/mol || 720 kJ/mol || 800 kJ/mol || 890 kJ/mol || 984 kJ/mol || 1090 kJ/mol
|-
! Metallic or covalent radius
| 190 pm || 180 pm || 175 pm || 170 pm || 165 pm || 220 pm
|-
! Density
| 17 g/cm³ || 19 g/cm³ || 18 g/cm³ || 17 g/cm³ || 16 g/cm³ || 9 g/cm³
|}

Beyond element 172
Beyond element 172, there is the potential to fill the 6g, 7f, 8d, 10s, 10p1/2, and perhaps 6h11/2 shells. These electrons would be very loosely bound, potentially rendering extremely high oxidation states reachable, though the electrons would become more tightly bound as the ionic charge rises. Thus, there will probably be another very long transition series, like the superactinides. In element 173 (unsepttrium), the outermost electron might enter the 6g7/2, 9p3/2, or 10s subshells.
Because spin–orbit interactions would create a very large energy gap between these and the 8p3/2 subshell, this outermost electron is expected to be very loosely bound and very easily lost to form a 173⁺ cation. As a result, element 173 is expected to behave chemically like an alkali metal, and one that might be far more reactive than even caesium (francium and element 119 being less reactive than caesium due to relativistic effects): the calculated ionisation energy for element 173 is 3.070 eV, compared to the experimentally known 3.894 eV for caesium. Element 174 (unseptquadium) may add an 8d electron and form a closed-shell 174²⁺ cation; its calculated ionisation energy is 3.614 eV. Element 184 (unoctquadium) was significantly targeted in early predictions, as it was originally speculated that 184 would be a proton magic number: it is predicted to have an electron configuration of [172] 6g⁵ 7f⁴ 8d³, with at least the 7f and 8d electrons chemically active. Its chemical behaviour is expected to be similar to uranium and neptunium, as further ionisation past the +6 state (corresponding to removal of the 6g electrons) is likely to be unprofitable; the +4 state should be most common in aqueous solution, with +5 and +6 reachable in solid compounds.

End of the periodic table
The number of physically possible elements is unknown. A low estimate is that the periodic table may end soon after the island of stability, which is expected to center on Z = 126, as the extension of the periodic and nuclide tables is restricted by the proton and the neutron drip lines and by stability toward alpha decay and spontaneous fission. One calculation by Y. Gambhir et al., analyzing nuclear binding energy and stability in various decay channels, suggests a limit to the existence of bound nuclei at Z = 146. Other predictions of an end to the periodic table include Z = 128 (John Emsley) and Z = 155 (Albert Khazan).

Elements above the atomic number 137
It is a "folk legend" among physicists that Richard Feynman suggested that neutral atoms could not exist for atomic numbers greater than Z = 137, on the grounds that the relativistic Dirac equation predicts that the ground-state energy of the innermost electron in such an atom would be an imaginary number. Here, the number 137 arises as the inverse of the fine-structure constant. By this argument, neutral atoms cannot exist beyond atomic number 137, and therefore a periodic table of elements based on electron orbitals breaks down at this point. However, this argument presumes that the atomic nucleus is pointlike. A more accurate calculation must take into account the small, but nonzero, size of the nucleus, which is predicted to push the limit further to Z ≈ 173.

Bohr model
The Bohr model exhibits difficulty for atoms with atomic number greater than 137, for the speed of an electron in a 1s electron orbital, v, is given by

v = Zαc

where Z is the atomic number and α is the fine-structure constant, a measure of the strength of electromagnetic interactions. Under this approximation, any element with an atomic number greater than 137 would require its 1s electrons to travel faster than c, the speed of light. Hence, the non-relativistic Bohr model is inaccurate when applied to such an element.

Relativistic Dirac equation
The relativistic Dirac equation gives the ground state energy as

E = mc²√(1 − Z²α²)

where m is the rest mass of the electron.
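A quick numerical check of this point-nucleus expression makes the breakdown visible; the sketch below evaluates the formula above with a rounded fine-structure constant and is not a full relativistic atomic-structure calculation:

```python
# Ground-state 1s energy from the point-nucleus Dirac formula, in units of
# the electron rest energy m*c^2. Beyond Z*alpha = 1 the root turns imaginary.

ALPHA = 1 / 137.035999                # fine-structure constant (approximate)

def dirac_1s_energy(z):
    """E / (m c^2) for a point nucleus; complex beyond the critical Z."""
    return complex(1 - (z * ALPHA) ** 2) ** 0.5

for z in (1, 80, 118, 137, 138):
    e = dirac_1s_energy(z)
    tag = "real" if abs(e.imag) < 1e-12 else "imaginary part!"
    print(f"Z = {z:3d}: E/mc^2 = {e:.4f}  ({tag})")
```

The energy stays real up to Z = 137 and acquires an imaginary part at Z = 138, which is exactly the point-nucleus pathology; as the text explains, a finite-size nucleus removes the singularity and pushes the critical Z to about 173.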
For Z > 137, the wave function of the Dirac ground state is oscillatory, rather than bound, and there is no gap between the positive and negative energy spectra, as in the Klein paradox. More accurate calculations taking into account the effects of the finite size of the nucleus indicate that the binding energy first exceeds 2mc2 for Z > Zcr probably between 168 and 172. For Z > Zcr, if the innermost orbital (1s) is not filled, the electric field of the nucleus will pull an electron out of the vacuum, resulting in the spontaneous emission of a positron. This diving of the 1s subshell into the negative continuum has often been taken to constitute an "end" to the periodic table, but in fact it does not impose such a limit, as such resonances can be interpreted as Gamow states. Nonetheless, the accurate description of such states in a multi-electron system, needed to extend calculations and the periodic table past Zcr ≈ 172, are still open problems. Atoms with atomic numbers above Zcr ≈ 172 have been termed supercritical atoms. Supercritical atoms cannot be totally ionised because their 1s subshell would be filled by spontaneous pair creation in which an electron-positron pair is created from the negative continuum, with the electron being bound and the positron escaping. However, the strong field around the atomic nucleus is restricted to a very small region of space, so that the Pauli exclusion principle forbids further spontaneous pair creation once the subshells that have dived into the negative continuum are filled. Elements 173–184 have been termed weakly supercritical atoms as for them only the 1s shell has dived into the negative continuum; the 2p1/2 shell is expected to join around element 185 and the 2s shell around element 245. Experiments have so far not succeeded in detecting spontaneous pair creation from assembling supercritical charges through the collision of heavy nuclei (e.g. colliding lead with uranium to momentarily give an effective Z of 174; uranium with uranium gives effective Z = 184 and uranium with californium gives effective Z = 190). Even though passing Zcr does not mean elements can no longer exist, the increasing concentration of the 1s density close to the nucleus would likely make these electrons more vulnerable to K electron capture as Zcr is approached. For such heavy elements, these 1s electrons would likely spend a significant fraction of time so close to the nucleus that they are actually inside it. This may pose another limit to the periodic table. Because of the factor of m, muonic atoms become supercritical at a much larger atomic number of around 2200, as muons are about 207 times as heavy as electrons. Quark matter It has also been posited that in the region beyond A > 300, an entire "continent of stability" consisting of a hypothetical phase of stable quark matter, comprising freely flowing up and down quarks rather than quarks bound into protons and neutrons, may exist. Such a form of matter is theorized to be a ground state of baryonic matter with a greater binding energy per baryon than nuclear matter, favoring the decay of nuclear matter beyond this mass threshold into quark matter. If this state of matter exists, it could possibly be synthesized in the same fusion reactions leading to normal superheavy nuclei, and would be stabilized against fission as a consequence of its stronger binding that is enough to overcome Coulomb repulsion. 
Calculations published in 2020 suggest stability of up-down quark matter (udQM) nuggets against conventional nuclei beyond A ~ 266, and also show that udQM nuggets become supercritical earlier (Zcr ~ 163, A ~ 609) than conventional nuclei (Zcr ~ 177, A ~ 480). Nuclear properties Magic numbers and the island of stability The stability of nuclei decreases greatly with the increase in atomic number after curium, element 96, so that all isotopes with an atomic number above 101 decay radioactively with a half-life under a day. No elements with atomic numbers above 82 (after lead) have stable isotopes. Nevertheless, because of reasons not very well understood yet, there is a slight increased nuclear stability around atomic numbers 110–114, which leads to the appearance of what is known in nuclear physics as the "island of stability". This concept, proposed by University of California professor Glenn Seaborg, explains why superheavy elements last longer than predicted. Calculations according to the Hartree–Fock–Bogoliubov method using the non-relativistic Skyrme interaction have proposed Z = 126 as a closed proton shell. In this region of the periodic table, N = 184, N = 196, and N = 228 have been suggested as closed neutron shells. Therefore, the isotopes of most interest are 310126, 322126, and 354126, for these might be considerably longer-lived than other isotopes. Element 126, having a magic number of protons, is predicted to be more stable than other elements in this region, and may have nuclear isomers with very long half-lives. It is also possible that the island of stability is instead centered at 306122, which may be spherical and doubly magic. Probably, the island of stability occurs around Z = 114–126 and N = 184, with lifetimes probably around hours to days. Beyond the shell closure at N = 184, spontaneous fission lifetimes should drastically drop below 10−15 seconds – too short for a nucleus to obtain an electron cloud and participate in any chemistry. That being said, such lifetimes are very model-dependent, and predictions range across many orders of magnitude. Taking nuclear deformation and relativistic effects into account, an analysis of single-particle levels predicts new magic numbers for superheavy nuclei at Z = 126, 138, 154, and 164 and N = 228, 308, and 318. Therefore, in addition to the island of stability centered at 291Cn, 293Cn, and 298Fl, further islands of stability may exist around the doubly magic 354126 as well as 472164 or 482164. These nuclei are predicted to be beta-stable and decay by alpha emission or spontaneous fission with relatively long half-lives, and confer additional stability on neighboring N = 228 isotones and elements 152–168, respectively. On the other hand, the same analysis suggests that proton shell closures may be relatively weak or even nonexistent in some cases such as 354126, meaning that such nuclei might not be doubly magic and stability will instead be primarily determined by strong neutron shell closures. Additionally, due to the enormously greater forces of electromagnetic repulsion that must be overcome by the strong force at the second island (Z = 164), it is possible that nuclei around this region only exist as resonances and cannot stay together for a meaningful amount of time. It is also possible that some of the superactinides between these series may not actually exist because they are too far from both islands, in which case the periodic table might end around Z = 130. 
The area of elements 121–156 where periodicity is in abeyance is quite similar to the gap between the two islands. Beyond element 164, the fissility line defining the limit of stability with respect to spontaneous fission may converge with the neutron drip line, posing a limit to the existence of heavier elements. Nevertheless, further magic numbers have been predicted at Z = 210, 274, and 354 and N = 308, 406, 524, 644, and 772, with two beta-stable doubly magic nuclei found at 616210 and 798274; the same calculation method reproduced the predictions for 298Fl and 472164. (The doubly magic nuclei predicted for Z = 354 are beta-unstable, with 998354 being neutron-deficient and 1126354 being neutron-rich.) Although additional stability toward alpha decay and fission are predicted for 616210 and 798274, with half-lives up to hundreds of microseconds for 616210, there will not exist islands of stability as significant as those predicted at Z = 114 and 164. As the existence of superheavy elements is very strongly dependent on stabilizing effects from closed shells, nuclear instability and fission will likely determine the end of the periodic table beyond these islands of stability. The International Union of Pure and Applied Chemistry (IUPAC) defines an element to exist if its lifetime is longer than 10−14 seconds, which is the time it takes for the nucleus to form an electron cloud. However, a nuclide is generally considered to exist if its lifetime is longer than about 10−22 seconds, which is the time it takes for nuclear structure to form. Consequently, it is possible that some Z values can only be realised in nuclides and that the corresponding elements do not exist. It is also possible that no further islands actually exist beyond 126, as the nuclear shell structure gets smeared out (as the electron shell structure already is expected to be around oganesson) and low-energy decay modes become readily available. In some regions of the table of nuclides, there are expected to be additional regions of stability due to non-spherical nuclei that have different magic numbers than spherical nuclei do; the egg-shaped 270Hs is one such deformed doubly magic nucleus. In the superheavy region, the strong Coulomb repulsion of protons may cause some nuclei, including isotopes of oganesson, to assume a bubble shape in the ground state with a reduced central density of protons, unlike the roughly uniform distribution inside most smaller nuclei. Such a shape would have a very low fission barrier, however. Even heavier nuclei in some regions, such as 342136 and 466156, may instead become toroidal or red blood cell-like in shape, with their own magic numbers and islands of stability, but they would also fragment easily. Predicted decay properties of undiscovered elements As the main island of stability is thought to lie around 291Cn and 293Cn, undiscovered elements beyond oganesson may be very unstable and undergo alpha decay or spontaneous fission in microseconds or less. The exact region in which half-lives exceed one microsecond is unknown, though various models suggest that isotopes of elements heavier than unbinilium that may be produced in fusion reactions with available targets and projectiles will have half-lives under one microsecond and therefore may not be detected. It is consistently predicted that there will exist regions of stability at N = 184 and N = 228, and possibly also at Z ~ 124 and N ~ 198. 
These nuclei may have half-lives of a few seconds and undergo predominantly alpha decay and spontaneous fission, though minor beta-plus decay (or electron capture) branches may also exist. Outside these regions of enhanced stability, fission barriers are expected to drop significantly due to loss of stabilization effects, resulting in fission half-lives below 10−18 seconds, especially in even–even nuclei for which hindrance is even lower due to nucleon pairing. In general, alpha decay half-lives are expected to increase with neutron number, from nanoseconds in the most neutron-deficient isotopes to seconds closer to the beta-stability line. For nuclei with only a few neutrons more than a magic number, binding energy substantially drops, resulting in a break in the trend and shorter half-lives. The most neutron deficient isotopes of these elements may also be unbound and undergo proton emission. Cluster decay (heavy particle emission) has also been proposed as an alternative decay mode for some isotopes, posing yet another hurdle to identification of these elements. Electron configurations The following are expected electron configurations of elements 119–174 and 184. The symbol [Og] indicates the probable electron configuration of oganesson (Z = 118), which is currently the last known element. The configurations of the elements in this table are written starting with [Og] because oganesson is expected to be the last prior element with a closed-shell (inert gas) configuration, 1s2 2s2 2p6 3s2 3p6 3d10 4s2 4p6 4d10 4f14 5s2 5p6 5d10 5f14 6s2 6p6 6d10 7s2 7p6. Similarly, the [172] in the configurations for elements 173, 174, and 184 denotes the likely closed-shell configuration of element 172. Beyond element 123, no complete calculations are available and hence the data in this table must be taken as tentative. In the case of element 123, and perhaps also heavier elements, several possible electron configurations are predicted to have very similar energy levels, such that it is very difficult to predict the ground state. All configurations that have been proposed (since it was understood that the Madelung rule probably stops working here) are included. The predicted block assignments up to 172 are Kulsha's, following the expected available valence orbitals. There is, however, not a consensus in the literature as to how the blocks should work after element 138. {| class="wikitable" ! colspan="3" | Chemical element !! Block !! 
Predicted electron configurations |-bgcolor="" || 119 || Uue || Ununennium ||s-block ||[Og] 8s1 |-bgcolor="" || 120 || Ubn || Unbinilium ||s-block ||[Og] 8s2 |-bgcolor="" || 121 || Ubu || Unbiunium ||g-block || [Og] 8s2 8p |-bgcolor="" || 122 || Ubb || Unbibium ||g-block || [Og] 8s2 8p[Og] 7d1 8s2 8p |-bgcolor="" || 123 || Ubt || Unbitrium ||g-block || [Og] 6f1 8s2 8p[Og] 6f1 7d1 8s2 8p[Og] 6f2 8s2 8p[Og] 8s2 8p 8p |-bgcolor="" || 124 || Ubq || Unbiquadium ||g-block || [Og] 6f2 8s2 8p[Og] 6f3 8s2 8p |-bgcolor="" || 125 || Ubp || Unbipentium ||g-block || [Og] 6f4 8s2 8p[Og] 5g1 6f2 8s2 8p[Og] 5g1 6f3 8s2 8p[Og] 8s2 0.81(5g1 6f2 8p) + 0.17(5g1 6f1 7d2 8p) + 0.02(6f3 7d1 8p) |-bgcolor="" || 126 || Ubh || Unbihexium ||g-block || [Og] 5g1 6f4 8s2 8p[Og] 5g2 6f2 8s2 8p[Og] 5g2 6f3 8s2 8p[Og] 8s2 0.998(5g2 6f3 8p) + 0.002(5g2 6f2 8p) |-bgcolor="" || 127 || Ubs || Unbiseptium ||g-block || [Og] 5g2 6f3 8s2 8p[Og] 5g3 6f2 8s2 8p[Og] 8s2 0.88(5g3 6f2 8p) + 0.12(5g3 6f1 7d2 8p) |-bgcolor="" || 128 || Ubo ||Unbioctium||g-block || [Og] 5g3 6f3 8s2 8p[Og] 5g4 6f2 8s2 8p[Og] 8s2 0.88(5g4 6f2 8p) + 0.12(5g4 6f1 7d2 8p) |-bgcolor="" || 129 || Ube || Unbiennium ||g-block || [Og] 5g4 6f3 7d1 8s2 8p[Og] 5g4 6f3 8s2 8p[Og] 5g5 6f2 8s2 8p[Og] 5g4 6f3 7d1 8s2 8p |-bgcolor="" || 130 || Utn || Untrinilium ||g-block || [Og] 5g5 6f3 7d1 8s2 8p[Og] 5g5 6f3 8s2 8p[Og] 5g6 6f2 8s2 8p[Og] 5g5 6f3 7d1 8s2 8p |-bgcolor="" || 131 || Utu || Untriunium ||g-block || [Og] 5g6 6f3 8s2 8p[Og] 5g7 6f2 8s2 8p[Og] 8s2 0.86(5g6 6f3 8p) + 0.14(5g6 6f2 7d2 8p) |-bgcolor="" || 132 || Utb || Untribium ||g-block || [Og] 5g7 6f3 8s2 8p[Og] 5g8 6f2 8s2 8p |-bgcolor="" || 133 || Utt || Untritrium ||g-block || [Og] 5g8 6f3 8s2 8p |-bgcolor="" || 134 || Utq || Untriquadium ||g-block || [Og] 5g8 6f4 8s2 8p |-bgcolor="" || 135 || Utp || Untripentium ||g-block || [Og] 5g9 6f4 8s2 8p |-bgcolor="" || 136 || Uth || Untrihexium ||g-block || [Og] 5g10 6f4 8s2 8p |-bgcolor="" || 137 || Uts || Untriseptium ||g-block || [Og] 5g11 6f4 8s2 8p |-bgcolor="" || 138 || Uto || Untrioctium ||g-block || [Og] 5g12 6f4 8s2 8p[Og] 5g12 6f3 7d1 8s2 8p |-bgcolor="" || 139 || Ute || Untriennium ||g-block || [Og] 5g13 6f3 7d1 8s2 8p[Og] 5g13 6f2 7d2 8s2 8p |-bgcolor="" || 140 || Uqn || Unquadnilium ||g-block || [Og] 5g14 6f3 7d1 8s2 8p[Og] 5g15 6f1 8s2 8p 8p |-bgcolor="" || 141 || Uqu || Unquadunium ||g-block || [Og] 5g15 6f2 7d2 8s2 8p |-bgcolor="" || 142 || Uqb || Unquadbium ||g-block || [Og] 5g16 6f2 7d2 8s2 8p |-bgcolor="" || 143 || Uqt || Unquadtrium ||f-block || [Og] 5g17 6f2 7d2 8s2 8p |-bgcolor="" || 144 || Uqq || Unquadquadium ||f-block || [Og] 5g18 6f2 7d2 8s2 8p[Og] 5g18 6f1 7d3 8s2 8p[Og] 5g17 6f2 7d3 8s2 8p[Og] 8s2 0.95(5g17 6f2 7d3 8p) + 0.05(5g17 6f4 7d1 8p) |-bgcolor="" || 145 || Uqp || Unquadpentium ||f-block || [Og] 5g18 6f3 7d2 8s2 8p |-bgcolor="" || 146 || Uqh || Unquadhexium ||f-block || [Og] 5g18 6f4 7d2 8s2 8p |-bgcolor="" || 147 || Uqs || Unquadseptium ||f-block || [Og] 5g18 6f5 7d2 8s2 8p |-bgcolor="" || 148 || Uqo || Unquadoctium ||f-block || [Og] 5g18 6f6 7d2 8s2 8p |-bgcolor="" || 149 || Uqe || Unquadennium ||f-block || [Og] 5g18 6f6 7d3 8s2 8p |-bgcolor="" || 150 || Upn || Unpentnilium ||f-block || [Og] 5g18 6f6 7d4 8s2 8p[Og] 5g18 6f7 7d3 8s2 8p |-bgcolor="" || 151 || Upu || Unpentunium ||f-block || [Og] 5g18 6f8 7d3 8s2 8p |-bgcolor="" || 152 || Upb || Unpentbium ||f-block || [Og] 5g18 6f9 7d3 8s2 8p |-bgcolor="" || 153 || Upt || Unpenttrium ||f-block || [Og] 5g18 6f10 7d3 8s2 8p[Og] 5g18 6f11 7d2 8s2 8p |-bgcolor="" || 154 || Upq || 
Unpentquadium ||f-block || [Og] 5g18 6f11 7d3 8s2 8p[Og] 5g18 6f12 7d2 8s2 8p |-bgcolor="" || 155 || Upp || Unpentpentium ||f-block || [Og] 5g18 6f12 7d3 8s2 8p[Og] 5g18 6f13 7d2 8s2 8p |-bgcolor="" || 156 || Uph || Unpenthexium ||f-block|| [Og] 5g18 6f13 7d3 8s2 8p[Og] 5g18 6f14 7d2 8s2 8p |-bgcolor="" || 157 || Ups || Unpentseptium ||d-block || [Og] 5g18 6f14 7d3 8s2 8p |-bgcolor="" || 158 || Upo || Unpentoctium ||d-block || [Og] 5g18 6f14 7d4 8s2 8p |-bgcolor="" || 159 || Upe || Unpentennium ||d-block || [Og] 5g18 6f14 7d5 8s2 8p[Og] 5g18 6f14 7d4 8s2 8p 9s1 |-bgcolor="" || 160 || Uhn || Unhexnilium ||d-block || [Og] 5g18 6f14 7d6 8s2 8p[Og] 5g18 6f14 7d5 8s2 8p 9s1 |-bgcolor="" || 161 || Uhu || Unhexunium ||d-block || [Og] 5g18 6f14 7d7 8s2 8p[Og] 5g18 6f14 7d6 8s2 8p 9s1 |-bgcolor="" || 162 || Uhb || Unhexbium ||d-block || [Og] 5g18 6f14 7d8 8s2 8p[Og] 5g18 6f14 7d7 8s2 8p 9s1 |-bgcolor="" || 163 || Uht || Unhextrium ||d-block || [Og] 5g18 6f14 7d9 8s2 8p[Og] 5g18 6f14 7d8 8s2 8p 9s1 |-bgcolor="" || 164 || Uhq || Unhexquadium ||d-block || [Og] 5g18 6f14 7d10 8s2 8p |-bgcolor="" || 165 || Uhp || Unhexpentium ||d-block || [Og] 5g18 6f14 7d10 8s2 8p 9s1 |-bgcolor="" || 166 || Uhh || Unhexhexium ||d-block ||[Og] 5g18 6f14 7d10 8s2 8p 9s2 |-bgcolor="" || 167 || Uhs || Unhexseptium ||p-block || [Og] 5g18 6f14 7d10 8s2 8p 9s2 9p[Og] 5g18 6f14 7d10 8s2 8p 8p 9s2 |-bgcolor="" || 168 || Uho || Unhexoctium ||p-block || [Og] 5g18 6f14 7d10 8s2 8p 9s2 9p[Og] 5g18 6f14 7d10 8s2 8p 8p 9s2 |-bgcolor="" || 169 || Uhe || Unhexennium ||p-block || [Og] 5g18 6f14 7d10 8s2 8p 8p 9s2 9p[Og] 5g18 6f14 7d10 8s2 8p 8p 9s2 |-bgcolor="" || 170 || Usn || Unseptnilium ||p-block || [Og] 5g18 6f14 7d10 8s2 8p 8p 9s2 9p[Og] 5g18 6f14 7d10 8s2 8p 8p 9s2 |-bgcolor="" || 171 || Usu || Unseptunium ||p-block || [Og] 5g18 6f14 7d10 8s2 8p 8p 9s2 9p[Og] 5g18 6f14 7d10 8s2 8p 8p 9s2 9p |-bgcolor="" || 172 || || Unseptbium ||p-block || [Og] 5g18 6f14 7d10 8s2 8p 8p 9s2 9p |- || 173 || Ust || Unsepttrium || ? || [172] 6g1[172] 9p[172] 10s1 |- || 174 || Usq || Unseptquadium || ? || [172] 8d1 10s1 |- || ... || ... || ... || ... || ... |- || 184 || Uoq || Unoctquadium || ? || [172] 6g5 7f4 8d3 |} See also Table of nuclides Hypernucleus Neutronium References Further reading External links Periodic table Nuclear physics
Extended periodic table
[ "Physics", "Chemistry" ]
19,085
[ "Periodic table", "Nuclear physics" ]
68,344
https://en.wikipedia.org/wiki/Bilirubin
Bilirubin (BR) (from the Latin for "red bile") is a red-orange compound that occurs in the normal catabolic pathway that breaks down heme in vertebrates. This catabolism is a necessary process in the body's clearance of waste products that arise from the destruction of aged or abnormal red blood cells. In the first step of bilirubin synthesis, the heme molecule is stripped from the hemoglobin molecule. Heme then passes through various processes of porphyrin catabolism, which varies according to the region of the body in which the breakdown occurs. For example, the molecules excreted in the urine differ from those in the feces. The production of biliverdin from heme is the first major step in the catabolic pathway, after which the enzyme biliverdin reductase performs the second step, producing bilirubin from biliverdin. Ultimately, bilirubin is broken down within the body, and its metabolites excreted through bile and urine; elevated levels may indicate certain diseases. It is responsible for the yellow color of healing bruises and the yellow discoloration in jaundice. The bacterial enzyme bilirubin reductase is responsible for the breakdown of bilirubin in the gut. One breakdown product, urobilin, is the main component of the straw-yellow color in urine. Another breakdown product, stercobilin, causes the brown color of feces. Although bilirubin is usually found in animals rather than plants, at least one plant species, Strelitzia nicolai, is known to contain the pigment. Structure Bilirubin consists of an open-chain tetrapyrrole. It is formed by oxidative cleavage of a porphyrin in heme, which affords biliverdin. Biliverdin is reduced to bilirubin. After conjugation with glucuronic acid, bilirubin is water-soluble and can be excreted. Bilirubin is structurally similar to the pigment phycobilin used by certain algae to capture light energy, and to the pigment phytochrome used by plants to sense light. All of these contain an open chain of four pyrrolic rings. Like these other pigments, some of the double-bonds in bilirubin isomerize when exposed to light. This isomerization is relevant to the phototherapy of jaundiced newborns: the E,Z-isomers of bilirubin formed upon light exposure are more soluble than the unilluminated Z,Z-isomer, as the possibility of intramolecular hydrogen bonding is removed. Increased solubility allows the excretion of unconjugated bilirubin in bile. Some textbooks and research articles show the incorrect geometric isomer of bilirubin. The naturally occurring isomer is the Z,Z-isomer. Function Bilirubin is created by the activity of biliverdin reductase on biliverdin, a green tetrapyrrolic bile pigment that is also a product of heme catabolism. Bilirubin, when oxidized, reverts to become biliverdin once again. This cycle, in addition to the demonstration of the potent antioxidant activity of bilirubin, has led to the hypothesis that bilirubin's main physiologic role is as a cellular antioxidant. Consistent with this, animal studies suggest that eliminating bilirubin results in endogenous oxidative stress. Bilirubin's antioxidant activity may be particularly important in the brain, where it prevents excitotoxicity and neuronal death by scavenging superoxide during N-methyl-D-aspartic acid neurotransmission. Metabolism Bilirubin in plasma is mostly produced by the destruction of erythrocytes. Heme is metabolized into biliverdin (via heme oxygenase) and then into bilirubin (via biliverdin reductase) inside the macrophages. 
Bilirubin is then released into the plasma and transported to the liver bound to albumin, since it is insoluble in water in this state. In this state, bilirubin is called unconjugated (despite being bound to albumin). In the liver, unconjugated bilirubin is taken up by the hepatocytes and subsequently conjugated with glucuronic acid (via the enzyme uridine diphosphate–glucuronyl transferase). In this state, bilirubin is soluble in water and it is called conjugated bilirubin. Conjugated bilirubin is excreted into the bile ducts and enters the duodenum. During its transport to the colon, it is converted into urobilinogen by the bacterial enzyme bilirubin reductase. Most of the urobilinogen is further reduced into stercobilinogen and is excreted through feces (air oxidizes stercobilinogen to stercobilin, which gives feces their characteristic brown color). A lesser amount of urobilinogen is re-absorbed into the portal circulation and transferred to the liver. For the most part, this urobilinogen is recycled to conjugated bilirubin, and this process closes the enterohepatic circle. There is also an amount of urobilinogen which is not recycled, but rather enters the systemic circulation and subsequently the kidneys, where it is excreted. Air oxidizes urobilinogen into urobilin, which gives urine its characteristic color. In parallel, a small amount of conjugated bilirubin can also enter the systemic circulation and be excreted through urine. This is exaggerated in various pathological situations.

Toxicity

Hyperbilirubinemia

Hyperbilirubinemia is a higher-than-normal level of bilirubin in the blood. Hyperbilirubinemia may refer to increased levels of conjugated, unconjugated or both conjugated and unconjugated bilirubin. The causes of hyperbilirubinemia can also be classified into prehepatic, intrahepatic, and posthepatic.

Prehepatic causes are associated mostly with an increase of unconjugated (indirect) bilirubin. They include:
Hemolysis or increased breakdown of red blood cells (for example hematoma resorption)

Intrahepatic causes can be associated with elevated levels of conjugated bilirubin, unconjugated bilirubin or both. They include:
Neonatal hyperbilirubinemia, where the newborn's liver is not able to properly process the bilirubin, causing jaundice
Hepatocellular disease
Viral infections (hepatitis A, B, and C)
Chronic alcohol use
Autoimmune disorders
Genetic syndromes:
Gilbert's syndrome – a genetic disorder of bilirubin metabolism that can result in mild jaundice, found in about 5% of the population
Rotor syndrome: non-itching jaundice, with rise of bilirubin in the patient's serum, mainly of the conjugated type
Dubin–Johnson syndrome
Crigler–Najjar syndrome
Pharmaceutical drugs (especially antipsychotics, some sex hormones, and a wide range of other drugs). Sulfonamides are contraindicated in infants less than 2 months old (exception when used with pyrimethamine in treating toxoplasmosis) as they increase unconjugated bilirubin, leading to kernicterus. Drugs such as protease inhibitors like indinavir can also cause disorders of bilirubin metabolism by competitively inhibiting the UGT1A1 enzyme.

Post-hepatic causes are associated with elevated levels of conjugated bilirubin. These include:
Unusually large bile duct obstruction, e.g. gallstone in the common bile duct (which is the most common post-hepatic cause)
Biliary stricture (benign or malignant)
Cholangitis
Severe liver failure with cirrhosis (e.g. primary biliary cirrhosis)
Pancreatitis

Cirrhosis may cause normal, moderately high or high levels of bilirubin, depending on the exact features of the cirrhosis. To further elucidate the causes of jaundice or increased bilirubin, it is usually simpler to look at other liver function tests (especially the enzymes alanine transaminase, aspartate transaminase, gamma-glutamyl transpeptidase, alkaline phosphatase), blood film examination (hemolysis, etc.) or evidence of infective hepatitis (e.g., hepatitis A, B, C, delta, E, etc.).

Jaundice

Hemoglobin transports the oxygen the body takes in to all body tissues via the blood vessels. Over time, when red blood cells need to be replenished, the hemoglobin is broken down in the spleen into two parts: the heme group, consisting of iron and bile pigment, and the protein fraction. While the protein and iron are utilized to renew red blood cells, the pigments that make up the red color in blood are deposited into the bile to form bilirubin. Jaundice leads to raised bilirubin levels that in turn adversely affect elastin-rich tissues. Jaundice may be noticeable in the sclera of the eyes at levels of about 2 to 3 mg/dl (34 to 51 μmol/L), and in the skin at higher levels. Jaundice is classified, depending upon whether the bilirubin is free or conjugated to glucuronic acid, into conjugated jaundice or unconjugated jaundice.

Kernicterus

Unbound bilirubin (Bf) levels can be used to predict the risk of neurodevelopmental handicaps within infants. Unconjugated hyperbilirubinemia in a newborn can lead to accumulation of bilirubin in certain brain regions (particularly the basal nuclei) with consequent irreversible damage to these areas manifesting as various neurological deficits, seizures, abnormal reflexes and eye movements. This type of neurological injury is known as kernicterus. The spectrum of clinical effect is called bilirubin encephalopathy. The neurotoxicity of neonatal hyperbilirubinemia manifests because the blood–brain barrier has yet to develop fully, and bilirubin can freely pass into the brain interstitium, whereas more developed individuals with increased bilirubin in the blood are protected. Aside from specific chronic medical conditions that may lead to hyperbilirubinemia, neonates in general are at increased risk since they lack the intestinal bacteria that facilitate the breakdown and excretion of conjugated bilirubin in the feces (this is largely why the feces of a neonate are paler than those of an adult). Instead the conjugated bilirubin is converted back into the unconjugated form by the enzyme β-glucuronidase (in the gut, this enzyme is located in the brush border of the lining intestinal cells) and a large proportion is reabsorbed through the enterohepatic circulation. In addition, recent studies point towards high total bilirubin levels as a cause of gallstones regardless of gender or age.

Health benefits

In the absence of liver disease, high levels of total bilirubin confer various health benefits. Studies have also revealed that levels of serum bilirubin (SBR) are inversely related to risk of certain heart diseases. While the poor solubility and potential toxicity of bilirubin limit its potential medicinal applications, current research is being done on whether bilirubin-encapsulated silk fibroin nanoparticles can alleviate symptoms of disorders such as acute pancreatitis. In addition to this, there have been recent discoveries linking bilirubin and its ε-polylysine-bilirubin conjugate (PLL-BR) to more efficient insulin medication.
It seems that bilirubin exhibits protective properties during the islet transplantation process when drugs are delivered throughout the bloodstream. Blood tests Bilirubin is degraded by light. Blood collection tubes containing blood or (especially) serum to be used in bilirubin assays should be protected from illumination. For adults, blood is typically collected by needle from a vein in the arm. In newborns, blood is often collected from a heel stick, a technique that uses a small, sharp blade to cut the skin on the infant's heel and collect a few drops of blood into a small tube. Non-invasive technology is available in some health care facilities that will measure bilirubin by using a bilirubinometer which shines light onto the skin and calculates the amount of bilirubin by analysing how the light is absorbed or reflected. This device is also known as a transcutaneous bilirubin meter. Bilirubin (in blood) is found in two forms: Note: Conjugated bilirubin is often incorrectly called "direct bilirubin" and unconjugated bilirubin is incorrectly called "indirect bilirubin". Direct and indirect refer solely to how compounds are measured or detected in solution. Direct bilirubin is any form of bilirubin which is water-soluble and is available in solution to react with assay reagents; direct bilirubin is often made up largely of conjugated bilirubin, but some unconjugated bilirubin (up to 25%) can still be part of the "direct" bilirubin fraction. Likewise, not all conjugated bilirubin is readily available in solution for reaction or detection (for example, if it is hydrogen bonding with itself) and therefore would not be included in the direct bilirubin fraction. Total bilirubin (TBIL) measures both BU and BC. Total bilirubin assays work by using surfactants and accelerators (like caffeine) to bring all of the different bilirubin forms into solution where they can react with assay reagents. Total and direct bilirubin levels can be measured from the blood, but indirect bilirubin is calculated from the total and direct bilirubin. Indirect bilirubin is fat-soluble and direct bilirubin is water-soluble. Total bilirubin Total bilirubin = direct bilirubin + indirect bilirubin Elevation of both alanine aminotransferase (ALT) and bilirubin is more indicative of serious liver injury than is elevation in ALT alone, as postulated in Hy's law that elucidates the relation between the lab test results and drug-induced liver injury Indirect (unconjugated) The measurement of unconjugated bilirubin (UCB) is underestimated by measurement of indirect bilirubin, as unconjugated bilirubin (without/yet glucuronidation) reacts with diazosulfanilic acid to create azobilirubin which is measured as direct bilirubin. Direct Direct bilirubin = Conjugated bilirubin + delta bilirubin Conjugated In the liver, bilirubin is conjugated with glucuronic acid by the enzyme glucuronyltransferase, first to bilirubin glucuronide and then to bilirubin diglucuronide, making it soluble in water: the conjugated version is the main form of bilirubin present in the "direct" bilirubin fraction. Much of it goes into the bile and thus out into the small intestine. Though most bile acid is reabsorbed in the terminal ileum to participate in enterohepatic circulation, conjugated bilirubin is not absorbed and instead passes into the colon. There, colonic bacteria deconjugate and metabolize the bilirubin into colorless urobilinogen, which can be oxidized to form urobilin and stercobilin. 
Urobilin is excreted by the kidneys to give urine its yellow color and stercobilin is excreted in the feces, giving stool its characteristic brown color. A trace (~1%) of the urobilinogen is reabsorbed into the enterohepatic circulation to be re-excreted in the bile. Conjugated bilirubin's half-life is shorter than that of delta bilirubin.

Delta bilirubin

Although the terms direct and indirect bilirubin are used equivalently with conjugated and unconjugated bilirubin, this is not quantitatively correct, because the direct fraction includes both conjugated bilirubin and δ bilirubin. Delta bilirubin is albumin-bound conjugated bilirubin. In other words, delta bilirubin is the kind of bilirubin covalently bound to albumin, which appears in the serum when hepatic excretion of conjugated bilirubin is impaired in patients with hepatobiliary disease. Furthermore, direct bilirubin tends to overestimate conjugated bilirubin levels due to unconjugated bilirubin that has reacted with diazosulfanilic acid, leading to increased azobilirubin levels (and increased direct bilirubin).

δ bilirubin = total bilirubin – (unconjugated bilirubin + conjugated bilirubin)

Half-life

The half-life of delta bilirubin is equivalent to that of albumin, since the former is bound to the latter, and is thus 2–3 weeks. Unbound bilirubin has a half-life of 2 to 4 hours.

Measurement methods

Originally, the Van den Bergh reaction was used for a qualitative estimate of bilirubin. This test is performed routinely in most medical laboratories, and bilirubin can be measured by a variety of methods. Total bilirubin is now often measured by the 2,5-dichlorophenyldiazonium (DPD) method, and direct bilirubin is often measured by the method of Jendrassik and Grof.

Blood levels

The bilirubin level found in the body reflects the balance between production and excretion. Blood test results should always be interpreted using the reference range provided by the laboratory that performed the test. The SI units are μmol/L. Typical ranges for adults are:
0–0.3 mg/dl – Direct (conjugated) bilirubin level
0.1–1.2 mg/dl – Total serum bilirubin level

Urine tests

Urine bilirubin may also be clinically significant. Bilirubin is not normally detectable in the urine of healthy people. If the blood level of conjugated bilirubin becomes elevated, e.g. due to liver disease, excess conjugated bilirubin is excreted in the urine, indicating a pathological process. Unconjugated bilirubin is not water-soluble and so is not excreted in the urine. Testing urine for both bilirubin and urobilinogen can help differentiate obstructive liver disease from other causes of jaundice. As with bilirubin, under normal circumstances, only a very small amount of urobilinogen is excreted in the urine. If the liver's function is impaired or when biliary drainage is blocked, some of the conjugated bilirubin leaks out of the hepatocytes and appears in the urine, turning it dark amber. However, in disorders involving hemolytic anemia, an increased number of red blood cells are broken down, causing an increase in the amount of unconjugated bilirubin in the blood. Because the unconjugated bilirubin is not water-soluble, one will not see an increase in bilirubin in the urine. Because there is no problem with the liver or bile systems, this excess unconjugated bilirubin will go through all of the normal processing mechanisms that occur (e.g., conjugation, excretion in bile, metabolism to urobilinogen, reabsorption) and will show up as an increase of urobilinogen in the urine. This difference between increased urine bilirubin and increased urine urobilinogen helps to distinguish between various disorders in those systems.
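The simple arithmetic used above (indirect = total − direct, δ bilirubin = total − (unconjugated + conjugated)), the mg/dl to μmol/L conversion, and the quoted adult reference ranges can be collected in a short Python sketch. The molar mass value, the function names, and the flagging logic are illustrative assumptions, not a clinical algorithm.

```python
BILIRUBIN_MOLAR_MASS = 584.7  # g/mol, approximate molecular weight of bilirubin

def mg_dl_to_umol_l(mg_dl: float) -> float:
    """Convert mg/dL to umol/L: 1 mg/dL = 10 mg/L, then divide by the molar mass."""
    return mg_dl * 10.0 / BILIRUBIN_MOLAR_MASS * 1000.0

def indirect_bilirubin(total_mg_dl: float, direct_mg_dl: float) -> float:
    """Indirect bilirubin is calculated, not measured directly: total minus direct."""
    return total_mg_dl - direct_mg_dl

def delta_bilirubin(total_mg_dl: float, unconjugated_mg_dl: float,
                    conjugated_mg_dl: float) -> float:
    """Delta (albumin-bound conjugated) bilirubin = total - (unconjugated + conjugated)."""
    return total_mg_dl - (unconjugated_mg_dl + conjugated_mg_dl)

def flag_adult_result(total_mg_dl: float, direct_mg_dl: float) -> str:
    """Compare against the typical adult ranges quoted in the text:
    direct 0-0.3 mg/dL and total 0.1-1.2 mg/dL."""
    notes = []
    if direct_mg_dl > 0.3:
        notes.append("direct bilirubin above the typical range")
    if total_mg_dl > 1.2:
        notes.append("total bilirubin above the typical range")
    return "; ".join(notes) or "within the typical adult ranges"

total, direct = 0.9, 0.2  # example values in mg/dL
print(f"indirect = {indirect_bilirubin(total, direct):.1f} mg/dL")
print(f"total    = {mg_dl_to_umol_l(total):.0f} umol/L")
print(flag_adult_result(total, direct))
# A level of 2-3 mg/dL converts to roughly 34-51 umol/L, matching the
# scleral-jaundice threshold quoted earlier in the article.
```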
History

In ancient history, Hippocrates discussed bile pigments in two of the four humours in the context of a relationship between yellow and black biles. Hippocrates visited Democritus in Abdera, who was regarded as the expert in melancholy "black bile". Relevant documentation emerged in 1827 when M. Louis Jacques Thénard examined the biliary tract of an elephant that had died at a Paris zoo. He observed that the dilated bile ducts were full of yellow magma, which he isolated and found to be insoluble in water. Treating the yellow pigment with hydrochloric acid produced a strong green color. Thénard suspected the green pigment was caused by impurities derived from the mucus of bile. Leopold Gmelin experimented with nitric acid in 1826 to establish the redox behavior of the change from bilirubin to biliverdin, although the nomenclature did not exist at the time. The term biliverdin was coined by Jöns Jacob Berzelius in 1840, although he preferred "bilifulvin" (yellow/red) over "bilirubin" (red). The term "bilirubin" is thought to have become mainstream based on the work of Staedeler in 1864, who crystallized bilirubin from cattle gallstones. Rudolf Virchow in 1847 recognized hematoidin to be identical to bilirubin. It is not always distinguished from hematoidin, which one modern dictionary defines as synonymous with it but another defines as "apparently chemically identical with bilirubin but with a different site of origin, formed locally in the tissues from hemoglobin, particularly under conditions of reduced oxygen tension." The synonymous identity of bilirubin and hematoidin was confirmed in 1923 by Fischer and Steinmetz using analytical crystallography. In the 1930s, significant advances in bilirubin isolation and synthesis were described by Hans Fischer, Plieninger, and others, and pioneering work pertaining to the endogenous formation of bilirubin from heme was likewise conducted in the same decade. The suffix IXα is partially based on a system developed by Fischer, which means the bilin's parent compound was protoporphyrin IX cleaved at the alpha-methine bridge (see protoporphyrin IX nomenclature). Origins pertaining to the physiological activity of bilirubin were described by Ernst Stadelmann in 1891, who may have observed the biotransformation of infused hemoglobin into bilirubin, possibly inspired by Ivan Tarkhanov's 1874 works. Georg Barkan suggested the source of endogenous bilirubin to be hemoglobin in 1932. Plieninger and Fischer demonstrated an enzymatic oxidative loss of the alpha-methine bridge of heme resulting in a bis-lactam structure in 1942. It is widely accepted that Irving London was the first to demonstrate the endogenous formation of bilirubin from hemoglobin in 1950, and Sjostrand demonstrated that hemoglobin catabolism produces carbon monoxide between 1949 and 1952. Evidence for the biotransformation of 14C-labeled protoporphyrin to bilirubin emerged in 1966, in work by Cecil Watson. Rudi Schmid and Tenhunen discovered heme oxygenase, the enzyme responsible, in 1968. Earlier, in 1963, Nakajima described a soluble "heme alpha-methenyl oxygenase", which was later determined to be a non-enzymatic pathway, such as formation of a 1,2-dioxetane intermediate at the methine bridge resulting in carbon monoxide release and biliverdin formation.
Notable people Claudio Tiribelli, Italian hepatologist, studies on bilirubin See also Babesiosis Biliary atresia Bilirubin diglucuronide Biliverdin Crigler–Najjar syndrome Gilbert's syndrome, a genetic disorder of bilirubin metabolism that can result in mild jaundice, found in about 5% of the population. Hy's Law Lumirubin Primary biliary cholangitis Primary sclerosing cholangitis Notes References External links Bilirubin: analyte monograph from The Association for Clinical Biochemistry and Laboratory Medicine Liver function tests Hepatology Metabolism Biological pigments Tetrapyrroles Polyenes Vinyl compounds
Bilirubin
[ "Chemistry", "Biology" ]
5,364
[ "Chemical pathology", "Cellular processes", "Pigmentation", "Biochemistry", "Biological pigments", "Liver function tests", "Metabolism" ]
68,503
https://en.wikipedia.org/wiki/Pseudometric%20space
In mathematics, a pseudometric space is a generalization of a metric space in which the distance between two distinct points can be zero. Pseudometric spaces were introduced by Đuro Kurepa in 1934. In the same way as every normed space is a metric space, every seminormed space is a pseudometric space. Because of this analogy, the term semimetric space (which has a different meaning in topology) is sometimes used as a synonym, especially in functional analysis. When a topology is generated using a family of pseudometrics, the space is called a gauge space.

Definition

A pseudometric space is a set X together with a non-negative real-valued function d : X × X → [0, ∞), called a pseudometric, such that for every x, y, z ∈ X:
d(x, x) = 0;
Symmetry: d(x, y) = d(y, x);
Subadditivity/Triangle inequality: d(x, z) ≤ d(x, y) + d(y, z).
Unlike a metric space, points in a pseudometric space need not be distinguishable; that is, one may have d(x, y) = 0 for distinct values x ≠ y.

Examples

Any metric space is a pseudometric space. Pseudometrics arise naturally in functional analysis. Consider the space of real-valued functions f : X → R together with a special point x0 ∈ X. This point then induces a pseudometric on the space of functions, given by d(f, g) = |f(x0) − g(x0)| for functions f and g. A seminorm p induces the pseudometric d(x, y) = p(x − y). This is a convex function of an affine function of x (in particular, a translation), and therefore convex in x. (Likewise for y.) Conversely, a homogeneous, translation-invariant pseudometric induces a seminorm. Pseudometrics also arise in the theory of hyperbolic complex manifolds: see Kobayashi metric. Every measure space (with measure μ) can be viewed as a complete pseudometric space by defining d(A, B) = μ(A △ B) for all measurable sets A and B, where the triangle denotes symmetric difference. If f : X1 → X2 is a function and d2 is a pseudometric on X2, then d1(x, y) = d2(f(x), f(y)) gives a pseudometric on X1. If d2 is a metric and f is injective, then d1 is a metric.

Topology

The pseudometric topology is the topology generated by the open balls B(p, r) = {x ∈ X : d(x, p) < r}, for p ∈ X and r > 0, which form a basis for the topology. A topological space is said to be a pseudometrizable space if the space can be given a pseudometric such that the pseudometric topology coincides with the given topology on the space. The difference between pseudometrics and metrics is entirely topological. That is, a pseudometric is a metric if and only if the topology it generates is T0 (that is, distinct points are topologically distinguishable). The definitions of Cauchy sequences and metric completion for metric spaces carry over to pseudometric spaces unchanged.

Metric identification

The vanishing of the pseudometric induces an equivalence relation, called the metric identification, that converts the pseudometric space into a full-fledged metric space. This is done by defining x ∼ y if d(x, y) = 0. Let X* = X/∼ be the quotient space of X by this equivalence relation and define d*([x], [y]) = d(x, y). This is well defined because for any x′ ∈ [x] we have that d(x, x′) = 0 and so d(x′, y) ≤ d(x′, x) + d(x, y) = d(x, y) and vice versa. Then d* is a metric on X* and (X*, d*) is a well-defined metric space, called the metric space induced by the pseudometric space (X, d). The metric identification preserves the induced topologies. That is, a subset A ⊆ X is open (or closed) in (X, d) if and only if π(A) = [A] is open (or closed) in (X*, d*) and A is saturated. The topological identification is the Kolmogorov quotient. An example of this construction is the completion of a metric space by its Cauchy sequences.

See also Notes References Metric geometry Properties of topological spaces
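As a concrete illustration of the definitions above, the following Python sketch uses the pseudometric d((x1, y1), (x2, y2)) = |x1 − x2| on the plane, which is the pullback of the usual metric on the real line along the projection onto the first coordinate. Distinct points can be at distance zero, and grouping together points at distance zero performs the metric identification. The sample points and helper names are chosen purely for illustration.

```python
from itertools import combinations

def d(p, q):
    """Pseudometric on R^2: the distance depends only on the first coordinate.
    Formally the pullback of |x - y| on R along f(x, y) = x."""
    return abs(p[0] - q[0])

points = [(0.0, 1.0), (0.0, 2.0), (1.0, 0.0), (1.0, 5.0), (2.0, 3.0)]

# Distinct points can have distance zero -- this is what makes d a
# pseudometric rather than a metric.
print(d(points[0], points[1]))  # 0.0, although the two points differ

# Sanity checks of the axioms on the sample points.
for p, q in combinations(points, 2):
    assert d(p, q) == d(q, p)                          # symmetry
    for r in points:
        assert d(p, r) <= d(p, q) + d(q, r) + 1e-12    # triangle inequality

# Metric identification: identify points at distance zero.  Here the
# equivalence class of a point is determined by its first coordinate,
# and d descends to the ordinary metric |x - y| on the quotient.
classes = sorted({p[0] for p in points})
print(classes)  # [0.0, 1.0, 2.0] -- the induced metric space embeds in R
```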
Pseudometric space
[ "Mathematics" ]
663
[ "Properties of topological spaces", "Topological spaces", "Topology", "Space (mathematics)" ]
68,513
https://en.wikipedia.org/wiki/Surface%20science
Surface science is the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. It includes the fields of surface chemistry and surface physics. Some related practical applications are classed as surface engineering. The science encompasses concepts such as heterogeneous catalysis, semiconductor device fabrication, fuel cells, self-assembled monolayers, and adhesives. Surface science is closely related to interface and colloid science. Interfacial chemistry and physics are common subjects for both. The methods are different. In addition, interface and colloid science studies macroscopic phenomena that occur in heterogeneous systems due to peculiarities of interfaces. History The field of surface chemistry started with heterogeneous catalysis pioneered by Paul Sabatier on hydrogenation and Fritz Haber on the Haber process. Irving Langmuir was also one of the founders of this field, and the scientific journal on surface science, Langmuir, bears his name. The Langmuir adsorption equation is used to model monolayer adsorption where all surface adsorption sites have the same affinity for the adsorbing species and do not interact with each other. Gerhard Ertl in 1974 described for the first time the adsorption of hydrogen on a palladium surface using a novel technique called LEED. Similar studies with platinum, nickel, and iron followed. Most recent developments in surface sciences include the 2007 Nobel prize of Chemistry winner Gerhard Ertl's advancements in surface chemistry, specifically his investigation of the interaction between carbon monoxide molecules and platinum surfaces. Chemistry Surface chemistry can be roughly defined as the study of chemical reactions at interfaces. It is closely related to surface engineering, which aims at modifying the chemical composition of a surface by incorporation of selected elements or functional groups that produce various desired effects or improvements in the properties of the surface or interface. Surface science is of particular importance to the fields of heterogeneous catalysis, electrochemistry, and geochemistry. Catalysis The adhesion of gas or liquid molecules to the surface is known as adsorption. This can be due to either chemisorption or physisorption, and the strength of molecular adsorption to a catalyst surface is critically important to the catalyst's performance (see Sabatier principle). However, it is difficult to study these phenomena in real catalyst particles, which have complex structures. Instead, well-defined single crystal surfaces of catalytically active materials such as platinum are often used as model catalysts. Multi-component materials systems are used to study interactions between catalytically active metal particles and supporting oxides; these are produced by growing ultra-thin films or particles on a single crystal surface. Relationships between the composition, structure, and chemical behavior of these surfaces are studied using ultra-high vacuum techniques, including adsorption and temperature-programmed desorption of molecules, scanning tunneling microscopy, low energy electron diffraction, and Auger electron spectroscopy. Results can be fed into chemical models or used toward the rational design of new catalysts. Reaction mechanisms can also be clarified due to the atomic-scale precision of surface science measurements. 
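The Langmuir adsorption equation mentioned in the history above is usually written θ = KP / (1 + KP), where θ is the fractional monolayer coverage, P the pressure, and K the adsorption equilibrium constant. A brief Python sketch, with an arbitrary and purely illustrative value of K, shows the characteristic saturation behaviour:

```python
def langmuir_coverage(pressure: float, k_eq: float) -> float:
    """Langmuir isotherm: fractional coverage theta = K*P / (1 + K*P).
    Assumes identical, non-interacting adsorption sites and monolayer adsorption."""
    return k_eq * pressure / (1.0 + k_eq * pressure)

K = 2.0  # illustrative equilibrium constant, in 1/(pressure unit)
for p in (0.01, 0.1, 0.5, 1.0, 10.0, 100.0):
    print(f"P = {p:7.2f}  ->  theta = {langmuir_coverage(p, K):.3f}")
# Coverage rises linearly at low pressure and saturates toward 1 (a full
# monolayer) at high pressure, since the model assumes a fixed number of
# equivalent surface sites.
```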
Electrochemistry Electrochemistry is the study of processes driven through an applied potential at a solid–liquid or liquid–liquid interface. The behavior of an electrode–electrolyte interface is affected by the distribution of ions in the liquid phase next to the interface forming the electrical double layer. Adsorption and desorption events can be studied at atomically flat single-crystal surfaces as a function of applied potential, time and solution conditions using spectroscopy, scanning probe microscopy and surface X-ray scattering. These studies link traditional electrochemical techniques such as cyclic voltammetry to direct observations of interfacial processes. Geochemistry Geological phenomena such as iron cycling and soil contamination are controlled by the interfaces between minerals and their environment. The atomic-scale structure and chemical properties of mineral–solution interfaces are studied using in situ synchrotron X-ray techniques such as X-ray reflectivity, X-ray standing waves, and X-ray absorption spectroscopy as well as scanning probe microscopy. For example, studies of heavy metal or actinide adsorption onto mineral surfaces reveal molecular-scale details of adsorption, enabling more accurate predictions of how these contaminants travel through soils or disrupt natural dissolution–precipitation cycles. Physics Surface physics can be roughly defined as the study of physical interactions that occur at interfaces. It overlaps with surface chemistry. Some of the topics investigated in surface physics include friction, surface states, surface diffusion, surface reconstruction, surface phonons and plasmons, epitaxy, the emission and tunneling of electrons, spintronics, and the self-assembly of nanostructures on surfaces. Techniques to investigate processes at surfaces include surface X-ray scattering, scanning probe microscopy, surface-enhanced Raman spectroscopy and X-ray photoelectron spectroscopy. Analysis techniques The study and analysis of surfaces involves both physical and chemical analysis techniques. Several modern methods probe the topmost 1–10 nm of surfaces exposed to vacuum. These include angle-resolved photoemission spectroscopy (ARPES), X-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy (AES), low-energy electron diffraction (LEED), electron energy loss spectroscopy (EELS), thermal desorption spectroscopy (TPD), ion scattering spectroscopy (ISS), secondary ion mass spectrometry, dual-polarization interferometry, and other surface analysis methods included in the list of materials analysis methods. Many of these techniques require vacuum as they rely on the detection of electrons or ions emitted from the surface under study. Moreover, in general ultra-high vacuum, in the range of 10−7 pascal pressure or better, it is necessary to reduce surface contamination by residual gas, by reducing the number of molecules reaching the sample over a given time period. At 0.1 mPa (10−6 torr) partial pressure of a contaminant and standard temperature, it only takes on the order of 1 second to cover a surface with a one-to-one monolayer of contaminant to surface atoms, so much lower pressures are needed for measurements. This is found by an order of magnitude estimate for the (number) specific surface area of materials and the impingement rate formula from the kinetic theory of gases. Purely optical techniques can be used to study interfaces under a wide variety of conditions. 
Purely optical techniques can be used to study interfaces under a wide variety of conditions. Reflection-absorption infrared, dual polarisation interferometry, surface-enhanced Raman spectroscopy and sum frequency generation spectroscopy can be used to probe solid–vacuum as well as solid–gas, solid–liquid, and liquid–gas surfaces. Multi-parametric surface plasmon resonance works at solid–gas, solid–liquid, and liquid–gas surfaces and can detect even sub-nanometer layers. It probes the interaction kinetics as well as dynamic structural changes such as liposome collapse or swelling of layers at different pH. Dual-polarization interferometry is used to quantify the order and disruption in birefringent thin films. This has been used, for example, to study the formation of lipid bilayers and their interaction with membrane proteins.

Acoustic techniques, such as the quartz crystal microbalance with dissipation monitoring, are used for time-resolved measurements of solid–vacuum, solid–gas and solid–liquid interfaces. The method allows for analysis of molecule–surface interactions as well as structural changes and viscoelastic properties of the adlayer.

X-ray scattering and spectroscopy techniques are also used to characterize surfaces and interfaces. While some of these measurements can be performed using laboratory X-ray sources, many require the high intensity and energy tunability of synchrotron radiation. X-ray crystal truncation rods (CTR) and X-ray standing wave (XSW) measurements probe changes in surface and adsorbate structures with sub-Ångström resolution. Surface-extended X-ray absorption fine structure (SEXAFS) measurements reveal the coordination structure and chemical state of adsorbates. Grazing-incidence small angle X-ray scattering (GISAXS) yields the size, shape, and orientation of nanoparticles on surfaces. The crystal structure and texture of thin films can be investigated using grazing-incidence X-ray diffraction (GIXD, GIXRD). X-ray photoelectron spectroscopy (XPS) is a standard tool for measuring the chemical states of surface species and for detecting the presence of surface contamination. Surface sensitivity is achieved by detecting photoelectrons with kinetic energies of about 10–1000 eV, which have corresponding inelastic mean free paths of only a few nanometers. This technique has been extended to operate at near-ambient pressures (ambient pressure XPS, AP-XPS) to probe more realistic gas–solid and liquid–solid interfaces. Performing XPS with hard X-rays at synchrotron light sources yields photoelectrons with kinetic energies of several keV (hard X-ray photoelectron spectroscopy, HAXPES), enabling access to chemical information from buried interfaces.

Modern physical analysis methods include scanning-tunneling microscopy (STM) and a family of methods descended from it, including atomic force microscopy (AFM). These microscopies have considerably increased the ability of surface scientists to measure the physical structure of many surfaces. For example, they make it possible to follow reactions at the solid–gas interface in real space, if those proceed on a time scale accessible by the instrument.

See also References Further reading External links "Ram Rao Materials and Surface Science", a video from the Vega Science Trust Surface Chemistry Discoveries Surface Metrology Guide Physical chemistry
Surface science
[ "Physics", "Chemistry", "Materials_science" ]
2,022
[ "Applied and interdisciplinary physics", "Surface science", "Condensed matter physics", "nan", "Physical chemistry" ]
68,518
https://en.wikipedia.org/wiki/Chemisorption
Chemisorption is a kind of adsorption which involves a chemical reaction between the surface and the adsorbate. New chemical bonds are generated at the adsorbent surface. Examples include macroscopic phenomena that can be very obvious, like corrosion, and subtler effects associated with heterogeneous catalysis, where the catalyst and reactants are in different phases. The strong interaction between the adsorbate and the substrate surface creates new types of electronic bonds. In contrast with chemisorption is physisorption, which leaves the chemical species of the adsorbate and surface intact. It is conventionally accepted that the energetic threshold separating the binding energy of "physisorption" from that of "chemisorption" is about 0.5 eV per adsorbed species. Due to specificity, the nature of chemisorption can greatly differ, depending on the chemical identity and the surface structural properties. The bond between the adsorbate and adsorbent in chemisorption is either ionic or covalent.

Uses

An important example of chemisorption is in heterogeneous catalysis, which involves molecules reacting with each other via the formation of chemisorbed intermediates. After the chemisorbed species combine (by forming bonds with each other) the product desorbs from the surface.

Self-assembled monolayers

Self-assembled monolayers (SAMs) are formed by chemisorbing reactive reagents with metal surfaces. A famous example involves thiols (RS-H) adsorbing onto the surface of gold. This process forms strong Au-SR bonds and releases H2. The densely packed SR groups protect the surface.

Gas-surface chemisorption

Adsorption kinetics

As an instance of adsorption, chemisorption follows the adsorption process. The first stage is for the adsorbate particle to come into contact with the surface. The particle needs to be trapped onto the surface by not possessing enough energy to leave the gas-surface potential well. If it collides elastically with the surface, it returns to the bulk gas. If it loses enough momentum through an inelastic collision, it "sticks" onto the surface, forming a precursor state bonded to the surface by weak forces, similar to physisorption. The particle diffuses on the surface until it finds a deep chemisorption potential well. It then reacts with the surface, or simply desorbs after enough energy and time. The reaction with the surface depends on the chemical species involved. Applying the Gibbs energy equation for reactions, ΔG = ΔH − TΔS: general thermodynamics states that for spontaneous reactions at constant temperature and pressure, the change in free energy should be negative (ΔG < 0). Since a free particle becomes restrained to a surface, entropy is lowered (unless the surface atom is highly mobile). This means that the enthalpy term must be negative, implying an exothermic reaction. Physisorption is given as a Lennard-Jones potential and chemisorption is given as a Morse potential. There exists a point of crossover between the physisorption and chemisorption potentials, where transfer from one state to the other can occur. The crossover can lie above or below the zero-energy line (depending on the parameters of the Morse potential, such as a), representing either an activation energy requirement or the lack of one. Most simple gases on clean metal surfaces lack the activation energy requirement.
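The competition just described, between a shallow physisorption well farther from the surface (Lennard-Jones-like) and a deep chemisorption well close to it (Morse-like), can be sketched numerically. The parameter values below are arbitrary illustrative choices, not data for any particular adsorbate/surface system; the sign of the potential at the crossing point is what decides whether the transition into the chemisorbed state is activated.

```python
import math

def lennard_jones(z, epsilon=0.1, sigma=3.0):
    """Physisorption-like potential (eV); z is the distance from the surface in angstroms."""
    return 4.0 * epsilon * ((sigma / z) ** 12 - (sigma / z) ** 6)

def morse(z, depth=2.0, a=1.5, z0=1.5):
    """Chemisorption-like Morse potential (eV) with its minimum of -depth at z = z0."""
    return depth * (1.0 - math.exp(-a * (z - z0))) ** 2 - depth

# Scan distances from the surface and locate where the two curves cross.
# If the crossing lies above zero energy, passing from the physisorbed
# precursor state into the chemisorbed state requires an activation energy;
# if it lies below zero, chemisorption is non-activated.
zs = [1.0 + 0.01 * i for i in range(500)]
z_cross = min(zs, key=lambda z: abs(lennard_jones(z) - morse(z)))
print(f"curves cross near z = {z_cross:.2f} angstrom, "
      f"V = {lennard_jones(z_cross):+.3f} eV")
```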
A multidimensional potential energy surface (PES) derived from effective medium theory is used to describe the effect of the surface on adsorption, but only certain parts of it are used depending on what is to be studied. A simple example of a PES takes the total energy as a function of location: E({R_i}) = E_el({R_i}) + V_ion–ion({R_i}), where E_el is the energy eigenvalue of the Schrödinger equation for the electronic degrees of freedom and V_ion–ion is the ion–ion interaction. This expression is without translational energy, rotational energy, vibrational excitations, and other such considerations. There exist several models to describe surface reactions: the Langmuir–Hinshelwood mechanism, in which both reacting species are adsorbed, and the Eley–Rideal mechanism, in which one is adsorbed and the other reacts with it. Real systems have many irregularities, making theoretical calculations more difficult: solid surfaces are not necessarily at equilibrium; they may be perturbed and irregular, with defects and the like; there may be a distribution of adsorption energies and unusual adsorption sites; and bonds may form between the adsorbates. Compared to physisorption, where adsorbates simply sit on the surface, in chemisorption the adsorbates can change the surface, along with its structure. The structure can go through relaxation, where the first few layers change interplanar distances without changing the surface structure, or reconstruction, where the surface structure is changed. A direct transition from physisorption to chemisorption has been observed by attaching a CO molecule to the tip of an atomic force microscope and measuring its interaction with a single iron atom. For example, oxygen can form very strong bonds (~4 eV) with metals, such as Cu(110). This comes with the breaking apart of surface bonds in forming surface–adsorbate bonds. A large restructuring occurs by a missing-row reconstruction. Dissociative chemisorption A particular kind of gas-surface chemisorption is the dissociation of diatomic gas molecules, such as hydrogen, oxygen, and nitrogen. One model used to describe the process is precursor mediation. The molecule is first adsorbed onto the surface into a precursor state. The molecule then diffuses across the surface to the chemisorption sites, where the molecular bond is broken in favor of new bonds to the surface. The energy to overcome the activation potential of dissociation usually comes from translational energy and vibrational energy. An example is the hydrogen and copper system, one that has been studied many times over. It has a large activation energy of 0.35–0.85 eV. The vibrational excitation of the hydrogen molecule promotes dissociation on low index surfaces of copper. See also Adsorption Physisorption References Bibliography Physical chemistry Catalysis
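As a rough numerical companion to the hydrogen-on-copper discussion, the sketch below uses the error-function form often fitted to activated dissociative sticking data; the barrier, width, and the 50% vibrational efficacy are assumed, illustrative values rather than measured parameters.

```python
from math import erf

# A minimal sketch of activated dissociative sticking, loosely patterned on
# H2/Cu: the sticking probability is modeled with the commonly used form
# S(E) = (A/2) * (1 + erf((E - E0) / W)), with an effective barrier E0 in
# the 0.35-0.85 eV range quoted above. Vibrational excitation is assumed
# to lower the effective barrier with 50% efficacy (an illustrative choice).

E_VIB = 0.52   # approximate H2 vibrational quantum (eV)

def sticking(E_trans, v=0, E0=0.6, W=0.15, A=1.0, efficacy=0.5):
    """Dissociative sticking probability at translational energy E_trans (eV)."""
    E_eff = E_trans + efficacy * v * E_VIB
    return (A / 2.0) * (1.0 + erf((E_eff - E0) / W))

for E in (0.3, 0.5, 0.7):
    print(f"E_trans = {E:.1f} eV: S(v=0) = {sticking(E):.3f}, "
          f"S(v=1) = {sticking(E, v=1):.3f}")
```

With these numbers, exciting H2 to v = 1 shifts the sticking curve toward lower translational energies, mirroring the vibrational promotion of dissociation described above.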
Chemisorption
[ "Physics", "Chemistry" ]
1,315
[ "Catalysis", "Applied and interdisciplinary physics", "nan", "Chemical kinetics", "Physical chemistry" ]
68,520
https://en.wikipedia.org/wiki/Physisorption
Physisorption, also called physical adsorption, is a process in which the electronic structure of the atom or molecule is barely perturbed upon adsorption. Overview The fundamental interacting force of physisorption is the van der Waals force. Even though the interaction energy is very weak (~10–100 meV), physisorption plays an important role in nature. For instance, the van der Waals attraction between surfaces and foot-hairs of geckos (see Synthetic setae) provides the remarkable ability to climb up vertical walls. Van der Waals forces originate from the interactions between induced, permanent or transient electric dipoles. In comparison with chemisorption, in which the electronic structure of bonding atoms or molecules is changed and covalent or ionic bonds form, physisorption does not result in changes to the chemical bonding structure. In practice, the categorisation of a particular adsorption as physisorption or chemisorption depends principally on the binding energy of the adsorbate to the substrate, with physisorption being far weaker on a per-atom basis than any type of connection involving a chemical bond. Modeling by image charge To give a simple illustration of physisorption, we can first consider an adsorbed hydrogen atom in front of a perfect conductor. A nucleus with positive charge is located at R = (0, 0, Z), and the position coordinate of its electron, r = (x, y, z), is given with respect to the nucleus. The adsorption process can be viewed as the interaction between this hydrogen atom and its image charges of both the nucleus and electron in the conductor. As a result, the total electrostatic energy is the sum of attraction and repulsion terms: V = −e²/(4Z) − e²/(4(Z + z)) + e²/(2|2R + r|) + e²/(2|2R + r|). The first term is the attractive interaction of the nucleus and its image charge, and the second term is due to the interaction of the electron and its image charge. The repulsive interaction is shown in the third and fourth terms, arising from the interaction between the nucleus and the image electron and the interaction between the electron and the image nucleus, respectively. By Taylor expansion in powers of |r| / |R|, this interaction energy can be further expressed as V ≈ −(e²/(16Z³))(x² + y² + 2z²) + O(Z⁻⁴). One can find from the first non-vanishing term that the physisorption potential depends on the distance Z between the adsorbed atom and the surface as Z⁻³, in contrast with the r⁻⁶ dependence of the molecular van der Waals potential, where r is the distance between two dipoles. Modeling by quantum-mechanical oscillator The van der Waals binding energy can be analyzed by another simple physical picture: modeling the motion of an electron around its nucleus by a three-dimensional simple harmonic oscillator with a potential energy V_a = (m_e ω²/2)(x² + y² + z²), where m_e and ω are the mass and vibrational frequency of the electron, respectively. As this atom approaches the surface of a metal and forms adsorption, this potential energy V_a will be modified due to the image charges by additional potential terms which are quadratic in the displacements: V_a → V_a − (e²/(16Z³))(x² + y² + 2z²) (from the Taylor expansion above). Assuming the potential is well approximated as three independent harmonic oscillators with frequencies ω_x = ω_y = ω√(1 − e²/(8m_e ω² Z³)) and ω_z = ω√(1 − e²/(4m_e ω² Z³)), if one assumes that the electron is in the ground state, then the van der Waals binding energy is essentially the change of the zero-point energy: ΔE = (ħ/2)(ω_x + ω_y + ω_z − 3ω) ≈ −ħe²/(8m_e ω Z³). This expression also shows the nature of the Z⁻³ dependence of the van der Waals interaction. Furthermore, by introducing the atomic polarizability, α = e²/(m_e ω²), the van der Waals potential can be further simplified: V_v = −C_v/Z³, where C_v = ħωα/8 is the van der Waals constant, which is related to the atomic polarizability.
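A minimal numerical sketch of this oscillator model, in atomic units and with an assumed, illustrative oscillator frequency, confirms that the exact zero-point-energy change tracks the −C_v/Z³ form derived above.

```python
import numpy as np

# Oscillator model of physisorption in atomic units (hbar = m_e = e = 1):
# the image interaction softens the oscillator frequencies, and the binding
# energy is the resulting zero-point-energy change. The frequency w is an
# assumed, illustrative value, giving polarizability alpha = 1/w**2.

w = 0.5
alpha = 1.0 / w**2

def delta_zpe(Z):
    """Zero-point-energy change at height Z above the image plane."""
    wx = w * np.sqrt(1 - 1 / (8 * w**2 * Z**3))   # parallel modes (x and y)
    wz = w * np.sqrt(1 - 1 / (4 * w**2 * Z**3))   # perpendicular mode (z)
    return 0.5 * (2 * wx + wz - 3 * w)

for Z in (5.0, 8.0, 12.0):
    exact = delta_zpe(Z)
    approx = -w * alpha / (8 * Z**3)              # -C_v / Z^3 with C_v = w*alpha/8
    print(f"Z = {Z:4.1f} a0:  dE = {exact:+.2e} Ha,  -Cv/Z^3 = {approx:+.2e} Ha")
```

The two columns agree to leading order and fall off as Z⁻³, as the expansion predicts.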
Also, by expressing the fourth-order correction in the Taylor expansion above as aC_vZ_0/Z⁴, where a is some constant, we can define Z_0 as the position of the dynamical image plane and obtain V_v ≈ −C_v/(Z − Z_0)³. The origin of Z_0 comes from the spilling of the electron wavefunction out of the surface. As a result, the position of the image plane representing the reference for the space coordinate is different from the substrate surface itself and modified by Z_0. Jellium model calculations of the van der Waals constant C_v and the dynamical image plane Z_0 for rare gas atoms on various metal surfaces show that C_v increases from He to Xe for all metal substrates, owing to the larger atomic polarizability of the heavier rare gas atoms, while the position of the dynamical image plane decreases with increasing dielectric function and is typically on the order of 0.2 Å. Physisorption potential Even though the van der Waals interaction is attractive, as the adsorbed atom moves closer to the surface the wavefunction of the electron starts to overlap with that of the surface atoms. Furthermore, the energy of the system will increase due to the orthogonality of the wavefunctions of the approaching atom and the surface atoms. This Pauli exclusion and repulsion are particularly strong for atoms with closed valence shells that dominate the surface interaction. As a result, the minimum energy of physisorption must be found by the balance between the long-range van der Waals attraction and the short-range Pauli repulsion. For instance, by separating the total interaction of physisorption into two contributions—a short-range term depicted by Hartree–Fock theory and a long-range van der Waals attraction—the equilibrium position of physisorption for rare gases adsorbed on a jellium substrate can be determined. For He adsorbed on Ag, Cu, and Au substrates described by the jellium model with different densities of smeared-out background positive charge, the weak van der Waals interaction leads to shallow attractive energy wells (<10 meV). One of the experimental methods for exploring physisorption potential energy is the scattering process, for instance, inert gas atoms scattered from metal surfaces. Certain specific features of the interaction potential between scattered atoms and the surface can be extracted by analyzing the experimentally determined angular distribution and cross sections of the scattered particles. Quantum mechanical – thermodynamic modelling for surface area and porosity Since 1980, two theories have been worked on to explain adsorption and obtain equations that work. These two are referred to as the chi hypothesis (the quantum mechanical derivation) and excess surface work (ESW). Both of these theories yield the same equation for flat surfaces: n_ads/n_m = (χ − χ_c) U(χ − χ_c), where U is the unit step function and χ := −ln(−ln(P/P_vap)). The definitions of the other symbols are as follows: "ads" stands for "adsorbed", "m" stands for "monolayer equivalence" and "vap" refers to the vapor pressure ("ads" and "vap" are the latest IUPAC convention, but "m" has no IUPAC equivalent notation) of the liquid adsorptive at the same temperature as the solid sample. The unit step function embodies the definition of the molar energy of adsorption for the first adsorbed molecule, through the threshold value χ_c. The plot of the amount adsorbed versus χ is referred to as the chi plot. For flat surfaces, the slope of the chi plot yields the surface area.
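A small synthetic example (all numbers assumed for illustration) shows how such a chi-plot analysis proceeds: generate an isotherm that is linear in χ above a threshold, fit the linear region, and convert the slope (interpreted here as the monolayer capacity) into a surface area using an assumed N2 cross-section.

```python
import numpy as np

# Chi-plot sketch for a synthetic flat-surface isotherm. The threshold
# chi_c, the monolayer amount n_m, and the noise level are all assumed,
# illustrative values; n_ads is in micromoles.

P_over_Pvap = np.linspace(0.01, 0.95, 60)
chi = -np.log(-np.log(P_over_Pvap))

chi_c, n_m = -2.0, 10.0
n_ads = n_m * np.clip(chi - chi_c, 0, None)        # (chi - chi_c) * U(chi - chi_c)
n_ads += np.random.default_rng(0).normal(0, 0.05, n_ads.size)  # measurement noise

mask = chi > chi_c + 0.2                           # fit the linear region only
slope, intercept = np.polyfit(chi[mask], n_ads[mask], 1)

# Convert the fitted monolayer capacity to an area with an assumed
# N2 cross-section of 16.2 A^2 per molecule.
area_per_mol = 0.162e-18 * 6.022e23                # m^2 per mole
print(f"slope = {slope:.2f} umol -> surface area ~ {slope*1e-6*area_per_mol:.2f} m^2")
```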
Empirically, this plot was noticed as being a very good fit to the isotherm by Polanyi and also by de Boer and Zwikker, but not pursued. This was due to criticism in the former case by Einstein and in the latter case by Brunauer. This flat surface equation may be used as a "standard curve" in the normal tradition of comparison curves, with the exception that the early portion of the porous sample's plot of the amount adsorbed versus χ acts as a self-standard. Ultramicroporous, microporous and mesoporous conditions may be analyzed using this technique. Standard deviations for full isotherm fits, including porous samples, are typically less than 2%. A typical fit to good data on a homogeneous non-porous surface uses the data of Payne, Sing and Turk, which were used to create the αs standard curve. Unlike the BET method, which can at best be fit over the range of 0.05 to 0.35 of P/P_vap, the range of the fit is the full isotherm. Comparison with chemisorption Physisorption is a general phenomenon and occurs in any solid/fluid or solid/gas system. Chemisorption is characterized by chemical specificity. In physisorption, perturbation of the electronic states of the adsorbent and adsorbate is minimal. The adsorption forces include London forces, dipole–dipole attractions, dipole-induced attractions and "hydrogen bonding". For chemisorption, changes in the electronic states may be detectable by suitable physical means; in other words, there is chemical bonding. The typical binding energy of physisorption is about 10–300 meV, and the adsorbate is non-localized; chemisorption usually forms bonds with energies of 1–10 eV and is localized. The elementary step in physisorption from a gas phase does not involve an activation energy. Chemisorption often involves an activation energy. In physisorption, gas-phase molecules (adsorbates) form multilayer adsorption unless physical barriers, such as porosity, interfere. In chemisorption, molecules are adsorbed on the surface by valence bonds and only form monolayer adsorption. A direct transition from physisorption to chemisorption has been observed by attaching a CO molecule to the tip of an atomic force microscope and measuring its interaction with a single iron atom. This effect was observed in the late 1960s for benzene, from field emission as reported by Condon and from ESR measurements as reported by Moyes and Wells. Another way of looking at this is that chemisorption alters the topology of the electrons in the adsorbate molecule (by the process of chemical reaction) but physisorption does not. See also Adsorption Chemisorption van der Waals force References Surface science
Physisorption
[ "Physics", "Chemistry", "Materials_science" ]
2,064
[ "Condensed matter physics", "Surface science" ]
68,946
https://en.wikipedia.org/wiki/Born%E2%80%93Oppenheimer%20approximation
In quantum chemistry and molecular physics, the Born–Oppenheimer (BO) approximation is the best-known mathematical approximation in molecular dynamics. Specifically, it is the assumption that the wave functions of atomic nuclei and electrons in a molecule can be treated separately, based on the fact that the nuclei are much heavier than the electrons. Due to the larger relative mass of a nucleus compared to an electron, the coordinates of the nuclei in a system are approximated as fixed, while the coordinates of the electrons are dynamic. The approach is named after Max Born and his 23-year-old graduate student J. Robert Oppenheimer, the latter of whom proposed it in 1927 during a period of intense ferment in the development of quantum mechanics. The approximation is widely used in quantum chemistry to speed up the computation of molecular wavefunctions and other properties for large molecules. There are cases where the assumption of separable motion no longer holds, which make the approximation lose validity (it is said to "break down"), but even then the approximation is usually used as a starting point for more refined methods. In molecular spectroscopy, using the BO approximation means considering molecular energy as a sum of independent terms, e.g.: E_total = E_electronic + E_vibrational + E_rotational + E_nuclear spin. These terms are of different orders of magnitude and the nuclear spin energy is so small that it is often omitted. The electronic energies consist of kinetic energies, interelectronic repulsions, internuclear repulsions, and electron–nuclear attractions, which are the terms typically included when computing the electronic structure of molecules. Example The benzene molecule consists of 12 nuclei and 42 electrons. The Schrödinger equation, which must be solved to obtain the energy levels and wavefunction of this molecule, is a partial differential eigenvalue equation in the three-dimensional coordinates of the nuclei and electrons, giving 3 × 12 = 36 nuclear plus 3 × 42 = 126 electronic, totalling 162 variables for the wave function. The computational complexity, i.e., the computational power required to solve an eigenvalue equation, increases faster than the square of the number of coordinates. When applying the BO approximation, two smaller, consecutive steps can be used: For a given position of the nuclei, the electronic Schrödinger equation is solved, while treating the nuclei as stationary (not "coupled" with the dynamics of the electrons). This corresponding eigenvalue problem then consists only of the 126 electronic coordinates. This electronic computation is then repeated for other possible positions of the nuclei, i.e. deformations of the molecule. For benzene, this could be done using a grid over the 36 nuclear position coordinates. The electronic energies on this grid are then connected to give a potential energy surface for the nuclei. This potential is then used for a second Schrödinger equation containing only the 36 coordinates of the nuclei. So, taking the most optimistic estimate for the complexity, instead of one large calculation in all 162 coordinates, a series of N smaller calculations in the 126 electronic coordinates (with N being the number of grid points for the potential) and a very small calculation in the 36 nuclear coordinates can be performed. In practice, the scaling of the problem is worse than quadratic in the number of coordinates, and more approximations are applied in computational chemistry to further reduce the number of variables and dimensions.
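To see why the split pays off, consider an assumed toy cost model in which solving a grid-based eigenvalue problem in d coordinates with g points per coordinate costs about g^d operations; the numbers below are purely illustrative.

```python
import math

# Toy cost model (an assumption for illustration, not a real complexity
# analysis): a grid-based eigenproblem in d coordinates with g points per
# coordinate costs ~ g**d operations. The split of benzene's 162 coordinates
# into 126 electronic + 36 nuclear ones then yields an enormous saving.

g = 10                      # assumed grid points per coordinate
n_el, n_nuc = 126, 36
N = 10 ** 3                 # assumed number of nuclear geometries on the PES

log10_full = (n_el + n_nuc) * math.log10(g)          # one 162-D problem
log10_bo = math.log10(N) + n_el * math.log10(g)      # N 126-D solves dominate

print(f"coupled problem : ~1e{log10_full:.0f} operations")
print(f"BO factorization: ~1e{log10_bo:.0f} operations (plus one 36-D solve)")
```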
The slope of the potential energy surface can be used to simulate molecular dynamics, using it to express the mean force on the nuclei caused by the electrons and thereby skipping the calculation of the nuclear Schrödinger equation. Detailed description The BO approximation recognizes the large difference between the electron mass and the masses of atomic nuclei, and correspondingly the time scales of their motion. Given the same amount of momentum, the nuclei move much more slowly than the electrons. In mathematical terms, the BO approximation consists of expressing the wavefunction Ψ_total of a molecule as the product of an electronic wavefunction and a nuclear (vibrational, rotational) wavefunction: Ψ_total = ψ_electronic ψ_nuclear. This enables a separation of the Hamiltonian operator into electronic and nuclear terms, where cross-terms between electrons and nuclei are neglected, so that the two smaller and decoupled systems can be solved more efficiently. In the first step, the nuclear kinetic energy is neglected, that is, the corresponding operator T_n is subtracted from the total molecular Hamiltonian. In the remaining electronic Hamiltonian H_e the nuclear positions are no longer variable, but are constant parameters (they enter the equation "parametrically"). The electron–nucleus interactions are not removed, i.e., the electrons still "feel" the Coulomb potential of the nuclei clamped at certain positions in space. (This first step of the BO approximation is therefore often referred to as the clamped-nuclei approximation.) The electronic Schrödinger equation H_e χ(r; R) = E_e(R) χ(r; R), where χ(r; R) is the electronic wavefunction for given positions of the nuclei (fixed R), is solved approximately. The quantity r stands for all electronic coordinates and R for all nuclear coordinates. The electronic energy eigenvalue E_e depends on the chosen positions R of the nuclei. Varying these positions R in small steps and repeatedly solving the electronic Schrödinger equation, one obtains E_e as a function of R. This is the potential energy surface (PES): E_e(R). Because this procedure of recomputing the electronic wave functions as a function of an infinitesimally changing nuclear geometry is reminiscent of the conditions for the adiabatic theorem, this manner of obtaining a PES is often referred to as the adiabatic approximation and the PES itself is called an adiabatic surface. In the second step of the BO approximation, the nuclear kinetic energy T_n (containing partial derivatives with respect to the components of R) is reintroduced, and the Schrödinger equation for the nuclear motion is solved. This second step of the BO approximation involves separation of vibrational, translational, and rotational motions. This can be achieved by application of the Eckart conditions. The eigenvalue E is the total energy of the molecule, including contributions from electrons, nuclear vibrations, and overall rotation and translation of the molecule. In accord with the Hellmann–Feynman theorem, the nuclear potential is taken to be an average over electron configurations of the sum of the electron–nuclear and internuclear electric potentials.
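The two steps can be mimicked in a few lines. In the sketch below (an assumed toy model in atomic units, not a real electronic-structure calculation), a Morse function stands in for the clamped-nuclei electronic energy of a diatomic at each grid geometry, and the resulting PES then feeds a finite-difference nuclear Schrödinger equation.

```python
import numpy as np

# Two-step BO toy model for a diatomic, in atomic units. Step 1 pretends
# the "electronic solve" at each clamped nuclear separation R returns a
# Morse-shaped ground-state energy Ee(R); step 2 diagonalizes the 1-D
# nuclear Hamiltonian on that PES. All parameters are assumed values.

def electronic_energy(R, D=0.17, a=1.0, Re=2.0):
    """Stand-in for solving the clamped-nuclei electronic problem at R."""
    return D * (1 - np.exp(-a * (R - Re))) ** 2 - D

# Step 1: build the PES on a grid of nuclear geometries.
R = np.linspace(1.0, 6.0, 400)
Ee = electronic_energy(R)

# Step 2: nuclear equation  [-(1/2mu) d^2/dR^2 + Ee(R)] phi = E phi
mu = 1000.0                        # assumed reduced mass (a.u.)
h = R[1] - R[0]
kin = (-1 / (2 * mu * h**2)) * (np.diag(np.ones(R.size - 1), 1)
                                + np.diag(np.ones(R.size - 1), -1)
                                - 2 * np.eye(R.size))
E, phi = np.linalg.eigh(kin + np.diag(Ee))
print("lowest vibrational levels (Hartree):", np.round(E[:3], 5))
```

The eigenvalues approximate the vibrational ladder of the nuclei moving on the adiabatic surface, exactly the quantity the second BO step delivers.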
It will be shown that the BO approximation can be trusted whenever the PESs, obtained from the solution of the electronic Schrödinger equation, are well separated: E_0(R) ≪ E_1(R) ≪ E_2(R) ≪ ⋯ for all R. We start from the exact non-relativistic, time-independent molecular Hamiltonian: H = H_e + T_n, with H_e = −∑_i ½∇²_i − ∑_{i,A} Z_A/r_{iA} + ∑_{i>j} 1/r_{ij} + ∑_{B>A} Z_A Z_B/R_{AB} and T_n = −∑_A (1/(2M_A)) ∇²_A. The position vectors r ≡ {r_i} of the electrons and the position vectors R ≡ {R_A} of the nuclei are with respect to a Cartesian inertial frame. Distances between particles are written as r_{iA} ≡ |r_i − R_A| (distance between electron i and nucleus A), and similar definitions hold for r_{ij} and R_{AB}. We assume that the molecule is in a homogeneous (no external force) and isotropic (no external torque) space. The only interactions are the two-body Coulomb interactions among the electrons and nuclei. The Hamiltonian is expressed in atomic units, so that we do not see the Planck constant, the dielectric constant of the vacuum, electronic charge, or electronic mass in this formula. The only constants explicitly entering the formula are Z_A and M_A, the atomic number and mass of nucleus A. It is useful to introduce the total nuclear momentum P_A = −i∇_A and to rewrite the nuclear kinetic energy operator as follows: T_n = ∑_A ∑_{α=x,y,z} P_{Aα} P_{Aα}/(2M_A). Suppose we have K electronic eigenfunctions χ_k(r; R) of H_e; that is, we have solved H_e χ_k(r; R) = E_k(R) χ_k(r; R) for k = 1, …, K. The electronic wave functions χ_k will be taken to be real, which is possible when there are no magnetic or spin interactions. The parametric dependence of the functions χ_k on the nuclear coordinates is indicated by the symbol after the semicolon. This indicates that, although χ_k is a real-valued function of r, its functional form depends on R. For example, in the molecular-orbital-linear-combination-of-atomic-orbitals (LCAO-MO) approximation, χ_k is a molecular orbital (MO) given as a linear expansion of atomic orbitals (AOs). An AO depends visibly on the coordinates of an electron, but the nuclear coordinates are not explicit in the MO. However, upon change of geometry, i.e., change of R, the LCAO coefficients obtain different values and we see corresponding changes in the functional form of the MO χ_k. We will assume that the parametric dependence is continuous and differentiable, so that it is meaningful to consider ∇_A χ_k(r; R), which in general will not be zero. The total wave function Ψ(R, r) is expanded in terms of the χ_k: Ψ(R, r) = ∑_{k=1}^{K} χ_k(r; R) φ_k(R), with ⟨χ_{k'}(r; R) | χ_k(r; R)⟩_(r) = δ_{k'k}, where the subscript (r) indicates that the integration, implied by the bra–ket notation, is over electronic coordinates only. By definition, the matrix with general element (𝔼(R))_{k'k} ≡ ⟨χ_{k'} | H_e | χ_k⟩_(r) = δ_{k'k} E_k(R) is diagonal. After multiplication by the real function χ_{k'}(r; R) from the left and integration over the electronic coordinates r, the total Schrödinger equation HΨ = EΨ is turned into a set of K coupled eigenvalue equations depending on nuclear coordinates only: [H_n(R) + 𝔼(R)] φ(R) = E φ(R). The column vector φ(R) has elements φ_k(R), k = 1, …, K. The matrix 𝔼(R) is diagonal, and the nuclear Hamilton matrix H_n(R) is non-diagonal; its off-diagonal (vibronic coupling) terms are further discussed below. The vibronic coupling in this approach is through nuclear kinetic energy terms. Solution of these coupled equations gives an approximation for energy and wavefunction that goes beyond the Born–Oppenheimer approximation. Unfortunately, the off-diagonal kinetic energy terms are usually difficult to handle. This is why often a diabatic transformation is applied, which retains part of the nuclear kinetic energy terms on the diagonal, removes the kinetic energy terms from the off-diagonal and creates coupling terms between the adiabatic PESs on the off-diagonal. If we can neglect the off-diagonal elements, the equations will uncouple and simplify drastically.
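The size of these off-diagonal couplings can be probed directly in a minimal two-state model (an assumed, generic avoided-crossing Hamiltonian, not any specific molecule): the nonadiabatic coupling computed by finite differences peaks sharply where the adiabatic gap is smallest, exactly the regime in which the neglect above fails.

```python
import numpy as np

# Two-state avoided crossing with diabatic Hamiltonian
# H(R) = [[k*R, c], [c, -k*R]]; the adiabatic gap is 2*sqrt((k*R)^2 + c^2)
# and <chi_upper | d chi_lower / dR> is evaluated numerically. Parameter
# values k and c are assumed, illustrative numbers.

k, c = 1.0, 0.05
R = np.linspace(-1.0, 1.0, 4001)
dR = R[1] - R[0]

states = []
for r in R:
    _, v = np.linalg.eigh(np.array([[k * r, c], [c, -k * r]]))
    states.append(v)           # columns sorted: v[:,0] lower, v[:,1] upper
states = np.array(states)

# Enforce a continuous sign convention along R for both eigenvectors.
for j in (0, 1):
    for i in range(1, len(R)):
        if states[i, :, j] @ states[i - 1, :, j] < 0:
            states[i, :, j] *= -1

d_lower = np.gradient(states[:, :, 0], dR, axis=0)
d12 = np.einsum('ij,ij->i', states[:, :, 1], d_lower)

for r_probe in (0.0, 0.2, 0.5):
    i = int(np.argmin(np.abs(R - r_probe)))
    gap = 2 * np.sqrt((k * R[i])**2 + c**2)
    analytic = k * c / (2 * ((k * R[i])**2 + c**2))
    print(f"R = {R[i]:+.2f}: gap = {gap:.3f}, |d12| = {abs(d12[i]):.2f} "
          f"(analytic {analytic:.2f})")
```

At the crossing point the coupling is an order of magnitude larger than a small distance away, which is the quantitative content of the breakdown criterion discussed next.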
In order to show when this neglect is justified, we suppress the coordinates in the notation and write, by applying the Leibniz rule for differentiation, the matrix elements of T_n as T_n(R)_{k'k} ≡ ⟨χ_{k'} | T_n | χ_k⟩_(r) = δ_{k'k} T_n + ∑_A (1/M_A) ⟨χ_{k'} | P_A χ_k⟩_(r) · P_A + ⟨χ_{k'} | T_n χ_k⟩_(r). The diagonal (k' = k) matrix elements ⟨χ_k | P_A χ_k⟩_(r) of the operator P_A vanish, because we assume time-reversal invariance, so χ_k can be chosen to be always real. The off-diagonal matrix elements satisfy ⟨χ_{k'} | P_A χ_k⟩_(r) = ⟨χ_{k'} | [P_A, H_e] χ_k⟩_(r) / (E_k(R) − E_{k'}(R)). The matrix element in the numerator is ⟨χ_{k'} | [P_A, H_e] χ_k⟩_(r) = iZ_A ∑_i ⟨χ_{k'} | (r_{iA}/r_{iA}³) χ_k⟩_(r), where r_{iA} is the vector from nucleus A to electron i. The matrix element of this one-electron operator appearing on the right side is finite. When the two surfaces come close, E_k(R) ≈ E_{k'}(R), the nuclear momentum coupling term becomes large and is no longer negligible. This is the case where the BO approximation breaks down, and a coupled set of nuclear motion equations must be considered instead of the one equation appearing in the second step of the BO approximation. Conversely, if all surfaces are well separated, all off-diagonal terms can be neglected, and hence the whole matrix of momentum couplings is effectively zero. The third term on the right side of the expression for the matrix element of T_n (the Born–Oppenheimer diagonal correction) can approximately be written as the matrix of ⟨χ_{k'} | P_A χ_k⟩_(r) squared and, accordingly, is then negligible also. Only the first (diagonal) kinetic energy term in this equation survives in the case of well-separated surfaces, and a diagonal, uncoupled set of nuclear motion equations results: [T_n + E_k(R)] φ_k(R) = E φ_k(R), k = 1, …, K, which are the normal second-step BO equations discussed above. We reiterate that when two or more potential energy surfaces approach each other, or even cross, the Born–Oppenheimer approximation breaks down, and one must fall back on the coupled equations. Usually one invokes then the diabatic approximation. Born–Oppenheimer approximation with correct symmetry To include the correct symmetry within the Born–Oppenheimer (BO) approximation, a molecular system presented in terms of (mass-dependent) nuclear coordinates q and formed by the two lowest BO adiabatic potential energy surfaces (PES), u₁(q) and u₂(q), is considered. To ensure the validity of the BO approximation, the energy E of the system is assumed to be low enough so that u₂(q) becomes a closed PES in the region of interest, with the exception of sporadic infinitesimal sites surrounding degeneracy points formed by u₁(q) and u₂(q) (designated as (1, 2) degeneracy points). The starting point is the nuclear adiabatic BO (matrix) equation written in the form −(ħ²/2m)(∇ + τ)²Ψ + (u − E)Ψ = 0, where Ψ(q) is a column vector containing the unknown nuclear wave functions ψ_k(q), u(q) is a diagonal matrix containing the corresponding adiabatic potential energy surfaces u_k(q), m is the reduced mass of the nuclei, E is the total energy of the system, ∇ is the gradient operator with respect to the nuclear coordinates q, and τ(q) is a matrix containing the vectorial non-adiabatic coupling terms (NACT): τ_{jk} = ⟨ζ_j | ∇ζ_k⟩. Here the ζ_k are eigenfunctions of the electronic Hamiltonian assumed to form a complete Hilbert space in the given region in configuration space. To study the scattering process taking place on the two lowest surfaces, one extracts from the above BO equation the two corresponding equations for ψ₁ and ψ₂, where u_k (k = 1, 2) are the two lowest surfaces and τ ≡ τ₁₂ is the (vectorial) NACT responsible for the coupling between u₁ and u₂.
Next a new function is introduced: and the corresponding rearrangements are made: Multiplying the second equation by i and combining it with the first equation yields the (complex) equation The last term in this equation can be deleted for the following reasons: At those points where is classically closed, by definition, and at those points where becomes classically allowed (which happens at the vicinity of the (1, 2) degeneracy points) this implies that: , or . Consequently, the last term is, indeed, negligibly small at every point in the region of interest, and the equation simplifies to become In order for this equation to yield a solution with the correct symmetry, it is suggested to apply a perturbation approach based on an elastic potential , which coincides with at the asymptotic region. The equation with an elastic potential can be solved, in a straightforward manner, by substitution. Thus, if is the solution of this equation, it is presented as where is an arbitrary contour, and the exponential function contains the relevant symmetry as created while moving along . The function can be shown to be a solution of the (unperturbed/elastic) equation Having , the full solution of the above decoupled equation takes the form where satisfies the resulting inhomogeneous equation: In this equation the inhomogeneity ensures the symmetry for the perturbed part of the solution along any contour and therefore for the solution in the required region in configuration space. The relevance of the present approach was demonstrated while studying a two-arrangement-channel model (containing one inelastic channel and one reactive channel) for which the two adiabatic states were coupled by a Jahn–Teller conical intersection. A nice fit between the symmetry-preserved single-state treatment and the corresponding two-state treatment was obtained. This applies in particular to the reactive state-to-state probabilities (see Table III in Ref. 5a and Table III in Ref. 5b), for which the ordinary BO approximation led to erroneous results, whereas the symmetry-preserving BO approximation produced the accurate results, as they followed from solving the two coupled equations. See also Adiabatic ionization Adiabatic process (quantum mechanics) Avoided crossing Born–Huang approximation Franck–Condon principle Kohn anomaly Notes References External links Resources related to the Born–Oppenheimer approximation: The original article (in German) Translation by S. M. Blinder Another version of the same translation by S. M. Blinder The Born–Oppenheimer approximation, a section from Peter Haynes' doctoral thesis Quantum chemistry Approximations Max Born J. Robert Oppenheimer
Born–Oppenheimer approximation
[ "Physics", "Chemistry", "Mathematics" ]
3,227
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Mathematical relations", " molecular", "Atomic", "Approximations", " and optical physics" ]
69,720
https://en.wikipedia.org/wiki/Gastrointestinal%20tract
The gastrointestinal tract (GI tract, digestive tract, alimentary canal) is the tract or passageway of the digestive system that leads from the mouth to the anus. The GI tract contains all the major organs of the digestive system, in humans and other animals, including the esophagus, stomach, and intestines. Food taken in through the mouth is digested to extract nutrients and absorb energy, and the waste is expelled at the anus as feces. Gastrointestinal is an adjective meaning of or pertaining to the stomach and intestines. Most animals have a "through-gut" or complete digestive tract. Exceptions are more primitive ones: sponges have small pores (ostia) throughout their body for digestion and a larger dorsal pore (osculum) for excretion, comb jellies have both a ventral mouth and dorsal anal pores, while cnidarians and acoels have a single pore for both digestion and excretion. The human gastrointestinal tract consists of the esophagus, stomach, and intestines, and is divided into the upper and lower gastrointestinal tracts. The GI tract includes all structures between the mouth and the anus, forming a continuous passageway that includes the main organs of digestion, namely, the stomach, small intestine, and large intestine. The complete human digestive system is made up of the gastrointestinal tract plus the accessory organs of digestion (the tongue, salivary glands, pancreas, liver and gallbladder). The tract may also be divided into foregut, midgut, and hindgut, reflecting the embryological origin of each segment. The whole human GI tract is about nine meters (30 feet) long at autopsy. It is considerably shorter in the living body because the intestines, which are tubes of smooth muscle tissue, maintain constant muscle tone in a halfway-tense state but can relax in spots to allow for local distention and peristalsis. The gastrointestinal tract contains the gut microbiota, with some 1,000 different strains of bacteria having diverse roles in the maintenance of immune health and metabolism, and many other microorganisms. Cells of the GI tract release hormones to help regulate the digestive process. These digestive hormones, including gastrin, secretin, cholecystokinin, and ghrelin, are mediated through either intracrine or autocrine mechanisms, indicating that the cells releasing these hormones are conserved structures throughout evolution. Human gastrointestinal tract Structure The structure and function can be described both as gross anatomy and as microscopic anatomy or histology. The tract itself is divided into upper and lower tracts, and the intestines into small and large parts. Upper gastrointestinal tract The upper gastrointestinal tract consists of the mouth, pharynx, esophagus, stomach, and duodenum. The exact demarcation between the upper and lower tracts is the suspensory muscle of the duodenum. This differentiates the embryonic borders between the foregut and midgut, and is also the division commonly used by clinicians to describe gastrointestinal bleeding as being of either "upper" or "lower" origin. Upon dissection, the duodenum may appear to be a unified organ, but it is divided into four segments based on function, location, and internal anatomy. The four segments of the duodenum are as follows (starting at the stomach, and moving toward the jejunum): bulb, descending, horizontal, and ascending. The suspensory muscle attaches the superior border of the ascending duodenum to the jejunum.
The suspensory muscle is an important anatomical landmark that shows the formal division between the duodenum and the jejunum, the first and second parts of the small intestine, respectively. This is a thin muscle which is derived from the embryonic mesoderm. Lower gastrointestinal tract The lower gastrointestinal tract includes most of the small intestine and all of the large intestine. In human anatomy, the intestine (bowel or gut; Greek: éntera) is the segment of the gastrointestinal tract extending from the pyloric sphincter of the stomach to the anus and, as in other mammals, consists of two segments: the small intestine and the large intestine. In humans, the small intestine is further subdivided into the duodenum, jejunum, and ileum, while the large intestine is subdivided into the cecum, ascending, transverse, descending, and sigmoid colon, rectum, and anal canal. Small intestine The small intestine begins at the duodenum and is a tubular structure, usually between 6 and 7 m long. Its mucosal area in an adult human is about . The combination of the circular folds, the villi, and the microvilli increases the absorptive area of the mucosa about 600-fold, making a total area of about for the entire small intestine. Its main function is to absorb the products of digestion (including carbohydrates, proteins, lipids, and vitamins) into the bloodstream. There are three major divisions: Duodenum: A short structure (about 20–25 cm long) that receives chyme from the stomach, together with pancreatic juice containing digestive enzymes and bile from the gall bladder. The digestive enzymes break down proteins, and bile emulsifies fats into micelles. The duodenum contains Brunner's glands, which produce a mucus-rich alkaline secretion containing bicarbonate. These secretions, in combination with bicarbonate from the pancreas, neutralize the stomach acids contained in the chyme. Jejunum: This is the midsection of the small intestine, connecting the duodenum to the ileum. It is about 2.5 m long and contains the circular folds, also known as plicae circulares, and villi that increase its surface area. Products of digestion (sugars, amino acids, and fatty acids) are absorbed into the bloodstream here. Ileum: The final section of the small intestine. It is about 3 m long, and contains villi similar to the jejunum. It absorbs mainly vitamin B12 and bile acids, as well as any other remaining nutrients. Large intestine The large intestine, also called the colon, forms an arch starting at the cecum and ending at the rectum and anal canal. It also includes the appendix, which is attached to the cecum. Its length is about 1.5 m, and the area of the mucosa in an adult human is about . Its main function is to absorb water and salts. The colon is further divided into: Cecum (first portion of the colon) and appendix Ascending colon (ascending in the back wall of the abdomen) Right colic flexure (flexed portion of the ascending and transverse colon adjacent to the liver) Transverse colon (passing below the diaphragm) Left colic flexure (flexed portion of the transverse and descending colon adjacent to the spleen) Descending colon (descending down the left side of the abdomen) Sigmoid colon (a loop of the colon closest to the rectum) Rectum Anal canal Development The gut is an endoderm-derived structure.
At approximately the sixteenth day of human development, the embryo begins to fold ventrally (with the embryo's ventral surface becoming concave) in two directions: the sides of the embryo fold in on each other, and the head and tail fold toward one another. The result is that a piece of the yolk sac, an endoderm-lined structure in contact with the ventral aspect of the embryo, begins to be pinched off to become the primitive gut. The yolk sac remains connected to the gut tube via the vitelline duct. Usually, this structure regresses during development; in cases where it does not, it is known as Meckel's diverticulum. During fetal life, the primitive gut is gradually patterned into three segments: foregut, midgut, and hindgut. Although these terms are often used in reference to segments of the primitive gut, they are also used regularly to describe regions of the definitive gut as well. Each segment of the gut is further specified and gives rise to specific gut and gut-related structures in later development. Components derived from the gut proper, including the stomach and colon, develop as swellings or dilatations in the cells of the primitive gut. In contrast, gut-related derivatives (that is, those structures that derive from the primitive gut but are not part of the gut proper) in general develop as out-pouchings of the primitive gut. The blood vessels supplying these structures remain constant throughout development. Histology The gastrointestinal tract has a form of general histology with some differences that reflect the specialization in functional anatomy. The GI tract can be divided into four concentric layers in the following order: Mucosa Submucosa Muscular layer Adventitia or serosa Mucosa The mucosa is the innermost layer of the gastrointestinal tract. The mucosa surrounds the lumen, or open space within the tube. This layer comes in direct contact with digested food (chyme). The mucosa is made up of: Epithelium – innermost layer; responsible for most digestive, absorptive and secretory processes. Lamina propria – a layer of connective tissue, unusually cellular compared to most connective tissue. Muscularis mucosae – a thin layer of smooth muscle that aids the passing of material and enhances the interaction between the epithelial layer and the contents of the lumen by agitation and peristalsis. The mucosae are highly specialized in each organ of the gastrointestinal tract to deal with the different conditions. The most variation is seen in the epithelium. Submucosa The submucosa consists of a dense irregular layer of connective tissue with large blood vessels, lymphatics, and nerves branching into the mucosa and muscularis externa. It contains the submucosal plexus, an enteric nervous plexus, situated on the inner surface of the muscularis externa. Muscular layer The muscular layer consists of an inner circular layer and an outer longitudinal layer. The circular layer prevents food from traveling backward and the longitudinal layer shortens the tract. The layers are not truly longitudinal or circular; rather, the layers of muscle are helical with different pitches. The inner circular is helical with a steep pitch and the outer longitudinal is helical with a much shallower pitch. Whilst the muscularis externa is similar throughout the entire gastrointestinal tract, an exception is the stomach, which has an additional inner oblique muscular layer to aid with grinding and mixing of food.
The muscularis externa of the stomach is composed of the inner oblique layer, middle circular layer, and the outer longitudinal layer. Between the circular and longitudinal muscle layers is the myenteric plexus. This controls peristalsis. Activity is initiated by the pacemaker cells (the myenteric interstitial cells of Cajal). The gut has intrinsic peristaltic activity (basal electrical rhythm) due to its self-contained enteric nervous system. The rate can be modulated by the rest of the autonomic nervous system. The coordinated contraction of these layers is called peristalsis and propels the food through the tract. Food in the GI tract is called a bolus (ball of food) from the mouth down to the stomach. After the stomach, the food is partially digested and semi-liquid, and is referred to as chyme. In the large intestine, the remaining semi-solid substance is referred to as faeces. Adventitia and serosa The outermost layer of the gastrointestinal tract consists of several layers of connective tissue. Intraperitoneal parts of the GI tract are covered with serosa. These include most of the stomach, first part of the duodenum, all of the small intestine, caecum and appendix, transverse colon, sigmoid colon and rectum. In these sections of the gut, there is a clear boundary between the gut and the surrounding tissue. These parts of the tract have a mesentery. Retroperitoneal parts are covered with adventitia. They blend into the surrounding tissue and are fixed in position. For example, the retroperitoneal section of the duodenum usually passes through the transpyloric plane. These include the esophagus, pylorus of the stomach, distal duodenum, ascending colon, descending colon and anal canal. In addition, the oral cavity has adventitia. Gene and protein expression Approximately 20,000 protein-coding genes are expressed in human cells and 75% of these genes are expressed in at least one of the different parts of the digestive organ system. Over 600 of these genes are more specifically expressed in one or more parts of the GI tract and the corresponding proteins have functions related to digestion of food and uptake of nutrients. Examples of specific proteins with such functions are pepsinogen PGC and the lipase LIPF, expressed in chief cells, and gastric ATPase ATP4A and gastric intrinsic factor GIF, expressed in parietal cells of the stomach mucosa. Specific proteins expressed in the stomach and duodenum involved in defence include mucin proteins, such as mucin 6 and intelectin-1. Transit time The time taken for food to transit through the gastrointestinal tract depends on multiple factors, including age, ethnicity, and gender. Several techniques have been used to measure transit time, including radiography following a barium-labeled meal, breath hydrogen analysis, scintigraphic analysis following a radiolabeled meal, and simple ingestion and spotting of corn kernels. It takes 2.5 to 3 hours for 50% of the contents to leave the stomach. The rate of digestion is also dependent on the material being digested, as food composition from the same meal may leave the stomach at different rates. Total emptying of the stomach takes around 4–5 hours, and transit through the colon takes 30 to 50 hours. Immune function The gastrointestinal tract forms an important part of the immune system. Immune barrier The surface area of the digestive tract is estimated to be about 32 square meters, or about half a badminton court.
With such a large exposure (more than three times larger than the exposed surface of the skin), these immune components function to prevent pathogens from entering the blood and lymph circulatory systems. Fundamental components of this protection are provided by the intestinal mucosal barrier, which is composed of physical, biochemical, and immune elements elaborated by the intestinal mucosa. Microorganisms also are kept at bay by an extensive immune system comprising the gut-associated lymphoid tissue (GALT). There are additional factors contributing to protection from pathogen invasion. For example, the low pH (ranging from 1 to 4) of the stomach is fatal for many microorganisms that enter it. Similarly, mucus (containing IgA antibodies) neutralizes many pathogenic microorganisms. Other factors in the GI tract contributing to immune function include enzymes secreted in the saliva and bile. Immune system homeostasis Beneficial bacteria also can contribute to the homeostasis of the gastrointestinal immune system. For example, Clostridia, one of the most predominant bacterial groups in the GI tract, play an important role in influencing the dynamics of the gut's immune system. It has been demonstrated that the intake of a high-fiber diet could be responsible for the induction of T-regulatory cells (Tregs). This is due to the production of short-chain fatty acids, such as butyrate and propionate, during the fermentation of plant-derived nutrients. The butyrate induces the differentiation of Treg cells by enhancing histone H3 acetylation in the promoter and conserved non-coding sequence regions of the FOXP3 locus, thus regulating the T cells, resulting in the reduction of the inflammatory response and allergies. Intestinal microbiota The large intestine contains multiple types of bacteria that can break down molecules the human body cannot process alone, demonstrating a symbiotic relationship. These bacteria are responsible for gas production at the host–pathogen interface, which is released as flatulence. Intestinal bacteria can also participate in biosynthesis reactions. For example, certain strains in the large intestine produce vitamin B12, an essential compound in humans for processes such as DNA synthesis and red blood cell production. However, the primary function of the large intestine is water absorption from digested material (regulated by the hypothalamus) and the reabsorption of sodium and nutrients. Beneficial intestinal bacteria compete with potentially harmful bacteria for space and "food", as the intestinal tract has limited resources. A ratio of 80–85% beneficial to 15–20% potentially harmful bacteria is proposed for maintaining homeostasis. An imbalanced ratio results in dysbiosis. Detoxification and drug metabolism Enzymes such as CYP3A4, along with the antiporter activities, are also instrumental in the intestine's role of drug metabolism in the detoxification of antigens and xenobiotics. Other animals In most vertebrates, including amphibians, birds, reptiles, egg-laying mammals, and some fish, the gastrointestinal tract ends in a cloaca and not an anus. In the cloaca, the urinary system is fused with the genito-anal pore. Therians (all mammals that do not lay eggs, including humans) possess separate anal and uro-genital openings. The females of the subgroup Placentalia even have separate urinary and genital openings. During early development, the asymmetric position of the bowels and inner organs is initiated (see also axial twist theory).
Ruminants show many specializations for digesting and fermenting tough plant material, including additional stomach compartments. Many birds and other animals have a specialised stomach in the digestive tract called a gizzard, used for grinding up food. Another feature found in a range of animals is the crop. In birds this is found as a pouch alongside the esophagus. In 2020, the oldest known fossil digestive tract, belonging to an extinct wormlike organism in the Cloudinidae, was discovered; it lived during the late Ediacaran period about 550 million years ago. A through-gut (one with both mouth and anus) is thought to have evolved within the nephrozoan clade of Bilateria, after their ancestral ventral orifice (single, as in cnidarians and acoels; re-evolved in nephrozoans like flatworms) stretched antero-posteriorly, before the middle part of the stretch would get narrower and close fully, leaving an anterior orifice (mouth) and a posterior orifice (anus plus genital opening). A stretched gut without the middle part closed is present in another branch of bilaterians, the extinct proarticulates. This and the amphistomic development (when both mouth and anus develop from the gut stretch in the embryo) present in some nephrozoans (e.g. roundworms) are considered to support this hypothesis. Clinical significance Diseases There are many diseases and conditions that can affect the gastrointestinal system, including infections, inflammation and cancer. Various pathogens, such as bacteria that cause foodborne illnesses, can induce gastroenteritis, which results from inflammation of the stomach and small intestine. Antibiotics to treat such bacterial infections can decrease the microbiome diversity of the gastrointestinal tract, and further enable inflammatory mediators. Gastroenteritis is the most common disease of the GI tract. Gastrointestinal cancer may occur at any point in the gastrointestinal tract, and includes mouth cancer, tongue cancer, oesophageal cancer, stomach cancer, and colorectal cancer. Among inflammatory conditions, ileitis is an inflammation of the ileum and colitis is an inflammation of the large intestine. Appendicitis is inflammation of the appendix, located at the caecum. This is a potentially fatal condition if left untreated; most cases of appendicitis require surgical intervention. Diverticular disease is a condition that is very common in older people in industrialized countries. It usually affects the large intestine but has been known to affect the small intestine as well. Diverticulosis occurs when pouches form on the intestinal wall. Once the pouches become inflamed, it is known as diverticulitis. Inflammatory bowel disease is an inflammatory condition affecting the bowel walls, and includes the subtypes Crohn's disease and ulcerative colitis. While Crohn's can affect the entire gastrointestinal tract, ulcerative colitis is limited to the large intestine. Crohn's disease is widely regarded as an autoimmune disease. Although ulcerative colitis is often treated as though it were an autoimmune disease, there is no consensus that it actually is such. Functional gastrointestinal disorders are another group; the most common of these is irritable bowel syndrome. Functional constipation and chronic functional abdominal pain are other functional disorders of the intestine that have physiological causes but do not have identifiable structural, chemical, or infectious pathologies.
Symptoms Several symptoms can indicate problems with the gastrointestinal tract, including: Vomiting, which may include regurgitation of food or the vomiting of blood Diarrhea, or the passage of liquid or more frequent stools Constipation, which refers to the passage of fewer and hardened stools Blood in stool, which includes fresh red blood, maroon-coloured blood, and tarry-coloured blood Treatment Gastrointestinal surgery can often be performed in the outpatient setting. In the United States in 2012, operations on the digestive system accounted for 3 of the 25 most common ambulatory surgery procedures and constituted 9.1 percent of all outpatient ambulatory surgeries. Imaging Various methods of imaging the gastrointestinal tract include the upper and lower gastrointestinal series: Radioopaque dyes may be swallowed to produce a barium swallow Parts of the tract may be visualised by camera. This is known as endoscopy if examining the upper gastrointestinal tract and colonoscopy or sigmoidoscopy if examining the lower gastrointestinal tract. Capsule endoscopy is where a capsule containing a camera is swallowed in order to examine the tract. Biopsies may also be taken during the examination. An abdominal X-ray may be used to examine the lower gastrointestinal tract. Other related diseases Cholera Enteric duplication cyst Giardiasis Pancreatitis Peptic ulcer disease Yellow fever Helicobacter pylori is a gram-negative spiral bacterium. Over half the world's population is infected with it, mainly during childhood; it is not certain how the disease is transmitted. It colonizes the gastrointestinal system, predominantly the stomach. The bacterium has survival requirements specific to the human gastric microenvironment: it is both capnophilic and microaerophilic. Helicobacter also exhibits a tropism for the gastric epithelial lining and the gastric mucosal layer over it. Gastric colonization by this bacterium triggers a robust immune response leading to moderate to severe inflammation, known as gastritis. Signs and symptoms of infection are gastritis, burning abdominal pain, weight loss, loss of appetite, bloating, burping, nausea, bloody vomit, and black tarry stools. Infection can be detected in a number of ways: GI X-rays, endoscopy, blood tests for anti-Helicobacter antibodies, a stool test, and a urease breath test (which detects a by-product of the bacterium's urease activity). If caught soon enough, it can be treated with a combination of proton pump inhibitors and two antibiotics, taking about a week to cure. If not caught soon enough, surgery may be required. Intestinal pseudo-obstruction is a syndrome caused by a malformation of the digestive system, characterized by a severe impairment in the ability of the intestines to push food through and absorb nutrients. Symptoms include daily abdominal and stomach pain, nausea, severe distension, vomiting, heartburn, dysphagia, diarrhea, constipation, dehydration and malnutrition. There is no cure for intestinal pseudo-obstruction. Different types of surgery and treatment managing life-threatening complications such as ileus and volvulus, intestinal stasis which leads to bacterial overgrowth, and resection of affected or dead parts of the gut may be needed. Many patients require parenteral nutrition. Ileus is a blockage of the intestines. Coeliac disease is a common form of malabsorption, affecting up to 1% of people of northern European descent. An autoimmune response is triggered in intestinal cells by digestion of gluten proteins.
Ingestion of proteins found in wheat, barley and rye causes villous atrophy in the small intestine. Lifelong dietary avoidance of these foodstuffs in a gluten-free diet is the only treatment. Enteroviruses are named for their transmission route through the intestine (enteric meaning intestinal), but their symptoms are not mainly associated with the intestine. Endometriosis can affect the intestines, with similar symptoms to IBS. Bowel twist (or similarly, bowel strangulation) is a comparatively rare event (usually developing sometime after major bowel surgery). It is, however, hard to diagnose correctly, and if left uncorrected can lead to bowel infarction and death. (The singer Maurice Gibb is understood to have died from this.) Angiodysplasia of the colon Constipation Diarrhea Hirschsprung's disease (aganglionosis) Intussusception Polyp (medicine) (see also colorectal polyp) Pseudomembranous colitis Toxic megacolon, usually a complication of ulcerative colitis Uses of animal guts Intestines from animals other than humans are used in a number of ways. From each species of livestock that is a source of milk, a corresponding rennet is obtained from the intestines of milk-fed young. Pig and calf intestines are eaten, and pig intestines are used as sausage casings. Calf intestines supply calf-intestinal alkaline phosphatase (CIP), and are used to make goldbeater's skin. Other uses are: The use of animal gut strings by musicians can be traced back to the third dynasty of Egypt. In the recent past, strings were made out of lamb gut. With the advent of the modern era, musicians have tended to use strings made of silk or synthetic materials such as nylon or steel. Some instrumentalists, however, still use gut strings in order to evoke the older tone quality. Although such strings were commonly referred to as "catgut" strings, cats were never used as a source for gut strings. Sheep gut was the original source for natural gut string used in racquets, such as for tennis. Today, synthetic strings are much more common, but the best gut strings are now made out of cow gut. Gut cord has also been used to produce strings for the snares that provide a snare drum's characteristic buzzing timbre. While the modern snare drum almost always uses metal wire rather than gut cord, the North African bendir frame drum still uses gut for this purpose. "Natural" sausage hulls, or casings, are made of animal gut, especially hog, beef, and lamb. The wrapping of kokoretsi, gardoubakia, and torcinello is made of lamb (or goat) gut. Haggis is traditionally boiled in, and served in, a sheep stomach. Chitterlings, a kind of food, consist of thoroughly washed pig's gut. Animal gut was used to make the cord lines in longcase clocks and for fusee movements in bracket clocks, but may be replaced by metal wire. The oldest known condoms, from 1640 AD, were made from animal intestine. See also Gastrointestinal physiology Gut-on-a-chip References External links The gastro intestinal tract in the Human Protein Atlas Your Digestive System and How It Works at National Institutes of Health Abdomen Digestive system Endocrine system Routes of administration
Gastrointestinal tract
[ "Chemistry", "Biology" ]
6,068
[ "Digestive system", "Pharmacology", "Endocrine system", "Routes of administration", "Organ systems" ]
69,817
https://en.wikipedia.org/wiki/Forensic%20engineering
Forensic engineering has been defined as "the investigation of failures—ranging from serviceability to catastrophic—which may lead to legal activity, including both civil and criminal". The forensic engineering field is very broad in terms of the many disciplines that it covers; investigations that use forensic engineering include cases of environmental damage to structures, system failures of machines, explosions, electrical failures, fire points of origin, vehicle failures, and many more. It includes the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury, damage to property or economic loss. The consequences of failure may give rise to action under either criminal or civil law including but not limited to health and safety legislation, the laws of contract and/or product liability and the laws of tort. The field also deals with retracing processes and procedures leading to accidents in operation of vehicles or machinery. Generally, the purpose of a forensic engineering investigation is to locate the cause or causes of failure with a view to improving performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve investigation of intellectual property claims, especially patents. In the US, forensic engineers require a professional engineering license from each state. History As the field of engineering has evolved over time, so has the field of forensic engineering. Early examples include investigation of bridge failures such as the Tay rail bridge disaster of 1879 and the Dee bridge disaster of 1847. Many early rail accidents prompted the invention of tensile testing of samples and fractography of failed components. Investigation Vital to the field of forensic engineering is the process of investigating and collecting data related to the materials, products, structures or components that failed. This involves inspections, collecting evidence, measurements, developing models, obtaining exemplar products, and performing experiments. Often, testing and measurements are conducted in an independent testing laboratory or other reputable unbiased laboratory. When investigating a case, a forensic engineer follows a series of standard steps in the investigation process. The first step, when the forensic engineer arrives at the scene, is to establish safety: making sure that all hazards have been dealt with and that the evidence is safe to handle and analyze. The next step is an initial incident appraisal; this is done before any analysis and consists of a quick observation of the situation at hand. The third step in the investigative process is to plan how the investigation will proceed and what resources will need to be obtained to perform the analysis accurately. Next comes establishing the terms of reference, when the forensic engineer consults with the client about what they want done in the investigation. The next step is to assemble the investigative team: once there is a plan for the investigation, a team of experts in the fields needed to conduct the analysis is put together. Lastly, the investigation itself begins, and this is where the analysis is conducted. Analysis There are two main types of analysis done in forensic engineering: root cause analysis and failure analysis.
Root cause analysis is defined as looking at the system as a whole and what led to the system failing, and is applied to large-scale objects, for example a building collapse. Failure analysis is defined as the analysis of one part in the system that failed to operate; an example of this would be a failure in a car causing an accident. These two types of analysis are the initial assessments done when forensic engineering investigators start their investigation. Failure mode and effects analysis (FMEA) and fault tree analysis methods also examine product or process failure in a structured and systematic way, in the general context of safety engineering (a minimal fault-tree sketch is given below). However, all such techniques rely on accurate reporting of failure rates and precise identification of the failure modes involved. There is some common ground between forensic science and forensic engineering, such as scene of crime and scene of accident analysis, integrity of the evidence and court appearances. Both disciplines make extensive use of optical and scanning electron microscopes, for example. They also share common use of spectroscopy (infrared, ultraviolet, and nuclear magnetic resonance) to examine critical evidence. Radiography using X-rays (such as X-ray computed tomography) or neutrons is also very useful in examining thick products for their internal defects before destructive examination is attempted. Often, however, a simple hand lens may reveal the cause of a particular problem. Trace evidence is sometimes an important factor in reconstructing the sequence of events in an accident. For example, tire burn marks on a road surface can enable vehicle speeds to be estimated, when the brakes were applied, and so on. Ladder feet often leave a trace of movement of the ladder during a slip and may show how the accident occurred. When a product fails for no obvious reason, SEM and Energy-dispersive X-ray spectroscopy (EDX) performed in the microscope can reveal the presence of aggressive chemicals that have left traces on the fracture or adjacent surfaces. Thus an acetal resin water pipe joint suddenly failed and caused substantial damage to a building in which it was situated. Analysis of the joint showed traces of chlorine, indicating a stress corrosion cracking failure mode. The failed fuel pipe junction discussed in the Examples section below showed traces of sulfur on the fracture surface from the sulfuric acid, which had initiated the crack. Extracting physical evidence from digital photography is a major technique used in forensic accident reconstruction. Camera matching, photogrammetry, and photo rectification techniques are used to create three-dimensional and top-down views from the two-dimensional photos typically taken at an accident scene. Overlooked or undocumented evidence for accident reconstruction can be retrieved and quantified as long as photographs of such evidence are available. By using photographs of the accident scene including the vehicle, "lost" evidence can be recovered and accurately determined. Forensic materials engineering involves methods applied to specific materials, such as metals, glasses, ceramics, composites and polymers. Organizations The National Academy of Forensic Engineers (NAFE) was founded in 1982 by Marvin M. Specter, P.E., L.S., Paul E. Pritzker, P.E., and William A. Cox Jr., P.E. 
to identify and bring together professional engineers having qualifications and expertise as practicing forensic engineers to further their continuing education and promote high standards of professional ethics and excellence of practice. It seeks to improve the practice, elevate the standards, and advance the cause of forensic engineering. Full membership in the academy is limited to Registered Professional Engineers who are also members of the National Society of Professional Engineers (NSPE). They must also be members in an acceptable grade of a recognized major technical engineering society. NAFE also offers Affiliate grades of membership to those who do not yet qualify for Member grade. Full members are board-certified through the Council of Engineering and Scientific Specialty Boards and earn the title "Diplomate of Forensic Engineering", or "DFE". This is typically used after their designation as Professional Engineer. Examples The broken fuel pipe referred to above caused a serious accident when diesel fuel poured out from a van onto the road. A following car skidded and the driver was seriously injured when she collided with an oncoming lorry. Scanning electron microscopy or SEM showed that the nylon connector had fractured by stress corrosion cracking (SCC) due to a small leak of battery acid. Nylon is susceptible to hydrolysis when in contact with sulfuric acid, and only a small leak of acid would have sufficed to start a brittle crack in the injection moulded nylon 6,6 connector by SCC. The crack took about 7 days to grow across the diameter of the tube. The fracture surface showed a mainly brittle surface with striations indicating progressive growth of the crack across the diameter of the pipe. Once the crack had penetrated the inner bore, fuel started leaking onto the road. The nylon 6,6 had been attacked by the following reaction, which was catalyzed by the acid: Diesel fuel is especially hazardous on road surfaces because it forms a thin, oily film that cannot be easily seen by drivers. It is much like black ice in its slipperiness, so skids are common when diesel leaks occur. The insurers of the van driver admitted liability and the injured driver was compensated. Applications Most manufacturing models will have a forensic component that monitors early failures to improve quality or efficiencies. Insurance companies use forensic engineers to prove liability or nonliability. Most engineering disasters (structural failures such as bridge and building collapses) are subject to forensic investigation by engineers experienced in forensic methods of investigation. Rail crashes, aviation accidents, and some automobile accidents are investigated by forensic engineers in particular where component failure is suspected. Furthermore, appliances, consumer products, medical devices, structures, industrial machinery, and even simple hand tools such as hammers or chisels can warrant investigations upon incidents causing injury or property damage. The failure of medical devices is often safety-critical to the user, so reporting failures and analysing them is particularly important. The environment of the body is complex, and implants must both survive this environment and not leach potentially toxic impurities. Problems have been reported with breast implants, heart valves, and catheters, for example. Failures that occur early in the life of a new product are vital information for the manufacturer to improve the product. 
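The fault tree analysis mentioned in the Analysis section above can be illustrated with a minimal sketch. Everything in it is hypothetical: the gate structure and the basic-event probabilities are invented for illustration, and a real fault tree would be built from reported failure-rate data for the actual components under investigation.

    # Minimal fault-tree evaluation in Python: combine independent basic-event
    # probabilities through AND/OR gates to estimate a top-event probability.
    # The tree below (a pump that stops) is purely illustrative.

    def and_gate(*probs):
        # All inputs must fail: multiply probabilities (independence assumed).
        p = 1.0
        for x in probs:
            p *= x
        return p

    def or_gate(*probs):
        # Any one input failing is enough: 1 minus the product of survival probabilities.
        p = 1.0
        for x in probs:
            p *= (1.0 - x)
        return 1.0 - p

    # Hypothetical basic events (annual failure probabilities).
    seal_wear = 0.02
    bearing_fault = 0.01
    power_loss = 0.005
    backup_fails = 0.10

    # Top event: pump stops. The mechanical branch is an OR of two wear-out modes;
    # the electrical branch requires both a mains power loss AND a backup failure.
    mechanical = or_gate(seal_wear, bearing_fault)
    electrical = and_gate(power_loss, backup_fails)
    top_event = or_gate(mechanical, electrical)

    print(f"P(mechanical branch) = {mechanical:.4f}")
    print(f"P(electrical branch) = {electrical:.4f}")
    print(f"P(pump stops)        = {top_event:.4f}")

The point of the exercise is the sensitivity it exposes: the estimate is only as good as the failure-rate reporting and failure-mode identification discussed above.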
New product development aims to eliminate defects by testing in the factory before launch, but some may occur during its early life. Testing products to simulate their behavior in the external environment is a difficult skill, and may involve accelerated life testing for example. The worst kind of defect to occur after launch is a safety-critical defect, a defect that can endanger life or limb. Their discovery usually leads to a product recall or even complete withdrawal of the product from the market. Product defects often follow the bathtub curve, with high initial failures, a lower rate during regular life, followed by another rise due to wear-out. National standards, such as those of ASTM and the British Standards Institute, and International Standards can help the designer in increasing product integrity. Historic examples There are many examples of forensic methods used to investigate accidents and disasters, one of the earliest in the modern period being the fall of the Dee bridge at Chester, England. It was built using cast iron girders, each of which was made of three very large castings dovetailed together. Each girder was strengthened by wrought iron bars along the length. It was finished in September 1846, and opened for local traffic after approval by the first Railway Inspector, General Charles Pasley. However, on 24 May 1847, a local train to Ruabon fell through the bridge. The accident resulted in five deaths (three passengers, the train guard, and the locomotive fireman) and nine serious injuries. The bridge had been designed by Robert Stephenson, and he was accused of negligence by a local inquest. Although strong in compression, cast iron was known to be brittle in tension or bending. On the day of the accident, the bridge deck was covered with track ballast to prevent the oak beams supporting the track from catching fire, imposing a heavy extra load on the girders supporting the bridge and probably exacerbating the accident. Stephenson took this precaution because of a recent fire on the Great Western Railway at Uxbridge, London, where Isambard Kingdom Brunel's bridge caught fire and collapsed. One of the first major inquiries conducted by the newly formed Railway Inspectorate was conducted by Captain Simmons of the Royal Engineers, and his report suggested that repeated flexing of the girder weakened it substantially. He examined the broken parts of the main girder, and confirmed that the girder had broken in two places, the first break occurring at the center. He tested the remaining girders by driving a locomotive across them, and found that they deflected by several inches under the moving load. He concluded that the design was flawed, and that the wrought iron trusses fixed to the girders did not reinforce the girders at all, which was a conclusion also reached by the jury at the inquest. Stephenson's design had depended on the wrought iron trusses to strengthen the final structures, but they were anchored on the cast iron girders themselves, and so deformed with any load on the bridge. Others (especially Stephenson) argued that the train had derailed and hit the girder, the impact force causing it to fracture. However, eyewitnesses maintained that the girder broke first and the fact that the locomotive remained on the track showed otherwise. Publications Product failures are not widely published in the academic literature or trade literature, partly because companies do not want to advertise their problems. 
However, this denies others the opportunity to improve product design so as to prevent further accidents. The journal Engineering Failure Analysis, published in affiliation with the European Structural Integrity Society, publishes case studies of a wide range of different products failing under different circumstances. A publication dealing with failures of buildings, bridges, and other structures is the Journal of Performance of Constructed Facilities, which is published by the American Society of Civil Engineers, under the umbrella of its Technical Council on Forensic Engineering. The Journal of the National Academy of Forensic Engineers is a peer-reviewed open access journal that provides a multi-disciplinary examination of the forensic engineering field. Submission is open to NAFE members and the journal's peer review process includes in-person presentation for live feedback prior to a single-blind technical peer review. See also Failure mode and effects analysis References Further reading Forensic Materials Engineering: Case Studies by Peter Rhys Lewis, Colin Gagg, Ken Reynolds, CRC Press (2004). Forensic Engineering Investigation by Randall K. Noon, CRC Press (2000). Introduction to Forensic Engineering (The Forensic Library) by Randall K. Noon, CRC Press (1992). National Academy of Forensic Engineers Introduction to Forensic Engineering. OpenLearn. Open University Forensic Engineering by Origin and Cause Guidelines for Investigating Process Safety Incidents, CCPS, AIChE, Wiley (3rd edition) Journals Engineering Failure Analysis Journal of the National Academy of Forensic Engineers Forensic Engineering. Institution of Civil Engineers Engineering disciplines Materials science Engineering failures Engineering
Forensic engineering
[ "Physics", "Materials_science", "Technology", "Engineering" ]
2,879
[ "Systems engineering", "Applied and interdisciplinary physics", "Reliability engineering", "Technological failures", "Materials science", "Engineering failures", "Civil engineering", "nan" ]
187,344
https://en.wikipedia.org/wiki/Oil%20drop%20experiment
The oil drop experiment was performed by Robert A. Millikan and Harvey Fletcher in 1909 to measure the elementary electric charge (the charge of the electron). The experiment took place in the Ryerson Physical Laboratory at the University of Chicago. Millikan received the Nobel Prize in Physics in 1923. The experiment observed tiny electrically charged droplets of oil located between two parallel metal surfaces, forming the plates of a capacitor. The plates were oriented horizontally, with one plate above the other. A mist of atomized oil drops was introduced through a small hole in the top plate, and the drops were ionized by X-rays, making them negatively charged. First, with zero applied electric field, the velocity of a falling droplet was measured. At terminal velocity, the drag force equals the gravitational force. As both forces depend on the radius in different ways, the radius of the droplet, and therefore the mass and gravitational force, could be determined (using the known density of the oil). Next, a voltage inducing an electric field was applied between the plates and adjusted until the drops were suspended in mechanical equilibrium, indicating that the electrical force and the gravitational force were in balance. Using the known electric field, Millikan and Fletcher could determine the charge on the oil droplet. By repeating the experiment for many droplets, they confirmed that the charges were all small integer multiples of a certain base value, which was found to be , about 0.6% different from the currently accepted value of . They proposed that this was the magnitude of the negative charge of a single electron. Background Starting in 1908, while a professor at the University of Chicago, Millikan, with the significant input of Fletcher and the "able assistance of Mr. J. Yinbong Lee", and after improving his setup, published his seminal study in 1913. This remains controversial since papers found after Fletcher's death describe events in which Millikan coerced Fletcher into relinquishing authorship as a condition for receiving his PhD. In return, Millikan used his influence in support of Fletcher's career at Bell Labs. Millikan and Fletcher's experiment involved measuring the force on oil droplets in a glass chamber sandwiched between two electrodes, one above and one below. With the electrical field calculated, they could measure the droplet's charge, the charge on a single electron being (). At the time of Millikan and Fletcher's oil drop experiments, the existence of subatomic particles was not universally accepted. Experimenting with cathode rays in 1897, J. J. Thomson had discovered negatively charged "corpuscles", as he called them, with a mass about 1/1837 that of a hydrogen atom. Similar results had been found by George FitzGerald and Walter Kaufmann. Most of what was then known about electricity and magnetism, however, could be explained on the basis that charge is a continuous variable; in much the same way that many of the properties of light can be explained by treating it as a continuous wave rather than as a stream of photons. The elementary charge e is one of the fundamental physical constants and thus the accuracy of the value is of great importance. In 1923, Millikan won the Nobel Prize in physics, in part because of this experiment. Thomas Edison, who had previously thought of charge as a continuous variable, became convinced after working with Millikan and Fletcher's apparatus. 
This experiment has since been repeated by generations of physics students, although it is rather expensive and difficult to conduct properly. From 1995 to 2007, several computer-automated experiments were conducted at SLAC to search for isolated fractionally charged particles; however, no evidence for fractional charge particles has been found after measuring over 100 million drops. Experimental procedure Apparatus Millikan's and Fletcher's apparatus incorporated a parallel pair of horizontal metal plates. By applying a potential difference across the plates, a uniform electric field was created in the space between them. A ring of insulating material was used to hold the plates apart. Four holes were cut into the ring, three for illumination by a bright light, and another to allow viewing through a microscope. A fine mist of oil droplets was sprayed into a chamber above the plates. The oil was of a type usually used in vacuum apparatus and was chosen because it had an extremely low vapour pressure. Ordinary oils would evaporate under the heat of the light source, causing the mass of the oil drop to change over the course of the experiment. Some oil drops became electrically charged through friction with the nozzle as they were sprayed. Alternatively, charging could be brought about by including an ionizing radiation source (such as an X-ray tube). The droplets entered the space between the plates and, because they were charged, could be made to rise and fall by changing the voltage across the plates. Method Initially, the oil drops are allowed to fall between the plates with the electric field turned off. They very quickly reach a terminal velocity because of friction with the air in the chamber. The field is then turned on and, if it is large enough, some of the drops (the charged ones) will start to rise. (This is because the upwards electric force FE is greater for them than the downwards gravitational force Fg, in the same way bits of paper can be picked up by a charged rubber rod). A likely looking drop is selected and kept in the middle of the field of view by alternately switching off the voltage until all the other drops have fallen. The experiment is then continued with this one drop. The drop is allowed to fall and its terminal velocity v1 in the absence of an electric field is calculated. The drag force acting on the drop can then be worked out using Stokes' law: where v1 is the terminal velocity (i.e. velocity in the absence of an electric field) of the falling drop, η is the viscosity of the air, and r is the radius of the drop. The weight w is the volume D multiplied by the density ρ and the acceleration due to gravity g. However, what is needed is the apparent weight. The apparent weight in air is the true weight minus the upthrust (which equals the weight of air displaced by the oil drop). For a perfectly spherical droplet the apparent weight can be written as: At terminal velocity the oil drop is not accelerating. Therefore, the total force acting on it must be zero and the two forces F and must cancel one another out (that is, ). This implies Once r is calculated, can easily be worked out. Now the field is turned back on, and the electric force on the drop is where q is the charge on the oil drop and E is the electric field between the plates. For parallel plates where V is the potential difference and d is the distance between the plates. One conceivable way to work out q would be to adjust V until the oil drop remained steady. Then we could equate FE with . 
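As a rough numerical illustration of the balancing method just described, the following Python sketch computes the droplet radius from its free-fall terminal velocity using Stokes' law and then the charge from the balancing voltage. All numbers (air viscosity, oil and air densities, plate spacing, measured velocity and voltage) are illustrative stand-ins rather than Millikan's data.

    import math

    # Illustrative constants (not Millikan's actual values).
    eta = 1.81e-5      # viscosity of air, Pa*s
    rho_oil = 900.0    # oil density, kg/m^3
    rho_air = 1.2      # air density, kg/m^3 (buoyancy correction)
    g = 9.81           # gravitational acceleration, m/s^2

    def drop_radius(v1):
        # At terminal velocity the Stokes drag 6*pi*eta*r*v1 equals the apparent
        # weight (4/3)*pi*r^3*(rho_oil - rho_air)*g, giving
        # r = sqrt(9*eta*v1 / (2*(rho_oil - rho_air)*g)).
        return math.sqrt(9.0 * eta * v1 / (2.0 * (rho_oil - rho_air) * g))

    def charge_from_balance(v1, V_balance, d):
        # With the drop held stationary, q*E = apparent weight and E = V/d,
        # so q = w_apparent * d / V.
        r = drop_radius(v1)
        w_apparent = (4.0 / 3.0) * math.pi * r**3 * (rho_oil - rho_air) * g
        return w_apparent * d / V_balance

    # Example: a drop falling at 0.1 mm/s, balanced by 410 V across a 10 mm gap.
    q = charge_from_balance(v1=1.0e-4, V_balance=410.0, d=0.010)
    print(f"estimated charge: {q:.3e} C (~{q / 1.602e-19:.1f} elementary charges)")

As discussed next, the balancing condition is hard to achieve precisely in practice, so the rising-drop variant with a second terminal velocity v2 is normally used; the force balance, and therefore the structure of the calculation, is the same.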
However, determining FE in this way proves difficult because the mass of the oil drop is difficult to determine without resorting to the use of Stokes' law. A more practical approach is to turn V up slightly so that the oil drop rises with a new terminal velocity v2. Then Comparison to modern values Effective from the 2019 revision of the SI, the value of the elementary charge is defined to be exactly . Before that, the most recent (2014) accepted value was , where the (98) indicates the uncertainty of the last two decimal places. In his Nobel lecture, Millikan gave his measurement as , which equals . The difference is less than one percent, but is six times greater than Millikan's standard error, so the disagreement is significant. Using X-ray experiments, Erik Bäcklin in 1928 found a higher value of the elementary charge, or , which is within uncertainty of the exact value. Raymond Thayer Birge, conducting a review of physical constants in 1929, stated "The investigation by Bäcklin constitutes a pioneer piece of work, and it is quite likely, as such, to contain various unsuspected sources of systematic error. If [... it is ...] weighted according to the apparent probable error [...], the weighted average will still be suspiciously high. [...] the writer has finally decided to reject the Bäcklin value, and to use the weighted mean of the remaining two values." Birge averaged Millikan's result and a different, less accurate X-ray experiment that agreed with Millikan's result. Successive X-ray experiments continued to give high results, and proposals for the discrepancy were ruled out experimentally. Sten von Friesen measured the value with a new electron diffraction method, and the oil drop experiment was redone. Both gave high numbers. By 1937 it was "quite obvious" that Millikan's value could not be maintained any longer, and the established value became or . Controversy Some controversy was raised by physicist Gerald Holton (1978) who pointed out that Millikan recorded more measurements in his journal than he included in his final results. Holton suggested these data points were omitted from the large set of oil drops measured in his experiments without apparent reason. This claim was disputed by Allan Franklin, a high energy physics experimentalist and philosopher of science at the University of Colorado. Franklin contended that Millikan's exclusions of data did not substantively affect his final value of e, but did reduce the statistical error around this estimate of e. This enabled Millikan to claim that he had calculated e to better than one half of one percent; in fact, if Millikan had included all of the data he had thrown out, the standard error of the mean would have been within 2%. While this would still have resulted in Millikan having measured e better than anyone else at the time, the slightly larger uncertainty might have allowed more disagreement with his results within the physics community. While Franklin maintained his support for Millikan's measurement while conceding that Millikan may have performed "cosmetic surgery" on the data, David Goodstein investigated the original detailed notebooks kept by Millikan, concluding that Millikan plainly states here and in the reports that he included only drops that had undergone a "complete series of observations" and excluded no drops from this group of complete measurements. 
Reasons for a failure to generate a complete observation include annotations regarding the apparatus setup, oil drop production, and atmospheric effects which invalidated, in Millikan's opinion (borne out by the reduced error in this set), a given particular measurement. Millikan's experiment as an example of psychological effects in scientific methodology In a commencement address given at the California Institute of Technology (Caltech) in 1974 (and reprinted in Surely You're Joking, Mr. Feynman! in 1985 as well as in The Pleasure of Finding Things Out in 1999), physicist Richard Feynman noted: References Further reading External links Simulation of the oil drop experiment (requires JavaScript) Thomsen, Marshall, "Good to the Last Drop". Millikan Stories as "Canned" Pedagogy. Eastern Michigan University. CSR/TSGC Team, "Quark search experiment". The University of Texas at Austin. The oil drop experiment appears in a list of Science's 10 Most Beautiful Experiments, originally published in the New York Times. Engeness, T.E., "The Millikan Oil Drop Experiment". 25 April 2005. Paper by Millikan discussing modifications to his original experiment to improve its accuracy. A variation of this experiment has been suggested for the International Space Station. Physics experiments Electrostatics Foundational quantum physics 1909 in science California Institute of Technology
Oil drop experiment
[ "Physics" ]
2,404
[ "Quantum mechanics", "Foundational quantum physics", "Experimental physics", "Physics experiments" ]
187,360
https://en.wikipedia.org/wiki/Magnetic%20susceptibility
In electromagnetism, the magnetic susceptibility (; denoted , chi) is a measure of how much a material will become magnetized in an applied magnetic field. It is the ratio of magnetization (magnetic moment per unit volume) to the applied magnetic field intensity . This allows a simple classification, into two categories, of most materials' responses to an applied magnetic field: an alignment with the magnetic field, , called paramagnetism, or an alignment against the field, , called diamagnetism. Magnetic susceptibility indicates whether a material is attracted into or repelled out of a magnetic field. Paramagnetic materials align with the applied field and are attracted to regions of greater magnetic field. Diamagnetic materials are anti-aligned and are pushed away, toward regions of lower magnetic fields. On top of the applied field, the magnetization of the material adds its own magnetic field, causing the field lines to concentrate in paramagnetism, or be excluded in diamagnetism. Quantitative measures of the magnetic susceptibility also provide insights into the structure of materials, providing insight into bonding and energy levels. Furthermore, it is widely used in geology for paleomagnetic studies and structural geology. The magnetizability of materials comes from the atomic-level magnetic properties of the particles of which they are made. Usually, this is dominated by the magnetic moments of electrons. Electrons are present in all materials, but without any external magnetic field, the magnetic moments of the electrons are usually either paired up or random so that the overall magnetism is zero (the exception to this usual case is ferromagnetism). The fundamental reasons why the magnetic moments of the electrons line up or do not are very complex and cannot be explained by classical physics. However, a useful simplification is to measure the magnetic susceptibility of a material and apply the macroscopic form of Maxwell's equations. This allows classical physics to make useful predictions while avoiding the underlying quantum mechanical details. Definition Volume susceptibility Magnetic susceptibility is a dimensionless proportionality constant that indicates the degree of magnetization of a material in response to an applied magnetic field. A related term is magnetizability, the proportion between magnetic moment and magnetic flux density. A closely related parameter is the permeability, which expresses the total magnetization of material and volume. The volume magnetic susceptibility, represented by the symbol (often simply , sometimes  – magnetic, to distinguish from the electric susceptibility), is defined in the International System of Units – in other systems there may be additional constants – by the following relationship: Here, is the magnetization of the material (the magnetic dipole moment per unit volume), with unit amperes per meter, and is the magnetic field strength, also with the unit amperes per meter. is therefore a dimensionless quantity. Using SI units, the magnetic induction is related to by the relationship where is the vacuum permeability (see table of physical constants), and is the relative permeability of the material. 
Thus the volume magnetic susceptibility and the magnetic permeability are related by the following formula: Sometimes an auxiliary quantity called intensity of magnetization (also referred to as magnetic polarisation ) and with unit teslas, is defined as This allows an alternative description of all magnetization phenomena in terms of the quantities and , as opposed to the commonly used and . Molar susceptibility and mass susceptibility There are two other measures of susceptibility, the molar magnetic susceptibility () with unit m3/mol, and the mass magnetic susceptibility () with unit m3/kg that are defined below, where is the density with unit kg/m3 and is molar mass with unit kg/mol: In CGS units The definitions above are according to the International System of Quantities (ISQ) upon which the SI is based. However, many tables of magnetic susceptibility give the values of the corresponding quantities of the CGS system (more specifically CGS-EMU, short for electromagnetic units, or Gaussian-CGS; both are the same in this context). The quantities characterizing the permeability of free space for each system have different defining equations: The respective CGS susceptibilities are multiplied by 4π to give the corresponding ISQ quantities (often referred to as SI quantities) with the same units: For example, the CGS volume magnetic susceptibility of water at 20 °C is , which is using the SI convention, both quantities being dimensionless. Whereas for most electromagnetic quantities, which system of quantities it belongs to can be disambiguated by incompatibility of their units, this is not true for the susceptibility quantities. In physics it is common to see CGS mass susceptibility with unit cm3/g or emu/g⋅Oe−1, and the CGS molar susceptibility with unit cm3/mol or emu/mol⋅Oe−1. Paramagnetism and diamagnetism If is positive, a material can be paramagnetic. In this case, the magnetic field in the material is strengthened by the induced magnetization. Alternatively, if is negative, the material is diamagnetic. In this case, the magnetic field in the material is weakened by the induced magnetization. Generally, nonmagnetic materials are said to be para- or diamagnetic because they do not possess permanent magnetization without an external magnetic field. Ferromagnetic, ferrimagnetic, or antiferromagnetic materials possess permanent magnetization even without an external magnetic field and do not have a well-defined zero-field susceptibility. Experimental measurement Volume magnetic susceptibility is measured by the force change felt upon a substance when a magnetic field gradient is applied. Early measurements were made using the Gouy balance, where a sample is hung between the poles of an electromagnet. The change in weight when the electromagnet is turned on is proportional to the susceptibility. Today, high-end measurement systems use a superconductive magnet. An alternative is to measure the force change on a strong compact magnet upon insertion of the sample. This system, widely used today, is called the Evans balance. For liquid samples, the susceptibility can be measured from the dependence of the NMR frequency of the sample on its shape or orientation. Another method using NMR techniques measures the magnetic field distortion around a sample immersed in water inside an MR scanner. This method is highly accurate for diamagnetic materials with susceptibilities similar to water. Tensor susceptibility The magnetic susceptibility of most crystals is not a scalar quantity. 
Magnetic response is dependent upon the orientation of the sample and can occur in directions other than that of the applied field . In these cases, volume susceptibility is defined as a tensor: where and refer to the directions (e.g., of the and Cartesian coordinates) of the applied field and magnetization, respectively. The tensor is thus of degree 2 (second order) and dimension (3,3), describing the component of magnetization in the th direction from the external field applied in the th direction. Differential susceptibility In ferromagnetic crystals, the relationship between and is not linear. To accommodate this, a more general definition of differential susceptibility is used: where is a tensor derived from partial derivatives of components of with respect to components of . When the coercivity of the material parallel to an applied field is the smaller of the two, the differential susceptibility is a function of the applied field and self-interactions, such as the magnetic anisotropy. When the material is not saturated, the effect will be nonlinear and dependent upon the domain wall configuration of the material. Several experimental techniques allow for the measurement of the electronic properties of a material. An important effect in metals under strong magnetic fields is the oscillation of the differential susceptibility as a function of . This behaviour is known as the De Haas–Van Alphen effect and relates the period of the susceptibility with the Fermi surface of the material. An analogous non-linear relation between magnetization and magnetic field occurs in antiferromagnetic materials. In the frequency domain When the magnetic susceptibility is measured in response to an AC magnetic field (i.e. a magnetic field that varies sinusoidally), this is called AC susceptibility. AC susceptibility (and the closely related "AC permeability") are complex number quantities, and various phenomena, such as resonance, can be seen in AC susceptibility that cannot occur in constant-field (DC) susceptibility. In particular, when an AC field is applied perpendicular to the detection direction (called the "transverse susceptibility" regardless of the frequency), the effect has a peak at the ferromagnetic resonance frequency of the material with a given static applied field. Currently, this effect is called the microwave permeability or network ferromagnetic resonance in the literature. These results are sensitive to the domain wall configuration of the material and eddy currents. In terms of ferromagnetic resonance, the effect of an AC-field applied along the direction of the magnetization is called parallel pumping. Table of examples Sources of published data The CRC Handbook of Chemistry and Physics has one of the few published magnetic susceptibility tables. The data are listed as CGS quantities. The molar susceptibilities of several elements and compounds are listed in the CRC. Application in the geosciences In Earth science, magnetism is a useful parameter to describe and analyze rocks. Additionally, the anisotropy of magnetic susceptibility (AMS) within a sample determines parameters such as directions of paleocurrents, maturity of paleosol, flow direction of magma injection, tectonic strain, etc. It is a non-destructive tool which quantifies the average alignment and orientation of magnetic particles within a sample. 
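As a small worked illustration of the SI definitions given earlier in this article, the Python sketch below computes a volume susceptibility from a magnetization and a field strength, the corresponding relative permeability, and the 4π conversion from a CGS value; the sample numbers are invented for illustration and do not refer to any tabulated material.

    import math

    def volume_susceptibility(M, H):
        # chi_v = M / H, with M and H both in A/m, so the result is dimensionless.
        return M / H

    def relative_permeability(chi_v):
        # mu_r = 1 + chi_v in the SI convention.
        return 1.0 + chi_v

    def cgs_to_si(chi_cgs):
        # The SI (ISQ) volume susceptibility is 4*pi times the CGS-EMU value.
        return 4.0 * math.pi * chi_cgs

    def classify(chi_v):
        if chi_v > 0:
            return "paramagnetic"
        if chi_v < 0:
            return "diamagnetic"
        return "non-magnetic"

    # Illustrative numbers: a weakly magnetized sample in a 1000 A/m field.
    chi = volume_susceptibility(M=-9.0e-3, H=1000.0)
    print(f"chi_v = {chi:.2e} ({classify(chi)}), mu_r = {relative_permeability(chi):.8f}")
    print(f"a CGS susceptibility of -7.2e-7 corresponds to {cgs_to_si(-7.2e-7):.2e} in SI")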
See also Curie's law Electric susceptibility Iron Magnetic flux density Magnetochemistry Magnetometer Maxwell's equations Paleomagnetism Permeability (electromagnetism) Quantitative susceptibility mapping Susceptibility weighted imaging References External links Linear Response Functions in Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.): DMFT at 25: Infinite Dimensions, Verlag des Forschungszentrum Jülich, 2014 Physical quantities Magnetism Electric and magnetic fields in matter Scientific techniques
Magnetic susceptibility
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,215
[ "Physical phenomena", "Physical quantities", "Quantity", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", "Physical properties" ]
187,408
https://en.wikipedia.org/wiki/Self-adjoint%20operator
In mathematics, a self-adjoint operator on a complex vector space V with inner product is a linear map A (from V to itself) that is its own adjoint. That is, for all ∊ V. If V is finite-dimensional with a given orthonormal basis, this is equivalent to the condition that the matrix of A is a Hermitian matrix, i.e., equal to its conjugate transpose A. By the finite-dimensional spectral theorem, V has an orthonormal basis such that the matrix of A relative to this basis is a diagonal matrix with entries in the real numbers. This article deals with applying generalizations of this concept to operators on Hilbert spaces of arbitrary dimension. Self-adjoint operators are used in functional analysis and quantum mechanics. In quantum mechanics their importance lies in the Dirac–von Neumann formulation of quantum mechanics, in which physical observables such as position, momentum, angular momentum and spin are represented by self-adjoint operators on a Hilbert space. Of particular significance is the Hamiltonian operator defined by which as an observable corresponds to the total energy of a particle of mass m in a real potential field V. Differential operators are an important class of unbounded operators. The structure of self-adjoint operators on infinite-dimensional Hilbert spaces essentially resembles the finite-dimensional case. That is to say, operators are self-adjoint if and only if they are unitarily equivalent to real-valued multiplication operators. With suitable modifications, this result can be extended to possibly unbounded operators on infinite-dimensional spaces. Since an everywhere-defined self-adjoint operator is necessarily bounded, one needs to be more attentive to the domain issue in the unbounded case. This is explained below in more detail. Definitions Let be a Hilbert space and an unbounded (i.e. not necessarily bounded) linear operator with a dense domain This condition holds automatically when is finite-dimensional since for every linear operator on a finite-dimensional space. The graph of an (arbitrary) operator is the set An operator is said to extend if This is written as Let the inner product be conjugate linear on the second argument. The adjoint operator acts on the subspace consisting of the elements such that The densely defined operator is called symmetric (or Hermitian) if , i.e., if and for all . Equivalently, is symmetric if and only if Since is dense in , symmetric operators are always closable (i.e. the closure of is the graph of an operator). If is a closed extension of , the smallest closed extension of must be contained in . Hence, for symmetric operators and for closed symmetric operators. The densely defined operator is called self-adjoint if , that is, if and only if is symmetric and . Equivalently, a closed symmetric operator is self-adjoint if and only if is symmetric. If is self-adjoint, then is real for all , i.e., A symmetric operator is said to be essentially self-adjoint if the closure of is self-adjoint. Equivalently, is essentially self-adjoint if it has a unique self-adjoint extension. In practical terms, having an essentially self-adjoint operator is almost as good as having a self-adjoint operator, since we merely need to take the closure to obtain a self-adjoint operator. In physics, the term Hermitian refers to symmetric as well as self-adjoint operators alike. The subtle difference between the two is generally overlooked. Bounded self-adjoint operators Let be a Hilbert space and a symmetric operator. 
According to Hellinger–Toeplitz theorem, if then is necessarily bounded. A bounded operator is self-adjoint if Every bounded operator can be written in the complex form where and are bounded self-adjoint operators. Alternatively, every positive bounded linear operator is self-adjoint if the Hilbert space is complex. Properties A bounded self-adjoint operator defined on has the following properties: is invertible if the image of is dense in The operator norm is given by If is an eigenvalue of then ; the eigenvalues are real and the corresponding eigenvectors are orthogonal. Bounded self-adjoint operators do not necessarily have an eigenvalue. If, however, is a compact self-adjoint operator then it always has an eigenvalue and corresponding normalized eigenvector. Spectrum of self-adjoint operators Let be an unbounded operator. The resolvent set (or regular set) of is defined as If is bounded, the definition reduces to being bijective on . The spectrum of is defined as the complement In finite dimensions, consists exclusively of (complex) eigenvalues. The spectrum of a self-adjoint operator is always real (i.e. ), though non-self-adjoint operators with real spectrum exist as well. For bounded (normal) operators, however, the spectrum is real if and only if the operator is self-adjoint. This implies, for example, that a non-self-adjoint operator with real spectrum is necessarily unbounded. As a preliminary, define and with . Then, for every and every where Indeed, let By the Cauchy–Schwarz inequality, If then and is called bounded below. Spectral theorem In the physics literature, the spectral theorem is often stated by saying that a self-adjoint operator has an orthonormal basis of eigenvectors. Physicists are well aware, however, of the phenomenon of "continuous spectrum"; thus, when they speak of an "orthonormal basis" they mean either an orthonormal basis in the classic sense or some continuous analog thereof. In the case of the momentum operator , for example, physicists would say that the eigenvectors are the functions , which are clearly not in the Hilbert space . (Physicists would say that the eigenvectors are "non-normalizable.") Physicists would then go on to say that these "generalized eigenvectors" form an "orthonormal basis in the continuous sense" for , after replacing the usual Kronecker delta by a Dirac delta function . Although these statements may seem disconcerting to mathematicians, they can be made rigorous by use of the Fourier transform, which allows a general function to be expressed as a "superposition" (i.e., integral) of the functions , even though these functions are not in . The Fourier transform "diagonalizes" the momentum operator; that is, it converts it into the operator of multiplication by , where is the variable of the Fourier transform. The spectral theorem in general can be expressed similarly as the possibility of "diagonalizing" an operator by showing it is unitarily equivalent to a multiplication operator. Other versions of the spectral theorem are similarly intended to capture the idea that a self-adjoint operator can have "eigenvectors" that are not actually in the Hilbert space in question. Multiplication operator form of the spectral theorem Firstly, let be a σ-finite measure space and a measurable function on . Then the operator , defined by where is called a multiplication operator. Any multiplication operator is a self-adjoint operator. 
Secondly, two operators and with dense domains and in Hilbert spaces and , respectively, are unitarily equivalent if and only if there is a unitary transformation such that: If unitarily equivalent and are bounded, then ; if is self-adjoint, then so is . The spectral theorem holds for both bounded and unbounded self-adjoint operators. Proof of the latter follows by reduction to the spectral theorem for unitary operators. We might note that if is multiplication by , then the spectrum of is just the essential range of . More complete versions of the spectral theorem exist as well that involve direct integrals and carry with it the notion of "generalized eigenvectors". Functional calculus One application of the spectral theorem is to define a functional calculus. That is, if is a function on the real line and is a self-adjoint operator, we wish to define the operator . The spectral theorem shows that if is represented as the operator of multiplication by , then is the operator of multiplication by the composition . One example from quantum mechanics is the case where is the Hamiltonian operator . If has a true orthonormal basis of eigenvectors with eigenvalues , then can be defined as the unique bounded operator with eigenvalues such that: The goal of functional calculus is to extend this idea to the case where has continuous spectrum (i.e. where has no normalizable eigenvectors). It has been customary to introduce the following notation where is the indicator function of the interval . The family of projection operators E(λ) is called resolution of the identity for T. Moreover, the following Stieltjes integral representation for T can be proved: Formulation in the physics literature In quantum mechanics, Dirac notation is used as combined expression for both the spectral theorem and the Borel functional calculus. That is, if H is self-adjoint and f is a Borel function, with where the integral runs over the whole spectrum of H. The notation suggests that H is diagonalized by the eigenvectors ΨE. Such a notation is purely formal. The resolution of the identity (sometimes called projection-valued measures) formally resembles the rank-1 projections . In the Dirac notation, (projective) measurements are described via eigenvalues and eigenstates, both purely formal objects. As one would expect, this does not survive passage to the resolution of the identity. In the latter formulation, measurements are described using the spectral measure of , if the system is prepared in prior to the measurement. Alternatively, if one would like to preserve the notion of eigenstates and make it rigorous, rather than merely formal, one can replace the state space by a suitable rigged Hilbert space. If , the theorem is referred to as resolution of unity: In the case is the sum of an Hermitian H and a skew-Hermitian (see skew-Hermitian matrix) operator , one defines the biorthogonal basis set and write the spectral theorem as: (See Feshbach–Fano partitioning for the context where such operators appear in scattering theory). Formulation for symmetric operators The spectral theorem applies only to self-adjoint operators, and not in general to symmetric operators. Nevertheless, we can at this point give a simple example of a symmetric (specifically, an essentially self-adjoint) operator that has an orthonormal basis of eigenvectors. 
Consider the complex Hilbert space L2[0,1] and the differential operator with consisting of all complex-valued infinitely differentiable functions f on [0, 1] satisfying the boundary conditions Then integration by parts of the inner product shows that A is symmetric. The eigenfunctions of A are the sinusoids with the real eigenvalues n2π2; the well-known orthogonality of the sine functions follows as a consequence of A being symmetric. The operator A can be seen to have a compact inverse, meaning that the corresponding differential equation Af = g is solved by some integral (and therefore compact) operator G. The compact symmetric operator G then has a countable family of eigenvectors which are complete in . The same can then be said for A. Pure point spectrum A self-adjoint operator A on H has pure point spectrum if and only if H has an orthonormal basis {ei}i ∈ I consisting of eigenvectors for A. Example. The Hamiltonian for the harmonic oscillator has a quadratic potential V, that is This Hamiltonian has pure point spectrum; this is typical for bound state Hamiltonians in quantum mechanics. As was pointed out in a previous example, a sufficient condition that an unbounded symmetric operator has eigenvectors which form a Hilbert space basis is that it has a compact inverse. Symmetric vs self-adjoint operators Although the distinction between a symmetric operator and a (essentially) self-adjoint operator is subtle, it is important since self-adjointness is the hypothesis in the spectral theorem. Here we discuss some concrete examples of the distinction. Boundary conditions In the case where the Hilbert space is a space of functions on a bounded domain, these distinctions have to do with a familiar issue in quantum physics: One cannot define an operator—such as the momentum or Hamiltonian operator—on a bounded domain without specifying boundary conditions. In mathematical terms, choosing the boundary conditions amounts to choosing an appropriate domain for the operator. Consider, for example, the Hilbert space (the space of square-integrable functions on the interval [0,1]). Let us define a momentum operator A on this space by the usual formula, setting the Planck constant to 1: We must now specify a domain for A, which amounts to choosing boundary conditions. If we choose then A is not symmetric (because the boundary terms in the integration by parts do not vanish). If we choose then using integration by parts, one can easily verify that A is symmetric. This operator is not essentially self-adjoint, however, basically because we have specified too many boundary conditions on the domain of A, which makes the domain of the adjoint too big (see also the example below). Specifically, with the above choice of domain for A, the domain of the closure of A is whereas the domain of the adjoint of A is That is to say, the domain of the closure has the same boundary conditions as the domain of A itself, just a less stringent smoothness assumption. Meanwhile, since there are "too many" boundary conditions on A, there are "too few" (actually, none at all in this case) for . If we compute for using integration by parts, then since vanishes at both ends of the interval, no boundary conditions on are needed to cancel out the boundary terms in the integration by parts. Thus, any sufficiently smooth function is in the domain of , with . Since the domain of the closure and the domain of the adjoint do not agree, A is not essentially self-adjoint. 
After all, a general result says that the domain of the adjoint of is the same as the domain of the adjoint of A. Thus, in this case, the domain of the adjoint of is bigger than the domain of itself, showing that is not self-adjoint, which by definition means that A is not essentially self-adjoint. The problem with the preceding example is that we imposed too many boundary conditions on the domain of A. A better choice of domain would be to use periodic boundary conditions: With this domain, A is essentially self-adjoint. In this case, we can understand the implications of the domain issues for the spectral theorem. If we use the first choice of domain (with no boundary conditions), all functions for are eigenvectors, with eigenvalues , and so the spectrum is the whole complex plane. If we use the second choice of domain (with Dirichlet boundary conditions), A has no eigenvectors at all. If we use the third choice of domain (with periodic boundary conditions), we can find an orthonormal basis of eigenvectors for A, the functions . Thus, in this case finding a domain such that A is self-adjoint is a compromise: the domain has to be small enough so that A is symmetric, but large enough so that . Schrödinger operators with singular potentials A more subtle example of the distinction between symmetric and (essentially) self-adjoint operators comes from Schrödinger operators in quantum mechanics. If the potential energy is singular—particularly if the potential is unbounded below—the associated Schrödinger operator may fail to be essentially self-adjoint. In one dimension, for example, the operator is not essentially self-adjoint on the space of smooth, rapidly decaying functions. In this case, the failure of essential self-adjointness reflects a pathology in the underlying classical system: A classical particle with a potential escapes to infinity in finite time. This operator does not have a unique self-adjoint, but it does admit self-adjoint extensions obtained by specifying "boundary conditions at infinity". (Since is a real operator, it commutes with complex conjugation. Thus, the deficiency indices are automatically equal, which is the condition for having a self-adjoint extension.) In this case, if we initially define on the space of smooth, rapidly decaying functions, the adjoint will be "the same" operator (i.e., given by the same formula) but on the largest possible domain, namely It is then possible to show that is not a symmetric operator, which certainly implies that is not essentially self-adjoint. Indeed, has eigenvectors with pure imaginary eigenvalues, which is impossible for a symmetric operator. This strange occurrence is possible because of a cancellation between the two terms in : There are functions in the domain of for which neither nor is separately in , but the combination of them occurring in is in . This allows for to be nonsymmetric, even though both and are symmetric operators. This sort of cancellation does not occur if we replace the repelling potential with the confining potential . Non-self-adjoint operators in quantum mechanics In quantum mechanics, observables correspond to self-adjoint operators. By Stone's theorem on one-parameter unitary groups, self-adjoint operators are precisely the infinitesimal generators of unitary groups of time evolution operators. However, many physical problems are formulated as a time-evolution equation involving differential operators for which the Hamiltonian is only symmetric. 
In such cases, either the Hamiltonian is essentially self-adjoint, in which case the physical problem has unique solutions or one attempts to find self-adjoint extensions of the Hamiltonian corresponding to different types of boundary conditions or conditions at infinity. Example. The one-dimensional Schrödinger operator with the potential , defined initially on smooth compactly supported functions, is essentially self-adjoint for but not for . The failure of essential self-adjointness for has a counterpart in the classical dynamics of a particle with potential : The classical particle escapes to infinity in finite time. Example. There is no self-adjoint momentum operator for a particle moving on a half-line. Nevertheless, the Hamiltonian of a "free" particle on a half-line has several self-adjoint extensions corresponding to different types of boundary conditions. Physically, these boundary conditions are related to reflections of the particle at the origin. Examples A symmetric operator that is not essentially self-adjoint We first consider the Hilbert space and the differential operator defined on the space of continuously differentiable complex-valued functions on [0,1], satisfying the boundary conditions Then D is a symmetric operator as can be shown by integration by parts. The spaces N+, N− (defined below) are given respectively by the distributional solutions to the equation which are in L2[0, 1]. One can show that each one of these solution spaces is 1-dimensional, generated by the functions x → e−x and x → ex respectively. This shows that D is not essentially self-adjoint, but does have self-adjoint extensions. These self-adjoint extensions are parametrized by the space of unitary mappings N+ → N−, which in this case happens to be the unit circle T. In this case, the failure of essential self-adjointenss is due to an "incorrect" choice of boundary conditions in the definition of the domain of . Since is a first-order operator, only one boundary condition is needed to ensure that is symmetric. If we replaced the boundary conditions given above by the single boundary condition , then D would still be symmetric and would now, in fact, be essentially self-adjoint. This change of boundary conditions gives one particular essentially self-adjoint extension of D. Other essentially self-adjoint extensions come from imposing boundary conditions of the form . This simple example illustrates a general fact about self-adjoint extensions of symmetric differential operators P on an open set M. They are determined by the unitary maps between the eigenvalue spaces where Pdist is the distributional extension of P. Constant-coefficient operators We next give the example of differential operators with constant coefficients. Let be a polynomial on Rn with real coefficients, where α ranges over a (finite) set of multi-indices. Thus and We also use the notation Then the operator P(D) defined on the space of infinitely differentiable functions of compact support on Rn by is essentially self-adjoint on L2(Rn). More generally, consider linear differential operators acting on infinitely differentiable complex-valued functions of compact support. If M is an open subset of Rn where aα are (not necessarily constant) infinitely differentiable functions. P is a linear operator Corresponding to P there is another differential operator, the formal adjoint of P Spectral multiplicity theory The multiplication representation of a self-adjoint operator, though extremely useful, is not a canonical representation. 
This suggests that it is not easy to extract from this representation a criterion to determine when self-adjoint operators A and B are unitarily equivalent. The finest grained representation which we now discuss involves spectral multiplicity. This circle of results is called the Hahn–Hellinger theory of spectral multiplicity. Uniform multiplicity We first define uniform multiplicity: Definition. A self-adjoint operator A has uniform multiplicity n where n is such that 1 ≤ n ≤ ω if and only if A is unitarily equivalent to the operator Mf of multiplication by the function f(λ) = λ on where Hn is a Hilbert space of dimension n. The domain of Mf consists of vector-valued functions ψ on R such that Non-negative countably additive measures μ, ν are mutually singular if and only if they are supported on disjoint Borel sets. This representation is unique in the following sense: For any two such representations of the same A, the corresponding measures are equivalent in the sense that they have the same sets of measure 0. Direct integrals The spectral multiplicity theorem can be reformulated using the language of direct integrals of Hilbert spaces: Unlike the multiplication-operator version of the spectral theorem, the direct-integral version is unique in the sense that the measure equivalence class of μ (or equivalently its sets of measure 0) is uniquely determined and the measurable function is determined almost everywhere with respect to μ. The function is the spectral multiplicity function of the operator. We may now state the classification result for self-adjoint operators: Two self-adjoint operators are unitarily equivalent if and only if (1) their spectra agree as sets, (2) the measures appearing in their direct-integral representations have the same sets of measure zero, and (3) their spectral multiplicity functions agree almost everywhere with respect to the measure in the direct integral. Example: structure of the Laplacian The Laplacian on Rn is the operator As remarked above, the Laplacian is diagonalized by the Fourier transform. Actually it is more natural to consider the negative of the Laplacian −Δ since as an operator it is non-negative; (see elliptic operator). See also Compact operator on Hilbert space Unbounded operator Hermitian adjoint Normal operator Positive operator Helffer–Sjöstrand formula Remarks Notes References Hilbert spaces Operator theory Linear operators
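As a numerical footnote to the finite-dimensional spectral theorem stated at the beginning of this article, the following sketch uses NumPy (an assumption of this illustration, not anything cited above) to build a random Hermitian matrix, check that its eigenvalues are real and its eigenvectors orthonormal, reconstruct the matrix from its spectral decomposition, and apply the functional calculus to form the unitary operator exp(iA).

    import numpy as np

    rng = np.random.default_rng(0)

    # Build a random Hermitian (self-adjoint) matrix A = B + B^*.
    n = 5
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    A = B + B.conj().T

    # eigh is specialized for Hermitian matrices: it returns real eigenvalues
    # and an orthonormal set of eigenvectors (the columns of U).
    eigvals, U = np.linalg.eigh(A)

    print("eigenvalues are real:      ", np.isrealobj(eigvals))
    print("eigenvectors orthonormal:  ", np.allclose(U.conj().T @ U, np.eye(n)))

    # Spectral decomposition: A = U diag(lambda) U^*.
    print("A reconstructed from U, D: ", np.allclose(U @ np.diag(eigvals) @ U.conj().T, A))

    # Functional calculus f(A) = U diag(f(lambda)) U^*: exp(iA) is unitary,
    # a finite-dimensional shadow of Stone's theorem mentioned above.
    exp_iA = U @ np.diag(np.exp(1j * eigvals)) @ U.conj().T
    print("exp(iA) is unitary:        ", np.allclose(exp_iA @ exp_iA.conj().T, np.eye(n)))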
Self-adjoint operator
[ "Physics", "Mathematics" ]
4,959
[ "Functions and mappings", "Mathematical objects", "Linear operators", "Quantum mechanics", "Mathematical relations", "Hilbert spaces" ]
187,442
https://en.wikipedia.org/wiki/Software%20metric
In software engineering and development, a software metric is a standard of measure of a degree to which a software system or process possesses some property. Even if a metric is not a measurement (metrics are functions, while measurements are the numbers obtained by the application of metrics), the two terms are often used as synonyms. Since quantitative measurements are essential in all sciences, there is a continuous effort by computer science practitioners and theoreticians to bring similar approaches to software development. The goal is obtaining objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance, testing, software debugging, software performance optimization, and optimal personnel task assignments. Common software measurements Common software measurements include: ABC Software Metric Balanced scorecard Bugs per line of code Code coverage Cohesion Comment density Connascent software components Constructive Cost Model Coupling Cyclomatic complexity (McCabe's complexity) Cyclomatic complexity density Defect density - defects found in a component Defect potential - expected number of defects in a particular component Defect removal rate DSQI (design structure quality index) Function Points and Automated Function Points, an Object Management Group standard Halstead Complexity Instruction path length Maintainability index Source lines of code - number of lines of code Program execution time Program load time Program size (binary) Weighted Micro Function Points Cycle time (software) First pass yield Corrective Commit Probability Limitations As software development is a complex process, with high variance on both methodologies and objectives, it is difficult to define or measure software qualities and quantities and to determine a valid and concurrent measurement metric, especially when making such a prediction prior to the detail design. Another source of difficulty and debate is in determining which metrics matter, and what they mean. The practical utility of software measurements has therefore been limited to the following domains: Scheduling Software sizing Programming complexity Software development effort estimation Software quality A specific measurement may target one or more of the above aspects, or the balance between them, for example as an indicator of team motivation or project performance. Additionally, metrics vary between static and dynamic program code, as well as for object-oriented software (systems). Acceptance and public opinion Some software development practitioners point out that simplistic measurements can cause more harm than good. Others have noted that metrics have become an integral part of the software development process. The impact of measurement on programmer psychology has raised concerns about harmful effects on performance due to stress, performance anxiety, and attempts to cheat the metrics, while others find that it has a positive impact on how developers value their own work and that it prevents them from being undervalued. Some argue that the definitions of many measurement methodologies are imprecise, and consequently it is often unclear how tools for computing them arrive at a particular result, while others argue that imperfect quantification is better than none ("You can't control what you can't measure."). 
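To make one of the metrics listed above concrete, McCabe's cyclomatic complexity of a control-flow graph is M = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. The sketch below uses an invented toy control-flow graph (an if/else followed by a loop); it is illustrative only and not taken from any particular analysis tool.

```python
def cyclomatic_complexity(nodes, edges, components=1):
    """McCabe's cyclomatic complexity: M = E - N + 2P."""
    return len(edges) - len(nodes) + 2 * components

# Control-flow graph for: if c: A else: B, followed by: while d: C
nodes = ["entry", "if", "A", "B", "while", "C", "exit"]
edges = [("entry", "if"), ("if", "A"), ("if", "B"), ("A", "while"),
         ("B", "while"), ("while", "C"), ("C", "while"), ("while", "exit")]

print(cyclomatic_complexity(nodes, edges))  # 3 = one branch + one loop + 1
```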
Evidence shows that software metrics are being widely used by government agencies, the US military, NASA, IT consultants, academic institutions, and commercial and academic development estimation software. Further reading Reijo M. Savola, Quality of security metrics and measurements, Computers & Security, Volume 37, September 2013, Pages 78-90. See also Goal Question-Metric List of tools for static code analysis Orthogonal Defect Classification Software engineering Software package metrics References External links Software Metrics (SQA.net) Software Engineering Metrics: What do they measure and how do we know NASA Standard NASA-STD-8739.8 (Software Assurance and Software Safety Standard) HIS Source Code Metrics (outdated but for reference; related see AUTOSAR) HIS Source Code Metrics version 1.3.1 01.04.2008 (outdated but for reference; related see AUTOSAR) A framework for source code metrics NASA.gov SonarQube Metric Definitions Metrics of Object Oriented Software (2010) Metrics
Software metric
[ "Mathematics", "Engineering" ]
817
[ "Software engineering", "Quantity", "Metrics", "Software metrics" ]
187,446
https://en.wikipedia.org/wiki/Orientability
In mathematics, orientability is a property of some topological spaces such as real vector spaces, Euclidean spaces, surfaces, and more generally manifolds that allows a consistent definition of "clockwise" and "anticlockwise". A space is orientable if such a consistent definition exists. In this case, there are two possible definitions, and a choice between them is an orientation of the space. Real vector spaces, Euclidean spaces, and spheres are orientable. A space is non-orientable if "clockwise" is changed into "counterclockwise" after running through some loops in it, and coming back to the starting point. This means that a geometric shape, such as , that moves continuously along such a loop is changed into its own mirror image . A Möbius strip is an example of a non-orientable space. Various equivalent formulations of orientability can be given, depending on the desired application and level of generality. Formulations applicable to general topological manifolds often employ methods of homology theory, whereas for differentiable manifolds more structure is present, allowing a formulation in terms of differential forms. A generalization of the notion of orientability of a space is that of orientability of a family of spaces parameterized by some other space (a fiber bundle) for which an orientation must be selected in each of the spaces which varies continuously with respect to changes in the parameter values. Orientable surfaces A surface S in the Euclidean space R3 is orientable if a chiral two-dimensional figure (for example, ) cannot be moved around the surface and back to where it started so that it looks like its own mirror image (). Otherwise the surface is non-orientable. An abstract surface (i.e., a two-dimensional manifold) is orientable if a consistent concept of clockwise rotation can be defined on the surface in a continuous manner. That is to say that a loop going around one way on the surface can never be continuously deformed (without overlapping itself) to a loop going around the opposite way. This turns out to be equivalent to the question of whether the surface contains no subset that is homeomorphic to the Möbius strip. Thus, for surfaces, the Möbius strip may be considered the source of all non-orientability. For an orientable surface, a consistent choice of "clockwise" (as opposed to counter-clockwise) is called an orientation, and the surface is called oriented. For surfaces embedded in Euclidean space, an orientation is specified by the choice of a continuously varying surface normal n at every point. If such a normal exists at all, then there are always two ways to select it: n or −n. More generally, an orientable surface admits exactly two orientations, and the distinction between an oriented surface and an orientable surface is subtle and frequently blurred. An orientable surface is an abstract surface that admits an orientation, while an oriented surface is a surface that is abstractly orientable, and has the additional datum of a choice of one of the two possible orientations. Examples Most surfaces encountered in the physical world are orientable. Spheres, planes, and tori are orientable, for example. But Möbius strips, real projective planes, and Klein bottles are non-orientable. They, as visualized in 3-dimensions, all have just one side. The real projective plane and Klein bottle cannot be embedded in R3, only immersed with nice intersections. 
Note that locally an embedded surface always has two sides, so a near-sighted ant crawling on a one-sided surface would think there is an "other side". The essence of one-sidedness is that the ant can crawl from one side of the surface to the "other" without going through the surface or flipping over an edge, but simply by crawling far enough. In general, the property of being orientable is not equivalent to being two-sided; however, this holds when the ambient space (such as R3 above) is orientable. For example, a torus embedded in can be one-sided, and a Klein bottle in the same space can be two-sided; here refers to the Klein bottle. Orientation by triangulation Any surface has a triangulation: a decomposition into triangles such that each edge on a triangle is glued to at most one other edge. Each triangle is oriented by choosing a direction around the perimeter of the triangle, associating a direction to each edge of the triangle. If this is done in such a way that, when glued together, neighboring edges are pointing in the opposite direction, then this determines an orientation of the surface. Such a choice is only possible if the surface is orientable, and in this case there are exactly two different orientations. If the figure can be consistently positioned at all points of the surface without turning into its mirror image, then this will induce an orientation in the above sense on each of the triangles of the triangulation by selecting the direction of each of the triangles based on the order red-green-blue of colors of any of the figures in the interior of the triangle. This approach generalizes to any n-manifold having a triangulation. However, some 4-manifolds do not have a triangulation, and in general for n > 4 some n-manifolds have triangulations that are inequivalent. Orientability and homology If H1(S) denotes the first homology group of a closed surface S, then S is orientable if and only if H1(S) has a trivial torsion subgroup. More precisely, if S is orientable then H1(S) is a free abelian group, and if not then H1(S) = F + Z/2Z where F is free abelian, and the Z/2Z factor is generated by the middle curve in a Möbius band embedded in S. Orientability of manifolds Let M be a connected topological n-manifold. There are several possible definitions of what it means for M to be orientable. Some of these definitions require that M has extra structure, like being differentiable. Occasionally, must be made into a special case. When more than one of these definitions applies to M, then M is orientable under one definition if and only if it is orientable under the others. Orientability of differentiable manifolds The most intuitive definitions require that M be a differentiable manifold. This means that the transition functions in the atlas of M are C1-functions. Such a function admits a Jacobian determinant. When the Jacobian determinant is positive, the transition function is said to be orientation preserving. An oriented atlas on M is an atlas for which all transition functions are orientation preserving. M is orientable if it admits an oriented atlas. When , an orientation of M is a maximal oriented atlas. (When , an orientation of M is a function .) Orientability and orientations can also be expressed in terms of the tangent bundle. The tangent bundle is a vector bundle, so it is a fiber bundle with structure group . That is, the transition functions of the manifold induce transition functions on the tangent bundle which are fiberwise linear transformations. 
If the structure group can be reduced to the group of positive determinant matrices, or equivalently if there exists an atlas whose transition functions determine an orientation preserving linear transformation on each tangent space, then the manifold M is orientable. Conversely, M is orientable if and only if the structure group of the tangent bundle can be reduced in this way. Similar observations can be made for the frame bundle. Another way to define orientations on a differentiable manifold is through volume forms. A volume form is a nowhere vanishing section ω of , the top exterior power of the cotangent bundle of M. For example, Rn has a standard volume form given by . Given a volume form on M, the collection of all charts for which the standard volume form pulls back to a positive multiple of ω is an oriented atlas. The existence of a volume form is therefore equivalent to orientability of the manifold. Volume forms and tangent vectors can be combined to give yet another description of orientability. If is a basis of tangent vectors at a point p, then the basis is said to be right-handed if . A transition function is orientation preserving if and only if it sends right-handed bases to right-handed bases. The existence of a volume form implies a reduction of the structure group of the tangent bundle or the frame bundle to . As before, this implies the orientability of M. Conversely, if M is orientable, then local volume forms can be patched together to create a global volume form, orientability being necessary to ensure that the global form is nowhere vanishing. Homology and the orientability of general manifolds At the heart of all the above definitions of orientability of a differentiable manifold is the notion of an orientation preserving transition function. This raises the question of what exactly such transition functions are preserving. They cannot be preserving an orientation of the manifold because an orientation of the manifold is an atlas, and it makes no sense to say that a transition function preserves or does not preserve an atlas of which it is a member. This question can be resolved by defining local orientations. On a one-dimensional manifold, a local orientation around a point p corresponds to a choice of left and right near that point. On a two-dimensional manifold, it corresponds to a choice of clockwise and counter-clockwise. These two situations share the common feature that they are described in terms of top-dimensional behavior near p but not at p. For the general case, let M be a topological n-manifold. A local orientation of M around a point p is a choice of generator of the group To see the geometric significance of this group, choose a chart around p. In that chart there is a neighborhood of p which is an open ball B around the origin O. By the excision theorem, is isomorphic to . The ball B is contractible, so its homology groups vanish except in degree zero, and the space is an -sphere, so its homology groups vanish except in degrees and . A computation with the long exact sequence in relative homology shows that the above homology group is isomorphic to . A choice of generator therefore corresponds to a decision of whether, in the given chart, a sphere around p is positive or negative. A reflection of through the origin acts by negation on , so the geometric significance of the choice of generator is that it distinguishes charts from their reflections. 
On a topological manifold, a transition function is orientation preserving if, at each point p in its domain, it fixes the generators of . From here, the relevant definitions are the same as in the differentiable case. An oriented atlas is one for which all transition functions are orientation preserving, M is orientable if it admits an oriented atlas, and when , an orientation of M is a maximal oriented atlas. Intuitively, an orientation of M ought to define a unique local orientation of M at each point. This is made precise by noting that any chart in the oriented atlas around p can be used to determine a sphere around p, and this sphere determines a generator of . Moreover, any other chart around p is related to the first chart by an orientation preserving transition function, and this implies that the two charts yield the same generator, whence the generator is unique. Purely homological definitions are also possible. Assuming that M is closed and connected, M is orientable if and only if the nth homology group is isomorphic to the integers Z. An orientation of M is a choice of generator of this group. This generator determines an oriented atlas by fixing a generator of the infinite cyclic group and taking the oriented charts to be those for which pushes forward to the fixed generator. Conversely, an oriented atlas determines such a generator as compatible local orientations can be glued together to give a generator for the homology group . Orientation and cohomology A manifold M is orientable if and only if the first Stiefel–Whitney class vanishes. In particular, if the first cohomology group with Z/2 coefficients is zero, then the manifold is orientable. Moreover, if M is orientable and w1 vanishes, then parametrizes the choices of orientations. This characterization of orientability extends to orientability of general vector bundles over M, not just the tangent bundle. The orientation double cover Around each point of M there are two local orientations. Intuitively, there is a way to move from a local orientation at a point to a local orientation at a nearby point : when the two points lie in the same coordinate chart , that coordinate chart defines compatible local orientations at and . The set of local orientations can therefore be given a topology, and this topology makes it into a manifold. More precisely, let O be the set of all local orientations of M. To topologize O we will specify a subbase for its topology. Let U be an open subset of M chosen such that is isomorphic to Z. Assume that α is a generator of this group. For each p in U, there is a pushforward function . The codomain of this group has two generators, and α maps to one of them. The topology on O is defined so that is open. There is a canonical map that sends a local orientation at p to p. It is clear that every point of M has precisely two preimages under . In fact, is even a local homeomorphism, because the preimages of the open sets U mentioned above are homeomorphic to the disjoint union of two copies of U. If M is orientable, then M itself is one of these open sets, so O is the disjoint union of two copies of M. If M is non-orientable, however, then O is connected and orientable. The manifold O is called the orientation double cover. Manifolds with boundary If M is a manifold with boundary, then an orientation of M is defined to be an orientation of its interior. Such an orientation induces an orientation of ∂M. Indeed, suppose that an orientation of M is fixed. 
Let be a chart at a boundary point of M which, when restricted to the interior of M, is in the chosen oriented atlas. The restriction of this chart to ∂M is a chart of ∂M. Such charts form an oriented atlas for ∂M. When M is smooth, at each point p of ∂M, the restriction of the tangent bundle of M to ∂M is isomorphic to , where the factor of R is described by the inward pointing normal vector. The orientation of Tp∂M is defined by the condition that a basis of Tp∂M is positively oriented if and only if it, when combined with the inward pointing normal vector, defines a positively oriented basis of TpM. Orientable double cover A closely related notion uses the idea of covering space. For a connected manifold take , the set of pairs where is a point of and is an orientation at ; here we assume is either smooth so we can choose an orientation on the tangent space at a point or we use singular homology to define orientation. Then for every open, oriented subset of we consider the corresponding set of pairs and define that to be an open set of . This gives a topology and the projection sending to is then a 2-to-1 covering map. This covering space is called the orientable double cover, as it is orientable. is connected if and only if is not orientable. Another way to construct this cover is to divide the loops based at a basepoint into either orientation-preserving or orientation-reversing loops. The orientation preserving loops generate a subgroup of the fundamental group which is either the whole group or of index two. In the latter case (which means there is an orientation-reversing path), the subgroup corresponds to a connected double covering; this cover is orientable by construction. In the former case, one can simply take two copies of , each of which corresponds to a different orientation. Orientation of vector bundles A real vector bundle, which a priori has a GL(n) structure group, is called orientable when the structure group may be reduced to , the group of matrices with positive determinant. For the tangent bundle, this reduction is always possible if the underlying base manifold is orientable and in fact this provides a convenient way to define the orientability of a smooth real manifold: a smooth manifold is defined to be orientable if its tangent bundle is orientable (as a vector bundle). Note that as a manifold in its own right, the tangent bundle is always orientable, even over nonorientable manifolds. Related concepts Lorentzian geometry In Lorentzian geometry, there are two kinds of orientability: space orientability and time orientability. These play a role in the causal structure of spacetime. In the context of general relativity, a spacetime manifold is space orientable if, whenever two right-handed observers head off in rocket ships starting at the same spacetime point, and then meet again at another point, they remain right-handed with respect to one another. If a spacetime is time-orientable then the two observers will always agree on the direction of time at both points of their meeting. In fact, a spacetime is time-orientable if and only if any two observers can agree which of the two meetings preceded the other. Formally, the pseudo-orthogonal group O(p,q) has a pair of characters: the space orientation character σ+ and the time orientation character σ−, Their product σ = σ+σ− is the determinant, which gives the orientation character. 
A space-orientation of a pseudo-Riemannian manifold is identified with a section of the associated bundle where O(M) is the bundle of pseudo-orthogonal frames. Similarly, a time orientation is a section of the associated bundle See also Curve orientation Orientation sheaf References External links Orientation of manifolds at the Manifold Atlas. Orientation covering at the Manifold Atlas. Orientation of manifolds in generalized cohomology theories at the Manifold Atlas. The Encyclopedia of Mathematics article on Orientation. Differential topology Surfaces Articles containing video clips de:Orientierung (Mathematik)#Orientierung einer Mannigfaltigkeit
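As a concrete companion to the triangulation criterion described earlier (orient each triangle so that every shared edge is traversed in opposite directions by its two neighbours), here is a minimal sketch that decides orientability of a triangulated surface. The function and the two sample complexes are illustrative assumptions, not drawn from the article itself.

```python
from collections import defaultdict, deque

def is_orientable(triangles):
    """Try to orient every triangle so that each shared edge is traversed
    once in each direction by its two adjacent triangles; a contradiction
    means the surface is non-orientable."""
    edge_to_tris = defaultdict(list)
    for t, tri in enumerate(triangles):
        for i in range(3):
            edge_to_tris[frozenset((tri[i], tri[(i + 1) % 3]))].append(t)

    def directed_edges(tri):
        return {(tri[i], tri[(i + 1) % 3]) for i in range(3)}

    oriented = [None] * len(triangles)
    for start in range(len(triangles)):
        if oriented[start] is not None:
            continue
        oriented[start] = tuple(triangles[start])
        queue = deque([start])
        while queue:
            t = queue.popleft()
            for a, b in directed_edges(oriented[t]):
                for u in edge_to_tris[frozenset((a, b))]:
                    if u == t:
                        continue
                    tri_u = tuple(triangles[u])
                    # the neighbouring triangle must traverse the shared edge as (b, a)
                    want = tri_u if (b, a) in directed_edges(tri_u) else tri_u[::-1]
                    if oriented[u] is None:
                        oriented[u] = want
                        queue.append(u)
                    elif directed_edges(oriented[u]) != directed_edges(want):
                        return False
    return True

tetrahedron = [(0, 1, 2), (0, 2, 3), (0, 3, 1), (1, 3, 2)]          # a sphere
moebius = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 0), (4, 0, 1)]   # a Möbius band
print(is_orientable(tetrahedron))  # True
print(is_orientable(moebius))      # False
```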
Orientability
[ "Mathematics" ]
3,750
[ "Topology", "Differential topology" ]
187,461
https://en.wikipedia.org/wiki/Lenz%27s%20law
Lenz's law states that the direction of the electric current induced in a conductor by a changing magnetic field is such that the magnetic field created by the induced current opposes changes in the initial magnetic field. It is named after physicist Heinrich Lenz, who formulated it in 1834. It is a qualitative law that specifies the direction of induced current, but states nothing about its magnitude. Lenz's law predicts the direction of many effects in electromagnetism, such as the direction of voltage induced in an inductor or wire loop by a changing current, or the drag force of eddy currents exerted on moving objects in the magnetic field. Lenz's law may be seen as analogous to Newton's third law in classical mechanics and Le Chatelier's principle in chemistry. Definition Lenz's law states that: The current induced in a circuit due to a change in a magnetic field is directed to oppose the change in flux and to exert a mechanical force which opposes the motion. Lenz's law is contained in the rigorous treatment of Faraday's law of induction (the magnitude of EMF induced in a coil is proportional to the rate of change of the magnetic flux), where it finds expression by the negative sign: which indicates that the induced electromotive force and the rate of change in magnetic flux have opposite signs. This means that the direction of the back EMF of an induced field opposes the changing current that is its cause. D.J. Griffiths summarized it as follows: Nature abhors a change in flux. If a change in the magnetic field of current i1 induces another electric current, i2, the direction of i2 is opposite that of the change in i1. If these currents are in two coaxial circular conductors ℓ1 and ℓ2 respectively, and both are initially 0, then the currents i1 and i2 must counter-rotate. The opposing currents will repel each other as a result. Example Magnetic fields from strong magnets can create counter-rotating currents in a copper or aluminium pipe. This is shown by dropping the magnet through the pipe. The descent of the magnet inside the pipe is observably slower than when dropped outside the pipe. When a voltage is generated by a change in magnetic flux according to Faraday's law, the polarity of the induced voltage is such that it produces a current whose magnetic field opposes the change which produces it. The induced magnetic field inside any loop of wire always acts to keep the magnetic flux in the loop constant. The direction of an induced current can be determined using the right-hand rule to show which direction of current flow would create a magnetic field that would oppose the direction of changing flux through the loop. In the examples above, if the flux is increasing, the induced field acts in opposition to it. If it is decreasing, the induced field acts in the direction of the applied field to oppose the change. Detailed interaction of charges in these currents In electromagnetism, when charges move along electric field lines work is done on them, whether it involves storing potential energy (negative work) or increasing kinetic energy (positive work). When net positive work is applied to a charge q1, it gains speed and momentum. The net work on q1 thereby generates a magnetic field whose strength (in units of magnetic flux density (1 tesla = 1 volt-second per square meter)) is proportional to the speed increase of q1. This magnetic field can interact with a neighboring charge q2, passing on this momentum to it, and in return, q1 loses momentum. 
The charge q2 can also act on q1 in a similar manner, by which it returns some of the momentum that it received from q1. This back-and-forth component of momentum contributes to magnetic inductance. The closer that q1 and q2 are, the greater the effect. When q2 is inside a conductive medium such as a thick slab made of copper or aluminum, it more readily responds to the force applied to it by q1. The energy of q1 is not instantly consumed as heat generated by the current of q2 but is also stored in two opposing magnetic fields. The energy density of magnetic fields tends to vary with the square of the magnetic field's intensity; however, in the case of magnetically non-linear materials such as ferromagnets and superconductors, this relationship breaks down. Conservation of momentum Momentum must be conserved in the process, so if q1 is pushed in one direction, then q2 ought to be pushed in the other direction by the same force at the same time. However, the situation becomes more complicated when the finite speed of electromagnetic wave propagation is introduced (see retarded potential). This means that for a brief period the total momentum of the two charges is not conserved, implying that the difference should be accounted for by momentum in the fields, as asserted by Richard P. Feynman. Famous 19th century electrodynamicist James Clerk Maxwell called this the "electromagnetic momentum". Yet, such a treatment of fields may be necessary when Lenz's law is applied to opposite charges. It is normally assumed that the charges in question have the same sign. If they do not, such as a proton and an electron, the interaction is different. An electron generating a magnetic field would generate an EMF that causes a proton to accelerate in the same direction as the electron. At first, this might seem to violate the law of conservation of momentum, but such an interaction is seen to conserve momentum if the momentum of electromagnetic fields is taken into account. References External links with an aluminum block in an MRI Magnetic levitation Electrodynamics Articles containing video clips
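A small numerical sketch of the sign convention discussed above: Faraday's law with Lenz's negative sign, emf = −dΦ/dt, means that whenever the flux through a loop is increasing, the induced EMF has the opposite sign, so the induced current's own field opposes the increase. The flux waveform below is an arbitrary illustrative choice, not a value taken from the article.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)              # time, seconds
phi = 2e-3 * np.sin(2 * np.pi * 5.0 * t)     # magnetic flux through the loop, webers

dphi_dt = np.gradient(phi, t)                # rate of change of the flux
emf = -dphi_dt                               # Faraday's law with Lenz's sign

# Wherever the flux is increasing, the induced EMF is negative:
# the induced current's field opposes the change that produces it.
print(np.all(emf[dphi_dt > 0] < 0))          # True
```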
Lenz's law
[ "Mathematics" ]
1,172
[ "Electrodynamics", "Dynamical systems" ]
187,584
https://en.wikipedia.org/wiki/Pyrotechnics
Pyrotechnics is the science and craft of creating such things as fireworks, safety matches, oxygen candles, explosive bolts and other fasteners, parts of automotive airbags, as well as gas-pressure blasting in mining, quarrying, and demolition. This trade relies upon self-contained and self-sustained exothermic chemical reactions to make heat, light, gas, smoke and/or sound. The name comes from the Greek words pyr ("fire") and tekhnikos ("made by art"). Improper use of pyrotechnics could lead to pyrotechnic accidents. People responsible for the safe storage, handling, and functioning of pyrotechnic devices are known as pyrotechnicians. Proximate pyrotechnics Explosions, flashes, smoke, flames, fireworks and other pyrotechnic-driven effects used in the entertainment industry are referred to as proximate pyrotechnics. Proximate refers to the pyrotechnic device's location relative to an audience. In the majority of jurisdictions, special training and licensing must be obtained from local authorities to legally prepare and use proximate pyrotechnics. Many musical groups use pyrotechnics to enhance their live shows. Pink Floyd were innovators of pyrotechnic use in concerts. For instance, at the climax of their song "Careful with That Axe, Eugene", a blast of smoke was set off at the back of the stage. Bands such as the Who, KISS and Queen soon followed with use of pyrotechnics in their shows. Michael Jackson attempted using pyrotechnics in a 1984 Pepsi advertisement, where a stray spark caused a small fire in his hair. German industrial metal band Rammstein are renowned for their incorporation of a large variety of pyrotechnics into performances, which range from flaming costumes to face-mounted flamethrowers. Nightwish, Lordi, Sabaton and Parkway Drive are also known for their vivid pyrotechnics in concert. Many professional wrestlers have also used pyrotechnics as part of their entrances to the ring. Modern pyrotechnics are, in general, divided into categories based upon the type of effect produced or manufacturing method. The most common categories are: Airburst – Hanging charges designed to burst into spheres of sparks. Primarily used to provide an effect similar to an aerial shell without producing fallout. May also be designed to launch pieces of confetti or streamers. A plurality of airbursts is usually achieved with a harness. Binary powders – Kits divided into separate oxidizer and fuel, intended to be mixed on site before being loaded into hardware. Comet (meteor) – Brightly colored burning pellets resembling shooting stars. These are no larger than 50mm for proximate use. Crossette comets carry a small cavity filled with a small amount of burst charge in the middle which causes the stars to split into four pieces. Crossette comets are usually no larger than 45mm for proximate use. Flame Mortars – These articles use a smokeless powder-based composition to produce a rising column or a rolling ball of fire in various colors. Duration is typically 5 seconds or less for proximate pyrotechnics, and the diameter is usually 1 to 4 inches. Flare – Cylindrical tubes containing a pressed pyrotechnic composition intended to produce a bright flame of various colors. Proximate flares are usually 2 to 6 inches in length and 0.5-1 inches in diameter, and may last 60 seconds or longer. 
Flash Paper/Flash String/Flash Cotton – Stored and transported when wet with either water or alcohol, these are different forms of nitrocellulose which burn with very little smoke or ash, and are popularly used as hand flashes for magicians, amongst other uses. Flash Trays – A preloaded tube 6 to 18 inches in length with a slit cut between the two end plugs. Used to produce a fan pattern flash and spray of sparks. Flash Pots – Preloaded cylindrical tubes 2 to 4 inches long and 0.5 to 1 inches in diameter used to emit bright flashes of light, often with a bang or spray of sparks. Gerbs – Pyrotechnic fountains used to produce a controlled plume of sparks, and may be classified as fast gerbs or duration gerbs depending on the burn duration. Special waterfall gerbs are often used to produce an effect similar to a waterfall, using several hung upside down in a line. Line Rockets – A device attached to a line specifically to produce thrust. Usually has a duration of 5 seconds or less. May also produce a whistle effect. Ice Fountains – A gerb-type device with no choke specifically used to provide a low-smoke alternative to gerbs, at the cost of lower height. Mines – Devices containing multiple stars, propelled into the air using a lift charge. Special mine-comet effects feature both mines and comets in the same article. Mortar Hits – Produces a bright flash and a puff of smoke, and may be designed to produce noise in addition to the effect. Concussion mortars are special mortars exclusively used to produce loud bangs, and may be used to accentuate other effects. Multi-Shot Devices – Articles used to chain multiple effects together, may be timed or instantaneous. Smoke Cartridges – Used to produce a plume of smoke, duration and size vary greatly depending on the device used. Smoke Cookies – Compressed discs of pyrotechnic composition used to produce a smoke effect. Spark Hits – Used to simulate short circuits in an electrical panel. Saxons – Articles that produce revolving showers of sparks, consisting of two gerb-type devices pinned at the center. Shock Tubing – A special thermoplastic tube used to simulate lightning strikes and the resulting thunder, using special igniters. Strobe Pots – Used to produce multiple flashes of light. Various ingredients may be added to pyrotechnic devices to provide colour, smoke, noise or sparks. Special additives and construction methods are used to modify the character of the effect produced, either to enhance or subdue the effect; for example, sandwiching layers of pyrotechnic compounds containing potassium perchlorate, sodium salicylate or sodium benzoate with layers that do not creates a fountain of sparks with an undulating whistle. In general, such pyrotechnic devices are initiated by a remotely controlled electrical signal that causes an electric match, or e-match, to produce ignition. The remote control may be manual, via a switch console, or computer controlled according to a pre-programmed sequence and/or a sequence that tracks the live performance via stage cues. Display pyrotechnics Display pyrotechnics, also known as commercial fireworks, are pyrotechnic devices intended for use outdoors, where the audience can be further away, and smoke and fallout is less of a concern. Generally the effects, though often similar to proximate pyrotechnics, are of a larger size and more vigorous in nature. It will typically take an entire day to set up a professional fireworks display. 
This work is normally undertaken on temporarily secured locations by specialist companies employing teams of experienced pyrotechnicians. In modern times a familiar feature of larger fireworks displays are aerial shells, which commonly appear as large spherical bursts of stars in the sky. The exterior of these shells are commonly made of a hard paper-adhesive layered composite which holds the interior stars arranged around a burst charge, or other pyrotechnic effects. Aerial shells are fired out of mortars from the ground and have internal timing fuses that accurately and reliably position their bursts. A continuous sequence of shells are launched, often with effects artistically choreographed to music and themes, accompanied by various types of ground effects. Modern fireworks displays are commonly executed to a designed program using electrical wiring and ignition linked to an electronic firing system. The size of these fireworks can range from 50 mm (2") to over 600 mm (24") diameter depending on the type of effect and available distance from the audience. In most jurisdictions, special fireworks training and licensing must be obtained from local authorities to legally prepare and use display pyrotechnics. Consumer pyrotechnics Consumer pyrotechnics are devices readily available for purchase to the general public with little or no special licensing or training. These items are considered relatively low hazard devices but, like all pyrotechnics, can still be hazardous and should be stored, handled and used appropriately. Some of the most common examples of consumer pyrotechnics encountered include recreational fireworks (including whistling and sparking types), model rocket motors, highway and marine distress flares, sparklers and caps for toy guns. Pyrotechnics are also indirectly involved in other consumer products such as powder actuated nail guns, ammunition for firearms, and modern fireplaces. Some types, including bird scarers, shell crackers, whistle crackers and flares, may be designed to be fired from a 12-gauge pistol or rifle. Safety Pyrotechnics are dangerous and must be handled and used properly. Recently, several high-profile incidents involving pyrotechnics have re-enforced the need to respect these explosives at all times. Proximate pyrotechnics is an area of expertise that requires additional training beyond that of other professional pyrotechnics areas and the use of devices specifically manufactured for indoor, close proximity use. Despite this, accidents can still happen due to the use of low-quality product, or due to an unexpected event, or even due to an error on the part of the operator. Homemade devices A common low-budget pyrotechnic flash pot is built using modified screw-in electric fuses in a common light fixture. The fuses are intentionally blown, acting as ignitors for a pyrotechnic material. Homemade devices may fail to include safety features and can provide numerous hazards, including: A firing circuit using high-power, non-isolated AC line voltage can be a shock hazard to the operator and bystanders. The use of high-current fuses as ignitors can cause main circuit breakers and fuses to trip, due to the sudden inrush of hundreds of amperes through a dead-shorted circuit. Switches used to control ignition may be damaged from the high-current surges. There may not be indicators or interlocks preventing premature ignition of the pyrotechnic material. 
Screwing a powder-loaded fuse into an unknowingly powered socket will result in immediate ignition, injuring the operator. Commercial flash pots include safety features such as warning pilot lamps, preignition grounding, and safing circuits. They also use isolated and low-voltage power sources, and have keyed power connections to help prevent accidental ignition. See also Fireworks List of pyrotechnic incidents List of nightclub fires Notes References Natural Resources Canada (2003), "Pyrotechnics Special Effects Manual. Edition 2" Minister of Public Works and Government Services Canada NFPA (2006), "NFPA 160; Standard for Flame Effects Before an Audience" NFPA International NFPA (2006), "NFPA 1123; Code for Fireworks Display" NFPA International NFPA (2006), "NFPA 1126; Standard for the Use of Pyrotechnics before a Proximate Audience" NFPA International External links Explosives Safety and Security Branch; a division of Natural resources Canada. Canadian Fireworks Association ACP PGI.org – Pyrotechnics Guild International PyroGuide – pyrotechnics wiki Film Pyrotechnics Pyrotechnic film examples Explosives Special effects Hobbies mt:Piroteknika
Pyrotechnics
[ "Chemistry" ]
2,474
[ "Explosives", "Explosions" ]
187,805
https://en.wikipedia.org/wiki/Percolation%20theory
In statistical physics and mathematics, percolation theory describes the behavior of a network when nodes or links are added. This is a geometric type of phase transition, since at a critical fraction of addition the small, disconnected clusters of the network merge into significantly larger, connected, so-called spanning clusters. The applications of percolation theory to materials science and in many other disciplines are discussed here and in the articles Network theory and Percolation (cognitive psychology). Introduction A representative question (and the source of the name) is as follows. Assume that some liquid is poured on top of some porous material. Will the liquid be able to make its way from hole to hole and reach the bottom? This physical question is modelled mathematically as a three-dimensional network of vertices, usually called "sites", in which the edges or "bonds" between each two neighbors may be open (allowing the liquid through) with probability , or closed with probability , and they are assumed to be independent. Therefore, for a given , what is the probability that an open path (meaning a path, each of whose links is an "open" bond) exists from the top to the bottom? The behavior for large  is of primary interest. This problem, now called bond percolation, was introduced in the mathematics literature by , and has been studied intensively by mathematicians and physicists since then. In a slightly different mathematical model for obtaining a random graph, a site is "occupied" with probability or "empty" (in which case its edges are removed) with probability ; the corresponding problem is called site percolation. The question is the same: for a given p, what is the probability that a path exists between top and bottom? Similarly, one can ask, given a connected graph, at what fraction of failures the graph will become disconnected (no large component). The same questions can be asked for any lattice dimension. As is quite typical, it is actually easier to examine infinite networks than just large ones. In this case the corresponding question is: does an infinite open cluster exist? That is, is there a path of connected points of infinite length "through" the network? By Kolmogorov's zero–one law, for any given , the probability that an infinite cluster exists is either zero or one. Since this probability is an increasing function of (proof via coupling argument), there must be a critical (denoted by ) below which the probability is always 0 and above which the probability is always 1. In practice, this criticality is very easy to observe. Even for as small as 100, the probability of an open path from the top to the bottom increases sharply from very close to zero to very close to one in a short span of values of . History The Flory–Stockmayer theory was the first theory investigating percolation processes. The history of the percolation model as we know it has its roots in the coal industry. Since the industrial revolution, the economic importance of this source of energy fostered many scientific studies to understand its composition and optimize its use. During the 1930s and 1940s, qualitative analysis by organic chemistry gave way more and more to quantitative studies. In this context, the British Coal Utilisation Research Association (BCURA) was created in 1938. It was a research association funded by the coal mine owners. In 1942, Rosalind Franklin, who had recently graduated in chemistry from the University of Cambridge, joined the BCURA. 
She started research on the density and porosity of coal. During the Second World War, coal was an important strategic resource. It was used as a source of energy, but it was also the main constituent of gas masks. Coal is a porous medium. To measure its 'real' density, one has to sink it in a liquid or a gas whose molecules are small enough to fill its microscopic pores. While trying to measure the density of coal using several gases (helium, methanol, hexane, benzene), Rosalind Franklin found different values depending on the gas used, and showed that the pores of coal are made of microstructures of various lengths that act as a microscopic sieve to discriminate between the gases. She also discovered that the size of these structures depends on the carbonization temperature during coal production. With this research, she obtained a PhD degree and left the BCURA in 1946. In the mid-1950s, Simon Broadbent worked at the BCURA as a statistician. Among other interests, he studied the use of coal in gas masks. One question was to understand how a fluid can diffuse in the coal pores, modeled as a random maze of open or closed tunnels. In 1954, during a symposium on Monte Carlo methods, he asked John Hammersley questions about the use of numerical methods to analyze this model. In their 1957 article, Broadbent and Hammersley introduced a mathematical model of this phenomenon: percolation. Computation of the critical parameter For most infinite lattice graphs, cannot be calculated exactly, though in some cases there is an exact value. For example: for the square lattice in two dimensions, for bond percolation, a fact which was an open question for more than 20 years and was finally resolved by Harry Kesten in the early 1980s, see . For site percolation on the square lattice, the value of is not known from analytic derivation but only via simulations of large lattices, which provide the estimate 0.59274621 ± 0.00000013. A limit case for lattices in high dimensions is given by the Bethe lattice, whose threshold is at for a coordination number . In other words: for the regular tree of degree , is equal to . For a random tree-like network without degree-degree correlation, it can be shown that such a network can have a giant component, and the percolation threshold (transmission probability) is given by , where is the generating function corresponding to the excess degree distribution. So, for random Erdős–Rényi networks of average degree , . In networks with low clustering, , the critical point gets scaled by such that: This indicates that for a given degree distribution, clustering leads to a larger percolation threshold, mainly because for a fixed number of links, the clustering structure reinforces the core of the network at the price of diluting the global connections. For networks with high clustering, strong clustering could induce the core–periphery structure, in which the core and periphery might percolate at different critical points, and the above approximate treatment is not applicable. Universality The universality principle states that the numerical value of is determined by the local structure of the graph, whereas the behavior near the critical threshold, , is characterized by universal critical exponents. For example, the distribution of the size of clusters at criticality decays as a power law with the same exponent for all 2d lattices. 
This universality means that for a given dimension, the various critical exponents, the fractal dimension of the clusters at is independent of the lattice type and percolation type (e.g., bond or site). However, recently percolation has been performed on a weighted planar stochastic lattice (WPSL) and found that although the dimension of the WPSL coincides with the dimension of the space where it is embedded, its universality class is different from that of all the known planar lattices. Phases Subcritical and supercritical The main fact in the subcritical phase is "exponential decay". That is, when , the probability that a specific point (for example, the origin) is contained in an open cluster (meaning a maximal connected set of "open" edges of the graph) of size decays to zero exponentially in . This was proved for percolation in three and more dimensions by and independently by . In two dimensions, it formed part of Kesten's proof that . The dual graph of the square lattice is also the square lattice. It follows that, in two dimensions, the supercritical phase is dual to a subcritical percolation process. This provides essentially full information about the supercritical model with . The main result for the supercritical phase in three and more dimensions is that, for sufficiently large , there is almost certainly an infinite open cluster in the two-dimensional slab . This was proved by . In two dimensions with , there is with probability one a unique infinite closed cluster (a closed cluster is a maximal connected set of "closed" edges of the graph). Thus the subcritical phase may be described as finite open islands in an infinite closed ocean. When just the opposite occurs, with finite closed islands in an infinite open ocean. The picture is more complicated when since , and there is coexistence of infinite open and closed clusters for between and . Criticality Percolation has a singularity at the critical point and many properties behave as of a power-law with , near . Scaling theory predicts the existence of critical exponents, depending on the number d of dimensions, that determine the class of the singularity. When these predictions are backed up by arguments from conformal field theory and Schramm–Loewner evolution, and include predicted numerical values for the exponents. Most of these predictions are conjectural except when the number of dimensions satisfies either or . They include: There are no infinite clusters (open or closed) The probability that there is an open path from some fixed point (say the origin) to a distance of decreases polynomially, i.e. is on the order of for some  does not depend on the particular lattice chosen, or on other local parameters. It depends only on the dimension (this is an instance of the universality principle). decreases from until and then stays fixed. . The shape of a large cluster in two dimensions is conformally invariant. See . In 11 or more dimensions, these facts are largely proved using a technique known as the lace expansion. It is believed that a version of the lace expansion should be valid for 7 or more dimensions, perhaps with implications also for the threshold case of 6 dimensions. The connection of percolation to the lace expansion is found in . In two dimensions, the first fact ("no percolation in the critical phase") is proved for many lattices, using duality. 
Substantial progress has been made on two-dimensional percolation through the conjecture of Oded Schramm that the scaling limit of a large cluster may be described in terms of a Schramm–Loewner evolution. This conjecture was proved by in the special case of site percolation on the triangular lattice. Different models Directed percolation that models the effect of gravitational forces acting on the liquid was also introduced in , and has connections with the contact process. The first model studied was Bernoulli percolation. In this model all bonds are independent. This model is called bond percolation by physicists. A generalization was next introduced as the Fortuin–Kasteleyn random cluster model, which has many connections with the Ising model and other Potts models. Bernoulli (bond) percolation on complete graphs is an example of a random graph. The critical probability is , where is the number of vertices (sites) of the graph. Bootstrap percolation removes active cells from clusters when they have too few active neighbors, and looks at the connectivity of the remaining cells. First passage percolation. Invasion percolation. Applications In biology, biochemistry, and physical virology Percolation theory has been used to successfully predict the fragmentation of biological virus shells (capsids), with the fragmentation threshold of Hepatitis B virus capsid predicted and detected experimentally. When a critical number of subunits has been randomly removed from the nanoscopic shell, it fragments and this fragmentation may be detected using Charge Detection Mass Spectroscopy (CDMS) among other single-particle techniques. This is a molecular analog to the common board game Jenga, and has relevance to the broader study of virus disassembly. More stable viral particles (tilings with greater fragmentation thresholds) are found in greater abundance in nature. In ecology Percolation theory has been applied to studies of how environment fragmentation impacts animal habitats and models of how the plague bacterium Yersinia pestis spreads. See also References Further reading External links PercoVIS: a macOS program to visualize percolation on networks in real time Interactive Percolation Nanohub online course on Percolation Theory
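The site-percolation threshold quoted above (≈ 0.5927 for the square lattice) is exactly the kind of quantity estimated by simulation. Below is a minimal Monte Carlo sketch that measures the probability of a top-to-bottom crossing on a finite square lattice; the lattice size, trial count, and helper names are illustrative assumptions, and much larger lattices are needed for precise estimates.

```python
import numpy as np
from collections import deque

def crosses(p, n, rng):
    """One site-percolation sample on an n-by-n square lattice: is there an
    open path (4-neighbour connectivity) from the top row to the bottom row?"""
    open_site = rng.random((n, n)) < p
    seen = np.zeros((n, n), dtype=bool)
    queue = deque((0, j) for j in range(n) if open_site[0, j])
    for cell in queue:
        seen[cell] = True
    while queue:
        i, j = queue.popleft()
        if i == n - 1:
            return True
        for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= a < n and 0 <= b < n and open_site[a, b] and not seen[a, b]:
                seen[a, b] = True
                queue.append((a, b))
    return False

rng = np.random.default_rng(0)
n, trials = 64, 200
for p in (0.50, 0.55, 0.5927, 0.65, 0.70):
    freq = sum(crosses(p, n, rng) for _ in range(trials)) / trials
    print(f"p = {p:.4f}  crossing probability ~ {freq:.2f}")
# The crossing probability rises sharply from near 0 to near 1 around p ~ 0.593.
```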
Percolation theory
[ "Physics", "Chemistry", "Mathematics" ]
2,580
[ "Physical phenomena", "Phase transitions", "Percolation theory", "Combinatorics", "Statistical mechanics" ]
188,037
https://en.wikipedia.org/wiki/Pnictogen
The pnictogens ( or ; from "to choke" and -gen, "generator") are the chemical elements in group 15 of the periodic table. This group is also known as the nitrogen group or nitrogen family. Group 15 consists of the elements nitrogen (N), phosphorus (P), arsenic (As), antimony (Sb), bismuth (Bi), and moscovium (Mc). Since 1988, it has been called Group 15 by the IUPAC. Before that, in America it was called Group VA, owing to a text by H. C. Deming and the Sargent-Welch Scientific Company, while in Europe it was called Group VB, which the IUPAC had recommended in 1970. (Pronounced "group five A" and "group five B"; "V" is the Roman numeral 5). In semiconductor physics, it is still usually called Group V. The "five" ("V") in the historical names comes from the "pentavalency" of nitrogen, reflected by the stoichiometry of compounds such as N2O5. They have also been called the pentels. Characteristics Chemical Like other groups, the members of this family manifest similar patterns in electron configuration, notably in their valence shells, resulting in trends in chemical behavior. This group has a defining characteristic whereby each component element has 5 electrons in its valence shell, that is, 2 electrons in the s sub-shell and 3 unpaired electrons in the p sub-shell. They are therefore 3 electrons shy of filling their valence shell in their non-ionized state. The Russell-Saunders term symbol of the ground state in all elements in the group is 4S. The most important elements of this group to life on Earth are nitrogen (N), which in its diatomic form is the principal component of air, and phosphorus (P), which, like nitrogen, is essential to all known forms of life. Compounds Binary compounds of the group can be referred to collectively as pnictides. Magnetic properties of pnictide compounds span the cases of diamagnetic systems (such as BN or GaN) and magnetically ordered systems (MnSb is paramagnetic at elevated temperatures and ferromagnetic at room temperature); the former compounds are usually transparent and the latter metallic. Other pnictides include the ternary rare-earth (RE) main-group variety of pnictides. These are in the form of , where M is a carbon group or boron group element and Pn is any pnictogen except nitrogen. These compounds are between ionic and covalent compounds and thus have unusual bonding properties. These elements are also noted for their stability in compounds due to their tendency to form covalent double bonds and triple bonds. This property of these elements leads to their potential toxicity, most evident in phosphorus, arsenic, and antimony. When these substances react with various chemicals of the body, they create strong free radicals that are not easily processed by the liver, where they accumulate. Paradoxically, this same strong bonding causes nitrogen's and bismuth's reduced toxicity (when in molecules), because these strong bonds with other atoms are difficult to split, creating very unreactive molecules. For example, , the diatomic form of nitrogen, is used as an inert gas in situations where using argon or another noble gas would be too expensive. Formation of multiple bonds is facilitated by their five valence electrons, as the octet rule permits a pnictogen to accept three electrons in covalent bonding. As 5 − 3 = 2, this leaves two unused electrons in a lone pair unless there is a positive charge around (like in ). 
When a pnictogen forms only three single bonds, the effects of the lone pair typically result in trigonal pyramidal molecular geometry. Oxidation states The light pnictogens (nitrogen, phosphorus, and arsenic) tend to form −3 charges when reduced, completing their octet. When oxidized or ionized, pnictogens typically take an oxidation state of +3 (by losing all three p-shell electrons in the valence shell) or +5 (by losing all three p-shell and both s-shell electrons in the valence shell). However, heavier pnictogens are more likely to form the +3 oxidation state than lighter ones due to the s-shell electrons becoming more stabilized. −3 oxidation state Pnictogens can react with hydrogen to form pnictogen hydrides such as ammonia. Going down the group, to phosphane (phosphine), arsane (arsine), stibane (stibine), and finally bismuthane (bismuthine), each pnictogen hydride becomes progressively less stable (more unstable), more toxic, and has a smaller hydrogen–pnictogen–hydrogen bond angle (from 107.8° in ammonia to 90.48° in bismuthane). (Also, technically, only ammonia and phosphane have the pnictogen in the −3 oxidation state because, for the rest, the pnictogen is less electronegative than hydrogen.) Crystal solids featuring fully reduced pnictogens include yttrium nitride, calcium phosphide, sodium arsenide, indium antimonide, and even double salts like aluminum gallium indium phosphide. These include III-V semiconductors, including gallium arsenide, the second-most widely used semiconductor after silicon. +3 oxidation state Nitrogen forms a limited number of stable III compounds. Nitrogen(III) oxide can only be isolated at low temperatures, and nitrous acid is unstable. Nitrogen trifluoride is the only stable nitrogen trihalide, with nitrogen trichloride, nitrogen tribromide, and nitrogen triiodide being explosive—nitrogen triiodide being so shock-sensitive that the touch of a feather detonates it (the last three actually feature nitrogen in the −3 oxidation state). Phosphorus forms a +III oxide which is stable at room temperature, phosphorous acid, and several trihalides, although the triiodide is unstable. Arsenic forms +III compounds with oxygen as arsenites, arsenous acid, and arsenic(III) oxide, and it forms all four trihalides. Antimony forms antimony(III) oxide and antimonite but not oxyacids. Its trihalides, antimony trifluoride, antimony trichloride, antimony tribromide, and antimony triiodide, like all pnictogen trihalides, each have trigonal pyramidal molecular geometry. The +3 oxidation state is bismuth's most common oxidation state because its ability to form the +5 oxidation state is hindered by relativistic effects on heavier elements, effects that are even more pronounced for moscovium. Bismuth(III) forms an oxide, an oxychloride, an oxynitrate, and a sulfide. Moscovium(III) is predicted to behave similarly to bismuth(III). Moscovium is predicted to form all four trihalides, of which all but the trifluoride are predicted to be soluble in water. It is also predicted to form an oxychloride and oxybromide in the +III oxidation state. +5 oxidation state For nitrogen, the +5 state typically serves only as a formal explanation of molecules like N2O5, as the high electronegativity of nitrogen causes the electrons to be shared almost evenly. Pnictogen compounds with coordination number 5 are hypervalent. Nitrogen(V) fluoride is only theoretical and has not been synthesized. 
The "true" +5 state is more common for the essentially non-relativistic typical pnictogens phosphorus, arsenic, and antimony, as shown in their oxides, phosphorus(V) oxide, arsenic(V) oxide, and antimony(V) oxide, and their fluorides, phosphorus(V) fluoride, arsenic(V) fluoride, antimony(V) fluoride. They also form related fluoride-anions, hexafluorophosphate, hexafluoroarsenate, hexafluoroantimonate, that function as non-coordinating anions. Phosphorus even forms mixed oxide-halides, known as oxyhalides, like phosphorus oxychloride, and mixed pentahalides, like phosphorus trifluorodichloride. Pentamethylpnictogen(V) compounds exist for arsenic, antimony, and bismuth. However, for bismuth, the +5 oxidation state becomes rare due to the relativistic stabilization of the 6s orbitals known as the inert-pair effect, so that the 6s electrons are reluctant to bond chemically. This causes bismuth(V) oxide to be unstable and bismuth(V) fluoride to be more reactive than the other pnictogen pentafluorides, making it an extremely powerful fluorinating agent. This effect is even more pronounced for moscovium, prohibiting it from attaining a +5 oxidation state. Other oxidation states Nitrogen forms a variety of compounds with oxygen in which the nitrogen can take on a variety of oxidation states, including +II, +IV, and even some mixed-valence compounds and very unstable +VI oxidation state. In hydrazine, diphosphane, and organic derivatives of the two, the nitrogen or phosphorus atoms have the −2 oxidation state. Likewise, diimide, which has two nitrogen atoms double-bonded to each other, and its organic derivatives have nitrogen in the oxidation state of −1. Similarly, realgar has arsenic–arsenic bonds, so the arsenic's oxidation state is +II. A corresponding compound for antimony is Sb2(C6H5)4, where the antimony's oxidation state is +II. Phosphorus has the +1 oxidation state in hypophosphorous acid and the +4 oxidation state in hypophosphoric acid. Antimony tetroxide is a mixed-valence compound, where half of the antimony atoms are in the +3 oxidation state, and the other half are in the +5 oxidation state. It is expected that moscovium will have an inert-pair effect for both the 7s and the 7p1/2 electrons, as the binding energy of the lone 7p3/2 electron is noticeably lower than that of the 7p1/2 electrons. This is predicted to cause +I to be a common oxidation state for moscovium, although it also occurs to a lesser extent for bismuth and nitrogen. Physical The pnictogens exemplify the transition from nonmetal to metal going down the periodic table: a gaseous diatomic nonmetal (N), two elements displaying many allotropes of varying conductivities and structures (P and As), and then at least two elements that only form metallic structures in bulk (Sb and Bi; probably Mc as well). All the elements in the group are solids at room temperature, except for nitrogen which is gaseous at room temperature. Nitrogen and bismuth, despite both being pnictogens, are very different in their physical properties. For instance, at STP nitrogen is a transparent non-metallic gas, while bismuth is a silvery-white metal. The densities of the pnictogens increase towards the heavier pnictogens. Nitrogen's density is 0.001251 g/cm3 at STP. Phosphorus's density is 1.82 g/cm3 at STP, arsenic's is 5.72 g/cm3, antimony's is 6.68 g/cm3, and bismuth's is 9.79 g/cm3. Nitrogen's melting point is −210 °C and its boiling point is −196 °C. Phosphorus has a melting point of 44 °C and a boiling point of 280 °C. 
Arsenic is one of only two elements to sublime at standard pressure; it does this at 603 °C. Antimony's melting point is 631 °C and its boiling point is 1587 °C. Bismuth's melting point is 271 °C and its boiling point is 1564 °C. Nitrogen's crystal structure is hexagonal. Phosphorus's crystal structure is cubic. Arsenic, antimony, and bismuth all have rhombohedral crystal structures. Nuclear All pnictogens up to antimony have at least one stable isotope; bismuth has no stable isotopes, but has a primordial radioisotope with a half-life much longer than the age of the universe (209Bi); and all known isotopes of moscovium are synthetic and highly radioactive. In addition to these isotopes, traces of 13N, 32P, and 33P occur in nature, along with various bismuth isotopes (other than 209Bi) in the decay chains of thorium and uranium. History The nitrogen compound sal ammoniac (ammonium chloride) has been known since the time of the Ancient Egyptians. In the 1760s two scientists, Henry Cavendish and Joseph Priestley, isolated nitrogen from air, but neither realized the presence of an undiscovered element. It was not until 1772 that Daniel Rutherford recognized the gas as a distinct element. The alchemist Hennig Brandt first discovered phosphorus in Hamburg in 1669. Brandt produced the element by heating evaporated urine and condensing the resulting phosphorus vapor in water. Brandt initially thought that he had discovered the Philosopher's Stone, but eventually realized that this was not the case. Arsenic compounds have been known for at least 5000 years, and the ancient Greek Theophrastus recognized the arsenic minerals called realgar and orpiment. Elemental arsenic was discovered in the 13th century by Albertus Magnus. Antimony was well known to the ancients. A 5000-year-old vase made of nearly pure antimony exists in the Louvre. Antimony compounds were used in dyes in Babylonian times. The antimony mineral stibnite may have been a component of Greek fire. Bismuth was first discovered by an alchemist in 1400. Within 80 years of bismuth's discovery, it had applications in printing and decorated caskets. The Incas were also using bismuth in knives by 1500. Bismuth was originally thought to be the same as lead, but in 1753, Claude François Geoffroy proved that bismuth was different from lead. Moscovium was successfully produced in 2003 by bombarding americium-243 atoms with calcium-48 atoms. Names and etymology The term "pnictogen" (or "pnigogen") is derived from the ancient Greek word πνίγειν (pnígein), meaning "to choke", referring to the choking or stifling property of nitrogen gas. It can also be used as a mnemonic for the two most common members, P and N. The term "pnictogen" was suggested by the Dutch chemist Anton Eduard van Arkel in the early 1950s. It is also spelled "pnicogen" or "pnigogen". The term "pnicogen" is rarer than the term "pnictogen", and the ratio of academic research papers using "pnictogen" to those using "pnicogen" is 2.5 to 1. It comes from the Greek root meaning "choke" or "strangle", and thus the word "pnictogen" is also a reference to the Dutch and German names for nitrogen (stikstof and Stickstoff, respectively, "suffocating substance": i.e., substance in air, unsupportive of breathing). Hence, "pnictogen" could be translated as "suffocation maker". The word "pnictide" also comes from the same root. Previously, the name pentels (from Greek πέντε (pénte), "five") was also used for this group. 
Occurrence Nitrogen makes up 25 parts per million of the Earth's crust, 5 parts per million of soil on average, 100 to 500 parts per trillion of seawater, and 78% of dry air. Most nitrogen on Earth is in nitrogen gas, but some nitrate minerals exist. Nitrogen makes up 2.5% of a typical human by weight. Phosphorus makes up 0.1% of the Earth's crust, making it the 11th most abundant element. Phosphorus comprises 0.65 parts per million of soil and 15 to 60 parts per billion of seawater. There are 200 Mt of accessible phosphates on Earth. Phosphorus makes up 1.1% of a typical human by weight. Phosphorus occurs in minerals of the apatite family, which are the main components of phosphate rocks. Arsenic constitutes 1.5 parts per million of the Earth's crust, making it the 53rd most abundant element. Soils hold 1 to 10 parts per million of arsenic, and seawater carries 1.6 parts per billion of arsenic. Arsenic comprises 100 parts per billion of a typical human by weight. Some arsenic exists in elemental form, but most arsenic is found in the arsenic minerals orpiment, realgar, arsenopyrite, and enargite. Antimony makes up 0.2 parts per million of the Earth's crust, making it the 63rd most abundant element. Soils contain 1 part per million of antimony on average, and seawater contains 300 parts per trillion on average. A typical human has 28 parts per billion of antimony by weight. Some elemental antimony occurs in silver deposits. Bismuth makes up 48 parts per billion of the Earth's crust, making it the 70th most abundant element. Soils contain approximately 0.25 parts per million of bismuth, and seawater contains 400 parts per trillion of bismuth. Bismuth most commonly occurs as the mineral bismuthinite, but bismuth also occurs in elemental form or in sulfide ores. Moscovium is a synthetic element which does not occur naturally. Production Nitrogen Nitrogen can be produced by fractional distillation of air. Phosphorus The principal method for producing phosphorus is to reduce phosphates with carbon in an electric arc furnace. Arsenic Most arsenic is prepared by heating the mineral arsenopyrite in the presence of air. This forms As4O6, from which arsenic can be extracted via carbon reduction. However, it is also possible to make metallic arsenic by heating arsenopyrite at 650 to 700 °C without oxygen. Antimony With sulfide ores, the method by which antimony is produced depends on the amount of antimony in the raw ore. If the ore contains 25% to 45% antimony by weight, then crude antimony is produced by smelting the ore in a blast furnace. If the ore contains 45% to 60% antimony by weight, antimony is obtained by heating the ore, a process also known as liquidation. From ores containing more than 60% antimony by weight, antimony is chemically displaced from the molten ore with iron shavings, resulting in impure metal. If an oxide ore of antimony contains less than 30% antimony by weight, the ore is reduced in a blast furnace. If the ore contains closer to 50% antimony by weight, the ore is instead reduced in a reverberatory furnace. Antimony ores with mixed sulfides and oxides are smelted in a blast furnace. Bismuth Bismuth minerals do occur, in particular in the form of sulfides and oxides, but it is more economical to produce bismuth as a by-product of the smelting of lead ores or, as in China, of tungsten and zinc ores. Moscovium Moscovium is produced a few atoms at a time in particle accelerators by firing a beam of calcium-48 ions at americium-243 until the nuclei fuse. 
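Representative overall equations for two of these processes (standard textbook forms rather than equations given in this article; the moscovium equation assumes the 3n evaporation channel of the 2003 experiments):

```latex
\begin{aligned}
&2\,\mathrm{Ca_3(PO_4)_2} + 6\,\mathrm{SiO_2} + 10\,\mathrm{C} \longrightarrow 6\,\mathrm{CaSiO_3} + 10\,\mathrm{CO} + \mathrm{P_4}\\
&{}^{243}_{\,95}\mathrm{Am} + {}^{48}_{20}\mathrm{Ca} \longrightarrow {}^{288}_{115}\mathrm{Mc} + 3\,{}^{1}_{0}\mathrm{n}
\end{aligned}
```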
Applications Liquid nitrogen is a commonly used cryogenic liquid. Nitrogen in the form of ammonia is a nutrient critical to most plants' survival. Synthesis of ammonia accounts for about 1–2% of the world's energy consumption and supplies the majority of the reduced nitrogen in food. Phosphorus is used in matches and incendiary bombs. Phosphate fertilizer helps feed much of the world. Arsenic was historically used in the pigment Paris green, a use since discontinued because of its extreme toxicity. Arsenic in the form of organoarsenic compounds is sometimes used in chicken feed. Antimony is alloyed with lead to produce some bullets. Antimony currency was briefly used in the 1930s in parts of China, but was discontinued as antimony is both soft and toxic. Bismuth subsalicylate is the active ingredient in Pepto-Bismol. Bismuth chalcogenides are being studied in mice with cancer as candidates for improving radiation therapy in human cancer patients. Moscovium is too unstable and scarce to have any known practical application. Biological role Nitrogen is a component of molecules critical to life on earth, such as DNA and amino acids. Nitrates accumulate in plants such as spinach and lettuce, while nitrogen-fixing bacteria in the root nodules of leguminous plants such as peas supply nitrogen compounds directly. A typical 70 kg human contains 1.8 kg of nitrogen. Phosphorus in the form of phosphates occurs in compounds important to life, such as DNA and ATP. Humans consume approximately 1 g of phosphorus per day. Phosphorus is found in foods such as fish, liver, turkey, chicken, and eggs. Phosphate deficiency causes a condition known as hypophosphatemia. A typical 70 kg human contains 480 g of phosphorus. Arsenic promotes growth in chickens and rats, and may be essential for humans in small quantities. Arsenic has been shown to be helpful in metabolizing the amino acid arginine. There are 7 mg of arsenic in a typical 70 kg human. Antimony is not known to have a biological role. Plants take up only trace amounts of antimony. There are approximately 2 mg of antimony in a typical 70 kg human. Bismuth is not known to have a biological role. Humans ingest on average less than 20 μg of bismuth per day. There is less than 500 μg of bismuth in a typical 70 kg human. Moscovium is too unstable to occur in nature or to have any biological role. Toxicity Nitrogen gas itself is non-toxic, but breathing pure nitrogen is deadly because it causes nitrogen asphyxiation. The build-up of nitrogen bubbles in the blood, such as may occur during scuba diving, can cause a condition known as the "bends" (decompression sickness). Many nitrogen compounds such as hydrogen cyanide and nitrogen-based explosives are also highly dangerous. White phosphorus, an allotrope of phosphorus, is toxic, with 1 mg per kg of body weight being a lethal dose. White phosphorus usually kills humans within a week of ingestion by attacking the liver. Breathing in phosphorus in its gaseous form can cause an industrial disease called "phossy jaw", which eats away the jawbone. White phosphorus is also highly flammable. Some organophosphorus compounds can fatally block certain enzymes in the human body. Elemental arsenic is toxic, as are many of its inorganic compounds; however, some of its organic compounds can promote growth in chickens. The lethal dose of arsenic for a typical adult is 200 mg and can cause diarrhea, vomiting, colic, dehydration, and coma. 
Death from arsenic poisoning typically occurs within a day. Antimony is mildly toxic. Additionally, wine steeped in antimony containers can induce vomiting. When taken in large doses, antimony causes vomiting in a victim, who then appears to recover before dying several days later. Antimony attaches itself to certain enzymes and is difficult to dislodge. Stibine, or SbH3, is far more toxic than pure antimony. Bismuth itself is largely non-toxic, although consuming too much of it can damage the liver. Only one person has ever been reported to have died from bismuth poisoning. However, consumption of soluble bismuth salts can turn a person's gums black. Moscovium is too unstable for its toxicity to be studied. See also Oxypnictide, including superconductors discovered in 2008 Iron-based superconductor, ferropnictide and oxypnictide superconductors References Periodic table Groups (periodic table)
Pnictogen
[ "Chemistry" ]
5,215
[ "Periodic table", "Groups (periodic table)" ]
188,183
https://en.wikipedia.org/wiki/Transactivation
In the context of gene regulation, transactivation is the increased rate of gene expression triggered either by biological processes or by artificial means, through the expression of an intermediate transactivator protein. In the context of receptor signaling, transactivation occurs when one or more receptors activate yet another; receptor transactivation may result from the crosstalk of signaling cascades or the activation of G protein–coupled receptor hetero-oligomer subunits, among other mechanisms. Natural transactivation Transactivation can be triggered by either endogenous cellular or viral proteins, also called transactivators. These protein factors act in trans (i.e., intermolecularly). HIV and HTLV are just two of the many viruses that encode transactivators to enhance viral gene expression. These transactivators can also be linked to cancer if they start interacting with, and increasing expression of, a cellular proto-oncogene. HTLV, for instance, has been associated with causing leukemia primarily through this process. Its transactivator, Tax, can interact with p40, inducing overexpression of interleukin 2, interleukin receptors, GM-CSF and the transcription factor c-Fos. HTLV infects T-cells and, via the increased expression of these stimulatory cytokines and transcription factors, leads to uncontrolled proliferation of T-cells and hence lymphoma. Artificial transactivation Artificial transactivation of a gene is achieved by inserting a transactivator gene into the genome at an appropriate location, adjoined to a special promoter region of DNA. The transactivator gene expresses a transcription factor that binds to a specific promoter region of DNA. By binding to the promoter region of a gene, the transcription factor causes that gene to be expressed. The expression of one transactivator gene can activate multiple genes, as long as they have the same specific promoter region attached. Because the expression of the transactivator gene can be controlled, transactivation can be used to turn genes on and off. If this specific promoter region is also attached to a reporter gene, expression of the reporter indicates when the transactivator is being expressed. See also Transrepression Selective glucocorticoid receptor agonist References External links Molecular biology
Transactivation
[ "Chemistry", "Biology" ]
487
[ "Biochemistry", "Molecular biology" ]
188,401
https://en.wikipedia.org/wiki/Axiomatic%20system
In mathematics and logic, an axiomatic system is any set of primitive notions and axioms from which theorems can be logically derived. A theory is a consistent, relatively-self-contained body of knowledge which usually contains an axiomatic system and all its derived theorems. An axiomatic system that is completely described is a special kind of formal system. A formal theory is an axiomatic system (usually formulated within model theory) that describes a set of sentences that is closed under logical implication. A formal proof is a complete rendition of a mathematical proof within a formal system. Properties An axiomatic system is said to be consistent if it lacks contradiction. That is, it is impossible to derive both a statement and its negation from the system's axioms. Consistency is a key requirement for most axiomatic systems, as the presence of contradiction would allow any statement to be proven (principle of explosion). In an axiomatic system, an axiom is called independent if it cannot be proven or disproven from other axioms in the system. A system is called independent if each of its underlying axioms is independent. Unlike consistency, independence is not a necessary requirement for a functioning axiomatic system — though it is usually sought after to minimize the number of axioms in the system. An axiomatic system is called complete if for every statement, either itself or its negation is derivable from the system's axioms (equivalently, every statement is capable of being proven true or false). Relative consistency Beyond consistency, relative consistency is also the mark of a worthwhile axiom system. This describes the scenario where the undefined terms of a first axiom system are provided definitions from a second, such that the axioms of the first are theorems of the second. A good example is the relative consistency of absolute geometry with respect to the theory of the real number system. Lines and points are undefined terms (also called primitive notions) in absolute geometry, but assigned meanings in the theory of real numbers in a way that is consistent with both axiom systems. Models A model for an axiomatic system is a well-defined set, which assigns meaning for the undefined terms presented in the system, in a manner that is consistent with the relations defined in the system. The existence of a model proves the consistency of a system. A model is called concrete if the meanings assigned are objects and relations from the real world, as opposed to an abstract model, which is based on other axiomatic systems. Models can also be used to show the independence of an axiom in the system. By constructing a valid model for a subsystem without a specific axiom, we show that the omitted axiom is independent if its correctness does not necessarily follow from the subsystem. Two models are said to be isomorphic if a one-to-one correspondence can be found between their elements, in a manner that preserves their relationship. An axiomatic system for which every model is isomorphic to another is called categorial (sometimes categorical). The property of categoriality (categoricity) ensures the completeness of a system; however, the converse is not true: completeness does not ensure the categoriality (categoricity) of a system, since two models can differ in properties that cannot be expressed by the semantics of the system. 
Example As an example, observe the following axiomatic system, based on first-order logic with the following countably infinitely many axioms added (these can be easily formalized as an axiom schema): ∃x1∃x2 (x1 ≠ x2) (informally, there exist two different items); ∃x1∃x2∃x3 (x1 ≠ x2 ∧ x1 ≠ x3 ∧ x2 ≠ x3) (informally, there exist three different items); and so on. Informally, this infinite set of axioms states that there are infinitely many different items. However, the concept of an infinite set cannot be defined within the system — let alone the cardinality of such a set. The system has at least two different models – one is the natural numbers (isomorphic to any other countably infinite set), and another is the real numbers (isomorphic to any other set with the cardinality of the continuum). In fact, it has an infinite number of models, one for each cardinality of an infinite set. However, the property distinguishing these models is their cardinality — a property which cannot be defined within the system. Thus the system is not categorial. However, it can be shown to be complete, for example by using the Łoś–Vaught test. Axiomatic method Stating definitions and propositions in a way such that each new term can be formally eliminated by the previously introduced terms requires primitive notions (axioms) to avoid infinite regress. This way of doing mathematics is called the axiomatic method. A common attitude towards the axiomatic method is logicism. In their book Principia Mathematica, Alfred North Whitehead and Bertrand Russell attempted to show that all mathematical theory could be reduced to some collection of axioms. More generally, the reduction of a body of propositions to a particular collection of axioms underlies the mathematician's research program. This was very prominent in the mathematics of the twentieth century, in particular in subjects based around homological algebra. The explication of the particular axioms used in a theory can help to clarify a suitable level of abstraction that the mathematician would like to work with. For example, mathematicians opted that rings need not be commutative, which differed from Emmy Noether's original formulation. Mathematicians decided to consider topological spaces more generally without the separation axiom which Felix Hausdorff originally formulated. The Zermelo–Fraenkel set theory, a result of the axiomatic method applied to set theory, allowed the "proper" formulation of set-theory problems and helped avoid the paradoxes of naïve set theory. One such problem was the continuum hypothesis. Zermelo–Fraenkel set theory, with the historically controversial axiom of choice included, is commonly abbreviated ZFC, where "C" stands for "choice". Many authors use ZF to refer to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded. Today ZFC is the standard form of axiomatic set theory and as such is the most common foundation of mathematics. History Mathematical methods developed to some degree of sophistication in ancient Egypt, Babylon, India, and China, apparently without employing the axiomatic method. Euclid of Alexandria authored the earliest extant axiomatic presentation of Euclidean geometry and number theory. His idea begins with five undeniable geometric assumptions called axioms. Then, using these axioms, he established the truth of other propositions by proofs, hence the axiomatic method. 
Many axiomatic systems were developed in the nineteenth century, including non-Euclidean geometry, the foundations of real analysis, Cantor's set theory, Frege's work on foundations, and Hilbert's 'new' use of axiomatic method as a research tool. For example, group theory was first put on an axiomatic basis towards the end of that century. Once the axioms were clarified (that inverse elements should be required, for example), the subject could proceed autonomously, without reference to the transformation group origins of those studies. Issues Not every consistent body of propositions can be captured by a describable collection of axioms. In recursion theory, a collection of axioms is called recursive if a computer program can recognize whether a given proposition in the language is a theorem. Gödel's first incompleteness theorem then tells us that there are certain consistent bodies of propositions with no recursive axiomatization. Typically, the computer can recognize the axioms and logical rules for deriving theorems, and the computer can recognize whether a proof is valid, but determining whether a proof exists for a statement can only be settled by "waiting" for the proof or disproof to be generated. The result is that one will not know which propositions are theorems, and the axiomatic method breaks down. An example of such a body of propositions is the theory of the natural numbers, which is only partially axiomatized by the Peano axioms (described below). In practice, not every proof is traced back to the axioms. At times, it is not even clear which collection of axioms a proof appeals to. For example, a number-theoretic statement might be expressible in the language of arithmetic (i.e. the language of the Peano axioms) and a proof might be given that appeals to topology or complex analysis. It might not be immediately clear whether another proof can be found that derives solely from the Peano axioms. Any more-or-less arbitrarily chosen system of axioms is the basis of some mathematical theory, but such an arbitrary axiomatic system will not necessarily be free of contradictions, and even if it is, it is not likely to shed light on anything. Philosophers of mathematics sometimes assert that mathematicians choose axioms "arbitrarily", but it is possible that although they may appear arbitrary when viewed only from the point of view of the canons of deductive logic, that appearance is due to a limitation on the purposes that deductive logic serves. Example: The Peano axiomatization of natural numbers The mathematical system of natural numbers 0, 1, 2, 3, 4, ... is based on an axiomatic system first devised by the mathematician Giuseppe Peano in 1889. He chose the axioms, in the language of a single unary function symbol S (short for "successor"), for the set of natural numbers to be: There is a natural number 0. Every natural number a has a successor, denoted by Sa. There is no natural number whose successor is 0. Distinct natural numbers have distinct successors: if a ≠ b, then Sa ≠ Sb. If a property is possessed by 0 and also by the successor of every natural number it is possessed by, then it is possessed by all natural numbers ("Induction axiom"). Axiomatization In mathematics, axiomatization is the process of taking a body of knowledge and working backwards towards its axioms. It is the formulation of a system of statements (i.e. axioms) that relate a number of primitive terms — in order that a consistent body of propositions may be derived deductively from these statements. 
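As a concrete illustration, the Peano axioms above can be rendered in a proof assistant. The following is a minimal Lean 4 sketch (the type name N is hypothetical; in Lean, axioms 3 and 4 follow from the no-confusion property generated for inductive types, and the induction axiom is the auto-generated recursor):

```lean
-- Axioms 1 and 2: zero is a natural number, and every natural number
-- has a successor. Declaring the inductive type provides both.
inductive N where
  | zero : N
  | succ : N → N

-- Axiom 3: no natural number has 0 as its successor.
theorem succ_ne_zero (a : N) : N.succ a ≠ N.zero :=
  fun h => nomatch h

-- Axiom 4: distinct natural numbers have distinct successors,
-- i.e. succ is injective.
theorem succ_inj {a b : N} (h : N.succ a = N.succ b) : a = b := by
  injection h with h'
  exact h'

-- Axiom 5 (induction) is the recursor Lean derives automatically.
#check @N.rec
```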
Thereafter, the proof of any proposition should be, in principle, traceable back to these axioms. See also Zermelo–Fraenkel set theory, an axiomatic system for set theory and today's most common foundation for mathematics. References Further reading Eric W. Weisstein, Axiomatic System, From MathWorld—A Wolfram Web Resource. Mathworld.wolfram.com & Answers.com Formal systems Methods of proof
Axiomatic system
[ "Mathematics" ]
2,234
[ "Proof theory", "Mathematical logic", "Methods of proof", "Mathematical axioms", "Formal systems" ]
188,517
https://en.wikipedia.org/wiki/Micropsia
Micropsia is a condition affecting human visual perception in which objects are perceived to be smaller than they actually are. Micropsia can be caused by optical factors (such as wearing glasses), by distortion of images in the eye (such as swelling of the cornea or changes in the shape of the retina, for example from retinal edema, macular degeneration, or central serous retinopathy), by changes in the brain (such as from traumatic brain injury, epilepsy, migraines, or drugs), and by psychological factors. Dissociative phenomena are linked with micropsia, which may be the result of brain-lateralization disturbance. Micropsia is also commonly reported when the eyes are fixating (convergence) or focusing (accommodation) at a distance closer than that of the object, in accord with Emmert's law. Specific types of micropsia include hemimicropsia, a form of micropsia that is localized to one half of the visual field and can be caused by brain lesions in one of the cerebral hemispheres. Related visual distortion conditions include macropsia, a less common condition with the reverse effect, and Alice in Wonderland syndrome, a condition that has symptoms that can include both micropsia and macropsia. Signs and symptoms Micropsia causes affected individuals to perceive objects as being smaller or more distant than they actually are. The majority of individuals with micropsia are aware that their perceptions do not mimic reality. Many can imagine the actual sizes of objects and distances between objects. It is common for patients with micropsia to be able to indicate true size and distance despite their inability to perceive objects as they actually are. One specific patient was able to indicate the dimensions of specific objects with her hands. She was also able to estimate the distances between two objects and between an object and herself. She succeeded in indicating horizontal, vertical, and 45 degree positions and did not find it difficult to search for an object in a cluttered drawer, indicating that her figure-ground discrimination was intact despite having micropsia. Individuals experiencing hemimicropsia often complain that objects in their left or right visual field appear to be shrunken or compressed. They may also have difficulty appreciating the symmetry of pictures. When drawing, patients often have a tendency to compensate for their perceptual asymmetry by drawing the left or right half of objects slightly larger than the other. In one case, a person with hemimicropsia who was asked to draw six symmetrical objects drew the left half of each picture on average 16% larger than the corresponding right half. Diagnosis EEG testing can identify medial temporal lobe epilepsy: epileptiform abnormalities, including spikes and sharp waves in the medial temporal lobe of the brain, can diagnose this condition, which can in turn be the cause of an epileptic patient's micropsia. The Amsler grid test can be used to diagnose macular degeneration. For this test, patients are asked to look at a grid, and distortions or blank spots in the patient's central field of vision can be detected. A positive diagnosis of macular degeneration may account for a patient's micropsia. A controlled size comparison task can be employed to evaluate objectively whether a person is experiencing hemimicropsia. For each trial, a pair of horizontally aligned circles is presented on a computer screen, and the person being tested is asked to decide which circle is larger. 
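A minimal sketch of such a trial loop is shown below (in Python, with hypothetical stimulus sizes, trial counts, and a simulated observer standing in for a real participant; none of these parameters come from the clinical literature):

```python
import random

def simulated_judgement(left_r, right_r, noise=3.0, left_bias=1.0):
    """Noisy judgement of which circle looks larger.

    left_bias < 1 shrinks percepts in the left hemifield,
    mimicking left-sided hemimicropsia.
    """
    left_percept = left_r * left_bias + random.gauss(0, noise)
    right_percept = right_r + random.gauss(0, noise)
    return "left" if left_percept > right_percept else "right"

def run_session(n_trials=400, base_r=50.0, left_bias=1.0):
    # Track [errors, trials] per similarity level (% size difference).
    errors = {2: [0, 0], 5: [0, 0], 10: [0, 0], 20: [0, 0]}
    for _ in range(n_trials):
        pct = random.choice(list(errors))
        larger = random.choice(["left", "right"])
        left_r = base_r * (1 + pct / 100) if larger == "left" else base_r
        right_r = base_r * (1 + pct / 100) if larger == "right" else base_r
        answer = simulated_judgement(left_r, right_r, left_bias=left_bias)
        errors[pct][0] += answer != larger
        errors[pct][1] += 1
    for pct, (wrong, total) in sorted(errors.items()):
        print(f"{pct:>2}% size difference: {wrong / total:.0%} errors")

run_session()                 # healthy observer: errors fall as the difference grows
run_session(left_bias=0.85)  # hemimicropsia-like bias inflates and skews the errors
```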
After a set of trials, the overall pattern of responses should display a normal distance effect, where the more similar the two circles, the higher the number of errors. This test can effectively diagnose micropsia and confirm which half of the visual field is distorted. Due to the large range of causes that lead to micropsia, diagnosis varies among cases. Computed tomography (CT) and magnetic resonance imaging (MRI) may find lesions and hypodense areas in the temporal and occipital lobes. MRI and CT techniques are able to rule out lesions as the cause for micropsia, but are not sufficient to diagnose the most common causes. Definition Micropsia is the most common visual distortion, or dysmetropsia. It is categorized as an illusion in the positive phenomena grouping of abnormal visual distortions. Convergence-accommodative micropsia is a physiologic phenomenon in which an object appears smaller as it approaches the subject. Psychogenic micropsia can present itself in individuals with certain psychiatric disorders. Retinal micropsia is characterized by an increase in the distance between retinal photoreceptors and is associated with decreased visual acuity. Cerebral micropsia is a rare form of micropsia that can arise in children with chronic migraines. Hemimicropsia is a type of cerebral micropsia that occurs within one half of the visual field. Differential diagnosis Of all of the visual distortions, micropsia has the largest variety of causes. Migraines Micropsia can occur during the aura phase of a migraine attack, a phase that often precedes the onset of a headache and is commonly characterized by visual disturbances. Micropsia, along with hemianopsia, quadrantopsia, scotoma, phosphene, teicopsia, metamorphopsia, macropsia, teleopsia, diplopia, dischromatopsia, and hallucination disturbances, is a type of aura that occurs immediately before or during the onset of a migraine headache. The symptom usually occurs less than thirty minutes before the migraine headache begins and lasts for five to twenty minutes. Only 10-20% of children with migraine headaches experience auras. Visual auras such as micropsia are most common in children with migraines. Seizures The most frequent neurological origin of micropsia is temporal lobe seizures. These seizures affect the entire visual field of the patient. More rarely, micropsia can be part of purely visual seizures, which affect only one half of the visual field and are accompanied by other cerebral visual disturbances. The most common cause of seizures which produce perceptual disturbances such as micropsia and macropsia is medial temporal lobe epilepsy, in which the seizures originate in the amygdala-hippocampus complex. Micropsia often occurs as an aura signalling a seizure in patients with medial temporal lobe epilepsy. Most auras last for a very short period, ranging from a few seconds to a few minutes. Drug use Micropsia can result from the action of mescaline and other hallucinogenic drugs. Although drug-induced changes in perception usually subside as the chemical leaves the body, long-term cocaine use can result in the chronic residual effect of micropsia. Micropsia can be a symptom of Hallucinogen Persisting Perception Disorder, or HPPD, in which a person can experience hallucinogenic flashbacks long after ingesting a hallucinogen. A majority of these flashbacks are visual distortions which include micropsia, and 15-80% of hallucinogen users may experience these flashbacks. 
Micropsia can also be a rare side effect of zolpidem, a prescription medication used to temporarily treat insomnia. Psychological factors Psychiatric patients may experience micropsia in an attempt to distance themselves from situations involving conflict. Micropsia may also be a symptom of psychological conditions in which patients visualize people as small objects as a way to control others in response to their insecurities and feelings of weakness. In some adults who experienced loneliness as children, micropsia may arise as a mirror of prior feelings of separation from people and objects. Epstein-Barr virus infection Micropsia can be caused by swelling of the cornea due to infection by the Epstein-Barr virus (EBV) and can therefore present as an initial symptom of EBV mononucleosis, a disease caused by Epstein-Barr virus infection. Retinal edema Micropsia can result from retinal edema causing a dislocation of the receptor cells. Photoreceptor misalignment seems to occur following surgical re-attachment for macula-off rhegmatogenous retinal detachment. After surgery, patients may experience micropsia as a result of larger photoreceptor separation by edematous fluid. Macular degeneration Macular degeneration typically produces micropsia due to the swelling or bulging of the macula, an oval-shaped yellow spot near the center of the retina in the human eye. The main factors leading to this disease are age, smoking, heredity, and obesity. Some studies show that consuming spinach or collard greens five times a week cuts the risk of macular degeneration by 43%. Central serous chorioretinopathy CSCR is a disease in which a serous detachment of the neurosensory retina occurs over an area of leakage from the choriocapillaris through the retinal pigment epithelium (RPE). The most common symptoms that result from the disease are a deterioration of visual acuity and micropsia. Brain lesions Micropsia is sometimes seen in individuals with brain infarctions. The damaged side of the brain conveys size information that conflicts with the size information conveyed by the intact side, and the resulting bias toward the smaller percept causes the individual to experience micropsia. Lesions affecting other parts of the extracerebral visual pathways can also cause micropsia. Treatment Treatment varies for micropsia due to the large number of different causes for the condition. Treatments involving the occlusion of one eye and the use of a prism fitted over an eyeglass lens have both been shown to provide relief from micropsia. Micropsia that is induced by macular degeneration can be treated in several ways. A study called AREDS (age-related eye disease study) determined that taking dietary supplements containing high-dose antioxidants and zinc produced significant benefits with regard to disease progression. This study was the first ever to prove that dietary supplements can alter the natural progression and complications of a disease state. Laser treatments also look promising but are still in clinical stages. Epidemiology Episodes of micropsia or macropsia occur in 9% of adolescents. 10-35% of those with migraines experience auras, with 88% of these patients experiencing both visual auras (which include micropsia) and neurological auras. Micropsia seems to be slightly more common in boys than in girls among children who experience migraines. 
Approximately 80% of temporal lobe seizures produce auras that may lead to micropsia or macropsia. They are a common feature of simple partial seizures and usually precede complex partial seizures of temporal lobe origin. Central serous chorioretinopathy (CSCR), which can produce micropsia, predominantly affects persons between the ages of 20 and 50. Men appear to be affected more than women by a factor of almost 3 to 1. Society and culture Comparison with Alice's Adventures in Wonderland Alice in Wonderland syndrome, a neurological condition associated with both micropsia and macropsia, is named after Lewis Carroll's famous 19th century novel Alice's Adventures in Wonderland. In the story, the title character, Alice, experiences numerous situations similar to those of micropsia and macropsia. Speculation has arisen that Carroll may have written the story using his own direct experience with episodes of micropsia resulting from the numerous migraines he was known to have. It has also been suggested that Carroll may have had temporal lobe epilepsy. Comparison with Gulliver's Travels Micropsia has also been related to Jonathan Swift's novel Gulliver's Travels. It has been referred to as "Lilliput sight" and "Lilliputian hallucination," a term coined by the French physician Raoul Leroy in 1909, based on the small people that inhabited the island of Lilliput in the novel. Research Current experimental evidence focuses on the involvement of the occipitotemporal pathway in the perceptual equivalence of objects both across translations of retinal position and across size modifications. Recent evidence points to this pathway as a mediator for an individual's perception of size. Even further, numerous cases suggest that size perception may be dissociated from other aspects of visual perception such as color and movement. However, more research is needed to correctly relate the condition to defined physiological conditions. Current research is being done on macular degeneration which could help prevent cases of micropsia. A variety of drugs that block vascular endothelial growth factors (VEGFs) are being evaluated as a treatment option. These treatments for the first time have produced actual improvements in vision, rather than simply delaying or arresting the continued loss of vision characteristic of macular degeneration. A number of surgical treatments are also being investigated for macular degeneration lesions that may not qualify for laser treatment, including macular translocation to a healthier area of the eye, displacement of submacular blood using gas, and removing membranes by surgery. See also Alice in Wonderland syndrome Convergence micropsia Dysmetropsia Macropsia References External links Medical Dictionary: Micropsia Web-Md: Migraines in Children Neurological disorders Optical illusions Eye diseases Visual disturbances and blindness
Micropsia
[ "Physics" ]
2,837
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
188,688
https://en.wikipedia.org/wiki/Active%20site
In biology and biochemistry, the active site is the region of an enzyme where substrate molecules bind and undergo a chemical reaction. The active site consists of amino acid residues that form temporary bonds with the substrate, the binding site, and residues that catalyse a reaction of that substrate, the catalytic site. Although the active site occupies only ~10–20% of the volume of an enzyme, it is the most important part as it directly catalyzes the chemical reaction. It usually consists of three to four amino acids, while other amino acids within the protein are required to maintain the tertiary structure of the enzyme. Each active site has evolved to be optimised for binding a particular substrate and catalysing a particular reaction, resulting in high specificity. This specificity is determined by the arrangement of amino acids within the active site and the structure of the substrates. Sometimes enzymes also need to bind with some cofactors to fulfil their function. The active site is usually a groove or pocket of the enzyme which can be located in a deep tunnel within the enzyme, or between the interfaces of multimeric enzymes. An active site can catalyse a reaction repeatedly as residues are not altered at the end of the reaction (they may change during the reaction, but are regenerated by the end). This process is achieved by lowering the activation energy of the reaction, so more substrates have enough energy to undergo reaction. Binding site Usually, an enzyme molecule has only one active site, and the active site fits with one specific type of substrate. An active site contains a binding site that binds the substrate and orients it for catalysis. The orientation of the substrate and the close proximity between it and the active site are so important that in some cases the enzyme can still function properly even though all other parts are mutated and lose function. Initially, the interaction between the active site and the substrate is non-covalent and transient. There are four important types of interaction that hold the substrate in a defined orientation and form an enzyme-substrate complex (ES complex): hydrogen bonds, van der Waals interactions, hydrophobic interactions and electrostatic force interactions. The charge distribution on the substrate and active site must be complementary, which means all positive and negative charges must be cancelled out. Otherwise, there will be a repulsive force pushing them apart. The active site usually contains non-polar amino acids, although sometimes polar amino acids may also occur. The binding of substrate to the binding site requires at least three contact points in order to achieve stereo-, regio-, and enantioselectivity. For example, alcohol dehydrogenase, which catalyses the transfer of a hydride ion from ethanol to NAD+, interacts with the substrate methyl group, hydroxyl group and the pro-(R) hydrogen that will be abstracted during the reaction. In order to exert their function, enzymes need to assume their correct protein fold (native fold) and tertiary structure. To maintain this defined three-dimensional structure, proteins rely on various types of interactions between their amino acid residues. If these interactions are interfered with, for example by extreme pH values, high temperature or high ion concentrations, this will cause the enzyme to denature and lose its catalytic activity. A tighter fit between an active site and the substrate molecule is believed to increase the efficiency of a reaction. 
If the tightness between the active site of DNA polymerase and its substrate is increased, the fidelity (the rate of correct DNA replication) will also increase. Most enzymes have deeply buried active sites, which can be accessed by a substrate via access channels. There are three proposed models of how enzymes fit their specific substrate: the lock and key model, the induced fit model, and the conformational selection model. The latter two are not mutually exclusive: conformational selection can be followed by a change in the enzyme's shape. Additionally, a protein may not wholly follow either model. Amino acids at the binding site of ubiquitin generally follow the induced fit model, whereas the rest of the protein generally adheres to conformational selection. Factors such as temperature likely influence the pathway taken during binding, with higher temperatures predicted to increase the importance of conformational selection and decrease that of induced fit. Lock and key hypothesis This concept was suggested by the 19th-century chemist Emil Fischer. He proposed that the active site and substrate are two stable structures that fit perfectly without any further modification, just like a key fits into a lock. If one substrate perfectly binds to its active site, the interactions between them will be strongest, resulting in high catalytic efficiency. As time went by, limitations of this model started to appear. For example, the competitive enzyme inhibitor methylglucoside can bind tightly to the active site of 4-alpha-glucanotransferase and perfectly fits into it. However, 4-alpha-glucanotransferase is not active on methylglucoside and no glycosyl transfer occurs. The Lock and Key hypothesis cannot explain this, as it would predict a high efficiency of methylglucoside glycosyl transfer due to its tight binding. Apart from competitive inhibition, this theory cannot explain the mechanism of action of non-competitive inhibitors either, as they do not bind to the active site but nevertheless influence catalytic activity. Induced fit hypothesis Daniel Koshland's theory of enzyme-substrate binding is that the active site and the binding portion of the substrate are not exactly complementary. The induced fit model is a development of the lock-and-key model and assumes that an active site is flexible and changes shape until the substrate is completely bound. This model is similar to a person wearing a glove: the glove changes shape to fit the hand. The enzyme initially has a conformation that attracts its substrate. The enzyme surface is flexible and only the correct substrate can induce interactions leading to catalysis. Conformational changes may then occur as the substrate is bound. After the reaction, products move away from the enzyme and the active site returns to its initial shape. This hypothesis is supported by the observation that entire protein domains can move several nanometers during catalysis. This movement of the protein surface can create microenvironments that favour catalysis. Conformational selection hypothesis This model suggests that enzymes exist in a variety of conformations, only some of which are capable of binding to a substrate. When a substrate is bound to the protein, the equilibrium in the conformational ensemble shifts towards those able to bind ligands (as enzymes with bound substrates are removed from the equilibrium between the free conformations). 
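These two binding pathways are often contrasted with the following kinetic schemes (a standard textbook depiction rather than notation from this article; E* denotes the altered, binding-competent conformation):

```latex
\begin{aligned}
\text{Induced fit:} &\quad E + S \rightleftharpoons ES \rightleftharpoons E^{*}S\\
\text{Conformational selection:} &\quad E \rightleftharpoons E^{*}, \qquad E^{*} + S \rightleftharpoons E^{*}S
\end{aligned}
```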
Types of non-covalent interactions Electrostatic interaction: In an aqueous environment, oppositely charged groups in amino acid side chains within the active site and substrates attract each other, which is termed electrostatic interaction. For example, when a carboxylic acid (R-COOH) dissociates into RCOO− and H+ ions, the COO− group will attract positively charged groups such as the protonated guanidinium side chain of arginine. Hydrogen bond: A hydrogen bond is a specific type of dipole-dipole interaction between a partially positive hydrogen atom and a partially negative electron donor that contains a pair of electrons, such as oxygen, fluorine and nitrogen. The strength of a hydrogen bond depends on the chemical nature and geometric arrangement of each group. Van der Waals force: Van der Waals forces form between groups due to transient uneven electron distribution in each group. If all electrons are concentrated at one pole of the group, this end will be negative, while the other end will be positive. Although the individual force is weak, since the total number of interactions between the active site and substrate is massive, their sum is significant. Hydrophobic interaction: Non-polar hydrophobic groups tend to aggregate together in an aqueous environment and try to escape from the polar solvent. These hydrophobic groups usually have long carbon chains and do not react with water molecules. When dissolving in water, a protein molecule will curl up into a ball-like shape, leaving hydrophilic groups on the outside while hydrophobic groups are deeply buried in the centre. Catalytic site Once the substrate is bound and oriented to the active site, catalysis can begin. The residues of the catalytic site are typically very close to the binding site, and some residues can have dual roles in both binding and catalysis. Catalytic residues of the site interact with the substrate to lower the activation energy of a reaction and thereby make it proceed faster. They do this by a number of different mechanisms including the approximation of the reactants, nucleophilic/electrophilic catalysis and acid/base catalysis. These mechanisms will be explained below. Mechanisms involved in the catalytic process Approximation of the reactant During an enzyme-catalysed reaction, the substrate and active site are brought together in close proximity. This approximation has various purposes. Firstly, when substrates bind within the active site, the effective concentration of the substrate increases significantly above that in solution. This means the number of substrate molecules involved in the reaction is also increased. This process also reduces the desolvation energy required for the reaction to occur. In solution, substrate molecules are surrounded by solvent molecules, and energy is required for enzyme molecules to replace them and make contact with the substrate. Since bulk molecules can be excluded from the active site, this energy cost can be minimised. Next, the active site is arranged to reorient the substrate to reduce the activation energy of the reaction. The alignment of the substrate, after binding, is locked in a high-energy state and can proceed to the next step. In addition, this binding is favoured by entropy, as the energy cost associated with the solution reaction is largely eliminated since the solvent cannot enter the active site. In the end, the active site may manipulate the molecular orbitals of the substrate into a suitable orientation to reduce activation energy. 
The electrostatic states of substrate and active site must be complementary to each other. A polarized, negatively charged amino acid side chain will repel an uncharged substrate. But if the transition state involves the formation of an ion centre, then the side chain will now produce a favourable interaction. Covalent catalysis Many enzymes, including serine proteases, cysteine proteases, protein kinases and phosphatases, evolved to form transient covalent bonds between themselves and their substrates to lower the activation energy and allow the reaction to occur. This process can be divided into two steps: formation and breakdown. The former is the rate-limiting step, while the latter is needed to regenerate the intact enzyme. Nucleophilic catalysis: This process involves the donation of electrons from the enzyme's nucleophile to a substrate to form a covalent bond between them during the transition state. The strength of this interaction depends on two aspects: the ability of the nucleophilic group to donate electrons and the ability of the electrophile to accept them. The former is mainly affected by the basicity (the ability to donate electron pairs) of the species, while the latter relates to its pKa. Both groups are also affected by their chemical properties such as polarizability, electronegativity and ionization potential. Amino acids that can form nucleophiles include serine, cysteine, aspartate and glutamine. Electrophilic catalysis: The mechanism behind this process is exactly the same as nucleophilic catalysis, except that now amino acids in the active site act as electrophiles while substrates are nucleophiles. This reaction usually requires cofactors, as the amino acid side chains are not strong enough in attracting electrons. Metal ions Metal ions have multiple roles during the reaction. Firstly, they can bind to negatively charged substrate groups so that these will not repel electron pairs from the active site's nucleophilic groups. They can attract negatively charged electrons to increase electrophilicity. They can also bridge between the active site and the substrate. Finally, they may change the conformational structure of the substrate to favour the reaction. Acid/base catalysis In some reactions, protons and hydroxide may directly act as acid and base in terms of specific acid and specific base catalysis. But more often, groups in the substrate and active site act as Brønsted–Lowry acids and bases. This is called general acid and general base catalysis. The easiest way to distinguish between them is to check whether the reaction rate is determined by the concentrations of the general acid and base. If the answer is yes, then the reaction is of the general type. Since most enzymes have an optimum pH of 6 to 7, the amino acids in the side chain usually have a pKa of 4~10. Candidates include aspartate, glutamate, histidine, and cysteine. These acids and bases can stabilise the nucleophile or electrophile formed during the catalysis by providing positive and negative charges. Conformational distortion Quantitative studies of enzymatic reactions have often found that the acceleration of chemical reaction speed cannot be fully explained by existing theories like approximation, acid/base catalysis and electrophile/nucleophile catalysis. There is also an obvious paradox: in a reversible enzymatic reaction, if the active site perfectly fits the substrates, then the backward reaction will be slowed, since the products cannot fit perfectly into the active site. 
The idea of conformational distortion was therefore introduced; it argues that both the active site and the substrate can undergo conformational changes to fit each other throughout the reaction. Preorganised active site complementarity to the transition state This theory is a little similar to the Lock and Key theory, but here the active site is preprogrammed to bind perfectly to the substrate in its transition state rather than in its ground state. The formation of the transition state in solution requires a large amount of energy to relocate solvent molecules, and the reaction is slowed. So the active site can substitute for solvent molecules and surround the substrates to minimize the counterproductive effect imposed by the solution. The presence of charged groups within the active site will attract substrates and ensure electrostatic complementarity. Examples of enzyme catalysis mechanisms In reality, most enzyme mechanisms involve a combination of several different types of catalysis. Glutathione reductase The role of glutathione (GSH) is to remove accumulated reactive oxygen species which may damage cells. During this process, its thiol side chain is oxidised and two glutathione molecules are connected by a disulphide bond to form a dimer (GSSG). In order to regenerate glutathione, the disulphide bond has to be broken. In human cells, this is done by glutathione reductase (GR). Glutathione reductase is a dimer that contains two identical subunits. It requires one NADPH and one FAD as cofactors. The active site is located in the linkage between the two subunits. The NADPH is involved in the generation of FADH−. In the active site, there are two cysteine residues besides the FAD cofactor, and these are used to break the disulphide bond during the catalytic reaction. NADPH is bound by three positively charged residues: Arg-218, His-219 and Arg-224. The catalytic process starts when the FAD is reduced by NADPH to accept one electron and form FADH−. It then attacks the disulphide bond formed between the two cysteine residues, forming one SH bond and a single S− group. This S− group will act as a nucleophile to attack the disulphide bond in the oxidised glutathione (GSSG), breaking it and forming a cysteine-SG complex. The first SG− anion is released and then receives one proton from the adjacent SH group, forming the first glutathione monomer. Next, the adjacent S− group attacks the disulphide bond in the cysteine-SG complex and releases the second SG− anion. This receives one proton in solution and forms the second glutathione monomer. Chymotrypsin Chymotrypsin is a serine endopeptidase that is present in pancreatic juice and helps the hydrolysis of proteins and peptides. It catalyzes the hydrolysis of peptide bonds in L-isomers of tyrosine, phenylalanine, and tryptophan. In the active site of this enzyme, three amino acid residues work together to form a catalytic triad which makes up the catalytic site. In chymotrypsin, these residues are Ser-195, His-57 and Asp-102. The mechanism of chymotrypsin can be divided into two phases. First, Ser-195 nucleophilically attacks the peptide-bond carbon in the substrate to form a tetrahedral intermediate. The nucleophilicity of Ser-195 is enhanced by His-57, which abstracts a proton from Ser-195 and is in turn stabilised by the negatively charged carboxylate group (RCOO−) in Asp-102. Furthermore, the tetrahedral oxyanion intermediate generated in this step is stabilised by hydrogen bonds from Ser-195 and Gly-193. 
In the second stage, the R'NH group is protonated by His-57 to form R'NH2, which leaves the intermediate, leaving behind the acylated Ser-195. His-57 then acts as a base again to abstract one proton from a water molecule. The resulting hydroxide anion nucleophilically attacks the acyl-enzyme complex to form a second tetrahedral oxyanion intermediate, which is once again stabilised by hydrogen bonds. In the end, Ser-195 leaves the tetrahedral intermediate, breaking the C–O bond that connected the enzyme to the peptide substrate. A proton is transferred back to Ser-195 through His-57, so that all three amino acids return to their initial state. Unbinding Substrate unbinding is influenced by various factors. Larger ligands generally stay in the active site longer, as do those with more rotatable bonds (although this may be a side effect of size). When the solvent is excluded from the active site, less flexible proteins result in longer residence times. More hydrogen bonds shielded from the solvent also decrease unbinding. Cofactors Enzymes can use cofactors as 'helper molecules'. Coenzymes are those non-protein molecules that bind to enzymes to help them fulfil their function. Mostly they are connected to the active site by non-covalent interactions such as hydrogen bonds or hydrophobic interactions, but sometimes a covalent bond can also form between them. For example, the heme in cytochrome c is bound to the protein through thioether bonds. On some occasions, coenzymes leave the enzyme after the reaction is finished; otherwise, they remain permanently bound to the enzyme. Coenzyme is a broad concept which includes metal ions, various vitamins and ATP. An enzyme that requires a coenzyme but does not have one bound is called an apoenzyme; on its own it cannot catalyze the reaction properly. Only when its cofactor binds at the active site to form the holoenzyme does it work properly. One example of a coenzyme is flavin. It contains a distinct conjugated isoalloxazine ring system. Flavin has multiple redox states and can be used in processes that involve the transfer of one or two electrons. It can act as an electron acceptor in reactions such as the oxidation of NADH to NAD+, accepting two electrons to form 1,5-dihydroflavin. Alternatively, it can form a semiquinone (free radical) by accepting one electron, and then convert to the fully reduced form by the addition of a second electron. This property allows it to be used in one-electron oxidation processes. Inhibitors Inhibitors disrupt the interaction between enzyme and substrate, slowing down the rate of a reaction. There are different types of inhibitor, including both reversible and irreversible forms. Competitive inhibitors are inhibitors that target only free enzyme molecules. They compete with substrates for the free enzyme and can be overcome by increasing the substrate concentration. They act by two mechanisms. Competitive inhibitors usually have structural similarities to the substrates and/or the ES complex. As a result, they can fit into the active site and trigger favourable interactions to fill the space and block substrates from entry. They can also induce transient conformational changes in the active site so that substrates cannot fit perfectly into it. After a short period of time, competitive inhibitors drop off and leave the enzyme intact. Inhibitors are classified as non-competitive inhibitors when they bind both the free enzyme and the ES complex; the kinetic consequences of these two inhibition modes are sketched below.
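To make the distinction concrete, here is a minimal numerical sketch of how the two reversible inhibition modes alter Michaelis–Menten kinetics. The rate laws are the standard textbook ones; the parameter values (Vmax, Km, Ki and the concentrations) are purely illustrative and not taken from the text.

# Michaelis-Menten rates in the presence of a reversible inhibitor.
# v_competitive:    inhibitor binds only the free enzyme  -> apparent Km increases
# v_noncompetitive: inhibitor binds E and ES equally      -> apparent Vmax decreases

def v_uninhibited(S, Vmax=100.0, Km=5.0):
    return Vmax * S / (Km + S)

def v_competitive(S, I, Ki, Vmax=100.0, Km=5.0):
    return Vmax * S / (Km * (1 + I / Ki) + S)

def v_noncompetitive(S, I, Ki, Vmax=100.0, Km=5.0):
    return (Vmax / (1 + I / Ki)) * S / (Km + S)

for S in (1.0, 10.0, 100.0, 1000.0):   # raising [S] rescues competitive inhibition only
    print(S,
          round(v_uninhibited(S), 1),
          round(v_competitive(S, I=10.0, Ki=2.0), 1),
          round(v_noncompetitive(S, I=10.0, Ki=2.0), 1))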
Since they do not compete with substrates for the active site, they cannot be overcome simply by increasing the substrate concentration. They usually bind to a different site on the enzyme and alter the three-dimensional structure of the active site to block substrates from entering or leaving the enzyme. Irreversible inhibitors are similar to competitive inhibitors in that both bind to the active site. However, irreversible inhibitors form irreversible covalent bonds with the amino acid residues in the active site and never leave. Therefore, the active site is occupied and the substrate cannot enter. Occasionally the inhibitor will leave, but the catalytic site is permanently altered in shape. These inhibitors usually contain electrophilic groups such as halogen substituents and epoxides. As time goes by, more and more enzyme molecules become bound by irreversible inhibitors and can no longer function. Examples of competitive and irreversible enzyme inhibitors Competitive inhibitor: HIV protease inhibitor HIV protease inhibitors are used to treat patients infected with the AIDS virus (HIV) by preventing viral replication. HIV protease is used by the virus to cleave the Gag-Pol polyprotein into three smaller proteins that are responsible for virion assembly, packaging and maturation. This enzyme targets a specific phenylalanine–proline cleavage site within the target protein. If HIV protease is switched off, the virion particle loses function and cannot infect patients. Since the enzyme is essential for viral replication and is absent in healthy humans, it is an ideal target for drug development. HIV protease belongs to the aspartic protease family and shares its general mechanism. First, an aspartate residue activates a water molecule and turns it into a nucleophile. The activated water then attacks the carbonyl carbon within the peptide bond (NH–CO) to form a tetrahedral intermediate. The nitrogen atom within the intermediate receives a proton, forming an amine group, and the subsequent rearrangement breaks the bond between it and the intermediate, forming two products. Inhibitors usually contain a nonhydrolyzable hydroxyethylene or hydroxyethylamine group that mimics the tetrahedral intermediate. Since they share a similar structure and electrostatic arrangement with the transition state of the substrates, they can still fit into the active site but cannot be broken down, so hydrolysis cannot occur. Non-competitive inhibitor: Strychnine Strychnine is a neurotoxin that causes death by affecting the nerves that control muscular contraction, causing difficulty in breathing. The impulse is transmitted across the synapse by a neurotransmitter called acetylcholine, which is released into the synapse between nerve cells and binds to receptors on the postsynaptic cell. An action potential is then generated and transmitted through the postsynaptic cell to start a new cycle. Glycine can inhibit the activity of neurotransmitter receptors, so a larger amount of acetylcholine is required to trigger an action potential. This ensures that the generation of nerve impulses is tightly controlled. However, this control breaks down when strychnine is added. It inhibits glycine receptors (a chloride channel), so that a much lower neurotransmitter concentration can trigger an action potential. Nerves then transmit signals constantly, causing excessive muscular contraction that leads to asphyxiation and death. Irreversible inhibitor: Diisopropyl fluorophosphate Diisopropyl fluorophosphate (DIFP) is an irreversible inhibitor that blocks the action of serine proteases.
When it binds to the enzyme a nucleophilic substitution reaction occurs and releases one hydrogen fluoride molecule. The OH group in the active site acts as a nucleophile to attack the phosphorus in DIFP and form a tetrahedral intermediate and release a proton. Then the P-F bond is broken, one electron is transferred to the F atom and it leaves the intermediate as F− anion. It combines with a proton in solution to form one HF molecule. A covalent bond formed between the active site and DIFP, so the serine side chain is no longer available to the substrate. In drug discovery Identification of active sites is crucial in the process of drug discovery. The 3-D structure of the enzyme is analysed to identify active site residues and design drugs which can fit into them. Proteolytic enzymes are targets for some drugs, such as protease inhibitors, which include drugs against AIDS and hypertension. These protease inhibitors bind to an enzyme's active site and block interaction with natural substrates. An important factor in drug design is the strength of binding between the active site and an enzyme inhibitor. If the enzyme found in bacteria is significantly different from the human enzyme then an inhibitor can be designed against that particular bacterium without harming the human enzyme. If one kind of enzyme is only present in one kind of organism, its inhibitor can be used to specifically wipe them out. Active sites can be mapped to aid the design of new drugs such as enzyme inhibitors. This involves the description of the size of an active site and the number and properties of sub-sites, such as details of the binding interaction. Modern database technology called CPASS (Comparison of Protein Active Site Structures) however allows the comparison of active sites in more detail and the finding of structural similarity using software. Application of enzyme inhibitors Allosteric sites An allosteric site is a site on an enzyme, unrelated to its active site, which can bind an effector molecule. This interaction is another mechanism of enzyme regulation. Allosteric modification usually happens in proteins with more than one subunit. Allosteric interactions are often present in metabolic pathways and are beneficial in that they allow one step of a reaction to regulate another step. They allow an enzyme to have a range of molecular interactions, other than the highly specific active site. See also Hugh Stott Taylor SitEx References Further reading Alan Fersht, Structure and Mechanism in Protein Science: A Guide to Enzyme Catalysis and Protein Folding. W. H. Freeman, 1998. Bugg, T. Introduction to Enzyme and Coenzyme Chemistry. (2nd edition), Blackwell Publishing Limited, 2004. . Enzymes Catalysis Biochemistry terminology
Active site
[ "Chemistry", "Biology" ]
5,580
[ "Catalysis", "Chemical kinetics", "Biochemistry", "Biochemistry terminology" ]
188,932
https://en.wikipedia.org/wiki/Calabi%E2%80%93Yau%20manifold
In algebraic and differential geometry, a Calabi–Yau manifold, also known as a Calabi–Yau space, is a particular type of manifold which has certain properties, such as Ricci flatness, yielding applications in theoretical physics. Particularly in superstring theory, the extra dimensions of spacetime are sometimes conjectured to take the form of a 6-dimensional Calabi–Yau manifold, which led to the idea of mirror symmetry. The name, coined by Candelas, Horowitz, Strominger and Witten in 1985, honours Eugenio Calabi, who first conjectured that such surfaces might exist, and Shing-Tung Yau, who proved the Calabi conjecture. Calabi–Yau manifolds are complex manifolds that are generalizations of K3 surfaces in any number of complex dimensions (i.e. any even number of real dimensions). They were originally defined as compact Kähler manifolds with a vanishing first Chern class and a Ricci-flat metric, though many other similar but inequivalent definitions are sometimes used. Definitions The motivational definition given by Shing-Tung Yau is of a compact Kähler manifold with a vanishing first Chern class that is also Ricci flat. There are many other definitions of a Calabi–Yau manifold used by different authors, some inequivalent. This section summarizes some of the more common definitions and the relations between them. A Calabi–Yau n-fold or Calabi–Yau manifold of (complex) dimension n is sometimes defined as a compact n-dimensional Kähler manifold M satisfying one of the following equivalent conditions: The canonical bundle of M is trivial. M has a holomorphic n-form that vanishes nowhere. The structure group of the tangent bundle of M can be reduced from U(n), the unitary group, to SU(n), the special unitary group. M has a Kähler metric with global holonomy contained in SU(n). These conditions imply that the first integral Chern class of M vanishes. Nevertheless, the converse is not true. The simplest examples where this happens are hyperelliptic surfaces, finite quotients of a complex torus of complex dimension 2, which have vanishing first integral Chern class but non-trivial canonical bundle. For a compact n-dimensional Kähler manifold M the following conditions are equivalent to each other, but are weaker than the conditions above, though they are sometimes used as the definition of a Calabi–Yau manifold: M has vanishing first real Chern class. M has a Kähler metric with vanishing Ricci curvature. M has a Kähler metric with local holonomy contained in SU(n). A positive power of the canonical bundle of M is trivial. M has a finite cover that has trivial canonical bundle. M has a finite cover that is a product of a torus and a simply connected manifold with trivial canonical bundle. If a compact Kähler manifold is simply connected, then the weak definition above is equivalent to the stronger definition. Enriques surfaces give examples of complex manifolds that have Ricci-flat metrics, but their canonical bundles are not trivial, so they are Calabi–Yau manifolds according to the second but not the first definition above. On the other hand, their double covers are Calabi–Yau manifolds for both definitions (in fact, K3 surfaces). By far the hardest part of proving the equivalences between the various properties above is proving the existence of Ricci-flat metrics. This follows from Yau's proof of the Calabi conjecture, which implies that a compact Kähler manifold with a vanishing first real Chern class has a Kähler metric in the same class with vanishing Ricci curvature. (The class of a Kähler metric is the cohomology class of its associated 2-form.) Calabi showed such a metric is unique; this statement is summarised symbolically below.
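The existence and uniqueness statement just described (the Calabi conjecture, proved by Yau) can be written compactly; this is a standard formulation, included here for orientation:

\[
c_1(M) = 0 \quad\Longrightarrow\quad \text{every Kähler class } [\omega] \text{ on } M \text{ contains a unique Kähler form } \tilde{\omega} \text{ with } \operatorname{Ric}(\tilde{\omega}) = 0 ,
\]

i.e. for a compact Kähler manifold with vanishing first real Chern class, each Kähler class contains exactly one Ricci-flat Kähler metric.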
There are many other inequivalent definitions of Calabi–Yau manifolds that are sometimes used, which differ in the following ways (among others): The first Chern class may vanish as an integral class or as a real class. Most definitions assert that Calabi–Yau manifolds are compact, but some allow them to be non-compact. In the generalization to non-compact manifolds, the difference must vanish asymptotically. Here, is the Kähler form associated with the Kähler metric, . Some definitions put restrictions on the fundamental group of a Calabi–Yau manifold, such as demanding that it be finite or trivial. Any Calabi–Yau manifold has a finite cover that is the product of a torus and a simply-connected Calabi–Yau manifold. Some definitions require that the holonomy be exactly equal to rather than a subgroup of it, which implies that the Hodge numbers vanish for . Abelian surfaces have a Ricci flat metric with holonomy strictly smaller than (in fact trivial) so are not Calabi–Yau manifolds according to such definitions. Most definitions assume that a Calabi–Yau manifold has a Riemannian metric, but some treat them as complex manifolds without a metric. Most definitions assume the manifold is non-singular, but some allow mild singularities. While the Chern class fails to be well-defined for singular Calabi–Yau's, the canonical bundle and canonical class may still be defined if all the singularities are Gorenstein, and so may be used to extend the definition of a smooth Calabi–Yau manifold to a possibly singular Calabi–Yau variety. Examples The fundamental fact is that any smooth algebraic variety embedded in a projective space is a Kähler manifold, because there is a natural Fubini–Study metric on a projective space which one can restrict to the algebraic variety. By definition, if ω is the Kähler metric on the algebraic variety X and the canonical bundle KX is trivial, then X is Calabi–Yau. Moreover, there is unique Kähler metric ω on X such that [ω0] = [ω] ∈ H2(X,R), a fact which was conjectured by Eugenio Calabi and proved by Shing-Tung Yau (see Calabi conjecture). Calabi–Yau algebraic curves In one complex dimension, the only compact examples are tori, which form a one-parameter family. The Ricci-flat metric on a torus is actually a flat metric, so that the holonomy is the trivial group SU(1). A one-dimensional Calabi–Yau manifold is a complex elliptic curve, and in particular, algebraic. CY algebraic surfaces In two complex dimensions, the K3 surfaces furnish the only compact simply connected Calabi–Yau manifolds. These can be constructed as quartic surfaces in , such as the complex algebraic variety defined by the vanishing locus of for Other examples can be constructed as elliptic fibrations, as quotients of abelian surfaces, or as complete intersections. Non simply-connected examples are given by abelian surfaces, which are real four tori equipped with a complex manifold structure. Enriques surfaces and hyperelliptic surfaces have first Chern class that vanishes as an element of the real cohomology group, but not as an element of the integral cohomology group, so Yau's theorem about the existence of a Ricci-flat metric still applies to them but they are sometimes not considered to be Calabi–Yau manifolds. Abelian surfaces are sometimes excluded from the classification of being Calabi–Yau, as their holonomy (again the trivial group) is a proper subgroup of SU(2), instead of being isomorphic to SU(2). 
However, the Enriques surface subset do not conform entirely to the SU(2) subgroup in the String theory landscape. CY threefolds In three complex dimensions, classification of the possible Calabi–Yau manifolds is an open problem, although Yau suspects that there is a finite number of families (albeit a much bigger number than his estimate from 20 years ago). In turn, it has also been conjectured by Miles Reid that the number of topological types of Calabi–Yau 3-folds is infinite, and that they can all be transformed continuously ( through certain mild singularizations such as conifolds) one into another—much as Riemann surfaces can. One example of a three-dimensional Calabi–Yau manifold is a non-singular quintic threefold in CP4, which is the algebraic variety consisting of all of the zeros of a homogeneous quintic polynomial in the homogeneous coordinates of the CP4. Another example is a smooth model of the Barth–Nieto quintic. Some discrete quotients of the quintic by various Z5 actions are also Calabi–Yau and have received a lot of attention in the literature. One of these is related to the original quintic by mirror symmetry. For every positive integer n, the zero set, in the homogeneous coordinates of the complex projective space CPn+1, of a non-singular homogeneous degree n + 2 polynomial in n + 2 variables is a compact Calabi–Yau n-fold. The case n = 1 describes an elliptic curve, while for n = 2 one obtains a K3 surface. More generally, Calabi–Yau varieties/orbifolds can be found as weighted complete intersections in a weighted projective space. The main tool for finding such spaces is the adjunction formula. All hyper-Kähler manifolds are Calabi–Yau manifolds. Constructed from algebraic curves For an algebraic curve a quasi-projective Calabi-Yau threefold can be constructed as the total space where . For the canonical projection we can find the relative tangent bundle is using the relative tangent sequence and observing the only tangent vectors in the fiber which are not in the pre-image of are canonically associated with the fibers of the vector bundle. Using this, we can use the relative cotangent sequence together with the properties of wedge powers that and giving the triviality of . Constructed from algebraic surfaces Using a similar argument as for curves, the total space of the canonical sheaf for an algebraic surface forms a Calabi-Yau threefold. A simple example is over projective space. Applications in superstring theory Calabi–Yau manifolds are important in superstring theory. Essentially, Calabi–Yau manifolds are shapes that satisfy the requirement of space for the six "unseen" spatial dimensions of string theory, which may be smaller than our currently observable lengths as they have not yet been detected. A popular alternative known as large extra dimensions, which often occurs in braneworld models, is that the Calabi–Yau is large but we are confined to a small subset on which it intersects a D-brane. Further extensions into higher dimensions are currently being explored with additional ramifications for general relativity. In the most conventional superstring models, ten conjectural dimensions in string theory are supposed to come as four of which we are aware, carrying some kind of fibration with fiber dimension six. Compactification on Calabi–Yau n-folds are important because they leave some of the original supersymmetry unbroken. 
More precisely, in the absence of fluxes, compactification on a Calabi–Yau 3-fold (real dimension 6) leaves one quarter of the original supersymmetry unbroken if the holonomy is the full SU(3). More generally, a flux-free compactification on an n-manifold with holonomy SU(n) leaves 2^(1−n) of the original supersymmetry unbroken, corresponding to 2^(6−n) supercharges in a compactification of type IIA supergravity or 2^(5−n) supercharges in a compactification of type I. When fluxes are included the supersymmetry condition instead implies that the compactification manifold be a generalized Calabi–Yau, a notion introduced by Nigel Hitchin. These models are known as flux compactifications. F-theory compactifications on various Calabi–Yau four-folds provide physicists with a method to find a large number of classical solutions in the so-called string theory landscape. Connected with each hole in the Calabi–Yau space is a group of low-energy string vibrational patterns. Since string theory states that our familiar elementary particles correspond to low-energy string vibrations, the presence of multiple holes causes the string patterns to fall into multiple groups, or families. Although the following statement has been simplified, it conveys the logic of the argument: if the Calabi–Yau has three holes, then three families of vibrational patterns and thus three families of particles will be observed experimentally. Logically, since strings vibrate through all the dimensions, the shape of the curled-up ones will affect their vibrations and thus the properties of the elementary particles observed. For example, Andrew Strominger and Edward Witten have shown that the masses of particles depend on the manner of the intersection of the various holes in a Calabi–Yau. In other words, the positions of the holes relative to one another and to the substance of the Calabi–Yau space were found by Strominger and Witten to affect the masses of particles in a certain way. This is true of all particle properties. Calabi–Yau algebra A Calabi–Yau algebra was introduced by Victor Ginzburg to transport the geometry of a Calabi–Yau manifold to noncommutative algebraic geometry. In popular culture The Calabi–Yau manifold was the subject of a paper coauthored by Sheldon Cooper in episode 2 of the seventh season of Young Sheldon. Imagery based on Calabi–Yau manifolds was used in episode 5 of the TV series 3 Body Problem in order to illustrate the high-dimensional abilities of the San-Ti alien civilization. In Half-Life 2, Dr. Mossman describes teleporters as working via a 'String-based' technology using 'the Calabi-Yau model.' See also Quintic threefold G2 manifold References Further reading External links Calabi–Yau Homepage is an interactive reference which describes many examples and classes of Calabi–Yau manifolds and also the physical theories in which they appear. Spinning Calabi–Yau Space video. Calabi–Yau Space by Andrew J. Hanson with additional contributions by Jeff Bryant, Wolfram Demonstrations Project. Beginner articles An overview of Calabi-Yau Elliptic fibrations Lectures on the Calabi-Yau Landscape Fibrations in CICY Threefolds - (complete intersection Calabi-Yau) Algebraic geometry Differential geometry Mathematical physics String theory Complex manifolds
Calabi–Yau manifold
[ "Physics", "Astronomy", "Mathematics" ]
2,993
[ "Astronomical hypotheses", "Applied mathematics", "Theoretical physics", "Fields of abstract algebra", "Algebraic geometry", "String theory", "Mathematical physics" ]
188,935
https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20statistics
In quantum statistics, Bose–Einstein statistics (B–E statistics) describes one of two possible ways in which a collection of non-interacting identical particles may occupy a set of available discrete energy states at thermodynamic equilibrium. The aggregation of particles in the same state, which is a characteristic of particles obeying Bose–Einstein statistics, accounts for the cohesive streaming of laser light and the frictionless creeping of superfluid helium. The theory of this behaviour was developed (1924–25) by Satyendra Nath Bose, who recognized that a collection of identical and indistinguishable particles can be distributed in this way. The idea was later adopted and extended by Albert Einstein in collaboration with Bose. Bose–Einstein statistics applies only to particles that are not restricted by the Pauli exclusion principle. Particles that follow Bose–Einstein statistics are called bosons, which have integer values of spin. In contrast, particles that follow Fermi–Dirac statistics are called fermions and have half-integer spins. Bose–Einstein distribution At low temperatures, bosons behave differently from fermions (which obey Fermi–Dirac statistics) in that an unlimited number of them can "condense" into the same energy state. This apparently unusual property also gives rise to a special state of matter – the Bose–Einstein condensate. Fermi–Dirac and Bose–Einstein statistics apply when quantum effects are important and the particles are "indistinguishable". Quantum effects appear if the concentration of particles satisfies N/V ≥ n_q, where N is the number of particles, V is the volume, and n_q is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are barely overlapping. Fermi–Dirac statistics applies to fermions (particles that obey the Pauli exclusion principle), and Bose–Einstein statistics applies to bosons. As the quantum concentration depends on temperature, most systems at high temperatures obey the classical (Maxwell–Boltzmann) limit, unless they also have a very high density, as for a white dwarf. Both Fermi–Dirac and Bose–Einstein become Maxwell–Boltzmann statistics at high temperature or at low concentration. Bose–Einstein statistics was introduced for photons in 1924 by Bose and generalized to atoms by Einstein in 1924–25. The expected number of particles in an energy state i for Bose–Einstein statistics is: n̄_i = g_i / (e^{(ε_i − μ)/(k_B T)} − 1), with ε_i > μ, where n̄_i is the occupation number (the number of particles) in state i, g_i is the degeneracy of energy level i, ε_i is the energy of the i-th state, μ is the chemical potential (zero for a photon gas), k_B is the Boltzmann constant, and T is the absolute temperature. The variance of this distribution is calculated directly from the expression above for the average number. For comparison, the average number of fermions with energy ε_i given by the Fermi–Dirac particle-energy distribution has a similar form: n̄_i = g_i / (e^{(ε_i − μ)/(k_B T)} + 1). As mentioned above, both the Bose–Einstein distribution and the Fermi–Dirac distribution approach the Maxwell–Boltzmann distribution in the limit of high temperature and low particle density, without the need for any ad hoc assumptions: In the limit of low particle density, n̄_i ≪ 1, therefore e^{(ε_i − μ)/(k_B T)} − 1 ≫ 1, or equivalently e^{(ε_i − μ)/(k_B T)} ≫ 1. In that case, n̄_i ≈ g_i e^{−(ε_i − μ)/(k_B T)}, which is the result from Maxwell–Boltzmann statistics. In the limit of high temperature, the particles are distributed over a large range of energy values, therefore the occupancy of each state (especially the high-energy ones with ε_i − μ ≫ k_B T) is again very small, n̄_i ≪ 1.
This again reduces to Maxwell–Boltzmann statistics. In addition to reducing to the Maxwell–Boltzmann distribution in the limit of high and low density, Bose–Einstein statistics also reduces to Rayleigh–Jeans law distribution for low energy states with , namely History Władysław Natanson in 1911 concluded that Planck's law requires indistinguishability of "units of energy", although he did not frame this in terms of Einstein's light quanta. While presenting a lecture at the University of Dhaka (in what was then British India and is now Bangladesh) on the theory of radiation and the ultraviolet catastrophe, Satyendra Nath Bose intended to show his students that the contemporary theory was inadequate, because it predicted results not in accordance with experimental results. During this lecture, Bose committed an error in applying the theory, which unexpectedly gave a prediction that agreed with the experiment. The error was a simple mistake—similar to arguing that flipping two fair coins will produce two heads one-third of the time—that would appear obviously wrong to anyone with a basic understanding of statistics (remarkably, this error resembled the famous blunder by d'Alembert known from his Croix ou Pile article). However, the results it predicted agreed with experiment, and Bose realized it might not be a mistake after all. For the first time, he took the position that the Maxwell–Boltzmann distribution would not be true for all microscopic particles at all scales. Thus, he studied the probability of finding particles in various states in phase space, where each state is a little patch having phase volume of h3, and the position and momentum of the particles are not kept particularly separate but are considered as one variable. Bose adapted this lecture into a short article called "Planck's law and the hypothesis of light quanta" and submitted it to the Philosophical Magazine. However, the referee's report was negative, and the paper was rejected. Undaunted, he sent the manuscript to Albert Einstein requesting publication in the . Einstein immediately agreed, personally translated the article from English into German (Bose had earlier translated Einstein's article on the general theory of relativity from German to English), and saw to it that it was published. Bose's theory achieved respect when Einstein sent his own paper in support of Bose's to , asking that they be published together. The paper came out in 1924. The reason Bose produced accurate results was that since photons are indistinguishable from each other, one cannot treat any two photons having equal quantum numbers (e.g., polarization and momentum vector) as being two distinct identifiable photons. Bose originally had a factor of 2 for the possible spin states, but Einstein changed it to polarization. By analogy, if in an alternate universe coins were to behave like photons and other bosons, the probability of producing two heads would indeed be one-third, and so is the probability of getting a head and a tail which equals one-half for the conventional (classical, distinguishable) coins. Bose's "error" leads to what is now called Bose–Einstein statistics. Bose and Einstein extended the idea to atoms and this led to the prediction of the existence of phenomena which became known as Bose–Einstein condensate, a dense collection of bosons (which are particles with integer spin, named after Bose), which was demonstrated to exist by experiment in 1995. 
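The coin analogy above can be checked with a few lines of code: for distinguishable (classical) coins the four ordered outcomes are equally likely, while for "bosonic", indistinguishable coins only the three unordered occupation patterns are counted as equally likely states. This is an illustrative toy calculation, not part of the original article.

# Classical coins: ordered outcomes are the equally likely states.
from itertools import product

classical = list(product("HT", repeat=2))           # HH, HT, TH, TT
p_classical_two_heads = classical.count(("H", "H")) / len(classical)

# "Bosonic" coins: only the unordered patterns are distinct states.
bosonic = {tuple(sorted(o)) for o in classical}      # {HH, HT, TT}
p_bosonic_two_heads = 1 / len(bosonic)

print(p_classical_two_heads, p_bosonic_two_heads)    # 0.25 versus 0.333...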
Derivation Derivation from the microcanonical ensemble In the microcanonical ensemble, one considers a system with fixed energy, volume, and number of particles. We take a system composed of identical bosons, of which have energy and are distributed over levels or states with the same energy , i.e. is the degeneracy associated with energy of total energy . Calculation of the number of arrangements of particles distributed among states is a problem of combinatorics. Since particles are indistinguishable in the quantum mechanical context here, the number of ways for arranging particles in boxes (for the th energy level) would be (see image): where is the k-combination of a set with m elements. The total number of arrangements in an ensemble of bosons is simply the product of the binomial coefficients above over all the energy levels, i.e. The maximum number of arrangements determining the corresponding occupation number is obtained by maximizing the entropy, or equivalently, setting and taking the subsidiary conditions into account (as Lagrange multipliers). The result for , , is the Bose–Einstein distribution. Derivation from the grand canonical ensemble The Bose–Einstein distribution, which applies only to a quantum system of non-interacting bosons, is naturally derived from the grand canonical ensemble without any approximations. In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential μ fixed by the reservoir). Due to the non-interacting quality, each available single-particle level (with energy level ϵ) forms a separate thermodynamic system in contact with the reservoir. That is, the number of particles within the overall system that occupy a given single particle state form a sub-ensemble that is also grand canonical ensemble; hence, it may be analysed through the construction of a grand partition function. Every single-particle state is of a fixed energy, . As the sub-ensemble associated with a single-particle state varies by the number of particles only, it is clear that the total energy of the sub-ensemble is also directly proportional to the number of particles in the single-particle state; where is the number of particles, the total energy of the sub-ensemble will then be . Beginning with the standard expression for a grand partition function and replacing with , the grand partition function takes the form This formula applies to fermionic systems as well as bosonic systems. Fermi–Dirac statistics arises when considering the effect of the Pauli exclusion principle: whilst the number of fermions occupying the same single-particle state can only be either 1 or 0, the number of bosons occupying a single particle state may be any integer. Thus, the grand partition function for bosons can be considered a geometric series and may be evaluated as such: Note that the geometric series is convergent only if , including the case where . This implies that the chemical potential for the Bose gas must be negative, i.e., , whereas the Fermi gas is allowed to take both positive and negative values for the chemical potential. The average particle number for that single-particle substate is given by This result applies for each single-particle level and thus forms the Bose–Einstein distribution for the entire state of the system. 
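As a quick numerical illustration of the grand-canonical result just derived, the sketch below evaluates the mean occupation number of a single-particle level and compares it with the Maxwell–Boltzmann value, which it approaches when (ε − μ) ≫ k_B T. The energies, chemical potential and temperature used are arbitrary illustrative values in reduced units.

import math

def bose_einstein(eps, mu, kT):
    # Mean occupation of a single-particle level of energy eps (non-degenerate level)
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

def maxwell_boltzmann(eps, mu, kT):
    return math.exp(-(eps - mu) / kT)

kT, mu = 1.0, 0.0
for eps in (0.1, 1.0, 5.0, 10.0):
    print(eps, bose_einstein(eps, mu, kT), maxwell_boltzmann(eps, mu, kT))
# For eps >> kT the two expressions agree; near eps ~ kT the Bose-Einstein occupancy is larger.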
The variance in particle number, , is: As a result, for highly occupied states the standard deviation of the particle number of an energy level is very large, slightly larger than the particle number itself: . This large uncertainty is due to the fact that the probability distribution for the number of bosons in a given energy level is a geometric distribution; somewhat counterintuitively, the most probable value for N is always 0. (In contrast, classical particles have instead a Poisson distribution in particle number for a given state, with a much smaller uncertainty of , and with the most-probable N value being near .) Derivation in the canonical approach It is also possible to derive approximate Bose–Einstein statistics in the canonical ensemble. These derivations are lengthy and only yield the above results in the asymptotic limit of a large number of particles. The reason is that the total number of bosons is fixed in the canonical ensemble. The Bose–Einstein distribution in this case can be derived as in most texts by maximization, but the mathematically best derivation is by the Darwin–Fowler method of mean values as emphasized by Dingle. See also Müller-Kirsten. The fluctuations of the ground state in the condensed region are however markedly different in the canonical and grand-canonical ensembles. Suppose we have a number of energy levels, labeled by index , each level having energy and containing a total of particles. Suppose each level contains distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta, in which case they are distinguishable from each other, yet they can still have the same energy. The value of associated with level is called the "degeneracy" of that energy level. Any number of bosons can occupy the same sublevel. Let be the number of ways of distributing particles among the sublevels of an energy level. There is only one way of distributing particles with one sublevel, therefore . It is easy to see that there are ways of distributing particles in two sublevels which we will write as: With a little thought (see Notes below) it can be seen that the number of ways of distributing particles in three sublevels is so that where we have used the following theorem involving binomial coefficients: Continuing this process, we can see that is just a binomial coefficient (See Notes below) For example, the population numbers for two particles in three sublevels are 200, 110, 101, 020, 011, or 002 for a total of six which equals 4!/(2!2!). The number of ways that a set of occupation numbers can be realized is the product of the ways that each individual energy level can be populated: where the approximation assumes that . Following the same procedure used in deriving the Maxwell–Boltzmann statistics, we wish to find the set of for which W is maximised, subject to the constraint that there be a fixed total number of particles, and a fixed total energy. The maxima of and occur at the same value of and, since it is easier to accomplish mathematically, we will maximise the latter function instead. We constrain our solution using Lagrange multipliers forming the function: Using the approximation and using Stirling's approximation for the factorials gives where K is the sum of a number of terms which are not functions of the . 
Taking the derivative with respect to , and setting the result to zero and solving for , yields the Bose–Einstein population numbers: By a process similar to that outlined in the Maxwell–Boltzmann statistics article, it can be seen that: which, using Boltzmann's famous relationship becomes a statement of the second law of thermodynamics at constant volume, and it follows that and where S is the entropy, is the chemical potential, kB is the Boltzmann constant and T is the temperature, so that finally: Note that the above formula is sometimes written: where is the absolute activity, as noted by McQuarrie. Also note that when the particle numbers are not conserved, removing the conservation of particle numbers constraint is equivalent to setting and therefore the chemical potential to zero. This will be the case for photons and massive particles in mutual equilibrium and the resulting distribution will be the Planck distribution. A much simpler way to think of Bose–Einstein distribution function is to consider that n particles are denoted by identical balls and g shells are marked by g-1 line partitions. It is clear that the permutations of these n balls and g − 1 partitions will give different ways of arranging bosons in different energy levels. Say, for 3 (= n) particles and 3 (= g) shells, therefore , the arrangement might be |●●|●, or ||●●●, or |●|●● , etc. Hence the number of distinct permutations of objects which have n identical items and (g − 1) identical items will be: See the image for a visual representation of one such distribution of n particles in g boxes that can be represented as partitions. OR The purpose of these notes is to clarify some aspects of the derivation of the Bose–Einstein distribution for beginners. The enumeration of cases (or ways) in the Bose–Einstein distribution can be recast as follows. Consider a game of dice throwing in which there are dice, with each die taking values in the set, for . The constraints of the game are that the value of a die , denoted by , has to be greater than or equal to the value of die , denoted by , in the previous throw, i.e., . Thus a valid sequence of die throws can be described by an n-tuple , such that . Let denote the set of these valid n-tuples: Then the quantity (defined above as the number of ways to distribute particles among the sublevels of an energy level) is the cardinality of , i.e., the number of elements (or valid n-tuples) in . Thus the problem of finding an expression for becomes the problem of counting the elements in . Example n = 4, g = 3: (there are elements in ) Subset is obtained by fixing all indices to , except for the last index, , which is incremented from to . Subset is obtained by fixing , and incrementing from to . Due to the constraint on the indices in , the index must automatically take values in . The construction of subsets and follows in the same manner. 
Each element of can be thought of as a multiset of cardinality ; the elements of such multiset are taken from the set of cardinality , and the number of such multisets is the multiset coefficient More generally, each element of is a multiset of cardinality (number of dice) with elements taken from the set of cardinality (number of possible values of each die), and the number of such multisets, i.e., is the multiset coefficient which is exactly the same as the formula for , as derived above with the aid of a theorem involving binomial coefficients, namely To understand the decomposition or for example, and let us rearrange the elements of as follows Clearly, the subset of is the same as the set By deleting the index (shown in red with double underline) in the subset of , one obtains the set In other words, there is a one-to-one correspondence between the subset of and the set . We write Similarly, it is easy to see that Thus we can write or more generally, and since the sets are non-intersecting, we thus have with the convention that Continuing the process, we arrive at the following formula Using the convention (7)2 above, we obtain the formula keeping in mind that for and being constants, we have It can then be verified that (8) and (2) give the same result for , , , etc. Interdisciplinary applications Viewed as a pure probability distribution, the Bose–Einstein distribution has found application in other fields: In recent years, Bose–Einstein statistics has also been used as a method for term weighting in information retrieval. The method is one of a collection of DFR ("Divergence From Randomness") models, the basic notion being that Bose–Einstein statistics may be a useful indicator in cases where a particular term and a particular document have a significant relationship that would not have occurred purely by chance. Source code for implementing this model is available from the Terrier project at the University of Glasgow. The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system's constituents. Despite their irreversible and nonequilibrium nature these networks follow Bose statistics and can undergo Bose–Einstein condensation. Addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the "first-mover-advantage", "fit-get-rich" (FGR) and "winner-takes-all" phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks. See also Bose–Einstein correlations Bose–Einstein condensate Bose gas Einstein solid Higgs boson Parastatistics Planck's law of black body radiation Superconductivity Fermi–Dirac statistics Maxwell–Boltzmann statistics Kompaneyets equation Notes References Concepts in physics Quantum field theory Albert Einstein Statistical mechanics Satyendra Nath Bose
Bose–Einstein statistics
[ "Physics" ]
4,052
[ "Quantum field theory", "Statistical mechanics", "Quantum mechanics", "nan" ]
189,331
https://en.wikipedia.org/wiki/Quantum%20indeterminacy
Quantum indeterminacy is the apparent necessary incompleteness in the description of a physical system, which has become one of the characteristics of the standard description of quantum physics. Prior to quantum physics, it was thought that a physical system had a determinate state that uniquely determined all the values of its measurable properties. Quantum indeterminacy can be quantitatively characterized by a probability distribution on the set of outcomes of measurements of an observable. The distribution is uniquely determined by the system state, and moreover quantum mechanics provides a recipe for calculating this probability distribution. Indeterminacy in measurement was not an innovation of quantum mechanics, since it had been established early on by experimentalists that errors in measurement may lead to indeterminate outcomes. By the latter half of the 18th century, measurement errors were well understood, and it was known that they could either be reduced by better equipment or accounted for by statistical error models. In quantum mechanics, however, indeterminacy is of a much more fundamental nature, having nothing to do with errors or disturbance. Measurement An adequate account of quantum indeterminacy requires a theory of measurement. Many theories have been proposed since the beginning of quantum mechanics, and quantum measurement continues to be an active research area in both theoretical and experimental physics. Possibly the first systematic attempt at a mathematical theory was developed by John von Neumann. The kinds of measurements he investigated are now called projective measurements. That theory was based in turn on the theory of projection-valued measures for self-adjoint operators that had been recently developed (by von Neumann and independently by Marshall Stone) and the Hilbert space formulation of quantum mechanics (attributed by von Neumann to Paul Dirac). In this formulation, the state of a physical system corresponds to a vector of length 1 in a Hilbert space H over the complex numbers. An observable is represented by a self-adjoint (i.e. Hermitian) operator A on H. If H is finite dimensional, then by the spectral theorem A has an orthonormal basis of eigenvectors. If the system is in state ψ, then immediately after measurement the system will occupy a state that is an eigenvector e of A, and the observed value λ will be the corresponding eigenvalue in the equation A e = λ e. It is immediate from this that measurement in general will be non-deterministic. Quantum mechanics, moreover, gives a recipe for computing a probability distribution Pr on the possible outcomes given that the initial system state is ψ. The probability is Pr(λ) = ⟨E(λ)ψ, ψ⟩, where E(λ) is the projection onto the space of eigenvectors of A with eigenvalue λ. Example In this example, we consider a single spin-1/2 particle (such as an electron) for which we consider only the spin degree of freedom. The corresponding Hilbert space is the two-dimensional complex Hilbert space C2, with each quantum state corresponding to a unit vector in C2 (unique up to phase). In this case, the state space can be geometrically represented as the surface of a sphere. The Pauli spin matrices are self-adjoint and correspond to spin measurements along the three coordinate axes. The Pauli matrices all have the eigenvalues +1, −1. For σ1, these eigenvalues correspond to the eigenvectors (1/√2)(1, 1) and (1/√2)(1, −1); for σ3, they correspond to the eigenvectors (1, 0) and (0, 1). Thus in the state ψ = (1/√2)(1, 1), σ1 has the determinate value +1, while measurement of σ3 can produce either +1 or −1, each with probability 1/2 (a short numerical check of these probabilities is given below).
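A minimal numerical check of this example, using the explicit Pauli matrices; the state and operators below are the standard textbook ones, and the snippet is illustrative rather than part of the original article.

import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=float)
sigma3 = np.array([[1, 0], [0, -1]], dtype=float)

psi = np.array([1.0, 1.0]) / np.sqrt(2)     # eigenvector of sigma1 with eigenvalue +1

print(np.allclose(sigma1 @ psi, psi))       # True: sigma1 has the determinate value +1

# Born-rule probabilities for a sigma3 measurement in the state psi
eigvals, eigvecs = np.linalg.eigh(sigma3)
for lam, e in zip(eigvals, eigvecs.T):
    print(lam, abs(e @ psi) ** 2)           # each outcome (+1 and -1) has probability 0.5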
In fact, there is no state in which measurement of both σ1 and σ3 have determinate values. There are various questions that can be asked about the above indeterminacy assertion. Can the apparent indeterminacy be construed as in fact deterministic, but dependent upon quantities not modeled in the current theory, which would therefore be incomplete? More precisely, are there hidden variables that could account for the statistical indeterminacy in a completely classical way? Can the indeterminacy be understood as a disturbance of the system being measured? Von Neumann formulated the question 1) and provided an argument why the answer had to be no, if one accepted the formalism he was proposing. However, according to Bell, von Neumann's formal proof did not justify his informal conclusion. A definitive but partial negative answer to 1) has been established by experiment: because Bell's inequalities are violated, any such hidden variable(s) cannot be local (see Bell test experiments). The answer to 2) depends on how disturbance is understood, particularly since measurement entails disturbance (however note that this is the observer effect, which is distinct from the uncertainty principle). Still, in the most natural interpretation the answer is also no. To see this, consider two sequences of measurements: (A) that measures exclusively σ1 and (B) that measures only σ3 of a spin system in the state ψ. The measurement outcomes of (A) are all +1, while the statistical distribution of the measurements (B) is still divided between +1, −1 with equal probability. Other examples of indeterminacy Quantum indeterminacy can also be illustrated in terms of a particle with a definitely measured momentum for which there must be a fundamental limit to how precisely its location can be specified. This quantum uncertainty principle can be expressed in terms of other variables, for example, a particle with a definitely measured energy has a fundamental limit to how precisely one can specify how long it will have that energy. The magnitude involved in quantum uncertainty is on the order of the Planck constant (). Indeterminacy and incompleteness Quantum indeterminacy is the assertion that the state of a system does not determine a unique collection of values for all its measurable properties. Indeed, according to the Kochen–Specker theorem, in the quantum mechanical formalism it is impossible that, for a given quantum state, each one of these measurable properties (observables) has a determinate (sharp) value. The values of an observable will be obtained non-deterministically in accordance with a probability distribution that is uniquely determined by the system state. Note that the state is destroyed by measurement, so when we refer to a collection of values, each measured value in this collection must be obtained using a freshly prepared state. This indeterminacy might be regarded as a kind of essential incompleteness in our description of a physical system. Notice however, that the indeterminacy as stated above only applies to values of measurements not to the quantum state. For example, in the spin 1/2 example discussed above, the system can be prepared in the state ψ by using measurement of σ1 as a filter that retains only those particles such that σ1 yields +1. By the von Neumann (so-called) postulates, immediately after the measurement the system is assuredly in the state ψ. 
However, Albert Einstein believed that quantum state cannot be a complete description of a physical system and, it is commonly thought, never came to terms with quantum mechanics. In fact, Einstein, Boris Podolsky and Nathan Rosen showed that if quantum mechanics is correct, then the classical view of how the real world works (at least after special relativity) is no longer tenable. This view included the following two ideas: A measurable property of a physical system whose value can be predicted with certainty is actually an element of (local) reality (this was the terminology used by EPR). Effects of local actions have a finite propagation speed. This failure of the classical view was one of the conclusions of the EPR thought experiment in which two remotely located observers, now commonly referred to as Alice and Bob, perform independent measurements of spin on a pair of electrons, prepared at a source in a special state called a spin singlet state. It was a conclusion of EPR, using the formal apparatus of quantum theory, that once Alice measured spin in the x direction, Bob's measurement in the x direction was determined with certainty, whereas immediately before Alice's measurement Bob's outcome was only statistically determined. From this it follows that either value of spin in the x direction is not an element of reality or that the effect of Alice's measurement has infinite speed of propagation. Indeterminacy for mixed states We have described indeterminacy for a quantum system that is in a pure state. Mixed states are a more general kind of state obtained by a statistical mixture of pure states. For mixed states the "quantum recipe" for determining the probability distribution of a measurement is determined as follows: Let A be an observable of a quantum mechanical system. A is given by a densely defined self-adjoint operator on H. The spectral measure of A is a projection-valued measure defined by the condition for every Borel subset U of R. Given a mixed state S, we introduce the distribution of A under S as follows: This is a probability measure defined on the Borel subsets of R that is the probability distribution obtained by measuring A in S. Logical independence and quantum randomness Quantum indeterminacy is often understood as information (or lack of it) whose existence we infer, occurring in individual quantum systems, prior to measurement. Quantum randomness is the statistical manifestation of that indeterminacy, witnessable in results of experiments repeated many times. However, the relationship between quantum indeterminacy and randomness is subtle and can be considered differently. In classical physics, experiments of chance, such as coin-tossing and dice-throwing, are deterministic, in the sense that, perfect knowledge of the initial conditions would render outcomes perfectly predictable. The ‘randomness’ stems from ignorance of physical information in the initial toss or throw. In diametrical contrast, in the case of quantum physics, the theorems of Kochen and Specker, the inequalities of John Bell, and experimental evidence of Alain Aspect, all indicate that quantum randomness does not stem from any such physical information. In 2008, Tomasz Paterek et al. provided an explanation in mathematical information. They proved that quantum randomness is, exclusively, the output of measurement experiments whose input settings introduce logical independence into quantum systems. Logical independence is a well-known phenomenon in Mathematical Logic. 
It refers to the null logical connectivity that exists between mathematical propositions (in the same language) that neither prove nor disprove one another. In the work of Paterek et al., the researchers demonstrate a link connecting quantum randomness and logical independence in a formal system of Boolean propositions. In experiments measuring photon polarisation, Paterek et al. demonstrate statistics correlating predictable outcomes with logically dependent mathematical propositions, and random outcomes with propositions that are logically independent. In 2020, Steve Faulkner reported on work following up on the findings of Tomasz Paterek et al.; showing what logical independence in the Paterek Boolean propositions means, in the domain of Matrix Mechanics proper. He showed how indeterminacy's indefiniteness arises in evolved density operators representing mixed states, where measurement processes encounter irreversible 'lost history' and ingression of ambiguity. See also Complementarity (physics) Counterfactual definiteness EPR paradox Interpretations of quantum mechanics: Comparisons chart Quantum contextuality Quantum entanglement Quantum measurement Quantum mechanics Uncertainty principle Notes References A. Aspect, Bell's inequality test: more ideal than ever, Nature 398 189 (1999). G. Bergmann, The Logic of Quanta, American Journal of Physics, 1947. Reprinted in Readings in the Philosophy of Science, Ed. H. Feigl and M. Brodbeck, Appleton-Century-Crofts, 1953. Discusses measurement, accuracy and determinism. J.S. Bell, On the Einstein–Poldolsky–Rosen paradox, Physics 1 195 (1964). A. Einstein, B. Podolsky, and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47 777 (1935). G. Mackey, Mathematical Foundations of Quantum Mechanics, W. A. Benjamin, 1963 (paperback reprint by Dover 2004). J. von Neumann, Mathematical Foundations of Quantum Mechanics, Princeton University Press, 1955. Reprinted in paperback form. Originally published in German in 1932. R. Omnès, Understanding Quantum Mechanics, Princeton University Press, 1999. External links Common Misconceptions Regarding Quantum Mechanics See especially part III "Misconceptions regarding measurement". Quantum mechanics Determinism
Quantum indeterminacy
[ "Physics" ]
2,562
[ "Theoretical physics", "Quantum mechanics" ]
189,699
https://en.wikipedia.org/wiki/Fire%20sprinkler%20system
A fire sprinkler system is an active fire protection method, consisting of a water supply system providing adequate pressure and flowrate to a water distribution piping system, to which fire sprinklers are connected. Although initially used only in factories and large commercial buildings, systems for homes and small buildings are now available at a cost-effective price. Fire sprinkler systems are extensively used worldwide, with over 40 million sprinkler heads fitted each year. Fire sprinkler systems are generally designed as a life saving system, but are not necessarily designed to protect the building. Of buildings completely protected by fire sprinkler systems, if a fire did initiate, it was controlled by the fire sprinklers alone in 96% of these cases. History Leonardo da Vinci designed a sprinkler system in the 15th century. Leonardo automated his patron's kitchen with a super-oven and a system of conveyor belts. In a comedy of errors, everything went wrong during a huge banquet, and a fire broke out. "The sprinkler system worked all too well, causing a flood that washed away all the food and a good part of the kitchen." Ambrose Godfrey created the first successful automated sprinkler system in 1723. He used gunpowder to release a tank of extinguishing fluid. The world's first modern recognizable sprinkler system was installed in the Theatre Royal, Drury Lane in the United Kingdom in 1812 by its architect, William Congreve, and was covered by patent No. 3606 dated the same year. The apparatus consisted of a cylindrical airtight reservoir of 400 hogsheads (c. 95,000 litres) fed by a water main which branched to all parts of the theatre. A series of smaller pipes fed from the distribution pipe were pierced with a series of holes which would pour water in the event of a fire. Frederick Grinnell improved Henry S. Parmalee's design and in 1881 patented the automatic sprinkler that bears his name. He continued to improve the device and in 1890 invented the glass disc sprinkler, essentially the same as that in use today. Until the 1940s, sprinklers were installed almost exclusively for the protection of commercial buildings, whose owners were generally able to recoup their expenses with savings in insurance costs. Over the years, fire sprinklers have become mandatory safety equipment" in some parts of North America, in certain occupancies, including, but not limited to newly constructed "hospitals, schools, hotels and other public buildings", subject to the local building codes and enforcement. However, outside of the US and Canada, sprinklers have rarely been mandated by building codes for normal hazard occupancies which do not have large numbers of occupants (e.g. factories, process lines, retail outlets, petrol stations, etc.) Sprinklers are now commonly installed in non-industrial buildings, including schools and residential premises. This is largely as a result of lobbying by the National Fire Sprinkler Network, the European Fire Sprinkler Network, and the British Automatic Fire Sprinkler Association. Usage Sprinklers have been in use in the United States since 1874, and were installed in factory applications where fires at the turn of the century were often catastrophic in terms of both human and property losses. Sprinklers may be required to be installed by building codes, or may be recommended by insurance companies to reduce potential property losses or business interruption. 
US building codes for places of assembly (generally over 100 persons) and places with overnight sleeping accommodation such as hotels, nursing homes, dormitories, and hospitals usually require sprinklers, either under local building codes, as a condition of receiving state and federal funding, or as a requirement to obtain certification (essential for institutions that wish to train medical staff). Regulations United States The primary fire code writing organization is the private National Fire Protection Association, or NFPA. NFPA sets the standards for technical aspects of sprinklers installed in the USA. Building codes, which specify which buildings require sprinklers, are generally left to local jurisdictions. However, there are some exceptions: In 1990 the US passed PL 101-391, better known as the Hotel and Motel Fire Safety Act of 1990. This law requires that any hotel, meeting hall, or similar institution that receives federal funds (i.e. for overnight stay, or a conference, etc.) must meet fire and other safety requirements. The most visible of these requirements is the installation of sprinklers. As more and more hotels and other public accommodations upgraded their facilities to enable business with government visitors, this type of construction became the de facto industry norm – even when not directly mandated by any local building codes. If building codes do not explicitly mandate the use of fire sprinklers, the code often makes it highly advantageous to install them as an optional system. Most US building codes allow for less-expensive construction materials, larger floor area limitations, longer egress paths, and fewer requirements for fire-rated construction in structures which are protected by fire sprinklers. Consequently, the total building cost is often decreased by installing a sprinkler system and saving money in the other aspects of the project, as compared to building a non-sprinklered structure. In 2011, Pennsylvania and California became the first US states to require sprinkler systems in all new residential construction. However, Pennsylvania repealed the law later that same year. Many municipalities now require residential sprinklers, even if they are not required at the state level. Europe In Norway, as of July 2010, all new housing of more than two storeys, all new hotels, care homes, and hospitals must be sprinklered. Other Nordic countries require or soon will require sprinklers in new care homes, and in Finland a third of care homes were retrofitted with sprinklers. A fire in an illegal immigrant detention center at Schiphol Airport in the Netherlands on 27 October 2005 killed 11 detainees, and led to the retrofitting of sprinklers in all similarly designed prisons in the Netherlands. A fire at Düsseldorf Airport on 11 April 1996 which killed 17 people led to sprinklers being retrofitted in all major German airports. Most European countries also require sprinklers in shopping centers, in large warehouses, and in high-rise buildings. Renewed interest in and support for sprinkler systems in the UK has resulted in sprinkler systems being more widely installed. In schools, for example, the government has issued recommendations through Building Bulletin 100, a design guide for fire safety in schools, that most new schools, except for a few low-risk schools, should be constructed with sprinkler protection. In 2011, Wales became the first country in the world where sprinklers are compulsory in all new homes. 
The law applies to newly built houses and blocks of flats, as well as care homes and university halls of residence. In Scotland, all new schools are sprinklered, as are new care homes, sheltered housing and high-rise flats. In the UK, since the 1990s sprinklers have gained recognition within the Building Regulations (England and Wales) and Scottish Building Standards, and under certain circumstances the presence of sprinkler systems is deemed to provide a form of alternative compliance to some parts of the codes. For example, the presence of a sprinkler system will usually permit doubling of compartment sizes and increases in travel distances (to fire exits), as well as allowing a reduction in the fire rating of internal compartment walls. Operation Each closed-head sprinkler is held closed by either a heat-sensitive glass bulb or a two-part metal link held together with fusible alloy. The glass bulb or link holds in place a "pip cap" which acts as a plug to prevent water from flowing until the ambient temperature around the sprinkler reaches the design activation temperature of the individual sprinkler head. In a standard wet-pipe sprinkler system, each sprinkler activates independently when the predetermined heat level is reached. Thus, only the sprinklers sufficiently heated by the fire will operate. This maximizes water pressure over the point of fire origin, and minimizes water damage to the building. A sprinkler activation will usually do less water damage than a fire department hose stream (which provides approximately 900 litres/min (250 US gallons/min)). A typical sprinkler used for industrial manufacturing occupancies discharges about 75–150 litres/min (20–40 US gallons/min). However, a typical Early Suppression Fast Response (ESFR) sprinkler at a pressure of will discharge approximately . In addition, a sprinkler will usually activate within one to four minutes of the fire's start, whereas it typically takes at least five minutes for a fire department to register an alarm and drive to the fire site, and an additional ten minutes to set up equipment and apply hose streams to the fire. This additional time can result in a much larger fire, requiring much more water to extinguish. Types Wet pipe By a wide margin, wet pipe sprinkler systems are installed more often than all other types of fire sprinkler systems. They also are the most reliable, because they are simple, with the only operating components being the automatic sprinklers and (commonly, but not always) the automatic alarm check valve. An automatic water supply provides water under pressure to the system piping. Wet systems have optionally been charged with an antifreeze chemical, for use where pipes cannot reliably be kept above . While such systems were once common in cold areas, after several fires which were not controlled because the sprinkler systems were filled with too high a percentage of antifreeze, the regulatory authority in the United States effectively banned new antifreeze installations. A sunset date of 2022 applies to older antifreeze systems in the US. This regulatory action has greatly increased costs and reduced options for cold-weather-tolerant sprinkler systems. Dry pipe Dry pipe systems are the second most common sprinkler system type. Dry pipe systems are installed in spaces in which the ambient temperature may be cold enough to freeze the water in a wet pipe system, which would make the system inoperable. 
Dry pipe systems are most often used in unheated buildings, in parking garages, in outside canopies attached to heated buildings (within which a wet pipe system would also be provided), or in refrigerated coolers. In regions using NFPA regulations, wet pipe systems cannot be installed unless the range of ambient temperatures remains above . Water is not present in the piping until the system operates; instead, the piping is filled with dry air at a pressure below the water supply pressure. To prevent the larger water supply pressure from prematurely forcing water into the piping, the dry pipe valve (a specialized type of check valve) is designed so that the modest air pressure, acting on a larger clapper area on the piping side, produces a greater closing force than the higher water pressure acting on the smaller supply-side clapper area. When one or more of the automatic sprinkler heads is triggered, it opens, allowing the air in the piping to vent from that sprinkler. Each sprinkler operates independently, as its temperature rises above its triggering threshold. As the air pressure in the piping drops, the pressure differential across the dry pipe valve changes, allowing water to enter the piping system. Water flow from sprinklers, needed to control the fire, is delayed until the air is vented from the sprinklers. In regions using NFPA 13 regulations, the time it takes water to reach the hydraulically most remote sprinkler from the time that sprinkler is activated is limited to a maximum of 60 seconds. In industry practice, this is known as the "Maximum Time of Water Delivery". The maximum time of water delivery may be required to be reduced, depending on the hazard classification of the area protected by the sprinkler system. Disadvantages of using dry pipe fire sprinkler systems include: Increased complexity: Dry pipe systems require additional control equipment and air pressure supply components, which increases system complexity. This puts a premium on proper maintenance, as this increase in system complexity results in an inherently less reliable overall system (i.e. more single points of failure) as compared to a wet pipe system. Higher installation and maintenance costs: The added complexity impacts the overall dry-pipe installation cost and increases maintenance expenditure, including more frequent internal pipe inspections. Increased fire response time: Because the piping is empty at the time the sprinkler operates, there is an inherent time delay in delivering water to the sprinklers which have operated, while the water travels from the riser to the sprinkler, partially filling the piping in the process. A maximum of 60 seconds is normally allowed by regulatory requirements from the time a single sprinkler opens until water is discharged onto the fire. This delay in fire suppression results in a larger fire prior to control, increasing property damage. Increased corrosion potential: Following operation or testing, dry-pipe sprinkler system piping should be drained, but residual water collects in piping low spots, and moisture is also retained in the atmosphere within the piping. This moisture, coupled with the oxygen available in the compressed air in the piping, increases internal pipe corrosion, eventually leading to pin-hole leaks or other piping failures. The internal corrosion rate in wet pipe systems (in which the piping is constantly full of water) is much lower, as the amount of oxygen available for the corrosion process is more limited. 
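The differential principle of the dry pipe valve described above lends itself to a quick force-balance check. The sketch below is illustrative only: the clapper diameters, pressures, and margin are assumed round numbers, not figures for any actual valve.

```python
# Force balance at a dry pipe valve: modest air pressure acting on a
# large clapper area holds back higher water pressure acting on a
# smaller area. All values here are illustrative assumptions.

import math

def clapper_force(pressure_kpa, diameter_m):
    """Force (N) = pressure x circular clapper area."""
    area = math.pi * (diameter_m / 2) ** 2
    return pressure_kpa * 1000 * area

water_pressure = 700          # kPa (~100 psi) supply pressure
air_pressure = 200            # kPa, system air with margin above trip point
d_water, d_air = 0.10, 0.20   # m: the air side exposes ~4x the area

f_water = clapper_force(water_pressure, d_water)
f_air = clapper_force(air_pressure, d_air)

print(f"Water force on clapper: {f_water:.0f} N")   # ~5500 N
print(f"Air force on clapper:   {f_air:.0f} N")     # ~6280 N
print("Valve stays closed" if f_air > f_water else "Valve trips")
```

Because the force scales with the square of the clapper diameter, roughly a quarter of the water pressure in air pressure is enough to hold the valve shut in this geometry; venting air through an opened sprinkler removes that force and trips the valve.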
Corrosion can be combated by using galvanized, copper, or stainless steel pipe, which are less susceptible to corrosion, or by using dry nitrogen gas to pressurize the system rather than air. Nitrogen generators can be used as a permanent source of nitrogen gas, which is beneficial because dry pipe sprinkler systems require an uninterrupted supply of supervisory gas. These additional precautions can increase the up-front cost of the system, but will help prevent system failure, increased maintenance costs, and a premature need for system replacement in the future. Deluge "Deluge" systems are systems in which all sprinklers connected to the water piping system are open, the heat-sensing operating element having been removed. These systems are used for special hazards where rapid fire spread is a concern, as they provide a simultaneous application of water over the entire hazard. Water is not present in the piping until the system operates. Because the sprinkler orifices are open, the piping is at atmospheric pressure. To prevent the water supply pressure from forcing water into the piping, a "deluge valve" (a mechanically latched valve) is used in the water supply connection. It is a non-resetting valve, and stays open once tripped. Because the heat-sensing elements normally present in automatic sprinklers have been removed (resulting in open sprinkler heads), the deluge valve is opened via a signal from the fire alarm system, which utilizes fire detectors. The type of fire alarm initiating device is selected mainly based on the hazard (e.g. pilot sprinklers, smoke detectors, heat detectors, or optical flame detectors). The initiation device signals the fire alarm panel, which in turn signals the deluge valve to open. Activation can also be via an electric or pneumatic fire alarm pull station, which signals the fire alarm panel to signal the deluge valve to open. Pre-action Pre-action sprinkler systems are specialized for use in locations where accidental activation is especially undesirable, such as in museums with rare art works, manuscripts, or books; and data centers, for protection of computer equipment from accidental water discharge. There are two main sub-types of pre-action systems: single interlock and double interlock. The operation of single interlock systems is similar to that of dry systems, except that these systems require that a "preceding" fire detection event, typically the activation of a heat or smoke detector, takes place prior to the "action" of water introduction into the system's piping by opening the pre-action valve, which is a mechanically latched valve (i.e. similar to a deluge valve). In this way, the system is essentially converted from a dry system into a wet system. The intent is to reduce the undesirable time delay of water delivery to sprinklers that is inherent in dry systems. Prior to fire detection, if a sprinkler operates or the piping system develops a leak, loss of air pressure in the piping will activate a trouble alarm. In this case, the pre-action valve will not open due to loss of supervisory pressure, and water will not enter the piping. Double interlock systems require that both activation of a heat or smoke detector and an automatic sprinkler operation take place prior to the "action" of water introduction into the system's piping. Activation of either the fire detectors alone, or sprinklers alone, without the concurrent operation of the other, will not allow water to enter the piping. 
Because water does not enter the piping until a sprinkler operates, double interlock systems are considered dry systems in terms of water delivery times, and similarly require a larger design area. Foam water A foam water fire sprinkler system is a special application system, discharging a mixture of water and low-expansion foam concentrate, resulting in a foam spray from the sprinkler. These systems are usually used with special-hazard occupancies associated with high-challenge fires involving flammable liquids, such as in aircraft hangars. Water spray "Water spray" systems are operationally identical to a deluge system, but the piping and discharge nozzle spray patterns are designed to protect a uniquely configured hazard, usually three-dimensional components or equipment (as opposed to a deluge system, which is designed to cover the horizontal floor area of a room). The nozzles used may not be listed fire sprinklers, and are usually selected for a specific spray pattern to conform to the three-dimensional nature of the hazard (e.g. typical spray patterns being oval, fan, full circle, narrow jet). Examples of hazards protected by water spray systems are electrical transformers containing oil for cooling, and turbo-generator bearings. Water spray systems can also be used externally on the surfaces of tanks containing flammable liquids or gases (such as hydrogen). Here the water spray is intended to cool the tank and its contents to prevent tank rupture/explosion (BLEVE) and fire spread. Water mist Water mist systems are used for special hazards applications. This type of system is typically used where water damage may be a concern, or where water supplies are limited. NFPA 750 defines water mist as a water spray with a droplet size of "less than 1000 microns at the minimum operation pressure of the discharge nozzle". The droplet size can be controlled by adjusting the discharge pressure through a nozzle of a fixed orifice size. The fire suppression mechanisms provided by water mist systems include cooling, local flame oxygen reduction, and radiation blocking. In operation, water mist systems can operate with the same functionality as deluge, wet pipe, dry pipe, or pre-action systems. Systems can be applied using a local application method or a total flooding method, similar to clean agent fire protection systems. Valves Major control and isolation valves in traditional fire sprinkler systems are typically large gate valves of the "outside screw and yoke" (OS&Y) type, sometimes called "rising stem" valves, or butterfly valves. The position (open or closed) of these valves can be determined visually. Alarm sensors may be attached to monitor the settings of these valves, which are critical to overall building safety. Design Sprinkler systems are intended either to control the fire or to suppress the fire. Control-mode sprinklers are intended to control the heat release rate of the fire to prevent building structure collapse, and to pre-wet the surrounding combustibles to prevent fire spread. The fire is not extinguished until the burning combustibles are exhausted or manual extinguishment is effected by firefighters. Suppression-mode sprinklers, also known as Early Suppression Fast Response (ESFR) sprinklers, are intended to result in a severe sudden reduction of the heat release rate of the fire, prior to manual intervention. Control-mode sprinkler systems are designed using an area and density approach. The building use and contents are analyzed to determine the level of fire hazard. 
The hazard is classified as light hazard, ordinary hazard group 1, ordinary hazard group 2, extra hazard group 1, or extra hazard group 2. After determining the hazard classification, a design area and density can be determined by referencing tables in NFPA 13, the National Fire Protection Association standard for the installation of sprinkler systems. The design area represents the maximum area in which sprinklers are expected to operate before fire control is achieved. The design density is a measurement of how much water per square foot of floor area should be applied to the design area. For example, in an office building classified as light hazard, a typical design area would be and the design density would be per or a minimum of applied over the design area. Another example would be a manufacturing facility classified as ordinary hazard group 2, where a typical design area would be and the design density would be per or a minimum of applied over the design area. After the design area and density have been determined, calculations are performed to prove that the system can deliver the required amount of water over the required design area. These calculations account for all of the water pressure that is lost or gained between the water supply source and the sprinklers that would operate in the design area. This includes pressure losses due to friction inside the piping and losses or gains due to elevation differences between the source and the discharging sprinklers. Sometimes momentum pressure from water velocity inside the piping is also calculated. Typically these calculations are performed using computer software, but before the advent of computer systems these sometimes complicated calculations were performed by hand. The skill of calculating sprinkler systems by hand is still required training for a sprinkler system design technologist who seeks senior-level certification from engineering certification organizations such as the National Institute for Certification in Engineering Technologies (NICET). Sprinkler systems in residential structures are becoming more common as the cost of such systems becomes more practical and the benefits become more obvious. Residential sprinkler systems usually fall under a residential classification separate from the commercial classifications mentioned above. A commercial sprinkler system is designed primarily for property protection. Residential sprinkler systems are designed to control a fire for a sufficient time to allow the safe escape of the building occupants. While these systems will often also protect the structure from major fire damage, this is a secondary consideration. In residential structures, sprinklers are often omitted from closets, bathrooms, balconies, garages and attics, because a fire in these areas would not usually impact the occupant's escape route. Costs and effectiveness In 2008, the installed cost of sprinkler systems ranged from US$0.31 to $3.66 per square foot, depending on type and location. Residential systems, installed at the time of initial home construction and utilizing municipal water supplies, average about US$0.35 per square foot. Systems can be installed during initial construction or retrofitted. Some communities have laws requiring residential sprinkler systems, especially where large municipal hydrant water supplies ("fire flows") are not available. Nationwide in the United States, one- and two-family homes generally do not require fire sprinkler systems, although the overwhelming majority of fire deaths occur in these spaces. 
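Returning to the area/density design method described above, the core arithmetic is compact enough to sketch. The design density, design area, pipe size, and C-factor below are assumed illustration values (roughly ordinary-hazard territory), not figures taken from NFPA 13; the friction-loss expression is the standard US-units Hazen–Williams form, with flow in gpm and diameter in inches.

```python
# Sketch of a hydraulic design calculation: required flow is design
# density x design area, and friction loss per foot of pipe follows the
# Hazen-Williams formula used in NFPA-style calculations. The specific
# density, area, pipe size, and C-factor are illustrative assumptions.

def required_flow_gpm(density_gpm_per_ft2, design_area_ft2):
    """Minimum flow = design density x design area."""
    return density_gpm_per_ft2 * design_area_ft2

def hazen_williams_psi_per_ft(q_gpm, c_factor, diameter_in):
    """Friction loss per foot of pipe (Hazen-Williams, US units)."""
    return 4.52 * q_gpm**1.85 / (c_factor**1.85 * diameter_in**4.87)

flow = required_flow_gpm(0.20, 1500)              # 0.20 gpm/ft2 over 1500 ft2
loss = hazen_williams_psi_per_ft(flow, 120, 4.0)  # steel pipe, 4 in diameter

print(f"Required flow: {flow:.0f} gpm")            # 300 gpm
print(f"Friction loss: {loss:.3f} psi per foot of supply main")
```

Summing such per-foot losses over every pipe segment, together with elevation gains and losses, is exactly the bookkeeping that hydraulic calculation software (or the hand method still tested by NICET) performs between the water supply and the remote design area.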
Residential sprinkler systems are inexpensive (about the same per square foot as carpeting or floor tiling), but require larger water supply piping than is normally installed in homes, so retrofitting is usually cost-prohibitive. According to the National Fire Protection Association (NFPA), fires in hotels with sprinklers averaged 78% less damage than fires in hotels without them (1983–1987). The NFPA says the average loss per fire in buildings with sprinklers was $2,300, compared to an average loss of $10,300 in unsprinklered buildings. However, in a purely economic comparison, this is not a complete picture; the total cost of fitting, and the costs arising from non-fire-triggered releases, must be factored in. The NFPA states that it "has no record of a fire killing more than two people in a completely sprinklered building where a sprinkler system was properly operating, except in an explosion or flash fire or where industrial fire brigade members or employees were killed during fire suppression operations." Elsewhere it has stated, "NFPA has no record of a multiple fatality in a fully sprinklered building where the system operated." See also Active fire protection Architectural engineering Fire protection Fire protection engineering Listing and approval use and compliance Passive fire protection Pipe support Sprinkler fitting Victaulic References External links National Fire Protection Association Firefighting equipment Active fire protection Safety equipment Hydraulics Plumbing Piping Fire suppression
Fire sprinkler system
[ "Physics", "Chemistry", "Engineering" ]
5,283
[ "Building engineering", "Chemical engineering", "Plumbing", "Physical systems", "Construction", "Hydraulics", "Mechanical engineering", "Piping", "Fluid dynamics" ]
642,330
https://en.wikipedia.org/wiki/Newtonian%20dynamics
In physics, Newtonian dynamics (also known as Newtonian mechanics) is the study of the dynamics of a particle or a small body according to Newton's laws of motion. Mathematical generalizations Typically, Newtonian dynamics occurs in a three-dimensional Euclidean space, which is flat. However, in mathematics Newton's laws of motion can be generalized to multidimensional and curved spaces. Often the term Newtonian dynamics is narrowed to Newton's second law $m\,\mathbf{a} = \mathbf{F}$. Newton's second law in a multidimensional space Consider $n$ particles with masses $m_1, \ldots, m_n$ in the regular three-dimensional Euclidean space. Let $\mathbf{r}_1, \ldots, \mathbf{r}_n$ be their radius-vectors in some inertial coordinate system. Then the motion of these particles is governed by Newton's second law applied to each of them: $m_i\,\ddot{\mathbf{r}}_i = \mathbf{F}_i(\mathbf{r}_1, \ldots, \mathbf{r}_n, \dot{\mathbf{r}}_1, \ldots, \dot{\mathbf{r}}_n, t), \quad i = 1, \ldots, n. \quad (1)$ The three-dimensional radius-vectors can be built into a single $3n$-dimensional radius-vector. Similarly, the three-dimensional velocity vectors can be built into a single $3n$-dimensional velocity vector: $\mathbf{R} = (\mathbf{r}_1, \ldots, \mathbf{r}_n), \quad \mathbf{V} = (\dot{\mathbf{r}}_1, \ldots, \dot{\mathbf{r}}_n). \quad (2)$ In terms of the multidimensional vectors (2), the equations (1) are written as $\ddot{\mathbf{R}} = \mathbf{F}(\mathbf{R}, \dot{\mathbf{R}}, t), \quad (3)$ i.e. they take the form of Newton's second law applied to a single particle with the unit mass $m = 1$. Definition. The equations (3) are called the equations of a Newtonian dynamical system in a flat multidimensional Euclidean space, which is called the configuration space of this system. Its points are marked by the radius-vector $\mathbf{R}$. The space whose points are marked by the pair of vectors $(\mathbf{R}, \mathbf{V})$ is called the phase space of the dynamical system (3). Euclidean structure The configuration space and the phase space of the dynamical system (3) are both Euclidean spaces, i.e. they are equipped with a Euclidean structure. The Euclidean structure of them is defined so that the kinetic energy of the single multidimensional particle with the unit mass $m = 1$ is equal to the sum of the kinetic energies of the three-dimensional particles with the masses $m_1, \ldots, m_n$: $T = \frac{|\mathbf{V}|^2}{2} = \sum_{i=1}^n \frac{m_i\,|\dot{\mathbf{r}}_i|^2}{2}. \quad (4)$ Constraints and internal coordinates In some cases the motion of the particles with the masses $m_1, \ldots, m_n$ can be constrained. Typical constraints look like scalar equations of the form $\varphi_a(\mathbf{r}_1, \ldots, \mathbf{r}_n) = 0, \quad a = 1, \ldots, K. \quad (5)$ Constraints of the form (5) are called holonomic and scleronomic. In terms of the radius-vector $\mathbf{R}$ of the Newtonian dynamical system (3), they are written as $\varphi_a(\mathbf{R}) = 0, \quad a = 1, \ldots, K. \quad (6)$ Each such constraint reduces by one the number of degrees of freedom of the Newtonian dynamical system (3). Therefore, the constrained system has $N = 3n - K$ degrees of freedom. Definition. The constraint equations (6) define an $N$-dimensional manifold $M$ within the configuration space of the Newtonian dynamical system (3). This manifold $M$ is called the configuration space of the constrained system. Its tangent bundle $TM$ is called the phase space of the constrained system. Let $q^1, \ldots, q^N$ be the internal coordinates of a point of $M$. Their usage is typical for Lagrangian mechanics. The radius-vector $\mathbf{R}$ is expressed as some definite function of $q^1, \ldots, q^N$: $\mathbf{R} = \mathbf{R}(q^1, \ldots, q^N). \quad (7)$ The vector-function (7) resolves the constraint equations (6) in the sense that upon substituting (7) into (6) the equations (6) are fulfilled identically in $q^1, \ldots, q^N$. Internal presentation of the velocity vector The velocity vector of the constrained Newtonian dynamical system is expressed in terms of the partial derivatives of the vector-function (7): $\mathbf{V} = \sum_{i=1}^N \frac{\partial\mathbf{R}}{\partial q^i}\,\dot q^i. \quad (8)$ The quantities $\dot q^1, \ldots, \dot q^N$ are called the internal components of the velocity vector. Sometimes they are denoted with the use of a separate symbol, $w^i = \dot q^i, \quad (9)$ and then treated as independent variables. The quantities $q^1, \ldots, q^N, w^1, \ldots, w^N$ are used as internal coordinates of a point of the phase space $TM$ of the constrained Newtonian dynamical system. 
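The stacking of the per-particle vectors into a single 3n-dimensional vector, equation (2), and the mass-weighted Euclidean structure of equation (4) can be checked numerically. The sketch below uses arbitrary test masses and velocities.

```python
# Numerical check of equations (2) and (4): stack per-particle velocity
# vectors into one 3n-dimensional vector, and define the scalar product
# with mass weights so that |V|^2 / 2 equals the sum of the individual
# kinetic energies. Masses and velocities are arbitrary test data.

import numpy as np

m = np.array([2.0, 0.5, 1.5])                 # masses of n = 3 particles
v = np.array([[1.0, 0.0, 2.0],                # 3D velocity of particle 1
              [0.5, 1.5, 0.0],                # ... particle 2
              [0.0, 2.0, 1.0]])               # ... particle 3

V = v.reshape(-1)                             # the 3n-dimensional velocity

def weighted_dot(a, b, masses):
    """Scalar product of the Euclidean structure: sum_i m_i <a_i, b_i>."""
    return sum(mi * np.dot(ai, bi)
               for mi, ai, bi in zip(masses, a.reshape(-1, 3), b.reshape(-1, 3)))

T_multi = 0.5 * weighted_dot(V, V, m)            # |V|^2 / 2, weighted metric
T_sum = 0.5 * np.sum(m * np.sum(v**2, axis=1))   # sum of m_i |v_i|^2 / 2

assert np.isclose(T_multi, T_sum)
print(T_multi)  # both give 9.375
```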
Embedding and the induced Riemannian metric Geometrically, the vector-function (7) implements an embedding of the configuration space $M$ of the constrained Newtonian dynamical system into the $3n$-dimensional flat configuration space of the unconstrained Newtonian dynamical system (3). Due to this embedding, the Euclidean structure of the ambient space induces a Riemannian metric onto the manifold $M$. The components of the metric tensor of this induced metric are given by the formula $g_{ij} = \left(\frac{\partial\mathbf{R}}{\partial q^i},\,\frac{\partial\mathbf{R}}{\partial q^j}\right), \quad (10)$ where $(\,,\,)$ is the scalar product associated with the Euclidean structure (4). Kinetic energy of a constrained Newtonian dynamical system Since the Euclidean structure of an unconstrained system of $n$ particles is introduced through their kinetic energy, the induced Riemannian structure on the configuration space $M$ of a constrained system preserves this relation to the kinetic energy: $T = \frac{1}{2}\sum_{i=1}^N\sum_{j=1}^N g_{ij}\,w^i\,w^j. \quad (11)$ The formula (11) is derived by substituting (8) into (4) and taking into account (10). Constraint forces For a constrained Newtonian dynamical system, the constraints described by the equations (6) are usually implemented by some mechanical framework. This framework produces some auxiliary forces, including the force $\mathbf{N}$ that maintains the system within its configuration manifold $M$. Such a maintaining force is perpendicular to $M$. It is called the normal force. The force $\mathbf{F}$ from (3) is subdivided into two components: $\mathbf{F} = \mathbf{F}_\parallel + \mathbf{F}_\perp. \quad (12)$ The first component in (12) is tangent to the configuration manifold $M$. The second component is perpendicular to $M$; it is compensated by the normal force $\mathbf{N}$. Like the velocity vector (8), the tangent force $\mathbf{F}_\parallel$ has its internal presentation: $\mathbf{F}_\parallel = \sum_{i=1}^N \frac{\partial\mathbf{R}}{\partial q^i}\,F^i. \quad (13)$ The quantities $F^1, \ldots, F^N$ in (13) are called the internal components of the force vector. Newton's second law in a curved space The Newtonian dynamical system (3), constrained to the configuration manifold $M$ by the constraint equations (6), is described by the differential equations $\ddot q^k + \sum_{i=1}^N\sum_{j=1}^N \Gamma^k_{ij}\,\dot q^i\,\dot q^j = F^k, \quad k = 1, \ldots, N, \quad (14)$ where $\Gamma^k_{ij}$ are the Christoffel symbols of the metric connection produced by the Riemannian metric (10). Relation to Lagrange equations Mechanical systems with constraints are usually described by Lagrange equations: $\frac{d}{dt}\!\left(\frac{\partial T}{\partial w^i}\right) - \frac{\partial T}{\partial q^i} = Q_i, \quad i = 1, \ldots, N, \quad (15)$ where $T = T(q^1, \ldots, q^N, w^1, \ldots, w^N)$ is the kinetic energy of the constrained dynamical system given by the formula (11). The quantities $Q_1, \ldots, Q_N$ in (15) are the inner covariant components of the tangent force vector $\mathbf{F}_\parallel$ (see (12) and (13)). They are produced from the inner contravariant components $F^1, \ldots, F^N$ of the vector $\mathbf{F}_\parallel$ by means of the standard index-lowering procedure using the metric (10): $Q_i = \sum_{j=1}^N g_{ij}\,F^j. \quad (16)$ The equations (15) are equivalent to the equations (14). However, the metric (10) and other geometric features of the configuration manifold $M$ are not explicit in (15). The metric (10) can be recovered from the kinetic energy $T$ by means of the formula $g_{ij} = \frac{\partial^2 T}{\partial w^i\,\partial w^j}. \quad (17)$ See also Modified Newtonian dynamics References Classical mechanics Isaac Newton
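As a worked instance of the machinery above, the sketch below treats a planar pendulum as a constrained system in SymPy: one particle, one internal coordinate θ, the embedding R(θ), the induced metric (10), the kinetic energy (11), the covariant force (16), and the Lagrange equation (15). The single-particle-in-a-plane setup is an assumed illustration, not an example drawn from the article.

```python
# Planar pendulum as a constrained Newtonian system: particle of mass
# m_1, constraint x^2 + y^2 = l^2, internal coordinate q = theta.
# Illustrative sketch of the general formulas (7), (10), (11), (15), (16).

import sympy as sp

t = sp.symbols('t')
m1, l, g = sp.symbols('m_1 l g', positive=True)
theta = sp.Function('theta')(t)

# Embedding R(q), cf. (7): resolves the constraint identically.
R = sp.Matrix([l * sp.sin(theta), -l * sp.cos(theta)])

# Induced metric (10), using the mass-weighted scalar product of (4):
# for a single particle, (a, b) = m_1 * a.b
dR = R.diff(theta)
g11 = sp.simplify(m1 * dR.dot(dR))             # -> m_1 * l**2

# Kinetic energy (11): T = (1/2) g_11 * qdot^2
qdot = theta.diff(t)
T = sp.Rational(1, 2) * g11 * qdot**2

# Covariant generalized force (16): gravity projected onto dR/dq.
F = sp.Matrix([0, -m1 * g])
Q = F.dot(dR)                                   # -> -m_1 g l sin(theta)

# Lagrange equation (15): d/dt(dT/dqdot) - dT/dq = Q
lagrange = sp.Eq(sp.diff(T.diff(qdot), t) - T.diff(theta), Q)
print(sp.simplify(lagrange))   # m_1 l^2 theta'' = -m_1 g l sin(theta)
```

Dividing through by the metric component g_11 = m_1 l² recovers the familiar pendulum equation θ'' = −(g/l) sin θ, which is exactly the curved-space form (14) for this one-dimensional configuration manifold (a circle, whose Christoffel symbol vanishes in the coordinate θ).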
Newtonian dynamics
[ "Physics" ]
1,197
[ "Mechanics", "Classical mechanics" ]
642,886
https://en.wikipedia.org/wiki/Quantum%20well
A quantum well is a potential well with only discrete energy values. The classic model used to demonstrate a quantum well is to confine particles, which were initially free to move in three dimensions, to two dimensions, by forcing them to occupy a planar region. The effects of quantum confinement take place when the quantum well thickness becomes comparable to the de Broglie wavelength of the carriers (generally electrons and holes), leading to energy levels called "energy subbands", i.e., the carriers can only have discrete energy values. The concept of the quantum well was proposed in 1963 independently by Herbert Kroemer and by Zhores Alferov and R.F. Kazarinov. History The semiconductor quantum well was developed in 1970 by Esaki and Tsu, who also invented synthetic superlattices. They suggested that a heterostructure made up of alternating thin layers of semiconductors with different band-gaps should exhibit interesting and useful properties. Since then, much effort and research has gone into studying the physics of quantum well systems as well as developing quantum well devices. The development of quantum well devices is greatly attributed to the advancements in crystal growth techniques. This is because quantum well devices require structures that are of high purity with few defects. Therefore, having great control over the growth of these heterostructures allows for the development of semiconductor devices that can have very fine-tuned properties. Quantum wells and semiconductor physics have been a hot topic in physics research. The development of semiconductor devices using structures made up of multiple semiconductors resulted in Nobel Prizes for Zhores Alferov and Herbert Kroemer in 2000. The theory surrounding quantum well devices has led to significant advancements in the production and efficiency of many modern components such as light-emitting diodes and transistors. Today, such devices are ubiquitous in modern cell phones, computers, and many other computing devices. Fabrication Quantum wells are formed in semiconductors by having a material, like gallium arsenide, sandwiched between two layers of a material with a wider bandgap, like aluminum arsenide. (Other examples: a layer of indium gallium nitride sandwiched between two layers of gallium nitride.) These structures can be grown by molecular beam epitaxy or chemical vapor deposition with control of the layer thickness down to monolayers. Thin metal films can also support quantum well states, in particular, thin metallic overlayers grown on metal and semiconductor surfaces. The vacuum–metal interface confines the electron (or hole) on one side; on the other side, the confinement is in general by an absolute gap with semiconductor substrates, or by a projected band gap with metal substrates. There are three main approaches to growing a QW material system: lattice-matched, strain-balanced, and strained. Lattice-matched system: In a lattice-matched system, the well and the barrier have a lattice constant similar to that of the underlying substrate material. With this method there is minimal dislocation, but also a minimal shift in the absorption spectrum. Strain-balanced system: In a strain-balanced system, the well and barrier are grown so that the increase in lattice constant of one of the layers is compensated by the decrease in lattice constant in the next compared to the substrate material. The choice of thickness and composition of the layers affects bandgap requirements and carrier transport limitations. 
This approach provides the most flexibility in design, offering a high number of periodic QWs with minimal strain relaxation. Strained system: A strained system is grown with wells and barriers that are not similar in lattice constant. A strained system compresses the whole structure. As a result, the structure is only able to accommodate a few quantum wells. Description and overview One of the simplest quantum well systems can be constructed by inserting a thin layer of one type of semiconductor material between two layers of another with a different band-gap. Consider, as an example, two layers of AlGaAs with a large bandgap surrounding a thin layer of GaAs with a smaller band-gap. Let us assume that the change in material occurs along the z-direction and therefore the potential well is along the z-direction (no confinement in the x–y plane). Since the bandgap of the contained material is lower than that of the surrounding AlGaAs, a quantum well (potential well) is created in the GaAs region. This change in band energy across the structure can be seen as the change in the potential that a carrier would feel; therefore low-energy carriers can be trapped in these wells. Within the quantum well, there are discrete energy eigenstates that carriers can have. For example, an electron in the conduction band can have lower energy within the well than it could have in the AlGaAs region of this structure. Consequently, an electron in the conduction band with low energy can be trapped within the quantum well. Similarly, holes in the valence band can also be trapped in the top of potential wells created in the valence band. The states that confined carriers can be in are particle-in-a-box-like states. Physics Quantum wells and quantum well devices are a subfield of solid-state physics that is still extensively studied and researched today. The theory used to describe such systems uses important results from the fields of quantum physics, statistical physics, and electrodynamics. Infinite well model The simplest model of a quantum well system is the infinite well model. The walls/barriers of the potential well are assumed to be infinite in this model. In reality, the depth of real quantum wells is generally of the order of a few hundred millielectronvolts. However, as a first approximation, the infinite well model serves as a simple and useful model that provides some insight into the physics behind quantum wells. Consider an infinite quantum well oriented in the z-direction, such that carriers in the well are confined in the z-direction but free to move in the x–y plane. We choose the quantum well to run from $z = 0$ to $z = d$. We assume that carriers experience no potential within the well and that the potential in the barrier region is infinitely high. The Schrödinger equation for carriers in the infinite well model is: $-\frac{\hbar^2}{2m^*}\frac{\partial^2\psi(z)}{\partial z^2} = E\,\psi(z),$ where $\hbar$ is the reduced Planck constant and $m^*$ is the effective mass of the carriers within the well region. The effective mass of a carrier is the mass that the electron "feels" in its quantum environment and generally differs between different semiconductors, as the value of the effective mass depends heavily on the curvature of the band. Note that $m^*$ can be the effective mass of electrons in a well in the conduction band or of holes in a well in the valence band. Solutions and energy levels The solution wave functions cannot exist in the barrier region of the well, due to the infinitely high potential. 
Therefore, by imposing the boundary conditions $\psi(0) = 0$ and $\psi(d) = 0$, the allowed wave functions are obtained. The solution wave functions take the following form: $\psi_n(z) = A\sin(k_n z), \quad k_n = \frac{n\pi}{d}.$ The subscript $n$ ($n = 1, 2, 3, \ldots$) denotes the integer quantum number, and $k_n$ is the wave vector associated with each state, given above. The associated discrete energies are given by: $E_n = \frac{\hbar^2 k_n^2}{2m^*} = \frac{\hbar^2 n^2\pi^2}{2m^* d^2}.$ The simple infinite well model provides a good starting point for analyzing the physics of quantum well systems and the effects of quantum confinement. The model correctly predicts that the energies in the well are inversely proportional to the square of the length of the well. This means that precise control over the width of the semiconductor layers, i.e. the length of the well, will allow for precise control of the energy levels allowed for carriers in the wells. This is an incredibly useful property for band-gap engineering. Furthermore, the model shows that the energy levels are proportional to the inverse of the effective mass. Consequently, heavy holes and light holes will have different energy states when trapped in the well. Heavy and light holes arise when the maxima of valence bands with different curvature coincide, resulting in two different effective masses. A drawback of the infinite well model is that it predicts many more energy states than exist, as the walls of real quantum wells are finite. The model also neglects the fact that in reality the wave functions do not go to zero at the boundary of the well but 'bleed' into the wall (due to quantum tunneling) and decay exponentially to zero. This property allows for the design and production of superlattices and other novel quantum well devices and is described better by the finite well model. Finite well model The finite well model provides a more realistic model of quantum wells. Here the walls of the well in the heterostructure are modeled using a finite potential $V_0$, which is the difference in the conduction band energies of the different semiconductors. Since the walls are finite, electrons can tunnel into the barrier region; therefore the allowed wave functions penetrate the barrier wall. Consider a finite quantum well oriented in the z-direction, such that carriers in the well are confined in the z-direction but free to move in the x–y plane. We choose the quantum well to run from $z = -\tfrac{d}{2}$ to $z = \tfrac{d}{2}$. We assume that the carriers experience no potential within the well and a potential of $V_0$ in the barrier regions. The Schrödinger equation for carriers within the well is unchanged compared to the infinite well model, except for the boundary conditions at the walls, which now demand that the wave functions and their slopes are continuous at the boundaries. Within the barrier region, the Schrödinger equation for carriers reads: $-\frac{\hbar^2}{2m_b^*}\frac{\partial^2\psi(z)}{\partial z^2} + V_0\,\psi(z) = E\,\psi(z),$ where $m_b^*$ is the effective mass of the carrier in the barrier region, which will generally differ from its effective mass $m_w^*$ within the well. Solutions and energy levels Using the relevant boundary conditions and the condition that the wave function must be continuous at the edge of the well, we get solutions for the wave vector $k$ that satisfy the following transcendental equations: $\tan\!\left(\frac{kd}{2}\right) = \frac{m_w^*\,\kappa}{m_b^*\,k}$ for the symmetric states, and $\tan\!\left(\frac{kd}{2}\right) = -\frac{m_b^*\,k}{m_w^*\,\kappa}$ for the antisymmetric states, where $\kappa$ is the exponential decay constant in the barrier region, which is a measure of how fast the wave function decays to zero in the barrier region. 
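A minimal numerical sketch of the two models just described: the infinite-well energies from the formula above, and the symmetric bound states of the finite well found by locating sign changes of the transcendental equation's residual (κ is computed from the barrier parameters, made explicit in the next paragraph). The GaAs/AlGaAs-like effective masses and the 0.3 eV depth are assumed illustration values.

```python
# Infinite-well levels E_n = (n pi hbar)^2 / (2 m_w d^2) compared with the
# even (symmetric) bound states of a finite well of depth V0, found by
# solving (k/m_w) tan(k d/2) = kappa/m_b numerically. Parameters are
# illustrative GaAs/AlGaAs-like values, not data for a specific structure.

import numpy as np
from scipy.optimize import brentq

HBAR = 1.054_571_8e-34   # J*s
M0 = 9.109_383_7e-31     # kg, free-electron mass
EV = 1.602_176_6e-19     # J per eV

d = 10e-9                # well width: 10 nm
V0 = 0.3                 # well depth in eV (assumed)
m_w = 0.067 * M0         # electron effective mass in the well
m_b = 0.092 * M0         # effective mass in the barrier

def infinite_well(n):
    """Infinite-well energy of level n, in eV."""
    return (n * np.pi * HBAR) ** 2 / (2 * m_w * d**2) / EV

def even_residual(E_eV):
    """Zero when (k/m_w) tan(k d/2) = kappa/m_b at energy E (eV)."""
    k = np.sqrt(2 * m_w * E_eV * EV) / HBAR
    kappa = np.sqrt(2 * m_b * (V0 - E_eV) * EV) / HBAR
    return (k / m_w) * np.tan(k * d / 2) - kappa / m_b

# Between divergences of tan() the residual rises monotonically, so each
# negative-to-positive sign change on a fine grid brackets a bound state.
grid = np.linspace(1e-6, V0 - 1e-6, 5000)
vals = [even_residual(E) for E in grid]
even_states = [brentq(even_residual, a, b)
               for a, b, fa, fb in zip(grid, grid[1:], vals, vals[1:])
               if fa < 0 < fb]

print("infinite well (n = 1, 3):", [f"{infinite_well(n)*1e3:.0f} meV" for n in (1, 3)])
print("finite well (even states):", [f"{E*1e3:.0f} meV" for E in even_states])
```

The even-parity finite-well states correspond to the odd-n infinite-well levels, and each lies below its infinite-well counterpart, illustrating the reduction of the bound energies discussed in the next paragraph.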
The quantized energy eigenstates inside the well, which depend on the wave vector and the quantum number $n$ ($n = 1, 2, 3, \ldots$), are given by: $E_n = \frac{\hbar^2 k_n^2}{2m_w^*}.$ The exponential decay constant is given by: $\kappa = \frac{\sqrt{2m_b^*(V_0 - E_n)}}{\hbar}.$ It depends on the energy $E_n$ of the bound carrier's eigenstate, the depth of the well $V_0$, and the effective mass of the carrier within the barrier region, $m_b^*$. The solutions to the transcendental equations above can easily be found using numerical or graphical methods. There are generally only a few solutions. However, there will always be at least one solution, so one bound state in the well, regardless of how small the potential is. Similar to the infinite well, the wave functions in the well are sinusoidal-like but decay exponentially in the barrier of the well. This has the effect of reducing the bound energy states of the quantum well compared to the infinite well. Superlattices A superlattice is a periodic heterostructure made of alternating materials with different band-gaps. The thickness of these periodic layers is generally of the order of a few nanometers. The band structure that results from such a configuration is a periodic series of quantum wells. It is important that these barriers are thin enough such that carriers can tunnel through the barrier regions of the multiple wells. A defining property of superlattices is that the barriers between wells are thin enough for adjacent wells to couple. Periodic structures made of repeated quantum wells that have barriers that are too thick for adjacent wave functions to couple are called multiple quantum well (MQW) structures. Since carriers can tunnel through the barrier regions between the wells, the wave functions of neighboring wells couple together through the thin barrier; therefore, the electronic states in superlattices form delocalized minibands. Solutions for the allowed energy states in superlattices are similar to those for finite quantum wells, with a change in the boundary conditions that arises due to the periodicity of the structures. Since the potential is periodic, the system can be mathematically described in a similar way to a one-dimensional crystal lattice. Applications Because of their quasi-two-dimensional nature, electrons in quantum wells have a density of states as a function of energy that has distinct steps, versus the smooth square-root dependence that is found in bulk materials. Additionally, the effective mass of holes in the valence band is changed to more closely match that of electrons in the conduction band. These two factors, together with the reduced amount of active material in quantum wells, lead to better performance in optical devices such as laser diodes. As a result, quantum wells are used widely in diode lasers, including red lasers for DVDs and laser pointers, infra-red lasers in fiber optic transmitters, and blue lasers. They are also used to make HEMTs (high-electron-mobility transistors), which are used in low-noise electronics. Quantum well infrared photodetectors are also based on quantum wells and are used for infrared imaging. By doping either the well itself or, preferably, the barrier of a quantum well with donor impurities, a two-dimensional electron gas (2DEG) may be formed. Such a structure creates the conducting channel of a HEMT and has interesting properties at low temperature. One such feature is the quantum Hall effect, seen at high magnetic fields. Acceptor dopants can also lead to a two-dimensional hole gas (2DHG). Saturable absorber A quantum well can be fabricated as a saturable absorber using its saturable absorption property. 
Saturable absorbers are widely used in passively mode-locked lasers. Semiconductor saturable absorbers (SESAMs) were used for laser mode-locking as early as 1974, when p-type germanium was used to mode-lock a CO2 laser, generating pulses of ~500 ps. Modern SESAMs are III–V semiconductor single quantum well (SQW) or multiple quantum well (MQW) structures grown on semiconductor distributed Bragg reflectors (DBRs). They were initially used in a resonant pulse mode-locking (RPM) scheme as starting mechanisms for Ti:sapphire lasers which employed Kerr-lens mode-locking (KLM) as a fast saturable absorber. RPM is another coupled-cavity mode-locking technique. Different from additive-pulse mode-locking (APM) lasers, which employ non-resonant Kerr-type phase nonlinearity for pulse shortening, RPM employs the amplitude nonlinearity provided by the resonant band-filling effects of semiconductors. SESAMs were soon developed into intracavity saturable absorber devices because of the greater inherent simplicity of this structure. Since then, the use of SESAMs has enabled the pulse durations, average powers, pulse energies and repetition rates of ultrafast solid-state lasers to be improved by several orders of magnitude. Average powers of 60 W and repetition rates up to 160 GHz have been obtained. By using SESAM-assisted KLM, sub-6 fs pulses directly from a Ti:sapphire oscillator were achieved. A major advantage SESAMs have over other saturable absorber techniques is that absorber parameters can be easily controlled over a wide range of values. For example, saturation fluence can be controlled by varying the reflectivity of the top reflector, while modulation depth and recovery time can be tailored by changing the low-temperature growing conditions for the absorber layers. This freedom of design has further extended the application of SESAMs into the mode-locking of fibre lasers, where a relatively high modulation depth is needed to ensure self-starting and operation stability. Fibre lasers working at ~1 μm and 1.5 μm were successfully demonstrated. Thermoelectrics Quantum wells have shown promise for energy harvesting as thermoelectric devices. They are claimed to be easier to fabricate and offer the potential to operate at room temperature. The wells connect a central cavity to two electronic reservoirs. The central cavity is kept at a hotter temperature than the reservoirs. The wells act as filters that allow electrons of certain energies to pass through. In general, greater temperature differences between the cavity and the reservoirs increase electron flow and output power. An experimental device delivered output power of about 0.18 W/cm² for a temperature difference of 1 K, nearly double the power of a quantum dot energy harvester. The extra degrees of freedom allowed larger currents. Its efficiency is slightly lower than that of quantum dot energy harvesters. Quantum wells transmit electrons of any energy above a certain level, while quantum dots pass only electrons of a specific energy. One possible application is to convert waste heat from electric circuits, e.g. in computer chips, back into electricity, reducing the need for cooling and energy to power the chip. Solar cells Quantum wells have been proposed to increase the efficiency of solar cells. The theoretical maximum efficiency of traditional single-junction cells is about 34%, due in large part to their inability to capture many different wavelengths of light. 
Multi-junction solar cells, which consist of multiple p-n junctions of different bandgaps connected in series, increase the theoretical efficiency by broadening the range of absorbed wavelengths, but their complexity and manufacturing cost limit their use to niche applications. On the other hand, cells consisting of a p–i–n junction in which the intrinsic region contains one or more quantum wells lead to an increased photocurrent over dark current, resulting in a net efficiency increase over conventional p–n cells. Photons of energy within the well depth are absorbed in the wells and generate electron–hole pairs. In room-temperature conditions, these photo-generated carriers have sufficient thermal energy to escape the well faster than the recombination rate. Elaborate multi-junction quantum well solar cells can be fabricated using layer-by-layer deposition techniques such as molecular beam epitaxy or chemical vapor deposition. It has also been shown that metal or dielectric nanoparticles added above the cell lead to further increases in photo-absorption by scattering incident light into lateral propagation paths confined within the multiple-quantum-well intrinsic layer. Single-junction solar cells With conventional single-junction photovoltaic solar cells, the power generated is the product of the photocurrent and the voltage across the diode. As semiconductors only absorb photons with energies higher than their bandgap, smaller-bandgap material absorbs more of the sun's radiative spectrum, resulting in a larger current. The highest open-circuit voltage achievable is the built-in bandgap of the material. Because the bandgap of the semiconductor determines both the current and the voltage, designing a solar cell is always a trade-off between maximizing current output with a low bandgap and voltage output with a high bandgap. The maximum theoretical limit of efficiency for conventional solar cells is determined to be only 31%, with the best silicon devices achieving an optimal limit of 25%. With the introduction of quantum wells (QWs), the efficiency limit of single-junction strained QW silicon devices has increased to 28.3%. The increase is due to the bandgap of the barrier material determining the built-in voltage, whereas the bandgap of the QWs now determines the absorption limit. With their experiments on p–i–n junction photodiodes, Barnham's group showed that placing QWs in the depleted region increases the efficiency of a device. Researchers infer that the resulting increase indicates that the generation of new carriers and photocurrent due to the inclusion of lower energies in the absorption spectrum outweighs the drop in terminal voltage resulting from the recombination of carriers trapped in the quantum wells. Further studies have been able to conclude that the photocurrent increase is directly related to the redshift of the absorption spectrum. Multi-junction solar cells Nowadays, among non-QW solar cells, III/V multi-junction solar cells are the most efficient, recording a maximum efficiency of 46% under high sunlight concentrations. Multi-junction solar cells are created by stacking multiple p-i-n junctions of different bandgaps. The efficiency of the solar cell increases with the inclusion of more of the solar radiation in the absorption spectrum, achieved by introducing more QWs of different bandgaps. The direct relation between the bandgap and lattice constant hinders the advancement of multi-junction solar cells. 
As more quantum wells (QWs) are grown together, the material grows with dislocations due to the varying lattice constants. Dislocations reduce the diffusion length and carrier lifetime. Hence, QWs provide an alternate approach to multi-junction solar cells with minimal crystal dislocation. Bandgap energy Researchers are looking to use QWs to grow high-quality material with minimal crystal dislocations, and to increase the efficiency of light absorption and carrier collection, in order to realize higher-efficiency QW solar cells. Bandgap tunability helps researchers with designing their solar cells. The effective bandgap can be estimated as a function of the bandgap energy of the QW and the shifts in bandgap energy due to the strain, the quantum-confined Stark effect (QCSE), and the quantum size effect (QSE). The strain of the material causes two effects on the bandgap energy. First is the change in the relative energy of the conduction and valence band. This energy change is affected by the strain, $\epsilon$, the elastic stiffness coefficients, $C_{11}$ and $C_{12}$, and the hydrostatic deformation potential, $a$. Second, due to the strain, there is a splitting of the heavy-hole and light-hole degeneracy. In a heavily compressed material, the heavy holes (hh) move to a higher energy state. In tensile material, light holes (lh) move to a higher energy state. One can calculate the difference in energy due to the splitting of hh and lh from the shear deformation potential, $b$, the strain, $\epsilon$, and the elastic stiffness coefficients, $C_{11}$ and $C_{12}$. The quantum-confined Stark effect induces a well-thickness-dependent shift in the bandgap; it can be expressed in terms of the elementary charge $q$; the effective widths of the QWs in the conduction and valence band, $w_{CB}$ and $w_{VB}$, respectively; the induced electric field $F$ due to piezoelectric and spontaneous polarization; and the reduced Planck constant $\hbar$. The quantum size effect (QSE) is the discretization of energy a charge carrier undergoes due to confinement when its Bohr radius is larger than the size of the well. As the quantum well thickness increases, QSEs decrease. The decrease in QSEs causes the lowest confined state to move down in energy and thus decreases the effective bandgap. The Kronig–Penney model is used to calculate the quantum states, and Anderson's rule is applied to estimate the conduction band and valence band offsets in energy. Carrier capture and lifetime With the effective use of carriers in the QWs, researchers can increase the efficiency of quantum well solar cells (QWSCs). Within the QWs in the intrinsic region of p-i-n solar cells, optically generated carriers are either collected by the built-in field or lost through carrier recombination. Carrier recombination is the process in which a hole and an electron recombine to cancel their charges. Carriers can be collected through drift by the electric field. One can either use thin wells and transport carriers via thermionic emission, or use thin barriers and transport carriers via tunneling. The carrier lifetime for escape is determined by the tunneling and thermionic emission lifetimes. Both depend on having a low effective barrier height, and both are expressed in terms of the effective masses of the charge carriers in the barrier and the well, $m_b^*$ and $m_w^*$; the effective barrier height $E_B$; and the electric field $F$. The escape lifetime is then obtained by combining the tunneling and thermionic emission rates. The total probability of minority carriers escaping from the QWs is built up from the escape probability of each well. 
Here, the per-well escape probability depends on the ratio of the escape lifetime to $\tau_{rec}$, the recombination lifetime, compounded over $N$, the total number of QWs in the intrinsic region. When the escape lifetime is much shorter than the recombination lifetime, there is a high probability of carrier collection. Assumptions made in this method of modeling are that each carrier crosses all $N$ QWs, whereas in reality they cross different numbers of QWs, and that carrier capture is 100%, which may not be true in high-background-doping conditions. For example, taking In0.18Ga0.82As (125 Å)/GaAs0.36P0.64 (40 Å) into consideration, the tunneling and thermionic emission lifetimes are 0.89 and 1.84, respectively. Even if a recombination time of 50 ns is assumed, the escape probabilities of a single quantum well and of 100 quantum wells are 0.984 and 0.1686, which is not sufficient for efficient carrier capture. Reducing the barrier thickness to 20 ångströms reduces the escape lifetime to 4.1276 ps, increasing the escape probability over 100 QWs to 0.9918, indicating that using thin barriers is essential for more efficient carrier collection. Sustainability of quantum well devices compared to bulk material in light of performance In the 1.1–1.3 eV range, Sayed et al. compare the external quantum efficiency (EQE) of a metamorphic InGaAs bulk subcell on Ge substrates by Spectrolab to a 100-period In0.30Ga0.70As(3.5 nm)/GaAs(2.7 nm)/GaAs0.60P0.40(3.0 nm) QWSC by Fuji et al. The bulk material shows higher EQE values than those of the QWs in the 880–900 nm region, whereas the QWs have higher EQE values in the 400–600 nm range. This result provides some evidence that extending the QWs' absorption threshold to longer wavelengths is a struggle, due to strain-balance and carrier-transport issues. However, the bulk material has more deformations, leading to low minority carrier lifetimes. In the 1.6–1.8 eV range, the lattice-matched AlGaAs by Heckelman et al. and the InGaAsP by Jain et al. are compared by Sayed with the lattice-matched InGaAsP/InGaP QW structure by Sayed et al. As in the 1.1–1.3 eV range, the EQE of the bulk material is higher in the longer-wavelength region of the spectrum, but the QWs are advantageous in the sense that they absorb a broader region of the spectrum. Furthermore, they can be grown at lower temperatures, preventing thermal degradation. The application of quantum wells in many devices is a viable solution to increasing the energy efficiency of such devices. With lasers, the improvement has already led to significant results like the LED. With QWSCs, harvesting energy from the sun becomes a more potent method of cultivating energy, as QWSCs can absorb more of the sun's radiation and can capture such energy from the charge carriers more efficiently. A viable option such as QWSCs provides the public with an opportunity to move away from greenhouse-gas-producing methods to a greener alternative, solar energy. See also Modulating retro-reflector Quantum dot, carriers confined in all three dimensions. Quantum well laser Quantum wire, carriers confined in two dimensions. Particle in a box Finite potential well References Further reading Thomas Engel, Philip Reid, Quantum Chemistry and Spectroscopy. Pearson Education, 2006. Pages 73–75. Quantum mechanical potentials Quantum electronics Semiconductor structures
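The carrier-escape example above can be checked under a simple model: assume the per-well collection probability is τ_rec/(τ_rec + τ_esc) and that crossing N wells compounds it N times. Both the model and the nanosecond units assumed for the thick-barrier lifetimes are our reading of the example, not something stated in the text; with τ_esc = 4.1276 ps this reproduces the quoted 0.9918 for 100 wells, and with τ_esc ≈ 0.9 ns it lands near the quoted 0.1686.

```python
# Escape probability across N quantum wells under a simple compounding
# model: p_well = tau_rec / (tau_rec + tau_esc), p_total = p_well ** N.
# The model and the unit assumptions are illustrative, not from the text.

def escape_probability(tau_esc_s, tau_rec_s, n_wells):
    p_well = tau_rec_s / (tau_rec_s + tau_esc_s)
    return p_well, p_well ** n_wells

TAU_REC = 50e-9   # 50 ns recombination time, as assumed in the example

# Thick-barrier case: escape lifetime around a nanosecond (units assumed)
p1, p100 = escape_probability(0.9e-9, TAU_REC, 100)
print(f"thick barriers: single well {p1:.3f}, 100 wells {p100:.3f}")  # ~0.17

# Thin-barrier case quoted in the text: tau_esc = 4.1276 ps
p1, p100 = escape_probability(4.1276e-12, TAU_REC, 100)
print(f"thin barriers:  single well {p1:.5f}, 100 wells {p100:.4f}")  # 0.9918
```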
Quantum well
[ "Physics", "Materials_science" ]
5,565
[ "Quantum electronics", "Quantum mechanics", "Quantum mechanical potentials", "Condensed matter physics", "Nanotechnology" ]
642,896
https://en.wikipedia.org/wiki/Thermoacoustics
Thermoacoustics is the interaction between temperature, density and pressure variations of acoustic waves. Thermoacoustic heat engines can readily be driven using solar energy or waste heat, and they can be controlled using proportional control. They can use heat available at low temperatures, which makes them ideal for heat recovery and low-power applications. The components included in thermoacoustic engines are usually very simple compared to conventional engines. The device can easily be controlled and maintained. Thermoacoustic effects can be observed when partly molten glass tubes are connected to glass vessels. Sometimes a loud and monotone sound is produced spontaneously. A similar effect is observed if one side of a stainless steel tube is at room temperature (293 K) and the other side is in contact with liquid helium at 4.2 K. In this case, spontaneous oscillations are observed which are named "Taconis oscillations". The mathematical foundation of thermoacoustics was laid by Nikolaus Rott. Later, the field was inspired by the work of John Wheatley, and of Swift and his co-workers. Technologically, thermoacoustic devices have the advantage that they have no moving parts, which makes them attractive for applications where reliability is of key importance. Historical review of thermoacoustics Thermoacoustically induced oscillations have been observed for centuries. Glass blowers produced heat-generated sound when blowing a hot bulb at the end of a cold narrow tube. This phenomenon has also been observed in cryogenic storage vessels, where oscillations are induced by the insertion of a hollow tube, open at the bottom end, into liquid helium (the Taconis oscillations); without a heat removal system, however, the temperature gradient diminishes and the acoustic wave weakens and then stops completely. Byron Higgins made the first scientific observation of the conversion of heat energy into acoustic oscillations. He investigated the "singing flame" phenomenon produced by a portion of a hydrogen flame in a tube with both ends open. Physicist Pieter Rijke took this phenomenon to a larger scale by using a heated wire screen to induce strong oscillations in his Rijke tube. Feldman mentioned in his related review that a convective air current through the pipe is the main inducer of this phenomenon. The oscillations are strongest when the screen is at one fourth of the tube length. Research performed by Sondhauss in 1850 is known to be the first to approximate the modern concept of thermoacoustic oscillation. Sondhauss experimentally investigated the oscillations related to glass blowing. Sondhauss observed that the sound frequency and intensity depend on the length and volume of the bulb. Lord Rayleigh gave a qualitative explanation of the Sondhauss thermoacoustic oscillation phenomenon, in which he stated that producing any type of thermoacoustic oscillation requires that a criterion be met: "If heat be given to the air at the moment of greatest condensation or taken from it at the moment of greatest rarefaction, the vibration is encouraged". This shows that he related thermoacoustics to the interplay of density variations and heat injection. The formal theoretical study of thermoacoustics started in 1949, when Kramers generalized the Kirchhoff theory of the attenuation of sound waves at constant temperature to the case of attenuation in the presence of a temperature gradient. Rott made a breakthrough in the study and modeling of thermoacoustic phenomena by developing a successful linear theory. 
After that, the acoustical part of thermoacoustics was placed in a broad thermodynamic framework by Swift. Sound Usually sound is understood in terms of pressure variations accompanied by an oscillating motion of a medium (gas, liquid or solid). Thermoacoustic machines rely more on the temperature-position variations than on the usual pressure-velocity variations. The sound level of ordinary speech is about 65 dB. The pressure variations are then about 0.05 Pa, the displacements about 0.2 μm, and the temperature variations about 40 μK. So the thermal effects of sound cannot be observed in daily life. However, at sound levels of 180 dB, which are normal in thermoacoustic systems, the pressure variations are 30 kPa, the displacements more than 10 cm, and the temperature variations 24 K. A full theory of thermoacoustics should account for the propagation of heat in the fluid as it undergoes compression cycles during the propagation of the sound wave. Good insights can, however, be gained by making the usual assumption of adiabatic compression. Even if no heat is exchanged during adiabatic compression, the temperature of the fluid does change, and this indicates the correct direction of heat flow. Under the adiabatic approximation, the one-dimensional wave equation for sound reads ∂²v/∂t² = c² ∂²v/∂x², with t the time, v the gas velocity, x the position, and c the sound velocity given by c² = γp0/ρ0. For an ideal gas, c² = γRT0/M with M the molar mass. In these expressions, p0, T0, and ρ0 are the average pressure, temperature, and density respectively. For monochromatic plane waves, with angular frequency ω and with ω = kc, the solution can be written as the superposition of a right-running and a left-running wave, v = vA cos(ωt − kx) + vB cos(ωt + kx). The pressure variations are then given by δp = ρ0c [vA cos(ωt − kx) − vB cos(ωt + kx)]. The deviation δx of a gas particle with equilibrium position x is given by δx = (vA/ω) sin(ωt − kx) + (vB/ω) sin(ωt + kx) (1) and the temperature variations, which follow the pressure adiabatically, are δT = (γ − 1)T0 δp/(γp0). (2) The last two equations form a parametric representation of a tilted ellipse in the δT – δx plane with t as the parameter. If vA = vB, we are dealing with a pure standing wave. Figure 1a gives the dependence of the velocity and position amplitudes (red curve) and the pressure and temperature amplitudes (blue curve) for this case. The ellipse in the δT – δx plane is reduced to a straight line, as shown in Fig. 1b. At the tube ends δx = 0, so the δT – δx plot is a vertical line there. In the middle of the tube the pressure and temperature variations are zero, so we have a horizontal line. It can be shown that the average power transported by the sound is given by P = (γp0A/2c)(vA² − vB²), where γ is the ratio of the gas specific heat at fixed pressure to the specific heat at fixed volume and A is the area of the cross section of the sound duct. Since in a standing wave vA = vB, the average energy transport is zero. If vA = 0 or vB = 0, we have a pure traveling wave. In this case, Eqs. (1) and (2) represent circles in the δT – δx diagram, as shown in Fig. 1c, which applies to a pure traveling wave to the right. The gas moves to the right with a high temperature and back with a low temperature, so there is a net transport of energy. Penetration depths The thermoacoustic effect inside the stack takes place mainly in the region that is close to the solid walls of the stack. Layers of gas too far away from the stack walls experience adiabatic oscillations in temperature that result in no heat transfer to or from the walls, which is undesirable. Therefore, an important characteristic of any thermoacoustic element is the value of the thermal and viscous penetration depths. 
The thermal penetration depth δκ is the thickness of the layer of gas through which heat can diffuse during half a cycle of the oscillation. The viscous penetration depth δν is the thickness of the layer in which viscous effects are significant near the boundaries. In the case of sound, the characteristic length for thermal interaction is given by the thermal penetration depth δκ = √(2κVm/(ωCp)). Here κ is the thermal conductivity, Vm the molar volume, and Cp the molar heat capacity at constant pressure. Viscous effects are determined by the viscous penetration depth δν = √(2η/(ωρ)), with η the gas viscosity and ρ its density. The Prandtl number of the gas is defined as Pr = ν/α, the ratio of the kinematic viscosity ν = η/ρ to the thermal diffusivity α = κVm/Cp. The two penetration depths are related by δν = δκ√Pr. For many working fluids, like air and helium, Pr is of order 1, so the two penetration depths are about equal. For helium at normal temperature and pressure, Pr ≈ 0.66. For typical sound frequencies the thermal penetration depth is about 0.1 mm. That means that the thermal interaction between the gas and a solid surface is limited to a very thin layer near the surface. The effectiveness of thermoacoustic devices is increased by putting a large number of plates (with a plate spacing of a few times the thermal penetration depth) in the sound field, forming a stack. Stacks play a central role in so-called standing-wave thermoacoustic devices. Thermoacoustic systems Acoustic oscillations in a medium are a set of time-dependent properties which may transfer energy along their path. Along the path of an acoustic wave, pressure and density are not the only time-dependent properties; entropy and temperature vary as well. The temperature changes along the wave can be exploited to play the intended role in the thermoacoustic effect. The interplay of heat and sound can be used in both directions of conversion: the effect can be used to produce acoustic oscillations by supplying heat to the hot side of a stack, and sound oscillations can be used to induce a refrigeration effect by supplying a pressure wave inside a resonator where a stack is located. In a thermoacoustic prime mover, a high temperature gradient along a tube containing a gas induces density variations. Such variations in a constant volume of matter force changes in pressure. The cycle of thermoacoustic oscillation is a combination of heat transfer and pressure changes in a sinusoidal pattern. Self-induced oscillations can be encouraged, according to Lord Rayleigh, by the appropriate phasing of heat transfer and pressure changes. Standing-wave systems The thermoacoustic engine (TAE) is a device that converts heat energy into work in the form of acoustic energy. A thermoacoustic engine operates using the effects that arise from the resonance of a standing wave in a gas. A standing-wave thermoacoustic engine typically has a thermoacoustic element called the "stack". A stack is a solid component with pores that allow the operating gas to oscillate while in contact with the solid walls. The oscillation of the gas is accompanied by changes in its temperature. Due to the introduction of solid walls into the oscillating gas, the plate modifies the original, unperturbed temperature oscillations of the gas in both magnitude and phase out to about a thermal penetration depth δ = √(2k/ω) away from the plate, where k is the thermal diffusivity of the gas and ω = 2πf is the angular frequency of the wave. The thermal penetration depth is defined as the distance that heat can diffuse through the gas during a time 1/ω. In air oscillating at 1000 Hz, the thermal penetration depth is about 0.1 mm. 
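As a rough numerical illustration of the penetration-depth formulas above, the sketch below evaluates δκ, δν and the Prandtl number for air at an audio frequency. The gas property values and the 1 kHz frequency are typical handbook numbers assumed here for illustration, not data taken from the article.

```python
import math

# Assumed properties of air at roughly 300 K and 1 atm (typical handbook values).
kappa = 0.026      # thermal conductivity, W/(m K)
rho   = 1.2        # density, kg/m^3
cp    = 1005.0     # specific heat at constant pressure, J/(kg K)
eta   = 1.8e-5     # dynamic viscosity, Pa s
f     = 1000.0     # sound frequency, Hz
omega = 2 * math.pi * f

alpha = kappa / (rho * cp)                   # thermal diffusivity, m^2/s
nu    = eta / rho                            # kinematic viscosity, m^2/s

delta_kappa = math.sqrt(2 * alpha / omega)   # thermal penetration depth, m
delta_nu    = math.sqrt(2 * nu / omega)      # viscous penetration depth, m
prandtl     = nu / alpha                     # Prandtl number

print(f"delta_kappa = {delta_kappa * 1e3:.3f} mm")   # about 0.08 mm at 1 kHz
print(f"delta_nu    = {delta_nu * 1e3:.3f} mm")
print(f"Pr = {prandtl:.2f}, sqrt(Pr) = {math.sqrt(prandtl):.2f}  (ratio delta_nu/delta_kappa)")
```

With these assumed values the script gives a thermal penetration depth of roughly 0.1 mm and a Prandtl number of about 0.7, consistent with the order-of-magnitude figures quoted in this section.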
A standing-wave TAE must be supplied with the heat necessary to maintain the temperature gradient across the stack. This is done by two heat exchangers, one on each side of the stack. If we put a thin horizontal plate in the sound field, the thermal interaction between the oscillating gas and the plate leads to thermoacoustic effects. If the thermal conductivity of the plate material were zero, the temperature in the plate would exactly match the temperature profile shown in Fig. 1b. Consider the blue line in Fig. 1b as the temperature profile of a plate at that position. The temperature gradient in the plate would then be equal to the so-called critical temperature gradient. If we fix the temperature at the left side of the plate at ambient temperature Ta (e.g. using a heat exchanger), then the temperature at the right will be below Ta. In other words: we have produced a cooler. This is the basis of thermoacoustic cooling, as shown in Fig. 2b, which represents a thermoacoustic refrigerator. It has a loudspeaker at the left. The system corresponds to the left half of Fig. 1b, with the stack in the position of the blue line. Cooling is produced at temperature TL. It is also possible to fix the temperature of the right side of the plate at Ta and heat up the left side so that the temperature gradient in the plate is larger than the critical temperature gradient. In that case we have made an engine (prime mover) which can, for example, produce sound as in Fig. 2a. This is a so-called thermoacoustic prime mover. Stacks can be made of stainless steel plates, but the device also works very well with loosely packed stainless steel wool or screens. It is heated at the left, e.g. by a propane flame, and heat is released to ambient temperature by a heat exchanger. If the temperature at the left side is high enough, the system starts to produce a loud sound. Thermoacoustic engines still suffer from some limitations, including the following: The device usually has a low power-to-volume ratio. Very high densities of operating fluids are required to obtain high power densities. The commercially available linear alternators used to convert acoustic energy into electricity currently have low efficiencies compared to rotary electric generators, and only expensive, specially made alternators give satisfactory performance. A TAE uses gases at high pressures to provide reasonable power densities, which imposes sealing challenges, particularly if the mixture contains light gases like helium. The heat-exchange process in a TAE is critical to maintaining the power conversion process: the hot heat exchanger has to transfer heat to the stack and the cold heat exchanger has to sustain the temperature gradient across the stack, yet the available space for them is constrained by the small size and by the blockage they add to the path of the wave. The heat-exchange process in oscillating media is still under extensive research. The acoustic waves inside thermoacoustic engines operated at large pressure ratios suffer from many kinds of non-linearities, such as turbulence, which dissipates energy through viscous effects, and harmonic generation, which carries acoustic power at frequencies other than the fundamental frequency. The performance of thermoacoustic engines is usually characterized through several indicators, as follows: The first- and second-law efficiencies. The onset temperature difference, defined as the minimum temperature difference across the sides of the stack at which the dynamic pressure is generated. 
The frequency of the resultant pressure wave, since this frequency should match the resonance frequency required by the load device, either a thermoacoustic refrigerator/heat pump or a linear alternator. The degree of harmonic distortion, indicating the ratio of higher harmonics to the fundamental mode in the resulting dynamic pressure wave. The variation of the resultant wave frequency with the TAE operating temperature Travelling-wave systems Figure 3 is a schematic drawing of a travelling-wave thermoacoustic engine. It consists of a resonator tube and a loop which contains a regenerator, three heat exchangers, and a bypass loop. A regenerator is a porous medium with a high heat capacity. As the gas flows back and forth through the regenerator, it periodically stores and takes up heat from the regenerator material. In contrast to the stack, the pores in the regenerator are much smaller than the thermal penetration depth, so the thermal contact between gas and material is very good. Ideally, the energy flow in the regenerator is zero, so the main energy flow in the loop is from the hot heat exchanger via the pulse tube and the bypass loop to the heat exchanger at the other side of the regenerator (main heat exchanger). The energy in the loop is transported via a travelling wave as in Fig. 1c, hence the name travelling-wave systems. The ratio of the volume flows at the ends of the regenerator is TH/Ta, so the regenerator acts as a volume-flow amplifier. Just like in the case of the standing-wave system, the machine "spontaneously" produces sound if the temperature TH is high enough. The resulting pressure oscillations can be used in a variety of ways, such as in producing electricity, cooling, and heat pumping. See also Cryocooler Photoacoustic effect Thermoelectric cooling Pyrophone Thermophone References External links Thermoacoustic research at Los Alamos National Laboratory M. Emam, Experimental Investigations on a Standing-Wave Thermoacoustic Engine, M.Sc. Thesis, Cairo University, Egypt (2013) M.E.H. Tijani, Loudspeaker-driven thermo-acoustic refrigeration, Ph.D. Thesis, Technische Universiteit Eindhoven, (2001) Design Environment for Low-amplitude ThermoAcoustic Energy Conversion Acoustics Heat transfer Energy conversion pt:Refrigeração termoacústica th:อุณหสวนศาสตร์
Thermoacoustics
[ "Physics", "Chemistry" ]
3,557
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Classical mechanics", "Acoustics", "Thermodynamics" ]
642,903
https://en.wikipedia.org/wiki/Photomask
A photomask (also simply called a mask) is an opaque plate with transparent areas that allow light to shine through in a defined pattern. Photomasks are commonly used in photolithography for the production of integrated circuits (ICs or "chips") to produce a pattern on a thin wafer of material (usually silicon). In semiconductor manufacturing, a mask is sometimes called a reticle. In photolithography, several masks are used in turn, each one reproducing a layer of the completed design; together they are known as a mask set. A curvilinear photomask has patterns with curves, which is a departure from conventional photomasks, whose patterns are made up only of completely vertical or horizontal segments, known as Manhattan geometry. Curvilinear photomasks require special equipment to manufacture. History For IC production in the 1960s and early 1970s, an opaque rubylith film laminated onto a transparent mylar sheet was used. The design of one layer was cut into the rubylith, initially by hand on an illuminated drafting table and later by machine (plotter), and the unwanted rubylith was peeled off by hand, forming the master image of that layer of the chip, often called "artwork". Increasingly complex and thus larger chips required larger and larger rubyliths, eventually even filling the wall of a room, and the artwork had to be photographically reduced to produce photomasks. (Eventually this whole process was replaced by the optical pattern generator, which produced the master image directly.) At this point the master image could be arrayed into a multi-chip image called a reticle. The reticle was originally a 10X-magnified image of a single chip. Using step-and-repeat photolithography and etching, the reticle was used to produce a photomask whose image size was the same as that of the final chip. The photomask might be used directly in the fab or serve as a master photomask from which the final working photomasks were produced. As feature sizes shrank, the only way to properly focus the image was to place it in direct contact with the wafer. These contact aligners often lifted some of the photoresist off the wafer and onto the photomask, which then had to be cleaned or discarded. This drove the adoption of reverse master photomasks (see above), which were used to produce (with contact photolithography and etching) the many working photomasks that were needed. Later, projection photolithography meant that photomask lifetime was effectively indefinite. Still later, direct step-on-wafer stepper photolithography used reticles directly and ended the use of full-wafer photomasks. Photomask materials changed over time. Initially soda glass was used, with silver halide providing the opacity. Later, borosilicate glass and then fused silica were introduced to control thermal expansion, along with chromium, which has better opacity to ultraviolet light. The original pattern generators have since been replaced by electron beam lithography and laser-driven mask writers or maskless lithography systems, which generate reticles directly from the original computerized design. Overview Lithographic photomasks are typically transparent fused silica plates covered with a pattern defined with a chromium (Cr) or Fe2O3 metal absorbing film. Photomasks are used at wavelengths of 365 nm, 248 nm, and 193 nm. Photomasks have also been developed for other forms of radiation such as 157 nm, 13.5 nm (EUV), X-ray, electrons, and ions; but these require entirely new materials for the substrate and the pattern film. 
A set of photomasks, each defining a pattern layer in integrated circuit fabrication, is fed into a photolithography stepper or scanner, and individually selected for exposure. In multi-patterning techniques, a photomask would correspond to a subset of the layer pattern. Historically in photolithography for the mass production of integrated circuit devices, there was a distinction between the term photoreticle or simply reticle, and the term photomask. In the case of a photomask, there is a one-to-one correspondence between the mask pattern and the wafer pattern. The mask covered the entire surface of the wafer which was exposed in its entirety in one shot. This was the standard for the 1:1 mask aligners that were succeeded by steppers and scanners with reduction optics. As used in steppers and scanners which use image projection, the reticle commonly contains only one copy, also called one layer of the designed VLSI circuit. (However, some photolithography fabrications utilize reticles with more than one layer placed side by side onto the same mask, used as copies to create several identical integrated circuits from one photomask). In modern usage, the terms reticle and photomask are synonymous. In a modern stepper or scanner, the pattern in the photomask is projected and shrunk by four or five times onto the wafer surface. To achieve complete wafer coverage, the wafer is repeatedly "stepped" from position to position under the optical column or the stepper lens until full exposure of the wafer is achieved. A photomask with several copies of the integrated circuit design is used to reduce the number of steppings required to expose the entire wafer, thus increasing productivity. Features 150 nm or below in size generally require phase-shifting to enhance the image quality to acceptable values. This can be achieved in many ways. The two most common methods are to use an attenuated phase-shifting background film on the mask to increase the contrast of small intensity peaks, or to etch the exposed quartz so that the edge between the etched and unetched areas can be used to image nearly zero intensity. In the second case, unwanted edges would need to be trimmed out with another exposure. The former method is attenuated phase-shifting, and is often considered a weak enhancement, requiring special illumination for the most enhancement, while the latter method is known as alternating-aperture phase-shifting, and is the most popular strong enhancement technique. As leading-edge semiconductor features shrink, photomask features that are 4× larger must inevitably shrink as well. This could pose challenges since the absorber film will need to become thinner, and hence less opaque. A 2005 study by IMEC found that thinner absorbers degrade image contrast and therefore contribute to line-edge roughness, using state-of-the-art photolithography tools. One possibility is to eliminate absorbers altogether and use "chromeless" masks, relying solely on phase-shifting for imaging. The emergence of immersion lithography has a strong impact on photomask requirements. The commonly used attenuated phase-shifting mask is more sensitive to the higher incidence angles applied in "hyper-NA" lithography, due to the longer optical path through the patterned film. During manufacturing, inspection using a special form of microscopy called CD-SEM (Critical-Dimension Scanning Electron Microscopy) is used to measure critical dimensions on photomasks which are the dimensions of the patterns on a photomask. 
EUV lithography EUV photomasks work by reflecting light, which is achieved by using multiple alternating layers of molybdenum and silicon. Mask error enhancement factor (MEEF) On leading-edge photomasks, the (pre-corrected) images of the final chip patterns are magnified by a factor of four. This magnification factor has been a key benefit in reducing pattern sensitivity to imaging errors. However, as features continue to shrink, two trends come into play: the first is that the mask error factor begins to exceed one, i.e., the dimension error on the wafer may be more than 1/4 the dimension error on the mask, and the second is that the mask feature is becoming smaller, and the dimension tolerance is approaching a few nanometers. For example, a 25 nm wafer pattern should correspond to a 100 nm mask pattern, but the wafer tolerance could be 1.25 nm (5% spec), which translates into 5 nm on the photomask. The variation of electron beam scattering in directly writing the photomask pattern can easily exceed this. Pellicles The term "pellicle" is used to mean "film", "thin film", or "membrane." Beginning in the 1960s, thin film stretched on a metal frame, also known as a "pellicle", was used as a beam splitter for optical instruments. It has been used in a number of instruments to split a beam of light without causing an optical path shift, owing to its small film thickness. In 1978, Shea et al. at IBM patented a process to use the "pellicle" as a dust cover to protect a photomask or reticle. In the context of this entry, "pellicle" means "thin film dust cover to protect a photomask". Particle contamination can be a significant problem in semiconductor manufacturing. A photomask is protected from particles by a pellicle, a thin transparent film stretched over a frame that is glued over one side of the photomask. The pellicle is far enough away from the mask patterns that moderate-to-small-sized particles landing on the pellicle will be too far out of focus to print. Although they are designed to keep particles away, pellicles become a part of the imaging system, and their optical properties need to be taken into account. Pellicle materials include nitrocellulose, made for various transmission wavelengths. Current pellicles are made from polysilicon, and companies are exploring other materials for high-NA EUV and future chip-making processes. Leading commercial photomask manufacturers The SPIE Annual Conference, Photomask Technology, reports the SEMATECH Mask Industry Assessment, which includes current industry analysis and the results of its annual photomask manufacturers survey. The following companies are listed in order of their global market share (2009 data): Dai Nippon Printing, Toppan Photomasks, Photronics Inc, Hoya Corporation, Taiwan Mask Corporation, and Compugraphics. Major chipmakers such as Intel, Globalfoundries, IBM, NEC, TSMC, UMC, Samsung, and Micron Technology have their own large maskmaking facilities or joint ventures with the abovementioned companies. The worldwide photomask market was estimated at $3.2 billion in 2012 and $3.1 billion in 2013. Almost half of the market was from captive mask shops (in-house mask shops of major chipmakers). The costs of creating a new mask shop for 180 nm processes were estimated in 2005 at $40 million, and for 130 nm at more than $100 million. The purchase price of a photomask in 2006 could range from $250 to $100,000 for a single high-end phase-shift mask. As many as 30 masks (of varying price) may be required to form a complete mask set. 
As modern chips are built in several layers stacked on top of each other, at least one mask is required for each of these layers. See also Computational lithography Integrated circuit layout design protection (or "Mask work") Mask inspection Nanochannel glass materials SMIF interface Stepping level References Lithography (microfabrication) Semiconductor fabrication equipment
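The mask error budget discussed in the mask error enhancement factor section above can be illustrated with a small calculation. The sketch below assumes a 4x reduction ratio, a 5% dimension tolerance and an illustrative MEEF value; these inputs are assumptions chosen to reproduce the 25 nm/100 nm example, not additional data from the article.

```python
def mask_tolerance(wafer_cd_nm, reduction=4.0, spec_fraction=0.05, meef=1.0):
    """Translate a wafer critical-dimension spec into a mask-level tolerance.

    wafer_cd_nm  : target feature size on the wafer (nm)
    reduction    : mask-to-wafer magnification (4x for leading-edge masks)
    spec_fraction: allowed CD error as a fraction of the CD (5% spec)
    meef         : mask error enhancement factor (wafer error / scaled mask error)
    """
    mask_cd = wafer_cd_nm * reduction                 # pattern size on the mask
    wafer_tolerance = wafer_cd_nm * spec_fraction     # allowed error on the wafer
    # A mask error dm prints as roughly meef * dm / reduction on the wafer,
    # so the allowed mask error shrinks as MEEF grows.
    mask_tolerance_nm = wafer_tolerance * reduction / meef
    return mask_cd, wafer_tolerance, mask_tolerance_nm

for meef in (1.0, 2.0):
    mask_cd, wtol, mtol = mask_tolerance(25.0, meef=meef)
    print(f"MEEF={meef}: mask CD={mask_cd:.0f} nm, "
          f"wafer tol={wtol:.2f} nm, mask tol={mtol:.2f} nm")
```

With MEEF = 1 this reproduces the 100 nm mask pattern and 5 nm mask tolerance quoted above; a MEEF greater than one tightens the mask budget further.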
Photomask
[ "Materials_science", "Engineering" ]
2,345
[ "Nanotechnology", "Semiconductor fabrication equipment", "Microtechnology", "Lithography (microfabrication)" ]
642,936
https://en.wikipedia.org/wiki/Immersion%20lithography
Immersion lithography is a technique used in semiconductor manufacturing to enhance the resolution and accuracy of the lithographic process. It involves using a liquid medium, typically water, between the lens and the wafer during exposure. By using a liquid with a higher refractive index than air, immersion lithography allows smaller features to be created on the wafer. Immersion lithography replaces the usual air gap between the final lens and the wafer surface with a liquid medium that has a refractive index greater than one. The achievable resolution is increased by a factor equal to the refractive index of the liquid. Current immersion lithography tools use highly purified water for this liquid, achieving feature sizes below 45 nanometers. Background The ability to resolve features in optical lithography is directly related to the numerical aperture of the imaging equipment, the numerical aperture being the sine of the maximum refraction angle multiplied by the refractive index of the medium through which the light travels. The lenses in the highest-resolution "dry" photolithography scanners focus light in a cone whose boundary is nearly parallel to the wafer surface. As it is impossible to increase resolution by refracting light at still larger angles, additional resolution is obtained by inserting an immersion medium with a higher index of refraction between the lens and the wafer. The blurriness is reduced by a factor equal to the refractive index of the medium. For example, for water immersion using ultraviolet light at 193 nm wavelength, the index of refraction is 1.44. The resolution enhancement from immersion lithography is about 30–40%, depending on the materials used. However, the depth of focus, or tolerance in wafer topography flatness, is improved compared to the corresponding "dry" tool at the same resolution. The idea for immersion lithography was patented in 1984 by Takanashi et al. It was also proposed by Taiwanese engineer Burn J. Lin and realized in the 1980s. In 2004, IBM's director of silicon technology, Ghavam Shahidi, announced that IBM planned to commercialize lithography based on light passed through water. Defects Defect concerns, e.g., water left behind (watermarks) and loss of resist-water adhesion (air gaps or bubbles), have led to considerations of using a topcoat layer directly on top of the photoresist. This topcoat would serve as a barrier against chemical diffusion between the liquid medium and the photoresist. In addition, the interface between the liquid and the topcoat would be optimized for watermark reduction. At the same time, defects from topcoat use should be avoided. As of 2005, topcoats had been tuned for use as antireflection coatings, especially for hyper-NA (NA > 1) cases. By 2008, defect counts on wafers printed by immersion lithography had reached zero-level capability. Polarization impacts As of 2000, polarization effects due to high angles of interference in the photoresist were being considered as features approached 40 nm. Hence, illumination sources generally need to be azimuthally polarized to match the pole illumination for ideal line-space imaging. Throughput Improvements in throughput have been achieved through higher stage speeds (as of 1996), which in turn (as of 2013) were enabled by higher-power ArF laser pulse sources. Specifically, the throughput is directly proportional to the stage speed V, which is related to the dose D, the rectangular slit width S, and the slit intensity Iss (which is directly related to the pulse power) by V = Iss*S/D. The slit height is the same as the field height. 
The slit width S, in turn, is limited by the number of pulses needed to make up the dose (n) divided by the frequency of the laser pulses (f), at the maximum scan speed Vmax, through S = Vmax*n/f. At a fixed frequency f and pulse number n, the slit width will be proportional to the maximum stage speed. Hence, throughput at a given dose is improved by increasing the maximum stage speed as well as by increasing the pulse power. According to ASML's product information for the TWINSCAN NXT:1980Di, immersion lithography tools currently offer the highest throughputs (275 WPH) targeted for high-volume manufacturing. Multiple patterning The resolution limit for a 1.35 NA immersion tool operating at 193 nm wavelength is 36 nm. Going beyond this limit to sub-20 nm nodes requires multiple patterning. At the 20 nm foundry and memory nodes and beyond, double patterning and triple patterning are already being used with immersion lithography for the densest layers. See also Oil immersion Water immersion objective References Lithography (microfabrication) Taiwanese inventions
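As a rough numerical check on the resolution and throughput relations discussed above, the sketch below evaluates the diffraction-limited half-pitch k1·λ/NA for a dry and a water-immersion 193 nm tool, and the stage-speed relation V = Iss*S/D. The k1 = 0.25 hard-limit form and the dose, slit and intensity values are standard illustrative assumptions, not figures from the article.

```python
def half_pitch_nm(wavelength_nm, numerical_aperture, k1=0.25):
    """Diffraction-limited half-pitch, R = k1 * wavelength / NA (k1 = 0.25 is the hard limit)."""
    return k1 * wavelength_nm / numerical_aperture

print(half_pitch_nm(193, 0.93))   # "dry" ArF tool: about 52 nm
print(half_pitch_nm(193, 1.35))   # water-immersion tool: about 36 nm

def stage_speed(slit_intensity, slit_width, dose):
    """Scan speed V = Iss * S / D from the throughput relation above."""
    return slit_intensity * slit_width / dose

# Illustrative (assumed) numbers: 2.5 W/cm^2 slit intensity, 8 mm slit width, 30 mJ/cm^2 dose.
v = stage_speed(slit_intensity=2.5, slit_width=8e-3, dose=30e-3)
print(f"stage speed ~ {v:.2f} m/s")
```

The 1.35 NA case reproduces the 36 nm resolution limit quoted in the Multiple patterning paragraph; the stage-speed figure is only an order-of-magnitude illustration under the assumed dose and intensity.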
Immersion lithography
[ "Materials_science" ]
967
[ "Nanotechnology", "Microtechnology", "Lithography (microfabrication)" ]
643,021
https://en.wikipedia.org/wiki/Nonstandard%20calculus
In mathematics, nonstandard calculus is the modern application of infinitesimals, in the sense of nonstandard analysis, to infinitesimal calculus. It provides a rigorous justification for some arguments in calculus that were previously considered merely heuristic. Non-rigorous calculations with infinitesimals were widely used before Karl Weierstrass sought to replace them with the (ε, δ)-definition of limit starting in the 1870s. For almost one hundred years thereafter, mathematicians such as Richard Courant viewed infinitesimals as being naive and vague or meaningless. Contrary to such views, Abraham Robinson showed in 1960 that infinitesimals are precise, clear, and meaningful, building upon work by Edwin Hewitt and Jerzy Łoś. According to Howard Keisler, "Robinson solved a three hundred year old problem by giving a precise treatment of infinitesimals. Robinson's achievement will probably rank as one of the major mathematical advances of the twentieth century." History The history of nonstandard calculus began with the use of infinitely small quantities, called infinitesimals, in calculus. The use of infinitesimals can be found in the foundations of calculus independently developed by Gottfried Leibniz and Isaac Newton starting in the 1660s. John Wallis refined earlier techniques of indivisibles of Cavalieri and others by exploiting an infinitesimal quantity he denoted 1/∞ in area calculations, preparing the ground for integral calculus. They drew on the work of such mathematicians as Pierre de Fermat, Isaac Barrow and René Descartes. In early calculus the use of infinitesimal quantities was criticized by a number of authors, most notably Michel Rolle and Bishop Berkeley in his book The Analyst. Several mathematicians, including Maclaurin and d'Alembert, advocated the use of limits. Augustin Louis Cauchy developed a versatile spectrum of foundational approaches, including a definition of continuity in terms of infinitesimals and a (somewhat imprecise) prototype of an ε, δ argument in working with differentiation. Karl Weierstrass formalized the concept of limit in the context of a (real) number system without infinitesimals. Following the work of Weierstrass, it eventually became common to base calculus on ε, δ arguments instead of infinitesimals. This approach formalized by Weierstrass came to be known as the standard calculus. After many years of the infinitesimal approach to calculus having fallen into disuse other than as an introductory pedagogical tool, the use of infinitesimal quantities was finally given a rigorous foundation by Abraham Robinson in the 1960s. Robinson's approach is called nonstandard analysis to distinguish it from the standard use of limits. This approach used technical machinery from mathematical logic to create a theory of hyperreal numbers that interpret infinitesimals in a manner that allows a Leibniz-like development of the usual rules of calculus. An alternative approach, developed by Edward Nelson, finds infinitesimals on the ordinary real line itself, and involves a modification of the foundational setting by extending ZFC through the introduction of a new unary predicate "standard". Motivation To calculate the derivative of the function f(x) = x² at x, both approaches agree on the algebraic manipulations: (f(x + Δx) − f(x))/Δx = (2xΔx + (Δx)²)/Δx = 2x + Δx ≈ 2x. This becomes a computation of the derivative using the hyperreals if Δx is interpreted as an infinitesimal and the symbol "≈" is the relation "is infinitely close to". In order to make the derivative a real-valued function, the final term Δx is dispensed with. 
In the standard approach using only real numbers, that is done by taking the limit as Δx tends to zero. In the hyperreal approach, the quantity Δx is taken to be an infinitesimal, a nonzero number that is closer to 0 than to any nonzero real. The manipulations displayed above then show that (f(x + Δx) − f(x))/Δx is infinitely close to 2x, so the derivative of f at x is then 2x. Discarding the "error term" is accomplished by an application of the standard part function. Dispensing with infinitesimal error terms was historically considered paradoxical by some writers, most notably George Berkeley. Once the hyperreal number system (an infinitesimal-enriched continuum) is in place, one has successfully incorporated a large part of the technical difficulties at the foundational level. Thus, the epsilon, delta techniques that some believe to be the essence of analysis can be implemented once and for all at the foundational level, and the students needn't be "dressed to perform multiple-quantifier logical stunts on pretense of being taught infinitesimal calculus", to quote a recent study. More specifically, the basic concepts of calculus such as continuity, derivative, and integral can be defined using infinitesimals without reference to epsilon, delta. Keisler's textbook Keisler's Elementary Calculus: An Infinitesimal Approach defines continuity on page 125 in terms of infinitesimals, to the exclusion of epsilon, delta methods. The derivative is defined on page 45 using infinitesimals rather than an epsilon-delta approach. The integral is defined on page 183 in terms of infinitesimals. Epsilon, delta definitions are introduced on page 282. Definition of derivative The hyperreals can be constructed in the framework of Zermelo–Fraenkel set theory, the standard axiomatisation of set theory used elsewhere in mathematics. To give an intuitive idea for the hyperreal approach, note that, naively speaking, nonstandard analysis postulates the existence of positive numbers ε which are infinitely small, meaning that ε is smaller than any standard positive real, yet greater than zero. Every real number x is surrounded by an infinitesimal "cloud" of hyperreal numbers infinitely close to it. To define the derivative of f at a standard real number x in this approach, one no longer needs an infinite limiting process as in standard calculus. Instead, one sets f′(x) = st((f*(x + ε) − f*(x))/ε), where st is the standard part function, yielding the real number infinitely close to the hyperreal argument of st, and f* is the natural extension of f to the hyperreals. Continuity A real function f is continuous at a standard real number x if for every hyperreal x′ infinitely close to x, the value f(x′) is also infinitely close to f(x). This captures Cauchy's definition of continuity as presented in his 1821 textbook Cours d'Analyse, p. 34. Here, to be precise, f would have to be replaced by its natural hyperreal extension, usually denoted f*. Using the notation ≈ for the relation of being infinitely close as above, the definition can be extended to arbitrary (standard or nonstandard) points as follows: A function f is microcontinuous at x if whenever x′ ≈ x, one has f*(x′) ≈ f*(x). Here the point x′ is assumed to be in the domain of (the natural extension of) f. The above requires fewer quantifiers than the (ε, δ)-definition familiar from standard elementary calculus: f is continuous at x if for every ε > 0, there exists a δ > 0 such that for every x′, whenever |x − x′| < δ, one has |f(x) − f(x′)| < ε. 
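The infinitesimal computation of the derivative described above can be imitated mechanically with truncated "dual numbers" of the form a + b·eps, with eps² treated as 0: the eps-coefficient plays the role of the derivative, and dropping the eps-part mirrors taking the standard part. The following sketch is only an illustrative analogy, not the hyperreal construction itself.

```python
class Dual:
    """Numbers a + b*eps with eps**2 = 0; the eps-coefficient tracks the derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + a2 b1) eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def derivative(f, x):
    """'Standard part' of (f(x + eps) - f(x))/eps: the eps-coefficient of f(x + eps)."""
    return f(Dual(x, 1.0)).b

f = lambda x: x * x          # the example f(x) = x^2 used above
print(derivative(f, 3.0))    # prints 6.0, i.e. 2x at x = 3
```

The design mirrors the text: the computation carries the infinitesimal along exactly, and the "error term" is discarded only at the end by reading off the real part of interest.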
Uniform continuity A function f on an interval I is uniformly continuous if its natural extension f* in I* has the following property: for every pair of hyperreals x and y in I*, if x ≈ y then f*(x) ≈ f*(y). In terms of microcontinuity as defined in the previous section, this can be stated as follows: a real function is uniformly continuous if its natural extension f* is microcontinuous at every point of the domain of f*. This definition has a reduced quantifier complexity when compared with the standard (ε, δ)-definition. Namely, the epsilon-delta definition of uniform continuity requires four quantifiers, while the infinitesimal definition requires only two quantifiers. It has the same quantifier complexity as the definition of uniform continuity in terms of sequences in standard calculus, which however is not expressible in the first-order language of the real numbers. The hyperreal definition can be illustrated by the following three examples. Example 1: a function f is uniformly continuous on the semi-open interval (0,1] if and only if its natural extension f* is microcontinuous (in the sense of the formula above) at every positive infinitesimal, in addition to continuity at the standard points of the interval. Example 2: a function f is uniformly continuous on the semi-open interval [0,∞) if and only if it is continuous at the standard points of the interval, and in addition, the natural extension f* is microcontinuous at every positive infinite hyperreal point. Example 3: similarly, the failure of uniform continuity for the squaring function is due to the absence of microcontinuity at a single infinite hyperreal point. Concerning quantifier complexity, the following remarks were made by Kevin Houston: The number of quantifiers in a mathematical statement gives a rough measure of the statement's complexity. Statements involving three or more quantifiers can be difficult to understand. This is the main reason why it is hard to understand the rigorous definitions of limit, convergence, continuity and differentiability in analysis as they have many quantifiers. In fact, it is the alternation of the quantifiers ∀ and ∃ that causes the complexity. Andreas Blass wrote as follows: Often ... the nonstandard definition of a concept is simpler than the standard definition (both intuitively simpler and simpler in a technical sense, such as quantifiers over lower types or fewer alternations of quantifiers). Compactness A set A is compact if and only if its natural extension A* has the following property: every point in A* is infinitely close to a point of A. Thus, the open interval (0,1) is not compact because its natural extension contains positive infinitesimals which are not infinitely close to any positive real number. Heine–Cantor theorem The fact that a continuous function on a compact interval I is necessarily uniformly continuous (the Heine–Cantor theorem) admits a succinct hyperreal proof. Let x, y be hyperreals in the natural extension I* of I. Since I is compact, both st(x) and st(y) belong to I. If x and y were infinitely close, then by the triangle inequality they would have the same standard part c = st(x) = st(y). Since the function is assumed continuous at c, both f(x) and f(y) are infinitely close to f(c), and therefore f(x) and f(y) are infinitely close, proving uniform continuity of f. Why is the squaring function not uniformly continuous? Let f(x) = x² be defined on the real line. Let N be an infinite hyperreal. The hyperreal number N + 1/N is infinitely close to N. Meanwhile, the difference f(N + 1/N) − f(N) = 2 + 1/N² is not infinitesimal. Therefore, f* fails to be microcontinuous at the hyperreal point N. 
Thus, the squaring function is not uniformly continuous, according to the definition of uniform continuity above. A similar proof may be given in the standard setting. Example: Dirichlet function Consider the Dirichlet function, which takes the value 1 at rational numbers and 0 at irrational numbers. It is well known that, under the standard definition of continuity, the function is discontinuous at every point. Let us check this in terms of the hyperreal definition of continuity above; for instance, let us show that the Dirichlet function is not continuous at π. Consider the continued fraction approximation an of π. Now let the index n be an infinite hypernatural number. By the transfer principle, the natural extension of the Dirichlet function takes the value 1 at an. Note that the hyperrational point an is infinitely close to π. Thus the natural extension of the Dirichlet function takes different values (0 and 1) at these two infinitely close points, and therefore the Dirichlet function is not continuous at π. Limit While the thrust of Robinson's approach is that one can dispense with the approach using multiple quantifiers, the notion of limit can be easily recaptured in terms of the standard part function st: namely, lim x→a f(x) = L if and only if whenever the difference x − a is infinitesimal, the difference f(x) − L is infinitesimal as well, or in formulas: if st(x) = a then st(f(x)) = L; cf. the (ε, δ)-definition of limit. Limit of sequence Given a sequence of real numbers (xn), L is the limit of the sequence if and only if for every infinite hypernatural n, st(xn) = L (here the extension principle is used to define xn for every hyperinteger n). This definition has no quantifier alternations. The standard (ε, δ)-style definition, on the other hand, does have quantifier alternations: for every ε > 0 there exists an N such that for every n > N, |xn − L| < ε. Extreme value theorem To show that a real continuous function f on [0,1] has a maximum, let N be an infinite hyperinteger. The interval [0, 1] has a natural hyperreal extension. The function f is also naturally extended to hyperreals between 0 and 1. Consider the partition of the hyperreal interval [0,1] into N subintervals of equal infinitesimal length 1/N, with partition points xi = i/N as i "runs" from 0 to N. In the standard setting (when N is finite), a point with the maximal value of f can always be chosen among the N+1 points xi, by induction. Hence, by the transfer principle, there is a hyperinteger i0 such that 0 ≤ i0 ≤ N and f*(xi0) ≥ f*(xi) for all i = 0, …, N (an alternative explanation is that every hyperfinite set admits a maximum). Consider the real point c = st(xi0), where st is the standard part function. An arbitrary real point x lies in a suitable sub-interval of the partition, namely x ∈ [xi, xi+1], so that st(xi) = x. Applying st to the inequality f*(xi0) ≥ f*(xi), we obtain st(f*(xi0)) ≥ st(f*(xi)). By continuity of f, st(f*(xi0)) = f(c) and st(f*(xi)) = f(x). Hence f(c) ≥ f(x) for all real x, proving c to be a maximum of the real function f. Intermediate value theorem As another illustration of the power of Robinson's approach, a short proof of the intermediate value theorem (Bolzano's theorem) using infinitesimals is done by the following. Let f be a continuous function on [a,b] such that f(a) < 0 while f(b) > 0. Then there exists a point c in [a,b] such that f(c) = 0. The proof proceeds as follows. Let N be an infinite hyperinteger. Consider a partition of [a,b] into N intervals of equal length, with partition points xi as i runs from 0 to N. Consider the collection I of indices such that f(xi) > 0. Let i0 be the least element in I (such an element exists by the transfer principle, as I is a hyperfinite set). Then the real number c = st(xi0) is the desired zero of f. 
Such a proof reduces the quantifier complexity of a standard proof of the IVT. Basic theorems If f is a real-valued function defined on an interval [a, b], then the transfer operator applied to f, denoted by *f, is an internal, hyperreal-valued function defined on the hyperreal interval [*a, *b]. Theorem: Let f be a real-valued function defined on an interval [a, b]. Then f is differentiable at a < x < b if and only if for every non-zero infinitesimal h, the value st((*f(x + h) − *f(x))/h) is independent of h. In that case, the common value is the derivative of f at x. This fact follows from the transfer principle of nonstandard analysis and overspill. Note that a similar result holds for differentiability at the endpoints a, b provided the sign of the infinitesimal h is suitably restricted. For the second theorem, the Riemann integral is defined as the limit, if it exists, of a directed family of Riemann sums; these are sums of the form Σ f(ξi)(xi+1 − xi), where a = x0 ≤ ξ0 ≤ x1 ≤ … ≤ xn−1 ≤ ξn−1 ≤ xn = b. Such a sequence of values is called a partition or mesh, and the largest difference xi+1 − xi is the width of the mesh. In the definition of the Riemann integral, the limit of the Riemann sums is taken as the width of the mesh goes to 0. Theorem: Let f be a real-valued function defined on an interval [a, b]. Then f is Riemann-integrable on [a, b] if and only if for every internal mesh of infinitesimal width, the quantity st(Σ *f(ξi)(xi+1 − xi)) is independent of the mesh. In this case, the common value is the Riemann integral of f over [a, b]. Applications One immediate application is an extension of the standard definitions of differentiation and integration to internal functions on intervals of hyperreal numbers. An internal hyperreal-valued function f on [a, b] is S-differentiable at x provided st((f(x + h) − f(x))/h) exists and is independent of the infinitesimal h; the value is the S-derivative at x. Theorem: Suppose f is S-differentiable at every point of [a, b], where b − a is a bounded hyperreal. Suppose furthermore that Then for some infinitesimal ε To prove this, let N be a nonstandard natural number. Divide the interval [a, b] into N subintervals by placing N − 1 equally spaced intermediate points: Then Now the maximum of any internal set of infinitesimals is infinitesimal. Thus all the εk's are dominated by an infinitesimal ε. Therefore, from which the result follows. See also Adequality Criticism of nonstandard analysis Archimedes' use of infinitesimals Elementary Calculus: An Infinitesimal Approach Non-classical analysis History of calculus Notes References H. Jerome Keisler: Elementary Calculus: An Approach Using Infinitesimals. First edition 1976; 2nd edition 1986. (This book is now out of print. The publisher has reverted the copyright to the author, who has made the 2nd edition available in .pdf format for downloading at http://www.math.wisc.edu/~keisler/calc.html.) H. Jerome Keisler: Foundations of Infinitesimal Calculus, available for downloading at http://www.math.wisc.edu/~keisler/foundations.html (10 jan '07) Baron, Margaret E.: The origins of the infinitesimal calculus. Pergamon Press, Oxford-Edinburgh-New York 1969. Dover Publications, Inc., New York, 1987. (A new edition of Baron's book appeared in 2004.) External links On-line version (2022) Brief Calculus (2005, rev. 2015) by Benjamin Crowell. This short text is designed more for self-study or review than for classroom use. Infinitesimals are used when appropriate, and are treated more rigorously than in old books like Thompson's Calculus Made Easy, but in less detail than in Keisler's Elementary Calculus: An Approach Using Infinitesimals. 
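The definition of the Riemann integral through meshes of infinitesimal width, described in the Basic theorems section above, can be imitated numerically with finite but increasingly fine partitions. The sketch below is only a finite-precision stand-in for a hyperfinite partition; the function, interval and partition sizes are arbitrary illustrative choices.

```python
def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f over [a, b] with n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

f = lambda x: x * x
for n in (10, 1_000, 100_000):
    s = riemann_sum(f, 0.0, 1.0, n)
    print(f"n = {n:>6}: sum = {s:.6f}")   # approaches 1/3 as the mesh width shrinks
```

In the nonstandard formulation the mesh width is literally infinitesimal and the integral is the standard part of a single hyperfinite sum; the finite script only illustrates why the value stabilizes as the mesh is refined.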
Nonstandard analysis Calculus Infinity
Nonstandard calculus
[ "Mathematics" ]
3,959
[ "Calculus", "Mathematical objects", "Infinity", "Nonstandard analysis", "Mathematics of infinitesimals", "Model theory" ]
643,036
https://en.wikipedia.org/wiki/Input%20impedance
In electrical engineering, the input impedance of an electrical network is the measure of the opposition to current (impedance), both static (resistance) and dynamic (reactance), into a load network or circuit that is external to the electrical source network. The input admittance (the reciprocal of the input impedance) is a measure of the load network's propensity to draw current. The source network is the portion of the network that transmits power, and the load network is the portion of the network that consumes power. For an electrical measurement instrument such as an oscilloscope, the instrument is a load circuit on the electrical circuit (the source circuit) being measured, so the input impedance is the impedance of the instrument as seen by the circuit being measured. Input impedance If the load network were replaced by a device with an impedance equal to the input impedance of the load network (an equivalent circuit), the characteristics of the source-load network would be the same from the perspective of the connection point. So the voltage across and the current through the input terminals would be identical to those of the chosen load network. Therefore, the input impedance of the load and the output impedance of the source determine how the source current and voltage change. The Thévenin equivalent circuit of the electrical network uses the concept of input impedance to determine the impedance of the equivalent circuit. Calculation If one were to create a circuit with equivalent properties across the input terminals by placing the input impedance across the load of the circuit and the output impedance in series with the signal source, Ohm's law could be used to calculate the transfer function. Electrical efficiency The values of the input and output impedance are often used to evaluate the electrical efficiency of networks by breaking them up into multiple stages and evaluating the efficiency of the interaction between each stage independently. To minimize electrical losses, the output impedance of the signal source should be insignificant in comparison to the input impedance of the network being connected, as the gain is equivalent to the ratio of the input impedance to the total impedance (input impedance + output impedance). In this case, Zload ≫ Zsource (or Zin ≫ Zout): the input impedance of the driven stage (load) is much larger than the output impedance of the drive stage (source). Power factor In AC circuits carrying power, the losses of energy in conductors due to the reactive component of the impedance can be significant. These losses manifest themselves in a phenomenon called phase imbalance, where the current is out of phase with the voltage (lagging behind or leading it). Therefore, the product of the current and the voltage is less than what it would be if the current and voltage were in phase. With DC sources, reactance has no effect, and therefore power factor correction is not necessary. For a circuit modelled with an ideal source, an output impedance, and an input impedance, the circuit's input reactance can be sized to be the negative of the output reactance at the source. In this scenario, the reactive component of the input impedance cancels the reactive component of the output impedance at the source. The resulting equivalent circuit is purely resistive in nature, and there are no losses due to phase imbalance in the source or the load. 
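The statement above that the gain equals the ratio of the input impedance to the total impedance can be made concrete with a small voltage-divider calculation. The component values in the sketch below are arbitrary illustrative assumptions (purely resistive impedances).

```python
def loading_gain(z_in, z_out):
    """Fraction of the source voltage that appears across the load (resistive case)."""
    return z_in / (z_in + z_out)

# Impedance bridging: load input impedance much larger than source output impedance.
print(loading_gain(z_in=10_000.0, z_out=100.0))   # ~0.990, almost no loading loss
# Equal impedances: half the source voltage is dropped inside the source.
print(loading_gain(z_in=100.0, z_out=100.0))      # 0.5
```

The bridging case (first call) shows why a driven stage with a much larger input impedance barely loads the source, as described in the Electrical efficiency paragraph.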
Power transfer The condition of maximum power transfer states that, for a given source, maximum power will be transferred when the resistance of the source is equal to the resistance of the load and the power factor is corrected by canceling out the reactance. When this occurs, the circuit is said to be complex conjugate matched to the signal's impedance. Note that this only maximizes the power transfer, not the efficiency of the circuit; when the power transfer is optimized, the circuit runs at only 50% efficiency. The formula for a complex conjugate match is Zload = Zsource*, i.e. the load impedance equals the complex conjugate of the source impedance. When there is no reactive component, this equation simplifies to Zload = Zsource, as the imaginary part of Zsource is zero. Impedance matching When the characteristic impedance of a transmission line does not match the impedance of the load network, the load network will reflect back some of the source signal. This can create standing waves on the transmission line. To minimize reflections, the characteristic impedance of the transmission line and the impedance of the load circuit have to be equal (or "matched"). If the impedance matches, the connection is known as a matched connection, and the process of correcting an impedance mismatch is called impedance matching. Since the characteristic impedance of a homogeneous transmission line is based on geometry alone and is therefore constant, and the load impedance can be measured independently, the matching condition holds regardless of the placement of the load (before or after the transmission line). Applications Signal processing In modern signal processing, devices such as operational amplifiers are designed to have an input impedance several orders of magnitude higher than the output impedance of the source device connected to that input. This is called impedance bridging. The loading losses in these circuits are minimized, and the voltage at the input of the amplifier is close to the voltage that would be present if the amplifier were not connected. When a device whose input impedance could cause significant degradation of the signal is used, a device with a high input impedance and a low output impedance is often inserted ahead of it to minimize the loading effects. Voltage followers or impedance-matching transformers are often used for this purpose. The input impedance of high-impedance amplifiers (such as vacuum tubes, field-effect transistor amplifiers and op-amps) is often specified as a resistance in parallel with a capacitance (e.g., 2.2 MΩ ∥ 1 pF). Pre-amplifiers designed for high input impedance may have a slightly higher effective noise voltage at the input (while providing a low effective noise current), and so may be slightly noisier than an amplifier designed for a specific low-impedance source, but in general a relatively low-impedance source configuration will be more resistant to noise (particularly mains hum). Radio frequency power systems Signal reflections caused by an impedance mismatch at the end of a transmission line can result in distortion and potential damage to the driving circuitry. In analog video circuits, impedance mismatch can cause "ghosting", where the time-delayed echo of the principal image appears as a weak and displaced image (typically to the right of the principal image). In high-speed digital systems, such as HD video, reflections result in interference and a potentially corrupted signal. The standing waves created by the mismatch are periodic regions of higher-than-normal voltage. 
If this voltage exceeds the dielectric breakdown strength of the insulating material of the line then an arc will occur. This in turn can cause a reactive pulse of high voltage that can destroy the transmitter's final output stage. In RF systems, typical values for line and termination impedance are 50 Ω and 75 Ω. To maximise power transmission for radio frequency power systems the circuits should be complex conjugate matched throughout the power chain, from the transmitter output, through the transmission line (a balanced pair, a coaxial cable, or a waveguide), to the antenna system, which consists of an impedance matching device and the radiating element(s). See also Output impedance Damping factor Voltage divider Dummy load References The Art of Electronics, Winfield Hill, Paul Horowitz, Cambridge University Press, "Aortic input impedance in normal man: relationship to pressure wave forms", JP Murgo, N Westerhof, JP Giolma, SA Altobelli pdf An excellent introduction to the importance of impedance and impedance matching can be found in A practical introduction to electronic circuits, M H Jones, Cambridge University Press, External links Calculation of the damping factor and the damping of impedance bridging Interconnection of two audio units - Input impedance and output impedance Impedance and Reactance Input Impedance Measurement Electrical parameters Audio amplifier specifications
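To illustrate the maximum power transfer condition discussed in the Power transfer section above (maximum power delivered when the load resistance equals the source resistance, but only 50% efficiency at that point), the following sketch sweeps the load resistance for an assumed purely resistive source; the source voltage and resistance values are illustrative assumptions.

```python
def power_and_efficiency(v_source, r_source, r_load):
    """Power delivered to the load and the fraction of total power that reaches it."""
    i = v_source / (r_source + r_load)       # series current, A
    p_load = i ** 2 * r_load                 # power dissipated in the load, W
    efficiency = r_load / (r_source + r_load)
    return p_load, efficiency

r_source = 50.0   # assumed source resistance, ohms
for r_load in (10, 25, 50, 100, 500):
    p, eff = power_and_efficiency(10.0, r_source, r_load)
    print(f"R_load={r_load:5.0f} ohm: P_load={p:.3f} W, efficiency={eff:.0%}")
```

The delivered power peaks at R_load = R_source = 50 ohms, where exactly half of the total power is dissipated inside the source, matching the 50% efficiency figure stated above.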
Input impedance
[ "Engineering" ]
1,639
[ "Electronic engineering", "Electrical engineering", "Audio engineering", "Audio amplifier specifications", "Electrical parameters" ]
643,070
https://en.wikipedia.org/wiki/Output%20impedance
In electrical engineering, the output impedance of an electrical network is the measure of the opposition to current flow (impedance), both static (resistance) and dynamic (reactance), into the load network being connected, that is internal to the electrical source. The output impedance is a measure of the source's propensity to drop in voltage when the load draws current, the source network being the portion of the network that transmits power and the load network being the portion of the network that consumes power. Because of this, the output impedance is sometimes referred to as the source impedance or internal impedance. Description All devices and connections have non-zero resistance and reactance, and therefore no device can be a perfect source. The output impedance is often used to model the source's response to current flow. Some portion of the device's measured output impedance may not physically exist within the device; some of it is an artifact of the chemical, thermodynamic, or mechanical properties of the source. This impedance can be imagined as an impedance in series with an ideal voltage source, or in parallel with an ideal current source (see: Series and parallel circuits). Sources are modeled as ideal sources (ideal meaning sources that always keep the desired value) combined with their output impedance. The output impedance is defined as this modeled and/or real impedance in series with an ideal voltage source. Mathematically, current and voltage sources can be converted to each other using Thévenin's theorem and Norton's theorem. In the case of a nonlinear device, such as a transistor, the term "output impedance" usually refers to the effect upon a small-amplitude signal, and it will vary with the bias point of the transistor, that is, with the direct current (DC) and voltage applied to the device. Measurement The source resistance of a purely resistive device can be experimentally determined by increasingly loading the device until the voltage across the load (AC or DC) is one half of the open-circuit voltage. At this point, the load resistance and internal resistance are equal. It can be determined more accurately by keeping track of the voltage vs. current curves for various loads and calculating the resistance from Ohm's law. (The internal resistance may not be the same for different types of loading or at different frequencies, especially in devices like chemical batteries.) The generalized source impedance for a reactive (inductive or capacitive) source device is more complicated to determine and is usually measured with specialized instruments rather than by taking many measurements by hand. Audio amplifiers The real output impedance (ZS) of a power amplifier is usually less than 0.1 Ω, but this is rarely specified. Instead it is "hidden" within the damping factor parameter, which is DF = ZL/ZS. Solving for ZS gives ZS = ZL/DF, the small source impedance (output impedance) of the power amplifier. This can be calculated from the impedance ZL of the loudspeaker (typically 2, 4, or 8 ohms) and the given value of the damping factor. Generally, in audio and hi-fi, the input impedance of components is several times (technically, more than 10 times) the output impedance of the device connected to them. This is called impedance bridging or voltage bridging. In this case, ZL ≫ ZS (in practice, DF > 10). In video, RF, and other systems, the impedances of inputs and outputs are the same. This is called impedance matching or a matched connection. In this case, ZS = ZL and DF = 1/1 = 1. 
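Given the damping-factor relation above, recovering the amplifier's source impedance is a one-line calculation. The damping factor value used below is an assumed example, not a figure from the article.

```python
def source_impedance(z_load, damping_factor):
    """Output (source) impedance hidden in the damping factor: ZS = ZL / DF."""
    return z_load / damping_factor

# Example: an 8-ohm loudspeaker and an assumed damping factor of 200.
print(source_impedance(8.0, 200))   # 0.04 ohm, consistent with "usually less than 0.1 ohm"
```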
The actual output impedance for most devices is not the same as the rated output impedance. A power amplifier may have a rated impedance of 8 ohms, but the actual output impedance will vary depending on circuit conditions. The rated output impedance is the impedance into which the amplifier can deliver its maximum amount of power without failing. Batteries Internal resistance is a concept that helps model the electrical consequences of the complex chemical reactions inside a battery. It is impossible to directly measure the internal resistance of a battery, but it can be calculated from current and voltage data measured from a circuit. When a load is applied to a battery, the internal resistance can be calculated from the following equations: R_B = (V_NL − V_FL) / I and R_T = V_NL / I, where R_B is the internal resistance of the battery, V_NL is the battery voltage without a load, V_FL is the battery voltage with a load, R_T is the total resistance of the circuit, and I is the total current supplied by the battery. Internal resistance varies with the age of a battery, but for most commercial batteries the internal resistance is on the order of 1 ohm. When there is a current through a cell, the measured terminal voltage is lower than the e.m.f. measured when no current is delivered by the cell. The reason for this is that part of the available energy of the cell is used up to drive charges through the cell. This energy is wasted by the so-called "internal resistance" of that cell. This wasted energy shows up as lost voltage, so the internal resistance is the lost voltage divided by the current: R_B = (V_NL − V_FL) / I. See also Electrical impedance Input impedance Nominal impedance Damping factor Voltage divider Early effect small-signal model Equivalent series resistance Power gain References External links Calculation of the Damping Factor and the Damping of Impedance Bridging Audio amplifier specifications Electrical parameters
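The battery equations above amount to two Ohm's-law divisions. Here is a minimal Python sketch that applies them to a pair of voltage readings; the no-load voltage, loaded voltage, and load current are made-up example values, not measurements of a real cell.

# Sketch: estimate a battery's internal resistance from two voltage readings.
# All numbers are hypothetical example values.

V_NL = 1.60   # open-circuit (no-load) voltage, volts
V_FL = 1.45   # terminal voltage with the load connected, volts
I = 0.30      # current supplied to the load, amperes

R_B = (V_NL - V_FL) / I   # internal resistance: lost voltage / current
R_T = V_NL / I            # total circuit resistance (internal + load)

print(f"R_B = {R_B:.2f} ohm")   # 0.50 ohm
print(f"R_T = {R_T:.2f} ohm")   # 5.33 ohm, so the load itself is about 4.83 ohm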
Output impedance
[ "Engineering" ]
1,091
[ "Electronic engineering", "Electrical engineering", "Audio engineering", "Audio amplifier specifications", "Electrical parameters" ]
643,769
https://en.wikipedia.org/wiki/Quantum%20tunnelling
In physics, quantum tunnelling, barrier penetration, or simply tunnelling is a quantum mechanical phenomenon in which an object such as an electron or atom passes through a potential energy barrier that, according to classical mechanics, should not be passable due to the object not having sufficient energy to pass or surmount the barrier. Tunneling is a consequence of the wave nature of matter, where the quantum wave function describes the state of a particle or other physical system, and wave equations such as the Schrödinger equation describe their behavior. The probability of transmission of a wave packet through a barrier decreases exponentially with the barrier height, the barrier width, and the tunneling particle's mass, so tunneling is seen most prominently in low-mass particles such as electrons or protons tunneling through microscopically narrow barriers. Tunneling is readily detectable with barriers of thickness about 1–3 nm or smaller for electrons, and about 0.1 nm or smaller for heavier particles such as protons or hydrogen atoms. Some sources describe the mere penetration of a wave function into the barrier, without transmission on the other side, as a tunneling effect, such as in tunneling into the walls of a finite potential well. Tunneling plays an essential role in physical phenomena such as nuclear fusion and alpha radioactive decay of atomic nuclei. Tunneling applications include the tunnel diode, quantum computing, flash memory, and the scanning tunneling microscope. Tunneling limits the minimum size of devices used in microelectronics because electrons tunnel readily through insulating layers and transistors that are thinner than about 1 nm. The effect was predicted in the early 20th century. Its acceptance as a general physical phenomenon came mid-century. Introduction to the concept Quantum tunnelling falls under the domain of quantum mechanics. To understand the phenomenon, particles attempting to travel across a potential barrier can be compared to a ball trying to roll over a hill. Quantum mechanics and classical mechanics differ in their treatment of this scenario. Classical mechanics predicts that particles that do not have enough energy to classically surmount a barrier cannot reach the other side. Thus, a ball without sufficient energy to surmount the hill would roll back down. In quantum mechanics, a particle can, with a small probability, tunnel to the other side, thus crossing the barrier. The reason for this difference comes from treating matter as having properties of waves and particles. Tunnelling problem The wave function of a physical system of particles specifies everything that can be known about the system. Therefore, problems in quantum mechanics analyze the system's wave function. Using mathematical formulations, such as the Schrödinger equation, the time evolution of a known wave function can be deduced. The square of the absolute value of this wave function is directly related to the probability distribution of the particle positions, which describes the probability that the particles would be measured at those positions. When a wave packet impinges on the barrier, most of it is reflected and some is transmitted through the barrier. The wave packet becomes more de-localized: it is now on both sides of the barrier and lower in maximum amplitude, but equal in integrated square-magnitude, meaning that the probability the particle is somewhere remains unity.
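The exponential dependence described above can be made concrete with a short calculation. The following Python sketch evaluates the standard textbook transmission coefficient for an electron meeting a rectangular barrier; the 1 eV barrier height, 0.5 eV electron energy, and nanometre-scale widths are illustrative assumptions chosen to echo the length scales quoted above, not parameters of any specific experiment.

import math

# Physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electronvolt, J

def transmission(E_eV, V0_eV, width_m, mass=M_E):
    # Exact result for a rectangular barrier with E < V0:
    # T = 1 / (1 + V0^2 * sinh^2(kappa*L) / (4*E*(V0 - E)))
    E, V0 = E_eV * EV, V0_eV * EV
    kappa = math.sqrt(2 * mass * (V0 - E)) / HBAR  # decay constant inside the barrier
    s = math.sinh(kappa * width_m)
    return 1.0 / (1.0 + (V0 ** 2 * s ** 2) / (4 * E * (V0 - E)))

# A 0.5 eV electron meeting a 1 eV barrier of increasing width:
for L_nm in (0.5, 1.0, 2.0, 3.0):
    print(f"L = {L_nm} nm: T = {transmission(0.5, 1.0, L_nm * 1e-9):.2e}")

Widening the barrier from 0.5 nm to 3 nm drops the transmission probability from roughly 10⁻¹ to roughly 10⁻⁹ in this example, which is why tunneling is readily detectable for electrons only through barriers of a few nanometres or less.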
The wider the barrier and the higher the barrier energy, the lower the probability of tunneling. Some models of a tunneling barrier, such as rectangular barriers, can be analysed and solved algebraically. Most problems do not have an algebraic solution, so numerical solutions are used. "Semiclassical methods" offer approximate solutions that are easier to compute, such as the WKB approximation. History The Schrödinger equation was published in 1926. The first person to apply the Schrödinger equation to a problem that involved tunneling between two classically allowed regions through a potential barrier was Friedrich Hund in a series of articles published in 1927. He studied the solutions of a double-well potential and discussed molecular spectra. Leonid Mandelstam and Mikhail Leontovich discovered tunneling independently and published their results in 1928. In 1927, Lothar Nordheim, assisted by Ralph Fowler, published a paper that discussed thermionic emission and reflection of electrons from metals. He assumed a surface potential barrier that confines the electrons within the metal and showed that the electrons have a finite probability of tunneling through or reflecting from the surface barrier when their energies are close to the barrier energy. Classically, the electron would either transmit or reflect with 100% certainty, depending on its energy. In 1928 J. Robert Oppenheimer published two papers on field emission, i.e. the emission of electrons induced by strong electric fields. Nordheim and Fowler simplified Oppenheimer's derivation and found values for the emitted currents and work functions that agreed with experiments. A great success of the tunnelling theory was the mathematical explanation for alpha decay, which was developed in 1928 by George Gamow and independently by Ronald Gurney and Edward Condon. The latter researchers simultaneously solved the Schrödinger equation for a model nuclear potential and derived a relationship between the half-life of the particle and the energy of emission that depended directly on the mathematical probability of tunneling. All three researchers were familiar with the works on field emission, and Gamow was aware of Mandelstam and Leontovich's findings. In the early days of quantum theory, the term tunnel effect was not used, and the effect was instead referred to as penetration of, or leaking through, a barrier. The German term wellenmechanische Tunneleffekt was used in 1931 by Walter Schottky. The English term tunnel effect entered the language in 1932 when it was used by Yakov Frenkel in his textbook. In 1957 Leo Esaki demonstrated tunneling of electrons through a barrier a few nanometers wide in a semiconductor structure and developed a diode based on the tunnel effect. In 1960, following Esaki's work, Ivar Giaever showed experimentally that tunnelling also took place in superconductors. The tunnelling spectrum gave direct evidence of the superconducting energy gap. In 1962, Brian Josephson predicted the tunneling of superconducting Cooper pairs. Esaki, Giaever and Josephson shared the 1973 Nobel Prize in Physics for their works on quantum tunneling in solids. In 1981, Gerd Binnig and Heinrich Rohrer developed a new type of microscope, called the scanning tunneling microscope, which is based on tunnelling and is used for imaging surfaces at the atomic level. Binnig and Rohrer were awarded the Nobel Prize in Physics in 1986 for their discovery. Applications Tunnelling is the cause of some important macroscopic physical phenomena.
Solid-state physics Electronics Tunnelling is a source of current leakage in very-large-scale integration (VLSI) electronics and results in a substantial power drain and heating effects that plague such devices. It is considered the lower limit on how microelectronic device elements can be made. Tunnelling is a fundamental technique used to program the floating gates of flash memory. Cold emission Cold emission of electrons is relevant to semiconductor and superconductor physics. It is similar to thermionic emission, where electrons randomly jump from the surface of a metal to follow a voltage bias because, through random collisions with other particles, they statistically end up with more energy than the barrier. When the electric field is very large, the barrier becomes thin enough for electrons to tunnel out of the atomic state, leading to a current that varies approximately exponentially with the electric field. This phenomenon is important for flash memory, vacuum tubes, and some electron microscopes. Tunnel junction A simple barrier can be created by separating two conductors with a very thin insulator. These are tunnel junctions, the study of which requires understanding quantum tunnelling. Josephson junctions take advantage of quantum tunnelling and superconductivity to create the Josephson effect. This has applications in precision measurements of voltages and magnetic fields, as well as the multijunction solar cell. Tunnel diode Diodes are electrical semiconductor devices that allow electric current flow in one direction more than the other. The device depends on a depletion layer between N-type and P-type semiconductors to serve its purpose. When these are heavily doped the depletion layer can be thin enough for tunnelling. When a small forward bias is applied, the current due to tunnelling is significant. This has a maximum at the point where the voltage bias is such that the energy levels of the p and n conduction bands are the same. As the voltage bias is increased, the two conduction bands no longer line up and the diode acts as a typical diode. Because the tunnelling current drops off rapidly, tunnel diodes can be created that have a range of voltages for which current decreases as voltage increases. This peculiar property is used in some applications, such as high speed devices where the characteristic tunnelling probability changes as rapidly as the bias voltage. The resonant tunnelling diode makes use of quantum tunnelling in a very different manner to achieve a similar result. This diode has a resonant voltage at which the current peaks, achieved by placing two thin layers with a high-energy conduction band near each other. This creates a quantum potential well that has a discrete lowest energy level. When this energy level is higher than that of the electrons, no tunnelling occurs and the diode is in reverse bias. Once the two voltage energies align, the electrons flow as if through an open wire. As the voltage further increases, tunnelling becomes improbable and the diode acts like a normal diode again before a second energy level becomes noticeable. Tunnel field-effect transistors A European research project demonstrated field effect transistors in which the gate (channel) is controlled via quantum tunnelling rather than by thermal injection, reducing gate voltage from ≈1 volt to 0.2 volts and reducing power consumption by up to 100×. If these transistors can be scaled up into VLSI chips, they would improve the performance per power of integrated circuits.
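The role of tunnelling as a leakage mechanism in thin insulating layers, described above, can be illustrated with a toy calculation. The sketch below assumes the same exponential dependence, leakage ∝ exp(−2κd), used for rectangular barriers; both the nominal 3 eV effective oxide barrier and the thicknesses are illustrative assumptions, not data for any specific fabrication process.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electronvolt, J

def decay_constant(barrier_eV):
    # Wave-function decay constant kappa inside the barrier.
    return math.sqrt(2 * M_E * barrier_eV * EV) / HBAR

# Toy model: relative tunnelling leakage scales as exp(-2*kappa*d).
kappa = decay_constant(3.0)   # assumed ~3 eV effective insulator barrier
for d_nm in (2.0, 1.5, 1.0):
    rel = math.exp(-2 * kappa * (d_nm - 2.0) * 1e-9)  # relative to d = 2 nm
    print(f"insulator {d_nm} nm: leakage x {rel:.1e} relative to 2 nm")

Halving the insulator thickness from 2 nm to 1 nm raises the leakage by more than seven orders of magnitude in this toy model, which is the sense in which tunnelling sets a lower limit on device dimensions.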
Conductivity of crystalline solids While the Drude-Lorentz model of electrical conductivity makes excellent predictions about the nature of electrons conducting in metals, it can be furthered by using quantum tunnelling to explain the nature of the electron's collisions. When a free electron wave packet encounters a long array of uniformly spaced barriers, the reflected part of the wave packet interferes uniformly with the transmitted one between all barriers so that 100% transmission becomes possible. The theory predicts that if positively charged nuclei form a perfectly rectangular array, electrons will tunnel through the metal as free electrons, leading to extremely high conductance, and that impurities in the metal will disrupt it. Scanning tunneling microscope The scanning tunnelling microscope (STM), invented by Gerd Binnig and Heinrich Rohrer, may allow imaging of individual atoms on the surface of a material. It operates by taking advantage of the relationship between quantum tunnelling and distance. When the tip of the STM's needle is brought close to a conduction surface that has a voltage bias, measuring the current of electrons that are tunnelling between the needle and the surface reveals the distance between the needle and the surface. By using piezoelectric rods that change in size when voltage is applied, the height of the tip can be adjusted to keep the tunnelling current constant. The time-varying voltages that are applied to these rods can be recorded and used to image the surface of the conductor. STMs are accurate to 0.001 nm, or about 1% of atomic diameter. Nuclear physics Nuclear fusion Quantum tunnelling is an essential phenomenon for nuclear fusion. The temperature in stellar cores is generally insufficient to allow atomic nuclei to overcome the Coulomb barrier and achieve thermonuclear fusion. Quantum tunnelling increases the probability of penetrating this barrier. Though this probability is still low, the extremely large number of nuclei in the core of a star is sufficient to sustain a steady fusion reaction. Radioactive decay Radioactive decay is the process of emission of particles and energy from the unstable nucleus of an atom to form a stable product. This is done via the tunnelling of a particle out of the nucleus (an electron tunneling into the nucleus is electron capture). This was the first application of quantum tunnelling. Radioactive decay is a relevant issue for astrobiology as this consequence of quantum tunnelling creates a constant energy source over a large time interval for environments outside the circumstellar habitable zone where insolation would not be possible (subsurface oceans) or effective. Quantum tunnelling may be one of the mechanisms of hypothetical proton decay. Chemistry Energetically forbidden reactions Chemical reactions in the interstellar medium occur at extremely low energies. Probably the most fundamental ion-molecule reaction involves hydrogen ions with hydrogen molecules. The quantum mechanical tunnelling rate for the same reaction using the hydrogen isotope deuterium, D− + H2 → H− + HD, has been measured experimentally in an ion trap. The deuterium was placed in an ion trap and cooled. The trap was then filled with hydrogen. At the temperatures used in the experiment, the energy barrier for reaction would not allow the reaction to succeed with classical dynamics alone. Quantum tunneling allowed reactions to happen in rare collisions.
It was calculated from the experimental data that the reaction occurred in roughly one of every hundred billion collisions. Kinetic isotope effect In chemical kinetics, the substitution of a light isotope of an element with a heavier one typically results in a slower reaction rate. This is generally attributed to differences in the zero-point vibrational energies for chemical bonds containing the lighter and heavier isotopes and is generally modeled using transition state theory. However, in certain cases, large isotopic effects are observed that cannot be accounted for by a semi-classical treatment, and quantum tunnelling is required. R. P. Bell developed a modified treatment of Arrhenius kinetics that is commonly used to model this phenomenon. Astrochemistry in interstellar clouds By including quantum tunnelling, the astrochemical syntheses of various molecules in interstellar clouds can be explained, such as the synthesis of molecular hydrogen, water (ice) and the prebiotically important formaldehyde. Tunnelling of molecular hydrogen has been observed in the lab. Quantum biology Quantum tunnelling is among the central non-trivial quantum effects in quantum biology. Here it is important both as electron tunnelling and proton tunnelling. Electron tunnelling is a key factor in many biochemical redox reactions (photosynthesis, cellular respiration) as well as enzymatic catalysis. Proton tunnelling is a key factor in spontaneous DNA mutation. Spontaneous mutation occurs when normal DNA replication takes place after a particularly significant proton has tunnelled. A hydrogen bond joins DNA base pairs. Along a hydrogen bond there is a double-well potential, its two wells separated by a potential energy barrier. It is believed that the double well potential is asymmetric, with one well deeper than the other such that the proton normally rests in the deeper well. For a mutation to occur, the proton must have tunnelled into the shallower well. The proton's movement from its regular position is called a tautomeric transition. If DNA replication takes place in this state, the base pairing rule for DNA may be jeopardised, causing a mutation. Per-Olov Löwdin was the first to develop this theory of spontaneous mutation within the double helix. Other instances of quantum tunnelling-induced mutations in biology are believed to be a cause of ageing and cancer. Mathematical discussion Schrödinger equation The time-independent Schrödinger equation for one particle in one dimension can be written as −(ħ²/2m) d²Ψ(x)/dx² + V(x)Ψ(x) = EΨ(x), or equivalently d²Ψ(x)/dx² = (2m/ħ²) M(x) Ψ(x), where ħ is the reduced Planck constant, m is the particle mass, x represents distance measured in the direction of motion of the particle, Ψ is the Schrödinger wave function, V is the potential energy of the particle (measured relative to any convenient reference level), E is the energy of the particle that is associated with motion in the x-axis (measured relative to V), and M(x) is a quantity defined by V(x) − E, which has no accepted name in physics. The solutions of the Schrödinger equation take different forms for different values of x, depending on whether M(x) is positive or negative. When M(x) is constant and negative, then the Schrödinger equation can be written in the form d²Ψ(x)/dx² = −k²Ψ(x), with k² = −(2m/ħ²)M. The solutions of this equation represent travelling waves, with phase-constant +k or −k. Alternatively, if M(x) is constant and positive, then the Schrödinger equation can be written in the form d²Ψ(x)/dx² = κ²Ψ(x), with κ² = (2m/ħ²)M. The solutions of this equation are rising and falling exponentials in the form of evanescent waves.
When M(x) varies with position, the same difference in behaviour occurs, depending on whether M(x) is negative or positive. It follows that the sign of M(x) determines the nature of the medium, with negative M(x) corresponding to medium A and positive M(x) corresponding to medium B. It thus follows that evanescent wave coupling can occur if a region of positive M(x) is sandwiched between two regions of negative M(x), hence creating a potential barrier. The mathematics of dealing with the situation where M(x) varies with x is difficult, except in special cases that usually do not correspond to physical reality. A full mathematical treatment appears in the 1965 monograph by Fröman and Fröman. Their ideas have not been incorporated into physics textbooks, but their corrections have little quantitative effect. WKB approximation The wave function is expressed as the exponential of a function: Ψ(x) = exp(Φ(x)), where Φ satisfies Φ''(x) + (Φ'(x))² = (2m/ħ²)(V(x) − E). Φ'(x) is then separated into real and imaginary parts: Φ'(x) = A(x) + iB(x), where A(x) and B(x) are real-valued functions. Substituting the second equation into the first and using the fact that the imaginary part needs to be 0 results in: A'(x) + A(x)² − B(x)² = (2m/ħ²)(V(x) − E) and B'(x) + 2A(x)B(x) = 0. To solve this equation using the semiclassical approximation, each function must be expanded as a power series in ħ. From the equations, the power series must start with at least an order of ħ⁻¹ to satisfy the real part of the equation; for a good classical limit starting with the highest power of the Planck constant possible is preferable, which leads to A(x) = (1/ħ) Σ ħ^k A_k(x) and B(x) = (1/ħ) Σ ħ^k B_k(x), with the following constraints on the lowest order terms: A₀(x)² − B₀(x)² = 2m(V(x) − E) and A₀(x)B₀(x) = 0. At this point two extreme cases can be considered. Case 1 If the amplitude varies slowly as compared to the phase, A₀(x) = 0 and B₀(x) = ±√(2m(E − V(x))), which corresponds to classical motion. Resolving the next order of expansion yields Ψ(x) ≈ C e^(±(i/ħ)∫√(2m(E − V(x))) dx + iθ) / (2m(E − V(x)))^(1/4). Case 2 If the phase varies slowly as compared to the amplitude, B₀(x) = 0 and A₀(x) = ±√(2m(V(x) − E)), which corresponds to tunneling. Resolving the next order of the expansion yields Ψ(x) ≈ (C₊ e^(+(1/ħ)∫√(2m(V(x) − E)) dx) + C₋ e^(−(1/ħ)∫√(2m(V(x) − E)) dx)) / (2m(V(x) − E))^(1/4). In both cases it is apparent from the denominator that both these approximate solutions are bad near the classical turning points, where E = V(x). Away from the potential hill, the particle acts similar to a free and oscillating wave; beneath the potential hill, the particle undergoes exponential changes in amplitude. By considering the behaviour at these limits and classical turning points a global solution can be made. To start, a classical turning point x₁ is chosen and (2m/ħ²)(V(x) − E) is expanded in a power series about x₁: (2m/ħ²)(V(x) − E) = v₁(x − x₁) + v₂(x − x₁)² + ⋯ Keeping only the first order term ensures linearity: (2m/ħ²)(V(x) − E) ≈ v₁(x − x₁). Using this approximation, the equation near x₁ becomes a differential equation: d²Ψ(x)/dx² = v₁(x − x₁)Ψ(x). This can be solved using Airy functions as solutions. Taking these solutions for all classical turning points, a global solution can be formed that links the limiting solutions. Given the two coefficients on one side of a classical turning point, the two coefficients on the other side of a classical turning point can be determined by using this local solution to connect them. Hence, the Airy function solutions will asymptote into sine, cosine and exponential functions in the proper limits. The relationships between C, θ and C₊, C₋ are C₊ = (1/2) C cos(θ − π/4) and C₋ = −C sin(θ − π/4). With the coefficients found, the global solution can be found. Therefore, the transmission coefficient for a particle tunneling through a single potential barrier is T = exp(−2 ∫ from x₁ to x₂ of √((2m/ħ²)(V(x) − E)) dx), where x₁ and x₂ are the two classical turning points for the potential barrier. For a rectangular barrier of height V₀ and width L, this expression simplifies to: T = exp(−2L√(2m(V₀ − E))/ħ). Faster than light Some physicists have claimed that it is possible for spin-zero particles to travel faster than the speed of light when tunnelling.
This appears to violate the principle of causality, since a frame of reference then exists in which the particle arrives before it has left. In 1998, Francis E. Low briefly reviewed the phenomenon of zero-time tunnelling. More recently, experimental tunnelling time data of phonons, photons, and electrons was published by Günter Nimtz. Another experiment, overseen by A. M. Steinberg, seems to indicate that particles could tunnel at apparent speeds faster than light. Other physicists, such as Herbert Winful, disputed these claims. Winful argued that the wave packet of a tunnelling particle propagates locally, so a particle cannot tunnel through the barrier non-locally. Winful also argued that the experiments that are purported to show non-local propagation have been misinterpreted. In particular, the group velocity of a wave packet does not measure its speed, but is related to the amount of time the wave packet is stored in the barrier. Moreover, if quantum tunneling is modeled with the relativistic Dirac equation, well-established mathematical theorems imply that the process is completely subluminal. Dynamical tunneling The concept of quantum tunneling can be extended to situations where there exists a quantum transport between regions that are classically not connected even if there is no associated potential barrier. This phenomenon is known as dynamical tunnelling. Tunnelling in phase space The concept of dynamical tunnelling is particularly suited to address the problem of quantum tunnelling in high dimensions (d>1). In the case of an integrable system, where bounded classical trajectories are confined onto tori in phase space, tunnelling can be understood as the quantum transport between semi-classical states built on two distinct but symmetric tori. Chaos-assisted tunnelling In real life, most systems are not integrable and display various degrees of chaos. Classical dynamics is then said to be mixed and the system phase space is typically composed of islands of regular orbits surrounded by a large sea of chaotic orbits. The existence of the chaotic sea, where transport is classically allowed, between the two symmetric tori then assists the quantum tunnelling between them. This phenomenon is referred to as chaos-assisted tunnelling and is characterized by sharp resonances of the tunnelling rate when varying any system parameter. Resonance-assisted tunnelling When ħ is small compared to the size of the regular islands, the fine structure of the classical phase space plays a key role in tunnelling. In particular the two symmetric tori are coupled "via a succession of classically forbidden transitions across nonlinear resonances" surrounding the two islands. Related phenomena Several phenomena have the same behavior as quantum tunnelling. Two examples are evanescent wave coupling (the application of Maxwell's wave-equation to light) and the application of the non-dispersive wave-equation from acoustics to "waves on strings". These effects are modeled similarly to the rectangular potential barrier. In these cases, there is one transmission medium through which the wave propagates that is the same or nearly the same throughout, and a second medium through which the wave travels differently. This can be described as a thin region of medium B between two regions of medium A.
The analysis of a rectangular barrier by means of the Schrödinger equation can be adapted to these other effects provided that the wave equation has travelling wave solutions in medium A but real exponential solutions in medium B. In optics, medium A is a vacuum while medium B is glass. In acoustics, medium A may be a liquid or gas and medium B a solid. For both cases, medium A is a region of space where the particle's total energy is greater than its potential energy and medium B is the potential barrier. These have an incoming wave and resultant waves in both directions. There can be more mediums and barriers, and the barriers need not be discrete. Approximations are useful in this case. A classical wave-particle association was originally analyzed as analogous to quantum tunneling, but subsequent analysis found a fluid dynamics cause related to the vertical momentum imparted to particles near the barrier. See also Dielectric barrier discharge Field electron emission Holstein–Herring method Proton tunneling Quantum cloning Superconducting tunnel junction Tunnel diode Tunnel junction White hole References Further reading External links Animation, applications and research linked to tunnel effect and other quantum phenomena (Université Paris Sud) Animated illustration of quantum tunneling Animated illustration of quantum tunneling in a RTD device Interactive Solution of Schrodinger Tunnel Equation Articles containing video clips Particle physics Quantum mechanics Solid state engineering
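As a worked complement to the WKB transmission coefficient derived in the mathematical discussion above, here is a short Python sketch that numerically evaluates T = exp(−(2/ħ)∫√(2m(V(x) − E)) dx) between the classical turning points of a parabolic barrier; the barrier shape and all parameter values are arbitrary illustrative choices, not taken from any experiment described in the article.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electronvolt, J

# Illustrative parabolic barrier: V(x) = V0 * (1 - (x/a)^2) for |x| < a.
V0 = 1.0 * EV      # barrier height, joules (assumed)
a = 1.0e-9         # barrier half-width, metres (assumed)
E = 0.5 * EV       # particle energy, joules (assumed)

# Classical turning points, where V(x) = E:
xt = a * math.sqrt(1 - E / V0)

# Evaluate the WKB integral with a simple midpoint rule.
N = 10000
dx = 2 * xt / N
integral = 0.0
for i in range(N):
    x = -xt + (i + 0.5) * dx
    V = V0 * (1 - (x / a) ** 2)
    integral += math.sqrt(2 * M_E * (V - E)) * dx

T = math.exp(-2 * integral / HBAR)
print(f"WKB transmission: T = {T:.2e}")   # ~3e-4 for these parameters

For this parabola the integral can also be done in closed form, (π/2)·xt·√(2m(V₀ − E)), which agrees with the numerical result and provides a useful check on the quadrature.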
Quantum tunnelling
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
5,004
[ "Theoretical physics", "Quantum mechanics", "Electronic engineering", "Condensed matter physics", "Particle physics", "Solid state engineering" ]
644,002
https://en.wikipedia.org/wiki/Alnico
Alnico is a family of iron alloys which, in addition to iron, are composed primarily of aluminium (Al), nickel (Ni), and cobalt (Co), hence the acronym al-ni-co. They also include copper, and sometimes titanium. Alnico alloys are ferromagnetic, and are used to make permanent magnets. Before the development of rare-earth magnets in the 1970s, they were the strongest permanent magnet type. Other trade names for alloys in this family are: Alni, Alcomax, Hycomax, Columax, and Ticonal. The composition of alnico alloys is typically 8–12% Al, 15–26% Ni, 5–24% Co, up to 6% Cu, up to 1% Ti, and the rest is Fe. The development of alnico began in 1931, when T. Mishima in Japan discovered that an alloy of iron, nickel, and aluminum had a coercivity double that of the best magnet steels of the time. Properties Alnico alloys can be magnetised to produce strong magnetic fields and have a high coercivity (resistance to demagnetization), thus making strong permanent magnets. Of the more commonly available magnets, only rare-earth magnets such as neodymium and samarium-cobalt are stronger. Alnico magnets produce magnetic field strength at their poles as high as 1500 gauss (0.15 tesla), or about 3000 times the strength of Earth's magnetic field. Some alnico brands are isotropic and can be efficiently magnetized in any direction. Other types, such as alnico 5 and alnico 8, are anisotropic, each having a preferred direction of magnetization, or orientation. Anisotropic alloys generally have greater magnetic capacity in a preferred orientation than isotropic types. Alnico's remanence (Br) may exceed 12,000 G (1.2 T), its coercivity (Hc) can be up to 1000 oersteds (80 kA/m), and its maximum energy product ((BH)max) can be up to 5.5 MG·Oe (44 kJ/m³). Therefore, alnico can produce a strong magnetic flux in closed magnetic circuits, but has relatively small resistance against demagnetization. The field strength at the poles of any permanent magnet depends very much on the shape and is usually well below the remanence strength of the material. Alnico alloys have some of the highest Curie temperatures of any magnetic material, although the maximal working temperature is typically limited to a considerably lower value. They are the only magnets that have useful magnetism even when heated red-hot. This property, as well as the alloys' brittleness and high melting point, results from the strong tendency toward order due to intermetallic bonding between aluminum and other constituents. They are also one of the most stable magnets if handled properly. Alnico magnets are electrically conductive, unlike ceramic magnets. Alnico 3 has a melting temperature of 1200–1450 °C. As of 2018, Alnico magnets cost about 44 USD/kg (US$20/lb) or US$4.30/BHmax. Classification Alnico magnets are traditionally classified using numbers assigned by the Magnetic Materials Producers Association (MMPA), for example, alnico 3 or alnico 5. These classifications indicate chemical composition and magnetic properties. (The classification numbers themselves do not directly relate to the magnet's properties; for instance, a higher number does not necessarily indicate a stronger magnet.) These classification numbers, while still in use, have been deprecated in favor of a new system by the MMPA, which designates Alnico magnets based on maximum energy product in megagauss-oersteds and intrinsic coercive force in kilooersteds, as well as an IEC classification system. Manufacturing process Alnico magnets are produced by casting or sintering processes.
Cast alnico is produced by conventional methods using resin-bonded sand molds, which can be intricate and detailed, thereby allowing for complex shapes to be produced. The produced alnico magnet typically has a rough surface. This process has higher initial tooling costs for mold creation. Sintered alnico magnets are formed using powdered metal manufacturing methods. While sintering can also produce a range of shapes, it may not be as suitable for extremely intricate or detailed designs compared to casting. Most alnico produced is anisotropic. As initially made, the magnetic directions of the grains are randomly oriented; anisotropic alnico magnets are then oriented by heating above a critical temperature and cooling in the presence of a magnetic field. Both isotropic and anisotropic alnico require proper heat treatment to develop optimal magnetic properties. Without it, alnico's coercivity is about 10 Oe, comparable to technical iron, a soft magnetic material. After the heat treatment alnico becomes a composite material, named "precipitation material"—it consists of iron- and cobalt-rich precipitates in a Ni–Al-rich matrix. Alnico's anisotropy is oriented along the desired magnetic axis by applying an external magnetic field to it during the precipitate particle nucleation, which occurs during cooling through a temperature range near the Curie point. There are local anisotropies of different orientations without an external field due to spontaneous magnetization. The precipitate structure is a "barrier" against magnetization changes, as it favors a few magnetization states, and much energy is required to drive the material into any intermediate state. Also, a weak magnetic field shifts the magnetization of the matrix phase only and is reversible. Uses Alnico magnets are widely used in industrial and consumer applications where strong permanent magnets are needed. Examples are electric motors, electric guitar pickups, microphones, sensors, loudspeakers, magnetron tubes, and cow magnets. In many applications they are being superseded by rare-earth magnets, whose stronger fields (Br) and larger energy products (B·Hmax) allow smaller-size magnets to be used for a given application. The high-temperature resistance of alnico magnets leads to many uses that cannot be filled by less resistant magnets, such as in magnetic stirring hotplates. References Further reading MMPA 0100-00, Standard Specifications for Permanent Magnet Materials Ferrous alloys Magnetic alloys Ferromagnetic materials Loudspeakers Nickel alloys
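As a quick arithmetic check on the magnetic figures quoted in the Properties section above, the following Python sketch converts alnico's headline CGS numbers to SI units; the conversion factors are standard, and the 0.5 G value used for Earth's field is a typical round number assumed for the comparison, not a measured local value.

import math

# Sketch: convert alnico's quoted magnetic figures from CGS to SI units.
G_PER_T = 1e4                          # gauss per tesla
OE_TO_A_PER_M = 1e3 / (4 * math.pi)    # oersted -> A/m
MGOE_TO_KJ_M3 = 7.9577                 # megagauss-oersted -> kJ/m^3

Br_G = 12000          # remanence, gauss
Hc_Oe = 1000          # coercivity, oersteds
BHmax_MGOe = 5.5      # maximum energy product

print(f"Br      = {Br_G / G_PER_T:.1f} T")                     # 1.2 T
print(f"Hc      = {Hc_Oe * OE_TO_A_PER_M / 1e3:.0f} kA/m")     # ~80 kA/m
print(f"(BH)max = {BHmax_MGOe * MGOE_TO_KJ_M3:.0f} kJ/m^3")    # ~44 kJ/m^3

# Pole field vs. an assumed typical Earth field of ~0.5 G:
print(f"1500 G is about {1500 / 0.5:.0f}x Earth's field")      # ~3000x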
Alnico
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,375
[ "Nickel alloys", "Ferrous alloys", "Ferromagnetic materials", "Electric and magnetic fields in matter", "Materials science", "Aluminium alloys", "Magnetic alloys", "Materials", "Alloys", "Matter" ]
644,443
https://en.wikipedia.org/wiki/AdS/CFT%20correspondence
In theoretical physics, the anti-de Sitter/conformal field theory correspondence (frequently abbreviated as AdS/CFT) is a conjectured relationship between two kinds of physical theories. On one side are anti-de Sitter spaces (AdS) that are used in theories of quantum gravity, formulated in terms of string theory or M-theory. On the other side of the correspondence are conformal field theories (CFT) that are quantum field theories, including theories similar to the Yang–Mills theories that describe elementary particles. The duality represents a major advance in the understanding of string theory and quantum gravity. This is because it provides a non-perturbative formulation of string theory with certain boundary conditions and because it is the most successful realization of the holographic principle, an idea in quantum gravity originally proposed by Gerard 't Hooft and promoted by Leonard Susskind. It also provides a powerful toolkit for studying strongly coupled quantum field theories. Much of the usefulness of the duality results from the fact that it is a strong–weak duality: when the fields of the quantum field theory are strongly interacting, the ones in the gravitational theory are weakly interacting and thus more mathematically tractable. This fact has been used to study many aspects of nuclear and condensed matter physics by translating problems in those subjects into more mathematically tractable problems in string theory. The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were soon elaborated on in two articles, one by Steven Gubser, Igor Klebanov and Alexander Polyakov, and another by Edward Witten. By 2015, Maldacena's article had over 10,000 citations, becoming the most highly cited article in the field of high energy physics. One of the most prominent examples of the AdS/CFT correspondence has been the AdS5/CFT4 correspondence: a relation between N = 4 supersymmetric Yang–Mills theory in 3+1 dimensions and type IIB superstring theory on AdS5 × S5. Background Quantum gravity and strings Current understanding of gravity is based on Albert Einstein's general theory of relativity. Formulated in 1915, general relativity explains gravity in terms of the geometry of space and time, or spacetime. It is formulated in the language of classical physics that was developed by physicists such as Isaac Newton and James Clerk Maxwell. The other nongravitational forces are explained in the framework of quantum mechanics. Developed in the first half of the twentieth century by a number of different physicists, quantum mechanics provides a radically different way of describing physical phenomena based on probability. Quantum gravity is the branch of physics that seeks to describe gravity using the principles of quantum mechanics. Currently, a popular approach to quantum gravity is string theory, which models elementary particles not as zero-dimensional points but as one-dimensional objects called strings. In the AdS/CFT correspondence, one typically considers theories of quantum gravity derived from string theory or its modern extension, M-theory. In everyday life, there are three familiar dimensions of space (up/down, left/right, and forward/backward), and there is one dimension of time. Thus, in the language of modern physics, one says that spacetime is four-dimensional.
One peculiar feature of string theory and M-theory is that these theories require extra dimensions of spacetime for their mathematical consistency: in string theory spacetime is ten-dimensional, while in M-theory it is eleven-dimensional. The quantum gravity theories appearing in the AdS/CFT correspondence are typically obtained from string and M-theory by a process known as compactification. This produces a theory in which spacetime has effectively a lower number of dimensions and the extra dimensions are "curled up" into circles. A standard analogy for compactification is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length, but as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling inside it would move in two dimensions. Quantum field theory The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory. In particle physics, quantum field theories form the basis for our understanding of elementary particles, which are modeled as excitations in the fundamental fields. Quantum field theories are also used throughout condensed matter physics to model particle-like objects called quasiparticles. In the AdS/CFT correspondence, one considers, in addition to a theory of quantum gravity, a certain kind of quantum field theory called a conformal field theory. This is a particularly symmetric and mathematically well behaved type of quantum field theory. Such theories are often studied in the context of string theory, where they are associated with the surface swept out by a string propagating through spacetime, and in statistical mechanics, where they model systems at a thermodynamic critical point. Overview of the correspondence Geometry of anti-de Sitter space In the AdS/CFT correspondence, one considers string theory or M-theory on an anti-de Sitter background. This means that the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space. In very elementary terms, anti-de Sitter space is a mathematical model of spacetime in which the notion of distance between points (the metric) is different from the notion of distance in ordinary Euclidean geometry. It is closely related to hyperbolic space, which can be viewed as a disk tessellated by triangles and squares. One can define the distance between points of this disk in such a way that all the triangles and squares are the same size and the circular outer boundary is infinitely far from any point in the interior. Now imagine a stack of hyperbolic disks where each disk represents the state of the universe at a given time. The resulting geometric object is three-dimensional anti-de Sitter space. It looks like a solid cylinder in which any cross section is a copy of the hyperbolic disk. Time runs along the vertical direction of the cylinder. The surface of this cylinder plays an important role in the AdS/CFT correspondence. As with the hyperbolic plane, anti-de Sitter space is curved in such a way that any point in the interior is actually infinitely far from this boundary surface. This construction describes a hypothetical universe with only two space and one time dimension, but it can be generalized to any number of dimensions.
Indeed, hyperbolic space can have more than two dimensions and one can "stack up" copies of hyperbolic space to get higher-dimensional models of anti-de Sitter space. Idea of AdS/CFT An important feature of anti-de Sitter space is its boundary (which looks like a cylinder in the case of three-dimensional anti-de Sitter space). One property of this boundary is that, locally around any point, it looks just like Minkowski space, the model of spacetime used in nongravitational physics. One can therefore consider an auxiliary theory in which "spacetime" is given by the boundary of anti-de Sitter space. This observation is the starting point for the AdS/CFT correspondence, which states that the boundary of anti-de Sitter space can be regarded as the "spacetime" for a conformal field theory. The claim is that this conformal field theory is equivalent to the gravitational theory on the bulk anti-de Sitter space in the sense that there is a "dictionary" for translating calculations in one theory into calculations in the other. Every entity in one theory has a counterpart in the other theory. For example, a single particle in the gravitational theory might correspond to some collection of particles in the boundary theory. In addition, the predictions in the two theories are quantitatively identical so that if two particles have a 40 percent chance of colliding in the gravitational theory, then the corresponding collections in the boundary theory would also have a 40 percent chance of colliding. Notice that the boundary of anti-de Sitter space has fewer dimensions than anti-de Sitter space itself. For instance, in the three-dimensional example described above, the boundary is a two-dimensional surface. The AdS/CFT correspondence is often described as a "holographic duality" because this relationship between the two theories is similar to the relationship between a three-dimensional object and its image as a hologram. Although a hologram is two-dimensional, it encodes information about all three dimensions of the object it represents. In the same way, theories that are related by the AdS/CFT correspondence are conjectured to be exactly equivalent, despite living in different numbers of dimensions. The conformal field theory is like a hologram that captures information about the higher-dimensional quantum gravity theory. Examples of the correspondence Following Maldacena's insight in 1997, theorists have discovered many different realizations of the AdS/CFT correspondence. These relate various conformal field theories to compactifications of string theory and M-theory in various numbers of dimensions. The theories involved are generally not viable models of the real world, but they have certain features, such as their particle content or high degree of symmetry, which make them useful for solving problems in quantum field theory and quantum gravity. The most famous example of the AdS/CFT correspondence states that type IIB string theory on the product space AdS5 × S5 is equivalent to N = 4 supersymmetric Yang–Mills theory on the four-dimensional boundary. In this example, the spacetime on which the gravitational theory lives is effectively five-dimensional (hence the notation AdS5), and there are five additional compact dimensions (encoded by the S5 factor). In the real world, spacetime is four-dimensional, at least macroscopically, so this version of the correspondence does not provide a realistic model of gravity.
Likewise, the dual theory is not a viable model of any real-world system as it assumes a large amount of supersymmetry. Nevertheless, as explained below, this boundary theory shares some features in common with quantum chromodynamics, the fundamental theory of the strong force. It describes particles similar to the gluons of quantum chromodynamics together with certain fermions. As a result, it has found applications in nuclear physics, particularly in the study of the quark–gluon plasma. Another realization of the correspondence states that M-theory on AdS7 × S4 is equivalent to the so-called (2,0)-theory in six dimensions. In this example, the spacetime of the gravitational theory is effectively seven-dimensional. The existence of the (2,0)-theory that appears on one side of the duality is predicted by the classification of superconformal field theories. It is still poorly understood because it is a quantum mechanical theory without a classical limit. Despite the inherent difficulty in studying this theory, it is considered to be an interesting object for a variety of reasons, both physical and mathematical. Yet another realization of the correspondence states that M-theory on AdS4 × S7 is equivalent to the ABJM superconformal field theory in three dimensions. Here the gravitational theory has four noncompact dimensions, so this version of the correspondence provides a somewhat more realistic description of gravity. Applications to quantum gravity A non-perturbative formulation of string theory In quantum field theory, one typically computes the probabilities of various physical events using the techniques of perturbation theory. Developed by Richard Feynman and others in the first half of the twentieth century, perturbative quantum field theory uses special diagrams called Feynman diagrams to organize computations. One imagines that these diagrams depict the paths of point-like particles and their interactions. Although this formalism is extremely useful for making predictions, these predictions are only possible when the strength of the interactions, the coupling constant, is small enough to reliably describe the theory as being close to a theory without interactions. The starting point for string theory is the idea that the point-like particles of quantum field theory can also be modeled as one-dimensional objects called strings. The interaction of strings is most straightforwardly defined by generalizing the perturbation theory used in ordinary quantum field theory. At the level of Feynman diagrams, this means replacing the one-dimensional diagram representing the path of a point particle by a two-dimensional surface representing the motion of a string. Unlike in quantum field theory, string theory does not yet have a full non-perturbative definition, so many of the theoretical questions that physicists would like to answer remain out of reach. The problem of developing a non-perturbative formulation of string theory was one of the original motivations for studying the AdS/CFT correspondence. As explained above, the correspondence provides several examples of quantum field theories that are equivalent to string theory on anti-de Sitter space. One can alternatively view this correspondence as providing a definition of string theory in the special case where the gravitational field is asymptotically anti-de Sitter (that is, when the gravitational field resembles that of anti-de Sitter space at spatial infinity).
Physically interesting quantities in string theory are defined in terms of quantities in the dual quantum field theory. Black hole information paradox In 1975, Stephen Hawking published a calculation that suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon. At first, Hawking's result posed a problem for theorists because it suggested that black holes destroy information. More precisely, Hawking's calculation seemed to conflict with one of the basic postulates of quantum mechanics, which states that physical systems evolve in time according to the Schrödinger equation. This property is usually referred to as unitarity of time evolution. The apparent contradiction between Hawking's calculation and the unitarity postulate of quantum mechanics came to be known as the black hole information paradox. The AdS/CFT correspondence resolves the black hole information paradox, at least to some extent, because it shows how a black hole can evolve in a manner consistent with quantum mechanics in some contexts. Indeed, one can consider black holes in the context of the AdS/CFT correspondence, and any such black hole corresponds to a configuration of particles on the boundary of anti-de Sitter space. These particles obey the usual rules of quantum mechanics and in particular evolve in a unitary fashion, so the black hole must also evolve in a unitary fashion, respecting the principles of quantum mechanics. In 2005, Hawking announced that the paradox had been settled in favor of information conservation by the AdS/CFT correspondence, and he suggested a concrete mechanism by which black holes might preserve information. Applications to quantum field theory Nuclear physics One physical system that has been studied using the AdS/CFT correspondence is the quark–gluon plasma, an exotic state of matter produced in particle accelerators. This state of matter arises for brief instants when heavy ions such as gold or lead nuclei are collided at high energies. Such collisions cause the quarks that make up atomic nuclei to deconfine at temperatures of approximately two trillion kelvins, conditions similar to those present at around 10⁻¹¹ seconds after the Big Bang. The physics of the quark–gluon plasma is governed by quantum chromodynamics, but this theory is mathematically intractable in problems involving the quark–gluon plasma. In an article appearing in 2005, Đàm Thanh Sơn and his collaborators showed that the AdS/CFT correspondence could be used to understand some aspects of the quark–gluon plasma by describing it in the language of string theory. By applying the AdS/CFT correspondence, Sơn and his collaborators were able to describe the quark–gluon plasma in terms of black holes in five-dimensional spacetime. The calculation showed that the ratio of two quantities associated with the quark–gluon plasma, the shear viscosity η and volume density of entropy s, should be approximately equal to a certain universal constant: η/s = ħ/(4πk), where ħ denotes the reduced Planck constant and k is the Boltzmann constant. In addition, the authors conjectured that this universal constant provides a lower bound for η/s in a large class of systems. In an experiment conducted at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory, the experimental result in one model was close to this universal constant, though not in another model.
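The universal constant in the ratio just quoted is easy to evaluate numerically. The following Python sketch computes ħ/(4πk) in SI units; this is simple arithmetic with CODATA constants, and the comparison is purely illustrative.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
K_B = 1.380649e-23       # Boltzmann constant, J/K

# Conjectured lower bound on shear viscosity / entropy density:
eta_over_s = HBAR / (4 * math.pi * K_B)
print(f"hbar / (4*pi*k) = {eta_over_s:.3e} K*s")   # ~6.08e-13 K*s

As noted above, one experimental analysis at the Relativistic Heavy Ion Collider found a value close to this constant, which is what made the string-theoretic prediction notable.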
Another important property of the quark–gluon plasma is that very high energy quarks moving through the plasma are stopped or "quenched" after traveling only a few femtometres. This phenomenon is characterized by a number called the jet quenching parameter, which relates the energy loss of such a quark to the squared distance traveled through the plasma. Calculations based on the AdS/CFT correspondence give an estimate of the jet quenching parameter that is comparable to its experimentally measured range. Condensed matter physics Over the decades, experimental condensed matter physicists have discovered a number of exotic states of matter, including superconductors and superfluids. These states are described using the formalism of quantum field theory, but some phenomena are difficult to explain using standard field theoretic techniques. Some condensed matter theorists, including Subir Sachdev, hope that the AdS/CFT correspondence will make it possible to describe these systems in the language of string theory and learn more about their behavior. So far some success has been achieved in using string theory methods to describe the transition of a superfluid to an insulator. A superfluid is a system of electrically neutral atoms that flows without any friction. Such systems are often produced in the laboratory using liquid helium, but recently experimentalists have developed new ways of producing artificial superfluids by pouring trillions of cold atoms into a lattice of criss-crossing lasers. These atoms initially behave as a superfluid, but as experimentalists increase the intensity of the lasers, they become less mobile and then suddenly transition to an insulating state. During the transition, the atoms behave in an unusual way. For example, the atoms slow to a halt at a rate that depends on the temperature and on the Planck constant, the fundamental parameter of quantum mechanics, which does not enter into the description of the other phases. This behavior has recently been understood by considering a dual description where properties of the fluid are described in terms of a higher dimensional black hole. Criticism With many physicists turning towards string-based methods to solve problems in nuclear and condensed matter physics, some theorists working in these areas have expressed doubts about whether the AdS/CFT correspondence can provide the tools needed to realistically model real-world systems. In a talk at the Quark Matter conference in 2006, the American physicist Larry McLerran pointed out that the super Yang–Mills theory that appears in the AdS/CFT correspondence differs significantly from quantum chromodynamics, making it difficult to apply these methods to nuclear physics. In a letter to Physics Today, Nobel laureate Philip W. Anderson voiced similar concerns about applications of AdS/CFT to condensed matter physics. History and development String theory and nuclear physics The discovery of the AdS/CFT correspondence in late 1997 was the culmination of a long history of efforts to relate string theory to nuclear physics. In fact, string theory was originally developed during the late 1960s and early 1970s as a theory of hadrons, the subatomic particles like the proton and neutron that are held together by the strong nuclear force. The idea was that each of these particles could be viewed as a different oscillation mode of a string.
In the late 1960s, experimentalists had found that hadrons fall into families called Regge trajectories with squared energy proportional to angular momentum, and theorists showed that this relationship emerges naturally from the physics of a rotating relativistic string. On the other hand, attempts to model hadrons as strings faced serious problems. One problem was that string theory includes a massless spin-2 particle whereas no such particle appears in the physics of hadrons. Such a particle would mediate a force with the properties of gravity. In 1974, Joël Scherk and John Schwarz suggested that string theory was therefore not a theory of nuclear physics as many theorists had thought but instead a theory of quantum gravity. At the same time, it was realized that hadrons are actually made of quarks, and the string theory approach was abandoned in favor of quantum chromodynamics. In quantum chromodynamics, quarks have a kind of charge that comes in three varieties called colors. In a paper from 1974, Gerard 't Hooft studied the relationship between string theory and nuclear physics from another point of view by considering theories similar to quantum chromodynamics, where the number of colors is some arbitrary number N, rather than three. In this article, 't Hooft considered a certain limit where N tends to infinity and argued that in this limit certain calculations in quantum field theory resemble calculations in string theory. Black holes and holography In 1975, Stephen Hawking published a calculation that suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon. This work extended previous results of Jacob Bekenstein who had suggested that black holes have a well-defined entropy. At first, Hawking's result appeared to contradict one of the main postulates of quantum mechanics, namely the unitarity of time evolution. Intuitively, the unitarity postulate says that quantum mechanical systems do not destroy information as they evolve from one state to another. For this reason, the apparent contradiction came to be known as the black hole information paradox. Later, in 1993, Gerard 't Hooft wrote a speculative paper on quantum gravity in which he revisited Hawking's work on black hole thermodynamics, concluding that the total number of degrees of freedom in a region of spacetime surrounding a black hole is proportional to the surface area of the horizon. This idea was promoted by Leonard Susskind and is now known as the holographic principle. The holographic principle and its realization in string theory through the AdS/CFT correspondence have helped elucidate the mysteries of black holes suggested by Hawking's work and are believed to provide a resolution of the black hole information paradox. In 2004, Hawking conceded that black holes do not violate quantum mechanics, and he suggested a concrete mechanism by which they might preserve information. Maldacena's paper On January 1, 1998, Juan Maldacena published a landmark paper that initiated the study of AdS/CFT. According to Alexander Markovich Polyakov, "[Maldacena's] work opened the flood gates." The conjecture immediately excited great interest in the string theory community and was considered in a paper by Steven Gubser, Igor Klebanov and Polyakov, and another paper of Edward Witten. These papers made Maldacena's conjecture more precise and showed that the conformal field theory appearing in the correspondence lives on the boundary of anti-de Sitter space. 
One special case of Maldacena's proposal says that N = 4 super Yang–Mills theory, a gauge theory similar in some ways to quantum chromodynamics, is equivalent to string theory in five-dimensional anti-de Sitter space. This result helped clarify the earlier work of 't Hooft on the relationship between string theory and quantum chromodynamics, taking string theory back to its roots as a theory of nuclear physics. Maldacena's results also provided a concrete realization of the holographic principle with important implications for quantum gravity and black hole physics. By the year 2015, Maldacena's paper had become the most highly cited paper in high energy physics, with over 10,000 citations. These and subsequent articles have provided considerable evidence that the correspondence is correct, although so far it has not been rigorously proved.

Generalizations

Three-dimensional gravity

In order to better understand the quantum aspects of gravity in our four-dimensional universe, some physicists have considered a lower-dimensional mathematical model in which spacetime has only two spatial dimensions and one time dimension. In this setting, the mathematics describing the gravitational field simplifies drastically, and one can study quantum gravity using familiar methods from quantum field theory, eliminating the need for string theory or other more radical approaches to quantum gravity in four dimensions.

Beginning with the work of J. David Brown and Marc Henneaux in 1986, physicists have noticed that quantum gravity in a three-dimensional spacetime is closely related to two-dimensional conformal field theory. In 1995, Henneaux and his coworkers explored this relationship in more detail, suggesting that three-dimensional gravity in anti-de Sitter space is equivalent to the conformal field theory known as Liouville field theory. Another conjecture formulated by Edward Witten states that three-dimensional gravity in anti-de Sitter space is equivalent to a conformal field theory with monster group symmetry. These conjectures provide examples of the AdS/CFT correspondence that do not require the full apparatus of string or M-theory.

dS/CFT correspondence

Unlike our universe, which is now known to be expanding at an accelerating rate, anti-de Sitter space is neither expanding nor contracting. Instead it looks the same at all times. In more technical language, one says that anti-de Sitter space corresponds to a universe with a negative cosmological constant, whereas the real universe has a small positive cosmological constant.

Although the properties of gravity at short distances should be somewhat independent of the value of the cosmological constant, it is desirable to have a version of the AdS/CFT correspondence for positive cosmological constant. In 2001, Andrew Strominger introduced a version of the duality called the dS/CFT correspondence. This duality involves a model of spacetime called de Sitter space with a positive cosmological constant. Such a duality is interesting from the point of view of cosmology since many cosmologists believe that the very early universe was close to being de Sitter space.

Kerr/CFT correspondence

Although the AdS/CFT correspondence is often useful for studying the properties of black holes, most of the black holes considered in the context of AdS/CFT are physically unrealistic. Indeed, as explained above, most versions of the AdS/CFT correspondence involve higher-dimensional models of spacetime with unphysical supersymmetry.
In 2009, Monica Guica, Thomas Hartman, Wei Song, and Andrew Strominger showed that the ideas of AdS/CFT could nevertheless be used to understand certain astrophysical black holes. More precisely, their results apply to black holes that are approximated by extremal Kerr black holes, which have the largest possible angular momentum compatible with a given mass. They showed that such black holes have an equivalent description in terms of conformal field theory. The Kerr/CFT correspondence was later extended to black holes with lower angular momentum. Higher spin gauge theories The AdS/CFT correspondence is closely related to another duality conjectured by Igor Klebanov and Alexander Markovich Polyakov in 2002. This duality states that certain "higher spin gauge theories" on anti-de Sitter space are equivalent to conformal field theories with O(N) symmetry. Here the theory in the bulk is a type of gauge theory describing particles of arbitrarily high spin. It is similar to string theory, where the excited modes of vibrating strings correspond to particles with higher spin, and it may help to better understand the string theoretic versions of AdS/CFT and possibly even prove the correspondence. In 2010, Simone Giombi and Xi Yin obtained further evidence for this duality by computing quantities called three-point functions. See also Algebraic holography Ambient construction Randall–Sundrum model Notes References Conformal field theory Quantum gravity String theory
AdS/CFT correspondence
[ "Physics", "Astronomy" ]
5,764
[ "Astronomical hypotheses", "Unsolved problems in physics", "Quantum gravity", "String theory", "Physics beyond the Standard Model" ]
644,550
https://en.wikipedia.org/wiki/Higgs%20mechanism
In the Standard Model of particle physics, the Higgs mechanism is essential to explain the generation mechanism of the property "mass" for gauge bosons. Without the Higgs mechanism, all bosons (one of the two classes of particles, the other being fermions) would be considered massless, but measurements show that the W+, W−, and Z0 bosons actually have relatively large masses: around 80 GeV/c2 for the W± bosons and around 91 GeV/c2 for the Z0 boson. The Higgs field resolves this conundrum. The simplest description of the mechanism adds to the Standard Model a quantum field (the Higgs field), which permeates all of space. Below some extremely high temperature, the field causes spontaneous symmetry breaking during interactions. The breaking of symmetry triggers the Higgs mechanism, causing the bosons with which it interacts to have mass. In the Standard Model, the phrase "Higgs mechanism" refers specifically to the generation of masses for the W±, and Z weak gauge bosons through electroweak symmetry breaking. Researchers at CERN's Large Hadron Collider announced results consistent with the Higgs particle on 14 March 2013, making it extremely likely that the field, or one like it, exists, and explaining how the Higgs mechanism takes place in nature. The view of the Higgs mechanism as involving spontaneous symmetry breaking of a gauge symmetry is technically incorrect since by Elitzur's theorem gauge symmetries can never be spontaneously broken. Rather, the Fröhlich–Morchio–Strocchi mechanism reformulates the Higgs mechanism in an entirely gauge invariant way, generally leading to the same results.

The mechanism was proposed in 1962 by Philip Warren Anderson, following work in the late 1950s on symmetry breaking in superconductivity and a 1960 paper by Yoichiro Nambu that discussed its application within particle physics. A theory able to finally explain mass generation without "breaking" gauge theory was published almost simultaneously by three independent groups in 1964: by Robert Brout and François Englert; by Peter Higgs; and by Gerald Guralnik, C. R. Hagen, and Tom Kibble. The Higgs mechanism is therefore also called the Brout–Englert–Higgs mechanism, or Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism, Anderson–Higgs mechanism, Anderson–Higgs–Kibble mechanism, Higgs–Kibble mechanism (by Abdus Salam), and ABEGHHK'tH mechanism (for Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble, and 't Hooft; by Peter Higgs). A Higgs-like mechanism in electrodynamics was also discovered independently by Eberly and Reiss, who studied it in reverse: the mass gained by a "gauge" Dirac field in the presence of an artificially displaced electromagnetic field acting as a Higgs field.

On 8 October 2013, following the discovery at CERN's Large Hadron Collider of a new particle that appeared to be the long-sought Higgs boson predicted by the theory, it was announced that Peter Higgs and François Englert had been awarded the 2013 Nobel Prize in Physics.

Standard Model

The Higgs mechanism was incorporated into modern particle physics by Steven Weinberg and Abdus Salam, and is an essential part of the Standard Model. In the Standard Model, at temperatures high enough that electroweak symmetry is unbroken, all elementary particles are massless. At a critical temperature, the Higgs field develops a vacuum expectation value; some theories suggest the symmetry is spontaneously broken by tachyon condensation, and the W and Z bosons acquire masses (also called "electroweak symmetry breaking", or EWSB).
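For orientation, in standard textbook conventions (not spelled out in the text above) the Higgs field acquires a vacuum expectation value $v \approx 246$ GeV, and the tree-level gauge boson masses read

$m_W = \tfrac{1}{2}\, g\, v, \qquad m_Z = \tfrac{1}{2}\sqrt{g^2 + g'^2}\,\, v, \qquad m_\gamma = 0,$

where $g$ and $g'$ are the SU(2)L and U(1)Y gauge couplings. These formulas reproduce the measured masses of roughly 80 and 91 GeV/c2 quoted above.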
In the history of the universe, this is believed to have happened about a picosecond after the hot big bang, when the universe was at a temperature of 159.5 ± 1.5 GeV. Fermions, such as the leptons and quarks in the Standard Model, can also acquire mass as a result of their interaction with the Higgs field, but not in the same way as the gauge bosons.

Structure of the Higgs field

In the standard model, the Higgs field is an SU(2) doublet (i.e. the standard representation with two complex components called isospin), which is a scalar under Lorentz transformations. Its electric charge is zero; its weak isospin is $\tfrac{1}{2}$ and the third component of weak isospin is $-\tfrac{1}{2}$; and its weak hypercharge (the charge for the U(1) gauge group defined up to an arbitrary multiplicative constant) is 1. Under U(1) rotations, it is multiplied by a phase, which thus mixes the real and imaginary parts of the complex spinor into each other, combining to the standard two-component complex representation of the group U(2).

The Higgs field, through the interactions specified (summarized, represented, or even simulated) by its potential, induces spontaneous breaking of three out of the four generators ("directions") of the gauge group U(2). This is often written as SU(2)L × U(1)Y (which is strictly speaking only the same on the level of infinitesimal symmetries), because the diagonal phase factor also acts on other fields – quarks in particular. Three out of its four components would ordinarily resolve as Goldstone bosons, if they were not coupled to gauge fields. However, after symmetry breaking, these three of the four degrees of freedom in the Higgs field mix with the three W and Z bosons (W+, W−, and Z0), and are only observable as components of these weak bosons, which are made massive by their inclusion; only the single remaining degree of freedom becomes a new scalar particle: the Higgs boson. The components that do not mix with Goldstone bosons form a massless photon.

The photon as the part that remains massless

The gauge group of the electroweak part of the standard model is SU(2)L × U(1)Y. The group SU(2) is the group of all 2-by-2 unitary matrices with unit determinant; all the orthonormal changes of coordinates in a complex two-dimensional vector space.

Rotating the coordinates so that the second basis vector points in the direction of the Higgs boson makes the vacuum expectation value of H the spinor $(0, v)$. The generators for rotations about the x, y, and z axes are given by half the Pauli matrices $\sigma_x$, $\sigma_y$, and $\sigma_z$, so that a rotation of angle $\theta$ about the z-axis takes the vacuum to $(0, v\, e^{-i\theta/2})$.

While the $T_x$ and $T_y$ generators mix up the top and bottom components of the spinor, the $T_z$ rotations only multiply each by opposite phases. This phase can be undone by a U(1) rotation of angle $\tfrac{1}{2}\theta$. Consequently, under both an SU(2) $T_z$-rotation and a U(1) rotation by an amount $\tfrac{1}{2}\theta$, the vacuum is invariant.

This combination of generators

$Q = T_3 + \tfrac{1}{2}\, Y$

defines the unbroken part of the gauge group, where $Q$ is the electric charge, $T_3$ is the generator of rotations around the 3-axis in the adjoint representation of SU(2) and $Y$ is the weak hypercharge generator of the U(1). This combination of generators (a 3 rotation in the SU(2) and a simultaneous U(1) rotation by half the angle) preserves the vacuum, and defines the unbroken gauge group in the standard model, namely the electric charge group. The part of the gauge field in this direction stays massless, and amounts to the physical photon. By contrast, the broken trace-orthogonal charge couples to the massive Z0 boson.
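As a quick consistency check, in the conventions used above (the vacuum sits in the lower, electrically neutral component of the doublet, with $T_3 = -\tfrac{1}{2}$ and $Y = 1$), the unbroken generator annihilates the vacuum:

$Q\,\langle H \rangle = \left( T_3 + \tfrac{1}{2} Y \right) \langle H \rangle = \left( -\tfrac{1}{2} + \tfrac{1}{2} \right) \langle H \rangle = 0,$

which is exactly the statement that the photon, the gauge field associated with this combination, remains massless.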
Consequences for fermions

In spite of the introduction of spontaneous symmetry breaking, the mass terms preclude chiral gauge invariance. For these fields, the mass terms should always be replaced by a gauge-invariant "Higgs" mechanism. One possibility is some kind of Yukawa coupling (see below) between the fermion field $\psi$ and the Higgs field $\phi$, with unknown couplings $G_\psi$, which after symmetry breaking (more precisely: after expansion of the Lagrange density around a suitable ground state) again results in the original mass terms, which are now, however (i.e., by introduction of the Higgs field) written in a gauge-invariant way. The Lagrange density for the Yukawa interaction of a fermion field $\psi$ and the Higgs field $\phi$ is

$\mathcal{L}_{\text{Fermion}}(\phi, A, \psi) = \overline{\psi}\, i\gamma^{\mu} D_{\mu}\, \psi + G_{\psi}\, \overline{\psi}\, \phi\, \psi,$

where again the gauge field $A$ only enters via the gauge covariant derivative operator $D_\mu$ (i.e., it is only indirectly visible). The quantities $\gamma^\mu$ are the Dirac matrices, and $G_\psi$ is the already-mentioned Yukawa coupling parameter for $\psi$. Now the mass-generation follows the same principle as above, namely from the existence of a finite expectation value $|\langle \phi \rangle|$. Again, this is crucial for the existence of the property mass.

History of research

Background

Spontaneous symmetry breaking offered a framework to introduce bosons into relativistic quantum field theories. However, according to Goldstone's theorem, these bosons should be massless. The only observed particles which could be approximately interpreted as Goldstone bosons were the pions, which Yoichiro Nambu related to chiral symmetry breaking.

A similar problem arises with Yang–Mills theory (also known as non-abelian gauge theory), which predicts massless spin-1 gauge bosons. Massless weakly-interacting gauge bosons lead to long-range forces, which are only observed for electromagnetism and the corresponding massless photon. Gauge theories of the weak force needed a way to describe massive gauge bosons in order to be consistent.

Discovery

That breaking gauge symmetries did not lead to massless particles was observed in 1961 by Julian Schwinger, but he did not demonstrate massive particles would eventuate. This was done in Philip Warren Anderson's 1962 paper but only in non-relativistic field theory; it also discussed consequences for particle physics but did not work out an explicit relativistic model. The relativistic model was developed in 1964 by three independent groups: Robert Brout and François Englert; Peter Higgs; and Gerald Guralnik, Carl Richard Hagen, and Tom Kibble. Slightly later, in 1965, but independently from the other publications, the mechanism was also proposed by Alexander Migdal and Alexander Polyakov, at that time Soviet undergraduate students. However, their paper was delayed by the editorial office of JETP, and was published late, in 1966.

The mechanism is closely analogous to phenomena previously discovered by Yoichiro Nambu involving the "vacuum structure" of quantum fields in superconductivity. A similar but distinct effect (involving an affine realization of what is now recognized as the Higgs field), known as the Stueckelberg mechanism, had previously been studied by Ernst Stueckelberg.

These physicists discovered that when a gauge theory is combined with an additional field that spontaneously breaks the symmetry group, the gauge bosons can consistently acquire a nonzero mass. In spite of the large values involved (see below) this permits a gauge theory description of the weak force, which was independently developed by Steven Weinberg and Abdus Salam in 1967.
Higgs's original article presenting the model was rejected by Physics Letters. When revising the article before resubmitting it to Physical Review Letters, he added a sentence at the end, mentioning that it implies the existence of one or more new, massive scalar bosons, which do not form complete representations of the symmetry group; these are the Higgs bosons.

The three papers by Brout and Englert; Higgs; and Guralnik, Hagen, and Kibble were each recognized as "milestone letters" by Physical Review Letters in 2008. While each of these seminal papers took similar approaches, the contributions and differences among the 1964 PRL symmetry breaking papers are noteworthy. All six physicists were jointly awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work.

Benjamin W. Lee is often credited with first naming the "Higgs-like" mechanism, although there is debate around when this first occurred. One of the first times the Higgs name appeared in print was in 1972 when Gerardus 't Hooft and Martinus J. G. Veltman referred to it as the "Higgs–Kibble mechanism" in their Nobel-winning paper.

Simple explanation of the theory, from its origins in superconductivity

The proposed Higgs mechanism arose as a result of theories proposed to explain observations in superconductivity. A superconductor does not allow penetration by external magnetic fields (the Meissner effect). This strange observation implies that the electromagnetic field somehow becomes short-ranged during this phenomenon. Successful theories arose to explain this during the 1950s, first phenomenologically in terms of a bosonic condensate (Ginzburg–Landau theory, 1950), and then microscopically in terms of paired fermions (BCS theory, 1957).

In these theories, superconductivity is interpreted as arising from a charged condensate. Initially, the condensate value does not have any preferred direction. This implies it is scalar, but its phase is capable of defining a gauge, in gauge-based field theories. To do this, the field must be charged. A charged scalar field must also be complex (or described another way, it contains at least two components, and a symmetry capable of rotating each into the other(s)). In naïve gauge theory, a gauge transformation of a condensate usually rotates the phase. In these circumstances, however, the condensate instead fixes a preferred choice of phase. It turns out that fixing the choice of gauge so that the condensate has the same phase everywhere also causes the electromagnetic field to gain an extra term. This extra term causes the electromagnetic field to become short range.

Goldstone's theorem also plays a role in such theories. The connection is, technically, that when a condensate breaks a symmetry, the state reached by acting with a symmetry generator on the condensate has the same energy as before. This means that some kinds of oscillation will not involve change of energy. Oscillations with unchanged energy imply that excitations (particles) associated with the oscillation are massless.

Once attention was drawn to this theory within particle physics, the parallels were clear. A change of the usually long-range electromagnetic field to become short-ranged, within a gauge-invariant theory, was exactly the effect needed for the weak-force bosons: a long-range force has massless gauge bosons, and a short-ranged force implies massive gauge bosons, suggesting that the weak interaction's gauge bosons acquire mass through a similar and equivalent effect.
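The link between range and mass invoked here can be made quantitative. For a force mediated by a boson of mass $m$, the static potential takes the Yukawa form (a standard result, stated here only for illustration):

$V(r) \propto \frac{e^{-m c r/\hbar}}{r},$

so a massless mediator ($m = 0$) gives the long-range Coulomb form $1/r$, while a massive mediator cuts the force off beyond distances of order $\hbar/(m c)$.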
The features of a field required to do this were also quite well defined – it would have to be a charged scalar field, with at least two components, and complex in order to support a symmetry able to rotate these into each other.

Examples

The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. In the non-relativistic context this is a superconductor, more formally known as the Landau model of a charged Bose–Einstein condensate. In the relativistic condensate, the condensate is a scalar field that is relativistically invariant.

Landau model

The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged, or, in field language, when a charged field has a nonzero vacuum expectation value. Interaction with the quantum fluid filling the space prevents certain forces from propagating over long distances (as it does inside a superconductor; e.g., in the Ginzburg–Landau theory).

A superconductor expels all magnetic fields from its interior, a phenomenon known as the Meissner effect. This was mysterious for a long time, because it implies that electromagnetic forces somehow become short-range inside the superconductor. Contrast this with the behavior of an ordinary metal. In a metal, the conductivity shields electric fields by rearranging charges on the surface until the total field cancels in the interior. But magnetic fields can penetrate to any distance, and if a magnetic monopole (an isolated magnetic pole) is surrounded by a metal the field can escape without collimating into a string. In a superconductor, however, electric charges move with no dissipation, and this allows for permanent surface currents, not just surface charges. When magnetic fields are introduced at the boundary of a superconductor, they produce surface currents which exactly neutralize them. The Meissner effect arises due to currents in a thin surface layer, whose thickness can be calculated from the simple model of Ginzburg–Landau theory, which treats superconductivity as a charged Bose–Einstein condensate.

Suppose that a superconductor contains bosons with charge $q$. The wavefunction of the bosons can be described by introducing a quantum field $\psi$, which obeys the Schrödinger equation as a field equation. In units where the reduced Planck constant $\hbar$ is set to 1:

$i\, \frac{\partial \psi}{\partial t} = -\frac{(\nabla - i q A)^2}{2m}\, \psi.$

The operator $\psi(x)$ annihilates a boson at the point $x$, while its adjoint $\psi^\dagger(x)$ creates a new boson at the same point. The wavefunction of the Bose–Einstein condensate is then the expectation value $\Psi$ of $\psi(x)$, which is a classical function that obeys the same equation. The interpretation of the expectation value is that it is the phase that one should give to a newly created boson so that it will coherently superpose with all the other bosons already in the condensate.

When there is a charged condensate, the electromagnetic interactions are screened. To see this, consider the effect of a gauge transformation on the field. A gauge transformation rotates the phase of the condensate by an amount which changes from point to point, and shifts the vector potential by a gradient:

$\psi \to e^{i q \phi(x)}\, \psi, \qquad A \to A + \nabla \phi.$

When there is no condensate, this transformation only changes the definition of the phase of $\psi$ at every point. But when there is a condensate, the phase of the condensate defines a preferred choice of phase.

The condensate wave function can be written as

$\psi(x) = \rho(x)\, e^{i\theta(x)},$

where $\rho$ is a real amplitude, which determines the local density of the condensate.
If the condensate were neutral, the flow would be along the gradients of $\theta$, the direction in which the phase of the Schrödinger field changes. If the phase $\theta$ changes slowly, the flow is slow and has very little energy. But now $\theta$ can be made equal to zero just by making a gauge transformation to rotate the phase of the field.

The energy of slow changes of phase can be calculated from the Schrödinger kinetic energy,

$H = \frac{1}{2m}\, \left| (\nabla - i q A)\psi \right|^2,$

and taking the density of the condensate $\rho$ to be constant,

$H \approx \frac{\rho^2}{2m}\, (\nabla\theta - q A)^2.$

Fixing the choice of gauge so that the condensate has the same phase everywhere, the electromagnetic field energy has an extra term,

$\frac{q^2 \rho^2}{2m}\, A^2.$

When this term is present, electromagnetic interactions become short-ranged. Every field mode, no matter how long the wavelength, oscillates with a nonzero frequency. The lowest frequency can be read off from the energy of a long wavelength mode of $A$,

$E \approx \frac{\dot{A}^2}{2} + \frac{q^2 \rho^2}{2m}\, A^2.$

This is a harmonic oscillator with frequency

$\omega = \sqrt{\frac{q^2 \rho^2}{m}}.$

The quantity $\rho^2$ (that is, $|\psi|^2$) is the density of the condensate of superconducting particles.

In an actual superconductor, the charged particles are electrons, which are fermions not bosons. So in order to have superconductivity, the electrons need to somehow bind into Cooper pairs. The charge of the condensate $q$ is therefore twice the electron charge $e$. The pairing in a normal superconductor is due to lattice vibrations, and is in fact very weak; this means that the pairs are very loosely bound. The description of a Bose–Einstein condensate of loosely bound pairs is actually more difficult than the description of a condensate of elementary particles, and was only worked out in 1957 by John Bardeen, Leon Cooper, and John Robert Schrieffer in the famous BCS theory.

Abelian Higgs mechanism

Gauge invariance means that certain transformations of the gauge field do not change the energy at all. If an arbitrary gradient is added to $A$, the energy of the field is exactly the same. This makes it difficult to add a mass term, because a mass term tends to push the field toward the value zero. But the zero value of the vector potential is not a gauge invariant idea. What is zero in one gauge is nonzero in another.

So in order to give mass to a gauge theory, the gauge invariance must be broken by a condensate. The condensate will then define a preferred phase, and the phase of the condensate will define the zero value of the field in a gauge-invariant way. The gauge-invariant definition is that a gauge field is zero when the phase change along any path from parallel transport is equal to the phase difference in the condensate wavefunction.

The condensate value is described by a quantum field with an expectation value, just as in the Ginzburg–Landau model. In order for the phase of the vacuum to define a gauge, the field must have a phase (also referred to as 'to be charged'). In order for a scalar field $\phi$ to have a phase, it must be complex, or (equivalently) it should contain two fields with a symmetry which rotates them into each other. The vector potential changes the phase of the quanta produced by the field when they move from point to point. In terms of fields, it defines how much to rotate the real and imaginary parts of the fields into each other when comparing field values at nearby points.

The only renormalizable model where a complex scalar field $\phi$ acquires a nonzero value is the 'Mexican-hat' model, where the field energy has a minimum away from zero. The action for this model is

$S(\phi) = \int \left( \frac{1}{2}\, |\partial \phi|^2 - \lambda \left( |\phi|^2 - \Phi^2 \right)^2 \right),$

which results in the Hamiltonian

$H(\phi) = \frac{1}{2}\, |\dot{\phi}|^2 + \frac{1}{2}\, |\nabla \phi|^2 + \lambda \left( |\phi|^2 - \Phi^2 \right)^2.$

The first term is the kinetic energy of the field.
The second term is the extra potential energy when the field varies from point to point. The third term is the potential energy when the field has any given magnitude. This potential energy, the Higgs potential, has a graph which looks like a Mexican hat, which gives the model its name. In particular, the minimum energy value is not at $\phi = 0$ but on the circle of points where the magnitude of the field satisfies $|\phi| = \Phi$.

When the field $\phi(x)$ is not coupled to electromagnetism, the Mexican-hat potential has flat directions. Starting in any one of the circle of vacua and changing the phase of the field from point to point costs very little energy. Mathematically, if

$\phi(x) = \Phi\, e^{i\theta(x)}$

with a constant prefactor, then the action for the field $\theta(x)$, i.e., the "phase" of the Higgs field $\phi(x)$, has only derivative terms. This is not a surprise: Adding a constant to $\theta(x)$ is a symmetry of the original theory, so different values of $\theta(x)$ cannot have different energies. This is an example of configuring the model to conform to Goldstone's theorem: Spontaneously broken continuous symmetries (normally) produce massless excitations.

The Abelian Higgs model is the Mexican-hat model coupled to electromagnetism:

$S(\phi, A) = \int \left( -\frac{1}{4}\, F^{\mu\nu} F_{\mu\nu} + \left| (\partial - i q A)\phi \right|^2 - \lambda \left( |\phi|^2 - \Phi^2 \right)^2 \right).$

The classical vacuum is again at the minimum of the potential, where the magnitude of the complex field is equal to $\Phi$. But now the phase of the field is arbitrary, because gauge transformations change it. This means that the field $\theta(x)$ can be set to zero by a gauge transformation, and does not represent any actual degrees of freedom at all.

Furthermore, choosing a gauge where the phase of the vacuum is fixed, the potential energy for fluctuations of the vector field is nonzero. So in the Abelian Higgs model, the gauge field acquires a mass. To calculate the magnitude of the mass, consider a constant value of the vector potential $A$ in the x-direction in the gauge where the condensate has constant phase. This is the same as a sinusoidally varying condensate in the gauge where the vector potential is zero. In the gauge where A is zero, the potential energy density in the condensate is the scalar gradient energy:

$E = \frac{1}{2} \left| \partial \left( \Phi\, e^{i q A x} \right) \right|^2 = \frac{1}{2}\, q^2 \Phi^2 A^2.$

This energy is the same as a mass term $\frac{1}{2} m^2 A^2$, where $m = q\, \Phi$.

Non-Abelian Higgs mechanism

The Non-Abelian Higgs model has the following action

$S(\phi, \mathbf{A}) = \int \left( \frac{1}{4g^2}\, \mathop{\mathrm{tr}} \left( F^{\mu\nu} F_{\mu\nu} \right) + |D\phi|^2 - V(|\phi|) \right),$

where now the non-Abelian field $\mathbf{A}$ is contained in the covariant derivative $D$ and in the tensor components $F^{\mu\nu}$ and $F_{\mu\nu}$ (the relation between $\mathbf{A}$ and those components is well-known from the Yang–Mills theory).

It is exactly analogous to the Abelian Higgs model. Now the field $\phi$ is in a representation of the gauge group, and the gauge covariant derivative is defined by the rate of change of the field minus the rate of change from parallel transport using the gauge field A as a connection. Again, the expectation value of $\phi$ defines a preferred gauge where the vacuum is constant, and fixing this gauge, fluctuations in the gauge field A come with a nonzero energy cost.

Depending on the representation of the scalar field, not every gauge field acquires a mass. A simple example is in the renormalizable version of an early electroweak model due to Julian Schwinger. In this model, the gauge group is SO(3) (or SU(2) − there are no spinor representations in the model), and the gauge invariance is broken down to U(1) or SO(2) at long distances. To make a consistent renormalizable version using the Higgs mechanism, introduce a scalar field $\phi$ which transforms as a vector (a triplet) of SO(3). If this field has a vacuum expectation value, it points in some direction in field space.
Without loss of generality, one can choose the z-axis in field space to be the direction that $\phi$ is pointing, and then the vacuum expectation value of $\phi$ is $(0, 0, v)$, where $v$ is a constant with dimensions of mass (in units where $c = \hbar = 1$). Rotations around the z-axis form a U(1) subgroup of SO(3) which preserves the vacuum expectation value of $\phi$, and this is the unbroken gauge group. Rotations around the x and y-axis do not preserve the vacuum, and the components of the SO(3) gauge field which generate these rotations become massive vector mesons. There are two massive W mesons in the Schwinger model, with a mass set by the mass scale $v$, and one massless U(1) gauge boson, similar to the photon.

The Schwinger model predicts magnetic monopoles at the electroweak unification scale, and does not predict the Z boson. It doesn't break electroweak symmetry properly as in nature. But historically, a model similar to this (but not using the Higgs mechanism) was the first in which the weak force and the electromagnetic force were unified.

Affine Higgs mechanism

Ernst Stueckelberg discovered a version of the Higgs mechanism by analyzing the theory of quantum electrodynamics with a massive photon. Effectively, Stueckelberg's model is a limit of the regular Mexican hat Abelian Higgs model, where the vacuum expectation value $H$ goes to infinity and the charge $e$ of the Higgs field goes to zero in such a way that their product $eH$ stays fixed. The mass of the Higgs boson is proportional to $H$, so the Higgs boson becomes infinitely massive and decouples, so is not present in the discussion. The vector meson mass, however, is equal to the product $eH$, and stays finite.

The interpretation is that when a U(1) gauge field does not require quantized charges, it is possible to keep only the angular part of the Higgs oscillations, and discard the radial part. The angular part of the Higgs field $\theta$ has the following gauge transformation law:

$\theta \to \theta + e\alpha, \qquad A \to A + \partial\alpha.$

The gauge covariant derivative for the angle (which is actually gauge invariant) is:

$D\theta = \partial\theta - eA.$

In order to keep fluctuations of $\theta$ finite and nonzero in this limit, $\theta$ should be rescaled so that its kinetic term in the action stays normalized. The action for the theta field is read off from the Mexican hat action by substituting $\phi = H e^{i\theta}$:

$S = \int \left( \frac{1}{4} F^2 + \frac{1}{2} (\partial\theta - m A)^2 \right),$

since $m = eH$ is the gauge boson mass. By making a gauge transformation to set $\theta = 0$, the gauge freedom in the action is eliminated, and the action becomes that of a massive vector field:

$S = \int \left( \frac{1}{4} F^2 + \frac{1}{2} m^2 A^2 \right).$

To have arbitrarily small charges requires that the U(1) is not the circle of unit complex numbers under multiplication, but the real numbers under addition, which is only different in the global topology. Such a U(1) group is non-compact. The field $\theta$ transforms as an affine representation of the gauge group. Among the allowed gauge groups, only non-compact U(1) admits affine representations, and the U(1) of electromagnetism is experimentally known to be compact, since charge quantization holds to extremely high accuracy.

The Higgs condensate in this model has infinitesimal charge, so interactions with the Higgs boson do not violate charge conservation. The theory of quantum electrodynamics with a massive photon is still a renormalizable theory, one in which electric charge is still conserved, but magnetic monopoles are not allowed. For non-Abelian gauge theory, there is no affine limit, and the Higgs oscillations cannot be too much more massive than the vectors.
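A compact way to summarize the bookkeeping in both the affine and the ordinary Higgs mechanism is the count of polarization states (a standard observation, added here for clarity):

$\underbrace{2}_{\text{massless vector}} \; + \; \underbrace{1}_{\text{real scalar } \theta} \; = \; \underbrace{3}_{\text{massive vector}},$

i.e., the scalar degree of freedom is "eaten" and reappears as the longitudinal polarization of the massive vector field.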
See also Electromagnetic mass Higgs bundle Quantum triviality Weinberg angle Yang–Mills–Higgs equations Notes References Further reading External links For a pedagogic introduction to electroweak symmetry breaking with step by step derivations, not found in texts, of many key relations, see Electroweak theory Phase transitions Quantum field theory Standard Model Symmetry
Higgs mechanism
[ "Physics", "Chemistry", "Mathematics" ]
6,117
[ "Standard Model", "Physical phenomena", "Phase transitions", "Quantum field theory", "Symmetry", "Critical phenomena", "Quantum mechanics", "Phases of matter", "Electroweak theory", "Particle physics", "Fundamental interactions", "Geometry", "Statistical mechanics", "Matter" ]
644,671
https://en.wikipedia.org/wiki/Mirror%20symmetry%20%28string%20theory%29
In algebraic geometry and theoretical physics, mirror symmetry is a relationship between geometric objects called Calabi–Yau manifolds. The term refers to a situation where two Calabi–Yau manifolds look very different geometrically but are nevertheless equivalent when employed as extra dimensions of string theory. Early cases of mirror symmetry were discovered by physicists. Mathematicians became interested in this relationship around 1990 when Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes showed that it could be used as a tool in enumerative geometry, a branch of mathematics concerned with counting the number of solutions to geometric questions. Candelas and his collaborators showed that mirror symmetry could be used to count rational curves on a Calabi–Yau manifold, thus solving a longstanding problem. Although the original approach to mirror symmetry was based on physical ideas that were not understood in a mathematically precise way, some of its mathematical predictions have since been proven rigorously. Today, mirror symmetry is a major research topic in pure mathematics, and mathematicians are working to develop a mathematical understanding of the relationship based on physicists' intuition. Mirror symmetry is also a fundamental tool for doing calculations in string theory, and it has been used to understand aspects of quantum field theory, the formalism that physicists use to describe elementary particles. Major approaches to mirror symmetry include the homological mirror symmetry program of Maxim Kontsevich and the SYZ conjecture of Andrew Strominger, Shing-Tung Yau, and Eric Zaslow. Overview Strings and compactification In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. These strings look like small segments or loops of ordinary string. String theory describes how strings propagate through space and interact with each other. On distance scales larger than the string scale, a string will look just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. Splitting and recombination of strings correspond to particle emission and absorption, giving rise to the interactions between particles. There are notable differences between the world described by string theory and the everyday world. In everyday life, there are three familiar dimensions of space (up/down, left/right, and forward/backward), and there is one dimension of time (later/earlier). Thus, in the language of modern physics, one says that spacetime is four-dimensional. One of the peculiar features of string theory is that it requires extra dimensions of spacetime for its mathematical consistency. In superstring theory, the version of the theory that incorporates a theoretical idea called supersymmetry, there are six extra dimensions of spacetime in addition to the four that are familiar from everyday experience. One of the goals of current research in string theory is to develop models in which the strings represent particles observed in high energy physics experiments. For such a model to be consistent with observations, its spacetime must be four-dimensional at the relevant distance scales, so one must look for ways to restrict the extra dimensions to smaller scales. 
In most realistic models of physics based on string theory, this is accomplished by a process called compactification, in which the extra dimensions are assumed to "close up" on themselves to form circles. In the limit where these curled up dimensions become very small, one obtains a theory in which spacetime has effectively a lower number of dimensions. A standard analogy for this is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length. However, as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling on the surface of the hose would move in two dimensions. Calabi–Yau manifolds Compactification can be used to construct models in which spacetime is effectively four-dimensional. However, not every way of compactifying the extra dimensions produces a model with the right properties to describe nature. In a viable model of particle physics, the compact extra dimensions must be shaped like a Calabi–Yau manifold. A Calabi–Yau manifold is a special space which is typically taken to be six-dimensional in applications to string theory. It is named after mathematicians Eugenio Calabi and Shing-Tung Yau. After Calabi–Yau manifolds had entered physics as a way to compactify extra dimensions, many physicists began studying these manifolds. In the late 1980s, Lance Dixon, Wolfgang Lerche, Cumrun Vafa, and Nick Warner noticed that given such a compactification of string theory, it is not possible to reconstruct uniquely a corresponding Calabi–Yau manifold. Instead, two different versions of string theory called type IIA string theory and type IIB can be compactified on completely different Calabi–Yau manifolds giving rise to the same physics. In this situation, the manifolds are called mirror manifolds, and the relationship between the two physical theories is called mirror symmetry. The mirror symmetry relationship is a particular example of what physicists call a physical duality. In general, the term physical duality refers to a situation where two seemingly different physical theories turn out to be equivalent in a nontrivial way. If one theory can be transformed so it looks just like another theory, the two are said to be dual under that transformation. Put differently, the two theories are mathematically different descriptions of the same phenomena. Such dualities play an important role in modern physics, especially in string theory. Regardless of whether Calabi–Yau compactifications of string theory provide a correct description of nature, the existence of the mirror duality between different string theories has significant mathematical consequences. The Calabi–Yau manifolds used in string theory are of interest in pure mathematics, and mirror symmetry allows mathematicians to solve problems in enumerative algebraic geometry, a branch of mathematics concerned with counting the numbers of solutions to geometric questions. A classical problem of enumerative geometry is to enumerate the rational curves on a Calabi–Yau manifold such as the one illustrated above. By applying mirror symmetry, mathematicians have translated this problem into an equivalent problem for the mirror Calabi–Yau, which turns out to be easier to solve. In physics, mirror symmetry is justified on physical grounds. However, mathematicians generally require rigorous proofs that do not require an appeal to physical intuition. 
From a mathematical point of view, the version of mirror symmetry described above is still only a conjecture, but there is another version of mirror symmetry in the context of topological string theory, a simplified version of string theory introduced by Edward Witten, which has been rigorously proven by mathematicians. In the context of topological string theory, mirror symmetry states that two theories called the A-model and B-model are equivalent in the sense that there is a duality relating them. Today mirror symmetry is an active area of research in mathematics, and mathematicians are working to develop a more complete mathematical understanding of mirror symmetry based on physicists' intuition.

History

The idea of mirror symmetry can be traced back to the mid-1980s when it was noticed that a string propagating on a circle of radius $R$ is physically equivalent to a string propagating on a circle of radius $1/R$ in appropriate units. This phenomenon is now known as T-duality and is understood to be closely related to mirror symmetry. In a paper from 1985, Philip Candelas, Gary Horowitz, Andrew Strominger, and Edward Witten showed that by compactifying string theory on a Calabi–Yau manifold, one obtains a theory roughly similar to the standard model of particle physics that also consistently incorporates an idea called supersymmetry. Following this development, many physicists began studying Calabi–Yau compactifications, hoping to construct realistic models of particle physics based on string theory. Cumrun Vafa and others noticed that given such a physical model, it is not possible to reconstruct uniquely a corresponding Calabi–Yau manifold. Instead, there are two Calabi–Yau manifolds that give rise to the same physics.

By studying the relationship between Calabi–Yau manifolds and certain conformal field theories called Gepner models, Brian Greene and Ronen Plesser found nontrivial examples of the mirror relationship. Further evidence for this relationship came from the work of Philip Candelas, Monika Lynker, and Rolf Schimmrigk, who surveyed a large number of Calabi–Yau manifolds by computer and found that they came in mirror pairs.

Mathematicians became interested in mirror symmetry around 1990 when physicists Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes showed that mirror symmetry could be used to solve problems in enumerative geometry that had resisted solution for decades or more. These results were presented to mathematicians at a conference at the Mathematical Sciences Research Institute (MSRI) in Berkeley, California in May 1991. During this conference, it was noticed that one of the numbers Candelas had computed for the counting of rational curves disagreed with the number obtained by Norwegian mathematicians Geir Ellingsrud and Stein Arild Strømme using ostensibly more rigorous techniques. Many mathematicians at the conference assumed that Candelas's work contained a mistake since it was not based on rigorous mathematical arguments. However, after examining their solution, Ellingsrud and Strømme discovered an error in their computer code and, upon fixing the code, they got an answer that agreed with the one obtained by Candelas and his collaborators.

In 1990, Edward Witten introduced topological string theory, a simplified version of string theory, and physicists showed that there is a version of mirror symmetry for topological string theory.
This statement about topological string theory is usually taken as the definition of mirror symmetry in the mathematical literature. In an address at the International Congress of Mathematicians in 1994, mathematician Maxim Kontsevich presented a new mathematical conjecture based on the physical idea of mirror symmetry in topological string theory. Known as homological mirror symmetry, this conjecture formalizes mirror symmetry as an equivalence of two mathematical structures: the derived category of coherent sheaves on a Calabi–Yau manifold and the Fukaya category of its mirror. Also around 1995, Kontsevich analyzed the results of Candelas, which gave a general formula for the problem of counting rational curves on a quintic threefold, and he reformulated these results as a precise mathematical conjecture. In 1996, Alexander Givental posted a paper that claimed to prove this conjecture of Kontsevich. Initially, many mathematicians found this paper hard to understand, so there were doubts about its correctness. Subsequently, Bong Lian, Kefeng Liu, and Shing-Tung Yau published an independent proof in a series of papers. Despite controversy over who had published the first proof, these papers are now collectively seen as providing a mathematical proof of the results originally obtained by physicists using mirror symmetry. In 2000, Kentaro Hori and Cumrun Vafa gave another physical proof of mirror symmetry based on T-duality. Work on mirror symmetry continues today with major developments in the context of strings on surfaces with boundaries. In addition, mirror symmetry has been related to many active areas of mathematics research, such as the McKay correspondence, topological quantum field theory, and the theory of stability conditions. At the same time, basic questions continue to vex. For example, mathematicians still lack an understanding of how to construct examples of mirror Calabi–Yau pairs, though there has been progress in understanding this issue. Applications Enumerative geometry Many of the important mathematical applications of mirror symmetry belong to the branch of mathematics called enumerative geometry. In enumerative geometry, one is interested in counting the number of solutions to geometric questions, typically using the techniques of algebraic geometry. One of the earliest problems of enumerative geometry was posed around the year 200 BCE by the ancient Greek mathematician Apollonius, who asked how many circles in the plane are tangent to three given circles. In general, the solution to the problem of Apollonius is that there are eight such circles. Enumerative problems in mathematics often concern a class of geometric objects called algebraic varieties which are defined by the vanishing of polynomials. For example, the Clebsch cubic (see the illustration) is defined using a certain polynomial of degree three in four variables. A celebrated result of nineteenth-century mathematicians Arthur Cayley and George Salmon states that there are exactly 27 straight lines that lie entirely on such a surface. Generalizing this problem, one can ask how many lines can be drawn on a quintic Calabi–Yau manifold, such as the one illustrated above, which is defined by a polynomial of degree five. This problem was solved by the nineteenth-century German mathematician Hermann Schubert, who found that there are exactly 2,875 such lines. 
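As a concrete example of such a hypersurface (a standard one, chosen here purely for illustration), the Fermat quintic threefold is the vanishing locus in the projective space $\mathbb{CP}^4$ of a degree-five polynomial:

$z_0^5 + z_1^5 + z_2^5 + z_3^5 + z_4^5 = 0.$

Any smooth quintic hypersurface in $\mathbb{CP}^4$ is a Calabi–Yau threefold, and Schubert's count of 2,875 lines applies to a generic such quintic.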
In 1986, geometer Sheldon Katz proved that the number of curves, such as circles, that are defined by polynomials of degree two and lie entirely in the quintic is 609,250. By the year 1991, most of the classical problems of enumerative geometry had been solved and interest in enumerative geometry had begun to diminish. According to mathematician Mark Gross, "As the old problems had been solved, people went back to check Schubert's numbers with modern techniques, but that was getting pretty stale." The field was reinvigorated in May 1991 when physicists Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes showed that mirror symmetry could be used to count the number of degree three curves on a quintic Calabi–Yau. Candelas and his collaborators found that these six-dimensional Calabi–Yau manifolds can contain exactly 317,206,375 curves of degree three. In addition to counting degree-three curves on a quintic three-fold, Candelas and his collaborators obtained a number of more general results for counting rational curves which went far beyond the results obtained by mathematicians. Although the methods used in this work were based on physical intuition, mathematicians have gone on to prove rigorously some of the predictions of mirror symmetry. In particular, the enumerative predictions of mirror symmetry have now been rigorously proven. Theoretical physics In addition to its applications in enumerative geometry, mirror symmetry is a fundamental tool for doing calculations in string theory. In the A-model of topological string theory, physically interesting quantities are expressed in terms of infinitely many numbers called Gromov–Witten invariants, which are extremely difficult to compute. In the B-model, the calculations can be reduced to classical integrals and are much easier. By applying mirror symmetry, theorists can translate difficult calculations in the A-model into equivalent but technically easier calculations in the B-model. These calculations are then used to determine the probabilities of various physical processes in string theory. Mirror symmetry can be combined with other dualities to translate calculations in one theory into equivalent calculations in a different theory. By outsourcing calculations to different theories in this way, theorists can calculate quantities that are impossible to calculate without the use of dualities. Outside of string theory, mirror symmetry is used to understand aspects of quantum field theory, the formalism that physicists use to describe elementary particles. For example, gauge theories are a class of highly symmetric physical theories appearing in the standard model of particle physics and other parts of theoretical physics. Some gauge theories which are not part of the standard model, but which are nevertheless important for theoretical reasons, arise from strings propagating on a nearly singular background. For such theories, mirror symmetry is a useful computational tool. Indeed, mirror symmetry can be used to perform calculations in an important gauge theory in four spacetime dimensions that was studied by Nathan Seiberg and Edward Witten and is also familiar in mathematics in the context of Donaldson invariants. There is also a generalization of mirror symmetry called 3D mirror symmetry which relates pairs of quantum field theories in three spacetime dimensions. 
Approaches

Homological mirror symmetry

In string theory and related theories in physics, a brane is a physical object that generalizes the notion of a point particle to higher dimensions. For example, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher-dimensional branes. The word brane comes from the word "membrane" which refers to a two-dimensional brane.

In string theory, a string may be open (forming a segment with two endpoints) or closed (forming a closed loop). D-branes are an important class of branes that arise when one considers open strings. As an open string propagates through spacetime, its endpoints are required to lie on a D-brane. The letter "D" in D-brane refers to a condition that it satisfies, the Dirichlet boundary condition.

Mathematically, branes can be described using the notion of a category. This is a mathematical structure consisting of objects, and for any pair of objects, a set of morphisms between them. In most examples, the objects are mathematical structures (such as sets, vector spaces, or topological spaces) and the morphisms are functions between these structures. One can also consider categories where the objects are D-branes and the morphisms between two branes $\alpha$ and $\beta$ are states of open strings stretched between $\alpha$ and $\beta$.

In the B-model of topological string theory, the D-branes are complex submanifolds of a Calabi–Yau together with additional data that arise physically from having charges at the endpoints of strings. Intuitively, one can think of a submanifold as a surface embedded inside the Calabi–Yau, although submanifolds can also exist in dimensions different from two. In mathematical language, the category having these branes as its objects is known as the derived category of coherent sheaves on the Calabi–Yau. In the A-model, the D-branes can again be viewed as submanifolds of a Calabi–Yau manifold. Roughly speaking, they are what mathematicians call special Lagrangian submanifolds. This means among other things that they have half the dimension of the space in which they sit, and they are length-, area-, or volume-minimizing. The category having these branes as its objects is called the Fukaya category.

The derived category of coherent sheaves is constructed using tools from complex geometry, a branch of mathematics that describes geometric curves in algebraic terms and solves geometric problems using algebraic equations. On the other hand, the Fukaya category is constructed using symplectic geometry, a branch of mathematics that arose from studies of classical physics. Symplectic geometry studies spaces equipped with a symplectic form, a mathematical tool that can be used to compute area in two-dimensional examples.

The homological mirror symmetry conjecture of Maxim Kontsevich states that the derived category of coherent sheaves on one Calabi–Yau manifold is equivalent in a certain sense to the Fukaya category of its mirror. This equivalence provides a precise mathematical formulation of mirror symmetry in topological string theory. In addition, it provides an unexpected bridge between two branches of geometry, namely complex and symplectic geometry.

Strominger–Yau–Zaslow conjecture

Another approach to understanding mirror symmetry was suggested by Andrew Strominger, Shing-Tung Yau, and Eric Zaslow in 1996.
According to their conjecture, now known as the SYZ conjecture, mirror symmetry can be understood by dividing a Calabi–Yau manifold into simpler pieces and then transforming them to get the mirror Calabi–Yau.

The simplest example of a Calabi–Yau manifold is a two-dimensional torus or donut shape. Consider a circle on this surface that goes once through the hole of the donut. An example is the red circle in the figure. There are infinitely many circles like it on a torus; in fact, the entire surface is a union of such circles.

One can choose an auxiliary circle $B$ (the pink circle in the figure) such that each of the infinitely many circles decomposing the torus passes through a point of $B$. This auxiliary circle is said to parametrize the circles of the decomposition, meaning there is a correspondence between them and points of $B$. The circle $B$ is more than just a list, however, because it also determines how these circles are arranged on the torus. This auxiliary space $B$ plays an important role in the SYZ conjecture.

The idea of dividing a torus into pieces parametrized by an auxiliary space can be generalized. Increasing the dimension from two to four real dimensions, the Calabi–Yau becomes a K3 surface. Just as the torus was decomposed into circles, a four-dimensional K3 surface can be decomposed into two-dimensional tori. In this case the space $B$ is an ordinary sphere. Each point on the sphere corresponds to one of the two-dimensional tori, except for twenty-four "bad" points corresponding to "pinched" or singular tori.

The Calabi–Yau manifolds of primary interest in string theory have six dimensions. One can divide such a manifold into 3-tori (three-dimensional objects that generalize the notion of a torus) parametrized by a 3-sphere $B$ (a three-dimensional generalization of a sphere). Each point of $B$ corresponds to a 3-torus, except for infinitely many "bad" points which form a grid-like pattern of segments on the Calabi–Yau and correspond to singular tori.

Once the Calabi–Yau manifold has been decomposed into simpler parts, mirror symmetry can be understood in an intuitive geometric way. As an example, consider the torus described above. Imagine that this torus represents the "spacetime" for a physical theory. The fundamental objects of this theory will be strings propagating through the spacetime according to the rules of quantum mechanics.

One of the basic dualities of string theory is T-duality, which states that a string propagating around a circle of radius $R$ is equivalent to a string propagating around a circle of radius $1/R$ in the sense that all observable quantities in one description are identified with quantities in the dual description. For example, a string has momentum as it propagates around a circle, and it can also wind around the circle one or more times. The number of times the string winds around a circle is called the winding number. If a string has momentum $p$ and winding number $n$ in one description, it will have momentum $n$ and winding number $p$ in the dual description. By applying T-duality simultaneously to all of the circles that decompose the torus, the radii of these circles become inverted, and one is left with a new torus which is "fatter" or "skinnier" than the original. This torus is the mirror of the original Calabi–Yau.

T-duality can be extended from circles to the two-dimensional tori appearing in the decomposition of a K3 surface or to the three-dimensional tori appearing in the decomposition of a six-dimensional Calabi–Yau manifold.
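The momentum–winding exchange described above can be checked numerically. The sketch below uses the standard momentum and winding contributions to the closed-string mass formula, $(n/R)^2 + (wR/\alpha')^2$, with the oscillator terms omitted and $\alpha' = 1$; the variable names are illustrative, not taken from any particular source.

# Minimal numerical check of T-duality: the momentum/winding spectrum on a
# circle of radius R matches the spectrum on a circle of radius 1/R (in
# units where alpha' = 1) once momentum and winding numbers are swapped.

def mass_squared(n, w, R, alpha_prime=1.0):
    """Momentum plus winding contribution to the closed-string mass squared."""
    return (n / R) ** 2 + (w * R / alpha_prime) ** 2

R = 2.7  # an arbitrary compactification radius
states = [(n, w) for n in range(-3, 4) for w in range(-3, 4)]

for n, w in states:
    original = mass_squared(n, w, R)
    dual = mass_squared(w, n, 1.0 / R)  # R -> 1/R with n and w exchanged
    assert abs(original - dual) < 1e-12

print("Spectrum is invariant under R -> 1/R with momentum and winding exchanged.")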
In general, the SYZ conjecture states that mirror symmetry is equivalent to the simultaneous application of T-duality to these tori. In each case, the space provides a kind of blueprint that describes how these tori are assembled into a Calabi–Yau manifold. See also Donaldson–Thomas theory Wall-crossing Notes References Further reading Popularizations Textbooks Algebraic geometry Symplectic geometry Mathematical physics String theory
Mirror symmetry (string theory)
[ "Physics", "Astronomy", "Mathematics" ]
4,848
[ "Astronomical hypotheses", "Applied mathematics", "Theoretical physics", "Fields of abstract algebra", "Algebraic geometry", "String theory", "Mathematical physics" ]
644,814
https://en.wikipedia.org/wiki/Einstein%20manifold
In differential geometry and mathematical physics, an Einstein manifold is a Riemannian or pseudo-Riemannian differentiable manifold whose Ricci tensor is proportional to the metric. They are named after Albert Einstein because this condition is equivalent to saying that the metric is a solution of the vacuum Einstein field equations (with cosmological constant), although both the dimension and the signature of the metric can be arbitrary, thus not being restricted to Lorentzian manifolds (including the four-dimensional Lorentzian manifolds usually studied in general relativity). Einstein manifolds in four Euclidean dimensions are studied as gravitational instantons. If M is the underlying n-dimensional manifold, and g is its metric tensor, the Einstein condition means that for some constant k, where Ric denotes the Ricci tensor of g. Einstein manifolds with are called Ricci-flat manifolds. The Einstein condition and Einstein's equation In local coordinates the condition that be an Einstein manifold is simply Taking the trace of both sides reveals that the constant of proportionality k for Einstein manifolds is related to the scalar curvature R by where n is the dimension of M. In general relativity, Einstein's equation with a cosmological constant Λ is where is the Einstein gravitational constant. The stress–energy tensor Tab gives the matter and energy content of the underlying spacetime. In vacuum (a region of spacetime devoid of matter) , and Einstein's equation can be rewritten in the form (assuming that ): Therefore, vacuum solutions of Einstein's equation are (Lorentzian) Einstein manifolds with k proportional to the cosmological constant. Examples Simple examples of Einstein manifolds include: All 2D manifolds admit Einstein metrics. In fact, in this dimension, a metric is Einstein if and only if it has constant Gauss curvature. The classical uniformization theorem for Riemann surfaces guarantees that there is such a metric in every conformal class on any 2-manifold. Any manifold with constant sectional curvature is an Einstein manifold—in particular: Euclidean space, which is flat, is a simple example of Ricci-flat, hence Einstein metric. The n-sphere, , with the round metric is Einstein with . Hyperbolic space with the canonical metric is Einstein with . Complex projective space, , with the Fubini–Study metric, have Calabi–Yau manifolds admit an Einstein metric that is also Kähler, with Einstein constant . Such metrics are not unique, but rather come in families; there is a Calabi–Yau metric in every Kähler class, and the metric also depends on the choice of complex structure. For example, there is a 60-parameter family of such metrics on K3, 57 parameters of which give rise to Einstein metrics which are not related by isometries or rescalings. Kähler–Einstein metrics exist on a variety of compact complex manifolds due to the existence results of Shing-Tung Yau, and the later study of K-stability especially in the case of Fano manifolds. Irreducible symmetric spaces, as classified by Elie Cartan, are always Einstein. Among these spaces, the compact ones all have positive Einstein constant . Examples of these include the Grassmannians , , and . Every such compact space has a so-called non-compact dual, which instead has negative Einstein constant . These dual pairs are related in manner that is exactly parallel to the relationship between spheres and hyperbolic spaces. 
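As an illustration of the constant-curvature examples listed above, the following sketch (assuming Python with the sympy library; not part of the article) computes the Ricci tensor of the round 2-sphere of radius r directly from the metric and checks the Einstein condition Ric = k g, which here holds with k = 1/r².

# Minimal sketch: verify that the round 2-sphere of radius r is Einstein.
import sympy as sp

theta, phi, r = sp.symbols('theta phi r', positive=True)
x = [theta, phi]
g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(theta)**2]])   # round metric
ginv = g.inv()
n = 2

# Christoffel symbols Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                         - sp.diff(g[b, c], x[d])) for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                     + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    term = sum(sp.diff(Gamma[a][b][c], x[a]) for a in range(n))
    term -= sum(sp.diff(Gamma[a][b][a], x[c]) for a in range(n))
    term += sum(Gamma[a][a][d] * Gamma[d][b][c] for a in range(n) for d in range(n))
    term -= sum(Gamma[a][c][d] * Gamma[d][b][a] for a in range(n) for d in range(n))
    return sp.simplify(term)

Ric = sp.Matrix(n, n, lambda b, c: ricci(b, c))
print(Ric)                            # diag(1, sin(theta)**2), i.e. g / r**2
print(sp.simplify(Ric - g / r**2))    # zero matrix: Einstein with k = 1/r**2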
One necessary condition for closed, oriented, 4-manifolds to be Einstein is satisfying the Hitchin–Thorpe inequality. However, this necessary condition is very far from sufficient, as further obstructions have been discovered by LeBrun, Sambusetti, and others. Applications Four-dimensional Riemannian Einstein manifolds are also important in mathematical physics as gravitational instantons in quantum theories of gravity. The term "gravitational instanton" is sometimes restricted to Einstein 4-manifolds whose Weyl tensor is anti-self-dual, and it is very often assumed that the metric is asymptotic to the standard metric on a finite quotient of Euclidean 4-space (such metrics are therefore complete but non-compact). In differential geometry, simply connected self-dual Einstein 4-manifolds coincide with the 4-dimensional, reverse-oriented hyperkähler manifolds in the Ricci-flat case, and are sometimes called quaternion Kähler manifolds otherwise. Higher-dimensional Lorentzian Einstein manifolds are used in modern theories of gravity, such as string theory, M-theory and supergravity. Hyperkähler and quaternion Kähler manifolds (which are special kinds of Einstein manifolds) also have applications in physics as target spaces for nonlinear σ-models with supersymmetry. Compact Einstein manifolds have been much studied in differential geometry, and many examples are known, although constructing them is often challenging. Compact Ricci-flat manifolds are particularly difficult to find: in the monograph on the subject by the pseudonymous author Arthur Besse, readers are offered a meal in a starred restaurant in exchange for a new example. See also Einstein–Hermitian vector bundle Osserman manifold Notes and references Riemannian manifolds Manifold Mathematical physics
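The Hitchin–Thorpe inequality mentioned in the obstruction discussion above states that a closed oriented 4-manifold carrying an Einstein metric must satisfy 2χ ≥ 3|τ|, where χ is the Euler characteristic and τ the signature; the small Python check below is an editorial illustration, not part of the article.

# Minimal sketch: the Hitchin–Thorpe inequality 2*chi >= 3*|tau| as a necessary
# condition for a closed oriented 4-manifold to carry an Einstein metric.
def satisfies_hitchin_thorpe(chi: int, tau: int) -> bool:
    return 2 * chi >= 3 * abs(tau)

print(satisfies_hitchin_thorpe(2, 0))     # 4-sphere: 4 >= 0, condition holds
print(satisfies_hitchin_thorpe(24, -16))  # K3 surface: 48 >= 48, the equality case
print(satisfies_hitchin_thorpe(7, 5))     # e.g. a connected sum of five CP^2's: 14 < 15, so no Einstein metric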
Einstein manifold
[ "Physics", "Mathematics" ]
1,064
[ "Applied mathematics", "Theoretical physics", "Space (mathematics)", "Riemannian manifolds", "Metric spaces", "Mathematical physics" ]
645,335
https://en.wikipedia.org/wiki/Diffusion%20equation
The diffusion equation is a parabolic partial differential equation. In physics, it describes the macroscopic behavior of many micro-particles in Brownian motion, resulting from the random movements and collisions of the particles (see Fick's laws of diffusion). In mathematics, it is related to Markov processes, such as random walks, and applied in many other fields, such as materials science, information theory, and biophysics. The diffusion equation is a special case of the convection–diffusion equation when bulk velocity is zero. It is equivalent to the heat equation under some circumstances. Statement The equation is usually written as: where is the density of the diffusing material at location and time and is the collective diffusion coefficient for density at location ; and represents the vector differential operator del. If the diffusion coefficient depends on the density then the equation is nonlinear, otherwise it is linear. The equation above applies when the diffusion coefficient is isotropic; in the case of anisotropic diffusion, is a symmetric positive definite matrix, and the equation is written (for three dimensional diffusion) as: The diffusion equation has numerous analytic solutions. If is constant, then the equation reduces to the following linear differential equation: which is identical to the heat equation. Historical origin The particle diffusion equation was originally derived by Adolf Fick in 1855. Derivation The diffusion equation can be trivially derived from the continuity equation, which states that a change in density in any part of the system is due to inflow and outflow of material into and out of that part of the system. Effectively, no material is created or destroyed: where j is the flux of the diffusing material. The diffusion equation can be obtained easily from this when combined with the phenomenological Fick's first law, which states that the flux of the diffusing material in any part of the system is proportional to the local density gradient: If drift must be taken into account, the Fokker–Planck equation provides an appropriate generalization. Discretization The diffusion equation is continuous in both space and time. One may discretize space, time, or both space and time, which arise in application. Discretizing time alone just corresponds to taking time slices of the continuous system, and no new phenomena arise. In discretizing space alone, the Green's function becomes the discrete Gaussian kernel, rather than the continuous Gaussian kernel. In discretizing both time and space, one obtains the random walk. Discretization in image processing The product rule is used to rewrite the anisotropic tensor diffusion equation, in standard discretization schemes, because direct discretization of the diffusion equation with only first order spatial central differences leads to checkerboard artifacts. The rewritten diffusion equation used in image filtering: where "tr" denotes the trace of the 2nd rank tensor, and superscript "T" denotes transpose, in which in image filtering D(ϕ, r) are symmetric matrices constructed from the eigenvectors of the image structure tensors. The spatial derivatives can then be approximated by two first order and a second order central finite differences. The resulting diffusion algorithm can be written as an image convolution with a varying kernel (stencil) of size 3 × 3 in 2D and 3 × 3 × 3 in 3D. 
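As a concrete illustration of discretizing both space and time, the sketch below (Python; all parameter values are illustrative only) advances the one-dimensional constant-coefficient diffusion equation ∂φ/∂t = D ∂²φ/∂x² with an explicit central-difference scheme, which is stable provided D·Δt/Δx² ≤ 1/2.

# Minimal sketch: explicit finite-difference solver for 1D constant-D diffusion.
import numpy as np

D = 1.0                        # diffusion coefficient
L, nx = 1.0, 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D           # respects the explicit stability limit D*dt/dx**2 <= 1/2

phi = np.zeros(nx)
phi[nx // 2] = 1.0 / dx        # approximate delta-function initial condition

for _ in range(500):
    lap = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2   # central second difference
    phi[1:-1] += dt * D * lap                            # boundaries held at zero

print(phi.sum() * dx)   # integrated density stays close to 1 while the pulse is far from the boundaries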
See also Continuity equation Heat equation Self-similar solutions Reaction-diffusion equation Fokker–Planck equation Fick's laws of diffusion Maxwell–Stefan equation Radiative transfer equation and diffusion theory for photon transport in biological tissue Streamline diffusion Numerical solution of the convection–diffusion equation References Further reading Carslaw, H. S. and Jaeger, J. C. (1959). Conduction of Heat in Solids Oxford: Clarendon Press Jacobs, M.H. (1935). Diffusion Processes Berlin/Heidelberg: Springer Crank, J. (1956). The Mathematics of Diffusion Oxford: Clarendon Press Mathews, Jon; Walker, Robert L. (1970). Mathematical methods of physics (2nd ed.), New York: W. A. Benjamin, Thambynayagam, R. K. M (2011). The Diffusion Handbook: Applied Solutions for Engineers. McGraw-Hill Ghez, R. (1988). A Primer Of Diffusion Problems, Wiley Ghez, R. (2001). Diffusion Phenomena. Long Island, NY, USA: Dover Publication Inc Pekalski, A. (1994). Diffusion Processes: Experiment, Theory, Simulations, Springer Bennett, T.D. (2013). Transport by Advection and Diffusion. John Wiley & Sons Vogel, G. (2019). Adventure Diffusion Springer Gillespie, D.T.; Seitaridou, E (2013). Simple Brownian Diffusion,Oxford University Press Nakicenovic, N.; Griübler, A.: (1991). Diffusion of Technologies and Social Behavior; Springer Michaud, G.; Alecian, G.; Richer, G.: (2013). Atomic Diffusion in Stars, Springer Stroock, D. W.:, Varadhan, S.R.S.: (2006). Multidimensional diffusion processes, Springer Zhuoqun, W., Yin J., Li H., Zhao J., Jingxue Y., and Huilai L. (2001). Nonlinear diffusion equations, World Scientific Shewmon, P. (1989). Diffusion in Solids, Wiley Banks, R.B. (2010). Growth and diffusion phenomena, Springer Roque-Malherbe, R.M.A. (2007). Adsorption and Diffusion in Nanoporous Materials, CRC Press Cunningham, R. (1980). Diffusion in gases and porous media, Plenum Pasquill, F., Smith, F.B. (1983). Atmospheric diffusion, Horwood Ikeda, N., Watanabe, S. (1981). Stochastic Differential Equations and Diffusion Processes, Elsevier, Academic Press Philibert, J., Laskar, A.L., Bocquet, J.L., Brebec, G., Monty, C. (1990). Diffusion in Materials, Springer Netherlands Freedman, D., (1983). Brownian Motion and Diffusion, Springer-Verlag New York Nagasawa, M., (1993). Schrödinger Equations and Diffusion Theory, Birkhäuser Burgers, J.M., (1974). The Nonlinear Diffusion Equation: Asymptotic Solutions and Statistical Problems,Springer Netherlands Ito, S., (1992). Diffusion Equations, American Mathematical Society Krylov, N. V. (1994). Introduction to the Theory of Diffusion Processes, American Mathematical Society Knight, F.B., (1981). Essentials of Brownian Motion and Diffusion, American Mathematical Society Ibe, O.C., (2013). Elements of random walk and diffusion processes, Wiley Dattagupta, S. (2013). Diffusion: Formalism and Applications, CRC Press External links Diffusion Calculator for Impurities & Dopants in Silicon A tutorial on the theory behind and solution of the Diffusion Equation. Classical and nanoscale diffusion (with figures and animations) Diffusion Partial differential equations Parabolic partial differential equations Functions of space and time it:Leggi di Fick
Diffusion equation
[ "Physics", "Chemistry" ]
1,526
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Functions of space and time", "Spacetime" ]
645,792
https://en.wikipedia.org/wiki/Chronology%20protection%20conjecture
The chronology protection conjecture is a hypothesis first proposed by Stephen Hawking that laws of physics beyond those of standard general relativity prevent time travel—even when the latter theory states that it should be possible (such as in scenarios where faster than light travel is allowed). The permissibility of time travel is represented mathematically by the existence of closed timelike curves in some solutions to the field equations of general relativity. The chronology protection conjecture should be distinguished from chronological censorship under which every closed timelike curve passes through an event horizon, which might prevent an observer from detecting the causal violation (also known as chronology violation). Etymology In a 1992 paper, Hawking uses the metaphorical device of a "Chronology Protection Agency" as a personification of the aspects of physics that make time travel impossible at macroscopic scales, thus apparently preventing temporal paradoxes. He says: The idea of the Chronology Protection Agency appears to be drawn playfully from the Time Patrol or Time Police concept, which has been used in many works of science fiction such as Poul Anderson's series of Time Patrol stories or Isaac Asimov's novel The End of Eternity, or in the television series Doctor Who. "The Chronology Protection Case" by Paul Levinson, published after Hawking's paper, posits a universe that goes so far as to murder any scientists who are close to inventing any means of time travel. Larry Niven, in his short story ‘Rotating Cylinders and the possibility of Global Causality Violation’ expands this concept so that the universe causes environmental catastrophe, or global civil war, or the local sun going nova, to any civilisation which shows any sign of successful construction. General relativity and quantum corrections Many attempts to generate scenarios for closed timelike curves have been suggested, and the theory of general relativity does allow them in certain circumstances. Some theoretical solutions in general relativity that contain closed timelike curves would require an infinite universe with certain features that our universe does not appear to have, such as the universal rotation of the Gödel metric or the rotating cylinder of infinite length known as a Tipler cylinder. However, some solutions allow for the creation of closed timelike curves in a bounded region of spacetime, with the Cauchy horizon being the boundary between the region of spacetime where closed timelike curves can exist and the rest of spacetime where they cannot. One of the first such bounded time travel solutions found was constructed from a traversable wormhole, based on the idea of taking one of the two "mouths" of the wormhole on a round-trip journey at relativistic speed to create a time difference between it and the other mouth (see the discussion at Wormhole#Time travel). General relativity does not include quantum effects on its own, and a full integration of general relativity and quantum mechanics would require a theory of quantum gravity, but there is an approximate method for modeling quantum fields in the curved spacetime of general relativity, known as semiclassical gravity. Initial attempts to apply semiclassical gravity to the traversable wormhole time machine indicated that at exactly the moment that wormhole would first allow for closed timelike curves, quantum vacuum fluctuations build up and drive the energy density to infinity in the region of the wormholes. 
This occurs when the two wormhole mouths, call them A and B, have been moved in such a way that it becomes possible for a particle or wave moving at the speed of light to enter mouth B at some time T2 and exit through mouth A at an earlier time T1, then travel back towards mouth B through ordinary space, and arrive at mouth B at the same time T2 that it entered B on the previous loop; in this way the same particle or wave can make a potentially infinite number of loops through the same regions of spacetime, piling up on itself. Calculations showed that this effect would not occur for an ordinary beam of radiation, because it would be "defocused" by the wormhole so that most of a beam emerging from mouth A would spread out and miss mouth B. But when the calculation was done for vacuum fluctuations, it was found that they would spontaneously refocus on the trip between the mouths, indicating that the pileup effect might become large enough to destroy the wormhole in this case. Uncertainty about this conclusion remained, because the semiclassical calculations indicated that the pileup would only drive the energy density to infinity for an infinitesimal moment of time, after which the energy density would die down. But semiclassical gravity is considered unreliable for large energy densities or short time periods that reach the Planck scale; at these scales, a complete theory of quantum gravity is needed for accurate predictions. So, it remains uncertain whether quantum-gravitational effects might prevent the energy density from growing large enough to destroy the wormhole. Stephen Hawking conjectured that not only would the pileup of vacuum fluctuations still succeed in destroying the wormhole in quantum gravity, but also that the laws of physics would ultimately prevent any type of time machine from forming; this is the chronology protection conjecture. Subsequent works in semiclassical gravity provided examples of spacetimes with closed timelike curves where the energy density due to vacuum fluctuations does not approach infinity in the region of spacetime outside the Cauchy horizon. However, in 1997 a general proof was found demonstrating that according to semiclassical gravity, the energy of the quantum field (more precisely, the expectation value of the quantum stress-energy tensor) must always be either infinite or undefined on the horizon itself. Both cases indicate that semiclassical methods become unreliable at the horizon and quantum gravity effects would be important there, consistent with the possibility that such effects would always intervene to prevent time machines from forming. A definite theoretical decision on the status of the chronology protection conjecture would require a full theory of quantum gravity as opposed to semiclassical methods. There are also some arguments from string theory that seem to support chronology protection, but string theory is not yet a complete theory of quantum gravity. Experimental observation of closed timelike curves would of course demonstrate this conjecture to be false, but short of that, if physicists had a theory of quantum gravity whose predictions had been well-confirmed in other areas, this would give them a significant degree of confidence in the theory's predictions about the possibility or impossibility of time travel. 
Other proposals that allow for backwards time travel but prevent time paradoxes, such as the Novikov self-consistency principle, which would ensure the timeline stays consistent, or the idea that a time traveler is taken to a parallel universe while their original timeline remains intact, do not qualify as "chronology protection". See also Causality Cosmic censorship hypothesis Novikov self-consistency principle Time travel Wormhole Notes References Hawking, S.W., (1992) The chronology protection conjecture. Phys. Rev. D46, 603–611. Matt Visser, "The quantum physics of chronology protection" in The Future of Theoretical Physics and Cosmology: Celebrating Stephen Hawking's 60th Birthday by G. W. Gibbons (Editor), E. P. S. Shellard (Editor), S. J. Rankin (Editor) External links https://web.archive.org/web/20101125122824/http://hawking.org.uk/index.php/lectures/63 https://plus.maths.org/content/time-travel-allowed — Kip Thorne discusses time travel in general relativity, and the basis in quantum physics for the chronology protection conjecture Time in physics Causality Time travel Conjectures
Chronology protection conjecture
[ "Physics", "Mathematics" ]
1,540
[ "Time in physics", "Physical phenomena", "Unsolved problems in mathematics", "Physical quantities", "Time", "Time travel", "Conjectures", "Spacetime", "Mathematical problems" ]
14,878,404
https://en.wikipedia.org/wiki/MTA3
Metastasis-associated protein MTA3 is a protein that in humans is encoded by the MTA3 gene. MTA3 protein localizes in the nucleus as well as in other cellular compartments. MTA3 is a component of the nucleosome remodeling and deacetylase (NuRD) complex and participates in gene expression. The expression pattern of MTA3 is opposite to that of MTA1 and MTA2 during mammary gland tumorigenesis. However, MTA3 is also overexpressed in a variety of human cancers. Discovery Mouse Mta3 was initially identified as a partial cDNA with open reading frames in a screening of a mouse keratinocyte cDNA library with a human MTA1 partial fragment by My G. Mahoney's research team. The full-length Mta3 cDNA was cloned through 5'-RACE methodology using RNA from C57BL/6J mouse skin. The deduced amino acid sequence and its comparison with the sequences in GenBank established MTA3 as the third MTA family member. Gene and spliced variants The Mta3 gene is localized on chromosome 12p in mice and MTA3 on 2p21 in humans. The human MTA3 gene contains 20 exons and 19 alternatively spliced transcripts. Of these, nine MTA3 transcripts are predicted to code for six proteins of 392, 514, 515, 537, 590 and 594 amino acids, while two MTA3 transcripts code for polypeptides of 18 and 91 amino acids. The remaining 10 transcripts are non-coding RNAs. The murine Mta3 gene contains nine transcripts, six of which are predicted to code for proteins ranging from 251 to 591 amino acids, while one transcript codes for a 40-amino-acid polypeptide. The murine Mta3 gene also contains two predicted non-coding RNAs. Structure The overall organization of MTA3 protein domains is similar to that of the other two family members, with a BAH (Bromo-Adjacent Homology) domain, an ELM2 (egl-27 and MTA1 homology) domain, a SANT (SWI, ADA2, N-CoR, TFIIIB-B) domain, a GATA-like zinc finger, and one predicted bipartite nuclear localization signal (NLS). The SH3 motif of Mta3 allows it to interact with Fyn and Grb2, both SH3-containing signaling proteins. Function Functions of MTA3 are believed to be differentially regulated depending on the cancer type. For example, MTA3 expression is downregulated in breast cancer and endometrioid adenocarcinomas. In contrast, MTA3 is overexpressed in non-small cell lung cancer and in human placenta and chorionic carcinoma cells. In breast cancer, loss of MTA3 promotes EMT and invasiveness of breast cancer cells via upregulation of Snail, which in turn represses the adhesion molecule E-cadherin. In the mammary epithelium and breast cancer cells, MTA3 is an estrogen-regulated gene and part of a larger regulatory network involving MTA1 and other MTA family members, all modifiers of hormone response, and participates in the processes involved in growth and differentiation. Accordingly, the MTA3-NuRD complex regulates the expression of Wnt4 in mammary epithelial cells and mice, and controls Wnt4-dependent ductal morphogenesis. In contrast to its repressive actions, MTA3 also stimulates the expression of HIF1α as well as its target genes under hypoxic conditions in trophoblasts and is thought to be involved in differentiation during pregnancy. The MTA3-NuRD complex and its downstream targets have been shown to participate in primitive hematopoiesis and angiogenesis in a zebrafish model system. As a part of the BCL6 corepressor complex, MTA3 regulates BCL6-dependent repression of target genes, including PRDM1, and modulates the differentiation of B-cells. Regulation The estrogen receptor stimulates the expression of MTA3 in breast cancer cells. The SP1 transcription factor stimulates the transcription of MTA3.
MicroRNA-495 inhibits the level of MTA3 mRNA as well as the growth and migration of non-small cell lung cancer cells. β-Elemene, a compound used in traditional Chinese medicine, upregulates MTA3 expression in breast cancer cells. Targets The MTA3-NuRD complex represses Snail, a master regulator of epithelial-to-mesenchymal transition (EMT), Wnt4 expression in mammary epithelial cells, and BCL6-corepressor target genes. The MTA3-NuRD complex also interacts with GATA3 to regulate the expression of GATA3 downstream targets. In addition, MTA3 upregulates HIF1α and its transactivation activity in hypoxic conditions. Notes References External links Transcription factors
MTA3
[ "Chemistry", "Biology" ]
1,053
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,878,536
https://en.wikipedia.org/wiki/PRDM16
PR domain containing 16, also known as PRDM16, is a protein which in humans is encoded by the PRDM16 gene. PRDM16 acts as a transcription coregulator that controls the development of brown adipocytes in brown adipose tissue. Previously, this coregulator was believed to be present only in brown adipose tissue, but more recent studies have shown that PRDM16 is highly expressed in subcutaneous white adipose tissue as well. Function The protein encoded by this gene is a zinc finger transcription factor. PRDM16 controls the cell fate between muscle and brown fat cells. Loss of PRDM16 from brown fat precursors causes a loss of brown fat characteristics and promotes muscle differentiation. Clinical significance The reciprocal translocation t(1;3)(p36;q21) occurs in a subset of myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML). This gene is located near the 1p36.3 breakpoint and has been shown to be specifically expressed in the t(1:3)(p36;q21)-positive MDS/AML. The protein encoded by this gene contains an N-terminal PR domain. The translocation results in the overexpression of a truncated version of this protein that lacks the PR domain, which may play an important role in the pathogenesis of MDS and AML. Alternatively spliced transcript variants encoding distinct isoforms have been reported. PRDM16 in BAT Brown adipose tissue (BAT) oxidizes chemical energy to produce heat. This heat energy can act as a defense against hypothermia and obesity. PRDM16 is highly enriched in brown adipose cells as compared to white adipose cells, and plays a role in these thermogenic processes in brown adipose tissue. PRDM16 activates brown fat cell identity and can control the determination of brown adipose fate. A knock-out of PRDM16 in mice shows a loss of brown cell characteristics, showing that PRDM16 activity is important in determining brown adipose fate. Brown adipocytes consist of densely packed mitochondria that contain uncoupling protein 1 (UCP-1). UCP-1 plays a key role in brown adipocyte thermogenesis. The presence of PRDM16 in adipose tissue causes a significant up-regulation of thermogenic genes, such as UCP-1 and CIDEA, resulting in thermogenic heat production. Understanding and stimulating the thermogenic processes in brown adipocytes provides possible therapeutic options for treating obesity. PRDM16 in WAT White adipose tissue (WAT) primarily stores excess energy in the form of triglycerides. Recent research has shown that PRDM16 is present in subcutaneous white adipose tissue. The activity of PRDM16 in white adipose tissue leads to the production of brown fat-like adipocytes within white adipose tissue, called beige cells (also called brite cells). These beige cells have a brown adipose tissue-like phenotype and actions, including thermogenic processes seen in BAT. In mice, the levels of PRDM16 within WAT, specifically anterior subcutaneous WAT and inguinal subcutaneous WAT, is about 50% that of interscapular BAT, both in protein expression and in mRNA quantity. This expression takes place primarily within mature adipocytes. Transgenic aP2-PRDM16 mice were used in a study to observe the effects of PRDM16 expression in WAT. The study found that the presence of PRDM16 in subcutaneous WAT leads to a significant up-regulation of brown-fat selective genes UCP-1, CIDEA, and PPARGC1A. This up-regulation lead to the development of a BAT-like phenotype within the white adipose tissue. Expression of PRDM16 has also been shown to protect against high-fat diet induced weight gain. 
Seale et al.’s experiment with aP2-PRDM16 transgenic mice and wild type mice showed that transgenic mice eating a 60% high-fat diet had significantly less weight gain than wild type mice on the same diet. Seale et al. determined the weight difference was not due to differences in food intake, as both transgenic and wild type mice were consuming the same amount of food on a daily basis. Rather, the weight difference stemmed from higher energy expenditure in the transgenic mice. Another of Seale et al.’s experiments showed the transgenic mice consumed a greater volume of oxygen over a 72-hour period than the wild type mice, showing a greater amount of energy expenditure in the transgenic mice. This energy expenditure in turn is attributed to PRDM16’s ability to up-regulate UCP-1 and CIDEA gene expression, resulting in thermogenesis. If human WAT expresses PRDM16 as in mice, this WAT could be a potential target for stimulating energy expenditure and combating obesity. Notes References Further reading External links Transcription factors
PRDM16
[ "Chemistry", "Biology" ]
1,038
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,878,691
https://en.wikipedia.org/wiki/KDM5C
Lysine-specific demethylase 5C is an enzyme that in humans is encoded by the KDM5C gene. KDM5C belongs to the alpha-ketoglutarate-dependent hydroxylase superfamily. Function This gene is a member of the SMCY homolog family and encodes a protein with one ARID domain, one JmjC domain, one JmjN domain and two PHD-type zinc fingers. These DNA-binding motifs suggest that the protein is involved in the regulation of transcription and chromatin remodeling. Mutations in this gene have been associated with X-linked intellectual disability. Alternatively spliced variants that encode different protein isoforms have been described, but the full-length nature of only one has been determined. See also Xp11.2 duplication, section KDM5C References Further reading External links Transcription factors Genes on human chromosome X Human 2OG oxygenases EC 1.14.11
KDM5C
[ "Chemistry", "Biology" ]
197
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,879,054
https://en.wikipedia.org/wiki/CBFA2T2
Protein CBFA2T2 is a protein that in humans is encoded by the CBFA2T2 gene. Function In acute myeloid leukemia, especially in the M2 subtype, the t(8;21)(q22;q22) translocation is one of the most frequent karyotypic abnormalities. The translocation produces a chimeric gene made up of the 5'-region of the RUNX1 (AML1) gene fused to the 3'-region of the CBFA2T1 (MTG8) gene. The chimeric protein is thought to associate with the nuclear corepressor/histone deacetylase complex to block hematopoietic differentiation. The protein encoded by this gene binds to the AML1-MTG8 complex and may be important in promoting leukemogenesis. Several transcript variants are thought to exist for this gene, but the full-length natures of only three have been described. Interactions CBFA2T2 has been shown to interact with RUNX1T1. References Further reading External links Transcription factors
CBFA2T2
[ "Chemistry", "Biology" ]
228
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,879,214
https://en.wikipedia.org/wiki/RNF14
E3 ubiquitin-protein ligase RNF14 is an enzyme that in humans is encoded by the RNF14 gene. Function The protein encoded by this gene contains a RING zinc finger, a motif known to be involved in protein-protein interactions. This protein interacts with androgen receptor (AR) and may function as a coactivator that induces AR target gene expression in prostate. A dominant negative mutant of this gene has been demonstrated to inhibit the AR-mediated growth of prostate cancer. This protein also interacts with class III ubiquitin-conjugating enzymes (E2s) and may act as a ubiquitin-ligase (E3) in the ubiquitination of certain nuclear proteins. Five alternatively spliced transcript variants encoding two distinct isoforms have been reported. Another function of RNF14 protein relates to its regulation of the inter-relationship between bioenergetic status and inflammation. It influences the expression of mitochondrial and immune-related genes in skeletal muscle including cytokines and interferon regulatory factors. Interactions RNF14 has been shown to interact with the Androgen receptor. See also RING finger domain References Further reading External links Gene expression Transcription coregulators RING finger proteins
RNF14
[ "Chemistry", "Biology" ]
256
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
14,880,032
https://en.wikipedia.org/wiki/CRTC2
CREB regulated transcription coactivator 2, also known as CRTC2, is a protein which in humans is encoded by the CRTC2 gene. Function CRTC2, initially called TORC2, is a transcriptional coactivator for the transcription factor CREB and a central regulator of gluconeogenic gene expression in response to cAMP. CRTC2 is thought to drive tumorigenesis in STK11(LKB1)-null non-small cell lung cancers (NSCLC). Interactions CRTC2 has been shown to interact with SNF1LK2 and YWHAQ. References Further reading External links Gene expression Transcription coregulators
CRTC2
[ "Chemistry", "Biology" ]
143
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
14,880,151
https://en.wikipedia.org/wiki/PHF3
PHD finger protein 3 is a protein that in humans is encoded by the PHF3 gene. References Further reading External links Transcription factors
PHF3
[ "Chemistry", "Biology" ]
28
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,880,271
https://en.wikipedia.org/wiki/GRLF1
Glucocorticoid receptor DNA-binding factor 1 is a protein that in humans is encoded by the GRLF1 gene. Function The human glucocorticoid receptor DNA binding factor, which associates with the promoter region of the glucocorticoid receptor gene (hGR gene), is a repressor of glucocorticoid receptor transcription. The amino acid sequence deduced from the cDNA sequences shows the presence of three sequence motifs characteristic of a zinc finger and one motif suggestive of a leucine zipper in which a cysteine is found in place of one of the leucines. GRLF1 enhances the homologous down-regulation of wild-type hGR gene expression. Biochemical analysis suggests that the GRLF1 interaction is sequence-specific and that the transcriptional efficacy of GRLF1 is regulated through its interaction with a specific sequence motif. Its level of expression is regulated by glucocorticoids. References Further reading External links GRLF1 Info with links in the Cell Migration Gateway Transcription factors
GRLF1
[ "Chemistry", "Biology" ]
214
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,880,488
https://en.wikipedia.org/wiki/INSRR
Insulin receptor-related protein is a protein that in humans is encoded by the INSRR gene. References Further reading Tyrosine kinase receptors
INSRR
[ "Chemistry" ]
29
[ "Tyrosine kinase receptors", "Signal transduction" ]
17,724,517
https://en.wikipedia.org/wiki/Singular%20isothermal%20sphere%20profile
The singular isothermal sphere (SIS) profile is the simplest parameterization of the spatial distribution of matter in an astronomical system (e.g. galaxies, clusters of galaxies, etc.). Density distribution The density of the singular isothermal sphere is given by ρ(r) = σ_v² / (2πG r²), where σ_v is the velocity dispersion and G is the gravitational constant. The SIS profile is unphysical because of the singularity at zero radius and the fact that the total mass calculated by integrating the function out to infinite radius does not converge (i.e., is infinite). However, it is commonly utilized in the literature due to the simplicity of its form. See also Navarro–Frenk–White profile References Large-scale structure of the cosmos Equations of astronomy
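Integrating the density above over spherical shells gives the mass enclosed within radius r, M(r) = 2σ_v²r/G, which grows linearly with r and therefore diverges as r → ∞, as noted above. A minimal Python sketch (the numerical values are illustrative only):

# Minimal sketch: SIS density and enclosed mass in SI units.
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
kpc = 3.086e19       # metres per kiloparsec
M_sun = 1.989e30     # kilograms per solar mass

def sis_density(r, sigma_v):
    """rho(r) = sigma_v^2 / (2 pi G r^2)."""
    return sigma_v**2 / (2.0 * np.pi * G * r**2)

def sis_enclosed_mass(r, sigma_v):
    """M(r) = 2 sigma_v^2 r / G, obtained by integrating 4 pi r^2 rho(r)."""
    return 2.0 * sigma_v**2 * r / G

# Illustrative values: sigma_v = 200 km/s, r = 100 kpc
print(sis_enclosed_mass(100 * kpc, 200e3) / M_sun)   # roughly 2e12 solar masses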
Singular isothermal sphere profile
[ "Physics", "Astronomy" ]
141
[ "Concepts in astronomy", "Equations of astronomy" ]
17,728,011
https://en.wikipedia.org/wiki/Vine%E2%80%93Matthews%E2%80%93Morley%20hypothesis
The Vine–Matthews–Morley hypothesis, also known as the Morley–Vine–Matthews hypothesis, was the first key scientific test of the seafloor spreading theory of continental drift and plate tectonics. Its key impact was that it allowed the rates of plate motions at mid-ocean ridges to be computed. It states that the Earth's oceanic crust acts as a recorder of reversals in the geomagnetic field direction as seafloor spreading takes place. History Harry Hess proposed the seafloor spreading hypothesis in 1960 (published in 1962); the term "spreading of the seafloor" was introduced by geophysicist Robert S. Dietz in 1961. According to Hess, seafloor was created at mid-oceanic ridges by the convection of the Earth's mantle, pushing and spreading the older crust away from the ridge. Geophysicist Frederick John Vine and the Canadian geologist Lawrence W. Morley independently realized that if Hess's seafloor spreading theory was correct, then the rocks surrounding the mid-oceanic ridges should show symmetric patterns of magnetization reversals using newly collected magnetic surveys. Both of Morley's letters to Nature (February 1963) and Journal of Geophysical Research (April 1963) were rejected, hence Vine and his PhD adviser at Cambridge University, Drummond Hoyle Matthews, were first to publish the theory in September 1963. Some colleagues were skeptical of the hypothesis because of the numerous assumptions made—seafloor spreading, geomagnetic reversals, and remanent magnetism—all hypotheses that were still not widely accepted. The Vine–Matthews–Morley hypothesis describes the magnetic reversals of oceanic crust. Further evidence for this hypothesis came from Allan V. Cox and colleagues (1964) when they measured the remanent magnetization of lavas from land sites. Walter C. Pitman and J. R. Heirtzler offered further evidence with a remarkably symmetric magnetic anomaly profile from the Pacific-Antarctic Ridge. Marine magnetic anomalies The Vine–Matthews-Morley hypothesis correlates the symmetric magnetic patterns seen on the seafloor with geomagnetic field reversals. At mid-ocean ridges, new crust is created by the injection, extrusion, and solidification of magma. After the magma has cooled through the Curie point, ferromagnetism becomes possible and the magnetization direction of magnetic minerals in the newly formed crust orients parallel to the current background geomagnetic field vector. Once fully cooled, these directions are locked into the crust and it becomes permanently magnetized. Lithospheric creation at the ridge is considered continuous and symmetrical as the new crust intrudes into the diverging plate boundary. The old crust moves laterally and equally on either side of the ridge. Therefore, as geomagnetic reversals occur, the crust on either side of the ridge will contain a record of remanent normal (parallel) or reversed (antiparallel) magnetizations in comparison to the current geomagnetic field. A magnetometer towed above (near bottom, sea surface, or airborne) the seafloor will record positive (high) or negative (low) magnetic anomalies when over crust magnetized in the normal or reversed direction. The ridge crest is analogous to “twin-headed tape recorder”, recording the Earth's magnetic history. Typically there are positive magnetic anomalies over normally magnetized crust and negative anomalies over reversed crust. Local anomalies with a short wavelength also exist, but are considered to be correlated with bathymetry. 
Magnetic anomalies over mid-ocean ridges are most apparent at high magnetic latitudes, over north-south trending ridges at all latitudes away from the magnetic equator, and east-west trending spreading ridges at the magnetic equator. The intensity of the remanent magnetization in the crust is greater than the induced magnetization. Consequently, the shape and amplitude of the magnetic anomaly is controlled predominately by the primary remanent vector in the crust. In addition, where the anomaly is measured on Earth affects its shape when measured with a magnetometer. This is because the field vector generated by the magnetized crust and the direction of the Earth's magnetic field vector are both measured by the magnetometers used in marine surveys. Because the Earth's field vector is much stronger than the anomaly field, a modern magnetometer measures the sum of the Earth's field and the component of the anomaly field in the direction of the Earth's field. Sections of crust magnetized at high latitudes have magnetic vectors that dip steeply downward in a normal geomagnetic field. However, close to the magnetic south pole, magnetic vectors are inclined steeply upwards in a normal geomagnetic field. Therefore, in both these cases the anomalies are positive. At the equator the Earth's field vector is horizontal so that crust magnetized there will also align horizontal. Here, the orientation of the spreading ridge affects the anomaly shape and amplitude. The component of the vector that effects the anomaly is at a maximum when the ridge is aligned east-west and the magnetic profile crossing is north-south. Impact The hypothesis links seafloor spreading and geomagnetic reversals in a powerful manner, with each expanding knowledge of the other. Early in the history of investigating the hypothesis only a short record of geomagnetic field reversals was available for studies of rocks on land. This was sufficient to allow computing of spreading rates over the last 700,000 years on many mid-ocean ridges by locating the closest reversed crust boundary to the crest of a mid-ocean ridge. Marine magnetic anomalies were found later to span the vast flanks of the ridges. Drillcores into the crust on these ridge flanks allowed dating of the early and of the older anomalies. This in turn allowed design of a predicted geomagnetic time scale. With time, investigations married land and marine data to produce an accurate geomagnetic reversal time scale for almost 200 million years. See also Edward Bullard Drummond Matthews Walter C. Pitman III Frederick Vine Geodynamo Lamont–Doherty Earth Observatory References External links Geophysics History of Earth science Plate tectonics Geology theories
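The spreading-rate calculation described in the Impact section above amounts to simple arithmetic: the half-spreading rate is the distance from the ridge crest to the nearest normal/reversed crust boundary divided by the age of the most recent reversal (taken as roughly 700,000 years in the text). The Python sketch below uses hypothetical survey numbers purely for illustration.

# Minimal sketch (hypothetical numbers): half-spreading rate from magnetic anomaly data.
def half_spreading_rate_mm_per_yr(distance_km: float, reversal_age_yr: float) -> float:
    return distance_km * 1e6 / reversal_age_yr   # km -> mm, divided by years

# e.g. a reversed-crust boundary mapped 21 km from the crest, last reversal ~700,000 yr ago
print(half_spreading_rate_mm_per_yr(21.0, 7.0e5))   # 30 mm/yr half rate, 60 mm/yr full rate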
Vine–Matthews–Morley hypothesis
[ "Physics" ]
1,273
[ "Applied and interdisciplinary physics", "Geophysics" ]
17,728,993
https://en.wikipedia.org/wiki/Apparent%20viscosity
In fluid mechanics, apparent viscosity (sometimes denoted η) is the shear stress applied to a fluid divided by the shear rate: η = τ / γ̇. For a Newtonian fluid, the apparent viscosity is constant, and equal to the Newtonian viscosity of the fluid, but for non-Newtonian fluids, the apparent viscosity depends on the shear rate. Apparent viscosity has the SI derived unit Pa·s (pascal-second), but the centipoise is frequently used in practice: (1 mPa·s = 1 cP). Application A single viscosity measurement at a constant speed in a typical viscometer is a measurement of the instrument viscosity of a fluid (not the apparent viscosity). In the case of non-Newtonian fluids, measurement of apparent viscosity without knowledge of the shear rate is of limited value: the measurement cannot be compared to other measurements if the speed and geometry of the two instruments are not identical. An apparent viscosity that is reported without the shear rate or information about the instrument and settings (e.g. speed and spindle type for a rotational viscometer) is meaningless. Multiple measurements of apparent viscosity at different, well-defined shear rates can give useful information about the non-Newtonian behaviour of a fluid, and allow it to be modeled. Power-law fluids In many non-Newtonian fluids, the shear stress due to viscosity, τ, can be modeled by τ = k (du/dy)^n, where k is the consistency index, n is the flow behavior index, and du/dy is the shear rate, with velocity u and position y. These fluids are called power-law fluids. To ensure that τ has the same sign as du/dy, this is often written as τ = k |du/dy|^(n−1) (du/dy), where the term k |du/dy|^(n−1) gives the apparent viscosity. See also Fluid Dynamics Rheology Viscosity References Fluid dynamics Petroleum engineering Tribology
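For the power-law model above, the apparent viscosity is the factor multiplying the shear rate, η_app = k|du/dy|^(n−1), so it decreases with shear rate for shear-thinning fluids (n < 1) and increases for shear-thickening fluids (n > 1). A minimal Python sketch with illustrative parameter values:

# Minimal sketch: apparent viscosity of a power-law fluid.
def apparent_viscosity(shear_rate: float, k: float, n: float) -> float:
    return k * abs(shear_rate) ** (n - 1.0)

k, n = 0.5, 0.6   # consistency index (Pa*s^n) and flow behaviour index; n < 1 means shear-thinning
for gamma_dot in (0.1, 1.0, 10.0, 100.0):       # shear rates in 1/s
    eta = apparent_viscosity(gamma_dot, k, n)
    print(gamma_dot, eta, eta * gamma_dot)      # shear rate, apparent viscosity (Pa*s), shear stress (Pa)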
Apparent viscosity
[ "Chemistry", "Materials_science", "Engineering" ]
385
[ "Tribology", "Chemical engineering", "Petroleum engineering", "Materials science", "Surface science", "Energy engineering", "Mechanical engineering", "Piping", "Fluid dynamics" ]
17,729,026
https://en.wikipedia.org/wiki/Cell-free%20protein%20synthesis
Cell-free protein synthesis, also known as in vitro protein synthesis or CFPS, is the production of protein using biological machinery in a cell-free system, that is, without the use of living cells. The in vitro protein synthesis environment is not constrained by a cell wall or homeostasis conditions necessary to maintain cell viability. Thus, CFPS enables direct access and control of the translation environment which is advantageous for a number of applications including co-translational solubilisation of membrane proteins, optimisation of protein production, incorporation of non-natural amino acids, selective and site-specific labelling. Due to the open nature of the system, different expression conditions such as pH, redox potentials, temperatures, and chaperones can be screened. Since there is no need to maintain cell viability, toxic proteins can be produced. Introduction Common components of a cell-free reaction include a cell extract, an energy source, a supply of amino acids, cofactors such as magnesium, and the DNA with the desired genes. A cell extract is obtained by lysing the cell of interest and centrifuging out the cell walls, DNA genome, and other debris. The remains are the necessary cell machinery including ribosomes, aminoacyl-tRNA synthetases, translation initiation and elongation factors, nucleases, etc. Two types of DNA can be used in CFPS: plasmids and linear expression templates (LETs). Plasmids are circular, and only made inside cells. LETs can be made much more effectively via PCR, which replicates DNA much faster than raising cells in an incubator. While LETs are easier and faster to make, plasmid yields are usually much higher in CFPS. Because of this, much research today is focused on optimizing CFPS LET yields to approach the yields of CFPS with plasmids. An energy source is an important part of a cell-free reaction. Usually, a separate mixture containing the needed energy source, along with a supply of amino acids, is added to the extract for the reaction. Common sources are phosphoenol pyruvate, acetyl phosphate, and creatine phosphate. Advantages and Applications CFPS has many advantages over the traditional in vivo synthesis of proteins. Most notably, a cell-free reaction, including extract preparation, usually takes 1 –2 days, whereas in vivo protein expression may take 1–2 weeks. CFPS is an open reaction. The lack of cell wall allows direct manipulation of the chemical environment. Samples are easily taken, concentrations optimized, and the reaction can be monitored. In contrast, once DNA is inserted into live cells, the reaction cannot be accessed until it is over and the cells are lysed. Another advantage to CFPS is the lack of concern for toxicity. Some desired proteins and labeled proteins are toxic to cells when synthesized. Since live cells are not being used, the toxicity of the product protein is not a significant concern. These advantages enable numerous applications. A major application of CFPS is incorporation of unnatural amino acids into protein structures (see expanded genetic code). The openness of the reaction is ideal for inserting the modified tRNAs and unnatural amino acids required for such a reaction. Synthetic biology has many other uses and is a bright future in fields such as protein evolution, nanomachines, nucleic acid circuits, and synthesis of virus-like particles for vaccines and drug therapy. Limitations One challenge associated with CFPS is the degradation of the DNA by endogenous nucleases in the cell extract. 
This is particularly problematic with LETs. Cells have endonucleases that attack random sites of a DNA strands; however, much more common are the exonucleases which attack DNA from the ends. Since plasmids are circular and have no end to which the exonucleases may attach, they are not affected by the latter. LETs, however, are susceptible to both. Because of LET vulnerability, much research today is focused on optimizing CFPS LET yields to approach the yields of CFPS using plasmids. One example of this improved protection with plasmids is use of the bacteriophage lambda gam protein. Gam is an inhibitor of RecBCD, an exonuclease found in Escherichia coli (E. coli). With the use of gam, CFPS yields with LETs were greatly increased, and were comparable to CFPS yields with plasmids. PURE extracts can also be made, eliminating the concern of exonucleases. These extracts are expensive to make and are not currently an economical solution to the issue of exogenous DNA degradation. Types of Cell-free systems Common cell extracts in use today are made from E. coli (ECE), rabbit reticulocytes (RRL), wheat germ (WGE), insect cells (ICE) and Yeast Kluyveromyces (the D2P system). All of these extracts are commercially available. ECE is the most popular lysate for several reasons. It is the most inexpensive extract and the least time intensive to create. Also, large amounts of E. coli are easily grown, and then easily lysed through use of a homogenizer or a sonicator. ECE also provides the highest protein yields. However, high yield production can limit the complexity of the synthesized protein, particularly in post-translational modification. In that regard, the lower efficient eukaryotic systems could be advantageous, provided that modifying enzyme systems have been maintained in the extracts. Each eukaryotic system has their advantages and disadvantages. For example, WGE extract produces the highest yields of the three eukaryotic extracts; however, it is not as effective for some post-translational modifications such as glycosylation. When choosing an extract, the type of post-translational modification, desired yields, and cost should be taken into account. History Cell-free protein synthesis has been used for over 60 years, and notably, the first elucidation of a codon was done by Marshall Nirenberg and Heinrich J. Matthaei in 1961 at the National Institutes of Health. They used a cell-free system to translate a poly-uracil RNA sequence (or UUUUU... in biochemical terms) and discovered that the polypeptide they had synthesized consisted of only the amino acid phenylalanine. They thereby deduced from this poly-phenylalanine that the codon UUU specified the amino-acid phenylalanine. Extending this work, Nirenberg and his coworkers were able to determine the nucleotide makeup of each codon. See also Nirenberg and Matthaei experiment Polymerase chain reaction optimization References Further reading Cell biology Synthetic biology Protein biosynthesis
Cell-free protein synthesis
[ "Chemistry", "Engineering", "Biology" ]
1,433
[ "Synthetic biology", "Protein biosynthesis", "Biological engineering", "Cell biology", "Gene expression", "Bioinformatics", "Molecular genetics", "Biosynthesis" ]
17,731,917
https://en.wikipedia.org/wiki/Tricritical%20point
A tricritical point is a point where a second-order phase transition curve meets a first-order phase transition curve. The notion was first introduced by Lev Landau in 1937, who referred to the tricritical point as the critical point of the continuous transition. The first example of a tricritical point was shown by Robert B. Griffiths in a helium-3–helium-4 mixture. In condensed matter physics, dealing with the macroscopic physical properties of matter, a tricritical point is a point in the phase diagram of a system at which three-phase coexistence terminates. This definition is clearly parallel to the definition of an ordinary critical point as the point at which two-phase coexistence terminates. A point of three-phase coexistence is termed a triple point for a one-component system, since, from Gibbs' phase rule, this condition is only achieved for a single point in the phase diagram (F = 2 − 3 + 1 = 0). For tricritical points to be observed, one needs a mixture with more components. It can be shown that three is the minimum number of components for which these points can appear. In this case, one may have a two-dimensional region of three-phase coexistence (F = 2 − 3 + 3 = 2) (thus, each point in this region corresponds to a triple point). This region will terminate in two critical lines of two-phase coexistence; these two critical lines may then terminate at a single tricritical point. This point is therefore "twice critical", since it belongs to two critical branches. Indeed, its critical behavior is different from that of a conventional critical point: the upper critical dimension is lowered from d = 4 to d = 3, so the classical exponents turn out to apply for real systems in three dimensions (but not for systems whose spatial dimension is 2 or lower). Solid state It seems more convenient experimentally to consider mixtures with four components for which one thermodynamic variable (usually the pressure or the volume) is kept fixed. The situation then reduces to the one described for mixtures of three components. Historically, it was for a long time unclear whether a superconductor undergoes a first- or a second-order phase transition. The question was finally settled in 1982. If the Ginzburg–Landau parameter that distinguishes type-I and type-II superconductors is large enough, vortex fluctuations become important, which drive the transition to second order. The tricritical point lies at a value of the Ginzburg–Landau parameter slightly below the one at which a type-I superconductor goes over into a type-II superconductor. The prediction was confirmed in 2002 by Monte Carlo computer simulations. References Phase transitions Critical phenomena
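The degrees-of-freedom counts quoted in parentheses above follow from Gibbs' phase rule, F = C − P + 2, with C components and P coexisting phases; the trivial Python check below is included only as an illustration.

# Minimal sketch: Gibbs' phase rule reproducing the counts quoted in the text.
def degrees_of_freedom(components: int, phases: int) -> int:
    return components - phases + 2

print(degrees_of_freedom(1, 3))   # one-component triple point: F = 0 (an isolated point)
print(degrees_of_freedom(3, 3))   # three components, three phases: F = 2 (a two-dimensional region)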
Tricritical point
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
565
[ "Physical phenomena", "Phase transitions", "Materials science stubs", "Condensed matter stubs", "Critical phenomena", "Phases of matter", "Condensed matter physics", "Statistical mechanics", "Matter", "Dynamical systems" ]
17,734,187
https://en.wikipedia.org/wiki/Chamberland%20filter
A Chamberland filter, also known as a Pasteur–Chamberland filter, is a porcelain water filter invented by Charles Chamberland in 1884. It was developed after Henry Doulton's ceramic water filter of 1827. It is similar to the Berkefeld filter in principle. Design The filter consists of a permeable unglazed porcelain tube (called bisque) that contains a ring of enameled porcelain through which the inflow pipe fits. The core of the porcelain is made up of a metal pipe with holes through which water flows out and is collected. Inflow is pressurized so filtration occurs under force. There are 13 types: L1 to L13. L1 filters have the coarsest pore size while L13 have the finest. Usefulness The Pasteur-Chamberland filter is as useful as other ceramic and porcelain filters. It is a good bacterial water filter used mainly as a high volume water filter. The filter works more quickly when the water supplied is under pressure. As with other filters of its kind, it cannot filter very small particles like viruses or mycoplasma. It is used in removal of organisms from a fluid culture in order to obtain the bacterial toxins. History The Chamberland filter was developed by Charles Edouard Chamberland, one of Louis Pasteur’s assistants in Paris. The original intention was to produce filtered water, free of bacteria, for use in Pasteur's experiments. The filter became increasingly known for its ability to filter out bacteria, the smallest living organisms then known. The filter was patented by Chamberland and Pasteur in America and Europe. An American company licensed the name in Ohio. They sold filters to private homes, hotels, restaurants, and the 1893 Chicago World's Columbian Exposition. Use of the Pasteur-Chamberland filter led to the discovery that diphtheria and tetanus toxins, among others, could still cause illness even after filtration. Identification of these toxins contributed to the development of antitoxins to treat such diseases. It was also discovered that a type of substance, initially known as a "filterable virus", passed through the smallest Pasteur-Chamberland filters, and replicated itself inside living cells. The discovery that biological entities smaller than bacteria existed was important in establishing the field of virology. References Microbiology equipment Water filters 19th-century inventions French inventions
Chamberland filter
[ "Chemistry", "Biology" ]
488
[ "Water treatment", "Water filters", "Filters", "Microbiology equipment" ]
1,649,731
https://en.wikipedia.org/wiki/Ekman%20spiral
The Ekman spiral is an arrangement of ocean currents: the directions of horizontal current appear to twist as the depth changes. The oceanic wind-driven Ekman spiral is the result of a force balance created by a shear stress force, the Coriolis force and the water drag. This force balance gives a resulting water current that differs in direction from the wind. In the ocean, there are two places where the Ekman spiral can be observed. At the surface of the ocean, the shear stress force corresponds with the wind stress force. At the bottom of the ocean, the shear stress force is created by friction with the ocean floor. This phenomenon was first observed at the surface by the Norwegian oceanographer Fridtjof Nansen during his Fram expedition. He noticed that icebergs did not drift in the same direction as the wind. His student, the Swedish oceanographer Vagn Walfrid Ekman, was the first person to physically explain this process. Bottom Ekman spiral In order to derive the properties of an Ekman spiral, consider a uniform, horizontal geostrophic interior flow in a homogeneous fluid. This flow will be denoted by $(\bar{u}, \bar{v})$, where the two components are constant because of uniformity. Another result of this property is that the horizontal gradients will equal zero. As a result, the continuity equation yields $\partial w/\partial z = 0$. Note that the interior flow in question is horizontal, so $w = 0$ at all depths, even in the boundary layers. In this case, the Navier–Stokes momentum equations governing geophysical motion can be reduced to:
$$-f v = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu_E \frac{\partial^2 u}{\partial z^2},$$
$$+f u = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu_E \frac{\partial^2 v}{\partial z^2},$$
$$0 = -\frac{1}{\rho}\frac{\partial p}{\partial z},$$
where $f$ is the Coriolis parameter, $\rho$ the fluid density and $\nu_E$ the eddy viscosity, which are all taken as constant here for simplicity. These parameters vary only slightly on the scale of an Ekman spiral, so this approximation holds. A uniform flow requires a uniformly varying pressure gradient. When substituting the flow components of the interior flow, $\bar{u}$ and $\bar{v}$, in the equations above, the following is obtained:
$$-f\bar{v} = -\frac{1}{\rho}\frac{\partial p}{\partial x}, \qquad +f\bar{u} = -\frac{1}{\rho}\frac{\partial p}{\partial y}.$$
Using the last of the three equations at the top of this section yields that the pressure is independent of depth.
$$u = \bar{u} + A e^{\lambda z} \quad \text{and} \quad v = \bar{v} + B e^{\lambda z}$$
will suffice as a solution to the differential equations above. After substitution of these possible solutions in the same equations, $\lambda^4 = -f^2/\nu_E^2$ will follow. Now, $\lambda$ has the following possible outcomes:
$$\lambda = \pm (1 \pm i)\,\frac{1}{d}, \qquad \text{with} \quad d = \sqrt{\frac{2\nu_E}{f}}.$$
Because of the no-slip condition at the bottom and the constant interior flow for $z \gg d$, the coefficients $A$ and $B$ can be determined. In the end, this will lead to the following solution for $(u, v)$:
$$u = \bar{u}\left(1 - e^{-z/d}\cos\frac{z}{d}\right) - \bar{v}\, e^{-z/d}\sin\frac{z}{d},$$
$$v = \bar{v}\left(1 - e^{-z/d}\cos\frac{z}{d}\right) + \bar{u}\, e^{-z/d}\sin\frac{z}{d}.$$
Note that the velocity vector approaches the values of the interior flow when $z$ takes the order of $d$. This is the reason why $d$ is defined as the thickness of the Ekman layer. A number of important properties of the Ekman spiral follow from this solution: When $z \to 0$, the flow has a transverse component with respect to the interior flow, which differs 45 degrees to the left on the northern hemisphere ($f > 0$) and 45 degrees to the right on the southern hemisphere ($f < 0$). Note that, in this case, the angle between this flow and the interior flow is at its maximum. It will decrease for increasing $z$. When $z$ takes the value of $\pi d$, the resulting flow is in line with the interior flow, but its magnitude is increased by a factor $1 + e^{-\pi}$ with respect to the interior flow. For higher values of $z$, there is a minimal transverse component in the other direction as before. The exponential term goes to zero for $z \gg d$, resulting in $(u, v) \to (\bar{u}, \bar{v})$. Because of these properties, the velocity vector of the flow as a function of depth will look like a spiral.
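A minimal numerical sketch of the bottom-spiral solution above; the parameter values and variable names are illustrative assumptions, not tied to any particular dataset or library beyond NumPy:

```python
import numpy as np

# Bottom Ekman spiral: deviation of the flow from a uniform geostrophic
# interior (u_bar, v_bar) over a flat bottom (northern hemisphere, f > 0).
f = 1e-4                     # Coriolis parameter (1/s), mid-latitude value
nu_E = 1e-2                  # eddy viscosity (m^2/s), illustrative
u_bar, v_bar = 0.1, 0.0      # interior geostrophic flow (m/s)

d = np.sqrt(2 * nu_E / f)            # Ekman layer thickness (m)
z = np.linspace(0, 4 * np.pi * d, 200)  # height above the bottom (m)

# No-slip solution derived above; (u, v) -> interior flow for z >> d.
decay = np.exp(-z / d)
u = u_bar * (1 - decay * np.cos(z / d)) - v_bar * decay * np.sin(z / d)
v = v_bar * (1 - decay * np.cos(z / d)) + u_bar * decay * np.sin(z / d)

# Near the bottom the velocity is rotated about 45 degrees to the left
# of the interior flow (northern hemisphere):
angle = np.degrees(np.arctan2(v[1], u[1]))
print(f"Ekman depth d = {d:.1f} m, near-bottom deflection = {angle:.1f} deg")
```

Plotting u against v for increasing z traces out the spiral described in the text.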
Surface Ekman spiral The solution for the flow forming the bottom Ekman spiral was a result of the shear stress exerted on the flow by the bottom. Logically, wherever shear stress can be exerted on a flow, Ekman spirals will form. This is the case at the air–water interface, because of wind. Consider a situation where a wind stress $(\tau_x, \tau_y)$ is exerted along a water surface with an interior flow $(\bar{u}, \bar{v})$ beneath. Again, the flow is uniform, has a geostrophic interior and the fluid is homogeneous. The equations of motion for a geostrophic flow, which are the same as stated in the bottom spiral section, can be reduced to:
$$-f(v - \bar{v}) = \nu_E \frac{\partial^2 u}{\partial z^2}, \qquad +f(u - \bar{u}) = \nu_E \frac{\partial^2 v}{\partial z^2}.$$
The boundary conditions for this case are as follows: Surface ($z = 0$): $\rho \nu_E\, \partial u/\partial z = \tau_x$ and $\rho \nu_E\, \partial v/\partial z = \tau_y$. Towards interior ($z \to -\infty$): $u = \bar{u}$ and $v = \bar{v}$. With these conditions, the solution can be determined:
$$u = \bar{u} + \frac{\sqrt{2}}{\rho f d}\, e^{z/d}\left[\tau_x \cos\left(\frac{z}{d} - \frac{\pi}{4}\right) - \tau_y \sin\left(\frac{z}{d} - \frac{\pi}{4}\right)\right],$$
$$v = \bar{v} + \frac{\sqrt{2}}{\rho f d}\, e^{z/d}\left[\tau_x \sin\left(\frac{z}{d} - \frac{\pi}{4}\right) + \tau_y \cos\left(\frac{z}{d} - \frac{\pi}{4}\right)\right].$$
Some differences with respect to the bottom Ekman spiral emerge. The deviation from the interior flow depends exclusively on the wind stress and not on the interior flow, whereas in the case of the bottom Ekman spiral the deviation is determined by the interior flow. The wind-driven component of the flow is inversely proportional to the Ekman-layer thickness $d$. So if the layer thickness is small, because of a small viscosity of the fluid for example, this component can be very large. Lastly, the flow at the surface is 45 degrees to the right on the northern hemisphere and 45 degrees to the left on the southern hemisphere with respect to the wind direction. In the case of the bottom Ekman spiral, this is the other way around. Observations The equations and assumptions above are not representative of actual observations of the Ekman spiral. The differences between the theory and the observations are that the angle is between 5 and 20 degrees instead of the 45 degrees expected, and that the Ekman layer depth, and thus the Ekman spiral, is less deep than expected. There are three main factors that contribute to this: stratification, turbulence and horizontal gradients. Other, less important factors that play a role are the Stokes drift, waves and the Stokes–Coriolis force. See also References Oceanography Fluid dynamics Spirals
Ekman spiral
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
1,140
[ "Hydrology", "Applied and interdisciplinary physics", "Oceanography", "Chemical engineering", "Piping", "Fluid dynamics" ]
1,649,775
https://en.wikipedia.org/wiki/Viral%20replication
Viral replication is the formation of biological viruses during the infection process in the target host cells. Viruses must first get into the cell before viral replication can occur. By generating abundant copies of its genome and packaging these copies, the virus continues infecting new hosts. Replication between viruses is greatly varied and depends on the type of genes involved. Most DNA viruses assemble in the nucleus, while most RNA viruses develop solely in the cytoplasm. Viral production / replication Viruses multiply only in living cells. The host cell must provide the energy, the synthetic machinery and the low-molecular-weight precursors for the synthesis of viral proteins and nucleic acids. Virus replication occurs in seven stages: Attachment Entry (penetration) Uncoating Replication Assembly Maturation Release (liberation stage). Attachment Attachment is the first step of viral replication. Some viruses attach to the cell membrane of the host cell and inject their DNA or RNA into the host to initiate infection. Attachment to a host cell is often achieved by a virus attachment protein that extends from the protein shell (capsid) of a virus. This protein is responsible for binding to a surface receptor on the plasma membrane (or to membrane carbohydrates) of a host cell. Viruses can exploit normal cell receptor functions to allow attachment to occur by mimicking molecules that bind to host cell receptors. For example, the rhinovirus uses its virus attachment protein to bind to the receptor ICAM-1 on host cells, a receptor that is normally used to facilitate adhesion between host cells. Entry Entry, or penetration, is the second step in viral replication. This step is characterized by the virus passing through the plasma membrane of the host cell. The most common way a virus gains entry to the host cell is by receptor-mediated endocytosis, which comes at no energy cost to the virus, only to the host cell. Receptor-mediated endocytosis occurs when a molecule (in this case a virus) binds to a receptor on the membrane of the cell. A series of chemical signals from this binding causes the cell to wrap the plasma membrane around the attached virus, forming a virus-containing vesicle inside the cell. Viruses enter host cells using a variety of mechanisms, including the endocytic and non-endocytic routes. They can also fuse at the plasma membrane and can spread within the host via fusion or cell-cell fusion. Viruses attach to proteins on the host cell surface, known as cellular receptors or attachment factors, to aid entry. Evidence shows that viruses utilize ion channels on the host cells during viral entry. Fusion: External viral proteins promote the fusion of the virion with the plasma membrane. This forms a pore in the host membrane, and after entry the virion becomes uncoated and its genomic material is then transferred into the cytoplasm. Cell-to-cell fusion: Some viruses prompt specific protein expression on the surfaces of infected cells to attract uninfected cells. This interaction causes the uninfected cell to fuse with the infected cell at lower pH levels to form a multinuclear cell known as a syncytium. Endocytic routes: the process by which an intracellular vesicle is formed by membrane invagination, which results in the engulfment of extracellular and membrane-bound components, in this context a virus. Non-endocytic routes: the process by which viral particles are released into the cell by fusion of the extracellular viral envelope and the membrane of the host cell.
Uncoating Uncoating is the third step in viral replication. Uncoating is defined by the removal of the virion's protein "coat" and the release of its genetic material. This step occurs in the same area in which viral transcription occurs. Different viruses have various mechanisms for uncoating. Some RNA viruses, such as rhinoviruses, use the low pH in a host cell's endosomes to activate their uncoating mechanism. This involves the rhinovirus releasing a protein that creates holes in the endosome, allowing the virus to release its genome through the holes. Many DNA viruses travel to the host cell's nucleus and release their genetic material through nuclear pores. Replication The fourth step in the viral cycle is replication, which is defined by the rapid production of the viral genome. How a virus undergoes replication depends on the type of genetic material the virus possesses: based on its genetic material, a virus will hijack the corresponding cellular machinery. Viruses that contain double-stranded DNA (dsDNA) share the same kind of genetic material as all organisms, and can therefore use the replication enzymes in the host cell nucleus to replicate the viral genome. Many RNA viruses typically replicate in the cytosol, and can directly access the host cell's ribosomes to manufacture viral proteins once the RNA is in a replicative form. Viruses may undergo two types of life cycles: the lytic cycle and the lysogenic cycle. In the lytic cycle, the virus introduces its genome into a host cell and initiates replication by hijacking the host's cellular machinery to make new copies of the virus. In the lysogenic life cycle, the viral genome is incorporated into the host genome. The host genome then undergoes its normal life cycle, replicating and dividing, copying the viral genome along with its own. The viral genome can be triggered to begin viral production by chemical and environmental stimuli. Once a lysogenic virus enters the lytic life cycle, it will continue in the viral production pathways and proceed with transcription / mRNA production. (Examples: cold sores caused by herpes simplex virus (HSV)-1, lysogenic bacteriophages, etc.) Assembly Assembly is when the newly manufactured viral proteins and genomes are gathered and put together to form immature viruses. Like the other steps, how a particular virus is assembled depends on what type of virus it is. Assembly can occur in the plasma membrane, cytosol, nucleus, Golgi apparatus, and other locations within the host cell. Some viruses only insert their genome into a capsid once the capsid is completed, while in other viruses the capsid will wrap around the genome as it is being copied. Maturation This is the final step before a competent virus is formed. It typically involves capsid modifications provided by enzymes (host- or virus-encoded). Release (liberation stage) The final step in viral replication is release, which is when the newly assembled and mature viruses leave the host cell. How a virus releases from the host cell depends on the type of virus it is. One common type of release is budding. This occurs when viruses that form their envelope from the host's plasma membrane bend the membrane around the capsid. As the virus bends the plasma membrane, it begins to wrap around the whole capsid until the virus is no longer attached to the host cell.
Another common way viruses leave the host cell is through cell lysis, in which the virus lyses the cell, causing it to burst and release the mature viruses that were inside. Baltimore classification Viruses are split into seven classes according to the type of genetic material and the method of mRNA production, each of which has its own families of viruses, which in turn have differing replication strategies themselves. David Baltimore, a Nobel Prize-winning biologist, devised a system called the Baltimore Classification System to classify different viruses based on their unique replication strategies. There are seven different replication strategies based on this system (Baltimore Class I, II, III, IV, V, VI, VII). The seven classes of viruses are listed here briefly and in generalities. Class 1: Double-stranded DNA viruses This type of virus usually must enter the host nucleus before it is able to replicate. Some of these viruses require host cell polymerases to replicate their genome, while others, such as adenoviruses or herpes viruses, encode their own replication factors. However, in either case, replication of the viral genome is highly dependent on a cellular state permissive to DNA replication and, thus, on the cell cycle. The virus may induce the cell to forcefully undergo cell division, which may lead to transformation of the cell and, ultimately, cancer. An example of a family within this classification is the Adenoviridae. There is only one well-studied example in which a class 1 family of viruses does not replicate within the nucleus. This is the Poxvirus family, which comprises highly pathogenic viruses that infect vertebrates. Class 2: Single-stranded DNA viruses Viruses that fall under this category include ones that are not as well-studied, but are still highly relevant to vertebrates. Two examples include the Circoviridae and Parvoviridae. They replicate within the nucleus, and form a double-stranded DNA intermediate during replication. A human Anellovirus called TTV is included within this classification and is found in almost all humans, infecting them asymptomatically in nearly every major organ. RNA viruses: The polymerase of RNA viruses lacks the proofreading functions found in the polymerase of DNA viruses. This gives RNA viruses lower replicative fidelity than DNA viruses and makes them highly prone to mutation, which can increase their overall survival rate. RNA viruses lack the capacity to identify and repair mismatched or damaged nucleotides, and thus RNA genomes are prone to mutations introduced by mechanisms intrinsic and extrinsic to viral replication. RNA viruses present a therapeutic double-edged sword: their mutability lets them withstand the challenge of antiviral drugs, cause epidemics, and infect multiple host species, which makes them difficult to treat. However, the reverse transcriptase protein that often comes with an RNA virus can be used as an indirect target, preventing transcription and synthesis of viral particles (this is the basis for anti-HIV and anti-AIDS drugs). Class 3: Double-stranded RNA viruses Like most viruses with RNA genomes, double-stranded RNA viruses do not rely on host polymerases for replication to the extent that viruses with DNA genomes do. Double-stranded RNA viruses are not as well-studied as other classes. This class includes two major families, the Reoviridae and Birnaviridae.
Replication is monocistronic and includes individual, segmented genomes, meaning that each of the genes codes for only one protein, unlike other viruses, which exhibit more complex translation. Classes 4 & 5: Single-stranded RNA viruses These viruses are of two types, but both share the fact that replication is primarily in the cytoplasm and that replication is not as dependent on the cell cycle as that of DNA viruses. This class of viruses is also one of the most-studied types of viruses, alongside the double-stranded DNA viruses. Class 4: Single-stranded RNA viruses - positive-sense The positive-sense RNA viruses, and indeed all genes defined as positive-sense, can be directly accessed by host ribosomes to immediately form proteins. These can be divided into two groups, both of which replicate in the cytoplasm: Viruses with polycistronic mRNA, where the genome RNA forms the mRNA and is translated into a polyprotein product that is subsequently cleaved to form the mature proteins. This means that the gene can utilize a few methods to produce proteins from the same strand of RNA, reducing the size of its genome. Viruses with complex transcription, for which subgenomic mRNAs, ribosomal frameshifting and proteolytic processing of polyproteins may be used, all of which are different mechanisms for producing proteins from the same strand of RNA. Examples of this class include the families Coronaviridae, Flaviviridae, and Picornaviridae. Class 5: Single-stranded RNA viruses - negative-sense The negative-sense RNA viruses, and indeed all genes defined as negative-sense, cannot be directly accessed by host ribosomes to immediately form proteins. Instead, they must be transcribed by viral polymerases into the "readable" complementary positive-sense form. These can also be divided into two groups: Viruses containing nonsegmented genomes, for which the first step in replication is transcription from the negative-stranded genome by the viral RNA-dependent RNA polymerase to yield monocistronic mRNAs that code for the various viral proteins. A positive-sense genome copy that serves as template for production of the negative-strand genome is then produced. Replication is within the cytoplasm. Viruses with segmented genomes, for which replication occurs in the cytoplasm and for which the viral RNA-dependent RNA polymerase produces monocistronic mRNAs from each genome segment. Examples in this class include the families Orthomyxoviridae, Paramyxoviridae, Bunyaviridae, Filoviridae, and Rhabdoviridae (which includes rabies). Class 6: Positive-sense single-stranded RNA viruses that replicate through a DNA intermediate A well-studied family of this class of viruses is the retroviruses. One defining feature is the use of reverse transcriptase to convert the positive-sense RNA into DNA. Instead of using the RNA as a template for proteins, they use DNA to create the templates, which is integrated into the host genome using integrase. Replication can then commence with the help of the host cell's polymerases. Class 7: Double-stranded DNA viruses that replicate through a single-stranded RNA intermediate This small group of viruses, exemplified by the Hepatitis B virus, have a double-stranded, gapped genome that is subsequently filled in to form a covalently closed circle (cccDNA) that serves as a template for production of viral mRNAs and a subgenomic RNA. The pregenome RNA serves as template for the viral reverse transcriptase and for production of the DNA genome.
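As a compact summary, the seven classes can be encoded as a small lookup table; the sketch below is our own data structure, and the family lists only echo the examples named above:

```python
# Baltimore classification: genome type and example families,
# as summarized in the section above.
BALTIMORE = {
    1: ("dsDNA", ["Adenoviridae", "Poxviridae"]),
    2: ("ssDNA", ["Circoviridae", "Parvoviridae"]),
    3: ("dsRNA", ["Reoviridae", "Birnaviridae"]),
    4: ("(+)ssRNA", ["Coronaviridae", "Flaviviridae", "Picornaviridae"]),
    5: ("(-)ssRNA", ["Orthomyxoviridae", "Filoviridae", "Rhabdoviridae"]),
    6: ("(+)ssRNA with DNA intermediate", ["Retroviridae"]),
    7: ("dsDNA with RNA intermediate", ["Hepadnaviridae"]),
}

def describe(cls: int) -> str:
    genome, families = BALTIMORE[cls]
    return f"Class {cls}: {genome} viruses, e.g. {', '.join(families)}"

print(describe(4))  # Class 4: (+)ssRNA viruses, e.g. Coronaviridae, ...
```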
References Viruses Virology Viral life cycle
Viral replication
[ "Biology" ]
2,859
[ "Viruses", "Viral life cycle", "Tree of life (biology)", "Microorganisms" ]
1,649,909
https://en.wikipedia.org/wiki/Deadband
A deadband or dead-band (also known as a dead zone or a neutral zone) is a band of input values in the domain of a transfer function in a control system or signal processing system where the output is zero (the output is 'dead': no action occurs). Deadband regions can be used in control systems such as servoamplifiers to prevent oscillation or repeated activation-deactivation cycles (called 'hunting' in proportional control systems). A form of deadband that occurs in mechanical systems, such as the compound machines built from gear trains, is backlash. Voltage regulators In some power substations there are regulators that keep the voltage within certain predetermined limits, but there is a range of voltage in between during which no changes are made, such as between 112 and 118 volts (the deadband is 6 volts), or between 215 and 225 volts (the deadband is 10 volts). Backlash Gear teeth with slop (backlash) exhibit deadband. There is no drive from the input to the output shaft in either direction while the teeth are not meshed. Leadscrews generally also have backlash and hence a deadband, which must be taken into account when making position adjustments, especially with CNC systems. If mechanical backlash eliminators are not available, the control can compensate for backlash by adding the deadband value to the position vector whenever direction is reversed. Hysteresis versus Deadband Deadband is different from hysteresis. With hysteresis, there is no deadband, and so the output is always in one direction or another. Devices with hysteresis have memory, in that previous system states dictate future states. Examples of devices with hysteresis are single-mode thermostats and smoke alarms. Deadband is the range in a process where no changes to output are made. Hysteresis is the difference in a variable depending on the direction of travel. Thermostats Simple (single mode) thermostats exhibit hysteresis. For example, the furnace in the basement of a house is switched ON automatically by the thermostat as soon as the temperature at the thermostat falls to 18 °C, and switched OFF as soon as the temperature reaches 22 °C. There is no temperature band in which the furnace is neither on nor off: the house is always either being heated or allowed to cool. A thermostat which sets a single temperature and automatically controls both heating and cooling systems without a mode change exhibits a deadband range around the target temperature. The low end of the deadband is just above the temperature where the heating system turns on. The high end of the deadband is just below the temperature where the air-conditioning system starts. See also Schmitt trigger References Johnson, Curtis D. "Process Control Instrumentation Technology", Prentice Hall (2002, 7th ed.) Control theory
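To make the deadband/hysteresis distinction concrete, here is a minimal sketch; the functions and the band width are illustrative assumptions, with the thermostat thresholds taken from the 18 °C / 22 °C example above:

```python
def deadband(x: float, width: float = 1.0) -> float:
    """Deadband transfer function: output is zero inside the band and
    offset-linear outside it. Memoryless: output depends only on x."""
    if abs(x) <= width:
        return 0.0
    return x - width if x > 0 else x + width

class HysteresisThermostat:
    """Single-mode thermostat: furnace ON at or below 18 C, OFF at or
    above 22 C. Between the thresholds the previous state persists,
    i.e. the device has memory."""
    def __init__(self) -> None:
        self.heating = False
    def update(self, temp_c: float) -> bool:
        if temp_c <= 18.0:
            self.heating = True
        elif temp_c >= 22.0:
            self.heating = False
        return self.heating  # unchanged for 18 < temp_c < 22

print(deadband(0.5), deadband(2.0))              # 0.0 1.0
t = HysteresisThermostat()
print([t.update(T) for T in (17, 20, 23, 20)])   # [True, True, False, False]
```

Note how the thermostat's answer at 20 °C depends on where it has been, while the deadband function's answer never does; that is exactly the memory/no-memory distinction drawn above.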
Deadband
[ "Mathematics" ]
613
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
1,650,174
https://en.wikipedia.org/wiki/Tower%20testing%20station
A tower testing station is a special plant for testing various designs of towers for transmission lines and similar uses. A tower testing station consists of two steel stands and one or more foundations, on which a sample of the tower can be built. The number of test conditions is normally limited to between six and eight individual cases, with loading conditions such as reduced wind and ice. The towers to be tested are erected on a rigid foundation and wire ropes are attached to the required loading points. Loading may be applied either by 'dead' weights using scale pans, or by winches or hydraulic rams. In the latter cases a load cell or dynamometer is placed in the rigging adjacent to the point of loading at the structure. The loading methods induce strain by pulling cables away from the tower to the specified loads. The pulling load is indicated by a strain gauge placed at the pulling point. Loads at a given point on a tower naturally encompass longitudinal, transverse and vertical components, applied either individually or as a combined resultant load, as illustrated in the sketch following this article. The degree of sophistication of the control equipment for the application and recording of the load varies considerably between individual test stations, from application of individual load components at individual load points with corresponding dial gauges, to electronic equipment capable of applying all the loads with continuous data recording facilities. The test setup is made to conform to the design specifications and to verify the adequacy of the main components of the structure and their connections to withstand the static design loads specified for that particular structure as an individual entity under controlled conditions. It furnishes insight into the actual stress distribution of unique configurations, fit-up verification, performance of the structure in a deflected position, and other benefits. Locations of tower testing stations Chungju, South Korea, BOSUNG POWERTEC CO., LTD., Moscow, Russia, ORGRES, Mannheim, Germany, ABB Group Livorno, Italy, Tower Test srl Seville, Spain, Eucomsa Toronto, Ontario, Canada, Kinectrics Vashi, Jaipur, Jabalpur, India, Bucharest, Romania, Liangxiang, China, Butibori, Nagpur, India, Betim, Minas Gerais, Brazil, Kanchipuram, Chennai, India, Larsen & Toubro Iran, NRI (Niroo Research Institute), Riyadh, Saudi Arabia, Al-Babtain Tower Testing Station Linhares, Espírito Santo, Brazil, Brametal Test Station Towers
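As a small illustration of combining the three load components at one loading point into a resultant (the numbers are invented for the example):

```python
import math

def resultant_load(longitudinal_kN: float, transverse_kN: float,
                   vertical_kN: float) -> float:
    """Magnitude of the combined resultant of the three orthogonal
    load components applied through the rigging at one point."""
    return math.sqrt(longitudinal_kN**2 + transverse_kN**2 + vertical_kN**2)

print(f"{resultant_load(12.0, 35.0, 20.0):.1f} kN")  # ~42.1 kN
```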
Tower testing station
[ "Engineering" ]
490
[ "Structural engineering", "Towers" ]
1,650,455
https://en.wikipedia.org/wiki/Galvanoluminescence
Galvanoluminescence is the emission of light produced by the passage of an electric current through an appropriate electrolyte in which an electrode, made of certain metals such as aluminium or tantalum, has been immersed. An example is the electrolysis of sodium bromide (NaBr). References Luminescence Materials science
Galvanoluminescence
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
70
[ " and optical physics stubs", "Luminescence", "Materials science stubs", "Applied and interdisciplinary physics", "Molecular physics", "Materials science", " molecular", "nan", "Atomic", "Physical chemistry stubs", " and optical physics" ]
1,651,916
https://en.wikipedia.org/wiki/Correlation%20function%20%28statistical%20mechanics%29
In statistical mechanics, the correlation function is a measure of the order in a system, as characterized by a mathematical correlation function. Correlation functions describe how microscopic variables, such as spin and density, at different positions are related. More specifically, correlation functions measure quantitatively the extent to which microscopic variables fluctuate together, on average, across space and/or time. Note that correlation does not automatically equate to causation: even if there is a non-zero correlation between two points in space or time, it does not mean there is a direct causal link between them. Sometimes a correlation can exist without any causal relationship. This could be purely coincidental or due to other underlying factors, known as confounding variables, which cause both points to covary (statistically). A classic example of spatial correlation can be seen in ferromagnetic and antiferromagnetic materials. In these materials, atomic spins tend to align in parallel and antiparallel configurations with their adjacent counterparts, respectively. The figure on the right visually represents this spatial correlation between spins in such materials. Definitions The most common definition of a correlation function is the canonical ensemble (thermal) average of the scalar product of two random variables, $\vec{s}_1$ and $\vec{s}_2$, at positions $\vec{R}$ and $\vec{R}+\vec{r}$ and times $t$ and $t+\tau$:
$$C(\vec{r}, \tau) = \langle \vec{s}_1(\vec{R}, t) \cdot \vec{s}_2(\vec{R}+\vec{r}, t+\tau) \rangle.$$
Here the brackets, $\langle \cdots \rangle$, indicate the above-mentioned thermal average. It is important to note here, however, that while the brackets are called an average, they are calculated as an expected value, not an average value. It is a matter of convention whether one subtracts the uncorrelated average product of $\vec{s}_1$ and $\vec{s}_2$, $\langle \vec{s}_1(\vec{R}, t) \rangle \langle \vec{s}_2(\vec{R}+\vec{r}, t+\tau) \rangle$, from the correlated product, $\langle \vec{s}_1(\vec{R}, t) \cdot \vec{s}_2(\vec{R}+\vec{r}, t+\tau) \rangle$, with the convention differing among fields. The most common uses of correlation functions are when $\vec{s}_1$ and $\vec{s}_2$ describe the same variable, such as a spin-spin correlation function, or a particle position-position correlation function in an elemental liquid or a solid (often called a Radial distribution function or a pair correlation function). Correlation functions between the same random variable are autocorrelation functions. However, in statistical mechanics, not all correlation functions are autocorrelation functions. For example, in multicomponent condensed phases, the pair correlation function between different elements is often of interest. Such mixed-element pair correlation functions are an example of cross-correlation functions, as the random variables $\vec{s}_1$ and $\vec{s}_2$ represent the average variations in density as a function of position for two distinct elements. Equilibrium equal-time (spatial) correlation functions Often, one is interested solely in the spatial influence of a given random variable, say the direction of a spin, on its local environment, without considering later times, $\tau > 0$. In this case, we neglect the time evolution of the system, so the above definition is re-written with $\tau = 0$. This defines the equal-time correlation function, $C(\vec{r}, 0)$. It is written as:
$$C(\vec{r}, 0) = \langle \vec{s}_1(\vec{R}, t) \cdot \vec{s}_2(\vec{R}+\vec{r}, t) \rangle.$$
Often, one omits the reference time, $t$, and reference radius, $\vec{R}$, by assuming equilibrium (and thus time invariance of the ensemble) and averaging over all sample positions, yielding:
$$C(\vec{r}) = \langle \vec{s}(0) \cdot \vec{s}(\vec{r}) \rangle,$$
where, again, the choice of whether to subtract the uncorrelated variables differs among fields. The Radial distribution function is an example of an equal-time correlation function where the uncorrelated reference is generally not subtracted. Other equal-time spin-spin correlation functions are shown on this page for a variety of materials and conditions.
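A minimal estimator for an equal-time correlation function of scalar spins on a periodic one-dimensional lattice, averaging over all sample positions as described above (the array layout and names are our own):

```python
import numpy as np

def equal_time_correlation(spins: np.ndarray, r: int,
                           subtract_mean: bool = True) -> float:
    """C(r) = <s(R) s(R+r)> averaged over all sites R of a periodic
    lattice. Optionally subtracts the uncorrelated product <s><s>,
    a convention that differs among fields (see above)."""
    prod = np.mean(spins * np.roll(spins, -r))
    if subtract_mean:
        prod -= np.mean(spins) ** 2
    return prod

rng = np.random.default_rng(0)
spins = np.sign(rng.normal(size=10_000))   # uncorrelated +/-1 spins
print(equal_time_correlation(spins, 1))    # ~0 for uncorrelated spins
```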
Equilibrium equal-position (temporal) correlation functions One might also be interested in the temporal evolution of microscopic variables. In other words, how the value of a microscopic variable at a given position and time, $\vec{R}$ and $t$, influences the value of the same microscopic variable at a later time, $t+\tau$ (and usually at the same position). Such temporal correlations are quantified via equal-position correlation functions, $C(0, \tau)$. They are defined analogously to the equal-time correlation functions above, but we now neglect spatial dependencies by setting $\vec{r} = 0$, yielding:
$$C(0, \tau) = \langle \vec{s}_1(\vec{R}, t) \cdot \vec{s}_2(\vec{R}, t+\tau) \rangle.$$
Assuming equilibrium (and thus time invariance of the ensemble) and averaging over all sites in the sample gives a simpler expression for the equal-position correlation function, as for the equal-time correlation function:
$$C(\tau) = \langle \vec{s}(0) \cdot \vec{s}(\tau) \rangle.$$
The above assumption may seem non-intuitive at first: how can an ensemble which is time-invariant have a non-uniform temporal correlation function? Temporal correlations remain relevant to talk about in equilibrium systems because a time-invariant, macroscopic ensemble can still have non-trivial temporal dynamics microscopically. One example is in diffusion. A single-phase system at equilibrium has a homogeneous composition macroscopically. However, if one watches the microscopic movement of each atom, fluctuations in composition are constantly occurring due to the quasi-random walks taken by the individual atoms. Statistical mechanics allows one to make insightful statements about the temporal behavior of such fluctuations of equilibrium systems. This is discussed below in the section on the temporal evolution of correlation functions and Onsager's regression hypothesis. Time correlation function The time correlation function plays as significant a role in nonequilibrium statistical mechanics as the partition function does in equilibrium statistical mechanics. For instance, transport coefficients are closely related to time correlation functions through the Fourier transform, and the Green-Kubo relations, used to calculate relaxation and dissipation processes in a system, are expressed in terms of equilibrium time correlation functions. The time correlation function of two observables $A$ and $B$ is defined as
$$C_{AB}(t, t') = \langle A(t')\, B(t) \rangle,$$
and this definition applies for both the classical and quantum versions. For a stationary (equilibrium) system, the time origin is irrelevant, and $C_{AB}(t, t') = C_{AB}(t - t')$, with $t - t'$ as the time difference. The explicit expression for the classical time correlation function is
$$C_{AB}(t) = \int d\Gamma_0\; \rho(\Gamma_0)\, A(\Gamma_0)\, B(\Gamma_t),$$
where $A(\Gamma_0)$ is the value of $A$ at time $0$, $B(\Gamma_t)$ is the value of $B$ at time $t$ given the initial state $\Gamma_0$, and $\rho(\Gamma_0)$ is the phase space distribution function for the initial state. If ergodicity is assumed, then the ensemble average is the same as the time average over a long time; mathematically, scanning different time windows gives the time correlation function. As $t \to 0$, the correlation function approaches $C_{AB}(0) = \langle AB \rangle$, while as $t \to \infty$, we may assume the correlation vanishes and $C_{AB}(t) \to \langle A \rangle \langle B \rangle$. Correspondingly, the quantum time correlation function in the canonical ensemble is
$$C_{AB}(t) = \frac{1}{Z}\,\mathrm{Tr}\!\left[e^{-\beta \hat{H}}\, \hat{A}\, \hat{B}(t)\right],$$
where $\hat{A}$ and $\hat{B}$ are the quantum operators and $\hat{B}(t) = e^{i\hat{H}t/\hbar}\, \hat{B}\, e^{-i\hat{H}t/\hbar}$ is in the Heisenberg picture. Evaluating the (non-symmetrized) quantum time correlation function by expanding the trace in the energy eigenstates gives
$$C_{AB}(t) = \frac{1}{Z}\sum_{n,m} e^{-\beta E_n}\, e^{i(E_n - E_m)t/\hbar}\, \langle n|\hat{A}|m\rangle \langle m|\hat{B}|n\rangle.$$
Evaluating the quantum time correlation function quantum mechanically is very expensive, and this cannot be applied to a large system with many degrees of freedom. Nevertheless, the semiclassical initial value representation (SC-IVR) is a family of methods to evaluate the quantum time correlation function from the definition.
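Under the ergodicity assumption above, the classical time correlation function can be estimated by scanning time windows over one long equilibrium trajectory. A minimal sketch (the synthetic trajectory is an illustrative stand-in for real dynamics):

```python
import numpy as np

def time_correlation(a: np.ndarray, b: np.ndarray, max_lag: int) -> np.ndarray:
    """C_AB(t) = <A(0) B(t)>, estimated as a time average over a long,
    stationary (equilibrium) trajectory, per the ergodic hypothesis."""
    n = len(a)
    return np.array([np.mean(a[: n - lag] * b[lag:]) for lag in range(max_lag)])

rng = np.random.default_rng(1)
# AR(1) trajectory (Ornstein-Uhlenbeck-like): correlations decay with lag
x = np.zeros(50_000)
for i in range(1, len(x)):
    x[i] = 0.99 * x[i - 1] + rng.normal()
c = time_correlation(x, x, max_lag=5)
print(c / c[0])   # decays roughly like 0.99**lag
```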
Additionally, there are two alternative quantum time correlation functions, and both are related to the definition of the quantum time correlation function in Fourier space. The first, symmetrized correlation function is defined by
$$C^{\mathrm{sym}}_{AB}(t) = \frac{1}{Z}\,\mathrm{Tr}\!\left[e^{-\beta \hat{H}/2}\, \hat{A}\, e^{-\beta \hat{H}/2}\, \hat{B}(t)\right],$$
which may be viewed as the standard correlation function evaluated at the complex time variable $t_c = t - i\beta\hbar/2$. $C^{\mathrm{sym}}_{AB}$ is related to the definition of the quantum time correlation function by
$$\tilde{C}^{\mathrm{sym}}_{AB}(\omega) = e^{-\beta\hbar\omega/2}\, \tilde{C}_{AB}(\omega).$$
The second, symmetrized (Kubo-transformed) correlation function is
$$C^{\mathrm{Kubo}}_{AB}(t) = \frac{1}{\beta Z}\int_0^\beta d\lambda\; \mathrm{Tr}\!\left[e^{-(\beta-\lambda)\hat{H}}\, \hat{A}\, e^{-\lambda\hat{H}}\, \hat{B}(t)\right],$$
and it reduces to its classical counterpart in both the high-temperature and harmonic limits. $C^{\mathrm{Kubo}}_{AB}$ is related to the definition of the quantum time correlation function by
$$\tilde{C}^{\mathrm{Kubo}}_{AB}(\omega) = \frac{1 - e^{-\beta\hbar\omega}}{\beta\hbar\omega}\, \tilde{C}_{AB}(\omega).$$
The symmetrized quantum time correlation functions are easier to evaluate, and the Fourier-transform relations make them applicable in calculating spectra, transport coefficients, etc. The quantum time correlation function can be approximated using path integral molecular dynamics. Generalization beyond equilibrium correlation functions All of the above correlation functions have been defined in the context of equilibrium statistical mechanics. However, it is possible to define correlation functions for systems away from equilibrium. Examining the general definition of $C(\vec{r}, \tau)$, it is clear that one can define the random variables used in these correlation functions, such as atomic positions and spins, away from equilibrium. As such, their scalar product is well-defined away from equilibrium. The operation which is no longer well-defined away from equilibrium is the average over the equilibrium ensemble. This averaging process for non-equilibrium systems is typically replaced by averaging the scalar product across the entire sample. This is typical in scattering experiments and computer simulations, and is often used to measure the radial distribution functions of glasses. One can also define averages over states for systems perturbed slightly from equilibrium. See, for example, http://xbeams.chem.yale.edu/~batista/vaa/node56.html Measuring correlation functions Correlation functions are typically measured with scattering experiments. For example, x-ray scattering experiments directly measure electron-electron equal-time correlations. From knowledge of elemental structure factors, one can also measure elemental pair correlation functions. See Radial distribution function for further information. Equal-time spin–spin correlation functions are measured with neutron scattering as opposed to x-ray scattering. Neutron scattering can also yield information on pair correlations as well. For systems composed of particles larger than about one micrometer, optical microscopy can be used to measure both equal-time and equal-position correlation functions. Optical microscopy is thus common for colloidal suspensions, especially in two dimensions. Time evolution of correlation functions In 1931, Lars Onsager proposed that the regression of microscopic thermal fluctuations at equilibrium follows the macroscopic law of relaxation of small non-equilibrium disturbances. This is known as the Onsager regression hypothesis. As the values of microscopic variables separated by large timescales, $\tau$, should be uncorrelated beyond what we would expect from thermodynamic equilibrium, the evolution in time of a correlation function can be viewed from a physical standpoint as the system gradually 'forgetting' the initial conditions placed upon it via the specification of some microscopic variable.
There is actually an intuitive connection between the time evolution of correlation functions and the time evolution of macroscopic systems: on average, the correlation function evolves in time in the same manner as if the system was prepared in the conditions specified by the correlation function's initial value and allowed to evolve. Equilibrium fluctuations of the system can be related to its response to external perturbations via the fluctuation-dissipation theorem. The connection between phase transitions and correlation functions Continuous phase transitions, such as order-disorder transitions in metallic alloys and ferromagnetic-paramagnetic transitions, involve a transition from an ordered to a disordered state. In terms of correlation functions, the equal-time correlation function is non-zero for all lattice points below the critical temperature, and is non-negligible for only a fairly small radius above the critical temperature. As the phase transition is continuous, the length over which the microscopic variables are correlated, $\xi$, must transition continuously from being infinite to finite when the material is heated through its critical temperature. This gives rise to a power-law dependence of the correlation function as a function of distance at the critical point. This is shown in the figure on the left for the case of a ferromagnetic material, with the quantitative details listed in the section on magnetism. Applications Magnetism In a spin system, the equal-time correlation function is especially well-studied. It describes the canonical ensemble (thermal) average of the scalar product of the spins at two lattice points over all possible orderings:
$$G(r) = \langle \vec{s}(0) \cdot \vec{s}(\vec{r}) \rangle.$$
Here the brackets mean the above-mentioned thermal average. Schematic plots of this function are shown for a ferromagnetic material below, at, and above its Curie temperature on the left. Even in a magnetically disordered phase, spins at different positions are correlated, i.e., if the distance r is very small (compared to some length scale $\xi$), the interaction between the spins will cause them to be correlated. The alignment that would naturally arise as a result of the interaction between spins is destroyed by thermal effects. At high temperatures exponentially-decaying correlations are observed with increasing distance, with the correlation function being given asymptotically by
$$G(r) \approx \frac{1}{r^{\vartheta}}\, e^{-r/\xi},$$
where r is the distance between spins, d is the dimension of the system, and $\vartheta$ is an exponent, whose value depends on whether the system is in the disordered phase (i.e. above the critical point), or in the ordered phase (i.e. below the critical point). At high temperatures, the correlation decays to zero exponentially with the distance between the spins. The same exponential decay as a function of radial distance is also observed below $T_c$, but with the limit at large distances being the square of the mean magnetization, $\langle M \rangle^2$. Precisely at the critical point, an algebraic behavior is seen,
$$G(r) \approx \frac{1}{r^{d-2+\eta}},$$
where $\eta$ is a critical exponent, which does not have any simple relation with the non-critical exponent $\vartheta$ introduced above. For example, the exact solution of the two-dimensional Ising model (with short-ranged ferromagnetic interactions) gives precisely at criticality $\eta = 1/4$, but above criticality $\vartheta = 1/2$ and below criticality $\vartheta = 2$.
As the temperature is lowered, thermal disordering is lowered, and in a continuous phase transition the correlation length diverges, as the correlation length must transition continuously from a finite value above the phase transition to an infinite value below it:
$$\xi \propto |T - T_c|^{-\nu},$$
with another critical exponent $\nu$. This power-law correlation is responsible for the scaling seen in these transitions. All exponents mentioned are independent of temperature. They are in fact universal, i.e. found to be the same in a wide variety of systems. Radial distribution functions One common correlation function is the radial distribution function, which is seen often in statistical mechanics and fluid mechanics. The correlation function can be calculated in exactly solvable models (one-dimensional Bose gas, spin chains, Hubbard model) by means of the quantum inverse scattering method and the Bethe ansatz. In an isotropic XY model, time and temperature correlations were evaluated by Its, Korepin, Izergin & Slavnov. Higher order correlation functions Higher-order correlation functions involve multiple reference points, and are defined through a generalization of the above correlation function by taking the expected value of the product of more than two random variables:
$$C(\vec{r}_1, \vec{r}_2, \ldots, \vec{r}_n) = \langle \vec{s}(\vec{r}_1) \cdot \vec{s}(\vec{r}_2) \cdots \vec{s}(\vec{r}_n) \rangle.$$
However, such higher order correlation functions are relatively difficult to interpret and measure. For example, in order to measure the higher-order analogues of pair distribution functions, coherent x-ray sources are needed. Both the theory of such analysis and the experimental measurement of the needed X-ray cross-correlation functions are areas of active research. See also Ornstein–Zernike equation References Further reading Radial distribution function C. Domb, M.S. Green, J.L. Lebowitz (editors), Phase Transitions and Critical Phenomena, vol. 1-20 (1972–2001), Academic Press. Covariance and correlation Statistical mechanics Conceptual models
Correlation function (statistical mechanics)
[ "Physics" ]
2,913
[ "Statistical mechanics" ]
1,651,963
https://en.wikipedia.org/wiki/Biomedical%20cybernetics
Biomedical cybernetics investigates signal processing, decision making and control structures in living organisms. Applications of this research field are in biology, ecology and the health sciences. Fields Biological cybernetics Medical cybernetics Methods Connectionism Decision theory Information theory Systeomics Systems theory See also Cybernetics Prosthetics List of biomedical cybernetics software References Kitano, H. (ed.) (2001). Foundations of Systems Biology. Cambridge (Massachusetts), London: MIT Press. External links ResearchGate topic on biomedical cybernetics Cybernetics
Biomedical cybernetics
[ "Engineering", "Biology" ]
114
[ "Biological engineering", "Bioengineering stubs", "Biotechnology stubs", "Medical technology stubs", "Medical technology" ]
1,651,967
https://en.wikipedia.org/wiki/Berezinskii%E2%80%93Kosterlitz%E2%80%93Thouless%20transition
The Berezinskii–Kosterlitz–Thouless (BKT) transition is a phase transition of the two-dimensional (2-D) XY model in statistical physics. It is a transition from bound vortex-antivortex pairs at low temperatures to unpaired vortices and anti-vortices at some critical temperature. The transition is named for condensed matter physicists Vadim Berezinskii, John M. Kosterlitz and David J. Thouless. BKT transitions can be found in several 2-D systems in condensed matter physics that are approximated by the XY model, including Josephson junction arrays and thin disordered superconducting granular films. More recently, the term has been applied by the 2-D superconductor insulator transition community to the pinning of Cooper pairs in the insulating regime, due to similarities with the original vortex BKT transition. The critical density of the BKT transition in the weakly interacting system reads
$$n_c = \frac{m k_B T}{2\pi\hbar^2}\,\ln\!\left(\frac{\xi \hbar^2}{m U}\right),$$
where $U$ is the interaction strength, and the dimensionless constant was found to be $\xi = 380 \pm 3$. Work on the transition led to the 2016 Nobel Prize in Physics being awarded to Thouless and Kosterlitz; Berezinskii died in 1981. XY model The XY model is a two-dimensional vector spin model that possesses U(1) or circular symmetry. This system is not expected to possess a normal second-order phase transition. This is because the expected ordered phase of the system is destroyed by transverse fluctuations, i.e. the Nambu-Goldstone modes associated with this broken continuous symmetry, which logarithmically diverge with system size. This is a specific case of what is called the Mermin–Wagner theorem in spin systems. Rigorously the transition is not completely understood, but the existence of two phases was proved by McBryan & Spencer (1977) and Fröhlich & Spencer (1981). Disordered phases with different correlations In the XY model in two dimensions, a second-order phase transition is not seen. However, one finds a low-temperature quasi-ordered phase with a correlation function (see statistical mechanics) that decreases with the distance like a power, which depends on the temperature. The transition from the high-temperature disordered phase with the exponential correlation to this low-temperature quasi-ordered phase is a Kosterlitz–Thouless transition. It is a phase transition of infinite order. Role of vortices In the 2-D XY model, vortices are topologically stable configurations. It is found that the high-temperature disordered phase with exponential correlation decay is a result of the formation of vortices. Vortex generation becomes thermodynamically favorable at the critical temperature of the Kosterlitz–Thouless transition. At temperatures below this, vortex generation has a power law correlation. The Kosterlitz–Thouless transition is described as a dissociation of bound vortex pairs with opposite circulations, called vortex–antivortex pairs, first described by Vadim Berezinskii. In these systems, thermal generation of vortices produces an even number of vortices of opposite sign. Bound vortex–antivortex pairs have lower energies than free vortices, but have lower entropy as well. In order to minimize the free energy, $F = E - TS$, the system undergoes a transition at a critical temperature, $T_c$. Below $T_c$, there are only bound vortex–antivortex pairs. Above $T_c$, there are free vortices. Informal description There is an elegant thermodynamic argument for the Kosterlitz–Thouless transition. The energy of a single vortex is $\kappa \ln(L/a)$, where $\kappa$ is a parameter that depends upon the system in which the vortex is located, $L$ is the system size, and $a$ is the radius of the vortex core. One assumes $L \gg a$.
In the 2D system, the number of possible positions of a vortex is approximately $(L/a)^2$. From Boltzmann's entropy formula, $S = k_B \ln W$ (with W the number of states), the entropy is $S = 2 k_B \ln(L/a)$, where $k_B$ is the Boltzmann constant. Thus, the Helmholtz free energy is
$$F = E - TS = (\kappa - 2 k_B T)\,\ln(L/a).$$
When $F > 0$, the system will not have a vortex. On the other hand, when $F < 0$, entropic considerations favor the formation of a vortex. The critical temperature above which vortices may form can be found by setting $F = 0$ and is given by
$$T_c = \frac{\kappa}{2 k_B}.$$
The Kosterlitz–Thouless transition can be observed experimentally in systems like 2D Josephson junction arrays by taking current and voltage (I-V) measurements. Above $T_c$, the relation will be linear, $V \sim I$. Just below $T_c$, the relation will be $V \sim I^3$, as the number of free vortices will go as $I^2$. This jump from the linear dependence is indicative of a Kosterlitz–Thouless transition and may be used to determine $T_c$. This approach was used in Resnick et al. to confirm the Kosterlitz–Thouless transition in proximity-coupled Josephson junction arrays. Field theoretic analysis The following discussion uses field theoretic methods. Assume a field φ(x) defined in the plane which takes on values in $S^1$, so that $\phi$ is identified with $\phi + 2\pi$. That is, the circle is realized as $\mathbb{R}/2\pi\mathbb{Z}$. The energy is given by
$$E = \frac{1}{2}\int \nabla\phi \cdot \nabla\phi\; d^2x,$$
and the Boltzmann factor is $e^{-\beta E}$. Taking a contour integral $\oint_\gamma d\phi$ over any contractible closed path $\gamma$, we would expect it to be zero (for example, by the fundamental theorem of calculus). However, this is not the case due to the singular nature of vortices (which give singularities in $\nabla\phi$). To render the theory well-defined, it is only defined up to some energetic cut-off scale $a$, so that we can puncture the plane at the points where the vortices are located, by removing regions of size of order $a$. If $\gamma$ winds counter-clockwise once around a puncture, the contour integral $\oint_\gamma d\phi$ is an integer multiple of $2\pi$. The value of this integer is the index of the vector field $\nabla\phi$. Suppose that a given field configuration has $N$ punctures located at $x_i$, $i = 1, \ldots, N$, each with index $n_i = \pm 1$. Then, $\phi$ decomposes into the sum of a field configuration with no punctures, $\phi_0$, and $\sum_{i=1}^{N} n_i \arg(z - z_i)$, where we have switched to the complex plane coordinates for convenience. The complex argument function has a branch cut, but, because $\phi$ is defined modulo $2\pi$, it has no physical consequences. Now,
$$E = E_0 + \pi \left(\sum_{i} n_i\right)^2 \ln\frac{L}{a} - 2\pi \sum_{i<j} n_i n_j \ln\frac{|x_i - x_j|}{a}.$$
If $\sum_i n_i \neq 0$, the second term is positive and diverges in the limit $L \to \infty$: configurations with unbalanced numbers of vortices of each orientation are never energetically favoured. However, if the neutral condition $\sum_i n_i = 0$ holds, the second term vanishes and the remaining vortex term is the total potential energy of a two-dimensional Coulomb gas. The scales $L$ and $a$ render the arguments of the logarithms dimensionless. Assume the case with only vortices of multiplicity $n_i = \pm 1$. At low temperatures (large $\beta$) the distance between a vortex and antivortex pair tends to be extremely small, essentially of the order $a$. At large temperatures (small $\beta$) this distance increases, and the favoured configuration becomes effectively that of a gas of free vortices and antivortices. The transition between the two different configurations is the Kosterlitz–Thouless phase transition, and the transition point is associated with an unbinding of vortex-antivortex pairs. See also KTHNY theory Goldstone boson Composite fermion Lambda transition Ising model Potts model Topological defect Quantum vortex Superfluid film Hexatic phase Notes References B. I. Halperin, D. R. Nelson, Phys. Rev. Lett. 41, 121 (1978). A. P. Young, Phys. Rev. B 19, 1855 (1979). Books J.V.
Jose, 40 Years of Berezinskii–Kosterlitz–Thouless Theory, World Scientific, 2013. H. Kleinert, Gauge Fields in Condensed Matter, Vol. I, "Superflow and Vortex Lines", pp. 1–742, World Scientific (Singapore, 1989); paperback (also available online: Vol. I, pp. 618–688). H. Kleinert, Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation, World Scientific (Singapore, 2008) (also available online). Statistical mechanics Lattice models Phase transitions
Berezinskii–Kosterlitz–Thouless transition
[ "Physics", "Chemistry", "Materials_science" ]
1,657
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Lattice models", "Computational physics", "Condensed matter physics", "Statistical mechanics", "Matter" ]
1,652,307
https://en.wikipedia.org/wiki/Isotopic%20labeling
Isotopic labeling (or isotopic labelling) is a technique used to track the passage of an isotope (an atom with a detectable variation in neutron count) through a chemical reaction, metabolic pathway, or biological cell. The reactant is 'labeled' by replacing one or more specific atoms with their isotopes. The reactant is then allowed to undergo the reaction. The position of the isotopes in the products is measured to determine what sequence the isotopic atom followed in the reaction or the cell's metabolic pathway. The nuclides used in isotopic labeling may be stable nuclides or radionuclides. In the latter case, the labeling is called radiolabeling. In isotopic labeling, there are multiple ways to detect the presence of labeling isotopes: through their mass, vibrational mode, or radioactive decay. Mass spectrometry detects the difference in an isotope's mass, while infrared spectroscopy detects the difference in the isotope's vibrational modes. Nuclear magnetic resonance detects atoms with different gyromagnetic ratios. The radioactive decay can be detected through an ionization chamber or autoradiographs of gels. An example of the use of isotopic labeling is the study of phenol (C6H5OH) in water by replacing common hydrogen (protium) with deuterium (deuterium labeling). Upon adding phenol to deuterated water (water containing D2O in addition to the usual H2O), a hydrogen-deuterium exchange is observed to affect phenol's hydroxyl group (resulting in C6H5OD), indicating that phenol readily undergoes hydrogen-exchange reactions with water. Mainly the hydroxyl group is affected; without a catalyst, the other 5 hydrogen atoms are much slower to undergo exchange, reflecting the difference in chemical environments between the hydroxyl hydrogen and the aryl hydrogens. Isotopic tracer An isotopic tracer (also "isotopic marker" or "isotopic label") is used in chemistry and biochemistry to help understand chemical reactions and interactions. In this technique, one or more of the atoms of the molecule of interest is substituted for an atom of the same chemical element, but of a different isotope (like a radioactive isotope used in radioactive tracing). Because the labeled atom has the same number of protons, it will behave in almost exactly the same way as its unlabeled counterpart and, with few exceptions, will not interfere with the reaction under investigation. The difference in the number of neutrons, however, means that it can be detected separately from the other atoms of the same element. Nuclear magnetic resonance (NMR) and mass spectrometry (MS) are used to investigate the mechanisms of chemical reactions. NMR and MS detect isotopic differences, which allows information about the position of the labeled atoms in the products' structure to be determined. With information on the positioning of the isotopic atoms in the products, the reaction pathway the initial metabolites utilize to convert into the products can be determined. Radioactive isotopes can be tested using the autoradiographs of gels in gel electrophoresis. The radiation emitted by compounds containing the radioactive isotopes darkens a piece of photographic film, recording the position of the labeled compounds relative to one another in the gel. Isotope tracers are commonly used in the form of isotope ratios. By studying the ratio between two isotopes of the same element, we avoid effects involving the overall abundance of the element, which usually swamp the much smaller variations in isotopic abundances.
Isotopic tracers are some of the most important tools in geology because they can be used to understand complex mixing processes in earth systems. Further discussion of the application of isotopic tracers in geology is covered under the heading of isotope geochemistry. Isotopic tracers are usually subdivided into two categories: stable isotope tracers and radiogenic isotope tracers. Stable isotope tracers involve only non-radiogenic isotopes and usually are mass-dependent. In theory, any element with two stable isotopes can be used as an isotopic tracer. However, the most commonly used stable isotope tracers involve relatively light isotopes, which readily undergo fractionation in natural systems. See also isotopic signature. A radiogenic isotope tracer involves an isotope produced by radioactive decay, which is usually in a ratio with a non-radiogenic isotope (whose abundance in the earth does not vary due to radioactive decay). Stable isotope labeling Stable isotope labeling involves the use of non-radioactive isotopes that can act as tracers used to model several chemical and biochemical systems. The chosen isotope can act as a label on that compound that can be identified through nuclear magnetic resonance (NMR) and mass spectrometry (MS). Some of the most common stable isotopes are 2H, 13C, and 15N, which can further be produced into NMR solvents, amino acids, nucleic acids, lipids, common metabolites and cell growth media. The compounds produced using stable isotopes are specified either by the percentage of labeled isotopes (that is, 30% uniformly labeled 13C glucose contains a mixture that is 30% labeled with the 13C isotope and 70% naturally labeled carbon) or by the specifically labeled carbon positions on the compound (that is, 1-13C glucose, which is labeled at the first carbon position of glucose). A network of reactions adapted from the glycolysis pathway and the pentose phosphate pathway is shown in which the labeled carbon isotope rearranges to different carbon positions throughout the network of reactions. The network starts with fructose 6-phosphate (F6P), which has 6 carbon atoms with a 13C label at carbon positions 1 and 2. 1,2-13C F6P becomes two glyceraldehyde 3-phosphate (G3P) molecules: one 2,3-13C G3P and one unlabeled G3P. The 2,3-13C G3P can now be reacted with sedoheptulose 7-phosphate (S7P) to form an unlabeled erythrose 4-phosphate (E4P) and a 5,6-13C F6P. The unlabeled G3P will react with the S7P to synthesize unlabeled products. The figure demonstrates the use of stable isotope labeling to discover the carbon atom rearrangement through reactions using position-specific labeled compounds. Metabolic flux analysis using stable isotope labeling Metabolic flux analysis (MFA) using stable isotope labeling is an important tool for explaining the flux of certain elements through the metabolic pathways and reactions within a cell. An isotopic label is fed to the cell, then the cell is allowed to grow utilizing the labeled feed. For stationary metabolic flux analysis the cell must reach a steady state (the isotopes entering and leaving the cell remain constant with time) or a quasi-steady state (steady state is reached for a given period of time). The isotope pattern of the output metabolite is determined. The output isotope pattern provides valuable information, which can be used to find the magnitude of flux (the rate of conversion from reactants to products) through each reaction.
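As a deliberately simplified illustration of the fitting step, consider a three-carbon metabolite fed as a 50/50 mix of uniformly labeled and unlabeled molecules, where an unknown fraction p of the pool is split into 2C and 1C fragments that recombine at random. This toy model, its numbers, and the one-parameter least-squares fit are all our own assumptions, not a production MFA code:

```python
import numpy as np

# Mass-isotopomer fractions (M+0 .. M+3) as a function of the fraction p
# of metabolite taking the split-and-recombine pathway. With random
# recombination of 2C and 1C fragments, each of 000/001/110/111 appears
# with probability 1/4 in the recombined pool.
def predicted_mid(p: float) -> np.ndarray:
    return np.array([
        0.5 * (1 - p) + 0.25 * p,   # M+0 (unlabeled)
        0.25 * p,                   # M+1
        0.25 * p,                   # M+2
        0.5 * (1 - p) + 0.25 * p,   # M+3 (fully labeled)
    ])

# "Measured" distribution: here synthesized with p = 0.4 plus small noise.
measured = predicted_mid(0.4) + np.array([0.01, -0.005, 0.0, -0.005])

# The model is linear in p, so least squares reduces to a projection:
basis = predicted_mid(1.0) - predicted_mid(0.0)
p_hat = np.dot(basis, measured - predicted_mid(0.0)) / np.dot(basis, basis)
print(f"estimated flux fraction p = {p_hat:.2f}")   # ~0.4
```

A real MFA code solves the analogous problem for many reactions at once, with the stoichiometric matrix and constraints described in the next paragraph.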
The figure demonstrates the ability to use different labels to determine the flux through a certain reaction. Assume the original metabolite, a three-carbon compound, has the ability to either split into a two-carbon metabolite and a one-carbon metabolite in one reaction and then recombine, or remain a three-carbon metabolite. Suppose the reaction is provided with two isotopic forms of the metabolite in equal proportion: one completely labeled (blue circles), commonly known as uniformly labeled, and one completely unlabeled (white circles). The pathway down the left side of the diagram does not display any change in the metabolites, while the right side shows the split and recombination. As shown, if the metabolite only takes the pathway down the left side, it remains in a 50–50 ratio of uniformly labeled to unlabeled metabolite. If the metabolite only takes the right side, new labeling patterns can occur, all in equal proportion. Other proportions can occur depending on how much of the original metabolite follows the left side of the pathway versus the right side of the pathway. Here the proportions are shown for a situation in which half of the metabolites take the left side and half the right, but other proportions can occur. These patterns of labeled atoms and unlabeled atoms in one compound represent isotopomers. By measuring the isotopomer distribution of the differently labeled metabolites, the flux through each reaction can be determined. MFA combines the data harvested from isotope labeling with the stoichiometry of each reaction and additional constraints, and uses an optimization procedure to resolve a flux map. The irreversible reactions provide the thermodynamic constraints needed to find the fluxes. A matrix is constructed that contains the stoichiometry of the reactions. The intracellular fluxes are estimated by using an iterative method in which simulated fluxes are plugged into the stoichiometric model. The simulated fluxes are displayed in a flux map, which shows the rate of reactants being converted to products for each reaction. In most flux maps, the thicker the arrow, the larger the flux value of the reaction. Isotope labeling measuring techniques Any technique that can measure differences between isotopomers can be used. The two primary methods, nuclear magnetic resonance (NMR) and mass spectrometry (MS), have been developed for measuring mass isotopomers in stable isotope labeling. Proton NMR was the first technique used for 13C-labeling experiments. Using this method, each single protonated carbon position inside a particular metabolite pool can be observed separately from the other positions. This allows the percentage of isotopomers labeled at that specific position to be known. The limitation of proton NMR is that if there are n carbon atoms in a metabolite, there can only be at most n different positional enrichment values, which is only a small fraction of the total isotopomer information. Although the use of proton NMR labeling is limiting, pure proton NMR experiments are much easier to evaluate than experiments with more isotopomer information. In addition to proton NMR, 13C NMR techniques allow a more detailed view of the distribution of the isotopomers. A labeled carbon atom will produce different hyperfine splitting signals depending on the labeling state of its direct neighbors in the molecule. A singlet peak emerges if the neighboring carbon atoms are not labeled. A doublet peak emerges if only one neighboring carbon atom is labeled.
The size of the doublet split depends on the functional group of the neighboring carbon atom. If two neighboring carbon atoms are labeled, a doublet of doublets may degenerate into a triplet if the doublet splittings are equal. The drawback to using NMR techniques for metabolic flux analysis is that it differs from other NMR applications in being a rather specialized discipline. An NMR spectrometer may not be directly available for all research teams. The optimization of NMR measurement parameters and proper analysis of peak structures requires a skilled NMR specialist. Certain metabolites also may require specialized measurement procedures to obtain additional isotopomer data. In addition, specially adapted software tools are needed to determine the precise quantity of peak areas as well as to identify the decomposition of entangled singlet, doublet, and triplet peaks. Compared with nuclear magnetic resonance, mass spectrometry (MS) is more widely applicable and more sensitive for metabolic flux analysis experiments. MS instruments are available in different variants. Different from two-dimensional nuclear magnetic resonance (2D-NMR), the MS instruments work directly with hydrolysate. In gas chromatography-mass spectrometry (GC-MS), the MS is coupled to a gas chromatograph to separate the compounds of the hydrolysate. The compounds eluting from the GC column are then ionized and simultaneously fragmented. The benefit in using GC-MS is that not only are the mass isotopomers of the molecular ion measured but also the mass isotopomer spectrum of several fragments, which significantly increases the measured information. In liquid chromatography-mass spectrometry (LC-MS), the GC is replaced with a liquid chromatograph. The main difference is that chemical derivatization is not necessary. Applications of LC-MS to MFA, however, are rare. In each case, MS instruments separate a particular isotopomer distribution by molecular weight. All isotopomers of a particular metabolite that contain the same number of labeled carbon atoms are collected in one peak signal. Because every isotopomer contributes to exactly one peak in the MS spectrum, the percentage value can then be calculated for each peak, yielding the mass isotopomer fraction. For a metabolite with n carbon atoms, n+1 measurements are produced. After normalization, exactly n informative mass isotopomer quantities remain. The drawback to using MS techniques is that for gas chromatography, the sample must be prepared by chemical derivatization in order to obtain molecules with charge. There are numerous compounds used to derivatize samples. N,N-Dimethylformamide dimethyl acetal (DMFDMA) and N-(tert-butyldimethylsilyl)-N-methyltrifluoroacetamide (MTBSTFA) are two examples of compounds that have been used to derivatize amino acids. In addition, strong isotope effects can affect the retention time of differently labeled isotopomers in the GC column. Overloading of the GC column also must be prevented. Lastly, the natural abundance of atoms other than carbon also leads to a disturbance in the mass isotopomer spectrum. For example, each oxygen atom in the molecule might also be present as a 17O isotope and as an 18O isotope. A more significant impact of the natural abundance of isotopes is the effect of silicon, which has naturally abundant 29Si and 30Si isotopes and is used in derivatizing agents for MS techniques.
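The normalization and natural-abundance issues described above reduce to simple arithmetic. The sketch below converts raw MS peak areas for the n+1 mass isotopomers of a fragment into fractions, and computes the binomial mass isotopomer distribution that an unlabeled fragment would show from natural 13C alone, which is what must be corrected for; the same binomial treatment applies to O and to the Si of the derivatization group. The peak areas and the 3-carbon fragment are made-up example values.

from math import comb

# Made-up raw MS peak areas for the M+0 ... M+3 mass isotopomers of a 3-carbon fragment.
raw_peaks = [8200.0, 3100.0, 900.0, 2400.0]
fractions = [p / sum(raw_peaks) for p in raw_peaks]   # n+1 values, summing to 1
print("mass isotopomer fractions:", [round(f, 3) for f in fractions])

# Expected distribution of an UNLABELED 3-carbon fragment due only to natural
# 13C abundance (~1.07%): a binomial distribution over the carbon atoms.
P13C, n = 0.0107, 3
natural_mid = [comb(n, k) * P13C**k * (1 - P13C)**(n - k) for k in range(n + 1)]
print("natural-abundance background:", [round(f, 4) for f in natural_mid])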
Radioisotopic labeling Radioisotopic labeling is a technique for tracking the passage of a sample of substance through a system. The substance is "labeled" by including radionuclides in its chemical composition. When these decay, their presence can be determined by detecting the radiation emitted by them. Radioisotopic labeling is a special case of isotopic labeling. For these purposes, a particularly useful type of radioactive decay is positron emission. When a positron collides with an electron, it releases two high-energy photons traveling in diametrically opposite directions. If the positron is produced within a solid object, it is likely to do this before traveling more than a millimeter. If both of these photons can be detected, the location of the decay event can be determined very precisely. Strictly speaking, radioisotopic labeling includes only cases where radioactivity is artificially introduced by experimenters, but some natural phenomena allow similar analysis to be performed. In particular, radiometric dating uses a closely related principle. Applications Applications in human mineral nutrition research The use of stable isotope tracers to study mineral nutrition and metabolism in humans was first reported in the 1960s. While radioisotopes had been used in human nutrition research for several decades prior, stable isotopes presented a safer option, especially in subjects for which there is elevated concern about radiation exposure, e.g. pregnant and lactating women and children. Other advantages offered by stable isotopes include the ability to study elements having no suitable radioisotopes and to study long-term tracer behavior. Thus the use of stable isotopes became commonplace with the increasing availability of isotopically enriched materials and inorganic mass spectrometers. The use of stable isotopes instead of radioisotopes does have several drawbacks: larger quantities of tracer are required, having the potential of perturbing the naturally existing mineral; analytical sample preparation is more complex and mass spectrometry instrumentation more costly; the presence of tracer in whole bodies or particular tissues cannot be measured externally. Nonetheless, the advantages have prevailed making stable isotopes the standard in human studies. Most of the minerals that are essential for human health and of particular interest to nutrition researchers have stable isotopes, some well-suited as biological tracers because of their low natural abundance. Iron, zinc, calcium, copper, magnesium, selenium and molybdenum are among the essential minerals having stable isotopes to which isotope tracer methods have been applied. Iron, zinc and calcium in particular have been extensively studied. Aspects of mineral nutrition/metabolism that are studied include absorption (from the gastrointestinal tract into the body), distribution, storage, excretion and the kinetics of these processes. Isotope tracers are administered to subjects orally (with or without food, or with a mineral supplement) and/or intravenously. Isotope enrichment is then measured in blood plasma, erythrocytes, urine and/or feces. Enrichment has also been measured in breast milk and intestinal contents. Tracer experiment design sometimes differs between minerals due to differences in their metabolism. For example, iron absorption is usually determined from incorporation of tracer in erythrocytes whereas zinc or calcium absorption is measured from tracer appearance in plasma, urine or feces. 
The administration of multiple isotope tracers in a single study is common, permitting the use of more reliable measurement methods and simultaneous investigations of multiple aspects of metabolism. The measurement of mineral absorption from the diet, often conceived of as bioavailability, is the most common application of isotope tracer methods to nutrition research. Among the purposes of such studies are the investigations of how absorption is influenced by type of food (e.g. plant vs animal source, breast milk vs formula), other components of the diet (e.g. phytate), disease and metabolic disorders (e.g. environmental enteric dysfunction), the reproductive cycle, quantity of mineral in diet, chronic mineral deficiency, subject age and homeostatic mechanisms. When results from such studies are available for a mineral, they may serve as a basis for estimations of the human physiological and dietary requirements of the mineral. When tracer is administered with food for the purpose of observing mineral absorption and metabolism, it may be in the form of an intrinsic or extrinsic label. An intrinsic label is an isotope that has been introduced into the food during its production, thus enriching the natural mineral content of the food, whereas extrinsic labeling refers to the addition of tracer isotope to the food during the study. Because it is a very time-consuming and expensive approach, intrinsic labeling is not routinely used. Studies comparing measurements of absorption using intrinsic and extrinsic labeling of various foods have generally demonstrated good agreement between the two labeling methods, supporting the hypothesis that extrinsic and natural minerals are handled similarly in the human gastrointestinal tract. Enrichment is quantified from the measurement of isotope ratios, the ratio of the tracer isotope to a reference isotope, by mass spectrometry. Multiple definitions and calculations of enrichment have been adopted by different researchers. Calculations of enrichment become more complex when multiple tracers are used simultaneously. Because enriched isotope preparations are never isotopically pure, i.e. they contain all the element's isotopes in unnatural abundances, calculations of enrichment of multiple isotope tracers must account for the perturbation of each isotope ratio by the presence of the other tracers. Due to the prevalence of mineral deficiencies and their critical impact on human health and well-being in resource-poor countries, the International Atomic Energy Agency has recently published detailed and comprehensive descriptions of stable isotope methods to facilitate the dissemination of this knowledge to researchers beyond western academic centers. Applications in proteomics In proteomics, the study of the full set of proteins expressed by a genome, identifying disease biomarkers can involve the use of stable isotope labeling by amino acids in cell culture (SILAC), which provides isotopically labeled forms of amino acids used to estimate protein levels. In recombinant protein production, engineered proteins are produced in large quantities and isotope labeling is a tool to test for relevant proteins. The method typically involves selectively enriching nuclei with 13C or 15N, or depleting 1H from them. The recombinant protein would be expressed in E. coli using media containing 15N ammonium chloride as the nitrogen source. The resulting 15N labeled proteins are then purified by immobilized metal affinity chromatography and their percentage estimated.
In order to increase the yield of labeled proteins and cut down the cost of isotope labeled media, an alternative procedure primarily increases the cell mass using unlabeled media before transferring it into a minimal amount of labeled media. Another application of isotope labeling is in measuring DNA synthesis, that is, cell proliferation in vitro, which uses 3H-thymidine labeling to compare patterns of synthesis (or sequence) in cells. Applications for ecosystem process analysis Isotopic tracers are used to examine processes in natural systems, especially terrestrial and aquatic environments. In soil science 15N tracers are used extensively to study nitrogen cycling, whereas 13C and 14C, stable and radioisotopes of carbon respectively, are used for studying turnover of organic compounds and fixation of CO2 by autotrophs. For example, Marsh et al. (2005) used dual labeled (15N- and 14C) urea to demonstrate utilization of the compound by ammonia oxidizers as both an energy source (ammonia oxidation) and carbon source (chemoautotrophic carbon fixation). Deuterated water is also used for tracing the fate and ages of water in a tree or in an ecosystem. Applications for oceanography Tracers are also used extensively in oceanography to study a wide array of processes. The isotopes used are typically naturally occurring with well-established sources and rates of formation and decay. However, anthropogenic isotopes may also be used with great success. The researchers measure the isotopic ratios at different locations and times to infer information about the physical processes of the ocean. Particle transport The ocean is an extensive network of particle transport. Thorium isotopes can help researchers decipher the vertical and horizontal movement of matter. 234Th has a constant, well-defined production rate in the ocean and a half-life of 24 days. This naturally occurring isotope has been shown to vary linearly with depth. Therefore, any changes in this linear pattern can be attributed to the transport of 234Th on particles. For example, low isotopic ratios in surface water with very high values a few meters down would indicate a vertical flux in the downward direction. Furthermore, the thorium isotope may be traced within a specific depth to decipher the lateral transport of particles. Circulation Circulation within local systems, such as bays, estuaries, and groundwater, may be examined with radium isotopes. 223Ra has a half-life of 11 days and can occur naturally at specific locations in rivers and groundwater sources. The isotopic ratio of radium will then decrease as the water from the source river enters a bay or estuary. By measuring the amount of 223Ra at a number of different locations, a circulation pattern can be deciphered. The same process can also be used to study the movement and discharge of groundwater. Various isotopes of lead can be used to study circulation on a global scale. Different oceans (i.e. the Atlantic, Pacific, Indian, etc.) have different isotopic signatures. This results from differences in isotopic ratios of sediments and rocks within the different oceans. Because the different isotopes of lead have residence times of 50–200 years, there is not enough time for the isotopic ratios to be homogenized throughout the whole ocean. Therefore, precise analysis of Pb isotopic ratios can be used to study the circulation of the different oceans.
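The circulation estimates described above rest on simple radioactive-decay arithmetic: a short-lived tracer such as 223Ra decays as it is carried away from its source, so the activity ratio between two stations gives an apparent transit time. The sketch below shows that calculation under the idealized assumption of decay only (no mixing or additional sources); the half-lives are the ones quoted in the text, while the activity values are invented for illustration.

import math

def apparent_transit_time(activity_near, activity_far, half_life_days):
    """Elapsed time implied by pure radioactive decay between two samples,
    assuming no mixing or extra sources along the flow path."""
    decay_const = math.log(2) / half_life_days
    return math.log(activity_near / activity_far) / decay_const

# 223Ra (half-life ~11 d): made-up activities at a river mouth and offshore.
t_ra = apparent_transit_time(activity_near=12.0, activity_far=4.0,
                             half_life_days=11.0)
print(f"apparent 223Ra transit time: {t_ra:.1f} days")   # ~17 days

# 234Th (half-life ~24 d): fraction remaining after 30 days of particle transport.
frac_th = 0.5 ** (30.0 / 24.0)
print(f"fraction of 234Th remaining after 30 d: {frac_th:.2f}")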
Tectonic processes and climate change Isotopes with extremely long half-lives and their decay products can be used to study multi-million year processes, such as tectonics and extreme climate change. For example, in rubidium–strontium dating, the isotopic ratio of strontium (87Sr/86Sr) can be analyzed within ice cores to examine changes over the earth's lifetime. Differences in this ratio within the ice core would indicate significant alterations in the earth's geochemistry. Isotopes related to nuclear weapons The aforementioned processes can be measured using naturally occurring isotopes. Nevertheless, anthropogenic isotopes are also extremely useful for oceanographic measurements. Nuclear weapons tests released a plethora of uncommon isotopes into the world's oceans. 3H, 129I, and 137Cs can be found dissolved in seawater, while 241Am and 238Pu are attached to particles. The isotopes dissolved in water are particularly useful in studying global circulation. For example, differences in lateral isotopic ratios within an ocean can indicate strong water fronts or gyres. Conversely, the isotopes attached to particles can be used to study mass transport within water columns. For instance, high levels of Am or Pu can indicate downwelling when observed at great depths, or upwelling when observed at the surface. See also Uses of radionuclides Radioactivity in biology Radioactive tracer Isotopomer Isotopologue Isobaric labeling Isotope dilution Infrared spectroscopy of metal carbonyls References External links Synthesis of Radiolabeled Compounds Labeling Laboratory techniques Physical chemistry Biochemistry methods Mass spectrometry Spectroscopy Nuclear physics
Isotopic labeling
[ "Physics", "Chemistry", "Biology" ]
5,359
[ "Biochemistry methods", "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Molecular physics", "Instrumental analysis", "Mass", "Isotopes", "Spectroscopy", "Mass spectrometry", "nan", "Nuclear physics", "Biochemistry", "Physical chemistry", "Matter" ]
1,652,911
https://en.wikipedia.org/wiki/Spectral%20efficiency
Spectral efficiency, spectrum efficiency or bandwidth efficiency refers to the information rate that can be transmitted over a given bandwidth in a specific communication system. It is a measure of how efficiently a limited frequency spectrum is utilized by the physical layer protocol, and sometimes by the medium access control (the channel access protocol). Link spectral efficiency The link spectral efficiency of a digital communication system is measured in bit/s/Hz, or, less frequently but unambiguously, in (bit/s)/Hz. It is the net bit rate (useful information rate excluding error-correcting codes) or maximum throughput divided by the bandwidth in hertz of a communication channel or a data link. Alternatively, the spectral efficiency may be measured in bit/symbol, which is equivalent to bits per channel use (bpcu), implying that the net bit rate is divided by the symbol rate (modulation rate) or line code pulse rate. Link spectral efficiency is typically used to analyze the efficiency of a digital modulation method or line code, sometimes in combination with a forward error correction (FEC) code and other physical layer overhead. In the latter case, a "bit" refers to a user data bit; FEC overhead is always excluded. The modulation efficiency in (bit/s)/Hz is the gross bit rate (including any error-correcting code) divided by the bandwidth. Example 1: A transmission technique using one kilohertz of bandwidth to transmit 1,000 bits per second has a modulation efficiency of 1 (bit/s)/Hz. Example 2: A V.92 modem for the telephone network can transfer 56,000 bit/s downstream and 48,000 bit/s upstream over an analog telephone network. Due to filtering in the telephone exchange, the frequency range is limited to between 300 hertz and 3,400 hertz, corresponding to a bandwidth of 3,400 − 300 = 3,100 hertz. The spectral efficiency or modulation efficiency is 56,000/3,100 = 18.1 (bit/s)/Hz downstream, and 48,000/3,100 = 15.5 (bit/s)/Hz upstream. An upper bound for the attainable modulation efficiency is given by the Nyquist rate or Hartley's law as follows: For a signaling alphabet with M alternative symbols, each symbol represents N = log2 M bits. N is the modulation efficiency measured in bit/symbol or bpcu. In the case of baseband transmission (line coding or pulse-amplitude modulation) with a baseband bandwidth (or upper cut-off frequency) B, the symbol rate cannot exceed 2B symbols/s if intersymbol interference is to be avoided. Thus, the spectral efficiency cannot exceed 2N (bit/s)/Hz in the baseband transmission case. In the passband transmission case, a signal with passband bandwidth W can be converted to an equivalent baseband signal (using undersampling or a superheterodyne receiver), with upper cut-off frequency W/2. If double-sideband modulation schemes such as QAM, ASK, PSK or OFDM are used, this results in a maximum symbol rate of W symbols/s, meaning that the modulation efficiency cannot exceed N (bit/s)/Hz. If digital single-sideband modulation is used, the passband signal with bandwidth W corresponds to a baseband message signal with baseband bandwidth W, resulting in a maximum symbol rate of 2W symbols/s and an attainable modulation efficiency of 2N (bit/s)/Hz. Example 3: A 16QAM modem has an alphabet size of M = 16 alternative symbols, with N = 4 bit/symbol or bpcu. Since QAM is a form of double sideband passband transmission, the spectral efficiency cannot exceed N = 4 (bit/s)/Hz.
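The arithmetic in Examples 1 to 3 above is easy to reproduce. The short sketch below recomputes the V.92 figures and the Hartley-style bound N = log2 M in bit/symbol; all numbers are taken from the examples in the text.

import math

def modulation_efficiency(bit_rate_bps, bandwidth_hz):
    """Bit rate divided by occupied bandwidth, in (bit/s)/Hz."""
    return bit_rate_bps / bandwidth_hz

# Example 2: V.92 over a 300-3,400 Hz telephone channel.
bandwidth = 3400 - 300
print(round(modulation_efficiency(56_000, bandwidth), 1))  # ~18.1 downstream
print(round(modulation_efficiency(48_000, bandwidth), 1))  # ~15.5 upstream

def bits_per_symbol(alphabet_size):
    """N = log2(M) bit/symbol for an M-ary signaling alphabet."""
    return math.log2(alphabet_size)

# Example 3: 16QAM carries 4 bit/symbol; as a double-sideband passband scheme
# its link spectral efficiency cannot exceed N (bit/s)/Hz.
print(bits_per_symbol(16))   # 4.0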
Example 4: The 8VSB (8-level vestigial sideband) modulation scheme used in the ATSC digital television standard gives N=3 bit/symbol or bpcu. Since it can be described as nearly single-side band, the modulation efficiency is close to 2N = 6 (bit/s)/Hz. In practice, ATSC transfers a gross bit rate of 32 Mbit/s over a 6 MHz wide channel, resulting in a modulation efficiency of 32/6 = 5.3 (bit/s)/Hz. Example 5: The downlink of a V.92 modem uses a pulse-amplitude modulation with 128 signal levels, resulting in N = 7 bit/symbol. Since the transmitted signal before passband filtering can be considered as baseband transmission, the spectral efficiency cannot exceed 2N = 14 (bit/s)/Hz over the full baseband channel (0 to 4 kHz). As seen above, a higher spectral efficiency is achieved if we consider the smaller passband bandwidth. If a forward error correction code is used, the spectral efficiency is reduced from the uncoded modulation efficiency figure. Example 6: If a forward error correction (FEC) code with code rate 1/2 is added, meaning that the encoder input bit rate is one half the encoder output rate, the spectral efficiency is 50% of the modulation efficiency. In exchange for this reduction in spectral efficiency, FEC usually reduces the bit-error rate, and typically enables operation at a lower signal-to-noise ratio (SNR). An upper bound for the spectral efficiency possible without bit errors in a channel with a certain SNR, if ideal error coding and modulation is assumed, is given by the Shannon–Hartley theorem. Example 7: If the SNR is 1, corresponding to 0 decibel, the link spectral efficiency can not exceed 1 (bit/s)/Hz for error-free detection (assuming an ideal error-correcting code) according to Shannon–Hartley regardless of the modulation and coding. Note that the goodput (the amount of application layer useful information) is normally lower than the maximum throughput used in the above calculations, because of packet retransmissions, higher protocol layer overhead, flow control, congestion avoidance, etc. On the other hand, a data compression scheme, such as the V.44 or V.42bis compression used in telephone modems, may however give higher goodput if the transferred data is not already efficiently compressed. The link spectral efficiency of a wireless telephony link may also be expressed as the maximum number of simultaneous calls over 1 MHz frequency spectrum in erlangs per megahertz, or E/MHz. This measure is also affected by the source coding (data compression) scheme. It may be applied to analog as well as digital transmission. In wireless networks, the link spectral efficiency can be somewhat misleading, as larger values are not necessarily more efficient in their overall use of radio spectrum. In a wireless network, high link spectral efficiency may result in high sensitivity to co-channel interference (crosstalk), which affects the capacity. For example, in a cellular telephone network with frequency reuse, spectrum spreading and forward error correction reduce the spectral efficiency in (bit/s)/Hz but substantially lower the required signal-to-noise ratio in comparison to non-spread spectrum techniques. This can allow for much denser geographical frequency reuse that compensates for the lower link spectral efficiency, resulting in approximately the same capacity (the same number of simultaneous phone calls) over the same bandwidth, using the same number of base station transmitters. 
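The Shannon–Hartley bound and the effect of a rate-1/2 FEC code mentioned in Examples 6 and 7 can be checked numerically. The sketch below uses the linear (not dB) SNR; the 16QAM figure used for the coded case is just an assumed illustration.

import math

def shannon_limit_bits_per_hz(snr_linear):
    """Upper bound on error-free spectral efficiency: log2(1 + SNR)."""
    return math.log2(1.0 + snr_linear)

print(shannon_limit_bits_per_hz(1.0))                        # SNR = 0 dB -> 1.0 (bit/s)/Hz
print(round(shannon_limit_bits_per_hz(10 ** (20 / 10)), 2))  # SNR = 20 dB -> ~6.66 (bit/s)/Hz

# Example 6: a rate-1/2 FEC halves the uncoded modulation efficiency.
uncoded_efficiency = 4.0   # assumed: 16QAM at 4 (bit/s)/Hz
code_rate = 0.5
print(uncoded_efficiency * code_rate)   # 2.0 (bit/s)/Hz of user data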
As discussed below, a more relevant measure for wireless networks would be system spectral efficiency in bit/s/Hz per unit area. However, in closed communication links such as telephone lines and cable TV networks, and in noise-limited wireless communication system where co-channel interference is not a factor, the largest link spectral efficiency that can be supported by the available SNR is generally used. System spectral efficiency or area spectral efficiency In digital wireless networks, the system spectral efficiency or area spectral efficiency is typically measured in (bit/s)/Hz per unit area, in (bit/s)/Hz per cell, or in (bit/s)/Hz per site. It is a measure of the quantity of users or services that can be simultaneously supported by a limited radio frequency bandwidth in a defined geographic area. It may for example be defined as the maximum aggregated throughput or goodput, i.e. summed over all users in the system, divided by the channel bandwidth and by the covered area or number of base station sites. This measure is affected not only by the single-user transmission technique, but also by multiple access schemes and radio resource management techniques utilized. It can be substantially improved by dynamic radio resource management. If it is defined as a measure of the maximum goodput, retransmissions due to co-channel interference and collisions are excluded. Higher-layer protocol overhead (above the media access control sublayer) is normally neglected. Example 8: In a cellular system based on frequency-division multiple access (FDMA) with a fixed channel allocation (FCA) cellplan using a frequency reuse factor of 1/4, each base station has access to 1/4 of the total available frequency spectrum. Thus, the maximum possible system spectral efficiency in (bit/s)/Hz per site is 1/4 of the link spectral efficiency. Each base station may be divided into 3 cells by means of 3 sector antennas, also known as a 4/12 reuse pattern. Then each cell has access to 1/12 of the available spectrum, and the system spectral efficiency in (bit/s)/Hz per cell or (bit/s)/Hz per sector is 1/12 of the link spectral efficiency. The system spectral efficiency of a cellular network may also be expressed as the maximum number of simultaneous phone calls per area unit over 1 MHz frequency spectrum in E/MHz per cell, E/MHz per sector, E/MHz per site, or (E/MHz)/m2. This measure is also affected by the source coding (data compression) scheme. It may be used in analog cellular networks as well. Low link spectral efficiency in (bit/s)/Hz does not necessarily mean that an encoding scheme is inefficient from a system spectral efficiency point of view. As an example, consider Code Division Multiplexed Access (CDMA) spread spectrum, which is not a particularly spectral-efficient encoding scheme when considering a single channel or single user. However, the fact that one can "layer" multiple channels on the same frequency band means that the system spectrum utilization for a multi-channel CDMA system can be very good. Example 9: In the W-CDMA 3G cellular system, every phone call is compressed to a maximum of 8,500 bit/s (the useful bitrate), and spread out over a 5 MHz wide frequency channel. This corresponds to a link throughput of only 8,500/5,000,000 = 0.0017 (bit/s)/Hz. Let us assume that 100 simultaneous (non-silent) calls are possible in the same cell. 
Spread spectrum makes it possible to have as low a frequency reuse factor as 1, if each base station is divided into 3 cells by means of 3 directional sector antennas. This corresponds to a system spectrum efficiency of over 1 × 100 × 0.0017 = 0.17 (bit/s)/Hz per site, and 0.17/3 = 0.06 (bit/s)/Hz per cell or sector. The spectral efficiency can be improved by radio resource management techniques such as efficient fixed or dynamic channel allocation, power control, link adaptation and diversity schemes. A combined fairness measure and system spectral efficiency measure is the fairly shared spectral efficiency. Comparison table Examples of predicted numerical spectral efficiency values of some common communication systems can be found in the table below. These results will not be achieved in all systems. Those further from the transmitter will not get this performance. N/A means not applicable. See also Baud CDMA spectral efficiency Channel capacity Comparison of mobile phone standards Cooper's Law Goodput Network throughput Orders of magnitude (bit rate) Radio resource management (RRM) Spatial capacity References Network performance Wireless networking Information theory Telecommunication theory Radio resource management
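Examples 8 and 9 above reduce to a small amount of bookkeeping: divide the link spectral efficiency by the frequency reuse factor and the number of sectors, or multiply the per-call throughput by the number of simultaneous calls. A sketch of both calculations, using only the numbers quoted in the examples:

# Example 8: FDMA with a 1/4 reuse factor and 3 sector antennas per site.
link_se = 1.0                     # assumed link spectral efficiency, (bit/s)/Hz
per_site = link_se / 4            # each base station gets 1/4 of the spectrum
per_cell = link_se / 12           # 4/12 reuse pattern (3 cells per site)
print(per_site, per_cell)         # 0.25, ~0.083 (bit/s)/Hz

# Example 9: W-CDMA voice, 8,500 bit/s per call over a 5 MHz carrier.
per_call_se = 8_500 / 5_000_000   # ~0.0017 (bit/s)/Hz per call
calls_per_cell = 100
reuse_factor = 1
per_site_wcdma = reuse_factor * calls_per_cell * per_call_se   # ~0.17 per site
per_sector_wcdma = per_site_wcdma / 3                          # ~0.06 per sector
print(round(per_site_wcdma, 2), round(per_sector_wcdma, 2))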
Spectral efficiency
[ "Mathematics", "Technology", "Engineering" ]
2,503
[ "Telecommunications engineering", "Applied mathematics", "Wireless networking", "Computer networks engineering", "Computer science", "Information theory" ]
1,653,141
https://en.wikipedia.org/wiki/Robinson%20annulation
The Robinson annulation is a chemical reaction used in organic chemistry for ring formation. It was discovered by Robert Robinson in 1935 as a method to create a six-membered ring by forming three new carbon–carbon bonds. The method uses a ketone and a methyl vinyl ketone to form an α,β-unsaturated ketone in a cyclohexane ring by a Michael addition followed by an aldol condensation. This procedure is one of the key methods to form fused ring systems. Formation of cyclohexenones and derivatives is important in chemistry for their application to the synthesis of many natural products and other interesting organic compounds such as antibiotics and steroids. Specifically, the synthesis of cortisone is completed through the use of the Robinson annulation. The initial paper on the Robinson annulation was published by William Rapson and Robert Robinson while Rapson studied at Oxford with Professor Robinson. Before their work, cyclohexenone syntheses were not derived from the α,β-unsaturated ketone component. Initial approaches coupled the methyl vinyl ketone with a naphthol to give a naphtholoxide, but this procedure was not sufficient to form the desired cyclohexenone. This was attributed to unsuitable conditions of the reaction. Robinson and Rapson found in 1935 that the interaction between cyclohexanone and an α,β-unsaturated ketone afforded the desired cyclohexenone. It remains one of the key methods for the construction of six-membered ring compounds. Since it is so widely used, there are many aspects of the reaction that have been investigated, such as variations of the substrates and reaction conditions, as discussed in the scope and variations section. Robert Robinson won the Nobel Prize for Chemistry in 1947 for his contribution to the study of alkaloids. Reaction mechanism The original procedure of the Robinson annulation begins with the nucleophilic attack of a ketone in a Michael reaction on a vinyl ketone to produce the intermediate Michael adduct. Subsequent aldol type ring closure leads to the keto alcohol, which is then followed by dehydration to produce the annulation product. In the Michael reaction, the ketone is deprotonated by a base to form an enolate nucleophile which attacks the electron acceptor. This acceptor is generally an α,β-unsaturated ketone, although aldehydes, acid derivatives and similar compounds can work as well (see scope). In the example shown here, regioselectivity is dictated by the formation of the thermodynamic enolate. Alternatively, the regioselectivity is often controlled by using a β-diketone or β-ketoester as the enolate component, since deprotonation at the carbon flanked by the carbonyl groups is strongly favored. The intramolecular aldol condensation then takes place in such a way as to install the six-membered ring. In the final product, the three carbon atoms of the α,β-unsaturated system and the carbon α to its carbonyl group make up the four-carbon bridge of the newly installed ring. In order to avoid a reaction between the original enolate and the cyclohexenone product, the initial Michael adduct is often isolated first and then cyclized to give the desired octalone in a separate step. Stereochemistry Studies have been completed on the formation of the hydroxy ketones in the Robinson annulation reaction scheme. The trans compound is favored due to antiperiplanar effects of the final aldol condensation in kinetically controlled reactions. It has also been found, though, that the cyclization can proceed in a synclinal orientation.
The figure below shows the three possible stereochemical pathways, assuming a chair transition state. It has been postulated that the difference in the formation of these transition states and their corresponding products is due to solvent interactions. Scanio found that changing the solvent of the reaction from dioxane to DMSO gives different stereochemistry in step D above. This suggests that the presence of protic or aprotic solvents gives rise to different transition states. Mechanistic classification Robinson annulation is one notable example of a wider class of chemical transformations termed tandem Michael–aldol reactions, which sequentially combine Michael addition and aldol reaction into a single reaction. As is the case with Robinson annulation, Michael addition usually happens first to tether the two reactants together, then aldol reaction proceeds intramolecularly to generate the ring system in the product. Usually five- or six-membered rings are generated. Scope and variations Reaction conditions Although the Robinson annulation is generally conducted under basic conditions, reactions have been conducted under a variety of conditions. Heathcock and Ellis report similar results to the base-catalyzed method using sulfuric acid. The Michael reaction can occur under neutral conditions through an enamine. A Mannich base can be heated in the presence of the ketone to produce the Michael adduct. Successful preparation of compounds using these Robinson annulation methods has been reported. The Michael acceptor A typical Michael acceptor is an α,β-unsaturated ketone, although aldehydes and acid derivatives work as well. In addition, Bergmann et al. report that donors such as nitriles, nitro compounds, sulfones and certain hydrocarbons can be used as donors. Overall, Michael acceptors are generally activated olefins such as those shown below, where EWG refers to an electron-withdrawing group such as cyano, keto, or ester as shown. Wichterle reaction The Wichterle reaction is a variant of the Robinson annulation that replaces methyl vinyl ketone with 1,3-dichloro-cis-2-butene. This gives an example of using a different Michael acceptor from the typical α,β-unsaturated ketone. The 1,3-dichloro-cis-2-butene is employed to avoid undesirable polymerization or condensation during the Michael addition. Hauser annulation The reaction sequence in the related Hauser annulation is a Michael addition followed by a Dieckmann condensation and finally an elimination. The Dieckmann condensation is a similar ring-closing intramolecular reaction of diesters with base to give β-ketoesters. The Hauser donor is an aromatic sulfone or methylene sulfoxide with a carboxylic ester group in the ortho position. The Hauser acceptor is a Michael acceptor. In the original Hauser publication ethyl 2-carboxybenzyl phenyl sulfoxide reacts with pent-3-ene-2-one with LDA as a base in THF at −78 °C. Asymmetric Robinson annulation Asymmetric synthesis of Robinson annulation products most often involves the use of a proline catalyst. Studies report the use of L-proline as well as several other chiral amines for use as catalysts during both steps of the Robinson annulation reaction. The advantage of using optically active proline catalysts is that they are stereoselective, giving enantiomeric excesses of 60–70%. Wang et al. reported the one-pot synthesis of chiral thiochromenes by such an organocatalytic Robinson annulation.
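The enantiomeric excesses of 60–70% quoted above translate into enantiomer ratios by the one-line formula ee = (major − minor)/(major + minor); the sketch below simply inverts it. The ee values are the ones cited in the text, and the arithmetic is standard.

def enantiomer_fractions(ee):
    """Fractions of major and minor enantiomer for a given enantiomeric excess."""
    major = (1 + ee) / 2
    minor = (1 - ee) / 2
    return major, minor

for ee in (0.60, 0.70):
    major, minor = enantiomer_fractions(ee)
    print(f"ee = {ee:.0%}: {major:.0%} major / {minor:.0%} minor "
          f"(ratio {major / minor:.1f} : 1)")
# ee = 60% corresponds to an 80:20 mixture (4:1); ee = 70% to 85:15 (~5.7:1).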
Applications to synthesis The Wieland–Miescher ketone is the Robinson annulation product of 2-methylcyclohexane-1,3-dione and methyl vinyl ketone. This compound is used in the syntheses of many steroids possessing important biological properties and can be made enantiopure using proline catalysis. F. Dean Toste and co-workers have used Robinson annulation in the total synthesis of (+)-fawcettimine, a tetracyclic Lycopodium alkaloid with potential application as an acetylcholinesterase inhibitor. Enantioselective route to platensimycin Scientists at Merck discovered platensimycin, a novel antibiotic lead compound with potential medicinal applications. The initial synthesis gave a racemic form of the compound using an intramolecular etherification reaction of the alcohol motifs and the double bond. Yamamoto and coworkers report the use of an alternative intramolecular Robinson annulation to provide a straightforward enantioselective synthesis of the tetracyclic core of platensimycin. The key Robinson annulation step was reported to be accomplished in one pot using L-proline for chiral control. References Name reactions Addition reactions Carbon-carbon bond forming reactions
Robinson annulation
[ "Chemistry" ]
1,838
[ "Name reactions", "Carbon-carbon bond forming reactions", "Ring forming reactions", "Organic reactions" ]
1,653,453
https://en.wikipedia.org/wiki/Fermion%20doubling
In lattice field theory, fermion doubling occurs when naively putting fermionic fields on a lattice, resulting in more fermionic states than expected. For naively discretized Dirac fermions in d Euclidean dimensions, each fermionic field results in 2^d identical fermion species, referred to as different tastes of the fermion. The fermion doubling problem is inextricably linked to chiral invariance by the Nielsen–Ninomiya theorem. Most strategies used to solve the problem require using modified fermions which reduce to the Dirac fermion only in the continuum limit. Naive fermion discretization For simplicity we will consider a four-dimensional theory of a free fermion, although the fermion doubling problem remains in arbitrary dimensions and even if interactions are included. Lattice field theory is usually carried out in Euclidean spacetime arrived at from Minkowski spacetime after a Wick rotation, where the continuum Dirac action takes the form S = ∫ d^4x ψ̄(x)(γ_μ ∂_μ + m)ψ(x). This is discretized by introducing a lattice with lattice spacing a and points indexed by a vector of integers n. The integral becomes a sum over all lattice points, while the fermionic fields are replaced by four-component Grassmann variables at each lattice site denoted by ψ(n) and ψ̄(n). The derivative discretization used is the symmetric derivative discretization, ∂_μ ψ(n) → (ψ(n+μ̂) − ψ(n−μ̂))/(2a), with the vectors μ̂ being unit vectors in the μ direction. These steps give the naive free fermion action S = a^4 Σ_n ψ̄(n)[Σ_μ γ_μ (ψ(n+μ̂) − ψ(n−μ̂))/(2a) + m ψ(n)]. This action reduces down to the continuum Dirac action in the continuum limit, so it is expected to be a theory of a single fermion. However, it instead describes sixteen identical fermions, with each fermion said to have a different taste, analogously to how particles have different flavours in particle physics. The fifteen additional fermions are often referred to as doublers. This extended particle content can be seen by analyzing the symmetries or the correlation functions of the lattice theory. Doubling symmetry The naive fermion action possesses a new taste-exchange symmetry not found in the continuum theory acting on the fermion fields as ψ(n) → e^{iπ_A·x_n} S(π_A) ψ(n), where the vectors π_A are the sixteen vectors with non-zero entries of π/a specified by the subset of directions A ⊆ {1,2,3,4}. For example, π_{{1}} = (π/a,0,0,0), π_{{2}} = (0,π/a,0,0), π_{{1,2}} = (π/a,π/a,0,0), and π_{{1,2,3,4}} = (π/a,π/a,π/a,π/a). The Dirac structure S(π_A) in the symmetry is similarly defined by the indices of A as a product over the corresponding gamma matrices, S(π_A) = Π_{μ∈A} iγ_μγ_5; for example S(π_{{1}}) = iγ_1γ_5. The presence of these sixteen symmetry transformations implies the existence of sixteen identical fermion states rather than just one. Starting with a fermion field ψ(n), the symmetry maps it to another field ψ'(n) = e^{iπ_A·x_n} S(π_A)ψ(n). Fourier transforming this shows that its momentum has been shifted as p → p + π_A. Therefore, a fermion with momentum near the center of the Brillouin zone is mapped to one of its corners while one of the corner fermions comes in to replace the center fermion, showing that the transformation acts to exchange the tastes of the fermions. Since this is a symmetry of the action, the different tastes must be physically indistinguishable from each other. Here the Brillouin momentum p + π_A, for small p, is not the physical momentum of the particle; rather, that is p. Instead π_A acts more as an additional quantum number specifying the taste of a fermion. The term S(π_A) is responsible for changing the representation of the γ-matrices of the doublers to S(π_A)^† γ_μ S(π_A), which has the effect of changing the signs of the matrices as γ_μ → −γ_μ for the directions μ ∈ A. Since any such sign change results in a set of matrices still satisfying the Dirac algebra, the resulting matrices form a valid representation.
It is also the term that enters the wave functions of the doublers, given by S(π_A)u(p) and S(π_A)v(p), where u(p) and v(p) are the usual Dirac equation solutions with momentum p. Propagator and dispersion relation In the continuum theory, the Dirac propagator has a single pole as the theory describes only a single particle. However, calculating the propagator from the naive action yields D(p) = [(i/a) Σ_μ γ_μ sin(p_μ a) + m]^(-1) for a fermion with momentum p. For low momenta this still has the expected pole at p_μ ≈ 0, but there are fifteen additional poles when one or more momentum components take the value p_μ = π/a. Each of these is a new fermion species, with doubling arising because the function sin(p_μ a)/a appearing in the inverse propagator has two zeros over the range −π/a < p_μ ≤ π/a. This is in contrast to what happens when particles of different spins are discretized. For example, scalars acquire propagators taking a similar form except with sin(p_μ a)/a replaced by (2/a) sin(p_μ a/2), which only has a single zero over the momentum range and so the theory does not suffer from a doubling problem. The necessity of fermion doubling can be deduced from the fact that the massless fermion propagator is odd around the origin. That is, in the continuum limit it is proportional to the inverse of γ_μ p_μ, which must still be the case on the lattice in the small momentum limit. But since any local lattice theory that can be constructed must have a propagator that is continuous and periodic, it must cross the zero axis at least once more, which is exactly what occurs on the Brillouin zone corners, where sin(p_μ a) vanishes again for the naive fermion propagator. This is in contrast to the bosonic propagator, which is quadratic around the origin and so does not have such a problem. Doubling can be avoided if a discontinuous propagator is used, but this results in a non-local theory. The presence of doublers is also reflected in the fermion dispersion relation. Since this is a relation between the energy of the fermion and its momentum, it requires performing an inverse Wick transformation p_4 → iE, with the dispersion relation arising from the pole of the propagator: sinh^2(aE) = Σ_j sin^2(a p_j) + (am)^2. The zeros of this dispersion relation are local energy minima around which excitations correspond to different particle species. The above has eight different species arising due to doubling in the three spatial directions. The remaining eight doublers occur due to another doubling in the Euclidean temporal direction, which seems to have been lost. But this is due to a naive application of the inverse Wick transformation. The theory has an obstruction that does not allow for the simple replacement p_4 → iE and instead requires performing the full contour integration. Doing this for the position space propagator results in two separate terms, each of which has the same dispersion relation of eight fermion species, giving a total of sixteen. The obstruction between the Minkowski and Euclidean naive fermion lattice theories occurs because doubling does not occur in the Minkowski temporal direction, so the two theories differ in their particle content. Resolutions to fermion doubling Fermion doubling is a consequence of a no-go theorem in lattice field theory known as the Nielsen–Ninomiya theorem. It states that any even dimensional local, hermitian, translationally invariant, bilinear fermionic theory always has the same number of left-handed and right-handed Weyl fermions, generating the additional fermions when they are lacking. The theorem does not say how many doublers will arise, but without breaking the assumptions of the theorem, there will always be at least one doubler, with the naive discretization having fifteen.
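The pole-counting argument above can be checked numerically: the massless inverse propagator is proportional to sin(p_μ a), which vanishes at p_μ = 0 and at p_μ = π/a in every direction, giving 2^4 = 16 zeros over the four-dimensional Brillouin zone, and the lattice dispersion relation sinh^2(aE) = Σ_j sin^2(a p_j) + (am)^2 has equally low-energy excitations near the origin and near a zone corner. The sketch below is a plain illustration of those formulas, using an arbitrary lattice spacing and a massless fermion.

import itertools, math

a, m, d = 1.0, 0.0, 4   # lattice spacing, mass, number of Euclidean dimensions (illustrative)

# Zeros of sin(p_mu * a) within one Brillouin zone occur at p_mu = 0 and p_mu = pi/a.
corners = list(itertools.product([0.0, math.pi / a], repeat=d))
print(len(corners), "zeros of the massless inverse propagator")   # 2**4 = 16 species

def lattice_energy(p_spatial, mass=0.0):
    """Solve sinh^2(aE) = sum_j sin^2(a p_j) + (a m)^2 for E."""
    rhs = sum(math.sin(a * p) ** 2 for p in p_spatial) + (a * mass) ** 2
    return math.asinh(math.sqrt(rhs)) / a

small = 0.1
print(lattice_energy([small, 0.0, 0.0]))               # ~0.1: the physical fermion
print(lattice_energy([math.pi / a - small, 0.0, 0.0]))  # ~0.1 again: a spatial doubler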
A consequence of the theorem is that the chiral anomaly cannot be simulated with chirally invariant theories as it trivially vanishes. Simulating lattice field theories with fermion doubling leads to incorrect results due to the doublers, so many strategies to overcome this problem have been developed. While doublers can be ignored in a free theory as there the different tastes decouple, they cannot be ignored in an interacting theory where interactions mix different tastes, since momentum is conserved only modulo 2π/a. For example, two fermions of one taste can scatter by the exchange of a highly virtual gauge boson into two fermions of another taste without violating momentum conservation. Therefore, to overcome the fermion doubling problem, one must violate one or more assumptions of the Nielsen–Ninomiya theorem, giving rise to a multitude of proposed resolutions: Domain wall fermion: explicitly violates chiral symmetry, increases spatial dimensionality. Ginsparg–Wilson fermion: explicitly violates chiral symmetry. Overlap fermion: explicitly violates chiral symmetry (type of Ginsparg–Wilson fermion). Perfect lattice fermion: nonlocal formulation. SLAC fermion: nonlocal formulation. Stacey fermion: nonlocal formulation. Staggered fermion (Kogut–Susskind fermion): explicitly violates translational invariance, reduces number of doublers. Symmetric mass generation: This approach goes beyond the fermion-bilinear model and introduces non-perturbative interaction effects. One realization based on the Eichten–Preskill model starts from a vector-symmetric fermion model where chiral fermions and mirror fermions are realized on two domain walls. Gapping the mirror fermion using symmetric mass generation results in chiral fermions at low energy with no fermion doubling. Twisted mass fermion: explicitly violates chiral symmetry (type of Wilson fermion). Wilson fermion: explicitly violates chiral symmetry. These fermion formulations each have their own advantages and disadvantages. They differ in the speed at which they can be simulated, the ease of their implementation, and the presence or absence of exceptional configurations. Some of them have a residual chiral symmetry allowing one to simulate axial anomalies. They can also differ in how many of the doublers they eliminate, with some retaining a doublet or a quartet of fermions. For this reason different fermion formulations are used for different problems. Derivative discretization Another possible although impractical solution to the doubling problem is to adopt a derivative discretization different from the symmetric difference used in the naive fermion action. Instead it is possible to use a forward difference or a backward difference discretization. The effect of the derivative discretizations on doubling is seen by considering the one-dimensional toy problem of finding the eigensolutions of the first-derivative operator, ∂_x φ(x) = λφ(x); a small numerical sketch of this toy problem is given at the end of this section. In the continuum this differential equation has a single solution. However, implementing the symmetric difference derivative leads to the presence of two distinct eigensolutions, while a forward or backward difference derivative has one eigensolution. This effect carries forward to the fermion action, where fermion doubling is absent with forward or backward discretizations. The reason for this particle content disparity is that the symmetric difference derivative maintains the hermiticity property of the continuum operator, while the forward and backward discretizations do not.
These latter discretizations lead to non-hermitian actions, breaking the assumptions of the Nielsen–Ninomiya theorem, and so avoid the fermion doubling problem. Developing an interacting theory with a non-hermitian derivative discretization leads to a theory with non-covariant contributions to the fermion self-energy and vertex function, rendering the theory non-renormalizable and difficult to work with. For this reason such a resolution to the fermion doubling problem is generally not implemented. See also Acoustic and optical phonons: a similar phenomenon in solid state crystals References Lattice field theory Fermions
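Returning to the one-dimensional toy problem mentioned in the derivative discretization section above: on a lattice the symmetric difference annihilates the rapidly oscillating mode (-1)^n just as it annihilates a constant mode, so the eigenvalue problem for the first derivative acquires a second, spurious zero mode, while a forward difference distinguishes the two. The sketch below demonstrates this on a small periodic lattice (lattice spacing set to 1); it is an illustration of the general argument rather than a reproduction of any specific calculation from the literature.

# Compare symmetric and forward finite differences on a periodic lattice.
N = 8
constant_mode = [1.0] * N                          # smooth zero mode of the derivative
oscillating_mode = [(-1) ** n for n in range(N)]   # candidate doubler mode

def symmetric_diff(phi):
    return [(phi[(n + 1) % N] - phi[(n - 1) % N]) / 2.0 for n in range(N)]

def forward_diff(phi):
    return [phi[(n + 1) % N] - phi[n] for n in range(N)]

# The symmetric difference annihilates BOTH modes: two zero modes, i.e. doubling.
print(symmetric_diff(constant_mode))      # all zeros
print(symmetric_diff(oscillating_mode))   # all zeros as well
# The (non-hermitian) forward difference removes the spurious zero mode.
print(forward_diff(constant_mode))        # all zeros
print(forward_diff(oscillating_mode))     # nonzero: no doubler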
Fermion doubling
[ "Physics", "Materials_science" ]
2,246
[ "Fermions", "Subatomic particles", "Condensed matter physics", "Matter" ]
1,654,012
https://en.wikipedia.org/wiki/Rauisuchia
"Rauisuchia" is a paraphyletic group of mostly large and carnivorous Triassic archosaurs. Rauisuchians are a category of archosaurs within a larger group called Pseudosuchia, which encompasses all archosaurs more closely related to crocodilians than to birds and other dinosaurs. First named in the 1940s, Rauisuchia was a name exclusive to Triassic archosaurs which were generally large (often ), carnivorous, and quadrupedal with a pillar-erect hip posture, though exceptions exist for all of these traits. Rauisuchians, as a traditional taxonomic group, were considered distinct from other Triassic archosaur groups such as early dinosaurs, phytosaurs (crocodile-like carnivores), aetosaurs (armored herbivores), and crocodylomorphs (lightly-built crocodilian ancestors). However, more recent studies on archosaur evolution have upended this idea based on phylogenetic analyses and cladistics, a modern approach to taxonomy based on clades (nested monophyletic groups of common ancestry). Since the early 2010s, archosaur classification schemes have stabilized on a system where Rauisuchia is rendered an evolutionary grade, or even a wastebin taxon. Crocodylomorphs most likely originated from a rauisuchian ancestor based on a myriad of shared traits, and some "rauisuchians" (such as Postosuchus and Rauisuchus) appear to be more closely related to crocodylomorphs than to other "rauisuchians" (such as Prestosuchus and Saurosuchus). As a result, Rauisuchia in its traditional usage may be considered paraphyletic: a group which is defined by shared ancestry but also excludes a descendant taxon (in this case, crocodylomorphs). To designate it as an informal group in scientific literature, the name is often enclosed in quotation marks. Several monophyletic groups have been erected to classify "rauisuchians" in a cladistic framework. The closest concept is the clade Paracrocodylomorpha, which includes most "rauisuchian" taxa and their crocodylomorph descendants. Paracrocodylomorpha is divided into two branches: Poposauroidea, which includes a variety of strange "rauisuchians" (some of which were bipedal and/or herbivorous) and Loricata, which includes most typical "rauisuchians" and crocodylomorphs. Characteristics "Rauisuchians" had an erect gait with their legs oriented vertically beneath the body rather than sprawling outward. This type of gait is also seen in dinosaurs, but evolved independently in the two groups. In dinosaurs, the hip socket faces outward and the femur (thigh bone) connects to the side of the hip; while in rauisuchians, the hip socket faces downward to form a shelf of bone under which the femur connects. This has been referred to as the pillar-erect posture. "Rauisuchians" lived throughout most of the Triassic. Along with many other large archosaurs, the group died out in the Triassic-Jurassic extinction event (barring crocodylomorphs, which survive to the present in the form of crocodilians). After their extinction, theropod dinosaurs were able to emerge as the sole large terrestrial predators, though there is still some debate over how the extinction influenced dinosaur evolution. The footprints of meat-eating dinosaurs may have suddenly increased in size at the start of the Jurassic, when rauisuchians were absent. However, the apparent increase in dinosaur footprint size has instead been argued to be a result of increasing abundance of large theropods, rather than an abrupt acquisition of large size. 
Some "rauisuchians" may have existed in the very early Jurassic based on bone fragments from South Africa, but this identification is tentative. The name "Rauisuchia" comes from the genus Rauisuchus, which was named after fossil collector Dr. Wilhelm Rau. The name Rauisuchus means Wilhelm Rau's crocodile. History of classification "Rauisuchians" were originally thought to be related to erythrosuchids, but it is now known that they are pseudosuchians. Three families have historically been recognised: Prestosuchidae, Rauisuchidae, and Poposauridae, as well as a number of forms (e.g. those from the Olenekian of Russia) that are too primitive and/or poorly known to fit in any of these groups. There has been considerable suggestion that the group as currently defined is paraphyletic, representing a number of related lineages independently evolving and filling the same ecological niche of medium to top terrestrial predator. For example, Parrish (1993) and Juul (1994) considered poposaurid rauisuchians to be more closely related to Crocodilia than to prestosuchids. Nesbitt (2003) presented a different phylogeny with a monophyletic Rauisuchia. The group may even be something of a "wastebasket taxon". Determining exact phylogenetic relationships is difficult because of the scrappy nature of a lot of the material. However, further discoveries and studies, such as a study on the braincase of Batrachotomus (2002) and restudies of other forms, such as Erpetosuchus (2002) have shed some light on the evolutionary relationships of this poorly known group. Cladistics Despite its inclusion as an informal grouping in numerous phylogenetic studies, "Rauisuchia" has never received a formal definition. Most analyses in the past decade have found "Rauisuchia" to be a paraphyletic grouping, including all studies with a large sample size. Those that found the possibility that it was a natural group produced only weak support for this hypothesis. In his large 2011 analysis of archosaurian relationships, Nesbitt recommended that the term "Rauisuchia" be abandoned. In a study of the ctenosauriscid Arizonasaurus, paleontologist Sterling Nesbitt defined a clade of rauisuchians called "Group X". This group includes Arizonasuchus, Lotosaurus, Sillosuchus, Shuvosaurus, and Effigia. One distinguishing feature of Group X is their lack of osteoderms, which are common among many other crurotarsans. Many more features are found in the pelvis, including fully fused sacral vertebrae and a long, thin crest on the ilium called the supra-acetabular crest. Additionally, many members of Group X have smooth frontal and nasal bones, which make up the upper portion of the rostrum. In other "rauisuchians" and many other crurotarsans, this area has bumps and ridges. "Group X" is now termed Poposauroidea. Nesbitt later erected another clade, "Group Y", in 2007. Group Y falls within Group X to include Sillosuchus, Shuvosaurus, and Effigia. Group Y is diagnosed by the presence of four or more sacral vertebrae with fully fused neural arches, which is also seen in theropod dinosaurs (a case of evolutionary convergence). In addition, the cervical vertebrae that make up the neck are strongly amphicoelus, meaning that they are concave at both ends. The fourth trochanter, a ridge of bone on the femur for muscle attachment seen in nearly all archosaurs, is absent in Group Y. "Group Y" is now termed Shuvosauridae. 
Although not placed within Group Y, Lotosaurus shares many similarities with members of the clade, foremost of which is edentulous, or toothless, jaws. Edentulism is also seen in Shuvosaurus and Effigia, which have beak-like jaws. Nesbitt suggested that the derived characters of Lotosaurus may indicate that it is a transitional form between basal members of Group X and members of Group Y. Below is the cladogram from Nesbitt (2007): In their phylogenetic study of archosaurs, Brusatte et al. (2010) found only weak support for Rauisuchia as a monophyletic grouping. As a result of their analysis, two clades were found to be within Rauisuchia, which they named Rauisuchoidea and Poposauroidea. Rauisuchoidea included Rauisuchidae and Prestosuchidae, as well as several basal taxa that were once assigned to the families, including Fasolasuchus and Ticinosuchus. Poposauroidea included poposaurids and ctenosauriscids, but the phylogeny had a large polytomy of genera in both groups that was difficult to resolve, which included Arizonasaurus, Poposaurus, and Sillosuchus. However, the characters linking these two groups were weak, and the question as to whether or not "Rauisuchia" forms a natural group remains unresolved. Brusatte et al. (2010) was one of the last studies to find a monophyletic Rauisuchia clade. Below is the cladogram from Brusatte et al. (2010): In a more thorough test of archosaurian relationships published in 2011 by Sterling Nesbitt, "rauisuchians" were found to be paraphyletic, with Poposauroidea at the base of the clade Paracrocodylomorpha, and the rest of the "rauisuchians" forming a grade within the clade Loricata. Nesbitt noted that no previous study of "rauisuchian" relationships had ever included a wide variety of supposed "rauisuchians" as well as a large number of non-"rauisuchian" taxa as controls. Fossil record Well-known "rauisuchians" include Ticinosuchus of the Middle Triassic of Switzerland and Northern Italy, Saurosuchus of the Late Triassic (late Carnian) of Argentina, Prestosuchus of the Middle-Late Triassic (late Ladinian-early Carnian) of Brazil, and Postosuchus of the Late Triassic (Norian) of the southwest United States. The first "rauisuchian" known to paleontology was Teratosaurus, a German genus from the Late Triassic (Norian) of Germany. However, Teratosaurus was considered an early theropod dinosaur for much of its history, before it was demonstrated to be non-dinosaurian in the 1980s. The concept of "rauisuchians" as a distinct group of reptiles distantly related to crocodiles was recognized by discoveries in Brazil in the 1940s (particularly Prestosuchus and Rauisuchus) and emphasized further by the description of Ticinosuchus in the 1960s. The oldest known "rauisuchians", in terms of geological age, are probably from the end of the Early Triassic (late Olenekian). Most of these early fossils are fragmentary and dubious remains from Russia, but some are better-described and constrained, such as Xilousuchus, a ctenosauriscid from the Heshanggou Formation of China. Xilousuchus is neither the earliest-branching archosaur nor "rauisuchian" despite its early age, and its presence in the Early Triassic suggests that other archosaur fossils are simply undiscovered from that time. The last known "rauisuchians", excluding their descendants the crocodylomorphs, are from the latter part of the Late Triassic. The shuvosaurid Effigia, from the "siltstone member" of the Chinle Formation in New Mexico, may be as young as the Rhaetian, the last stage of the Triassic. 
Effigia was recovered from the Coelophysis Quarry of Ghost Ranch. The same site also preserves a large undescribed archosaur, CM 73372, which seemingly represents a transitional form between "rauisuchians" and crocodylomorphs. Indeterminate large paracrocodylomorph material from the Lower Elliot Formation of South Africa may be even younger, late Rhaetian or possibly even lowermost Jurassic. List of rauisuchian genera The following is a list of valid pseudosuchian genera which have been informally or formally classified as rauisuchians, as well as their modern cladistic interpretation. This list does not include genera named for dubious and poorly-diagnosed "rauisuchian" material from Russia (Dongusia, Energosuchus, Jaikosuchus, Jushatyria, Scythosuchus, Tsylmosuchus, Vjushkovisaurus, Vytshegdosuchus) and China (Fenhosuchus, Wangisuchus), nor taxa reclassified as non-"rauisuchian" archosaurs (Ornithosuchus, Gracilisuchus, Dongusuchus, Yarasuchus). See also Notes References Fossil taxa described in 1942 Paraphyletic groups Pseudosuchians Taxa named by Friedrich von Huene
Rauisuchia
[ "Biology" ]
2,766
[ "Phylogenetics", "Paraphyletic groups" ]
1,654,410
https://en.wikipedia.org/wiki/Daz%20%28detergent%29
Daz is a laundry detergent on the market in the United Kingdom and Ireland. It was introduced in February 1953. It is manufactured by Bluesun, which acquired Daz from Procter & Gamble in March 2024. Aggressively marketed, it is associated in popular culture with the "Daz Doorstep Challenge" series of commercials, which saw various 'hosts' including Danny Baker, Shane Richie and Michael Barrymore surprising house occupiers by asking them to put Daz to the test against a rival detergent. The advert was spoofed by Dom Joly in the British sketch series Trigger Happy TV and in a John Smith's advertising campaign featuring Peter Kay. From 1999 to 2002 Julian Clary was the face of Daz laundry detergent, one of the first of his advert campaigns being a "Wash Your Dirty Linen in Public" roadshow with Daz Tablets. Daz is available in powder (handwash and automatic), liquid, professional liquid and all-in-one multi-compartment pods, in common with most other P&G laundry detergent brands. In some packs of 3-in-1 Pods, the individual pods are printed with Daz/Vizir/Tide, as the same product was sold in multiple European markets under different local brands. However, despite the Tide name appearing, they are not the same formulation as Tide Pods in the USA. Cleaner Close advertising campaign From 2002 to 2019, Daz ran a series of soap opera style adverts called Cleaner Close. Some of these featured a new packet of Daz or a prize give-away as part of the plot, such as a character who either hid money in Daz packets or donated money to Daz in their will. Other episodes have a character who does not use Daz, which causes them problems that are solved by a character who uses Daz. Cleaner Close is a parody of Coronation Street, Brookside or EastEnders. Most of the adverts feature ex-soap stars, sometimes portrayed as a similar character to the one they played in their original soap, and are narrated by Tony Hirst, who would later star in both Hollyoaks and Coronation Street. Soap stars or ex-soap stars to have appeared in Cleaner Close include: Michelle Collins (Cindy Beale in EastEnders, Stella Price in Coronation Street), Alison King (Carla Connor in Coronation Street), Chris O'Dowd (Brendan Davenport in The Clinic), Jennifer Ellison (Emily Shadwick in Brookside) and Julie Goodyear (Bet Lynch in Coronation Street). References External links Official website Products introduced in 1953 1953 establishments in the United Kingdom Laundry detergents Procter & Gamble brands Cleaning product brands Cleaning products
Daz (detergent)
[ "Chemistry" ]
547
[ "Cleaning products", "Products of chemical industry" ]
11,213,645
https://en.wikipedia.org/wiki/Carboxypeptidase%20A
Carboxypeptidase A usually refers to the pancreatic exopeptidase that hydrolyzes peptide bonds of C-terminal residues with aromatic or aliphatic side-chains. Most scientists in the field now refer to this enzyme as CPA1, and to a related pancreatic carboxypeptidase as CPA2. Types In addition, there are 4 other mammalian enzymes named CPA-3 through CPA-6, and none of these are expressed in the pancreas. Instead, these other CPA-like enzymes have diverse functions. CPA3 (also known as mast-cell CPA) is involved in the digestion of proteins by mast cells. CPA4 (previously known as CPA-3, but renumbered when mast-cell CPA was designated CPA-3) may be involved in tumor progression, but this enzyme has not been well studied. CPA5 has not been well studied. CPA6 is expressed in many tissues during mouse development, and in the adult shows a more limited distribution in the brain and several other tissues. CPA6 is present in the extracellular matrix where it is enzymatically active. A human mutation of CPA6 has been linked to Duane's syndrome (abnormal eye movement). Recently, mutations in CPA6 were found to be linked to epilepsy. CPA6 is also one of several enzymes which degrade enkephalins. Function CPA-1 and CPA-2 (and, it is presumed, all other CPAs) employ a zinc ion within the protein for hydrolysis of the peptide bond at the C-terminal end of an amino acid residue. Loss of the zinc leads to loss of activity; the zinc can easily be replaced, restoring activity, and some other divalent metals (cobalt, nickel) can also substitute for it. Carboxypeptidase A is produced in the pancreas and is crucial to many processes in the human body, including digestion, post-translational modification of proteins, blood clotting, and reproduction. Applications This vast scope of functionality for a single protein makes it the ideal model for research regarding other zinc proteases of unknown structure. Recent biomedical research on collagenase, enkephalinase, and angiotensin-converting enzyme used carboxypeptidase A for inhibitor synthesis and kinetic testing. For example, a drug that treats high blood pressure, Captopril, was designed based on a carboxypeptidase A inhibitor. Carboxypeptidase A and the target enzyme of Captopril, angiotensin-converting enzyme, have very similar structures, as they both contain a zinc ion within the active site. This allowed for a potent carboxypeptidase A inhibitor to be used to inhibit the enzyme and, thus, lower blood pressure through the renin-angiotensin-aldosterone system. Structure Carboxypeptidase A (CPA) contains a zinc (Zn2+) metal center in a tetrahedral geometry with amino acid residues in close proximity around zinc to facilitate catalysis and binding. Out of the 307 amino acids bonded in a peptide chain, the following amino acid residues are important for catalysis and binding: Glu-270, Arg-71, Arg-127, Asn-144, Arg-145, and Tyr-248. Figure 1 illustrates the tetrahedral zinc complex active site with the important amino acid residues that surround the complex. The zinc metal is a strong electrophilic Lewis acid catalyst which stabilizes a coordinated water molecule as well as stabilizes the negative intermediates that occur throughout the hydrolytic reaction. Stabilization of both the coordinated water molecule and the negative intermediates is assisted by polar residues in the active site which are in close proximity to facilitate hydrogen bonding. The active site can be divided into two sub-sites denoted as S1' and S1. 
The S1' sub-site is the hydrophobic pocket of the enzyme, and Tyr-248 acts to 'cap' the hydrophobic pocket after substrate or inhibitor is bound. The hydrogen bonding from the hydroxyl group in Tyr-248 facilitates this conformation due to interaction with the terminal carboxylates of substrates that bind. Substantial movement is required of this enzyme, and the induced-fit model explains how this interaction occurs. A triad of residues interacts with the C-terminal carboxylate through hydrogen bonding: Salt linkage with positively charged Arg-145 Hydrogen bond from Tyr-248 Hydrogen bond from the nitrogen of the Asn-144 amide Mechanism Classified as a metalloexopeptidase, carboxypeptidase A consists of a single polypeptide chain bound to a zinc ion. This characteristic metal ion is located within the active site of the enzyme, along with the amino acid residues that are involved in substrate binding: Arg-71, Arg-127, Asn-144, Arg-145, Tyr-248, and Glu-270. X-ray crystallographic studies have revealed five subsites on the protein. These allosteric sites are involved in creating the ligand-enzyme specificity seen in most bioactive enzymes. One of these subsites induces a conformational change at Tyr-248 upon binding of a substrate molecule at the primary active site. The phenolic hydroxyl of tyrosine forms a hydrogen bond with the terminal carboxylate of the ligand. In addition, a second hydrogen bond is formed between the tyrosine and a peptide linkage of longer peptide substrates. These changes make the bond between the enzyme and ligand, whether it is substrate or inhibitor, much stronger. This property of carboxypeptidase A led to the first clause of Daniel E. Koshland, Jr.'s "induced fit" hypothesis. The S1 sub-site is where catalysis occurs in CPA, and the zinc ion is coordinated by the Glu-72, His-69, and His-196 enzyme residues. A plane exists that bisects the active-site groove, where residues Glu-270 and Arg-127 are on opposite sides of the zinc-water coupled complex. The zinc is electron-rich owing to coordination by its glutamate ligand: before substrate binds, Glu-72 coordinates in a bidentate fashion, but it shifts to monodentate coordination after substrate binds. As a result, the zinc metal is not able to deprotonate the coordinated water molecule to make a hydroxyl nucleophile. Glu-270 and Arg-127 play an important role in catalysis, shown in Figure 2. Arg-127 acts to stabilize the carbonyl of the substrate that is bound to the amino group of phenylalanine. Simultaneously, the water molecule coordinated to zinc is deprotonated by Glu-270 and interacts with the carbonyl stabilized by Arg-127. This creates an intermediate, shown in Figure 2, where the negatively charged oxygen is coordinated to zinc; unfavorable electrostatic interactions between Glu-270 and the ionized product then facilitate the release of the product at the end of catalysis. In recent computational studies, the mechanism of catalysis is similar, but the difference is that the deprotonated water molecule binds to the carbon of the carbonyl, whereas Figure 2 shows the hydroxyl group staying coordinated to zinc. Proteolysis then occurs and a water molecule is introduced back into the active site to coordinate to zinc. Several studies have been conducted exploring the details of the bond between carboxypeptidase A and substrate and how this affects the rate of hydrolysis. 
In 1934, it was first discovered through kinetic experiments that, in order for substrate to bind, the peptide that is to be hydrolyzed must be adjacent to a terminal free carboxyl group. Also, the rate of hydrolysis can be enhanced if the C-terminal residue is branched aliphatic or aromatic. However, if the substrate is a dipeptide with a free amino group, it undergoes hydrolysis slowly; this, however, can be avoided if the amino group is blocked by N-acylation. It is quite clear that the structure of the enzyme, to be specific the active site, is very important in understanding the mechanism of reaction. For this reason, Rees and colleagues studied the enzyme-ligand complex to get a clear answer for the role of the zinc ion. These studies found that, in the free enzyme, the zinc coordination number is five; the metal center is coordinated with two imidazole Nδ1 nitrogens, the two carboxylate oxygens of glutamate-72, and a water molecule to form a distorted tetrahedral geometry. However, once ligand binds at the active site of carboxypeptidase A, this coordination number can vary from five to six. When bound to the dipeptide glycyl-L-tyrosine, the amino nitrogen of the dipeptide and the carbonyl oxygen replace the water ligand. This would yield a coordination number of six for the zinc in the carboxypeptidase A–dipeptide glycyl-L-tyrosine complex. Electron density maps gave evidence that the amino nitrogen occupies a second position near glutamate-270. The closeness of these two residues would result in steric hindrance preventing the water ligand from coordinating with zinc. This would result in a coordination number of five. Data for both are substantial, indicating that both situations occur naturally. There are two proposed mechanisms for the catalytic function of carboxypeptidase A. The first is a nucleophilic pathway involving a covalent acyl enzyme intermediate containing the active-site base Glu-270. Evidence for this anhydride intermediate is mixed; Suh and colleagues isolated what is assumed to be the acyl intermediate. However, confirmation of the acyl enzyme was done without trapping experiments, making the conclusions weak. The second proposed mechanism is a promoted-water pathway. This mechanism involves attack of a water molecule at the scissile peptide linkage of the substrate. This process is promoted by the zinc ion and assisted by residue Glu-270. See also Carboxypeptidase A inhibitor Carboxypeptidase B Carboxypeptidase Carboxypeptidase E References External links The MEROPS online database for peptidases and their inhibitors: M14.001 Proteins EC 3.4.17 Metabolism Zinc enzymes
Carboxypeptidase A
[ "Chemistry", "Biology" ]
2,181
[ "Biomolecules by chemical classification", "Cellular processes", "Molecular biology", "Biochemistry", "Proteins", "Metabolism" ]
11,217,018
https://en.wikipedia.org/wiki/A-weighting
A-weighting is a form of frequency weighting and the most commonly used of a family of curves defined in the International standard IEC 61672:2003 and various national standards relating to the measurement of sound pressure level. A-weighting is applied to instrument-measured sound levels in an effort to account for the relative loudness perceived by the human ear, as the ear is less sensitive to low audio frequencies. It is employed by arithmetically adding a table of values, listed by octave or third-octave bands, to the measured sound pressure levels in dB. The resulting octave band measurements are usually added (logarithmic method) to provide a single A-weighted value describing the sound; the units are written as dB(A). Other weighting sets of values – B, C, D and now Z – are discussed below. The curves were originally defined for use at different average sound levels, but A-weighting, though originally intended only for the measurement of low-level sounds (around 40 phon), is now commonly used for the measurement of environmental noise and industrial noise, as well as when assessing potential hearing damage and other noise health effects at all sound levels; indeed, the use of A-frequency-weighting is now mandated for all these measurements, because decades of field experience have shown a very good correlation with occupational deafness in the frequency range of human speech. It is also used when measuring low-level noise in audio equipment, especially in the United States. In Britain, Europe and many other parts of the world, broadcasters and audio engineers more often use the ITU-R 468 noise weighting, which was developed in the 1960s based on research by the BBC and other organizations. This research showed that our ears respond differently to random noise, and the equal-loudness curves on which the A, B and C weightings were based are really only valid for pure single tones. History A-weighting began with work by Fletcher and Munson which resulted in their publication, in 1933, of a set of equal-loudness contours. Three years later these curves were used in the first American standard for sound level meters. This ANSI standard, later revised as ANSI S1.4-1981, incorporated B-weighting as well as the A-weighting curve, recognising the unsuitability of the latter for anything other than low-level measurements. But B-weighting has since fallen into disuse. Later work, first by Zwicker and then by Schomer, attempted to overcome the difficulty posed by different levels, and work by the BBC resulted in the CCIR-468 weighting, currently maintained as ITU-R 468 noise weighting, which gives more representative readings on noise as opposed to pure tones. Deficiencies A-weighting is valid to represent the sensitivity of the human ear as a function of the frequency of pure tones. The A-weighting was based on the 40-phon Fletcher–Munson curves, which represented an early determination of the equal-loudness contour for human hearing. However, because decades of field experience have shown a very good correlation between the A scale and occupational deafness in the frequency range of human speech, this scale is employed in many jurisdictions to evaluate the risks of occupational deafness and other auditory problems related to signals or speech intelligibility in noisy environments. 
Because of perceived discrepancies between early and more recent determinations, the International Organization for Standardization (ISO) revised its standard curves as defined in ISO 226, in response to the recommendations of a study coordinated by the Research Institute of Electrical Communication, Tohoku University, Japan. The study produced new curves by combining the results of several studies, by researchers in Japan, Germany, Denmark, UK, and USA. (Japan was the greatest contributor with about 40% of the data.) This resulted in the acceptance of a new set of curves standardized as ISO 226:2003 (subsequently revised again in 2023 with changes to the ISO 226 equal loudness contours of less than 0.5 dB over the 20-90 phon range). The report comments on the large differences between the combined study results and the original Fletcher–Munson equal loudness contours, as well as the later Robinson-Dadson contours that formed the basis for the first version of ISO 226, published in 1987. Subsequent research has demonstrated that A-weighting is in closer agreement with the updated 60-phon contour incorporated into ISO 226:2003 than with the 40-phon Fletcher-Munson contour, which challenges the common misapprehension that A-weighting represents loudness only for quiet sounds. Nevertheless, A-weighting would be a closer match to the equal loudness curves if it fell more steeply above 10 kHz, and it is conceivable that this compromise may have arisen because steep filters were more difficult to construct in the early days of electronics. Nowadays, no such limitation need exist, as demonstrated by the ITU-R 468 curve. If A-weighting is used without further band-limiting it is possible to obtain different readings on different instruments when ultrasonic, or near ultrasonic noise is present. Accurate measurements therefore require a 20 kHz low-pass filter to be combined with the A-weighting curve in modern instruments. This is defined in IEC 61012 as AU weighting and while very desirable, is rarely fitted to commercial sound level meters. B-, C-, D-, G- and Z-weightings A-frequency-weighting is mandated by the international standard IEC 61672 to be fitted to all sound level meters and are approximations to the equal loudness contours given in ISO 226. The old B- and D-frequency-weightings have fallen into disuse, but many sound level meters provide for C frequency-weighting and its fitting is mandated — at least for testing purposes — to precision (Class one) sound level meters. D-frequency-weighting was specifically designed for use when measuring high-level aircraft noise in accordance with the IEC 537 measurement standard. The large peak in the D-weighting curve is not a feature of the equal-loudness contours, but reflects the fact that humans hear random noise differently from pure tones, an effect that is particularly pronounced around 6 kHz. This is because individual neurons from different regions of the cochlea in the inner ear respond to narrow bands of frequencies, but the higher frequency neurons integrate a wider band and hence signal a louder sound when presented with noise containing many frequencies than for a single pure tone of the same pressure level. Following changes to the ISO standard, D-frequency-weighting by itself should now only be used for non-bypass-type jet engines, which are found only on military aircraft and not on commercial aircraft. 
For this reason, today A-frequency-weighting is now mandated for light civilian aircraft measurements, while a more accurate loudness-corrected weighting EPNdB is required for certification of large transport aircraft. D-weighting is the basis for the measurement underlying EPNdB. Z- or ZERO frequency-weighting was introduced in the International Standard IEC 61672 in 2003 and was intended to replace the "Flat" or "Linear" frequency weighting often fitted by manufacturers. This change was needed as each sound level meter manufacturer could choose their own low and high frequency cut-offs (–3 dB) points, resulting in different readings, especially when peak sound level was being measured. It is a flat frequency response between 10 Hz and 20 kHz ±1.5 dB. As well, the C-frequency-weighting, with –3 dB points at 31.5 Hz and 8 kHz did not have a sufficient bandpass to allow the sensibly correct measurement of true peak noise (Lpk). G-weighting is used for measurements in the infrasound range from 8 Hz to about 40 Hz. B- and D-frequency-weightings are no longer described in the body of the standard IEC 61672:2003, but their frequency responses can be found in the older IEC 60651, although that has been formally withdrawn by the International Electrotechnical Commission in favour of IEC 61672:2003. The frequency weighting tolerances in IEC 61672 have been tightened over those in the earlier standards IEC 179 and IEC 60651 and thus instruments complying with the earlier specifications should no longer be used for legally required measurements. Environmental and other noise measurements A-weighted decibels are abbreviated dB(A) or dBA. When acoustic (calibrated microphone) measurements are being referred to, then the units used will be dB SPL referenced to 20 micropascals = 0 dB SPL. The A-weighting curve has been widely adopted for environmental noise measurement, and is standard in many sound level meters. The A-weighting system is used in any measurement of environmental noise (examples of which include roadway noise, rail noise, aircraft noise). A-weighting is also in common use for assessing potential hearing damage caused by loud noise, including noise dose measurements at work. A noise level of more than 85 dB(A) each day increases the risk factor for hearing damage. A-weighted sound power levels LWA are increasingly found on sales literature for domestic appliances such as refrigerators, freezers and computer fans. The expected sound pressure level to be measured at a given distance as SPL with a sound level meter can with some simplifications be calculated from the sound power level. In Europe, the A-weighted noise level is used for instance for normalizing the noise of tires on cars. Noise exposure for visitors of venues with loud music is usually also expressed in dB(A), although the presence of high levels of low frequency noise does not justify this. Audio reproduction and broadcasting equipment Although the A-weighting curve, in widespread use for noise measurement, is said to have been based on the 40-phon Fletcher-Munson curve, research in the 1960s demonstrated that determinations of equal-loudness made using pure tones are not directly relevant to our perception of noise. This is because the cochlea in our inner ear analyses sounds in terms of spectral content, each hair cell responding to a narrow band of frequencies known as a critical band. 
The high-frequency bands are wider in absolute terms than the low-frequency bands, and therefore 'collect' proportionately more power from a noise source. However, when more than one critical band is stimulated, the outputs of the various bands are summed by the brain to produce an impression of loudness. For these reasons equal-loudness curves derived using noise bands show an upwards tilt above 1 kHz and a downward tilt below 1 kHz when compared to the curves derived using pure tones. This enhanced sensitivity to noise in the region of 6 kHz became particularly apparent in the late 1960s with the introduction of compact cassette recorders and Dolby-B noise reduction. A-weighted noise measurements were found to give misleading results because they did not give sufficient prominence to the 6 kHz region where the noise reduction was having greatest effect, and did not sufficiently attenuate noise around 10 kHz and above (a particular example is the 19 kHz pilot tone on FM radio systems which, though usually inaudible, is not sufficiently attenuated by A-weighting), so that sometimes one piece of equipment would even measure worse than another and yet sound better, because of differing spectral content. ITU-R 468 noise weighting was therefore developed to more accurately reflect the subjective loudness of all types of noise, as opposed to tones. This curve, which came out of work done by the BBC Research Department and was standardised by the CCIR and later adopted by many other standards bodies (IEC, BSI), is now maintained by the ITU. It became widely used in Europe, especially in broadcasting, and was adopted by Dolby Laboratories who realised its superior validity for their purposes when measuring noise on film soundtracks and compact cassette systems. Its advantages over A-weighting are less accepted in the US, where the use of A-weighting still predominates. It is used by broadcasters in Britain, Europe, and former countries of the British Empire such as Australia and South Africa. Function realisation of some common weightings The standard defines the weightings in dB units by tables with tolerance limits (to allow a variety of implementations). Additionally, the standard describes weighting functions to calculate the weightings. The weighting function is applied to the amplitude spectrum (not the intensity spectrum) of the unweighted sound level. The offsets ensure the normalisation to 0 dB at 1000 Hz. Appropriate weighting functions, with f in hertz, are: A: R_A(f) = 12194^2 f^4 / [ (f^2 + 20.6^2) sqrt((f^2 + 107.7^2)(f^2 + 737.9^2)) (f^2 + 12194^2) ], with A(f) = 20 log10(R_A(f)) + 2.00. B: R_B(f) = 12194^2 f^3 / [ (f^2 + 20.6^2) sqrt(f^2 + 158.5^2) (f^2 + 12194^2) ], with B(f) = 20 log10(R_B(f)) + 0.17. C: R_C(f) = 12194^2 f^2 / [ (f^2 + 20.6^2) (f^2 + 12194^2) ], with C(f) = 20 log10(R_C(f)) + 0.06. D: R_D(f) = (f / 6.8966888496476e-5) sqrt( h(f) / ((f^2 + 79919.29)(f^2 + 1345600)) ), where h(f) = ((1037918.48 - f^2)^2 + 1080768.16 f^2) / ((9837328 - f^2)^2 + 11723776 f^2), with D(f) = 20 log10(R_D(f)). Transfer function equivalent The gain curves can be realised by the following s-domain transfer functions. They are not defined in this way though, being defined by tables of values with tolerances in the standards documents, thus allowing different realisations: A: H_A(s) = k_A s^4 / [ (s + 129.4)^2 (s + 676.7) (s + 4636) (s + 76655)^2 ], kA ≈ 7.39705 × 10^9. B: H_B(s) = k_B s^3 / [ (s + 129.4)^2 (s + 995.9) (s + 76655)^2 ], kB ≈ 5.99185 × 10^9. C: H_C(s) = k_C s^2 / [ (s + 129.4)^2 (s + 76655)^2 ], kC ≈ 5.91797 × 10^9. D: H_D(s) = k_D s (s^2 + 6532 s + 4.0975 × 10^7) / [ (s + 1776.3) (s + 7288.5) (s^2 + 21514 s + 3.8836 × 10^8) ], kD ≈ 91104.32. The k-values are constants that are used to normalize the function to a gain of 1 (0 dB). The values listed above normalize the functions to 0 dB at 1 kHz, as they are typically used. See also Noise Signal noise ITU-R 468 noise weighting M-weighting Psophometric weighting Audio quality measurement Noise pollution Noise regulation Headroom Rumble measurement Weighting filter Weighting curve Luminous efficiency function, the light equivalent LKFS Notes References Further reading Audio Engineer's Reference Book, 2nd Ed 1999, edited Michael Talbot Smith, Focal Press An Introduction to the Psychology of Hearing 5th ed, Brian C. J. 
Moore, Elsevier Press External links Noise Measurement Briefing. Archived from the original on 2013-02-25. A-weighting filter circuit for audio measurements Weighting Filter Set Circuit diagrams AES pro audio reference definition of "weighting filters" Frequency Weighting Equations A-weighting in detail A-Weighting Equation and online calculation Researches in loudness measurement by CBS using noise bands, 1966 IEEE Article Comparison of some loudness measures for loudspeaker listening tests (Aarts, JAES, 1992) PDF containing algorithm for ABCD filters Noise pollution Sound Audio engineering Noise Acoustics
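The weighting and summation procedures described above are straightforward to evaluate numerically. The following Python sketch assumes the standard IEC 61672 expression for R_A(f) reproduced in the section above, and combines a set of invented, purely illustrative octave-band levels into a single figure by logarithmic (energy) summation; it illustrates the calculation only and is not a substitute for a calibrated instrument.

```python
import math

def a_weighting_db(f):
    """A-weighting correction in dB at frequency f (Hz), using the standard
    R_A(f) curve with the +2.00 dB offset that normalises it to 0 dB at 1 kHz."""
    r_a = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(r_a) + 2.00

def combine_levels_db(levels):
    """Logarithmic (energy) summation of band levels given in dB."""
    return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in levels))

# Octave-band centre frequencies in Hz, with hypothetical unweighted band levels in dB SPL.
centres = [63, 125, 250, 500, 1000, 2000, 4000, 8000]
unweighted = [70, 68, 66, 65, 64, 62, 60, 55]  # illustrative values only

weighted = [level + a_weighting_db(f) for f, level in zip(centres, unweighted)]
print(f"Single-figure level: {combine_levels_db(weighted):.1f} dB(A)")
```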
A-weighting
[ "Physics", "Engineering" ]
3,009
[ "Electrical engineering", "Audio engineering", "Classical mechanics", "Acoustics" ]
11,217,534
https://en.wikipedia.org/wiki/Background%20extinction%20rate
Background extinction rate, also known as the normal extinction rate, refers to the standard rate of extinction in Earth's geological and biological history, excluding major extinction events, including the current human-induced Holocene extinction. There have been five mass extinction events throughout Earth's history. Overview Extinctions are a normal part of the evolutionary process, and the background extinction rate is a measurement of "how often" they naturally occur. Normal extinction rates are often used as a comparison to present day extinction rates, to illustrate the higher frequency of extinction today than in all periods of non-extinction events before it. Background extinction rates have not remained constant, although changes are measured over geological time, covering millions of years. Measurement Background extinction rates are typically measured in order to give a specific classification to a species, and the rate is obtained over a certain period of time. There are three different ways to calculate background extinction rate. The first is simply the number of species that normally go extinct over a given period of time. For example, at the background rate an estimated one species of bird will go extinct every 400 years. Another way the extinction rate can be given is in million species years (MSY). For example, there is approximately one extinction estimated per million species years. From a purely mathematical standpoint this means that if there are a million species on planet Earth, one would go extinct every year, while if there were only one species it would go extinct in one million years, etc. The third way is to give species survival rates over time. For example, given normal extinction rates species typically exist for 5–10 million years before going extinct. Lifespan estimates Some groups' lifespan estimates by taxonomy are given below (Lawton & May 1995). Invertebrates: These species' average lifespan is 11 million years. Some reasons these species go extinct are habitat loss, overharvesting, pollution, invasive species, and climate change. Invertebrates make up most of Earth's biodiversity, which is why they do not go extinct as fast as other species. Marine Invertebrates: These species' average lifespan is 5–10 million years. Many marine invertebrates face extinction because of the high levels of dissolved carbon dioxide in aquatic environments. Seawater chemistry changes with increasing carbon levels, which makes it hard for these organisms to survive. Similar to terrestrial invertebrates, marine invertebrates make up most of Earth's biodiversity, which is why they do not go extinct as fast as other species. Marine Animals: These species' average lifespan is 4–5 million years. Reasons why marine animals go extinct include interactions with fisheries, capture, pollution, habitat degradation, climate change, and overharvesting. Mammals: These species' average lifespan is 1 million years. Habitat loss is the leading reason for why mammals go extinct. Other reasons that follow this are overexploitation, invasive species, pollution, and climate change. Diatoms: These species' average lifespan is 8 million years. Diatoms rely on silica to build their shells, which benefited them when oceans originally started to become more acidic. Now, as oceans continue to become even more acidic, it becomes harder for them to continue to thrive. From this information it can be concluded that these species face extinction due to high rates of ocean acidification. 
Dinoflagellates: These species' average lifespan is 13 million years. It takes a lot for these species to go extinct because they are so prominent in aquatic environments. Dinoflagellates were severely affected during the Triassic extinction, suggesting that the warming of ocean waters can affect the livelihood of these organisms. Planktonic Foraminifera: These species' average lifespan is 7 million years. These species face extinction in cases of glaciation events, hyperthermal events, and climate change. Cenozoic Bivalves: These species' average lifespan is 10 million years. The reason for why members of this group go extinct is related to environmental deterioration. Echinoderms: These species' average lifespan is 6 million years. The reason why members of this group went extinct is related to ocean acidification. Ocean acidification makes it hard for the echinoderms to build their shells. Silurian Graptolites: These species' average lifespan is 2 million years. Reasons why members of this group go extinct include climate change, rising sea levels, and loss of habitats. References Further reading E. O. Wilson. 2005. The Future of Life. Alfred A. Knopf. New York, New York, USA C.Michael Hogan. 2010. Edenic Period. Encyclopedia of Earth. National Council for Science and Environment. ed. Galal Hassan, ed in chief Cutler Cleveland, Washington DC J.H.Lawton and R.M.May (2005) Extinction rates, Oxford University Press, Oxford. External links Discussion of extinction events, with description of Background extinction rates Extinction Temporal rates
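The three equivalent ways of expressing the background rate described under Measurement above can be converted into one another with simple arithmetic. The Python sketch below is a minimal illustration using the round figure of about one extinction per million species-years quoted above; the species counts fed into it are invented purely for the example.

```python
def extinctions_per_year(n_species, rate_per_msy=1.0):
    """Expected number of extinctions per year for a standing diversity of
    n_species, given a background rate in extinctions per million species-years."""
    return n_species * rate_per_msy / 1_000_000

def mean_species_lifespan_my(rate_per_msy=1.0):
    """Average species lifespan, in millions of years, implied by the same rate."""
    return 1.0 / rate_per_msy

# Illustrative figures only: a background rate of ~1 extinction per million species-years.
print(extinctions_per_year(1_000_000))   # ~1 extinction per year across a million species
print(extinctions_per_year(10_000))      # ~0.01 per year, i.e. about one per century
print(mean_species_lifespan_my(1.0))     # ~1 million years of average species lifespan
```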
Background extinction rate
[ "Physics" ]
987
[ "Temporal quantities", "Temporal rates", "Physical quantities" ]
7,512,554
https://en.wikipedia.org/wiki/Miglitol
Miglitol is an oral alpha-glucosidase inhibitor used in the treatment of type 2 diabetes. It works by reversibly inhibiting alpha-glucosidase enzymes in the small intestine, which delays the digestion of complex carbohydrates and subsequently reduces postprandial glucose levels. Approved for clinical use since 1998, miglitol has demonstrated efficacy in improving glycemic control, reducing HbA1c levels, and decreasing both fasting and postprandial plasma glucose concentrations in long-term clinical trials. Additionally, recent studies have suggested that miglitol may have potential as an anti-obesity agent, showing promise in reducing body weight and body mass index in obese or diabetic patients. While generally well-tolerated, the most common side effects associated with miglitol are gastrointestinal disturbances, which are typically mild to moderate and tend to decrease over time. It must be taken at the start of main meals to have maximal effect. In contrast to acarbose (another alpha-glucosidase inhibitor), miglitol is systemically absorbed; however, it is not metabolized and is excreted by the kidneys. Formulation The benefits of alpha-glucosidase inhibitors on health were shown to be stronger when the powder is dissolved in water and consumed as a beverage rather than taken as ordinary hard gelatin capsules. See also Alpha-glucosidase inhibitor Alpha-amylase Cinnamon Miglustat Voglibose References Alpha-glucosidase inhibitors Iminosugars Piperidines Polyols
Miglitol
[ "Chemistry" ]
340
[ "Iminosugars", "Carbohydrates" ]
7,513,144
https://en.wikipedia.org/wiki/Thrombopoietic%20agent
Thrombopoietic agents are drugs that induce the growth and maturation of megakaryocytes. Some of them are currently in clinical use: romiplostim, eltrombopag, oprelvekin (a recombinant interleukin 11) and thrombopoietin. Several others are under clinical investigation such as lusutrombopag and avatrombopag. References Drugs by mechanism of action
Thrombopoietic agent
[ "Chemistry" ]
96
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
7,513,712
https://en.wikipedia.org/wiki/Flyback%20chronograph
A flyback chronograph is a watch complication, in which the user can use a reset function without the need to first stop the chronograph, by a single press on an additional pusher at the 4 o'clock mark. In usual chronographs of the time, the user had to push three times for the same operation. First they had to stop the chronograph, then reset the hands at zero, and finally restart the chronograph in order to time the next sequence. A flyback chronograph shortens the time of operation needed to measure subsequent legs of a flight. Other names The flyback function is also known by some other names: Retour-en-vol () Taylor system Permanent zero setting Overview The flyback function is a complication inspired by the need of pilots in the early 20th century, especially on shorter flights where pilots oriented themselves along highly visible geographical marks like rivers, mountains or railroad tracks. Flyback chronographs have a different layout than the usual monopusher chronographs of the early 20th century. They usually have a push-piece at 2 o'clock to start, stop and reset the timer function. But they have an additional pusher at 4 o'clock, enabling to do the three actions (stop, reset, restart) all at once. Navigation purposes Given the emergence of high-speed flight, e.g. Maurice Prévost reached 200 km/h in 1913, recording multiple time intervals with a conventional chronograph generated a significant margin of error. The aim of the flyback function was therefore to reduce this margin of error and help pilots to navigate more precisely. History The first model produced was a Longines wrist chronograph with a cal. 13.33Z in 1925. The flyback function served navigation and timing sporting events purposes in the 20th century. It was the first watch complication designed to record multiple time intervals such as calculating the time taken to travel between waypoints, measure fuel consumption or perform coordinated maneuvers. Longines filed the patent for the flyback mechanism on 12 June 1935. It was approved and registered on 16 June 1936. The flyback function has its origins in the development of aircraft. In fact, the early years of the 20th century not only made flying a reliable technology, but also developed aeronautical navigation systems. A major problem rapidly experienced by pilots: the high speed of the aircraft combined with the lengthy computations' methods meant longer periods of time flying in the wrong direction. This inexorably led to greater positional errors. At best, these navigational errors would cause the pilots to miss their destination. At worst, they would disappear somewhere in the middle of the ocean, deprived of precious fuel. Wiley Post, for example, carried three chronometers "because a minute of error in time meant a 15-mile error on the equator in the final calculation of position". The flyback function would become an important part of the solution: pilots would only have to press a single time on a pusher to stop, reset and restart their chronograph giving them much more accurate timing of the subsequent legs of a flight at high speeds. Thanks to the flyback function, anything involving time measured in sequences or at close intervals, such as dead reckoning or coordinated maneuvers, had been pushed to a higher level of accuracy. Richard Byrd, who flew first over the South Pole in 1929, led several expeditions wearing a Longines wrist chronograph (cal. 13ZN) with flyback function. See also Double chronograph References Watches Clocks Horology
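The navigational stakes described above come down to simple arithmetic: the Earth turns through 15 minutes of arc of longitude per minute of time, and one minute of arc along the equator corresponds to roughly one nautical mile, which is the basis of the 15-mile figure attributed to Wiley Post. The Python sketch below illustrates that conversion; the clock errors used are purely illustrative assumptions.

```python
# Approximate east-west position error caused by a clock error in celestial navigation.
# The Earth rotates 360 degrees in about 24 hours, i.e. 15 arc-minutes of longitude per
# minute of time, and one arc-minute of longitude at the equator is about one nautical mile.

ARCMIN_PER_MINUTE_OF_TIME = 15.0
NM_PER_ARCMIN_AT_EQUATOR = 1.0

def position_error_nm(clock_error_minutes):
    """Rough east-west position error (nautical miles, at the equator) for a given
    clock error expressed in minutes of time."""
    return clock_error_minutes * ARCMIN_PER_MINUTE_OF_TIME * NM_PER_ARCMIN_AT_EQUATOR

for error in (0.25, 0.5, 1.0, 2.0):  # illustrative clock errors, in minutes
    print(f"{error:.2f} min of clock error -> about {position_error_nm(error):.0f} nautical miles")
```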
Flyback chronograph
[ "Physics", "Technology", "Engineering" ]
744
[ "Machines", "Physical quantities", "Horology", "Time", "Clocks", "Measuring instruments", "Physical systems", "Spacetime" ]
7,513,965
https://en.wikipedia.org/wiki/Mean%20down%20time
In organizational management, mean down time (MDT) is the average time that a system is non-operational. This includes all downtime associated with repair, corrective and preventive maintenance, self-imposed downtime, and any logistics or administrative delays. Description The inclusion of delay times distinguishes mean down time from mean time to repair (MTTR), which includes only downtime specifically attributable to repairs. Mean Down Time key factors: SYSTEM FAILURE Identification & Recovery Time. First, the fact that the system is down must be identified, and maintainers notified & brought to action Fault detection and isolation. The problem must be identified and the faulty part identified. Parts Procurement. Replacement parts needed (if any) must be obtained System Repair. Faulty parts must be replaced or repaired. SCHEDULED DOWNTIME Preventive Maintenance. Preventive maintenance checks are often intrusive and require the system to be down (unless prognostics are used), e.g., checking oil in a car engine. System Upgrade. System downtime is usually required to bring new features to the system. Calibration. Many forms of mechanical or electronic equipment require periodic intrusive calibration. Other administrative actions There are four main ways of reducing MDT: Design the system to fail less often. A more reliable system that doesn't fail often reduces the Down Time. Make the system repairable. If an item is repairable, it will be used for a longer time, and the user will become more familiar with its operation. This will decrease the MDT because the user will be able to detect abnormal operation sooner, and the system will be repaired before the problem becomes too serious. Let the user repair the system. By designing a system to be user-repairable, the MDT will be considerably decreased, as it will not have to be taken out of service for long periods of time while it is being repaired by the manufacturer (which of course includes time spent in transit to and from the manufacturer). Provide the user with a repair support system. The closer critical spare parts are to the system, the faster it will be able to be repaired, as this eliminates the delay involved in ordering parts from the manufacturer and waiting to receive them. Also, the clarity of any instructions on how to repair an item will greatly contribute to the speed at which it is repaired. References Engineering failures Reliability engineering
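Because MDT averages the whole outage, detection and logistics delays included, while MTTR averages only the repair portion, MDT can never be smaller than MTTR for the same set of outages. The Python sketch below illustrates that bookkeeping with invented outage records; the numbers carry no significance beyond the example.

```python
# Each outage is recorded as (detection_time, logistics_and_admin_delay, repair_time) in hours.
# MTTR averages only the repair portion; MDT averages the entire outage duration.
outages = [
    (0.5, 4.0, 1.5),   # hypothetical outage records
    (1.0, 12.0, 2.0),
    (0.2, 0.0, 0.8),
]

mttr = sum(repair for _, _, repair in outages) / len(outages)
mdt = sum(sum(outage) for outage in outages) / len(outages)

print(f"MTTR = {mttr:.2f} h")   # repair time only
print(f"MDT  = {mdt:.2f} h")    # detection and delays included, so MDT >= MTTR
```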
Mean down time
[ "Technology", "Engineering" ]
487
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
7,516,542
https://en.wikipedia.org/wiki/Immunoproteasome
An immunoproteasome is a type of proteasome that degrades ubiquitin-labeled proteins found in the cytoplasm in cells exposed to oxidative stress and proinflammatory stimuli. In general, proteasomes consist of a regulatory and a catalytic part. Immunoproteasomes are induced by interferon gamma (but also by other proinflammatory cytokines) and oxidative stress, which in the cell triggers the transcription of three catalytic subunits that do not occur in the classical proteasome. Another possible variation of the proteasome is the thymoproteasome, which is located in the thymus and generates peptides for presentation to naive T cells. Structure Structurally, the immunoproteasome is a cylindrical protein complex composed of a catalytic 20S subunit and a 19S regulatory subunit. The catalytic subunit consists of two outer alpha rings and two inner beta rings. In the classical proteasome, the beta (β) 1, β2 and β5 subunits have catalytic activity; in the immunoproteasome, however, these are replaced by the subunits LMP2 (alias β1i), MECL-1 (alias β2i), and LMP7 (alias β5i). The LMP2 protein is composed of 20 amino acids, MECL-1 of 39 amino acids, and LMP7 occurs in two isoforms and therefore can have either 72 or 68 amino acids. The regulatory unit consists of 19 proteins, which are structurally divided into a lid of 9 proteins and a base, also of 9 proteins. The RPN10 protein is added to this regulatory complex and serves to stabilize the structure and to act as a receptor for ubiquitin. Function The function of the immunoproteasome is primarily to specifically cleave proteins into shorter peptides, which can then be displayed on the cell surface together with the MHC I complex. The MHC I complex with bound peptide is then recognized primarily by cytotoxic T cells. In order to expose a peptide on the cell surface, the peptides produced when the immunoproteasome cleaves a ubiquitin-labeled protein must first be transferred to the endoplasmic reticulum using the TAP1 and TAP2 transporters and chaperones. In the endoplasmic reticulum, the peptide is then bound to an MHC I molecule. The aforementioned LMP2 and LMP7 subunits are encoded by the PSMB9 (LMP2) and PSMB8 (LMP7) genes, which are found in the MHC class II gene cluster, near the TAP-1 and TAP-2 genes. The LMP2 subunit has chymotrypsin-like activity, which means that it cleaves bonds after hydrophobic residues and thus prepares peptides with hydrophobic C-terminal anchors for the MHC I complex, while the LMP7 and MECL-1 subunits have the same activities as the corresponding standard proteasome subunits, i.e. chymotrypsin-like and trypsin-like activity, respectively. Diseases associated with immunoproteasome The ability to display peptides on the cell surface is essential for the recognition of cell status by immune cells. Its proper function is therefore essential, and when it is disrupted, disease occurs. Some examples where the effect of the immunoproteasome on pathology has been confirmed are given below: Mutations in the PSMB8 gene, which encodes the LMP7 subunit, are involved in a variety of diseases and autoinflammatory disorders, the symptoms of which include skin rash, erythema, spiking fever and lipodystrophy, and which present from early childhood. These also include Nakajo-Nishimura syndrome, the Japanese autoinflammatory syndrome with lipodystrophy (JASL), and chronic atypical neutrophilic dermatosis with lipodystrophy and elevated temperature. This list of syndromes is collectively called proteasome-associated autoinflammatory syndrome. 
In Alzheimer's disease, single nucleotide polymorphisms have been found in an immunoproteasome subunit which increase the chance of the disease occurring. Alzheimer's disease is characterized by the presence of amyloid plaques in which advanced glycation end products occur. These advanced glycation end-products are not degraded in the cell and remain in it. It is in amyloid plaques that the activity of the immunoproteasome is found, as a consequence of the cells' efforts to remove the plaques. References Proteins Protein complexes Organelles
Immunoproteasome
[ "Chemistry" ]
977
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
7,516,709
https://en.wikipedia.org/wiki/GW%20approximation
The GW approximation (GWA) is an approximation made in order to calculate the self-energy of a many-body system of electrons. The approximation is that the expansion of the self-energy Σ in terms of the single particle Green's function G and the screened Coulomb interaction W (in units of ħ = 1) can be truncated after the first term: Σ ≈ iGW. In other words, the self-energy is expanded in a formal Taylor series in powers of the screened interaction W and the lowest order term is kept in the expansion in GWA. Theory The above formula is schematic in nature and shows the overall idea of the approximation. More precisely, if we label an electron coordinate with its position, spin, and time and bundle all three into a composite index (the numbers 1, 2, etc.), we have Σ(1,2) = iG(1,2)W(1⁺,2) + (terms of higher order in W), where the "+" superscript means the time index is shifted forward by an infinitesimal amount. The GWA is then Σ(1,2) ≈ iG(1,2)W(1⁺,2). To put this in context, if one replaces W by the bare Coulomb interaction (i.e. the usual 1/r interaction), one generates the standard perturbative series for the self-energy found in most many-body textbooks. The GWA with W replaced by the bare Coulomb yields nothing other than the Hartree–Fock exchange potential (self-energy). Therefore, loosely speaking, the GWA represents a type of dynamically screened Hartree–Fock self-energy. In a solid state system, the series for the self-energy in terms of W should converge much faster than the traditional series in the bare Coulomb interaction. This is because the screening of the medium reduces the effective strength of the Coulomb interaction: for example, if one places an electron at some position in a material and asks what the potential is at some other position in the material, the value is smaller than given by the bare Coulomb interaction (inverse distance between the points) because the other electrons in the medium polarize (move or distort their electronic states) so as to screen the electric field. Therefore, W is a smaller quantity than the bare Coulomb interaction so that a series in W should have higher hopes of converging quickly. To see the more rapid convergence, we can consider the simplest example involving the homogeneous or uniform electron gas, which is characterized by an electron density or equivalently the average electron-electron separation or Wigner–Seitz radius r_s. (We only present a scaling argument and will not compute numerical prefactors that are order unity.) Here are the key steps: The kinetic energy of an electron scales as 1/r_s^2 The average electron-electron repulsion from the bare (unscreened) Coulomb interaction scales as 1/r_s (simply the inverse of the typical separation) The electron gas dielectric function in the simplest Thomas–Fermi screening model for a wave vector q is ε(q) = 1 + λ^2/q^2, where λ is the screening wave number that scales as r_s^(-1/2) Typical wave vectors scale as 1/r_s (again typical inverse separation) Hence a typical screening value is ε ≈ 1 + r_s The screened Coulomb interaction is W(q) = v(q)/ε(q) Thus for the bare Coulomb interaction, the ratio of Coulomb to kinetic energy is of order r_s, which is of order 2-5 for a typical metal and not small at all: in other words, the bare Coulomb interaction is rather strong and makes for a poor perturbative expansion. On the other hand, the ratio of a typical W to the kinetic energy is greatly reduced by the screening and is of order r_s/(1 + r_s), which is well behaved and smaller than unity even for large r_s: the screened interaction is much weaker and is more likely to give a rapidly converging perturbative series. 
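The scaling argument above lends itself to a quick numerical check. The following Python sketch is only a rough illustration of that argument, not a GW calculation: it compares the bare interaction-to-kinetic-energy ratio (of order r_s) with the Thomas–Fermi screened ratio r_s/(1 + r_s) for a few values of the Wigner–Seitz radius, with all order-unity prefactors dropped exactly as in the text.

```python
# Rough numerical illustration of the Thomas-Fermi screening argument above.
# All order-unity prefactors are dropped, as in the scaling estimate in the text.

def coupling_ratios(rs):
    """Return (bare, screened) interaction-to-kinetic-energy ratios for a given
    dimensionless Wigner-Seitz radius rs."""
    bare = rs                  # ~ (1/rs) / (1/rs**2)
    screened = rs / (1 + rs)   # bare ratio reduced by a typical dielectric value ~ 1 + rs
    return bare, screened

for rs in (1.0, 2.0, 3.0, 4.0, 5.0, 10.0):
    bare, screened = coupling_ratios(rs)
    print(f"r_s = {rs:4.1f}:  bare ~ {bare:5.2f}   screened ~ {screened:4.2f}")
```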
History The first GWA calculation beyond the Hartree–Fock method was carried out in 1958 by John Quinn and Richard Allan Ferrell, but with many approximations and a limited approach. Donald F. Dubois used this method to obtain results for very small Wigner–Seitz radius, or very large electron densities, in 1959. The first full calculation using the GWA was done by Lars Hedin in 1965. The Hedin equations for the GWA are named after him. With the advance of computational resources, real materials were first studied using the GWA in the 1980s, with the works of Mark S. Hybertsen and Steven Gwon Sheng Louie. Software implementing the GW approximation ABINIT - plane-wave pseudopotential method ADF - Slater basis set method BerkeleyGW - plane-wave pseudopotential method CP2K - Gaussian-based low-scaling all-electron and pseudopotential method ELK - full-potential (linearized) augmented plane-wave (FP-LAPW) method FHI-aims - numeric atom-centered orbitals method Fiesta - Gaussian all-electron method GAP - an all-electron GW code based on augmented plane-waves, currently interfaced with WIEN2k GPAW GREEN - fully self-consistent GW in Gaussian basis for molecules and solids Molgw - small gaussian basis code NanoGW - real-space wave functions and Lanczos iterative methods PySCF QuantumATK - LCAO and PW methods. Quantum ESPRESSO - Wannier-function pseudopotential method Questaal - Full Potential (FP-LMTO) method SaX - plane-wave pseudopotential method Spex - full-potential (linearized) augmented plane-wave (FP-LAPW) method TURBOMOLE - Gaussian all-electron method VASP - projector-augmented-wave (PAW) method West - large scale GW YAMBO code - plane-wave pseudopotential method Sources The key publications concerning the application of the GW approximation Picture of Lars Hedin, inventor of GW GW100 - Benchmarking the GW approach for molecules. References Further reading Electron Correlation in the Solid State, Norman H. March (editor), World Scientific Publishing Company Quantum field theory
GW approximation
[ "Physics" ]
1,210
[ "Quantum field theory", "Quantum mechanics" ]
7,517,878
https://en.wikipedia.org/wiki/Rotordynamics
Rotordynamics (or rotor dynamics) is a specialized branch of applied mechanics concerned with the behavior and diagnosis of rotating structures. It is commonly used to analyze the behavior of structures ranging from jet engines and steam turbines to auto engines and computer disk storage. At its most basic level, rotor dynamics is concerned with one or more mechanical structures (rotors) supported by bearings and influenced by internal phenomena that rotate around a single axis. The supporting structure is called a stator. As the speed of rotation increases the amplitude of vibration often passes through a maximum that is called a critical speed. This amplitude is commonly excited by imbalance of the rotating structure; everyday examples include engine balance and tire balance. If the amplitude of vibration at these critical speeds is excessive, then catastrophic failure occurs. In addition to this, turbomachinery often develop instabilities which are related to the internal makeup of turbomachinery, and which must be corrected. This is the chief concern of engineers who design large rotors. Rotating machinery produces vibrations depending upon the structure of the mechanism involved in the process. Any faults in the machine can increase or excite the vibration signatures. Vibration behavior of the machine due to imbalance is one of the main aspects of rotating machinery which must be studied in detail and considered while designing. All objects including rotating machinery exhibit natural frequency depending on the structure of the object. The critical speed of a rotating machine occurs when the rotational speed matches its natural frequency. The lowest speed at which the natural frequency is first encountered is called the first critical speed, but as the speed increases, additional critical speeds are seen which are the multiples of the natural frequency. Hence, minimizing rotational unbalance and unnecessary external forces are very important to reducing the overall forces which initiate resonance. When the vibration is in resonance, it creates a destructive energy which should be the main concern when designing a rotating machine. The objective here should be to avoid operations that are close to the critical and pass safely through them when in acceleration or deceleration. If this aspect is ignored it might result in loss of the equipment, excessive wear and tear on the machinery, catastrophic breakage beyond repair or even human injury and loss of lives. The real dynamics of the machine is difficult to model theoretically. The calculations are based on simplified models which resemble various structural components (lumped parameters models), equations obtained from solving models numerically (Rayleigh–Ritz method) and finally from the finite element method (FEM), which is another approach for modelling and analysis of the machine for natural frequencies. There are also some analytical methods, such as the distributed transfer function method, which can generate analytical and closed-form natural frequencies, critical speeds and unbalanced mass response. On any machine prototype it is tested to confirm the precise frequencies of resonance and then redesigned to assure that resonance does not occur. 
Basic principles The equation of motion, in generalized matrix form, for an axially symmetric rotor rotating at a constant spin speed $\Omega$ is $M\ddot{q}(t) + (C + G)\dot{q}(t) + (K + N)q(t) = f(t)$, where: $M$ is the symmetric mass matrix; $C$ is the symmetric damping matrix; $G$ is the skew-symmetric gyroscopic matrix; $K$ is the symmetric bearing or seal stiffness matrix; $N$ is the gyroscopic matrix of deflection for the inclusion of, e.g., centrifugal elements; $q(t)$ is the vector of generalized coordinates of the rotor in inertial coordinates; and $f(t)$ is a forcing function, usually including the unbalance. The gyroscopic matrix $G$ is proportional to the spin speed $\Omega$. The general solution to the above equation involves complex eigenvectors which are spin speed dependent. Engineering specialists in this field rely on the Campbell diagram to explore these solutions. An interesting feature of the rotordynamic system of equations is the presence of off-diagonal terms of stiffness, damping, and mass. These terms are called cross-coupled stiffness, cross-coupled damping, and cross-coupled mass. When there is a positive cross-coupled stiffness, a deflection will cause a reaction force opposite the direction of deflection to react the load, and also a reaction force in the direction of positive whirl. If this force is large enough compared with the available direct damping and stiffness, the rotor will be unstable. When a rotor is unstable, it will typically require immediate shutdown of the machine to avoid catastrophic failure. Jeffcott rotor The Jeffcott rotor (named after Henry Homan Jeffcott), also known as the de Laval rotor in Europe, is a simplified lumped parameter model used to solve these equations. A Jeffcott rotor consists of a flexible, massless, uniform shaft mounted on two flexible bearings equidistant from a massive disk rigidly attached to the shaft. The simplest form of the rotor constrains the disk to a plane orthogonal to the axis of rotation. This limits the rotor's response to lateral vibration only. If the disk is perfectly balanced (i.e., its geometric center and center of mass are coincident), then the rotor is analogous to a single-degree-of-freedom undamped oscillator under free vibration. If there is some radial distance between the geometric center and center of mass, then the rotor is unbalanced, which produces a force proportional to the disk's mass $m$, the distance between the two centers (the eccentricity, $\varepsilon$), and the square of the disk's spin speed, $\Omega$. After calculating the equivalent stiffness, $k$, of the system, we can write the following second-order linear ordinary differential equation describing the radial deflection $x(t)$ of the disk from the rotor centerline: $m\ddot{x}(t) + kx(t) = m\varepsilon\Omega^{2}\sin(\Omega t)$. If we were to graph the radial response, we would see a sine wave with angular frequency $\Omega$. This lateral oscillation is called 'whirl', and in this case it is highly dependent upon spin speed. Not only does the spin speed influence the amplitude of the forcing function, it can also produce dynamic amplification near the system's natural frequency. While the Jeffcott rotor is a useful tool for introducing rotordynamic concepts, it is important to note that it is a mathematical idealization that only loosely approximates the behavior of real-world rotors. Campbell diagram The Campbell diagram, also known as a "Whirl Speed Map" or a "Frequency Interference Diagram", of a simple rotor system is shown on the right. The pink and blue curves show the backward whirl (BW) and forward whirl (FW) modes, respectively, which diverge as the spin speed increases. 
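As a sketch of how such whirl curves can be generated numerically, the following assumes the common textbook model of a single rigid disk tilting on an isotropic rotational stiffness, for which the whirl frequencies are the roots of $-I_t\omega^2 + I_p\Omega\omega + k_\theta = 0$; this is an editor-chosen illustrative model, not a specific commercial code, and all inertia and stiffness values are hypothetical.

```python
import numpy as np

# Hypothetical rigid-disk parameters (illustrative only)
I_t  = 0.5     # transverse (diametral) moment of inertia, kg*m^2
I_p  = 0.8     # polar moment of inertia, kg*m^2
k_th = 2.0e4   # isotropic rotational (tilting) stiffness, N*m/rad

def whirl_frequencies(spin):
    """Forward/backward whirl frequencies of a gyroscopic rigid disk.

    Roots of -I_t*w**2 + I_p*spin*w + k_th = 0 (complex-coordinate model).
    Returns (forward, backward) whirl frequencies in rad/s, both positive.
    """
    disc = np.sqrt((I_p * spin) ** 2 + 4.0 * I_t * k_th)
    fw = (I_p * spin + disc) / (2.0 * I_t)   # forward whirl rises with spin
    bw = (disc - I_p * spin) / (2.0 * I_t)   # backward whirl falls with spin
    return fw, bw

for spin in np.linspace(0.0, 400.0, 5):      # spin speeds, rad/s
    fw, bw = whirl_frequencies(spin)
    print(f"spin {spin:6.1f} rad/s   FW {fw:7.1f}   BW {bw:7.1f} rad/s")
# Plotting FW and BW against spin gives the two diverging Campbell-diagram curves;
# a critical speed occurs where a curve crosses the synchronous line w = spin.
```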
When the BW frequency or the FW frequency equals the spin speed Ω, indicated by the intersections A and B with the synchronous spin speed line, the response of the rotor may show a peak. This is called a critical speed. History The history of rotordynamics is replete with the interplay of theory and practice. W. J. M. Rankine first performed an analysis of a spinning shaft in 1869, but his model was not adequate and he predicted that supercritical speeds could not be attained. In 1895, Dunkerley published an experimental paper describing supercritical speeds. Gustaf de Laval, a Swedish engineer, ran a steam turbine to supercritical speeds in 1889, and Kerr published a paper showing experimental evidence of a second critical speed in 1916. Henry Jeffcott was commissioned by the Royal Society of London to resolve the conflict between theory and practice. He published a paper, now considered classic, in the Philosophical Magazine in 1919 in which he confirmed the existence of stable supercritical speeds. August Föppl had published much the same conclusions in 1895, but history largely ignored his work. Between the work of Jeffcott and the start of World War II there was much work in the area of instabilities and modeling techniques, culminating in the work of Nils Otto Myklestad and M. A. Prohl, which led to the transfer matrix method (TMM) for analyzing rotors. The most prevalent method used today for rotordynamics analysis is the finite element method. Modern computer models have been commented on in a quote attributed to Dara Childs, "the quality of predictions from a computer code has more to do with the soundness of the basic model and the physical insight of the analyst. ... Superior algorithms or computer codes will not cure bad models or a lack of engineering judgment." Prof. F. Nelson has written extensively on the history of rotordynamics and most of this section is based on his work. Software There are many software packages that are capable of solving the rotor dynamic system of equations. Rotor dynamic specific codes are more versatile for design purposes. These codes make it easy to add bearing coefficients, side loads, and many other items only a rotordynamicist would need. The non-rotor dynamic specific codes are full featured FEA solvers, and have many years of development in their solving techniques. The non-rotor dynamic specific codes can also be used to calibrate a code designed for rotor dynamics. See also Axle Balancing machine Bearing (mechanical) Driveshaft Exoskeletal engine Magnetic bearing Turbine References uses DyRoBeS Ganeriwala, S., Mohsen N (2008). Rotordynamic Analysis using XLRotor. SQI03-02800-0811 Notes External links Rotordynamic Analysis using XLRotor Gateway to technical literature on Rotordynamics Dynamics (mechanics) Rotation
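Returning to the Jeffcott rotor described earlier, the following minimal Python sketch computes its steady-state unbalance response and shows the peak at the critical speed; a small viscous damping term c, not part of the undamped model in the text, is added so the resonance peak remains finite, and all parameter values are hypothetical.

```python
import numpy as np

# Hypothetical Jeffcott-rotor parameters (illustrative only)
m   = 10.0      # disk mass, kg
k   = 1.0e6     # equivalent lateral stiffness, N/m
c   = 50.0      # small viscous damping, N*s/m (keeps the peak finite)
ecc = 1.0e-4    # eccentricity between geometric centre and centre of mass, m

omega_n = np.sqrt(k / m)                          # undamped natural frequency, rad/s
spin    = np.linspace(1.0, 2.0 * omega_n, 2000)   # spin speeds to sweep, rad/s

# Steady-state amplitude of m*x'' + c*x' + k*x = m*ecc*Omega^2*sin(Omega*t)
amplitude = m * ecc * spin**2 / np.sqrt((k - m * spin**2) ** 2 + (c * spin) ** 2)

i_peak = np.argmax(amplitude)
print(f"natural frequency     : {omega_n:8.1f} rad/s")
print(f"peak response at spin : {spin[i_peak]:8.1f} rad/s (the critical speed)")
print(f"peak whirl amplitude  : {amplitude[i_peak] * 1e3:8.2f} mm")
```

Sweeping the spin speed and plotting the amplitude reproduces the characteristic resonance peak that rotordynamic design tries to keep away from the operating range.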
Rotordynamics
[ "Physics" ]
1,910
[ "Physical phenomena", "Classical mechanics", "Rotation", "Motion (physics)", "Dynamics (mechanics)" ]
7,519,445
https://en.wikipedia.org/wiki/Following%20sea
A following sea refers to a wave direction that is similar to the heading of a waterborne vessel under way. The word "sea" in this context refers to open-water wind waves. In the strict sense, a following sea has a direction of propagation within 15° either side of the vessel heading, and has a celerity that does not exceed the velocity of the vessel in the direction of wave propagation. If the wave moves faster than the vessel, it is an overtaking sea. If the angle to the vessel heading is more than 15°, it may be a quartering sea. Usage Sailors use this term synonymously with the points of sail below a beam reach, since the wind direction is generally the same as the sea direction. Therefore, the phrase "Fair winds and following seas" implies that a vessel will have good winds and will not have to pound into the waves. The phrase is now used as a popular toast or salutation between mariners. It is also used during ceremonies, such as the beginning of a voyage, a ship's commissioning, a retirement, a funeral, et cetera. Following seas, combined with high winds (especially from the stern, i.e. from behind the boat), can be dangerous and cause a boat to yaw (turn sideways) and swamp, or plow under the wave ahead, if the winds and sea are too strong or violent. The original term may have been "Fair winds and a fallowing sea", where fallow means inactive. However, in the mariners' traditional toast or blessing, a "following sea" combined with a "fair wind" implies that the winds are comfortable, the sailboat is "running", i.e. sailing with the wind on its stern, and the seas are rolling comfortably in the same direction as the boat is heading, so that the boat seems to skim easily over the surface of the water. See also References Sailing Water waves
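A purely illustrative Python sketch of the strict classification stated above (within 15° of the heading, celerity not exceeding the vessel's speed resolved along the wave direction); the function name and structure are editor inventions that simply restate the definition, not an established navigational tool.

```python
import math

def classify_sea(heading_deg: float, speed_kn: float,
                 wave_dir_deg: float, celerity_kn: float) -> str:
    """Classify wind waves relative to a vessel, per the strict definition above.

    heading_deg / wave_dir_deg: travel direction of vessel / waves (degrees).
    speed_kn / celerity_kn:     vessel speed and wave celerity (same units).
    """
    # Smallest angle between wave propagation direction and vessel heading
    off_heading = abs((wave_dir_deg - heading_deg + 180.0) % 360.0 - 180.0)
    if off_heading > 15.0:
        return "quartering (or other) sea"      # more than 15 deg off the heading
    # Vessel speed resolved along the wave propagation direction
    along_wave = speed_kn * math.cos(math.radians(off_heading))
    if celerity_kn > along_wave:
        return "overtaking sea"                 # waves move faster than the vessel
    return "following sea"

print(classify_sea(heading_deg=90.0, speed_kn=12.0,
                   wave_dir_deg=100.0, celerity_kn=10.0))
```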
Following sea
[ "Physics", "Chemistry" ]
391
[ "Water waves", "Waves", "Physical phenomena", "Fluid dynamics" ]
7,520,255
https://en.wikipedia.org/wiki/Sonic%20black%20hole
A sonic black hole, sometimes called a dumb hole or acoustic black hole, is a phenomenon in which phonons (sound perturbations) are unable to escape from a region of a fluid that is flowing more quickly than the local speed of sound. They are called sonic, or acoustic, black holes because these trapped phonons are analogous to light in astrophysical (gravitational) black holes. Physicists are interested in them because they have many properties similar to astrophysical black holes and, in particular, emit a phononic version of Hawking radiation. This Hawking radiation can be spontaneously created by quantum vacuum fluctuations, in close analogy with Hawking radiation from a real black hole. Alternatively, the Hawking radiation can be stimulated in a classical process. The boundary of a sonic black hole, at which the flow speed changes from being greater than the speed of sound to less than the speed of sound, is called the event horizon. History of the concept Acoustic black holes were first theorized to be useful by W. G. Unruh in 1981. However, the first black hole analogue was not created in a laboratory until 2009. It was created in a rubidium Bose–Einstein condensate using a technique called density inversion. This technique creates a flow by repelling the condensate with a potential minimum. The surface gravity and temperature of the sonic black hole were measured, but no attempt was made to detect Hawking radiation. However, the scientists who created it predicted that the experiment was suitable for detection and suggested a method by which it might be done by lasing the phonons. In 2014, stimulated Hawking radiation was reported in an analogue black-hole laser by the same researchers. Quantum, spontaneous Hawking radiation was observed later. A rotating sonic black hole was used in 2010 to give the first laboratory test of superradiance, a process whereby energy is extracted from a black hole. Overview Perfect fluids Sonic black holes are possible because phonons in perfect fluids exhibit the same properties of motion as fields, such as gravity, in space and time. For this reason, a system in which a sonic black hole can be created is called a gravity analogue. Nearly any fluid can be used to create an acoustic event horizon, but the viscosity of most fluids creates random motion that makes features like Hawking radiation nearly impossible to detect. The complexity of such a system would make it very difficult to gain any knowledge about such features even if they could be detected. Many nearly perfect fluids have been suggested for use in creating sonic black holes, such as superfluid helium, one-dimensional degenerate Fermi gases, and Bose–Einstein condensates. Gravity analogues other than phonons in a fluid, such as slow light and systems of ions, have also been proposed for studying black hole analogues. The fact that so many systems mimic gravity is sometimes used as evidence for the theory of emergent gravity, which could help reconcile relativity and quantum mechanics. Acoustic engineering In addition to the above-mentioned sonic or acoustic black holes, which can be viewed as analogues of astrophysical black holes, physical objects bearing the same names also exist in acoustic and vibration engineering, where they are used for sound absorption and for damping structural vibrations. The acoustic black hole effect in such objects can be achieved by creating a gradual reduction of sound velocity in a waveguide, or of elastic wave velocity in a solid structure (e.g. 
flexural wave velocity in thin plates) with propagation distance. The required velocity reduction should follow a power-law function of the propagation distance, and the velocity at the end of the wave propagation path should be reduced to almost zero. Also, measures should be taken to insert a small amount of traditional sound or vibration absorbing materials in the area of very low propagation velocity. Under these conditions, the described sonic or acoustic black holes provide almost 100% absorption of the incident air-borne or structure-borne acoustic waves. See also Acoustic metric Analog models of gravity Black hole Optical black hole Quantum gravity Notes External links Black holes Fluid dynamics
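As a toy numerical illustration of the defining condition for the acoustic event horizon described above, namely the point where the flow speed crosses the local speed of sound, the following Python sketch scans a one-dimensional flow profile for that crossing; the velocity and sound-speed profiles are invented for illustration and do not model any particular experiment.

```python
import numpy as np

# Invented 1-D profiles (illustrative only): a flow that accelerates
# downstream while the local sound speed stays constant.
x = np.linspace(0.0, 1.0, 1001)          # position along the flow, arbitrary units
flow_speed  = 0.5 + 1.0 * x              # fluid speed |v(x)|
sound_speed = np.full_like(x, 1.0)       # local speed of sound c(x)

mach = flow_speed / sound_speed

# The acoustic horizon sits where the flow goes from subsonic to supersonic.
crossings = np.where(np.diff(np.sign(mach - 1.0)) > 0)[0]
if crossings.size:
    i = crossings[0]
    print(f"acoustic horizon near x = {x[i]:.3f}: phonons created downstream "
          "of this point cannot propagate back upstream")
else:
    print("no horizon: the flow stays subsonic everywhere")
```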
Sonic black hole
[ "Physics", "Chemistry", "Astronomy", "Engineering" ]
827
[ "Black holes", "Physical phenomena", "Physical quantities", "Chemical engineering", "Unsolved problems in physics", "Astrophysics", "Density", "Piping", "Stellar phenomena", "Astronomical objects", "Fluid dynamics" ]
7,521,019
https://en.wikipedia.org/wiki/Biocomposite
A biocomposite is a composite material formed by a matrix (resin) and a reinforcement of natural fibers. Environmental concerns and the cost of synthetic fibres have laid the foundation for using natural fibres as reinforcement in polymeric composites. The matrix phase is formed by polymers derived from renewable and nonrenewable resources. The matrix is important to protect the fibers from environmental degradation and mechanical damage, to hold the fibers together and to transfer loads to them. In addition, biofibers are the principal components of biocomposites; they are derived from biological origins, for example fibers from crops (cotton, flax or hemp), recycled wood, waste paper, crop processing byproducts or regenerated cellulose fiber (viscose/rayon). Interest in biocomposites is rapidly growing in terms of industrial applications (automobiles, railway coaches, aerospace, military applications, construction, and packaging) and fundamental research, due to their benefits: they are renewable, cheap, recyclable, and biodegradable. Biocomposites can be used alone, or as a complement to standard materials such as carbon fiber. Advocates of biocomposites state that use of these materials improves health and safety in their production, and that the materials are lighter in weight, have a visual appeal similar to that of wood, and are environmentally superior. Characteristics The distinguishing feature of this class of composites is that they are biodegradable and pollute the environment less, which addresses the concern of many scientists and engineers to minimize the environmental impact of composite production. They are a renewable resource, cheap, and in certain cases completely recyclable. One advantage of natural fibers is their low density, which results in a higher specific tensile strength and stiffness than glass fibers, in addition to lower manufacturing costs. As such, biocomposites could be a viable ecological alternative to carbon, glass, and man-made fiber composites. Natural fibers have a hollow structure, which provides insulation against noise and heat. This class of materials can be easily processed, and thus it is suited to a wide range of applications, such as packaging, building (roof structures, bridges, windows, doors, green kitchens), automobiles, aerospace, military applications, electronics, consumer products and the medical industry (prosthetics, bone plates, orthodontic archwires, total hip replacements, and composite screws and pins). Unfortunately, biocomposites have limitations due to the lack of compatibility between synthetic resins and natural fibers. Classification Biocomposites are divided into non-wood fibers and wood fibers, all of which contain cellulose and lignin. The non-wood fibers (natural fibers) are more attractive for industry due to the physical and mechanical properties they present. These fibers are also relatively long and have a high cellulose content, which delivers high tensile strength and a high degree of cellulose crystallinity. Natural fibers nevertheless have some disadvantages: the hydroxyl (OH) groups in the fiber can attract water molecules, and thus the fiber may swell. This results in voids at the interface of the composite, which affect the mechanical properties and cause a loss of dimensional stability. Wood fibers are so named because roughly 60% of their mass consists of wood elements. They include softwood fibers (long and flexible) and hardwood fibers (shorter and stiffer), and have a lower degree of cellulose crystallinity. 
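To make the "specific property" argument above concrete, here is a small sketch comparing specific stiffness and specific strength (property divided by density) for a natural fiber and E-glass; the numbers are rough, editor-chosen literature-range values used purely for illustration, not data from this article, and real fiber properties scatter widely between sources and grades.

```python
# Rough, illustrative fiber properties (editor-chosen placeholder values,
# not data from this article; real values vary widely between sources).
fibers = {
    #           density [g/cm^3], tensile modulus [GPa], tensile strength [MPa]
    "flax":    (1.50,             60.0,                  800.0),
    "E-glass": (2.55,             72.0,                  2400.0),
}

for name, (rho, modulus, strength) in fibers.items():
    specific_stiffness = modulus / rho      # GPa per (g/cm^3)
    specific_strength  = strength / rho     # MPa per (g/cm^3)
    print(f"{name:8s}  specific stiffness {specific_stiffness:6.1f}"
          f"  specific strength {specific_strength:7.1f}")

# With these illustrative numbers flax compares favourably on specific stiffness;
# specific strength depends strongly on the fiber grade and processing.
```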
The natural fibers are divided into straw fibers, bast, leaf, seed or fruit, and grass fibers. The fibers most widely used in industry are flax, jute, hemp, kenaf, sisal and coir. Straw fibers can be found in many parts of the world and are an example of a low-cost reinforcement for biocomposites. Wood fibers may be recycled or non-recycled. Thus, many polymers such as polyethylene (PE), polypropylene (PP), and polyvinyl chloride (PVC) are being used in the wood composites industry. Flax applications Flax linen composites work well for applications seeking a lighter-weight alternative to other materials, most notably in automotive interior components and sports equipment. For automotive interiors, Composites Evolution has performed prototype testing for the Land Rover Defender and the Jaguar XF, with the Defender's flax composite 60% lighter than the production counterpart at the same stiffness, and the XF's flax composite part 35% lighter than the production component at the same stiffness. In sports equipment, Ergon Bikes produced a concept saddle that won first place among 439 entries in the Accessories category at Eurobike 2012, a major bicycling industry trade show. VE Paddles has produced a boat paddle blade. Flaxland Canoes has developed a canoe that has a covering of flax linen. Magine Snowboards has developed a snowboard that incorporates flax linen. Samsara Surfboards has produced a flax linen surfboard. Idris Ski's Lynx ski won an ISPO Award in 2013. Flax linen composites also work for applications for which the look, feel, or sound of wood is desired, but without susceptibility to warping. Applications include furniture and musical instruments. In furniture, a team at Sheffield Hallam University designed a cabinet with entirely sustainable materials, including flax linen. In musical instruments, Blackbird Guitars has produced a ukulele made with flax linen that has won a number of design awards in the composites industry, as well as a guitar. Green composites Green composites are classified as biocomposites that combine natural fibers with biodegradable resins. They are called green composites mainly because of their degradable and sustainable properties, which allow them to be disposed of easily without harming the environment. Because of their durability, green composites are mainly used to extend the life cycle of short-lived products. Hybrid composites Another class of biocomposite is the 'hybrid biocomposite', which combines different types of fibers in a single matrix. The fibers can be synthetic or natural, and can be randomly combined to generate the hybrid composite. Its performance depends directly on the balance between the good and bad properties of each individual material used. Furthermore, when a hybrid composite contains two or more types of fibers, one type of fiber can take over the load when the other fails. The properties of this biocomposite depend directly on the fibers, including their content, length, arrangement, and bonding to the matrix. In particular, the strength of the hybrid composite depends on the failure strain of the individual fibers. Hemp applications Hemp fiber composites work well in applications where weight reduction and increased stiffness are important. For consumer goods applications, Trifilon has developed a number of hemp fiber biocomposites to replace conventional plastics. 
Suitcases, chillboxes, mobile phone cases and cosmetic packaging have been produced using hemp fiber composites. Processing The production of biocomposites uses techniques that are used to manufacture plastics or composites materials. These techniques include: Machine press; Filament winding; Pultrusion; Extrusion (most widely used, principally for green biocomposite); Injection molding; Compression molding; Resin transfer molding; Sheet moulding compound. References Bibliography Pingle, P. Analytical Modeling of Hard Biocomposites. ProQuest, 2008. University of Massachusetts Lowell. Website: https://books.google.com/books?id=XRLEstOKTiEC&q=biocomposites Mohanty, A.K.; Misra, M.; Drzal, L.T. Natural Fibers, Biopolymers, and Biocomposites. CRC Press, 2005. Website: https://books.google.com/books?id=AwXugfY2oc4C&q=biocomposites Averous, L.; Le Digabel, F. Properties of biocomposites based on lignocellulosic fillers. Science Direct, 2006. Website: http://www.biodeg.net/fichiers/Properties%20of%20biocomposites%20based%20on%20lignocellulosic%20fillers%20(Proof).pdf Averous, L. Cellulose-based biocomposites: comparison of different multiphasic systems. Composite Interfaces, 2007. Website: http://www.biodeg.net/fichiers/Cellulosebased%20biocomposites%20(Abstract-Proof).pdf Halonen, H. Structural changes during cellulose composite processing. Stockholm, 2012. Website: http://www.diva-portal.org/smash/get/diva2:565072/FULLTEXT01.pdf Fowler, P; Hughes, J; Elias, E. Biocomposites: technology, environmental credentials and market forces. Journal of the Science Food and Agriculture, 2006. Website: http://www.bc.bangor.ac.uk/_includes/docs/pdf/biocomposites%20technology.pdf Composite materials
Biocomposite
[ "Physics" ]
1,960
[ "Materials", "Composite materials", "Matter" ]
7,521,201
https://en.wikipedia.org/wiki/Compounding
In the field of pharmacy, compounding (performed in compounding pharmacies) is preparation of custom medications to fit unique needs of patients that cannot be met with mass-produced products. This may be done, for example, to provide medication in a form easier for a given patient to ingest (e.g., liquid vs. tablet), or to avoid a non-active ingredient a patient is allergic to, or to provide an exact dose that isn't otherwise available. This kind of patient-specific compounding, according to a prescriber's specifications, is referred to as "traditional" compounding. The nature of patient need for such customization can range from absolute necessity (e.g. avoiding allergy) to individual optimality (e.g. ideal dose level) to even preference (e.g. flavor or texture). Hospital pharmacies typically engage in compounding medications for intravenous administration, whereas outpatient or community pharmacies typically engage in compounding medications for oral or topical administration. Due to the rising cost of compounding and drug shortages, some hospitals outsource their compounding needs to large-scale compounding pharmacies, particularly of sterile-injectable medications. Compounding preparations of a given formulation in advance batches, as opposed to preparation for a specific patient on demand, is known as "non-traditional" compounding and is akin to small-scale manufacturing. Jurisdictions have varying regulations that apply to drug manufacturers and pharmacies that do advance bulk compounding. History The earliest chemists were familiar with various natural substances and their uses. They compounded a variety of preparations such as medications, dyes, incense, perfumes, ceremonial compounds, preservatives and cosmetics. In the medieval Islamic world in particular, Muslim pharmacists and chemists developed advanced methods of compounding drugs. The first drugstores were opened by Muslim pharmacists in Baghdad in 754. The modern age of pharmacy compounding began in the 19th century with the isolation of various compounds from coal tar for the purpose of producing synthetic dyes. From this came the earliest antibacterial sulfa drugs, phenolic compounds made famous by Joseph Lister, and plastics. During the 1800s, pharmacists specialized in the raising, preparation and compounding of crude drugs. Crude drugs, like opium, are from natural sources and usually contain several chemical compounds. The pharmacist extracted these drugs using solvents such as water or alcohol to form extracts, concoctions and decoctions. They eventually began isolating and identifying the active ingredients in these drug concoctions. Using fractionation or recrystallization, they separated an active ingredient from the crude preparation, and compounded a medication using this active ingredient. With the isolation of medications from the raw materials or crude drugs came the birth of the modern pharmaceutical company. Pharmacists were trained to compound the preparations made by the drug companies, but they could not do it efficiently on a small scale. So economies of scale, not lack of skill or knowledge, produced the modern pharmaceutical industry. With the turn of the 20th century came greater government regulation of the practice of medicine. These new regulations forced the drug companies to prove that any new medication they brought to market was safe. With the discovery of penicillin, modern marketing techniques and brand promotion, the drug manufacturing industry came of age. 
Pharmacists continued to compound most prescriptions until the early 1950s when the majority of dispensed drugs came directly from the large pharmaceutical companies. Roles A physician may choose to prescribe a compounded medication for a patient with an unusual health need that cannot be met with commercially manufactured products. The physician may choose to prescribe a compounded medication for reasons such as Patients requiring an individualized compounded formulation to be developed by the pharmacist Patients who cannot take commercially prepared prescriptions of a drug Patients requiring limited dosage strengths, such as a very small dose for infants Patients requiring a different formulation, such as turning a pill into a liquid or transdermal gel for people who cannot swallow pills due to disability Patients requiring an allergen-free medication, such as one without gluten or colored dyes Patients who absorb or excrete medications abnormally Patients who need drugs that have been discontinued by pharmaceutical manufacturers because of low profitability Patients facing a supply shortage of their normal drug Children who want flavored additives in liquid drugs, usually so that the medication tastes like candy or fruit Veterinary medicine, for a change in dose, change to a more easily administered form (such as from a pill to a liquid or transdermal gel), or to add a flavor more palatable to the animal. In the United States, compounded veterinary medicine must meet the standards set forth in the Animal Medicinal Drug Use Clarification Act (AMDUCA) Many types of bioidentical hormone replacement therapy Patients who require multiple medications combined in various doses IV compounding in hospitals In hospitals, pharmacists and pharmacy technicians often make compounded sterile preparations (CSPs) using manual methods. The error rate for manually compounded sterile IV products is high. The Institute for Safe Medication Practices (ISMP) has expressed concern with manual methods, particularly the error-prone nature of the syringe pull-back method of verifying sterile preparations. To increase accuracy, some U.S. hospitals have adopted IV workflow management systems and robotic compounding systems. These technologies use barcode scanning to identify each ingredient and gravimetric weight measurement to confirm the proper dose amount. The workflow management systems incorporate software to guide pharmacy technicians through the process of preparing IV medications. The robotic systems prepare IV syringes and bags in an ISO Class 5 environment, and support sterility and dose accuracy by removing human error and contamination from the process. Regulation in Australia In Australia the Pharmacy Board of Australia is responsible for registration of pharmacists and professional practice including compounding. Although almost all pharmacies are able to prepare at least simple compounded medicines, some pharmacy staff undertake further training and education to be able to prepare more complex products. Although pharmacists who have undertaken further training to do complex compounding are not yet easily identified, the Board has been working to put a credentialing system in place. In 2011 the Pharmacy Board convened a Compounding Working Party to advise on revised compounding standards. Draft compounding guidelines for comment were released in April 2014. Pharmacists must comply with current guidelines or may be sanctioned by the Board. 
Both sterile and non-sterile compounding are legal provided the compounding is done for therapeutic use in a particular patient, and the compounded product is supplied on or from the compounding pharmacy. There are additional requirements for sterile compounding. Not only must a laminar flow cabinet (laminar flow hood) be used, but the environment in which the hood is located must be strictly controlled for microbial and particulate contamination, and all procedures, equipment and personnel must be validated to ensure the safe preparation of sterile products. In non-sterile compounding, a powder containment hood is required when hazardous materials (e.g. hormones) are prepared or when there is a risk of cross-contamination of the compounded product. Pharmacists preparing compounded products must comply with these requirements and others published in the Australian Pharmaceutical Formulary & Handbook. Regulation in the United States In the United States, compounding pharmacies are licensed and regulated by states. National standards have been created by the Pharmacy Compounding Accreditation Board (PCAB); however, obtaining accreditation is not mandatory, and inspections for compliance occur only every three years. The Food and Drug Administration (FDA) has authority to regulate "manufacturing" of pharmaceutical products, which applies when drug products are not made or modified so as to be tailored in some way to the individual patient, regardless of whether this is done at a factory or at a pharmacy. In the Drug Quality and Security Act (DQSA) of 2013 (H.R. 3204), Congress amended the Federal Food, Drug, and Cosmetic Act (FFDCA) to clarify the limits of FDA jurisdiction over patient-specific compounding, and to provide an optional pathway for "non-traditional" or bulk compounders to operate. The law established that pharmacies compounding only "patient-specific" preparations made in response to a prescription (503A pharmacies) cannot be required to obtain FDA approval for such products, as they will remain exclusively under state-level pharmacy regulation. At the same time, section 503B of the law regulates "outsourcing facilities" which conduct bulk compounding or are used as outsourcing for compounding by other pharmacies. These outsourcing facilities can be explicitly authorized by the Food and Drug Administration under specified circumstances, while being exempted from certain requirements otherwise imposed on mass-producers. In any pharmacy, compounding is not permitted for a drug product that is "essentially a copy" of a mass-produced drug product; however, outsourcing pharmacies are subject to a broader definition of "essentially a copy". For traditional/patient-specific compounding, 503A's definition of "copy" retains its original focus on drug products or ultimate dosage forms rather than drug substances or active ingredients, and in any event it explicitly excludes from its definition any compounded drug product that a given patient's prescribing practitioner determines makes a "significant difference" for the patient. 
The FDA weighs the following factors in deciding whether it has authority to "exercise its discretion" to require approval for a custom-compounded drug product: Compounding in anticipation of receiving prescriptions Compounding drugs removed from the market for safety reasons Compounding from bulk ingredients not approved by FDA Receiving, storing, or using drugs not made in an FDA-registered facility Receiving, storing, or using drugs' components not determined to meet compendia requirements Using commercial-scale manufacturing or testing equipment Compounding for third parties for resale Compounding drugs that are essentially the same as commercially available products Failing to operate in conformance with applicable state law Outsourcing facilities The DQSA amended the FFDCA to create a new class of FDA-regulated entities known as "outsourcing facilities" whose compounding activities "may or may not" be patient-specific based on individualized prescriptions. Registered outsourcing facilities, unlike traditional compounding facilities, are subject to the FDA's oversight. In addition to being subjected to Food and Drug Administration inspections, registration, fees, and specified reporting requirements, other requirements of outsourcing facilities include: Drugs are compounded by or under the direct supervision of a licensed pharmacist The facility does not compound using "bulk drug substances" (unless certain exceptions apply) and its drugs are manufactured by an FDA-registered establishment Other ingredients used in compounding the drug must comply with the standards of the applicable United States Pharmacopeia or National Formulary monograph, if a monograph exists The drug does not appear on a list published by FDA of unsafe or ineffective drugs The drug is not "essentially a copy" of one or more marketed drugs (as defined uniquely in section 503B, notably more broadly and with narrower exclusions than for "traditional" compounding) The drug does not appear on the FDA list of drugs or categories of drugs that present "demonstrable difficulties" for compounding The compounding pharmacist demonstrates that he or she will use controls comparable to the controls applicable under any applicable risk evaluation and mitigation strategy (REMS) The drug will not be sold or transferred by an entity other than the outsourcing facility The label of the drug states that it is a compounded drug, as well as the name of the outsourcing facility, the lot or batch number of the drug, dosage form and strength, and other key information Drug testing and reporting of incidents Poor practices on the part of drug compounders can result in contamination of products, or products that do not meet their stated strength, purity, or quality. Unless a complaint is filed or a patient is harmed, drugs made by compounders are seldom tested. In Texas, one of only two states that does random testing, significant problems have been found. Random tests by the state's pharmacy board over the last several years have found that as many as one in four compounded drugs was either too weak or too strong. In Missouri, the only other state that does testing, potency varied by as much as 300 percent. 
In 2002, the Food and Drug Administration, concerned about the rising number of accidents related to compounded medications, identified "red flag" factors and issued a guide devoted to human pharmacy compounding. These factors include instances where pharmacists are: Compounding drug products that have been pulled from the market because they were found to be unsafe or ineffective Compounding drugs that are essentially copies of a commercially available drug product Compounding drugs in advance of receiving prescriptions, except in very limited quantities relating to the amounts of drugs previously compounded based on valid prescriptions Compounding finished drugs from bulk active ingredients that aren't components of FDA-approved drugs, without an FDA-sanctioned, investigational new-drug application Receiving, storing, or using drug substances without first obtaining written assurance from the supplier that each lot of the drug substance has been made in an FDA-registered facility Failing to conform to applicable state law regulating the practice of pharmacy New England Compounding Center incident In October 2012, news reports surfaced of an outbreak of fungal meningitis tied to the New England Compounding Center, a pharmacy which engaged in bulk compounding. At that time it was also disclosed that the United States and Massachusetts state health regulators were aware in 2002 that steroid treatments from the New England Compounding Center could cause adverse patient reactions. It was further disclosed that in 2001–02, four people died, more than a dozen were injured and hundreds exposed after they received back-pain injections tainted with a common fungus dispensed by two compounding pharmacies in California and South Carolina. In August 2013, further reports tied to the New England Compounding Center said that about 750 people were sickened, 63 of whom died, and that the infections were linked to more than 17,600 doses of methylprednisolone acetate steroid injections used to treat back and joint pain that were shipped to 23 states. At that time, another incident was reported after at least 15 people at two Texas hospitals developed bacterial infections. All lots of medications dispensed since May 9, 2013, made by Specialty Compounding, LLC of Cedar Park, Texas, were recalled. The hospitals reported as affected were Corpus Christi Medical Center Bay Area and Corpus Christi Medical Center Doctors Regional. The patients had received intravenous infusions of calcium gluconate, a drug used to treat calcium deficiencies and excess potassium in the blood. Implicated in these cases is the bacterium Rhodococcus, which can cause symptoms such as fever and pain. 
An FDA spokesperson stated, "The methods of these companies seem far more consistent with those of drug manufacturers than with those of retail pharmacies. Some firms make large amounts of compounded drugs that are copies or near copies of FDA-approved, commercially available drugs. Other firms sell to physicians and patients with whom they have only a remote professional relationship." The head of the FDA has also requested additional regulatory authority from Congress. Various ideas have been proposed to expand federal US regulation in this area, including laws making it easier to identify misuse or misnamed use, and/or stricter enforcement of the longstanding distinction between compounding and manufacturing. Some US states have also taken initiatives to strengthen oversight of compounding pharmacies. A major source of opposition to new Food and Drug Administration regulation on compounding is makers of dietary supplements. See also Apothecary - the ancestral practitioner of compounding, and their shop Bioidentical hormone replacement therapy - Compounding is involved in the surrounding controversy New England Compounding Center meningitis outbreak Professional Compounding Centers of America References External links International Academy of Compounding Pharmacists International Journal of Pharmaceutical Compounding Drug Compounding: FDA Authority and Possible Issues for Congress from the Congressional Research Service and Federation of American Scientists Pharmacy
Compounding
[ "Chemistry" ]
3,464
[ "Pharmacology", "Pharmacy" ]
7,521,471
https://en.wikipedia.org/wiki/Optical%20black%20hole
An optical black hole is a phenomenon in which slow light is passed through a Bose–Einstein condensate that is itself spinning faster than the local speed of light within it, creating a vortex capable of trapping the light behind an event horizon just as a gravitational black hole would. Unlike other black hole analogs, such as a sonic black hole in a Bose–Einstein condensate, a slow-light black hole analog is not expected to mimic the quantum effects of a black hole, and thus is not expected to emit Hawking radiation. It does, however, mimic the classical properties of a gravitational black hole, making it potentially useful for studying other properties of black holes. More recently, some physicists have developed a fiber-optic-based system which they believe will emit Hawking radiation. See also Sonic black hole Analog models of gravity Notes Black holes
Optical black hole
[ "Physics", "Astronomy" ]
167
[ "Physical phenomena", "Black holes", "Physical quantities", "Unsolved problems in physics", "Astronomy stubs", "Astrophysics", "Stellar astronomy stubs", "Astrophysics stubs", "Density", "Relativity stubs", "Theory of relativity", "Stellar phenomena", "Astronomical objects" ]