Dataset columns: id (int64, 39 to 79M) · url (string, 32 to 168 chars) · text (string, 7 to 145k chars) · source (string, 2 to 105 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 32.2k) · subcategories (list, 0 to 27 items)
9,432,308
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic
Mothers against decapentaplegic is a protein from the SMAD family that was discovered in Drosophila. During Drosophila research, it was found that a mutation in the gene in the mother repressed the gene decapentaplegic in the embryo. The phrase "Mothers against" was added as a humorous take-off on organizations opposing various issues, e.g. Mothers Against Drunk Driving (MADD), and follows a tradition of such unusual naming within the gene research community. Several human homologues are known: Mothers against decapentaplegic homolog 1, Mothers against decapentaplegic homolog 2, Mothers against decapentaplegic homolog 3, Mothers against decapentaplegic homolog 4, Mothers against decapentaplegic homolog 5, Mothers against decapentaplegic homolog 6, Mothers against decapentaplegic homolog 7, and Mothers against decapentaplegic homolog 9. References Proteins SMAD (protein)
Mothers against decapentaplegic
[ "Chemistry" ]
198
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
9,432,643
https://en.wikipedia.org/wiki/RNA-induced%20transcriptional%20silencing
RNA-induced transcriptional silencing (RITS) is a form of RNA interference by which short RNA molecules – such as small interfering RNA (siRNA) – trigger the downregulation of transcription of a particular gene or genomic region. This is usually accomplished by posttranslational modification of histone tails (e.g. methylation of lysine 9 of histone H3), which targets the genomic region for heterochromatin formation. The protein complex that binds to siRNAs and interacts with the methylated lysine 9 residue of histone H3 (H3K9me2) is the RITS complex. RITS was discovered in the fission yeast Schizosaccharomyces pombe, and has been shown to be involved in the initiation and spreading of heterochromatin in the mating-type region and in centromere formation. The RITS complex in S. pombe contains at least a piwi domain-containing RNase H-like argonaute, a chromodomain protein Chp1, and an argonaute-interacting protein Tas3 which can also bind to Chp1, while heterochromatin formation has been shown to require at least argonaute and an RNA-dependent RNA polymerase. Loss of these genes in S. pombe results in abnormal heterochromatin organization and impairment of centromere function, resulting in lagging chromosomes during anaphase of cell division. Function and mechanisms The maintenance of heterochromatin regions by RITS complexes has been described as a self-reinforcing feedback loop, in which RITS complexes stably bind the methylated histones of a heterochromatin region using the Chp1 protein and induce co-transcriptional degradation of any nascent messenger RNA (mRNA) transcripts, which are then used as RNA-dependent RNA polymerase substrates to replenish the complement of siRNA molecules to form more RITS complexes. The RITS complex localizes to heterochromatic regions through base pairing with nascent heterochromatic transcripts as well as through the Chp1 chromodomain, which recognizes methylated histones found in heterochromatin. Once incorporated into the heterochromatin, the RITS complex is also known to play a role in the recruitment of other RNAi complexes as well as other chromatin-modifying enzymes to specific genomic regions. Heterochromatin formation, but possibly not maintenance, is dependent on the ribonuclease protein dicer, which is used to generate the initial complement of siRNAs. Importance in other species The relevance of observations from fission yeast mating-type regions and centromeres to mammals is not clear, as some evidence suggests that heterochromatin maintenance in mammalian cells is independent of the components of the RNAi pathway. It is known, however, that plants and animals have analogous mechanisms for small RNA-guided heterochromatin formation, and it is believed that the mechanisms described above for S. pombe are highly conserved and play some role in heterochromatin formation in mammals as well. In higher eukaryotes, RNAi-dependent heterochromatic silencing appears to play a larger role in germline cells than in primary cells or cell lines, and is only one of the many different forms of gene silencing used throughout the genome, making it more difficult to study. The role of RNAi in transcriptional gene silencing in plants has been characterized fairly well, and functions primarily through DNA methylation via the RdDM pathway. In this process, which is distinct from the process described above, argonaute-bound siRNA recognizes nascent RNA transcripts or the target DNA to guide the methylation and silencing of the target genomic region. 
References Gene expression RNA
RNA-induced transcriptional silencing
[ "Chemistry", "Biology" ]
804
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
596,405
https://en.wikipedia.org/wiki/Collider
A collider is a type of particle accelerator that brings two opposing particle beams together such that the particles collide. Compared to other particle accelerators, in which the moving particles collide with a stationary matter target, colliders can achieve higher collision energies. Colliders may either be ring accelerators or linear accelerators. Colliders are used as a research tool in particle physics by accelerating particles to very high kinetic energy and letting them impact other particles. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. These may become apparent only at high energies and for extremely short periods of time, and therefore may be hard or impossible to study in other ways. Explanation In particle physics one gains knowledge about elementary particles by accelerating particles to very high kinetic energy and guiding them to collide with other particles. For sufficiently high energy, a reaction occurs that transforms the particles into other particles. Detecting these products gives insight into the physics involved. To do such experiments there are two possible setups: Fixed target setup: A beam of particles (the projectiles) is accelerated with a particle accelerator, and as collision partner, one puts a stationary target into the path of the beam. Collider: Two beams of particles are accelerated and the beams are directed against each other, so that the particles collide while flying in opposite directions. The collider setup is harder to construct but has the great advantage that according to special relativity the energy of an inelastic collision between two particles approaching each other with a given velocity is not just 4 times as high as in the case of one particle resting (as it would be in non-relativistic physics); it can be orders of magnitude higher if the collision velocity is near the speed of light. In the case of a collider where the collision point is at rest in the laboratory frame (i.e. the total momentum p_1 + p_2 = 0), the center of mass energy E_cm (the energy available for producing new particles in the collision) is simply E_cm = E_1 + E_2, where E_1 and E_2 are the total energies of a particle from each beam. For a fixed target experiment where particle 2 is at rest, E_cm = sqrt(m_1^2 c^4 + m_2^2 c^4 + 2 E_1 m_2 c^2). History The first serious proposal for a collider originated with a group at the Midwestern Universities Research Association (MURA). This group proposed building two tangent radial-sector FFAG accelerator rings. Tihiro Ohkawa, one of the authors of the first paper, went on to develop a radial-sector FFAG accelerator design that could accelerate two counterrotating particle beams within a single ring of magnets. The third FFAG prototype built by the MURA group was a 50 MeV electron machine, completed in 1961 to demonstrate the feasibility of this concept. Gerard K. O'Neill proposed using a single accelerator to inject particles into a pair of tangent storage rings. As in the original MURA proposal, collisions would occur in the tangent section. The benefit of storage rings is that the storage ring can accumulate a high beam flux from an injection accelerator that achieves a much lower flux. The first electron-positron colliders were built in the late 1950s and early 1960s in Italy, at the Istituto Nazionale di Fisica Nucleare in Frascati near Rome, by the Austrian-Italian physicist Bruno Touschek, and in the US by the Stanford-Princeton team that included William C. Barber, Bernard Gittelman, Gerry O'Neill, and Burton Richter. 
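As a numerical illustration of the center-of-mass energy formulas above, here is a small sketch (not part of the original article; the beam energy and masses are example values only):

```python
import math

def ecm_collider(e_beam):
    """Center-of-mass energy for two identical beams of energy e_beam (GeV)
    colliding head-on (total momentum zero): E_cm = E_1 + E_2 = 2 * e_beam."""
    return 2.0 * e_beam

def ecm_fixed_target(e1, m1, m2):
    """Center-of-mass energy when particle 1 (total energy e1, mass m1)
    hits a stationary particle of mass m2; all values in GeV with c = 1."""
    return math.sqrt(m1**2 + m2**2 + 2.0 * e1 * m2)

# Example: 6.5 TeV proton beams (proton mass ~0.938 GeV)
e_beam, m_p = 6500.0, 0.938
print(ecm_collider(e_beam))              # ~13000 GeV (13 TeV) in collider mode
print(ecm_fixed_target(e_beam, m_p, m_p))  # only ~110 GeV on a fixed target
```

For the same beam energy, the collider center-of-mass energy grows linearly with the beam energy while the fixed-target value grows only as its square root, which is the advantage described above.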
Around the same time, the VEP-1 electron-electron collider was independently developed and built under the supervision of Gersh Budker at the Institute of Nuclear Physics in Novosibirsk, USSR. The first observations of particle reactions in the colliding beams were reported almost simultaneously by the three teams in mid-1964 to early 1965. In 1966, work began on the Intersecting Storage Rings at CERN, and in 1971, this collider was operational. The ISR was a pair of storage rings that accumulated and collided protons injected by the CERN Proton Synchrotron. This was the first hadron collider, as all of the earlier efforts had worked with electrons or with electrons and positrons. In 1968 construction began on the highest-energy proton accelerator complex of its time, at Fermilab. It was eventually upgraded to become the Tevatron collider, and in October 1985 the first proton-antiproton collisions were recorded at a center of mass energy of 1.6 TeV, making it the highest-energy collider in the world at the time. The energy was later raised to 1.96 TeV, and by the end of operation in 2011 the collider luminosity exceeded 430 times its original design goal. Since 2009, the highest-energy collider in the world has been the Large Hadron Collider (LHC) at CERN. It currently operates at 13 TeV center of mass energy in proton-proton collisions. More than a dozen future particle collider projects of various types - circular and linear, colliding hadrons (proton-proton or ion-ion), leptons (electron-positron or muon-muon), or electrons and ions/protons - are currently under consideration for detailed exploration of Higgs/electroweak physics and discoveries at the post-LHC energy frontier. Operating colliders Sources: Information was taken from the website of the Particle Data Group. See also List of colliders Fixed-target experiment Large Electron–Positron Collider Large Hadron Collider Very Large Hadron Collider Relativistic Heavy Ion Collider International Linear Collider Storage ring Tevatron International Conference on Photonic, Electronic and Atomic Collisions Future Circular Collider References External links LHC - The Large Hadron Collider on the web The Relativistic Heavy Ion Collider (RHIC) Accelerator physics
Collider
[ "Physics" ]
1,210
[ "Accelerator physics", "Applied and interdisciplinary physics", "Experimental physics" ]
596,622
https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli%20theorem
The Arzelà–Ascoli theorem is a fundamental result of mathematical analysis giving necessary and sufficient conditions to decide whether every sequence of a given family of real-valued continuous functions defined on a closed and bounded interval has a uniformly convergent subsequence. The main condition is the equicontinuity of the family of functions. The theorem is the basis of many proofs in mathematics, including that of the Peano existence theorem in the theory of ordinary differential equations, Montel's theorem in complex analysis, and the Peter–Weyl theorem in harmonic analysis, and various results concerning compactness of integral operators. The notion of equicontinuity was introduced in the late 19th century by the Italian mathematicians Cesare Arzelà and Giulio Ascoli. A weak form of the theorem was proven by Ascoli, who established the sufficient condition for compactness, and by Arzelà, who established the necessary condition and gave the first clear presentation of the result. A further generalization of the theorem was proven by Fréchet, to sets of real-valued continuous functions with domain a compact metric space. Modern formulations of the theorem allow for the domain to be compact Hausdorff and for the range to be an arbitrary metric space. More general formulations of the theorem exist that give necessary and sufficient conditions for a family of functions from a compactly generated Hausdorff space into a uniform space to be compact in the compact-open topology. Statement and first consequences By definition, a sequence {f_n} of continuous functions on an interval I = [a, b] is uniformly bounded if there is a number M such that |f_n(x)| ≤ M for every function f_n belonging to the sequence, and every x ∈ [a, b]. (Here, M must be independent of n and x.) The sequence is said to be uniformly equicontinuous if, for every ε > 0, there exists a δ > 0 such that |f_n(x) − f_n(y)| < ε whenever |x − y| < δ, for all functions f_n in the sequence. (Here, δ may depend on ε, but not on x, y or n.) One version of the theorem can be stated as follows: Consider a sequence {f_n} of real-valued continuous functions defined on a closed and bounded interval [a, b] of the real line. If this sequence is uniformly bounded and uniformly equicontinuous, then there exists a subsequence {f_{n_k}} that converges uniformly. The converse is also true, in the sense that if every subsequence of {f_n} itself has a uniformly convergent subsequence, then {f_n} is uniformly bounded and equicontinuous. Immediate examples Differentiable functions The hypotheses of the theorem are satisfied by a uniformly bounded sequence {f_n} of differentiable functions with uniformly bounded derivatives. Indeed, uniform boundedness of the derivatives implies by the mean value theorem that |f_n(x) − f_n(y)| ≤ K |x − y| for all x, y and n, where K is the supremum of the derivatives of functions in the sequence and is independent of n. So, given ε > 0, let δ = ε / K to verify the definition of equicontinuity of the sequence. This proves the following corollary: Let {f_n} be a uniformly bounded sequence of real-valued differentiable functions on [a, b] such that the derivatives {f_n'} are uniformly bounded. Then there exists a subsequence {f_{n_k}} that converges uniformly on [a, b]. If, in addition, the sequence of second derivatives is also uniformly bounded, then the derivatives also converge uniformly (up to a subsequence), and so on. Another generalization holds for continuously differentiable functions. Suppose that the functions f_n are continuously differentiable with derivatives f_n'. Suppose that the derivatives f_n' are uniformly equicontinuous and uniformly bounded, and that the sequence {f_n} is pointwise bounded (or just bounded at a single point). 
Then there is a subsequence of the converging uniformly to a continuously differentiable function. The diagonalization argument can also be used to show that a family of infinitely differentiable functions, whose derivatives of each order are uniformly bounded, has a uniformly convergent subsequence, all of whose derivatives are also uniformly convergent. This is particularly important in the theory of distributions. Lipschitz and Hölder continuous functions The argument given above proves slightly more, specifically If is a uniformly bounded sequence of real valued functions on such that each fn is Lipschitz continuous with the same Lipschitz constant : for all and all , then there is a subsequence that converges uniformly on . The limit function is also Lipschitz continuous with the same value for the Lipschitz constant. A slight refinement is A set of functions on that is uniformly bounded and satisfies a Hölder condition of order , , with a fixed constant , is relatively compact in . In particular, the unit ball of the Hölder space is compact in . This holds more generally for scalar functions on a compact metric space satisfying a Hölder condition with respect to the metric on . Generalizations Euclidean spaces The Arzelà–Ascoli theorem holds, more generally, if the functions take values in -dimensional Euclidean space , and the proof is very simple: just apply the -valued version of the Arzelà–Ascoli theorem times to extract a subsequence that converges uniformly in the first coordinate, then a sub-subsequence that converges uniformly in the first two coordinates, and so on. The above examples generalize easily to the case of functions with values in Euclidean space. Compact metric spaces and compact Hausdorff spaces The definitions of boundedness and equicontinuity can be generalized to the setting of arbitrary compact metric spaces and, more generally still, compact Hausdorff spaces. Let X be a compact Hausdorff space, and let C(X) be the space of real-valued continuous functions on X. A subset is said to be equicontinuous if for every x ∈ X and every , x has a neighborhood Ux such that A set is said to be pointwise bounded if for every x ∈ X, A version of the Theorem holds also in the space C(X) of real-valued continuous functions on a compact Hausdorff space X : Let X be a compact Hausdorff space. Then a subset F of C(X) is relatively compact in the topology induced by the uniform norm if and only if it is equicontinuous and pointwise bounded. The Arzelà–Ascoli theorem is thus a fundamental result in the study of the algebra of continuous functions on a compact Hausdorff space. Various generalizations of the above quoted result are possible. For instance, the functions can assume values in a metric space or (Hausdorff) topological vector space with only minimal changes to the statement (see, for instance, , ): Let X be a compact Hausdorff space and Y a metric space. Then is compact in the compact-open topology if and only if it is equicontinuous, pointwise relatively compact and closed. Here pointwise relatively compact means that for each x ∈ X, the set is relatively compact in Y. In the case that Y is complete, the proof given above can be generalized in a way that does not rely on the separability of the domain. On a compact Hausdorff space X, for instance, the equicontinuity is used to extract, for each ε = 1/n, a finite open covering of X such that the oscillation of any function in the family is less than ε on each open set in the cover. 
The role of the rationals can then be played by a set of points drawn from each open set in each of the countably many covers obtained in this way, and the main part of the proof proceeds exactly as above. A similar argument is used as a part of the proof for the general version which does not assume completeness of Y. Functions on non-compact spaces The Arzela-Ascoli theorem generalises to functions where is not compact. Particularly important are cases where is a topological vector space. Recall that if is a topological space and is a uniform space (such as any metric space or any topological group, metrisable or not), there is the topology of compact convergence on the set of functions ; it is set up so that a sequence (or more generally a filter or net) of functions converges if and only if it converges uniformly on each compact subset of . Let be the subspace of consisting of continuous functions, equipped with the topology of compact convergence. Then one form of the Arzelà-Ascoli theorem is the following: Let be a topological space, a Hausdorff uniform space and an equicontinuous set of continuous functions such that is relatively compact in for each . Then is relatively compact in . This theorem immediately gives the more specialised statements above in cases where is compact and the uniform structure of is given by a metric. There are a few other variants in terms of the topology of precompact convergence or other related topologies on . It is also possible to extend the statement to functions that are only continuous when restricted to the sets of a covering of by compact subsets. For details one can consult Bourbaki (1998), Chapter X, § 2, nr 5. Non-continuous functions Solutions of numerical schemes for parabolic equations are usually piecewise constant, and therefore not continuous, in time. As their jumps nevertheless tend to become small as the time step goes to , it is possible to establish uniform-in-time convergence properties using a generalisation to non-continuous functions of the classical Arzelà–Ascoli theorem (see e.g. ). Denote by the space of functions from to endowed with the uniform metric Then we have the following: Let be a compact metric space and a complete metric space. Let be a sequence in such that there exists a function and a sequence satisfying Assume also that, for all , is relatively compact in . Then is relatively compact in , and any limit of in this space is in . Necessity Whereas most formulations of the Arzelà–Ascoli theorem assert sufficient conditions for a family of functions to be (relatively) compact in some topology, these conditions are typically also necessary. For instance, if a set F is compact in C(X), the Banach space of real-valued continuous functions on a compact Hausdorff space with respect to its uniform norm, then it is bounded in the uniform norm on C(X) and in particular is pointwise bounded. Let N(ε, U) be the set of all functions in F whose oscillation over an open subset U ⊂ X is less than ε: For a fixed x∈X and ε, the sets N(ε, U) form an open covering of F as U varies over all open neighborhoods of x. Choosing a finite subcover then gives equicontinuity. Further examples To every function that is -integrable on , with , associate the function defined on by Let be the set of functions corresponding to functions in the unit ball of the space . If is the Hölder conjugate of , defined by , then Hölder's inequality implies that all functions in satisfy a Hölder condition with and constant . It follows that is compact in . 
This means that the correspondence defines a compact linear operator between the Banach spaces and . Composing with the injection of into , one sees that acts compactly from to itself. The case can be seen as a simple instance of the fact that the injection from the Sobolev space into , for a bounded open set in , is compact. When is a compact linear operator from a Banach space to a Banach space , its transpose is compact from the (continuous) dual to . This can be checked by the Arzelà–Ascoli theorem. Indeed, the image of the closed unit ball of is contained in a compact subset of . The unit ball of defines, by restricting from to , a set of (linear) continuous functions on that is bounded and equicontinuous. By Arzelà–Ascoli, for every sequence in , there is a subsequence that converges uniformly on , and this implies that the image of that subsequence is Cauchy in . When is holomorphic in an open disk , with modulus bounded by , then (for example by Cauchy's formula) its derivative has modulus bounded by in the smaller disk If a family of holomorphic functions on is bounded by on , it follows that the family of restrictions to is equicontinuous on . Therefore, a sequence converging uniformly on can be extracted. This is a first step in the direction of Montel's theorem. Let be endowed with the uniform metric Assume that is a sequence of solutions of a certain partial differential equation (PDE), where the PDE ensures the following a priori estimates: is equicontinuous for all , is equitight for all , and, for all and all , is small enough when is small enough. Then by the Fréchet–Kolmogorov theorem, we can conclude that is relatively compact in . Hence, we can, by (a generalization of) the Arzelà–Ascoli theorem, conclude that is relatively compact in See also Helly's selection theorem Fréchet–Kolmogorov theorem References . . . . . . . Arzelà-Ascoli theorem at Encyclopaedia of Mathematics Articles containing proofs Compactness theorems Theory of continuous functions Theorems in real analysis Theorems in functional analysis Topology of function spaces
Arzelà–Ascoli theorem
[ "Mathematics" ]
2,719
[ "Compactness theorems", "Theorems in mathematical analysis", "Theory of continuous functions", "Theorems in real analysis", "Theorems in topology", "Theorems in functional analysis", "Topology", "Articles containing proofs" ]
596,706
https://en.wikipedia.org/wiki/Gas%20chromatography
Gas chromatography (GC) is a common type of chromatography used in analytical chemistry for separating and analyzing compounds that can be vaporized without decomposition. Typical uses of GC include testing the purity of a particular substance, or separating the different components of a mixture. In preparative chromatography, GC can be used to prepare pure compounds from a mixture. Gas chromatography is also sometimes known as vapor-phase chromatography (VPC), or gas–liquid partition chromatography (GLPC). These alternative names, as well as their respective abbreviations, are frequently used in scientific literature. Gas chromatography is the process of separating compounds in a mixture by injecting a gaseous or liquid sample into a mobile phase, typically called the carrier gas, and passing the gas through a stationary phase. The mobile phase is usually an inert gas or an unreactive gas such as helium, argon, nitrogen or hydrogen. The stationary phase can be solid or liquid, although most GC systems today use a polymeric liquid stationary phase. The stationary phase is contained inside of a separation column. Today, most GC columns are fused silica capillaries with an inner diameter of a few tenths of a millimeter and a length of tens of meters. The GC column is located inside an oven where the temperature of the gas can be controlled, and the effluent coming off the column is monitored by a suitable detector. Operating principle A gas chromatograph is made of a narrow tube, known as the column, through which the vaporized sample passes, carried along by a continuous flow of inert or nonreactive gas. Components of the sample pass through the column at different rates, depending on their chemical and physical properties and the resulting interactions with the column lining or filling, called the stationary phase. The column is typically enclosed within a temperature-controlled oven. As the chemicals exit the end of the column, they are detected and identified electronically. History Background Chromatography dates to 1903 in the work of the Russian scientist Mikhail Semenovich Tswett, who separated plant pigments via liquid column chromatography. Invention The invention of gas chromatography is generally attributed to Anthony T. James and Archer J.P. Martin. Their gas chromatograph used partition chromatography as the separating principle, rather than adsorption chromatography. The popularity of gas chromatography quickly rose after the development of the flame ionization detector. Martin and his colleague Richard Synge, with whom he shared the 1952 Nobel Prize in Chemistry, had noted in an earlier paper that chromatography might also be used to separate gases. Synge pursued other work while Martin continued his work with James. Gas adsorption chromatography precursors The German physical chemist Erika Cremer, together with Austrian graduate student Fritz Prior, developed in 1947 what could be considered the first gas chromatograph, consisting of a carrier gas, a column packed with silica gel, and a thermal conductivity detector. They exhibited the chromatograph at ACHEMA in Frankfurt, but nobody was interested in it. N.C. Turner with the Burrell Corporation introduced in 1943 a massive instrument that used a charcoal column and mercury vapors. Stig Claesson of Uppsala University published in 1946 his work on a charcoal column that also used mercury. 
Gerhard Hesse, while a professor at the University of Marburg/Lahn, decided to test the prevailing opinion among German chemists that molecules could not be separated in a moving gas stream. He set up a simple glass column filled with starch and successfully separated bromine and iodine using nitrogen as the carrier gas. He then built a system that flowed an inert gas through a glass condenser packed with silica gel and collected the eluted fractions. Courtenay S. G. Phillips of Oxford University investigated separation in a charcoal column using a thermal conductivity detector. He consulted with Claesson and decided to use displacement as his separating principle. After learning about the results of James and Martin, he switched to partition chromatography. Column technology Early gas chromatography used packed columns, typically 1–5 m long and 1–5 mm in diameter, filled with particles. The resolution of packed columns was improved by the invention of the capillary column, in which the stationary phase is coated on the inner wall of the capillary. Physical components Autosamplers The autosampler provides the means to introduce a sample automatically into the inlets. Manual insertion of the sample is possible but is no longer common. Automatic insertion provides better reproducibility and time-optimization. Different kinds of autosamplers exist. Autosamplers can be classified in relation to sample capacity (auto-injectors vs. autosamplers, where auto-injectors can handle a small number of samples), to robotic technologies (XYZ robot vs. rotating robot – the most common), or to analysis: Liquid Static head-space by syringe technology Dynamic head-space by transfer-line technology Solid phase microextraction (SPME) Inlets The column inlet (or injector) provides the means to introduce a sample into a continuous flow of carrier gas. The inlet is a piece of hardware attached to the column head. Common inlet types are: S/SL (split/splitless) injector – a sample is introduced into a heated small chamber via a syringe through a septum – the heat facilitates volatilization of the sample and sample matrix. The carrier gas then either sweeps the entirety (splitless mode) or a portion (split mode) of the sample into the column. In split mode, a part of the sample/carrier gas mixture in the injection chamber is exhausted through the split vent. Split injection is preferred when working with samples with high analyte concentrations (>0.1%) whereas splitless injection is best suited for trace analysis with low amounts of analytes (<0.01%). In splitless mode the split valve opens after a pre-set amount of time to purge heavier elements that would otherwise contaminate the system. This pre-set (splitless) time should be optimized: a shorter time (e.g., 0.2 min) ensures less tailing but a loss in response, while a longer time (e.g., 2 min) increases tailing but also the signal. On-column inlet – the sample is here introduced directly into the column in its entirety without heat, or at a temperature below the boiling point of the solvent. The low temperature condenses the sample into a narrow zone. The column and inlet can then be heated, releasing the sample into the gas phase. This ensures the lowest possible temperature for chromatography and keeps samples from decomposing above their boiling point. PTV injector – Temperature-programmed sample introduction was first described by Vogt in 1979. Originally Vogt developed the technique as a method for the introduction of large sample volumes (up to 250 μL) in capillary GC. 
Vogt introduced the sample into the liner at a controlled injection rate. The temperature of the liner was chosen slightly below the boiling point of the solvent. The low-boiling solvent was continuously evaporated and vented through the split line. Based on this technique, Poy developed the programmed temperature vaporising injector; PTV. By introducing the sample at a low initial liner temperature many of the disadvantages of the classic hot injection techniques could be circumvented. Gas source inlet or gas switching valve – gaseous samples in collection bottles are connected to what is most commonly a six-port switching valve. The carrier gas flow is not interrupted while a sample can be expanded into a previously evacuated sample loop. Upon switching, the contents of the sample loop are inserted into the carrier gas stream. P/T (purge-and-trap) system – An inert gas is bubbled through an aqueous sample causing insoluble volatile chemicals to be purged from the matrix. The volatiles are 'trapped' on an absorbent column (known as a trap or concentrator) at ambient temperature. The trap is then heated and the volatiles are directed into the carrier gas stream. Samples requiring preconcentration or purification can be introduced via such a system, usually hooked up to the S/SL port. The choice of carrier gas (mobile phase) is important. Hydrogen has a range of flow rates that are comparable to helium in efficiency. However, helium may be more efficient and provide the best separation if flow rates are optimized. Helium is non-flammable and works with a greater number of detectors and older instruments. Therefore, helium is the most common carrier gas used. However, the price of helium has gone up considerably over recent years, causing an increasing number of chromatographers to switch to hydrogen gas. Historical use, rather than rational consideration, may contribute to the continued preferential use of helium. Detectors Commonly used detectors are the flame ionization detector (FID) and the thermal conductivity detector (TCD). While TCDs are beneficial in that they are non-destructive, its low detection limit for most analytes inhibits widespread use. FIDs are sensitive primarily to hydrocarbons, and are more sensitive to them than TCD. FIDs cannot detect water or carbon dioxide which make them ideal for environmental organic analyte analysis. FID is two to three times more sensitive to analyte detection than TCD. The TCD relies on the thermal conductivity of matter passing around a thin wire of tungsten-rhenium with a current traveling through it. In this set up helium or nitrogen serve as the carrier gas because of their relatively high thermal conductivity which keep the filament cool and maintain uniform resistivity and electrical efficiency of the filament. When analyte molecules elute from the column, mixed with carrier gas, the thermal conductivity decreases while there is an increase in filament temperature and resistivity resulting in fluctuations in voltage ultimately causing a detector response. Detector sensitivity is proportional to filament current while it is inversely proportional to the immediate environmental temperature of that detector as well as flow rate of the carrier gas. In a flame ionization detector (FID), electrodes are placed adjacent to a flame fueled by hydrogen / air near the exit of the column, and when carbon containing compounds exit the column they are pyrolyzed by the flame. 
This detector works only for organic / hydrocarbon containing compounds due to the ability of the carbons to form cations and electrons upon pyrolysis, which generates a current between the electrodes. The increase in current is translated and appears as a peak in a chromatogram. FIDs have low detection limits (a few picograms per second) but they are unable to generate ions from carbonyl containing carbons. FID-compatible carrier gases include helium, hydrogen, nitrogen, and argon. In FID, sometimes the stream is modified before entering the detector. A methanizer converts carbon monoxide and carbon dioxide into methane so that it can be detected. A different technology is the polyarc, by Activated Research Inc, which converts all compounds to methane. Alkali flame detector (AFD) or alkali flame ionization detector (AFID) has high sensitivity to nitrogen and phosphorus, similar to NPD. However, the alkaline metal ions are supplied with the hydrogen gas, rather than a bead above the flame. For this reason AFD does not suffer the "fatigue" of the NPD, but provides a constant sensitivity over a long period of time. In addition, when alkali ions are not added to the flame, AFD operates like a standard FID. A catalytic combustion detector (CCD) measures combustible hydrocarbons and hydrogen. Discharge ionization detector (DID) uses a high-voltage electric discharge to produce ions. Flame photometric detector (FPD) uses a photomultiplier tube to detect spectral lines of the compounds as they are burned in a flame. Compounds eluting off the column are carried into a hydrogen fueled flame which excites specific elements in the molecules, and the excited elements (P, S, halogens, some metals) emit light of specific characteristic wavelengths. The emitted light is filtered and detected by a photomultiplier tube. In particular, phosphorus emission is around 510–536 nm and sulfur emission is at 394 nm. With an atomic emission detector (AED), a sample eluting from a column enters a chamber which is energized by microwaves that induce a plasma. The plasma causes the analyte sample to decompose and certain elements generate an atomic emission spectrum. The atomic emission spectrum is diffracted by a diffraction grating and detected by a series of photomultiplier tubes or photodiodes. Electron capture detector (ECD) uses a radioactive beta particle (electron) source to measure the degree of electron capture. ECDs are used for the detection of molecules containing electronegative / withdrawing elements and functional groups like halogens, carbonyl, nitriles, nitro groups, and organometallics. In this type of detector either nitrogen or 5% methane in argon is used as the mobile phase carrier gas. The carrier gas passes between two electrodes placed at the end of the column, and adjacent to the cathode (negative electrode) resides a radioactive foil such as 63Ni. The radioactive foil emits a beta particle (electron) which collides with and ionizes the carrier gas to generate more ions, resulting in a current. When analyte molecules containing electronegative / withdrawing elements or functional groups pass through the detector, electrons are captured, which results in a decrease in current, generating a detector response. Nitrogen–phosphorus detector (NPD), a form of thermionic detector where nitrogen and phosphorus alter the work function on a specially coated bead and a resulting current is measured. Dry electrolytic conductivity detector (DELCD) uses an air phase and high temperature (v. Coulsen) to measure chlorinated compounds. 
Mass spectrometer (MS), also called GC-MS; highly effective and sensitive, even in a small quantity of sample. This detector can be used to identify the analytes in chromatograms by their mass spectrum. Some GC-MS are connected to an NMR spectrometer which acts as a backup detector. This combination is known as GC-MS-NMR. Some GC-MS-NMR are connected to an infrared spectrophotometer which acts as a backup detector. This combination is known as GC-MS-NMR-IR. It must, however, be stressed this is very rare as most analyses needed can be concluded via purely GC-MS. Vacuum ultraviolet (VUV) represents the most recent development in gas chromatography detectors. Most chemical species absorb and have unique gas phase absorption cross sections in the approximately 120–240 nm VUV wavelength range monitored. Where absorption cross sections are known for analytes, the VUV detector is capable of absolute determination (without calibration) of the number of molecules present in the flow cell in the absence of chemical interferences. Olfactometric detector, also called GC-O, uses a human assessor to analyse the odour activity of compounds. With an odour port or a sniffing port, the quality of the odour, the intensity of the odour and the duration of the odour activity of a compound can be assessed. Other detectors include the Hall electrolytic conductivity detector (ElCD), helium ionization detector (HID), infrared detector (IRD), photo-ionization detector (PID), pulsed discharge ionization detector (PDD), and thermionic ionization detector (TID). Methods The method is the collection of conditions in which the GC operates for a given analysis. Method development is the process of determining what conditions are adequate and/or ideal for the analysis required. Conditions which can be varied to accommodate a required analysis include inlet temperature, detector temperature, column temperature and temperature program, carrier gas and carrier gas flow rates, the column's stationary phase, diameter and length, inlet type and flow rates, sample size and injection technique. Depending on the detector(s) (see below) installed on the GC, there may be a number of detector conditions that can also be varied. Some GCs also include valves which can change the route of sample and carrier flow. The timing of the opening and closing of these valves can be important to method development. Carrier gas selection and flow rates Typical carrier gases include helium, nitrogen, argon, and hydrogen. Which gas to use is usually determined by the detector being used, for example, a DID requires helium as the carrier gas. When analyzing gas samples the carrier is also selected based on the sample's matrix, for example, when analyzing a mixture in argon, an argon carrier is preferred because the argon in the sample does not show up on the chromatogram. Safety and availability can also influence carrier selection. The purity of the carrier gas is also frequently determined by the detector, though the level of sensitivity needed can also play a significant role. Typically, purities of 99.995% or higher are used. The most common purity grades required by modern instruments for the majority of sensitivities are 5.0 grades, or 99.999% pure meaning that there is a total of 10 ppm of impurities in the carrier gas that could affect the results. 
The highest purity grades in common use are 6.0 grades, but the need for detection at very low levels in some forensic and environmental applications has driven the need for carrier gases at 7.0 grade purity and these are now commercially available. Trade names for typical purities include "Zero Grade", "Ultra-High Purity (UHP) Grade", "4.5 Grade" and "5.0 Grade". The carrier gas linear velocity affects the analysis in the same way that temperature does (see above). The higher the linear velocity the faster the analysis, but the lower the separation between analytes. Selecting the linear velocity is therefore the same compromise between the level of separation and length of analysis as selecting the column temperature. The linear velocity will be implemented by means of the carrier gas flow rate, with regards to the inner diameter of the column. With GCs made before the 1990s, carrier flow rate was controlled indirectly by controlling the carrier inlet pressure, or "column head pressure". The actual flow rate was measured at the outlet of the column or the detector with an electronic flow meter, or a bubble flow meter, and could be an involved, time consuming, and frustrating process. It was not possible to vary the pressure setting during the run, and thus the flow was essentially constant during the analysis. The relation between flow rate and inlet pressure is calculated with Poiseuille's equation for compressible fluids. Many modern GCs, however, electronically measure the flow rate, and electronically control the carrier gas pressure to set the flow rate. Consequently, carrier pressures and flow rates can be adjusted during the run, creating pressure/flow programs similar to temperature programs. Stationary compound selection The polarity of the solute is crucial for the choice of stationary compound, which in an optimal case would have a similar polarity as the solute. Common stationary phases in open tubular columns are cyanopropylphenyl dimethyl polysiloxane, carbowax polyethyleneglycol, biscyanopropyl cyanopropylphenyl polysiloxane and diphenyl dimethyl polysiloxane. For packed columns more options are available. Inlet types and flow rates The choice of inlet type and injection technique depends on if the sample is in liquid, gas, adsorbed, or solid form, and on whether a solvent matrix is present that has to be vaporized. Dissolved samples can be introduced directly onto the column via a COC injector, if the conditions are well known; if a solvent matrix has to be vaporized and partially removed, a S/SL injector is used (most common injection technique); gaseous samples (e.g., air cylinders) are usually injected using a gas switching valve system; adsorbed samples (e.g., on adsorbent tubes) are introduced using either an external (on-line or off-line) desorption apparatus such as a purge-and-trap system, or are desorbed in the injector (SPME applications). Sample size and injection technique Sample injection The real chromatographic analysis starts with the introduction of the sample onto the column. The development of capillary gas chromatography resulted in many practical problems with the injection technique. The technique of on-column injection, often used with packed columns, is usually not possible with capillary columns. In the injection system in the capillary gas chromatograph the amount injected should not overload the column and the width of the injected plug should be small compared to the spreading due to the chromatographic process. 
Failure to comply with this latter requirement will reduce the separation capability of the column. As a general rule, the volume injected, Vinj, and the volume of the detector cell, Vdet, should be about 1/10 of the volume occupied by the portion of sample containing the molecules of interest (analytes) when they exit the column. Some general requirements which a good injection technique should fulfill are that it should be possible to obtain the column's optimum separation efficiency, it should allow accurate and reproducible injections of small amounts of representative samples, it should induce no change in sample composition, it should not exhibit discrimination based on differences in boiling point, polarity, concentration or thermal/catalytic stability, and it should be applicable for trace analysis as well as for undiluted samples. However, there are a number of problems inherent in the use of syringes for injection. Even the best syringes claim an accuracy of only 3%, and in unskilled hands, errors are much larger. The needle may cut small pieces of rubber from the septum as it injects sample through it. These can block the needle and prevent the syringe filling the next time it is used. It may not be obvious that this has happened. A fraction of the sample may get trapped in the rubber, to be released during subsequent injections. This can give rise to ghost peaks in the chromatogram. There may be selective loss of the more volatile components of the sample by evaporation from the tip of the needle. Column selection The choice of column depends on the sample and the active measured. The main chemical attribute regarded when choosing a column is the polarity of the mixture, but functional groups can play a large part in column selection. The polarity of the sample must closely match the polarity of the column stationary phase to increase resolution and separation while reducing run time. The separation and run time also depends on the film thickness (of the stationary phase), the column diameter and the column length. Column temperature and temperature program The column(s) in a GC are contained in an oven, the temperature of which is precisely controlled electronically. (When discussing the "temperature of the column," an analyst is technically referring to the temperature of the column oven. The distinction, however, is not important and will not subsequently be made in this article.) The rate at which a sample passes through the column is directly proportional to the temperature of the column. The higher the column temperature, the faster the sample moves through the column. However, the faster a sample moves through the column, the less it interacts with the stationary phase, and the less the analytes are separated. In general, the column temperature is selected to compromise between the length of the analysis and the level of separation. A method which holds the column at the same temperature for the entire analysis is called "isothermal". Most methods, however, increase the column temperature during the analysis, the initial temperature, rate of temperature increase (the temperature "ramp"), and final temperature are called the temperature program. A temperature program allows analytes that elute early in the analysis to separate adequately, while shortening the time it takes for late-eluting analytes to pass through the column. 
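As a small illustration of the temperature-program idea just described, the following sketch computes the oven temperature over time for a simple single-ramp program (illustrative only; the hold time, ramp rate and temperatures are invented example values, not taken from the article):

```python
def oven_temperature(t_min, t_initial=40.0, hold_min=2.0, ramp_c_per_min=10.0, t_final=250.0):
    """Oven temperature (deg C) at time t_min (minutes) for a program that
    holds at t_initial, ramps at ramp_c_per_min, then holds at t_final."""
    if t_min <= hold_min:
        return t_initial
    temp = t_initial + ramp_c_per_min * (t_min - hold_min)
    return min(temp, t_final)

# Print the programmed temperature every 5 minutes over a 30-minute run
for t in range(0, 31, 5):
    print(t, "min ->", oven_temperature(t), "degC")
```

In a real method, the initial temperature, ramp rate, final temperature and hold times are the parameters tuned to trade separation against run time.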
Data reduction and analysis Qualitative analysis Generally, chromatographic data is presented as a graph of detector response (y-axis) against retention time (x-axis), which is called a chromatogram. This provides a spectrum of peaks for a sample representing the analytes present in a sample eluting from the column at different times. Retention time can be used to identify analytes if the method conditions are constant. Also, the pattern of peaks will be constant for a sample under constant conditions and can identify complex mixtures of analytes. However, in most modern applications, the GC is connected to a mass spectrometer or similar detector that is capable of identifying the analytes represented by the peaks. Quantitative analysis The area under a peak is proportional to the amount of analyte present in the chromatogram. By calculating the area of the peak using the mathematical function of integration, the concentration of an analyte in the original sample can be determined. Concentration can be calculated using a calibration curve created by finding the response for a series of concentrations of analyte, or by determining the relative response factor of an analyte. The relative response factor is the expected ratio of an analyte to an internal standard (or external standard) and is calculated by finding the response of a known amount of analyte and a constant amount of internal standard (a chemical added to the sample at a constant concentration, with a distinct retention time to the analyte). In most modern GC-MS systems, computer software is used to draw and integrate peaks, and match MS spectra to library spectra. Applications In general, substances that vaporize below 300 °C (and therefore are stable up to that temperature) can be measured quantitatively. The samples are also required to be salt-free; they should not contain ions. Very minute amounts of a substance can be measured, but it is often required that the sample must be measured in comparison to a sample containing the pure, suspected substance known as a reference standard. Various temperature programs can be used to make the readings more meaningful; for example to differentiate between substances that behave similarly during the GC process. Professionals working with GC analyze the content of a chemical product, for example in assuring the quality of products in the chemical industry; or measuring chemicals in soil, air or water, such as soil gases. GC is very accurate if used properly and can measure picomoles of a substance in a 1 ml liquid sample, or parts-per-billion concentrations in gaseous samples. In practical courses at colleges, students sometimes get acquainted to the GC by studying the contents of lavender oil or measuring the ethylene that is secreted by Nicotiana benthamiana plants after artificially injuring their leaves. These GC analyse hydrocarbons (C2-C40+). In a typical experiment, a packed column is used to separate the light gases, which are then detected with a TCD. The hydrocarbons are separated using a capillary column and detected with a FID. A complication with light gas analyses that include H2 is that He, which is the most common and most sensitive inert carrier (sensitivity is proportional to molecular mass) has an almost identical thermal conductivity to hydrogen (it is the difference in thermal conductivity between two separate filaments in a Wheatstone Bridge type arrangement that shows when a component has been eluted). 
For this reason, dual TCD instruments used with a separate channel for hydrogen that uses nitrogen as a carrier are common. Argon is often used when analysing gas phase chemistry reactions such as F-T synthesis so that a single carrier gas can be used rather than two separate ones. The sensitivity is reduced, but this is a trade off for simplicity in the gas supply. Gas chromatography is used extensively in forensic science. Disciplines as diverse as solid drug dose (pre-consumption form) identification and quantification, arson investigation, paint chip analysis, and toxicology cases, employ GC to identify and quantify various biological specimens and crime-scene evidence. See also Analytical chemistry Chromatography Gas chromatography–mass spectrometry Gas chromatography-olfactometry High-performance liquid chromatography Inverse gas chromatography Proton transfer reaction mass spectrometry Secondary electrospray ionization Selected ion flow tube mass spectrometry Standard addition Thin layer chromatography Unresolved complex mixture References External links Chromatographic Columns in the Chemistry LibreTexts Library Laboratory techniques
Gas chromatography
[ "Chemistry" ]
6,004
[ "Chromatography", "Gas chromatography", "nan" ]
596,816
https://en.wikipedia.org/wiki/Particle-in-cell
In plasma physics, the particle-in-cell (PIC) method refers to a technique used to solve a certain class of partial differential equations. In this method, individual particles (or fluid elements) in a Lagrangian frame are tracked in continuous phase space, whereas moments of the distribution such as densities and currents are computed simultaneously on Eulerian (stationary) mesh points. PIC methods were already in use as early as 1955, even before the first Fortran compilers were available. The method gained popularity for plasma simulation in the late 1950s and early 1960s through the work of Buneman, Dawson, Hockney, Birdsall, Morse and others. In plasma physics applications, the method amounts to following the trajectories of charged particles in self-consistent electromagnetic (or electrostatic) fields computed on a fixed mesh. Technical aspects For many types of problems, the classical PIC method invented by Buneman, Dawson, Hockney, Birdsall, Morse and others is relatively intuitive and straightforward to implement. This probably accounts for much of its success, particularly for plasma simulation, for which the method typically includes the following procedures: Integration of the equations of motion. Interpolation of charge and current source terms to the field mesh. Computation of the fields on mesh points. Interpolation of the fields from the mesh to the particle locations. Models which include interactions of particles only through the average fields are called PM (particle-mesh). Those which include direct binary interactions are PP (particle-particle). Models with both types of interactions are called PP-PM or P3M. Since the early days, it has been recognized that the PIC method is susceptible to error from so-called discrete particle noise. This error is statistical in nature, and today it remains less well understood than for traditional fixed-grid methods, such as Eulerian or semi-Lagrangian schemes. Modern geometric PIC algorithms are based on a very different theoretical framework. These algorithms use tools of discrete manifolds, interpolating differential forms, and canonical or non-canonical symplectic integrators to guarantee gauge invariance and conservation of charge, energy-momentum, and, more importantly, the infinite-dimensional symplectic structure of the particle-field system. These desired features are attributed to the fact that geometric PIC algorithms are built on the more fundamental field-theoretical framework and are directly linked to the perfect form, i.e., the variational principle of physics. Basics of the PIC plasma simulation technique Inside the plasma research community, systems of different species (electrons, ions, neutrals, molecules, dust particles, etc.) are investigated. The set of equations associated with PIC codes is therefore the Lorentz force as the equation of motion, solved in the so-called pusher or particle mover of the code, and Maxwell's equations determining the electric and magnetic fields, calculated in the (field) solver. Super-particles The real systems studied are often extremely large in terms of the number of particles they contain. In order to make simulations efficient or at all possible, so-called super-particles are used. A super-particle (or macroparticle) is a computational particle that represents many real particles; it may be millions of electrons or ions in the case of a plasma simulation, or, for instance, a vortex element in a fluid simulation. 
It is allowed to rescale the number of particles, because the acceleration from the Lorentz force depends only on the charge-to-mass ratio, so a super-particle will follow the same trajectory as a real particle would. The number of real particles corresponding to a super-particle must be chosen such that sufficient statistics can be collected on the particle motion. If there is a significant difference between the density of different species in the system (between ions and neutrals, for instance), separate real-to-super-particle ratios can be used for them. The particle mover Even with super-particles, the number of simulated particles is usually very large (> 10^5), and often the particle mover is the most time-consuming part of PIC, since it has to be done for each particle separately. Thus, the pusher is required to be of high accuracy and speed, and much effort is spent on optimizing the different schemes. The schemes used for the particle mover can be split into two categories, implicit and explicit solvers. While implicit solvers (e.g. the implicit Euler scheme) calculate the particle velocity from the already updated fields, explicit solvers use only the old force from the previous time step, and are therefore simpler and faster, but require a smaller time step. In PIC simulation the leapfrog method, a second-order explicit method, is typically used. The Boris algorithm is commonly used for the velocity update; it splits the electric-field acceleration into two half-steps so that the magnetic-field term in the Newton–Lorentz equation reduces to a pure rotation of the velocity. For plasma applications, the leapfrog method takes the following form: (x_{k+1} − x_k)/Δt = v_{k+1/2}, (v_{k+1/2} − v_{k−1/2})/Δt = (q/m) [E(x_k) + ((v_{k+1/2} + v_{k−1/2})/2) × B(x_k)], where the subscript k − 1/2 refers to "old" quantities from the previous time step, k + 1/2 to updated quantities from the next time step (i.e. t_{k+1/2} = t_{k−1/2} + Δt), and velocities are calculated in between the usual time steps t_k. The equations of the Boris scheme which are substituted in the above equations are: v^- = v_{k−1/2} + (q Δt / 2m) E(x_k), v' = v^- + v^- × t, v^+ = v^- + v' × s, v_{k+1/2} = v^+ + (q Δt / 2m) E(x_k), with t = (q Δt / 2m) B(x_k) and s = 2 t / (1 + t²). Because of its excellent long-term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. It was realized that the excellent long-term accuracy of the nonrelativistic Boris algorithm is due to the fact that it conserves phase-space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas. It has also been shown that one can improve on the relativistic Boris push to make it both volume preserving and have a constant-velocity solution in crossed E and B fields. The field solver The most commonly used methods for solving Maxwell's equations (or more generally, partial differential equations (PDE)) belong to one of the following three categories: Finite difference methods (FDM) Finite element methods (FEM) Spectral methods With the FDM, the continuous domain is replaced with a discrete grid of points, on which the electric and magnetic fields are calculated. Derivatives are then approximated with differences between neighboring grid-point values and thus PDEs are turned into algebraic equations. Using FEM, the continuous domain is divided into a discrete mesh of elements. The PDEs are treated as an eigenvalue problem and initially a trial solution is calculated using basis functions that are localized in each element. The final solution is then obtained by optimization until the required accuracy is reached. 
Also spectral methods, such as the fast Fourier transform (FFT), transform the PDEs into an eigenvalue problem, but this time the basis functions are high order and defined globally over the whole domain. The domain itself is not discretized in this case; it remains continuous. Again, a trial solution is found by inserting the basis functions into the eigenvalue equation and then optimized to determine the best values of the initial trial parameters. Particle and field weighting The name "particle-in-cell" originates in the way that plasma macro-quantities (number density, current density, etc.) are assigned to simulation particles (i.e., the particle weighting). Particles can be situated anywhere on the continuous domain, but macro-quantities are calculated only on the mesh points, just as the fields are. To obtain the macro-quantities, one assumes that the particles have a given "shape" determined by the shape function S(x − X), where X is the coordinate of the particle and x the observation point. Perhaps the easiest and most used choice for the shape function is the so-called cloud-in-cell (CIC) scheme, which is a first-order (linear) weighting scheme. Whatever the scheme is, the shape function has to satisfy the following conditions: space isotropy, charge conservation, and increasing accuracy (convergence) for higher-order terms. The fields obtained from the field solver are determined only on the grid points and cannot be used directly in the particle mover to calculate the force acting on particles, but have to be interpolated via the field weighting E(X) = Σ_i E_i S(x_i − X), where the subscript i labels the grid point. To ensure that the forces acting on particles are self-consistently obtained, the way of calculating macro-quantities from particle positions on the grid points and interpolating fields from grid points to particle positions has to be consistent, too, since they both appear in Maxwell's equations. Above all, the field interpolation scheme should conserve momentum. This can be achieved by choosing the same weighting scheme for particles and fields and by ensuring the appropriate space symmetry (i.e. no self-force and fulfilling the action-reaction law) of the field solver at the same time. Collisions As the field solver is required to be free of self-forces, inside a cell the field generated by a particle must decrease with decreasing distance from the particle, and hence inter-particle forces inside the cells are underestimated. This can be balanced with the aid of Coulomb collisions between charged particles. Simulating the interaction for every pair of a big system would be computationally too expensive, so several Monte Carlo methods have been developed instead. A widely used method is the binary collision model, in which particles are grouped according to their cell, then these particles are paired randomly, and finally the pairs are collided. In a real plasma, many other reactions may play a role, ranging from elastic collisions, such as collisions between charged and neutral particles, over inelastic collisions, such as electron-neutral ionization collisions, to chemical reactions; each of them requires separate treatment. Most of the collision models handling charged-neutral collisions use either the direct Monte Carlo scheme, in which all particles carry information about their collision probability, or the null-collision scheme, which does not analyze all particles but uses the maximum collision probability for each charged species instead. 
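The cycle described above (deposit charge with a shape function, solve the field on the mesh, gather the field back to the particles, and push) can be made concrete with a short sketch. The following Python snippet is not from the article; all numerical values, the normalization of the super-particle charge, and the variable names are illustrative assumptions. It implements one step of a minimal 1D electrostatic PIC loop with cloud-in-cell weighting, an FFT-based periodic Poisson solve, and a leapfrog push; a production code would add diagnostics, a proper neutralizing background, and the stability checks discussed in the next section.

```python
import numpy as np

# Minimal 1D electrostatic PIC step (normalized units, illustrative values).
ng = 64                  # number of grid cells
L = 2 * np.pi            # periodic domain length
dx = L / ng
dt = 0.1                 # time step (should satisfy omega_pe * dt << 2)
n_part = 10000           # number of super-particles
qm = -1.0                # charge-to-mass ratio of the super-particles
q = -L / n_part          # super-particle charge giving unit mean density (assumed)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, n_part)            # particle positions
v = 0.1 * np.sin(2 * np.pi * x / L)        # small velocity perturbation

def deposit_charge(x):
    """Cloud-in-cell (first-order) charge assignment to the grid."""
    rho = np.zeros(ng)
    cell = np.floor(x / dx).astype(int) % ng       # index of the cell to the left
    frac = x / dx - np.floor(x / dx)               # fractional position in the cell
    np.add.at(rho, cell, q * (1.0 - frac) / dx)
    np.add.at(rho, (cell + 1) % ng, q * frac / dx)
    return rho

def solve_field(rho):
    """Solve d^2 phi/dx^2 = -rho with an FFT on a periodic grid (eps0 = 1)."""
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2     # k = 0 mode dropped: neutralizing background
    return np.fft.ifft(-1j * k * phi_k).real       # E = -dphi/dx

def gather_field(E, x):
    """Interpolate the grid field to the particles with the same CIC weights."""
    cell = np.floor(x / dx).astype(int) % ng
    frac = x / dx - np.floor(x / dx)
    return E[cell] * (1.0 - frac) + E[(cell + 1) % ng] * frac

# One leapfrog step: velocities are assumed to live at half-integer time levels.
E_part = gather_field(solve_field(deposit_charge(x)), x)
v += qm * E_part * dt                      # kick
x = (x + v * dt) % L                       # drift with periodic wrap
```

Using identical CIC weights for deposition and gathering, as in this sketch, is what the momentum-conservation argument above refers to: the particle feels no self-force on a symmetric field solver.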
Accuracy and stability conditions As in every simulation method, in PIC as well the time step and the grid size must be well chosen, so that the phenomena of interest are properly resolved on the relevant time and length scales. In addition, time step and grid size affect the speed and accuracy of the code. For an electrostatic plasma simulation using an explicit time integration scheme (e.g. leapfrog, which is most commonly used), two important conditions regarding the grid size Δx and the time step Δt should be fulfilled in order to ensure the stability of the solution: Δx ≲ λ_D and Δt < 2/ω_pe, which can be derived considering the harmonic oscillations of a one-dimensional unmagnetized plasma. The latter condition is strictly required, but practical considerations related to energy conservation suggest using a much stricter constraint where the factor 2 is replaced by a number one order of magnitude smaller. The use of Δt ≤ 0.2/ω_pe is typical. Not surprisingly, the natural time scale in the plasma is given by the inverse plasma frequency 1/ω_pe and the length scale by the Debye length λ_D. For an explicit electromagnetic plasma simulation, the time step must also satisfy the CFL condition Δt < Δx/c, where c is the speed of light. Applications Within plasma physics, PIC simulation has been used successfully to study laser-plasma interactions, electron acceleration and ion heating in the auroral ionosphere, magnetohydrodynamics, magnetic reconnection, as well as ion-temperature-gradient and other microinstabilities in tokamaks, and also vacuum discharges and dusty plasmas. Hybrid models may use the PIC method for the kinetic treatment of some species, while other species (that are Maxwellian) are simulated with a fluid model. PIC simulations have also been applied outside of plasma physics to problems in solid and fluid mechanics. Electromagnetic particle-in-cell computational applications See also Plasma modeling Multiphase particle-in-cell method References Bibliography External links Beam, Plasma & Accelerator Simulation Toolkit (BLAST) Particle-In-Cell and Kinetic Simulation Software Center (PICKSC), UCLA. Open source 3D Particle-In-Cell code for spacecraft plasma interactions (mandatory user registration required). Simple Particle-In-Cell code in MATLAB Plasma Theory and Simulation Group (Berkeley) Contains links to freely available software. Introduction to PIC codes (Univ. of Texas) open-pic - 3D Hybrid Particle-In-Cell simulation of plasma dynamics Numerical differential equations Computational fluid dynamics Mathematical modeling Computational electromagnetics
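Referring back to the accuracy and stability conditions above, sizing the time step and grid spacing for a given plasma is a short calculation. The plasma density and temperature in this sketch are assumptions chosen only for illustration, not values from the article.

```python
import numpy as np

# Physical constants (SI).
e, me = 1.602e-19, 9.109e-31
eps0, kB, c = 8.854e-12, 1.381e-23, 2.998e8

n_e = 1e18          # electron density [m^-3]  (assumed)
T_e = 1e4           # electron temperature [K] (assumed)

omega_pe = np.sqrt(n_e * e**2 / (eps0 * me))        # plasma frequency [rad/s]
lambda_D = np.sqrt(eps0 * kB * T_e / (n_e * e**2))  # Debye length [m]

dx = 0.5 * lambda_D          # grid spacing that resolves the Debye length
dt_max_es = 0.2 / omega_pe   # electrostatic constraint (omega_pe * dt <= 0.2)
dt_max_cfl = dx / c          # explicit electromagnetic CFL constraint

print(f"omega_pe = {omega_pe:.3e} rad/s, lambda_D = {lambda_D:.3e} m")
print(f"dt from omega_pe*dt <= 0.2 : {dt_max_es:.3e} s")
print(f"dt from CFL (dt <= dx/c)   : {dt_max_cfl:.3e} s")
print(f"an explicit EM run must use dt <= {min(dt_max_es, dt_max_cfl):.3e} s")
```

For these assumed parameters the electromagnetic CFL bound is far more restrictive than the electrostatic one, which is one reason fully electromagnetic explicit PIC runs are so much more expensive than electrostatic ones.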
Particle-in-cell
[ "Physics", "Chemistry", "Mathematics" ]
2,527
[ "Computational electromagnetics", "Mathematical modeling", "Computational fluid dynamics", "Applied mathematics", "Computational physics", "Fluid dynamics" ]
598,302
https://en.wikipedia.org/wiki/Pathophysiology
Pathophysiology (or physiopathology) is a branch of study, at the intersection of pathology and physiology, concerning disordered physiological processes that cause, result from, or are otherwise associated with a disease or injury. Pathology is the medical discipline that describes conditions typically observed during a disease state, whereas physiology is the biological discipline that describes processes or mechanisms operating within an organism. Pathology describes the abnormal or undesired condition (symptoms of a disease), whereas pathophysiology seeks to explain the functional changes that are occurring within an individual due to a disease or pathologic state. Etymology The term pathophysiology comes from the Ancient Greek πάθος (pathos) and φυσιολογία (phisiologia). History Early Developments The origins of pathophysiology as a distinct field date back to the late 18th century. The first known lectures on the subject were delivered by Professor at the University of Erfurt in 1790, and in 1791, he published the first textbook on pathophysiology, Grundriss der Physiologia pathologica, spanning 770 pages. Hecker also established the first academic journal in the field, Magazin für die pathologische Anatomie und Physiologie, in 1796. The French physician Jean François Fernel had earlier suggested in 1542 that a distinct branch of physiology should study the functions of diseased organisms, an idea further developed by in 1617, who first coined the term "pathologic physiology" in a medical text. Nineteenth century Reductionism In Germany in the 1830s, Johannes Müller led the establishment of physiology research autonomous from medical research. In 1843, the Berlin Physical Society was founded in part to purge biology and medicine of vitalism, and in 1847 Hermann von Helmholtz, who joined the Society in 1845, published the paper "On the conservation of energy", highly influential to reduce physiology's research foundation to physical sciences. In the late 1850s, German anatomical pathologist Rudolf Virchow, a former student of Müller, directed focus to the cell, establishing cytology as the focus of physiological research. He also recognized pathophysiology as a distinct discipline, arguing that it should rely on clinical observation and experimentation rather than purely anatomical pathology. Virchow’s influence extended to his student Julius Cohnheim, who pioneered experimental pathology and the usage of intravital microscopy, further advancing the study of pathophysiology. Germ theory By 1863, motivated by Louis Pasteur's report on fermentation to butyric acid, fellow Frenchman Casimir Davaine identified a microorganism as the crucial causal agent of the cattle disease anthrax, but its routinely vanishing from blood left other scientists inferring it a mere byproduct of putrefaction. In 1876, upon Ferdinand Cohn's report of a tiny spore stage of a bacterial species, the fellow German Robert Koch isolated Davaine's bacterides in pure culture —a pivotal step that would establish bacteriology as a distinct discipline— identified a spore stage, applied Jakob Henle's postulates, and confirmed Davaine's conclusion, a major feat for experimental pathology. Pasteur and colleagues followed up with ecological investigations confirming its role in the natural environment via spores in soil. 
Also, as to sepsis, Davaine had injected rabbits with a highly diluted, tiny amount of putrid blood, duplicated disease, and used the term ferment of putrefaction, but it was unclear whether this referred as did Pasteur's term ferment to a microorganism or, as it did for many others, to a chemical. In 1878, Koch published Aetiology of Traumatic Infective Diseases, unlike any previous work, where in 80 pages Koch, as noted by a historian, "was able to show, in a manner practically conclusive, that a number of diseases, differing clinically, anatomically, and in aetiology, can be produced experimentally by the injection of putrid materials into animals." Koch used bacteriology and the new staining methods with aniline dyes to identify particular microorganisms for each. Germ theory of disease crystallized the concept of cause—presumably identifiable by scientific investigation. Scientific medicine The American physician William Welch trained in German pathology from 1876 to 1878, including under Cohnheim, and opened America's first scientific laboratory —a pathology laboratory— at Bellevue Hospital in New York City in 1878. Welch's course drew enrollment from students at other medical schools, which responded by opening their own pathology laboratories. Once appointed by Daniel Coit Gilman, upon advice by John Shaw Billings, as founding dean of the medical school of the newly forming Johns Hopkins University that Gilman, as its first president, was planning, Welch traveled again to Germany for training in Koch's bacteriology in 1883. Welch returned to America but moved to Baltimore, eager to overhaul American medicine, while blending Virchow's anatomical pathology, Cohnheim's experimental pathology, and Koch's bacteriology. Hopkins medical school, led by the "Four Horsemen" —Welch, William Osler, Howard Kelly, and William Halsted— opened at last in 1893 as America's first medical school devoted to teaching German scientific medicine, so called. Twentieth century Biomedicine The first biomedical institutes, Pasteur Institute and Berlin Institute for Infectious Diseases, whose first directors were Pasteur and Koch, were founded in 1888 and 1891, respectively. America's first biomedical institute, The Rockefeller Institute for Medical Research, was founded in 1901 with Welch, nicknamed "dean of American medicine", as its scientific director, who appointed his former Hopkins student Simon Flexner as director of pathology and bacteriology laboratories. By way of World War I and World War II, Rockefeller Institute became the globe's leader in biomedical research. Molecular paradigm The 1918 pandemic triggered frenzied search for its cause, although most deaths were via lobar pneumonia, already attributed to pneumococcal invasion. In London, pathologist with the Ministry of Health, Fred Griffith in 1928 reported pneumococcal transformation from virulent to avirulent and between antigenic types —nearly a switch in species— challenging pneumonia's specific causation. The laboratory of Rockefeller Institute's Oswald Avery, America's leading pneumococcal expert, was so troubled by the report that they refused to attempt repetition. When Avery was away on summer vacation, Martin Dawson, British-Canadian, convinced that anything from England must be correct, repeated Griffith's results, then achieved transformation in vitro, too, opening it to precise investigation. Having returned, Avery kept a photo of Griffith on his desk while his researchers followed the trail. 
In 1944, Avery, Colin MacLeod, and Maclyn McCarty reported the transformation factor as DNA, widely doubted amid estimations that something must act with it. At the time of Griffith's report, it was unrecognized that bacteria even had genes. The first genetics, Mendelian genetics, began at 1900, yet inheritance of Mendelian traits was localized to chromosomes by 1903, thus chromosomal genetics. Biochemistry emerged in the same decade. In the 1940s, most scientists viewed the cell as a "sack of chemicals" —a membrane containing only loose molecules in chaotic motion— and the only especial cell structures as chromosomes, which bacteria lack as such. Chromosomal DNA was presumed too simple, so genes were sought in chromosomal proteins. Yet in 1953, American biologist James Watson, British physicist Francis Crick, and British chemist Rosalind Franklin inferred DNA's molecular structure —a double helix— and conjectured it to spell a code. In the early 1960s, Crick helped crack a genetic code in DNA, thus establishing molecular genetics. In the late 1930s, Rockefeller Foundation had spearheaded and funded the molecular biology research program —seeking fundamental explanation of organisms and life— led largely by physicist Max Delbrück at Caltech and Vanderbilt University. Yet the reality of organelles in cells was controversial amid unclear visualization with conventional light microscopy. Around 1940, largely via cancer research at Rockefeller Institute, cell biology emerged as a new discipline filling the vast gap between cytology and biochemistry by applying new technology —ultracentrifuge and electron microscope— to identify and deconstruct cell structures, functions, and mechanisms. The two new sciences interlaced, cell and molecular biology. Mindful of Griffith and Avery, Joshua Lederberg confirmed bacterial conjugation —reported decades earlier but controversial— and was awarded the 1958 Nobel Prize in Physiology or Medicine. At Cold Spring Harbor Laboratory in Long Island, New York, Delbrück and Salvador Luria led the Phage Group —hosting Watson— discovering details of cell physiology by tracking changes to bacteria upon infection with their viruses, the process transduction. Lederberg led the opening of a genetics department at Stanford University's medical school, and facilitated greater communication between biologists and medical departments. Disease mechanisms In the 1950s, researches on rheumatic fever, a complication of streptococcal infections, revealed it was mediated by the host's own immune response, stirring investigation by pathologist Lewis Thomas that led to identification of enzymes released by the innate immune cells macrophages and that degrade host tissue. In the late 1970s, as president of Memorial Sloan–Kettering Cancer Center, Thomas collaborated with Lederberg, soon to become president of Rockefeller University, to redirect the funding focus of the US National Institutes of Health toward basic research into the mechanisms operating during disease processes, which at the time medical scientists were all but wholly ignorant of, as biologists had scarcely taken interest in disease mechanisms. Thomas became for American basic researchers a patron saint. Examples Parkinson's disease The pathophysiology of Parkinson's disease is death of dopaminergic neurons as a result of changes in biological activity in the brain with respect to Parkinson's disease (PD). There are several proposed mechanisms for neuronal death in PD; however, not all of them are well understood. 
Five proposed major mechanisms for neuronal death in Parkinson's Disease include protein aggregation in Lewy bodies, disruption of autophagy, changes in cell metabolism or mitochondrial function, neuroinflammation, and blood–brain barrier (BBB) breakdown resulting in vascular leakiness. Heart failure The pathophysiology of heart failure is a reduction in the efficiency of the heart muscle, through damage or overloading. As such, it can be caused by a wide number of conditions, including myocardial infarction (in which the heart muscle is starved of oxygen and dies), hypertension (which increases the force of contraction needed to pump blood) and amyloidosis (in which misfolded proteins are deposited in the heart muscle, causing it to stiffen). Over time these increases in workload will produce changes to the heart itself. Multiple sclerosis The pathophysiology of multiple sclerosis is that of an inflammatory demyelinating disease of the CNS in which activated immune cells invade the central nervous system and cause inflammation, neurodegeneration and tissue damage. The underlying condition that produces this behaviour is currently unknown. Current research in neuropathology, neuroimmunology, neurobiology, and neuroimaging, together with clinical neurology provide support for the notion that MS is not a single disease but rather a spectrum Hypertension The pathophysiology of hypertension is that of a chronic disease characterized by elevation of blood pressure. Hypertension can be classified by cause as either essential (also known as primary or idiopathic) or secondary. About 90–95% of hypertension is essential hypertension. HIV/AIDS The pathophysiology of HIV/AIDS involves, upon acquisition of the virus, that the virus replicates inside and kills T helper cells, which are required for almost all adaptive immune responses. There is an initial period of influenza-like illness, and then a latent, asymptomatic phase. When the CD4 lymphocyte count falls below 200 cells/ml of blood, the HIV host has progressed to AIDS, a condition characterized by deficiency in cell-mediated immunity and the resulting increased susceptibility to opportunistic infections and certain forms of cancer. Spider bites The pathophysiology of spider bites is due to the effect of its venom. A spider envenomation occurs whenever a spider injects venom into the skin. Not all spider bites inject venom – a dry bite, and the amount of venom injected can vary based on the type of spider and the circumstances of the encounter. The mechanical injury from a spider bite is not a serious concern for humans. Obesity The pathophysiology of obesity involves many possible pathophysiological mechanisms involved in its development and maintenance. This field of research had been almost unapproached until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory. These investigators postulated that leptin was a satiety factor. In the ob/ob mouse, mutations in the leptin gene resulted in the obese phenotype opening the possibility of leptin therapy for human obesity. However, soon thereafter J. F. Caro's laboratory could not detect any mutations in the leptin gene in humans with obesity. On the contrary Leptin expression was increased proposing the possibility of Leptin-resistance in human obesity. See also Pathogenesis References Pathology Physiology
Pathophysiology
[ "Biology" ]
2,821
[ "Pathology", "Physiology" ]
598,373
https://en.wikipedia.org/wiki/Electropolishing
Electropolishing, also known as electrochemical polishing, anodic polishing, or electrolytic polishing (especially in the metallography field), is an electrochemical process that removes material from a metallic workpiece, reducing the surface roughness by levelling micro-peaks and valleys, improving the surface finish. Electropolishing is often compared to, but distinctly different from, electrochemical machining. It is used to polish, passivate, and deburr metal parts. It is often described as the reverse of electroplating. It may be used in lieu of abrasive fine polishing in microstructural preparation. Mechanism Typically, the work-piece is immersed in a temperature-controlled bath of electrolyte and serves as the anode; it is connected to the positive terminal of a DC power supply, the negative terminal being attached to the cathode. A current passes from the anode, where metal on the surface is oxidised and dissolved in the electrolyte, to the cathode. At the cathode, a reduction reaction occurs, which normally produces hydrogen. Electrolytes used for electropolishing are most often concentrated acid solutions such as mixtures of sulfuric acid and phosphoric acid. Other electropolishing electrolytes reported in the literature include mixtures of perchloric acid with acetic anhydride (which has caused fatal explosions), and methanolic solutions of sulfuric acid. To electropolish a rough surface, the protruding parts of a surface profile must dissolve faster than the recesses. This process, referred to as anodic leveling, can be subject to incorrect analysis when measuring the surface topography. Anodic dissolution under electropolishing conditions deburrs metal objects due to increased current density on corners and burrs. Most importantly, successful electropolishing should operate under diffusion limited constant current plateau, achieved by following current dependence on voltage (polarisation curve), under constant temperature and stirring conditions. Applications Due to its ease of operation and its usefulness in polishing irregularly-shaped objects, electropolishing has become a common process in the production of semiconductors. As electropolishing can also be used to sterilize workpieces, the process plays an essential role in the food, medical, and pharmaceutical industries. It is commonly used in the post-production of large metal pieces such as those used in drums of washing machines, bodies of ocean vessels and aircraft, and automobiles. While nearly any metal may be electropolished, the most-commonly polished metals are 300- and 400-series stainless steel, aluminum, copper, titanium, and nickel- and copper-alloys. Ultra-high vacuum (UHV) components are typically electropolished in order to have a smoother surface for improved vacuum pressures, out-gassing rates, and pumping speed. Electropolishing is commonly used to prepare thin metal samples for transmission electron microscopy and atom probe tomography because the process does not mechanically deform surface layers like mechanical polishing does. 
Standards ISO 15730:2000 Metallic and other Inorganic Coatings - Electropolishing as a Means of Smoothing and Passivating Stainless Steel ASME BPE Standards for Electropolishing Bioprocessing Equipment SEMI F19, Electropolishing Specifications for Semiconductor Applications ASTM B 912-02 (2008), Passivation of Stainless Steels Using Electropolishing ASTM E1558, Standard Guide for Electrolytic Polishing of Metallographic Specimens Benefits The results are aesthetically pleasing. Creates a clean, smooth surface that is easier to sterilise. Can polish areas that are inaccessible by other polishing methods. Removes a small amount of material (typically 20–40 micrometres in depth in the case of stainless steel) from the surface of the parts, while also removing small burrs or high spots. It can be used to reduce the size of parts when necessary. On stainless steel, electropolishing preferentially removes iron from the surface and enriches the chromium/nickel content, providing a superior form of passivation for stainless steel. Electropolishing can be used on a wide range of metals including stainless steel, aluminum, copper, brass and titanium. See also Corrosion Electrochemistry Electroetching Electroplating Passivation (chemistry) Polishing (metalworking) Stainless steel Surface finishing References Chemical processes Metallurgical processes Metalworking
Electropolishing
[ "Chemistry", "Materials_science" ]
901
[ "Metallurgical processes", "Metallurgy", "Chemical processes", "nan", "Chemical process engineering" ]
599,001
https://en.wikipedia.org/wiki/Holdfast%20%28biology%29
A holdfast is a root-like structure that anchors aquatic sessile organisms, such as seaweed, other sessile algae, stalked crinoids, benthic cnidarians, and sponges, to the substrate. Holdfasts vary in shape and form depending on both the species and the substrate type. The holdfasts of organisms that live in muddy substrates often have complex tangles of root-like growths. These projections are called haptera and similar structures of the same name are found on lichens. The holdfasts of organisms that live in sandy substrates are bulb-like and very flexible, such as those of sea pens, thus permitting the organism to pull the entire body into the substrate when the holdfast is contracted. The holdfasts of organisms that live on smooth surfaces (such as the surface of a boulder) have flattened bases which adhere to the surface. The organism derives no nutrition from this intimate contact with the substrate, as the process of liberating nutrients from the substrate requires enzymatically eroding the substrate away, thereby increasing the risk of the organism falling off the substrate. The claw-like holdfasts of kelps and other algae differ from the roots of land plants, in that they have no absorbent function, instead serving only as an anchor. References Plant morphology es:Rizoide
Holdfast (biology)
[ "Biology" ]
279
[ "Plant morphology", "Algae stubs", "Algae", "Plants" ]
599,463
https://en.wikipedia.org/wiki/Isoniazid
Isoniazid, also known as isonicotinic acid hydrazide (INH), is an antibiotic used for the treatment of tuberculosis. For active tuberculosis, it is often used together with rifampicin, pyrazinamide, and either streptomycin or ethambutol. For latent tuberculosis, it is often used alone. It may also be used for atypical types of mycobacteria, such as M. avium, M. kansasii, and M. xenopi. It is usually taken by mouth, but may be used by injection into muscle. History First synthesis was described in 1912. A. Kachugin invented the drug against tuberculosis under name Tubazid in 1949. Three pharmaceutical companies unsuccessfully attempted to patent the drug at the same time, the most prominent one being Roche, which launched its version, Rimifon, in 1952. The drug was first tested at Many Farms, a Navajo community in Arizona, due to the Navajo reservation's tuberculosis problem and because the population had not previously been treated with streptomycin, the main tuberculosis treatment at the time. The research was led by Walsh McDermott, an infectious disease researcher with an interest in public health, who had previously taken isoniazid to treat his own tuberculosis. Isoniazid and a related drug, iproniazid, were among the first drugs to be referred to as antidepressants. Psychiatric use stopped in 1961 following reports of hepatotoxicity. Use against tuberculosis continued, as isoniazid's effectiveness against the disease outweighs its risks. It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies isoniazid as critically important for human medicine. Isoniazid is available as a generic medication. Medical uses Tuberculosis Isoniazid is often used to treat latent and active tuberculosis infections. In persons with isoniazid-sensitive Mycobacterium tuberculosis infection, drug regimens based on isoniazid are usually effective when persons adhere to the prescribed treatment. However, in persons with isoniazid-resistant Mycobacterium tuberculosis infection, drug regimens based on isoniazid have a high rate of failure. Isoniazid has been approved as prophylactic therapy for the following populations: People with HIV infection and a PPD (purified protein derivative) reaction of at least 5 mm induration Contacts of people with tuberculosis and who have a PPD reaction at least 5 mm induration People whose PPD reactions convert from negative to positive in a two-year period – at least 10 mm induration for those up to 35 years of age, and at least 15 mm induration for those at least 35 years old People with pulmonary damage on their chest X-ray that is likely to be due to healed tuberculosis and also have a PPD reaction at least 5 mm induration Injection drug users whose HIV status is negative who have a PPD reaction at least 10 mm induration People with a PPD of greater than or equal to 10 mm induration who are foreign-born from high prevalence geographical regions, low-income populations, and patients residing in long-term facilities Isoniazid can be used alone or in combination with Rifampin for treatment of latent tuberculosis, or as part of a four-drug regimen for treatment of active tuberculosis. The drug regimen typically requires daily or weekly oral administration for a period of three to nine months, often under Directly Observed Therapy (DOT) supervision. Non-tuberculous mycobacteria Isoniazid was widely used in the treatment of Mycobacterium avium complex as part of a regimen including rifampicin and ethambutol. 
Evidence suggests that isoniazid prevents mycolic acid synthesis in M. avium complex as in M. tuberculosis and although this is not bactericidal to M. avium complex, it greatly potentiates the effect of rifampicin. The introduction of macrolides led to this use greatly decreasing. However, since rifampicin is broadly underdosed in M. avium complex treatment, this effect may be worth re-investigating. Special populations It is recommended that women with active tuberculosis who are pregnant or breastfeeding take isoniazid. Preventive therapy should be delayed until after giving birth. Nursing mothers excrete a relatively low and non-toxic concentration of INH in breast milk, and their babies are at low risk for side effects. Both pregnant women and infants being breastfed by mothers taking INH should take vitamin B6 in its pyridoxine form to minimize the risk of peripheral nerve damage. Vitamin B6 is used to prevent isoniazid-induced B6 deficiency and neuropathy in people with a risk factor, such as pregnancy, lactation, HIV infection, alcoholism, diabetes, kidney failure, or malnutrition. People with liver dysfunction are at a higher risk for hepatitis caused by INH, and may need a lower dose. Levels of liver enzymes in the bloodstream should be frequently checked in daily alcohol drinkers, pregnant women, IV drug users, people over 35, and those who have chronic liver disease, severe kidney dysfunction, peripheral neuropathy, or HIV infection since they are more likely to develop hepatitis from INH. Side effects Up to 20% of people taking isoniazid experience peripheral neuropathy when taking daily doses of 6 mg/kg of body weight or higher. Gastrointestinal reactions include nausea and vomiting. Aplastic anemia, thrombocytopenia, and agranulocytosis due to lack of production of red blood cells, platelets, and white blood cells by the bone marrow respectively, can also occur. Hypersensitivity reactions are also common and can present with a maculopapular rash and fever. Gynecomastia may occur. Asymptomatic elevation of serum liver enzyme concentrations occurs in 10% to 20% of people taking INH, and liver enzyme concentrations usually return to normal even when treatment is continued. Isoniazid has a boxed warning for severe and sometimes fatal hepatitis, which is age-dependent at a rate of 0.3% in people 21 to 35 years old and over 2% in those over age 50. Symptoms suggestive of liver toxicity include nausea, vomiting, abdominal pain, dark urine, right upper quadrant pain, and loss of appetite. Black and Hispanic women are at higher risk for isoniazid-induced hepatotoxicity. When it happens, isoniazid-induced liver toxicity has been shown to occur in 50% of patients within the first 2 months of therapy. Some recommend that liver function should be monitored carefully in all people receiving it, but others recommend monitoring only in certain populations. Headache, poor concentration, weight gain, poor memory, insomnia, and depression have all been associated with isoniazid use. All patients and healthcare workers should be aware of these serious side effects, especially if suicidal ideation or behavior are suspected. Isoniazid is associated with pyridoxine (vitamin B6) deficiency because of its similar structure. Isoniazid is also associated with increased excretion of pyridoxine. Pyridoxal phosphate (a derivative of pyridoxine) is required for δ-aminolevulinic acid synthase, the enzyme responsible for the rate-limiting step in heme synthesis. 
Therefore, isoniazid-induced pyridoxine deficiency causes insufficient heme formation in early red blood cells, leading to sideroblastic anemia. Isoniazid was found to significantly elevate the in vivo concentration of GABA and homocarnosine in a single subject via magnetic resonance spectroscopy. Drug interactions People taking isoniazid and acetaminophen are at risk of acetaminophen toxicity. Isoniazid is thought to induce a liver enzyme which causes a larger amount of acetaminophen to be metabolized to a toxic form. Isoniazid decreases the metabolism of carbamazepine, thus slowing down its clearance from the body. People taking carbamazepine should have their carbamazepine levels monitored and, if necessary, have their dose adjusted accordingly. It is possible that isoniazid may decrease the serum levels of ketoconazole after long-term treatment. This is seen with the simultaneous use of rifampin, isoniazid, and ketoconazole. Isoniazid may increase the amount of phenytoin in the body. The doses of phenytoin may need to be adjusted when given with isoniazid. Isoniazid may increase the plasma levels of theophylline. There are some cases of theophylline slowing down isoniazid elimination. Both theophylline and isoniazid levels should be monitored. Valproate levels may increase when taken with isoniazid. Valproate levels should be monitored and its dose adjusted if necessary. Mechanism of action Isoniazid is a prodrug that inhibits the formation of the mycobacterial cell wall. Isoniazid must be activated by KatG, a bacterial catalase-peroxidase enzyme in Mycobacterium tuberculosis. KatG catalyzes the formation of the isonicotinic acyl radical, which spontaneously couples with NADH to form the nicotinoyl-NAD adduct. This complex binds tightly to the enoyl-acyl carrier protein reductase InhA, thereby blocking the natural enoyl-AcpM substrate and the action of fatty acid synthase. This process inhibits the synthesis of mycolic acids, which are required components of the mycobacterial cell wall. A range of radicals are produced by KatG activation of isoniazid, including nitric oxide, which has also been shown to be important in the action of another antimycobacterial prodrug pretomanid. Isoniazid is bactericidal to rapidly dividing mycobacteria, but is bacteriostatic if the mycobacteria are slow-growing. It inhibits the cytochrome P450 system and hence acts as a source of free radicals. Isoniazid is a mild non-selective monoamine oxidase inhibitor (MAO-I). It inhibits diamine oxidase more strongly. These two actions are possible explanations for its antidepressant action as well as its ability to cause mania. Metabolism Isoniazid reaches therapeutic concentrations in serum, cerebrospinal fluid, and within caseous granulomas. It is metabolized in the liver via acetylation into acetylhydrazine. Two forms of the enzyme are responsible for acetylation, so some patients metabolize the drug more quickly than others. Hence, the half-life is bimodal, with "slow acetylators" and "fast acetylators". A graph of number of people versus time shows peaks at one and three hours. The height of the peaks depends on the ethnicities of the people being tested. The metabolites are excreted in the urine. Doses do not usually have to be adjusted in case of renal failure. Preparation Isoniazid is an isonicotinic acid derivative. It is manufactured using 4-cyanopyridine and hydrazine hydrate. In another method, isoniazid was claimed to have been made from citric acid starting material. 
It can in theory be made from methyl isonicotinate, which is labelled a semiochemical. Brand names Hydra, Hyzyd, Isovit, Laniazid, Nydrazid, Rimifon, and Stanozide. Other uses Chromatography Isonicotinic acid hydrazide is also used in chromatography to differentiate between various degrees of conjugation in organic compounds barring the ketone functional group. The test works by forming a hydrazone which can be detected by its bathochromic shift. Dogs Isoniazid may be used for dogs, but there have been concerns it can cause seizures. References Further reading External links Anti-tuberculosis drugs Antidepressants CYP3A4 inhibitors Disulfiram-like drugs GABA transaminase inhibitors Hepatotoxins Hydrazides 4-Pyridyl compounds Prodrugs Vitamin B6 antagonists World Health Organization essential medicines Wikipedia medicine articles ready to translate
Isoniazid
[ "Chemistry" ]
2,630
[ "Chemicals in medicine", "Prodrugs" ]
17,376,281
https://en.wikipedia.org/wiki/Drucker%E2%80%93Prager%20yield%20criterion
The Drucker–Prager yield criterion is a pressure-dependent model for determining whether a material has failed or undergone plastic yielding. The criterion was introduced to deal with the plastic deformation of soils. It and its many variants have been applied to rock, concrete, polymers, foams, and other pressure-dependent materials. The Drucker–Prager yield criterion has the form √(J_2) = A + B I_1, where I_1 is the first invariant of the Cauchy stress and J_2 is the second invariant of the deviatoric part of the Cauchy stress. The constants A and B are determined from experiments. In terms of the equivalent stress (or von Mises stress) and the hydrostatic (or mean) stress, the Drucker–Prager criterion can be expressed as σ_e = a + b σ_m, where σ_e is the equivalent stress, σ_m is the hydrostatic stress, and a, b are material constants. The Drucker–Prager yield criterion expressed in Haigh–Westergaard coordinates is ρ/√2 = A + √3 B ξ. The Drucker–Prager yield surface is a smooth version of the Mohr–Coulomb yield surface. Expressions for A and B The Drucker–Prager model can be written in terms of the principal stresses as √{(1/6)[(σ_1 − σ_2)² + (σ_2 − σ_3)² + (σ_3 − σ_1)²]} = A + B(σ_1 + σ_2 + σ_3). If σ_t is the yield stress in uniaxial tension, the Drucker–Prager criterion implies σ_t/√3 = A + B σ_t. If σ_c is the yield stress in uniaxial compression, the Drucker–Prager criterion implies σ_c/√3 = A − B σ_c. Solving these two equations gives A = 2 σ_c σ_t/[√3(σ_c + σ_t)] and B = (σ_t − σ_c)/[√3(σ_c + σ_t)]. Uniaxial asymmetry ratio Different uniaxial yield stresses in tension and in compression are predicted by the Drucker–Prager model. The uniaxial asymmetry ratio for the Drucker–Prager model is β = σ_c/σ_t = (1 − √3 B)/(1 + √3 B). Expressions in terms of cohesion and friction angle Since the Drucker–Prager yield surface is a smooth version of the Mohr–Coulomb yield surface, it is often expressed in terms of the cohesion (c) and the angle of internal friction (φ) that are used to describe the Mohr–Coulomb yield surface. If we assume that the Drucker–Prager yield surface circumscribes the Mohr–Coulomb yield surface then the expressions for A and B are If the Drucker–Prager yield surface middle circumscribes the Mohr–Coulomb yield surface then If the Drucker–Prager yield surface inscribes the Mohr–Coulomb yield surface then Derivation of expressions for A and B in terms of c and φ: The expression for the Mohr–Coulomb yield criterion in Haigh–Westergaard space is If we assume that the Drucker–Prager yield surface circumscribes the Mohr–Coulomb yield surface such that the two surfaces coincide at certain values of the Lode angle, then at those points the Mohr–Coulomb yield surface can be expressed as or, The Drucker–Prager yield criterion expressed in Haigh–Westergaard coordinates is ρ/√2 = A + √3 B ξ. Comparing equations (1.1) and (1.2), we have These are the expressions for A and B in terms of c and φ. On the other hand, if the Drucker–Prager surface inscribes the Mohr–Coulomb surface, then matching the two surfaces gives Drucker–Prager model for polymers The Drucker–Prager model has been used to model polymers such as polyoxymethylene and polypropylene. For polyoxymethylene the yield stress is a linear function of the pressure. However, polypropylene shows a quadratic pressure-dependence of the yield stress. Drucker–Prager model for foams For foams, the GAZT model uses where is a critical stress for failure in tension or compression, is the density of the foam, and is the density of the base material. Extensions of the isotropic Drucker–Prager model The Drucker–Prager criterion can also be expressed in the alternative form Deshpande–Fleck yield criterion or isotropic foam yield criterion The Deshpande–Fleck yield criterion for foams has the form given in the above equation. 
The parameters for the Deshpande–Fleck criterion are where is a parameter that determines the shape of the yield surface, and is the yield stress in tension or compression. Anisotropic Drucker–Prager yield criterion An anisotropic form of the Drucker–Prager yield criterion is the Liu–Huang–Stout yield criterion. This yield criterion is an extension of the generalized Hill yield criterion and has the form The coefficients are where and are the uniaxial yield stresses in compression in the three principal directions of anisotropy, are the uniaxial yield stresses in tension, and are the yield stresses in pure shear. It has been assumed in the above that the quantities are positive and are negative. The Drucker yield criterion The Drucker–Prager criterion should not be confused with the earlier Drucker criterion which is independent of the pressure (). The Drucker yield criterion has the form where is the second invariant of the deviatoric stress, is the third invariant of the deviatoric stress, is a constant that lies between -27/8 and 9/4 (for the yield surface to be convex), is a constant that varies with the value of . For , where is the yield stress in uniaxial tension. Anisotropic Drucker Criterion An anisotropic version of the Drucker yield criterion is the Cazacu–Barlat (CZ) yield criterion which has the form where are generalized forms of the deviatoric stress and are defined as Cazacu–Barlat yield criterion for plane stress For thin sheet metals, the state of stress can be approximated as plane stress. In that case the Cazacu–Barlat yield criterion reduces to its two-dimensional version with For thin sheets of metals and alloys, the parameters of the Cazacu–Barlat yield criterion are See also Yield surface Yield (engineering) Plasticity (physics) Material failure theory Daniel C. Drucker William Prager References Plasticity (physics) Soil mechanics Solid mechanics Yield criteria
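As a hedged illustration of the pressure-dependent criterion described above (not part of the article), the Python sketch below evaluates the yield function in the form √(J_2) − (A + B I_1), calibrating A and B from assumed uniaxial tensile and compressive yield stresses under a tension-positive sign convention. Sign conventions vary between references, so this is a sketch rather than a definitive implementation, and the numerical values are made up.

```python
import numpy as np

def drucker_prager(sigma, sigma_t, sigma_c):
    """Return f = sqrt(J2) - (A + B*I1); f < 0 is elastic, f = 0 is at yield."""
    I1 = np.trace(sigma)                       # first stress invariant
    s = sigma - I1 / 3.0 * np.eye(3)           # deviatoric part of the stress
    J2 = 0.5 * np.tensordot(s, s)              # second deviatoric invariant
    A = 2.0 * sigma_c * sigma_t / (np.sqrt(3.0) * (sigma_c + sigma_t))
    B = (sigma_t - sigma_c) / (np.sqrt(3.0) * (sigma_c + sigma_t))
    return np.sqrt(J2) - (A + B * I1)

# Uniaxial tension at exactly sigma_t should sit on the yield surface (f ~ 0).
sigma_t, sigma_c = 100.0, 300.0                # assumed yield stresses [MPa]
stress = np.diag([sigma_t, 0.0, 0.0])          # uniaxial tensile stress state
print(drucker_prager(stress, sigma_t, sigma_c))    # ~0.0
```

Running the same function on a hydrostatic compressive state shows the pressure dependence directly: with sigma_c greater than sigma_t, compression moves the state away from yield, which is the behaviour the criterion was introduced to capture for soils.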
Drucker–Prager yield criterion
[ "Physics", "Materials_science" ]
1,294
[ "Solid mechanics", "Applied and interdisciplinary physics", "Deformation (mechanics)", "Soil mechanics", "Plasticity (physics)", "Mechanics" ]
17,382,620
https://en.wikipedia.org/wiki/Supercritical%20angle%20fluorescence%20microscopy
Supercritical angle fluorescence microscopy (SAF) is a technique to detect and characterize fluorescent species (proteins, biomolecules, pharmaceuticals, etc.) and their behaviour close or even adsorbed or linked at surfaces. The method is able to observe molecules in a distance of less than 100 to 0 nanometer from the surface even in presence of high concentrations of fluorescent species around. Using an aspheric lens for excitation of a sample with laser light, fluorescence emitted by the specimen is collected above the critical angle of total internal reflection selectively and directed by a parabolic optics onto a detector. The method was invented in 1998 in the laboratories of Stefan Seeger at University of Regensburg/Germany and later at University of Zurich/Switzerland. SAF microscopy principle The principle how SAF Microscopy works is as follows: A fluorescent specimen does not emit fluorescence isotropically when it comes close to a surface, but approximately 70% of the fluorescence emitted is directed into the solid phase. Here, the main part enters the solid body above the critical angle. When the emitter is located just 200 nm above the surface, fluorescent light entering the solid body above the critical angle is decreased dramatically. Hence, SAF Microscopy is ideally suited to discriminate between molecules and particles at or close to surfaces and all other specimen present in the bulk. Typical SAF-setup The typical SAF setup consists of a laser line (typically 450-633 nm), which is reflected into the aspheric lens by a dichroic mirror. The lens focuses the laser beam in the sample, causing the particles to fluoresce. The fluorescent light then passes through a parabolic lens before reaching a detector, typically a photomultiplier tube or avalanche photodiode detector. It is also possible to arrange SAF elements as arrays, and image the output onto a CCD, allowing the detection of multiple analytes. Selected publications Fluorescence techniques Microscopy Laser applications
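For orientation, the critical angle that separates sub- and supercritical emission follows from Snell's law, θ_c = arcsin(n_sample/n_substrate). The short Python snippet below is only an illustration with assumed refractive indices (an aqueous sample over a glass substrate); the values are not taken from the article.

```python
import math

# Critical angle of total internal reflection for an assumed water/glass interface.
n_sample = 1.33       # refractive index of an aqueous sample (assumed)
n_substrate = 1.52    # refractive index of a glass coverslip (assumed)

theta_c = math.degrees(math.asin(n_sample / n_substrate))
print(f"critical angle ~ {theta_c:.1f} degrees")   # roughly 61 degrees
# Fluorescence entering the glass above theta_c is the "supercritical"
# component that SAF detection collects selectively.
```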
Supercritical angle fluorescence microscopy
[ "Chemistry", "Biology" ]
407
[ "Analytical chemistry stubs", "Fluorescence techniques", "Microscopy" ]
13,433,854
https://en.wikipedia.org/wiki/Geographical%20centre%20of%20Switzerland
The geographical centre of Switzerland has the coordinates (Swiss Grid: 660158/183641). It is located at Älggi-Alp in the municipality of Sachseln, Obwalden. The point is the centre of mass determined in 1988 by Swisstopo. As the point is difficult to access, a stone was set 500 m further south-east on Älggi Alp (1645 m). This symbolizes the centre of Switzerland and is located at (Swiss Grid: 660557/183338). A plaque on the stone commemorates the winner of the "Swiss of the Year" award. External links https://web.archive.org/web/20120607113752/http://www.swisstopo.admin.ch/internet/swisstopo/en/home/topics/knowledge/center_ch.html https://web.archive.org/web/20111113181702/http://www.ch.ch/schweiz/01865/01885/01904/02135/index.html?lang=en https://web.archive.org/web/20160303224820/http://skatingland.myswitzerland.com/en/sightseeing_detail.cfm?id=340745 https://web.archive.org/web/20120308105359/http://www.wanderland.ch/en/orte_detail.cfm?id=315425 Switzerland Geography of Obwalden Centre Tourist attractions in Obwalden
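A centre of mass of a territory such as the one described above is commonly computed as the area centroid of its boundary polygon in a projected coordinate system. The sketch below is illustrative only: it uses the standard shoelace-based centroid formula with made-up coordinates and is not a description of Swisstopo's actual 1988 procedure.

```python
def polygon_centroid(points):
    """Area centroid of a simple (non-self-intersecting) polygon.

    `points` is a list of (x, y) vertices in a projected coordinate system
    (e.g. Swiss grid metres), given in order around the boundary.
    """
    a = 0.0            # twice the signed area
    cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3.0 * a), cy / (3.0 * a)

# Made-up square "boundary" just to show the call; a real computation would use
# the national border polygon in Swiss grid coordinates.
print(polygon_centroid([(0, 0), (10, 0), (10, 10), (0, 10)]))   # (5.0, 5.0)
```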
Geographical centre of Switzerland
[ "Physics", "Mathematics" ]
352
[ "Point (geometry)", "Geometric centers", "Geographical centres", "Symmetry" ]
13,434,545
https://en.wikipedia.org/wiki/Frequency%20coordination
Frequency Coordination is a technical and regulatory process that removes or mitigates radio-frequency interference between different radio systems that operate on the same frequency. Normally frequency coordination is a function of an administration, such as a governmental spectrum regulator, as part of a formal regulatory process under the procedures of the Radio Regulations (an intergovernmental treaty text that regulates the radio frequency spectrum). Before an administrations lets an operator operate a new radio communications network, it must undergo coordination in the following steps: Inform other operators about the plans Receive comments if appropriate Conduct technical discussions with priority networks Agree on technical and operational parameters Gain international recognition and protection on the Master International Frequency Register Bring the network into use This coordination ensures that: All administrations know the technical plans of other administrations. All operators (satellite and terrestrial) can determine if unacceptable interference to existing and planned “priority” networks is likely, and have an opportunity to: Object Discuss and review Reach technical and operational sharing agreements Coordination is thus closely bound to date of protection or priority, defined by the date when the International Telecommunication Union receives complete coordination data. New planned networks must coordinate with all networks with an earlier date of protection but are protected against all networks with a later date of protection. Planned (but not implemented) networks acquire status under this procedure, but time limits ensure that protection does not last forever if networks are not implemented. Congress Authorizes FCC In 1982, the United States Congress provided the FCC with the authority to use frequency coordinators: Assist in developing and managing spectrum Recommend appropriate frequencies (designated under Part 90). List of Coordinators For Public Safety frequency coordination - AASHTO APCO FCCA IMSA For Business and special emergency - AAA AAR EWA FIT PCIA UTC Micronet Communications, Inc. - Since 1983 Comsearch - Since 1977 References Radio technology
Frequency coordination
[ "Technology", "Engineering" ]
365
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
13,437,576
https://en.wikipedia.org/wiki/Voltage-gated%20proton%20channel
Voltage-gated proton channels are ion channels that have the unique property of opening with depolarization, but in a strongly pH-sensitive manner. The result is that these channels open only when the electrochemical gradient is outward, such that their opening will only allow protons to leave cells. Their function thus appears to be acid extrusion from cells. Another important function occurs in phagocytes (e.g. eosinophils, neutrophils, and macrophages) during the respiratory burst. When bacteria or other microbes are engulfed by phagocytes, the enzyme NADPH oxidase assembles in the membrane and begins to produce reactive oxygen species (ROS) that help kill bacteria. NADPH oxidase is electrogenic, moving electrons across the membrane, and proton channels open to allow proton flux to balance the electron movement electrically. The functional expression of Hv1 in phagocytes has been well characterized in mammals, and recently in zebrafish, suggesting its important roles in the immune cells of mammals and non-mammalian vertebrates. A group of small molecule inhibitors of the Hv1 channel are shown as chemotherapeutics and anti-inflammatory agents. When activated, the voltage-gated proton channel Hv1 can allow up to 100,000 hydrogen ions across the membrane each second. Whereas most voltage-gated ion channels contain a central pore that is surrounding by alpha helices and the voltage-sensing domain (VSD), voltage-gated hydrogen channels contain no central pore, so their voltage-sensing regions (VSD) carry out the job of bringing acidic protons across the membrane. Because the relative H+ concentrations on each side of the membrane result in a pH gradient, these voltage-gated hydrogen channels only carry outward current, meaning they are used to move acidic protons out of the membrane. As a result, the opening of voltage-gated hydrogen channels usually hyperpolarize the cell membrane, or makes the membrane potential more negative. A recent discovery has shown that the voltage-gated proton channel Hv1 is highly expressed in human breast tumor tissues that are metastatic, but not in non-metastatic breast cancer tissues. Because it has also been found to be highly expressed in other cancer tissues, the study of the voltage-gated proton channel has led many scientists to wonder what its importance is in cancer metastasis. However, much is still being discovered concerning the structure and function of the voltage-gated proton channel. Known types HVCN1 References Ion channels Immunology Voltage-gated ion channels
Voltage-gated proton channel
[ "Chemistry", "Biology" ]
538
[ "Immunology", "Neurochemistry", "Ion channels" ]
13,439,882
https://en.wikipedia.org/wiki/Universal%20conductance%20fluctuations
Universal conductance fluctuations (UCF) in mesoscopic physics is a phenomenon encountered in electrical transport experiments on mesoscopic samples. The measured electrical conductance will vary from sample to sample, mainly due to inhomogeneous scattering sites. Fluctuations originate from coherence effects for electronic wavefunctions, and thus the phase-coherence length needs to be larger than the momentum relaxation length. UCF is more pronounced when electrical transport is in the weak localization regime, where the conductance is set by the number of conduction channels and by the momentum relaxation length (mean free path) due to phonon scattering events. For weakly localized samples the fluctuation in conductance is of the order of the conductance quantum e²/h, regardless of the number of channels. Many factors will influence the amplitude of UCF. At zero temperature without decoherence, the UCF is influenced mainly by two factors, the symmetry and the shape of the sample. Recently, a third key factor, the anisotropy of the Fermi surface, has also been found to fundamentally influence the amplitude of UCF. See also Speckle patterns, the optical analogues of conductance fluctuation patterns. References General references Akkermans and Montambaux, Mesoscopic Physics of Electrons and Photons, Cambridge University Press (2007) Supriyo Datta, Electronic Transport in Mesoscopic Systems, Cambridge University Press (1995) R. Saito, G. Dresselhaus and M. S. Dresselhaus, Physical Properties of Carbon Nanotubes, Imperial College Press (1998) Boris Altshuler (1985), Pis'ma Zh. Eksp. Teor. Fiz. 41: 530 [JETP Lett. 41: 648]. Mesoscopic physics Quantum mechanics
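To put a number on the conductance quantum e²/h mentioned above, it can be evaluated directly from the defining constants. The snippet below is a simple illustration added here, not part of the article.

```python
# Conductance quantum setting the scale of universal conductance fluctuations.
e = 1.602176634e-19      # elementary charge [C] (exact in the 2019 SI)
h = 6.62607015e-34       # Planck constant [J s] (exact in the 2019 SI)

g0 = e**2 / h            # ~3.87e-5 S, i.e. about (25.8 kOhm)^-1
print(f"e^2/h = {g0:.3e} S  (about 1/{1 / g0 / 1000:.1f} kOhm)")
```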
Universal conductance fluctuations
[ "Physics", "Materials_science" ]
357
[ "Materials science stubs", "Theoretical physics", "Quantum mechanics", "Condensed matter physics", "Condensed matter stubs", "Mesoscopic physics", "Quantum physics stubs" ]
13,440,300
https://en.wikipedia.org/wiki/Too%20cheap%20to%20meter
Too cheap to meter refers to a commodity so inexpensive that it is cheaper and less bureaucratic to simply provide it for a flat fee or even free and make a profit from associated services. Originally applied to nuclear power, the phrase is also used for services that can be provided at such low cost that the additional cost of itemized billing would outweigh the benefits. Origins The phrase was coined by Lewis Strauss, then chairman of the United States Atomic Energy Commission, who, in a 1954 speech to the National Association of Science Writers, said: It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter, will know of great periodic regional famines in the world only as matters of history, will travel effortlessly over the seas and under them and through the air with a minimum of danger and at great speeds, and will experience a lifespan far longer than ours, as disease yields and man comes to understand what causes him to age. It was this statement that caught the eye of most reviewers and was the headline in a New York Times article covering the speech, subtitled "It will be too cheap for our children to meter, Strauss tells science writers." Only a few days later, Strauss was a guest on Meet the Press. When the reporters asked him about the quotation and the viability of "commercial power from atomic piles," Strauss replied that he expected his children and grandchildren would have power "too cheap to be metered, just as we have water today that's too cheap to be metered." The statement was contentious from the start. The U.S. Atomic Energy Commission itself, in testimony to the U.S. Congress only months before, lowered the expectations for fission power, projecting only that the costs of reactors could be brought down to about the same as those for conventional sources. A later survey found dozens of statements from the period that suggested it was widely believed that nuclear energy would be more expensive than coal, at least in the foreseeable future. James T. Ramey, who would later become an AEC Commissioner, noted: "Nobody took Strauss' statement very seriously." The phrase has also been attributed to Walter Marshall, a pioneer of nuclear power in the United Kingdom. There is no documentary evidence that he invented or used the term. Fusion or fission? Strauss's prediction did not come true, and over time it became a target of those pointing to the industry's record of overpromising and underdelivering. In 1980, the Atomic Industrial Forum wrote an article quoting his son, Lewis H. Strauss, claiming that he was talking about not nuclear fission but nuclear fusion. He claimed his father was not specific about this in the speech because the AEC's Project Sherwood was still classified at the time, so he was not allowed to refer to this work directly. Since that time, this claim has been widely repeated, including in 2003 comments by Donald Hintz, chairman of the Nuclear Energy Institute. To support that argument, Strauss and biographer Pfau point to this statement: "industry would have electrical power from atomic furnaces in five to fifteen years." It was claimed that the timeline implies that Strauss was referring to fusion, not fission. Although it is not a direct quote, this version of the statement appeared in the New York Times overview of the speech the next day. The statement in question is originally: Dr. 
Lawrence Hafstad, whom all of you surely know, happens to be speaking, today, in Brussels before the Congress of Industrial Chemistry. He heads the Reactor Development Division of the Atomic Energy Commission. Therefore, he expects to be asked, "How soon will you have industrial atomic electric power in the United States?" His answer is "from 5 to 15 years depending on the vigor of the development effort." Hafstad was in charge of the development of fission reactors by the AEC, and this statement immediately precedes the "too cheap to meter" statement. The same is true of his statements on Meet the Press, which in direct reply to a question about fission. The speech as a whole contains large sections about the development of fission power and the difficulties that the Commission was having communicating this fact. He wryly notes receiving letters addressed to the "Atomic Bomb Commission" and then quotes a study that demonstrates the public is largely ignorant of the development of atomic power. He goes on to briefly recount the development of fission, noting a letter from Leo Szilard of sixteen years earlier where he speaks of the possibility of a chain reaction. A later examination of the topic concluded: "there is no evidence in Strauss's papers at the Herbert Hoover Presidential Library to indicate fusion was the hidden subject of his speech." Strauss viewed hydrogen fusion as the ultimate power source and was eager to develop the technology as quickly as possible and urged the Project Sherwood researchers to make rapid progress, even suggesting a million-dollar prize to the individual or team that succeeded first. However Strauss was not optimistic about the rapid commercialization of fusion power. In August 1955 after fusion research was made public, he cautioned that "there has been nothing in the nature of breakthroughs that would warrant anyone assuming that this [fusion power] was anything except a very long range—and I would accent the word 'very'—prospect." Other uses The phrase became famous enough that it has been used in other contexts, especially in post-scarcity discussions. For instance, landline (and cable) internet bandwidth is now often billed on a flat monthly fee with no usage limits, and it is predicted that the introduction of 5G will do the same for mobile data, making it "too cheap to meter." The same has been said for technology as a whole. Prior to 1985, water meters were not required in New York City; water and sewage fees were assessed based on building size and number of water fixtures; water metering was introduced as a conservation measure. See also Cornucopianism Free public transport References Sources External links Steve Cohn (1997). Too cheap to meter: an economic and philosophical analysis of the nuclear dream English-language idioms Nuclear power Commodities
Too cheap to meter
[ "Physics" ]
1,246
[ "Power (physics)", "Physical quantities", "Nuclear power" ]
13,440,591
https://en.wikipedia.org/wiki/Onium
An onium (plural: onia) is a bound state of a particle and its antiparticle. These states are usually named by adding the suffix -onium to the name of one of the constituent particles (replacing an -on suffix when present), with one exception for "muonium"; a muon–antimuon bound pair is called "true muonium" to avoid confusion with old nomenclature. Examples Positronium is an onium which consists of an electron and a positron bound together as a long-lived metastable state. Positronium has been studied since the 1950s to understand bound states in quantum field theory. A recent development called non-relativistic quantum electrodynamics (NRQED) used this system as a proving ground. Pionium, a bound state of two oppositely-charged pions, is interesting for exploring the strong interaction. This should also be true of protonium. The true analogs of positronium in the theory of strong interactions are the quarkonium states: they are mesons made of a heavy quark and antiquark (namely, charmonium and bottomonium). Exploration of these states through non-relativistic quantum chromodynamics (NRQCD) and lattice QCD are increasingly important tests of quantum chromodynamics. Understanding bound states of hadrons such as pionium and protonium is also important in order to clarify notions related to exotic hadrons such as mesonic molecules and pentaquark states. See also Exotic atom Exciton — solid-state analog of positronium Footnotes References Particle physics
Onium
[ "Physics" ]
349
[ "Particle physics" ]
13,443,170
https://en.wikipedia.org/wiki/Chirikov%20criterion
The Chirikov criterion or Chirikov resonance-overlap criterion was established by the Russian physicist Boris Chirikov. In 1959 he published a seminal article in which he introduced the first physical criterion for the onset of chaotic motion in deterministic Hamiltonian systems. He then applied the criterion to explain puzzling experimental results on plasma confinement in magnetic bottles obtained by Rodionov at the Kurchatov Institute. Description According to this criterion, a deterministic trajectory will begin to move between two nonlinear resonances in a chaotic and unpredictable manner in the parameter range K ≈ S² > 1. Here K is the perturbation parameter, while S = Δω_r/Δ_d is the resonance-overlap parameter, given by the ratio of the unperturbed resonance width in frequency Δω_r (often computed in the pendulum approximation and proportional to the square root of the perturbation) to the frequency difference Δ_d between two unperturbed resonances. Since its introduction, the Chirikov criterion has become an important analytical tool for the determination of the chaos border. See also Chirikov criterion at Scholarpedia Chirikov standard map and standard map Boris Chirikov and Boris Chirikov at Scholarpedia References B.V. Chirikov, "Research concerning the theory of nonlinear resonance and stochasticity", Preprint N 267, Institute of Nuclear Physics, Novosibirsk (1969) (Engl. trans.: CERN Trans. 71-40 (1971)) B.V. Chirikov, "A universal instability of many-dimensional oscillator systems", Phys. Rep. 52: 263 (1979) (Springer link) External links Website dedicated to Boris Chirikov Special volume dedicated to the 70th birthday of Boris Chirikov: Physica D 131:1-4 vii (1999) and arXiv Chaos theory Chaotic maps
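A minimal numerical illustration of the criterion uses the Chirikov standard map listed in the see-also links above, for which the resonance-overlap argument predicts the transition to global chaos at a perturbation strength of order one (the last invariant curve is destroyed near K ≈ 0.9716). The script below is a sketch, not from the article; the initial condition, iteration count, and the tangent-map estimate of the largest Lyapunov exponent are arbitrary illustrative choices.

```python
import math

def standard_map_lyapunov(K, steps=20000, x0=0.1, p0=0.2):
    """Estimate the largest Lyapunov exponent of the Chirikov standard map
    p' = p + K*sin(x), x' = x + p', by iterating the tangent map."""
    x, p = x0, p0
    dx, dp = 1.0, 0.0          # tangent vector
    total = 0.0
    for _ in range(steps):
        # Jacobian at the current point: d(p')/dx = K*cos(x), d(p')/dp = 1
        dp_new = K * math.cos(x) * dx + dp
        dx_new = dx + dp_new    # because x' = x + p'
        p = (p + K * math.sin(x)) % (2 * math.pi)
        x = (x + p) % (2 * math.pi)
        norm = math.hypot(dx_new, dp_new)
        total += math.log(norm)
        dx, dp = dx_new / norm, dp_new / norm   # renormalize to avoid overflow
    return total / steps

for K in (0.5, 2.0, 5.0):
    print(f"K = {K:4.1f}  lambda ~ {standard_map_lyapunov(K):+.3f}")
# Well below the overlap threshold the estimate stays near zero (regular motion);
# above it the exponent becomes clearly positive, signalling chaotic motion.
```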
Chirikov criterion
[ "Mathematics" ]
375
[ "Functions and mappings", "Mathematical objects", "Mathematical relations", "Chaotic maps", "Dynamical systems" ]
13,444,144
https://en.wikipedia.org/wiki/Virtual%20facility
A Virtual Facility (VF) is a highly realistic digital representation of a data center, used to model all relevant aspects of a physical data center with a high degree of precision. The term "virtual" in Virtual Facility refers to its use of virtual reality, rather than the abstraction of computer resources as seen in platform virtualization. The VF mirrors the characteristics of a physical facility over time and allows for detailed analysis and modeling. VF Model features A standard VF model includes: Three-dimensional physical facility layout Network connectivity of facility equipment Full inventory of facility equipment, including electronics and electrical systems such as power distribution units (PDUs) and uninterruptible power supplies (UPSs) Full air conditioning system (ACUs) and controls within the room The term Virtual Facility was introduced to address the emerging environmental problems facing modern Mission Critical Facilities (MCFs). This concept combines virtual reality (VR), computer simulation, and expert systems applied to the domain of facilities. The VF type of computer simulation allows for detailed analysis and prototyping of airflow in the data center using computational fluid dynamics (CFD) techniques. This enables the visualization and numerical analysis of airflow and temperatures within the facility, helping to predict real-world outcomes. VF applications The VF model can be used to assist with the following: Greenfield design Asset management Troubleshooting existing data centers Making existing data centers more resilient Making existing data centers more energy efficient Cost prediction Staff training Capacity planning Load growth management Many organizations use VF models to virtually assess scenarios before committing resources to physical changes. This allows for better decision-making regarding the addition or modification of equipment, helping to avoid logistical or thermal problems. References Data management
Virtual facility
[ "Technology" ]
350
[ "Data management", "Data" ]
13,444,838
https://en.wikipedia.org/wiki/S%C3%B6der%20Torn
South Tower (Swedish: "Söder Torn") is a high-rise building located on Fatburstrappan 18, next to Fatbursparken on Södermalm in Stockholm. The building has a height of about above the ground including the "crown" and consists of 25 floors. The Söder Torn complex contains three additional buildings, including one that abuts Medborgarplatsen. Collectively, the buildings contain 172 condominium apartments and 5 businesses. The South Tower itself has 85 apartments and one business. A garage contains parking for both cars and motorcycles. Site History and Building Description The Tower's site was previously a lake called Fatburen, which had formed due to isostatic uplift after the last glaciation. By the 1700s, the lake had become polluted due to urban expansion, and by 1860 the lake had been filled in order to create a rail yard and train station. The rail yard was closed in 1980 and the neighborhood re-developed into a residential district during the period 1985-1995 which features a large number of buildings in the post-modern style. A new train station was built underground in close proximity (300 meters) to the South Tower. The Tower was originally designed by Danish architect Henning Larsen to have 40 floors. However, Larsen left the project in protest after Stockholm's city planning office forced the removal 16 floors from the building plan. The floor plan is octagonal with five apartments on each level. The tower tapers with increasing height. The facades are clad with red granite slabs. In the centrally located stairwell there are two elevators and a spiral staircase. The 23rd and 24th floors have three multi-story apartments, and the top floor is a common party room with glass walls and a panoramic view of the city. Built by construction company JM and finance company SBC, it opened in 1997. Gallery Residential Qualities The top floor of the building is a glass-enclosed party room and terrace with panoramic views of Stockholm. There is an indoor swimming pool and sauna on a lower level. A fountain sculpture at the Tower's base, La Fontaine aux quatre Nanas by French-American artist Niki de Saint Phalle, attracts many viewers due to its styling and location adjacent to a heavily used path between Stockholm South Station and the plaza Medborgarplatsen. Criticism The development at Fatburen district was poorly received by some architectural critics, with one review specifically highlighting the South Tower as "a monument to post-modernism as a playhouse for urban development." See also Bofills båge Medborgarplatsen Södermalm Stockholm South Station References Skyscrapers in Stockholm Residential skyscrapers Residential buildings in Sweden Postmodern architecture
Söder Torn
[ "Engineering" ]
557
[ "Postmodern architecture", "Architecture" ]
5,574,263
https://en.wikipedia.org/wiki/Carboxypeptidase
A carboxypeptidase (EC number 3.4.16 - 3.4.18) is a protease enzyme that hydrolyzes (cleaves) a peptide bond at the carboxy-terminal (C-terminal) end of a protein or peptide. This is in contrast to an aminopeptidases, which cleave peptide bonds at the N-terminus of proteins. Humans, animals, bacteria and plants contain several types of carboxypeptidases that have diverse functions ranging from catabolism to protein maturation. At least two mechanisms have been discussed. Functions Initial studies on carboxypeptidases focused on pancreatic carboxypeptidases A1, A2, and B in the digestion of food. Most carboxypeptidases are not, however, involved in catabolism. Instead they help to mature proteins, for example post-translational modification. They also regulate biological processes, such as the biosynthesis of neuroendocrine peptides such as insulin requires a carboxypeptidase. Carboxypeptidases also function in blood clotting, growth factor production, wound healing, reproduction, and many other processes. Mechanism Carboxypeptidases hydrolyze peptides at the first amide or polypeptide bond on the C-terminal end of the chain. Carboxypeptidases act by replacing the substrate water with a carbonyl (C=O) group. The carboxypeptidase A hydrolysis reaction has two mechanistic hypotheses, via a nucleophilic water and via an anhydride. In the first proposed mechanism, a promoted-water pathway is favoured as Glu270 deprotonates the nucleophilic water. The Zn2+ ion, along with positively charged residues, decreases the pKa of the bound water to approximately 7. Glu 270 has a dual role in this mechanism as it acts as a base to allow for the attack at the amide carbonyl group during nucleophilic addition. It acts as an acid during elimination when the water proton is transferred to the leaving nitrogen group. The oxygen on the amide carbonyl group does not coordinate to the Zn2+ until the addition of the water. The deprotonation of the Zn2+ coordinated water by Glu 270 provides an activated hydroxide nucleophile which attacks the amide carbonyl group in the peptide bond in a nucleophilic addition. The negatively charged intermediates that are formed during hydrolysis are stabilized by the Zn2+ ion. The interaction between the carbonyl group and the neighbouring arginine, Arg 217, also stabilizes the negatively charged intermediates. The zinc-bound hydroxide interacts with the amide with the electrostatic stabilization of the transition state provided by the Zn2+ ion and the neighbouring arginine. The second proposed mechanism via an anhydride has similar steps but there is a direct attack of Glu270 on the carbonyl group, and then the interaction of Glu270 on the Zn2+-bound amide forms an anhydride instead which can subsequently be hydrolyzed by water. Classifications By active site mechanism Carboxypeptidases are usually classified into one of several families based on their active site mechanism. Enzymes that use a metal in the active site are called "metallo-carboxypeptidases" (EC number 3.4.17). Other carboxypeptidases that use active site serine residues are called "serine carboxypeptidases" (EC number 3.4.16). Those that use an active site cysteine are called "cysteine carboxypeptidase" (or "thiol carboxypeptidases")(EC number 3.4.18). These names do not refer to the selectivity of the amino acid that is cleaved. By substrate preference Another classification system for carboxypeptidases refers to their substrate preference. 
In this classification system, carboxypeptidases that have a stronger preference for those amino acids containing aromatic or branched hydrocarbon chains are called carboxypeptidase A (A for aromatic/aliphatic). Carboxypeptidases that cleave positively charged amino acids (arginine, lysine) are called carboxypeptidase B (B for basic). A metallo-carboxypeptidase that cleaves a C-terminal glutamate from the peptide N-acetyl-L-aspartyl-L-glutamate is called "glutamate carboxypeptidase". A serine carboxypeptidase that cleaves the C-terminal residue from peptides containing the sequence -Pro-Xaa (Pro is proline, Xaa is any amino acid on the C-terminus of a peptide) is called "prolyl carboxypeptidase". Activation Some, but not all, carboxypeptidases are initially produced in an inactive form; this precursor form is referred to as a procarboxypeptidase. In the case of pancreatic carboxypeptidase A, the inactive zymogen form - pro-carboxypeptidase A - is converted to its active form - carboxypeptidase A - by the enzyme trypsin. This mechanism ensures that the cells wherein pro-carboxypeptidase A is produced are not themselves digested. See also Carboxypeptidase E Carboxypeptidase A Enzyme category EC number 3.4 Thrombin-activatable fibrinolysis inhibitor aka plasma carboxypeptidase B2 Bacterial transpeptidase, an alanine carboxypeptidase Bradykinin is broken down among other enzymes by carboxypeptidase N DD-Ala carboxypeptidase is a penicillin-binding protein Phenylalanine might inhibit carboxypeptidase A Martha L. Ludwig References Further reading External links Proteins Enzymes Metabolism pl:Karboksypeptydaza
Carboxypeptidase
[ "Chemistry", "Biology" ]
1,313
[ "Biomolecules by chemical classification", "Cellular processes", "Molecular biology", "Biochemistry", "Proteins", "Metabolism" ]
4,170,806
https://en.wikipedia.org/wiki/Corrosion%20fatigue
Corrosion fatigue is fatigue in a corrosive environment. It is the mechanical degradation of a material under the joint action of corrosion and cyclic loading. Nearly all engineering structures experience some form of alternating stress, and are exposed to harmful environments during their service life. The environment plays a significant role in the fatigue of high-strength structural materials like steel, aluminum alloys and titanium alloys. Materials with high specific strength are being developed to meet the requirements of advancing technology. However, their usefulness depends to a large extent on the degree to which they resist corrosion fatigue. The effects of corrosive environments on the fatigue behavior of metals were studied as early as 1930. The phenomenon should not be confused with stress corrosion cracking, where corrosion (such as pitting) leads to the development of brittle cracks, growth and failure. The only requirement for corrosion fatigue is that the sample be under tensile stress. Effect of corrosion on S-N diagram The effect of corrosion on a smooth-specimen S-N diagram is shown schematically on the right. Curve A shows the fatigue behavior of a material tested in air. A fatigue threshold (or limit) is seen in curve A, corresponding to the horizontal part of the curve. Curves B and C represent the fatigue behavior of the same material in two corrosive environments. In curve B, the fatigue failure at high stress levels is retarded, and the fatigue limit is eliminated. In curve C, the whole curve is shifted to the left; this indicates a general lowering in fatigue strength, accelerated initiation at higher stresses and elimination of the fatigue limit. To meet the needs of advancing technology, higher-strength materials are developed through heat treatment or alloying. Such high-strength materials generally exhibit higher fatigue limits, and can be used at higher service stress levels even under fatigue loading. However, the presence of a corrosive environment during fatigue loading eliminates this stress advantage, since the fatigue limit becomes almost insensitive to the strength level for a particular group of alloys. This effect is schematically shown for several steels in the diagram on the left, which illustrates the debilitating effect of a corrosive environment on the functionality of high-strength materials under fatigue. Corrosion fatigue in aqueous media is an electrochemical behavior. Fractures are initiated either by pitting or persistent slip bands. Corrosion fatigue may be reduced by alloy additions, inhibition and cathodic protection, all of which reduce pitting. Since corrosion-fatigue cracks initiate at a metal's surface, surface treatments like plating, cladding, nitriding and shot peening were found to improve the materials resistance to this phenomenon. Crack-propagation studies in corrosion fatigue In normal fatigue-testing of smooth specimens, about 90 percent is spent in crack nucleation and only the remaining 10 percent in crack propagation. However, in corrosion fatigue crack nucleation is facilitated by corrosion; typically, about 10 percent of life is sufficient for this stage. The rest (90 percent) of life is spent in crack propagation. Thus, it is more useful to evaluate crack-propagation behavior during corrosion fatigue. Fracture mechanics uses pre-cracked specimens, effectively measuring crack-propagation behavior. 
For this reason, emphasis is given to crack-propagation velocity measurements (using fracture mechanics) to study corrosion fatigue. Since fatigue crack grows in a stable fashion below the critical stress-intensity factor for fracture (fracture toughness), the process is called sub-critical crack growth. The diagram on the right shows typical fatigue-crack-growth behavior. In this log-log plot, the crack-propagation velocity is plotted against the applied stress-intensity range. Generally there is a threshold stress-intensity range, below which crack-propagation velocity is insignificant. Three stages may be visualized in this plot. Near the threshold, crack-propagation velocity increases with increasing stress-intensity range. In the second region, the curve is nearly linear and follows Paris' law(6); in the third region crack-propagation velocity increases rapidly, with the stress-intensity range leading to fracture at the fracture-toughness value. Crack propagation under corrosion fatigue may be classified as a) true corrosion fatigue, b) stress corrosion fatigue or c) a combination of true, stress and corrosion fatigue. True corrosion fatigue In true corrosion fatigue, the fatigue-crack-growth rate is enhanced by corrosion; this effect is seen in all three regions of the fatigue-crack growth-rate diagram. The diagram on the left is a schematic of crack-growth rate under true corrosion fatigue; the curve shifts to a lower stress-intensity-factor range in the corrosive environment. The threshold is lower (and the crack-growth velocities higher) at all stress-intensity factors. Specimen fracture occurs when the stress-intensity-factor range is equal to the applicable threshold-stress-intensity factor for stress-corrosion cracking. When attempting to analyze the effects of corrosion fatigue on crack growth in a particular environment, both corrosion type and fatigue load levels affect crack growth in varying degrees. Common types of corrosion include filiform, pitting, exfoliation, intergranular; each will affect crack growth in a particular material in a distinct way. For instance, pitting will often be the most damaging type of corrosion, degrading a material's performance (by increasing the crack-growth rate) more than any other kind of corrosion; even pits of the order of a material's grain size may substantially degrade a material. The degree to which corrosion affects crack-growth rates also depends on fatigue-load levels; for instance, corrosion can cause a greater increase in crack-growth rates at a low loads than it does at a high load. Stress-corrosion fatigue In materials where the maximum applied-stress-intensity factor exceeds the stress-corrosion cracking-threshold value, stress corrosion adds to crack-growth velocity. This is shown in the schematic on the right. In a corrosive environment, the crack grows due to cyclic loading at a lower stress-intensity range; above the threshold stress intensity for stress corrosion cracking, additional crack growth (the red line) occurs due to SCC. The lower stress-intensity regions are not affected, and the threshold stress-intensity range for fatigue-crack propagation is unchanged in the corrosive environment. In the most-general case, corrosion-fatigue crack growth may exhibit both of the above effects; crack-growth behavior is represented in the schematic on the left.. 
See also Corrosion Cyclic corrosion testing Metal Fatigue Stress corrosion cracking Stress References Structural engineering Corrosion Materials degradation Fracture mechanics
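The region-II (Paris-law) crack-growth behaviour described in the crack-propagation sections above lends itself to a small numerical sketch. The snippet below integrates da/dN = C(ΔK)^m with ΔK = Δσ√(πa) for a centre crack and compares cycles to failure for an air-tested growth coefficient and a corrosion-enhanced one; the coefficients, stress range, crack sizes, and the assumption that "true corrosion fatigue" can be represented by simply scaling C are illustrative placeholders, not values from the article.

```python
import math

def cycles_to_failure(a0, ac, C, m, dsigma, steps=200000):
    """Integrate the Paris law da/dN = C*(dK)^m from initial crack a0 to final ac,
    with dK = dsigma*sqrt(pi*a) (units: MPa, metres)."""
    a, N = a0, 0.0
    da = (ac - a0) / steps
    while a < ac:
        dK = dsigma * math.sqrt(math.pi * a)   # stress-intensity range, MPa*sqrt(m)
        N += da / (C * dK ** m)                # cycles needed to grow the crack by da
        a += da
    return N

# Illustrative numbers for a steel-like material (placeholders, not measured values)
C_air, m = 1e-11, 3.0      # Paris coefficients in air
C_corr = 5e-11             # assumed corrosion-enhanced coefficient (true corrosion fatigue)
print("air      :", f"{cycles_to_failure(1e-3, 1e-2, C_air, m, dsigma=100):,.0f} cycles")
print("corrosive:", f"{cycles_to_failure(1e-3, 1e-2, C_corr, m, dsigma=100):,.0f} cycles")
```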
Corrosion fatigue
[ "Chemistry", "Materials_science", "Engineering" ]
1,337
[ "Structural engineering", "Fracture mechanics", "Metallurgy", "Materials science", "Corrosion", "Construction", "Electrochemistry", "Civil engineering", "Materials degradation" ]
4,171,950
https://en.wikipedia.org/wiki/Constrained%20optimization
In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied. Relation to constraint-satisfaction problems The constrained-optimization problem (COP) is a significant generalization of the classic constraint-satisfaction problem (CSP) model. COP is a CSP that includes an objective function to be optimized. Many algorithms are used to handle the optimization part. General form A general constrained minimization problem may be written as follows: where and are constraints that are required to be satisfied (these are called hard constraints), and is the objective function that needs to be optimized subject to the constraints. In some problems, often called constraint optimization problems, the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated. Solution methods Many constrained optimization algorithms can be adapted to the unconstrained case, often via the use of a penalty method. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence. This is referred to as the Maratos effect. Equality constraints Substitution method For very simple problems, say a function of two variables subject to a single equality constraint, it is most practical to apply the method of substitution. The idea is to substitute the constraint into the objective function to create a composite function that incorporates the effect of the constraint. For example, assume the objective is to maximize subject to . The constraint implies , which can be substituted into the objective function to create . The first-order necessary condition gives , which can be solved for and, consequently, . Lagrange multiplier If the constrained problem has only equality constraints, the method of Lagrange multipliers can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints. Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables. Inequality constraints With inequality constraints, the problem can be characterized in terms of the geometric optimality conditions, Fritz John conditions and Karush–Kuhn–Tucker conditions, under which simple problems may be solvable. Linear programming If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a linear programming problem. 
This can be solved by the simplex method, which usually works in polynomial time in the problem size but is not guaranteed to, or by interior point methods which are guaranteed to work in polynomial time. Nonlinear programming If the objective function or some of the constraints are nonlinear, and some constraints are inequalities, then the problem is a nonlinear programming problem. Quadratic programming If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a quadratic programming problem. It is one type of nonlinear programming. It can still be solved in polynomial time by the ellipsoid method if the objective function is convex; otherwise the problem may be NP hard. KKT conditions Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers. It can be applied under differentiability and convexity. Branch and bound Constraint optimization can be solved by branch-and-bound algorithms. These are backtracking algorithms storing the cost of the best solution found during execution and using it to avoid part of the search. More precisely, whenever the algorithm encounters a partial solution that cannot be extended to form a solution of better cost than the stored best cost, the algorithm backtracks, instead of trying to extend this solution. Assuming that cost is to be minimized, the efficiency of these algorithms depends on how the cost that can be obtained from extending a partial solution is evaluated. Indeed, if the algorithm can backtrack from a partial solution, part of the search is skipped. The lower the estimated cost, the better the algorithm, as a lower estimated cost is more likely to be lower than the best cost of solution found so far. On the other hand, this estimated cost cannot be lower than the effective cost that can be obtained by extending the solution, as otherwise the algorithm could backtrack while a solution better than the best found so far exists. As a result, the algorithm requires an upper bound on the cost that can be obtained from extending a partial solution, and this upper bound should be as small as possible. A variation of this approach called Hansen's method uses interval methods. It inherently implements rectangular constraints. First-choice bounding functions One way for evaluating this upper bound for a partial solution is to consider each soft constraint separately. For each soft constraint, the maximal possible value for any assignment to the unassigned variables is assumed. The sum of these values is an upper bound because the soft constraints cannot assume a higher value. It is exact because the maximal values of soft constraints may derive from different evaluations: a soft constraint may be maximal for while another constraint is maximal for . Russian doll search This method runs a branch-and-bound algorithm on problems, where is the number of variables. Each such problem is the subproblem obtained by dropping a sequence of variables from the original problem, along with the constraints containing them. After the problem on variables is solved, its optimal cost can be used as an upper bound while solving the other problems, In particular, the cost estimate of a solution having as unassigned variables is added to the cost that derives from the evaluated variables. 
Virtually, this corresponds on ignoring the evaluated variables and solving the problem on the unassigned ones, except that the latter problem has already been solved. More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using an arbitrary other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding problem, which is already known at this point. There is similarity between the Russian Doll Search method and dynamic programming. Like dynamic programming, Russian Doll Search solves sub-problems in order to solve the whole problem. But, whereas Dynamic Programming directly combines the results obtained on sub-problems to get the result of the whole problem, Russian Doll Search only uses them as bounds during its search. Bucket elimination The bucket elimination algorithm can be adapted for constraint optimization. A given variable can be indeed removed from the problem by replacing all soft constraints containing it with a new soft constraint. The cost of this new constraint is computed assuming a maximal value for every value of the removed variable. Formally, if is the variable to be removed, are the soft constraints containing it, and are their variables except , the new soft constraint is defined by: Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated a bucket of constraints; the bucket of a variable contains all constraints having the variable has the highest in the order. Bucket elimination proceed from the last variable to the first. For each variable, all constraints of the bucket are replaced as above to remove the variable. The resulting constraint is then placed in the appropriate bucket. See also Constrained least squares Distributed constraint optimization Constraint satisfaction problem (CSP) Constraint programming Integer programming Metric projection Penalty method Superiorization References Further reading Mathematical optimization Constraint programming
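Since the inline formulas in the substitution example earlier in this article were lost in extraction, a worked stand-in may help. The objective f(x, y) = x·y and the constraint x + y = 10 below are illustrative assumptions, not necessarily the article's original numbers; the first sketch performs the substitution method symbolically, and the second hands an equally arbitrary problem with equality and inequality constraints to a general-purpose solver (SciPy's SLSQP), as one concrete instance of the numerical methods surveyed above.

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = x * y                           # assumed objective (illustrative only)
constraint = sp.Eq(x + y, 10)       # assumed equality constraint

# Substitution method: solve the constraint for y, eliminate it from f,
# then apply the first-order necessary condition to the composite function.
y_expr = sp.solve(constraint, y)[0]          # y = 10 - x
g = sp.expand(f.subs(y, y_expr))             # 10*x - x**2
x_star = sp.solve(sp.diff(g, x), x)[0]       # x = 5
print(x_star, y_expr.subs(x, x_star))        # -> 5, 5
```

For problems that resist such hand elimination, the same structure can be passed to a numerical solver:

```python
import numpy as np
from scipy.optimize import minimize

objective = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.5) ** 2   # assumed toy objective
constraints = [
    {"type": "eq",   "fun": lambda v: v[0] + v[1] - 3.0},     # hard equality constraint
    {"type": "ineq", "fun": lambda v: v[0]},                  # inequality: v[0] >= 0
]
res = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP", constraints=constraints)
print(res.x, res.fun)   # -> approx [0.75, 2.25], 0.125
```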
Constrained optimization
[ "Mathematics" ]
1,644
[ "Mathematical optimization", "Mathematical analysis" ]
8,759,421
https://en.wikipedia.org/wiki/Hexamethylenediamine
Hexamethylenediamine, or hexane-1,6-diamine, is the organic compound with the formula H2N(CH2)6NH2. The molecule is a diamine, consisting of a hexamethylene hydrocarbon chain terminated with amine functional groups. The colorless solid (yellowish for some commercial samples) has a strong amine odor. About 1 billion kilograms are produced annually. Synthesis Hexamethylenediamine was first reported by Theodor Curtius. It is produced by the hydrogenation of adiponitrile: NC(CH2)4CN + 4 H2 → H2N(CH2)6NH2 The hydrogenation is conducted on molten adiponitrile diluted with ammonia, with typical catalysts based on cobalt and iron. The yield is good, but commercially significant side products are generated owing to the reactivity of partially hydrogenated intermediates. These other products include 1,2-diaminocyclohexane, hexamethyleneimine, and the triamine bis(hexamethylene)triamine. An alternative process uses Raney nickel as the catalyst and adiponitrile diluted with hexamethylenediamine itself (as the solvent). This process operates without ammonia and at lower pressure and temperature. Applications Hexamethylenediamine is used almost exclusively for the production of polymers, an application that takes advantage of its structure. It is difunctional in terms of the amine groups and tetrafunctional with respect to the amine hydrogens. The great majority of the diamine is consumed in the production of nylon 66 via condensation with adipic acid. Hexamethylene diisocyanate (HDI) is also generated from this diamine by phosgenation, serving as a monomer feedstock in the production of polyurethane. The diamine also serves as a cross-linking agent in epoxy resins. Safety Hexamethylenediamine is moderately toxic, with an LD50 of 792–1127 mg/kg. Nonetheless, like other basic amines, it can cause serious burns and severe irritation. Such injuries were observed in the accident at the BASF site in Seal Sands, near Billingham (UK), on 4 January 2007, in which 37 persons were injured, one of them seriously. See also 1,2-Diaminocyclohexane 2-Methylpentamethylenediamine References Monomers Diamines
Hexamethylenediamine
[ "Chemistry", "Materials_science" ]
525
[ "Monomers", "Polymer chemistry" ]
8,761,205
https://en.wikipedia.org/wiki/Compatibility%20%28chemical%29
Chemical compatibility is a rough measure of how stable a substance is when mixed with another substance. If two substances can mix together without undergoing a chemical reaction, they are considered compatible. Incompatible chemicals react with each other and can cause corrosion, mechanical weakening, evolution of gas, fire, or other undesirable interactions. Chemical compatibility is important when choosing materials for chemical storage or reactions, so that the vessel and other apparatus will not be damaged by its contents. For purposes of chemical storage, chemicals that are incompatible should not be stored together, so that any leak will not cause an even more dangerous situation through chemical reactions. Chemical compatibility also refers to whether a container material is suitable for storing a given chemical, or whether a tool or object that comes into contact with a chemical will resist degradation. For example, when stirring a chemical, the stirrer must be stable in the chemical being stirred. Many companies publish chemical resistance charts and databases to help users choose appropriate materials for handling chemicals. Such charts are particularly important for polymers, as they are often not compatible with common chemical reagents; this may even depend on how the polymers have been processed. For example, polymer tools made by 3-D printing for use in chemical experiments must be chosen with care to ensure chemical compatibility. Chemical compatibility is also important when choosing among different chemicals that have similar purposes. For example, bleach and ammonia, both commonly used as cleaners, can undergo a dangerous chemical reaction when combined, producing poisonous fumes. Even though each has a similar use, care must be taken not to allow these chemicals to mix. References External links Chemical compatibility database Chemical safety
Compatibility (chemical)
[ "Chemistry" ]
336
[ "Chemical safety", "Chemical accident", "nan" ]
8,761,319
https://en.wikipedia.org/wiki/Skorokhod%27s%20embedding%20theorem
In mathematics and probability theory, Skorokhod's embedding theorem is either or both of two theorems that allow one to regard any suitable collection of random variables as a Wiener process (Brownian motion) evaluated at a collection of stopping times. Both results are named for the Ukrainian mathematician A. V. Skorokhod. Skorokhod's first embedding theorem Let X be a real-valued random variable with expected value 0 and finite variance; let W denote a canonical real-valued Wiener process. Then there is a stopping time (with respect to the natural filtration of W), τ, such that Wτ has the same distribution as X, E[τ] = E[X²], and E[τ²] ≤ 4 E[X⁴]. Skorokhod's second embedding theorem Let X1, X2, ... be a sequence of independent and identically distributed random variables, each with expected value 0 and finite variance, and let Sn = X1 + X2 + ⋯ + Xn. Then there is a sequence of stopping times τ1 ≤ τ2 ≤ ... such that the Wτn have the same joint distributions as the partial sums Sn, and τ1, τ2 − τ1, τ3 − τ2, ... are independent and identically distributed random variables satisfying E[τn − τn−1] = E[X1²] and E[(τn − τn−1)²] ≤ 4 E[X1⁴] (with τ0 = 0). References (Theorems 37.6, 37.7) Probability theorems Wiener process Ukrainian inventions
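The first theorem can be checked numerically for a simple target law. For a two-point random variable taking the values −a and b (with probabilities chosen to give mean zero), one valid stopping time is the first exit of the Wiener process from the interval (−a, b); the sketch below, with an assumed time step and path count, verifies that the stopped value has the right law and that the mean stopping time equals E[X²] = ab.

```python
import random, math

def embed_two_point(a=1.0, b=2.0, dt=1e-3, n_paths=1000, seed=0):
    """Simulate Brownian paths until they exit (-a, b); the exit value has the
    two-point law P(X = -a) = b/(a+b), P(X = b) = a/(a+b), and E[tau] should be a*b."""
    rng = random.Random(seed)
    hits_b, taus = 0, []
    for _ in range(n_paths):
        w, t = 0.0, 0.0
        while -a < w < b:
            w += rng.gauss(0.0, math.sqrt(dt))   # discretised Wiener increment
            t += dt
        hits_b += w >= b
        taus.append(t)
    return hits_b / n_paths, sum(taus) / n_paths

p_hat, mean_tau = embed_two_point()
print("P(W_tau = b) ~", p_hat,   "(exact a/(a+b) = 1/3)")
print("E[tau]       ~", mean_tau, "(exact a*b = 2, equal to E[X^2])")
```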
Skorokhod's embedding theorem
[ "Mathematics" ]
266
[ "Theorems in probability theory", "Mathematical theorems", "Mathematical problems" ]
8,761,903
https://en.wikipedia.org/wiki/Strain%20engineering
Strain engineering refers to a general strategy employed in semiconductor manufacturing to enhance device performance. Performance benefits are achieved by modulating strain, as one example, in the transistor channel, which enhances electron mobility (or hole mobility) and thereby conductivity through the channel. Another example are semiconductor photocatalysts strain-engineered for more effective use of sunlight. In CMOS manufacturing The use of various strain engineering techniques has been reported by many prominent microprocessor manufacturers, including AMD, IBM, and Intel, primarily with regards to sub-130 nm technologies. One key consideration in using strain engineering in CMOS technologies is that PMOS and NMOS respond differently to different types of strain. Specifically, PMOS performance is best served by applying compressive strain to the channel, whereas NMOS receives benefit from tensile strain. Many approaches to strain engineering induce strain locally, allowing both n-channel and p-channel strain to be modulated independently. One prominent approach involves the use of a strain-inducing capping layer. CVD silicon nitride is a common choice for a strained capping layer, in that the magnitude and type of strain (e.g. tensile vs compressive) may be adjusted by modulating the deposition conditions, especially temperature. Standard lithography patterning techniques can be used to selectively deposit strain-inducing capping layers, to deposit a compressive film over only the PMOS, for example. Capping layers are key to the Dual Stress Liner (DSL) approach reported by IBM-AMD. In the DSL process, standard patterning and lithography techniques are used to selectively deposit a tensile silicon nitride film over the NMOS and a compressive silicon nitride film over the PMOS. A second prominent approach involves the use of a silicon-rich solid solution, especially silicon-germanium, to modulate channel strain. One manufacturing method involves epitaxial growth of silicon on top of a relaxed silicon-germanium underlayer. Tensile strain is induced in the silicon as the lattice of the silicon layer is stretched to mimic the larger lattice constant of the underlying silicon-germanium. Conversely, compressive strain could be induced by using a solid solution with a smaller lattice constant, such as silicon-carbon. See, e.g., U.S. Patent No. 7,023,018. Another closely related method involves replacing the source and drain region of a MOSFET with silicon-germanium. In thin films Strain can be induced in thin films with either epitaxial growth, or more recently, topological growth. Epitaxial strain in thin films generally arises due to lattice mismatch between the film and its substrate and triple junction restructuring at the surface triple junction, which arises either during film growth or due to thermal expansion mismatch. Tuning this epitaxial strain can be used to moderate the properties of thin films and induce phase transitions. The misfit parameter () is given by the equation below: where is the lattice parameter of the epitaxial film and is the lattice parameter of the substrate. After some critical film thickness, it becomes energetically favorable to relieve some mismatch strain through the formation of misfit dislocations or microtwins. Misfit dislocations can be interpreted as a dangling bond at an interface between layers with different lattice constants. 
This critical thickness () was computed by Mathews and Blakeslee to be: where is the length of the Burgers vector, is the Poisson ratio, is the angle between the Burgers vector and misfit dislocation line, and is the angle between the Burgers vector and the vector normal to the dislocation's glide plane. The equilibrium in-plane strain for a thin film with a thickness () that exceeds is then given by the expression: Strain relaxation at thin film interfaces via misfit dislocation nucleation and multiplication occurs in three stages which are distinguishable based on the relaxation rate. The first stage is dominated by glide of pre-existing dislocations and is characterized by a slow relaxation rate. The second stage has a faster relaxation rate, which depends on the mechanisms for dislocation nucleation in the material. Finally, the last stage represents a saturation in strain relaxation due to strain hardening. Strain engineering has been well-studied in complex oxide systems, in which epitaxial strain can strongly influence the coupling between the spin, charge, and orbital degrees of freedom, and thereby impact the electrical and magnetic properties. Epitaxial strain has been shown to induce metal-insulator transitions and shift the Curie temperature for the antiferromagnetic-to-ferromagnetic transition in La_{1-x}Sr_{x}MnO_{3}. In alloy thin films, epitaxial strain has been observed to impact the spinodal instability, and therefore impact the driving force for phase separation. This is explained as a coupling between the imposed epitaxial strain and the system's composition-dependent elastic properties. Researchers more recently have achieved strain in thick oxide films larger than that achieved in epitaxial growth by incorporating nano-structured topologies (Guerra and Vezenov, 2002) and nanorods/nanopillars within an oxide film matrix. Following this work, researchers world-wide have created such self-organized, phase-separated, nanorod/nanopillar structures in numerous oxide films as reviewed here. In 2008, Thulin and Guerra published calculations of strain-modified anatase titania band structures, which included an indicated higher hole mobility with increasing strain. Additionally, in two dimensional materials such as strain has been shown to induce conversion from an indirect semiconductor to a direct semiconductor allowing a hundred-fold increase in the light emission rate. In III-N LEDs Strain engineering plays a major role in III-N LEDs, one of the most ubiquitous and efficient LED varieties that has only gained popularity after the 2014 Nobel Prize in Physics. Most III-N LEDs utilize a combination of GaN and InGaN, the latter being used as the quantum well region. The composition of In within the InGaN layer can be tuned to change the color of the light emitted from these LEDs. However, the epilayers of the LED quantum well have inherently mismatched lattice constants, creating strain between the layers. Due to the quantum confined Stark effect (QCSE), the electron and hole wave functions are misaligned within the quantum well, resulting in a reduced overlap integral, decreased recombination probability, and increased carrier lifetime. As such, applying an external strain can negate the internal quantum well strain, reducing the carrier lifetime and making the LEDs a more attractive light source for communications and other applications requiring fast modulation speeds. With appropriate strain engineering, it is possible to grow III-N LEDs on Si substrates. 
This can be accomplished via strain relaxed templates, superlattices, and pseudo-substrates. Furthermore, electro-plated metal substrates have also shown promise in applying an external counterbalancing strain to increase the overall LED efficiency. In DUV LEDs In addition to traditional strain engineering that takes place with III-N LEDs, Deep Ultraviolet (DUV) LEDs, which use AlN, AlGaN, and GaN, undergo a polarity switch from TE to TM at a critical Al composition within the active region. The polarity switch arises from the negative value of AlN’s crystal field splitting, which results in its valence bands switching character at this critical Al composition. Studies have established a linear relationship between this critical composition within the active layer and the Al composition used in the substrate templating region, underscoring the importance of strain engineering in the character of light emitted from DUV LEDs. Furthermore, any existing lattice mismatch causes phase separation and surface roughness, in addition to creating dislocations and point defects. The former results in local current leakage while the latter enhances the nonradiative recombination process, both reducing the device's internal quantum efficiency (IQE). Active layer thickness can trigger the bending and annihilation of threading dislocations, surface roughening, phase separation, misfit dislocation formation, and point defects. All of these mechanisms compete across different thicknesses. By delaying strain accumulation to grow at a thicker epilayer before reaching the target relaxation degree, certain adverse effects can be reduced. In nano-scale materials Typically, the maximum elastic strain achievable in normal bulk materials ranges from 0.1% to 1%. This limits our ability to effectively modify material properties in a reversible and quantitative manner using strain. However, recent research on nanoscale materials has shown that the elastic strain range is much broader. Even the hardest material in nature, diamond, exhibits up to 9.0% uniform elastic strain at the nanoscale. Keeping in line with Moore's law, semiconductor devices are continuously shrinking in size to the nanoscale. With the concept of "smaller is stronger", elastic strain engineering can be fully exploited at the nanoscale. In nanoscale elastic strain engineering, the crystallographic direction plays a crucial role. Most materials are anisotropic, meaning their properties vary with direction. This is particularly true in elastic strain engineering, as applying strain in different crystallographic directions can have a significant impact on the material's properties. Taking diamond as an example, Density Functional Theory (DFT) simulations demonstrate distinct behaviors in the bandgap decreasing rates when strained along different directions. Straining along the <110> direction results in a higher bandgap decreasing rate, while straining along the <111> direction leads to a lower bandgap decreasing rate but a transition from an indirect to a direct bandgap. A similar indirect-direct bandgap transition can be observed in strained silicon. Theoretically, achieving this indirect-direct bandgap transition in silicon requires a strain of more than 14% uniaxial strain. In 2D materials In the case of elastic strain, when the limit is exceeded, plastic deformation occurs due to slip and dislocation movement in the microstructure of the material. 
Plastic deformation is not commonly utilized in strain engineering due to the difficulty in controlling its uniform outcome. Plastic deformation is more influenced by local distortion rather than the global stress field observed in elastic strain. However, 2D materials have a greater range of elastic strain compared to bulk materials because they lack typical plastic deformation mechanisms like slip and dislocation. Additionally, it is easier to apply strain along a specific crystallographic direction in 2D materials compared to bulk materials. Recent research has shown significant progress in strain engineering in 2D materials through techniques such as deforming the substrate, inducing material rippling, and creating lattice asymmetry. These methods of applying strain effectively enhance the electric, magnetic, thermal, and optical properties of the material. For example, in the reference provided, the optical gap of monolayer and bilayer MoS2 decreases at rates of approximately 45 and 120 meV/%, respectively, under 0-2.2% uniaxial strain. Additionally, the photoluminescence intensity of monolayer MoS2 decreases at 1% strain, indicating an indirect-to-direct bandgap transition. The reference also demonstrates that strain-engineered rippling in black phosphorus leads to bandgap variations between +10% and -30%. In the case of ReSe2, the literature shows the formation of local wrinkle structures when the substrate is relaxed after stretching. This folding process results in a redshift in the absorption spectrum peak, leading to increased light absorption and changes in magnetic properties and bandgap. The research team also conducted I-V curve tests on the stretched samples and found that a 30% stretching resulted in lower resistance compared to the unstretched samples. However, a 50% stretching showed the opposite effect, with higher resistance compared to the unstretched samples. This behavior can be attributed to the folding of ReSe2, with the folded regions being particularly weak. See also Strained silicon References Semiconductors
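Returning to the thin-films section above: the inline expressions for the misfit parameter and the Matthews–Blakeslee critical thickness did not survive extraction. The sketch below uses the commonly cited forms, f = (a_f − a_s)/a_s and h_c = b(1 − ν cos²α)/(8π|f|(1 + ν) cos λ) · (ln(h_c/b) + 1), which are assumed to match what the article displayed; the lattice constants, Burgers vector length, Poisson ratio, and dislocation angles are placeholder values loosely representative of a low-Ge SiGe film on Si.

```python
import math

def misfit(a_film, a_sub):
    """Lattice misfit between an epitaxial film and its substrate (one common sign convention)."""
    return (a_film - a_sub) / a_sub

def matthews_blakeslee(f, b=0.384e-9, nu=0.27, alpha=math.radians(60), lam=math.radians(60)):
    """Critical thickness h_c from the Matthews-Blakeslee force balance,
    h_c = b*(1 - nu*cos^2(alpha)) / (8*pi*|f|*(1 + nu)*cos(lam)) * (ln(h_c/b) + 1),
    solved by fixed-point iteration (lengths in metres)."""
    prefac = b * (1 - nu * math.cos(alpha) ** 2) / (8 * math.pi * abs(f) * (1 + nu) * math.cos(lam))
    h = 10 * b                       # initial guess
    for _ in range(100):             # converges quickly because d(rhs)/dh < 1 here
        h = prefac * (math.log(h / b) + 1)
    return h

# Placeholder lattice constants roughly in the range of Si(1-x)Ge(x) on Si (assumed values)
f = misfit(a_film=0.5475e-9, a_sub=0.5431e-9)   # ~0.8 % misfit
print(f"misfit f ~ {f:.4f}, critical thickness ~ {matthews_blakeslee(f) * 1e9:.1f} nm")
```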
Strain engineering
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,511
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
8,762,886
https://en.wikipedia.org/wiki/Hemophagocytosis
Hemophagocytosis is a dangerous form of phagocytosis in which histiocytes engulf red blood cells, white blood cells, platelets, and their precursors in bone marrow and other tissues. It is part of the presentation of hemophagocytic lymphohistiocytosis and macrophage activation syndrome. It has also been seen at autopsy of people who died of COVID-19. References Pathology
Hemophagocytosis
[ "Biology" ]
93
[ "Pathology" ]
8,764,088
https://en.wikipedia.org/wiki/Lyman%20continuum%20photons
Lyman continuum photons (abbrev. LyC), shortened to Ly continuum photons or Lyc photons, are the photons emitted from stars or active galactic nuclei at photon energies above the Lyman limit. Hydrogen is ionized by absorbing LyC. Working from Victor Schumann's discovery of ultraviolet light, from 1906 to 1914, Theodore Lyman observed that atomic hydrogen absorbs light only at specific frequencies (or wavelengths) and the Lyman series is thus named after him. All the wavelengths in the Lyman series are in the ultraviolet band. This quantized absorption behavior occurs only up to an energy limit, known as the ionization energy. In the case of neutral atomic hydrogen, the minimum ionization energy is equal to the Lyman limit, where the photon has enough energy to completely ionize the atom, resulting in a free proton and a free electron. Above this energy (below this wavelength), all wavelengths of light may be absorbed. This forms a continuum in the energy spectrum; the spectrum is continuous rather than composed of many discrete lines, which are seen at lower energies. The Lyman limit is at the wavelength of 91.2 nm (912 Å), corresponding to a frequency of 3.29 million GHz and a photon energy of 13.6 eV. LyC energies are mostly in the ultraviolet C portion of the electromagnetic spectrum (see Lyman series). Although X-rays and gamma-rays will also ionize a hydrogen atom, there are far fewer of them emitted from a star's photosphere—LyC are predominantly UV-C. The photon absorption process leading to the ionization of atomic hydrogen can occur in reverse: an electron and a proton can collide and form atomic hydrogen. If the two particles were traveling slowly (so that kinetic energy can be ignored), then the photon the atom emits upon its creation will theoretically be 13.6 eV (in reality, the energy will be less if the atom is formed in an excited state). At faster speeds, the excess (kinetic) energy is radiated (but momentum must be conserved) as photons of lower wavelength (higher energy). Therefore, photons with energies above 13.6 eV are emitted by the combination of energetic protons and electrons forming atomic hydrogen, and emission from photoionized hydrogen. See also Balmer limit Lyman-alpha blob Lyman-alpha forest Lyman-break galaxy Lyman series Haro 11 - One of the two galaxies in the local universe that 'leaks' Lyman continuum photons. Tololo-1247-232 - The second galaxy in the local universe that 'leaks' Lyman continuum photons. Pea galaxy - Many nearby Green Peas are confirmed LyC 'leakers'. Reionization References Emission spectroscopy Hydrogen physics
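The numbers quoted above for the Lyman limit are related by the photon-energy formula E = hf = hc/λ; a short check (a sketch using CODATA constant values) reproduces the stated frequency and energy.

```python
h, c, eV = 6.62607015e-34, 2.99792458e8, 1.602176634e-19   # Planck constant, speed of light, joules per eV
lam = 91.2e-9                                              # Lyman limit wavelength in metres
print(f"frequency: {c / lam:.3e} Hz, photon energy: {h * c / lam / eV:.2f} eV")
# -> about 3.29e15 Hz (3.29 million GHz) and about 13.6 eV
```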
Lyman continuum photons
[ "Physics", "Chemistry" ]
564
[ "Emission spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
8,765,022
https://en.wikipedia.org/wiki/Speed%20of%20electricity
The word electricity refers generally to the movement of electrons, or other charge carriers, through a conductor in the presence of a potential difference or an electric field. The speed of this flow has multiple meanings. In everyday electrical and electronic devices, the signals travel as electromagnetic waves typically at 50%–99% of the speed of light in vacuum. The electrons themselves move much more slowly. See drift velocity and electron mobility. Electromagnetic waves The speed at which energy or signals travel down a cable is actually the speed of the electromagnetic wave traveling along (guided by) the cable. I.e., a cable is a form of a waveguide. The propagation of the wave is affected by the interaction with the material(s) in and surrounding the cable, caused by the presence of electric charge carriers, interacting with the electric field component, and magnetic dipoles, interacting with the magnetic field component. These interactions are typically described using mean field theory by the permeability and the permittivity of the materials involved. The energy/signal usually flows overwhelmingly outside the electric conductor of a cable. The purpose of the conductor is thus not to conduct energy, but to guide the energy-carrying wave. Velocity of electromagnetic waves in good dielectrics The velocity of electromagnetic waves in a low-loss dielectric is given by where = speed of light in vacuum. = the permeability of free space = 4π x 10−7 H/m. = relative magnetic permeability of the material. Usually in good dielectrics, e.g. vacuum, air, Teflon, . . = the permittivity of free space = 8.854 x 10−12 F/m. = relative permittivity of the material. Usually in good conductors e.g. copper, silver, gold, . . Velocity of electromagnetic waves in good conductors The velocity of transverse electromagnetic (TEM) mode waves in a good conductor is given by where = frequency. = angular frequency = 2f. = conductivity of annealed copper = . = conductivity of the material relative to the conductivity of copper. For hard drawn copper may be as low as 0.97. . and permeability is defined as above in = the permeability of free space = 4π x 10−7 H/m. = relative magnetic permeability of the material. Nonmagnetic conductive materials such as copper typically have a near 1. . This velocity is the speed with which electromagnetic waves penetrate into the conductor and is not the drift velocity of the conduction electrons. In copper at 60Hz, 3.2m/s. As a consequence of Snell's Law and the extremely low speed, electromagnetic waves always enter good conductors in a direction that is within a milliradian of normal to the surface, regardless of the angle of incidence. Electromagnetic waves in circuits In the theoretical investigation of electric circuits, the velocity of propagation of the electromagnetic field through space is usually not considered; the field is assumed, as a precondition, to be present throughout space. The magnetic component of the field is considered to be in phase with the current, and the electric component is considered to be in phase with the voltage. The electric field starts at the conductor, and propagates through space at the velocity of light, which depends on the material it is traveling through. The electromagnetic fields do not move through space. It is the electromagnetic energy that moves. The corresponding fields simply grow and decline in a region of space in response to the flow of energy. 
At any point in space, the electric field corresponds not to the condition of the electric energy flow at that moment, but to that of the flow at a moment earlier. The latency is determined by the time required for the field to propagate from the conductor to the point under consideration. In other words, the greater the distance from the conductor, the more the electric field lags. Since the velocity of propagation is very high – about 300,000 kilometers per second – the wave of an alternating or oscillating current, even of high frequency, is of considerable length. At 60 cycles per second, the wavelength is 5,000 kilometers, and even at 100,000 hertz, the wavelength is 3 kilometers. This is a very large distance compared to those typically used in field measurement and application. The important part of the electric field of a conductor extends to the return conductor, which usually is only a few feet distant. At greater distance, the aggregate field can be approximated by the differential field between conductor and return conductor, which tend to cancel. Hence, the intensity of the electric field is usually inappreciable at a distance which is still small compared to the wavelength. Within the range in which an appreciable field exists, this field is practically in phase with the flow of energy in the conductor. That is, the velocity of propagation has no appreciable effect unless the return conductor is very distant, or entirely absent, or the frequency is so high that the distance to the return conductor is an appreciable portion of the wavelength. Charge carrier drift The drift velocity deals with the average velocity of a particle, such as an electron, due to an electric field. In general, an electron will propagate randomly in a conductor at the Fermi velocity. Free electrons in a conductor follow a random path. Without the presence of an electric field, the electrons have no net velocity. When a DC voltage is applied, the electron drift velocity will increase in speed proportionally to the strength of the electric field. The drift velocity in a 2 mm diameter copper wire in 1 ampere current is approximately 8 cm per hour. AC voltages cause no net movement. The electrons oscillate back and forth in response to the alternating electric field, over a distance of a few micrometers – see example calculation. See also Speed of light Speed of gravity Speed of sound Telegrapher's equations Reflections of signals on conducting lines References Further reading Alfvén, H. (1950). Cosmical electrodynamics. Oxford: Clarendon Press Alfvén, H. (1981). Cosmic plasma. Taylor & Francis US. "Velocity of Propagation of Electric Field", Theory and Calculation of Transient Electric Phenomena and Oscillations by Charles Proteus Steinmetz, Chapter VIII, p. 394-, McGraw-Hill, 1920. Fleming, J. A. (1911). Propagation of electric currents in telephone & telegraph conductors. New York: Van Nostrand Electromagnetism Electricity
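The drift-velocity figure quoted in the charge-carrier-drift paragraph above (roughly 8 cm per hour in a 2 mm diameter copper wire carrying 1 A) can be checked with the standard relation v = I/(nqA); a minimal sketch follows, in which the free-electron density of copper (about 8.5 × 10²⁸ per cubic metre) is an assumed textbook value rather than one stated in this article.

```python
import math

I = 1.0          # current, A
d = 2e-3         # wire diameter, m
n = 8.5e28       # assumed free-electron density of copper, electrons per m^3
q = 1.602e-19    # elementary charge, C

A = math.pi * (d / 2) ** 2     # cross-sectional area of the wire, m^2
v_drift = I / (n * q * A)      # drift velocity, m/s

print(f"Drift velocity: {v_drift:.2e} m/s "
      f"= {v_drift * 100 * 3600:.1f} cm per hour")
```

This yields about 2.3 × 10⁻⁵ m/s, or roughly 8 cm per hour, in agreement with the value stated in the text.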
Speed of electricity
[ "Physics" ]
1,340
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
8,765,524
https://en.wikipedia.org/wiki/Cyanogen%20halide
A cyanogen halide is a molecule consisting of cyanide and a halogen. Cyanogen halides are chemically classified as pseudohalogens. The cyanogen halides are a group of chemically reactive compounds which contain a cyano group (-CN) attached to a halogen element, such as fluorine, chlorine, bromine or iodine. Cyanogen halides are colorless, volatile, lacrimatory (tear-producing) and highly poisonous compounds. Production Halogen cyanides can be obtained by the reaction of halogens with metal cyanides, MCN + X2 → XCN + MX, or by the halogenation of hydrocyanic acid, HCN + X2 → XCN + HX, where M = metal and X = halogen. Cyanogen fluoride can be obtained by thermal decomposition of cyanuric fluoride. Properties Halogen cyanides are stable at normal pressure below 20 °C and in the absence of moisture or acids. In the presence of free halogens or Lewis acids they easily polymerize to cyanuric halides, for example cyanogen chloride to cyanuric chloride. They are very toxic and tear-inducing (lachrymatory). Cyanogen chloride melts at −6 °C and boils at about 13 °C. Cyanogen bromide melts at 52 °C and boils at 61 °C. Cyanogen iodide sublimes at normal pressure. Cyanogen fluoride boils at −46 °C and polymerizes at room temperature to cyanuric fluoride. In some of their reactions they resemble halogens. The hydrolysis of cyanogen halides proceeds in different ways depending on the electronegativity of the halogen and the resulting difference in polarity of the X–C bond. Cyanogen fluoride is a gas produced by heating cyanuric fluoride. Cyanogen chloride is a liquid produced by reacting chlorine with hydrocyanic acid. Biomedical effects and metabolism of cyanogen halides Cyanide is naturally present in human tissues in very small quantities. It is metabolized by rhodanese, a liver enzyme, at a rate of approximately 17 μg/kg·min. Rhodanese catalyzes the irreversible reaction of cyanide with sulfane sulfur to form thiocyanate, which is non-toxic and can be excreted in the urine. Under normal conditions, the availability of sulfane sulfur, which acts as a substrate for rhodanese, is the limiting factor. Sulfur can be administered therapeutically as sodium thiosulfate to accelerate the reaction. A lethal dose of cyanide is time-dependent because of the body's ability to detoxify and excrete small amounts of cyanide through rhodanese-catalyzed conversion to thiocyanate. If an amount of cyanide is absorbed slowly, rhodanese may be able to render it biologically non-toxic, whereas the same amount administered over a short period of time may be lethal. Use Halogen cyanides, in particular cyanogen chloride and cyanogen bromide, are important starting materials for the incorporation of the cyanogen group and for the production of other carbonic acid derivatives and heterocycles. Cyanogen chloride has been suggested for military use as a poison gas. Cyanogen bromide is a solid that is prepared by reacting bromine with hydrocyanic acid salts; it has been used as a chemical pesticide against insects and rodents and as a reagent for the study of protein structure. Cyanogen halides have been found to act as electrolytes in liquid solvents such as sulfur dioxide, arsenous chloride, and sulfuryl chloride. See also Cyanogen fluoride Cyanogen chloride Cyanogen bromide Cyanogen iodide References Halides Triatomic molecules Cyano compounds Pseudohalogens
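As a rough illustration of the time dependence described in the metabolism section above, the short sketch below converts the quoted rhodanese rate (about 17 μg of cyanide per kg of body mass per minute) into a detoxification capacity; the 70 kg body mass is purely an assumed example value.

```python
rate_ug_per_kg_min = 17.0   # rhodanese detoxification rate quoted in the text, ug/(kg*min)
body_mass_kg = 70.0         # assumed body mass, for illustration only

per_minute_mg = rate_ug_per_kg_min * body_mass_kg / 1000.0   # mg of cyanide per minute
per_hour_mg = per_minute_mg * 60.0                            # mg of cyanide per hour

print(f"Approximate detoxification capacity: {per_minute_mg:.2f} mg/min, "
      f"about {per_hour_mg:.0f} mg/hour")
```

Under these assumptions the capacity comes to roughly 1.2 mg per minute, which is why a quantity of cyanide absorbed over many hours can be tolerated while the same quantity taken over a short period may be lethal.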
Cyanogen halide
[ "Physics", "Chemistry" ]
811
[ "Pseudohalogens", "Inorganic compounds", "Molecules", "Triatomic molecules", "Matter" ]
8,769,106
https://en.wikipedia.org/wiki/Specific-pathogen-free
Specific-pathogen-free (SPF) is a term used for laboratory animals that are guaranteed free of particular pathogens. Use of SPF animals ensures that specified diseases do not interfere with an experiment. For example, absence of respiratory pathogens such as influenza is desirable when investigating a drug's effect on lung function. Practical Completely germ-free The animals can be born through a caesarian section then special care taken so the newborn does not acquire infections, such as use of sterile isolation units with a positive pressure differential to keep all outside air and pathogens from entering. Everything that needs to be inserted into the isolator, such as food, water and equipment needs to be completely sterilized and disinfected, and inserted through an airlock that can be disinfected before opening from the inside. A disadvantage is that any contact with pathogens may be fatal. This is because the animals have no protective bacterial microbiota on the skin or in the intestine or respiratory tract, and because they have no natural immunity to common infections as they have never been exposed to them. Specific-pathogen-free To certify SPF, the population is checked for presence of (antibodies against) the specified pathogens. For SPF eggs the specific pathogens are: Avian Adenovirus Group I, Avian Adenovirus Group II (HEV), Avian Adenovirus Group III (EDS), Avian Encephalomyelitis, Avian Influenza (Type A), Avian Nephritis Virus, Avian Paramyxovirus Type 2, Avian Reovirus S 1133, Avian Rhinotracheitis Virus; Avian Rotavirus; Avian Tuberculosis M. avium; Chicken Anemia Virus; Endogenous GS Antigen; Fowl Pox; Hemophilus paragallinarum Serovars A, B, C; Infectious Bronchitis - Ark; Infectious Bronchitis - Conn; Infectious Bronchitis - JMK; Infectious Bronchitis - Mass; Infectious Bursal Disease Type 1; Infectious Bursal Disease Type 2; Infectious Laryngotracheitis; Lymphoid Leukosis A, B; Avian Lymphoid Leukosis Virus; Lymphoid Leukosis Viruses A, B, C, D, E, J; Marek's Disease (Serotypes 1,2, 3); Mycoplasma gallisepticum; Mycoplasma synoviae; Newcastle Disease LaSota; Reticuloendotheliosis Virus; Salmonella pullorum-gallinarum; Salmonella species; Minimal disease status When by accident some infection does occur, the population is said to have minimal disease status. Monitoring The population is regularly checked to ensure the status still holds. Applications SPF eggs can be used to make vaccines. Mice raised under SPF conditions (no Helicobacter pylori) were shown to develop colitis rather than enterocolitis. See also Filtered Air Positive Pressure Gnotobiotic animal References Animal testing Animal models
Specific-pathogen-free
[ "Chemistry", "Biology" ]
646
[ "Animal testing", "Animal models", "Model organisms" ]
8,769,764
https://en.wikipedia.org/wiki/Cryochemistry
Cryochemistry is the study of chemical interactions at very low temperatures, conventionally below about −150 °C (roughly 123 K). It is derived from the Greek word cryos, meaning 'cold'. It overlaps with many other sciences, including chemistry, cryobiology, condensed matter physics, and even astrochemistry. Cryochemistry has been a topic of interest since liquid nitrogen, which freezes at −210 °C, became commonly available. Cryogenic-temperature chemical interactions are an important mechanism for studying the detailed pathways of chemical reactions by reducing the confusion introduced by thermal fluctuations. Cryochemistry forms the foundation for cryobiology, which uses slowed or stopped biological processes for medical and research purposes. Low temperature behaviours As a material cools, the relative motion of its component molecules/atoms decreases, and its temperature decreases. Cooling can continue until all motion ceases and its kinetic energy, or energy of motion, disappears. This condition is known as absolute zero, and it forms the basis for the Kelvin temperature scale, which measures the temperature above absolute zero. Zero degrees Celsius (0 °C) corresponds to 273.15 K. At absolute zero most elements become a solid, but not all behave as predictably as this; for instance, helium remains a highly unusual liquid. The chemistry between substances, however, does not disappear, even near absolute zero temperatures, since separated molecules/atoms can always combine to lower their total energy. Almost every molecule or element will show different properties at different temperatures; if cold enough, some functions are lost entirely. Cryogenic chemistry can lead to very different results compared with standard chemistry, and new chemical routes to substances may be available at cryogenic temperatures, such as the formation of argon fluorohydride, which is a stable compound only at extremely low temperatures. Methods of cooling One method used to cool molecules to temperatures near absolute zero is laser cooling. In the Doppler cooling process, lasers are used to remove energy from electrons of a given molecule to slow or cool the molecule down. This method has applications in quantum mechanics and is related to particle traps and the Bose–Einstein condensate. All of these methods use a "trap" consisting of lasers pointed at opposite equatorial angles on a specific point in space. Photons from the laser beams strike the gaseous atoms and interact with their outer electrons. These repeated interactions reduce the kinetic energy fraction by fraction, slowing or cooling the molecules down. Laser cooling has also been used to help improve atomic clocks and atom optics. Ultracold studies are not usually focused on chemical interactions, but rather on fundamental chemical properties. Because of the extremely low temperatures, diagnosing the chemical status is a major issue when studying low temperature physics and chemistry. The primary techniques in use today are optical – many types of spectroscopy are available, but these require special apparatus with vacuum windows that provide room temperature access to cryogenic processes. See also Thermochemistry Cryogenics Bose–Einstein condensate References Moskovits, M., and Ozin, G.A., (1976) Cryochemistry, J. Wiley & Sons, New York Dillinger, J. R. (1957). Low temperature physics & chemistry (edited by Joseph R. Dillinger.) Madison, Wisconsin: University of Wisconsin Press. Naduvalath, B. (2013). "Ultracold molecules." Phillips, W. D. (2012). "Laser cooling" Parpia, J. M., & Lee, D.M. (2012). 
"Absolute zero" Hasegawa, Y., Nakamura, D., Murata, M., Yamamoto, H., & Komine, T. (2010). "High-precision temperature control and stabilization using a cryocooler. Review of Scientific Instruments", doi:10.1063/1.3484192 External links Physical chemistry Thermochemistry Cryobiology Condensed matter physics Astrochemistry
Cryochemistry
[ "Physics", "Chemistry", "Materials_science", "Astronomy", "Engineering", "Biology" ]
805
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Thermochemistry", "Astronomical sub-disciplines", "Phases of matter", "Cryobiology", "Materials science", "Astrochemistry", "Condensed matter physics", "nan", "Biochemistry", "Physical chemistry", "Matter"...
6,910
https://en.wikipedia.org/wiki/Cloning
Cloning is the process of producing individual organisms with identical genomes, either by natural or artificial means. In nature, some organisms produce clones through asexual reproduction; this reproduction of an organism by itself without a mate is known as parthenogenesis. In the field of biotechnology, cloning is the process of creating clones of organisms, of cells, and of DNA fragments. The artificial cloning of organisms, sometimes known as reproductive cloning, is often accomplished via somatic-cell nuclear transfer (SCNT), a cloning method in which a viable embryo is created from a somatic cell and an egg cell. In 1996, Dolly the sheep achieved notoriety for being the first mammal cloned from a somatic cell. Another example of artificial cloning is molecular cloning, a technique in molecular biology in which a single living cell is used to clone a large population of cells that contain identical DNA molecules. In bioethics, there are a variety of ethical positions regarding the practice and possibilities of cloning. The use of embryonic stem cells, which can be produced through SCNT, in some stem cell research has attracted controversy. Cloning has been proposed as a means of reviving extinct species. In popular culture, the concept of cloning, particularly human cloning, is often depicted in science fiction; depictions commonly involve themes related to identity, the recreation of historical figures or extinct species, or cloning for exploitation (e.g. cloning soldiers for warfare). Etymology Coined by Herbert J. Webber, the term clone derives from the Ancient Greek word κλών (klōn), 'twig', referring to the process whereby a new plant can be created from a twig. In botany, the term lusus was used. In horticulture, the spelling clon was used until the early twentieth century; the final e came into use to indicate the vowel is a "long o" instead of a "short o". Since the term entered the popular lexicon in a more general context, the spelling clone has been used exclusively. Natural cloning Natural cloning is the production of clones without the involvement of genetic engineering techniques or human intervention (i.e. artificial cloning). Natural cloning occurs through a variety of natural mechanisms, from single-celled organisms to complex multicellular organisms, and has allowed life forms to spread for hundreds of millions of years. Versions of this reproduction method are used by plants, fungi, and bacteria, and it is also the way that clonal colonies reproduce themselves. Some of the mechanisms explored and used in plants and animals are binary fission, budding, fragmentation, and parthenogenesis. It can also occur during some forms of asexual reproduction, when a single parent organism produces genetically identical offspring by itself. Many plants are well known for their natural cloning ability, including blueberry plants, hazel trees, the Pando trees, the Kentucky coffeetree, Myrica, and the American sweetgum. It also occurs accidentally in the case of identical twins, which are formed when a fertilized egg splits, creating two or more embryos that carry identical DNA. Molecular cloning Molecular cloning refers to the process of making multiple copies of a DNA molecule. Cloning is commonly used to amplify DNA fragments containing whole genes, but it can also be used to amplify any DNA sequence such as promoters, non-coding sequences and randomly fragmented DNA. It is used in a wide array of biological experiments and practical applications ranging from genetic fingerprinting to large scale protein production. 
Occasionally, the term cloning is misleadingly used to refer to the identification of the chromosomal location of a gene associated with a particular phenotype of interest, such as in positional cloning. In practice, localization of the gene to a chromosome or genomic region does not necessarily enable one to isolate or amplify the relevant genomic sequence. To amplify any DNA sequence in a living organism, that sequence must be linked to an origin of replication, which is a sequence of DNA capable of directing the propagation of itself and any linked sequence. However, a number of other features are needed, and a variety of specialised cloning vectors (small pieces of DNA into which a foreign DNA fragment can be inserted) exist that allow protein production, affinity tagging, single-stranded RNA or DNA production and a host of other molecular biology tools. Cloning of any DNA fragment essentially involves four steps: fragmentation – breaking apart a strand of DNA; ligation – gluing together pieces of DNA in a desired sequence; transfection – inserting the newly formed pieces of DNA into cells; and screening/selection – selecting out the cells that were successfully transfected with the new DNA. Although these steps are invariable among cloning procedures, a number of alternative routes can be selected; these are summarized as a cloning strategy. Initially, the DNA of interest needs to be isolated to provide a DNA segment of suitable size. Subsequently, a ligation procedure is used where the amplified fragment is inserted into a vector (piece of DNA). The vector (which is frequently circular) is linearised using restriction enzymes, and incubated with the fragment of interest under appropriate conditions with an enzyme called DNA ligase. Following ligation, the vector with the insert of interest is transfected into cells. A number of alternative techniques are available, such as chemical sensitisation of cells, electroporation, optical injection and biolistics. Finally, the transfected cells are cultured. As the aforementioned procedures are of particularly low efficiency, there is a need to identify the cells that have been successfully transfected with the vector construct containing the desired insertion sequence in the required orientation. Modern cloning vectors include selectable antibiotic resistance markers, which allow only cells in which the vector has been transfected to grow. Additionally, the cloning vectors may contain colour selection markers, which provide blue/white screening (α-complementation) on X-gal medium. Nevertheless, these selection steps do not absolutely guarantee that the DNA insert is present in the cells obtained. Further investigation of the resulting colonies is required to confirm that cloning was successful. This may be accomplished by means of PCR, restriction fragment analysis and/or DNA sequencing. Cell cloning Cloning unicellular organisms Cloning a cell means to derive a population of cells from a single cell. In the case of unicellular organisms such as bacteria and yeast, this process is remarkably simple and essentially only requires the inoculation of the appropriate medium. However, in the case of cell cultures from multi-cellular organisms, cell cloning is an arduous task as these cells will not readily grow in standard media. A useful tissue culture technique used to clone distinct lineages of cell lines involves the use of cloning rings (cylinders). 
In this technique a single-cell suspension of cells that have been exposed to a mutagenic agent or drug used to drive selection is plated at high dilution to create isolated colonies, each arising from a single and potentially clonal distinct cell. At an early growth stage when colonies consist of only a few cells, sterile polystyrene rings (cloning rings), which have been dipped in grease, are placed over an individual colony and a small amount of trypsin is added. Cloned cells are collected from inside the ring and transferred to a new vessel for further growth. Cloning stem cells Somatic-cell nuclear transfer, popularly known as SCNT, can also be used to create embryos for research or therapeutic purposes. The most likely purpose for this is to produce embryos for use in stem cell research. This process is also called "research cloning" or "therapeutic cloning". The goal is not to create cloned human beings (called "reproductive cloning"), but rather to harvest stem cells that can be used to study human development and to potentially treat disease. While a clonal human blastocyst has been created, stem cell lines are yet to be isolated from a clonal source. Therapeutic cloning is achieved by creating embryonic stem cells in the hopes of treating diseases such as diabetes and Alzheimer's. The process begins by removing the nucleus (containing the DNA) from an egg cell and inserting a nucleus from the adult cell to be cloned. In the case of someone with Alzheimer's disease, the nucleus from a skin cell of that patient is placed into an empty egg. The reprogrammed cell begins to develop into an embryo because the egg reacts with the transferred nucleus. The embryo will become genetically identical to the patient. The embryo will then form a blastocyst which has the potential to form/become any cell in the body. The reason why SCNT is used for cloning is because somatic cells can be easily acquired and cultured in the lab. This process can either add or delete specific genomes of farm animals. A key point to remember is that cloning is achieved when the oocyte maintains its normal functions and instead of using sperm and egg genomes to replicate, the donor's somatic cell nucleus is inserted into the oocyte. The oocyte will react to the somatic cell nucleus, the same way it would to a sperm cell's nucleus. The process of cloning a particular farm animal using SCNT is relatively the same for all animals. The first step is to collect the somatic cells from the animal that will be cloned. The somatic cells could be used immediately or stored in the laboratory for later use. The hardest part of SCNT is removing maternal DNA from an oocyte at metaphase II. Once this has been done, the somatic nucleus can be inserted into an egg cytoplasm. This creates a one-cell embryo. The grouped somatic cell and egg cytoplasm are then introduced to an electrical current. This energy will hopefully allow the cloned embryo to begin development. The successfully developed embryos are then placed in surrogate recipients, such as a cow or sheep in the case of farm animals. SCNT is seen as a good method for producing agriculture animals for food consumption. It successfully cloned sheep, cattle, goats, and pigs. Another benefit is SCNT is seen as a solution to clone endangered species that are on the verge of going extinct. However, stresses placed on both the egg cell and the introduced nucleus can be enormous, which led to a high loss in resulting cells in early research. 
For example, the cloned sheep Dolly was born after 277 eggs were used for SCNT, which created 29 viable embryos. Only three of these embryos survived until birth, and only one survived to adulthood. As the procedure could not be automated, and had to be performed manually under a microscope, SCNT was very resource intensive. The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from being well understood. However, by 2014 researchers were reporting cloning success rates of seven to eight out of ten and in 2016, a Korean Company Sooam Biotech was reported to be producing 500 cloned embryos per day. In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria that contain their own mitochondrial DNA are left behind. The resulting hybrid cells retain those mitochondrial structures which originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus. Organism cloning Organism cloning (also called reproductive cloning) refers to the procedure of creating a new multicellular organism, genetically identical to another. In essence this form of cloning is an asexual method of reproduction, where fertilization or inter-gamete contact does not take place. Asexual reproduction is a naturally occurring phenomenon in many species, including most plants and some insects. Scientists have made some major achievements with cloning, including the asexual reproduction of sheep and cows. There is a lot of ethical debate over whether or not cloning should be used. However, cloning, or asexual propagation, has been common practice in the horticultural world for hundreds of years. Horticultural The term clone is used in horticulture to refer to descendants of a single plant which were produced by vegetative reproduction or apomixis. Many horticultural plant cultivars are clones, having been derived from a single individual, multiplied by some process other than sexual reproduction. As an example, some European cultivars of grapes represent clones that have been propagated for over two millennia. Other examples are potato and banana. Grafting can be regarded as cloning, since all the shoots and branches coming from the graft are genetically a clone of a single individual, but this particular kind of cloning has not come under ethical scrutiny and is generally treated as an entirely different kind of operation. Many trees, shrubs, vines, ferns and other herbaceous perennials form clonal colonies naturally. Parts of an individual plant may become detached by fragmentation and grow on to become separate clonal individuals. A common example is in the vegetative reproduction of moss and liverwort gametophyte clones by means of gemmae. Some vascular plants e.g. dandelion and certain viviparous grasses also form seeds asexually, termed apomixis, resulting in clonal populations of genetically identical individuals. Parthenogenesis Clonal derivation exists in nature in some animal species and is referred to as parthenogenesis (reproduction of an organism by itself without a mate). This is an asexual form of reproduction that is only found in females of some insects, crustaceans, nematodes, fish (for example the hammerhead shark), Cape honeybees, and lizards including the Komodo dragon and several whiptails. The growth and development occurs without fertilization by a male. 
In plants, parthenogenesis means the development of an embryo from an unfertilized egg cell, and is a component process of apomixis. In species that use the XY sex-determination system, the offspring will always be female. An example is the little fire ant (Wasmannia auropunctata), which is native to Central and South America but has spread throughout many tropical environments. Artificial cloning of organisms Artificial cloning of organisms may also be called reproductive cloning. First steps Hans Spemann, a German embryologist was awarded a Nobel Prize in Physiology or Medicine in 1935 for his discovery of the effect now known as embryonic induction, exercised by various parts of the embryo, that directs the development of groups of cells into particular tissues and organs. In 1924 he and his student, Hilde Mangold, were the first to perform somatic-cell nuclear transfer using amphibian embryos – one of the first steps towards cloning. Methods Reproductive cloning generally uses "somatic cell nuclear transfer" (SCNT) to create animals that are genetically identical. This process entails the transfer of a nucleus from a donor adult cell (somatic cell) to an egg from which the nucleus has been removed, or to a cell from a blastocyst from which the nucleus has been removed. If the egg begins to divide normally it is transferred into the uterus of the surrogate mother. Such clones are not strictly identical since the somatic cells may contain mutations in their nuclear DNA. Additionally, the mitochondria in the cytoplasm also contains DNA and during SCNT this mitochondrial DNA is wholly from the cytoplasmic donor's egg, thus the mitochondrial genome is not the same as that of the nucleus donor cell from which it was produced. This may have important implications for cross-species nuclear transfer in which nuclear-mitochondrial incompatibilities may lead to death. Artificial embryo splitting or embryo twinning, a technique that creates monozygotic twins from a single embryo, is not considered in the same fashion as other methods of cloning. During that procedure, a donor embryo is split in two distinct embryos, that can then be transferred via embryo transfer. It is optimally performed at the 6- to 8-cell stage, where it can be used as an expansion of IVF to increase the number of available embryos. If both embryos are successful, it gives rise to monozygotic (identical) twins. Dolly the sheep Dolly, a Finn-Dorset ewe, was the first mammal to have been successfully cloned from an adult somatic cell. Dolly was formed by taking a cell from the udder of her 6-year-old biological mother. Dolly's embryo was created by taking the cell and inserting it into a sheep ovum. It took 435 attempts before an embryo was successful. The embryo was then placed inside a female sheep that went through a normal pregnancy. She was cloned at the Roslin Institute in Scotland by British scientists Sir Ian Wilmut and Keith Campbell and lived there from her birth in 1996 until her death in 2003 when she was six. She was born on 5 July 1996 but not announced to the world until 22 February 1997. Her stuffed remains were placed at Edinburgh's Royal Museum, part of the National Museums of Scotland. Dolly was publicly significant because the effort showed that genetic material from a specific adult cell, designed to express only a distinct subset of its genes, can be redesigned to grow an entirely new organism. 
Before this demonstration, it had been shown by John Gurdon that nuclei from differentiated cells could give rise to an entire organism after transplantation into an enucleated egg. However, this concept was not yet demonstrated in a mammalian system. The first mammalian cloning (resulting in Dolly) had a success rate of 29 embryos per 277 fertilized eggs, which produced three lambs at birth, one of which lived. In a bovine experiment involving 70 cloned calves, one-third of the calves died quite young. The first successfully cloned horse, Prometea, took 814 attempts. Notably, although the first clones were frogs, no adult cloned frog has yet been produced from a somatic adult nucleus donor cell. There were early claims that Dolly had pathologies resembling accelerated aging. Scientists speculated that Dolly's death in 2003 was related to the shortening of telomeres, DNA-protein complexes that protect the end of linear chromosomes. However, other researchers, including Ian Wilmut who led the team that successfully cloned Dolly, argue that Dolly's early death due to respiratory infection was unrelated to problems with the cloning process. This idea that the nuclei have not irreversibly aged was shown in 2013 to be true for mice. Dolly was named after performer Dolly Parton because the cells cloned to make her were from a mammary gland cell, and Parton is known for her ample cleavage. Species cloned and applications The modern cloning techniques involving nuclear transfer have been successfully performed on several species. Notable experiments include: Tadpole: (1952) Robert Briggs and Thomas J. King successfully cloned northern leopard frogs: thirty-five complete embryos and twenty-seven tadpoles from one-hundred and four successful nuclear transfers. Carp: (1963) In China, embryologist Tong Dizhou produced the world's first cloned fish by inserting the DNA from a cell of a male carp into an egg from a female carp. He published the findings in a Chinese science journal. Zebrafish: (1981) George Streisinger produced the first cloned vertebrate. Sheep: (1984) Steen Willadsen produced the first cloned mammal from early embryonic cells. In June 1995, the Roslin Institute cloned Megan and Morag from differentiated embryonic cells. In July 1996, PPL Therapeutics and the Roslin Institute cloned Dolly the sheep from a somatic cell. Mouse: (1986) A mouse was successfully cloned from an early embryonic cell. In 1987, Soviet scientists Levon Chaylakhyan, Veprencev, Sviridova, and Nikitin cloned Masha, a mouse. Rhesus monkey: (October 1999) The Oregon National Primate Research Center cloned Tetra from embryo splitting and not nuclear transfer: a process more akin to artificial formation of twins. Pig: (March 2000) PPL Therapeutics cloned five piglets. By 2014, BGI in China was producing 500 cloned pigs a year to test new medicines. Gaur: (2001) was the first endangered species cloned. Cattle: Alpha and Beta (males, 2001) and (2005), Brazil In 2023, Chinese scientists reported the cloning of three supercows with a milk productivity "nearly 1.7 times the amount of milk an average cow in the United States produced in 2021" and a plan for 1,000 of such super cows in the near-term. According to a news report "[i]n many countries, including the United States, farmers breed clones with conventional animals to add desirable traits, such as high milk production or disease resistance, into the gene pool". 
Cat: CopyCat "CC" (female, late 2001), Little Nicky, 2004, was the first cat cloned for commercial reasons Rat: Ralph, the first cloned rat (2003) Mule: Idaho Gem, a john mule born 4 May 2003, was the first horse-family clone. Horse: Prometea, a Haflinger female born 28 May 2003, was the first horse clone. Przewalksi's Horse: An ongoing cloning program by the San Diego Zoo Wildlife Alliance and Revive & Restore attempts to reintroduce genetic diversity to this endangered species. Kurt, the first cloned Przewalski's horse, was born in 2020. He was cloned from the skin tissue of a stallion which was preserved in 1980. "Trey" was born in 2023. He was cloned from the same stallion's tissue as Kurt. Dog: Snuppy, a male Afghan hound was the first cloned dog (2005). In 2017, the world's first gene-editing clone dog, Apple, was created by Sinogene Biotechnology. Sooam Biotech, South Korea, was reported in 2015 to have cloned 700 dogs to date for their owners, including two Yakutian Laika hunting dogs, which are seriously endangered due to crossbreeding. Cloning of super sniffer dogs was reported in 2011, four years afterwards when the dogs started working. Cloning of a successful rescue dog was also reported in 2009 and of a similar police dog in 2019. Cancer-sniffing dogs have also been cloned. A review concluded that "qualified elite working dogs can be produced by cloning a working dog that exhibits both an appropriate temperament and good health." Wolf: Snuwolf and Snuwolffy, the first two cloned female wolves (2005). Water buffalo: Samrupa was the first cloned water buffalo. It was born on 6 February 2009, at India's Karnal National Dairy Research Institute but died five days later due to lung infection. Pyrenean ibex: (2009) was the first extinct animal to be cloned back to life; the clone lived for seven minutes before dying of lung defects. The extinct Pyrenean ibex is a sub-species of the still-thriving Spanish ibex. Camel: (2009) Injaz, was the first cloned camel. Pashmina goat: (2012) Noori, is the first cloned pashmina goat. Scientists at the faculty of veterinary sciences and animal husbandry of Sher-e-Kashmir University of Agricultural Sciences and Technology of Kashmir successfully cloned the first Pashmina goat (Noori) using the advanced reproductive techniques under the leadership of Riaz Ahmad Shah. Goat: (2001) Scientists of Northwest A&F University successfully cloned the first goat which use the adult female cell. Gastric brooding frog: (2013) The gastric brooding frog, Rheobatrachus silus, thought to have been extinct since 1983 was cloned in Australia, although the embryos died after a few days. Macaque monkey: (2017) First successful cloning of a primate species using nuclear transfer, with the birth of two live clones named Zhong Zhong and Hua Hua. Conducted in China in 2017, and reported in January 2018. In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that was used with Zhong Zhong and Hua Hua and Dolly the sheep, and the gene-editing Crispr-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies Lulu and Nana. The monkey clones were made to study several medical diseases. Black-footed ferret: (2020) A team of scientists cloned a female named Willa, who died in the mid-1980s and left no living descendants. Her clone, a female named Elizabeth Ann, was born on 10 December. 
Scientists hope that the contribution of this individual will alleviate the effects of inbreeding and help black-footed ferrets better cope with plague. Experts estimate that this female's genome contains three times as much genetic diversity as any of the modern black-footed ferrets. First artificial parthenogenesis in mammals: (2022) Viable mouse offspring were born from unfertilized eggs via targeted DNA methylation editing of seven imprinting control regions. Human cloning Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissues. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass legislation regarding human cloning and its legality. At present, scientists have no intention of trying to clone people, and they believe their results should spark a wider discussion about the laws and regulations the world needs to regulate cloning. Two commonly discussed types of theoretical human cloning are therapeutic cloning and reproductive cloning. Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants, and is an active area of research, but is not in medical practice anywhere in the world. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and, more recently, pluripotent stem cell induction. Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues. Ethical issues of cloning There are a variety of ethical positions regarding the possibilities of cloning, especially human cloning. While many of these views are religious in origin, the questions raised by cloning are faced by secular perspectives as well. Perspectives on human cloning are theoretical, as human therapeutic and reproductive cloning are not commercially used; animals are currently cloned in laboratories and in livestock production. Advocates support development of therapeutic cloning to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology. Opponents of cloning have concerns that the technology is not yet developed enough to be safe and that it could be prone to abuse (leading to the generation of humans from whom organs and tissues would be harvested), as well as concerns about how cloned individuals could integrate with families and with society at large. Cloning humans could lead to serious violations of human rights. Religious groups are divided, with some opposing the technology as usurping "God's place" and, to the extent embryos are used, destroying a human life; others support therapeutic cloning's potential life-saving benefits. There is at least one religion, Raëlism, in which cloning plays a major role. Contemporary work on this topic is concerned with the ethics, adequate regulation and issues of any cloning carried out by humans, not potentially by extraterrestrials (including in the future), and largely also not replication – also described as mind cloning – of potential whole brain emulations. 
Cloning of animals is opposed by animal-groups due to the number of cloned animals that suffer from malformations before they die, and while food from cloned animals has been approved as safe by the US FDA, its use is opposed by groups concerned about food safety. In practical terms, the inclusion of "licensing requirements for embryo research projects and fertility clinics, restrictions on the commodification of eggs and sperm, and measures to prevent proprietary interests from monopolizing access to stem cell lines" in international cloning regulations has been proposed, albeit e.g. effective oversight mechanisms or cloning requirements have not been described. Cloning extinct and endangered species Cloning, or more precisely, the reconstruction of functional DNA from extinct species has, for decades, been a dream. Possible implications of this were dramatized in the 1984 novel Carnosaur and the 1990 novel Jurassic Park. The best current cloning techniques have an average success rate of 9.4 percent (and as high as 25 percent) when working with familiar species such as mice, while cloning wild animals is usually less than 1 percent successful. Conservation cloning Several tissue banks have come into existence, including the "Frozen zoo" at the San Diego Zoo, to store frozen tissue from the world's rarest and most endangered species. This is also referred to as "Conservation cloning". Engineers have proposed a "lunar ark" in 2021 – storing millions of seed, spore, sperm and egg samples from Earth's contemporary species in a network of lava tubes on the Moon as a genetic backup. Similar proposals have been made since at least 2008. These also include sending human customer DNA, and a proposal for "a lunar backup record of humanity" that includes genetic information by Avi Loeb et al. Scientists at the University of Newcastle and University of New South Wales announced in March 2013 that the very recently extinct gastric-brooding frog would be the subject of a cloning attempt to resurrect the species. Many such "De-extinction" projects are being championed by the non-profit Revive & Restore. De-extinction One of the most anticipated targets for cloning was once the woolly mammoth, but attempts to extract DNA from frozen mammoths have been unsuccessful, though a joint Russo-Japanese team is currently working toward this goal. In January 2011, it was reported by Yomiuri Shimbun that a team of scientists headed by Akira Iritani of Kyoto University had built upon research by Dr. Wakayama, saying that they will extract DNA from a mammoth carcass that had been preserved in a Russian laboratory and insert it into the egg cells of an Asian elephant in hopes of producing a mammoth embryo. The researchers said they hoped to produce a baby mammoth within six years. The challenges are formidable. Extensively degraded DNA that may be suitable for sequencing may not be suitable for cloning; it would have to be synthetically reconstituted. In any case, with currently available technology, DNA alone is not suitable for mammalian cloning; intact viable cell nuclei are required. Patching pieces of reconstituted mammoth DNA into an Asian elephant cell nucleus would result in an elephant-mammoth hybrid rather than a true mammoth. Moreover, true de-extinction of the wooly mammoth species would require a breeding population, which would require cloning of multiple genetically distinct but reproductively compatible individuals, multiplying both the amount of work and the uncertainties involved in the project. 
There are potentially other post-cloning problems associated with the survival of a reconstructed mammoth, such as the requirement of ruminants for specific symbiotic microbiota in their stomachs for digestion. In 2022, scientists showed major limitations and the scale of challenge of genetic-editing-based de-extinction, suggesting resources spent on more comprehensive de-extinction projects such as of the woolly mammoth may currently not be well allocated and substantially limited. Their analyses "show that even when the extremely high-quality Norway brown rat (R. norvegicus) is used as a reference, nearly 5% of the genome sequence is unrecoverable, with 1,661 genes recovered at lower than 90% completeness, and 26 completely absent", complicated further by that "distribution of regions affected is not random, but for example, if 90% completeness is used as the cutoff, genes related to immune response and olfaction are excessively affected" due to which "a reconstructed Christmas Island rat would lack attributes likely critical to surviving in its natural or natural-like environment". In a 2021 online session of the Russian Geographical Society, Russia's defense minister Sergei Shoigu mentioned using the DNA of 3,000-year-old Scythian warriors to potentially bring them back to life. The idea was described as absurd at least at this point in news reports and it was noted that Scythians likely weren't skilled warriors by default. The idea of cloning Neanderthals or bringing them back to life in general is controversial but some scientists have stated that it may be possible in the future and have outlined several issues or problems with such as well as broad rationales for doing so. Unsuccessful attempts In 2001, a cow named Bessie gave birth to a cloned Asian gaur, an endangered species, but the calf died after two days. In 2003, a banteng was successfully cloned, followed by three African wildcats from a thawed frozen embryo. These successes provided hope that similar techniques (using surrogate mothers of another species) might be used to clone extinct species. Anticipating this possibility, tissue samples from the last bucardo (Pyrenean ibex) were frozen in liquid nitrogen immediately after it died in 2000. Researchers are also considering cloning endangered species such as the Giant panda and Cheetah. In 2002, geneticists at the Australian Museum announced that they had replicated DNA of the thylacine (Tasmanian tiger), at the time extinct for about 65 years, using polymerase chain reaction. However, on 15 February 2005 the museum announced that it was stopping the project after tests showed the specimens' DNA had been too badly degraded by the (ethanol) preservative. On 15 May 2005 it was announced that the thylacine project would be revived, with new participation from researchers in New South Wales and Victoria. In 2003, for the first time, an extinct animal, the Pyrenean ibex mentioned above was cloned, at the Centre of Food Technology and Research of Aragon, using the preserved frozen cell nucleus of the skin samples from 2001 and domestic goat egg-cells. The ibex died shortly after birth due to physical defects in its lungs. Lifespan After an eight-year project involving the use of a pioneering cloning technique, Japanese researchers created 25 generations of healthy cloned mice with normal lifespans, demonstrating that clones are not intrinsically shorter-lived than naturally born animals. 
Other sources have noted that the offspring of clones tend to be healthier than the original clones and indistinguishable from animals produced naturally. Some posited that Dolly the sheep may have aged more quickly than naturally born animals, as she died relatively early for a sheep at the age of six. Ultimately, her death was attributed to a respiratory illness, and the "advanced aging" theory is disputed. A 2016 study indicated that once cloned animals survive the first month or two of life they are generally healthy. However, early pregnancy loss and neonatal losses are still greater with cloning than with natural conception or assisted reproduction (IVF). Current research is attempting to overcome these problems. In popular culture Discussion of cloning in the popular media often presents the subject negatively. In the 8 November 1993 issue of Time, cloning was portrayed in a negative way, modifying Michelangelo's Creation of Adam to depict Adam with five identical hands. Newsweek's 10 March 1997 issue also critiqued the ethics of human cloning, and included a graphic depicting identical babies in beakers. The concept of cloning, particularly human cloning, has featured in a wide variety of science fiction works. An early fictional depiction of cloning is Bokanovsky's Process, which features in Aldous Huxley's 1931 dystopian novel Brave New World. The process is applied to fertilized human eggs in vitro, causing them to split into identical genetic copies of the original. Following renewed interest in cloning in the 1950s, the subject was explored further in works such as Poul Anderson's 1953 story UN-Man, which describes a technology called "exogenesis", and Gordon Rattray Taylor's book The Biological Time Bomb, which popularised the term "cloning" in 1963. Cloning is a recurring theme in a number of contemporary science fiction films, ranging from action films such as Anna to the Infinite Power, The Boys from Brazil, Jurassic Park (1993), Alien Resurrection (1997), The 6th Day (2000), Resident Evil (2002), Star Wars: Episode II – Attack of the Clones (2002), The Island (2005), Tales of the Abyss (2006), and Moon (2009) to comedies such as Woody Allen's 1973 film Sleeper. The process of cloning is represented variously in fiction. Many works depict the artificial creation of humans by a method of growing cells from a tissue or DNA sample; the replication may be instantaneous, or take place through slow growth of human embryos in artificial wombs. In the long-running British television series Doctor Who, the Fourth Doctor and his companion Leela were cloned in a matter of seconds from DNA samples ("The Invisible Enemy", 1977) and then – in an apparent homage to the 1966 film Fantastic Voyage – shrunk to microscopic size to enter the Doctor's body to combat an alien virus. The clones in this story are short-lived, and can only survive a matter of minutes before they expire. Science fiction films such as The Matrix and Star Wars: Episode II – Attack of the Clones have featured scenes of human foetuses being cultured on an industrial scale in mechanical tanks. Cloning humans from body parts is also a common theme in science fiction. Cloning features strongly among the science fiction conventions parodied in Woody Allen's Sleeper, the plot of which centres around an attempt to clone an assassinated dictator from his disembodied nose. 
In the 2008 Doctor Who story "Journey's End", a duplicate version of the Tenth Doctor spontaneously grows from his severed hand, which had been cut off in a sword fight during an earlier episode. After the death of her beloved 14-year-old Coton de Tulear named Samantha in late 2017, Barbra Streisand announced that she had cloned the dog, and was now "waiting for [the two cloned pups] to get older so [she] can see if they have [Samantha's] brown eyes and her seriousness". The operation cost $50,000 through the pet cloning company ViaGen. In films such as Roger Spottiswoode's 2000 The 6th Day, which makes use of the trope of a "vast clandestine laboratory ... filled with row upon row of 'blank' human bodies kept floating in tanks of nutrient liquid or in suspended animation", clearly fear is to be incited. In Clark's view, the biotechnology is typically "given fantastic but visually arresting forms" while the science is either relegated to the background or fictionalised to suit a young audience. Genetic engineering methods are weakly represented in film; Michael Clark, writing for The Wellcome Trust, calls the portrayal of genetic engineering and biotechnology "seriously distorted" Cloning and identity Science fiction has used cloning, most commonly and specifically human cloning, to raise questions of identity. A Number is a 2002 play by English playwright Caryl Churchill which addresses the subject of human cloning and identity, especially nature and nurture. The story, set in the near future, is structured around the conflict between a father (Salter) and his sons (Bernard 1, Bernard 2, and Michael Black) – two of whom are clones of the first one. A Number was adapted by Caryl Churchill for television, in a co-production between the BBC and HBO Films. In 2012, a Japanese television series named "Bunshin" was created. The story's main character, Mariko, is a woman studying child welfare in Hokkaido. She grew up always doubtful about the love from her mother, who looked nothing like her and who died nine years before. One day, she finds some of her mother's belongings at a relative's house, and heads to Tokyo to seek out the truth behind her birth. She later discovered that she was a clone. In the 2013 television series Orphan Black, cloning is used as a scientific study on the behavioral adaptation of the clones. In a similar vein, the book The Double by Nobel Prize winner José Saramago explores the emotional experience of a man who discovers that he is a clone. Cloning as resurrection Cloning has been used in fiction as a way of recreating historical figures. In the 1976 Ira Levin novel The Boys from Brazil and its 1978 film adaptation, Josef Mengele uses cloning to create copies of Adolf Hitler. In Michael Crichton's 1990 novel Jurassic Park, which spawned a series of Jurassic Park feature films, the bioengineering company InGen develops a technique to resurrect extinct species of dinosaurs by creating cloned creatures using DNA extracted from fossils. The cloned dinosaurs are used to populate the Jurassic Park wildlife park for the entertainment of visitors. The scheme goes disastrously wrong when the dinosaurs escape their enclosures. Despite being selectively cloned as females to prevent them from breeding, the dinosaurs develop the ability to reproduce through parthenogenesis. Cloning for warfare The use of cloning for military purposes has also been explored in several fictional works. 
In Doctor Who, an alien race of armour-clad, warlike beings called Sontarans was introduced in the 1973 serial "The Time Warrior". Sontarans are depicted as squat, bald creatures who have been genetically engineered for combat. Their weak spot is a "probic vent", a small socket at the back of their neck which is associated with the cloning process. The concept of cloned soldiers being bred for combat was revisited in "The Doctor's Daughter" (2008), when the Doctor's DNA is used to create a female warrior called Jenny. The 1977 film Star Wars was set against the backdrop of a historical conflict called the Clone Wars. The events of this war were not fully explored until the prequel films Attack of the Clones (2002) and Revenge of the Sith (2005), which depict a space war waged by a massive army of heavily armoured clone troopers that leads to the foundation of the Galactic Empire. Cloned soldiers are "manufactured" on an industrial scale, genetically conditioned for obedience and combat effectiveness. It is also revealed that the popular character Boba Fett originated as a clone of Jango Fett, a mercenary who served as the genetic template for the clone troopers. Cloning for exploitation A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. The 2005 Kazuo Ishiguro novel Never Let Me Go and the 2010 film adaption are set in an alternate history in which cloned humans are created for the sole purpose of providing organ donations to naturally born humans, despite the fact that they are fully sentient and self-aware. The 2005 film The Island revolves around a similar plot, with the exception that the clones are unaware of the reason for their existence. The exploitation of human clones for dangerous and undesirable work was examined in the 2009 British science fiction film Moon. In the futuristic novel Cloud Atlas and subsequent film, one of the story lines focuses on a genetically engineered fabricant clone named Sonmi~451, one of millions raised in an artificial "wombtank", destined to serve from birth. She is one of thousands created for manual and emotional labor; Sonmi herself works as a server in a restaurant. She later discovers that the sole source of food for clones, called 'Soap', is manufactured from the clones themselves. In the film Us, at some point prior to the 1980s, the US Government creates clones of every citizen of the United States with the intention of using them to control their original counterparts, akin to voodoo dolls. This fails, as they were able to copy bodies, but unable to copy the souls of those they cloned. The project is abandoned and the clones are trapped exactly mirroring their above-ground counterparts' actions for generations. In the present day, the clones launch a surprise attack and manage to complete a mass-genocide of their unaware counterparts. See also Frozen Ark The President's Council on Bioethics Notes References Further reading Guo, Owen. "World's Biggest Animal Cloning Center Set for '16 in a Skeptical China". The New York Times, 26 November 2015 Lerner, K. Lee. "Animal cloning". The Gale Encyclopedia of Science, edited by K. Lee Lerner and Brenda Wilmoth Lerner, 5th ed., Gale, 2014. Science in Context, link Dutchen, Stephanie (11 July 2018). "Rise of the Clones". Harvard Medical School. External links Cloning Fact Sheet from Human Genome Project Information website. 
'Cloning' Freeview video by the Vega Science Trust and the BBC/OU Cloning in Focus, an accessible and comprehensive look at cloning research from the University of Utah's Genetic Science Learning Center Click and Clone. Try it yourself in the virtual mouse cloning laboratory, from the University of Utah's Genetic Science Learning Center "Cloning Addendum: A statement on the cloning report issued by the President's Council on Bioethics". National Review, 15 July 2002 8:45 am Molecular biology Cryobiology Applied genetics Asexual reproduction Selection
Cloning
[ "Physics", "Chemistry", "Engineering", "Biology" ]
9,477
[ "Evolutionary processes", "Physical phenomena", "Phase transitions", "Behavior", "Selection", "Reproduction", "Cloning", "Genetic engineering", "Cryobiology", "Asexual reproduction", "Molecular biology", "Biochemistry" ]
6,911
https://en.wikipedia.org/wiki/Cellulose
Cellulose is an organic compound with the formula (C6H10O5)n, a polysaccharide consisting of a linear chain of several hundred to many thousands of β(1→4) linked D-glucose units. Cellulose is an important structural component of the primary cell wall of green plants, many forms of algae and the oomycetes. Some species of bacteria secrete it to form biofilms. Cellulose is the most abundant organic polymer on Earth. The cellulose content of cotton fibre is 90%, that of wood is 40–50%, and that of dried hemp is approximately 57%. Cellulose is mainly used to produce paperboard and paper. Smaller quantities are converted into a wide variety of derivative products such as cellophane and rayon. Conversion of cellulose from energy crops into biofuels such as cellulosic ethanol is under development as a renewable fuel source. Cellulose for industrial use is mainly obtained from wood pulp and cotton. Cellulose is also greatly affected by direct interaction with several organic liquids. Some animals, particularly ruminants and termites, can digest cellulose with the help of symbiotic micro-organisms that live in their guts, such as Trichonympha. In human nutrition, cellulose is a non-digestible constituent of insoluble dietary fiber, acting as a hydrophilic bulking agent for feces and potentially aiding in defecation. History Cellulose was discovered in 1838 by the French chemist Anselme Payen, who isolated it from plant matter and determined its chemical formula. [Payen, A. (1838) "Mémoire sur la composition du tissu propre des plantes et du ligneux" (Memoir on the composition of the tissue of plants and of woody [material]), Comptes rendus, vol. 7, pp. 1052–1056. Payen added appendices to this paper on December 24, 1838 (see: Comptes rendus, vol. 8, p. 169 (1839)) and on February 4, 1839 (see: Comptes rendus, vol. 9, p. 149 (1839)). A committee of the French Academy of Sciences reviewed Payen's findings in: Jean-Baptiste Dumas (1839) "Rapport sur un mémoire de M. Payen", Comptes rendus, vol. 8, pp. 51–53. In this report, the word "cellulose" is coined and the author points out the similarity between the empirical formula of cellulose and that of "dextrine" (starch). The above articles are reprinted in: Brongniart and Guillemin, eds., Annales des sciences naturelles, 2nd series, vol. 11 (Paris, France: Crochard et Cie., 1839), pp. 21–31.] Cellulose was used to produce the first successful thermoplastic polymer, celluloid, by Hyatt Manufacturing Company in 1870. Production of rayon ("artificial silk") from cellulose began in the 1890s and cellophane was invented in 1912. Hermann Staudinger determined the polymer structure of cellulose in 1920. The compound was first chemically synthesized (without the use of any biologically derived enzymes) in 1992, by Kobayashi and Shoda. Structure and properties Cellulose has no taste, is odorless, is hydrophilic with the contact angle of 20–30 degrees, is insoluble in water and most organic solvents, is chiral and is biodegradable. It was shown to melt at 467 °C in pulse tests made by Dauenhauer et al. (2016). It can be broken down chemically into its glucose units by treating it with concentrated mineral acids at high temperature. Cellulose is derived from D-glucose units, which condense through β(1→4)-glycosidic bonds. This linkage motif contrasts with that for α(1→4)-glycosidic bonds present in starch and glycogen. Cellulose is a straight chain polymer. 
Unlike starch, no coiling or branching occurs and the molecule adopts an extended and rather stiff rod-like conformation, aided by the equatorial conformation of the glucose residues. The multiple hydroxyl groups on the glucose from one chain form hydrogen bonds with oxygen atoms on the same or on a neighbour chain, holding the chains firmly together side-by-side and forming microfibrils with high tensile strength. This confers tensile strength in cell walls where cellulose microfibrils are meshed into a polysaccharide matrix. The high tensile strength of plant stems and of the tree wood also arises from the arrangement of cellulose fibers intimately distributed into the lignin matrix. The mechanical role of cellulose fibers in the wood matrix responsible for its strong structural resistance, can somewhat be compared to that of the reinforcement bars in concrete, lignin playing here the role of the hardened cement paste acting as the "glue" in between the cellulose fibres. Mechanical properties of cellulose in primary plant cell wall are correlated with growth and expansion of plant cells. Live fluorescence microscopy techniques are promising in investigation of the role of cellulose in growing plant cells. Compared to starch, cellulose is also much more crystalline. Whereas starch undergoes a crystalline to amorphous transition when heated beyond 60–70 °C in water (as in cooking), cellulose requires a temperature of 320 °C and pressure of 25 MPa to become amorphous in water. Several types of cellulose are known. These forms are distinguished according to the location of hydrogen bonds between and within strands. Natural cellulose is cellulose I, with structures Iα and Iβ. Cellulose produced by bacteria and algae is enriched in Iα while cellulose of higher plants consists mainly of Iβ. Cellulose in regenerated cellulose fibers is cellulose II. The conversion of cellulose I to cellulose II is irreversible, suggesting that cellulose I is metastable and cellulose II is stable. With various chemical treatments it is possible to produce the structures cellulose III and cellulose IV. Many properties of cellulose depend on its chain length or degree of polymerization, the number of glucose units that make up one polymer molecule. Cellulose from wood pulp has typical chain lengths between 300 and 1700 units; cotton and other plant fibers as well as bacterial cellulose have chain lengths ranging from 800 to 10,000 units. Molecules with very small chain length resulting from the breakdown of cellulose are known as cellodextrins; in contrast to long-chain cellulose, cellodextrins are typically soluble in water and organic solvents. The chemical formula of cellulose is (C6H10O5)n where n is the degree of polymerization and represents the number of glucose groups. Plant-derived cellulose is usually found in a mixture with hemicellulose, lignin, pectin and other substances, while bacterial cellulose is quite pure, has a much higher water content and higher tensile strength due to higher chain lengths. Cellulose consists of fibrils with crystalline and amorphous regions. These cellulose fibrils may be individualized by mechanical treatment of cellulose pulp, often assisted by chemical oxidation or enzymatic treatment, yielding semi-flexible cellulose nanofibrils generally 200 nm to 1 μm in length depending on the treatment intensity. Cellulose pulp may also be treated with strong acid to hydrolyze the amorphous fibril regions, thereby producing short rigid cellulose nanocrystals a few 100 nm in length. 
These nanocelluloses are of high technological interest due to their self-assembly into cholesteric liquid crystals, production of hydrogels or aerogels, use in nanocomposites with superior thermal and mechanical properties, and use as Pickering stabilizers for emulsions. Processing Biosynthesis In plants cellulose is synthesized at the plasma membrane by rosette terminal complexes (RTCs). The RTCs are hexameric protein structures, approximately 25 nm in diameter, that contain the cellulose synthase enzymes that synthesise the individual cellulose chains. Each RTC floats in the cell's plasma membrane and "spins" a microfibril into the cell wall. RTCs contain at least three different cellulose synthases, encoded by CesA (Ces is short for "cellulose synthase") genes, in an unknown stoichiometry. Separate sets of CesA genes are involved in primary and secondary cell wall biosynthesis. There are known to be about seven subfamilies in the plant CesA superfamily, some of which include the more cryptic, tentatively-named Csl (cellulose synthase-like) enzymes. These cellulose synthases use UDP-glucose to form the β(1→4)-linked cellulose. Bacterial cellulose is produced using the same family of proteins, although the gene is called BcsA for "bacterial cellulose synthase" or CelA for "cellulose" in many instances. In fact, plants acquired CesA from the endosymbiosis event that produced the chloroplast. All known cellulose synthases belong to glucosyltransferase family 2 (GT2). Cellulose synthesis requires chain initiation and elongation, and the two processes are separate. Cellulose synthase (CesA) initiates cellulose polymerization using a steroid primer, sitosterol-beta-glucoside, and UDP-glucose. It then utilises UDP-D-glucose precursors to elongate the growing cellulose chain. A cellulase may function to cleave the primer from the mature chain. Cellulose is also synthesised by tunicate animals, particularly in the tests of ascidians (where the cellulose was historically termed "tunicine" (tunicin)). Breakdown (cellulolysis) Cellulolysis is the process of breaking down cellulose into smaller polysaccharides called cellodextrins or completely into glucose units; this is a hydrolysis reaction. Because cellulose molecules bind strongly to each other, cellulolysis is relatively difficult compared to the breakdown of other polysaccharides. However, this process can be significantly intensified in a proper solvent, e.g. in an ionic liquid. Most mammals have limited ability to digest dietary fibre such as cellulose. Some ruminants like cows and sheep contain certain symbiotic anaerobic bacteria (such as Cellulomonas and Ruminococcus spp.) in the flora of the rumen, and these bacteria produce enzymes called cellulases that hydrolyze cellulose. The breakdown products are then used by the bacteria for proliferation. The bacterial mass is later digested by the ruminant in its digestive system (stomach and small intestine). Horses use cellulose in their diet by fermentation in their hindgut. Some termites contain in their hindguts certain flagellate protozoa producing such enzymes, whereas others contain bacteria or may produce cellulase. The enzymes used to cleave the glycosidic linkage in cellulose are glycoside hydrolases including endo-acting cellulases and exo-acting glucosidases. Such enzymes are usually secreted as part of multienzyme complexes that may include dockerins and carbohydrate-binding modules. 
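The chain lengths quoted above translate directly into molecular size through the formula (C6H10O5)n, whether for native cellulose chains or for the short cellodextrins produced by cellulolysis. The following minimal sketch makes that relationship concrete; it is a back-of-envelope estimate only, the rounded monomer and end-group masses are assumptions for illustration, and real samples are polydisperse.

```python
# Rough estimate of cellulose molar mass from the degree of polymerization n.
# Each anhydroglucose unit (C6H10O5) contributes roughly 162.14 g/mol; one extra
# water (~18.02 g/mol) accounts for the two chain-terminating hydroxyl ends.
ANHYDROGLUCOSE = 162.14  # g/mol, C6H10O5 (rounded, assumed value)
WATER = 18.02            # g/mol, terminal H and OH (rounded, assumed value)

def cellulose_molar_mass(n_units: int) -> float:
    """Approximate molar mass (g/mol) of a linear cellulose chain of n glucose units."""
    return n_units * ANHYDROGLUCOSE + WATER

for n in (300, 1700, 10000):  # typical wood-pulp versus cotton/bacterial chain lengths
    print(f"n = {n:>6}: ~{cellulose_molar_mass(n) / 1000:.0f} kg/mol")
```

On this estimate a wood-pulp chain of 300 units comes to roughly 49 kg/mol, while a 10,000-unit chain approaches 1,600 kg/mol.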
Breakdown (thermolysis) At temperatures above 350 °C, cellulose undergoes thermolysis (also called 'pyrolysis'), decomposing into solid char, vapors, aerosols, and gases such as carbon dioxide. Maximum yield of vapors which condense to a liquid called bio-oil is obtained at 500 °C. Semi-crystalline cellulose polymers react at pyrolysis temperatures (350–600 °C) in a few seconds; this transformation has been shown to occur via a solid-to-liquid-to-vapor transition, with the liquid (called intermediate liquid cellulose or molten cellulose) existing for only a fraction of a second. Glycosidic bond cleavage produces short cellulose chains of two-to-seven monomers comprising the melt. Vapor bubbling of intermediate liquid cellulose produces aerosols, which consist of short chain anhydro-oligomers derived from the melt. Continuing decomposition of molten cellulose produces volatile compounds including levoglucosan, furans, pyrans, light oxygenates, and gases via primary reactions. Within thick cellulose samples, volatile compounds such as levoglucosan undergo 'secondary reactions' to volatile products including pyrans and light oxygenates such as glycolaldehyde. Hemicellulose Hemicelluloses are polysaccharides related to cellulose that comprises about 20% of the biomass of land plants. In contrast to cellulose, hemicelluloses are derived from several sugars in addition to glucose, especially xylose but also including mannose, galactose, rhamnose, and arabinose. Hemicelluloses consist of shorter chains – between 500 and 3000 sugar units. Furthermore, hemicelluloses are branched, whereas cellulose is unbranched. Regenerated cellulose Cellulose is soluble in several kinds of media, several of which are the basis of commercial technologies. These dissolution processes are reversible and are used in the production of regenerated celluloses (such as viscose and cellophane) from dissolving pulp. The most important solubilizing agent is carbon disulfide in the presence of alkali. Other agents include Schweizer's reagent, N-methylmorpholine N-oxide, and lithium chloride in dimethylacetamide. In general, these agents modify the cellulose, rendering it soluble. The agents are then removed concomitant with the formation of fibers. Cellulose is also soluble in many kinds of ionic liquids. The history of regenerated cellulose is often cited as beginning with George Audemars, who first manufactured regenerated nitrocellulose fibers in 1855. Although these fibers were soft and strong -resembling silk- they had the drawback of being highly flammable. Hilaire de Chardonnet perfected production of nitrocellulose fibers, but manufacturing of these fibers by his process was relatively uneconomical. In 1890, L.H. Despeissis invented the cuprammonium process – which uses a cuprammonium solution to solubilize cellulose – a method still used today for production of artificial silk. In 1891, it was discovered that treatment of cellulose with alkali and carbon disulfide generated a soluble cellulose derivative known as viscose. This process, patented by the founders of the Viscose Development Company, is the most widely used method for manufacturing regenerated cellulose products. Courtaulds purchased the patents for this process in 1904, leading to significant growth of viscose fiber production. By 1931, expiration of patents for the viscose process led to its adoption worldwide. Global production of regenerated cellulose fiber peaked in 1973 at 3,856,000 tons. 
Regenerated cellulose can be used to manufacture a wide variety of products. While the first application of regenerated cellulose was as a clothing textile, this class of materials is also used in the production of disposable medical devices as well as fabrication of artificial membranes. Cellulose esters and ethers The hydroxyl groups (−OH) of cellulose can be partially or fully reacted with various reagents to afford derivatives with useful properties like mainly cellulose esters and cellulose ethers (−OR). In principle, although not always in current industrial practice, cellulosic polymers are renewable resources. Ester derivatives include: Cellulose acetate and cellulose triacetate are film- and fiber-forming materials that find a variety of uses. Nitrocellulose was initially used as an explosive and was an early film forming material. When plasticized with camphor, nitrocellulose gives celluloid. Cellulose Ether derivatives include: The sodium carboxymethyl cellulose can be cross-linked to give the croscarmellose sodium (E468) for use as a disintegrant in pharmaceutical formulations. Furthermore, by the covalent attachment of thiol groups to cellulose ethers such as sodium carboxymethyl cellulose, ethyl cellulose or hydroxyethyl cellulose mucoadhesive and permeation enhancing properties can be introduced. Thiolated cellulose derivatives (see thiomers) exhibit also high binding properties for metal ions. Commercial applications Cellulose for industrial use is mainly obtained from wood pulp and from cotton. Paper products: Cellulose is the major constituent of paper, paperboard, and card stock. Electrical insulation paper: Cellulose is used in diverse forms as insulation in transformers, cables, and other electrical equipment. Fibres: Cellulose is the main ingredient of textiles. Cotton and synthetics (nylons) each have about 40% market by volume. Other plant fibres (jute, sisal, hemp) represent about 20% of the market. Rayon, cellophane and other "regenerated cellulose fibres" are a small portion (5%). Consumables: Microcrystalline cellulose (E460i) and powdered cellulose (E460ii) are used as inactive fillers in drug tablets and a wide range of soluble cellulose derivatives, E numbers E461 to E469, are used as emulsifiers, thickeners and stabilizers in processed foods. Cellulose powder is, for example, used in processed cheese to prevent caking inside the package. Cellulose occurs naturally in some foods and is an additive in manufactured foods, contributing an indigestible component used for texture and bulk, potentially aiding in defecation. Building material: Hydroxyl bonding of cellulose in water produces a sprayable, moldable material as an alternative to the use of plastics and resins. The recyclable material can be made water- and fire-resistant. It provides sufficient strength for use as a building material. Cellulose insulation made from recycled paper is becoming popular as an environmentally preferable material for building insulation. It can be treated with boric acid as a fire retardant. Miscellaneous: Cellulose can be converted into cellophane, a thin transparent film. It is the base material for the celluloid that was used for photographic and movie films until the mid-1930s. Cellulose is used to make water-soluble adhesives and binders such as methyl cellulose and carboxymethyl cellulose which are used in wallpaper paste. Cellulose is further used to make hydrophilic and highly absorbent sponges. 
Cellulose is the raw material in the manufacture of nitrocellulose (cellulose nitrate) which is used in smokeless gunpowder. Pharmaceuticals: Cellulose derivatives, such as microcrystalline cellulose (MCC), have the advantages of retaining water, being a stabilizer and thickening agent, and reinforcing drug tablets. Aspirational Energy crops: The major combustible component of non-food energy crops is cellulose, with lignin second. Non-food energy crops produce more usable energy than edible energy crops (which have a large starch component), but still compete with food crops for agricultural land and water resources. Typical non-food energy crops include industrial hemp, switchgrass, Miscanthus, Salix (willow), and Populus (poplar) species. A strain of Clostridium bacteria found in zebra dung can convert nearly any form of cellulose into butanol fuel. Another possible application is as an insect repellent. See also Gluconic acid Isosaccharinic acid, a degradation product of cellulose Lignin Zeoform References External links Structure and morphology of cellulose by Serge Pérez and William Mackie, CERMAV-CNRS Cellulose, by Martin Chaplin, London South Bank University Clear description of a cellulose assay method at the Cotton Fiber Biosciences unit of the USDA. Cellulose films could provide flapping wings and cheap artificial muscles for robots – TechnologyReview.com Excipients Papermaking Polysaccharides E-number additives
Cellulose
[ "Chemistry" ]
4,450
[ "Carbohydrates", "Polysaccharides" ]
6,933
https://en.wikipedia.org/wiki/Chromatin
Chromatin is a complex of DNA and protein found in eukaryotic cells. The primary function is to package long DNA molecules into more compact, denser structures. This prevents the strands from becoming tangled and also plays important roles in reinforcing the DNA during cell division, preventing DNA damage, and regulating gene expression and DNA replication. During mitosis and meiosis, chromatin facilitates proper segregation of the chromosomes in anaphase; the characteristic shapes of chromosomes visible during this stage are the result of DNA being coiled into highly condensed chromatin. The primary protein components of chromatin are histones. An octamer of two sets of four histone cores (Histone H2A, Histone H2B, Histone H3, and Histone H4) bind to DNA and function as "anchors" around which the strands are wound. In general, there are three levels of chromatin organization: DNA wraps around histone proteins, forming nucleosomes and the so-called beads on a string structure (euchromatin). Multiple histones wrap into a 30-nanometer fiber consisting of nucleosome arrays in their most compact form (heterochromatin). Higher-level DNA supercoiling of the 30 nm fiber produces the metaphase chromosome (during mitosis and meiosis). Many organisms, however, do not follow this organization scheme. For example, spermatozoa and avian red blood cells have more tightly packed chromatin than most eukaryotic cells, and trypanosomatid protozoa do not condense their chromatin into visible chromosomes at all. Prokaryotic cells have entirely different structures for organizing their DNA (the prokaryotic chromosome equivalent is called a genophore and is localized within the nucleoid region). The overall structure of the chromatin network further depends on the stage of the cell cycle. During interphase, the chromatin is structurally loose to allow access to RNA and DNA polymerases that transcribe and replicate the DNA. The local structure of chromatin during interphase depends on the specific genes present in the DNA. Regions of DNA containing genes which are actively transcribed ("turned on") are less tightly compacted and closely associated with RNA polymerases in a structure known as euchromatin, while regions containing inactive genes ("turned off") are generally more condensed and associated with structural proteins in heterochromatin. Epigenetic modification of the structural proteins in chromatin via methylation and acetylation also alters local chromatin structure and therefore gene expression. There is limited understanding of chromatin structure and it is active area of research in molecular biology. Dynamic chromatin structure and hierarchy Chromatin undergoes various structural changes during a cell cycle. Histone proteins are the basic packers and arrangers of chromatin and can be modified by various post-translational modifications to alter chromatin packing (histone modification). Most modifications occur on histone tails. The positively charged histone cores only partially counteract the negative charge of the DNA phosphate backbone resulting in a negative net charge of the overall structure. An imbalance of charge within the polymer causes electrostatic repulsion between neighboring chromatin regions that promote interactions with positively charged proteins, molecules, and cations. As these modifications occur, the electrostatic environment surrounding the chromatin will flux and the level of chromatin compaction will alter. 
The consequences in terms of chromatin accessibility and compaction depend both on the modified amino acid and the type of modification. For example, histone acetylation results in loosening and increased accessibility of chromatin for replication and transcription. Lysine trimethylation can either lead to increased transcriptional activity (trimethylation of histone H3 lysine 4) or transcriptional repression and chromatin compaction (trimethylation of histone H3, lysine 9 or lysine 27). Several studies suggested that different modifications could occur simultaneously. For example, it was proposed that a bivalent structure (with trimethylation of both lysine 4 and 27 on histone H3) is involved in early mammalian development. Another study tested the role of acetylation of histone H4 on lysine 16 on chromatin structure and found that homogeneous acetylation inhibited 30 nm chromatin formation and blocked ATP-dependent chromatin remodeling. This singular modification changed the dynamics of the chromatin, which shows that acetylation of H4 at K16 is vital for proper intra- and inter-functionality of chromatin structure. Polycomb-group proteins play a role in regulating genes through modulation of chromatin structure. For additional information, see Chromatin variant, Histone modifications in chromatin regulation and RNA polymerase control by chromatin structure. Structure of DNA In nature, DNA can form three structures, A-, B-, and Z-DNA. A- and B-DNA are very similar, forming right-handed helices, whereas Z-DNA is a left-handed helix with a zig-zag phosphate backbone. Z-DNA is thought to play a specific role in chromatin structure and transcription because of the properties of the junction between B- and Z-DNA. At the junction of B- and Z-DNA, one pair of bases is flipped out from normal bonding. These flipped-out bases play a dual role, serving as a site of recognition by many proteins and as a sink for torsional stress from RNA polymerase or nucleosome binding. DNA stores information as a code written in four chemical bases: adenine (A), guanine (G), cytosine (C) and thymine (T). The order of these bases along the molecule constitutes the information used to build and regulate an organism. A pairs with T and C pairs with G to form base pairs, and together with sugar and phosphate groups the bases form nucleotides, which are arranged in two long strands coiled around each other as a double helix. In eukaryotes this DNA is contained in the cell nucleus and carries the hereditary information; hydrogen bonds between the nitrogenous bases hold the two strands together. Nucleosomes and beads-on-a-string The basic repeat element of chromatin is the nucleosome, interconnected by sections of linker DNA, a far shorter arrangement than pure DNA in solution. In addition to core histones, a linker histone H1 exists that contacts the exit/entry of the DNA strand on the nucleosome. The nucleosome core particle, together with histone H1, is known as a chromatosome. Nucleosomes, with about 20 to 60 base pairs of linker DNA, can form, under non-physiological conditions, an approximately 11 nm beads on a string fibre. The nucleosomes bind DNA non-specifically, as required by their function in general DNA packaging. There are, however, large DNA sequence preferences that govern nucleosome positioning. 
This is due primarily to the varying physical properties of different DNA sequences: For instance, adenine (A) and thymine (T) are more favorably compressed into the inner minor grooves. This means nucleosomes can bind preferentially at one position approximately every 10 base pairs (the helical repeat of DNA), where the DNA is rotated to maximise the number of A and T bases that will lie in the inner minor groove. (See nucleic acid structure.) 30-nm chromatin fiber in mitosis With addition of H1, during mitosis the beads-on-a-string structure can coil into a 30 nm-diameter helical structure known as the 30 nm fibre or filament. The precise structure of the chromatin fiber in the cell is not known in detail. This level of chromatin structure is thought to be the form of heterochromatin, which contains mostly transcriptionally silent genes. Electron microscopy studies have demonstrated that the 30 nm fiber is highly dynamic such that it unfolds into a 10 nm fiber beads-on-a-string structure when traversed by an RNA polymerase engaged in transcription. The existing models commonly accept that the nucleosomes lie perpendicular to the axis of the fibre, with linker histones arranged internally. A stable 30 nm fibre relies on the regular positioning of nucleosomes along DNA. Linker DNA is relatively resistant to bending and rotation. This makes the length of linker DNA critical to the stability of the fibre, requiring nucleosomes to be separated by lengths that permit rotation and folding into the required orientation without excessive stress to the DNA. In this view, different lengths of the linker DNA should produce different folding topologies of the chromatin fiber. Recent theoretical work, based on electron-microscopy images of reconstituted fibers, supports this view. DNA loops The beads-on-a-string chromatin structure has a tendency to form loops. These loops allow interactions between different regions of DNA by bringing them closer to each other, which increases the efficiency of gene interactions. This process is dynamic, with loops forming and disappearing. The loops are regulated by two main elements: Cohesins, protein complexes that generate loops by extrusion of the DNA fiber through the ring-like structure of the complex itself. CTCF, a transcription factor that limits the frontier of the DNA loop. To stop the growth of a loop, two CTCF molecules must be positioned in opposite directions to block the movement of the cohesin ring. There are many other elements involved. For example, Jpx regulates the binding sites of CTCF molecules along the DNA fiber. Spatial organization of chromatin in the cell nucleus The spatial arrangement of the chromatin within the nucleus is not random: specific regions of the chromatin can be found in certain territories. Territories are, for example, the lamina-associated domains (LADs), and the topologically associating domains (TADs), which are bound together by protein complexes. Currently, polymer models such as the Strings & Binders Switch (SBS) model and the Dynamic Loop (DL) model are used to describe the folding of chromatin within the nucleus. The arrangement of chromatin within the nucleus may also play a role in nuclear stress and restoring nuclear membrane deformation by mechanical stress. When chromatin is condensed, the nucleus becomes more rigid. When chromatin is decondensed, the nucleus becomes more elastic with less force exerted on the inner nuclear membrane. 
This observation sheds light on other possible cellular functions of chromatin organization outside of genomic regulation. Cell-cycle dependent structural organization Interphase: The structure of chromatin during interphase of mitosis is optimized to allow simple access of transcription and DNA repair factors to the DNA while compacting the DNA into the nucleus. The structure varies depending on the access required to the DNA. Genes that require regular access by RNA polymerase require the looser structure provided by euchromatin. Metaphase: The metaphase structure of chromatin differs vastly to that of interphase. It is optimised for physical strength and manageability, forming the classic chromosome structure seen in karyotypes. The structure of the condensed chromatin is thought to be loops of 30 nm fibre to a central scaffold of proteins. It is, however, not well-characterised. Chromosome scaffolds play an important role to hold the chromatin into compact chromosomes. Loops of 30 nm structure further condense with scaffold, into higher order structures. Chromosome scaffolds are made of proteins including condensin, type IIA topoisomerase and kinesin family member 4 (KIF4). The physical strength of chromatin is vital for this stage of division to prevent shear damage to the DNA as the daughter chromosomes are separated. To maximise strength the composition of the chromatin changes as it approaches the centromere, primarily through alternative histone H1 analogues. During mitosis, although most of the chromatin is tightly compacted, there are small regions that are not as tightly compacted. These regions often correspond to promoter regions of genes that were active in that cell type prior to chromatin formation. The lack of compaction of these regions is called bookmarking, which is an epigenetic mechanism believed to be important for transmitting to daughter cells the "memory" of which genes were active prior to entry into mitosis. This bookmarking mechanism is needed to help transmit this memory because transcription ceases during mitosis. Chromatin and bursts of transcription Chromatin and its interaction with enzymes has been researched, and a conclusion being made is that it is relevant and an important factor in gene expression. Vincent G. Allfrey, a professor at Rockefeller University, stated that RNA synthesis is related to histone acetylation. The lysine amino acid attached to the end of the histones is positively charged. The acetylation of these tails would make the chromatin ends neutral, allowing for DNA access. When the chromatin decondenses, the DNA is open to entry of molecular machinery. Fluctuations between open and closed chromatin may contribute to the discontinuity of transcription, or transcriptional bursting. Other factors are probably involved, such as the association and dissociation of transcription factor complexes with chromatin. Specifically, RNA polymerase and transcriptional proteins have been shown to congregate into droplets via phase separation, and recent studies have suggested that 10 nm chromatin demonstrates liquid-like behavior increasing the targetability of genomic DNA. The interactions between linker histones and disordered tail regions act as an electrostatic glue organizing large-scale chromatin into a dynamic, liquid-like domain. Decreased chromatin compaction comes with increased chromatin mobility and easier transcriptional access to DNA. 
The phenomenon, as opposed to simple probabilistic models of transcription, can account for the high variability in gene expression occurring between cells in isogenic populations. Alternative chromatin organizations During metazoan spermiogenesis, the spermatid's chromatin is remodeled into a more spaced-packaged, widened, almost crystal-like structure. This process is associated with the cessation of transcription and involves nuclear protein exchange. The histones are mostly displaced, and replaced by protamines (small, arginine-rich proteins). It is proposed that in yeast, regions devoid of histones become very fragile after transcription; HMO1, an HMG-box protein, helps in stabilizing nucleosome-free chromatin. Chromatin and DNA repair A variety of internal and external agents can cause DNA damage in cells. Many factors influence how the repair route is selected, including the cell cycle phase and chromatin segment where the break occurred. In terms of initiating 5' end DNA repair, the p53 binding protein 1 (53BP1) and BRCA1 are important protein components that influence double-strand break repair pathway selection. The 53BP1 complex attaches to chromatin near DNA breaks and activates downstream factors such as Rap1-Interacting Factor 1 (RIF1) and shieldin, which protects DNA ends against nucleolytic destruction. DNA damage occurs in the context of chromatin, and the constantly changing chromatin environment has a large effect on its repair. Because the genome is condensed into chromatin, access to damaged DNA is gained largely through modification of histone residues: chemical groups such as phosphate, acetyl and one or more methyl groups are added to histones, altering chromatin structure and controlling the access of proteins to the DNA. The damaged bases are then processed and the affected region resynthesized. To maintain genomic integrity, double-strand breaks are repaired by homologous recombination or by classical non-homologous end joining. The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action. To allow the critical cellular process of DNA repair, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are two predominant factors employed to accomplish this remodeling process. Chromatin relaxation occurs rapidly at the site of DNA damage. This process is initiated by the PARP1 protein, which starts to appear at the site of DNA damage in less than a second, with half maximum accumulation within 1.6 seconds after the damage occurs. Next, the chromatin remodeler Alc1 quickly attaches to the product of PARP1 and completes its arrival at the DNA damage within 10 seconds of the damage. About half of the maximum chromatin relaxation, presumably due to the action of Alc1, occurs by 10 seconds. This then allows recruitment of the DNA repair enzyme MRE11, to initiate DNA repair, within 13 seconds. γH2AX, the phosphorylated form of H2AX, is also involved in the early steps leading to chromatin decondensation after DNA damage occurrence. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. 
γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD. After undergoing relaxation subsequent to DNA damage, followed by DNA repair, chromatin recovers to a compaction state close to its pre-damage level after about 20 min. Methods to investigate chromatin ChIP-seq (chromatin immunoprecipitation sequencing) is the most widely used method for identifying chromatin-associated proteins. It uses antibodies that selectively bind proteins of interest, including histones, histone-modifying and chromatin-remodelling factors, transcription factors and cofactors, and it provides data about the state of chromatin and the transcription of a gene after unbound DNA fragments are removed. Chromatin immunoprecipitation sequencing aimed against different histone modifications can be used to identify chromatin states throughout the genome. Different modifications have been linked to various states of chromatin. DNase-seq (DNase I hypersensitive sites sequencing) uses the sensitivity of accessible regions in the genome to the DNase I enzyme to map open or accessible regions in the genome. FAIRE-seq (Formaldehyde-Assisted Isolation of Regulatory Elements sequencing) uses the chemical properties of protein-bound DNA in a two-phase separation method to extract nucleosome depleted regions from the genome. ATAC-seq (Assay for Transposase-Accessible Chromatin sequencing) uses the Tn5 transposase to integrate (synthetic) transposons into accessible regions of the genome, thereby highlighting the localisation of nucleosomes and transcription factors across the genome. DNA footprinting is a method aimed at identifying protein-bound DNA. It uses labeling and fragmentation coupled to gel electrophoresis to identify areas of the genome that have been bound by proteins. MNase-seq (Micrococcal Nuclease sequencing) uses the micrococcal nuclease enzyme to identify nucleosome positioning throughout the genome. Chromosome conformation capture determines the spatial organization of chromatin in the nucleus, by inferring genomic locations that physically interact. MACC profiling (Micrococcal nuclease ACCessibility profiling) uses a titration series of chromatin digests with micrococcal nuclease to identify chromatin accessibility as well as to map nucleosomes and non-histone DNA-binding proteins in both open and closed regions of the genome. Chromatin and knots It has been a puzzle how decondensed interphase chromosomes remain essentially unknotted. The natural expectation is that in the presence of type II DNA topoisomerases that permit passages of double-stranded DNA regions through each other, all chromosomes should reach the state of topological equilibrium. The topological equilibrium in highly crowded interphase chromosomes forming chromosome territories would result in formation of highly knotted chromatin fibres. 
However, Chromosome Conformation Capture (3C) methods revealed that the decay of contacts with the genomic distance in interphase chromosomes is practically the same as in the crumpled globule state that is formed when long polymers condense without formation of any knots. To remove knots from highly crowded chromatin, one would need an active process that should not only provide the energy to move the system from the state of topological equilibrium but also guide topoisomerase-mediated passages in such a way that knots would be efficiently unknotted instead of making the knots even more complex. It has been shown that the process of chromatin-loop extrusion is ideally suited to actively unknot chromatin fibres in interphase chromosomes. Chromatin: alternative definitions The term, introduced by Walther Flemming, has multiple meanings: Simple and concise definition: Chromatin is a macromolecular complex of a DNA macromolecule and protein macromolecules (and RNA). The proteins package and arrange the DNA and control its functions within the cell nucleus. A biochemists' operational definition: Chromatin is the DNA/protein/RNA complex extracted from eukaryotic lysed interphase nuclei. Just which of the multitudinous substances present in a nucleus will constitute a part of the extracted material partly depends on the technique each researcher uses. Furthermore, the composition and properties of chromatin vary from one cell type to another, during the development of a specific cell type, and at different stages in the cell cycle. The DNA + histone = chromatin definition: The DNA double helix in the cell nucleus is packaged by special proteins termed histones. The formed protein/DNA complex is called chromatin. The basic structural unit of chromatin is the nucleosome. The first definition allows for "chromatins" to be defined in other domains of life like bacteria and archaea, using any DNA-binding proteins that condense the molecule. These proteins are usually referred to as nucleoid-associated proteins (NAPs); examples include AsnC/LrpC with HU. In addition, some archaea do produce nucleosomes from proteins homologous to eukaryotic histones. Chromatin Remodeling: Chromatin remodeling can result from covalent modification of histones that physically remodel, move or remove nucleosomes. A study by Sanosaka et al. (2022) reports that the chromatin remodeler CHD7 regulates cell type-specific gene expression in human neural crest cells. See also Active chromatin sequence Chromatid DAnCER database (2010) Epigenetics Histone-modifying enzymes Position-effect variegation Transcriptional bursting Notes References Additional sources Cooper, Geoffrey M. 2000. The Cell, 2nd edition, A Molecular Approach. Chapter 4.2 Chromosomes and Chromatin. Cremer, T. 1985. Von der Zellenlehre zur Chromosomentheorie: Naturwissenschaftliche Erkenntnis und Theorienwechsel in der frühen Zell- und Vererbungsforschung, Veröffentlichungen aus der Forschungsstelle für Theoretische Pathologie der Heidelberger Akademie der Wissenschaften. Springer-Vlg., Berlin, Heidelberg. Elgin, S. C. R. (ed.). 1995. Chromatin Structure and Gene Expression, vol. 9. IRL Press, Oxford, New York, Tokyo. Pollard, T., and W. Earnshaw. 2002. Cell Biology. Saunders. Saumweber, H. 1987. Arrangement of Chromosomes in Interphase Cell Nuclei, p. 223-234. In W. Hennig (ed.), Structure and Function of Eucaryotic Chromosomes, vol. 14. Springer-Verlag, Berlin, Heidelberg. Van Holde KE. 1989. Chromatin. New York: Springer-Verlag. Van Holde, K., J. Zlatanova, G. 
Arents, and E. Moudrianakis. 1995. Elements of chromatin structure: histones, nucleosomes, and fibres, p. 1-26. In S. C. R. Elgin (ed.), Chromatin structure and gene expression. IRL Press at Oxford University Press, Oxford. External links Chromatin, Histones & Cathepsin; PMAP The Proteolysis Map-animation Nature journal: recent chromatin publications and news Protocol for in vitro Chromatin Assembly ENCODE threads Explorer Chromatin patterns at transcription factor binding sites. Nature (journal) Molecular genetics Nuclear substructures
Chromatin
[ "Chemistry", "Biology" ]
5,411
[ "Molecular genetics", "Molecular biology" ]
6,956
https://en.wikipedia.org/wiki/Conservation%20law
In physics, a conservation law states that a particular measurable property of an isolated physical system does not change as the system evolves over time. Exact conservation laws include conservation of mass-energy, conservation of linear momentum, conservation of angular momentum, and conservation of electric charge. There are also many approximate conservation laws, which apply to such quantities as mass, parity, lepton number, baryon number, strangeness, hypercharge, etc. These quantities are conserved in certain classes of physics processes, but not in all. A local conservation law is usually expressed mathematically as a continuity equation, a partial differential equation which gives a relation between the amount of the quantity and the "transport" of that quantity. It states that the amount of the conserved quantity at a point or within a volume can only change by the amount of the quantity which flows in or out of the volume. From Noether's theorem, every differentiable symmetry leads to a conservation law. Other conserved quantities can exist as well. Conservation laws as fundamental laws of nature Conservation laws are fundamental to our understanding of the physical world, in that they describe which processes can or cannot occur in nature. For example, the conservation law of energy states that the total quantity of energy in an isolated system does not change, though it may change form. In general, the total quantity of the property governed by that law remains unchanged during physical processes. With respect to classical physics, conservation laws include conservation of energy, mass (or matter), linear momentum, angular momentum, and electric charge. With respect to particle physics, particles cannot be created or destroyed except in pairs, where one is ordinary and the other is an antiparticle. With respect to symmetries and invariance principles, three special conservation laws have been described, associated with inversion or reversal of space, time, and charge. Conservation laws are considered to be fundamental laws of nature, with broad application in physics, as well as in other fields such as chemistry, biology, geology, and engineering. Most conservation laws are exact, or absolute, in the sense that they apply to all possible processes. Some conservation laws are partial, in that they hold for some processes but not for others. One particularly important result concerning conservation laws is Noether's theorem, which states that there is a one-to-one correspondence between each one of them and a differentiable symmetry of the Universe. For example, the conservation of energy follows from the uniformity of time and the conservation of angular momentum arises from the isotropy of space, i.e. because there is no preferred direction of space. Notably, there is no conservation law associated with time-reversal, although more complex conservation laws combining time-reversal with other symmetries are known. Exact laws A partial listing of physical conservation equations due to symmetry that are said to be exact laws, or more precisely have never been proven to be violated: Another exact symmetry is CPT symmetry, the simultaneous inversion of space and time coordinates, together with swapping all particles with their antiparticles; however being a discrete symmetry Noether's theorem does not apply to it. Accordingly, the conserved quantity, CPT parity, can usually not be meaningfully calculated or determined. Approximate laws There are also approximate conservation laws. 
These are approximately true in particular situations, such as low speeds, short time scales, or certain interactions. Conservation of mechanical energy Conservation of mass (approximately true for nonrelativistic speeds) Conservation of baryon number (See chiral anomaly and sphaleron) Conservation of lepton number (In the Standard Model) Conservation of flavor (violated by the weak interaction) Conservation of strangeness (violated by the weak interaction) Conservation of space-parity (violated by the weak interaction) Conservation of charge-parity (violated by the weak interaction) Conservation of time-parity (violated by the weak interaction) Conservation of CP parity (violated by the weak interaction); in the Standard Model, this is equivalent to conservation of time-parity. Global and local conservation laws The total amount of some conserved quantity in the universe could remain unchanged if an equal amount were to appear at one point A and simultaneously disappear from another separate point B. For example, an amount of energy could appear on Earth without changing the total amount in the Universe if the same amount of energy were to disappear from some other region of the Universe. This weak form of "global" conservation is really not a conservation law because it is not Lorentz invariant, so phenomena like the above do not occur in nature. Due to special relativity, if the appearance of the energy at A and disappearance of the energy at B are simultaneous in one inertial reference frame, they will not be simultaneous in other inertial reference frames moving with respect to the first. In a moving frame one will occur before the other; either the energy at A will appear before or after the energy at B disappears. In both cases, during the interval energy will not be conserved. A stronger form of conservation law requires that, for the amount of a conserved quantity at a point to change, there must be a flow, or flux of the quantity into or out of the point. For example, the amount of electric charge at a point is never found to change without an electric current into or out of the point that carries the difference in charge. Since it only involves continuous local changes, this stronger type of conservation law is Lorentz invariant; a quantity conserved in one reference frame is conserved in all moving reference frames. This is called a local conservation law. Local conservation also implies global conservation; that the total amount of the conserved quantity in the Universe remains constant. All of the conservation laws listed above are local conservation laws. A local conservation law is expressed mathematically by a continuity equation, which states that the change in the quantity in a volume is equal to the total net "flux" of the quantity through the surface of the volume. The following sections discuss continuity equations in general. Differential forms In continuum mechanics, the most general form of an exact conservation law is given by a continuity equation. For example, conservation of electric charge $q$ is $\frac{\partial \rho}{\partial t} = -\nabla \cdot \mathbf{j}$, where $\nabla\cdot$ is the divergence operator, $\rho$ is the density of $q$ (amount per unit volume), $\mathbf{j}$ is the flux of $q$ (amount crossing a unit area in unit time), and $t$ is time. 
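Before the general forms of this equation are developed below, the conservation property can be checked numerically. The sketch that follows is an illustrative toy, not part of the article: it assumes a simple advective flux $\mathbf{j} = \rho \mathbf{u}$ with constant speed, a periodic 1-D grid, and an upwind finite-volume update. Because the density in each cell changes only through fluxes exchanged with its neighbours, the total charge stays constant to rounding error, which is precisely the local-conservation statement above.

```python
import numpy as np

# 1-D continuity equation  d(rho)/dt = -d(j)/dx  on a periodic grid,
# discretized in flux form: whatever flux leaves cell i enters cell i+1,
# so the total charge sum(rho)*dx cannot change in a periodic domain.
nx, dx, dt, u = 200, 0.01, 0.002, 1.0        # grid size, steps, advection speed (assumed)
x = np.arange(nx) * dx
rho = np.exp(-((x - 1.0) / 0.2) ** 2)        # arbitrary initial charge blob (assumed)

def step(rho):
    j = u * rho                              # assumed flux law: pure advection, j = u * rho
    flux_right = j                           # upwind flux through each cell's right face
    flux_left = np.roll(j, 1)                # flux entering through the left face
    return rho - dt / dx * (flux_right - flux_left)

q0 = rho.sum() * dx
for _ in range(500):
    rho = step(rho)
print("initial charge:", q0)
print("final charge:  ", rho.sum() * dx)     # equal up to floating-point rounding
```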
If we assume that the motion $\mathbf{u}$ of the charge is a continuous function of position and time, then $\mathbf{j} = \rho \mathbf{u}$ and $\frac{\partial \rho}{\partial t} = -\nabla \cdot (\rho \mathbf{u})$. In one space dimension this can be put into the form of a homogeneous first-order quasilinear hyperbolic equation: $y_t + A(y)\, y_x = 0$, where the dependent variable $y$ is called the density of a conserved quantity, $A(y)$ is called the current Jacobian, and the subscript notation for partial derivatives has been employed. The more general inhomogeneous case: $y_t + A(y)\, y_x = s$ is not a conservation equation but the general kind of balance equation describing a dissipative system. The dependent variable $y$ is called a nonconserved quantity, and the inhomogeneous term $s(y,x,t)$ is the source, or dissipation. For example, balance equations of this kind are the momentum and energy Navier-Stokes equations, or the entropy balance for a general isolated system. In the one-dimensional space a conservation equation is a first-order quasilinear hyperbolic equation that can be put into the advection form: $y_t + a(y)\, y_x = 0$, where the dependent variable $y(x,t)$ is called the density of the conserved (scalar) quantity, and $a(y)$ is called the current coefficient, usually corresponding to the partial derivative in the conserved quantity of a current density of the conserved quantity $j(y)$: $a(y) = j_y(y)$. In this case, since the chain rule applies: $j_x = j_y\, y_x = a(y)\, y_x$, the conservation equation can be put into the current density form: $y_t + j_x(y) = 0$. In a space with more than one dimension the former definition can be extended to an equation that can be put into the form: $y_t + \mathbf{a}(y) \cdot \nabla y = 0$, where the conserved quantity is $y(\mathbf{r},t)$, $\cdot$ denotes the scalar product, $\nabla$ is the nabla operator, here indicating a gradient, and $\mathbf{a}(y)$ is a vector of current coefficients, analogously corresponding to the divergence of a vector current density $\mathbf{j}(y)$ associated to the conserved quantity: $y_t + \nabla \cdot \mathbf{j}(y) = 0$. This is the case for the continuity equation: $\rho_t + \nabla \cdot (\rho \mathbf{u}) = 0$. Here the conserved quantity is the mass, with density $\rho(\mathbf{r},t)$ and current density $\rho \mathbf{u}$, identical to the momentum density, while $\mathbf{u}(\mathbf{r},t)$ is the flow velocity. In the general case a conservation equation can be also a system of this kind of equations (a vector equation) in the form: $\mathbf{y}_t + \mathbf{A}(\mathbf{y}) \cdot \nabla \mathbf{y} = \mathbf{0}$, where $\mathbf{y}$ is called the conserved (vector) quantity, $\nabla \mathbf{y}$ is its gradient, $\mathbf{0}$ is the zero vector, and $\mathbf{A}(\mathbf{y})$ is called the Jacobian of the current density. In fact, as in the former scalar case, also in the vector case $\mathbf{A}(\mathbf{y})$ usually corresponds to the Jacobian of a current density matrix $\mathbf{J}(\mathbf{y})$: $\mathbf{A}(\mathbf{y}) = \mathbf{J}_{\mathbf{y}}(\mathbf{y})$, and the conservation equation can be put into the form: $\mathbf{y}_t + \nabla \cdot \mathbf{J}(\mathbf{y}) = \mathbf{0}$. For example, this is the case for Euler equations (fluid dynamics). In the simple incompressible case they are: $\nabla \cdot \mathbf{u} = 0$ and $\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} + \nabla s = \mathbf{0}$, where: $\mathbf{u}$ is the flow velocity vector, with components $u_1, u_2, \ldots, u_N$ in an N-dimensional space, $s$ is the specific pressure (pressure per unit density) giving the source term. It can be shown that the conserved (vector) quantity and the current density matrix for these equations are respectively: $\mathbf{y} = \begin{pmatrix} 1 \\ \mathbf{u} \end{pmatrix}, \qquad \mathbf{J} = \begin{pmatrix} \mathbf{u} \\ \mathbf{u} \otimes \mathbf{u} + s\,\mathbf{I} \end{pmatrix}$, where $\otimes$ denotes the outer product. Integral and weak forms Conservation equations can usually also be expressed in integral form: the advantage of the latter is substantially that it requires less smoothness of the solution, which paves the way to weak form, extending the class of admissible solutions to include discontinuous solutions. By integrating in any space-time domain the current density form in 1-D space: $y_t + j_x(y) = 0$, and by using Green's theorem, the integral form is: $\int_{-\infty}^{\infty} y \, dx + \int_{0}^{\infty} j(y) \, dt = 0$. In a similar fashion, for the scalar multidimensional space, the integral form is: $\oint \left[ y \, d^N r + j(y) \, dt \right] = 0$, where the line integration is performed along the boundary of the domain, in an anticlockwise manner. Moreover, by defining a test function φ(r,t) continuously differentiable both in time and space with compact support, the weak form can be obtained pivoting on the initial condition. 
In 1-D space it is: $\int_{0}^{\infty} \int_{-\infty}^{\infty} \varphi_t\, y + \varphi_x\, j(y) \, dx \, dt = -\int_{-\infty}^{\infty} \varphi(x,0)\, y(x,0)\, dx$. In the weak form all the partial derivatives of the density and current density have been passed on to the test function, which with the former hypothesis is sufficiently smooth to admit these derivatives. See also Invariant (physics) Momentum Cauchy momentum equation Energy Conservation of energy and the First law of thermodynamics Conservative system Conserved quantity Some kinds of helicity are conserved in dissipationless limit: hydrodynamical helicity, magnetic helicity, cross-helicity. Principle of mutability Conservation law of the Stress–energy tensor Riemann invariant Philosophy of physics Totalitarian principle Convection–diffusion equation Uniformity of nature Examples and applications Advection Mass conservation, or Continuity equation Charge conservation Euler equations (fluid dynamics) inviscid Burgers equation Kinematic wave Conservation of energy Traffic flow Notes References Philipson, Schuster, Modeling by Nonlinear Differential Equations: Dissipative and Conservative Processes, World Scientific Publishing Company 2009. Victor J. Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpt. 12 is a gentle introduction to symmetry, invariance, and conservation laws. E. Godlewski and P.A. Raviart, Hyperbolic systems of conservation laws, Ellipses, 1991. External links Conservation Laws – Ch. 11–15 in an online textbook Scientific laws Symmetry Thermodynamic systems
Conservation law
[ "Physics", "Chemistry", "Mathematics" ]
2,307
[ "Thermodynamic systems", "Equations of physics", "Conservation laws", "Mathematical objects", "Scientific laws", "Equations", "Physical systems", "Thermodynamics", "Geometry", "Dynamical systems", "Symmetry", "Physics theorems" ]
7,011
https://en.wikipedia.org/wiki/Control%20engineering
Control engineering, also known as control systems engineering and, in some European countries, automation engineering, is an engineering discipline that deals with control systems, applying control theory to design equipment and systems with desired behaviors in control environments. The discipline of controls overlaps and is usually taught along with electrical engineering, chemical engineering and mechanical engineering at many institutions around the world. The practice uses sensors and detectors to measure the output performance of the process being controlled; these measurements are used to provide corrective feedback helping to achieve the desired performance. Systems designed to perform without requiring human input are called automatic control systems (such as cruise control for regulating the speed of a car). Multi-disciplinary in nature, control systems engineering activities focus on implementation of control systems mainly derived by mathematical modeling of a diverse range of systems. Overview Modern day control engineering is a relatively new field of study that gained significant attention during the 20th century with the advancement of technology. It can be broadly defined or classified as practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from simple household washing machines to high-performance fighter aircraft. It seeks to understand physical systems, using mathematical modelling, in terms of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem. Control engineering is the engineering discipline that focuses on the modeling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are and hence control engineering is often viewed as a subfield of electrical engineering. Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a PID controller system. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important and control theory can help ensure stability is achieved. Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors. 
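A minimal sketch of this feedback loop, assuming a crude first-order longitudinal vehicle model and arbitrary PID gains (none of the numbers below come from a real vehicle), shows how the measured speed is fed back and compared with the set point:

```python
# Hypothetical cruise-control sketch: a PID controller regulating the speed of
# a first-order vehicle model. Gains, mass and drag are illustrative guesses.

def simulate_cruise_control(setpoint=25.0, dt=0.1, steps=600):
    kp, ki, kd = 800.0, 40.0, 20.0          # assumed PID gains
    mass, drag = 1200.0, 15.0               # assumed mass (kg) and drag coefficient
    speed, integral, prev_error = 0.0, 0.0, setpoint
    for _ in range(steps):
        error = setpoint - speed            # set point minus measured output
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        force = kp * error + ki * integral + kd * derivative   # corrective action
        # first-order longitudinal dynamics: m * dv/dt = force - drag * v
        speed += (force - drag * speed) / mass * dt
    return speed

print(simulate_cruise_control())            # settles near the 25 m/s set point
```

With the integral term present, the controller removes the steady-state error caused by drag even though drag is never modelled inside the controller itself.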
History Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the ancient Ktesibios's water clock in Alexandria, Egypt, around the third century BCE. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel. This certainly was a successful device as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 CE. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply just to entertain. The latter includes the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop" automatic control devices, include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788. In his 1868 paper "On Governors", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis. Control theory made significant strides over the next century. New mathematical techniques, as well as advances in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. New mathematical techniques included developments in optimal control in the 1950s and 1960s followed by progress in stochastic, robust, adaptive, nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped to make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes. Before it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering and control theory was studied as a part of electrical engineering since electrical circuits can often be easily described using control theory techniques. In the first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow responding mechanical systems. A very effective mechanical controller that is still widely used in some hydro plants is the governor. Later on, prior to modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today. Mathematical modelling David Quinn Mayne (1930–2024) was among the early developers of a rigorous mathematical method for analysing Model predictive control algorithms (MPC). It is currently used in tens of thousands of applications and is a core part of the advanced control technology offered by hundreds of process control producers. 
MPC's major strength is its capacity to deal with nonlinearities and hard constraints in a simple and intuitive fashion. His work underpins a class of algorithms that are provably correct, heuristically explainable, and yield control system designs which meet practically important objectives. Control systems Control theory Education At many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, but some courses can be taught in mechatronics engineering and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles as control engineering. Other engineering disciplines also overlap with control engineering as it can be applied to any system for which a suitable model can be derived. However, specialised control engineering departments do exist, for example, in Italy there are several master's programmes in Automation & Robotics that are fully specialised in Control engineering, or the Department of Automatic Control and Systems Engineering at the University of Sheffield, the Department of Robotics and Control Engineering at the United States Naval Academy, and the Department of Control and Automation Engineering at the Istanbul Technical University. Control engineering has diversified applications that include science, finance management, and even human behavior. Students of control engineering may start with a linear control system course dealing with the time and complex-s domains, known as classical control theory, which requires a thorough background in elementary mathematics and the Laplace transform. In linear control, the student does frequency and time domain analysis. Digital control and nonlinear control courses require Z transformation and algebra respectively, and could be said to complete a basic control education. Careers A control engineer's career starts with a bachelor's degree and can continue through graduate study. Control engineer degrees are typically paired with an electrical or mechanical engineering degree, but can also be paired with a degree in chemical engineering. According to a Control Engineering survey, most of the respondents were control engineers at various stages of their own careers. There are not very many careers classified simply as "control engineer"; most are specific roles that bear only a partial resemblance to the overarching discipline of control engineering. A majority of the control engineers that took the survey in 2019 are system or product designers, or even control or instrument engineers. Most of the jobs involve process engineering, production, or maintenance; they are all variations of control engineering. Because of this, there are many job opportunities in aerospace companies, manufacturing companies, automobile companies, power companies, chemical companies, petroleum companies, and government agencies. Some places that hire Control Engineers include companies such as Rockwell Automation, NASA, Ford, Phillips 66, Eastman, and Goodrich. Control engineers can earn around $66k annually at Lockheed Martin Corp. 
They can also earn up to $96k annually at General Motors Corporation. Process Control Engineers, typically found in Refineries and Specialty Chemical plants, can earn upwards of $90k annually. Recent advancement Originally, control engineering was all about continuous systems. The development of computer control tools created a need for discrete control system engineering because the communications between the computer-based digital controller and the physical system are governed by a computer clock. The equivalent of the Laplace transform in the discrete domain is the Z-transform. Today, many of the control systems are computer controlled and they consist of both digital and analog components. Therefore, at the design stage either digital components are mapped into the continuous domain and the design is carried out in the continuous domain, or analog components are mapped into the discrete domain and the design is carried out there. The first of these two methods is more commonly encountered in practice because many industrial systems have numerous continuous-system components, including mechanical, fluid, biological and analog electrical components, with a few digital controllers. Similarly, the design technique has progressed from paper-and-ruler based manual design to computer-aided design and now to computer-automated design (CAutoD), which has been made possible by evolutionary computation. CAutoD can be applied not just to tuning a predefined control scheme, but also to controller structure optimisation, system identification and invention of novel control systems, based purely upon a performance requirement, independent of any specific control scheme. Resilient control systems extend the traditional focus on addressing only planned disturbances to frameworks that attempt to address multiple types of unexpected disturbance; in particular, adapting and transforming behaviors of the control system in response to malicious actors, abnormal failure modes, undesirable human action, etc. See also Artificial intelligence Automation Automation engineering Electrical engineering Communications engineering Satellite navigation Outline of control engineering Advanced process control Building automation Computer-automated design (CAutoD, CAutoCSD) Control reconfiguration Feedback H-infinity Lead–lag compensator List of control engineering topics Quantitative feedback theory Robotic unicycle State space Sliding mode control Systems engineering Testing controller VisSim Control Engineering (magazine) Time series Process control system Robotic control Mechatronics SCADA References Further reading External links Control Labs Worldwide The Michigan Chemical Engineering Process Dynamics and Controls Open Textbook Control System Integrators Association List of control systems integrators Institution of Mechanical Engineers - Mechatronics, Informatics and Control Group (MICG) Systems Science & Control Engineering: An Open Access Journal Electrical engineering Mechanical engineering Systems engineering Engineering disciplines Automation
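The continuous-to-discrete mapping described in the "Recent advancement" discussion above can be illustrated with a first-order plant sampled under a zero-order hold; the coefficients and sampling period below are arbitrary assumptions made only for the example:

```python
import numpy as np

# Illustrative sketch: a continuous first-order plant dx/dt = a*x + b*u,
# sampled with period T under a zero-order hold, becomes the difference
# equation x[k+1] = ad*x[k] + bd*u[k] that a digital controller can use.

a, b, T = -2.0, 3.0, 0.05
ad = np.exp(a * T)              # continuous pole s = a maps to discrete pole z = e^(aT)
bd = (ad - 1.0) / a * b         # zero-order-hold input gain

u = 0.5
x_discrete = 1.0
for _ in range(20):             # 20 samples = 1 second
    x_discrete = ad * x_discrete + bd * u

x_continuous, dt = 1.0, 1e-4    # dense Euler integration of the same plant
for _ in range(round(20 * T / dt)):
    x_continuous += (a * x_continuous + b * u) * dt

print(x_discrete, x_continuous) # the two trajectories agree closely
```

The stable continuous pole s = a < 0 maps to a discrete pole z = e^(aT) inside the unit circle, which is the discrete-domain stability region used in Z-transform analysis.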
Control engineering
[ "Physics", "Engineering" ]
2,329
[ "Systems engineering", "Applied and interdisciplinary physics", "Automation", "Control engineering", "Mechanical engineering", "nan", "Electrical engineering" ]
7,039
https://en.wikipedia.org/wiki/Control%20theory
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality. To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between the actual and desired values of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that has revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics. Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system. Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm, and, in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria; and from 1922 onwards, the development of PID control theory by Nicolas Minorsky. Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs; thus control theory also has applications in life sciences, computer engineering, sociology and operations research. History Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors. A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem. A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds. 
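The stability question that Maxwell, Routh and Hurwitz formalized can be stated compactly: a linear time-invariant system is stable exactly when every root of its characteristic polynomial has a negative real part. A small sketch, using two arbitrary example polynomials and a numerical root finder in place of the Routh–Hurwitz tabular test:

```python
import numpy as np

# Illustrative sketch: test stability by locating the roots of the
# characteristic polynomial numerically. Both polynomials are made-up examples.

def is_stable(coefficients):
    """Coefficients are given highest power first, as for numpy.roots."""
    roots = np.roots(coefficients)
    return bool(np.all(roots.real < 0)), roots

print(is_stable([1, 4, 5, 2]))   # s^3 + 4s^2 + 5s + 2: roots -1, -1, -2 -> stable
print(is_stable([1, 1, 1, 10]))  # s^3 + s^2 + s + 10: a right-half-plane pair -> unstable
```

The Routh–Hurwitz criterion reaches the same verdict without computing the roots at all, which is why it mattered long before numerical root finders were available.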
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics. Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship. The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant. Open-loop and closed-loop (feedback) control Classical control theory Linear and nonlinear control theory The field of control theory can be divided into two branches: Linear control theory – This applies to systems made of devices which obey the superposition principle, which means roughly that the output is proportional to the input. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems are amenable to powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion. These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros and poles, which give solutions for system response and design techniques for most systems of interest. Nonlinear control theory – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theorem, and describing functions. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. 
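As a small sketch of that numerical route, a damped pendulum (a standard nonlinear example; the parameter values here are arbitrary) can be simulated directly with a general-purpose integrator:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch: simulate a nonlinear system (a damped pendulum)
# numerically, since no general closed-form solution exists. Parameters are
# made up for the example.

g, length, damping = 9.81, 1.0, 0.3

def pendulum(t, state):
    theta, omega = state
    # the sin(theta) term is the nonlinearity; a linear model would use theta
    return [omega, -damping * omega - (g / length) * np.sin(theta)]

sol = solve_ivp(pendulum, (0.0, 10.0), [2.5, 0.0], max_step=0.01)
print(sol.y[0, -1])   # the angle decays toward the stable equilibrium at zero
```

Near the downward equilibrium the sin(θ) term can be replaced by θ itself, which is exactly the kind of linearization described next.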
If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system using perturbation theory, and linear techniques can be used. Analysis techniques - frequency domain and time domain Mathematical techniques for analyzing and designing control systems fall into two different categories: Frequency domain – In this type the values of the state variables, the mathematical variables representing the system's input, output and feedback are represented as functions of frequency. The input signal and the system's transfer function are converted from time functions to functions of frequency by a transform such as the Fourier transform, Laplace transform, or Z transform. The advantage of this technique is that it results in a simplification of the mathematics; the differential equations that represent the system are replaced by algebraic equations in the frequency domain which is much simpler to solve. However, frequency domain techniques can only be used with linear systems, as mentioned above. Time-domain state space representation – In this type the values of the state variables are represented as functions of time. With this model, the system being analyzed is represented by one or more differential equations. Since frequency domain techniques are limited to linear systems, time domain is widely used to analyze real-world nonlinear systems. Although these are more difficult to solve, modern computer simulation techniques such as simulation languages have made their analysis routine. In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With inputs and outputs, we would otherwise have to write down Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space. System interfacing - SISO & MIMO Control systems can be divided into different categories depending on the number of inputs and outputs. Single-input single-output (SISO) – This is the simplest and most common type, in which one output is controlled by one control signal. Examples are the cruise control example above, or an audio system, in which the control input is the input audio signal and the output is the sound waves from the speaker. Multiple-input multiple-output (MIMO) – These are found in more complicated systems. For example, modern large telescopes such as the Keck and MMT have mirrors composed of many separate segments each controlled by an actuator. 
The shape of the entire mirror is constantly adjusted by a MIMO active optics control system using input from multiple sensors at the focal plane, to compensate for changes in the mirror shape due to thermal expansion, contraction, stresses as it is rotated and distortion of the wavefront due to turbulence in the atmosphere. Complicated systems such as nuclear reactors and human cells are simulated by a computer as large MIMO control systems. Classical SISO system design The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second order and single variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a Lead or Lag filter. The ultimate end goal is to meet requirements typically provided in the time-domain called the step response, or at times in the frequency domain called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically Gain and Phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model. Modern MIMO system design Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems where linear independence cannot be assured in the relationship between inputs and outputs. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory. Topics in control theory Stability The stability of a general dynamical system with no input can be described with Lyapunov stability criteria. A linear system is called bounded-input bounded-output (BIBO) stable if its output will stay bounded for any bounded input. Stability for nonlinear systems that take an input is input-to-state stability (ISS), which combines Lyapunov stability and a notion similar to BIBO stability. For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems. 
Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative-real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function complex poles reside in the open left half of the complex plane for continuous time (when the Laplace transform is used to obtain the transfer function), or inside the unit circle for discrete time (when the Z-transform is used). The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates, where the horizontal axis is the real axis, and the discrete Z-transform is in circular coordinates, where the radial coordinate measures the pole modulus. When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero. If a system in question has an impulse response of x[n] = 0.5^n u[n], then the Z-transform (see this example) is given by X(z) = 1/(1 - 0.5 z^-1), which has a pole at z = 0.5 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle. However, if the impulse response was x[n] = 1.5^n u[n], then the Z-transform is X(z) = 1/(1 - 1.5 z^-1), which has a pole at z = 1.5 and is not BIBO stable since the pole has a modulus strictly greater than one. Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots. Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll. Controllability and observability Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable. 
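Both properties can be checked with the standard Kalman rank tests on the system matrices. A sketch for an arbitrary two-state example (the matrices are made up for illustration):

```python
import numpy as np

# Illustrative sketch: Kalman rank tests for a linear system
#   x' = A x + B u,   y = C x.
# The system is controllable if [B, AB, ..., A^(n-1)B] has full rank,
# and observable if [C; CA; ...; CA^(n-1)] has full rank.

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print(np.linalg.matrix_rank(ctrb) == n)   # True: every state can be driven by u
print(np.linalg.matrix_rank(obsv) == n)   # True: every state can be inferred from y
```

If either rank test fails, one falls back on the weaker notions of stabilizability and detectability mentioned above.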
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis. Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors. Control specification Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control). A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. that the poles have Re[λ] < -λ̄, where λ̄ is a fixed value strictly greater than zero, instead of simply asking that Re[λ] < 0. Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included. Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see after). Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI). Model identification and robustness A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the true system dynamics can be so complicated that a complete model is impossible. System identification The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measurements from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations, for example, in the case of a mass-spring-damper system we know that m ẍ(t) = -K x(t) - B ẋ(t). 
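A sketch of such an off-line identification, using the mass-spring-damper model above with assumed "true" parameters standing in for real measurements:

```python
import numpy as np

# Illustrative sketch: identify K/m and B/m of a mass-spring-damper by least
# squares on xdd = -(K/m) x - (B/m) xd. The "data" are simulated from assumed
# true values, standing in for real measurements.

true_k_over_m, true_b_over_m = 4.0, 0.5
dt, n_samples = 0.01, 2000
x, xd = 1.0, 0.0
xs, xds, xdds = [], [], []
for _ in range(n_samples):
    xdd = -true_k_over_m * x - true_b_over_m * xd
    xs.append(x); xds.append(xd); xdds.append(xdd)
    xd += xdd * dt
    x += xd * dt

Phi = np.column_stack([xs, xds])                       # regressors [x, xd]
theta, *_ = np.linalg.lstsq(Phi, np.array(xdds), rcond=None)
print(-theta)                                          # ~ [4.0, 0.5] = [K/m, B/m]
```

With real sensor data the same regression would return only approximate values, which is precisely why the nominal parameters discussed next are never known exactly.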
Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal. Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance. Analysis Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margin and amplitude margin. For MIMO (multi-input multi output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). I.e., if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties. Constraints A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-wind up systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold. System classifications Linear systems control For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design. Nonlinear systems control Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, trajectory linearization control normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states. Decentralized systems control When the system is controlled by multiple controllers, the problem is one of decentralized control. 
Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions. Deterministic and stochastic systems control A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks. Main control strategies Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's Theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen. List of the main control techniques Optimal control is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to desired trajectory that consume the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability. These are Model Predictive Control (MPC) and linear-quadratic-Gaussian control (LQG). The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes. However, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in process control. Robust control deals explicitly with uncertainty in its approach to controller design. Controllers designed using robust control methods tend to be able to cope with small differences between the true system and the nominal model used for design. The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness. Examples of modern robust control techniques include H-infinity loop-shaping developed by Duncan McFarlane and Keith Glover, Sliding mode control (SMC) developed by Vadim Utkin, and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications. Robust methods aim to achieve robust performance and/or stability in the presence of small modeling errors. Stochastic control deals with control design with uncertainty in the model. In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into account these random deviations. Adaptive control uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties. Adaptive controls were applied for the first time in the aerospace industry in the 1950s, and have found particular success in that field. A hierarchical control system is a type of control system in which a set of devices and governing software is arranged in a hierarchical tree. 
When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system. Intelligent control uses various AI computing approaches like artificial neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms or a combination of these methods, such as neuro-fuzzy algorithms, to control a dynamic system. Self-organized criticality control may be defined as attempts to interfere in the processes by which the self-organized system dissipates energy. People in systems and control Many active and historical figures made significant contributions to control theory, including Pierre-Simon Laplace invented the Z-transform in his work on probability theory, now used to solve discrete-time control theory problems. The Z-transform is a discrete-time equivalent of the Laplace transform which is named after him. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control and applied it to automatic aircraft control systems. Alexander Lyapunov, whose work in the 1890s marks the beginning of stability theory. Harold S. Black invented the concept of negative feedback amplifiers in 1927. He managed to develop stable negative feedback amplifiers in the 1930s. Harry Nyquist developed the Nyquist stability criterion for feedback systems in the 1930s. Richard Bellman developed dynamic programming in the 1940s. Warren E. Dixon, control theorist and a professor Kyriakos G. Vamvoudakis developed synchronous reinforcement learning algorithms to solve optimal control and game theoretic problems Andrey Kolmogorov co-developed the Wiener–Kolmogorov filter in 1941. Norbert Wiener co-developed the Wiener–Kolmogorov filter and coined the term cybernetics in the 1940s. John R. Ragazzini introduced digital control and the use of Z-transform in control theory (invented by Laplace) in the 1950s. Lev Pontryagin introduced the maximum principle and the bang-bang principle. Pierre-Louis Lions developed viscosity solutions into stochastic control and optimal control methods. Rudolf E. Kálmán pioneered the state-space approach to systems and control. Introduced the notions of controllability and observability. Developed the Kalman filter for linear estimation. Ali H. Nayfeh who was one of the main contributors to nonlinear control theory and published many books on perturbation methods Jan C. Willems Introduced the concept of dissipativity, as a generalization of Lyapunov function to input/state/output systems. The construction of the storage function, as the analogue of a Lyapunov function is called, led to the study of the linear matrix inequality (LMI) in control theory. He pioneered the behavioral approach to mathematical systems theory. 
See also Examples of control systems Automation Deadbeat controller Distributed parameter systems Fractional-order control H-infinity loop-shaping Hierarchical control system Model predictive control Optimal control Process control Robust control Servomechanism State space (controls) Vector control Topics in control theory Coefficient diagram method Control reconfiguration Feedback H infinity Hankel singular value Krener's theorem Lead-lag compensator Minor loop feedback Multi-loop feedback Positive systems Radial basis function Root locus Signal-flow graphs Stable polynomial State space representation Steady state Transient response Transient state Underactuation Youla–Kucera parametrization Markov chain approximation method Other related topics Adaptive system Automation and remote control Bond graph Control engineering Control–feedback–abort loop Controller (control theory) Cybernetics Intelligent control Mathematical system theory Negative feedback amplifier Outline of management People in systems and control Perceptual control theory Systems theory References Further reading For Chemical Engineering External links Control Tutorials for Matlab, a set of worked-through control examples solved by several different methods. Control Tuning and Best Practices Advanced control structures, free on-line simulators explaining the control theory Control engineering Computer engineering Management cybernetics
Control theory
[ "Mathematics", "Technology", "Engineering" ]
6,082
[ "Computer engineering", "Applied mathematics", "Control theory", "Control engineering", "Electrical engineering", "Dynamical systems" ]
7,176
https://en.wikipedia.org/wiki/Cryogenics
In physics, cryogenics is the production and behaviour of materials at very low temperatures. The 13th International Institute of Refrigeration's (IIR) International Congress of Refrigeration (held in Washington DC in 1971) endorsed a universal definition of "cryogenics" and "cryogenic" by accepting a threshold of 120 K to distinguish these terms from conventional refrigeration. This is a logical dividing line, since the normal boiling points of the so-called permanent gases (such as helium, hydrogen, neon, nitrogen, oxygen, and normal air) lie below 120 K, while the Freon refrigerants, hydrocarbons, and other common refrigerants have boiling points above 120 K. Discovery of superconducting materials with critical temperatures significantly above the boiling point of nitrogen has provided new interest in reliable, low-cost methods of producing high-temperature cryogenic refrigeration. The term "high temperature cryogenic" describes temperatures ranging from above the boiling point of liquid nitrogen, −195.79 °C (77.36 K), up to −50 °C (223 K). The discovery of superconductive properties is attributed to Heike Kamerlingh Onnes, who first liquefied helium on July 10, 1908. The discovery came after the ability to reach a temperature of about 2 K. These first superconductive properties were observed in mercury in 1911, at a temperature of 4.2 K. Cryogenicists use the Kelvin or Rankine temperature scale, both of which measure from absolute zero, rather than more usual scales such as Celsius which measures from the freezing point of water at sea level or Fahrenheit which measures from the freezing point of a particular brine solution at sea level. Definitions and distinctions Cryogenics The branches of engineering that involve the study of very low temperatures (ultra low temperature i.e. below 123 K), how to produce them, and how materials behave at those temperatures. Cryobiology The branch of biology involving the study of the effects of low temperatures on organisms (most often for the purpose of achieving cryopreservation). Other applications include Lyophilization (freeze-drying) of pharmaceutical components and medicine. Cryoconservation of animal genetic resources The conservation of genetic material with the intention of conserving a breed. The conservation of genetic material is not limited to non-humans. Many services provide genetic storage or the preservation of stem cells at birth. They may be used to study the generation of cell lines or for stem-cell therapy. Cryosurgery The branch of surgery applying cryogenic temperatures to destroy and kill tissue, e.g. cancer cells. Commonly referred to as Cryoablation. Cryoelectronics The study of electronic phenomena at cryogenic temperatures. Examples include superconductivity and variable-range hopping. Cryonics Cryopreserving humans and animals with the intention of future revival. "Cryogenics" is sometimes erroneously used to mean "Cryonics" in popular culture and the press. Etymology The word cryogenics stems from Greek κρύος (cryos) – "cold" + γενής (genis) – "generating". Cryogenic fluids Cryogenic fluids are commonly characterized by their boiling points, quoted in kelvins and degrees Celsius. Industrial applications Liquefied gases, such as liquid nitrogen and liquid helium, are used in many cryogenic applications. Liquid nitrogen is the most commonly used element in cryogenics and is legally purchasable around the world. Liquid helium is also commonly used and allows for the lowest attainable temperatures to be reached. 
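As a small illustration of these scales, the commonly quoted boiling points of liquid nitrogen (about 77.36 K) and liquid helium (about 4.22 K) at atmospheric pressure can be expressed on each scale with two one-line conversions:

```python
# Illustrative conversions between the scales mentioned above. Kelvin and
# Rankine both start at absolute zero; Celsius is offset by 273.15.

def kelvin_to_celsius(k):
    return k - 273.15

def kelvin_to_rankine(k):
    return k * 1.8

for name, bp in [("liquid nitrogen", 77.36), ("liquid helium", 4.22)]:
    print(f"{name}: {bp} K = {kelvin_to_celsius(bp):.2f} °C "
          f"= {kelvin_to_rankine(bp):.2f} °R")
```

Both boiling points fall well below the 120 K threshold, which is why these liquids sit firmly on the cryogenic side of the dividing line.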
These liquids may be stored in Dewar flasks, which are double-walled containers with a high vacuum between the walls to reduce heat transfer into the liquid. Typical laboratory Dewar flasks are spherical, made of glass and protected in a metal outer container. Dewar flasks for extremely cold liquids such as liquid helium have another double-walled container filled with liquid nitrogen. Dewar flasks are named after their inventor, James Dewar, the man who first liquefied hydrogen. Thermos bottles are smaller vacuum flasks fitted in a protective casing. Cryogenic barcode labels are used to mark Dewar flasks containing these liquids, and will not frost over down to −195 degrees Celsius. Cryogenic transfer pumps are the pumps used on LNG piers to transfer liquefied natural gas from LNG carriers to LNG storage tanks, as are cryogenic valves. Cryogenic processing The field of cryogenics advanced during World War II when scientists found that metals frozen to low temperatures showed more resistance to wear. Based on this theory of cryogenic hardening, the commercial cryogenic processing industry was founded in 1966 by Bill and Ed Busch. With a background in the heat treating industry, the Busch brothers founded a company in Detroit called CryoTech in 1966. Busch originally experimented with the possibility of increasing the life of metal tools to anywhere between 200% and 400% of the original life expectancy using cryogenic tempering instead of heat treating. This evolved in the late 1990s into the treatment of other parts. Cryogens, such as liquid nitrogen, are further used for specialty chilling and freezing applications. Some chemical reactions, like those used to produce the active ingredients for the popular statin drugs, must occur at low temperatures of approximately . Special cryogenic chemical reactors are used to remove reaction heat and provide a low temperature environment. The freezing of foods and biotechnology products, like vaccines, requires nitrogen in blast freezing or immersion freezing systems. Certain soft or elastic materials become hard and brittle at very low temperatures, which makes cryogenic milling (cryomilling) an option for some materials that cannot easily be milled at higher temperatures. Cryogenic processing is not a substitute for heat treatment, but rather an extension of the heating–quenching–tempering cycle. Normally, when an item is quenched, the final temperature is ambient. The only reason for this is that most heat treaters do not have cooling equipment. There is nothing metallurgically significant about ambient temperature. The cryogenic process continues this action from ambient temperature down to . In most instances the cryogenic cycle is followed by a heat tempering procedure. As all alloys do not have the same chemical constituents, the tempering procedure varies according to the material's chemical composition, thermal history and/or a tool's particular service application. The entire process takes 3–4 days. Fuels Another use of cryogenics is cryogenic fuels for rockets with liquid hydrogen as the most widely used example. Liquid oxygen (LOX) is even more widely used but as an oxidizer, not a fuel. NASA's workhorse Space Shuttle used cryogenic hydrogen/oxygen propellant as its primary means of getting into orbit. LOX is also widely used with RP-1 kerosene, a non-cryogenic hydrocarbon, such as in the rockets built for the Soviet space program by Sergei Korolev. 
Russian aircraft manufacturer Tupolev developed a version of its popular design Tu-154 with a cryogenic fuel system, known as the Tu-155. The plane uses a fuel referred to as liquefied natural gas or LNG, and made its first flight in 1989. Other applications Some applications of cryogenics: Nuclear magnetic resonance (NMR) is one of the most common methods to determine the physical and chemical properties of atoms by detecting the radio frequency absorbed and subsequent relaxation of nuclei in a magnetic field. This is one of the most commonly used characterization techniques and has applications in numerous fields. Primarily, the strong magnetic fields are generated by supercooling electromagnets, although there are spectrometers that do not require cryogens. In traditional superconducting solenoids, liquid helium is used to cool the inner coils because it has a boiling point of around 4 K at ambient pressure. Inexpensive metallic superconductors can be used for the coil wiring. So-called high-temperature superconducting compounds can be made to super conduct with the use of liquid nitrogen, which boils at around 77 K. Magnetic resonance imaging (MRI) is a complex application of NMR where the geometry of the resonances is deconvoluted and used to image objects by detecting the relaxation of protons that have been perturbed by a radio-frequency pulse in the strong magnetic field. This is most commonly used in health applications. Cryogenic electron microscopy (cryoEM) is a popular method in structural biology for elucidating the structures of proteins, cells, and other biological systems. Samples are plunge-frozen into a cryogen such as liquid ethane cooled by liquid nitrogen, and are then kept at liquid nitrogen temperature as they are inserted into an electron microscope for imaging. Electron microscopes are also themselves cooled by liquid nitrogen. In large cities, it is difficult to transmit power by overhead cables, so underground cables are used. But underground cables get heated and the resistance of the wire increases, leading to waste of power. Superconductors could be used to increase power throughput, although they would require cryogenic liquids such as nitrogen or helium to cool special alloy-containing cables to increase power transmission. Several feasibility studies have been performed and the field is the subject of an agreement within the International Energy Agency. Cryogenic gases are used in transportation and storage of large masses of frozen food. When very large quantities of food must be transported to regions like war zones, earthquake hit regions, etc., they must be stored for a long time, so cryogenic food freezing is used. Cryogenic food freezing is also helpful for large scale food processing industries. Many infrared (forward looking infrared) cameras require their detectors to be cryogenically cooled. Certain rare blood groups are stored at low temperatures, such as −165°C, at blood banks. Cryogenics technology using liquid nitrogen and CO2 has been built into nightclub effect systems to create a chilling effect and white fog that can be illuminated with colored lights. Cryogenic cooling is used to cool the tool tip at the time of machining in manufacturing process. It increases the tool life. Oxygen is used to perform several important functions in the steel manufacturing process. Many rockets and lunar landers use cryogenic gases as propellants. These include liquid oxygen, liquid hydrogen, and liquid methane. 
By freezing an automobile or truck tire in liquid nitrogen, the rubber is made brittle and can be crushed into small particles. These particles can be used again for other items. Experimental research on certain physics phenomena, such as spintronics and magnetotransport properties, requires cryogenic temperatures for the effects to be observable. Certain vaccines must be stored at cryogenic temperatures. For example, the Pfizer–BioNTech COVID-19 vaccine must be stored at temperatures of . (See cold chain.) Production Cryogenic cooling of devices and material is usually achieved via the use of liquid nitrogen, liquid helium, or a mechanical cryocooler (which uses high-pressure helium lines). Gifford-McMahon cryocoolers, pulse tube cryocoolers and Stirling cryocoolers are in wide use with selection based on required base temperature and cooling capacity. The most recent development in cryogenics is the use of magnets as regenerators as well as refrigerators. These devices work on the principle known as the magnetocaloric effect. Detectors There are various cryogenic detectors which are used to detect particles. For cryogenic temperature measurement down to 30 K, Pt100 sensors, a resistance temperature detector (RTD), are used. For temperatures lower than 30 K, it is necessary to use a silicon diode for accuracy. See also Absolute zero Lowest temperature recorded on Earth Cryogenic grinding Flash freezing Frozen food References Further reading Haselden, G. G. (1971), Cryogenic fundamentals, Academic Press, New York, . Cooling technology Industrial gases
Cryogenics
[ "Physics", "Chemistry" ]
2,422
[ "Chemical process engineering", "Applied and interdisciplinary physics", "Cryogenics", "Industrial gases" ]
7,252
https://en.wikipedia.org/wiki/Cell%20cycle
The cell cycle, or cell-division cycle, is the sequential series of events that take place in a cell and cause it to divide into two daughter cells. These events include the growth of the cell, duplication of its DNA (DNA replication) and some of its organelles, and subsequently the partitioning of its cytoplasm, chromosomes and other components into two daughter cells in a process called cell division. In eukaryotic cells (those having a cell nucleus), including animal, plant, fungal, and protist cells, the cell cycle is divided into two main stages: interphase, and the M phase that includes mitosis and cytokinesis. During interphase, the cell grows, accumulating nutrients needed for mitosis, and replicates its DNA and some of its organelles. During the M phase, the replicated chromosomes, organelles, and cytoplasm separate into two new daughter cells. To ensure the proper replication of cellular components and division, there are control mechanisms known as cell cycle checkpoints after each of the key steps of the cycle that determine if the cell can progress to the next phase. In cells without nuclei (the prokaryotes: bacteria and archaea), the cell cycle is divided into the B, C, and D periods. The B period extends from the end of cell division to the beginning of DNA replication. DNA replication occurs during the C period. The D period refers to the stage between the end of DNA replication and the splitting of the bacterial cell into two daughter cells. In single-celled organisms, a single cell-division cycle is how the organism reproduces to ensure its survival. In multicellular organisms such as plants and animals, a series of cell-division cycles is how the organism develops from a single-celled fertilized egg into a mature organism, and is also the process by which hair, skin, blood cells, and some internal organs are regenerated and healed (with the possible exception of nerves; see nerve damage). After cell division, each of the daughter cells begins the interphase of a new cell cycle. Although the various stages of interphase are not usually morphologically distinguishable, each phase of the cell cycle has a distinct set of specialized biochemical processes that prepare the cell for initiation of the cell division. Phases The eukaryotic cell cycle consists of four distinct phases: G1 phase, S phase (synthesis), G2 phase (collectively known as interphase) and M phase (mitosis and cytokinesis). M phase is itself composed of two tightly coupled processes: mitosis, in which the cell's nucleus divides, and cytokinesis, in which the cell's cytoplasm and cell membrane divide, forming two daughter cells. Activation of each phase is dependent on the proper progression and completion of the previous one. Cells that have temporarily or reversibly stopped dividing are said to have entered a state of quiescence known as G0 phase or resting phase. G0 phase (quiescence) G0 is a resting phase where the cell has left the cycle and has stopped dividing. The cell cycle starts with this phase. Non-proliferative (non-dividing) cells in multicellular eukaryotes generally enter the quiescent G0 state from G1 and may remain quiescent for long periods of time, possibly indefinitely (as is often the case for neurons). This is very common for cells that are fully differentiated. Some cells enter the G0 phase semi-permanently and are considered post-mitotic, e.g., some liver, kidney, and stomach cells. Many cells do not enter G0 and continue to divide throughout an organism's life, e.g., epithelial cells. 
The word "post-mitotic" is sometimes used to refer to both quiescent and senescent cells. Cellular senescence occurs in response to DNA damage and external stress and usually constitutes an arrest in G1. Cellular senescence may make a cell's progeny nonviable; it is often a biochemical alternative to the self-destruction of such a damaged cell by apoptosis. Interphase Interphase represents the phase between two successive M phases. Interphase is a series of changes that takes place in a newly formed cell and its nucleus before it becomes capable of division again. It is also called preparatory phase or intermitosis. Typically interphase lasts for at least 91% of the total time required for the cell cycle. Interphase proceeds in three stages, G1, S, and G2, followed by the cycle of mitosis and cytokinesis. The cell's nuclear DNA contents are duplicated during S phase. G1 phase (First growth phase or Post mitotic gap phase) The first phase within interphase, from the end of the previous M phase until the beginning of DNA synthesis, is called G1 (G indicating gap). It is also called the growth phase. During this phase, the biosynthetic activities of the cell, which are considerably slowed down during M phase, resume at a high rate. The duration of G1 is highly variable, even among different cells of the same species. In this phase, the cell increases its supply of proteins, increases the number of organelles (such as mitochondria, ribosomes), and grows in size. In G1 phase, a cell has three options. To continue cell cycle and enter S phase Stop cell cycle and enter G0 phase for undergoing differentiation. Become arrested in G1 phase hence it may enter G0 phase or re-enter cell cycle. The deciding point is called check point (Restriction point). This check point is called the restriction point or START and is regulated by G1/S cyclins, which cause transition from G1 to S phase. Passage through the G1 check point commits the cell to division. S phase (DNA replication) The ensuing S phase starts when DNA synthesis commences; when it is complete, all of the chromosomes have been replicated, i.e., each chromosome consists of two sister chromatids. Thus, during this phase, the amount of DNA in the cell has doubled, though the ploidy and number of chromosomes are unchanged. Rates of RNA transcription and protein synthesis are very low during this phase. An exception to this is histone production, most of which occurs during the S phase. G2 phase (growth) G2 phase occurs after DNA replication and is a period of protein synthesis and rapid cell growth to prepare the cell for mitosis. During this phase microtubules begin to reorganize to form a spindle (preprophase). Before proceeding to mitotic phase, cells must be checked at the G2 checkpoint for any DNA damage within the chromosomes. The G2 checkpoint is mainly regulated by the tumor protein p53. If the DNA is damaged, p53 will either repair the DNA or trigger the apoptosis of the cell. If p53 is dysfunctional or mutated, cells with damaged DNA may continue through the cell cycle, leading to the development of cancer. Mitotic phase (chromosome separation) The relatively brief M phase consists of nuclear division (karyokinesis) and division of cytoplasm (cytokinesis). M phase is complex and highly regulated. The sequence of events is divided into phases, corresponding to the completion of one set of activities and the start of the next. 
These phases are sequentially known as: prophase prometaphase metaphase anaphase telophase Mitosis is the process by which a eukaryotic cell separates the chromosomes in its cell nucleus into two identical sets in two nuclei. During the process of mitosis the pairs of chromosomes condense and attach to microtubules that pull the sister chromatids to opposite sides of the cell. Mitosis occurs exclusively in eukaryotic cells, but occurs in different ways in different species. For example, animal cells undergo an "open" mitosis, where the nuclear envelope breaks down before the chromosomes separate, while fungi such as Aspergillus nidulans and Saccharomyces cerevisiae (yeast) undergo a "closed" mitosis, where chromosomes divide within an intact cell nucleus. Cytokinesis phase (separation of all cell components) Mitosis is immediately followed by cytokinesis, which divides the nuclei, cytoplasm, organelles and cell membrane into two cells containing roughly equal shares of these cellular components. Cytokinesis occurs differently in plant and animal cells. While the cell membrane forms a groove that gradually deepens to separate the cytoplasm in animal cells, a cell plate is formed to separate it in plant cells. The position of the cell plate is determined by the position of a preprophase band of microtubules and actin filaments. Mitosis and cytokinesis together define the division of the parent cell into two daughter cells, genetically identical to each other and to their parent cell. This accounts for approximately 10% of the cell cycle. Because cytokinesis usually occurs in conjunction with mitosis, "mitosis" is often used interchangeably with "M phase". However, there are many cells where mitosis and cytokinesis occur separately, forming single cells with multiple nuclei in a process called endoreplication. This occurs most notably among the fungi and slime molds, but is found in various groups. Even in animals, cytokinesis and mitosis may occur independently, for instance during certain stages of fruit fly embryonic development. Errors in mitosis can result in cell death through apoptosis or cause mutations that may lead to cancer. Regulation of eukaryotic cell cycle Regulation of the cell cycle involves processes crucial to the survival of a cell, including the detection and repair of genetic damage as well as the prevention of uncontrolled cell division. The molecular events that control the cell cycle are ordered and directional; that is, each process occurs in a sequential fashion and it is impossible to "reverse" the cycle. Role of cyclins and CDKs Two key classes of regulatory molecules, cyclins and cyclin-dependent kinases (CDKs), determine a cell's progress through the cell cycle. Leland H. Hartwell, R. Timothy Hunt, and Paul M. Nurse won the 2001 Nobel Prize in Physiology or Medicine for their discovery of these central molecules. Many of the genes encoding cyclins and CDKs are conserved among all eukaryotes, but in general, more complex organisms have more elaborate cell cycle control systems that incorporate more individual components. Many of the relevant genes were first identified by studying yeast, especially Saccharomyces cerevisiae; genetic nomenclature in yeast dubs many of these genes cdc (for "cell division cycle") followed by an identifying number, e.g. cdc25 or cdc20. Cyclins form the regulatory subunits and CDKs the catalytic subunits of an activated heterodimer; cyclins have no catalytic activity and CDKs are inactive in the absence of a partner cyclin. 
When activated by a bound cyclin, CDKs perform a common biochemical reaction called phosphorylation that activates or inactivates target proteins to orchestrate coordinated entry into the next phase of the cell cycle. Different cyclin-CDK combinations determine the downstream proteins targeted. CDKs are constitutively expressed in cells whereas cyclins are synthesised at specific stages of the cell cycle, in response to various molecular signals. General mechanism of cyclin-CDK interaction Upon receiving a pro-mitotic extracellular signal, G1 cyclin-CDK complexes become active to prepare the cell for S phase, promoting the expression of transcription factors that in turn promote the expression of S cyclins and of enzymes required for DNA replication. The G1 cyclin-CDK complexes also promote the degradation of molecules that function as S phase inhibitors by targeting them for ubiquitination. Once a protein has been ubiquitinated, it is targeted for proteolytic degradation by the proteasome. Results from a study of E2F transcriptional dynamics at the single-cell level argue that the role of G1 cyclin-CDK activities, in particular cyclin D-CDK4/6, is to tune the timing rather than the commitment of cell cycle entry. Active S cyclin-CDK complexes phosphorylate proteins that make up the pre-replication complexes assembled during G1 phase on DNA replication origins. The phosphorylation serves two purposes: to activate each already-assembled pre-replication complex, and to prevent new complexes from forming. This ensures that every portion of the cell's genome will be replicated once and only once. The reason for prevention of gaps in replication is fairly clear, because daughter cells that are missing all or part of crucial genes will die. However, for reasons related to gene copy number effects, possession of extra copies of certain genes is also deleterious to the daughter cells. Mitotic cyclin-CDK complexes, which are synthesized but inactivated during S and G2 phases, promote the initiation of mitosis by stimulating downstream proteins involved in chromosome condensation and mitotic spindle assembly. A critical complex activated during this process is a ubiquitin ligase known as the anaphase-promoting complex (APC), which promotes degradation of structural proteins associated with the chromosomal kinetochore. APC also targets the mitotic cyclins for degradation, ensuring that telophase and cytokinesis can proceed. Specific action of cyclin-CDK complexes Cyclin D is the first cyclin produced in the cells that enter the cell cycle, in response to extracellular signals (e.g. growth factors). Cyclin D levels stay low in resting cells that are not proliferating. Additionally, CDK4/6 and CDK2 are also inactive because CDK4/6 are bound by INK4 family members (e.g., p16), limiting kinase activity. Meanwhile, CDK2 complexes are inhibited by the CIP/KIP proteins such as p21 and p27. When it is time for a cell to enter the cell cycle, triggered by a mitogenic stimulus, levels of cyclin D increase. In response to this trigger, cyclin D binds to existing CDK4/6, forming the active cyclin D-CDK4/6 complex. Cyclin D-CDK4/6 complexes in turn mono-phosphorylate the retinoblastoma susceptibility protein (Rb) to pRb. The un-phosphorylated Rb tumour suppressor functions in inducing cell cycle exit and maintaining G0 arrest (senescence). In the last few decades, a model has been widely accepted whereby pRB proteins are inactivated by cyclin D-Cdk4/6-mediated phosphorylation. 
Rb has 14+ potential phosphorylation sites. Cyclin D-Cdk4/6 progressively phosphorylates Rb to a hyperphosphorylated state, which triggers dissociation of pRB–E2F complexes, thereby inducing G1/S cell cycle gene expression and progression into S phase. Observations from one study have shown that Rb is present in three types of isoforms: (1) un-phosphorylated Rb in the G0 state; (2) mono-phosphorylated Rb, also referred to as "hypo-phosphorylated" or "partially" phosphorylated Rb, in the early G1 state; and (3) inactive hyper-phosphorylated Rb in the late G1 state. In early G1 cells, mono-phosphorylated Rb exists as 14 different isoforms, each of which has a distinct E2F binding affinity. Rb has been found to associate with hundreds of different proteins, and the idea that different mono-phosphorylated Rb isoforms have different protein partners was very appealing. A later report confirmed that mono-phosphorylation controls Rb's association with other proteins and generates functionally distinct forms of Rb. All of the different mono-phosphorylated Rb isoforms inhibit the E2F transcriptional program and are able to arrest cells in G1 phase. Different mono-phosphorylated forms of Rb have distinct transcriptional outputs that extend beyond E2F regulation. In general, the binding of pRb to E2F inhibits the E2F target gene expression of certain G1/S and S transition genes, including E-type cyclins. The partial phosphorylation of Rb relieves this Rb-mediated suppression of E2F target gene expression and begins the expression of cyclin E. The molecular mechanism that causes the cell to switch to cyclin E activation is currently not known, but as cyclin E levels rise, the active cyclin E-CDK2 complex is formed, which inactivates Rb by hyper-phosphorylation. Hyperphosphorylated Rb dissociates completely from E2F, enabling the further expression of the wide range of E2F target genes required for driving cells into S phase. It has been identified that cyclin D-Cdk4/6 binds to a C-terminal alpha-helix region of Rb that is recognized only by cyclin D and not by the other cyclins (cyclin E, A and B). This observation, based on the structural analysis of Rb phosphorylation, supports the view that Rb is phosphorylated to different levels through multiple cyclin-Cdk complexes. This also makes feasible the current model of a simultaneous switch-like inactivation of all mono-phosphorylated Rb isoforms through one type of Rb hyper-phosphorylation mechanism. In addition, mutational analysis of the cyclin D-Cdk4/6-specific Rb C-terminal helix shows that disrupting cyclin D-Cdk4/6 binding to Rb prevents Rb phosphorylation, arrests cells in G1, and bolsters Rb's function as a tumor suppressor. This cyclin-Cdk-driven cell cycle transition mechanism governs a cell's commitment to the cell cycle, allowing cell proliferation. Cancerous cell growth is often accompanied by deregulation of cyclin D-Cdk4/6 activity. The hyperphosphorylated Rb dissociates from the E2F/DP1/Rb complex (which was bound to the E2F-responsive genes, effectively "blocking" them from transcription), activating E2F. Activation of E2F results in transcription of various genes like cyclin E, cyclin A, DNA polymerase, thymidine kinase, etc. Cyclin E thus produced binds to CDK2, forming the cyclin E-CDK2 complex, which pushes the cell from G1 into S phase (the G1/S transition). 
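The switch-like commitment described above — cyclin D-CDK4/6 partially relieving Rb repression, E2F driving cyclin E, and cyclin E-CDK2 feeding back to hyperphosphorylate Rb — can be illustrated with a deliberately simplified toy model. The sketch below is not taken from the article or from any published parameterization; the variable names, rate constants, and Hill-type feedback term are illustrative assumptions chosen only to show how a positive feedback loop can turn a graded mitogen input into an essentially all-or-none response.

```python
# Toy model of the Rb-E2F positive-feedback switch (illustrative sketch only).
# Lumped variables: E ~ free, active E2F; CE ~ cyclin E-CDK2 activity.
# "mitogen" stands in for cyclin D-CDK4/6 activity; all parameters are made up.

def rb_e2f_switch(mitogen, t_end=300.0, dt=0.01):
    """Integrate a minimal two-variable model with forward Euler and return
    the (approximately steady-state) levels of E2F and cyclin E-CDK2."""
    E, CE = 0.0, 0.0
    k_mito = 0.02        # E2F released by cyclin D-CDK4/6 (mono-phosphorylated Rb)
    k_fb = 0.4           # maximal E2F release via cyclin E-CDK2 hyperphosphorylating Rb
    k_syn = 0.2          # E2F-driven synthesis of cyclin E
    d_e, d_ce = 0.1, 0.2 # first-order decay rates
    K = 1.0              # feedback threshold; the Hill term makes the response switch-like
    for _ in range(int(t_end / dt)):
        feedback = k_fb * CE**2 / (K**2 + CE**2)
        dE = k_mito * mitogen + feedback - d_e * E
        dCE = k_syn * E - d_ce * CE
        E, CE = E + dE * dt, CE + dCE * dt
    return E, CE

if __name__ == "__main__":
    for m in (0.0, 0.2, 1.0):
        E, CE = rb_e2f_switch(m)
        print(f"mitogen={m:.1f} -> E2F ~ {E:.2f}, cyclin E-CDK2 ~ {CE:.2f}")
```

Run as written, a weak mitogen signal settles near the "off" state while a strong one flips the feedback loop to a high E2F/cyclin E-CDK2 state, mirroring in miniature the commitment behaviour described in the text.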
Cyclin B-Cdk1 complex activation causes breakdown of the nuclear envelope and initiation of prophase, and subsequently, its deactivation causes the cell to exit mitosis. A quantitative study of E2F transcriptional dynamics at the single-cell level, using engineered fluorescent reporter cells, provided a quantitative framework for understanding the control logic of cell cycle entry, challenging the canonical textbook model. Genes that regulate the amplitude of E2F accumulation, such as Myc, determine the commitment to the cell cycle and S phase entry. G1 cyclin-CDK activities are not the driver of cell cycle entry. Instead, they primarily tune the timing of E2F increase, thereby modulating the pace of cell cycle progression. Inhibitors Endogenous Two families of genes, the cip/kip (CDK interacting protein/Kinase inhibitory protein) family and the INK4a/ARF (Inhibitor of Kinase 4/Alternative Reading Frame) family, prevent the progression of the cell cycle. Because these genes are instrumental in prevention of tumor formation, they are known as tumor suppressors. The cip/kip family includes the genes p21, p27 and p57. They halt the cell cycle in G1 phase by binding to and inactivating cyclin-CDK complexes. p21 is activated by p53 (which, in turn, is triggered by DNA damage, e.g. due to radiation). p27 is activated by Transforming Growth Factor β (TGF β), a growth inhibitor. The INK4a/ARF family includes p16INK4a, which binds to CDK4 and arrests the cell cycle in G1 phase, and p14ARF, which prevents p53 degradation. Synthetic Synthetic inhibitors of Cdc25 could also be useful for the arrest of the cell cycle and therefore be useful as antineoplastic and anticancer agents. Many human cancers possess hyper-activated Cdk4/6 activity. Given the observations of cyclin D-Cdk4/6 function, inhibition of Cdk4/6 should result in preventing a malignant tumor from proliferating. Consequently, scientists have tried to develop synthetic Cdk4/6 inhibitors, as Cdk4/6 has been characterized as a therapeutic target for anti-tumor effectiveness. Three Cdk4/6 inhibitors – palbociclib, ribociclib, and abemaciclib – have received FDA approval for clinical use to treat advanced-stage or metastatic, hormone-receptor-positive (HR-positive, HR+), HER2-negative (HER2-) breast cancer. For example, palbociclib is an orally active CDK4/6 inhibitor which has demonstrated improved outcomes for ER-positive/HER2-negative advanced breast cancer. The main side effect is neutropenia, which can be managed by dose reduction. Cdk4/6-targeted therapy will only treat cancer types where Rb is expressed. Cancer cells with loss of Rb have primary resistance to Cdk4/6 inhibitors. Transcriptional regulatory network Current evidence suggests that a semi-autonomous transcriptional network acts in concert with the CDK-cyclin machinery to regulate the cell cycle. Several gene expression studies in Saccharomyces cerevisiae have identified 800–1200 genes that change expression over the course of the cell cycle. They are transcribed at high levels at specific points in the cell cycle, and remain at lower levels throughout the rest of the cycle. While the set of identified genes differs between studies due to the computational methods and criteria used to identify them, each study indicates that a large portion of yeast genes are temporally regulated. Many periodically expressed genes are driven by transcription factors that are also periodically expressed. 
One screen of single-gene knockouts identified 48 transcription factors (about 20% of all non-essential transcription factors) that show cell cycle progression defects. Genome-wide studies using high throughput technologies have identified the transcription factors that bind to the promoters of yeast genes, and correlating these findings with temporal expression patterns have allowed the identification of transcription factors that drive phase-specific gene expression. The expression profiles of these transcription factors are driven by the transcription factors that peak in the prior phase, and computational models have shown that a CDK-autonomous network of these transcription factors is sufficient to produce steady-state oscillations in gene expression). Experimental evidence also suggests that gene expression can oscillate with the period seen in dividing wild-type cells independently of the CDK machinery. Orlando et al. used microarrays to measure the expression of a set of 1,271 genes that they identified as periodic in both wild type cells and cells lacking all S-phase and mitotic cyclins (clb1,2,3,4,5,6). Of the 1,271 genes assayed, 882 continued to be expressed in the cyclin-deficient cells at the same time as in the wild type cells, despite the fact that the cyclin-deficient cells arrest at the border between G1 and S phase. However, 833 of the genes assayed changed behavior between the wild type and mutant cells, indicating that these genes are likely directly or indirectly regulated by the CDK-cyclin machinery. Some genes that continued to be expressed on time in the mutant cells were also expressed at different levels in the mutant and wild type cells. These findings suggest that while the transcriptional network may oscillate independently of the CDK-cyclin oscillator, they are coupled in a manner that requires both to ensure the proper timing of cell cycle events. Other work indicates that phosphorylation, a post-translational modification, of cell cycle transcription factors by Cdk1 may alter the localization or activity of the transcription factors in order to tightly control timing of target genes. While oscillatory transcription plays a key role in the progression of the yeast cell cycle, the CDK-cyclin machinery operates independently in the early embryonic cell cycle. Before the midblastula transition, zygotic transcription does not occur and all needed proteins, such as the B-type cyclins, are translated from maternally loaded mRNA. DNA replication and DNA replication origin activity Analyses of synchronized cultures of Saccharomyces cerevisiae under conditions that prevent DNA replication initiation without delaying cell cycle progression showed that origin licensing decreases the expression of genes with origins near their 3' ends, revealing that downstream origins can regulate the expression of upstream genes. This confirms previous predictions from mathematical modeling of a global causal coordination between DNA replication origin activity and mRNA expression, and shows that mathematical modeling of DNA microarray data can be used to correctly predict previously unknown biological modes of regulation. Checkpoints Cell cycle checkpoints are used by the cell to monitor and regulate the progress of the cell cycle. Checkpoints prevent cell cycle progression at specific points, allowing verification of necessary phase processes and repair of DNA damage. The cell cannot proceed to the next phase until checkpoint requirements have been met. 
Checkpoints typically consist of a network of regulatory proteins that monitor and dictate the progression of the cell through the different stages of the cell cycle. It is estimated that in normal human cells about 1% of single-strand DNA damages are converted to about 50 endogenous DNA double-strand breaks per cell per cell cycle. Although such double-strand breaks are usually repaired with high fidelity, errors in their repair are considered to contribute significantly to the rate of cancer in humans. There are several checkpoints to ensure that damaged or incomplete DNA is not passed on to daughter cells. Three main checkpoints exist: the G1/S checkpoint, the G2/M checkpoint and the metaphase (mitotic) checkpoint. Another checkpoint is the G0 checkpoint, in which the cells are checked for maturity. If the cells fail to pass this checkpoint because they are not yet ready, they will be excluded from dividing. The G1/S transition is a rate-limiting step in the cell cycle and is also known as the restriction point. This is where the cell checks whether it has enough raw materials to fully replicate its DNA (nucleotide bases, DNA polymerase, chromatin, etc.). An unhealthy or malnourished cell will get stuck at this checkpoint. The G2/M checkpoint is where the cell ensures that it has enough cytoplasm and phospholipids for two daughter cells. Sometimes more importantly, it checks to see if it is the right time to replicate. There are some situations where many cells need to all replicate simultaneously (for example, a growing embryo should have a symmetric cell distribution until it reaches the mid-blastula transition). This is done by controlling the G2/M checkpoint. The metaphase checkpoint is a fairly minor checkpoint, in that once a cell is in metaphase, it has committed to undergoing mitosis. However, that is not to say it is unimportant. In this checkpoint, the cell checks to ensure that the spindle has formed and that all of the chromosomes are aligned at the spindle equator before anaphase begins. While these are the three "main" checkpoints, not all cells have to pass through each of these checkpoints in this order to replicate. Many types of cancer are caused by mutations that allow the cells to speed through the various checkpoints or even skip them altogether, going from S to M to S phase almost consecutively. Because these cells have lost their checkpoints, any DNA mutations that may have occurred are disregarded and passed on to the daughter cells. This is one reason why cancer cells have a tendency to acquire mutations exponentially. Aside from cancer cells, many fully differentiated cell types no longer replicate, so they leave the cell cycle and stay in G0 until their death, thus removing the need for cellular checkpoints. An alternative model of the cell cycle response to DNA damage has also been proposed, known as the postreplication checkpoint. Checkpoint regulation plays an important role in an organism's development. In sexual reproduction, when the sperm binds to the egg during fertilization, it releases signalling factors that notify the egg that it has been fertilized. Among other things, this induces the now fertilized oocyte to return from its previously dormant G0 state back into the cell cycle and on to mitotic replication and division. p53 plays an important role in triggering the control mechanisms at both the G1/S and G2/M checkpoints. In addition to p53, checkpoint regulators are being heavily researched for their roles in cancer growth and proliferation. 
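As a rough illustration of the checkpoint logic described above, the hypothetical sketch below treats the cycle as a state machine whose transitions are gated by simplified yes/no conditions standing in for the G1/S (restriction point), G2/M, and metaphase checkpoints. The field names and conditions are invented for this example; real checkpoints are enforced by biochemical networks (p53, cyclin-CDK complexes, the spindle assembly machinery), not boolean flags.

```python
# Hypothetical sketch: cell cycle phases gated by simplified checkpoint predicates.
# Field names and conditions are illustrative assumptions, not a real model.

PHASES = ["G1", "S", "G2", "M"]

def checkpoint_passes(phase, cell):
    """Return True if the simplified checkpoint for leaving `phase` is satisfied."""
    if phase == "G1":   # restriction point: raw materials and growth signal present
        return cell["raw_materials"] and cell["growth_signal"]
    if phase == "G2":   # G2/M: DNA fully replicated and no unrepaired damage
        return cell["dna_replicated"] and not cell["dna_damage"]
    if phase == "M":    # metaphase checkpoint: spindle formed, chromosomes aligned
        return cell["chromosomes_attached"]
    return True          # no explicit checkpoint modeled for leaving S in this sketch

def advance(cell):
    """Move to the next phase if the checkpoint passes; otherwise arrest."""
    phase = cell["phase"]
    if not checkpoint_passes(phase, cell):
        return f"arrested in {phase}"
    if phase == "S":                      # leaving S means replication completed
        cell["dna_replicated"] = True
    cell["phase"] = PHASES[(PHASES.index(phase) + 1) % len(PHASES)]
    return f"{phase} -> {cell['phase']}"

cell = {"phase": "G1", "raw_materials": True, "growth_signal": True,
        "dna_replicated": False, "dna_damage": False, "chromosomes_attached": True}
for _ in range(5):
    print(advance(cell))
```

Setting dna_damage to True before the G2 step arrests the toy cell there, loosely analogous to a p53-mediated arrest at the G2/M checkpoint.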
Fluorescence imaging of the cell cycle Pioneering work by Atsushi Miyawaki and coworkers developed the fluorescent ubiquitination-based cell cycle indicator (FUCCI), which enables fluorescence imaging of the cell cycle. Originally, a green fluorescent protein, mAG, was fused to hGem(1/110) and an orange fluorescent protein (mKO2) was fused to hCdt1(30/120). Note that these fusions are fragments that contain a nuclear localization signal and ubiquitination sites for degradation, but are not functional proteins. The green fluorescent protein is made during the S, G2, or M phase and degraded during the G0 or G1 phase, while the orange fluorescent protein is made during the G0 or G1 phase and destroyed during the S, G2, or M phase. A far-red and near-infrared FUCCI was developed using a cyanobacteria-derived fluorescent protein (smURFP) and a bacteriophytochrome-derived fluorescent protein. Several modifications have been made to the original FUCCI system to improve its usability in several in vitro systems and model organisms. These advancements have increased the sensitivity and accuracy of cell cycle phase detection, enabling more precise assessments of cellular proliferation. Role in tumor formation A dysregulation of the cell cycle components may lead to tumor formation. As mentioned above, when some genes like the cell cycle inhibitors RB and p53 mutate, they may cause the cell to multiply uncontrollably, forming a tumor. Although the duration of the cell cycle in tumor cells is equal to or longer than that of the normal cell cycle, the proportion of cells that are in active cell division (versus quiescent cells in G0 phase) in tumors is much higher than that in normal tissue. Thus there is a net increase in cell number as the number of cells that die by apoptosis or senescence remains the same. The cells which are actively undergoing the cell cycle are targeted in cancer therapy, as the DNA is relatively exposed during cell division and hence susceptible to damage by drugs or radiation. This fact is made use of in cancer treatment: by a process known as debulking, a significant mass of the tumor is removed, which pushes a significant number of the remaining tumor cells from G0 to G1 phase (due to increased availability of nutrients, oxygen, growth factors, etc.). Radiation or chemotherapy following the debulking procedure kills these cells which have newly entered the cell cycle. The fastest cycling mammalian cells in culture, crypt cells in the intestinal epithelium, have a cycle time as short as 9 to 10 hours. Stem cells in resting mouse skin may have a cycle time of more than 200 hours. Most of this difference is due to the varying length of G1, the most variable phase of the cycle. M and S do not vary much. In general, cells are most radiosensitive in late M and G2 phases and most resistant in late S phase. For cells with a longer cell cycle time and a significantly long G1 phase, there is a second peak of resistance late in G1. The pattern of resistance and sensitivity correlates with the level of sulfhydryl compounds in the cell. Sulfhydryls are natural substances that protect cells from radiation damage and tend to be at their highest levels in S and at their lowest near mitosis. Homologous recombination (HR) is an accurate process for repairing DNA double-strand breaks. HR is nearly absent in G1 phase, is most active in S phase, and declines in G2/M. 
Non-homologous end joining, a less accurate and more mutagenic process for repairing double strand breaks, is active throughout the cell cycle. Cell cycle evolution Evolution of the genome The cell cycle must duplicate all cellular constituents and equally partition them into two daughter cells. Many constituents, such as proteins and ribosomes, are produced continuously throughout the cell cycle (except during M-phase). However, the chromosomes and other associated elements like MTOCs, are duplicated just once during the cell cycle. A central component of the cell cycle is its ability to coordinate the continuous and periodic duplications of different cellular elements, which evolved with the formation of the genome. The pre-cellular environment contained functional and self-replicating RNAs. All RNA concentrations depended on the concentrations of other RNAs that might be helping or hindering the gathering of resources. In this environment, growth was simply the continuous production of RNAs. These pre-cellular structures would have had to contend with parasitic RNAs, issues of inheritance, and copy-number control of specific RNAs. Partitioning "genomic" RNA from "functional" RNA helped solve these problems. The fusion of multiple RNAs into a genome gave a template from which functional RNAs were cleaved. Now, parasitic RNAs would have to incorporate themselves into the genome, a much greater barrier, in order to survive. Controlling the copy number of genomic RNA also allowed RNA concentration to be determined through synthesis rates and RNA half-lives, instead of competition. Separating the duplication of genomic RNAs from the generation of functional RNAs allowed for much greater duplication fidelity of genomic RNAs without compromising the production of functional RNAs. Finally, the replacement of genomic RNA with DNA, which is a more stable molecule, allowed for larger genomes. The transition from self-catalysis enzyme synthesis to genome-directed enzyme synthesis was a critical step in cell evolution, and had lasting implications on the cell cycle, which must regulate functional synthesis and genomic duplication in very different ways. Cyclin-dependent kinase and cyclin evolution Cell-cycle progression is controlled by the oscillating concentrations of different cyclins and the resulting molecular interactions from the various cyclin-dependent kinases (CDKs). In yeast, just one CDK (Cdc28 in S. cerevisiae and Cdc2 in S. pombe) controls the cell cycle. However, in animals, whole families of CDKs have evolved. Cdk1 controls entry to mitosis and Cdk2, Cdk4, and Cdk6 regulate entry into S phase. Despite the evolution of the CDK family in animals, these proteins have related or redundant functions. For example, cdk2 cdk4 cdk6 triple knockout mice cells can still progress through the basic cell cycle. cdk1 knockouts are lethal, which suggests an ancestral CDK1-type kinase ultimately controlling the cell cycle. Arabidopsis thaliana has a Cdk1 homolog called CDKA;1, however cdka;1 A. thaliana mutants are still viable, running counter to the opisthokont pattern of CDK1-type kinases as essential regulators controlling the cell cycle. Plants also have a unique group of B-type CDKs, whose functions may range from development-specific functions to major players in mitotic regulation. G1/S checkpoint evolution The G1/S checkpoint is the point at which the cell commits to division through the cell cycle. Complex regulatory networks lead to the G1/S transition decision. 
Across opisthokonts, there are both highly diverged protein sequences as well as strikingly similar network topologies. Entry into S-phase in both yeast and animals is controlled by the levels of two opposing regulators. The networks regulating these transcription factors are double-negative feedback loops and positive feedback loops in both yeast and animals. Additional regulation of the regulatory network for the G1/S checkpoint in yeast and animals includes the phosphorylation/de-phosphorylation of CDK-cyclin complexes. The sum of these regulatory networks creates a hysteretic and bistable scheme, despite the specific proteins being highly diverged. For yeast, Whi5 must be suppressed by Cln3 phosphorylation for SBF to be expressed, while in animals Rb must be suppressed by the Cdk4/6-cyclin D complex for E2F to be expressed. Both Rb and Whi5 inhibit transcript through the recruitment of histone deacetylase proteins to promoters. Both proteins additionally have multiple CDK phosphorylation sites through which they are inhibited. However, these proteins share no sequence similarity. Studies in A. thaliana extend our knowledge of the G1/S transition across eukaryotes as a whole. Plants also share a number of conserved network features with opisthokonts, and many plant regulators have direct animal homologs. For example, plants also need to suppress Rb for E2F translation in the network. These conserved elements of the plant and animal cell cycles may be ancestral in eukaryotes. While yeast share a conserved network topology with plants and animals, the highly diverged nature of yeast regulators suggests possible rapid evolution along the yeast lineage. See also Cellular model Eukaryotic DNA replication Mitotic catastrophe Origin recognition complex Retinoblastoma protein Synchronous culture – synchronization of cell cultures Wee1 References Further reading External links David Morgan's Seminar: Controlling the Cell Cycle The cell cycle & Cell death Transcriptional program of the cell cycle: high-resolution timing Cell cycle and metabolic cycle regulated transcription in yeast Cell Cycle Animation 1Lec.com Cell Cycle Fucci:Using GFP to visualize the cell-cycle Science Creative Quarterly's overview of the cell cycle KEGG – Human Cell Cycle Cellular senescence
Cell cycle
[ "Biology" ]
8,161
[ "Senescence", "Cellular senescence", "Cell cycle", "Cellular processes" ]
7,284
https://en.wikipedia.org/wiki/Centromere
The centromere links a pair of sister chromatids together during cell division. This constricted region of chromosome connects the sister chromatids, creating a short arm (p) and a long arm (q) on the chromatids. During mitosis, spindle fibers attach to the centromere via the kinetochore. The physical role of the centromere is to act as the site of assembly of the kinetochores – a highly complex multiprotein structure that is responsible for the actual events of chromosome segregation – i.e. binding microtubules and signaling to the cell cycle machinery when all chromosomes have adopted correct attachments to the spindle, so that it is safe for cell division to proceed to completion and for cells to enter anaphase. There are, broadly speaking, two types of centromeres. "Point centromeres" bind to specific proteins that recognize particular DNA sequences with high efficiency. Any piece of DNA with the point centromere DNA sequence on it will typically form a centromere if present in the appropriate species. The best characterized point centromeres are those of the budding yeast, Saccharomyces cerevisiae. "Regional centromeres" is the term coined to describe most centromeres, which typically form on regions of preferred DNA sequence, but which can form on other DNA sequences as well. The signal for formation of a regional centromere appears to be epigenetic. Most organisms, ranging from the fission yeast Schizosaccharomyces pombe to humans, have regional centromeres. Regarding mitotic chromosome structure, centromeres represent a constricted region of the chromosome (often referred to as the primary constriction) where two identical sister chromatids are most closely in contact. When cells enter mitosis, the sister chromatids (the two copies of each chromosomal DNA molecule resulting from DNA replication in chromatin form) are linked along their length by the action of the cohesin complex. It is now believed that this complex is mostly released from chromosome arms during prophase, so that by the time the chromosomes line up at the mid-plane of the mitotic spindle (also known as the metaphase plate), the last place where they are linked with one another is in the chromatin in and around the centromere. Position In humans, centromere positions define the chromosomal karyotype, in which each chromosome has two arms, p (the shorter of the two) and q (the longer). The short arm 'p' is reportedly named for the French word "petit" meaning 'small'. The position of the centromere relative to any particular linear chromosome is used to classify chromosomes as metacentric, submetacentric, acrocentric, telocentric, or holocentric. Metacentric Metacentric means that the centromere is positioned midway between the chromosome ends, resulting in the arms being approximately equal in length. When the centromeres are metacentric, the chromosomes appear to be "x-shaped." Submetacentric Submetacentric means that the centromere is positioned below the middle, with one chromosome arm shorter than the other, often resulting in an L shape. Acrocentric An acrocentric chromosome's centromere is situated so that one of the chromosome arms is much shorter than the other. The "acro-" in acrocentric refers to the Greek word for "peak." The human genome has six acrocentric chromosomes, including five autosomal chromosomes (13, 14, 15, 21, 22) and the Y chromosome. Short acrocentric p-arms contain little genetic material and can be translocated without significant harm, as in a balanced Robertsonian translocation. 
In addition to some protein-coding genes, human acrocentric p-arms also contain nucleolus organizer regions (NORs), from which ribosomal RNA is transcribed. However, a proportion of acrocentric p-arms in cell lines and tissues from normal human donors do not contain detectable NORs. The domestic horse genome includes one metacentric chromosome that is homologous to two acrocentric chromosomes in the conspecific but undomesticated Przewalski's horse. This may reflect either fixation of a balanced Robertsonian translocation in domestic horses or, conversely, fixation of the fission of one metacentric chromosome into two acrocentric chromosomes in Przewalski's horses. A similar situation exists between the human and great ape genomes, with a reduction of two acrocentric chromosomes in the great apes to one metacentric chromosome in humans (see aneuploidy and the human chromosome 2). Many diseases that result from unbalanced translocations more frequently involve acrocentric chromosomes than non-acrocentric chromosomes. Acrocentric chromosomes are usually located in and around the nucleolus. As a result, these chromosomes tend to be less densely packed than chromosomes in the nuclear periphery. Consistently, chromosomal regions that are less densely packed are also more prone to chromosomal translocations in cancers. Telocentric Telocentric chromosomes have a centromere at one end of the chromosome and therefore exhibit only one arm at the cytological (microscopic) level. They are not present in humans but can form through cellular chromosomal errors. Telocentric chromosomes occur naturally in many species, such as the house mouse, in which all chromosomes except the Y are telocentric. Subtelocentric Subtelocentric chromosomes' centromeres are located between the middle and the end of the chromosomes, but reside closer to the end. Centromere types Acentric An acentric chromosome is a fragment of a chromosome that lacks a centromere. Since centromeres are the attachment point for spindle fibers, acentric fragments are not evenly distributed to daughter cells during cell division. As a result, a daughter cell will lack the acentric fragment and deleterious consequences could occur. Chromosome-breaking events can also generate acentric chromosomes or acentric fragments. Dicentric A dicentric chromosome is an abnormal chromosome with two centromeres, which can be unstable through cell divisions. It can form through translocation between or fusion of two chromosome segments, each with a centromere. Some rearrangements produce both dicentric chromosomes and acentric fragments which cannot attach to spindles at mitosis. The formation of dicentric chromosomes has been attributed to genetic processes, such as Robertsonian translocation and paracentric inversion. Dicentric chromosomes can have a variety of fates, including mitotic stability. In some cases, their stability comes from inactivation of one of the two centromeres to make a functionally monocentric chromosome capable of normal transmission to daughter cells during cell division. For example, human chromosome 2, which is believed to be the result of a Robertsonian translocation at some point in the evolution between the great apes and Homo, has a second, vestigial, centromere near the middle of its long arm. Monocentric A monocentric chromosome is a chromosome that has only one centromere and forms a narrow constriction. 
Monocentric centromeres are the most common structure on highly repetitive DNA in plants and animals. Holocentric Unlike monocentric chromosomes, holocentric chromosomes have no distinct primary constriction when viewed at mitosis. Instead, spindle fibers attach along almost the entire (Greek: holo-) length of the chromosome. In holocentric chromosomes, centromeric proteins such as CENP-A (CenH3) are spread over the whole chromosome. The nematode Caenorhabditis elegans is a well-known example of an organism with holocentric chromosomes, but this type of centromere can be found in various species, plants and animals, across eukaryotes. Holocentromeres are actually composed of multiple distributed centromere units that form a line-like structure along the chromosomes during mitosis. Alternative or nonconventional strategies are deployed at meiosis to achieve the homologous chromosome pairing and segregation needed to produce viable gametes or gametophytes for sexual reproduction. Different types of holocentromeres exist in different species, namely with or without centromeric repetitive DNA sequences and with or without CenH3. Holocentricity has evolved at least 13 times independently in various green algae, protozoans, invertebrates, and different plant families. Contrary to monocentric species, where acentric fragments usually become lost during cell division, the breakage of holocentric chromosomes creates fragments with normal spindle fiber attachment sites. Because of this, organisms with holocentric chromosomes can more rapidly evolve karyotype variation, as they are able to heal fragmented chromosomes through the subsequent addition of telomere caps at the sites of breakage. Polycentric Polycentric chromosomes have several kinetochore clusters, i.e. centromeres. The term overlaps partially with "holocentric", but "polycentric" is clearly preferred when discussing defectively formed monocentric chromosomes. There is some actual ambiguity as well, as there is no clear line dividing up the transition from kinetochores covering the whole chromosome to distinct clusters. In other words, the difference between "the whole chromosome is a centromere" and "the chromosome has no centromere" is hazy and usage varies. Beyond "polycentricity" being used more about defects, there is no clear preference in other topics such as evolutionary origin or kinetochore distribution and detailed structure (e.g. as seen in tagging or genome assembly analysis). Even clearly distinct clusters of kinetochore proteins do not necessarily produce more than one constriction: "metapolycentric" chromosomes feature one elongated constriction of the chromosome, joining a longer segment which is still visibly shorter than the chromatids. Metapolycentric chromosomes may be a step in the emergence and suppression of centromere drive, a type of meiotic drive that disrupts parity by monocentric centromeres growing additional kinetochore proteins to gain an advantage during meiosis. Human chromosomes Based on the micrographic characteristics of size, position of the centromere and sometimes the presence of a chromosomal satellite, the human chromosomes are classified into several groups. Sequence There are two types of centromeres. In regional centromeres, DNA sequences contribute to but do not define function. Regional centromeres contain large amounts of DNA and are often packaged into heterochromatin. In most eukaryotes, the centromere's DNA sequence consists of large arrays of repetitive DNA (e.g. 
satellite DNA) where the sequence within individual repeat elements is similar but not identical. In humans, the primary centromeric repeat unit is called α-satellite (or alphoid), although a number of other sequence types are found in this region. Centromere satellites are hypothesized to evolve by a process called layered expansion. They evolve rapidly between species, and analyses in wild mice show that satellite copy number and heterogeneity relates to population origins and subspecies. Additionally, satellite sequences may be affected by inbreeding. Point centromeres are smaller and more compact. DNA sequences are both necessary and sufficient to specify centromere identity and function in organisms with point centromeres. In budding yeasts, the centromere region is relatively small (about 125 bp DNA) and contains two highly conserved DNA sequences that serve as binding sites for essential kinetochore proteins. Inheritance Since centromeric DNA sequence is not the key determinant of centromeric identity in metazoans, it is thought that epigenetic inheritance plays a major role in specifying the centromere. The daughter chromosomes will assemble centromeres in the same place as the parent chromosome, independent of sequence. It has been proposed that histone H3 variant CENP-A (Centromere Protein A) is the epigenetic mark of the centromere. The question arises whether there must be still some original way in which the centromere is specified, even if it is subsequently propagated epigenetically. If the centromere is inherited epigenetically from one generation to the next, the problem is pushed back to the origin of the first metazoans. On the other hand, thanks to comparisons of the centromeres in the X chromosomes, epigenetic and structural variations have been seen in these regions. In addition, a recent assembly of the human genome has detected a possible mechanism of how pericentromeric and centromeric structures evolve, through a layered expansion model for αSat sequences. This model proposes that different αSat sequence repeats emerge periodically and expand within an active vector, displacing old sequences, and becoming the site of kinetochore assembly. The αSat can originate from the same, or from different vectors. As this process is repeated over time, the layers that flank the active centromere shrink and deteriorate. This process raises questions about the relationship between this dynamic evolutionary process and the position of the centromere. Structure The centromeric DNA is normally in a heterochromatin state, which is essential for the recruitment of the cohesin complex that mediates sister chromatid cohesion after DNA replication as well as coordinating sister chromatid separation during anaphase. In this chromatin, the normal histone H3 is replaced with a centromere-specific variant, CENP-A in humans. The presence of CENP-A is believed to be important for the assembly of the kinetochore on the centromere. CENP-C has been shown to localise almost exclusively to these regions of CENP-A associated chromatin. In human cells, the histones are found to be most enriched for H4K20me3 and H3K9me3 which are known heterochromatic modifications. In Drosophila, Islands of retroelements are major components of the centromeres. In the yeast Schizosaccharomyces pombe (and probably in other eukaryotes), the formation of centromeric heterochromatin is connected to RNAi. 
In nematodes such as Caenorhabditis elegans, some plants, and the insect orders Lepidoptera and Hemiptera, chromosomes are "holocentric", indicating that there is not a primary site of microtubule attachments or a primary constriction, and a "diffuse" kinetochore assembles along the entire length of the chromosome. Centromeric aberrations In rare cases, neocentromeres can form at new sites on a chromosome as a result of a repositioning of the centromere. This phenomenon is most well known from human clinical studies and there are currently over 90 known human neocentromeres identified on 20 different chromosomes. The formation of a neocentromere must be coupled with the inactivation of the previous centromere, since chromosomes with two functional centromeres (Dicentric chromosome) will result in chromosome breakage during mitosis. In some unusual cases human neocentromeres have been observed to form spontaneously on fragmented chromosomes. Some of these new positions were originally euchromatic and lack alpha satellite DNA altogether. Neocentromeres lack the repetitive structure seen in normal centromeres which suggest that centromere formation is mainly controlled epigenetically. Over time a neocentromere can accumulate repetitive elements and mature into what is known as an evolutionary new centromere. There are several well known examples in primate chromosomes where the centromere position is different from the human centromere of the same chromosome and is thought to be evolutionary new centromeres. Centromere repositioning and the formation of evolutionary new centromeres has been suggested to be a mechanism of speciation. Centromere proteins are also the autoantigenic target for some anti-nuclear antibodies, such as anti-centromere antibodies. Dysfunction and disease It has been known that centromere misregulation contributes to mis-segregation of chromosomes, which is strongly related to cancer and miscarriage. Notably, overexpression of many centromere genes have been linked to cancer malignant phenotypes. Overexpression of these centromere genes can increase genomic instability in cancers. Elevated genomic instability on one hand relates to malignant phenotypes; on the other hand, it makes the tumor cells more vulnerable to specific adjuvant therapies such as certain chemotherapies and radiotherapy. Instability of centromere repetitive DNA was recently shown in cancer and aging. Repair of centromeric DNA When DNA breaks occur at centromeres in the G1 phase of the cell cycle, the cells are able to recruit the homologous recombinational repair machinery to the damaged site, even in the absence of a sister chromatid. It appears that homologous recombinational repair can occur at centromeric breaks throughout the cell cycle in order to prevent the activation of inaccurate mutagenic DNA repair pathways and to preserve centromeric integrity. Etymology and pronunciation The word centromere () uses combining forms of centro- and -mere, yielding "central part", describing the centromere's location at the center of the chromosome. See also Telomere Chromatid Diploid Monopolin References Further reading External links Chromosomes DNA replication
Centromere
[ "Biology" ]
3,624
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
7,296
https://en.wikipedia.org/wiki/Cardiac%20glycoside
Cardiac glycosides are a class of organic compounds that increase the output force of the heart and decrease its rate of contractions by inhibiting the cellular sodium-potassium ATPase pump. Their beneficial medical uses include treatments for congestive heart failure and cardiac arrhythmias; however, their relative toxicity prevents them from being widely used. Most commonly found as secondary metabolites in several plants such as foxglove plants and milkweed plants, these compounds nevertheless have a diverse range of biochemical effects regarding cardiac cell function and have also been suggested for use in cancer treatment. Classification General structure The general structure of a cardiac glycoside consists of a steroid molecule attached to a sugar (glycoside) and an R group. The steroid nucleus consists of four fused rings to which other functional groups such as methyl, hydroxyl, and aldehyde groups can be attached to influence the overall molecule's biological activity. Cardiac glycosides also vary in the groups attached at either end of the steroid. Specifically, different sugar groups attached at the sugar end of the steroid can alter the molecule's solubility and kinetics; however, the lactone moiety at the R group end only serves a structural function. In particular, the structure of the ring attached at the R end of the molecule allows it to be classified as either a cardenolide or bufadienolide. Cardenolides differ from bufadienolides due to the presence of an "enolide," a five-membered ring with a single double bond, at the lactone end. Bufadienolides, on the other hand, contain a "dienolide," a six-membered ring with two double bonds, at the lactone end. While compounds of both groups can be used to influence the cardiac output of the heart, cardenolides are more commonly used medicinally, primarily due to the widespread availability of the plants from which they are derived. Classification Cardiac glycosides can be more specifically categorized based on the plant they are derived from, as in the following list. For example, cardenolides have been primarily derived from the foxglove plants Digitalis purpurea and Digitalis lanata, while bufadienolides have been derived from the venom of the cane toad Rhinella marina (formerly known as Bufo marinus), from which they receive the "bufo" portion of their name. Below is a list of organisms from which cardiac glycosides can be derived. Plant cardenolides Convallaria majalis (Lily of the Valley): convallatoxin Antiaris toxicaria (upas tree): antiarin Strophanthus kombe (Strophanthus vine): ouabain (g-strophanthin) and other strophanthins Digitalis lanata and Digitalis purpurea (Woolly and purple foxglove): digoxin, digitoxin Nerium oleander (oleander tree): oleandrin Asclepias sp. (milkweed): asclepin, calotropin, uzarin, calactin, coroglucigenin, uzarigenin, oleandrin Adonis vernalis (Spring pheasant's eye): adonitoxin Kalanchoe daigremontiana and other Kalanchoe species: daigremontianin Erysimum cheiranthoides (wormseed wallflower) and other Erysimum species Cerbera odollam (suicide tree): cerberin Periploca sepium: periplocin Other cardenolides some species of Chrysolina beetles, including Chrysolina coerulans, have cardiac glycosides (including Xylose) in their defensive glands. 
Bufadienolides Leonurus cardiaca (motherwort): scillarenin Drimia maritima (squill): proscillaridine A Rhinella marina (cane toad): various bufadienolides – see also toad venom Kalanchoe daigremontiana and other Kalanchoe species: daigremontianin and others Helleborus'' spp. (hellebore) Mechanism of action Cardiac glycosides affect the sodium-potassium ATPase pump in cardiac muscle cells to alter their function. Normally, these sodium-potassium pumps move potassium ions in and sodium ions out. Cardiac glycosides, however, inhibit this pump by stabilizing it in the E2-P transition state, so that sodium cannot be extruded: intracellular sodium concentration therefore increases. With regard to potassium ion movement, because both cardiac glycosides and potassium compete for binding to the ATPase pump, changes in extracellular potassium concentration can potentially lead to altered drug efficacy. Nevertheless, by carefully controlling the dosage, such adverse effects can be avoided. Continuing on with the mechanism, raised intracellular sodium levels inhibit the function of a second membrane ion exchanger, NCX, which is responsible for pumping calcium ions out of the cell and sodium ions in at a ratio of /. Thus, calcium ions are also not extruded and will begin to build up inside the cell as well. The disrupted calcium homeostasis and increased cytoplasmic calcium concentrations cause increased calcium uptake into the sarcoplasmic reticulum (SR) via the SERCA2 transporter. Raised calcium stores in the SR allow for greater calcium release on stimulation, so the myocyte can achieve faster and more powerful contraction by cross-bridge cycling. The refractory period of the AV node is increased, so cardiac glycosides also function to decrease heart rate. For example, the ingestion of digoxin leads to increased cardiac output and decreased heart rate without significant changes in blood pressure; this quality allows it to be widely used medicinally in the treatment of cardiac arrhythmias. Non-cardiac uses Cardiac glycosides were identified as senolytics: they can selectively eliminate senescent cells which are more sensitive to the ATPase-inhibiting action due to cell membrane changes. Clinical significance Cardiac glycosides have long served as the main medical treatment to congestive heart failure and cardiac arrhythmia, due to their effects of increasing the force of muscle contraction while reducing heart rate. Heart failure is characterized by an inability to pump enough blood to support the body, possibly due to a decrease in the volume of the blood or its contractile force. Treatments for the condition thus focus on lowering blood pressure, so that the heart does not have to exert as much force to pump the blood, or directly increasing the heart's contractile force, so that the heart can overcome the higher blood pressure. Cardiac glycosides, such as the commonly used digoxin and digitoxin, deal with the latter, due to their positive inotropic activity. On the other hand, cardiac arrhythmia are changes in heart rate, whether faster (tachycardia) or slower (bradycardia). Medicinal treatments for this condition work primarily to counteract tachycardia or atrial fibrillation by slowing down heart rate, as done by cardiac glycosides. Nevertheless, due to questions of toxicity and dosage, cardiac glycosides have been replaced with synthetic drugs such as ACE inhibitors and beta blockers and are no longer used as the primary medical treatment for such conditions. 
Depending on the severity of the condition, though, they may still be used in conjunction with other treatments. Toxicity From ancient times, humans have used cardiac-glycoside-containing plants and their crude extracts as arrow coatings, homicidal or suicidal aids, rat poisons, heart tonics, diuretics and emetics, primarily due to the toxic nature of these compounds. Thus, though cardiac glycosides have been used for their medicinal function, their toxicity must also be recognized. For example, in 2008 US poison centers reported 2,632 cases of digoxin toxicity, and 17 cases of digoxin-related deaths. Because cardiac glycosides affect the cardiovascular, neurologic, and gastrointestinal systems, these three systems can be used to determine the effects of toxicity. The effect of these compounds on the cardiovascular system presents a reason for concern, as they can directly affect the function of the heart through their inotropic and chronotropic effects. In terms of inotropic activity, excessive cardiac glycoside dosage results in cardiac contractions with greater force, as further calcium is released from the SR of cardiac muscle cells. Toxicity also results in changes to heart chronotropic activity, resulting in multiple kinds of dysrhythmia and potentially fatal ventricular tachycardia. These dysrhythmias are an effect of an influx of sodium and decrease of resting membrane potential threshold in cardiac muscle cells. When taken beyond a narrow dosage range specific to each particular cardiac glycoside, these compounds can rapidly become dangerous. In sum, they interfere with fundamental processes that regulate membrane potential. They are toxic to the heart, the brain, and the gut at doses that are not difficult to reach. In the heart, the most common negative effect is premature ventricular contraction. References External links Plant toxins
Cardiac glycoside
[ "Chemistry" ]
1,959
[ "Chemical ecology", "Plant toxins" ]
7,304
https://en.wikipedia.org/wiki/Coordination%20complex
A coordination complex is a chemical compound consisting of a central atom or ion, which is usually metallic and is called the coordination centre, and a surrounding array of bound molecules or ions, that are in turn known as ligands or complexing agents. Many metal-containing compounds, especially those that include transition metals (elements like titanium that belong to the periodic table's d-block), are coordination complexes. Nomenclature and terminology Coordination complexes are so pervasive that their structures and reactions are described in many ways, sometimes confusingly. The atom within a ligand that is bonded to the central metal atom or ion is called the donor atom. In a typical complex, a metal ion is bonded to several donor atoms, which can be the same or different. A polydentate (multiple bonded) ligand is a molecule or ion that bonds to the central atom through several of the ligand's atoms; ligands with 2, 3, 4 or even 6 bonds to the central atom are common. These complexes are called chelate complexes; the formation of such complexes is called chelation, complexation, and coordination. The central atom or ion, together with all ligands, comprise the coordination sphere. The central atom or ion and the donor atoms comprise the first coordination sphere. Coordination refers to the "coordinate covalent bonds" (dipolar bonds) between the ligands and the central atom. Originally, a complex implied a reversible association of molecules, atoms, or ions through such weak chemical bonds. As applied to coordination chemistry, this meaning has evolved. Some metal complexes are formed virtually irreversibly and many are bound together by bonds that are quite strong. The number of donor atoms attached to the central atom or ion is called the coordination number. The most common coordination numbers are 2, 4, and especially 6. A hydrated ion is one kind of a complex ion (or simply a complex), a species formed between a central metal ion and one or more surrounding ligands, molecules or ions that contain at least one lone pair of electrons. If all the ligands are monodentate, then the number of donor atoms equals the number of ligands. For example, the cobalt(II) hexahydrate ion or the hexaaquacobalt(II) ion [Co(H2O)6]2+ is a hydrated-complex ion that consists of six water molecules attached to a metal ion Co2+. The oxidation state and the coordination number reflect the number of bonds formed between the metal ion and the ligands in the complex ion. However, the coordination number of [Pt(en)2]2+ is 4 (rather than 2) since it has two bidentate ligands, which contain four donor atoms in total. Any donor atom will give a pair of electrons. There are some donor atoms or groups which can offer more than one pair of electrons. Such are called bidentate (offers two pairs of electrons) or polydentate (offers more than two pairs of electrons). In some cases an atom or a group offers a pair of electrons to two similar or different central metal atoms or acceptors—by division of the electron pair—into a three-center two-electron bond. These are called bridging ligands. History Coordination complexes have been known since the beginning of modern chemistry. Early well-known coordination complexes include dyes such as Prussian blue. Their properties were first well understood in the late 1800s, following the 1869 work of Christian Wilhelm Blomstrand. Blomstrand developed what has come to be known as the complex ion chain theory.
In considering metal amine complexes, he theorized that the ammonia molecules compensated for the charge of the ion by forming chains of the type [(NH3)X]X+, where X is the coordination number of the metal ion. He compared his theoretical ammonia chains to hydrocarbons of the form (CH2)X. Following this theory, Danish scientist Sophus Mads Jørgensen made improvements to it. In his version of the theory, Jørgensen claimed that when a molecule dissociates in a solution there were two possible outcomes: the ions would bind via the ammonia chains Blomstrand had described or the ions would bind directly to the metal. It was not until 1893 that the most widely accepted version of the theory today was published by Alfred Werner. Werner's work included two important changes to the Blomstrand theory. The first was that Werner described the two possibilities in terms of location in the coordination sphere. He claimed that if the ions were to form a chain, this would occur outside of the coordination sphere while the ions that bound directly to the metal would do so within the coordination sphere. In one of his most important discoveries however Werner disproved the majority of the chain theory. Werner discovered the spatial arrangements of the ligands that were involved in the formation of the complex hexacoordinate cobalt. His theory allows one to understand the difference between a coordinated ligand and a charge balancing ion in a compound, for example the chloride ion in the cobaltammine chlorides and to explain many of the previously inexplicable isomers. In 1911, Werner first resolved the coordination complex hexol into optical isomers, overthrowing the theory that only carbon compounds could possess chirality. Structures The ions or molecules surrounding the central atom are called ligands. Ligands are classified as L or X (or a combination thereof), depending on how many electrons they provide for the bond between ligand and central atom. L ligands provide two electrons from a lone electron pair, resulting in a coordinate covalent bond. X ligands provide one electron, with the central atom providing the other electron, thus forming a regular covalent bond. The ligands are said to be coordinated to the atom. For alkenes, the pi bonds can coordinate to metal atoms. An example is ethylene in the complex (Zeise's salt). Geometry In coordination chemistry, a structure is first described by its coordination number, the number of ligands attached to the metal (more specifically, the number of donor atoms). Usually one can count the ligands attached, but sometimes even the counting can become ambiguous. Coordination numbers are normally between two and nine, but large numbers of ligands are not uncommon for the lanthanides and actinides. The number of bonds depends on the size, charge, and electron configuration of the metal ion and the ligands. Metal ions may have more than one coordination number. Typically the chemistry of transition metal complexes is dominated by interactions between s and p molecular orbitals of the donor-atoms in the ligands and the d orbitals of the metal ions. The s, p, and d orbitals of the metal can accommodate 18 electrons (see 18-Electron rule). The maximum coordination number for a certain metal is thus related to the electronic configuration of the metal ion (to be more specific, the number of empty orbitals) and to the ratio of the size of the ligands and the metal ion. Large metals and small ligands lead to high coordination numbers, e.g. . 
Small metals with large ligands lead to low coordination numbers, e.g. . Due to their large size, lanthanides, actinides, and early transition metals tend to have high coordination numbers. Most structures follow the points-on-a-sphere pattern (or, as if the central atom were in the middle of a polyhedron where the corners of that shape are the locations of the ligands), where orbital overlap (between ligand and metal orbitals) and ligand-ligand repulsions tend to lead to certain regular geometries. The most observed geometries are listed below, but there are many cases that deviate from a regular geometry, e.g. due to the use of ligands of diverse types (which results in irregular bond lengths; the coordination atoms do not follow a points-on-a-sphere pattern), due to the size of ligands, or due to electronic effects (see, e.g., Jahn–Teller distortion): Linear for two-coordination Trigonal planar for three-coordination Tetrahedral or square planar for four-coordination Trigonal bipyramidal for five-coordination Octahedral for six-coordination Pentagonal bipyramidal for seven-coordination Square antiprismatic for eight-coordination Tricapped trigonal prismatic for nine-coordination The idealized descriptions of 5-, 7-, 8-, and 9- coordination are often indistinct geometrically from alternative structures with slightly differing L-M-L (ligand-metal-ligand) angles, e.g. the difference between square pyramidal and trigonal bipyramidal structures. Square pyramidal for five-coordination Capped octahedral or capped trigonal prismatic for seven-coordination Dodecahedral or bicapped trigonal prismatic for eight-coordination Capped square antiprismatic for nine-coordination To distinguish between the alternative coordinations for five-coordinated complexes, the τ geometry index was invented by Addison et al. This index depends on angles by the coordination center and changes between 0 for the square pyramidal to 1 for trigonal bipyramidal structures, allowing to classify the cases in between. This system was later extended to four-coordinated complexes by Houser et al. and also Okuniewski et al. In systems with low d electron count, due to special electronic effects such as (second-order) Jahn–Teller stabilization, certain geometries (in which the coordination atoms do not follow a points-on-a-sphere pattern) are stabilized relative to the other possibilities, e.g. for some compounds the trigonal prismatic geometry is stabilized relative to octahedral structures for six-coordination. Bent for two-coordination Trigonal pyramidal for three-coordination Trigonal prismatic for six-coordination Isomerism The arrangement of the ligands is fixed for a given complex, but in some cases it is mutable by a reaction that forms another stable isomer. There exist many kinds of isomerism in coordination complexes, just as in many other compounds. Stereoisomerism Stereoisomerism occurs with the same bonds in distinct orientations. Stereoisomerism can be further classified into: Cis–trans isomerism and facial–meridional isomerism Cis–trans isomerism occurs in octahedral and square planar complexes (but not tetrahedral). When two ligands are adjacent they are said to be cis, when opposite each other, trans. When three identical ligands occupy one face of an octahedron, the isomer is said to be facial, or fac. In a fac isomer, any two identical ligands are adjacent or cis to each other. If these three ligands and the metal ion are in one plane, the isomer is said to be meridional, or mer. 
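Returning to the geometry indices described earlier in this section: the τ5 and τ4 indices reduce to simple formulas over the two largest ligand-metal-ligand angles. Below is a minimal Python sketch of those calculations; the function names are illustrative, and the expressions used are the commonly cited Addison τ5 and Houser τ4 forms rather than code from any particular library.

```python
def tau5(beta: float, alpha: float) -> float:
    """Addison tau-5 index for five-coordinate complexes.

    beta and alpha are the two largest L-M-L angles in degrees (beta >= alpha).
    Returns 0 for an ideal square pyramid and 1 for an ideal trigonal bipyramid.
    """
    return (beta - alpha) / 60.0


def tau4(alpha: float, beta: float) -> float:
    """Houser tau-4 index for four-coordinate complexes.

    alpha and beta are the two largest L-M-L angles in degrees.
    Returns 0 for ideal square planar and 1 for ideal tetrahedral geometry.
    """
    return (360.0 - (alpha + beta)) / 141.0


# Ideal limiting cases:
print(tau5(180.0, 180.0))   # 0.0  -> square pyramidal
print(tau5(180.0, 120.0))   # 1.0  -> trigonal bipyramidal
print(tau4(180.0, 180.0))   # 0.0  -> square planar
print(tau4(109.5, 109.5))   # ~1.0 -> tetrahedral
```

Intermediate values of either index classify the distorted cases that fall between the two limiting geometries.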
A mer isomer can be considered as a combination of a trans and a cis, since it contains both trans and cis pairs of identical ligands. Optical isomerism Optical isomerism occurs when a complex is not superimposable with its mirror image. It is so called because the two isomers are each optically active, that is, they rotate the plane of polarized light in opposite directions. In the first molecule shown, the symbol Λ (lambda) is used as a prefix to describe the left-handed propeller twist formed by three bidentate ligands. The second molecule is the mirror image of the first, with the symbol Δ (delta) as a prefix for the right-handed propeller twist. The third and fourth molecules are a similar pair of Λ and Δ isomers, in this case with two bidentate ligands and two identical monodentate ligands. Structural isomerism Structural isomerism occurs when the bonds are themselves different. Four types of structural isomerism are recognized: ionisation isomerism, solvate or hydrate isomerism, linkage isomerism and coordination isomerism. Ionisation isomerism – the isomers give different ions in solution although they have the same composition. This type of isomerism occurs when the counter ion of the complex is also a potential ligand. For example, pentaamminebromocobalt(III) sulphate is red violet and in solution gives a precipitate with barium chloride, confirming the presence of sulphate ion, while pentaamminesulphatecobalt(III) bromide is red and tests negative for sulphate ion in solution, but instead gives a precipitate of AgBr with silver nitrate. Solvate or hydrate isomerism – the isomers have the same composition but differ with respect to the number of molecules of solvent that serve as ligand vs simply occupying sites in the crystal. Examples: [Cr(H2O)6]Cl3 is violet colored, [CrCl(H2O)5]Cl2·H2O is blue-green, and [CrCl2(H2O)4]Cl·2H2O is dark green. See water of crystallization. Linkage isomerism occurs with ligands with more than one possible donor atom, known as ambidentate ligands. For example, nitrite can coordinate through O or N. One pair of nitrite linkage isomers has the structures [Co(NH3)5(NO2)]2+ (nitro isomer) and [Co(NH3)5(ONO)]2+ (nitrito isomer). Coordination isomerism occurs when both positive and negative ions of a salt are complex ions and the two isomers differ in the distribution of ligands between the cation and the anion. For example, [Co(NH3)6][Cr(CN)6] and [Cr(NH3)6][Co(CN)6]. Electronic properties Many of the properties of transition metal complexes are dictated by their electronic structures. The electronic structure can be described by a relatively ionic model that ascribes formal charges to the metals and ligands. This approach is the essence of crystal field theory (CFT). Crystal field theory, introduced by Hans Bethe in 1929, gives a quantum mechanically based attempt at understanding complexes. But crystal field theory treats all interactions in a complex as ionic and assumes that the ligands can be approximated by negative point charges. More sophisticated models embrace covalency, and this approach is described by ligand field theory (LFT) and Molecular orbital theory (MO). Ligand field theory, introduced in 1935 and built from molecular orbital theory, can handle a broader range of complexes and can explain complexes in which the interactions are covalent. The chemical applications of group theory can aid in the understanding of crystal or ligand field theory, by allowing simple, symmetry based solutions to the formal equations.
Chemists tend to employ the simplest model required to predict the properties of interest; for this reason, CFT has been a favorite for the discussions when possible. MO and LF theories are more complicated, but provide a more realistic perspective. The electronic configuration of the complexes gives them some important properties: Color of transition metal complexes Transition metal complexes often have spectacular colors caused by electronic transitions by the absorption of light. For this reason they are often applied as pigments. Most transitions that are related to colored metal complexes are either d–d transitions or charge transfer bands. In a d–d transition, an electron in a d orbital on the metal is excited by a photon to another d orbital of higher energy, therefore d–d transitions occur only for partially-filled d-orbital complexes (d1–9). For complexes having d0 or d10 configuration, charge transfer is still possible even though d–d transitions are not. A charge transfer band entails promotion of an electron from a metal-based orbital into an empty ligand-based orbital (metal-to-ligand charge transfer or MLCT). The converse also occurs: excitation of an electron in a ligand-based orbital into an empty metal-based orbital (ligand-to-metal charge transfer or LMCT). These phenomena can be observed with the aid of electronic spectroscopy; also known as UV-Vis. For simple compounds with high symmetry, the d–d transitions can be assigned using Tanabe–Sugano diagrams. These assignments are gaining increased support with computational chemistry. Colors of lanthanide complexes Superficially lanthanide complexes are similar to those of the transition metals in that some are colored. However, for the common Ln3+ ions (Ln = lanthanide) the colors are all pale, and hardly influenced by the nature of the ligand. The colors are due to 4f electron transitions. As the 4f orbitals in lanthanides are "buried" in the xenon core and shielded from the ligand by the 5s and 5p orbitals they are therefore not influenced by the ligands to any great extent leading to a much smaller crystal field splitting than in the transition metals. The absorption spectra of an Ln3+ ion approximates to that of the free ion where the electronic states are described by spin-orbit coupling. This contrasts to the transition metals where the ground state is split by the crystal field. Absorptions for Ln3+ are weak as electric dipole transitions are parity forbidden (Laporte forbidden) but can gain intensity due to the effect of a low-symmetry ligand field or mixing with higher electronic states (e.g. d orbitals). f-f absorption bands are extremely sharp which contrasts with those observed for transition metals which generally have broad bands. This can lead to extremely unusual effects, such as significant color changes under different forms of lighting. Magnetism Metal complexes that have unpaired electrons are paramagnetic. This can be due to an odd number of electrons overall, or to destabilization of electron-pairing. Thus, monomeric Ti(III) species have one "d-electron" and must be (para)magnetic, regardless of the geometry or the nature of the ligands. Ti(II), with two d-electrons, forms some complexes that have two unpaired electrons and others with none. This effect is illustrated by the compounds TiX2[(CH3)2PCH2CH2P(CH3)2]2: when X = Cl, the complex is paramagnetic (high-spin configuration), whereas when X = CH3, it is diamagnetic (low-spin configuration). 
Ligands provide an important means of adjusting the ground state properties. In bi- and polymetallic complexes, in which the individual centres have an odd number of electrons or that are high-spin, the situation is more complicated. If there is interaction (either direct or through ligand) between the two (or more) metal centres, the electrons may couple (antiferromagnetic coupling, resulting in a diamagnetic compound), or they may enhance each other (ferromagnetic coupling). When there is no interaction, the two (or more) individual metal centers behave as if in two separate molecules. Reactivity Complexes show a variety of possible reactivities: Electron transfers Electron transfer (ET) between metal ions can occur via two distinct mechanisms, inner and outer sphere electron transfers. In an inner sphere reaction, a bridging ligand serves as a conduit for ET. (Degenerate) ligand exchange One important indicator of reactivity is the rate of degenerate exchange of ligands. For example, the rate of interchange of coordinate water in [M(H2O)6]n+ complexes varies over 20 orders of magnitude. Complexes where the ligands are released and rebound rapidly are classified as labile. Such labile complexes can be quite stable thermodynamically. Typical labile metal complexes either have low-charge (Na+), electrons in d-orbitals that are antibonding with respect to the ligands (Zn2+), or lack covalency (Ln3+, where Ln is any lanthanide). The lability of a metal complex also depends on the high-spin vs. low-spin configurations when such is possible. Thus, high-spin Fe(II) and Co(III) form labile complexes, whereas low-spin analogues are inert. Cr(III) can exist only in the low-spin state (quartet), which is inert because of its high formal oxidation state, absence of electrons in orbitals that are M–L antibonding, plus some "ligand field stabilization" associated with the d3 configuration. Associative processes Complexes that have unfilled or half-filled orbitals are often capable of reacting with substrates. Most substrates have a singlet ground-state; that is, they have lone electron pairs (e.g., water, amines, ethers), so these substrates need an empty orbital to be able to react with a metal centre. Some substrates (e.g., molecular oxygen) have a triplet ground state, which results that metals with half-filled orbitals have a tendency to react with such substrates (it must be said that the dioxygen molecule also has lone pairs, so it is also capable to react as a 'normal' Lewis base). If the ligands around the metal are carefully chosen, the metal can aid in (stoichiometric or catalytic) transformations of molecules or be used as a sensor. Classification Metal complexes, also known as coordination compounds, include virtually all metal compounds. The study of "coordination chemistry" is the study of "inorganic chemistry" of all alkali and alkaline earth metals, transition metals, lanthanides, actinides, and metalloids. Thus, coordination chemistry is the chemistry of the majority of the periodic table. Metals and metal ions exist, in the condensed phases at least, only surrounded by ligands. The areas of coordination chemistry can be classified according to the nature of the ligands, in broad terms: Classical (or "Werner Complexes"): Ligands in classical coordination chemistry bind to metals, almost exclusively, via their lone pairs of electrons residing on the main-group atoms of the ligand. Typical ligands are H2O, NH3, Cl−, CN−, en. 
Some of the simplest members of such complexes are described in metal aquo complexes, metal ammine complexes, Examples: [Co(EDTA)]−, [Co(NH3)6]3+, [Fe(C2O4)3]3- Organometallic chemistry: Ligands are organic (alkenes, alkynes, alkyls) as well as "organic-like" ligands such as phosphines, hydride, and CO. Example: (C5H5)Fe(CO)2CH3 Bioinorganic chemistry: Ligands are those provided by nature, especially including the side chains of amino acids, and many cofactors such as porphyrins. Example: hemoglobin contains heme, a porphyrin complex of iron Example: chlorophyll contains a porphyrin complex of magnesium Many natural ligands are "classical" especially including water. Cluster chemistry: Ligands include all of the above as well as other metal ions or atoms as well. Example Ru3(CO)12 In some cases there are combinations of different fields: Example: [Fe4S4(Scysteinyl)4]2−, in which a cluster is embedded in a biologically active species. Mineralogy, materials science, and solid state chemistry – as they apply to metal ions – are subsets of coordination chemistry in the sense that the metals are surrounded by ligands. In many cases these ligands are oxides or sulfides, but the metals are coordinated nonetheless, and the principles and guidelines discussed below apply. In hydrates, at least some of the ligands are water molecules. It is true that the focus of mineralogy, materials science, and solid state chemistry differs from the usual focus of coordination or inorganic chemistry. The former are concerned primarily with polymeric structures, properties arising from a collective effects of many highly interconnected metals. In contrast, coordination chemistry focuses on reactivity and properties of complexes containing individual metal atoms or small ensembles of metal atoms. Nomenclature of coordination complexes The basic procedure for naming a complex is: When naming a complex ion, the ligands are named before the metal ion. The ligands' names are given in alphabetical order. Numerical prefixes do not affect the order. Multiple occurring monodentate ligands receive a prefix according to the number of occurrences: di-, tri-, tetra-, penta-, or hexa-. Multiple occurring polydentate ligands (e.g., ethylenediamine, oxalate) receive bis-, tris-, tetrakis-, etc. Anions end in o. This replaces the final 'e' when the anion ends with '-ide', '-ate' or '-ite', e.g. chloride becomes chlorido and sulfate becomes sulfato. Formerly, '-ide' was changed to '-o' (e.g. chloro and cyano), but this rule has been modified in the 2005 IUPAC recommendations and the correct forms for these ligands are now chlorido and cyanido. Neutral ligands are given their usual name, with some exceptions: NH3 becomes ammine; H2O becomes aqua or aquo; CO becomes carbonyl; NO becomes nitrosyl. Write the name of the central atom/ion. If the complex is an anion, the central atom's name will end in -ate, and its Latin name will be used if available (except for mercury). The oxidation state of the central atom is to be specified (when it is one of several possible, or zero), and should be written as a Roman numeral (or 0) enclosed in parentheses. Name of the cation should be preceded by the name of anion. 
(if applicable, as in last example) Examples: [Cd(CN)2(en)2] → dicyanidobis(ethylenediamine)cadmium(II) [CoCl(NH3)5]SO4 → pentaamminechloridocobalt(III) sulfate [Cu(H2O)6]2+ → hexaaquacopper(II) ion [CuCl5NH3]3− → amminepentachloridocuprate(II) ion K4[Fe(CN)6] → potassium hexacyanidoferrate(II) [NiCl4]2− → tetrachloridonickelate(II) ion (The use of chloro- was removed from IUPAC naming convention) The coordination number of ligands attached to more than one metal (bridging ligands) is indicated by a subscript to the Greek symbol μ placed before the ligand name. Thus the dimer of aluminium trichloride is described by Al2Cl4(μ2-Cl)2. Any anionic group can be electronically stabilized by any cation. An anionic complex can be stabilised by a hydrogen cation, becoming an acidic complex which can dissociate to release the cationic hydrogen. This kind of complex compound has a name with "ic" added after the central metal. For example, H2[Pt(CN)4] has the name tetracyanoplatinic (II) acid. Stability constant The affinity of metal ions for ligands is described by a stability constant, also called the formation constant, and is represented by the symbol Kf. It is the equilibrium constant for the assembly of the complex from its constituent metal and ligands, and can be calculated accordingly, as in the following example for a simple case: x M(aq) + y L(aq) ⇌ z Z(aq), with Kf = [Z]^z / ([M]^x [L]^y), where x, y, and z are the stoichiometric coefficients of each species, M stands for the metal ion, L for the Lewis base ligand, and Z for the complex ion. Formation constants vary widely. Large values indicate that the metal has high affinity for the ligand, provided the system is at equilibrium. Sometimes the stability constant will be in a different form known as the constant of destability. This constant is expressed as the inverse of the constant of formation and is denoted as Kd = 1/Kf. This constant represents the reverse reaction for the decomposition of a complex ion into its individual metal and ligand components. When comparing the values for Kd, the larger the value, the more unstable the complex ion is. As a result of these complex ions forming in solutions they also can play a key role in solubility of other compounds. When a complex ion is formed it can alter the concentrations of its components in the solution. For example: Ag+ + 2 NH3 ⇌ [Ag(NH3)2]+ and AgCl(s) ⇌ Ag+(aq) + Cl−(aq). If these reactions both occurred in the same reaction vessel, the solubility of the silver chloride would be increased by the presence of NH4OH (aqueous ammonia), because formation of the diamminesilver(I) complex consumes a significant portion of the free silver ions from the solution. By Le Chatelier's principle, this causes the equilibrium reaction for the dissolving of the silver chloride, which has silver ion as a product, to shift to the right. This new solubility can be calculated given the values of Kf and Ksp for the original reactions. The solubility is found essentially by combining the two separate equilibria into one combined equilibrium reaction and this combined reaction is the one that determines the new solubility. So Kc, the new solubility constant, is denoted by Kc = Kf × Ksp. Application of coordination compounds As metals only exist in solution as coordination complexes, it follows that this class of compounds is useful in a wide variety of ways. Bioinorganic chemistry In bioinorganic chemistry and bioorganometallic chemistry, coordination complexes serve either structural or catalytic functions. An estimated 30% of proteins contain metal ions.
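The combined-equilibrium argument sketched above for AgCl and ammonia can be made concrete with a short calculation. The sketch below assumes illustrative textbook values for the formation constant of [Ag(NH3)2]+ and the solubility product of AgCl; these numbers are not taken from this article and serve only to show the arithmetic.

```python
import math

# Illustrative textbook values (assumed, not from this article):
K_f = 1.6e7      # formation constant: Ag+ + 2 NH3 <=> [Ag(NH3)2]+
K_sp = 1.8e-10   # solubility product: AgCl(s) <=> Ag+ + Cl-

# Combined equilibrium: AgCl(s) + 2 NH3 <=> [Ag(NH3)2]+ + Cl-
K_c = K_f * K_sp

def agcl_solubility(c_nh3: float) -> float:
    """Molar solubility s of AgCl in an ammonia solution of initial concentration c_nh3.

    At equilibrium K_c = s^2 / (c_nh3 - 2s)^2, so s = c_nh3*sqrt(K_c) / (1 + 2*sqrt(K_c)).
    """
    root = math.sqrt(K_c)
    return c_nh3 * root / (1.0 + 2.0 * root)

print(math.sqrt(K_sp))        # ~1.3e-5 M: solubility of AgCl in pure water
print(agcl_solubility(1.0))   # ~4.8e-2 M: far higher in 1.0 M ammonia
```

The orders-of-magnitude increase in solubility reflects the consumption of free silver ions by the complex, as described above.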
Examples include the intensely colored vitamin B12, the heme group in hemoglobin, the cytochromes, the chlorin group in chlorophyll, and carboxypeptidase, a hydrolytic enzyme important in digestion. Another complex ion enzyme is catalase, which decomposes the cell's waste hydrogen peroxide. Synthetic coordination compounds are also used to bind to proteins and especially nucleic acids (e.g. anticancer drug cisplatin). Industry Homogeneous catalysis is a major application of coordination compounds for the production of organic substances. Processes include hydrogenation, hydroformylation, oxidation. In one example, a combination of titanium trichloride and triethylaluminium gives rise to Ziegler–Natta catalysts, used for the polymerization of ethylene and propylene to give polymers of great commercial importance as fibers, films, and plastics. Nickel, cobalt, and copper can be extracted using hydrometallurgical processes involving complex ions. They are extracted from their ores as ammine complexes. Metals can also be separated using the selective precipitation and solubility of complex ions. Cyanide is used chiefly for extraction of gold and silver from their ores. Phthalocyanine complexes are an important class of pigments. Analysis At one time, coordination compounds were used to identify the presence of metals in a sample. Qualitative inorganic analysis has largely been superseded by instrumental methods of analysis such as atomic absorption spectroscopy (AAS), inductively coupled plasma atomic emission spectroscopy (ICP-AES) and inductively coupled plasma mass spectrometry (ICP-MS). See also Activated complex IUPAC nomenclature of inorganic chemistry Coordination cage Coordination geometry Coordination isomerism Coordination polymers, in which coordination complexes are the repeating units. Inclusion compounds Organometallic chemistry deals with a special class of coordination compounds where organic fragments are bonded to a metal at least through one C atom. References Further reading De Vito, D.; Weber, J. ; Merbach, A. E. “Calculated Volume and Energy Profiles for Water Exchange on t2g 6 Rhodium(III) and Iridium(III) Hexaaquaions: Conclusive Evidence for an Ia Mechanism” Inorganic Chemistry, 2004, Volume 43, pages 858–863. Zumdahl, Steven S. Chemical Principles, Fifth Edition. New York: Houghton Mifflin, 2005. 943–946, 957. Harris, D., Bertolucci, M., Symmetry and Spectroscopy. 1989 New York, Dover Publications External links Naming Coordination Compounds Transition Metal Complex Colors Inorganic chemistry Transition metals Coordination chemistry
Coordination complex
[ "Chemistry" ]
6,743
[ "Coordination chemistry", "nan", "Coordination complexes" ]
7,376
https://en.wikipedia.org/wiki/Cosmic%20microwave%20background
The cosmic microwave background (CMB, CMBR), or relic radiation, is microwave radiation that fills all space in the observable universe. With a standard optical telescope, the background space between stars and galaxies is almost completely dark. However, a sufficiently sensitive radio telescope detects a faint background glow that is almost uniform and is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the electromagnetic spectrum. The accidental discovery of the CMB in 1965 by American radio astronomers Arno Penzias and Robert Wilson was the culmination of work initiated in the 1940s. The CMB is landmark evidence of the Big Bang theory for the origin of the universe. In the Big Bang cosmological models, during the earliest periods, the universe was filled with an opaque fog of dense, hot plasma of sub-atomic particles. As the universe expanded, this plasma cooled to the point where protons and electrons combined to form neutral atoms of mostly hydrogen. Unlike the plasma, these atoms could not scatter thermal radiation by Thomson scattering, and so the universe became transparent. Known as the recombination epoch, this decoupling event released photons to travel freely through space. However, the photons have grown less energetic due to the cosmological redshift associated with the expansion of the universe. The surface of last scattering refers to a shell at just the right distance in space so that photons now being received were originally emitted at the time of decoupling. The CMB is not completely smooth and uniform, showing a faint anisotropy that can be mapped by sensitive detectors. Ground and space-based experiments such as COBE, WMAP and Planck have been used to measure these temperature inhomogeneities. The anisotropy structure is determined by various interactions of matter and photons up to the point of decoupling, which results in a characteristic lumpy pattern that varies with angular scale. The distribution of the anisotropy across the sky has frequency components that can be represented by a power spectrum displaying a sequence of peaks and valleys. The peak values of this spectrum hold important information about the physical properties of the early universe: the first peak determines the overall curvature of the universe, while the second and third peak detail the density of normal matter and so-called dark matter, respectively. Extracting fine details from the CMB data can be challenging, since the emission has undergone modification by foreground features such as galaxy clusters. Features The cosmic microwave background radiation is an emission of uniform black body thermal energy coming from all directions. Intensity of the CMB is expressed in kelvin (K), the SI unit of temperature. The CMB has a thermal black body spectrum at a temperature of 2.725 K. Variations in intensity are expressed as variations in temperature. The blackbody temperature uniquely characterizes the intensity of the radiation at all wavelengths; a measured brightness temperature at any wavelength can be converted to a blackbody temperature. The radiation is remarkably uniform across the sky, very unlike the almost point-like structure of stars or clumps of stars in galaxies. The radiation is isotropic to roughly one part in 25,000: the root mean square variations are just over 100 μK, after subtracting a dipole anisotropy from the Doppler shift of the background radiation.
The latter is caused by the peculiar velocity of the Sun relative to the comoving cosmic rest frame as it moves at 369.82 ± 0.11 km/s towards the constellation Crater near its boundary with the constellation Leo. The CMB dipole and aberration at higher multipoles have been measured, consistent with galactic motion. Despite the very small degree of anisotropy in the CMB, many aspects can be measured with high precision and such measurements are critical for cosmological theories. In addition to temperature anisotropy, the CMB should have an angular variation in polarization. The polarization at each direction in the sky has an orientation described in terms of E-mode and B-mode polarization. The E-mode signal is a factor of 10 less strong than the temperature anisotropy; it supplements the temperature data as they are correlated. The B-mode signal is even weaker but may contain additional cosmological data. The anisotropy is related to the physical origin of the polarization. Excitation of an electron by linearly polarized light generates polarized light at 90 degrees to the incident direction. If the incoming radiation is isotropic, different incoming directions create polarizations that cancel out. If the incoming radiation has quadrupole anisotropy, residual polarization will be seen. Other than the temperature and polarization anisotropy, the CMB frequency spectrum is expected to feature tiny departures from the black-body law known as spectral distortions. These are also the focus of an active research effort with the hope of a first measurement within the forthcoming decades, as they contain a wealth of information about the primordial universe and the formation of structures at late time. The CMB contains the vast majority of photons in the universe by a factor of 400 to 1; the number density of photons in the CMB is one billion times (109) the number density of matter in the universe. Without the expansion of the universe to cause the cooling of the CMB, the night sky would shine as brightly as the Sun. The energy density of the CMB is about 0.260 eV/cm3 (4.17 × 10−14 J/m3), corresponding to a number density of about 411 photons/cm3. History Early speculations In 1931, Georges Lemaître speculated that remnants of the early universe may be observable as radiation, but his candidate was cosmic rays. Richard C. Tolman showed in 1934 that expansion of the universe would cool blackbody radiation while maintaining a thermal spectrum. The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman, in a correction they prepared for a paper by Alpher's PhD advisor George Gamow. Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K. Discovery The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Robert H. Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments.
The antenna was constructed in 1959 to support Project Echo—the National Aeronautics and Space Administration's passive communications satellites, which used large earth orbiting aluminized plastic balloons as reflectors to bounce radio signals from one point on the Earth to another. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background, with their instrument having an excess 4.2K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke said "Boys, we've been scooped." A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery. Cosmic origin The interpretation of the cosmic microwave background was a controversial issue in the late 1960s. Alternative explanations included energy from within the solar system, from galaxies, from intergalactic plasma and from multiple extragalactic radio sources. Two requirements would show that the microwave radiation was truly "cosmic". First, the intensity vs frequency or spectrum needed to be shown to match a thermal or blackbody source. This was accomplished by 1968 in a series of measurements of the radiation temperature at higher and lower wavelengths. Second, the radiation needed be shown to be isotropic, the same from all directions. This was also accomplished by 1970, demonstrating that this radiation was truly cosmic in origin. Progress on theory In the 1970s numerous studies showed that tiny deviations from isotropy in the CMB could result from events in the early universe. Harrison, Peebles and Yu, and Zel'dovich realized that the early universe would require quantum inhomogeneities that would result in temperature anisotropy at the level of 10−4 or 10−5. Rashid Sunyaev, using the alternative name relic radiation, calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background. COBE After a lull in the 1970s caused in part by the many experimental difficulties in measuring CMB at high precision, increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground-based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983), gave the first upper limits on the large-scale anisotropy. The other key event in the 1980s was the proposal by Alan Guth for cosmic inflation. This theory of rapid spatial expansion gave an explanation for large-scale isotropy by allowing causal connection just before the epoch of last scattering. With this and similar theories, detailed prediction encouraged larger and more ambitious experiments. The NASA Cosmic Background Explorer (COBE) satellite orbited Earth in 1989–1996 detected and quantified the large scale anisotropies at the limit of its detection capabilities. The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992. The team received the Nobel Prize in physics for 2006 for this discovery. Precision cosmology Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the two decades. 
The sensitivity of the new experiments improved dramatically, with a reduction in internal noise by three orders of magnitude. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE did not have sufficient resolution to resolve. This peak corresponds to large scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustical oscillations in the plasma. The first peak in the anisotropy was tentatively detected by the MAT/TOCO experiment and the result was confirmed by the BOOMERanG and MAXIMA experiments. These measurements demonstrated that the geometry of the universe is approximately flat, rather than curved. They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation. Observations after COBE Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum. Wilkinson Microwave Anisotropy Probe In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid switching radiometers at five frequencies to minimize non-sky signal noise. The data from the mission was released in five installments, the last being the nine year summary. The results are broadly consistent with Lambda CDM models based on 6 free parameters and fitting into Big Bang cosmology with cosmic inflation. Degree Angular Scale Interferometer Atacama Cosmology Telescope Planck Surveyor A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and performed an even more detailed investigation until it was shut down in October 2013. Planck employed both HEMT radiometers and bolometer technology and measured the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as the ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment—which has produced the most precise measurements at small angular scales to date—and in the Archeops balloon telescope. On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background.
The map suggests the universe is slightly older than researchers expected. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was roughly 380,000 years old. The imprint reflects ripples that arose as early in the existence of the universe as the first nonillionth (10−30) of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is about 13.8 billion years and the Hubble constant was measured to be approximately 67.7 (km/s)/Mpc. South Pole Telescope Theoretical models The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang event. Measurements of the CMB have made the inflationary Big Bang model the Standard Cosmological Model. The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory. In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10−37 seconds the nascent universe underwent exponential growth that smoothed out nearly all irregularities. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. Long before the formation of stars and planets, the early universe was more compact, much hotter and, starting 10−6 seconds after the Big Bang, filled with a uniform glow from its white-hot fog of interacting plasma of photons, electrons, and baryons. As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K or when the universe was approximately 379,000 years old. As photons did not interact with these electrically neutral atoms, the former began to travel freely through space, resulting in the decoupling of matter and radiation. The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to about 2.726 K, it will continue to drop as the universe expands. The intensity of the radiation corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred and at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background, making up a fraction of roughly 6 × 10−5 of the total density of the universe. Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature.
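Because the spectrum is so close to an ideal black body, the photon number density and energy density quoted earlier, and the roughly 3000 K temperature at recombination, all follow from textbook black-body formulas evaluated at the present-day temperature. The following is a minimal Python sketch of those evaluations; the physical constants and the recombination redshift of about 1089 are standard values stated here as assumptions rather than taken from this article.

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
eV = 1.602176634e-19    # joules per electronvolt
zeta3 = 1.2020569032    # Riemann zeta(3)

T0 = 2.725              # present-day CMB temperature, K

# Black-body photon number density: n = (2*zeta(3)/pi^2) * (k_B*T/(hbar*c))^3
n_photons = 2.0 * zeta3 / math.pi**2 * (k_B * T0 / (hbar * c))**3   # per m^3

# Black-body energy density: u = a*T^4 with radiation constant a = 4*sigma/c
u = 4.0 * sigma / c * T0**4                                          # J per m^3

# Temperature scales with redshift as T(z) = T0*(1 + z); z ~ 1089 at recombination
T_recombination = T0 * (1.0 + 1089.0)

print(round(n_photons * 1e-6))     # ~411 photons per cm^3
print(round(u / eV * 1e-6, 3))     # ~0.26 eV per cm^3
print(round(T_recombination))      # ~2970 K, close to the ~3000 K quoted above
```

The printed values reproduce the figures quoted in this article, which is simply a consequence of the CMB being an almost perfect black body.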
Predictions based on the Big Bang model In the late 1940s Alpher and Herman reasoned that if there was a Big Bang, the expansion of the universe would have stretched the high-energy radiation of the very early universe into the microwave region of the electromagnetic spectrum, and down to a temperature of about 5 K. They were slightly off with their estimate, but they had the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to discover that the microwave background was actually there. According to standard cosmology, the CMB gives a snapshot of the hot early universe at the point in time when the temperature dropped enough to allow electrons and protons to form hydrogen atoms. This event made the universe nearly transparent to radiation because light was no longer being scattered off free electrons. When this occurred some 380,000 years after the Big Bang, the temperature of the universe was about 3,000 K. This corresponds to an ambient energy of about 0.26 eV, which is much less than the ionization energy of hydrogen. This epoch is generally known as the "time of last scattering" or the period of recombination or decoupling. Since decoupling, the color temperature of the background radiation has dropped by an average factor of 1,089 due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. The color temperature of this radiation stays inversely proportional to a parameter that describes the relative expansion of the universe over time, known as the scale factor. The color temperature Tr of the CMB as a function of redshift, z, can be shown to be proportional to the color temperature of the CMB as observed in the present day (2.725 K or 0.2348 meV): Tr = 2.725 K × (1 + z) The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support to the Big Bang model in general and the ΛCDM ("Lambda Cold Dark Matter") model in particular. Moreover, the fluctuations are coherent on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred. Primary anisotropy The anisotropy, or directional dependency, of the cosmic microwave background is divided into two types: primary anisotropy, due to effects that occur at the surface of last scattering and before; and secondary anisotropy, due to effects such as interactions of the background radiation with intervening hot gas or gravitational potentials, which occur between the last scattering surface and the observer. The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude. The peaks contain interesting physical signatures.
The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next peak—ratio of the odd peaks to the even peaks—determines the reduced baryon density. The third peak can be used to get information about the dark-matter density. The locations of the peaks give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures. Adiabatic density perturbations: In an adiabatic density perturbation, the fractional additional number density of each type of particle (baryons, photons, etc.) is the same. That is, if at one place there is a 1% higher number density of baryons than average, then at that place there is a 1% higher number density of photons (and a 1% higher number density in neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic. Isocurvature density perturbations: In an isocurvature density perturbation, the sum (over different types of particle) of the fractional additional densities is zero. That is, a perturbation where at some spot there is 1% more energy in baryons than average, 1% more energy in photons than average, and 2% less energy in neutrinos than average, would be a pure isocurvature perturbation. Hypothetical cosmic strings would produce mostly isocurvature primordial perturbations. The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales (ℓ values of the peaks) are roughly in the ratio 1 : 3 : 5 : ..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1 : 2 : 3 : ... Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings. Collisionless damping is caused by two effects, when the treatment of the primordial plasma as fluid begins to break down: the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in an expanding universe, the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling, even while some Compton scattering is still occurring. These effects contribute about equally to the suppression of anisotropies at small scales and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies. The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the universe up to that era. One method of quantifying how long this process took uses the photon visibility function (PVF). This function is defined so that, denoting the PVF by P(t), the probability that a CMB photon last scattered between time t and t + dt is given by P(t)dt. The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which P(t) has a maximum as 372,000 years. This is often taken as the "time" at which the CMB formed.
However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximal value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and thus when it was complete, the universe was roughly 487,000 years old. Late time anisotropy Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions. The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB: Small scale anisotropies are erased. (Just as when looking at an object through fog, details of the object appear fuzzy.) The physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad angle polarization is correlated with the broad angle temperature perturbation. Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift around 10. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes. The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the Dark Age, and is a period which is under intense study by astronomers (see 21 centimeter radiation). Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zeldovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the integrated Sachs–Wolfe effect, which causes photons from the Cosmic Microwave Background to be gravitationally redshifted or blueshifted due to changing gravitational fields. Alternative theories The standard cosmology that includes the Big Bang "enjoys considerable popularity among the practicing cosmologists". However, there are challenges to the standard big bang framework for explaining CMB data. In particular, standard cosmology requires fine-tuning of some free parameters, with different values supported by different experimental data. 
As an example of the fine-tuning issue, standard cosmology cannot predict the present temperature of the relic radiation, Tr. This value of Tr is one of the best results of experimental cosmology and the steady state model can predict it. However, alternative models have their own set of problems and they have only made post-facto explanations of existing observations. Nevertheless, these alternatives have played an important historic role in providing ideas for and challenges to the standard explanation. Polarization The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-mode (or gradient-mode) and B-mode (or curl mode). This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. E-modes The E-modes arise from Thomson scattering in a heterogeneous plasma. E-modes were first seen in 2002 by the Degree Angular Scale Interferometer (DASI). B-modes B-modes are expected to be an order of magnitude weaker than the E-modes. The former are not produced by standard scalar type perturbations, but are generated by gravitational waves during cosmic inflation shortly after the big bang. However, gravitational lensing of the stronger E-modes can also produce B-mode polarization. Detecting the original B-mode signal requires analysis of the contamination caused by lensing of the relatively strong E-mode signal. Primordial gravitational waves Models of "slow-roll" cosmic inflation in the early universe predict primordial gravitational waves that would impact the polarisation of the cosmic microwave background, creating a specific pattern of B-mode polarization. Detection of this pattern would support the theory of inflation, and its strength can confirm and exclude different models of inflation. Claims that this characteristic pattern of B-mode polarization had been measured by the BICEP2 instrument were later attributed to cosmic dust, following new results of the Planck experiment. Gravitational lensing The second type of B-modes was discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. In October 2014, a measurement of the B-mode polarization at 150 GHz was published by the POLARBEAR experiment. Compared to BICEP2, POLARBEAR focuses on a smaller patch of the sky and is less susceptible to dust effects. The team reported that POLARBEAR's measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level. Multipole analysis The CMB angular anisotropies are usually presented in terms of power per multipole. The map of temperature across the sky, T(θ,φ), is written as an expansion in spherical harmonics, T(θ,φ) = Σℓ,m aℓm Yℓm(θ,φ), where the aℓm term measures the strength of the angular oscillation in Yℓm(θ,φ), and ℓ is the multipole number while m is the azimuthal number. The azimuthal variation is not significant and is removed by applying the angular correlation function, giving the power spectrum term Cℓ = ⟨|aℓm|²⟩, the average of |aℓm|² over m. Increasing values of ℓ correspond to higher multipole moments of CMB, meaning more rapid variation with angle. CMBR monopole term (ℓ = 0) The monopole term, Tγ = 2.7255 ± 0.0006 K, is the constant isotropic mean temperature of the CMB, with one standard deviation confidence. This term must be measured with absolute temperature devices, such as the FIRAS instrument on the COBE satellite. CMBR dipole anisotropy (ℓ = 1) CMB dipole represents the largest anisotropy, which is in the first spherical harmonic (ℓ = 1), a cosine function. 
The amplitude of the CMB dipole is around 3.36 mK. The CMB dipole moment is interpreted as the peculiar motion of the Earth relative to the CMB. Its amplitude depends on time due to the Earth's orbit about the barycenter of the solar system. This enables us to add a time-dependent term to the dipole expression. The modulation of this term is 1 year, which fits the observation done by COBE FIRAS. The dipole moment does not encode any primordial information. From the CMB data, it is seen that the Sun appears to be moving at about 370 km/s relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB). The Local Group — the galaxy group that includes our own Milky Way galaxy — appears to be moving at about 620 km/s in the direction of galactic longitude ℓ ≈ 272°, b ≈ 30°. The dipole is now used to calibrate mapping studies. Multipole (ℓ ≥ 2) The temperature variation in the CMB temperature maps at higher multipoles, or ℓ ≥ 2, is considered to be the result of perturbations of the density in the early Universe, before the recombination epoch at a redshift of around 1100. Before recombination, the Universe consisted of a hot, dense plasma of electrons and baryons. In such a hot dense environment, electrons and protons could not form any neutral atoms. The baryons in such an early Universe remained highly ionized and so were tightly coupled with photons through the effect of Thomson scattering. These phenomena caused the pressure and gravitational effects to act against each other, and triggered fluctuations in the photon-baryon plasma. Quickly after the recombination epoch, the rapid expansion of the universe caused the plasma to cool down and these fluctuations are "frozen into" the CMB maps we observe today. Data analysis challenges Raw CMBR data, even from space vehicles such as WMAP or Planck, contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR background. The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. In practice it is hard to take the effects of noise and foreground sources into account. In particular, these foregrounds are dominated by galactic emissions such as Bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galaxy has to be removed, resulting in a CMB map that is not a full-sky map. In addition, point sources like galaxies and clusters represent another source of foreground which must be removed so as not to distort the short scale structure of the CMB power spectrum. Constraints on many cosmological parameters can be obtained from their effects on the power spectrum, and results are often calculated using Markov chain Monte Carlo sampling techniques. 
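As a rough consistency check on the dipole figures above, and to make the power-spectrum definition Cℓ = ⟨|aℓm|²⟩ concrete, the following short sketch may help. It is an illustration only: the solar velocity is the approximate value quoted above, and the aℓm coefficients are synthetic random numbers rather than values extracted from any real map:

```python
import numpy as np

# (1) Dipole amplitude expected from the Sun's motion: dT ≈ T0 * v / c.
T0 = 2.725            # K, CMB monopole temperature
v = 370.0e3           # m/s, approximate solar velocity relative to the CMB frame
c = 299_792_458.0     # m/s, speed of light
print(f"expected dipole amplitude ≈ {T0 * v / c * 1e3:.2f} mK")   # roughly 3.4 mK

# (2) Toy power-spectrum estimator: C_ell as the average of |a_lm|^2 over m.
rng = np.random.default_rng(0)
for ell in range(2, 11):
    a_lm = rng.normal(size=2 * ell + 1)   # synthetic coefficients; a real analysis
                                          # would extract them from a sky map
    C_ell = np.mean(a_lm**2)
    print(f"ell = {ell:2d}:  C_ell ≈ {C_ell:.3f}")
```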
Anomalies With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions. The most longstanding of these is the low-ℓ multipole controversy. Even in the COBE map, it was observed that the quadrupole (ℓ = 2, spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (ℓ = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes. A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data. Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others. Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and Bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole. A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable. Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by ~5%. Recent observations with the Planck telescope, which is very much more sensitive than WMAP and has a higher angular resolution, record the same anomaly, and so instrumental error (but not foreground contamination) appears to be ruled out. Coincidence is a possible explanation: the chief scientist of WMAP, Charles L. Bennett, suggested that coincidence and human psychology were involved, "I do think there is a bit of a psychological effect; people want to find unusual things." Measurements of the density of quasars based on Wide-field Infrared Survey Explorer data find a dipole significantly different from the one extracted from the CMB anisotropy. This difference is in conflict with the cosmological principle. Future evolution Assuming the universe keeps expanding and it does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it is no longer detectable, and will be superseded first by the one produced by starlight, and perhaps, later by the background radiation fields of processes that may take place in the far future of the universe such as proton decay, evaporation of black holes, and positronium decay. Timeline of prediction, discovery and interpretation Thermal (non-microwave background) temperature predictions 1896 – Charles Édouard Guillaume estimates the "radiation of the stars" to be 5–6 K. 1926 – Sir Arthur Eddington estimates the non-thermal radiation of starlight in the galaxy "... by the formula the effective temperature corresponding to this density is 3.18° absolute ... black body". 1930s – Cosmologist Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K. 1931 – Term microwave first used in print: "When trials with wavelengths as low as 18 cm. 
were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon." Telegraph & Telephone Journal XVII. 179/1 1934 – Richard Tolman shows that black-body radiation in an expanding universe cools but remains thermal. 1946 – Robert Dicke predicts "... radiation from cosmic matter" at < 20 K, but did not refer to background radiation. 1946 – George Gamow calculates a temperature of 50 K (assuming a 3-billion year old universe), commenting it "... is in reasonable agreement with the actual temperature of interstellar space", but does not mention background radiation. 1953 – Erwin Finlay-Freundlich, in support of his tired light theory, derives a blackbody temperature for intergalactic space of 2.3 K and in the following year values of 1.9 K and 6.0 K. Microwave background radiation predictions and measurements 1941 – Andrew McKellar detected a "rotational" temperature of 2.3 K for the interstellar medium by comparing the population of CN doublet lines measured by W. S. Adams in a B star. 1948 – Ralph Alpher and Robert Herman estimate "the temperature in the universe" at 5 K. Although they do not specifically mention microwave background radiation, it may be inferred. 1953 – George Gamow estimates 7 K based on a model that does not rely on a free parameter. 1955 – Émile Le Roux of the Nançay Radio Observatory, in a sky survey at λ = 33 cm, initially reported a near-isotropic background radiation of 3 kelvins, plus or minus 2; he did not recognize the cosmological significance and later revised the error bars to 20 K. 1957 – Tigran Shmaonov reports that "the absolute effective temperature of the radioemission background ... is 4±3 K", with the radiation intensity independent of either time or direction of observation. Although Shmaonov did not recognize it at the time, it is now clear that he did observe the cosmic microwave background at a wavelength of 3.2 cm. 1964 – A. G. Doroshkevich and Igor Dmitrievich Novikov publish a brief paper suggesting microwave searches for the black-body radiation predicted by Gamow, Alpher, and Herman, where they name the CMB radiation phenomenon as detectable. 1964–65 – Arno Penzias and Robert Woodrow Wilson measure the temperature to be approximately 3 K. Robert Dicke, James Peebles, P. G. Roll, and D. T. Wilkinson interpret this radiation as a signature of the Big Bang. 1966 – Rainer K. Sachs and Arthur M. Wolfe theoretically predict microwave background fluctuation amplitudes created by gravitational potential variations between observers and the last scattering surface (see Sachs–Wolfe effect). 1968 – Martin Rees and Dennis Sciama theoretically predict microwave background fluctuation amplitudes created by photons traversing time-dependent wells of potential. 1969 – R. A. Sunyaev and Yakov Zel'dovich study the inverse Compton scattering of microwave background photons by hot electrons (see Sunyaev–Zel'dovich effect). 1983 – Researchers from the Cambridge Radio Astronomy Group and the Owens Valley Radio Observatory first detect the Sunyaev–Zel'dovich effect from clusters of galaxies. 1983 – RELIKT-1 Soviet CMB anisotropy experiment was launched. 1990 – FIRAS on the Cosmic Background Explorer (COBE) satellite measures the black body form of the CMB spectrum with exquisite precision, and shows that the microwave background has a nearly perfect black-body spectrum with T = 2.73 K and thereby strongly constrains the density of the intergalactic medium. 
January 1992 – Scientists who analysed data from the RELIKT-1 report the discovery of anisotropy in the cosmic microwave background at the Moscow astrophysical seminar. 1992 – Scientists who analysed data from COBE DMR report the discovery of anisotropy in the cosmic microwave background. 1995 – The Cosmic Anisotropy Telescope performs the first high resolution observations of the cosmic microwave background. 1999 – First measurements of acoustic oscillations in the CMB anisotropy angular power spectrum from the MAT/TOCO, BOOMERANG, and Maxima Experiments. The BOOMERanG experiment makes higher quality maps at intermediate resolution, and confirms that the universe is "flat". 2002 – Polarization discovered by DASI. 2003 – E-mode polarization spectrum obtained by the CBI. The CBI and the Very Small Array produce yet higher quality maps at high resolution (covering small areas of the sky). 2003 – The Wilkinson Microwave Anisotropy Probe spacecraft produces an even higher quality map at low and intermediate resolution of the whole sky (WMAP provides no high-resolution data, but improves on the intermediate resolution maps from BOOMERanG). 2004 – E-mode polarization spectrum obtained by the CBI. 2004 – The Arcminute Cosmology Bolometer Array Receiver produces a higher quality map of the high resolution structure not mapped by WMAP. 2005 – The Arcminute Microkelvin Imager and the Sunyaev–Zel'dovich Array begin the first surveys for very high redshift clusters of galaxies using the Sunyaev–Zel'dovich effect. 2005 – Ralph A. Alpher is awarded the National Medal of Science for his groundbreaking work in nucleosynthesis and prediction that the universe expansion leaves behind background radiation, thus providing a model for the Big Bang theory. 2006 – The long-awaited three-year WMAP results are released, confirming previous analysis, correcting several points, and including polarization data. 2006 – Two of COBE's principal investigators, George Smoot and John Mather, received the Nobel Prize in Physics in 2006 for their work on precision measurement of the CMBR. 2006–2011 – Improved measurements from WMAP, new supernova surveys ESSENCE and SNLS, and baryon acoustic oscillations from SDSS and WiggleZ, continue to be consistent with the standard Lambda-CDM model. 2010 – The first all-sky map from the Planck telescope is released. 2013 – An improved all-sky map from the Planck telescope is released, improving the measurements of WMAP and extending them to much smaller scales. 2014 – On March 17, 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which, if confirmed, would provide clear experimental evidence for the theory of inflation. However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported. 2015 – On January 30, 2015, the same team of astronomers from BICEP2 withdrew the claim made the previous year. Based on the combined data of BICEP2 and Planck, the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way. 2018 – The final data and maps from the Planck telescope are released, with improved measurements of the polarization on large scales. 2019 – Planck telescope analyses of its final 2018 data continue to be released. In popular culture In the Stargate Universe TV series (2009–2011), an ancient spaceship, Destiny, was built to study patterns in the CMBR, which form a sentient message left over from the beginning of time. 
In Wheelers, a novel (2000) by Ian Stewart & Jack Cohen, CMBR is explained as the encrypted transmissions of an ancient civilization. This allows the Jovian "blimps" to have a society older than the currently-observed age of the universe. In The Three-Body Problem, a 2008 novel by Liu Cixin, a probe from an alien civilization compromises instruments monitoring the CMBR in order to deceive a character into believing the civilization has the power to manipulate the CMBR itself. The 2017 issue of the Swiss 20 francs bill lists several astronomical objects with their distances – the CMB is mentioned with 430 · 10¹⁵ light-seconds. In the 2021 Marvel series WandaVision, a mysterious television broadcast is discovered within the Cosmic Microwave Background. See also Notes References Further reading External links Student Friendly Intro to the CMB A pedagogic, step-by-step introduction to the cosmic microwave background power spectrum analysis suitable for those with an undergraduate physics background. More in depth than typical online sites. Less dense than cosmology texts. CMBR Theme on arxiv.org Audio: Fraser Cain and Dr. Pamela Gay – Astronomy Cast. The Big Bang and Cosmic Microwave Background – October 2006 Visualization of the CMB data from the Planck mission Astronomical radio sources Astrophysics Cosmic background radiation B-modes Inflation (cosmology) Observational astronomy Physical cosmological concepts Radio astronomy
Cosmic microwave background
[ "Physics", "Astronomy" ]
9,697
[ "Physical cosmological concepts", "Astronomical radio sources", "Concepts in astrophysics", "Astronomical events", "Observational astronomy", "Astrophysics", "Radio astronomy", "Astronomical objects", "Astronomical sub-disciplines" ]
7,403
https://en.wikipedia.org/wiki/Chemotaxis
Chemotaxis (from chemo- + taxis) is the movement of an organism or entity in response to a chemical stimulus. Somatic cells, bacteria, and other single-cell or multicellular organisms direct their movements according to certain chemicals in their environment. This is important for bacteria to find food (e.g., glucose) by swimming toward the highest concentration of food molecules, or to flee from poisons (e.g., phenol). In multicellular organisms, chemotaxis is critical to early development (e.g., movement of sperm towards the egg during fertilization) and later phases of development (e.g., migration of neurons or lymphocytes) as well as in normal function and health (e.g., migration of leukocytes during injury or infection). In addition, it has been recognized that mechanisms that allow chemotaxis in animals can be subverted during cancer metastasis, and that aberrant changes in the overall properties of the networks which control chemotaxis can lead to carcinogenesis. The aberrant chemotaxis of leukocytes and lymphocytes also contributes to inflammatory diseases such as atherosclerosis, asthma, and arthritis. Sub-cellular components, such as the polarity patch generated by mating yeast, may also display chemotactic behavior. Positive chemotaxis occurs if the movement is toward a higher concentration of the chemical in question; negative chemotaxis if the movement is in the opposite direction. Chemically prompted kinesis (randomly directed or nondirectional) can be called chemokinesis. History of chemotaxis research Although migration of cells was detected from the early days of the development of microscopy by Leeuwenhoek, a Caltech lecture regarding chemotaxis propounds that 'erudite description of chemotaxis was only first made by T. W. Engelmann (1881) and W. F. Pfeffer (1884) in bacteria, and H. S. Jennings (1906) in ciliates'. The Nobel Prize laureate I. Metchnikoff also contributed to the study of the field during 1882 to 1886, with investigations of the process as an initial step of phagocytosis. The significance of chemotaxis in biology and clinical pathology was widely accepted in the 1930s, and the most fundamental definitions underlying the phenomenon were drafted by this time. The most important aspects in quality control of chemotaxis assays were described by H. Harris in the 1950s. In the 1960s and 1970s, the revolution of modern cell biology and biochemistry provided a series of novel techniques that became available to investigate the migratory responder cells and subcellular fractions responsible for chemotactic activity. The availability of this technology led to the discovery of C5a, a major chemotactic factor involved in acute inflammation. The pioneering works of J. Adler modernized Pfeffer's capillary assay and represented a significant turning point in understanding the whole process of intracellular signal transduction of bacteria. Bacterial chemotaxis—general characteristics Some bacteria, such as E. coli, have several flagella per cell (4–10 typically). These can rotate in two ways: Counter-clockwise rotation aligns the flagella into a single rotating bundle, causing the bacterium to swim in a straight line; and Clockwise rotation breaks the flagella bundle apart such that each flagellum points in a different direction, causing the bacterium to tumble in place. The directions of rotation are given for an observer outside the cell looking down the flagella toward the cell. 
Behavior The overall movement of a bacterium is the result of alternating tumble and swim phases, called run-and-tumble motion. As a result, the trajectory of a bacterium swimming in a uniform environment will form a random walk with relatively straight swims interrupted by random tumbles that reorient the bacterium. Bacteria such as E. coli are unable to choose the direction in which they swim, and are unable to swim in a straight line for more than a few seconds due to rotational diffusion; in other words, bacteria "forget" the direction in which they are going. By repeatedly evaluating their course, and adjusting if they are moving in the wrong direction, bacteria can direct their random walk motion toward favorable locations. In the presence of a chemical gradient bacteria will chemotax, or direct their overall motion based on the gradient. If the bacterium senses that it is moving in the correct direction (toward attractant/away from repellent), it will keep swimming in a straight line for a longer time before tumbling; however, if it is moving in the wrong direction, it will tumble sooner. Bacteria like E. coli use temporal sensing to decide whether their situation is improving or not, and in this way, find the location with the highest concentration of attractant, detecting even small differences in concentration. This biased random walk is a result of simply choosing between two methods of random movement; namely tumbling and straight swimming. The helical nature of the individual flagellar filament is critical for this movement to occur. The protein structure that makes up the flagellar filament, flagellin, is conserved among all flagellated bacteria. Vertebrates seem to have taken advantage of this fact by possessing an immune receptor (TLR5) designed to recognize this conserved protein. As in many instances in biology, there are bacteria that do not follow this rule. Many bacteria, such as Vibrio, are monoflagellated and have a single flagellum at one pole of the cell. Their method of chemotaxis is different. Others possess a single flagellum that is kept inside the cell wall. These bacteria move by spinning the whole cell, which is shaped like a corkscrew. Signal transduction Chemical gradients are sensed through multiple transmembrane receptors, called methyl-accepting chemotaxis proteins (MCPs), which vary in the molecules that they detect. Thousands of MCP receptors are known to be encoded across the bacterial kingdom. These receptors may bind attractants or repellents directly or indirectly through interaction with proteins of the periplasmic space. The signals from these receptors are transmitted across the plasma membrane into the cytosol, where Che proteins are activated. The Che proteins alter the tumbling frequency, and alter the receptors. Flagellum regulation The proteins CheW and CheA bind to the receptor. The absence of receptor activation results in autophosphorylation in the histidine kinase, CheA, at a single highly conserved histidine residue. CheA, in turn, transfers phosphoryl groups to conserved aspartate residues in the response regulators CheB and CheY; CheA is a histidine kinase and it does not actively transfer the phosphoryl group; rather, the response regulator CheB takes the phosphoryl group from CheA. This mechanism of signal transduction is called a two-component system, and it is a common form of signal transduction in bacteria. 
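A small simulation can illustrate the run-and-tumble strategy and the temporal comparison described under Behavior above. The sketch below is an illustration only: it is not taken from the source and does not model any particular organism, and the tumble probabilities and the linear gradient are assumptions chosen for clarity:

```python
import numpy as np

rng = np.random.default_rng(1)

def concentration(x):
    """Linear attractant gradient along x (arbitrary units)."""
    return 0.1 * x

def run_and_tumble(steps=2000, speed=1.0):
    pos = np.zeros(2)
    direction = rng.normal(size=2)
    direction /= np.linalg.norm(direction)
    last_c = concentration(pos[0])
    for _ in range(steps):
        pos = pos + speed * direction
        c = concentration(pos[0])
        # Temporal comparison: tumble less often when the concentration is rising.
        p_tumble = 0.1 if c > last_c else 0.5
        if rng.random() < p_tumble:
            direction = rng.normal(size=2)          # pick a new random direction
            direction /= np.linalg.norm(direction)
        last_c = c
    return pos

final = np.array([run_and_tumble() for _ in range(50)])
print("mean drift along the gradient (x):", round(final[:, 0].mean(), 1))
print("mean drift across the gradient (y):", round(final[:, 1].mean(), 1))
# The x-drift comes out clearly positive (motion up the gradient), while the
# y-drift stays near zero: a biased random walk.
```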
CheY induces tumbling by interacting with the flagellar switch protein FliM, inducing a change from counter-clockwise to clockwise rotation of the flagellum. Change in the rotation state of a single flagellum can disrupt the entire flagella bundle and cause a tumble. Receptor regulation CheB, when activated by CheA, acts as a methylesterase, removing methyl groups from glutamate residues on the cytosolic side of the receptor; it works antagonistically with CheR, a methyltransferase, which adds methyl residues to the same glutamate residues. If the level of an attractant remains high, the level of phosphorylation of CheA (and, therefore, CheY and CheB) will remain low, the cell will swim smoothly, and the level of methylation of the MCPs will increase (because CheB-P is not present to demethylate). The MCPs no longer respond to the attractant when they are fully methylated; therefore, even though the level of attractant might remain high, the level of CheA-P (and CheB-P) increases and the cell begins to tumble. The MCPs can be demethylated by CheB-P, and, when this happens, the receptors can once again respond to attractants. The situation is the opposite with regard to repellents: fully methylated MCPs respond best to repellents, while least-methylated MCPs respond worst to repellents. This regulation allows the bacterium to 'remember' chemical concentrations from the recent past, a few seconds, and compare them to those it is currently experiencing, thus 'know' whether it is traveling up or down a gradient. In addition to the sensitivity that bacteria have to chemical gradients, other mechanisms are involved in increasing the absolute value of the sensitivity on a given background. Well-established examples are the ultra-sensitive response of the motor to the CheY-P signal, and the clustering of chemoreceptors. Chemoattractants and chemorepellents Chemoattractants and chemorepellents are inorganic or organic substances possessing chemotaxis-inducer effect in motile cells. These chemotactic ligands create chemical concentration gradients that organisms, prokaryotic and eukaryotic, move toward or away from, respectively. Effects of chemoattractants are elicited via chemoreceptors such as methyl-accepting chemotaxis proteins (MCP). MCPs in E. coli include Tar, Tsr, Trg and Tap. Chemoattractants for Trg include ribose and galactose, with phenol as a chemorepellent. Tap and Tsr recognize dipeptides and serine as chemoattractants, respectively. Chemoattractants or chemorepellents bind MCPs at their extracellular domains; an intracellular signaling domain relays the changes in concentration of these chemotactic ligands to downstream proteins like that of CheA which then relays this signal to flagellar motors via phosphorylated CheY (CheY-P). CheY-P can then control flagellar rotation influencing the direction of cell motility. For E. coli, S. meliloti, and R. sphaeroides, the binding of chemoattractants to MCPs inhibits CheA and therefore CheY-P activity, resulting in smooth runs, but for B. subtilis, CheA activity increases. Methylation events in E. coli cause MCPs to have lower affinity to chemoattractants which causes increased activity of CheA and CheY-P resulting in tumbles. In this way cells are able to adapt to the immediate chemoattractant concentration and detect further changes to modulate cell motility. Chemoattractants in eukaryotes are well characterized for immune cells. Formyl peptides, such as fMLF, attract leukocytes such as neutrophils and macrophages, causing movement toward infection sites. 
Non-acylated methioninyl peptides do not act as chemoattractants to neutrophils and macrophages. Leukocytes also move toward chemoattractants C5a, a complement component, and pathogen-specific ligands on bacteria. Mechanisms concerning chemorepellents are less known than chemoattractants. Although chemorepellents work to confer an avoidance response in organisms, Tetrahymena thermophila adapts to a chemorepellent, Netrin-1 peptide, within 10 minutes of exposure; however, exposure to chemorepellents such as GTP, PACAP-38, and nociceptin shows no such adaptation. GTP and ATP are chemorepellents in micro-molar concentrations to both Tetrahymena and Paramecium. These organisms avoid these molecules by producing avoidance reactions to re-orient themselves away from the gradient. Eukaryotic chemotaxis The mechanism of chemotaxis that eukaryotic cells employ is quite different from that in the bacterium E. coli; however, sensing of chemical gradients is still a crucial step in the process. Due to their small size and other biophysical constraints, E. coli cannot directly detect a concentration gradient. Instead, they employ temporal gradient sensing, where they move over distances several times their own width and measure the rate at which perceived chemical concentration changes. Eukaryotic cells are much larger than prokaryotes and have receptors embedded uniformly throughout the cell membrane. Eukaryotic chemotaxis involves detecting a concentration gradient spatially by comparing the asymmetric activation of these receptors at the different ends of the cell. Activation of these receptors results in migration towards chemoattractants, or away from chemorepellants. In mating yeast, which are non-motile, patches of polarity proteins on the cell cortex can relocate in a chemotactic fashion up pheromone gradients. It has also been shown that both prokaryotic and eukaryotic cells are capable of chemotactic memory. In prokaryotes, this mechanism involves the methylation of receptors called methyl-accepting chemotaxis proteins (MCPs). This results in their desensitization and allows prokaryotes to "remember" and adapt to a chemical gradient. In contrast, chemotactic memory in eukaryotes can be explained by the Local Excitation Global Inhibition (LEGI) model. LEGI involves the balance between a fast excitation and delayed inhibition which controls downstream signaling such as Ras activation and PIP3 production. Levels of receptors, intracellular signalling pathways and the effector mechanisms all represent diverse, eukaryotic-type components. In eukaryotic unicellular cells, amoeboid movement and cilium or the eukaryotic flagellum are the main effectors (e.g., Amoeba or Tetrahymena). Some eukaryotic cells of higher vertebrate origin, such as immune cells, also move to where they need to be. Besides immune-competent cells (granulocyte, monocyte, lymphocyte), a large group of cells—considered previously to be fixed into tissues—are also motile in special physiological (e.g., mast cell, fibroblast, endothelial cells) or pathological conditions (e.g., metastases). Chemotaxis has high significance in the early phases of embryogenesis as development of germ layers is guided by gradients of signal molecules. 
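The LEGI picture mentioned above (fast local excitation balanced against slower global inhibition) can be written as a pair of first-order relaxation equations. The following toy sketch uses assumed rate constants and a simple step stimulus; it illustrates the adaptation behaviour but is not a quantitative model from the source:

```python
import numpy as np

# Toy LEGI (Local Excitation, Global Inhibition) response to a step in receptor
# occupancy S: excitation E tracks S quickly, inhibition I tracks S slowly, and the
# downstream response is taken as R = E - I, which peaks and then adapts back to zero.
k_e, k_i = 1.0, 0.1          # assumed rate constants: excitation is 10x faster
dt, steps = 0.01, 6000       # integrate for 60 time units

S = 1.0                      # step stimulus switched on at t = 0
E = np.zeros(steps)
I = np.zeros(steps)
for t in range(1, steps):
    E[t] = E[t - 1] + dt * k_e * (S - E[t - 1])
    I[t] = I[t - 1] + dt * k_i * (S - I[t - 1])

R = E - I
print("peak response:  ", round(R.max(), 3))    # transient peak shortly after the step
print("final response: ", round(R[-1], 3))      # near zero again: adaptation
```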
Detection of a gradient of chemoattractant The specific molecule or molecules that allow eukaryotic cells to detect a gradient of chemoattractant ligands (that is, a sort of molecular compass that detects the direction of a chemoattractant) seem to change depending on the cell and chemoattractant receptor involved, or even on the concentration of the chemoattractant. However, these molecules apparently are activated independently of the motility of the cell. That is, even an immobilized cell is still able to detect the direction of a chemoattractant. There appear to be mechanisms by which an external chemotactic gradient is sensed and turned into intracellular Ras and PIP3 gradients, which results in a gradient of activation of a signaling pathway, culminating in the polymerisation of actin filaments. The growing distal end of actin filaments develops connections with the internal surface of the plasma membrane via different sets of peptides and results in the formation of anterior pseudopods and posterior uropods. Cilia of eukaryotic cells can also produce chemotaxis; in this case, it is mainly a Ca2+-dependent induction of the microtubular system of the basal body and the beat of the 9 + 2 microtubules within cilia. The orchestrated beating of hundreds of cilia is synchronized by a submembranous system built between basal bodies. The details of the signaling pathways are still not totally clear. Chemotaxis-related migratory responses Chemotaxis refers to the directional migration of cells in response to chemical gradients; several variations of chemical-induced migration exist as listed below. Chemokinesis refers to an increase in cellular motility in response to chemicals in the surrounding environment. Unlike chemotaxis, the migration stimulated by chemokinesis lacks directionality, and instead increases environmental scanning behaviors. In haptotaxis the gradient of the chemoattractant is expressed or bound on a surface, in contrast to the classical model of chemotaxis, in which the gradient develops in a soluble fluid. The most common biologically active haptotactic surface is the extracellular matrix (ECM); the presence of bound ligands is responsible for induction of transendothelial migration and angiogenesis. Necrotaxis embodies a special type of chemotaxis when the chemoattractant molecules are released from necrotic or apoptotic cells. Depending on the chemical character of released substances, necrotaxis can accumulate or repel cells, which underlines the pathophysiological significance of this phenomenon. Receptors In general, eukaryotic cells sense the presence of chemotactic stimuli through the use of 7-transmembrane (or serpentine) heterotrimeric G-protein-coupled receptors, a class representing a significant portion of the genome. Some members of this gene superfamily are used in eyesight (rhodopsins) as well as in olfaction (smelling). The main classes of chemotaxis receptors are triggered by: Formyl peptides - formyl peptide receptors (FPR), Chemokines - chemokine receptors (CCR or CXCR), and Leukotrienes - leukotriene receptors (BLT). However, induction of a wide set of membrane receptors (e.g., cyclic nucleotides, amino acids, insulin, vasoactive peptides) also elicits migration of the cell. Chemotactic selection While some chemotaxis receptors are expressed in the surface membrane with long-term characteristics, as they are determined genetically, others have short-term dynamics, as they are assembled ad hoc in the presence of the ligand. 
The diverse features of the chemotaxis receptors and ligands allow for the possibility of selecting chemotactic responder cells with a simple chemotaxis assay. By chemotactic selection, we can determine whether a still-uncharacterized molecule acts via the long- or the short-term receptor pathway. The term chemotactic selection is also used to designate a technique that separates eukaryotic or prokaryotic cells according to their chemotactic responsiveness to selector ligands. Chemotactic ligands The number of molecules capable of eliciting chemotactic responses is relatively high, and we can distinguish primary and secondary chemotactic molecules. The main groups of the primary ligands are as follows: Formyl peptides are di-, tri-, or tetrapeptides of bacterial origin, formylated on the N-terminus of the peptide. They are released from bacteria in vivo or after decomposition of the cell; a typical member of this group is N-formylmethionyl-leucyl-phenylalanine (abbreviated fMLF or fMLP). Bacterial fMLF is a key component of inflammation and has characteristic chemoattractant effects in neutrophil granulocytes and monocytes. The chemotactic factor ligands and receptors related to formyl peptides are summarized in the related article, Formyl peptide receptors. Complement 3a (C3a) and complement 5a (C5a) are intermediate products of the complement cascade. Their synthesis is joined to the three alternative pathways (classical, lectin-dependent, and alternative) of complement activation by a convertase enzyme. The main target cells of these derivatives are neutrophil granulocytes and monocytes as well. Chemokines belong to a special class of cytokines; not only do their groups (C, CC, CXC, CX3C chemokines) represent structurally related molecules with a special arrangement of disulfide bridges but also their target cell specificity is diverse. CC chemokines act on monocytes (e.g., RANTES), and CXC chemokines are neutrophil granulocyte-specific (e.g., IL-8). Investigations of the three-dimensional structures of chemokines provided evidence that a characteristic composition of beta-sheets and an alpha helix provides expression of sequences required for interaction with the chemokine receptors. Formation of dimers and their increased biological activity was demonstrated by crystallography of several chemokines, e.g. IL-8. Metabolites of polyunsaturated fatty acids Leukotrienes are eicosanoid lipid mediators made by the metabolism of arachidonic acid by ALOX5 (also termed 5-lipoxygenase). Their most prominent member with chemotactic factor activity is leukotriene B4, which elicits adhesion, chemotaxis, and aggregation of leukocytes. The chemoattractant action of LTB4 is induced via either of two G protein–coupled receptors, BLT1 and BLT2, which are highly expressed in cells involved in inflammation and allergy. The family of 5-Hydroxyicosatetraenoic acid eicosanoids are arachidonic acid metabolites also formed by ALOX5. Three members of the family form naturally and have prominent chemotactic activity. These, listed in order of decreasing potency, are: 5-oxo-eicosatetraenoic acid, 5-oxo-15-hydroxy-eicosatetraenoic acid, and 5-Hydroxyeicosatetraenoic acid. This family of agonists stimulates chemotactic responses in human eosinophils, neutrophils, and monocytes by binding to the Oxoeicosanoid receptor 1, which, like the receptors for leukotriene B4, is a G protein-coupled receptor. Aside from the skin, neutrophils are the body's first line of defense against bacterial infections. 
After leaving nearby blood vessels, these cells recognize chemicals produced by bacteria in a cut or scratch and migrate "toward the smell". 5-hydroxyeicosatrienoic acid and 5-oxoeicosatrienoic acid are metabolites of Mead acid (5Z,8Z,11Z-eicosatrienoic acid); they stimulate leukocyte chemotaxis through the oxoeicosanoid receptor 1 with 5-oxoeicosatrienoic acid being as potent as its arachidonic acid-derived analog, 5-oxo-eicosatetraenoic acid, in stimulating human blood eosinophil and neutrophil chemotaxis. 12-Hydroxyeicosatetraenoic acid is an eicosanoid metabolite of arachidonic acid made by ALOX12 which stimulates leukocyte chemotaxis through the leukotriene B4 receptor, BLT2. Prostaglandin D2 is an eicosanoid metabolite of arachidonic acid made by cyclooxygenase 1 or cyclooxygenase 2 that stimulates chemotaxis through the Prostaglandin DP2 receptor. It elicits chemotactic responses in eosinophils, basophils, and T helper cells of the Th2 subtype. 12-Hydroxyheptadecatrienoic acid is a non-eicosanoid metabolite of arachidonic acid made by cyclooxygenase 1 or cyclooxygenase 2 that stimulates leukocyte chemotaxis through the leukotriene B4 receptor, BLT2. 15-oxo-eicosatetraenoic acid is an eicosanoid metabolite of arachidonic acid made by ALOX15; it has weak chemotactic activity for human monocytes (see 15-Hydroxyeicosatetraenoic acid#15-oxo-ETE). The receptor or other mechanism by which this metabolite stimulates chemotaxis has not been elucidated. Chemotactic range fitting Chemotactic responses elicited by ligand-receptor interactions vary with the concentration of the ligand. Investigations of ligand families (e.g. amino acids or oligopeptides) demonstrate that chemoattractant activity occurs over a wide range, while chemorepellent activities have narrow ranges. Clinical significance A changed migratory potential of cells has relatively high importance in the development of several clinical symptoms and syndromes. Altered chemotactic activity of extracellular (e.g., Escherichia coli) or intracellular (e.g., Listeria monocytogenes) pathogens itself represents a significant clinical target. Modification of endogenous chemotactic ability of these microorganisms by pharmaceutical agents can decrease or inhibit the ratio of infections or spreading of infectious diseases. Apart from infections, there are some other diseases wherein impaired chemotaxis is the primary etiological factor, as in Chédiak–Higashi syndrome, where giant intracellular vesicles inhibit normal migration of cells. Mathematical models Several mathematical models of chemotaxis were developed depending on the type of Migration (e.g., basic differences of bacterial swimming, movement of unicellular eukaryotes with cilia/flagellum and amoeboid migration) Physico-chemical characteristics of the chemicals (e.g., diffusion) working as ligands Biological characteristics of the ligands (attractant, neutral, and repellent molecules) Assay systems applied to evaluate chemotaxis (see incubation times, development, and stability of concentration gradients) Other environmental effects possessing direct or indirect influence on the migration (lighting, temperature, magnetic fields, etc.) Although interactions of the factors listed above make the behavior of the solutions of mathematical models of chemotaxis rather complex, it is possible to describe the basic phenomenon of chemotaxis-driven motion in a straightforward way. 
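The continuum description formalised in the next paragraph couples a cell-density field ρ to a chemo-attractant field φ. As an illustration only, a minimal sketch assuming a Keller–Segel-type form with made-up coefficients (not a model taken from the source) can be integrated on a one-dimensional grid:

```python
import numpy as np

# Minimal 1-D sketch of chemotaxis-driven motion in a Keller–Segel-type form,
# with illustrative coefficients; explicit finite differences on a periodic domain.
N, dx, dt = 200, 1.0, 0.01
D_rho, D_phi = 1.0, 5.0   # assumed diffusion coefficients (cells, attractant)
chi = 5.0                 # assumed chemotactic coefficient

x = np.arange(N) * dx
rho = np.ones(N)                                   # initially uniform cell density
phi = np.exp(-((x - N * dx / 2) ** 2) / 200.0)     # a bump of chemo-attractant

def lap(u):   # second derivative, periodic boundaries
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

def grad(u):  # first derivative, periodic boundaries
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

for _ in range(2000):
    # d(rho)/dt = D_rho * rho'' - d/dx( chi * rho * phi' )    (no growth term f)
    # d(phi)/dt = D_phi * phi''                               (no source term g)
    rho = rho + dt * (D_rho * lap(rho) - grad(chi * rho * grad(phi)))
    phi = phi + dt * (D_phi * lap(phi))

print("cell density at the attractant peak:", round(rho[N // 2], 3))
print("cell density far from the peak:", round(rho[0], 3))
# Cells pile up where the attractant is concentrated, as long as the chemotactic
# drift is not completely washed out by diffusion.
```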
Indeed, let us denote with φ the spatially non-uniform concentration of the chemo-attractant and with ∇φ its gradient. Then the chemotactic cellular flow (also called current) J_chemo that is generated by the chemotaxis is linked to the above gradient by the law: J_chemo = χρ∇φ, where ρ is the spatial density of the cells and χ is the so-called 'chemotactic coefficient'; χ is often not constant, but a decreasing function of the chemo-attractant. For some quantity q that is subject to a total flux J and a generation/destruction term S, it is possible to formulate a continuity equation: ∂q/∂t + ∇ · J = S, where ∇ · is the divergence. This general equation applies to both the cell density and the chemo-attractant. Therefore, incorporating a diffusion flux into the total flux term, the interactions between these quantities are governed by a set of coupled reaction-diffusion partial differential equations describing the change in ρ and φ: ∂ρ/∂t = f(ρ) + ∇ · (D_ρ∇ρ − χρ∇φ) and ∂φ/∂t = g(φ, ρ) + ∇ · (D_φ∇φ), where f describes the growth in cell density, g is the kinetics/source term for the chemo-attractant, and the diffusion coefficients for cell density and the chemo-attractant are respectively D_ρ and D_φ. Spatial ecology of soil microorganisms is a function of their chemotactic sensitivities towards substrate and fellow organisms. The chemotactic behavior of the bacteria was proven to lead to non-trivial population patterns even in the absence of environmental heterogeneities. The presence of structural pore scale heterogeneities has an extra impact on the emerging bacterial patterns. Measurement of chemotaxis A wide range of techniques is available to evaluate chemotactic activity of cells or the chemoattractant and chemorepellent character of ligands. The basic requirements of the measurement are as follows: Concentration gradients can develop relatively quickly and persist for a long time in the system Chemotactic and chemokinetic activities are distinguished Migration of cells is free toward and away on the axis of the concentration gradient Detected responses are the results of active migration of cells Despite the fact that an ideal chemotaxis assay is still not available, there are several protocols and pieces of equipment that offer good correspondence with the conditions described above. The most commonly used are summarised in the table below: Artificial chemotactic systems Chemical robots that use artificial chemotaxis to navigate autonomously have been designed. Applications include targeted delivery of drugs in the body. More recently, enzyme molecules have also shown positive chemotactic behavior in the gradient of their substrates. The thermodynamically favorable binding of enzymes to their specific substrates is recognized as the origin of enzymatic chemotaxis. Additionally, enzymes in cascades have also shown substrate-driven chemotactic aggregation. Apart from active enzymes, non-reacting molecules also show chemotactic behavior. This has been demonstrated by using dye molecules that move directionally in gradients of polymer solution through favorable hydrophobic interactions. See also McCutcheon index Tropism Durotaxis Haptotaxis Mechanotaxis Plithotaxis Thin layers (oceanography) References Further reading External links Chemotaxis Neutrophil Chemotaxis Cell Migration Gateway Downloadable Matlab chemotaxis simulator Bacterial Chemotaxis Interactive Simulator (web-app) Motile cells Perception Taxes (biology) Transmembrane receptors Transport phenomena
Chemotaxis
[ "Physics", "Chemistry", "Engineering" ]
6,263
[ "Transport phenomena", "Transmembrane receptors", "Physical phenomena", "Chemical engineering", "Signal transduction" ]
7,466
https://en.wikipedia.org/wiki/Coal%20tar
Coal tar is a thick dark liquid which is a by-product of the production of coke and coal gas from coal. It is a type of creosote. It has both medical and industrial uses. Medicinally it is a topical medication applied to skin to treat psoriasis and seborrheic dermatitis (dandruff). It may be used in combination with ultraviolet light therapy. Industrially it is a railroad tie preservative and used in the surfacing of roads. Coal tar was listed as a known human carcinogen in the first Report on Carcinogens from the U.S. Federal Government, issued in 1980. Coal tar was discovered circa 1665 and used for medical purposes as early as the 1800s. Circa 1850, the discovery that it could be used as the main raw material for the synthesis of dyes engendered an entire industry. It is on the World Health Organization's List of Essential Medicines. Coal tar is available as a generic medication and over the counter. Side effects include skin irritation, sun sensitivity, allergic reactions, and skin discoloration. It is unclear if use during pregnancy is safe for the baby and use during breastfeeding is not typically recommended. The exact mechanism of action is unknown. It is a complex mixture of phenols, polycyclic aromatic hydrocarbons (PAHs), and heterocyclic compounds. It demonstrates antifungal, anti-inflammatory, anti-itch, and antiparasitic properties. Composition Coal tar is produced through thermal destruction (pyrolysis) of coal. Its composition varies with the process and type of coal used – lignite, bituminous or anthracite. Coal tar is a mixture of approximately 10,000 chemicals, of which only about 50% have been identified. Most of the chemical compounds are polycyclic aromatic hydrocarbons (4-rings: chrysene, fluoranthene, pyrene, triphenylene, naphthacene, benzanthracene, 5-rings: picene, benzo[a]pyrene, benzo[e]pyrene, benzofluoranthenes, perylene, 6-rings: dibenzopyrenes, dibenzofluoranthenes, benzoperylenes, 7-rings: coronene), their methylated and polymethylated derivatives, mono- and polyhydroxylated derivatives, and heterocyclic compounds. Others: benzene, toluene, xylenes, cumenes, coumarone, indene, benzofuran, naphthalene and methyl-naphthalenes, acenaphthene, fluorene, phenol, cresols, pyridine, picolines, phenanthracene, carbazole, quinolines, fluoranthene. Many of these constituents are known carcinogens. Derivatives Various phenolic coal tar derivatives have analgesic (pain-killer) properties. These included acetanilide, phenacetin, and paracetamol (also known as acetaminophen). Paracetamol may be the only coal-tar derived analgesic still in use today. Industrial phenol is now usually synthesized from crude oil rather than coal tar. Coal tar derivatives are contra-indicated for people with the inherited red blood cell disorder glucose-6-phosphate dehydrogenase deficiency (G6PD deficiency), as they can cause oxidative stress leading to red blood cell breakdown. Mechanism of action The exact mechanism of action is unknown. Coal tar is a complex mixture of phenols, polycyclic aromatic hydrocarbons (PAHs), and heterocyclic compounds. It is a keratolytic agent, which reduces the growth rate of skin cells and softens the skin's keratin. Uses Medicinal Coal tar is on the World Health Organization's List of Essential Medicines, the most effective and safe medicines needed in a health system. Coal tar is generally available as a generic medication and over the counter. Coal tar is used in medicated shampoo, soap and ointment. 
It demonstrates antifungal, anti-inflammatory, anti-itch, and antiparasitic properties. It may be applied topically as a treatment for dandruff and psoriasis, and to kill and repel head lice. It may be used in combination with ultraviolet light therapy. Coal tar may be used in two forms: crude coal tar or a coal tar solution, also known as liquor carbonis detergens (LCD). Named brands include Denorex, Balnetar, Psoriasin, Tegrin, T/Gel, and Neutar. When used in the extemporaneous preparation of topical medications, it is supplied in the form of coal tar topical solution USP, which consists of a 20% w/v solution of coal tar in alcohol, with an additional 5% w/v of polysorbate 80 USP; this must then be diluted in an ointment base, such as petrolatum. Construction Coal tar was a component of the first sealed roads. In its original development by Edgar Purnell Hooley, tarmac was tar covered with granite chips. Later the filler used was industrial slag. Today, petroleum derived binders and sealers are more commonly used. These sealers are used to extend the life and reduce maintenance cost associated with asphalt pavements, primarily in asphalt road paving, car parks and walkways. Coal tar is incorporated into some parking-lot sealcoat products used to protect the structural integrity of the underlying pavement. Sealcoat products that are coal-tar based typically contain 20 to 35 percent coal-tar pitch. Research shows it is used throughout the United States of America; however, several areas have banned its use in sealcoat products, including the District of Columbia; the city of Austin, Texas; Dane County, Wisconsin; the state of Washington; and several municipalities in Minnesota and others. Industry In modern times, coal tar is mostly traded as a fuel and for tar applications such as roofing. The total value of the trade in coal tar is around US$20 billion each year. As a fuel. In the manufacture of paints, synthetic dyes (notably tartrazine/Yellow #5), and photographic materials. For heating or to fire boilers. Like most heavy oils, it must be heated before it will flow easily. As a source of carbon black. As a binder in manufacturing graphite; a considerable portion of the materials in "green blocks" is coke oven volatiles (COV). During the baking process of the green blocks as a part of commercial graphite production, most of the coal tar binders are vaporised and are generally burned in an incinerator to prevent release into the atmosphere, as COV and coal tar can be injurious to health. As a main component of the electrode paste used in electric arc furnaces. Coal tar pitch acts as the binder for the solid filler, which can be either coke or calcined anthracite, forming electrode paste, also widely known as Söderberg electrode paste. As a feed stock for higher-value fractions, such as naphtha, creosote and pitch. In the coal gas era, companies distilled coal tar to separate these out, leading to the discovery of many industrial chemicals. Some British companies included: Bonnington Chemical Works British Tar Products Lancashire Tar Distillers Midland Tar Distillers Newton, Chambers & Company (owners of Izal brand disinfectant) Sadlers Chemicals Safety Side effects of coal tar products include skin irritation, sun sensitivity, allergic reactions, and skin discoloration. It is unclear if use during pregnancy is safe for the baby and use during breastfeeding is not typically recommended. 
According to the National Psoriasis Foundation, coal tar is a valuable, safe and inexpensive treatment option for millions of people with psoriasis and other scalp or skin conditions. According to the FDA, coal tar concentrations between 0.5% and 5% are considered safe and effective for psoriasis. Cancer Long-term, consistent exposure to coal tar likely increases the risk of non-melanoma skin cancers. Evidence is inconclusive whether medical coal tar, which does not remain on the skin for the long periods seen in occupational exposure, causes cancer, because there is insufficient data to make a judgment. While coal tar consistently causes cancer in cohorts of workers with chronic occupational exposure, animal models, and mechanistic studies, the data on short-term use as medicine in humans has so far failed to show any consistently significant increase in rates of cancer. Coal tar contains many polycyclic aromatic hydrocarbons, and it is believed that their metabolites bind to DNA, damaging it. The PAHs found in coal tar and air pollution induce immunosenescence and cytotoxicity in epidermal cells. It's possible that the skin can repair itself from this damage after short-term exposure to PAHs but not after long-term exposure. Long-term skin exposure to these compounds can produce "tar warts", which can progress to squamous cell carcinoma. Coal tar was one of the first chemical substances proven to cause cancer from occupational exposure, during research in 1775 on the cause of chimney sweeps' carcinoma. Modern studies have shown that working with coal tar pitch, such as during the paving of roads or when working on roofs, increases the risk of cancer. The International Agency for Research on Cancer lists coal tars as Group 1 carcinogens, meaning they directly cause cancer. The U.S. Department of Health and Human Services lists coal tars as known human carcinogens. In response to public health concerns regarding the carcinogenicity of PAHs some municipalities, such as the city of Milwaukee, have banned the use of common coal tar-based road and driveway sealants citing concerns of elevated PAH content in groundwater. Other Coal tar causes increased sensitivity to sunlight, so skin treated with topical coal tar preparations should be protected from sunlight. The residue from the distillation of high-temperature coal tar, primarily a complex mixture of three or more membered condensed ring aromatic hydrocarbons, was listed on 13 January 2010 as a substance of very high concern by the European Chemicals Agency. Regulation Exposure to coal tar pitch volatiles can occur in the workplace by breathing, skin contact, or eye contact. The Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit) to 0.2 mg/m3 benzene-soluble fraction over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.1 mg/m3 cyclohexane-extractable fraction over an 8-hour workday. At levels of 80 mg/m3, coal tar pitch volatiles are immediately dangerous to life and health. When used as a medication in the United States, coal tar preparations are considered over-the-counter drug pharmaceuticals and are subject to regulation by the Food and Drug Administration (FDA). See also Coal oil Wood tar References External links Antipsoriatics Coal IARC Group 1 carcinogens Materials World Health Organization essential medicines Wikipedia medicine articles ready to translate Drugs with unknown mechanisms of action
Coal tar
[ "Physics" ]
2,371
[ "Materials", "Matter" ]
7,480
https://en.wikipedia.org/wiki/Cross%20section%20%28physics%29
In physics, the cross section is a measure of the probability that a specific process will take place in a collision of two particles. For example, the Rutherford cross-section is a measure of probability that an alpha particle will be deflected by a given angle during an interaction with an atomic nucleus. Cross section is typically denoted (sigma) and is expressed in units of area, more specifically in barns. In a way, it can be thought of as the size of the object that the excitation must hit in order for the process to occur, but more exactly, it is a parameter of a stochastic process. When two discrete particles interact in classical physics, their mutual cross section is the area transverse to their relative motion within which they must meet in order to scatter from each other. If the particles are hard inelastic spheres that interact only upon contact, their scattering cross section is related to their geometric size. If the particles interact through some action-at-a-distance force, such as electromagnetism or gravity, their scattering cross section is generally larger than their geometric size. When a cross section is specified as the differential limit of a function of some final-state variable, such as particle angle or energy, it is called a differential cross section (see detailed discussion below). When a cross section is integrated over all scattering angles (and possibly other variables), it is called a total cross section or integrated total cross section. For example, in Rayleigh scattering, the intensity scattered at the forward and backward angles is greater than the intensity scattered sideways, so the forward differential scattering cross section is greater than the perpendicular differential cross section, and by adding all of the infinitesimal cross sections over the whole range of angles with integral calculus, we can find the total cross section. Scattering cross sections may be defined in nuclear, atomic, and particle physics for collisions of accelerated beams of one type of particle with targets (either stationary or moving) of a second type of particle. The probability for any given reaction to occur is in proportion to its cross section. Thus, specifying the cross section for a given reaction is a proxy for stating the probability that a given scattering process will occur. The measured reaction rate of a given process depends strongly on experimental variables such as the density of the target material, the intensity of the beam, the detection efficiency of the apparatus, or the angle setting of the detection apparatus. However, these quantities can be factored away, allowing measurement of the underlying two-particle collisional cross section. Differential and total scattering cross sections are among the most important measurable quantities in nuclear, atomic, and particle physics. With light scattering off of a particle, the cross section specifies the amount of optical power scattered from light of a given irradiance (power per area). Although the cross section has the same units as area, the cross section may not necessarily correspond to the actual physical size of the target given by other forms of measurement. It is not uncommon for the actual cross-sectional area of a scattering object to be much larger or smaller than the cross section relative to some physical process. For example, plasmonic nanoparticles can have light scattering cross sections for particular frequencies that are much larger than their actual cross-sectional areas. 
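As a rough numerical illustration of how a cross section converts into a measured event rate once the experimental variables are factored in, the sketch below uses the standard thin-target relation R = I·n·x·σ. The beam current, target density, foil thickness and 0.5 b cross section are assumed values chosen for the example, not data from the article.

```python
BARN = 1e-28  # m^2

def thin_target_rate(beam_particles_per_s, target_density_m3, thickness_m, sigma_m2):
    """Thin-target event rate R = I * n * x * sigma: each beam particle is
    assumed to interact with at most one target particle."""
    return beam_particles_per_s * target_density_m3 * thickness_m * sigma_m2

# Assumed numbers: 1e12 beam particles/s on a 10 micrometre foil with
# 6e28 atoms per m^3, for a process with a 0.5 barn cross section.
print(f"{thin_target_rate(1e12, 6e28, 10e-6, 0.5 * BARN):.2e} events per second")
```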
Collision among gas particles In a gas of finite-sized particles there are collisions among particles that depend on their cross-sectional size. The average distance that a particle travels between collisions depends on the density of gas particles. These quantities are related by σ ℓ n = 1, equivalently ℓ = 1/(n σ), where σ is the cross section of a two-particle collision (SI unit: m2), ℓ is the mean free path between collisions (SI unit: m), n is the number density of the target particles (SI unit: m−3). If the particles in the gas can be treated as hard spheres of radius r that interact by direct contact, as illustrated in Figure 1, then the effective cross section for the collision of a pair is σ = π(2r)². If the particles in the gas interact by a force with a larger range than their physical size, then the cross section is a larger effective area that may depend on a variety of variables such as the energy of the particles. Cross sections can be computed for atomic collisions but also are used in the subatomic realm. For example, in nuclear physics a "gas" of low-energy neutrons collides with nuclei in a reactor or other nuclear device, with a cross section that is energy-dependent and hence also with well-defined mean free path between collisions. Attenuation of a beam of particles If a beam of particles enters a thin layer of material of thickness dx, the flux F of the beam will decrease by dF according to dF = −F n σ dx, where σ is the total cross section of all events, including scattering, absorption, or transformation to another species. The volumetric number density of scattering centers is designated by n. Solving this equation exhibits the exponential attenuation of the beam intensity: F = F0 exp(−n σ x), where F0 is the initial flux, and x is the total thickness of the material. For light, this is called the Beer–Lambert law. Differential cross section Consider a classical measurement where a single particle is scattered off a single stationary target particle. Conventionally, a spherical coordinate system is used, with the target placed at the origin and the axis of this coordinate system aligned with the incident beam. The angle θ is the scattering angle, measured between the incident beam and the scattered beam, and the φ is the azimuthal angle. The impact parameter b is the perpendicular offset of the trajectory of the incoming particle, and the outgoing particle emerges at an angle θ. For a given interaction (coulombic, magnetic, gravitational, contact, etc.), the impact parameter and the scattering angle have a definite one-to-one functional dependence on each other. Generally the impact parameter can neither be controlled nor measured from event to event and is assumed to take all possible values when averaging over many scattering events. The differential size of the cross section is the area element in the plane of the impact parameter, i.e. dσ = b db dφ. The differential angular range of the scattered particle at angle θ is the solid angle element dΩ = sin θ dθ dφ. The differential cross section is the quotient of these quantities, dσ/dΩ. It is a function of the scattering angle (and therefore also the impact parameter), plus other observables such as the momentum of the incoming particle. The differential cross section is always taken to be positive, even though larger impact parameters generally produce less deflection. In cylindrically symmetric situations (about the beam axis), the azimuthal angle φ is not changed by the scattering process, and the differential cross section can be written as dσ/dΩ = (b / sin θ) |db/dθ|. 
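Before continuing with the differential cross section, the gas-collision and attenuation relations above can be checked numerically. The sketch below assumes an ideal gas at 300 K and 1 atm and a hard-sphere molecular radius of 0.18 nm; both assumptions are illustrative rather than taken from the text.

```python
import math

def mean_free_path(number_density, cross_section):
    """l = 1 / (n * sigma), SI units throughout."""
    return 1.0 / (number_density * cross_section)

def attenuated_flux(flux_in, number_density, cross_section, thickness):
    """Exponential attenuation of a beam: F = F0 * exp(-n * sigma * x)."""
    return flux_in * math.exp(-number_density * cross_section * thickness)

k_B = 1.380649e-23                     # Boltzmann constant, J/K
n = 101325 / (k_B * 300)               # ideal-gas number density at 1 atm, 300 K
sigma = math.pi * (2 * 0.18e-9) ** 2   # hard-sphere sigma = pi*(2r)^2, r assumed
print(f"mean free path ~ {mean_free_path(n, sigma) * 1e9:.0f} nm")
print(f"flux remaining after 200 nm: {attenuated_flux(1.0, n, sigma, 200e-9):.2f}")
```

With these assumed values the mean free path comes out on the order of 100 nm, the right scale for a gas at atmospheric pressure.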
In situations where the scattering process is not azimuthally symmetric, such as when the beam or target particles possess magnetic moments oriented perpendicular to the beam axis, the differential cross section must also be expressed as a function of the azimuthal angle. For scattering of particles of incident flux off a stationary target consisting of many particles, the differential cross section at an angle is related to the flux of scattered particle detection in particles per unit time by Here is the finite angular size of the detector (SI unit: sr), is the number density of the target particles (SI unit: m−3), and is the thickness of the stationary target (SI unit: m). This formula assumes that the target is thin enough that each beam particle will interact with at most one target particle. The total cross section may be recovered by integrating the differential cross section over the full solid angle ( steradians): It is common to omit the "differential" qualifier when the type of cross section can be inferred from context. In this case, may be referred to as the integral cross section or total cross section. The latter term may be confusing in contexts where multiple events are involved, since "total" can also refer to the sum of cross sections over all events. The differential cross section is extremely useful quantity in many fields of physics, as measuring it can reveal a great amount of information about the internal structure of the target particles. For example, the differential cross section of Rutherford scattering provided strong evidence for the existence of the atomic nucleus. Instead of the solid angle, the momentum transfer may be used as the independent variable of differential cross sections. Differential cross sections in inelastic scattering contain resonance peaks that indicate the creation of metastable states and contain information about their energy and lifetime. Quantum scattering In the time-independent formalism of quantum scattering, the initial wave function (before scattering) is taken to be a plane wave with definite momentum : where and are the relative coordinates between the projectile and the target. The arrow indicates that this only describes the asymptotic behavior of the wave function when the projectile and target are too far apart for the interaction to have any effect. After scattering takes place it is expected that the wave function takes on the following asymptotic form: where is some function of the angular coordinates known as the scattering amplitude. This general form is valid for any short-ranged, energy-conserving interaction. It is not true for long-ranged interactions, so there are additional complications when dealing with electromagnetic interactions. The full wave function of the system behaves asymptotically as the sum The differential cross section is related to the scattering amplitude: This has the simple interpretation as the probability density for finding the scattered projectile at a given angle. A cross section is therefore a measure of the effective surface area seen by the impinging particles, and as such is expressed in units of area. The cross section of two particles (i.e. observed when the two particles are colliding with each other) is a measure of the interaction event between the two particles. 
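The recovery of the total cross section by integrating dσ/dΩ over the full solid angle, mentioned earlier in this section, is easy to reproduce numerically. The sketch below assumes azimuthal symmetry and uses the constant hard-sphere differential cross section R²/4 as a test case whose integral should return the geometric value πR²; the radius is an arbitrary illustrative choice.

```python
import numpy as np

def total_cross_section(dsigma_domega, n_theta=20001):
    """sigma = 2*pi * integral_0^pi dsigma/dOmega(theta) * sin(theta) dtheta,
    assuming azimuthal symmetry; evaluated with a simple trapezoidal rule."""
    theta = np.linspace(0.0, np.pi, n_theta)
    y = dsigma_domega(theta) * np.sin(theta)
    return 2.0 * np.pi * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(theta))

R = 1.0e-10  # assumed 0.1 nm hard-sphere radius, purely illustrative
sigma = total_cross_section(lambda th: np.full_like(th, R**2 / 4.0))
print(sigma, np.pi * R**2)  # the two values should agree closely
```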
The cross section is proportional to the probability that an interaction will occur; for example in a simple scattering experiment the number of particles scattered per unit of time (current of scattered particles ) depends only on the number of incident particles per unit of time (current of incident particles ), the characteristics of target (for example the number of particles per unit of surface ), and the type of interaction. For we have Relation to the S-matrix If the reduced masses and momenta of the colliding system are , and , before and after the collision respectively, the differential cross section is given by where the on-shell matrix is defined by in terms of the S-matrix. Here is the Dirac delta function. The computation of the S-matrix is the main goal of the scattering theory. Units Although the SI unit of total cross sections is m2, a smaller unit is usually used in practice. In nuclear and particle physics, the conventional unit is the barn b, where 1 b = 10−28 m2 = 100 fm2. Smaller prefixed units such as mb and μb are also widely used. Correspondingly, the differential cross section can be measured in units such as mb/sr. When the scattered radiation is visible light, it is conventional to measure the path length in centimetres. To avoid the need for conversion factors, the scattering cross section is expressed in cm2, and the number concentration in cm−3. The measurement of the scattering of visible light is known as nephelometry, and is effective for particles of 2–50 μm in diameter: as such, it is widely used in meteorology and in the measurement of atmospheric pollution. The scattering of X-rays can also be described in terms of scattering cross sections, in which case the square ångström is a convenient unit: 1 Å2 = 10−20 m2 = = 108 b. The sum of the scattering, photoelectric, and pair-production cross-sections (in barns) is charted as the "atomic attenuation coefficient" (narrow-beam), in barns. Scattering of light For light, as in other settings, the scattering cross section for particles is generally different from the geometrical cross section of the particle, and it depends upon the wavelength of light and the permittivity, shape, and size of the particle. The total amount of scattering in a sparse medium is proportional to the product of the scattering cross section and the number of particles present. In the interaction of light with particles, many processes occur, each with their own cross sections, including absorption, scattering, and photoluminescence. The sum of the absorption and scattering cross sections is sometimes referred to as the attenuation or extinction cross section. The total extinction cross section is related to the attenuation of the light intensity through the Beer–Lambert law, which says that attenuation is proportional to particle concentration: where is the attenuation at a given wavelength , is the particle concentration as a number density, and is the path length. The absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance : Combining the scattering and absorption cross sections in this manner is often necessitated by the inability to distinguish them experimentally, and much research effort has been put into developing models that allow them to be distinguished, the Kubelka-Munk theory being one of the most important in this area. 
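The Beer–Lambert relation between the extinction cross section, particle concentration and attenuation quoted above can be evaluated directly; the particle density, cross section and path length in the sketch are arbitrary illustrative numbers chosen so that the optical depth is exactly one.

```python
import math

def transmittance(sigma_ext, number_density, path_length):
    """Beer-Lambert attenuation for a sparse suspension: T = exp(-sigma*N*l),
    with sigma in m^2, N in m^-3 and l in m."""
    return math.exp(-sigma_ext * number_density * path_length)

def napierian_absorbance(T):
    """Natural-log absorbance A = ln(1/T)."""
    return math.log(1.0 / T)

# Illustrative values: 1e15 particles per m^3 with an extinction cross
# section of 1e-14 m^2 over a 10 cm path.
T = transmittance(1e-14, 1e15, 0.10)
print(f"T = {T:.3f}, A = {napierian_absorbance(T):.3f}")
```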
Cross section and Mie theory Cross sections commonly calculated using Mie theory include efficiency coefficients for extinction , scattering , and Absorption cross sections. These are normalized by the geometrical cross sections of the particle as The cross section is defined by where is the energy flow through the surrounding surface, and is the intensity of the incident wave. For a plane wave the intensity is going to be , where is the impedance of the host medium. The main approach is based on the following. Firstly, we construct an imaginary sphere of radius (surface ) around the particle (the scatterer). The net rate of electromagnetic energy crosses the surface is where is the time averaged Poynting vector. If energy is absorbed within the sphere, otherwise energy is being created within the sphere. We will not consider this case here. If the host medium is non-absorbing, the energy must be absorbed by the particle. We decompose the total field into incident and scattered parts , and the same for the magnetic field . Thus, we can decompose into the three terms , where where , , and . All the field can be decomposed into the series of vector spherical harmonics (VSH). After that, all the integrals can be taken. In the case of a uniform sphere of radius , permittivity , and permeability , the problem has a precise solution. The scattering and extinction coefficients are Where . These are connected as Dipole approximation for the scattering cross section Let us assume that a particle supports only electric and magnetic dipole modes with polarizabilities and (here we use the notation of magnetic polarizability in the manner of Bekshaev et al. rather than the notation of Nieto-Vesperinas et al.) expressed through the Mie coefficients as Then the cross sections are given by and, finally, the electric and magnetic absorption cross sections are and For the case of a no-inside-gain particle, i.e. no energy is emitted by the particle internally (), we have a particular case of the Optical theorem Equality occurs for non-absorbing particles, i.e. for . Scattering of light on extended bodies In the context of scattering light on extended bodies, the scattering cross section, , describes the likelihood of light being scattered by a macroscopic particle. In general, the scattering cross section is different from the geometrical cross section of a particle, as it depends upon the wavelength of light and the permittivity in addition to the shape and size of the particle. The total amount of scattering in a sparse medium is determined by the product of the scattering cross section and the number of particles present. In terms of area, the total cross section () is the sum of the cross sections due to absorption, scattering, and luminescence: The total cross section is related to the absorbance of the light intensity through the Beer–Lambert law, which says that absorbance is proportional to concentration: , where is the absorbance at a given wavelength , is the concentration as a number density, and is the path length. The extinction or absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance : Relation to physical size There is no simple relationship between the scattering cross section and the physical size of the particles, as the scattering cross section depends on the wavelength of radiation used. 
This can be seen when looking at a halo surrounding the Moon on a decently foggy evening: Red light photons experience a larger cross sectional area of water droplets than photons of higher energy. The halo around the Moon thus has a perimeter of red light due to lower energy photons being scattered farther from the center of the Moon. Photons from the rest of the visible spectrum are left within the center of the halo and perceived as white light. Meteorological range The scattering cross section is related to the meteorological range LV: LV ≈ 3.9 / (C σscat). The quantity C σscat is sometimes denoted bscat, the scattering coefficient per unit length. Examples Elastic collision of two hard spheres The following equations apply to two hard spheres that undergo a perfectly elastic collision. Let r1 and r2 denote the radii of the scattering center and scattered sphere, respectively. The differential cross section is dσ/dΩ = (r1 + r2)²/4 and the total cross section is σ = π(r1 + r2)². In other words, the total scattering cross section is equal to the area of the circle (with radius r1 + r2) within which the center of mass of the incoming sphere has to arrive for it to be deflected. Rutherford scattering In Rutherford scattering, an incident particle with charge q1 and energy E scatters off a fixed particle with charge q2. The differential cross section is dσ/dΩ = (q1 q2 / (16π ε0 E))² / sin⁴(θ/2), where ε0 is the vacuum permittivity. The total cross section is infinite unless a cutoff for small scattering angles is applied. This is due to the long range of the Coulomb potential. Scattering from a 2D circular mirror The following example deals with a beam of light scattering off a circle with radius r and a perfectly reflecting boundary. The beam consists of a uniform density of parallel rays, and the beam-circle interaction is modeled within the framework of geometric optics. Because the problem is genuinely two-dimensional, the cross section has unit of length (e.g., metre). Let α be the angle between the light ray and the radius joining the reflection point of the ray with the center point of the mirror. Then the increase of the length element perpendicular to the beam is dx = r cos α dα. The reflection angle of this ray with respect to the incoming ray is 2α, and the scattering angle is θ = π − 2α. The differential relationship between incident and reflected intensity follows from equating the power intercepted in dx with the power emerging in dθ. The differential cross section is therefore dσ/dθ = (r/2) sin(θ/2). Its maximum at θ = π corresponds to backward scattering, and its minimum at θ → 0 corresponds to scattering from the edge of the circle directly forward. This expression confirms the intuitive expectations that the mirror circle acts like a diverging lens. The total cross section is equal to the diameter of the circle: σ = 2r. Scattering from a 3D spherical mirror The result from the previous example can be used to solve the analogous problem in three dimensions, i.e., scattering from a perfectly reflecting sphere of radius a. The plane perpendicular to the incoming light beam can be parameterized by cylindrical coordinates r and φ. In any plane of the incoming and the reflected ray we can write (from the previous example): r = a sin α and dr = a cos α dα, while the impact area element is dσ = r dr dφ. In spherical coordinates, dΩ = sin θ dθ dφ. Together with the trigonometric identity sin θ = 2 sin(θ/2) cos(θ/2), we obtain dσ/dΩ = a²/4. The total cross section is σ = π a². See also Cross section (geometry) Flow velocity Luminosity (scattering theory) Linear attenuation coefficient Mass attenuation coefficient Neutron cross section Nuclear cross section Gamma ray cross section Partial wave analysis Particle detector Radar cross-section Rutherford scattering Scattering amplitude References Bibliography J. D. Bjorken, S. D. Drell, Relativistic Quantum Mechanics, 1964 P. 
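As a worked instance of the Rutherford formula above, the sketch below evaluates the differential cross section for an alpha particle scattering off a gold nucleus; the 5 MeV kinetic energy and 60° angle are chosen purely for illustration.

```python
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12         # vacuum permittivity, F/m
BARN = 1e-28                    # m^2

def rutherford_dsigma_domega(z1, z2, kinetic_energy_J, theta_rad):
    """Classical Rutherford differential cross section in m^2/sr:
    dsigma/dOmega = (z1*z2*e^2 / (16*pi*eps0*E))^2 / sin^4(theta/2)."""
    prefactor = (z1 * z2 * E_CHARGE**2 / (16 * math.pi * EPS0 * kinetic_energy_J)) ** 2
    return prefactor / math.sin(theta_rad / 2.0) ** 4

# Illustrative case: a 5 MeV alpha particle (z = 2) on gold (Z = 79) at 60 degrees.
E = 5e6 * E_CHARGE
ds = rutherford_dsigma_domega(2, 79, E, math.radians(60.0))
print(f"dsigma/dOmega ~ {ds / BARN:.1f} b/sr")   # roughly 20 b/sr
```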
Roman, Introduction to Quantum Theory, 1969 W. Greiner, J. Reinhardt, Quantum Electrodynamics, 1994 R. G. Newton. Scattering Theory of Waves and Particles. McGraw Hill, 1966. External links Nuclear Cross Section Scattering Cross Section IAEA – Nuclear Data Services BNL – National Nuclear Data Center Particle Data Group – The Review of Particle Physics IUPAC Goldbook – Definition: Reaction Cross Section IUPAC Goldbook – Definition: Collision Cross Section ShimPlotWell cross section plotter for nuclear data Atomic physics Physical quantities Dimensional analysis Experimental particle physics Measurement Nuclear physics Particle physics Scattering theory Scattering, absorption and radiative transfer (optics) Scattering Spectroscopy
Cross section (physics)
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
4,140
[ "Physical phenomena", "Physical quantities", "Quantum mechanics", "Spectroscopy", "Instrumental analysis", "Measurement", "Scattering", "Particle physics", " molecular", "Nuclear physics", " and optical physics", "Molecular physics", "Spectrum (physical sciences)", "Quantity", "Size", ...
7,522
https://en.wikipedia.org/wiki/Calorimetry
In chemistry and thermodynamics, calorimetry () is the science or act of measuring changes in state variables of a body for the purpose of deriving the heat transfer associated with changes of its state due, for example, to chemical reactions, physical changes, or phase transitions under specified constraints. Calorimetry is performed with a calorimeter. Scottish physician and scientist Joseph Black, who was the first to recognize the distinction between heat and temperature, is said to be the founder of the science of calorimetry. Indirect calorimetry calculates heat that living organisms produce by measuring either their production of carbon dioxide and nitrogen waste (frequently ammonia in aquatic organisms, or urea in terrestrial ones), or from their consumption of oxygen. Lavoisier noted in 1780 that heat production can be predicted from oxygen consumption this way, using multiple regression. The dynamic energy budget theory explains why this procedure is correct. Heat generated by living organisms may also be measured by direct calorimetry, in which the entire organism is placed inside the calorimeter for the measurement. A widely used modern instrument is the differential scanning calorimeter, a device which allows thermal data to be obtained on small amounts of material. It involves heating the sample at a controlled rate and recording the heat flow either into or from the specimen. Classical calorimetric calculation of heat Cases with differentiable equation of state for a one-component body Basic classical calculation with respect to volume Calorimetry requires that a reference material that changes temperature have known definite thermal constitutive properties. The classical rule, recognized by Clausius and Kelvin, is that the pressure exerted by the calorimetric material is fully and rapidly determined solely by its temperature and volume; this rule is for changes that do not involve phase change, such as melting of ice. There are many materials that do not comply with this rule, and for them, the present formula of classical calorimetry does not provide an adequate account. Here the classical rule is assumed to hold for the calorimetric material being used, and the propositions are mathematically written: The thermal response of the calorimetric material is fully described by its pressure as the value of its constitutive function of just the volume and the temperature . All increments are here required to be very small. This calculation refers to a domain of volume and temperature of the body in which no phase change occurs, and there is only one phase present. An important assumption here is continuity of property relations. A different analysis is needed for phase change When a small increment of heat is gained by a calorimetric body, with small increments, of its volume, and of its temperature, the increment of heat, , gained by the body of calorimetric material, is given by where denotes the latent heat with respect to volume, of the calorimetric material at constant controlled temperature . The surroundings' pressure on the material is instrumentally adjusted to impose a chosen volume change, with initial volume . To determine this latent heat, the volume change is effectively the independently instrumentally varied quantity. This latent heat is not one of the widely used ones, but is of theoretical or conceptual interest. 
denotes the heat capacity, of the calorimetric material at fixed constant volume , while the pressure of the material is allowed to vary freely, with initial temperature . The temperature is forced to change by exposure to a suitable heat bath. It is customary to write simply as , or even more briefly as . This latent heat is one of the two widely used ones. The latent heat with respect to volume is the heat required for unit increment in volume at constant temperature. It can be said to be 'measured along an isotherm', and the pressure the material exerts is allowed to vary freely, according to its constitutive law . For a given material, it can have a positive or negative sign or exceptionally it can be zero, and this can depend on the temperature, as it does for water about 4 C. The concept of latent heat with respect to volume was perhaps first recognized by Joseph Black in 1762. The term 'latent heat of expansion' is also used. The latent heat with respect to volume can also be called the 'latent energy with respect to volume'. For all of these usages of 'latent heat', a more systematic terminology uses 'latent heat capacity'. The heat capacity at constant volume is the heat required for unit increment in temperature at constant volume. It can be said to be 'measured along an isochor', and again, the pressure the material exerts is allowed to vary freely. It always has a positive sign. This means that for an increase in the temperature of a body without change of its volume, heat must be supplied to it. This is consistent with common experience. Quantities like are sometimes called 'curve differentials', because they are measured along curves in the surface. Classical theory for constant-volume (isochoric) calorimetry Constant-volume calorimetry is calorimetry performed at a constant volume. This involves the use of a constant-volume calorimeter. Heat is still measured by the above-stated principle of calorimetry. This means that in a suitably constructed calorimeter, called a bomb calorimeter, the increment of volume can be made to vanish, . For constant-volume calorimetry: where denotes the increment in temperature and denotes the heat capacity at constant volume. Classical heat calculation with respect to pressure From the above rule of calculation of heat with respect to volume, there follows one with respect to pressure. In a process of small increments, of its pressure, and of its temperature, the increment of heat, , gained by the body of calorimetric material, is given by where denotes the latent heat with respect to pressure, of the calorimetric material at constant temperature, while the volume and pressure of the body are allowed to vary freely, at pressure and temperature ; denotes the heat capacity, of the calorimetric material at constant pressure, while the temperature and volume of the body are allowed to vary freely, at pressure and temperature . It is customary to write simply as , or even more briefly as . The new quantities here are related to the previous ones: where denotes the partial derivative of with respect to evaluated for and denotes the partial derivative of with respect to evaluated for . The latent heats and are always of opposite sign. It is common to refer to the ratio of specific heats as often just written as . Calorimetry through phase change, equation of state shows one jump discontinuity An early calorimeter was that used by Laplace and Lavoisier, as shown in the figure above. It worked at constant temperature, and at atmospheric pressure. 
The latent heat involved was then not a latent heat with respect to volume or with respect to pressure, as in the above account for calorimetry without phase change. The latent heat involved in this calorimeter was with respect to phase change, naturally occurring at constant temperature. This kind of calorimeter worked by measurement of mass of water produced by the melting of ice, which is a phase change. Cumulation of heating For a time-dependent process of heating of the calorimetric material, defined by a continuous joint progression of and , starting at time and ending at time , there can be calculated an accumulated quantity of heat delivered, . This calculation is done by mathematical integration along the progression with respect to time. This is because increments of heat are 'additive'; but this does not mean that heat is a conservative quantity. The idea that heat was a conservative quantity was invented by Lavoisier, and is called the 'caloric theory'; by the middle of the nineteenth century it was recognized as mistaken. Written with the symbol , the quantity is not at all restricted to be an increment with very small values; this is in contrast with . One can write . This expression uses quantities such as which are defined in the section below headed 'Mathematical aspects of the above rules'. Mathematical aspects of the above rules The use of 'very small' quantities such as is related to the physical requirement for the quantity to be 'rapidly determined' by and ; such 'rapid determination' refers to a physical process. These 'very small' quantities are used in the Leibniz approach to the infinitesimal calculus. The Newton approach uses instead 'fluxions' such as , which makes it more obvious that must be 'rapidly determined'. In terms of fluxions, the above first rule of calculation can be written where denotes the time denotes the time rate of heating of the calorimetric material at time denotes the time rate of change of volume of the calorimetric material at time denotes the time rate of change of temperature of the calorimetric material. The increment and the fluxion are obtained for a particular time that determines the values of the quantities on the righthand sides of the above rules. But this is not a reason to expect that there should exist a mathematical function . For this reason, the increment is said to be an 'imperfect differential' or an 'inexact differential'. Some books indicate this by writing instead of . Also, the notation đQ is used in some books. Carelessness about this can lead to error. The quantity is properly said to be a functional of the continuous joint progression of and , but, in the mathematical definition of a function, is not a function of . Although the fluxion is defined here as a function of time , the symbols and respectively standing alone are not defined here. Physical scope of the above rules of calorimetry The above rules refer only to suitable calorimetric materials. The terms 'rapidly' and 'very small' call for empirical physical checking of the domain of validity of the above rules. The above rules for the calculation of heat belong to pure calorimetry. They make no reference to thermodynamics, and were mostly understood before the advent of thermodynamics. They are the basis of the 'thermo' contribution to thermodynamics. The 'dynamics' contribution is based on the idea of work, which is not used in the above rules of calculation. 
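Returning to the Laplace–Lavoisier ice calorimeter described earlier, the conversion from melted-water mass to heat is a one-line calculation. The sketch below assumes the standard latent heat of fusion of ice, about 334 kJ/kg, and an illustrative 12 g of meltwater; neither number comes from the text.

```python
L_FUSION_ICE = 334e3  # J/kg, assumed standard value for ice at 0 degrees C

def heat_from_melted_ice(melted_water_kg):
    """Heat absorbed by the ice jacket at constant temperature: Q = m * L_fusion."""
    return melted_water_kg * L_FUSION_ICE

print(f"{heat_from_melted_ice(0.012) / 1e3:.1f} kJ released by the specimen")
# 12 g of meltwater corresponds to roughly 4.0 kJ of heat
```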
Experimentally conveniently measured coefficients Empirically, it is convenient to measure properties of calorimetric materials under experimentally controlled conditions. Pressure increase at constant volume For measurements at experimentally controlled volume, one can use the assumption, stated above, that the pressure of the body of calorimetric material is can be expressed as a function of its volume and temperature. For measurement at constant experimentally controlled volume, the isochoric coefficient of pressure rise with temperature, is defined by Expansion at constant pressure For measurements at experimentally controlled pressure, it is assumed that the volume of the body of calorimetric material can be expressed as a function of its temperature and pressure . This assumption is related to, but is not the same as, the above used assumption that the pressure of the body of calorimetric material is known as a function of its volume and temperature; anomalous behaviour of materials can affect this relation. The quantity that is conveniently measured at constant experimentally controlled pressure, the isobar volume expansion coefficient, is defined by Compressibility at constant temperature For measurements at experimentally controlled temperature, it is again assumed that the volume of the body of calorimetric material can be expressed as a function of its temperature and pressure , with the same provisos as mentioned just above. The quantity that is conveniently measured at constant experimentally controlled temperature, the isothermal compressibility, is defined by Relation between classical calorimetric quantities Assuming that the rule is known, one can derive the function of that is used above in the classical heat calculation with respect to pressure. This function can be found experimentally from the coefficients and through the mathematically deducible relation . Connection between calorimetry and thermodynamics Thermodynamics developed gradually over the first half of the nineteenth century, building on the above theory of calorimetry which had been worked out before it, and on other discoveries. According to Gislason and Craig (2005): "Most thermodynamic data come from calorimetry..." According to Kondepudi (2008): "Calorimetry is widely used in present day laboratories." In terms of thermodynamics, the internal energy of the calorimetric material can be considered as the value of a function of , with partial derivatives and . Then it can be shown that one can write a thermodynamic version of the above calorimetric rules: with and . Again, further in terms of thermodynamics, the internal energy of the calorimetric material can sometimes, depending on the calorimetric material, be considered as the value of a function of , with partial derivatives and , and with being expressible as the value of a function of , with partial derivatives and . Then, according to Adkins (1975), it can be shown that one can write a further thermodynamic version of the above calorimetric rules: with and . Beyond the calorimetric fact noted above that the latent heats and are always of opposite sign, it may be shown, using the thermodynamic concept of work, that also Special interest of thermodynamics in calorimetry: the isothermal segments of a Carnot cycle Calorimetry has a special benefit for thermodynamics. It tells about the heat absorbed or emitted in the isothermal segment of a Carnot cycle. 
A Carnot cycle is a special kind of cyclic process affecting a body composed of material suitable for use in a heat engine. Such a material is of the kind considered in calorimetry, as noted above, that exerts a pressure that is very rapidly determined just by temperature and volume. Such a body is said to change reversibly. A Carnot cycle consists of four successive stages or segments: (1) a change in volume from a volume to a volume at constant temperature so as to incur a flow of heat into the body (known as an isothermal change) (2) a change in volume from to a volume at a variable temperature just such as to incur no flow of heat (known as an adiabatic change) (3) another isothermal change in volume from to a volume at constant temperature such as to incur a flow or heat out of the body and just such as to precisely prepare for the following change (4) another adiabatic change of volume from back to just such as to return the body to its starting temperature . In isothermal segment (1), the heat that flows into the body is given by     and in isothermal segment (3) the heat that flows out of the body is given by . Because the segments (2) and (4) are adiabats, no heat flows into or out of the body during them, and consequently the net heat supplied to the body during the cycle is given by . This quantity is used by thermodynamics and is related in a special way to the net work done by the body during the Carnot cycle. The net change of the body's internal energy during the Carnot cycle, , is equal to zero, because the material of the working body has the special properties noted above. Special interest of calorimetry in thermodynamics: relations between classical calorimetric quantities Relation of latent heat with respect to volume, and the equation of state The quantity , the latent heat with respect to volume, belongs to classical calorimetry. It accounts for the occurrence of energy transfer by work in a process in which heat is also transferred; the quantity, however, was considered before the relation between heat and work transfers was clarified by the invention of thermodynamics. In the light of thermodynamics, the classical calorimetric quantity is revealed as being tightly linked to the calorimetric material's equation of state . Provided that the temperature is measured in the thermodynamic absolute scale, the relation is expressed in the formula . Difference of specific heats Advanced thermodynamics provides the relation . From this, further mathematical and thermodynamic reasoning leads to another relation between classical calorimetric quantities. The difference of specific heats is given by . Practical constant-volume calorimetry (bomb calorimetry) for thermodynamic studies Constant-volume calorimetry is calorimetry performed at a constant volume. This involves the use of a constant-volume calorimeter. No work is performed in constant-volume calorimetry, so the heat measured equals the change in internal energy of the system. The heat capacity at constant volume is assumed to be independent of temperature. Heat is measured by the principle of calorimetry. where ΔU is change in internal energy, ΔT is change in temperature and CV is the heat capacity at constant volume. In constant-volume calorimetry the pressure is not held constant. If there is a pressure difference between initial and final states, the heat measured needs adjustment to provide the enthalpy change. One then has where ΔH is change in enthalpy and V is the unchanging volume of the sample chamber. 
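A small numerical sketch of the constant-volume relations above: the calorimeter heat capacity, temperature rise, bomb volume and pressure change are invented for illustration, and heats are treated as magnitudes so that sign conventions do not obscure the arithmetic.

```python
def bomb_calorimeter_energy(c_v_J_per_K, delta_T_K):
    """Constant-volume heat: Delta U = C_V * Delta T, with C_V assumed
    independent of temperature over the run."""
    return c_v_J_per_K * delta_T_K

def enthalpy_change(delta_U_J, volume_m3, delta_P_Pa):
    """Adjustment to enthalpy when initial and final pressures differ:
    Delta H = Delta U + V * Delta P, V being the fixed chamber volume."""
    return delta_U_J + volume_m3 * delta_P_Pa

# Illustrative numbers: a calorimeter with C_V = 10.0 kJ/K warming by 2.35 K,
# with a 0.30 L bomb whose internal pressure rises by 2.0 bar.
dU = bomb_calorimeter_energy(10.0e3, 2.35)
dH = enthalpy_change(dU, 0.30e-3, 2.0e5)
print(f"Delta U = {dU / 1e3:.2f} kJ, Delta H = {dH / 1e3:.2f} kJ")
```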
See also Isothermal microcalorimetry (IMC) Isothermal titration calorimetry Sorption calorimetry Reaction calorimeter References Books External links Heat transfer
Calorimetry
[ "Physics", "Chemistry" ]
3,605
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics" ]
7,534
https://en.wikipedia.org/wiki/Centripetal%20force
A centripetal force (from Latin centrum, "center" and petere, "to seek") is a force that makes a body follow a curved path. The direction of the centripetal force is always orthogonal to the motion of the body and towards the fixed point of the instantaneous center of curvature of the path. Isaac Newton described it as "a force by which bodies are drawn or impelled, or in any way tend, towards a point as to a centre". In Newtonian mechanics, gravity provides the centripetal force causing astronomical orbits. One common example involving centripetal force is the case in which a body moves with uniform speed along a circular path. The centripetal force is directed at right angles to the motion and also along the radius towards the centre of the circular path. The mathematical description was derived in 1659 by the Dutch physicist Christiaan Huygens. Formula From the kinematics of curved motion it is known that an object moving at tangential speed v along a path with radius of curvature r accelerates toward the center of curvature at a rate Here, is the centripetal acceleration and is the difference between the velocity vectors at and . By Newton's second law, the cause of acceleration is a net force acting on the object, which is proportional to its mass m and its acceleration. The force, usually referred to as a centripetal force, has a magnitude and is, like centripetal acceleration, directed toward the center of curvature of the object's trajectory. Derivation The centripetal acceleration can be inferred from the diagram of the velocity vectors at two instances. In the case of uniform circular motion the velocities have constant magnitude. Because each one is perpendicular to its respective position vector, simple vector subtraction implies two similar isosceles triangles with congruent angles – one comprising a base of and a leg length of , and the other a base of (position vector difference) and a leg length of : Therefore, can be substituted with : The direction of the force is toward the center of the circle in which the object is moving, or the osculating circle (the circle that best fits the local path of the object, if the path is not circular). The speed in the formula is squared, so twice the speed needs four times the force, at a given radius. This force is also sometimes written in terms of the angular velocity ω of the object about the center of the circle, related to the tangential velocity by the formula so that Expressed using the orbital period T for one revolution of the circle, the equation becomes In particle accelerators, velocity can be very high (close to the speed of light in vacuum) so the same rest mass now exerts greater inertia (relativistic mass) thereby requiring greater force for the same centripetal acceleration, so the equation becomes: where is the Lorentz factor. Thus the centripetal force is given by: which is the rate of change of relativistic momentum . Sources In the case of an object that is swinging around on the end of a rope in a horizontal plane, the centripetal force on the object is supplied by the tension of the rope. The rope example is an example involving a 'pull' force. The centripetal force can also be supplied as a 'push' force, such as in the case where the normal reaction of a wall supplies the centripetal force for a wall of death or a Rotor rider. Newton's idea of a centripetal force corresponds to what is nowadays referred to as a central force. 
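A quick numerical check of the magnitude relation: the sketch below evaluates F = m v²/r for an assumed 1200 kg car on a 50 m curve and confirms that the equivalent period form 4π²mr/T² gives the same answer; all numbers are illustrative.

```python
import math

def centripetal_force(mass_kg, speed_m_s, radius_m):
    """F = m * v^2 / r, directed toward the centre of curvature."""
    return mass_kg * speed_m_s**2 / radius_m

def centripetal_force_from_period(mass_kg, radius_m, period_s):
    """Equivalent form using the orbital period: F = 4*pi^2*m*r / T^2."""
    return 4 * math.pi**2 * mass_kg * radius_m / period_s**2

m, v, r = 1200.0, 15.0, 50.0          # assumed car mass, speed and curve radius
T = 2 * math.pi * r / v               # period of one full circle at that speed
print(centripetal_force(m, v, r), centripetal_force_from_period(m, r, T))
# both print 5400.0 (newtons)
```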
When a satellite is in orbit around a planet, gravity is considered to be a centripetal force even though in the case of eccentric orbits, the gravitational force is directed towards the focus, and not towards the instantaneous center of curvature. Another example of centripetal force arises in the helix that is traced out when a charged particle moves in a uniform magnetic field in the absence of other external forces. In this case, the magnetic force is the centripetal force that acts towards the helix axis. Analysis of several cases Below are three examples of increasing complexity, with derivations of the formulas governing velocity and acceleration. Uniform circular motion Uniform circular motion refers to the case of constant rate of rotation. Here are two approaches to describing this case. Calculus derivation In two dimensions, the position vector , which has magnitude (length) and directed at an angle above the x-axis, can be expressed in Cartesian coordinates using the unit vectors and : The assumption of uniform circular motion requires three things: The object moves only on a circle. The radius of the circle does not change in time. The object moves with constant angular velocity around the circle. Therefore, where is time. The velocity and acceleration of the motion are the first and second derivatives of position with respect to time: The term in parentheses is the original expression of in Cartesian coordinates. Consequently, negative shows that the acceleration is pointed towards the center of the circle (opposite the radius), hence it is called "centripetal" (i.e. "center-seeking"). While objects naturally follow a straight path (due to inertia), this centripetal acceleration describes the circular motion path caused by a centripetal force. Derivation using vectors The image at right shows the vector relationships for uniform circular motion. The rotation itself is represented by the angular velocity vector Ω, which is normal to the plane of the orbit (using the right-hand rule) and has magnitude given by: with θ the angular position at time t. In this subsection, dθ/dt is assumed constant, independent of time. The distance traveled dℓ of the particle in time dt along the circular path is which, by properties of the vector cross product, has magnitude rdθ and is in the direction tangent to the circular path. Consequently, In other words, Differentiating with respect to time, Lagrange's formula states: Applying Lagrange's formula with the observation that Ω • r(t) = 0 at all times, In words, the acceleration is pointing directly opposite to the radial displacement r at all times, and has a magnitude: where vertical bars |...| denote the vector magnitude, which in the case of r(t) is simply the radius r of the path. This result agrees with the previous section, though the notation is slightly different. When the rate of rotation is made constant in the analysis of nonuniform circular motion, that analysis agrees with this one. A merit of the vector approach is that it is manifestly independent of any coordinate system. Example: The banked turn The upper panel in the image at right shows a ball in circular motion on a banked curve. The curve is banked at an angle θ from the horizontal, and the surface of the road is considered to be slippery. The objective is to find what angle the bank must have so the ball does not slide off the road. 
Intuition tells us that, on a flat curve with no banking at all, the ball will simply slide off the road; while with a very steep banking, the ball will slide to the center unless it travels the curve rapidly. Apart from any acceleration that might occur in the direction of the path, the lower panel of the image above indicates the forces on the ball. There are two forces; one is the force of gravity vertically downward through the center of mass of the ball mg, where m is the mass of the ball and g is the gravitational acceleration; the second is the upward normal force exerted by the road at a right angle to the road surface man. The centripetal force demanded by the curved motion is also shown above. This centripetal force is not a third force applied to the ball, but rather must be provided by the net force on the ball resulting from vector addition of the normal force and the force of gravity. The resultant or net force on the ball found by vector addition of the normal force exerted by the road and vertical force due to gravity must equal the centripetal force dictated by the need to travel a circular path. The curved motion is maintained so long as this net force provides the centripetal force requisite to the motion. The horizontal net force on the ball is the horizontal component of the force from the road, which has magnitude . The vertical component of the force from the road must counteract the gravitational force: , which implies . Substituting into the above formula for yields a horizontal force to be: On the other hand, at velocity |v| on a circular path of radius r, kinematics says that the force needed to turn the ball continuously into the turn is the radially inward centripetal force Fc of magnitude: Consequently, the ball is in a stable path when the angle of the road is set to satisfy the condition: or, As the angle of bank θ approaches 90°, the tangent function approaches infinity, allowing larger values for |v|2/r. In words, this equation states that for greater speeds (bigger |v|) the road must be banked more steeply (a larger value for θ), and for sharper turns (smaller r) the road also must be banked more steeply, which accords with intuition. When the angle θ does not satisfy the above condition, the horizontal component of force exerted by the road does not provide the correct centripetal force, and an additional frictional force tangential to the road surface is called upon to provide the difference. If friction cannot do this (that is, the coefficient of friction is exceeded), the ball slides to a different radius where the balance can be realized. These ideas apply to air flight as well. See the FAA pilot's manual. Nonuniform circular motion As a generalization of the uniform circular motion case, suppose the angular rate of rotation is not constant. The acceleration now has a tangential component, as shown the image at right. This case is used to demonstrate a derivation strategy based on a polar coordinate system. Let r(t) be a vector that describes the position of a point mass as a function of time. Since we are assuming circular motion, let , where R is a constant (the radius of the circle) and ur is the unit vector pointing from the origin to the point mass. The direction of ur is described by θ, the angle between the x-axis and the unit vector, measured counterclockwise from the x-axis. The other unit vector for polar coordinates, uθ is perpendicular to ur and points in the direction of increasing θ. 
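Before continuing with the polar-coordinate derivation, the banked-turn condition tan θ = v²/(gr) obtained above can be evaluated numerically; the speed, radius and bank angle below are illustrative values only.

```python
import math

def bank_angle_deg(speed_m_s, radius_m, g=9.81):
    """Frictionless banking condition: tan(theta) = v^2 / (g * r)."""
    return math.degrees(math.atan(speed_m_s**2 / (g * radius_m)))

def no_friction_speed_m_s(theta_deg, radius_m, g=9.81):
    """Speed at which a given bank angle requires no friction at all."""
    return math.sqrt(g * radius_m * math.tan(math.radians(theta_deg)))

# Illustrative numbers: a 25 m/s (90 km/h) turn of radius 150 m.
print(f"required bank: {bank_angle_deg(25.0, 150.0):.1f} degrees")
print(f"no-friction speed on a 20 degree bank: {no_friction_speed_m_s(20.0, 150.0):.1f} m/s")
```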
These polar unit vectors can be expressed in terms of Cartesian unit vectors in the x and y directions, denoted and respectively: and One can differentiate to find velocity: where is the angular velocity . This result for the velocity matches expectations that the velocity should be directed tangentially to the circle, and that the magnitude of the velocity should be . Differentiating again, and noting that we find that the acceleration, a is: Thus, the radial and tangential components of the acceleration are: and where is the magnitude of the velocity (the speed). These equations express mathematically that, in the case of an object that moves along a circular path with a changing speed, the acceleration of the body may be decomposed into a perpendicular component that changes the direction of motion (the centripetal acceleration), and a parallel, or tangential component, that changes the speed. General planar motion Polar coordinates The above results can be derived perhaps more simply in polar coordinates, and at the same time extended to general motion within a plane, as shown next. Polar coordinates in the plane employ a radial unit vector uρ and an angular unit vector uθ, as shown above. A particle at position r is described by: where the notation ρ is used to describe the distance of the path from the origin instead of R to emphasize that this distance is not fixed, but varies with time. The unit vector uρ travels with the particle and always points in the same direction as r(t). Unit vector uθ also travels with the particle and stays orthogonal to uρ. Thus, uρ and uθ form a local Cartesian coordinate system attached to the particle, and tied to the path travelled by the particle. By moving the unit vectors so their tails coincide, as seen in the circle at the left of the image above, it is seen that uρ and uθ form a right-angled pair with tips on the unit circle that trace back and forth on the perimeter of this circle with the same angle θ(t) as r(t). When the particle moves, its velocity is To evaluate the velocity, the derivative of the unit vector uρ is needed. Because uρ is a unit vector, its magnitude is fixed, and it can change only in direction, that is, its change duρ has a component only perpendicular to uρ. When the trajectory r(t) rotates an amount dθ, uρ, which points in the same direction as r(t), also rotates by dθ. See image above. Therefore, the change in uρ is or In a similar fashion, the rate of change of uθ is found. As with uρ, uθ is a unit vector and can only rotate without changing size. To remain orthogonal to uρ while the trajectory r(t) rotates an amount dθ, uθ, which is orthogonal to r(t), also rotates by dθ. See image above. Therefore, the change duθ is orthogonal to uθ and proportional to dθ (see image above): The equation above shows the sign to be negative: to maintain orthogonality, if duρ is positive with dθ, then duθ must decrease. Substituting the derivative of uρ into the expression for velocity: To obtain the acceleration, another time differentiation is done: Substituting the derivatives of uρ and uθ, the acceleration of the particle is: As a particular example, if the particle moves in a circle of constant radius R, then dρ/dt = 0, v = vθ, and: where These results agree with those above for nonuniform circular motion. See also the article on non-uniform circular motion. 
If this acceleration is multiplied by the particle mass, the leading term is the centripetal force and the negative of the second term related to angular acceleration is sometimes called the Euler force. For trajectories other than circular motion, for example, the more general trajectory envisioned in the image above, the instantaneous center of rotation and radius of curvature of the trajectory are related only indirectly to the coordinate system defined by uρ and uθ and to the length |r(t)| = ρ. Consequently, in the general case, it is not straightforward to disentangle the centripetal and Euler terms from the above general acceleration equation. To deal directly with this issue, local coordinates are preferable, as discussed next. Local coordinates Local coordinates mean a set of coordinates that travel with the particle, and have orientation determined by the path of the particle. Unit vectors are formed as shown in the image at right, both tangential and normal to the path. This coordinate system sometimes is referred to as intrinsic or path coordinates or nt-coordinates, for normal-tangential, referring to these unit vectors. These coordinates are a very special example of a more general concept of local coordinates from the theory of differential forms. Distance along the path of the particle is the arc length s, considered to be a known function of time. A center of curvature is defined at each position s located a distance ρ (the radius of curvature) from the curve on a line along the normal un (s). The required distance ρ(s) at arc length s is defined in terms of the rate of rotation of the tangent to the curve, which in turn is determined by the path itself. If the orientation of the tangent relative to some starting position is θ(s), then ρ(s) is defined by the derivative dθ/ds: The radius of curvature usually is taken as positive (that is, as an absolute value), while the curvature κ is a signed quantity. A geometric approach to finding the center of curvature and the radius of curvature uses a limiting process leading to the osculating circle. See image above. Using these coordinates, the motion along the path is viewed as a succession of circular paths of ever-changing center, and at each position s constitutes non-uniform circular motion at that position with radius ρ. The local value of the angular rate of rotation then is given by: with the local speed v given by: As for the other examples above, because unit vectors cannot change magnitude, their rate of change is always perpendicular to their direction (see the left-hand insert in the image above): Consequently, the velocity and acceleration are: and using the chain-rule of differentiation: with the tangential acceleration In this local coordinate system, the acceleration resembles the expression for nonuniform circular motion with the local radius ρ(s), and the centripetal acceleration is identified as the second term. Extending this approach to three dimensional space curves leads to the Frenet–Serret formulas. Alternative approach Looking at the image above, one might wonder whether adequate account has been taken of the difference in curvature between ρ(s) and ρ(s + ds) in computing the arc length as ds = ρ(s)dθ. Reassurance on this point can be found using a more formal approach outlined below. This approach also makes connection with the article on curvature. 
To introduce the unit vectors of the local coordinate system, one approach is to begin in Cartesian coordinates and describe the local coordinates in terms of these Cartesian coordinates. In terms of arc length s, let the path be described as: Then an incremental displacement along the path ds is described by: where primes are introduced to denote derivatives with respect to s. The magnitude of this displacement is ds, showing that: (Eq. 1) This displacement is necessarily a tangent to the curve at s, showing that the unit vector tangent to the curve is: while the outward unit vector normal to the curve is Orthogonality can be verified by showing that the vector dot product is zero. The unit magnitude of these vectors is a consequence of Eq. 1. Using the tangent vector, the angle θ of the tangent to the curve is given by: and The radius of curvature is introduced completely formally (without need for geometric interpretation) as: The derivative of θ can be found from that for sinθ: Now: in which the denominator is unity. With this formula for the derivative of the sine, the radius of curvature becomes: where the equivalence of the forms stems from differentiation of Eq. 1: With these results, the acceleration can be found: as can be verified by taking the dot product with the unit vectors ut(s) and un(s). This result for acceleration is the same as that for circular motion based on the radius ρ. Using this coordinate system in the inertial frame, it is easy to identify the force normal to the trajectory as the centripetal force and that parallel to the trajectory as the tangential force. From a qualitative standpoint, the path can be approximated by an arc of a circle for a limited time, and for the limited time a particular radius of curvature applies, the centrifugal and Euler forces can be analyzed on the basis of circular motion with that radius. This result for acceleration agrees with that found earlier. However, in this approach, the question of the change in radius of curvature with s is handled completely formally, consistent with a geometric interpretation, but not relying upon it, thereby avoiding any questions the image above might suggest about neglecting the variation in ρ. Example: circular motion To illustrate the above formulas, let x, y be given as: Then: which can be recognized as a circular path around the origin with radius α. The position s = 0 corresponds to [α, 0], or 3 o'clock. To use the above formalism, the derivatives are needed: With these results, one can verify that: The unit vectors can also be found: which serve to show that s = 0 is located at position [ρ, 0] and s = ρπ/2 at [0, ρ], which agrees with the original expressions for x and y. In other words, s is measured counterclockwise around the circle from 3 o'clock. Also, the derivatives of these vectors can be found: To obtain velocity and acceleration, a time-dependence for s is necessary. For counterclockwise motion at variable speed v(t): where v(t) is the speed and t is time, and s(t = 0) = 0. Then: where it already is established that α = ρ. This acceleration is the standard result for non-uniform circular motion. 
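The circle example above can also be reproduced mechanically with a computer algebra system. The following sketch (Python with SymPy, an assumed tool choice rather than anything used in the article) differentiates the arc-length parametrization, confirms unit speed, and recovers the radius of curvature ρ = α together with the centripetal term v²/ρ.

```python
import sympy as sp

s, alpha = sp.symbols('s alpha', positive=True)

# Arc-length parametrization of a circle of radius alpha (the example above)
x = alpha * sp.cos(s / alpha)
y = alpha * sp.sin(s / alpha)

xp, yp = sp.diff(x, s), sp.diff(y, s)          # x'(s), y'(s): components of the unit tangent
xpp, ypp = sp.diff(xp, s), sp.diff(yp, s)      # x''(s), y''(s)

speed_check = sp.simplify(xp**2 + yp**2)       # should be 1 for an arc-length parametrization
curvature = sp.simplify(xp * ypp - yp * xpp)   # signed curvature, 1/rho
rho = sp.simplify(1 / curvature)

print(speed_check)   # 1
print(rho)           # alpha: the radius of curvature equals the circle radius

# Centripetal acceleration magnitude for travel at speed v: v**2 / rho
v = sp.symbols('v', positive=True)
print(sp.simplify(v**2 / rho))   # v**2/alpha
```

The same few lines of differentiation work for any smooth arc-length parametrization x(s), y(s), which is the practical content of the formal approach described above.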
See also Analytical mechanics Applied mechanics Bertrand theorem Central force Centrifugal force Circular motion Classical mechanics Coriolis force Dynamics (physics) Eskimo yo-yo Example: circular motion Fictitious force Frenet-Serret formulas History of centrifugal and centripetal forces Kinematics Kinetics Orthogonal coordinates Reactive centrifugal force Statics Notes and references Further reading Centripetal force vs. Centrifugal force, from an online Regents Exam physics tutorial by the Oswego City School District External links Notes from Physics and Astronomy HyperPhysics at Georgia State University Force Mechanics Kinematics Rotation Acceleration Articles containing video clips
Centripetal force
[ "Physics", "Mathematics", "Technology", "Engineering" ]
4,460
[ "Machines", "Force", "Kinematics", "Physical quantities", "Acceleration", "Physical phenomena", "Quantity", "Mass", "Classical mechanics", "Rotation", "Physical systems", "Motion (physics)", "Mechanics", "Mechanical engineering", "Wikipedia categories named after physical quantities", ...
7,555
https://en.wikipedia.org/wiki/Casimir%20effect
In quantum field theory, the Casimir effect (or Casimir force) is a physical force acting on the macroscopic boundaries of a confined space which arises from the quantum fluctuations of a field. The term Casimir pressure is sometimes used when it is described in units of force per unit area. It is named after the Dutch physicist Hendrik Casimir, who predicted the effect for electromagnetic systems in 1948. In the same year Casimir, together with Dirk Polder, described a similar effect experienced by a neutral atom in the vicinity of a macroscopic interface which is called the Casimir–Polder force. Their result is a generalization of the London–van der Waals force and includes retardation due to the finite speed of light. The fundamental principles leading to the London–van der Waals force, the Casimir force, and the Casimir–Polder force can be formulated on the same footing. In 1997 a direct experiment by Steven K. Lamoreaux quantitatively measured the Casimir force to be within 5% of the value predicted by the theory. The Casimir effect can be understood by the idea that the presence of macroscopic material interfaces, such as electrical conductors and dielectrics, alter the vacuum expectation value of the energy of the second-quantized electromagnetic field. Since the value of this energy depends on the shapes and positions of the materials, the Casimir effect manifests itself as a force between such objects. Any medium supporting oscillations has an analogue of the Casimir effect. For example, beads on a string as well as plates submerged in turbulent water or gas illustrate the Casimir force. In modern theoretical physics, the Casimir effect plays an important role in the chiral bag model of the nucleon; in applied physics it is significant in some aspects of emerging microtechnologies and nanotechnologies. Physical properties The typical example is of two uncharged conductive plates in a vacuum, placed a few nanometers apart. In a classical description, the lack of an external field means that no field exists between the plates, and no force connects them. When this field is instead studied using the quantum electrodynamic vacuum, it is seen that the plates do affect the virtual photons that constitute the field, and generate a net force – either an attraction or a repulsion depending on the plates' specific arrangement. Although the Casimir effect can be expressed in terms of virtual particles interacting with the objects, it is best described and more easily calculated in terms of the zero-point energy of a quantized field in the intervening space between the objects. This force has been measured and is a striking example of an effect captured formally by second quantization. The treatment of boundary conditions in these calculations is controversial. In fact, "Casimir's original goal was to compute the van der Waals force between polarizable molecules" of the conductive plates. Thus it can be interpreted without any reference to the zero-point energy (vacuum energy) of quantum fields. Because the strength of the force falls off rapidly with distance, it is measurable only when the distance between the objects is small. This force becomes so strong that it becomes the dominant force between uncharged conductors at submicron scales. In fact, at separations of 10 nm – about 100 times the typical size of an atom – the Casimir effect produces the equivalent of about 1 atmosphere of pressure (the precise value depends on surface geometry and other factors). 
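The order-of-magnitude claim just made (roughly one atmosphere at a 10 nm separation) can be checked against the idealized parallel-plate expression P = π²ħc/(240·d⁴) derived later in the article. The Python sketch below is only a numerical illustration of that formula; SciPy's constants module is an assumed convenience, and real plates deviate from the perfect-conductor idealization.

```python
import math
from scipy.constants import hbar, c, atm   # CODATA values and the standard atmosphere in Pa

def casimir_pressure(d):
    """Idealized Casimir pressure (Pa) between perfect-conductor plates separated by d metres."""
    return math.pi**2 * hbar * c / (240 * d**4)

p = casimir_pressure(10e-9)                  # 10 nm separation
print(f"{p:.3e} Pa  ~  {p / atm:.2f} atm")   # on the order of one atmosphere
```

The result comes out a little above one atmosphere, consistent with the qualifier that the precise value depends on geometry and other factors.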
History Dutch physicists Hendrik Casimir and Dirk Polder at Philips Research Labs proposed the existence of a force between two polarizable atoms and between such an atom and a conducting plate in 1947; this special form is called the Casimir–Polder force. After a conversation with Niels Bohr, who suggested it had something to do with zero-point energy, Casimir alone formulated the theory predicting a force between neutral conducting plates in 1948. This latter phenomenon is called the Casimir effect. Predictions of the force were later extended to finite-conductivity metals and dielectrics, while later calculations considered more general geometries. Experiments before 1997 observed the force qualitatively, and indirect validation of the predicted Casimir energy was made by measuring the thickness of liquid helium films. Finally, in 1997 Lamoreaux's direct experiment quantitatively measured the force to within 5% of the value predicted by the theory. Subsequent experiments approached an accuracy of a few percent. Possible causes Vacuum energy The causes of the Casimir effect are described by quantum field theory, which states that all of the various fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. In a simplified view, a "field" in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field can be visualized as the displacement of a ball from its rest position. Vibrations in this field propagate and are governed by the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball-spring combination be quantized, that is, that the strength of the field be quantized at each point in space. At the most basic level, the field at each point in space is a simple harmonic oscillator, and its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. However, even the vacuum has a vastly complex structure, so all calculations of quantum field theory must be made in relation to this model of the vacuum. The vacuum has, implicitly, all of the properties that a particle may have: spin, or polarization in the case of light, energy, and so on. On average, most of these properties cancel out: the vacuum is, after all, "empty" in this sense. One important exception is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator states that the lowest possible energy or zero-point energy that such an oscillator may have is Summing over all possible oscillators at all points in space gives an infinite quantity. Since only differences in energy are physically measurable (with the notable exception of gravitation, which remains beyond the scope of quantum field theory), this infinity may be considered a feature of the mathematics rather than of the physics. This argument is the underpinning of the theory of renormalization. Dealing with infinite quantities in this way was a cause of widespread unease among quantum field theorists before the development in the 1970s of the renormalization group, a mathematical formalism for scale transformations that provides a natural basis for the process. When the scope of the physics is widened to include gravity, the interpretation of this formally infinite quantity remains problematic. 
There is currently no compelling explanation as to why it should not result in a cosmological constant that is many orders of magnitude larger than observed. However, since we do not yet have any fully coherent quantum theory of gravity, there is likewise no compelling reason as to why it should instead actually result in the value of the cosmological constant that we observe. The Casimir effect for fermions can be understood as the spectral asymmetry of the fermion operator , where it is known as the Witten index. Relativistic van der Waals force Alternatively, a 2005 paper by Robert Jaffe of MIT states that "Casimir effects can be formulated and Casimir forces can be computed without reference to zero-point energies. They are relativistic, quantum forces between charges and currents. The Casimir force (per unit area) between parallel plates vanishes as alpha, the fine structure constant, goes to zero, and the standard result, which appears to be independent of alpha, corresponds to the alpha approaching infinity limit", and that "The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates." Casimir and Polder's original paper used this method to derive the Casimir–Polder force. In 1978, Schwinger, DeRadd, and Milton published a similar derivation for the Casimir effect between two parallel plates. More recently, Nikolic proved from first principles of quantum electrodynamics that the Casimir force does not originate from the vacuum energy of the electromagnetic field, and explained in simple terms why the fundamental microscopic origin of Casimir force lies in van der Waals forces. Effects Casimir's observation was that the second-quantized quantum electromagnetic field, in the presence of bulk bodies such as metals or dielectrics, must obey the same boundary conditions that the classical electromagnetic field must obey. In particular, this affects the calculation of the vacuum energy in the presence of a conductor or dielectric. Consider, for example, the calculation of the vacuum expectation value of the electromagnetic field inside a metal cavity, such as, for example, a radar cavity or a microwave waveguide. In this case, the correct way to find the zero-point energy of the field is to sum the energies of the standing waves of the cavity. To each and every possible standing wave corresponds an energy; say the energy of the th standing wave is . The vacuum expectation value of the energy of the electromagnetic field in the cavity is then with the sum running over all possible values of enumerating the standing waves. The factor of is present because the zero-point energy of the th mode is , where is the energy increment for the th mode. (It is the same as appears in the equation .) Written in this way, this sum is clearly divergent; however, it can be used to create finite expressions. In particular, one may ask how the zero-point energy depends on the shape of the cavity. Each energy level depends on the shape, and so one should write for the energy level, and for the vacuum expectation value. At this point comes an important observation: The force at point on the wall of the cavity is equal to the change in the vacuum energy if the shape of the wall is perturbed a little bit, say by , at . That is, one has This value is finite in many practical calculations. Attraction between the plates can be easily understood by focusing on the one-dimensional situation. 
Suppose that a moveable conductive plate is positioned at a short distance from one of two widely separated plates (distance apart). With , the states within the slot of width are highly constrained so that the energy of any one mode is widely separated from that of the next. This is not the case in the large region where there is a large number of states (about ) with energy evenly spaced between and the next mode in the narrow slot, or in other words, all slightly larger than . Now on shortening by an amount (which is negative), the mode in the narrow slot shrinks in wavelength and therefore increases in energy proportional to , whereas all the states that lie in the large region lengthen and correspondingly decrease their energy by an amount proportional to (note the different denominator). The two effects nearly cancel, but the net change is slightly negative, because the energy of all the modes in the large region are slightly larger than the single mode in the slot. Thus the force is attractive: it tends to make slightly smaller, the plates drawing each other closer, across the thin slot. Derivation of Casimir effect assuming zeta-regularization In the original calculation done by Casimir, he considered the space between a pair of conducting metal plates at distance apart. In this case, the standing waves are particularly easy to calculate, because the transverse component of the electric field and the normal component of the magnetic field must vanish on the surface of a conductor. Assuming the plates lie parallel to the -plane, the standing waves are where stands for the electric component of the electromagnetic field, and, for brevity, the polarization and the magnetic components are ignored here. Here, and are the wavenumbers in directions parallel to the plates, and is the wavenumber perpendicular to the plates. Here, is an integer, resulting from the requirement that vanish on the metal plates. The frequency of this wave is where is the speed of light. The vacuum energy is then the sum over all possible excitation modes. Since the area of the plates is large, we may sum by integrating over two of the dimensions in -space. The assumption of periodic boundary conditions yields, where is the area of the metal plates, and a factor of 2 is introduced for the two possible polarizations of the wave. This expression is clearly infinite, and to proceed with the calculation, it is convenient to introduce a regulator (discussed in greater detail below). The regulator will serve to make the expression finite, and in the end will be removed. The zeta-regulated version of the energy per unit-area of the plate is In the end, the limit is to be taken. Here is just a complex number, not to be confused with the shape discussed previously. This integral sum is finite for real and larger than 3. The sum has a pole at , but may be analytically continued to , where the expression is finite. The above expression simplifies to: where polar coordinates were introduced to turn the double integral into a single integral. The in front is the Jacobian, and the comes from the angular integration. 
The integral converges if , resulting in The sum diverges at in the neighborhood of zero, but if the damping of large-frequency excitations corresponding to analytic continuation of the Riemann zeta function to is assumed to make sense physically in some way, then one has But and so one obtains The analytic continuation has evidently lost an additive positive infinity, somehow exactly accounting for the zero-point energy (not included above) outside the slot between the plates, but which changes upon plate movement within a closed system. The Casimir force per unit area for idealized, perfectly conducting plates with vacuum between them is where is the reduced Planck constant, is the speed of light, is the distance between the two plates The force is negative, indicating that the force is attractive: by moving the two plates closer together, the energy is lowered. The presence of shows that the Casimir force per unit area is very small, and that furthermore, the force is inherently of quantum-mechanical origin. By integrating the equation above it is possible to calculate the energy required to separate to infinity the two plates as: where is the reduced Planck constant, is the speed of light, is the area of one of the plates, is the distance between the two plates In Casimir's original derivation, a moveable conductive plate is positioned at a short distance from one of two widely separated plates (distance apart). The zero-point energy on both sides of the plate is considered. Instead of the above ad hoc analytic continuation assumption, non-convergent sums and integrals are computed using Euler–Maclaurin summation with a regularizing function (e.g., exponential regularization) not so anomalous as in the above. More recent theory Casimir's analysis of idealized metal plates was generalized to arbitrary dielectric and realistic metal plates by Evgeny Lifshitz and his students. Using this approach, complications of the bounding surfaces, such as the modifications to the Casimir force due to finite conductivity, can be calculated numerically using the tabulated complex dielectric functions of the bounding materials. Lifshitz's theory for two metal plates reduces to Casimir's idealized force law for large separations much greater than the skin depth of the metal, and conversely reduces to the force law of the London dispersion force (with a coefficient called a Hamaker constant) for small , with a more complicated dependence on for intermediate separations determined by the dispersion of the materials. Lifshitz's result was subsequently generalized to arbitrary multilayer planar geometries as well as to anisotropic and magnetic materials, but for several decades the calculation of Casimir forces for non-planar geometries remained limited to a few idealized cases admitting analytical solutions. For example, the force in the experimental sphere–plate geometry was computed with an approximation (due to Derjaguin) that the sphere radius is much larger than the separation , in which case the nearby surfaces are nearly parallel and the parallel-plate result can be adapted to obtain an approximate force (neglecting both skin-depth and higher-order curvature effects). 
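As a short aside, the regularization step used in the derivation above can be made tangible in a one-dimensional toy version. This is an illustration of the method, not the article's three-dimensional calculation, and mpmath is an assumed helper library: exponentially regulating the divergent mode sum over n and discarding the divergent piece leaves the same finite part, −1/12, that the zeta-function continuation assigns to it, just as the three-dimensional derivation uses ζ(−3) = 1/120.

```python
import mpmath as mp

# Finite value the 3D derivation assigns to its divergent sum via analytic continuation:
print(mp.zeta(-3))                        # 0.008333... = 1/120

# 1D toy version: regulate sum_n n with an exponential damping factor.
# S(eps) = sum_n n*exp(-eps*n) = 1/eps**2 - 1/12 + O(eps**2); subtracting the
# divergent 1/eps**2 piece leaves the finite part that the continuation assigns.
for eps in (0.1, 0.05, 0.01):
    s = mp.nsum(lambda n: n * mp.exp(-eps * n), [1, mp.inf])
    print(eps, s - 1 / mp.mpf(eps) ** 2)  # approaches -1/12 as eps -> 0

print(mp.zeta(-1))                        # -1/12: the regulator and the continuation agree
```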
However, in the 2010s a number of authors developed and demonstrated a variety of numerical techniques, in many cases adapted from classical computational electromagnetics, that are capable of accurately calculating Casimir forces for arbitrary geometries and materials, from simple finite-size effects of finite plates to more complicated phenomena arising for patterned surfaces or objects of various shapes. Measurement One of the first experimental tests was conducted by Marcus Sparnaay at Philips in Eindhoven (Netherlands), in 1958, in a delicate and difficult experiment with parallel plates, obtaining results not in contradiction with the Casimir theory, but with large experimental errors. The Casimir effect was measured more accurately in 1997 by Steve K. Lamoreaux of Los Alamos National Laboratory, and by Umar Mohideen and Anushree Roy of the University of California, Riverside. In practice, rather than using two parallel plates, which would require phenomenally accurate alignment to ensure they were parallel, the experiments use one plate that is flat and another plate that is a part of a sphere with a very large radius. In 2001, a group (Giacomo Bressi, Gianni Carugno, Roberto Onofrio and Giuseppe Ruoso) at the University of Padua (Italy) finally succeeded in measuring the Casimir force between parallel plates using microresonators. Numerous variations of these experiments are summarized in the 2009 review by Klimchitskaya. In 2013, a conglomerate of scientists from Hong Kong University of Science and Technology, University of Florida, Harvard University, Massachusetts Institute of Technology, and Oak Ridge National Laboratory demonstrated a compact integrated silicon chip that can measure the Casimir force. The integrated chip defined by electron-beam lithography does not need extra alignment, making it an ideal platform for measuring Casimir force between complex geometries. In 2017 and 2021, the same group from Hong Kong University of Science and Technology demonstrated the non-monotonic Casimir force and distance-independent Casimir force, respectively, using this on-chip platform. Regularization In order to be able to perform calculations in the general case, it is convenient to introduce a regulator in the summations. This is an artificial device, used to make the sums finite so that they can be more easily manipulated, followed by the taking of a limit so as to remove the regulator. The heat kernel or exponentially regulated sum is where the limit is taken in the end. The divergence of the sum is typically manifested as for three-dimensional cavities. The infinite part of the sum is associated with the bulk constant which does not depend on the shape of the cavity. The interesting part of the sum is the finite part, which is shape-dependent. The Gaussian regulator is better suited to numerical calculations because of its superior convergence properties, but is more difficult to use in theoretical calculations. Other, suitably smooth, regulators may be used as well. The zeta function regulator is completely unsuited for numerical calculations, but is quite useful in theoretical calculations. In particular, divergences show up as poles in the complex plane, with the bulk divergence at . This sum may be analytically continued past this pole, to obtain a finite part at . Not every cavity configuration necessarily leads to a finite part (the lack of a pole at ) or shape-independent infinite parts. 
In this case, it should be understood that additional physics has to be taken into account. In particular, at extremely large frequencies (above the plasma frequency), metals become transparent to photons (such as X-rays), and dielectrics show a frequency-dependent cutoff as well. This frequency dependence acts as a natural regulator. There are a variety of bulk effects in solid state physics, mathematically very similar to the Casimir effect, where the cutoff frequency comes into explicit play to keep expressions finite. (These are discussed in greater detail in Landau and Lifshitz, "Theory of Continuous Media".) Generalities The Casimir effect can also be computed using the mathematical mechanisms of functional integrals of quantum field theory, although such calculations are considerably more abstract, and thus difficult to comprehend. In addition, they can be carried out only for the simplest of geometries. However, the formalism of quantum field theory makes it clear that the vacuum expectation value summations are in a certain sense summations over so-called "virtual particles". More interesting is the understanding that the sums over the energies of standing waves should be formally understood as sums over the eigenvalues of a Hamiltonian. This allows atomic and molecular effects, such as the Van der Waals force, to be understood as a variation on the theme of the Casimir effect. Thus one considers the Hamiltonian of a system as a function of the arrangement of objects, such as atoms, in configuration space. The change in the zero-point energy as a function of changes of the configuration can be understood to result in forces acting between the objects. In the chiral bag model of the nucleon, the Casimir energy plays an important role in showing the mass of the nucleon is independent of the bag radius. In addition, the spectral asymmetry is interpreted as a non-zero vacuum expectation value of the baryon number, cancelling the topological winding number of the pion field surrounding the nucleon. A "pseudo-Casimir" effect can be found in liquid crystal systems, where the boundary conditions imposed through anchoring by rigid walls give rise to a long-range force, analogous to the force that arises between conducting plates. Dynamical Casimir effect The dynamical Casimir effect is the production of particles and energy from an accelerated moving mirror. This reaction was predicted by certain numerical solutions to quantum mechanics equations made in the 1970s. In May 2011 an announcement was made by researchers at the Chalmers University of Technology, in Gothenburg, Sweden, of the detection of the dynamical Casimir effect. In their experiment, microwave photons were generated out of the vacuum in a superconducting microwave resonator. These researchers used a modified SQUID to change the effective length of the resonator in time, mimicking a mirror moving at the required relativistic velocity. If confirmed this would be the first experimental verification of the dynamical Casimir effect. In March 2013 an article appeared on the PNAS scientific journal describing an experiment that demonstrated the dynamical Casimir effect in a Josephson metamaterial. In July 2019 an article was published describing an experiment providing evidence of optical dynamical Casimir effect in a dispersion-oscillating fibre. In 2020, Frank Wilczek et al., proposed a resolution to the information loss paradox associated with the moving mirror model of the dynamical Casimir effect. 
Constructed within the framework of quantum field theory in curved spacetime, the dynamical Casimir effect (moving mirror) has been used to help understand the Unruh effect. Repulsive forces There are few instances wherein the Casimir effect can give rise to repulsive forces between uncharged objects. Evgeny Lifshitz showed (theoretically) that in certain circumstances (most commonly involving liquids), repulsive forces can arise. This has sparked interest in applications of the Casimir effect toward the development of levitating devices. An experimental demonstration of the Casimir-based repulsion predicted by Lifshitz was carried out by Munday et al., who described it as "quantum levitation". Other scientists have also suggested the use of gain media to achieve a similar levitation effect, though this is controversial because these materials seem to violate fundamental causality constraints and the requirement of thermodynamic equilibrium (Kramers–Kronig relations). Casimir and Casimir–Polder repulsion can in fact occur for sufficiently anisotropic electrical bodies; for a review of the issues involved with repulsion see Milton et al. A notable recent development on repulsive Casimir forces relies on using chiral materials. Q.-D. Jiang at Stockholm University and Nobel Laureate Frank Wilczek at MIT showed that a chiral "lubricant" can generate repulsive, enhanced, and tunable Casimir interactions. Timothy Boyer showed in his work published in 1968 that a conductor with spherical symmetry will also show this repulsive force, and the result is independent of radius. Further work shows that the repulsive force can be generated with materials of carefully chosen dielectrics. Speculative applications It has been suggested that the Casimir forces have application in nanotechnology, in particular micro- and nanoelectromechanical systems based on silicon integrated circuit technology, and so-called Casimir oscillators. In 1995 and 1998 Maclay et al. published the first models of a microelectromechanical system (MEMS) with Casimir forces. While not exploiting the Casimir force for useful work, the papers drew attention from the MEMS community due to the revelation that the Casimir effect needs to be considered as a vital factor in the future design of MEMS. In particular, the Casimir effect might be the critical factor in the stiction failure of MEMS. In 2001, Capasso et al. showed how the force can be used to control the mechanical motion of a MEMS device. The researchers suspended a polysilicon plate from a torsional rod – a twisting horizontal bar just a few microns in diameter. When they brought a metallized sphere close up to the plate, the attractive Casimir force between the two objects made the plate rotate. They also studied the dynamical behaviour of the MEMS device by making the plate oscillate. The Casimir force reduced the rate of oscillation and led to nonlinear phenomena, such as hysteresis and bistability in the frequency response of the oscillator. According to the team, the system's behaviour agreed well with theoretical calculations. The Casimir effect shows that quantum field theory allows the energy density in very small regions of space to be negative relative to the ordinary vacuum energy, though the energy densities cannot be arbitrarily negative, as the theory breaks down at atomic distances. Prominent physicists such as Stephen Hawking and Kip Thorne have speculated that such effects might make it possible to stabilize a traversable wormhole. 
See also Negative energy Scharnhorst effect Van der Waals force Squeezed vacuum References Further reading Introductory readings Casimir effect description from University of California, Riverside's version of the Usenet physics FAQ. A. Lambrecht, The Casimir effect: a force from nothing, Physics World, September 2002. Papers, books and lectures (Includes discussion of French naval analogy.) (Also includes discussion of French naval analogy.) Patent No. PCT/RU2011/000847 Author Urmatskih. Temperature dependence Measurements Recast Usual View of Elusive Force from NIST External links Casimir effect article search on arxiv.org G. Lang, The Casimir Force web site, 2002 J. Babb, bibliography on the Casimir Effect web site, 2009 H. Nikolic, The origin of Casimir effect; Vacuum energy or van der Waals force? presentation slides, 2018 Quantum field theory Physical phenomena Force Levitation Articles containing video clips
Casimir effect
[ "Physics", "Mathematics" ]
5,604
[ "Quantum field theory", "Physical phenomena", "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Quantum mechanics", "Levitation", "Motion (physics)", "Wikipedia categories named after physical quantities", "Matter" ]
7,794
https://en.wikipedia.org/wiki/Crystallography
Crystallography is the branch of science devoted to the study of molecular and crystalline structure and properties. The word crystallography is derived from the Ancient Greek words κρύσταλλος (krústallos; "clear ice, rock-crystal") and γράφειν (gráphein; "to write"). In July 2012, the United Nations recognised the importance of the science of crystallography by proclaiming 2014 the International Year of Crystallography. Crystallography is a broad topic, and many of its subareas, such as X-ray crystallography, are themselves important scientific topics. Crystallography ranges from the fundamentals of crystal structure to the mathematics of crystal geometry, including structures that are not periodic, such as quasicrystals. At the atomic scale it can involve the use of X-ray diffraction to produce experimental data that the tools of X-ray crystallography can convert into detailed positions of atoms, and sometimes electron density. At larger scales it includes experimental tools such as orientational imaging to examine the relative orientations at the grain boundary in materials. Crystallography plays a key role in many areas of biology, chemistry, and physics, as well as in new developments in these fields. History and timeline Before the 20th century, the study of crystals was based on physical measurements of their geometry using a goniometer. This involved measuring the angles of crystal faces relative to each other and to theoretical reference axes (crystallographic axes), and establishing the symmetry of the crystal in question. The position in 3D space of each crystal face is plotted on a stereographic net such as a Wulff net or Lambert net. The pole to each face is plotted on the net. Each point is labelled with its Miller index. The final plot allows the symmetry of the crystal to be established. The discovery of X-rays and electrons in the last decade of the 19th century enabled the determination of crystal structures on the atomic scale, which brought about the modern era of crystallography. The first X-ray diffraction experiment was conducted in 1912 by Max von Laue, while electron diffraction was first realized in 1927 in the Davisson–Germer experiment and parallel work by George Paget Thomson and Alexander Reid. These developed into the two main branches of crystallography, X-ray crystallography and electron diffraction. The quality and throughput of solving crystal structures greatly improved in the second half of the 20th century, with the development of customized instruments and phasing algorithms. Nowadays, crystallography is an interdisciplinary field, supporting theoretical and experimental discoveries in various domains. Modern-day scientific instruments for crystallography vary from laboratory-sized equipment, such as diffractometers and electron microscopes, to dedicated large facilities, such as photoinjectors, synchrotron light sources and free-electron lasers. Methodology Crystallographic methods depend mainly on analysis of the diffraction patterns of a sample targeted by a beam of some type. X-rays are most commonly used; other beams used include electrons or neutrons. Crystallographers often explicitly state the type of beam used, as in the terms X-ray diffraction, neutron diffraction and electron diffraction. These three types of radiation interact with the specimen in different ways. X-rays interact with the spatial distribution of electrons in the sample. 
Neutrons are scattered by the atomic nuclei through the strong nuclear forces, but in addition the magnetic moment of neutrons is non-zero, so they are also scattered by magnetic fields. When neutrons are scattered from hydrogen-containing materials, they produce diffraction patterns with high noise levels, which can sometimes be resolved by substituting deuterium for hydrogen. Electrons are charged particles and therefore interact with the total charge distribution of both the atomic nuclei and the electrons of the sample. It is hard to focus x-rays or neutrons, but since electrons are charged they can be focused and are used in electron microscope to produce magnified images. There are many ways that transmission electron microscopy and related techniques such as scanning transmission electron microscopy, high-resolution electron microscopy can be used to obtain images with in many cases atomic resolution from which crystallographic information can be obtained. There are also other methods such as low-energy electron diffraction, low-energy electron microscopy and reflection high-energy electron diffraction which can be used to obtain crystallographic information about surfaces. Applications in various areas Materials science Crystallography is used by materials scientists to characterize different materials. In single crystals, the effects of the crystalline arrangement of atoms is often easy to see macroscopically because the natural shapes of crystals reflect the atomic structure. In addition, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Most materials do not occur as a single crystal, but are poly-crystalline in nature (they exist as an aggregate of small crystals with different orientations). As such, powder diffraction techniques, which take diffraction patterns of samples with a large number of crystals, play an important role in structural determination. Other physical properties are also linked to crystallography. For example, the minerals in clay form small, flat, platelike structures. Clay can be easily deformed because the platelike particles can slip along each other in the plane of the plates, yet remain strongly connected in the direction perpendicular to the plates. Such mechanisms can be studied by crystallographic texture measurements. Crystallographic studies help elucidate the relationship between a material's structure and its properties, aiding in developing new materials with tailored characteristics. This understanding is crucial in various fields, including metallurgy, geology, and materials science. Advancements in crystallographic techniques, such as electron diffraction and X-ray crystallography, continue to expand our understanding of material behavior at the atomic level. In another example, iron transforms from a body-centered cubic (bcc) structure called ferrite to a face-centered cubic (fcc) structure called austenite when it is heated. The fcc structure is a close-packed structure unlike the bcc structure; thus the volume of the iron decreases when this transformation occurs. Crystallography is useful in phase identification. When manufacturing or using a material, it is generally desirable to know what compounds and what phases are present in the material, as their composition, structure and proportions will influence the material's properties. Each phase has a characteristic arrangement of atoms. 
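As a brief aside on the iron example above, the statement that the fcc (austenite) structure packs more densely than the bcc (ferrite) structure can be quantified with the atomic packing factor of each lattice. The sketch below is a hedged illustration: it uses the standard hard-sphere touching conditions (4r = a√3 for bcc, 4r = a√2 for fcc), which are textbook geometry rather than anything stated in this article.

```python
import math

def packing_factor(atoms_per_cell, radius_from_a):
    """Fraction of the conventional cubic cell volume occupied by touching hard spheres."""
    a = 1.0                                   # lattice parameter (arbitrary units)
    r = radius_from_a(a)
    return atoms_per_cell * (4 / 3) * math.pi * r**3 / a**3

bcc = packing_factor(2, lambda a: math.sqrt(3) * a / 4)   # spheres touch along the body diagonal
fcc = packing_factor(4, lambda a: math.sqrt(2) * a / 4)   # spheres touch along the face diagonal

print(f"bcc (ferrite-like) packing factor: {bcc:.3f}")    # ~0.680
print(f"fcc (austenite-like) packing factor: {fcc:.3f}")  # ~0.740
```

The higher fcc value (about 0.74 versus 0.68) is consistent with the volume decrease that accompanies the ferrite-to-austenite transformation described above.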
X-ray or neutron diffraction can be used to identify which structures are present in the material, and thus which compounds are present. Crystallography covers the enumeration of the symmetry patterns which can be formed by atoms in a crystal and for this reason is related to group theory. Biology X-ray crystallography is the primary method for determining the molecular conformations of biological macromolecules, particularly protein and nucleic acids such as DNA and RNA. The double-helical structure of DNA was deduced from crystallographic data. The first crystal structure of a macromolecule was solved in 1958, a three-dimensional model of the myoglobin molecule obtained by X-ray analysis. The Protein Data Bank (PDB) is a freely accessible repository for the structures of proteins and other biological macromolecules. Computer programs such as RasMol, Pymol or VMD can be used to visualize biological molecular structures. Neutron crystallography is often used to help refine structures obtained by X-ray methods or to solve a specific bond; the methods are often viewed as complementary, as X-rays are sensitive to electron positions and scatter most strongly off heavy atoms, while neutrons are sensitive to nucleus positions and scatter strongly even off many light isotopes, including hydrogen and deuterium. Electron diffraction has been used to determine some protein structures, most notably membrane proteins and viral capsids. Notation Coordinates in square brackets such as [100] denote a direction vector (in real space). Coordinates in angle brackets or chevrons such as <100> denote a family of directions which are related by symmetry operations. In the cubic crystal system for example, <100> would mean [100], [010], [001] or the negative of any of those directions. Miller indices in parentheses such as (100) denote a plane of the crystal structure, and regular repetitions of that plane with a particular spacing. In the cubic system, the normal to the (hkl) plane is the direction [hkl], but in lower-symmetry cases, the normal to (hkl) is not parallel to [hkl]. Indices in curly brackets or braces such as {100} denote a family of planes and their normals. In cubic materials the symmetry makes them equivalent, just as the way angle brackets denote a family of directions. In non-cubic materials, <hkl> is not necessarily perpendicular to {hkl}. Reference literature The International Tables for Crystallography is an eight-book series that outlines the standard notations for formatting, describing and testing crystals. The series contains books that covers analysis methods and the mathematical procedures for determining organic structure through x-ray crystallography, electron diffraction, and neutron diffraction. The International tables are focused on procedures, techniques and descriptions and do not list the physical properties of individual crystals themselves. Each book is about 1000 pages and the titles of the books are: Vol A - Space Group Symmetry, Vol A1 - Symmetry Relations Between Space Groups, Vol B - Reciprocal Space, Vol C - Mathematical, Physical, and Chemical Tables, Vol D - Physical Properties of Crystals, Vol E - Subperiodic Groups, Vol F - Crystallography of Biological Macromolecules, and Vol G - Definition and Exchange of Crystallographic Data. 
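Returning to the notation conventions described above, the sketch below makes them concrete for a cubic crystal: it enumerates the symmetry-related members of the <100> family of directions and evaluates the spacing of (hkl) planes. The cubic interplanar-spacing formula d = a/√(h² + k² + l²) and the example lattice parameter are assumptions added for illustration; they are standard results but are not quoted in this article.

```python
import math
from itertools import permutations

def cubic_direction_family(h, k, l):
    """All directions related to [hkl] by cubic symmetry (index permutations and sign changes)."""
    members = set()
    for p in permutations((h, k, l)):
        for sx in (1, -1):
            for sy in (1, -1):
                for sz in (1, -1):
                    members.add((sx * p[0], sy * p[1], sz * p[2]))
    return sorted(members)

def cubic_d_spacing(a, h, k, l):
    """Interplanar spacing of the (hkl) planes in a cubic crystal with lattice parameter a."""
    return a / math.sqrt(h * h + k * k + l * l)

print(cubic_direction_family(1, 0, 0))   # the six <100> directions
print(cubic_d_spacing(3.524, 1, 1, 1))   # e.g. (111) spacing for an illustrative 3.524 angstrom cell
```

For lower-symmetry crystal systems both the spacing formula and the membership of a family change, which is exactly why the family notation is defined relative to the symmetry operations of the system.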
Notable scientists See also Atomic packing factor Crystal structure Crystallographic database Crystallographic point group Crystallographic group Dana classification system Electron crystallography Electron diffraction Fractional coordinates Low-energy electron diffraction Neutron crystallography Neutron diffraction at OPAL Neutron diffraction at the ILL NMR crystallography Point group Precession electron diffraction Quasicrystal Reflection high-energy electron diffraction Space group Symmetric group Timeline of crystallography Transmission electron microscopy X-ray crystallography References External links Free book, Geometry of Crystals, Polycrystals and Phase Transformations American Crystallographic Association Learning Crystallography Web Course on Crystallography Crystallographic Space Groups Chemistry Condensed matter physics Instrumental analysis Materials science Neutron-related techniques Synchrotron-related techniques
Crystallography
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,178
[ "Applied and interdisciplinary physics", "Instrumental analysis", "Phases of matter", "Materials science", "Crystallography", "Condensed matter physics", "nan", "Matter" ]
7,832
https://en.wikipedia.org/wiki/Complete%20metric%20space
In mathematical analysis, a metric space M is called complete (or a Cauchy space) if every Cauchy sequence of points in M has a limit that is also in M. Intuitively, a space is complete if there are no "points missing" from it (inside or at the boundary). For instance, the set of rational numbers is not complete, because e.g. √2 is "missing" from it, even though one can construct a Cauchy sequence of rational numbers that converges to it (see further examples below). It is always possible to "fill all the holes", leading to the completion of a given space, as explained below. Definition Cauchy sequence A sequence x1, x2, x3, … of elements of a metric space (X, d) is called Cauchy if for every positive real number r > 0 there is a positive integer N such that for all positive integers m, n > N, d(xm, xn) < r. Complete space A metric space (X, d) is complete if any of the following equivalent conditions are satisfied: Every Cauchy sequence of points in X has a limit that is also in X. Every Cauchy sequence in X converges in X (that is, to some point of X). Every decreasing sequence of non-empty closed subsets of X, with diameters tending to 0, has a non-empty intersection: if Fn is closed and non-empty, Fn+1 ⊆ Fn for every n, and diam(Fn) → 0, then there is a unique point x ∈ X common to all sets Fn. Examples The space of rational numbers, with the standard metric given by the absolute value of the difference, is not complete. Consider for instance the sequence defined by x1 = 1 and xn+1 = xn/2 + 1/xn. This is a Cauchy sequence of rational numbers, but it does not converge towards any rational limit: If the sequence did have a limit x, then by solving x = x/2 + 1/x necessarily x² = 2, yet no rational number has this property. However, considered as a sequence of real numbers, it does converge to the irrational number √2. The open interval (0, 1), again with the absolute difference metric, is not complete either. The sequence defined by xn = 1/n is Cauchy, but does not have a limit in the given space. However the closed interval [0, 1] is complete; for example the given sequence does have a limit in this interval, namely zero. The space R of real numbers and the space C of complex numbers (with the metric given by the absolute difference) are complete, and so is Euclidean space Rn, with the usual distance metric. In contrast, infinite-dimensional normed vector spaces may or may not be complete; those that are complete are Banach spaces. The space C[a, b] of continuous real-valued functions on a closed and bounded interval [a, b] is a Banach space, and so a complete metric space, with respect to the supremum norm. However, the supremum norm does not give a norm on the space C(a, b) of continuous functions on (a, b), for it may contain unbounded functions. Instead, with the topology of compact convergence, C(a, b) can be given the structure of a Fréchet space: a locally convex topological vector space whose topology can be induced by a complete translation-invariant metric. The space Qp of p-adic numbers is complete for any prime number p. This space completes Q with the p-adic metric in the same way that R completes Q with the usual metric. If S is an arbitrary set, then the set of all sequences in S becomes a complete metric space if we define the distance between the sequences (xn) and (yn) to be 1/N, where N is the smallest index for which xN is distinct from yN, or 0 if there is no such index. This space is homeomorphic to the product of a countable number of copies of the discrete space S. Riemannian manifolds which are complete are called geodesic manifolds; completeness follows from the Hopf–Rinow theorem. Some theorems Every compact metric space is complete, though complete spaces need not be compact. 
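As a brief computational aside on the rational-number example above, the recursion can be run with exact rational arithmetic, so that every term is genuinely an element of Q. The Python sketch below (the use of the fractions module and the number of iterations are illustrative choices) shows the terms clustering together, which is the Cauchy property, while the only candidate limit, √2, lies outside Q.

```python
from fractions import Fraction

x = Fraction(1)                      # x1 = 1
terms = [x]
for _ in range(6):
    x = x / 2 + 1 / x                # x_{n+1} = x_n/2 + 1/x_n stays rational
    terms.append(x)

for n, t in enumerate(terms, start=1):
    print(n, t, float(t))            # rational terms approaching 1.41421356...

print(float(terms[-1] - terms[-2]))  # consecutive terms get arbitrarily close (Cauchy)
print(float(terms[-1]) ** 2)         # squares tend to 2, but no rational squares to exactly 2
```

Completing Q, as described below, is exactly the construction that supplies the missing limit.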
In fact, a metric space is compact if and only if it is complete and totally bounded. This is a generalization of the Heine–Borel theorem, which states that any closed and bounded subspace of is compact and therefore complete. Let be a complete metric space. If is a closed set, then is also complete. Let be a metric space. If is a complete subspace, then is also closed. If is a set and is a complete metric space, then the set of all bounded functions from to is a complete metric space. Here we define the distance in in terms of the distance in with the supremum norm If is a topological space and is a complete metric space, then the set consisting of all continuous bounded functions is a closed subspace of and hence also complete. The Baire category theorem says that every complete metric space is a Baire space. That is, the union of countably many nowhere dense subsets of the space has empty interior. The Banach fixed-point theorem states that a contraction mapping on a complete metric space admits a fixed point. The fixed-point theorem is often used to prove the inverse function theorem on complete metric spaces such as Banach spaces. Completion For any metric space M, it is possible to construct a complete metric space M′ (which is also denoted as ), which contains M as a dense subspace. It has the following universal property: if N is any complete metric space and f is any uniformly continuous function from M to N, then there exists a unique uniformly continuous function f′ from M′ to N that extends f. The space M''' is determined up to isometry by this property (among all complete metric spaces isometrically containing M), and is called the completion of M. The completion of M can be constructed as a set of equivalence classes of Cauchy sequences in M. For any two Cauchy sequences and in M, we may define their distance as (This limit exists because the real numbers are complete.) This is only a pseudometric, not yet a metric, since two different Cauchy sequences may have the distance 0. But "having distance 0" is an equivalence relation on the set of all Cauchy sequences, and the set of equivalence classes is a metric space, the completion of M. The original space is embedded in this space via the identification of an element x of M' with the equivalence class of sequences in M converging to x (i.e., the equivalence class containing the sequence with constant value x). This defines an isometry onto a dense subspace, as required. Notice, however, that this construction makes explicit use of the completeness of the real numbers, so completion of the rational numbers needs a slightly different treatment. Cantor's construction of the real numbers is similar to the above construction; the real numbers are the completion of the rational numbers using the ordinary absolute value to measure distances. The additional subtlety to contend with is that it is not logically permissible to use the completeness of the real numbers in their own construction. Nevertheless, equivalence classes of Cauchy sequences are defined as above, and the set of equivalence classes is easily shown to be a field that has the rational numbers as a subfield. This field is complete, admits a natural total ordering, and is the unique totally ordered complete field (up to isomorphism). It is defined as the field of real numbers (see also Construction of the real numbers for more details). 
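As another short aside, the Banach fixed-point theorem mentioned among the theorems above can be watched in action on a complete space. In the sketch below the map cos x restricted to the closed (hence complete) interval [0, 1] serves as the contraction, since its derivative is bounded by sin 1 < 1 there; the particular map, tolerance and starting point are illustrative choices rather than anything taken from the article.

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x -> f(x); for a contraction on a complete space this converges to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos maps [0, 1] into itself and |d/dx cos x| <= sin(1) < 1 there,
# so it is a contraction on a complete metric space: a unique fixed point exists.
fixed = banach_iterate(math.cos, 0.5)
print(fixed, math.cos(fixed))        # both ~0.7390851332151607
```

Completeness is what guarantees that the Cauchy sequence of iterates actually has a limit inside the space; on an incomplete space the same iteration can "converge" to a missing point.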
One way to visualize this identification with the real numbers as usually viewed is that the equivalence class consisting of those Cauchy sequences of rational numbers that "ought" to have a given real limit is identified with that real number. The truncations of the decimal expansion give just one choice of Cauchy sequence in the relevant equivalence class. For a prime the -adic numbers arise by completing the rational numbers with respect to a different metric. If the earlier completion procedure is applied to a normed vector space, the result is a Banach space containing the original space as a dense subspace, and if it is applied to an inner product space, the result is a Hilbert space containing the original space as a dense subspace. Topologically complete spaces Completeness is a property of the metric and not of the topology, meaning that a complete metric space can be homeomorphic to a non-complete one. An example is given by the real numbers, which are complete but homeomorphic to the open interval , which is not complete. In topology one considers completely metrizable spaces, spaces for which there exists at least one complete metric inducing the given topology. Completely metrizable spaces can be characterized as those spaces that can be written as an intersection of countably many open subsets of some complete metric space. Since the conclusion of the Baire category theorem is purely topological, it applies to these spaces as well. Completely metrizable spaces are often called topologically complete. However, the latter term is somewhat arbitrary since metric is not the most general structure on a topological space for which one can talk about completeness (see the section Alternatives and generalizations). Indeed, some authors use the term topologically complete for a wider class of topological spaces, the completely uniformizable spaces. A topological space homeomorphic to a separable complete metric space is called a Polish space. Alternatives and generalizations Since Cauchy sequences can also be defined in general topological groups, an alternative to relying on a metric structure for defining completeness and constructing the completion of a space is to use a group structure. This is most often seen in the context of topological vector spaces, but requires only the existence of a continuous "subtraction" operation. In this setting, the distance between two points and is gauged not by a real number via the metric in the comparison but by an open neighbourhood of via subtraction in the comparison A common generalisation of these definitions can be found in the context of a uniform space, where an entourage is a set of all pairs of points that are at no more than a particular "distance" from each other. It is also possible to replace Cauchy sequences in the definition of completeness by Cauchy nets or Cauchy filters. If every Cauchy net (or equivalently every Cauchy filter) has a limit in then is called complete. One can furthermore construct a completion for an arbitrary uniform space similar to the completion of metric spaces. The most general situation in which Cauchy nets apply is Cauchy spaces; these too have a notion of completeness and completion just like uniform spaces. See also Notes References Kreyszig, Erwin, Introductory functional analysis with applications'' (Wiley, New York, 1978). Lang, Serge, "Real and Functional Analysis" Metric geometry Topology Uniform spaces
Complete metric space
[ "Physics", "Mathematics" ]
2,131
[ "Uniform spaces", "Space (mathematics)", "Topological spaces", "Topology", "Space", "Geometry", "Spacetime" ]
7,834
https://en.wikipedia.org/wiki/Chain%20reaction
A chain reaction is a sequence of reactions where a reactive product or by-product causes additional reactions to take place. In a chain reaction, positive feedback leads to a self-amplifying chain of events. Chain reactions are one way that systems which are not in thermodynamic equilibrium can release energy or increase entropy in order to reach a state of higher entropy. For example, a system may not be able to reach a lower energy state by releasing energy into the environment, because it is hindered or prevented in some way from taking the path that will result in the energy release. If a reaction results in a small energy release making way for more energy releases in an expanding chain, then the system will typically collapse explosively until much or all of the stored energy has been released. A macroscopic metaphor for chain reactions is thus a snowball causing a larger snowball until finally an avalanche results ("snowball effect"). This is a result of stored gravitational potential energy seeking a path of release over friction. Chemically, the equivalent to a snow avalanche is a spark causing a forest fire. In nuclear physics, a single stray neutron can result in a prompt critical event, which may finally be energetic enough for a nuclear reactor meltdown or (in a bomb) a nuclear explosion. Another metaphor for a chain reaction is the domino effect, named after the act of domino toppling, where the simple action of toppling one domino leads to all dominoes eventually toppling, even if they are significantly larger. Numerous chain reactions can be represented by a mathematical model based on Markov chains. Chemical chain reactions History In 1913, the German chemist Max Bodenstein first put forth the idea of chemical chain reactions. If two molecules react, not only molecules of the final reaction products are formed, but also some unstable molecules which can further react with the parent molecules with a far larger probability than the initial reactants. (In the new reaction, further unstable molecules are formed besides the stable products, and so on.) In 1918, Walther Nernst proposed that the photochemical reaction between hydrogen and chlorine is a chain reaction in order to explain the observed quantum yield phenomenon. This means that one photon of light is responsible for the formation of as many as 10⁶ molecules of the product HCl. Nernst suggested that the photon dissociates a Cl2 molecule into two Cl atoms which each initiate a long chain of reaction steps forming HCl. In 1923, Danish and Dutch scientists J. A. Christiansen and Hendrik Anthony Kramers, in an analysis of the formation of polymers, pointed out that such a chain reaction need not start with a molecule excited by light, but could also start with two molecules colliding violently due to thermal energy, as previously proposed for the initiation of chemical reactions by van 't Hoff. Christiansen and Kramers also noted that if, in one link of the reaction chain, two or more unstable molecules are produced, the reaction chain would branch and grow. The result is in fact exponential growth, thus giving rise to explosive increases in reaction rates, and indeed to chemical explosions themselves. This was the first proposal for the mechanism of chemical explosions. A quantitative theory of chemical chain reactions was created later by the Soviet physicist Nikolay Semyonov in 1934. 
Semyonov shared the Nobel Prize in 1956 with Sir Cyril Norman Hinshelwood, who independently developed many of the same quantitative concepts. Typical steps The main types of steps in chain reaction are of the following types. Initiation (formation of active particles or chain carriers, often free radicals, in either a thermal or a photochemical step) Propagation (may comprise several elementary steps in a cycle, where the active particle through reaction forms another active particle which continues the reaction chain by entering the next elementary step). In effect the active particle serves as a catalyst for the overall reaction of the propagation cycle. Particular cases are: chain branching (a propagation step where one active particle enters the step and two or more are formed); chain transfer (a propagation step in which the active particle is a growing polymer chain which reacts to form an inactive polymer whose growth is terminated and an active small particle (such as a radical), which may then react to form a new polymer chain). Termination (elementary step in which the active particle loses its activity; e. g. by recombination of two free radicals). The chain length is defined as the average number of times the propagation cycle is repeated, and equals the overall reaction rate divided by the initiation rate. Some chain reactions have complex rate equations with fractional order or mixed order kinetics. Detailed example: the hydrogen-bromine reaction The reaction H2 + Br2 → 2 HBr proceeds by the following mechanism: Initiation Br2 → 2 Br• (thermal) or Br2 + hν → 2 Br• (photochemical) each Br atom is a free radical, indicated by the symbol "•" representing an unpaired electron. Propagation (here a cycle of two steps) Br• + H2 → HBr + H• H• + Br2 → HBr + Br• the sum of these two steps corresponds to the overall reaction H2 + Br2 → 2 HBr, with catalysis by Br• which participates in the first step and is regenerated in the second step. Retardation (inhibition) H• + HBr → H2 + Br• this step is specific to this example, and corresponds to the first propagation step in reverse. Termination 2 Br• → Br2 recombination of two radicals, corresponding in this example to initiation in reverse. As can be explained using the steady-state approximation, the thermal reaction has an initial rate of fractional order (3/2), and a complete rate equation with a two-term denominator (mixed-order kinetics). Further chemical examples The reaction 2 H2 + O2 → 2 H2O provides an example of chain branching. The propagation is a sequence of two steps whose net effect is to replace an H atom by another H atom plus two OH radicals. This leads to an explosion under certain conditions of temperature and pressure. H• + O2 → •OH + •O• •O• + H2 → •OH + H• In chain-growth polymerization, the propagation step corresponds to the elongation of the growing polymer chain. Chain transfer corresponds to transfer of the activity from this growing chain, whose growth is terminated, to another molecule which may be a second growing polymer chain. For polymerization, the kinetic chain length defined above may differ from the degree of polymerization of the product macromolecule. Polymerase chain reaction, a technique used in molecular biology to amplify (make many copies of) a piece of DNA by in vitro enzymatic replication using a DNA polymerase. 
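The steady-state analysis alluded to above can be reproduced symbolically. In the sketch below, the five steps of the hydrogen–bromine mechanism are assumed to follow mass-action kinetics with rate constants k1 (initiation), k2 and k3 (the two propagation steps), k4 (retardation) and k5 (termination); this labelling is mine, and the code is only a plausibility check, not a substitute for the kinetics literature.

```python
import sympy as sp

# Rate constants assumed for: k1 initiation, k2/k3 propagation,
# k4 retardation, k5 termination (the labels are mine).
k1, k2, k3, k4, k5 = sp.symbols('k1:6', positive=True)
H2, Br2, HBr, H = sp.symbols('H2 Br2 HBr H', positive=True)

# Adding the steady-state equations for Br* and H* leaves 2*k1*Br2 = 2*k5*Br**2,
# so the steady-state bromine-atom concentration is:
Br_ss = sp.sqrt(k1 * Br2 / k5)

# Steady state for H*:  k2*[Br][H2] = k3*[H][Br2] + k4*[H][HBr]
H_ss = sp.solve(sp.Eq(k2 * Br_ss * H2, k3 * H * Br2 + k4 * H * HBr), H)[0]

# Overall rate of HBr formation: k2*[Br][H2] + k3*[H][Br2] - k4*[H][HBr]
rate = k2 * Br_ss * H2 + k3 * H_ss * Br2 - k4 * H_ss * HBr
print(sp.simplify(rate))
# -> 2*k2*k3*sqrt(k1/k5)*H2*Br2**(3/2) / (k3*Br2 + k4*HBr), i.e. an initial
#    rate of fractional (3/2) order with a mixed-order denominator, as stated above.
```

Setting HBr = 0 in the simplified expression recovers the initial rate 2 k2 (k1/k5)^(1/2) [H2][Br2]^(1/2), the 3/2-order behaviour mentioned in the text.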
Acetaldehyde pyrolysis and rate equation The pyrolysis (thermal decomposition) of acetaldehyde, CH3CHO (g) → CH4 (g) + CO (g), proceeds via the Rice–Herzfeld mechanism: Initiation (formation of free radicals): CH3CHO (g) → •CH3 (g) + •CHO (g) k1 The methyl and CHO groups are free radicals. Propagation (two steps): •CH3 (g) + CH3CHO (g) → CH4 (g) + •CH3CO (g) k2 This reaction step provides methane, which is one of the two main products. •CH3CO (g) → CO (g) + •CH3 (g) k3 The product •CH3CO (g) of the previous step gives rise to carbon monoxide (CO), which is the second main product. The sum of the two propagation steps corresponds to the overall reaction CH3CHO (g) → CH4 (g) + CO (g), catalyzed by a methyl radical •CH3. Termination: •CH3 (g) + •CH3 (g) → C2H6 (g) k4 This reaction is the only source of ethane (minor product) and it is concluded to be the main chain-ending step. Although this mechanism explains the principal products, there are others that are formed to a minor degree, such as acetone (CH3COCH3) and propanal (CH3CH2CHO). Applying the steady-state approximation for the intermediate species •CH3 (g) and •CH3CO (g), the rate law for the formation of methane and the order of reaction are found: The rate of formation of the product methane is d[CH4]/dt = k2[•CH3][CH3CHO] (1). For the intermediates, d[•CH3]/dt = k1[CH3CHO] − k2[•CH3][CH3CHO] + k3[•CH3CO] − 2k4[•CH3]^2 = 0 (2) and d[•CH3CO]/dt = k2[•CH3][CH3CHO] − k3[•CH3CO] = 0 (3). Adding (2) and (3), we obtain k1[CH3CHO] − 2k4[•CH3]^2 = 0, so that [•CH3] = (k1/2k4)^(1/2) [CH3CHO]^(1/2) (4). Using (4) in (1) gives the rate law d[CH4]/dt = k2 (k1/2k4)^(1/2) [CH3CHO]^(3/2), which is order 3/2 in the reactant CH3CHO. Nuclear chain reactions A nuclear chain reaction was proposed by Leo Szilárd in 1933, shortly after the neutron was discovered, yet more than five years before nuclear fission was first discovered. Szilárd knew of chemical chain reactions, and he had been reading about an energy-producing nuclear reaction involving high-energy protons bombarding lithium, demonstrated by John Cockcroft and Ernest Walton, in 1932. Now, Szilárd proposed to use neutrons theoretically produced from certain nuclear reactions in lighter isotopes, to induce further reactions in light isotopes that produced more neutrons. This would in theory produce a chain reaction at the level of the nucleus. He did not envision fission as one of these neutron-producing reactions, since this reaction was not known at the time. Experiments he proposed using beryllium and indium failed. Later, after fission was discovered in 1938, Szilárd immediately realized the possibility of using neutron-induced fission as the particular nuclear reaction necessary to create a chain reaction, so long as fission also produced neutrons. In 1939, with Enrico Fermi, Szilárd proved this neutron-multiplying reaction in uranium. In this reaction, a neutron plus a fissionable atom causes a fission resulting in a larger number of neutrons than the single one that was consumed in the initial reaction. Thus was born the practical nuclear chain reaction by the mechanism of neutron-induced nuclear fission. Specifically, if one or more of the produced neutrons themselves interact with other fissionable nuclei, and these also undergo fission, then there is a possibility that the macroscopic overall fission reaction will not stop, but continue throughout the reaction material. This is then a self-propagating and thus self-sustaining chain reaction. This is the principle for nuclear reactors and atomic bombs. Demonstration of a self-sustaining nuclear chain reaction was accomplished by Enrico Fermi and others, in the successful operation of Chicago Pile-1, the first artificial nuclear reactor, in late 1942. 
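The criticality condition described here — whether each fission generation produces, on average, more or fewer neutrons than the last — can be illustrated with a toy branching-process simulation. All numbers below (the assumed yield of 2–3 neutrons per fission and the chosen multiplication factors) are illustrative, not reactor physics.

```python
import random

def neutron_generations(k_eff, n_gen=15, seed=0):
    """Toy generation-by-generation neutron chain (illustrative numbers only).
    Each neutron is assumed to cause a fission with probability p, and each
    fission to release 2 or 3 neutrons (mean 2.5), so k_eff = p * 2.5."""
    rng = random.Random(seed)
    p = k_eff / 2.5
    counts = [1]                       # start from a single stray neutron
    for _ in range(n_gen):
        children = 0
        for _ in range(counts[-1]):
            if rng.random() < p:       # this neutron induces a fission
                children += rng.choice((2, 3))
        counts.append(children)
        if children == 0 or children > 10**6:
            break
    return counts

print("subcritical   (k_eff = 0.9):", neutron_generations(0.9))
print("supercritical (k_eff = 1.2):", neutron_generations(1.2))
```

With the multiplication factor below 1 the chain dies out after a few generations; above 1 the neutron population grows roughly geometrically, which is the self-sustaining regime exploited in reactors and weapons.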
Electron avalanche in gases An electron avalanche happens between two unconnected electrodes in a gas when an electric field exceeds a certain threshold. Random thermal collisions of gas atoms may result in a few free electrons and positively charged gas ions, in a process called impact ionization. Acceleration of these free electrons in a strong electric field causes them to gain energy, and when they impact other atoms, the energy causes release of new free electrons and ions (ionization), which fuels the same process. If this process happens faster than it is naturally quenched by ions recombining, the new ions multiply in successive cycles until the gas breaks down into a plasma and current flows freely in a discharge. Electron avalanches are essential to the dielectric breakdown process within gases. The process can culminate in corona discharges, streamers, leaders, or in a spark or continuous electric arc that completely bridges the gap. The process may extend huge sparks — streamers in lightning discharges propagate by formation of electron avalanches created in the high potential gradient ahead of the streamers' advancing tips. Once begun, avalanches are often intensified by the creation of photoelectrons as a result of ultraviolet radiation emitted by the excited medium's atoms in the aft-tip region. The extremely high temperature of the resulting plasma cracks the surrounding gas molecules and the free ions recombine to create new chemical compounds. The process can also be used to detect radiation that initiates the process, as the passage of a single particles can be amplified to large discharges. This is the mechanism of a Geiger counter and also the visualization possible with a spark chamber and other wire chambers. Avalanche breakdown in semiconductors An avalanche breakdown process can happen in semiconductors, which in some ways conduct electricity analogously to a mildly ionized gas. Semiconductors rely on free electrons knocked out of the crystal by thermal vibration for conduction. Thus, unlike metals, semiconductors become better conductors the higher the temperature. This sets up conditions for the same type of positive feedback—heat from current flow causes temperature to rise, which increases charge carriers, lowering resistance, and causing more current to flow. This can continue to the point of complete breakdown of normal resistance at a semiconductor junction, and failure of the device (this may be temporary or permanent depending on whether there is physical damage to the crystal). Certain devices, such as avalanche diodes, deliberately make use of the effect. Living organisms Examples of chain reactions in living organisms include excitation of neurons in epilepsy and lipid peroxidation. In peroxidation, a lipid radical reacts with oxygen to form a peroxyl radical (L• + O2 → LOO•). The peroxyl radical then oxidises another lipid, thus forming another lipid radical (LOO• + L–H → LOOH + L•). A chain reaction in glutamatergic synapses is the cause of synchronous discharge in some epileptic seizures. See also Cascading failure Multiple-vehicle collision Rube Goldberg machine References External links IUPAC Gold Book - Chain reaction Chemical kinetics Metaphors referring to objects Causality
Chain reaction
[ "Physics", "Chemistry" ]
2,864
[ "Chemical reaction engineering", "Chemical kinetics" ]
8,267
https://en.wikipedia.org/wiki/Dimensional%20analysis
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae. Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. Incommensurable physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless. Any physically meaningful equation, or inequality, must have the same dimensions on its left and right sides, a property known as dimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation. The concept of physical dimension or quantity dimension, and of dimensional analysis, was introduced by Joseph Fourier in 1822. Formulation The Buckingham π theorem describes how every physically meaningful equation involving variables can be equivalently rewritten as an equation of dimensionless parameters, where m is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables. A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis, and involves scaling quantities by characteristic units of a system or physical constants of nature. This may give insight into the fundamental properties of the system, as illustrated in the examples below. The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The dimension of a physical quantity is more fundamental than some scale or unit used to express the amount of that physical quantity. For example, mass is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary". There are many possible choices of base physical dimensions. The SI standard selects the following dimensions and corresponding dimension symbols: time (T), length (L), mass (M), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J). The symbols are by convention usually written in roman sans serif typeface. Mathematically, the dimension of the quantity is given by where , , , , , , are the dimensional exponents. 
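Because the dimension of a quantity is a product of base dimensions raised to (usually integer) exponents, dimensions can be represented as exponent vectors, with multiplication and division of quantities corresponding to addition and subtraction of exponents. A minimal Python sketch over the seven SI base dimensions (the names and representation are my own choices, not a standard library):

```python
# Represent a dimension as a tuple of exponents over the SI base dimensions
# (T, L, M, I, Theta, N, J); rational exponents would need fractions.Fraction.
BASE = ("T", "L", "M", "I", "Theta", "N", "J")

def dim(**exponents):
    return tuple(exponents.get(b, 0) for b in BASE)

def mul(a, b):   # multiplying quantities adds their dimensional exponents
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):   # dividing quantities subtracts them
    return tuple(x - y for x, y in zip(a, b))

TIME, LENGTH, MASS = dim(T=1), dim(L=1), dim(M=1)
VELOCITY      = div(LENGTH, TIME)         # T^-1 L
ACCELERATION  = div(VELOCITY, TIME)       # T^-2 L
FORCE         = mul(MASS, ACCELERATION)   # T^-2 L M
ENERGY        = mul(FORCE, LENGTH)        # T^-2 L^2 M
DIMENSIONLESS = div(ENERGY, ENERGY)       # all exponents zero: "dimension one"

print(dict(zip(BASE, FORCE)))    # {'T': -2, 'L': 1, 'M': 1, 'I': 0, ...}
```

The derived entries reproduce the familiar results T−1L for speed, T−2LM for force and T−2L2M for energy.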
Other physical quantities could be defined as the base quantities, as long as they form a basis – for instance, one could replace the dimension (I) of electric current of the SI basis with a dimension (Q) of electric charge, since . A quantity that has only (with all other exponents zero) is known as a geometric quantity. A quantity that has only both and is known as a kinematic quantity. A quantity that has only all of , , and is known as a dynamic quantity. A quantity that has all exponents null is said to have dimension one. The unit chosen to express a physical quantity and its dimension are related, but not identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, ; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity. There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity, although this does not invalidate the usefulness of dimensional analysis. Simple cases As examples, the dimension of the physical quantity speed is The dimension of the physical quantity acceleration is The dimension of the physical quantity force is The dimension of the physical quantity pressure is The dimension of the physical quantity energy is The dimension of the physical quantity power is The dimension of the physical quantity electric charge is The dimension of the physical quantity voltage is The dimension of the physical quantity capacitance is Rayleigh's method In dimensional analysis, Rayleigh's method is a conceptual tool used in physics, chemistry, and engineering. It expresses a functional relationship of some variables in the form of an exponential equation. It was named after Lord Rayleigh. The method involves the following steps: Gather all the independent variables that are likely to influence the dependent variable. If is a variable that depends upon independent variables , , , ..., , then the functional equation can be written as . Write the above equation in the form , where is a dimensionless constant and , , , ..., are arbitrary exponents. Express each of the quantities in the equation in some base units in which the solution is required. By using dimensional homogeneity, obtain a set of simultaneous equations involving the exponents , , , ..., . Solve these equations to obtain the values of the exponents , , , ..., . Substitute the values of exponents in the main equation, and form the non-dimensional parameters by grouping the variables with like exponents. As a drawback, Rayleigh's method does not provide any information regarding number of dimensionless groups to be obtained as a result of dimensional analysis. Concrete numbers and base units Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number—a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g. 60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/h. 
Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m2 for square metres), or combinations thereof. A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed. For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m3), thus they are considered derived or compound units. Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s−2). The newton is defined as . Percentages, derivatives and integrals Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since . Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to. Thus: position () has the dimension L (length); derivative of position with respect to time (, velocity) has dimension T−1L—length from position, time due to the gradient; the second derivative (, acceleration) has dimension . Likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator. force has the dimension (mass multiplied by acceleration); the integral of force with respect to the distance () the object has travelled (, work) has dimension . In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year). In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency)—but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance) and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged. Dimensional homogeneity (commensurability) The most basic rule of dimensional analysis is that of dimensional homogeneity. However, the dimensions form an abelian group under multiplication, so: For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h. The rule implies that in a physically meaningful expression only quantities of the same dimension can be added, subtracted, or compared. 
For example, if , and denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression is meaningful, but the heterogeneous expression is meaningless. However, is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions. Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension , they are fundamentally different physical quantities. To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use to convert 35 yards to 32.004 m. A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables. For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle gives rise to the form that a conversion factor between two units that measure the same dimension must take multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres. Conversion factor In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and . The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to . Since any quantity can be multiplied by 1 without changing it, the expression "" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, because , and bar/bar cancels out, so . Applications Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well. Mathematics A simple application of dimensional analysis to mathematics is in computing the form of the volume of an -ball (the solid ball in n dimensions), or the area of its surface, the -sphere: being an -dimensional figure, the volume scales as , while the surface area, being -dimensional, scales as . Thus the volume of the -ball in terms of the radius is , for some constant . Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone. Finance, economics, and accounting In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios. For example, the P/E ratio has dimensions of time (unit: year), and can be interpreted as "years of earnings to earn the price paid". In economics, debt-to-GDP ratio also has the unit year (debt has a unit of currency, GDP has a unit of currency/year). Velocity of money has a unit of 1/years (GDP/money supply has a unit of currency/year over currency): how often a unit of currency circulates per year. 
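As a small worked illustration of the stock/flow bookkeeping just described (all figures below are invented for the example):

```python
# Stock/flow bookkeeping for the ratios above (figures invented for the example)
debt         = 24e12   # a stock: currency
gdp          = 20e12   # a flow: currency per year
money_supply = 4e12    # a stock: currency

debt_to_gdp = debt / gdp          # currency / (currency/year) -> years
velocity    = gdp / money_supply  # (currency/year) / currency -> 1/year

print(f"debt-to-GDP: {debt_to_gdp:.1f} years")        # 1.2 years
print(f"velocity of money: {velocity:.1f} per year")  # 5.0 per year
```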
Annual continuously compounded interest rates and simple interest rates are often expressed as a percentage (adimensional quantity) while time is expressed as an adimensional quantity consisting of the number of years. However, if the time includes year as the unit of measure, the dimension of the rate is 1/year. Of course, there is nothing special (apart from the usual convention) about using year as a unit of time: any other time unit can be used. Furthermore, if rate and time include their units of measure, the use of different units for each is not problematic. In contrast, rate and time need to refer to a common period if they are adimensional. (Note that effective interest rates can only be defined as adimensional quantities.) In financial analysis, bond duration can be defined as , where is the value of a bond (or portfolio), is the continuously compounded interest rate and is a derivative. From the previous point, the dimension of is 1/time. Therefore, the dimension of duration is time (usually expressed in years) because is in the "denominator" of the derivative. Fluid mechanics In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships. In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include: Reynolds number (), generally important in all types of fluid problems: Froude number (), modeling flow with a free surface: Euler number (), used in problems in which pressure is of interest: Mach number (), important in high speed flows where the velocity approaches or exceeds the local speed of sound: where is the local speed of sound. History The origins of dimensional analysis have been disputed by historians. The first written application of dimensional analysis has been credited to François Daviet, a student of Joseph-Louis Lagrange, in a 1799 article at the Turin Academy of Science. This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually later formalized in the Buckingham π theorem. Simeon Poisson also treated the same problem of the parallelogram law by Daviet, in his treatise of 1811 and 1833 (vol I, p. 39). In the second edition of 1833, Poisson explicitly introduces the term dimension instead of the Daviet homogeneity. In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions based on the idea that physical laws like should be independent of the units employed to measure the physical variables. James Clerk Maxwell played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived. Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant is taken as unity, thereby defining . 
By assuming a form of Coulomb's law in which the Coulomb constant ke is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were , which, after substituting his equation for mass, results in charge having the same dimensions as mass, viz. . Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. Rayleigh first published the technique in his 1877 book The Theory of Sound. The original meaning of the word dimension, in Fourier's Theorie de la Chaleur, was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time. This was slightly changed by Maxwell, who said the dimensions of acceleration are T−2L, instead of just the exponents. Examples A simple example: period of a harmonic oscillator What is the period of oscillation of a mass attached to an ideal linear spring with spring constant suspended in gravity of strength ? That period is the solution for of some dimensionless equation in the variables , , , and . The four quantities have the following dimensions: [T]; [M]; [M/T2]; and [L/T2]. From these we can form only one dimensionless product of powers of our chosen variables, , and putting for some dimensionless constant gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well. The variable does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines with , , and , because is the only quantity that involves the dimension L. This implies that in this problem the is irrelevant. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of : it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: , for some dimensionless constant (equal to from the original dimensionless equation). When faced with a case where dimensional analysis rejects a variable (, here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here. When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as . A more complex example: energy of a vibrating wire Consider the case of a vibrating wire of length (L) vibrating with an amplitude (L). 
The wire has a linear density (M/L) and is under tension (LM/T2), and we want to know the energy (L2M/T2) in the wire. Let and be two dimensionless products of powers of the variables chosen, given by The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation where is some unknown function, or, equivalently as where is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical analysis, we might proceed to experiments to discover the form for the unknown function . But our experiments are simpler than in the absence of dimensional analysis. We'd perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to , and so infer that . The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident. The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated, the set of variables involved are not apparent, and the underlying equations hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis. A third example: demand versus capacity for a rotating disc Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness (L) and radius (L). The disc has a density (M/L3), rotates at an angular velocity (T−1) and this leads to a stress (T−2L−1M) in the material. There is a theoretical linear elastic solution, given by Lame, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius then the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined though consideration of three-dimensional elasticity and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following () non-dimensional groups: demand/capacity = thickness/radius or aspect ratio = Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot and this can be used as a design/assessment chart for rotating discs. 
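The Buckingham π computation behind examples like the pebble-in-a-stream problem above can be carried out mechanically as a null-space calculation on the dimensional matrix. The sketch below assumes the relevant variables are the critical velocity, the pebble size, and the fluid's density and viscosity; under those assumptions the single dimensionless group that emerges is (a power of) the Reynolds number.

```python
import sympy as sp

# Dimensional matrix for the pebble problem; rows are the exponents of M, L, T
# in each assumed variable: V (critical velocity), d (pebble size),
# rho (fluid density), mu (fluid viscosity).
A = sp.Matrix([
    [ 0, 0,  1,  1],   # M
    [ 1, 1, -3, -1],   # L
    [-1, 0,  0, -1],   # T
])

null = A.nullspace()
print(len(null))       # 4 variables - rank 3 = 1 dimensionless group
print(null[0].T)       # proportional to [1, 1, 1, -1]: V*d*rho/mu, the Reynolds
                       # number (sympy may return the reciprocal's exponents)
```

The count of independent dimensionless groups, four variables minus a rank-3 dimensional matrix, matches the statement of the π theorem.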
Properties Mathematical properties The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: The identity is written as 1; , and the inverse of L is 1/L or L−1. L raised to any integer power is a member of the group, having an inverse of L or 1/L. The operation of the group is multiplication, having the usual rules for handling exponents (). Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second). An abelian group is equivalent to a module over the integers, with the dimensional symbol corresponding to the tuple . When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one other, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module. A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa). The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module, . In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like . However, it is not possible to take arbitrary fractional powers of units, due to representation-theoretic obstructions. One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions and , one has the vector spaces and , and can define as the tensor product. Similarly, the dual space can be interpreted as having "negative" dimensions. This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar. The set of units of the physical quantities involved in a problem correspond to a set of vectors (or a matrix). The nullity describes some number (e.g., ) of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, . (In fact these ways completely span the null subspace of another different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity can be expressed in the general form Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form Knowing this restriction can be a powerful tool for obtaining new insight into the system. Mechanics The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. 
For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because they must form a basis: they must span the space, and be linearly independent. For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = LM/T2], L, M, while the latter can be expressed as [T = (LM/F)1/2], L, M. On the other hand, length, velocity and time (T, L, V) do not form a set of base dimensions for mechanics, for two reasons: There is no way to obtain mass – or anything derived from it, such as force – without introducing another base dimension (thus, they do not span the space). Velocity, being expressible in terms of length and time (), is redundant (the set is not linearly independent). Other fields of physics and chemistry Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols. In electromagnetism, for example, it may be useful to use dimensions of T, L, M and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, ≈ ) is also defined as a base dimension, N. In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma-, electron- and critical-densities in addition to the electromagnetic vector potential. The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communications are common and necessary features. Polynomials and transcendental functions Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor. This excludes polynomials of more than one term or transcendental functions not of that form. Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the square of certain dimensioned quantities are dimensionless.) While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity , where the logarithm is taken in any base, holds for dimensionless numbers and , but it does not hold if and are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not. 
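The requirement that transcendental functions take dimensionless arguments can be enforced mechanically. The toy class below (not a real units library; all names are my own) tracks dimensional exponents and refuses to take the logarithm of anything that still carries a dimension, while accepting the ratio of two lengths.

```python
import math

class Quantity:
    """Toy value-with-dimensions pair (illustrative; not a real units library)."""
    def __init__(self, value, **dims):
        self.value = value
        self.dims = {k: v for k, v in dims.items() if v}

    def __truediv__(self, other):
        dims = {k: self.dims.get(k, 0) - other.dims.get(k, 0)
                for k in set(self.dims) | set(other.dims)}
        return Quantity(self.value / other.value, **dims)

def safe_log(q):
    # transcendental functions only accept dimensionless arguments
    if q.dims:
        raise TypeError(f"log() needs a dimensionless argument, got exponents {q.dims}")
    return math.log(q.value)

a, b = Quantity(8.0, L=1), Quantity(2.0, L=1)
print(safe_log(a / b))     # ratio of two lengths is dimensionless: ln 4
try:
    safe_log(a)            # a bare length has no logarithm
except TypeError as err:
    print(err)
```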
Similarly, while one can evaluate monomials () of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for , the expression makes sense (as an area), while for , the expression does not make sense. However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example, This is the height to which an object rises in time  if the acceleration of gravity is 9.8 and the initial upward speed is 500 . It is not necessary for to be in seconds. For example, suppose  = 0.01 minutes. Then the first term would be Combining units and numerical values The value of a dimensional physical quantity is written as the product of a unit [] within the dimension and a dimensionless numerical value or numerical factor, . When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed: is identical to The factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted. Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units. Quantity equations A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities. In contrast, in a numerical-value equation, just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical values is referenced to a specific unit. For example, a quantity equation for displacement as speed multiplied by time difference would be: for = 5 m/s, where and may be expressed in any units, converted if necessary. In contrast, a corresponding numerical-value equation would be: where is the numeric value of when expressed in seconds and is the numeric value of when expressed in metres. Generally, the use of numerical-value equations is discouraged. Dimensionless concepts Constants The dimensionless constants that arise in the results obtained, such as the in the Poiseuille's Law problem and the in the spring problems discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc. 
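A minimal sketch of the procedure described above for adding like-dimensioned quantities expressed in different units, using the conversion factor 0.3048 m/ft as the "dimensionless 1":

```python
# Adding 1 metre and 1 foot: convert with the factor 0.3048 m/ft (numerically
# equivalent to multiplying by 1), then add the numerical values (illustrative).
M_PER_FT = 0.3048

def add_lengths_in_metres(metres, feet):
    return metres + feet * M_PER_FT   # both terms now carry the unit "metre"

print(add_lengths_in_metres(1.0, 1.0))   # 1.3048 (metres)
```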
Formalisms Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless, e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, ) becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be , where is the dimension of the lattice. It has been argued by some physicists, e.g., Michael J. Duff, that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants: , , and , in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other. Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants , , and (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit , and . In problems involving a gravitational field the latter limit should be taken such that the field stays finite. Dimensional equivalences Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force. SI units Programming languages Dimensional correctness as part of type checking has been studied since 1977. Implementations for Ada and C++ were described in 1985 and 1988. Kennedy's 1996 thesis describes an implementation in Standard ML, and later in F#. There are implementations for Haskell, OCaml, and Rust, Python, and a code checker for Fortran. Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices. McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure. Mathematica 13.2 has a function for transformations with quantities named NondimensionalizationTransform that applies a nondimensionalization transform to an equation. Mathematica also has a function to find the dimensions of a unit such as 1 J named UnitDimensions. Mathematica also has a function that will find dimensionally equivalent combinations of a subset of physical quantities named DimensionalCombations. Mathematica can also factor out certain dimension with UnitDimensions by specifying an argument to the function UnityDimensions. For example, you can use UnityDimensions to factor out angles. In addition to UnitDimensions, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions. Geometry: position vs. displacement Affine quantities Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. 
In mathematics scalars are considered a special case of vectors; vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin. While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change). Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable: adding two displacements should yield a new displacement (walking ten paces then twenty paces gets you thirty paces forward), adding a displacement to a position should yield a new position (walking one block down the street from an intersection gets you to the next intersection), subtracting two positions should yield a displacement, but one may not add two positions. This illustrates the subtle distinction between affine quantities (ones modeled by an affine space, such as position) and vector quantities (ones modeled by a vector space, such as displacement). Vector quantities may be added to each other, yielding a new vector quantity, and a vector quantity may be added to a suitable affine quantity (a vector space acts on an affine space), yielding a new affine quantity. Affine quantities cannot be added, but may be subtracted, yielding relative quantities which are vectors, and these relative differences may then be added to each other or to an affine quantity. Properly then, positions have dimension of affine length, while displacements have dimension of vector length. To assign a number to an affine unit, one must not only choose a unit of measurement, but also a point of reference, while to assign a number to a vector unit only requires a unit of measurement. Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis. This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero, −273.15 °C ≘ 0 K = 0 °R ≘ −459.67 °F, where the symbol ≘ means corresponds to, since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated. For temperature differences, 1 K = 1 °C ≠ 1 °F = 1 °R. (Here °R refers to the Rankine scale, not the Réaumur scale). Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 °F / 1 K (although the ratio is not a constant value). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C. 
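The affine/vector distinction for temperature can be made concrete with two small Python classes (the class names and representation are my own): absolute temperatures may be subtracted to give a difference, and a difference may be added to an absolute temperature, but adding two absolute temperatures is rejected.

```python
class TempDelta:
    """A temperature difference (vector-like quantity), in kelvins. Toy class."""
    def __init__(self, dk):
        self.dk = dk
    def __add__(self, other):
        if isinstance(other, TempDelta):
            return TempDelta(self.dk + other.dk)   # difference + difference
        return NotImplemented

class Temp:
    """An absolute temperature (affine quantity), stored in kelvins. Toy class."""
    def __init__(self, k):
        self.k = k
    def __add__(self, other):
        if isinstance(other, TempDelta):
            return Temp(self.k + other.dk)         # point + difference -> point
        raise TypeError("cannot add two absolute temperatures")
    def __sub__(self, other):
        if isinstance(other, Temp):
            return TempDelta(self.k - other.k)     # point - point -> difference
        if isinstance(other, TempDelta):
            return Temp(self.k - other.dk)
        return NotImplemented

boiling, freezing = Temp(373.15), Temp(273.15)
print((boiling - freezing).dk)          # 100.0 K of difference (100 °C steps)
print((freezing + TempDelta(25)).k)     # 298.15 K
try:
    boiling + freezing                  # meaningless: no origin-independent sum
except TypeError as err:
    print(err)
```

The same pattern applies to position versus displacement and to moments in time versus durations.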
Orientation and frame of reference Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a direction. (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference. This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis. Huntley's extensions Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rank of the dimensional matrix. He introduced two approaches: The magnitudes of the components of a vector are to be considered dimensionally independent. For example, rather than an undifferentiated length dimension L, we may have Lx represent dimension in the x-direction, and so forth. This requirement stems ultimately from the requirement that each component of a physically meaningful equation (scalar, vector, or tensor) must be dimensionally consistent. Mass as a measure of the quantity of matter is to be considered dimensionally independent from mass as a measure of inertia. Directed dimensions As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component and a horizontal velocity component , assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then , the distance travelled, with dimension L, , , both dimensioned as T−1L, and the downward acceleration of gravity, with dimension T−2L. With these four quantities, we may conclude that the equation for the range may be written: Or dimensionally from which we may deduce that and , which leaves one exponent undetermined. This is to be expected since we have two fundamental dimensions T and L, and four parameters, with one equation. However, if we use directed length dimensions, then will be dimensioned as T−1L, as T−1L, as L and as T−2L. The dimensional equation becomes: and we may solve completely as , and . The increase in deductive power gained by the use of directed length dimensions is apparent. Huntley's concept of directed length dimensions however has some serious limitations: It does not deal well with vector equations involving the cross product, nor does it handle well the use of angles as physical variables. It also is often quite difficult to assign the L, L, L, L, symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: It is unclear as to what parts of the problem that the notion of "symmetry" is being invoked. Is it the symmetry of the physical body that forces are acting upon, or to the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries? Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? 
Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems. Quantity of matter In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass), and mass as a measure of the quantity of matter. Quantity of matter is defined by Huntley as a quantity proportional to inertial mass, but not itself implicating inertial properties. No further restrictions are added to its definition. For example, consider the derivation of Poiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables: the mass flow rate dm/dt with dimension MT−1, the pressure gradient along the pipe dp/dx with dimension ML−2T−2, the density ρ with dimension ML−3, the dynamic viscosity η with dimension ML−1T−1, and the radius of the pipe r with dimension L. There are three fundamental dimensions, so the above five quantities will yield two independent dimensionless variables, for example π1 = (dm/dt)/(ηr) and π2 = (dp/dx)ρr^5/(dm/dt)^2. If we distinguish between inertial mass with dimension Mi and quantity of matter with dimension Mm, then mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant, so that the dimensional equation may be written: dm/dt = C ρ r^4 (dp/dx) / η, where now only C is an undetermined constant (found to be equal to π/8 by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law. Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimension amount of substance, with unit mole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable. Siano's extension: orientational analysis Angles are, by convention, considered to be dimensionless quantities (although the wisdom of this is contested). As an example, consider again the projectile problem in which a point mass is launched from the origin at a speed v and angle θ above the x-axis, with the force of gravity directed along the negative y-axis. It is desired to find the range R, at which point the mass returns to the x-axis. Conventional analysis will yield the dimensionless variable π = Rg/v^2, but offers no insight into the relationship between R and θ. Siano has suggested that the directed dimensions of Huntley be replaced by using orientational symbols 1x, 1y, 1z to denote vector directions, and an orientationless symbol 10. Thus, Huntley's Lx becomes L·1x, with L specifying the dimension of length and 1x specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that 1i^−1 = 1i, the following multiplication table for the orientation symbols results: 10 is the identity (10·1i = 1i), each symbol is its own inverse (1x·1x = 1y·1y = 1z·1z = 10), and the product of two distinct non-identity symbols is the third (1x·1y = 1z, 1y·1z = 1x, 1z·1x = 1y). The orientational symbols form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of 1z. For angles, consider an angle θ that lies in the z-plane.
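The multiplication table just described is small enough to implement directly. The following Python sketch encodes the Klein four-group rules and applies them to the projectile range discussed below; the symbol encoding and function names are illustrative, and the orientations assigned (g along y, speed orientationless, θ in the xy-plane) follow the assignments stated in the text.

```python
# Orientational symbols: '0' is the orientationless identity 1_0; 'x', 'y', 'z'
# stand for 1_x, 1_y, 1_z. Each symbol is its own inverse, and the product of
# two distinct non-identity symbols is the third (Klein four-group).
def omul(a: str, b: str) -> str:
    if a == '0':
        return b
    if b == '0':
        return a
    if a == b:
        return '0'
    return ({'x', 'y', 'z'} - {a, b}).pop()

def opow(a: str, n: int) -> str:
    # Every symbol squares to the identity, so only the parity of n matters.
    return a if n % 2 else '0'

# Orientation of a candidate range formula R = C * g^a * v^b * theta^c,
# with g along y (1_y), v an orientationless speed (1_0), theta ~ 1_z.
def range_orientation(a: int, b: int, c: int) -> str:
    out = omul(opow('y', a), opow('0', b))
    return omul(out, opow('z', c))

# R points along x, so an acceptable solution must have orientation 'x':
print(range_orientation(-1, 2, 1))   # 'x'  -> c = 1 (odd) is allowed
print(range_orientation(-1, 2, 2))   # 'y'  -> c = 2 (even) is ruled out
```

Running the check reproduces the conclusion derived below: only odd powers of θ are orientationally consistent with a range directed along x.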
Form a right triangle in the z-plane with θ being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1x and the side opposite has an orientation 1y. Since tan(θ) ~ 1y/1x (using ~ to indicate orientational equivalence), we conclude that an angle in the xy-plane must have an orientation 1y/1x = 1z, which is not unreasonable. Analogous reasoning forces the conclusion that sin(θ) has orientation 1z while cos(θ) has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form a cos(θ) + b sin(θ), where a and b are real scalars. An expression such as sin(θ + π/2) = cos(θ) is not dimensionally inconsistent since it is a special case of the sum of angles formula and should properly be written: sin(a 1z + b 1z) = sin(a 1z) cos(b 1z) + cos(a 1z) sin(b 1z), which for a = θ and b = π/2 yields sin(θ 1z + [π/2] 1z) = sin(θ 1z) cos([π/2] 1z) + cos(θ 1z) sin([π/2] 1z). Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is 10. The assignment of orientational symbols to physical quantities and the requirement that physical equations be orientationally homogeneous can actually be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting it into normal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd. As an example, for the projectile problem, using orientational symbols, θ, being in the xy-plane, will thus have dimension 1z, and the range of the projectile R will be of the form R = g^a v^b θ^c, which means L·1x ~ (L·1y/T^2)^a (L·10/T)^b (1z)^c. Dimensional homogeneity will now correctly yield a = −1 and b = 2, and orientational homogeneity requires that 1x = (1y)^a (1z)^c = 1y·1z^c, i.e. that c must be an odd integer. In fact, the required function of theta will be sin(θ)cos(θ), which is a series consisting of odd powers of θ. It is seen that the Taylor series of sin(θ) and cos(θ) are orientationally homogeneous using the above multiplication table, while expressions like cos(θ) + sin(θ) and exp(θ) are not, and are (correctly) deemed unphysical. Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis. See also Buckingham π theorem Dimensionless numbers in fluid mechanics Fermi estimate – used to teach dimensional analysis Numerical-value equation Rayleigh's method of dimensional analysis Similitude – an application of dimensional analysis System of measurement Related areas of mathematics Covariance and contravariance of vectors Exterior algebra Geometric algebra Quantity calculus Notes References Wilson, Edwin B.
(1920) "Theory of Dimensions", chapter XI of Aeronautics, via Internet Archive Further reading External links List of dimensions for variety of physical quantities Unicalc Live web calculator doing units conversion by dimensional analysis A C++ implementation of compile-time dimensional analysis in the Boost open-source libraries Buckingham's pi-theorem Quantity System calculator for units conversion based on dimensional approach Units, quantities, and fundamental constants project dimensional analysis maps Measurement Conversion of units of measurement Chemical engineering Mechanical engineering Environmental engineering
Dimensional analysis
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
10,504
[ "Applied and interdisciplinary physics", "Physical quantities", "Dimensional analysis", "Chemical engineering", "Quantity", "Measurement", "Size", "Environmental engineering", "Civil engineering", "Mechanical engineering", "nan", "Conversion of units of measurement", "Units of measurement" ]
8,315
https://en.wikipedia.org/wiki/Diamagnetism
Diamagnetism is the property of materials that are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances, the weak diamagnetic force is overcome by the attractive force of magnetic dipoles in the material. The magnetic permeability of diamagnetic materials is less than the permeability of vacuum, μ0. In most materials, diamagnetism is a weak effect which can be detected only by sensitive laboratory instruments, but a superconductor acts as a strong diamagnet because it entirely expels any magnetic field from its interior (the Meissner effect). Diamagnetism was first discovered when Anton Brugmans observed in 1778 that bismuth was repelled by magnetic fields. In 1845, Michael Faraday demonstrated that it was a property of matter and concluded that every material responded (in either a diamagnetic or paramagnetic way) to an applied magnetic field. On a suggestion by William Whewell, Faraday first referred to the phenomenon as diamagnetic (the prefix dia- meaning through or across), then later changed it to diamagnetism. A simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: If all electrons in the particle are paired, then the substance made of this particle is diamagnetic; If it has unpaired electrons, then the substance is paramagnetic. Materials Diamagnetism is a property of all materials, and always makes a weak contribution to the material's response to a magnetic field. However, other forms of magnetism (such as ferromagnetism or paramagnetism) are so much stronger that, when different forms of magnetism are present in a material, the diamagnetic contribution is usually negligible. Substances where the diamagnetic behaviour is the strongest effect are termed diamagnetic materials, or diamagnets. Diamagnetic materials are those that some people generally think of as non-magnetic, and include water, wood, most organic compounds such as petroleum and some plastics, and many metals including copper, particularly the heavy ones with many core electrons, such as mercury, gold and bismuth. The magnetic susceptibility values of various molecular fragments are called Pascal's constants (named after Paul Pascal). Diamagnetic materials, like water, or water-based materials, have a relative magnetic permeability that is less than or equal to 1, and therefore a magnetic susceptibility less than or equal to 0, since susceptibility is defined as χv = μr − 1. This means that diamagnetic materials are repelled by magnetic fields. However, since diamagnetism is such a weak property, its effects are not observable in everyday life. For example, the magnetic susceptibility of diamagnets such as water is χv = −9.05×10−6. The most strongly diamagnetic material is bismuth, χv = −1.66×10−4, although pyrolytic carbon may have a susceptibility of χv = −4.00×10−4 in one plane. Nevertheless, these values are orders of magnitude smaller than the magnetism exhibited by paramagnets and ferromagnets. Because χv is derived from the ratio of the internal magnetic field to the applied field, it is a dimensionless value. In rare cases, the diamagnetic contribution can be stronger than the paramagnetic contribution.
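A small Python sketch can make the relation χv = μr − 1 and the classification by the sign of χv concrete. The numeric values are the approximate ones quoted above and are used purely for illustration.

```python
# Classify materials by volume magnetic susceptibility (chi_v = mu_r - 1).
MATERIALS = {
    "water":            -9.05e-6,
    "bismuth":          -1.66e-4,
    "pyrolytic carbon": -4.00e-4,   # in one plane
    "superconductor":   -1.0,       # perfect diamagnet (Meissner effect)
}

def relative_permeability(chi_v: float) -> float:
    return 1.0 + chi_v

def classify(chi_v: float) -> str:
    return "diamagnetic" if chi_v < 0 else "paramagnetic or ferromagnetic"

for name, chi in MATERIALS.items():
    mu_r = relative_permeability(chi)
    print(f"{name:18s} chi_v={chi: .3g}  mu_r={mu_r:.6f}  {classify(chi)}")
```

The output shows how close μr stays to 1 for ordinary diamagnets, in line with the statement that the effect is too weak to notice in everyday life.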
This is the case for gold, which has a magnetic susceptibility less than 0 (and is thus by definition a diamagnetic material), but when measured carefully with X-ray magnetic circular dichroism, has an extremely weak paramagnetic contribution that is overcome by a stronger diamagnetic contribution. Superconductors Superconductors may be considered perfect diamagnets (χv = −1), because they expel all magnetic fields (except in a thin surface layer) due to the Meissner effect. Demonstrations Curving water surfaces If a powerful magnet (such as a supermagnet) is covered with a layer of water (that is thin compared to the diameter of the magnet) then the field of the magnet significantly repels the water. This causes a slight dimple in the water's surface that may be seen by a reflection in its surface. Levitation Diamagnets may be levitated in stable equilibrium in a magnetic field, with no power consumption. Earnshaw's theorem seems to preclude the possibility of static magnetic levitation. However, Earnshaw's theorem applies only to objects with positive susceptibilities, such as ferromagnets (which have a permanent positive moment) and paramagnets (which induce a positive moment). These are attracted to field maxima, which do not exist in free space. Diamagnets (which induce a negative moment) are attracted to field minima, and there can be a field minimum in free space. A thin slice of pyrolytic graphite, which is an unusually strongly diamagnetic material, can be stably floated in a magnetic field, such as that from rare earth permanent magnets. This can be done with all components at room temperature, making a visually effective and relatively convenient demonstration of diamagnetism. The Radboud University Nijmegen, the Netherlands, has conducted experiments where water and other substances were successfully levitated. Most spectacularly, a live frog was levitated. In September 2009, NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California announced it had successfully levitated mice using a superconducting magnet, an important step forward since mice are closer biologically to humans than frogs. JPL said it hopes to perform experiments regarding the effects of microgravity on bone and muscle mass. Recent experiments studying the growth of protein crystals have led to a technique using powerful magnets to allow growth in ways that counteract Earth's gravity. A simple homemade device for demonstration can be constructed out of bismuth plates and a few permanent magnets that levitate a permanent magnet. Theory The electrons in a material generally settle in orbitals, with effectively zero resistance and act like current loops. Thus it might be imagined that diamagnetic effects in general would be common, since any applied magnetic field would generate currents in these loops that would oppose the change, in a similar way to superconductors, which are essentially perfect diamagnets. However, since the electrons are rigidly held in orbitals by the charge of the protons and are further constrained by the Pauli exclusion principle, many materials exhibit diamagnetism, but typically respond very little to the applied field. The Bohr–Van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. However, the classical theory of Langevin for diamagnetism gives the same prediction as the quantum theory. The classical theory is given below.
Langevin diamagnetism Paul Langevin's theory of diamagnetism (1905) applies to materials containing atoms with closed shells (see dielectrics). A field with intensity B, applied to an electron with charge e and mass m, gives rise to Larmor precession with frequency ω = eB/2m. The number of revolutions per unit time is ω/2π, so the current for an atom with Z electrons is (in SI units) I = −Ze^2 B/4πm. The magnetic moment of a current loop is equal to the current times the area of the loop. Suppose the field is aligned with the z axis. The average loop area can be given as π⟨ρ^2⟩, where ⟨ρ^2⟩ is the mean square distance of the electrons perpendicular to the z axis. The magnetic moment is therefore μ = −Ze^2 B⟨ρ^2⟩/4m. If the distribution of charge is spherically symmetric, we can suppose that the distribution of x, y, z coordinates are independent and identically distributed. Then ⟨x^2⟩ = ⟨y^2⟩ = ⟨z^2⟩ = ⟨r^2⟩/3, where ⟨r^2⟩ is the mean square distance of the electrons from the nucleus. Therefore, ⟨ρ^2⟩ = ⟨x^2⟩ + ⟨y^2⟩ = 2⟨r^2⟩/3. If n is the number of atoms per unit volume, the volume diamagnetic susceptibility in SI units is χ = −μ0 n Z e^2 ⟨r^2⟩/6m. In atoms, Langevin susceptibility is of the same order of magnitude as Van Vleck paramagnetic susceptibility. In metals The Langevin theory is not the full picture for metals because there are also non-localized electrons. The theory that describes diamagnetism in a free electron gas is called Landau diamagnetism, named after Lev Landau, and instead considers the weak counteracting field that forms when the electrons' trajectories are curved due to the Lorentz force. Landau diamagnetism, however, should be contrasted with Pauli paramagnetism, an effect associated with the polarization of delocalized electrons' spins. For the bulk case of a 3D system and low magnetic fields, the (volume) diamagnetic susceptibility can be calculated using Landau quantization, which in SI units is χ = −(μ0 e^2/12π^2 mħ)√(2mEF), where EF is the Fermi energy. This is equivalent to −μ0 μB^2 g(EF)/3, exactly −1/3 times the Pauli paramagnetic susceptibility, where μB is the Bohr magneton and g(EF) is the density of states (number of states per energy per volume). This formula takes into account the spin degeneracy of the carriers (spin-1/2 electrons). In doped semiconductors the ratio between Landau and Pauli susceptibilities may change due to the effective mass of the charge carriers differing from the electron mass in vacuum, increasing the diamagnetic contribution. The formula presented here only applies for the bulk; in confined systems like quantum dots, the description is altered due to quantum confinement. Additionally, for strong magnetic fields, the susceptibility of delocalized electrons oscillates as a function of the field strength, a phenomenon known as the De Haas–Van Alphen effect, also first described theoretically by Landau. See also Antiferromagnetism Magnetochemistry Moses effect References External links The Feynman Lectures on Physics Vol. II Ch. 34: The Magnetism of Matter Electric and magnetic fields in matter Magnetic levitation Magnetism
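The Langevin result χ = −μ0 n Z e²⟨r²⟩/6m derived above is easy to evaluate numerically. The sketch below does so in Python; the atom density, electron count and mean square orbital radius are guessed illustrative values rather than data for a specific material.

```python
# Langevin diamagnetic susceptibility: chi = -mu_0 * n * Z * e^2 * <r^2> / (6 m_e)
MU_0 = 1.25663706212e-6      # vacuum permeability, H/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def langevin_chi(n_atoms_per_m3: float, z_electrons: int, r2_mean_m2: float) -> float:
    return -MU_0 * n_atoms_per_m3 * z_electrons * E_CHARGE**2 * r2_mean_m2 / (6 * M_E)

# Example: ~1e29 atoms/m^3, 10 electrons per atom, <r^2> ~ (0.5 angstrom)^2
print(langevin_chi(1e29, 10, (0.5e-10)**2))   # roughly -1.5e-5, comparable to real diamagnets
```

Even with rough inputs the order of magnitude agrees with the susceptibilities quoted earlier, which is the usual sanity check applied to the Langevin formula.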
Diamagnetism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,155
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
8,378
https://en.wikipedia.org/wiki/Dipole
In physics, a dipole () is an electromagnetic phenomenon which occurs in two ways: An electric dipole deals with the separation of the positive and negative electric charges found in any electromagnetic system. A simple example of this system is a pair of charges of equal magnitude but opposite sign separated by some typically small distance. (A permanent electric dipole is called an electret.) A magnetic dipole is the closed circulation of an electric current system. A simple example is a single loop of wire with constant current through it. A bar magnet is an example of a magnet with a permanent magnetic dipole moment. Dipoles, whether electric or magnetic, can be characterized by their dipole moment, a vector quantity. For the simple electric dipole, the electric dipole moment points from the negative charge towards the positive charge, and has a magnitude equal to the strength of each charge times the separation between the charges. (To be precise: for the definition of the dipole moment, one should always consider the "dipole limit", where, for example, the distance of the generating charges should converge to 0 while simultaneously, the charge strength should diverge to infinity in such a way that the product remains a positive constant.) For the magnetic (dipole) current loop, the magnetic dipole moment points through the loop (according to the right hand grip rule), with a magnitude equal to the current in the loop times the area of the loop. Similar to magnetic current loops, the electron particle and some other fundamental particles have magnetic dipole moments, as an electron generates a magnetic field identical to that generated by a very small current loop. However, an electron's magnetic dipole moment is not due to a current loop, but to an intrinsic property of the electron. The electron may also have an electric dipole moment though such has yet to be observed (see electron electric dipole moment). A permanent magnet, such as a bar magnet, owes its magnetism to the intrinsic magnetic dipole moment of the electron. The two ends of a bar magnet are referred to as poles (not to be confused with monopoles, see Classification below) and may be labeled "north" and "south". In terms of the Earth's magnetic field, they are respectively "north-seeking" and "south-seeking" poles: if the magnet were freely suspended in the Earth's magnetic field, the north-seeking pole would point towards the north and the south-seeking pole would point towards the south. The dipole moment of the bar magnet points from its magnetic south to its magnetic north pole. In a magnetic compass, the north pole of a bar magnet points north. However, that means that Earth's geomagnetic north pole is the south pole (south-seeking pole) of its dipole moment and vice versa. The only known mechanisms for the creation of magnetic dipoles are by current loops or quantum-mechanical spin since the existence of magnetic monopoles has never been experimentally demonstrated. Classification A physical dipole consists of two equal and opposite point charges: in the literal sense, two poles. Its field at large distances (i.e., distances large in comparison to the separation of the poles) depends almost entirely on the dipole moment as defined above. A point (electric) dipole is the limit obtained by letting the separation tend to 0 while keeping the dipole moment fixed. The field of a point dipole has a particularly simple form, and the order-1 term in the multipole expansion is precisely the point dipole field. 
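The two dipole moments defined above (charge times separation for the electric case, current times loop area for the magnetic case) can be computed directly. The following Python sketch uses illustrative numbers; the helper function names are not from any library.

```python
import numpy as np

def electric_dipole_moment(q: float, r_plus: np.ndarray, r_minus: np.ndarray) -> np.ndarray:
    """p points from the negative towards the positive charge: p = q * (r+ - r-)."""
    return q * (r_plus - r_minus)

def loop_magnetic_moment(current: float, area_vector: np.ndarray) -> np.ndarray:
    """m = I * A, with the area vector normal to the loop (right-hand grip rule)."""
    return current * area_vector

# One elementary charge separated by 0.1 nm along z:
p = electric_dipole_moment(1.602e-19,
                           np.array([0.0, 0.0, 0.5e-10]),
                           np.array([0.0, 0.0, -0.5e-10]))
print(np.linalg.norm(p) / 3.33564e-30)   # converted to debye: about 4.8 D

# 1 mA circulating around a 1 cm^2 loop in the xy-plane:
print(loop_magnetic_moment(1e-3, np.array([0.0, 0.0, 1e-4])))   # m along z, in A*m^2
```

The first result, about 4.8 debye for one elementary charge separated by one ångström, is the usual benchmark against which molecular dipole moments (typically a few debye) are compared.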
Although there are no known magnetic monopoles in nature, there are magnetic dipoles in the form of the quantum-mechanical spin associated with particles such as electrons (although the accurate description of such effects falls outside of classical electromagnetism). A theoretical magnetic point dipole has a magnetic field of exactly the same form as the electric field of an electric point dipole. A very small current-carrying loop is approximately a magnetic point dipole; the magnetic dipole moment of such a loop is the product of the current flowing in the loop and the (vector) area of the loop. Any configuration of charges or currents has a 'dipole moment', which describes the dipole whose field is the best approximation, at large distances, to that of the given configuration. This is simply one term in the multipole expansion when the total charge ("monopole moment") is 0—as it always is for the magnetic case, since there are no magnetic monopoles. The dipole term is the dominant one at large distances: Its field falls off in proportion to , as compared to for the next (quadrupole) term and higher powers of for higher terms, or for the monopole term. Molecular dipoles Many molecules have such dipole moments due to non-uniform distributions of positive and negative charges on the various atoms. Such is the case with polar compounds like hydrogen fluoride (HF), where electron density is shared unequally between atoms. Therefore, a molecule's dipole is an electric dipole with an inherent electric field that should not be confused with a magnetic dipole, which generates a magnetic field. The physical chemist Peter J. W. Debye was the first scientist to study molecular dipoles extensively, and, as a consequence, dipole moments are measured in the non-SI unit named debye in his honor. For molecules there are three types of dipoles: Permanent dipoles These occur when two atoms in a molecule have substantially different electronegativity : One atom attracts electrons more than another, becoming more negative, while the other atom becomes more positive. A molecule with a permanent dipole moment is called a polar molecule. See dipole–dipole attractions. Instantaneous dipoles These occur due to chance when electrons happen to be more concentrated in one place than another in a molecule, creating a temporary dipole. These dipoles are smaller in magnitude than permanent dipoles, but still play a large role in chemistry and biochemistry due to their prevalence. See instantaneous dipole. Induced dipoles These can occur when one molecule with a permanent dipole repels another molecule's electrons, inducing a dipole moment in that molecule. A molecule is polarized when it carries an induced dipole. See induced-dipole attraction. More generally, an induced dipole of any polarizable charge distribution ρ (remember that a molecule has a charge distribution) is caused by an electric field external to ρ. This field may, for instance, originate from an ion or polar molecule in the vicinity of ρ or may be macroscopic (e.g., a molecule between the plates of a charged capacitor). The size of the induced dipole moment is equal to the product of the strength of the external field and the dipole polarizability of ρ. Dipole moment values can be obtained from measurement of the dielectric constant. 
Some typical gas phase values given with the unit debye are: carbon dioxide: 0 carbon monoxide: 0.112 D ozone: 0.53 D phosgene: 1.17 D ammonia: 1.42 D water vapor: 1.85 D hydrogen cyanide: 2.98 D cyanamide: 4.27 D potassium bromide: 10.41 D Potassium bromide (KBr) has one of the highest dipole moments because it is an ionic compound that exists as a molecule in the gas phase. The overall dipole moment of a molecule may be approximated as a vector sum of bond dipole moments. As a vector sum it depends on the relative orientation of the bonds, so that from the dipole moment information can be deduced about the molecular geometry. For example, the zero dipole of CO2 implies that the two C=O bond dipole moments cancel so that the molecule must be linear. For H2O the O−H bond moments do not cancel because the molecule is bent. For ozone (O3) which is also a bent molecule, the bond dipole moments are not zero even though the O−O bonds are between similar atoms. This agrees with the Lewis structures for the resonance forms of ozone which show a positive charge on the central oxygen atom. An example in organic chemistry of the role of geometry in determining dipole moment is the cis and trans isomers of 1,2-dichloroethene. In the cis isomer the two polar C−Cl bonds are on the same side of the C=C double bond and the molecular dipole moment is 1.90 D. In the trans isomer, the dipole moment is zero because the two C−Cl bonds are on opposite sides of the C=C and cancel (and the two bond moments for the much less polar C−H bonds also cancel). Another example of the role of molecular geometry is boron trifluoride, which has three polar bonds with a difference in electronegativity greater than the traditionally cited threshold of 1.7 for ionic bonding. However, due to the equilateral triangular distribution of the fluoride ions centered on and in the same plane as the boron cation, the symmetry of the molecule results in its dipole moment being zero. Quantum-mechanical dipole operator Consider a collection of N particles with charges qi and position vectors ri. For instance, this collection may be a molecule consisting of electrons, all with charge −e, and nuclei with charge eZi, where Zi is the atomic number of the i th nucleus. The dipole observable (physical quantity) has the quantum mechanical dipole operator: Notice that this definition is valid only for neutral atoms or molecules, i.e. total charge equal to zero. In the ionized case, we have where is the center of mass of the molecule/group of particles. Atomic dipoles A non-degenerate (S-state) atom can have only a zero permanent dipole. This fact follows quantum mechanically from the inversion symmetry of atoms. All 3 components of the dipole operator are antisymmetric under inversion with respect to the nucleus, where is the dipole operator and is the inversion operator. The permanent dipole moment of an atom in a non-degenerate state (see degenerate energy level) is given as the expectation (average) value of the dipole operator, where is an S-state, non-degenerate, wavefunction, which is symmetric or antisymmetric under inversion: . Since the product of the wavefunction (in the ket) and its complex conjugate (in the bra) is always symmetric under inversion and its inverse, it follows that the expectation value changes sign under inversion. We used here the fact that , being a symmetry operator, is unitary: and by definition the Hermitian adjoint may be moved from bra to ket and then becomes . 
Since the only quantity that is equal to minus itself is zero, the expectation value vanishes, ⟨p⟩ = 0. In the case of open-shell atoms with degenerate energy levels, one could define a dipole moment by the aid of the first-order Stark effect. This gives a non-vanishing dipole (by definition proportional to a non-vanishing first-order Stark shift) only if some of the wavefunctions belonging to the degenerate energies have opposite parity; i.e., have different behavior under inversion. This is a rare occurrence, but happens for the excited H-atom, where 2s and 2p states are "accidentally" degenerate (see article Laplace–Runge–Lenz vector for the origin of this degeneracy) and have opposite parity (2s is even and 2p is odd). Field of a static magnetic dipole Magnitude The far-field strength, B, of a dipole magnetic field is given by B(m, r, λ) = (μ0 m/4πr^3)√(1 + 3sin^2 λ), where B is the strength of the field, measured in teslas r is the distance from the center, measured in metres λ is the magnetic latitude (equal to 90° − θ) where θ is the magnetic colatitude, measured in radians or degrees from the dipole axis m is the dipole moment, measured in ampere-square metres or joules per tesla μ0 is the permeability of free space, measured in henries per metre. Conversion to cylindrical coordinates is achieved using z = r sin λ and ρ = r cos λ, where ρ is the perpendicular distance from the z-axis. Then, B(ρ, z) = (μ0 m/4π)√(ρ^2 + 4z^2)/(ρ^2 + z^2)^2. Vector form The field itself is a vector quantity: B(m, r) = (μ0/4π)[3r̂(r̂ · m) − m]/r^3, where B is the field r is the vector from the position of the dipole to the position where the field is being measured r is the absolute value of r: the distance from the dipole r̂ = r/r is the unit vector parallel to r; m is the (vector) dipole moment μ0 is the permeability of free space This is exactly the field of a point dipole, exactly the dipole term in the multipole expansion of an arbitrary field, and approximately the field of any dipole-like configuration at large distances. Magnetic vector potential The vector potential A of a magnetic dipole is A(r) = (μ0/4π)(m × r̂)/r^2, with the same definitions as above. Field from an electric dipole The electrostatic potential at position r due to an electric dipole at the origin is given by: Φ(r) = (1/4πε0)(p · r̂)/r^2, where p is the (vector) dipole moment, and ε0 is the permittivity of free space. This term appears as the second term in the multipole expansion of an arbitrary electrostatic potential Φ(r). If the source of Φ(r) is a dipole, as it is assumed here, this term is the only non-vanishing term in the multipole expansion of Φ(r). The electric field from a dipole can be found from the gradient of this potential: E(r) = (1/4πε0)[3(p · r̂)r̂ − p]/r^3 − (1/3ε0)p δ^3(r). This is of the same form of the expression for the magnetic field of a point magnetic dipole, ignoring the delta function. In a real electric dipole, however, the charges are physically separate and the electric field diverges or converges at the point charges. This is different to the magnetic field of a real magnetic dipole which is continuous everywhere. The delta function represents the strong field pointing in the opposite direction between the point charges, which is often omitted since one is rarely interested in the field at the dipole's position. For further discussions about the internal field of dipoles, see the references. Torque on a dipole Since the direction of an electric field is defined as the direction of the force on a positive charge, electric field lines point away from a positive charge and toward a negative charge.
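The vector form of the point-dipole field given above translates directly into code. The following Python sketch evaluates it for an Earth-like dipole moment (about 8×10^22 A·m², used here only as a familiar illustrative value), ignoring the delta-function term at the origin.

```python
import numpy as np

MU_0 = 4e-7 * np.pi   # vacuum permeability, H/m

def dipole_B(m: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Point magnetic dipole field B(r) = (mu_0 / 4 pi) * (3 (m.rhat) rhat - m) / r^3."""
    r_norm = np.linalg.norm(r)
    r_hat = r / r_norm
    return MU_0 / (4 * np.pi) * (3 * np.dot(m, r_hat) * r_hat - m) / r_norm**3

# Earth-like dipole evaluated at one Earth radius on the magnetic equator:
m = np.array([0.0, 0.0, 8.0e22])      # A*m^2, along z
r = np.array([6.371e6, 0.0, 0.0])     # m, in the equatorial plane
print(np.linalg.norm(dipole_B(m, r))) # roughly 3e-5 T, the right order for the surface field
```

The result of a few times 10^-5 tesla matches the familiar order of magnitude of the geomagnetic field at the surface, which is the usual quick check on the formula.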
When placed in a homogeneous electric or magnetic field, equal but opposite forces arise on each side of the dipole, creating a torque τ: τ = p × E for an electric dipole moment p (in coulomb-meters), or τ = m × B for a magnetic dipole moment m (in ampere-square meters). The resulting torque will tend to align the dipole with the applied field, which in the case of an electric dipole, yields a potential energy of U = −p · E. The energy of a magnetic dipole is similarly U = −m · B. Dipole radiation In addition to dipoles in electrostatics, it is also common to consider an electric or magnetic dipole that is oscillating in time. It is an extension, or a more physical next-step, to spherical wave radiation. In particular, consider a harmonically oscillating electric dipole, with angular frequency ω and a dipole moment p0 along the ẑ direction of the form p(t) = p0 ẑ e^−iωt. In vacuum, the exact field produced by this oscillating dipole can be derived using the retarded potential formulation; the result contains near-field terms that fall off as higher powers of 1/r together with a radiating term that falls off as 1/r. For rω/c ≫ 1, the far-field takes the simpler form of a radiating "spherical" wave, but with angular dependence embedded in the cross-product r̂ × p0. The time-averaged Poynting vector is not distributed isotropically, but concentrated around the directions lying perpendicular to the dipole moment, as a result of the non-spherical electric and magnetic waves. In fact, the spherical harmonic function (sin θ) responsible for such toroidal angular distribution is precisely the l = 1 "p" wave. The total time-average power radiated by the field can then be derived from the Poynting vector as P = μ0 ω^4 p0^2/12πc. Notice that the dependence of the power on the fourth power of the frequency of the radiation is in accordance with Rayleigh scattering, and the underlying effect explaining why the sky consists of mainly blue colour. A circularly polarized dipole is described as a superposition of two linear dipoles. See also Polarization density Magnetic dipole models Dipole model of the Earth's magnetic field Electret Indian Ocean Dipole and Subtropical Indian Ocean Dipole, two oceanographic phenomena Magnetic dipole–dipole interaction Spin magnetic moment Monopole Solid harmonics Axial multipole moments Cylindrical multipole moments Spherical multipole moments Laplace expansion Molecular solid Magnetic moment#Internal magnetic field of a dipole Notes References External links USGS Geomagnetism Program Fields of Force: a chapter from an online textbook Electric Dipole Potential by Stephen Wolfram and Energy Density of a Magnetic Dipole by Franz Krafft. Wolfram Demonstrations Project. Electromagnetism Potential theory
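The torque and potential-energy relations τ = p × E and U = −p · E at the start of this section are simple enough to evaluate numerically. The sketch below uses a water-like 1.85 D moment in a 10^6 V/m field purely as illustrative numbers.

```python
import numpy as np

def torque(moment: np.ndarray, field: np.ndarray) -> np.ndarray:
    """tau = moment x field (works for p x E or m x B)."""
    return np.cross(moment, field)

def potential_energy(moment: np.ndarray, field: np.ndarray) -> float:
    """U = -moment . field."""
    return float(-np.dot(moment, field))

p = 1.85 * 3.33564e-30 * np.array([1.0, 0.0, 0.0])   # 1.85 D in C*m, along x
E = 1e6 * np.array([0.0, 1.0, 0.0])                  # V/m, along y
print(torque(p, E))            # torque along z, about 6.2e-24 N*m
print(potential_energy(p, E))  # 0 J here, since p is perpendicular to E
```

Rotating p towards E makes the torque vanish and the energy reach its minimum −|p||E|, which is the alignment tendency described above.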
Dipole
[ "Physics", "Mathematics" ]
3,506
[ "Electromagnetism", "Physical phenomena", "Functions and mappings", "Mathematical objects", "Potential theory", "Mathematical relations", "Fundamental interactions" ]
8,398
https://en.wikipedia.org/wiki/Dimension
In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus, a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it; for example, the point at 5 on a number line. A surface, such as the boundary of a cylinder or sphere, has a dimension of two (2D) because two coordinates are needed to specify a point on it; for example, both a latitude and longitude are required to locate a point on the surface of a sphere. A two-dimensional Euclidean space is a two-dimensional space on the plane. The inside of a cube, a cylinder or a sphere is three-dimensional (3D) because three coordinates are needed to locate a point within these spaces. In classical mechanics, space and time are different categories and refer to absolute space and time. That conception of the world is a four-dimensional space but not the one that was found necessary to describe electromagnetism. The four dimensions (4D) of spacetime consist of events that are not absolutely defined spatially and temporally, but rather are known relative to the motion of an observer. Minkowski space first approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity. 10 dimensions are used to describe superstring theory (6D hyperspace + 4D), 11 dimensions can describe supergravity and M-theory (7D hyperspace + 4D), and the state-space of quantum mechanics is an infinite-dimensional function space. The concept of dimension is not restricted to physical objects. High-dimensional spaces frequently occur in mathematics and the sciences. They may be Euclidean spaces or more general parameter spaces or configuration spaces such as in Lagrangian or Hamiltonian mechanics; these are abstract spaces, independent of the physical space. In mathematics In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two, etc. The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. This is independent from the fact that a curve cannot be embedded in a Euclidean space of dimension lower than two, unless it is a line. Similarly, a surface is of dimension two, even if embedded in three-dimensional space. The dimension of Euclidean n-space is n. When trying to generalize to other types of spaces, one is faced with the question "what makes an n-dimensional space n-dimensional?" One answer is that to cover a fixed ball in n-space by small balls of radius ε, one needs on the order of ε−n such small balls. This observation leads to the definition of the Minkowski dimension and its more sophisticated variant, the Hausdorff dimension, but there are also other answers to that question.
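The covering idea behind the Minkowski (box-counting) dimension can be estimated numerically: count the boxes of side ε needed to cover a point set and fit the slope of log N(ε) against log ε. The Python sketch below is a rough numerical illustration, not a rigorous computation of Hausdorff dimension.

```python
import numpy as np

def box_counting_dimension(points: np.ndarray, epsilons) -> float:
    """Estimate the box-counting dimension of a point cloud in the plane."""
    counts = []
    for eps in epsilons:
        # Snap each point to the grid of side eps and count distinct boxes hit.
        boxes = {tuple(idx) for idx in np.floor(points / eps).astype(int)}
        counts.append(len(boxes))
    # N(eps) ~ eps^(-d), so the slope of log N versus log eps is -d.
    slope, _ = np.polyfit(np.log(epsilons), np.log(counts), 1)
    return -slope

# Points sampled along a line segment in the plane should give dimension close to 1.
t = np.random.rand(20000)
line = np.column_stack([t, 0.5 * t])
print(box_counting_dimension(line, [0.1, 0.05, 0.02, 0.01, 0.005]))  # ~1.0
```

Feeding in points from a plane-filling sample instead would push the estimate towards 2, and fractal sets give the non-integer values discussed later in the article.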
For example, the boundary of a ball in n-space looks locally like (n − 1)-space, and this leads to the notion of the inductive dimension. While these notions agree for Euclidean n-space, they turn out to be different when one looks at more general spaces. A tesseract is an example of a four-dimensional object. Whereas outside mathematics the use of the term "dimension" is as in: "A tesseract has four dimensions", mathematicians usually express this as: "The tesseract has dimension 4", or: "The dimension of the tesseract is 4" or: 4D. Although the notion of higher dimensions goes back to René Descartes, substantial development of a higher-dimensional geometry only began in the 19th century, via the work of Arthur Cayley, William Rowan Hamilton, Ludwig Schläfli and Bernhard Riemann. Riemann's 1854 Habilitationsschrift, Schläfli's 1852 Theorie der vielfachen Kontinuität, and Hamilton's discovery of the quaternions and John T. Graves' discovery of the octonions in 1843 marked the beginning of higher-dimensional geometry. The rest of this section examines some of the more important mathematical definitions of dimension. Vector spaces The dimension of a vector space is the number of vectors in any basis for the space, i.e. the number of coordinates necessary to specify any vector. This notion of dimension (the cardinality of a basis) is often referred to as the Hamel dimension or algebraic dimension to distinguish it from other notions of dimension. For the non-free case, this generalizes to the notion of the length of a module. Manifolds The uniquely defined dimension of every connected topological manifold can be calculated. A connected topological manifold is locally homeomorphic to Euclidean n-space, in which the number n is the manifold's dimension. For connected differentiable manifolds, the dimension is also the dimension of the tangent vector space at any point. In geometric topology, the theory of manifolds is characterized by the way dimensions 1 and 2 are relatively elementary, the high-dimensional cases n > 4 are simplified by having extra space in which to "work"; and the cases n = 3 and n = 4 are in some senses the most difficult. This state of affairs was highly marked in the various cases of the Poincaré conjecture, in which four different proof methods are applied. Complex dimension The dimension of a manifold depends on the base field with respect to which Euclidean space is defined. While analysis usually assumes a manifold to be over the real numbers, it is sometimes useful in the study of complex manifolds and algebraic varieties to work over the complex numbers instead. A complex number (x + iy) has a real part x and an imaginary part y, in which x and y are both real numbers; hence, the complex dimension is half the real dimension. Conversely, in algebraically unconstrained contexts, a single complex coordinate system may be applied to an object having two real dimensions. For example, an ordinary two-dimensional spherical surface, when given a complex metric, becomes a Riemann sphere of one complex dimension. Varieties The dimension of an algebraic variety may be defined in various equivalent ways. The most intuitive way is probably the dimension of the tangent space at any regular point of an algebraic variety. Another intuitive way is to define the dimension as the number of hyperplanes that are needed in order to have an intersection with the variety that is reduced to a finite number of points (dimension zero).
This definition is based on the fact that the intersection of a variety with a hyperplane reduces the dimension by one unless the hyperplane contains the variety. An algebraic set being a finite union of algebraic varieties, its dimension is the maximum of the dimensions of its components. It is equal to the maximal length of the chains of sub-varieties of the given algebraic set (the length of such a chain is the number of strict inclusions "⊊"). Each variety can be considered as an algebraic stack, and its dimension as variety agrees with its dimension as stack. There are however many stacks which do not correspond to varieties, and some of these have negative dimension. Specifically, if V is a variety of dimension m and G is an algebraic group of dimension n acting on V, then the quotient stack [V/G] has dimension m − n. Krull dimension The Krull dimension of a commutative ring is the maximal length of chains of prime ideals in it, a chain of length n being a sequence P0 ⊊ P1 ⊊ ... ⊊ Pn of prime ideals related by strict inclusion. It is strongly related to the dimension of an algebraic variety, because of the natural correspondence between sub-varieties and prime ideals of the ring of the polynomials on the variety. For an algebra over a field, the dimension as vector space is finite if and only if its Krull dimension is 0. Topological spaces For any normal topological space X, the Lebesgue covering dimension of X is defined to be the smallest integer n for which the following holds: any open cover has an open refinement (a second open cover in which each element is a subset of an element in the first cover) such that no point is included in more than n + 1 elements. In this case dim X = n. For a manifold, this coincides with the dimension mentioned above. If no such integer exists, then the dimension of X is said to be infinite, and one writes dim X = ∞. Moreover, X has dimension −1, i.e. dim X = −1 if and only if X is empty. This definition of covering dimension can be extended from the class of normal spaces to all Tychonoff spaces merely by replacing the term "open" in the definition by the term "functionally open". An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a new direction, one obtains a 2-dimensional object. In general, one obtains an (n + 1)-dimensional object by dragging an n-dimensional object in a new direction. The inductive dimension of a topological space may refer to the small inductive dimension or the large inductive dimension, and is based on the analogy that, in the case of metric spaces, n-dimensional balls have (n − 1)-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension −1. Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles. Hausdorff dimension The Hausdorff dimension is useful for studying structurally complicated sets, especially fractals.
The Hausdorff dimension is defined for all metric spaces and, unlike the dimensions considered above, can also have non-integer real values. The box dimension or Minkowski dimension is a variant of the same idea. In general, there exist more definitions of fractal dimensions that work for highly irregular sets and attain non-integer positive real values. Hilbert spaces Every Hilbert space admits an orthonormal basis, and any two such bases for a particular space have the same cardinality. This cardinality is called the dimension of the Hilbert space. This dimension is finite if and only if the space's Hamel dimension is finite, and in this case the two dimensions coincide. In physics Spatial dimensions Classical physics theories describe three physical dimensions: from a particular point in space, the basic directions in which we can move are up/down, left/right, and forward/backward. Movement in any other direction can be expressed in terms of just these three. Moving down is the same as moving up a negative distance. Moving diagonally upward and forward is just as the name of the direction implies i.e., moving in a linear combination of up and forward. In its simplest form: a line describes one dimension, a plane describes two dimensions, and a cube describes three dimensions. (See Space and Cartesian coordinate system.) Time A temporal dimension, or time dimension, is a dimension of time. Time is often referred to as the "fourth dimension" for this reason, but that is not to imply that it is a spatial dimension. A temporal dimension is one way to measure physical change. It is perceived differently from the three spatial dimensions in that there is only one of it, and that we cannot move freely in time but subjectively move in one direction. The equations used in physics to model reality do not treat time in the same way that humans commonly perceive it. The equations of classical mechanics are symmetric with respect to time, and equations of quantum mechanics are typically symmetric if both time and other quantities (such as charge and parity) are reversed. In these models, the perception of time flowing in one direction is an artifact of the laws of thermodynamics (we perceive time as flowing in the direction of increasing entropy). The best-known treatment of time as a dimension is Poincaré and Einstein's special relativity (and extended to general relativity), which treats perceived space and time as components of a four-dimensional manifold, known as spacetime, and in the special, flat case as Minkowski space. Time is different from other spatial dimensions as time operates in all spatial dimensions. Time operates in the first, second and third as well as theoretical spatial dimensions such as a fourth spatial dimension. Time is not however present in a single point of absolute infinite singularity as defined as a geometric point, as an infinitely small point can have no change and therefore no time. Just as when an object moves through positions in space, it also moves through positions in time. In this sense the force moving any object to change is time. Additional dimensions In physics, three dimensions of space and one of time is the accepted norm. However, there are theories that attempt to unify the four fundamental forces by introducing extra dimensions/hyperspace. 
Most notably, superstring theory requires 10 spacetime dimensions, and originates from a more fundamental 11-dimensional theory tentatively called M-theory which subsumes five previously distinct superstring theories. Supergravity theory also promotes 11D spacetime = 7D hyperspace + 4 common dimensions. To date, no direct experimental or observational evidence is available to support the existence of these extra dimensions. If hyperspace exists, it must be hidden from us by some physical mechanism. One well-studied possibility is that the extra dimensions may be "curled up" at such tiny scales as to be effectively invisible to current experiments. In 1921, Kaluza–Klein theory presented 5D including an extra dimension of space. At the level of quantum field theory, Kaluza–Klein theory unifies gravity with gauge interactions, based on the realization that gravity propagating in small, compact extra dimensions is equivalent to gauge interactions at long distances. In particular when the geometry of the extra dimensions is trivial, it reproduces electromagnetism. However, at sufficiently high energies or short distances, this setup still suffers from the same pathologies that famously obstruct direct attempts to describe quantum gravity. Therefore, these models still require a UV completion, of the kind that string theory is intended to provide. In particular, superstring theory requires six compact dimensions (6D hyperspace) forming a Calabi–Yau manifold. Thus Kaluza-Klein theory may be considered either as an incomplete description on its own, or as a subset of string theory model building. In addition to small and curled up extra dimensions, there may be extra dimensions that instead are not apparent because the matter associated with our visible universe is localized on a subspace. Thus, the extra dimensions need not be small and compact but may be large extra dimensions. D-branes are dynamical extended objects of various dimensionalities predicted by string theory that could play this role. They have the property that open string excitations, which are associated with gauge interactions, are confined to the brane by their endpoints, whereas the closed strings that mediate the gravitational interaction are free to propagate into the whole spacetime, or "the bulk". This could be related to why gravity is exponentially weaker than the other forces, as it effectively dilutes itself as it propagates into a higher-dimensional volume. Some aspects of brane physics have been applied to cosmology. For example, brane gas cosmology attempts to explain why there are three dimensions of space using topological and thermodynamic considerations. According to this idea it would be since three is the largest number of spatial dimensions in which strings can generically intersect. If initially there are many windings of strings around compact dimensions, space could only expand to macroscopic sizes once these windings are eliminated, which requires oppositely wound strings to find each other and annihilate. But strings can only find each other to annihilate at a meaningful rate in three dimensions, so it follows that only three dimensions of space are allowed to grow large given this kind of initial configuration. Extra dimensions are said to be universal if all fields are equally free to propagate within them. 
In computer graphics and spatial data Several types of digital systems are based on the storage, analysis, and visualization of geometric shapes, including illustration software, Computer-aided design, and Geographic information systems. Different vector systems use a wide variety of data structures to represent shapes, but almost all are fundamentally based on a set of geometric primitives corresponding to the spatial dimensions: Point (0-dimensional), a single coordinate in a Cartesian coordinate system. Line or Polyline (1-dimensional) usually represented as an ordered list of points sampled from a continuous line, whereupon the software is expected to interpolate the intervening shape of the line as straight- or curved-line segments. Polygon (2-dimensional) usually represented as a line that closes at its endpoints, representing the boundary of a two-dimensional region. The software is expected to use this boundary to partition 2-dimensional space into an interior and exterior. Surface (3-dimensional) represented using a variety of strategies, such as a polyhedron consisting of connected polygon faces. The software is expected to use this surface to partition 3-dimensional space into an interior and exterior. Frequently in these systems, especially GIS and Cartography, a representation of a real-world phenomenon may have a different (usually lower) dimension than the phenomenon being represented. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional volume of material) may be represented as a line. This dimensional generalization correlates with tendencies in spatial cognition. For example, asking the distance between two cities presumes a conceptual model of the cities as points, while giving directions involving travel "up," "down," or "along" a road imply a one-dimensional conceptual model. This is frequently done for purposes of data efficiency, visual simplicity, or cognitive efficiency, and is acceptable if the distinction between the representation and the represented is understood but can cause confusion if information users assume that the digital shape is a perfect representation of reality (i.e., believing that roads really are lines). More dimensions List of topics by dimension See also References Further reading Google preview External links Physical quantities Abstract algebra Geometric measurement Mathematical concepts
Dimension
[ "Physics", "Mathematics" ]
3,921
[ "Geometric measurement", "Physical phenomena", "Physical quantities", "Quantity", "Physical properties", "Geometry", "Theory of relativity", "nan", "Abstract algebra", "Dimension", "Algebra" ]
8,410
https://en.wikipedia.org/wiki/Decibel
The decibel (symbol: dB) is a relative unit of measurement equal to one tenth of a bel (B). It expresses the ratio of two values of a power or root-power quantity on a logarithmic scale. Two signals whose levels differ by one decibel have a power ratio of 10^(1/10) (approximately 1.26) or root-power ratio of 10^(1/20) (approximately 1.12). The unit fundamentally expresses a relative change but may also be used to express an absolute value as the ratio of a value to a fixed reference value; when used in this way, the unit symbol is often suffixed with letter codes that indicate the reference value. For example, for the reference value of 1 volt, a common suffix is "V" (e.g., "20 dBV"). Two principal types of scaling of the decibel are in common use. When expressing a power ratio, it is defined as ten times the logarithm with base 10. That is, a change in power by a factor of 10 corresponds to a 10 dB change in level. When expressing root-power quantities, a change in amplitude by a factor of 10 corresponds to a 20 dB change in level. The decibel scales differ by a factor of two, so that the related power and root-power levels change by the same value in linear systems, where power is proportional to the square of amplitude. The definition of the decibel originated in the measurement of transmission loss and power in telephony of the early 20th century in the Bell System in the United States. The bel was named in honor of Alexander Graham Bell, but the bel is seldom used. Instead, the decibel is used for a wide variety of measurements in science and engineering, most prominently for sound power in acoustics, in electronics and control theory. In electronics, the gains of amplifiers, attenuation of signals, and signal-to-noise ratios are often expressed in decibels. History The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. Until the mid-1920s, the unit for loss was miles of standard cable (MSC). 1 MSC corresponded to the loss of power over one mile (approximately 1.6 km) of standard telephone cable at a frequency of 5000 radians per second (795.8 Hz), and matched closely the smallest attenuation detectable to a listener. A standard telephone cable was "a cable having uniformly distributed resistance of 88 ohms per loop-mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile" (approximately corresponding to 19 gauge wire). In 1924, Bell Telephone Laboratories received a favorable response to a new unit definition among members of the International Advisory Committee on Long Distance Telephony in Europe and replaced the MSC with the Transmission Unit (TU). 1 TU was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power. The definition was conveniently chosen such that 1 TU approximated 1 MSC; specifically, 1 MSC was 1.056 TU. In 1928, the Bell system renamed the TU into the decibel, being one tenth of a newly defined unit for the base-10 logarithm of the power ratio. It was named the bel, in honor of the telecommunications pioneer Alexander Graham Bell. The bel is seldom used, as the decibel was the proposed working unit. The naming and early definition of the decibel is described in the NBS Standard's Yearbook of 1931. In 1954, J. W.
Horton argued that the use of the decibel as a unit for quantities other than transmission loss led to confusion, and suggested the name logit for "standard magnitudes which combine by multiplication", to contrast with the name unit for "standard magnitudes which combine by addition". In April 2003, the International Committee for Weights and Measures (CIPM) considered a recommendation for the inclusion of the decibel in the International System of Units (SI), but decided against the proposal. However, the decibel is recognized by other international bodies such as the International Electrotechnical Commission (IEC) and International Organization for Standardization (ISO). The IEC permits the use of the decibel with root-power quantities as well as power, and this recommendation is followed by many national standards bodies, such as NIST, which justifies the use of the decibel for voltage ratios. In spite of their widespread use, suffixes (such as in dBA or dBV) are not recognized by the IEC or ISO. Definition The IEC Standard 60027-3:2002 defines the following quantities. The decibel (dB) is one-tenth of a bel: 1 dB = 0.1 B. The bel (B) is (1/2) ln(10) nepers: 1 B = (1/2) ln(10) Np ≈ 1.151 Np. The neper is the change in the level of a root-power quantity when the root-power quantity changes by a factor of e, that is 1 Np = ln(e) = 1, thereby relating all of the units as nondimensional natural logarithms of root-power-quantity ratios: 1 dB = 0.1 B = (1/20) ln(10) Np ≈ 0.1151 Np. Finally, the level of a quantity is the logarithm of the ratio of the value of that quantity to a reference value of the same kind of quantity. Therefore, the bel represents the logarithm of a ratio between two power quantities of 10:1, or the logarithm of a ratio between two root-power quantities of √10:1. Two signals whose levels differ by one decibel have a power ratio of 10^(1/10), which is approximately 1.259, and an amplitude (root-power quantity) ratio of 10^(1/20) (approximately 1.122). The bel is rarely used either without a prefix or with SI unit prefixes other than deci; it is customary, for example, to use hundredths of a decibel rather than millibels. Thus, five one-thousandths of a bel would normally be written 0.05 dB, and not 5 mB. The method of expressing a ratio as a level in decibels depends on whether the measured property is a power quantity or a root-power quantity; see Power, root-power, and field quantities for details. Power quantities When referring to measurements of power quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to the reference value. Thus, the ratio of P (measured power) to P0 (reference power) is represented by LP, that ratio expressed in decibels, which is calculated using the formula: LP = 10 log10(P / P0) dB. The base-10 logarithm of the ratio of the two power quantities is the number of bels. The number of decibels is ten times the number of bels (equivalently, a decibel is one-tenth of a bel). P and P0 must measure the same type of quantity, and have the same units before calculating the ratio. If P = P0 in the above equation, then LP = 0. If P is greater than P0 then LP is positive; if P is less than P0 then LP is negative. Rearranging the above equation gives the following formula for P in terms of P0 and LP: P = P0 × 10^(LP / 10). Root-power (field) quantities When referring to measurements of root-power quantities, it is usual to consider the ratio of the squares of F (measured) and F0 (reference). This is because the definitions were originally formulated to give the same value for relative ratios for both power and root-power quantities.
Thus, the following definition is used: LF = 10 log10((F / F0)^2) dB = 20 log10(F / F0) dB. The formula may be rearranged to give F = F0 × 10^(LF / 20). Similarly, in electrical circuits, dissipated power is typically proportional to the square of voltage or current when the impedance is constant. Taking voltage as an example, this leads to the equation for power gain level LG: LG = 20 log10(Vout / Vin) dB, where Vout is the root-mean-square (rms) output voltage and Vin is the rms input voltage. A similar formula holds for current. The term root-power quantity is introduced by ISO Standard 80000-1:2009 as a substitute for field quantity. The term field quantity is deprecated by that standard and root-power is used throughout this article. Relationship between power and root-power levels Although power and root-power quantities are different quantities, their respective levels are historically measured in the same units, typically decibels. A factor of 2 is introduced to make changes in the respective levels match under restricted conditions such as when the medium is linear and the same waveform is under consideration with changes in amplitude, or the medium impedance is linear and independent of both frequency and time. This relies on the relationship (F / F0)^2 = P / P0 holding. In a nonlinear system, this relationship does not hold by the definition of linearity. However, even in a linear system in which the power quantity is the product of two linearly related quantities (e.g. voltage and current), if the impedance is frequency- or time-dependent, this relationship does not hold in general, for example if the energy spectrum of the waveform changes. For differences in level, the required relationship is relaxed from that above to one of proportionality (i.e., the reference quantities P0 and F0 need not be related), or equivalently, P2 / P1 = (F2 / F1)^2 must hold to allow the power level difference to be equal to the root-power level difference from power P1 and amplitude F1 to power P2 and amplitude F2. An example might be an amplifier with unity voltage gain independent of load and frequency driving a load with a frequency-dependent impedance: the relative voltage gain of the amplifier is always 0 dB, but the power gain depends on the changing spectral composition of the waveform being amplified. Frequency-dependent impedances may be analyzed by considering the quantities power spectral density and the associated root-power quantities via the Fourier transform, which allows elimination of the frequency dependence in the analysis by analyzing the system at each frequency independently. Conversions Since logarithm differences measured in these units often represent power ratios and root-power ratios, values for both are shown below. The bel is traditionally used as a unit of logarithmic power ratio, while the neper is used for logarithmic root-power (amplitude) ratio. Examples The unit dBW is often used to denote a ratio for which the reference is 1 W, and similarly dBm for a 1 mW reference point. Calculating the ratio in decibels of 1 kW (one kilowatt, or 1000 watts) to 1 W yields: LG = 10 log10(1000 W / 1 W) dB = 30 dB. The ratio in decibels of √1000 V ≈ 31.62 V to 1 V is: LG = 20 log10(31.62 V / 1 V) dB = 30 dB, illustrating the consequence from the definitions above that LG has the same value, 30 dB, regardless of whether it is obtained from powers or from amplitudes, provided that in the specific system being considered power ratios are equal to amplitude ratios squared. The ratio in decibels of a power P to 1 mW (one milliwatt) is obtained with the formula: LG = 10 log10(P / 1 mW) dB. The power ratio corresponding to a change in level of ΔL is given by: P / P0 = 10^(ΔL / 10). A change in power ratio by a factor of 10 corresponds to a change in level of 10 dB. A change in power ratio by a factor of 2 or 1/2 is approximately a change of 3 dB.
More precisely, the change is ±3.0103 dB, but this is almost universally rounded to 3 dB in technical writing. This implies an increase in voltage by a factor of √2 ≈ 1.414. Likewise, a doubling or halving of the voltage, corresponding to a quadrupling or quartering of the power, is commonly described as 6 dB rather than ±6.0206 dB. Should it be necessary to make the distinction, the number of decibels is written with additional significant figures. 3.000 dB corresponds to a power ratio of 10^(3/10), or 1.9953, about 0.24% different from exactly 2, and a voltage ratio of 1.4125, about 0.12% different from exactly √2. Similarly, an increase of 6.000 dB corresponds to a power ratio of 3.9811, about 0.5% different from 4. Properties The decibel is useful for representing large ratios and for simplifying representation of multiplicative effects, such as attenuation from multiple sources along a signal chain. Its application in systems with additive effects is less intuitive, such as in the combined sound pressure level of two machines operating together. Care is also necessary with decibels directly in fractions and with the units of multiplicative operations. Reporting large ratios The logarithmic scale nature of the decibel means that a very large range of ratios can be represented by a convenient number, in a manner similar to scientific notation. This allows one to clearly visualize huge changes of some quantity. See Bode plot and Semi-log plot. For example, 120 dB SPL may be clearer than "a trillion times more intense than the threshold of hearing". Representation of multiplication operations Level values in decibels can be added instead of multiplying the underlying power values, which means that the overall gain of a multi-component system, such as a series of amplifier stages, can be calculated by summing the gains in decibels of the individual components, rather than multiplying the amplification factors; that is, log(A × B × C) = log(A) + log(B) + log(C). Practically, this means that, armed only with the knowledge that 1 dB is a power gain of approximately 26%, 3 dB is approximately 2× power gain, and 10 dB is 10× power gain, it is possible to determine the power ratio of a system from the gain in dB with only simple addition and multiplication. For example: A system consists of 3 amplifiers in series, with gains (ratio of power out to in) of 10 dB, 8 dB, and 7 dB respectively, for a total gain of 25 dB. Broken into combinations of 10, 3, and 1 dB, this is: 25 dB = 10 dB + 10 dB + 3 dB + 1 dB + 1 dB. With an input of 1 watt, the output is approximately 1 W × 10 × 10 × 2 × 1.26 × 1.26 ≈ 317.5 W. Calculated precisely, the output is 1 W × 10^(25/10) ≈ 316.2 W. The approximate value has an error of only +0.4% with respect to the actual value, which is negligible given the precision of the values supplied and the accuracy of most measurement instrumentation. However, according to its critics, the decibel creates confusion, obscures reasoning, is more related to the era of slide rules than to modern digital processing, and is cumbersome and difficult to interpret. Quantities in decibels are not necessarily additive, thus being "of unacceptable form for use in dimensional analysis". Thus, units require special care in decibel operations. Take, for example, carrier-to-noise-density ratio C/N0 (in hertz), involving carrier power C (in watts) and noise power spectral density N0 (in W/Hz). Expressed in decibels, this ratio would be a subtraction (C/N0)dB = CdB − N0 dB. However, the linear-scale units still simplify in the implied fraction, so that the results would be expressed in dB-Hz.
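The gain-chain arithmetic described above is easy to check numerically. The following Python sketch is an illustrative addition rather than part of the original article (the helper names are chosen here only for clarity): it converts between power ratios and decibel levels and confirms that summing the 10 dB, 8 dB, and 7 dB stage gains is equivalent to multiplying the underlying power ratios.

```python
import math

def power_ratio_to_db(ratio: float) -> float:
    """Level in decibels for a power ratio P / P0 (ten times the base-10 log)."""
    return 10.0 * math.log10(ratio)

def db_to_power_ratio(level_db: float) -> float:
    """Power ratio P / P0 corresponding to a level expressed in decibels."""
    return 10.0 ** (level_db / 10.0)

def amplitude_ratio_to_db(ratio: float) -> float:
    """Level in decibels for a root-power (amplitude) ratio F / F0 (twenty times the base-10 log)."""
    return 20.0 * math.log10(ratio)

# Amplifier chain from the text: stage gains of 10 dB, 8 dB and 7 dB.
stage_gains_db = [10.0, 8.0, 7.0]
total_gain_db = sum(stage_gains_db)                          # 25 dB, obtained by simple addition
total_gain_ratio = math.prod(db_to_power_ratio(g) for g in stage_gains_db)

print(total_gain_db)                                         # 25.0
print(round(total_gain_ratio, 1))                            # ~316.2, the same ratio obtained by multiplication
print(round(1.0 * db_to_power_ratio(total_gain_db), 1))      # ~316.2 W out for a 1 W input
print(round(power_ratio_to_db(2.0), 4))                      # 3.0103 dB for a doubling of power
print(round(amplitude_ratio_to_db(2.0), 4))                  # 6.0206 dB for a doubling of voltage
```

Running the sketch reproduces the 25 dB total and the precisely calculated 316.2 W output quoted above, alongside the exact values behind the usual 3 dB and 6 dB rules of thumb.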
Representation of addition operations According to Mitschke, "The advantage of using a logarithmic measure is that in a transmission chain, there are many elements concatenated, and each has its own gain or attenuation. To obtain the total, addition of decibel values is much more convenient than multiplication of the individual factors." However, for the same reason that humans excel at additive operation over multiplication, decibels are awkward in inherently additive operations:if two machines each individually produce a sound pressure level of, say, 90 dB at a certain point, then when both are operating together we should expect the combined sound pressure level to increase to 93 dB, but certainly not to 180 dB!; suppose that the noise from a machine is measured (including the contribution of background noise) and found to be 87 dBA but when the machine is switched off the background noise alone is measured as 83 dBA. [...] the machine noise [level (alone)] may be obtained by 'subtracting' the 83 dBA background noise from the combined level of 87 dBA; i.e., 84.8 dBA.; in order to find a representative value of the sound level in a room a number of measurements are taken at different positions within the room, and an average value is calculated. [...] Compare the logarithmic and arithmetic averages of [...] 70 dB and 90 dB: logarithmic average = 87 dB; arithmetic average = 80 dB. Addition on a logarithmic scale is called logarithmic addition, and can be defined by taking exponentials to convert to a linear scale, adding there, and then taking logarithms to return. For example, where operations on decibels are logarithmic addition/subtraction and logarithmic multiplication/division, while operations on the linear scale are the usual operations: The logarithmic mean is obtained from the logarithmic sum by subtracting , since logarithmic division is linear subtraction. Fractions Attenuation constants, in topics such as optical fiber communication and radio propagation path loss, are often expressed as a fraction or ratio to distance of transmission. In this case, dB/m represents decibel per meter, dB/mi represents decibel per mile, for example. These quantities are to be manipulated obeying the rules of dimensional analysis, e.g., a 100-meter run with a 3.5 dB/km fiber yields a loss of 0.35 dB = 3.5 dB/km × 0.1 km. Uses Perception The human perception of the intensity of sound and light more nearly approximates the logarithm of intensity rather than a linear relationship (see Weber–Fechner law), making the dB scale a useful measure. Acoustics The decibel is commonly used in acoustics as a unit of sound power level or sound pressure level. The reference pressure for sound in air is set at the typical threshold of perception of an average human and there are common comparisons used to illustrate different levels of sound pressure. As sound pressure is a root-power quantity, the appropriate version of the unit definition is used: where prms is the root mean square of the measured sound pressure and pref is the standard reference sound pressure of 20 micropascals in air or 1 micropascal in water. Use of the decibel in underwater acoustics leads to confusion, in part because of this difference in reference value. Sound intensity is proportional to the square of sound pressure. Therefore, the sound intensity level can also be defined as: The human ear has a large dynamic range in sound reception. 
The ratio of the sound intensity that causes permanent damage during short exposure to that of the quietest sound that the ear can hear is equal to or greater than 1 trillion (1012). Such large measurement ranges are conveniently expressed in logarithmic scale: the base-10 logarithm of 1012 is 12, which is expressed as a sound intensity level of 120 dB re 1 pW/m2. The reference values of I and p in air have been chosen such that this corresponds approximately to a sound pressure level of 120 dB re 20 μPa. Since the human ear is not equally sensitive to all sound frequencies, the acoustic power spectrum is modified by frequency weighting (A-weighting being the most common standard) to get the weighted acoustic power before converting to a sound level or noise level in decibels. Telephony The decibel is used in telephony and audio. Similarly to the use in acoustics, a frequency weighted power is often used. For audio noise measurements in electrical circuits, the weightings are called psophometric weightings. Electronics In electronics, the decibel is often used to express power or amplitude ratios (as for gains) in preference to arithmetic ratios or percentages. One advantage is that the total decibel gain of a series of components (such as amplifiers and attenuators) can be calculated simply by summing the decibel gains of the individual components. Similarly, in telecommunications, decibels denote signal gain or loss from a transmitter to a receiver through some medium (free space, waveguide, coaxial cable, fiber optics, etc.) using a link budget. The decibel unit can also be combined with a reference level, often indicated via a suffix, to create an absolute unit of electric power. For example, it can be combined with "m" for "milliwatt" to produce the "dBm". A power level of 0 dBm corresponds to one milliwatt, and 1 dBm is one decibel greater (about 1.259 mW). In professional audio specifications, a popular unit is the dBu. This is relative to the root mean square voltage which delivers 1 mW (0 dBm) into a 600-ohm resistor, or ≈ 0.775 VRMS. When used in a 600-ohm circuit (historically, the standard reference impedance in telephone circuits), dBu and dBm are identical. Optics In an optical link, if a known amount of optical power, in dBm (referenced to 1 mW), is launched into a fiber, and the losses, in dB (decibels), of each component (e.g., connectors, splices, and lengths of fiber) are known, the overall link loss may be quickly calculated by addition and subtraction of decibel quantities. In spectrometry and optics, the blocking unit used to measure optical density is equivalent to −1 B. Video and digital imaging In connection with video and digital image sensors, decibels generally represent ratios of video voltages or digitized light intensities, using 20 log of the ratio, even when the represented intensity (optical power) is directly proportional to the voltage generated by the sensor, not to its square, as in a CCD imager where response voltage is linear in intensity. Thus, a camera signal-to-noise ratio or dynamic range quoted as 40 dB represents a ratio of 100:1 between optical signal intensity and optical-equivalent dark-noise intensity, not a 10,000:1 intensity (power) ratio as 40 dB might suggest. Sometimes the 20 log ratio definition is applied to electron counts or photon counts directly, which are proportional to sensor signal amplitude without the need to consider whether the voltage response to intensity is linear. 
However, as mentioned above, the 10 log intensity convention prevails more generally in physical optics, including fiber optics, so the terminology can become murky between the conventions of digital photographic technology and physics. Most commonly, quantities called "dynamic range" or "signal-to-noise" (of the camera) would be specified in 20 log dB, but in related contexts (e.g. attenuation, gain, intensifier SNR, or rejection ratio) the term should be interpreted cautiously, as confusion of the two units can result in very large misunderstandings of the value. Photographers typically use an alternative base-2 log unit, the stop, to describe light intensity ratios or dynamic range. Suffixes and reference values Suffixes are commonly attached to the basic dB unit in order to indicate the reference value by which the ratio is calculated. For example, dBm indicates power measurement relative to 1 milliwatt. In cases where the unit value of the reference is stated, the decibel value is known as "absolute". If the unit value of the reference is not explicitly stated, as in the dB gain of an amplifier, then the decibel value is considered relative. This form of attaching suffixes to dB is widespread in practice, albeit being against the rules promulgated by standards bodies (ISO and IEC), given the "unacceptability of attaching information to units" and the "unacceptability of mixing information with units". The IEC 60027-3 standard recommends the following format: or as , where x is the quantity symbol and xref is the value of the reference quantity, e.g.,  = 20 dB or = 20 dB for the electric field strength E relative to 1 μV/m reference value. If the measurement result 20 dB is presented separately, it can be specified using the information in parentheses, which is then part of the surrounding text and not a part of the unit: 20 dB (re: 1 μV/m) or 20 dB (1 μV/m). Outside of documents adhering to SI units, the practice is very common as illustrated by the following examples. There is no general rule, with various discipline-specific practices. Sometimes the suffix is a unit symbol ("W","K","m"), sometimes it is a transliteration of a unit symbol ("uV" instead of μV for microvolt), sometimes it is an acronym for the unit's name ("sm" for square meter, "m" for milliwatt), other times it is a mnemonic for the type of quantity being calculated ("i" for antenna gain with respect to an isotropic antenna, "λ" for anything normalized by the EM wavelength), or otherwise a general attribute or identifier about the nature of the quantity ("A" for A-weighted sound pressure level). The suffix is often connected with a hyphen, as in "dBHz", or with a space, as in "dB HL", or enclosed in parentheses, as in "dB(HL)", or with no intervening character, as in "dBm" (which is non-compliant with international standards). List of suffixes Voltage Since the decibel is defined with respect to power, not amplitude, conversions of voltage ratios to decibels must square the amplitude, or use the factor of 20 instead of 10, as discussed above. dB dB(VRMS) – voltage relative to 1 volt, regardless of impedance. This is used to measure microphone sensitivity, and also to specify the consumer line-level of , in order to reduce manufacturing costs relative to equipment using a line-level signal. dB or dB RMS voltage relative to (i.e. the voltage that would dissipate 1 mW into a 600 Ω load). An RMS voltage of 1 V therefore corresponds to Originally dB, it was changed to dB to avoid confusion with dB. 
The v comes from volt, while u comes from the volume unit displayed on a VU meter.dB can be used as a measure of voltage, regardless of impedance, but is derived from a 600 Ω load dissipating 0 dB (1 mW). The reference voltage comes from the computation where is the resistance and is the power. In professional audio, equipment may be calibrated to indicate a "0" on the VU meters some finite time after a signal has been applied at an amplitude of . Consumer equipment typically uses a lower "nominal" signal level of Therefore, many devices offer dual voltage operation (with different gain or "trim" settings) for interoperability reasons. A switch or adjustment that covers at least the range between and is common in professional equipment. dB Defined by Recommendation ITU-R V.574 ; dB: dB(mVRMS) – root mean square voltage relative to 1 millivolt across 75 Ω. Widely used in cable television networks, where the nominal strength of a single TV signal at the receiver terminals is about 0 dB. Cable TV uses 75 Ω coaxial cable, so 0 dB corresponds to −78.75 dB or approximately 13 nW. dB or dB dB(μVRMS) – voltage relative to 1 microvolt. Widely used in television and aerial amplifier specifications. 60 dBμV = 0 dB. Acoustics Probably the most common usage of "decibels" in reference to sound level is dB, sound pressure level referenced to the nominal threshold of human hearing: The measures of pressure (a root-power quantity) use the factor of 20, and the measures of power (e.g. dB and dB) use the factor of 10. dB dB (sound pressure level) – for sound in air and other gases, relative to 20 micropascals (μPa), or , approximately the quietest sound a human can hear. For sound in water and other liquids, a reference pressure of 1 μPa is used. An RMS sound pressure of one pascal corresponds to a level of 94 dB SPL. dB dB sound intensity level – relative to 10−12 W/m2, which is roughly the threshold of human hearing in air. dB dB sound power level – relative to 10−12 W. dB, dB, and dB These symbols are often used to denote the use of different weighting filters, used to approximate the human ear's response to sound, although the measurement is still in dB (SPL). These measurements usually refer to noise and its effects on humans and other animals, and they are widely used in industry while discussing noise control issues, regulations and environmental standards. Other variations that may be seen are dB or dB(A). According to standards from the International Electro-technical Committee (IEC 61672-2013) and the American National Standards Institute, ANSI S1.4, the preferred usage is to write Nevertheless, the units dB and dB(A) are still commonly used as a shorthand for Aweighted measurements. Compare dB, used in telecommunications. dB dB hearing level is used in audiograms as a measure of hearing loss. The reference level varies with frequency according to a minimum audibility curve as defined in ANSI and other standards, such that the resulting audiogram shows deviation from what is regarded as 'normal' hearing. dB sometimes used to denote weighted noise level, commonly using the ITU-R 468 noise weighting dB relative to the peak to peak sound pressure. dB G‑weighted spectrum Audio electronics See also dB and dB above. dB dB(mW) – power relative to 1 milliwatt. In audio and telephony, dB is typically referenced relative to a 600 Ω impedance, which corresponds to a voltage level of 0.775 volts or 775 millivolts. dB Power in dB (described above) measured at a zero transmission level point. 
dB dB(full scale) – the amplitude of a signal compared with the maximum which a device can handle before clipping occurs. Full-scale may be defined as the power level of a full-scale sinusoid or alternatively a full-scale square wave. A signal measured with reference to a full-scale sine-wave appears 3 dB weaker when referenced to a full-scale square wave, thus: 0 dBFS(fullscale sine wave) = −3 dB (fullscale square wave). dB dB volume unit dB dB(true peak) – peak amplitude of a signal compared with the maximum which a device can handle before clipping occurs. In digital systems, 0 dB would equal the highest level (number) the processor is capable of representing. Measured values are always negative or zero, since they are less than or equal to full-scale. Radar dB dB(Z) – decibel relative to Z = 1 mm⋅m: energy of reflectivity (weather radar), related to the amount of transmitted power returned to the radar receiver. Values above 20 dB usually indicate falling precipitation. dB dB(m²) – decibel relative to one square meter: measure of the radar cross section (RCS) of a target. The power reflected by the target is proportional to its RCS. "Stealth" aircraft and insects have negative RCS measured in dB, large flat plates or non-stealthy aircraft have positive values. Radio power, energy, and field strength dB relative to carrier – in telecommunications, this indicates the relative levels of noise or sideband power, compared with the carrier power. Compare dB, used in acoustics. dB relative to the maximum value of the peak power. dB energy relative to 1 joule. 1 joule = 1 watt second = 1 watt per hertz, so power spectral density can be expressed in dB. dB dB(mW) – power relative to 1 milliwatt. In the radio field, dB is usually referenced to a 50 Ω load, with the resultant voltage being 0.224 volts. dB, dB, or dB dB(μV/m) – electric field strength relative to 1 microvolt per meter. The unit is often used to specify the signal strength of a television broadcast at a receiving site (the signal measured at the antenna output is reported in dBμ). dB dB(fW) – power relative to 1 femtowatt. dB dB(W) – power relative to 1 watt. dB dB(kW) – power relative to 1 kilowatt. dB dB electrical. dB dB optical. A change of 1 dB in optical power can result in a change of up to 2 dB in electrical signal power in a system that is thermal noise limited. Antenna measurements dB dB(isotropic) – the gain of an antenna compared with the gain of a theoretical isotropic antenna, which uniformly distributes energy in all directions. Linear polarization of the EM field is assumed unless noted otherwise. dB dB(dipole) – the gain of an antenna compared with the gain a half-wave dipole antenna. 0 dBd = 2.15 dBi dB dB(isotropic circular) – the gain of an antenna compared to the gain of a theoretical circularly polarized isotropic antenna. There is no fixed conversion rule between dB and dB, as it depends on the receiving antenna and the field polarization. dB dB(quarterwave) – the gain of an antenna compared to the gain of a quarter wavelength whip. Rarely used, except in some marketing material; dB dB, dB(m²) – decibels relative to one square meter: A measure of the effective area for capturing signals of the antenna. dB dB(m) – decibels relative to reciprocal of meter: measure of the antenna factor. Other measurements dB or dB‑Hz dB(Hz) – bandwidth relative to one hertz. E.g., 20 dBHz corresponds to a bandwidth of 100 Hz. Commonly used in link budget calculations. 
Also used in carrier-to-noise-density ratio (not to be confused with carrier-to-noise ratio, in dB). dB or dB dB(overload) – the amplitude of a signal (usually audio) compared with the maximum which a device can handle before clipping occurs. Similar to dB FS, but also applicable to analog systems. According to ITU-T Rec. G.100.1 the level in dB ov of a digital system is defined as: with the maximum signal power for a rectangular signal with the maximum amplitude The level of a tone with a digital amplitude (peak value) of is therefore dB dB(relative) – simply a relative difference from something else, which is made apparent in context. The difference of a filter's response to nominal levels, for instance. dB dB above reference noise. See also dB dB dB(rnC) represents an audio level measurement, typically in a telephone circuit, relative to a −90 dB reference level, with the measurement of this level frequency-weighted by a standard C-message weighting filter. The C-message weighting filter was chiefly used in North America. The psophometric filter is used for this purpose on international circuits. dB dB(K) – decibels relative to 1 K; used to express noise temperature. dB or dB dB(K⁻¹) – decibels relative to 1 K⁻¹. — not decibels per Kelvin: Used for the (G/T) factor, a figure of merit used in satellite communications, relating the antenna gain to the receiver system noise equivalent temperature . List of suffixes in alphabetical order Unpunctuated suffixes dB see dB(A). dB see dB adjusted. dB see dB(B). dB relative to carrier – in telecommunications, this indicates the relative levels of noise or sideband power, compared with the carrier power. dB see dB(C). dB see dB(D). dB dB(dipole) – the forward gain of an antenna compared with a half-wave dipole antenna. 0 dBd = 2.15 dB dB dB electrical. dB dB(fW) – power relative to 1 femtowatt. dB dB(full scale) – the amplitude of a signal compared with the maximum which a device can handle before clipping occurs. Full-scale may be defined as the power level of a full-scale sinusoid or alternatively a full-scale square wave. A signal measured with reference to a full-scale sine-wave appears 3 dB weaker when referenced to a full-scale square wave, thus: 0 dB (fullscale sine wave) = −3 dB (full-scale square wave). dB G-weighted spectrum dB dB(isotropic) – the forward gain of an antenna compared with the hypothetical isotropic antenna, which uniformly distributes energy in all directions. Linear polarization of the EM field is assumed unless noted otherwise. dB dB(isotropic circular) – the forward gain of an antenna compared to a circularly polarized isotropic antenna. There is no fixed conversion rule between dB and dB, as it depends on the receiving antenna and the field polarization. dB energy relative to 1 joule: 1 joule = 1 watt-second = 1 watt per hertz, so power spectral density can be expressed in dB. dB dB(kW) – power relative to 1 kilowatt. dB dB(K) – decibels relative to kelvin: Used to express noise temperature. dB dB(mW) – power relative to 1 milliwatt. dB or dB dB(m²) – decibel relative to one square meter dB Power in dB measured at a zero transmission level point. dB Defined by Recommendation ITU-R V.574. dB dB(mVRMS) – voltage relative to 1 millivolt across 75 Ω. dB dB optical. A change of 1 dB in optical power can result in a change of up to 2 dB in electrical signal power in system that is thermal noise limited. 
dB see dB dB or dB dB(overload) – the amplitude of a signal (usually audio) compared with the maximum which a device can handle before clipping occurs. dB relative to the peak to peak sound pressure. dB relative to the maximum value of the peak electrical power. dB dB(quarterwave) – the forward gain of an antenna compared to a quarter wavelength whip. Rarely used, except in some marketing material. 0 dBq = −0.85 dB dB dB(relative) – simply a relative difference from something else, which is made apparent in context. The difference of a filter's response to nominal levels, for instance. dB dB above reference noise. See also dB dB dB represents an audio level measurement, typically in a telephone circuit, relative to the circuit noise level, with the measurement of this level frequency-weighted by a standard C-message weighting filter. The C-message weighting filter was chiefly used in North America. dB see dB dB dB(true peak) – peak amplitude of a signal compared with the maximum which a device can handle before clipping occurs. dB or dB RMS voltage relative to dB Defined by Recommendation ITU-R V.574. dB see dB dB see dB dB see dB dB dB(VRMS) – voltage relative to 1 volt, regardless of impedance. dB dB(VU) dB volume unit dB dB(W) – power relative to 1 watt. dB spectral density relative to 1 W·m⁻²·Hz⁻¹ dB dB(Z) – decibel relative to Z = 1 mm6⋅m−3 dB see dB dB or dB dB(μVRMS) – voltage relative to 1 root mean square microvolt. dB, dB, or dB dB(μV/m) – electric field strength relative to 1 microvolt per meter. Suffixes preceded by a space dB HL dB hearing level is used in audiograms as a measure of hearing loss. dB Q sometimes used to denote weighted noise level dB SIL dB sound intensity level – relative to 10−12 W/m2 dB SPL dB SPL (sound pressure level) – for sound in air and other gases, relative to 20 μPa in air or 1 μPa in water dB SWL dB sound power level – relative to 10−12 W. Suffixes within parentheses dB(A), dB(B), dB(C), dB(D), dB(G), and dB(Z) These symbols are often used to denote the use of different weighting filters, used to approximate the human ear's response to sound, although the measurement is still in dB (SPL). These measurements usually refer to noise and its effects on humans and other animals, and they are widely used in industry while discussing noise control issues, regulations and environmental standards. Other variations that may be seen are dBA or dBA. Other suffixes dB or dB-Hz dB(Hz) – bandwidth relative to one Hertz dB or dB dB(K⁻¹) – decibels relative to reciprocal of kelvin dB dB(m⁻¹) – decibel relative to reciprocal of meter: measure of the antenna factor mB mB(mW) – power relative to 1 milliwatt, in millibels (one hundredth of a decibel). 100 mB = 1 dB. This unit is in the Wi-Fi drivers of the Linux kernel and the regulatory domain sections. See also Apparent magnitude Cent (music) Day–evening–night noise level (Lden) and day-night average sound level (Ldl), European and American standards for expressing noise level over an entire day dB drag racing Decade (log scale) Loudness Neper pH Phon Richter magnitude scale Sone Notes References Further reading External links What is a decibel? With sound files and animations Conversion of sound level units: dBSPL or dBA to sound pressure p and sound intensity J OSHA Regulations on Occupational Noise Exposure Working with Decibels (RF signal and field strengths) Acoustics Audio electronics Radio frequency propagation Telecommunications engineering Units of level
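As a numerical companion to the "Representation of addition operations" section above, the following Python sketch is an illustrative addition rather than part of the original article, and the helper functions are hypothetical names. It combines and separates sound levels by converting to linear power, operating there, and converting back, reproducing the 90 dB + 90 dB ≈ 93 dB, 87 dBA − 83 dBA ≈ 84.8 dBA, and 87 dB logarithmic-mean figures quoted in the text.

```python
import math

def db_sum(*levels_db: float) -> float:
    """Logarithmic (incoherent power) addition of levels given in decibels."""
    total_power = sum(10.0 ** (level / 10.0) for level in levels_db)
    return 10.0 * math.log10(total_power)

def db_subtract(total_db: float, background_db: float) -> float:
    """Remove a background contribution from a combined level, in decibels."""
    remaining_power = 10.0 ** (total_db / 10.0) - 10.0 ** (background_db / 10.0)
    return 10.0 * math.log10(remaining_power)

print(round(db_sum(90.0, 90.0), 1))        # 93.0 -> two equal sources add about 3 dB
print(round(db_subtract(87.0, 83.0), 1))   # 84.8 -> machine noise with the background removed
print(round(db_sum(70.0, 90.0) - 10.0 * math.log10(2), 1))  # 87.0 -> logarithmic mean of 70 dB and 90 dB
```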
Decibel
[ "Physics", "Mathematics", "Engineering" ]
8,727
[ "Audio electronics", "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Spectrum (physical sciences)", "Radio frequency propagation", "Units of level", "Quantity", "Classical mechanics", "Acoustics", "Electromagnetic spectrum", "Waves", "Logarithmic scales of mea...
8,429
https://en.wikipedia.org/wiki/Density
Density (volumetric mass density or specific mass) is a substance's mass per unit of volume. The symbol most often used for density is ρ (the lower case Greek letter rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume: ρ = m / V, where ρ is the density, m is the mass, and V is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as its weight per unit volume, although this is scientifically inaccurate; this quantity is more specifically called specific weight. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy, purity and packaging. Osmium is the densest known element at standard conditions for temperature and pressure. To simplify comparisons of density across different systems of units, it is sometimes replaced by the dimensionless quantity "relative density" or "specific gravity", i.e. the ratio of the density of the material to that of a standard material, usually water. Thus a relative density less than one relative to water means that the substance floats in water. The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance (with a few exceptions) decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, due to the decrease in the density of the heated fluid, which causes it to rise relative to denser unheated material. The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density; rather it increases its mass. Other conceptually comparable quantities or ratios include specific density, relative density (specific gravity), and specific weight. History Density, floating, and sinking The understanding that different materials have different densities, and of a relationship between density, floating, and sinking must date to prehistoric times. Much later it was put in writing; Aristotle, for example, wrote about it. Volume vs. density; volume of an irregular shape In a well-known but probably apocryphal tale, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a golden wreath dedicated to the gods and replacing it with another, cheaper alloy. Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass; but the king did not approve of this. Baffled, Archimedes is said to have taken an immersion bath and observed from the rise of the water upon entering that he could calculate the volume of the gold wreath through the displacement of the water. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!" As a result, the term eureka entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place.
Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time. Nevertheless, in 1586, Galileo Galilei, in one of his first experiments, made a possible reconstruction of how the experiment could have been performed with ancient Greek resources Units From the equation for density (), mass density has any unit that is mass divided by volume. As there are many units of mass and volume covering many different magnitudes there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m3) and the cgs unit of gram per cubic centimetre (g/cm3) are probably the most commonly used units for density. One g/cm3 is equal to 1000 kg/m3. One cubic centimetre (abbreviation cc) is equal to one millilitre. In industry, other larger or smaller units of mass and or volume are often more practical and US customary units may be used. See below for a list of some of the most common units of density. The litre and tonne are not part of the SI, but are acceptable for use with it, leading to the following units: kilogram per litre (kg/L) gram per millilitre (g/mL) tonne per cubic metre (t/m3) Densities using the following metric units all have exactly the same numerical value, one thousandth of the value in (kg/m3). Liquid water has a density of about 1 kg/dm3, making any of these SI units numerically convenient to use as most solids and liquids have densities between 0.1 and 20 kg/dm3. kilogram per cubic decimetre (kg/dm3) gram per cubic centimetre (g/cm3) 1 g/cm3 = 1000 kg/m3 megagram (metric ton) per cubic metre (Mg/m3) In US customary units density can be stated in: Avoirdupois ounce per cubic inch (1 g/cm3 ≈ 0.578036672 oz/cu in) Avoirdupois ounce per fluid ounce (1 g/cm3 ≈ 1.04317556 oz/US fl oz = 1.04317556 lb/US fl pint) Avoirdupois pound per cubic inch (1 g/cm3 ≈ 0.036127292 lb/cu in) pound per cubic foot (1 g/cm3 ≈ 62.427961 lb/cu ft) pound per cubic yard (1 g/cm3 ≈ 1685.5549 lb/cu yd) pound per US liquid gallon (1 g/cm3 ≈ 8.34540445 lb/US gal) pound per US bushel (1 g/cm3 ≈ 77.6888513 lb/bu) slug per cubic foot Imperial units differing from the above (as the Imperial gallon and bushel differ from the US units) in practice are rarely used, though found in older documents. The Imperial gallon was based on the concept that an Imperial fluid ounce of water would have a mass of one Avoirdupois ounce, and indeed 1 g/cm3 ≈ 1.00224129 ounces per Imperial fluid ounce = 10.0224129 pounds per Imperial gallon. The density of precious metals could conceivably be based on Troy ounces and pounds, a possible cause of confusion. Knowing the volume of the unit cell of a crystalline material and its formula weight (in daltons), the density can be calculated. One dalton per cubic ångström is equal to a density of 1.660 539 066 60 g/cm3. Measurement A number of techniques as well as standards exist for the measurement of density of materials. Such techniques include the use of a hydrometer (a buoyancy method for liquids), Hydrostatic balance (a buoyancy method for liquids and solids), immersed body method (a buoyancy method for liquids), pycnometer (liquids and solids), air comparison pycnometer (solids), oscillating densitometer (liquids), as well as pour and tap (solids). However, each individual method or technique measures different types of density (e.g. 
bulk density, skeletal density, etc.), and therefore it is necessary to have an understanding of the type of density being measured as well as the type of material in question. Homogeneous materials The density at all points of a homogeneous object equals its total mass divided by its total volume. The mass is normally measured with a scale or balance; the volume may be measured directly (from the geometry of the object) or by the displacement of a fluid. To determine the density of a liquid or a gas, a hydrometer, a dasymeter or a Coriolis flow meter may be used, respectively. Similarly, hydrostatic weighing uses the displacement of water due to a submerged object to determine the density of the object. Heterogeneous materials If the body is not homogeneous, then its density varies between different regions of the object. In that case the density around any given location is determined by calculating the density of a small volume around that location. In the limit of an infinitesimal volume the density of an inhomogeneous object at a point becomes: , where is an elementary volume at position . The mass of the body then can be expressed as Non-compact materials In practice, bulk materials such as sugar, sand, or snow contain voids. Many materials exist in nature as flakes, pellets, or granules. Voids are regions which contain something other than the considered material. Commonly the void is air, but it could also be vacuum, liquid, solid, or a different gas or gaseous mixture. The bulk volume of a material —inclusive of the void space fraction— is often obtained by a simple measurement (e.g. with a calibrated measuring cup) or geometrically from known dimensions. Mass divided by bulk volume determines bulk density. This is not the same thing as the material volumetric mass density. To determine the material volumetric mass density, one must first discount the volume of the void fraction. Sometimes this can be determined by geometrical reasoning. For the close-packing of equal spheres the non-void fraction can be at most about 74%. It can also be determined empirically. Some bulk materials, however, such as sand, have a variable void fraction which depends on how the material is agitated or poured. It might be loose or compact, with more or less air space depending on handling. In practice, the void fraction is not necessarily air, or even gaseous. In the case of sand, it could be water, which can be advantageous for measurement as the void fraction for sand saturated in water—once any air bubbles are thoroughly driven out—is potentially more consistent than dry sand measured with an air void. In the case of non-compact materials, one must also take care in determining the mass of the material sample. If the material is under pressure (commonly ambient air pressure at the earth's surface) the determination of mass from a measured sample weight might need to account for buoyancy effects due to the density of the void constituent, depending on how the measurement was conducted. In the case of dry sand, sand is so much denser than air that the buoyancy effect is commonly neglected (less than one part in one thousand). Mass change upon displacing one void material with another while maintaining constant volume can be used to estimate the void fraction, if the difference in density of the two voids materials is reliably known. Changes of density In general, density can be changed by changing either the pressure or the temperature. 
Increasing the pressure always increases the density of a material. Increasing the temperature generally decreases the density, but there are notable exceptions to this generalization. For example, the density of water increases between its melting point at 0 °C and 4 °C; similar behavior is observed in silicon at low temperatures. The effect of pressure and temperature on the densities of liquids and solids is small. The compressibility for a typical liquid or solid is 10^−6 bar^−1 (1 bar = 0.1 MPa) and a typical thermal expansivity is 10^−5 K^−1. This roughly translates into needing around ten thousand times atmospheric pressure to reduce the volume of a substance by one percent. (Although the pressures needed may be around a thousand times smaller for sandy soil and some clays.) A one percent expansion of volume typically requires a temperature increase on the order of thousands of degrees Celsius. In contrast, the density of gases is strongly affected by pressure. The density of an ideal gas is ρ = MP / (RT), where M is the molar mass, P is the pressure, R is the universal gas constant, and T is the absolute temperature. This means that the density of an ideal gas can be doubled by doubling the pressure, or by halving the absolute temperature. In the case of volumic thermal expansion at constant pressure and small intervals of temperature the temperature dependence of density is ρ = ρ0 / (1 + α(T − T0)), where ρ0 is the density at a reference temperature T0 and α is the thermal expansion coefficient of the material at temperatures close to T0. Density of solutions The density of a solution is the sum of mass (massic) concentrations of the components of that solution. Mass (massic) concentration of each given component ρi in a solution sums to the density of the solution: ρ = Σi ρi. Expressed as a function of the densities of pure components of the mixture and their volume participation, it allows the determination of excess molar volumes, provided that there is no interaction between the components. Knowing the relation between excess volumes and activity coefficients of the components, one can determine the activity coefficients. List of densities Various materials Others Water Air Molar volumes of liquid and solid phase of elements See also Densities of the elements (data page) List of elements by density Air density Area density Bulk density Buoyancy Charge density Density current Density prediction by the Girolami method Dord Energy density Lighter than air Linear density Number density Orthobaric density Paper density Specific weight Spice (oceanography) Standard temperature and pressure Volumic quantity References External links Video: Density Experiment with Oil and Alcohol Video: Density Experiment with Whiskey and Water Glass Density Calculation – Calculation of the density of glass at room temperature and of glass melts at 1000 – 1400°C List of Elements of the Periodic Table – Sorted by Density Calculation of saturated liquid densities for some components Field density test Water – Density and specific weight Temperature dependence of the density of water – Conversions of density units A delicious density experiment Water density calculator Water density for a given salinity and temperature. Liquid density calculator Select a liquid from the list and calculate density as a function of temperature. Gas density calculator Calculate density of a gas as a function of temperature and pressure. Densities of various materials. Determination of Density of Solid, instructions for performing classroom experiment.
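To make the pressure and temperature dependence described in the "Changes of density" section concrete, the following Python sketch is an illustrative addition, not part of the original article; the numeric inputs are assumed example values. It evaluates the ideal-gas density ρ = MP/(RT) for dry air and the small-interval thermal-expansion estimate ρ = ρ0 / (1 + αΔT) for a liquid.

```python
R = 8.314462618  # universal gas constant, J/(mol*K)

def ideal_gas_density(molar_mass_kg_per_mol: float, pressure_pa: float, temperature_k: float) -> float:
    """Density of an ideal gas, rho = M * P / (R * T), in kg/m^3."""
    return molar_mass_kg_per_mol * pressure_pa / (R * temperature_k)

def density_after_heating(rho_ref: float, alpha_per_k: float, delta_t_k: float) -> float:
    """Volumetric-expansion estimate rho = rho0 / (1 + alpha * dT), valid for small temperature intervals."""
    return rho_ref / (1.0 + alpha_per_k * delta_t_k)

# Dry air at roughly standard conditions (assumed example values):
# molar mass ~0.0289652 kg/mol, 101325 Pa, 288.15 K -> about 1.225 kg/m^3.
print(round(ideal_gas_density(0.0289652, 101325.0, 288.15), 3))

# Doubling the absolute temperature at constant pressure halves the density.
print(round(ideal_gas_density(0.0289652, 101325.0, 2 * 288.15), 3))

# A liquid with alpha ~ 1e-5 per kelvin barely changes density over a 10 K rise.
print(round(density_after_heating(1000.0, 1.0e-5, 10.0), 2))
```

The third print illustrates the point made above that thermal expansion of liquids and solids is a small effect, while the first two show the strong temperature dependence for gases.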
Density
[ "Physics" ]
2,961
[ "Mechanical quantities", "Physical quantities", "Mass", "Intensive quantities", "Volume-specific quantities", "Density", "Mass density", "Matter" ]
8,456
https://en.wikipedia.org/wiki/Denaturation%20%28biochemistry%29
In biochemistry, denaturation is a process in which proteins or nucleic acids lose folded structure present in their native state due to various factors, including application of some external stress or compound, such as a strong acid or base, a concentrated inorganic salt, an organic solvent (e.g., alcohol or chloroform), agitation and radiation, or heat. If proteins in a living cell are denatured, this results in disruption of cell activity and possibly cell death. Protein denaturation is also a consequence of cell death. Denatured proteins can exhibit a wide range of characteristics, from conformational change and loss of solubility or dissociation of cofactors to aggregation due to the exposure of hydrophobic groups. The loss of solubility as a result of denaturation is called coagulation. Denatured proteins lose their 3D structure, and therefore, cannot function. Proper protein folding is key to whether a globular or membrane protein can do its job correctly; it must be folded into the native shape to function. However, hydrogen bonds and cofactor-protein binding, which play a crucial role in folding, are rather weak, and thus, easily affected by heat, acidity, varying salt concentrations, chelating agents, and other stressors which can denature the protein. This is one reason why cellular homeostasis is physiologically necessary in most life forms. Common examples When food is cooked, some of its proteins become denatured. This is why boiled eggs become hard and cooked meat becomes firm. A classic example of denaturing in proteins comes from egg whites, which are typically largely egg albumins in water. Fresh from the eggs, egg whites are transparent and liquid. Cooking the thermally unstable whites turns them opaque, forming an interconnected solid mass. The same transformation can be effected with a denaturing chemical. Pouring egg whites into a beaker of acetone will also turn egg whites translucent and solid. The skin that forms on curdled milk is another common example of denatured protein. The cold appetizer known as ceviche is prepared by chemically "cooking" raw fish and shellfish in an acidic citrus marinade, without heat. Protein denaturation Denatured proteins can exhibit a wide range of characteristics, from loss of solubility to protein aggregation. Background Proteins or polypeptides are polymers of amino acids. A protein is created by ribosomes that "read" RNA that is encoded by codons in the gene and assemble the requisite amino acid combination from the genetic instruction, in a process known as translation. The newly created protein strand then undergoes posttranslational modification, in which additional atoms or molecules are added, for example copper, zinc, or iron. Once this post-translational modification process has been completed, the protein begins to fold (sometimes spontaneously and sometimes with enzymatic assistance), curling up on itself so that hydrophobic elements of the protein are buried deep inside the structure and hydrophilic elements end up on the outside. The final shape of a protein determines how it interacts with its environment. Protein folding consists of a balance between a substantial amount of weak intra-molecular interactions within a protein (Hydrophobic, electrostatic, and Van Der Waals Interactions) and protein-solvent interactions. As a result, this process is heavily reliant on environmental state that the protein resides in. 
These environmental conditions include, and are not limited to, temperature, salinity, pressure, and the solvents that happen to be involved. Consequently, any exposure to extreme stresses (e.g. heat or radiation, high inorganic salt concentrations, strong acids and bases) can disrupt a protein's interaction and inevitably lead to denaturation. When a protein is denatured, secondary and tertiary structures are altered but the peptide bonds of the primary structure between the amino acids are left intact. Since all structural levels of the protein determine its function, the protein can no longer perform its function once it has been denatured. This is in contrast to intrinsically unstructured proteins, which are unfolded in their native state, but still functionally active and tend to fold upon binding to their biological target. How denaturation occurs at levels of protein structure In quaternary structure denaturation, protein sub-units are dissociated and/or the spatial arrangement of protein subunits is disrupted. Tertiary structure denaturation involves the disruption of: Covalent interactions between amino acid side-chains (such as disulfide bridges between cysteine groups) Non-covalent dipole-dipole interactions between polar amino acid side-chains (and the surrounding solvent) Van der Waals (induced dipole) interactions between nonpolar amino acid side-chains. In secondary structure denaturation, proteins lose all regular repeating patterns such as alpha-helices and beta-pleated sheets, and adopt a random coil configuration. Primary structure, such as the sequence of amino acids held together by covalent peptide bonds, is not disrupted by denaturation. Loss of function Most biological substrates lose their biological function when denatured. For example, enzymes lose their activity, because the substrates can no longer bind to the active site, and because amino acid residues involved in stabilizing substrates' transition states are no longer positioned to be able to do so. The denaturing process and the associated loss of activity can be measured using techniques such as dual-polarization interferometry, CD, QCM-D and MP-SPR. Loss of activity due to heavy metals and metalloids By targeting proteins, heavy metals have been known to disrupt the function and activity carried out by proteins. Heavy metals fall into categories consisting of transition metals as well as a select amount of metalloid. These metals, when interacting with native, folded proteins, tend to play a role in obstructing their biological activity. This interference can be carried out in a different number of ways. These heavy metals can form a complex with the functional side chain groups present in a protein or form bonds to free thiols. Heavy metals also play a role in oxidizing amino acid side chains present in protein. Along with this, when interacting with metalloproteins, heavy metals can dislocate and replace key metal ions. As a result, heavy metals can interfere with folded proteins, which can strongly deter protein stability and activity. Reversibility and irreversibility In many cases, denaturation is reversible (the proteins can regain their native state when the denaturing influence is removed). This process can be called renaturation. This understanding has led to the notion that all the information needed for proteins to assume their native state was encoded in the primary structure of the protein, and hence in the DNA that codes for the protein, the so-called "Anfinsen's thermodynamic hypothesis". 
Denaturation can also be irreversible. This irreversibility is typically a kinetic, not thermodynamic irreversibility, as a folded protein generally has lower free energy than when it is unfolded. Through kinetic irreversibility, the fact that the protein is stuck in a local minimum can stop it from ever refolding after it has been irreversibly denatured. Protein denaturation due to pH Denaturation can also be caused by changes in the pH which can affect the chemistry of the amino acids and their residues. The ionizable groups in amino acids are able to become ionized when changes in pH occur. A pH change to more acidic or more basic conditions can induce unfolding. Acid-induced unfolding often occurs between pH 2 and 5, base-induced unfolding usually requires pH 10 or higher. Nucleic acid denaturation Nucleic acids (including RNA and DNA) are nucleotide polymers synthesized by polymerase enzymes during either transcription or DNA replication. Following 5'-3' synthesis of the backbone, individual nitrogenous bases are capable of interacting with one another via hydrogen bonding, thus allowing for the formation of higher-order structures. Nucleic acid denaturation occurs when hydrogen bonding between nucleotides is disrupted, and results in the separation of previously annealed strands. For example, denaturation of DNA due to high temperatures results in the disruption of base pairs and the separation of the double stranded helix into two single strands. Nucleic acid strands are capable of re-annealling when "normal" conditions are restored, but if restoration occurs too quickly, the nucleic acid strands may re-anneal imperfectly resulting in the improper pairing of bases. Biologically-induced denaturation The non-covalent interactions between antiparallel strands in DNA can be broken in order to "open" the double helix when biologically important mechanisms such as DNA replication, transcription, DNA repair or protein binding are set to occur. The area of partially separated DNA is known as the denaturation bubble, which can be more specifically defined as the opening of a DNA double helix through the coordinated separation of base pairs. The first model that attempted to describe the thermodynamics of the denaturation bubble was introduced in 1966 and called the Poland-Scheraga Model. This model describes the denaturation of DNA strands as a function of temperature. As the temperature increases, the hydrogen bonds between the base pairs are increasingly disturbed and "denatured loops" begin to form. However, the Poland-Scheraga Model is now considered elementary because it fails to account for the confounding implications of DNA sequence, chemical composition, stiffness and torsion. Recent thermodynamic studies have inferred that the lifetime of a singular denaturation bubble ranges from 1 microsecond to 1 millisecond. This information is based on established timescales of DNA replication and transcription. Currently, biophysical and biochemical research studies are being performed to more fully elucidate the thermodynamic details of the denaturation bubble. Denaturation due to chemical agents With polymerase chain reaction (PCR) being among the most popular contexts in which DNA denaturation is desired, heating is the most frequent method of denaturation. Other than denaturation by heat, nucleic acids can undergo the denaturation process through various chemical agents such as formamide, guanidine, sodium salicylate, dimethyl sulfoxide (DMSO), propylene glycol, and urea. 
These chemical denaturing agents lower the melting temperature (Tm) by competing for hydrogen bond donors and acceptors with pre-existing nitrogenous base pairs. Some agents are even able to induce denaturation at room temperature. For example, alkaline agents (e.g. NaOH) have been shown to denature DNA by changing pH and removing hydrogen-bond contributing protons. These denaturants have been employed to make denaturing gradient gel electrophoresis (DGGE) gels, which promote denaturation of nucleic acids in order to eliminate the influence of nucleic acid shape on their electrophoretic mobility. Chemical denaturation as an alternative The optical activity (absorption and scattering of light) and hydrodynamic properties (translational diffusion, sedimentation coefficients, and rotational correlation times) of formamide-denatured nucleic acids are similar to those of heat-denatured nucleic acids. Therefore, depending on the desired effect, chemically denaturing DNA can provide a gentler procedure for denaturing nucleic acids than denaturation induced by heat. Studies comparing different denaturation methods such as heating, bead milling with different bead sizes, probe sonication, and chemical denaturation show that chemical denaturation can provide quicker denaturation compared to the other physical denaturation methods described. Particularly in cases where rapid renaturation is desired, chemical denaturation agents can provide an ideal alternative to heating. For example, DNA strands denatured with alkaline agents such as NaOH renature as soon as phosphate buffer is added. Denaturation due to air Small, electronegative molecules such as nitrogen and oxygen, which are the primary gases in air, significantly impact the ability of surrounding molecules to participate in hydrogen bonding. These molecules compete with surrounding hydrogen bond acceptors for hydrogen bond donors, therefore acting as "hydrogen bond breakers" and weakening interactions between surrounding molecules in the environment. Antiparallel strands in DNA double helices are non-covalently bound by hydrogen bonding between base pairs; nitrogen and oxygen therefore have the potential to weaken the integrity of DNA when exposed to air. As a result, DNA strands exposed to air require less force to separate and exhibit lower melting temperatures. Applications Many laboratory techniques rely on the ability of nucleic acid strands to separate. By understanding the properties of nucleic acid denaturation, the following methods were created: PCR Southern blot Northern blot DNA sequencing Denaturants Protein denaturants Acids Acidic protein denaturants include: Acetic acid Trichloroacetic acid 12% in water Sulfosalicylic acid Bases Bases work similarly to acids in denaturation. 
They include: Sodium bicarbonate Solvents Most organic solvents are denaturing, including: Ethanol Cross-linking reagents Cross-linking agents for proteins include: Formaldehyde Glutaraldehyde Chaotropic agents Chaotropic agents include: Urea 6–8 mol/L Guanidinium chloride 6 mol/L Lithium perchlorate 4.5 mol/L Sodium dodecyl sulfate Disulfide bond reducers Agents that break disulfide bonds by reduction include: 2-Mercaptoethanol Dithiothreitol TCEP (tris(2-carboxyethyl)phosphine) Chemically reactive agents Agents such as hydrogen peroxide, elemental chlorine, hypochlorous acid (chlorine water), bromine, bromine water, iodine, nitric and oxidising acids, and ozone react with sensitive moieties such as sulfide/thiol groups and activated aromatic rings (phenylalanine), in effect damaging the protein and rendering it useless. Other Mechanical agitation Picric acid Radiation Temperature Nucleic acid denaturants Chemical Acidic nucleic acid denaturants include: Acetic acid HCl Nitric acid Basic nucleic acid denaturants include: NaOH Other nucleic acid denaturants include: DMSO Formamide Guanidine Sodium salicylate Propylene glycol Urea Physical Thermal denaturation Beads mill Probe sonication Radiation See also Denatured alcohol Equilibrium unfolding Fixation (histology) Molten globule Protein folding Random coil References External links McGraw-Hill Online Learning Center — Animation: Protein Denaturation Biochemical reactions Nucleic acids Protein structure
Denaturation (biochemistry)
[ "Chemistry", "Biology" ]
3,092
[ "Biomolecules by chemical classification", "Biochemical reactions", "Structural biology", "Biochemistry", "Protein structure", "Nucleic acids" ]
8,463
https://en.wikipedia.org/wiki/Dubnium
Dubnium is a synthetic chemical element; it has symbol Db and atomic number 105. It is highly radioactive: the most stable known isotope, dubnium-268, has a half-life of about 16 hours. This greatly limits extended research on the element. Dubnium does not occur naturally on Earth and is produced artificially. The Soviet Joint Institute for Nuclear Research (JINR) claimed the first discovery of the element in 1968, followed by the American Lawrence Berkeley Laboratory in 1970. Both teams proposed their names for the new element and used them without formal approval. The long-standing dispute was resolved in 1993 by an official investigation of the discovery claims by the Transfermium Working Group, formed by the International Union of Pure and Applied Chemistry and the International Union of Pure and Applied Physics, resulting in credit for the discovery being officially shared between both teams. The element was formally named dubnium in 1997 after the town of Dubna, the site of the JINR. Theoretical research establishes dubnium as a member of group 5 in the 6d series of transition metals, placing it under vanadium, niobium, and tantalum. Dubnium should share most properties, such as its valence electron configuration and having a dominant +5 oxidation state, with the other group 5 elements, with a few anomalies due to relativistic effects. A limited investigation of dubnium chemistry has confirmed this. Introduction Discovery Background Uranium, element 92, is the heaviest element to occur in significant quantities in nature; heavier elements can only be practically produced by synthesis. The first synthesis of a new element—neptunium, element 93—was achieved in 1940 by a team of researchers in the United States. In the following years, American scientists synthesized the elements up to mendelevium, element 101, which was synthesized in 1955. From element 102, the priority of discoveries was contested between American and Soviet physicists. Their rivalry resulted in a race for new elements and credit for their discoveries, later named the Transfermium Wars. Reports The first report of the discovery of element 105 came from the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Soviet Union, in April 1968. The scientists bombarded 243Am with a beam of 22Ne ions, and reported 9.4 MeV (with a half-life of 0.1–3 seconds) and 9.7 MeV (t1/2 > 0.05 s) alpha activities followed by alpha activities similar to those of either 256103 or 257103. Based on prior theoretical predictions, the two activity lines were assigned to 261105 and 260105, respectively. 243Am + 22Ne → 265−x105 + x n (x = 4, 5) After observing the alpha decays of element 105, the researchers aimed to observe spontaneous fission (SF) of the element and study the resulting fission fragments. They published a paper in February 1970, reporting multiple examples of two such activities, with half-lives of 14 ms and 2.2 s. They assigned the former activity to 242mfAm and ascribed the latter activity to an isotope of element 105. They suggested that it was unlikely that this activity could come from a transfer reaction instead of element 105, because the yield ratio for this reaction was significantly lower than that of the 242mfAm-producing transfer reaction, in accordance with theoretical predictions. 
To establish that this activity was not from a (22Ne,xn) reaction, the researchers bombarded a 243Am target with 18O ions; reactions producing 256103 and 257103 showed very little SF activity (matching the established data), and the reaction producing heavier 258103 and 259103 produced no SF activity at all, in line with theoretical data. The researchers concluded that the activities observed came from SF of element 105. In April 1970, a team at Lawrence Berkeley Laboratory (LBL), in Berkeley, California, United States, claimed to have synthesized element 105 by bombarding californium-249 with nitrogen-15 ions, with an alpha activity of 9.1 MeV. To ensure this activity was not from a different reaction, the team attempted other reactions: bombarding 249Cf with 14N, Pb with 15N, and Hg with 15N. They stated no such activity was found in those reactions. The characteristics of the daughter nuclei matched those of 256103, implying that the parent nuclei were of 260105. 249Cf + 15N → 260105 + 4 n These results did not confirm the JINR findings regarding the 9.4 MeV or 9.7 MeV alpha decay of 260105, leaving only 261105 as a possibly produced isotope. JINR then attempted another experiment to create element 105, published in a report in May 1970. They claimed that they had synthesized more nuclei of element 105 and that the experiment confirmed their previous work. According to the paper, the isotope produced by JINR was probably 261105, or possibly 260105. This report included an initial chemical examination: the thermal gradient version of the gas-chromatography method was applied to demonstrate that the chloride of what had formed from the SF activity nearly matched that of niobium pentachloride, rather than hafnium tetrachloride. The team identified a 2.2-second SF activity in a volatile chloride portraying eka-tantalum properties, and inferred that the source of the SF activity must have been element 105. In June 1970, JINR made improvements on their first experiment, using a purer target and reducing the intensity of transfer reactions by installing a collimator before the catcher. This time, they were able to find 9.1 MeV alpha activities with daughter isotopes identifiable as either 256103 or 257103, implying that the original isotope was either 260105 or 261105. Naming controversy JINR did not propose a name after their first report claiming synthesis of element 105, which would have been the usual practice. This led LBL to believe that JINR did not have enough experimental data to back their claim. After collecting more data, JINR proposed the name bohrium (Bo) in honor of the Danish nuclear physicist Niels Bohr, a founder of the theories of atomic structure and quantum theory; they soon changed their proposal to nielsbohrium (Ns) to avoid confusion with boron. Another proposed name was dubnium. When LBL first announced their synthesis of element 105, they proposed that the new element be named hahnium (Ha) after the German chemist Otto Hahn, the "father of nuclear chemistry", thus creating an element naming controversy. In the early 1970s, both teams reported synthesis of the next element, element 106, but did not suggest names. JINR suggested establishing an international committee to clarify the discovery criteria. This proposal was accepted in 1974 and a neutral joint group formed. 
Neither team showed interest in resolving the conflict through a third party, so the leading scientists of LBL—Albert Ghiorso and Glenn Seaborg—traveled to Dubna in 1975 and met with the leading scientists of JINR—Georgy Flerov, Yuri Oganessian, and others—to try to resolve the conflict internally and render the neutral joint group unnecessary; after two hours of discussions, this failed. The joint neutral group never assembled to assess the claims, and the conflict remained unresolved. In 1979, IUPAC suggested systematic element names to be used as placeholders until permanent names were established; under it, element 105 would be unnilpentium, from the Latin roots un- and nil- and the Greek root pent- (meaning "one", "zero", and "five", respectively, the digits of the atomic number). Both teams ignored it as they did not wish to weaken their outstanding claims. In 1981, the Gesellschaft für Schwerionenforschung (GSI; Society for Heavy Ion Research) in Darmstadt, Hesse, West Germany, claimed synthesis of element 107; their report came out five years after the first report from JINR but with greater precision, making a more solid claim on discovery. GSI acknowledged JINR's efforts by suggesting the name nielsbohrium for the new element. JINR did not suggest a new name for element 105, stating it was more important to determine its discoverers first. In 1985, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) formed a Transfermium Working Group (TWG) to assess discoveries and establish final names for the controversial elements. The party held meetings with delegates from the three competing institutes; in 1990, they established criteria on recognition of an element, and in 1991, they finished the work on assessing discoveries and disbanded. These results were published in 1993. According to the report, the first definitely successful experiment was the April 1970 LBL experiment, closely followed by the June 1970 JINR experiment, so credit for the discovery of the element should be shared between the two teams. LBL said that the input from JINR was overrated in the review. They claimed JINR was only able to unambiguously demonstrate the synthesis of element 105 a year after they did. JINR and GSI endorsed the report. In 1994, IUPAC published a recommendation on naming the disputed elements. For element 105, they proposed joliotium (Jl) after the French physicist Frédéric Joliot-Curie, a contributor to the development of nuclear physics and chemistry; this name was originally proposed by the Soviet team for element 102, which by then had long been called nobelium. This recommendation was criticized by the American scientists for several reasons. Firstly, their suggestions were scrambled: the names rutherfordium and hahnium, originally suggested by Berkeley for elements 104 and 105, were respectively reassigned to elements 106 and 108. Secondly, elements 104 and 105 were given names favored by JINR, despite earlier recognition of LBL as an equal co-discoverer for both of them. Thirdly and most importantly, IUPAC rejected the name seaborgium for element 106, having just approved a rule that an element could not be named after a living person, even though the 1993 report had given the LBL team the sole credit for its discovery. In 1995, IUPAC abandoned the controversial rule and established a committee of national representatives aimed at finding a compromise. 
They suggested seaborgium for element 106 in exchange for the removal of all the other American proposals, except for the established name lawrencium for element 103. The equally entrenched name nobelium for element 102 was replaced by flerovium after Georgy Flerov, following the recognition by the 1993 report that that element had been first synthesized in Dubna. This was rejected by American scientists and the decision was retracted. The name flerovium was later used for element 114. In 1996, IUPAC held another meeting, reconsidered all names in hand, and accepted another set of recommendations; it was approved and published in 1997. Element 105 was named dubnium (Db), after Dubna in Russia, the location of the JINR; the American suggestions were used for elements 102, 103, 104, and 106. The name dubnium had been used for element 104 in the previous IUPAC recommendation. The American scientists "reluctantly" approved this decision. IUPAC pointed out that the Berkeley laboratory had already been recognized several times, in the naming of berkelium, californium, and americium, and that the acceptance of the names rutherfordium and seaborgium for elements 104 and 106 should be offset by recognizing JINR's contributions to the discovery of elements 104, 105, and 106. Even after 1997, LBL still sometimes used the name hahnium for element 105 in their own material, doing so as recently as 2014. However, the problem was resolved in the literature as Jens Volker Kratz, editor of Radiochimica Acta, refused to accept papers not using the 1997 IUPAC nomenclature. Isotopes Dubnium, having an atomic number of 105, is a superheavy element; like all elements with such high atomic numbers, it is very unstable. The longest-lasting known isotope of dubnium, 268Db, has a half-life of around a day. No stable isotopes have been seen, and a 2012 calculation by JINR suggested that the half-lives of all dubnium isotopes would not significantly exceed a day. Dubnium can only be obtained by artificial production. The short half-life of dubnium limits experimentation. This is exacerbated by the fact that the most stable isotopes are the hardest to synthesize. Elements with a lower atomic number have stable isotopes with a lower neutron–proton ratio than those with higher atomic number, meaning that the target and beam nuclei that could be employed to create the superheavy element have fewer neutrons than needed to form these most stable isotopes. (Different techniques based on rapid neutron capture and transfer reactions are being considered as of the 2010s, but those based on the collision of a large and small nucleus still dominate research in the area.) Only a few atoms of 268Db can be produced in each experiment, and thus the measured lifetimes vary significantly during the process. As of 2022, following additional experiments performed at the JINR's Superheavy Element Factory (which started operations in 2019), the half-life of 268Db is measured to be hours. The second most stable isotope, 270Db, has been produced in even smaller quantities: three atoms in total, with lifetimes of 33.4 h, 1.3 h, and 1.6 h. These two are the heaviest isotopes of dubnium to date, and both were produced as a result of decay of the heavier nuclei 288Mc and 294Ts rather than directly, because the experiments that yielded them were originally designed in Dubna for 48Ca beams. 
For its mass, 48Ca has by far the greatest neutron excess of all practically stable nuclei, both quantitative and relative, which correspondingly helps synthesize superheavy nuclei with more neutrons, but this gain is compensated by the decreased likelihood of fusion for high atomic numbers. Predicted properties According to the periodic law, dubnium should belong to group 5, with vanadium, niobium, and tantalum. Several studies have investigated the properties of element 105 and found that they generally agreed with the predictions of the periodic law. Significant deviations may nevertheless occur, due to relativistic effects, which dramatically change physical properties on both atomic and macroscopic scales. These properties have remained challenging to measure for several reasons: the difficulties of production of superheavy atoms, the low rates of production, which only allows for microscopic scales, requirements for a radiochemistry laboratory to test the atoms, short half-lives of those atoms, and the presence of many unwanted activities apart from those of synthesis of superheavy atoms. So far, studies have only been performed on single atoms. Atomic and physical A direct relativistic effect is that as the atomic numbers of elements increase, the innermost electrons begin to revolve faster around the nucleus as a result of an increase of electromagnetic attraction between an electron and a nucleus. Similar effects have been found for the outermost s orbitals (and p1/2 ones, though in dubnium they are not occupied): for example, the 7s orbital contracts by 25% in size and is stabilized by 2.6 eV. A more indirect effect is that the contracted s and p1/2 orbitals shield the charge of the nucleus more effectively, leaving less for the outer d and f electrons, which therefore move in larger orbitals. Dubnium is greatly affected by this: unlike the previous group 5 members, its 7s electrons are slightly more difficult to extract than its 6d electrons. Another effect is the spin–orbit interaction, particularly spin–orbit splitting, which splits the 6d subshell—the azimuthal quantum number ℓ of a d shell is 2—into two subshells, with four of the ten orbitals having their ℓ lowered to 3/2 and six raised to 5/2. All ten energy levels are raised; four of them are lower than the other six. (The three 6d electrons normally occupy the lowest energy levels, 6d3/2.) A singly ionized atom of dubnium (Db+) should lose a 6d electron compared to a neutral atom; the doubly (Db2+) or triply (Db3+) ionized atoms of dubnium should eliminate 7s electrons, unlike its lighter homologs. Despite the changes, dubnium is still expected to have five valence electrons. As the 6d orbitals of dubnium are more destabilized than the 5d ones of tantalum, and Db3+ is expected to have two 6d, rather than 7s, electrons remaining, the resulting +3 oxidation state is expected to be unstable and even rarer than that of tantalum. The ionization potential of dubnium in its maximum +5 oxidation state should be slightly lower than that of tantalum and the ionic radius of dubnium should increase compared to tantalum; this has a significant effect on dubnium's chemistry. Atoms of dubnium in the solid state should arrange themselves in a body-centered cubic configuration, like the previous group 5 elements. The predicted density of dubnium is 21.6 g/cm3. Chemical Computational chemistry is simplest in gas-phase chemistry, in which interactions between molecules may be ignored as negligible. 
Multiple authors have researched dubnium pentachloride; calculations show it to be consistent with the periodic laws by exhibiting the properties of a compound of a group 5 element. For example, the molecular orbital levels indicate that dubnium uses three 6d electron levels as expected. Compared to its tantalum analog, dubnium pentachloride is expected to show increased covalent character: a decrease in the effective charge on an atom and an increase in the overlap population (between orbitals of dubnium and chlorine). Calculations of solution chemistry indicate that the maximum oxidation state of dubnium, +5, will be more stable than those of niobium and tantalum and the +3 and +4 states will be less stable. The tendency towards hydrolysis of cations with the highest oxidation state should continue to decrease within group 5 but is still expected to be quite rapid. Complexation of dubnium is expected to follow group 5 trends in its richness. Calculations for hydroxo-chlorido- complexes have shown a reversal in the trends of complex formation and extraction of group 5 elements, with dubnium being more prone to do so than tantalum. Experimental chemistry Experimental results of the chemistry of dubnium date back to 1974 and 1976. JINR researchers used a thermochromatographic system and concluded that the volatility of dubnium bromide was less than that of niobium bromide and about the same as that of hafnium bromide. It is not certain that the detected fission products confirmed that the parent was indeed element 105. These results may imply that dubnium behaves more like hafnium than niobium. The next studies on the chemistry of dubnium were conducted in 1988, in Berkeley. They examined whether the most stable oxidation state of dubnium in aqueous solution was +5. Dubnium was fumed twice and washed with concentrated nitric acid; sorption of dubnium on glass cover slips was then compared with that of the group 5 elements niobium and tantalum and the group 4 elements zirconium and hafnium produced under similar conditions. The group 5 elements are known to sorb on glass surfaces; the group 4 elements do not. Dubnium was confirmed as a group 5 member. Surprisingly, the behavior on extraction from mixed nitric and hydrofluoric acid solution into methyl isobutyl ketone differed between dubnium, tantalum, and niobium. Dubnium did not extract and its behavior resembled niobium more closely than tantalum, indicating that complexing behavior could not be predicted purely from simple extrapolations of trends within a group in the periodic table. This prompted further exploration of the chemical behavior of complexes of dubnium. Various labs jointly conducted thousands of repetitive chromatographic experiments between 1988 and 1993. All group 5 elements and protactinium were extracted from concentrated hydrochloric acid; after mixing with lower concentrations of hydrogen chloride, small amounts of hydrogen fluoride were added to start selective re-extraction. Dubnium showed behavior different from that of tantalum but similar to that of niobium and its pseudohomolog protactinium at concentrations of hydrogen chloride below 12 moles per liter. This similarity to the two elements suggested that the formed complex was either or . 
After extraction experiments of dubnium from hydrogen bromide into diisobutyl carbinol (2,6-dimethylheptan-4-ol), a specific extractant for protactinium, with subsequent elutions with the hydrogen chloride/hydrogen fluoride mix as well as hydrogen chloride, dubnium was found to be less prone to extraction than either protactinium or niobium. This was explained as an increasing tendency to form non‐extractable complexes of multiple negative charges. Further experiments in 1992 confirmed the stability of the +5 state: Db(V) was shown to be extractable from cation‐exchange columns with α‐hydroxyisobutyrate, like the group 5 elements and protactinium; Db(III) and Db(IV) were not. In 1998 and 1999, new predictions suggested that dubnium would extract nearly as well as niobium and better than tantalum from halide solutions, which was later confirmed. The first isothermal gas chromatography experiments were performed in 1992 with 262Db (half-life 35 seconds). The volatilities for niobium and tantalum were similar within error limits, but dubnium appeared to be significantly less volatile. It was postulated that traces of oxygen in the system might have led to formation of , which was predicted to be less volatile than . Later experiments in 1996 showed that group 5 chlorides were more volatile than the corresponding bromides, with the exception of tantalum, presumably due to formation of . Later volatility studies of chlorides of dubnium and niobium as a function of controlled partial pressures of oxygen showed that formation of oxychlorides and general volatility are dependent on concentrations of oxygen. The oxychlorides were shown to be less volatile than the chlorides. In 2004–05, researchers from Dubna and Livermore identified a new dubnium isotope, 268Db, as a fivefold alpha decay product of the newly created element 115. This new isotope proved to be long-lived enough to allow further chemical experimentation, with a half-life of over a day. In the 2004 experiment, a thin layer with dubnium was removed from the surface of the target and dissolved in aqua regia with tracers and a lanthanum carrier, from which various +3, +4, and +5 species were precipitated on adding ammonium hydroxide. The precipitate was washed and dissolved in hydrochloric acid, where it converted to nitrate form and was then dried on a film and counted. Mostly containing a +5 species, which was immediately assigned to dubnium, it also had a +4 species; based on that result, the team decided that additional chemical separation was needed. In 2005, the experiment was repeated, with the final product being hydroxide rather than nitrate precipitate, which was processed further in both Livermore (based on reverse phase chromatography) and Dubna (based on anion exchange chromatography). The +5 species was effectively isolated; dubnium appeared three times in tantalum-only fractions and never in niobium-only fractions. It was noted that these experiments were insufficient to draw conclusions about the general chemical profile of dubnium. In 2009, at the JAEA tandem accelerator in Japan, dubnium was processed in nitric and hydrofluoric acid solution, at concentrations where niobium forms and tantalum forms . Dubnium's behavior was close to that of niobium but not tantalum; it was thus deduced that dubnium formed . From the available information, it was concluded that dubnium often behaved like niobium, sometimes like protactinium, but rarely like tantalum. 
In 2021, the volatile heavy group 5 oxychlorides MOCl3 (M = Nb, Ta, Db) were experimentally studied at the JAEA tandem accelerator. The trend in volatilities was found to be NbOCl3 > TaOCl3 ≥ DbOCl3, so that dubnium behaves in line with periodic trends. Notes References Bibliography Chemical elements Transition metals Synthetic elements Chemical elements with body-centered cubic structure
Dubnium
[ "Physics", "Chemistry" ]
5,176
[ "Matter", "Chemical elements", "Synthetic materials", "Synthetic elements", "Atoms", "Radioactivity" ]
8,524
https://en.wikipedia.org/wiki/Deuterium
Deuterium (hydrogen-2, symbol H or D, also known as heavy hydrogen) is one of two stable isotopes of hydrogen; the other is protium, or hydrogen-1, H. The deuterium nucleus (deuteron) contains one proton and one neutron, whereas the far more common H has no neutrons. Deuterium has a natural abundance in Earth's oceans of about one atom of deuterium in every 6,420 atoms of hydrogen. Thus, deuterium accounts for about 0.0156% by number (0.0312% by mass) of all hydrogen in the ocean: tonnes of deuterium – mainly as HOD (or HOH or HHO) and only rarely as DO (or HO) (deuterium oxide, also known as heavy water) – in tonnes of water. The abundance of H changes slightly from one kind of natural water to another (see Vienna Standard Mean Ocean Water). The name deuterium comes from Greek deuteros, meaning "second". American chemist Harold Urey discovered deuterium in 1931. Urey and others produced samples of heavy water in which the H had been highly concentrated. The discovery of deuterium won Urey a Nobel Prize in 1934. Deuterium is destroyed in the interiors of stars faster than it is produced. Other natural processes are thought to produce only an insignificant amount of deuterium. Nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, as the basic or primordial ratio of H to H (≈26 atoms of deuterium per million hydrogen atoms) has its origin from that time. This is the ratio found in the gas giant planets, such as Jupiter. The analysis of deuterium–protium ratios (HHR) in comets found results very similar to the mean ratio in Earth's oceans (156 atoms of deuterium per million hydrogen atoms). This reinforces theories that much of Earth's ocean water is of cometary origin. The HHR of comet 67P/Churyumov–Gerasimenko, as measured by the Rosetta space probe, is about three times that of Earth water. This figure is the highest yet measured in a comet. HHRs thus continue to be an active topic of research in both astronomy and climatology. Differences from common hydrogen (protium) Chemical symbol Deuterium is often represented by the chemical symbol D. Since it is an isotope of hydrogen with mass number 2, it is also represented by H. IUPAC allows both D and H, though H is preferred. A distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. Also, its large mass difference with protium (H) confers non-negligible chemical differences with H compounds. Deuterium has a mass of 2.014 Da, about twice the mean hydrogen atomic weight of 1.008 Da, or twice protium's mass of 1.00783 Da. The isotope weight ratios within other elements are largely insignificant in this regard. Spectroscopy In quantum mechanics, the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus. For a hydrogen atom, the role of reduced mass is most simply seen in the Bohr model of the atom, where the reduced mass appears in a simple calculation of the Rydberg constant and Rydberg equation, but the reduced mass also appears in the Schrödinger equation, and the Dirac equation for calculating atomic energy levels. The reduced mass of the system in these equations is close to the mass of a single electron, but differs from it by a small amount about equal to the ratio of mass of the electron to the nucleus. For H, this amount is about 1837/1836, or 1.000545, and for H it is even smaller: about 3671/3670, or 1.0002725. The energies of electronic spectral lines for H and H therefore differ by the ratio of these two numbers, which is 1.000272. 
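The reduced-mass ratio quoted above, and the resulting shift of the spectral lines described in the next paragraph, can be checked numerically. The following is a minimal sketch, not part of the original article; the particle masses are standard reference values recalled from memory and should be treated as approximate.

```python
# Reduced-mass correction to hydrogen-like energy levels (see discussion above).
# Masses in unified atomic mass units; standard reference values, quoted from
# memory rather than from this article.
m_e = 5.48580e-4   # electron
m_p = 1.0072765    # proton (1H nucleus)
m_d = 2.0135532    # deuteron (2H nucleus)

def reduced_mass(m_nucleus: float) -> float:
    """Reduced mass of an electron bound to a nucleus of mass m_nucleus."""
    return m_e * m_nucleus / (m_e + m_nucleus)

# Hydrogen-like energy levels scale with the reduced mass, so H and D lines
# differ by the ratio of the two reduced masses.
ratio = reduced_mass(m_d) / reduced_mass(m_p)
print(f"mu(2H)/mu(1H)       = {ratio:.6f}")                       # ~1.000272
print(f"fractional shift    = {ratio - 1:.4%}")                   # ~0.0272%
print(f"equivalent velocity = {(ratio - 1) * 2.998e5:.1f} km/s")  # ~81.5 km/s
```

The resulting ~0.027% fractional shift matches the wavelength difference and Doppler-equivalent velocity discussed in the following paragraph.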
The wavelengths of all deuterium spectroscopic lines are shorter than the corresponding lines of light hydrogen, by 0.0272%. In astronomical observation, this corresponds to a blue Doppler shift of 0.0272% of the speed of light, or 81.6 km/s. The differences are much more pronounced in vibrational spectroscopy such as infrared spectroscopy and Raman spectroscopy, and in rotational spectra such as microwave spectroscopy because the reduced mass of the deuterium is markedly higher than that of protium. In nuclear magnetic resonance spectroscopy, deuterium has a very different NMR frequency (e.g. 61 MHz when protium is at 400 MHz) and is much less sensitive. Deuterated solvents are usually used in protium NMR to prevent the solvent from overlapping with the signal, though deuterium NMR on its own right is also possible. Big Bang nucleosynthesis Deuterium is thought to have played an important role in setting the number and ratios of the elements that were formed in the Big Bang. Combining thermodynamics and the changes brought about by cosmic expansion, one can calculate the fraction of protons and neutrons based on the temperature at the point that the universe cooled enough to allow formation of nuclei. This calculation indicates seven protons for every neutron at the beginning of nucleogenesis, a ratio that would remain stable even after nucleogenesis was over. This fraction was in favor of protons initially, primarily because the lower mass of the proton favored their production. As the Universe expanded, it cooled. Free neutrons and protons are less stable than helium nuclei, and the protons and neutrons had a strong energetic reason to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium. Through much of the few minutes after the Big Bang during which nucleosynthesis could have occurred, the temperature was high enough that the mean energy per particle was greater than the binding energy of weakly bound deuterium; therefore, any deuterium that was formed was immediately destroyed. This situation is known as the deuterium bottleneck. The bottleneck delayed formation of any helium-4 until the Universe became cool enough to form deuterium (at about a temperature equivalent to 100 keV). At this point, there was a sudden burst of element formation (first deuterium, which immediately fused into helium). However, very soon thereafter, at twenty minutes after the Big Bang, the Universe became too cool for any further nuclear fusion or nucleosynthesis. At this point, the elemental abundances were nearly fixed, with the only change as some of the radioactive products of Big Bang nucleosynthesis (such as tritium) decay. The deuterium bottleneck in the formation of helium, together with the lack of stable ways for helium to combine with hydrogen or with itself (no stable nucleus has a mass number of 5 or 8) meant that an insignificant amount of carbon, or any elements heavier than carbon, formed in the Big Bang. These elements thus required formation in stars. At the same time, the failure of much nucleogenesis during the Big Bang ensured that there would be plenty of hydrogen in the later universe available to form long-lived stars, such as the Sun. Abundance Deuterium occurs in trace amounts naturally as deuterium gas (H or D), but most deuterium atoms in the Universe are bonded with H to form a gas called hydrogen deuteride (HD or HH). Similarly, natural water contains deuterated molecules, almost all as semiheavy water HDO with only one deuterium. 
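As a side note on the seven-protons-per-neutron figure quoted above: under the standard assumption that essentially all surviving neutrons ended up bound in helium-4, that ratio fixes the primordial helium mass fraction. This estimate is not made in the article itself; the sketch below only illustrates the arithmetic.

```python
# Primordial helium-4 mass fraction implied by a 7:1 proton-to-neutron ratio,
# assuming essentially every surviving neutron is locked into a 4He nucleus.
n_over_p = 1 / 7

# Each 4He nucleus binds 2 neutrons and 2 protons, so by mass:
# Y = 2 * (n/p) / (1 + n/p)
Y = 2 * n_over_p / (1 + n_over_p)
print(f"helium-4 mass fraction ~ {Y:.2f}")   # ~0.25
```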
The existence of deuterium on Earth, elsewhere in the Solar System (as confirmed by planetary probes), and in the spectra of stars, is also an important datum in cosmology. Gamma radiation from ordinary nuclear fusion dissociates deuterium into protons and neutrons, and there is no known natural process other than Big Bang nucleosynthesis that might have produced deuterium at anything close to its observed natural abundance. Deuterium is produced by the rare cluster decay, and occasional absorption of naturally occurring neutrons by light hydrogen, but these are trivial sources. There is thought to be little deuterium in the interior of the Sun and other stars, as at these temperatures the nuclear fusion reactions that consume deuterium happen much faster than the proton–proton reaction that creates deuterium. However, deuterium persists in the outer solar atmosphere at roughly the same concentration as in Jupiter, and this has probably been unchanged since the origin of the Solar System. The natural abundance of H seems to be a very similar fraction of hydrogen, wherever hydrogen is found, unless there are obvious processes at work that concentrate it. The existence of deuterium at a low but constant primordial fraction in all hydrogen is another one of the arguments in favor of the Big Bang over the Steady State theory of the Universe. The observed ratios of hydrogen to helium to deuterium in the universe are difficult to explain except with a Big Bang model. It is estimated that the abundances of deuterium have not evolved significantly since their production about 13.8 billion years ago. Measurements of Milky Way galactic deuterium from ultraviolet spectral analysis show a ratio of as much as 23 atoms of deuterium per million hydrogen atoms in undisturbed gas clouds, which is only 15% below the WMAP estimated primordial ratio of about 27 atoms per million from the Big Bang. This has been interpreted to mean that less deuterium has been destroyed in star formation in the Milky Way galaxy than expected, or perhaps deuterium has been replenished by a large in-fall of primordial hydrogen from outside the galaxy. In space a few hundred light years from the Sun, deuterium abundance is only 15 atoms per million, but this value is presumably influenced by differential adsorption of deuterium onto carbon dust grains in interstellar space. The abundance of deuterium in Jupiter's atmosphere has been directly measured by the Galileo space probe as 26 atoms per million hydrogen atoms. ISO-SWS observations find 22 atoms per million hydrogen atoms in Jupiter. and this abundance is thought to represent close to the primordial Solar System ratio. This is about 17% of the terrestrial ratio of 156 deuterium atoms per million hydrogen atoms. Comets such as Comet Hale-Bopp and Halley's Comet have been measured to contain more deuterium (about 200 atoms per million hydrogens), ratios which are enriched with respect to the presumed protosolar nebula ratio, probably due to heating, and which are similar to the ratios found in Earth seawater. The recent measurement of deuterium amounts of 161 atoms per million hydrogen in Comet 103P/Hartley (a former Kuiper belt object), a ratio almost exactly that in Earth's oceans (155.76 ± 0.1, but in fact from 153 to 156 ppm), emphasizes the theory that Earth's surface water may be largely from comets. Most recently the HHR of 67P/Churyumov–Gerasimenko as measured by Rosetta is about three times that of Earth water. 
This has caused renewed interest in suggestions that Earth's water may be partly of asteroidal origin. Deuterium has also been observed to be concentrated over the mean solar abundance in other terrestrial planets, in particular Mars and Venus. Production Deuterium is produced for industrial, scientific and military purposes, by starting with ordinary water—a small fraction of which is naturally occurring heavy water—and then separating out the heavy water by the Girdler sulfide process, distillation, or other methods. In theory, deuterium for heavy water could be created in a nuclear reactor, but separation from ordinary water is the cheapest bulk production process. The world's leading supplier of deuterium was Atomic Energy of Canada Limited until 1997, when the last heavy water plant was shut down. Canada uses heavy water as a neutron moderator for the operation of the CANDU reactor design. Another major producer of heavy water is India. All but one of India's atomic energy plants are pressurized heavy water plants, which use natural (i.e., not enriched) uranium. India has eight heavy water plants, of which seven are in operation. Six plants, of which five are in operation, are based on D–H exchange in ammonia gas. The other two plants extract deuterium from natural water in a process that uses hydrogen sulfide gas at high pressure. While India is self-sufficient in heavy water for its own use, India also exports reactor-grade heavy water. Properties Data for molecular deuterium Formula: H or D Density: 0.180 kg/m3 at STP (0 °C, 101325 Pa). Atomic weight: 2.0141017926 Da. Mean abundance in ocean water (from VSMOW) 155.76 ± 0.1 atoms of deuterium per million atoms of all isotopes of hydrogen (about 1 atom of deuterium in 6420); that is, about 0.015% of all atoms of hydrogen (any isotope) Data at about 18 K for H (triple point): Density: Liquid: 162.4 kg/m3 Gas: 0.452 kg/m3 Liquefied HO: 1105.2 kg/m3 at STP Viscosity: 12.6 μPa·s at 300 K (gas phase) Specific heat capacity at constant pressure cp: Solid: 2950 J/(kg·K) Gas: 5200 J/(kg·K) Physical properties Compared to hydrogen in its natural composition on Earth, pure deuterium (H) has a higher melting point (18.72 K vs. 13.99 K), a higher boiling point (23.64 vs. 20.27 K), a higher critical temperature (38.3 vs. 32.94 K) and a higher critical pressure (1.6496 vs. 1.2858 MPa). The physical properties of deuterium compounds can exhibit significant kinetic isotope effects and other physical and chemical property differences from the protium analogs. Heavy water (HO), for example, is more viscous than normal water. There are differences in bond energy and length for compounds of heavy hydrogen isotopes compared to protium, which are larger than the isotopic differences in any other element. Bonds involving deuterium and tritium are somewhat stronger than the corresponding bonds in protium, and these differences are enough to cause significant changes in biological reactions. Pharmaceutical firms are interested in the fact that H is harder to remove from carbon than H. Deuterium can replace H in water molecules to form heavy water (HO), which is about 10.6% denser than normal water (so that ice made from it sinks in normal water). Heavy water is slightly toxic in eukaryotic animals, with 25% substitution of the body water causing cell division problems and sterility, and 50% substitution causing death by cytotoxic syndrome (bone marrow failure and gastrointestinal lining failure). 
Prokaryotic organisms, however, can survive and grow in pure heavy water, though they develop slowly. Despite this toxicity, consumption of heavy water under normal circumstances does not pose a health threat to humans. It is estimated that a person might drink of heavy water without serious consequences. Small doses of heavy water (a few grams in humans, containing an amount of deuterium comparable to that normally present in the body) are routinely used as harmless metabolic tracers in humans and animals. Quantum properties The deuteron has spin +1 ("triplet state") and is thus a boson. The NMR frequency of deuterium is significantly different from normal hydrogen. Infrared spectroscopy also easily differentiates many deuterated compounds, due to the large difference in IR absorption frequency seen in the vibration of a chemical bond containing deuterium, versus light hydrogen. The two stable isotopes of hydrogen can also be distinguished by using mass spectrometry. The triplet deuteron nucleon is barely bound at , and none of the higher energy states are bound. The singlet deuteron is a virtual state, with a negative binding energy of . There is no such stable particle, but this virtual particle transiently exists during neutron–proton inelastic scattering, accounting for the unusually large neutron scattering cross-section of the proton. Nuclear properties (deuteron) Deuteron mass and radius The deuterium nucleus is called a deuteron. It has a mass of (just over ). The charge radius of a deuteron is Like the proton radius, measurements using muonic deuterium produce a smaller result: . Spin and energy Deuterium is one of only five stable nuclides with an odd number of protons and an odd number of neutrons. (H, Li, B, N, Ta; the long-lived radionuclides K, V, La, Lu also occur naturally.) Most odd–odd nuclei are unstable to beta decay, because the decay products are even–even, and thus more strongly bound, due to nuclear pairing effects. Deuterium, however, benefits from having its proton and neutron coupled to a spin-1 state, which gives a stronger nuclear attraction; the corresponding spin-1 state does not exist in the two-neutron or two-proton system, due to the Pauli exclusion principle which would require one or the other identical particle with the same spin to have some other different quantum number, such as orbital angular momentum. But orbital angular momentum of either particle gives a lower binding energy for the system, mainly due to increasing distance of the particles in the steep gradient of the nuclear force. In both cases, this causes the diproton and dineutron to be unstable. The proton and neutron in deuterium can be dissociated through neutral current interactions with neutrinos. The cross section for this interaction is comparatively large, and deuterium was successfully used as a neutrino target in the Sudbury Neutrino Observatory experiment. Diatomic deuterium (H) has ortho and para nuclear spin isomers like diatomic hydrogen, but with differences in the number and population of spin states and rotational levels, which occur because the deuteron is a boson with nuclear spin equal to one. Isospin singlet state of the deuteron Due to the similarity in mass and nuclear properties between the proton and neutron, they are sometimes considered as two symmetric types of the same object, a nucleon. While only the proton has electric charge, this is often negligible due to the weakness of the electromagnetic interaction relative to the strong nuclear interaction. 
The symmetry relating the proton and neutron is known as isospin and denoted I (or sometimes T). Isospin is an SU(2) symmetry, like ordinary spin, so is completely analogous to it. The proton and neutron, each of which have isospin-1/2, form an isospin doublet (analogous to a spin doublet), with a "down" state (↓) being a neutron and an "up" state (↑) being a proton. A pair of nucleons can either be in an antisymmetric state of isospin called singlet, or in a symmetric state called triplet. In terms of the "down" state and "up" state, the singlet is (↑↓ − ↓↑)/√2, which can also be written (pn − np)/√2: This is a nucleus with one proton and one neutron, i.e. a deuterium nucleus. The triplet is (↑↑, (↑↓ + ↓↑)/√2, ↓↓) and thus consists of three types of nuclei, which are supposed to be symmetric: a deuterium nucleus (actually a highly excited state of it), a nucleus with two protons, and a nucleus with two neutrons. These states are not stable. Approximated wavefunction of the deuteron The deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, the wavefunction need not be antisymmetric in general). Apart from their isospin, the two nucleons also have spin and spatial distributions of their wavefunction. The latter is symmetric if the deuteron is symmetric under parity (i.e. has an "even" or "positive" parity), and antisymmetric if the deuteron is antisymmetric under parity (i.e. has an "odd" or "negative" parity). The parity is fully determined by the total orbital angular momentum of the two nucleons: if it is even then the parity is even (positive), and if it is odd then the parity is odd (negative). The deuteron, being an isospin singlet, is antisymmetric under nucleon exchange due to isospin, and therefore must be symmetric under the double exchange of their spin and location. Therefore, it can be in either of the following two different states: Symmetric spin and symmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (+1) from spin exchange and (+1) from parity (location exchange), for a total of (−1) as needed for antisymmetry. Antisymmetric spin and antisymmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (−1) from spin exchange and (−1) from parity (location exchange), again for a total of (−1) as needed for antisymmetry. In the first case the deuteron is a spin triplet, so that its total spin s is 1. It also has an even parity and therefore even orbital angular momentum l. The lower its orbital angular momentum, the lower its energy. Therefore, the lowest possible energy state has s = 1, l = 0. In the second case the deuteron is a spin singlet, so that its total spin s is 0. It also has an odd parity and therefore odd orbital angular momentum l. Therefore, the lowest possible energy state has s = 0, l = 1. Since s = 1 gives a stronger nuclear attraction, the deuterium ground state is in the s = 1, l = 0 state. The same considerations lead to the possible states of an isospin triplet having s = 0, l = 0 or s = 1, l = 1. Thus, the state of lowest energy has s = 0, l = 0, higher than that of the isospin singlet. The analysis just given is in fact only approximate, both because isospin is not an exact symmetry, and more importantly because the strong nuclear interaction between the two nucleons is related to angular momentum in spin–orbit interaction that mixes different s and l states. 
That is, s and l are not constant in time (they do not commute with the Hamiltonian), and over time a state such as s = 1, l = 0 may become a state of s = 1, l = 2. Parity is still constant in time, so these do not mix with odd l states (such as s = 0, l = 1). Therefore, the quantum state of the deuterium is a superposition (a linear combination) of the s = 1, l = 0 state and the s = 1, l = 2 state, even though the first component is much bigger. Since the total angular momentum j is also a good quantum number (it is a constant in time), both components must have the same j, and therefore j = 1. This is the total spin of the deuterium nucleus. To summarize, the deuterium nucleus is antisymmetric in terms of isospin, and has spin 1 and even (+1) parity. The relative angular momentum of its nucleons l is not well defined, and the deuteron is a superposition of mostly l = 0 with some l = 2. Magnetic and electric multipoles In order to find theoretically the deuterium magnetic dipole moment μ, one uses the formula for a nuclear magnetic moment built from the single-nucleon operator μ = g(l)l + g(s)s, where g(l) and g(s) are the orbital and spin g-factors of the nucleons. Since the proton and neutron have different values for g(l) and g(s), one must separate their contributions. Each gets half of the deuterium orbital angular momentum l and spin s. One arrives at an expression containing the averaged g-factors (g(l)p + g(l)n)/2 and (g(s)p + g(s)n)/2, where subscripts p and n stand for the proton and neutron, and g(l)n = 0 because the neutron is uncharged. By using the same angular momentum identities as above and using the value g(l)p = 1, one gets the following results, in units of the nuclear magneton μN. For the l = 0, s = 1 state (j = 1), we obtain μ = (g(s)p + g(s)n)/2 ≈ 0.880 μN. For the l = 2, s = 1 state (j = 1), we obtain μ ≈ 0.310 μN. The measured value of the deuterium magnetic dipole moment is 0.857 μN, which is 97.5% of the value obtained by simply adding moments of the proton and neutron. This suggests that the state of the deuterium is indeed to a good approximation the s = 1, l = 0 state, which occurs with both nucleons spinning in the same direction, but their magnetic moments subtracting because of the neutron's negative moment. But the slightly lower experimental number than that which results from simple addition of proton and (negative) neutron moments shows that deuterium is actually a linear combination of mostly the s = 1, l = 0 state with a slight admixture of the s = 1, l = 2 state. The electric dipole is zero as usual. The measured electric quadrupole of the deuterium is 0.286 e·fm2. While the order of magnitude is reasonable, since the deuteron radius is of order of 1 femtometer (see below) and its electric charge is e, the above model does not suffice for its computation. More specifically, the electric quadrupole does not get a contribution from the l = 0 state (which is the dominant one) and does get a contribution from a term mixing the l = 0 and the l = 2 states, because the electric quadrupole operator does not commute with angular momentum. The latter contribution is dominant in the absence of a pure l = 2 contribution, but cannot be calculated without knowing the exact spatial form of the nucleons' wavefunction inside the deuterium. Higher magnetic and electric multipole moments cannot be calculated by the above model, for similar reasons. Applications Nuclear reactors Deuterium is used in heavy water moderated fission reactors, usually as liquid HO, to slow neutrons without the high neutron absorption of ordinary hydrogen. This is a common commercial use for larger amounts of deuterium. In research reactors, liquid H is used in cold sources to moderate neutrons to very low energies and wavelengths appropriate for scattering experiments. 
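Returning to the magnetic dipole moment discussed above, the comparison between the measured deuteron moment and the simple sum of free-nucleon moments can be reproduced in a few lines. This is only an illustrative sketch, not taken from the article; the moment values (in nuclear magnetons) are standard reference constants recalled from memory and are approximate.

```python
# Deuteron magnetic moment: simple l = 0, s = 1 estimate vs. measurement.
# Values are in nuclear magnetons (standard reference values, approximate).
mu_p = +2.7928          # free proton
mu_n = -1.9130          # free neutron
mu_d_measured = 0.8574  # deuteron

mu_d_s_state = mu_p + mu_n   # expectation for a pure S-state (l = 0) deuteron
print(f"simple sum : {mu_d_s_state:.4f}")                  # ~0.880
print(f"measured   : {mu_d_measured:.4f}")
print(f"ratio      : {mu_d_measured / mu_d_s_state:.3f}")  # ~0.975, i.e. ~97.5%
```

The few-percent shortfall is what the text above attributes to the small l = 2 admixture in the ground state.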
Experimentally, deuterium is the most common nuclide used in fusion reactor designs, especially in combination with tritium, because of the large reaction rate (or nuclear cross section) and high energy yield of the deuterium–tritium (DT) reaction. There is an even higher-yield H–He fusion reaction, though the breakeven point of H–He is higher than that of most other fusion reactions; together with the scarcity of He, this makes it implausible as a practical power source, at least until DT and deuterium–deuterium (DD) fusion have been performed on a commercial scale. Commercial nuclear fusion is not yet an accomplished technology. NMR spectroscopy Deuterium is most commonly used in hydrogen nuclear magnetic resonance spectroscopy (proton NMR) in the following way. NMR ordinarily requires compounds of interest to be analyzed as dissolved in solution. Because of deuterium's nuclear spin properties which differ from the light hydrogen usually present in organic molecules, NMR spectra of hydrogen/protium are highly differentiable from that of deuterium, and in practice deuterium is not "seen" by an NMR instrument tuned for H. Deuterated solvents (including heavy water, but also compounds like deuterated chloroform, CDCl or CHCl, are therefore routinely used in NMR spectroscopy, in order to allow only the light-hydrogen spectra of the compound of interest to be measured, without solvent-signal interference. Nuclear magnetic resonance spectroscopy can also be used to obtain information about the deuteron's environment in isotopically labelled samples (deuterium NMR). For example, the configuration of hydrocarbon chains in lipid bilayers can be quantified using solid state deuterium NMR with deuterium-labelled lipid molecules. Deuterium NMR spectra are especially informative in the solid state because of its relatively small quadrupole moment in comparison with those of bigger quadrupolar nuclei such as chlorine-35, for example. Mass spectrometry Deuterated (i.e. where all or some hydrogen atoms are replaced with deuterium) compounds are often used as internal standards in mass spectrometry. Like other isotopically labeled species, such standards improve accuracy, while often at a much lower cost than other isotopically labeled standards. Deuterated molecules are usually prepared via hydrogen isotope exchange reactions. Tracing In chemistry, biochemistry and environmental sciences, deuterium is used as a non-radioactive, stable isotopic tracer, for example, in the doubly labeled water test. In chemical reactions and metabolic pathways, deuterium behaves somewhat similarly to ordinary hydrogen (with a few chemical differences, as noted). It can be distinguished from normal hydrogen most easily by its mass, using mass spectrometry or infrared spectrometry. Deuterium can be detected by femtosecond infrared spectroscopy, since the mass difference drastically affects the frequency of molecular vibrations; H–carbon bond vibrations are found in spectral regions free of other signals. Measurements of small variations in the natural abundances of deuterium, along with those of the stable heavy oxygen isotopes O and O, are of importance in hydrology, to trace the geographic origin of Earth's waters. The heavy isotopes of hydrogen and oxygen in rainwater (meteoric water) are enriched as a function of the environmental temperature of the region in which the precipitation falls (and thus enrichment is related to latitude). 
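The isotopic enrichments described here are conventionally reported in a per mil delta notation relative to VSMOW (the δH values mentioned in the next paragraph). A minimal sketch of that conversion follows; the VSMOW ratio is the 155.76 ppm figure quoted earlier in the article, and the sample ratio used here is purely illustrative.

```python
# Delta notation for deuterium content, in per mil relative to VSMOW.
R_VSMOW = 155.76e-6   # 2H/1H atom ratio of Vienna Standard Mean Ocean Water (from the article)

def delta_2H(r_sample: float) -> float:
    """delta-2H of a sample (per mil), given its 2H/1H atom ratio."""
    return (r_sample / R_VSMOW - 1) * 1000

# Illustrative example: precipitation depleted in deuterium relative to ocean water.
print(f"{delta_2H(140e-6):+.1f} per mil")   # about -101 per mil
```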
The relative enrichment of the heavy isotopes in rainwater (as referenced to mean ocean water), when plotted against temperature falls predictably along a line called the global meteoric water line (GMWL). This plot allows samples of precipitation-originated water to be identified along with general information about the climate in which it originated. Evaporative and other processes in bodies of water, and also ground water processes, also differentially alter the ratios of heavy hydrogen and oxygen isotopes in fresh and salt waters, in characteristic and often regionally distinctive ways. The ratio of concentration of H to H is usually indicated with a delta as δH and the geographic patterns of these values are plotted in maps termed as isoscapes. Stable isotopes are incorporated into plants and animals and an analysis of the ratios in a migrant bird or insect can help suggest a rough guide to their origins. Contrast properties Neutron scattering techniques particularly profit from availability of deuterated samples: The H and H cross sections are very distinct and different in sign, which allows contrast variation in such experiments. Further, a nuisance problem of normal hydrogen is its large incoherent neutron cross section, which is nil for H. The substitution of deuterium for normal hydrogen thus reduces scattering noise. Hydrogen is an important and major component in all materials of organic chemistry and life science, but it barely interacts with X-rays. As hydrogen atoms (including deuterium) interact strongly with neutrons; neutron scattering techniques, together with a modern deuteration facility, fills a niche in many studies of macromolecules in biology and many other areas. Nuclear weapons See below. Most stars, including the Sun, generate energy over most of their lives by fusing hydrogen into heavier elements; yet such fusion of light hydrogen (protium) has never been successful in the conditions attainable on Earth. Thus, all artificial fusion, including the hydrogen fusion in hydrogen bombs, requires heavy hydrogen (deuterium, tritium, or both). Drugs A deuterated drug is a small molecule medicinal product in which one or more of the hydrogen atoms in the drug molecule have been replaced by deuterium. Because of the kinetic isotope effect, deuterium-containing drugs may have significantly lower rates of metabolism, and hence a longer half-life. In 2017, deutetrabenazine became the first deuterated drug to receive FDA approval. Reinforced essential nutrients Deuterium can be used to reinforce specific oxidation-vulnerable C–H bonds within essential or conditionally essential nutrients, such as certain amino acids, or polyunsaturated fatty acids (PUFA), making them more resistant to oxidative damage. Deuterated polyunsaturated fatty acids, such as linoleic acid, slow down the chain reaction of lipid peroxidation that damage living cells. Deuterated ethyl ester of linoleic acid (RT001), developed by Retrotope, is in a compassionate use trial in infantile neuroaxonal dystrophy and has successfully completed a Phase I/II trial in Friedreich's ataxia. Thermostabilization Live vaccines, such as oral polio vaccine, can be stabilized by deuterium, either alone or in combination with other stabilizers such as MgCl. Slowing circadian oscillations Deuterium has been shown to lengthen the period of oscillation of the circadian clock when dosed in rats, hamsters, and Gonyaulax dinoflagellates. 
In rats, chronic intake of 25% heavy water (²H₂O) disrupts circadian rhythm by lengthening the circadian period of suprachiasmatic nucleus-dependent rhythms in the brain's hypothalamus. Experiments in hamsters also support the theory that deuterium acts directly on the suprachiasmatic nucleus to lengthen the free-running circadian period. History Suspicion of lighter element isotopes The existence of nonradioactive isotopes of lighter elements had been suspected in studies of neon as early as 1913, and proven by mass spectrometry of light elements in 1920. At that time the neutron had not yet been discovered, and the prevailing theory was that isotopes of an element differ by the existence of additional protons in the nucleus accompanied by an equal number of nuclear electrons. In this theory, the deuterium nucleus with mass two and charge one would contain two protons and one nuclear electron. However, it was expected that the element hydrogen, with a measured average atomic mass very close to 1 Da, the known mass of the proton, would always have a nucleus composed of a single proton (a known particle), and could not contain a second proton. Thus, hydrogen was thought to have no heavy isotopes. Deuterium detected It was first detected spectroscopically in late 1931 by Harold Urey, a chemist at Columbia University. Urey's collaborator, Ferdinand Brickwedde, distilled five liters of cryogenically produced liquid hydrogen down to about 1 mL of liquid, using the low-temperature physics laboratory that had recently been established at the National Bureau of Standards (now National Institute of Standards and Technology) in Washington, DC. The technique had previously been used to isolate heavy isotopes of neon. The cryogenic boiloff technique concentrated the fraction of the mass-2 isotope of hydrogen to a degree that made its spectroscopic identification unambiguous. Naming of the isotope and Nobel Prize Urey created the names protium, deuterium, and tritium in an article published in 1934. The name is based in part on advice from Gilbert N. Lewis, who had proposed the name "deutium". The name comes from Greek deuteros 'second', and the nucleus was to be called a "deuteron" or "deuton". Isotopes and new elements were traditionally given the name that their discoverer decided. Some British scientists, such as Ernest Rutherford, wanted to call the isotope "diplogen", from Greek diploos 'double', and the nucleus to be called "diplon". The amount inferred for normal abundance of deuterium was so small (only about 1 atom in 6400 hydrogen atoms in seawater [156 parts per million]) that it had not noticeably affected previous measurements of (average) hydrogen atomic mass. This explained why it had not been suspected before. Urey was able to concentrate water to show partial enrichment of deuterium. Lewis, Urey's graduate advisor at Berkeley, had prepared and characterized the first samples of pure heavy water in 1933. The discovery of deuterium, coming before the discovery of the neutron in 1932, was an experimental shock to theory; but when the neutron was reported, making deuterium's existence more explicable, Urey was awarded the Nobel Prize in Chemistry only three years after the isotope's isolation. Lewis was deeply disappointed by the Nobel Committee's decision in 1934, and several high-ranking administrators at Berkeley reportedly believed this disappointment played a central role in his death in 1946, which some have characterized as a suicide.
"Heavy water" experiments in World War II Shortly before the war, Hans von Halban and Lew Kowarski moved their research on neutron moderation from France to Britain, smuggling the entire global supply of heavy water (which had been made in Norway) across in twenty-six steel drums. During World War II, Nazi Germany was known to be conducting experiments using heavy water as moderator for a nuclear reactor design. Such experiments were a source of concern because they might allow them to produce plutonium for an atomic bomb. Ultimately it led to the Allied operation called the "Norwegian heavy water sabotage", the purpose of which was to destroy the Vemork deuterium production/enrichment facility in Norway. At the time this was considered important to the potential progress of the war. After World War II ended, the Allies discovered that Germany was not putting as much serious effort into the program as had been previously thought. The Germans had completed only a small, partly built experimental reactor (which had been hidden away) and had been unable to sustain a chain reaction. By the end of the war, the Germans did not even have a fifth of the amount of heavy water needed to run the reactor, partially due to the Norwegian heavy water sabotage operation. However, even if the Germans had succeeded in getting a reactor operational (as the U.S. did with Chicago Pile-1 in late 1942), they would still have been at least several years away from the development of an atomic bomb. The engineering process, even with maximal effort and funding, required about two and a half years (from first critical reactor to bomb) in both the U.S. and U.S.S.R., for example. In thermonuclear weapons The 62-ton Ivy Mike device built by the United States and exploded on 1 November 1952, was the first fully successful hydrogen bomb (thermonuclear bomb). In this context, it was the first bomb in which most of the energy released came from nuclear reaction stages that followed the primary nuclear fission stage of the atomic bomb. The Ivy Mike bomb was a factory-like building, rather than a deliverable weapon. At its center, a very large cylindrical, insulated vacuum flask or cryostat, held cryogenic liquid deuterium in a volume of about 1000 liters (160 kilograms in mass, if this volume had been completely filled). Then, a conventional atomic bomb (the "primary") at one end of the bomb was used to create the conditions of extreme temperature and pressure that were needed to set off the thermonuclear reaction. Within a few years, so-called "dry" hydrogen bombs were developed that did not need cryogenic hydrogen. Released information suggests that all thermonuclear weapons built since then contain chemical compounds of deuterium and lithium in their secondary stages. The material that contains the deuterium is mostly lithium deuteride, with the lithium consisting of the isotope lithium-6. When the lithium-6 is bombarded with fast neutrons from the atomic bomb, tritium (hydrogen-3) is produced, and then the deuterium and the tritium quickly engage in thermonuclear fusion, releasing abundant energy, helium-4, and even more free neutrons. "Pure" fusion weapons such as the Tsar Bomba are believed to be obsolete. In most modern ("boosted") thermonuclear weapons, fusion directly provides only a small fraction of the total energy. Fission of a natural uranium-238 tamper by fast neutrons produced from D–T fusion accounts for a much larger (i.e. boosted) energy release than the fusion reaction itself. 
Modern research In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic form. This may help researchers better understand gas giant planets, such as Jupiter, Saturn and some exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields. Antideuterium An antideuteron is the antimatter counterpart of the deuteron, consisting of an antiproton and an antineutron. The antideuteron was first produced in 1965 at the Proton Synchrotron at CERN and the Alternating Gradient Synchrotron at Brookhaven National Laboratory. A complete atom, with a positron orbiting the nucleus, would be called antideuterium, but antideuterium has not yet been created. The proposed symbol for antideuterium is D̄, that is, D with an overbar. See also Isotopes of hydrogen Tokamak References External links Environmental isotopes Isotopes of hydrogen Neutron moderators Nuclear fusion fuels Nuclear materials Subatomic particles with spin 1 Medical isotopes
Deuterium
[ "Physics", "Chemistry" ]
8,627
[ "Isotopes of hydrogen", "Environmental isotopes", "Isotopes", "Materials", "Nuclear materials", "Chemicals in medicine", "Matter", "Medical isotopes" ]
8,603
https://en.wikipedia.org/wiki/Diffraction
Diffraction is the deviation of waves from straight-line propagation, without any change in their energy, due to an obstacle or through an aperture. The diffracting object or aperture effectively becomes a secondary source of the propagating wave. Diffraction is the same physical effect as interference, but interference is typically applied to superposition of a few waves and the term diffraction is used when many waves are superposed. Italian scientist Francesco Maria Grimaldi coined the word diffraction and was the first to record accurate observations of the phenomenon in 1660. In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit/aperture that is comparable in size to its wavelength, as shown in the inserted image. This is due to the addition, or interference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. If there are multiple closely spaced openings, a complex pattern of varying intensity can result. These effects also occur when a light wave travels through a medium with a varying refractive index, or when a sound wave travels through a medium with varying acoustic impedance – all waves diffract, including gravitational waves, water waves, and other electromagnetic waves such as X-rays and radio waves. Furthermore, quantum mechanics also demonstrates that matter possesses wave-like properties and, therefore, undergoes diffraction (which is measurable at subatomic to molecular levels). History The effects of diffraction of light were first carefully observed and characterized by Francesco Maria Grimaldi, who also coined the term diffraction, from the Latin diffringere, 'to break into pieces', referring to light breaking up into different directions. The results of Grimaldi's observations were published posthumously in 1665. Isaac Newton studied these effects and attributed them to inflexion of light rays. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating to be discovered. Thomas Young performed a celebrated experiment in 1803 demonstrating interference from two closely spaced slits. Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves. In 1818, supporters of the corpuscular theory of light proposed that the Paris Academy prize question address diffraction, expecting to see the wave theory defeated. However, Augustin-Jean Fresnel took the prize with his new theory of wave propagation, combining the ideas of Christiaan Huygens with Young's interference concept. Siméon Denis Poisson challenged the Fresnel theory by showing that it predicted a bright spot in the shadow behind a circular obstruction; Dominique-François-Jean Arago proceeded to demonstrate experimentally that such light is visible, confirming Fresnel's diffraction model. Mechanism In classical physics diffraction arises because of how waves propagate; this is described by the Huygens–Fresnel principle and the principle of superposition of waves. The propagation of a wave can be visualized by considering every particle of the transmitted medium on a wavefront as a point source for a secondary spherical wave.
The wave displacement at any subsequent point is the sum of these secondary waves. When waves are added together, their sum is determined by the relative phases as well as the amplitudes of the individual waves so that the summed amplitude of the waves can have any value between zero and the sum of the individual amplitudes. Hence, diffraction patterns usually have a series of maxima and minima. In the modern quantum mechanical understanding of light propagation through a slit (or slits) every photon is described by its wavefunction that determines the probability distribution for the photon: the light and dark bands are the areas where the photons are more or less likely to be detected. The wavefunction is determined by the physical surroundings such as slit geometry, screen distance, and initial conditions when the photon is created. The wave nature of individual photons (as opposed to wave properties only arising from the interactions between multitudes of photons) was implied by a low-intensity double-slit experiment first performed by G. I. Taylor in 1909. The quantum approach has some striking similarities to the Huygens–Fresnel principle; based on that principle, as light travels through slits and boundaries, secondary point light sources are created near or along these obstacles, and the resulting diffraction pattern is the intensity profile based on the collective interference of all these light sources that have different optical paths. In the quantum formalism, that is similar to considering the limited regions around the slits and boundaries from which photons are more likely to originate, and calculating the probability distribution (which is proportional to the resulting intensity of classical formalism). There are various analytical models for photons which allow the diffracted field to be calculated, including the Kirchhoff diffraction equation (derived from the wave equation), the Fraunhofer diffraction approximation of the Kirchhoff equation (applicable to the far field), the Fresnel diffraction approximation (applicable to the near field) and the Feynman path integral formulation. Most configurations cannot be solved analytically, but can yield numerical solutions through finite element and boundary element methods. In many cases it is assumed that there is only one scattering event, what is called kinematical diffraction, with an Ewald's sphere construction used to represent that there is no change in energy during the diffraction process. For matter waves a similar but slightly different approach is used based upon a relativistically corrected form of the Schrödinger equation, as first detailed by Hans Bethe. The Fraunhofer and Fresnel limits exist for these as well, although they correspond more to approximations for the matter wave Green's function (propagator) for the Schrödinger equation. More common are full multiple-scattering models, particularly in electron diffraction; in some cases similar dynamical diffraction models are also used for X-rays. It is possible to obtain a qualitative understanding of many diffraction phenomena by considering how the relative phases of the individual secondary wave sources vary, and, in particular, the conditions in which the phase difference equals half a cycle, in which case waves will cancel one another out. The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem.
For water waves, this is already the case; water waves propagate only on the surface of the water. For light, we can often neglect one direction if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes, we will have to take into account the full three-dimensional nature of the problem. Examples The effects of diffraction are often seen in everyday life. The most striking examples of diffraction are those that involve light; for example, the closely spaced tracks on a CD or DVD act as a diffraction grating to form the familiar rainbow pattern seen when looking at a disc. This principle can be extended to engineer a grating with a structure such that it will produce any diffraction pattern desired; the hologram on a credit card is an example. Diffraction in the atmosphere by small particles can cause a corona - a bright disc and rings around a bright light source like the sun or the moon. At the opposite point one may also observe glory - bright rings around the shadow of the observer. In contrast to the corona, glory requires the particles to be transparent spheres (like fog droplets), since the backscattering of the light that forms the glory involves refraction and internal reflection within the droplet. A shadow of a solid object, using light from a compact source, shows small fringes near its edges. Diffraction spikes are diffraction patterns caused by a non-circular aperture in a camera or by support struts in a telescope; in normal vision, diffraction through eyelashes may produce such spikes. The speckle pattern which is observed when laser light falls on an optically rough surface is also a diffraction phenomenon. When deli meat appears to be iridescent, that is diffraction off the meat fibers. All these effects are a consequence of the fact that light propagates as a wave. Diffraction can occur with any kind of wave. Ocean waves diffract around jetties and other obstacles. Sound waves can diffract around objects, which is why one can still hear someone calling even when hiding behind a tree. Diffraction can also be a concern in some technical applications; it sets a fundamental limit to the resolution of a camera, telescope, or microscope. Other examples of diffraction are considered below. Single-slit diffraction A long slit of infinitesimal width which is illuminated by light diffracts the light into a series of circular waves and the wavefront which emerges from the slit is a cylindrical wave of uniform intensity, in accordance with the Huygens–Fresnel principle. An illuminated slit that is wider than a wavelength produces interference effects in the space downstream of the slit. Assuming that the slit behaves as though it has a large number of point sources spaced evenly across the width of the slit, interference effects can be calculated. The analysis of this system is simplified if we consider light of a single wavelength. If the incident light is coherent, these sources all have the same phase. Light incident at a given point in the space downstream of the slit is made up of contributions from each of these point sources and if the relative phases of these contributions vary by 2π or more, we may expect to find minima and maxima in the diffracted light. Such phase differences are caused by differences in the path lengths over which contributing rays reach the point from the slit.
We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning. The light from a source located at the top edge of the slit interferes destructively with a source located at the middle of the slit, when the path difference between them is equal to $\lambda/2$. Similarly, the source just below the top of the slit will interfere destructively with the source located just below the middle of the slit at the same angle. We can continue this reasoning along the entire height of the slit to conclude that the condition for destructive interference for the entire slit is the same as the condition for destructive interference between two narrow slits a distance apart that is half the width of the slit. The path difference is approximately $\frac{d\sin\theta}{2}$ so that the minimum intensity occurs at an angle $\theta_{\min}$ given by $d\sin\theta_{\min} = \lambda$, where $d$ is the width of the slit, $\theta_{\min}$ is the angle of incidence at which the minimum intensity occurs, and $\lambda$ is the wavelength of the light. A similar argument can be used to show that if we imagine the slit to be divided into four, six, eight parts, etc., minima are obtained at angles $\theta_n$ given by $d\sin\theta_n = n\lambda$, where $n$ is an integer other than zero. There is no such simple argument to enable us to find the maxima of the diffraction pattern. The intensity profile can be calculated using the Fraunhofer diffraction equation as $I(\theta) = I_0\,\operatorname{sinc}^2\!\left(\frac{\pi d}{\lambda}\sin\theta\right)$, where $I(\theta)$ is the intensity at a given angle, $I_0$ is the intensity at the central maximum, which is also a normalization factor of the intensity profile that can be determined by an integration from $\theta=-\frac{\pi}{2}$ to $\theta=\frac{\pi}{2}$ and conservation of energy, and $\operatorname{sinc}(x) = \frac{\sin x}{x}$, which is the unnormalized sinc function. This analysis applies only to the far field (Fraunhofer diffraction), that is, at a distance much larger than the width of the slit. From the intensity profile above, if $d \ll \lambda$, the intensity will have little dependency on $\theta$, hence the wavefront emerging from the slit would resemble a cylindrical wave with azimuthal symmetry; if $d \gg \lambda$, only $\theta \approx 0$ would have appreciable intensity, hence the wavefront emerging from the slit would resemble that of geometrical optics. When the incident angle $\theta_{\text{i}}$ of the light onto the slit is non-zero (which causes a change in the path length), the intensity profile in the Fraunhofer regime (i.e. far field) becomes: $I(\theta) = I_0\,\operatorname{sinc}^2\!\left[\frac{\pi d}{\lambda}\left(\sin\theta \pm \sin\theta_{\text{i}}\right)\right]$. The choice of plus/minus sign depends on the definition of the incident angle $\theta_{\text{i}}$. Diffraction grating A diffraction grating is an optical component with a regular pattern. The form of the light diffracted by a grating depends on the structure of the elements and the number of elements present, but all gratings have intensity maxima at angles θm which are given by the grating equation $d\left(\sin\theta_m \pm \sin\theta_i\right) = m\lambda$, where $\theta_i$ is the angle at which the light is incident, $d$ is the separation of grating elements, and $m$ is an integer which can be positive or negative. The light diffracted by a grating is found by summing the light diffracted from each of the elements, and is essentially a convolution of diffraction and interference patterns. The figure shows the light diffracted by 2-element and 5-element gratings where the grating spacings are the same; it can be seen that the maxima are in the same position, but the detailed structures of the intensities are different. Circular aperture The far-field diffraction of a plane wave incident on a circular aperture is often referred to as the Airy disk. The variation in intensity with angle is given by $I(\theta) = I_0\left[\frac{2 J_1(ka\sin\theta)}{ka\sin\theta}\right]^2$, where $a$ is the radius of the circular aperture, $k$ is equal to $2\pi/\lambda$, and $J_1$ is a Bessel function. The smaller the aperture, the larger the spot size at a given distance, and the greater the divergence of the diffracted beams.
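The single-slit intensity formula and the grating equation above lend themselves to a quick numerical check. The sketch below is illustrative only: the wavelength, slit width and grating spacing are arbitrary example values, not taken from the article.

```python
import numpy as np

# Illustrative check of the Fraunhofer single-slit profile and grating maxima.
# All numerical values here are arbitrary example choices.
wavelength = 633e-9      # m (a typical He-Ne laser line)
slit_width = 10e-6       # slit width d, m
theta = np.linspace(-0.2, 0.2, 2001)   # observation angles, rad

# I(theta) = I0 * sinc^2(pi d sin(theta) / lambda); numpy's sinc(x) is
# sin(pi x)/(pi x), so we pass (d sin(theta) / lambda) directly.
intensity = np.sinc(slit_width * np.sin(theta) / wavelength) ** 2

# First minima should satisfy d sin(theta) = ±lambda.
theta_min = np.arcsin(wavelength / slit_width)
print(f"first minima at ±{np.degrees(theta_min):.2f} degrees")

# Grating maxima at normal incidence: d_g sin(theta_m) = m lambda.
grating_spacing = 2e-6
for m in range(1, 4):
    s = m * wavelength / grating_spacing
    if s <= 1:
        print(f"order {m}: {np.degrees(np.arcsin(s)):.1f} degrees")
```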
General aperture The wave that emerges from a point source has amplitude $\psi$ at location $\mathbf r$ that is given by the solution of the frequency domain wave equation for a point source (the Helmholtz equation), $\nabla^2\psi + k^2\psi = \delta(\mathbf r)$, where $\delta(\mathbf r)$ is the 3-dimensional delta function. The delta function has only radial dependence, so the Laplace operator (a.k.a. scalar Laplacian) in the spherical coordinate system simplifies to $\nabla^2\psi = \frac{1}{r}\frac{\partial^2}{\partial r^2}(r\psi)$ (See del in cylindrical and spherical coordinates.) By direct substitution, the solution to this equation can be readily shown to be the scalar Green's function, which in the spherical coordinate system (and using the physics time convention $e^{-i\omega t}$) is $\psi(r) = \frac{e^{ikr}}{4\pi r}$. This solution assumes that the delta function source is located at the origin. If the source is located at an arbitrary source point, denoted by the vector $\mathbf r'$, and the field point is located at the point $\mathbf r$, then we may represent the scalar Green's function (for arbitrary source location) as $\psi(\mathbf r \mid \mathbf r') = \frac{e^{ik|\mathbf r - \mathbf r'|}}{4\pi|\mathbf r - \mathbf r'|}$. Therefore, if an electric field $E_{\mathrm{inc}}(x,y)$ is incident on the aperture, the field produced by this aperture distribution is given by the surface integral $\Psi(\mathbf r) \propto \iint_{\mathrm{aperture}} E_{\mathrm{inc}}(x',y')\,\frac{e^{ik|\mathbf r - \mathbf r'|}}{4\pi|\mathbf r - \mathbf r'|}\,dx'\,dy'$, where the source point in the aperture is given by the vector $\mathbf r' = x'\hat{\mathbf x} + y'\hat{\mathbf y}$. In the far field, wherein the parallel rays approximation can be employed, the Green's function simplifies to $\psi(\mathbf r \mid \mathbf r') \approx \frac{e^{ikr}}{4\pi r}\,e^{-ik(\mathbf r'\cdot\hat{\mathbf r})}$, as can be seen in the adjacent figure. The expression for the far-zone (Fraunhofer region) field becomes $\Psi(\mathbf r) \propto \frac{e^{ikr}}{4\pi r}\iint_{\mathrm{aperture}} E_{\mathrm{inc}}(x',y')\,e^{-ik(\mathbf r'\cdot\hat{\mathbf r})}\,dx'\,dy'$. Now, since $\hat{\mathbf r}\cdot\hat{\mathbf x} = \sin\theta\cos\phi$ and $\hat{\mathbf r}\cdot\hat{\mathbf y} = \sin\theta\sin\phi$, the expression for the Fraunhofer region field from a planar aperture now becomes $\Psi(\mathbf r) \propto \frac{e^{ikr}}{4\pi r}\iint_{\mathrm{aperture}} E_{\mathrm{inc}}(x',y')\,e^{-ik(x'\sin\theta\cos\phi + y'\sin\theta\sin\phi)}\,dx'\,dy'$. Letting $f_x = \frac{\sin\theta\cos\phi}{\lambda}$ and $f_y = \frac{\sin\theta\sin\phi}{\lambda}$, the Fraunhofer region field of the planar aperture assumes the form of a Fourier transform $\Psi(\mathbf r) \propto \frac{e^{ikr}}{4\pi r}\iint_{\mathrm{aperture}} E_{\mathrm{inc}}(x',y')\,e^{-i2\pi(f_x x' + f_y y')}\,dx'\,dy'$. In the far-field / Fraunhofer region, this becomes the spatial Fourier transform of the aperture distribution. Huygens' principle when applied to an aperture simply says that the far-field diffraction pattern is the spatial Fourier transform of the aperture shape, and this is a direct by-product of using the parallel-rays approximation, which is identical to doing a plane wave decomposition of the aperture plane fields (see Fourier optics). Propagation of a laser beam The way in which the beam profile of a laser beam changes as it propagates is determined by diffraction. When the entire emitted beam has a planar, spatially coherent wave front, it approximates a Gaussian beam profile and has the lowest divergence for a given diameter. The smaller the output beam, the quicker it diverges. It is possible to reduce the divergence of a laser beam by first expanding it with one convex lens, and then collimating it with a second convex lens whose focal point is coincident with that of the first lens. The resulting beam has a larger diameter, and hence a lower divergence. Divergence of a laser beam may be reduced below the diffraction of a Gaussian beam or even reversed to convergence if the refractive index of the propagation media increases with the light intensity. This may result in a self-focusing effect. When the wave front of the emitted beam has perturbations, only the transverse coherence length (where the wave front perturbation is less than 1/4 of the wavelength) should be considered as a Gaussian beam diameter when determining the divergence of the laser beam. If the transverse coherence length in the vertical direction is higher than in horizontal, the laser beam divergence will be lower in the vertical direction than in the horizontal. Diffraction-limited imaging The ability of an imaging system to resolve detail is ultimately limited by diffraction. This is because a plane wave incident on a circular lens or mirror is diffracted as described above.
The light is not focused to a point but forms an Airy disk having a central spot in the focal plane whose radius (as measured to the first null) is $\Delta x = 1.22\,\lambda N$, where $\lambda$ is the wavelength of the light and $N$ is the f-number (focal length $f$ divided by aperture diameter $D$) of the imaging optics; this is strictly accurate for $N \gg 1$ (paraxial case). In object space, the corresponding angular resolution is $\theta \approx \sin\theta = 1.22\,\frac{\lambda}{D}$, where $D$ is the diameter of the entrance pupil of the imaging lens (e.g., of a telescope's main mirror). Two point sources will each produce an Airy pattern – see the photo of a binary star. As the point sources move closer together, the patterns will start to overlap, and ultimately they will merge to form a single pattern, in which case the two point sources cannot be resolved in the image. The Rayleigh criterion specifies that two point sources are considered "resolved" if the separation of the two images is at least the radius of the Airy disk, i.e. if the first minimum of one coincides with the maximum of the other. Thus, the larger the aperture of the lens compared to the wavelength, the finer the resolution of an imaging system. This is one reason astronomical telescopes require large objectives, and why microscope objectives require a large numerical aperture (large aperture diameter compared to working distance) in order to obtain the highest possible resolution. Speckle patterns The speckle pattern seen when using a laser pointer is another diffraction phenomenon. It is a result of the superposition of many waves with different phases, which are produced when a laser beam illuminates a rough surface. They add together to give a resultant wave whose amplitude, and therefore intensity, varies randomly. Babinet's principle Babinet's principle is a useful theorem stating that the diffraction pattern from an opaque body is identical to that from a hole of the same size and shape, but with differing intensities. This means that the interference conditions of a single obstruction would be the same as that of a single slit. "Knife edge" The knife-edge effect or knife-edge diffraction is a truncation of a portion of the incident radiation that strikes a sharp well-defined obstacle, such as a mountain range or the wall of a building. The knife-edge effect is explained by the Huygens–Fresnel principle, which states that a well-defined obstruction to an electromagnetic wave acts as a secondary source, and creates a new wavefront. This new wavefront propagates into the geometric shadow area of the obstacle. Knife-edge diffraction is an outgrowth of the "half-plane problem", originally solved by Arnold Sommerfeld using a plane wave spectrum formulation. A generalization of the half-plane problem is the "wedge problem", solvable as a boundary value problem in cylindrical coordinates. The solution in cylindrical coordinates was then extended to the optical regime by Joseph B. Keller, who introduced the notion of diffraction coefficients through his geometrical theory of diffraction (GTD). In 1974, Prabhakar Pathak and Robert Kouyoumjian extended the (singular) Keller coefficients via the uniform theory of diffraction (UTD). Patterns Several qualitative observations can be made of diffraction in general: The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. In other words: The smaller the diffracting object, the 'wider' the resulting diffraction pattern, and vice versa. (More precisely, this is true of the sines of the angles.)
The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object. When the diffracting object has a periodic structure, for example in a diffraction grating, the features generally become sharper. The third figure, for example, shows a comparison of a double-slit pattern with a pattern formed by five slits, both sets of slits having the same spacing between the center of one slit and the next. Matter wave diffraction According to quantum theory every particle exhibits wave properties and can therefore diffract. Diffraction of electrons and neutrons is one of the powerful arguments in favor of quantum mechanics. The wavelength associated with a non-relativistic particle is the de Broglie wavelength $\lambda = \frac{h}{p}$, where $h$ is the Planck constant and $p$ is the momentum of the particle (mass × velocity for slow-moving particles). For example, a sodium atom traveling at about 300 m/s would have a de Broglie wavelength of about 50 picometres. Diffraction of matter waves has been observed for small particles, like electrons, neutrons, atoms, and even large molecules. The short wavelength of these matter waves makes them ideally suited to study the atomic structure of solids, molecules and proteins. Bragg diffraction Diffraction from a large three-dimensional periodic structure such as many thousands of atoms in a crystal is called Bragg diffraction. It is similar to what occurs when waves are scattered from a diffraction grating. Bragg diffraction is a consequence of interference between waves reflecting from many different crystal planes. The condition of constructive interference is given by Bragg's law: $m\lambda = 2d\sin\theta$, where $\lambda$ is the wavelength, $d$ is the distance between crystal planes, $\theta$ is the angle of the diffracted wave, and $m$ is an integer known as the order of the diffracted beam. Bragg diffraction may be carried out using either electromagnetic radiation of very short wavelength like X-rays or matter waves like neutrons (and electrons) whose wavelength is on the order of (or much smaller than) the atomic spacing. The pattern produced gives information of the separations of crystallographic planes $d$, allowing one to deduce the crystal structure. For completeness, Bragg diffraction is a limit for a large number of atoms with X-rays or neutrons, and is rarely valid for electron diffraction or with solid particles in the size range of less than 50 nanometers. Coherence The description of diffraction relies on the interference of waves emanating from the same source taking different paths to the same point on a screen. In this description, the difference in phase between waves that took different paths is only dependent on the effective path length. This does not take into account the fact that waves that arrive at the screen at the same time were emitted by the source at different times. The initial phase with which the source emits waves can change over time in an unpredictable way. This means that waves emitted by the source at times that are too far apart can no longer form a constant interference pattern since the relation between their phases is no longer time independent. The length over which the phase in a beam of light is correlated is called the coherence length. In order for interference to occur, the path length difference must be smaller than the coherence length. This is sometimes referred to as spectral coherence, as it is related to the presence of different frequency components in the wave.
In the case of light emitted by an atomic transition, the coherence length is related to the lifetime of the excited state from which the atom made its transition. If waves are emitted from an extended source, this can lead to incoherence in the transversal direction. When looking at a cross section of a beam of light, the length over which the phase is correlated is called the transverse coherence length. In the case of Young's double-slit experiment, this would mean that if the transverse coherence length is smaller than the spacing between the two slits, the resulting pattern on a screen would look like two single-slit diffraction patterns. In the case of particles like electrons, neutrons, and atoms, the coherence length is related to the spatial extent of the wave function that describes the particle. Applications Diffraction before destruction A new way to image single biological particles has emerged since the 2010s, utilising the bright X-rays generated by X-ray free-electron lasers. These femtosecond-duration pulses will allow for the (potential) imaging of single biological macromolecules. Due to these short pulses, radiation damage can be outrun, and diffraction patterns of single biological macromolecules will be able to be obtained. See also Angle-sensitive pixel Atmospheric diffraction Brocken spectre Cloud iridescence Coherent diffraction imaging Diffraction from slits Diffraction spike Diffraction vs. interference Diffractive solar sail Diffractometer Dynamical theory of diffraction Electron diffraction Fraunhofer diffraction Fresnel imager Fresnel number Fresnel zone Point spread function Powder diffraction Quasioptics Refraction Reflection Schaefer–Bergmann diffraction Thinned-array curse X-ray diffraction References External links The Feynman Lectures on Physics Vol. I Ch. 30: Diffraction Using a cd as a diffraction grating at YouTube Physical phenomena
Diffraction
[ "Physics", "Chemistry", "Materials_science" ]
5,438
[ "Physical phenomena", "Spectrum (physical sciences)", "Crystallography", "Diffraction", "Spectroscopy" ]
8,643
https://en.wikipedia.org/wiki/Molecular%20diffusion
Molecular diffusion, often simply called diffusion, is the thermal motion of all (liquid or gas) particles at temperatures above absolute zero. The rate of this movement is a function of temperature, viscosity of the fluid and the size (mass) of the particles. Diffusion explains the net flux of molecules from a region of higher concentration to one of lower concentration. Once the concentrations are equal the molecules continue to move, but since there is no concentration gradient the process of molecular diffusion has ceased and is instead governed by the process of self-diffusion, originating from the random motion of the molecules. The result of diffusion is a gradual mixing of material such that the distribution of molecules is uniform. Since the molecules are still in motion, but an equilibrium has been established, the result of molecular diffusion is called a "dynamic equilibrium". In a phase with uniform temperature, absent external net forces acting on the particles, the diffusion process will eventually result in complete mixing. Consider two systems, S1 and S2, at the same temperature and capable of exchanging particles. If there is a difference in chemical potential between them, for example μ1 > μ2 (where μ is the chemical potential), a net flow of particles will occur from S1 to S2, because nature always prefers low energy and maximum entropy. Molecular diffusion is typically described mathematically using Fick's laws of diffusion. Applications Diffusion is of fundamental importance in many disciplines of physics, chemistry, and biology. Some example applications of diffusion: Sintering to produce solid materials (powder metallurgy, production of ceramics) Chemical reactor design Catalyst design in chemical industry Steel can be diffused (e.g., with carbon or nitrogen) to modify its properties Doping during production of semiconductors. Significance Diffusion is part of the transport phenomena. Of mass transport mechanisms, molecular diffusion is known as a slower one. Biology In cell biology, diffusion is a main form of transport for necessary materials such as amino acids within cells. Diffusion of solvents, such as water, through a semipermeable membrane is classified as osmosis. Metabolism and respiration rely in part upon diffusion in addition to bulk or active processes. For example, in the alveoli of mammalian lungs, due to differences in partial pressures across the alveolar-capillary membrane, oxygen diffuses into the blood and carbon dioxide diffuses out. Lungs contain a large surface area to facilitate this gas exchange process. Tracer, self- and chemical diffusion Fundamentally, two types of diffusion are distinguished: tracer diffusion and self-diffusion, which is a spontaneous mixing of molecules taking place in the absence of a concentration (or chemical potential) gradient. This type of diffusion can be followed using isotopic tracers, hence the name. The tracer diffusion is usually assumed to be identical to self-diffusion (assuming no significant isotopic effect). This diffusion can take place under equilibrium. An excellent method for the measurement of self-diffusion coefficients is pulsed field gradient (PFG) NMR, where no isotopic tracers are needed. In a so-called NMR spin echo experiment this technique uses the nuclear spin precession phase, allowing one to distinguish chemically and physically completely identical species, e.g. in the liquid phase, as for example water molecules within liquid water.
The self-diffusion coefficient of water has been experimentally determined with high accuracy and thus often serves as a reference value for measurements on other liquids. The self-diffusion coefficient of neat water is 2.299·10⁻⁹ m²·s⁻¹ at 25 °C and 1.261·10⁻⁹ m²·s⁻¹ at 4 °C. Chemical diffusion occurs in the presence of a concentration (or chemical potential) gradient and results in net transport of mass. This is the process described by the diffusion equation. This diffusion is always a non-equilibrium process, increases the system entropy, and brings the system closer to equilibrium. The diffusion coefficients for these two types of diffusion are generally different because the diffusion coefficient for chemical diffusion is binary and it includes the effects due to the correlation of the movement of the different diffusing species. Non-equilibrium system Because chemical diffusion is a net transport process, the system in which it takes place is not an equilibrium system (i.e. it is not at rest yet). Many results in classical thermodynamics are not easily applied to non-equilibrium systems. However, there sometimes occur so-called quasi-steady states, in which the diffusion process does not change in time and classical results may locally apply. As the name suggests, this process is not a true equilibrium since the system is still evolving. Non-equilibrium fluid systems can be successfully modeled with Landau-Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale. Chemical diffusion increases the entropy of a system, i.e. diffusion is a spontaneous and irreversible process. Particles can spread out by diffusion, but will not spontaneously re-order themselves (absent changes to the system, assuming no creation of new chemical bonds, and absent external forces acting on the particle). Concentration dependent "collective" diffusion Collective diffusion is the diffusion of a large number of particles, most often within a solvent. Contrary to Brownian motion, which is the diffusion of a single particle, interactions between particles may have to be considered, unless the particles form an ideal mix with their solvent (ideal mix conditions correspond to the case where the interactions between the solvent and particles are identical to the interactions between particles and the interactions between solvent molecules; in this case, the particles do not interact when inside the solvent). In case of an ideal mix, the particle diffusion equation holds true and the diffusion coefficient D, which sets the speed of diffusion in the particle diffusion equation, is independent of particle concentration. In other cases, resulting interactions between particles within the solvent will account for the following effects: the diffusion coefficient D in the particle diffusion equation becomes dependent on concentration. For an attractive interaction between particles, the diffusion coefficient tends to decrease as concentration increases. For a repulsive interaction between particles, the diffusion coefficient tends to increase as concentration increases. In the case of an attractive interaction between particles, particles exhibit a tendency to coalesce and form clusters if their concentration lies above a certain threshold. This is equivalent to a precipitation chemical reaction (and if the considered diffusing particles are chemical molecules in solution, then it is a precipitation).
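A concentration-dependent coefficient of this kind can be explored numerically. The sketch below is a minimal one-dimensional finite-difference illustration, not a model from the article: the grid, time step and the simple linear D(c) relation are arbitrary illustrative choices.

```python
import numpy as np

# Minimal 1-D sketch of diffusion with a concentration-dependent coefficient,
# illustrating the "collective diffusion" discussion above. All parameter
# values and the toy D(c) model are arbitrary assumptions for illustration.

def d_of_c(c, d0=1.0e-9, k=0.5):
    """Toy model: D grows with concentration, as for repulsive interactions."""
    return d0 * (1.0 + k * c)

nx, dx, dt, steps = 200, 1e-6, 1e-4, 5000
c = np.zeros(nx)
c[nx // 2 - 5: nx // 2 + 5] = 1.0   # initial narrow pulse of solute

for _ in range(steps):
    d = d_of_c(c)
    # explicit finite-difference update of dc/dt = d/dx ( D(c) dc/dx )
    flux = -0.5 * (d[1:] + d[:-1]) * np.diff(c) / dx
    c[1:-1] -= dt * np.diff(flux) / dx

print(f"approximate total solute remaining: {c.sum() * dx:.3e}")
```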
Molecular diffusion of gases Transport of material in stagnant fluid or across streamlines of a fluid in a laminar flow occurs by molecular diffusion. Two adjacent compartments separated by a partition, containing pure gases A or B, may be envisaged. Random movement of all molecules occurs so that after a period molecules are found remote from their original positions. If the partition is removed, some molecules of A move towards the region occupied by B, their number depending on the number of molecules at the region considered. Concurrently, molecules of B diffuse toward regions formerly occupied by pure A. Finally, complete mixing occurs. Before this point in time, a gradual variation in the concentration of A occurs along an axis, designated x, which joins the original compartments. This variation is expressed mathematically as $-dC_A/dx$, where $C_A$ is the concentration of A. The negative sign arises because the concentration of A decreases as the distance x increases. Similarly, the variation in the concentration of gas B is $-dC_B/dx$. The rate of diffusion of A, $N_A$, depends on the concentration gradient and the average velocity with which the molecules of A move in the x direction. This relationship is expressed by Fick's law $N_A = -D_{AB}\,\frac{dC_A}{dx}$ (only applicable for no bulk motion), where $D_{AB}$ is the diffusivity of A through B, proportional to the average molecular velocity and, therefore, dependent on the temperature and pressure of gases. The rate of diffusion $N_A$ is usually expressed as the number of moles diffusing across unit area in unit time. As with the basic equation of heat transfer, this indicates that the rate of transfer is directly proportional to the driving force, which is the concentration gradient. This basic equation applies to a number of situations. Restricting discussion exclusively to steady state conditions, in which neither $dC_A/dx$ nor $dC_B/dx$ change with time, equimolecular counterdiffusion is considered first. Equimolecular counterdiffusion If no bulk flow occurs in an element of length dx, the rates of diffusion of two ideal gases (of similar molar volume) A and B must be equal and opposite, that is $N_A = -N_B$. The partial pressure of A changes by $dP_A$ over the distance dx. Similarly, the partial pressure of B changes $dP_B$. As there is no difference in total pressure across the element (no bulk flow), we have $\frac{dP_A}{dx} = -\frac{dP_B}{dx}$. For an ideal gas the partial pressure is related to the molar concentration by the relation $P_A V = n_A R T$, where $n_A$ is the number of moles of gas A in a volume V. As the molar concentration $C_A$ is equal to $n_A/V$, therefore $P_A = C_A R T$. Consequently, for gas A, $N_A = -\frac{D_{AB}}{RT}\,\frac{dP_A}{dx}$, where $D_{AB}$ is the diffusivity of A in B. Similarly, $N_B = -\frac{D_{BA}}{RT}\,\frac{dP_B}{dx}$. Considering that $dP_A/dx = -dP_B/dx$ and $N_A = -N_B$, it follows that $D_{AB} = D_{BA} = D$. If the partial pressure of A at $x_1$ is $P_{A1}$ and at $x_2$ is $P_{A2}$, integration of the above equation gives $N_A = \frac{D}{RT}\,\frac{P_{A1} - P_{A2}}{x_2 - x_1}$. A similar equation may be derived for the counterdiffusion of gas B. See also References External links Some pictures that display diffusion and osmosis An animation describing diffusion. A tutorial on the theory behind and solution of the Diffusion Equation. NetLogo Simulation Model for Educational Use (Java Applet) Short movie on brownian motion (includes calculation of the diffusion coefficient) A basic introduction to the classical theory of volume diffusion (with figures and animations) Diffusion on the nanoscale (with figures and animations) Transport phenomena Diffusion Underwater diving physics
Molecular diffusion
[ "Physics", "Chemistry", "Engineering" ]
2,030
[ "Transport phenomena", "Physical phenomena", "Applied and interdisciplinary physics", "Diffusion", "Underwater diving physics", "Chemical engineering" ]
8,651
https://en.wikipedia.org/wiki/Dark%20matter
In astronomy, dark matter is an invisible and hypothetical form of matter that does not interact with light or other electromagnetic radiation. Dark matter is implied by gravitational effects which cannot be explained by general relativity unless more matter is present than can be observed. Such effects occur in the context of formation and evolution of galaxies, gravitational lensing, the observable universe's current structure, mass position in galactic collisions, the motion of galaxies within galaxy clusters, and cosmic microwave background anisotropies. In the standard Lambda-CDM model of cosmology, the mass–energy content of the universe is 5% ordinary matter, 26.8% dark matter, and 68.2% a form of energy known as dark energy. Thus, dark matter constitutes 85% of the total mass, while dark energy and dark matter constitute 95% of the total mass–energy content. Dark matter is not known to interact with ordinary baryonic matter and radiation except through gravity, making it difficult to detect in the laboratory. The most prevalent explanation is that dark matter is some as-yet-undiscovered subatomic particle, such as either weakly interacting massive particles (WIMPs) or axions. The other main possibility is that dark matter is composed of primordial black holes. Dark matter is classified as "cold", "warm", or "hot" according to velocity (more precisely, its free streaming length). Recent models have favored a cold dark matter scenario, in which structures emerge by the gradual accumulation of particles. Although the astrophysics community generally accepts the existence of dark matter, a minority of astrophysicists, intrigued by specific observations that are not well explained by ordinary dark matter, argue for various modifications of the standard laws of general relativity. These include modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. So far none of the proposed modified gravity theories can describe every piece of observational evidence at the same time, suggesting that even if gravity has to be modified, some form of dark matter will still be required. History Early history The hypothesis of dark matter has an elaborate history. Wm. Thomson, Lord Kelvin, discussed the potential number of stars around the Sun in the appendices of a book based on a series of lectures given in 1884 in Baltimore. He inferred their density using the observed velocity dispersion of the stars near the Sun, assuming that the Sun was 20–100 million years old. He posed what would happen if there were a thousand million stars within 1 kiloparsec of the Sun (at which distance their parallax would be 1 milli-arcsecond). Kelvin concluded: "Many of our supposed thousand million stars – perhaps a great majority of them – may be dark bodies." In 1906, Poincaré used the French term matière obscure ("dark matter") in discussing Kelvin's work. He found that the amount of dark matter would need to be less than that of visible matter, which turned out to be incorrect. The second to suggest the existence of dark matter using stellar velocities was Dutch astronomer Jacobus Kapteyn in 1922. A publication from 1930 by Swedish astronomer Knut Lundmark points to him being the first to realise that the universe must contain much more mass than can be observed. Dutch radio astronomy pioneer Jan Oort also hypothesized the existence of dark matter in 1932.
Oort was studying stellar motions in the galactic neighborhood and found the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be incorrect. In 1933, Swiss astrophysicist Fritz Zwicky studied galaxy clusters while working at Caltech and made a similar inference. Zwicky applied the virial theorem to the Coma Cluster and obtained evidence of unseen mass he called dunkle Materie ('dark matter'). Zwicky estimated its mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies. He estimated the cluster had about 400 times more mass than was visually observable. The gravity effect of the visible galaxies was far too small for such fast orbits, thus mass must be hidden from view. Based on these conclusions, Zwicky inferred some unseen matter provided the mass and associated gravitational attraction to hold the cluster together. Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant; the same calculation today shows a smaller fraction, using greater values for luminous mass. Nonetheless, Zwicky did correctly conclude from his calculation that most of the gravitational matter present was dark. However, unlike modern theories, Zwicky considered "dark matter" to be non-luminous ordinary matter. Further indications of mass-to-light ratio anomalies came from measurements of galaxy rotation curves. In 1939, H.W. Babcock reported the rotation curve for the Andromeda nebula (now called the Andromeda Galaxy), which suggested the mass-to-luminosity ratio increases radially. He attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral, rather than to unseen matter. Following Babcock's 1939 report of unexpectedly rapid rotation in the outskirts of the Andromeda Galaxy and a mass-to-light ratio of 50, Oort in 1940 discovered and wrote about the large non-visible halo of NGC 3115. 1970s The hypothesis of dark matter largely took root in the 1970s. Several different observations were synthesized to argue that galaxies should be surrounded by halos of unseen matter. In two papers that appeared in 1974, this conclusion was drawn in tandem by independent groups: in Princeton, New Jersey, by Jeremiah Ostriker, Jim Peebles, and Amos Yahil, and in Tartu, Estonia, by Jaan Einasto, Enn Saar, and Ants Kaasik. One of the observations that served as evidence for the existence of galactic halos of dark matter was the shape of galaxy rotation curves. These observations were done in optical and radio astronomy. In optical astronomy, Vera Rubin and Kent Ford worked with a new spectrograph to measure the velocity curve of edge-on spiral galaxies with greater accuracy. At the same time, radio astronomers were making use of new radio telescopes to map the 21 cm line of atomic hydrogen in nearby galaxies. The radial distribution of interstellar atomic hydrogen (H I) often extends to much greater galactic distances than can be observed as collective starlight, expanding the sampled distances for rotation curves – and thus of the total mass distribution – to a new dynamical regime. Early mapping of Andromeda with the telescope at Green Bank and the dish at Jodrell Bank already showed the H I rotation curve did not trace the decline expected from Keplerian orbits.
As more sensitive receivers became available, Roberts & Whitehurst (1975) were able to trace the rotational velocity of Andromeda to 30 kpc, much beyond the optical measurements. Illustrating the advantage of tracing the gas disk at large radii, that paper's Figure 16 combines the optical data (the cluster of points at radii of less than 15 kpc, with a single point further out) with the H I data between 20 and 30 kpc, exhibiting the flatness of the outer galaxy rotation curve; the solid curve peaking at the center is the optical surface density, while the other curve shows the cumulative mass, still rising linearly at the outermost measurement. In parallel, the use of interferometric arrays for extragalactic H I spectroscopy was being developed. Rogstad & Shostak (1972) published H I rotation curves of five spirals mapped with the Owens Valley interferometer; the rotation curves of all five were very flat, suggesting very large values of mass-to-light ratio in the outer parts of their extended H I disks. In 1978, Albert Bosma showed further evidence of flat rotation curves using data from the Westerbork Synthesis Radio Telescope. By the late 1970s the existence of dark matter halos around galaxies was widely recognized as real, and became a major unsolved problem in astronomy. 1980–1990s A stream of observations in the 1980s and 1990s supported the presence of dark matter, including a notable investigation of the rotation curves of 967 spiral galaxies. The evidence for dark matter also included gravitational lensing of background objects by galaxy clusters, the temperature distribution of hot gas in galaxies and clusters, and the pattern of anisotropies in the cosmic microwave background. According to the current consensus among cosmologists, dark matter is composed primarily of some type of not-yet-characterized subatomic particle. The search for this particle, by a variety of means, is one of the major efforts in particle physics. Technical definition In standard cosmological calculations, "matter" means any constituent of the universe whose energy density scales with the inverse cube of the scale factor, i.e., $\rho \propto a^{-3}$. This is in contrast to "radiation", which scales as the inverse fourth power of the scale factor ($\rho \propto a^{-4}$), and a cosmological constant, which does not change with respect to $a$ ($\rho \propto a^{0}$). The different scaling factors for matter and radiation are a consequence of radiation redshift. For example, after doubling the diameter of the observable Universe via cosmic expansion, the scale, $a$, has doubled. The energy of the cosmic microwave background radiation has been halved (because the wavelength of each photon has doubled); the energy of ultra-relativistic particles, such as early-era standard-model neutrinos, is similarly halved. The cosmological constant, as an intrinsic property of space, has a constant energy density regardless of the volume under consideration. In principle, "dark matter" means all components of the universe which are not visible but still obey $\rho \propto a^{-3}$. In practice, the term "dark matter" is often used to mean only the non-baryonic component of dark matter, i.e., excluding "missing baryons". Context will usually indicate which meaning is intended. Observational evidence Galaxy rotation curves The arms of spiral galaxies rotate around their galactic center. The luminous mass density of a spiral galaxy decreases as one goes from the center to the outskirts. If luminous mass were all the matter, then the galaxy could be modelled as a point mass in the centre with test masses orbiting around it, similar to the Solar System.
From Kepler's Third Law, it is expected that the rotation velocities will decrease with distance from the center, similar to the Solar System. This is not observed. Instead, the galaxy rotation curve remains flat or even increases as distance from the center increases. If Kepler's laws are correct, then the obvious way to resolve this discrepancy is to conclude the mass distribution in spiral galaxies is not similar to that of the Solar System. In particular, there may be a lot of non-luminous matter (dark matter) in the outskirts of the galaxy. Velocity dispersions Stars in bound systems must obey the virial theorem. The theorem, together with the measured velocity distribution, can be used to measure the mass distribution in a bound system, such as elliptical galaxies or globular clusters. With some exceptions, velocity dispersion estimates of elliptical galaxies do not match the predicted velocity dispersion from the observed mass distribution, even assuming complicated distributions of stellar orbits. As with galaxy rotation curves, the obvious way to resolve the discrepancy is to postulate the existence of non-luminous matter. Galaxy clusters Galaxy clusters are particularly important for dark matter studies since their masses can be estimated in three independent ways: From the scatter in radial velocities of the galaxies within clusters From X-rays emitted by hot gas in the clusters. From the X-ray energy spectrum and flux, the gas temperature and density can be estimated, hence giving the pressure; assuming pressure and gravity balance determines the cluster's mass profile. Gravitational lensing (usually of more distant galaxies) can measure cluster masses without relying on observations of dynamics (e.g., velocity). Generally, these three methods are in reasonable agreement that dark matter outweighs visible matter by approximately 5 to 1. Gravitational lensing One of the consequences of general relativity is the gravitational lens. Gravitational lensing occurs when massive objects between a source of light and the observer act as a lens to bend light from this source. Lensing does not depend on the properties of the mass; it only requires there to be a mass. The more massive an object, the more lensing is observed. An example is a cluster of galaxies lying between a more distant source such as a quasar and an observer. In this case, the galaxy cluster will lens the quasar. Strong lensing is the observed distortion of background galaxies into arcs when their light passes through such a gravitational lens. It has been observed around many distant clusters including Abell 1689. By measuring the distortion geometry, the mass of the intervening cluster can be obtained. In the weak regime, lensing does not distort background galaxies into arcs, causing minute distortions instead. By examining the apparent shear deformation of the adjacent background galaxies, the mean distribution of dark matter can be characterized. The measured mass-to-light ratios correspond to dark matter densities predicted by other large-scale structure measurements. Cosmic microwave background Although both dark matter and ordinary matter are matter, they do not behave in the same way. In particular, in the early universe, ordinary matter was ionized and interacted strongly with radiation via Thomson scattering. 
Dark matter does not interact directly with radiation, but it does affect the cosmic microwave background (CMB) by its gravitational potential (mainly on large scales) and by its effects on the density and velocity of ordinary matter. Ordinary and dark matter perturbations, therefore, evolve differently with time and leave different imprints on the CMB. The CMB is very close to a perfect blackbody but contains very small temperature anisotropies of a few parts in 100,000. A sky map of anisotropies can be decomposed into an angular power spectrum, which is observed to contain a series of acoustic peaks at near-equal spacing but different heights. The locations of these peaks depend on cosmological parameters. Matching theory to data, therefore, constrains cosmological parameters. The CMB anisotropy was first discovered by COBE in 1992, though this had too coarse resolution to detect the acoustic peaks. After the discovery of the first acoustic peak by the balloon-borne BOOMERanG experiment in 2000, the power spectrum was precisely observed by WMAP in 2003–2012, and even more precisely by the Planck spacecraft in 2013–2015. The results support the Lambda-CDM model. The observed CMB angular power spectrum provides powerful evidence in support of dark matter, as its precise structure is well fitted by the Lambda-CDM model, but difficult to reproduce with any competing model such as modified Newtonian dynamics (MOND). Structure formation Structure formation refers to the period after the Big Bang when density perturbations collapsed to form stars, galaxies, and clusters. Prior to structure formation, the Friedmann solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures. Ordinary matter is affected by radiation, which is the dominant element of the universe at very early times. As a result, its density perturbations are washed out and unable to condense into structure. If there were only ordinary matter in the universe, there would not have been enough time for density perturbations to grow into the galaxies and clusters currently seen. Dark matter provides a solution to this problem because it is unaffected by radiation. Therefore, its density perturbations can grow first. The resulting gravitational potential acts as an attractive potential well for ordinary matter collapsing later, speeding up the structure formation process. Bullet Cluster The Bullet Cluster is the result of a recent collision of two galaxy clusters. It is of particular note because the location of the center of mass as measured by gravitational lensing is different from the location of the center of mass of visible matter. This is difficult for modified gravity theories, which generally predict lensing around visible matter, to explain. Standard dark matter theory however has no issue: the hot, visible gas in each cluster would be cooled and slowed down by electromagnetic interactions, while dark matter (which does not interact electromagnetically) would not. This leads to the dark matter separating from the visible gas, producing the separate lensing peak as observed. Type Ia supernova distance measurements Type Ia supernovae can be used as standard candles to measure extragalactic distances, which can in turn be used to measure how fast the universe has expanded in the past. 
Data indicates the universe is expanding at an accelerating rate, the cause of which is usually ascribed to dark energy. Since observations indicate the universe is almost flat, it is expected the total energy density of everything in the universe should sum to 1 (\(\Omega_{\rm tot} \approx 1\)). The measured dark energy density is \(\Omega_\Lambda \approx 0.69\); the observed ordinary (baryonic) matter energy density is \(\Omega_b \approx 0.05\) and the energy density of radiation is negligible. This leaves a missing \(\Omega_{\rm dm} \approx 0.26\), which nonetheless behaves like matter (see technical definition section above): dark matter. Sky surveys and baryon acoustic oscillations Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe and can be observed in the cosmic microwave background angular power spectrum. BAOs set up a preferred length scale for baryons. As the dark matter and baryons clumped together after recombination, the effect is much weaker in the galaxy distribution in the nearby universe, but is detectable as a subtle (≈1 percent) preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130–160 Mpc. This feature was predicted theoretically in the 1990s and then discovered in 2005, in two large galaxy redshift surveys, the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Combining the CMB observations with BAO measurements from galaxy redshift surveys provides a precise estimate of the Hubble constant and the average matter density in the Universe. The results support the Lambda-CDM model. Redshift-space distortions Large galaxy redshift surveys may be used to make a three-dimensional map of the galaxy distribution. These maps are slightly distorted because distances are estimated from observed redshifts; the redshift contains a contribution from the galaxy's so-called peculiar velocity in addition to the dominant Hubble expansion term. On average, superclusters are expanding more slowly than the cosmic mean due to their gravity, while voids are expanding faster than average. In a redshift map, galaxies in front of a supercluster have excess radial velocities towards it and have redshifts slightly higher than their distance would imply, while galaxies behind the supercluster have redshifts slightly too low for their distance. This effect causes superclusters to appear squashed in the radial direction, and likewise voids are stretched. Their angular positions are unaffected. This effect is not detectable for any one structure since the true shape is not known, but can be measured by averaging over many structures. It was predicted quantitatively by Nick Kaiser in 1987, and first decisively measured in 2001 by the 2dF Galaxy Redshift Survey. Results are in agreement with the Lambda-CDM model. Lyman-alpha forest In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data. Theoretical classifications Dark matter can be divided into cold, warm, and hot categories. These categories refer to velocity rather than an actual temperature, and indicate how far corresponding objects moved due to random motions in the early universe, before they slowed due to cosmic expansion.
This distance is called the free streaming length (FSL). The categories of dark matter are set with respect to the size of a protogalaxy (an object that later evolves into a dwarf galaxy): dark matter particles are classified as cold, warm, or hot if their FSL is much smaller (cold), similar to (warm), or much larger (hot) than a protogalaxy. Mixtures of the above are also possible: a theory of mixed dark matter was popular in the mid-1990s, but was rejected following the discovery of dark energy. The significance of the free streaming length is that the universe began with some primordial density fluctuations from the Big Bang (in turn arising from quantum fluctuations at the microscale). Particles from overdense regions will naturally spread to underdense regions, but because the universe is expanding quickly, there is a time limit for them to do so. Faster particles (hot dark matter) can beat the time limit while slower particles cannot. The particles travel a free streaming length's worth of distance within the time limit; therefore this length sets a minimum scale for later structure formation. Because galaxy-size density fluctuations get washed out by free-streaming, hot dark matter implies the first objects that can form are huge supercluster-size pancakes, which then fragment into galaxies, while the reverse is true for cold dark matter. Deep-field observations show that galaxies formed first, followed by clusters and superclusters as galaxies clump together, and therefore that most dark matter is cold. This is also the reason why neutrinos, which move at nearly the speed of light and therefore would fall under hot dark matter, cannot make up the bulk of dark matter. Composition The identity of dark matter is unknown, but there are many hypotheses about what dark matter could consist of, as set out in the table below. Baryonic matter Dark matter can refer to any substance which interacts predominantly via gravity with visible matter (e.g., stars and planets). Hence in principle it need not be composed of a new type of fundamental particle but could, at least in part, be made up of standard baryonic matter, such as protons or neutrons. Most of the ordinary matter familiar to astronomers, including planets, brown dwarfs, red dwarfs, visible stars, white dwarfs, neutron stars, and black holes, fall into this category. A black hole would ingest both baryonic and non-baryonic matter that comes close enough to its event horizon; afterwards, the distinction between the two is lost. These massive objects that are hard to detect are collectively known as MACHOs. Some scientists initially hoped that baryonic MACHOs could account for and explain all the dark matter. However, multiple lines of evidence suggest the majority of dark matter is not baryonic: Sufficient diffuse, baryonic gas or dust would be visible when backlit by stars. The theory of Big Bang nucleosynthesis predicts the observed abundance of the chemical elements. If there are more baryons, then there should also be more helium, lithium and heavier elements synthesized during the Big Bang. Agreement with observed abundances requires that baryonic matter makes up between 4–5% of the universe's critical density. In contrast, large-scale structure and other observations indicate that the total matter density is about 30% of the critical density. 
Astronomical searches for gravitational microlensing in the Milky Way found that at most only a small fraction of the dark matter may be in dark, compact, conventional objects (MACHOs, etc.); the excluded range of object masses is from half the Earth's mass up to 30 solar masses, which covers nearly all the plausible candidates. Detailed analysis of the small irregularities (anisotropies) in the cosmic microwave background by WMAP and Planck indicates that around five-sixths of the total matter is in a form that interacts significantly with ordinary matter or photons only through gravitational effects. Non-baryonic matter If baryonic matter cannot make up most of dark matter, then dark matter must be non-baryonic. There are two main candidates for non-baryonic dark matter: new hypothetical particles and primordial black holes. Unlike baryonic matter, nonbaryonic particles do not contribute to the formation of the elements in the early universe (Big Bang nucleosynthesis) and so their presence is felt only via gravitational effects (such as weak lensing). In addition, some dark matter candidates can interact with themselves (self-interacting dark matter) or with ordinary particles (e.g. WIMPs or Weakly Interacting Massive Particles), possibly resulting in observable by-products such as gamma rays and neutrinos (indirect detection). Candidates abound (see the table above), each with their own strengths and weaknesses. Undiscovered massive particles There exists no formal definition of a Weakly Interacting Massive Particle, but broadly, it is an elementary particle which interacts via gravity and any other force (or forces) which is as weak as or weaker than the weak nuclear force, but also non-vanishing in strength. Many WIMP candidates are expected to have been produced thermally in the early Universe, similarly to the particles of the Standard Model according to Big Bang cosmology, and usually will constitute cold dark matter. Obtaining the correct abundance of dark matter today via thermal production requires a self-annihilation cross section of roughly \(\langle\sigma v\rangle \approx 3 \times 10^{-26}\ \mathrm{cm^3\,s^{-1}}\), which is roughly what is expected for a new particle in the 100 GeV mass range that interacts via the electroweak force. Because supersymmetric extensions of the Standard Model of particle physics readily predict a new particle with these properties, this apparent coincidence is known as the "WIMP miracle", and a stable supersymmetric partner has long been a prime explanation for dark matter. Experimental efforts to detect WIMPs include the search for products of WIMP annihilation, including gamma rays, neutrinos and cosmic rays in nearby galaxies and galaxy clusters; direct detection experiments designed to measure the collision of WIMPs with nuclei in the laboratory, as well as attempts to directly produce WIMPs in colliders, such as the Large Hadron Collider at CERN. In the early 2010s, results from direct-detection experiments along with the lack of evidence for supersymmetry at the Large Hadron Collider (LHC) experiment have cast doubt on the simplest WIMP hypothesis. Undiscovered ultralight particles Axions are hypothetical elementary particles originally theorized in 1978 independently by Frank Wilczek and Steven Weinberg as the Goldstone boson of Peccei–Quinn theory, which had been proposed in 1977 to solve the strong CP problem in quantum chromodynamics (QCD). QCD effects produce an effective periodic potential in which the axion field moves.
Expanding the potential about one of its minima, one finds that the product of the axion mass with the axion decay constant is determined by the topological susceptibility of the QCD vacuum. An axion with mass much less than 60 keV is long-lived and weakly interacting: a perfect dark matter candidate. The oscillations of the axion field about the minimum of the effective potential, the so-called misalignment mechanism, generate a cosmological population of cold axions with an abundance depending on the mass of the axion. With a mass above 5 μeV/c2 (about \(10^{-11}\) times the electron mass) axions could account for dark matter, and thus be both a dark-matter candidate and a solution to the strong CP problem. If inflation occurs at a low scale and lasts sufficiently long, the axion mass can be as low as 1 peV/c2. Because axions have extremely low mass, their de Broglie wavelength is very large, in turn meaning that quantum effects could help resolve the small-scale problems of the Lambda-CDM model. A single ultralight axion with a decay constant at the grand unified theory scale provides the correct relic density without fine-tuning. Axions as a dark matter candidate have gained in popularity in recent years because of the non-detection of WIMPs. Primordial black holes Primordial black holes are hypothetical black holes that formed soon after the Big Bang. In the inflationary era and early radiation-dominated universe, extremely dense pockets of subatomic matter may have been tightly packed to the point of gravitational collapse, creating primordial black holes without the supernova compression typically needed to make black holes today. Because the creation of primordial black holes would pre-date the first stars, they are not limited to the narrow mass range of stellar black holes and are also not classified as baryonic dark matter. The idea that black holes could form in the early universe was first suggested by Yakov Zeldovich and Igor Dmitriyevich Novikov in 1967, and independently by Stephen Hawking in 1971. It quickly became clear that such black holes might account for at least part of dark matter. Primordial black holes as a dark matter candidate have the major advantage that they are based on a well-understood theory (general relativity) and objects (black holes) that are already known to exist. However, producing primordial black holes requires exotic cosmic inflation or physics beyond the standard model of particle physics, and might also require fine-tuning. Primordial black holes can also span nearly the entire possible mass range, from atom-sized to supermassive. The idea that primordial black holes make up dark matter gained prominence in 2015 following results of gravitational wave measurements which detected the merger of intermediate-mass black holes. Black holes with about 30 solar masses are not predicted to form either by stellar collapse (typically less than 15 solar masses) or by the merger of black holes in galactic centers (millions or billions of solar masses), which suggests that the detected black holes might be primordial. A later survey of about a thousand supernovae detected no gravitational lensing events, when about eight would be expected if intermediate-mass primordial black holes above a certain mass range accounted for over 60% of dark matter. However, that study assumed that all black holes have the same or similar mass to the LIGO/Virgo mass range, which might not be the case (as suggested by subsequent James Webb Space Telescope observations).
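Constraints on the lightest primordial black holes rest largely on Hawking evaporation, which the next passage turns to. As a rough numerical sketch (placeholder masses, and ignoring greybody factors and the details of particle emission), the standard Hawking temperature and evaporation-time formulas can be evaluated directly:

import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34    # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s
K_B = 1.381e-23     # Boltzmann constant, J/K
M_SUN = 1.989e30    # solar mass, kg
YEAR = 3.156e7      # seconds in a year

def hawking_temperature_K(mass_kg):
    """Hawking temperature T = hbar c^3 / (8 pi G M k_B)."""
    return HBAR * C ** 3 / (8 * math.pi * G * mass_kg * K_B)

def evaporation_time_s(mass_kg):
    """Approximate evaporation time t = 5120 pi G^2 M^3 / (hbar c^4)."""
    return 5120 * math.pi * G ** 2 * mass_kg ** 3 / (HBAR * C ** 4)

for mass in (1e12, 1e20, M_SUN):   # placeholder masses: small PBH, asteroid-like PBH, solar
    T = hawking_temperature_K(mass)
    t = evaporation_time_s(mass) / YEAR
    print(f"M = {mass:.3e} kg: T_Hawking = {T:.3e} K, lifetime ~ {t:.3e} yr")

# Low-mass black holes are hot and comparatively short-lived, which is why their Hawking
# radiation (or its absence in particle-flux data) is a useful handle on them.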
The possibility that atom-sized primordial black holes account for a significant fraction of dark matter was ruled out by measurements of positron and electron fluxes outside the Sun's heliosphere by the Voyager 1 spacecraft. Tiny black holes are theorized to emit Hawking radiation. However, the detected fluxes were too low and did not have the expected energy spectrum, suggesting that tiny primordial black holes are not widespread enough to account for dark matter. Nonetheless, research and theories proposing that dense dark matter objects account for dark matter continue as of 2018, including approaches to dark matter cooling, and the question remains unsettled. In 2019, the lack of microlensing effects in observations of Andromeda suggested that tiny black holes do not exist. Nonetheless, there still exists a largely unconstrained mass range smaller than that which can be limited by optical microlensing observations, where primordial black holes may account for all dark matter. Dark matter aggregation and dense dark matter objects If dark matter is composed of weakly interacting particles, then an obvious question is whether it can form objects equivalent to planets, stars, or black holes. Historically, the answer has been that it cannot, because of two factors: It lacks an efficient means to lose energy Ordinary matter forms dense objects because it has numerous ways to lose energy. Losing energy would be essential for object formation, because a particle that gains energy during compaction or falling "inward" under gravity, and cannot lose it any other way, will heat up and increase velocity and momentum. Dark matter appears to lack a means to lose energy, simply because it is not capable of interacting strongly through any means other than gravity. The virial theorem suggests that such a particle would not stay bound to the gradually forming object – as the object began to form and compact, the dark matter particles within it would speed up and tend to escape. It lacks a diversity of interactions needed to form structures Ordinary matter interacts in many different ways, which allows the matter to form more complex structures. For example, stars form through gravity, but the particles within them interact and can emit energy in the form of neutrinos and electromagnetic radiation through fusion when they become energetic enough. Protons and neutrons can bind via the strong interaction and then form atoms with electrons largely through electromagnetic interaction. There is no evidence that dark matter is capable of such a wide variety of interactions, since it seems to interact only through gravity (and possibly through some means no stronger than the weak interaction, although until dark matter is better understood, this is only speculation). Detection of dark matter particles If dark matter is made up of subatomic particles, then millions, possibly billions, of such particles must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis. Although WIMPs have been the main search candidates, axions have drawn renewed attention, with the Axion Dark Matter Experiment (ADMX) searching for axions and many more experiments planned for the future. Another candidate is heavy hidden sector particles, which interact with ordinary matter only via gravity.
These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector; and indirect detection, which look for the products of dark matter particle annihilations or decays. Direct detection Direct detection experiments aim to observe low-energy recoils (typically a few keVs) of nuclei induced by interactions with particles of dark matter, which (in theory) are passing through the Earth. After such a recoil, the nucleus will emit energy in the form of scintillation light or phonons as they pass through sensitive detection apparatus. To do so effectively, it is crucial to maintain an extremely low background, which is the reason why such experiments typically operate deep underground, where interference from cosmic rays is minimized. Examples of underground laboratories with direct detection experiments include the Stawell mine, the Soudan mine, the SNOLAB underground laboratory at Sudbury, the Gran Sasso National Laboratory, the Canfranc Underground Laboratory, the Boulby Underground Laboratory, the Deep Underground Science and Engineering Laboratory and the China Jinping Underground Laboratory. These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include such projects as CDMS, CRESST, EDELWEISS, and EURECA, while noble liquid experiments include LZ, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (that scatter off nuclei). Other experiments include SIMPLE and PICASSO, which use alternative methods in their attempts to detect dark matter. Currently there has been no well-established claim of dark matter detection from a direct detection experiment, leading instead to strong upper limits on the mass and interaction cross section with nucleons of such dark matter particles. The DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matter. This results from the expectation that as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount. This claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX, SuperCDMS and XENON100. A special case of direct detection experiments covers those with directional sensitivity. This is a search strategy based on the motion of the Solar System around the Galactic Center. A low-pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun travels (approximately towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC. Indirect detection Indirect detection experiments search for the products of the self-annihilation or decay of dark matter particles in outer space. 
For example, in regions of high dark matter density (e.g., the centre of the Milky Way) two dark matter particles could annihilate to produce gamma rays or Standard Model particle–antiparticle pairs. Alternatively, if a dark matter particle is unstable, it could decay into Standard Model (or other) particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from high density regions in the Milky Way and other galaxies. A major difficulty inherent in such searches is that various astrophysical sources can mimic the signal expected from dark matter, and so multiple signals are likely required for a conclusive discovery. A few of the dark matter particles passing through the Sun or Earth may scatter off atoms and lose energy. Thus dark matter may accumulate at the center of these bodies, increasing the chance of collision/annihilation. This could produce a distinctive signal in the form of high-energy neutrinos. Such a signal would be strong indirect proof of WIMP dark matter. High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal. The detection by LIGO in September 2015 of gravitational waves opens the possibility of observing dark matter in a new way, particularly if it is in the form of primordial black holes. Many experimental searches have been undertaken to look for such emission from dark matter annihilation or decay, examples of which follow. The Energetic Gamma Ray Experiment Telescope observed more gamma rays in 2008 than expected from the Milky Way, but scientists concluded this was most likely due to incorrect estimation of the telescope's sensitivity. The Fermi Gamma-ray Space Telescope is searching for similar gamma rays. In 2009, an as yet unexplained surplus of gamma rays from the Milky Way's galactic center was found in Fermi data. This Galactic Center GeV excess might be due to dark matter annihilation or to a population of pulsars. In April 2012, an analysis of previously available data from Fermi's Large Area Telescope instrument produced statistical evidence of a 130 GeV signal in the gamma radiation coming from the center of the Milky Way. WIMP annihilation was seen as the most probable explanation. At higher energies, ground-based gamma-ray telescopes have set limits on the annihilation of dark matter in dwarf spheroidal galaxies and in clusters of galaxies. The PAMELA experiment (launched in 2006) detected excess positrons. They could be from dark matter annihilation or from pulsars. No excess antiprotons were observed. In 2013, results from the Alpha Magnetic Spectrometer on the International Space Station indicated excess high-energy cosmic rays which could be due to dark matter annihilation. Collider searches for dark matter An alternative approach to the detection of dark matter particles in nature is to produce them in a laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect dark matter particles produced in collisions of the LHC proton beams. Because a dark matter particle should have negligible interactions with normal visible matter, it may be detected indirectly as (large amounts of) missing energy and momentum that escape the detectors, provided other (non-negligible) collision products are detected. Constraints on dark matter also exist from the LEP experiment using a similar principle, but probing the interaction of dark matter particles with electrons rather than quarks. 
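As an illustration of the missing-momentum signature described above, the sketch below computes the missing transverse momentum of a toy collider event as the negative vector sum of the visible particles' transverse momenta; the particle momenta are invented placeholder values, not data from any real experiment.

import math

def missing_transverse_momentum(visible_particles):
    """Return (magnitude, phi) of the missing transverse momentum, defined as the
    negative vector sum of the visible particles' (px, py) components in GeV."""
    sum_px = sum(px for px, _ in visible_particles)
    sum_py = sum(py for _, py in visible_particles)
    met_x, met_y = -sum_px, -sum_py
    return math.hypot(met_x, met_y), math.atan2(met_y, met_x)

# Toy event: two jets and a lepton, (px, py) in GeV -- placeholder values.
event = [(120.0, 35.0), (-60.0, 80.0), (10.0, -25.0)]
met, phi = missing_transverse_momentum(event)
print(f"Missing transverse momentum: {met:.1f} GeV at phi = {phi:.2f} rad")

# A large momentum imbalance recoiling against energetic visible objects is the kind of
# signature a weakly interacting particle escaping the detector unseen would leave.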
Any discovery from collider searches must be corroborated by discoveries in the indirect or direct detection sectors to prove that the particle discovered is, in fact, dark matter. Alternative hypotheses Because dark matter has not yet been identified, many other hypotheses have emerged aiming to explain the same observational phenomena without introducing a new unknown type of matter. The theory underpinning most observational evidence for dark matter, general relativity, is well-tested on Solar System scales, but its validity on galactic or cosmological scales has not been well proven. A suitable modification to general relativity can in principle conceivably eliminate the need for dark matter. The best-known theories of this class are MOND and its relativistic generalization tensor–vector–scalar gravity (TeVeS), f(R) gravity, negative mass, dark fluid, and entropic gravity. Alternative theories abound. A problem with alternative hypotheses is that observational evidence for dark matter comes from so many independent approaches (see the "observational evidence" section above). Explaining any individual observation is possible but explaining all of them in the absence of dark matter is very difficult. Nonetheless, there have been some scattered successes for alternative hypotheses, such as a 2016 test of gravitational lensing in entropic gravity and a 2020 measurement of a unique MOND effect. The prevailing opinion among most astrophysicists is that while modifications to general relativity can conceivably explain part of the observational evidence, there is probably enough data to conclude there must be some form of dark matter present in the universe. In popular culture Dark matter regularly appears as a topic in hybrid periodicals that cover both factual scientific topics and science fiction, and dark matter itself has been referred to as "the stuff of science fiction". Mention of dark matter is made in works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties, thus becoming inconsistent with the hypothesized properties of dark matter in physics and cosmology. For example: Dark matter serves as a plot device in the 1995 X-Files episode "Soft Light". A dark-matter-inspired substance known as "Dust" features prominently in Philip Pullman's His Dark Materials trilogy. Beings made of dark matter are antagonists in Stephen Baxter's Xeelee Sequence. More broadly, the phrase "dark matter" is used metaphorically in fiction to evoke the unseen or invisible. Gallery See also Related theories Density wave theory – A theory in which waves of compressed gas, which move slower than the galaxy, maintain galaxy's structure Experiments , a search apparatus , large underground dark matter detector , a space mission , a research program , astrophysical simulations , a particle accelerator research infrastructure Dark matter candidates Weakly interacting slim particle (WISP)Low-mass counterpart to WIMP Other Luminiferous aether – A once theorized invisible and infinite material with no interaction with physical objects, used to explain how light could travel through a vacuum (now disproven) Notes References Further reading (Recommended on BookAuthrority site)) Weiss, Rainer, (July/August 2023) "The Dark Universe Comes into Focus" Scientific American, vol. 329, no. 1, pp. 7–8. 
External links Celestial mechanics Large-scale structure of the cosmos Physics beyond the Standard Model Astroparticle physics Exotic matter Matter Concepts in astronomy Unsolved problems in astronomy Articles containing video clips Dark concepts in astrophysics
Dark matter
[ "Physics", "Astronomy" ]
9,000
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Astroparticle physics", "Unsolved problems in physics", "Classical mechanics", "Astrophysics", "Dark concepts in astrophysics", "Astronomical controversies", "Particle physics", "Exotic matter", "Celestial mechanics", ...
8,697
https://en.wikipedia.org/wiki/DNA%20ligase
DNA ligase is a type of enzyme that facilitates the joining of DNA strands together by catalyzing the formation of a phosphodiester bond. It plays a role in repairing single-strand breaks in duplex DNA in living organisms, but some forms (such as DNA ligase IV) may specifically repair double-strand breaks (i.e. a break in both complementary strands of DNA). Single-strand breaks are repaired by DNA ligase using the complementary strand of the double helix as a template, with DNA ligase creating the final phosphodiester bond to fully repair the DNA. DNA ligase is used in both DNA repair and DNA replication (see Mammalian ligases). In addition, DNA ligase has extensive use in molecular biology laboratories for recombinant DNA experiments (see Research applications). Purified DNA ligase is used in gene cloning to join DNA molecules together to form recombinant DNA. Enzymatic mechanism The mechanism of DNA ligase is to form two covalent phosphodiester bonds between 3' hydroxyl ends of one nucleotide ("acceptor"), with the 5' phosphate end of another ("donor"). Two ATP molecules are consumed for each phosphodiester bond formed. AMP is required for the ligase reaction, which proceeds in four steps: Reorganization of activity site such as nicks in DNA segments or Okazaki fragments etc. Adenylylation (addition of AMP) of a lysine residue in the active center of the enzyme, pyrophosphate is released; Transfer of the AMP to the 5' phosphate of the so-called donor, formation of a pyrophosphate bond; Formation of a phosphodiester bond between the 5' phosphate of the donor and the 3' hydroxyl of the acceptor. Ligase will also work with blunt ends, although higher enzyme concentrations and different reaction conditions are required. Types E. coli The E. coli DNA ligase is encoded by the lig gene. DNA ligase in E. coli, as well as most prokaryotes, uses energy gained by cleaving nicotinamide adenine dinucleotide (NAD) to create the phosphodiester bond. It does not ligate blunt-ended DNA except under conditions of molecular crowding with polyethylene glycol, and cannot join RNA to DNA efficiently. The activity of E. coli DNA ligase can be enhanced by DNA polymerase at the right concentrations. Enhancement only works when the concentrations of the DNA polymerase 1 are much lower than the DNA fragments to be ligated. When the concentrations of Pol I DNA polymerases are higher, it has an adverse effect on E. coli DNA ligase T4 The DNA ligase from bacteriophage T4 (a bacteriophage that infects Escherichia coli bacteria). The T4 ligase is the most-commonly used in laboratory research. It can ligate either cohesive or blunt ends of DNA, oligonucleotides, as well as RNA and RNA-DNA hybrids, but not single-stranded nucleic acids. It can also ligate blunt-ended DNA with much greater efficiency than E. coli DNA ligase. Unlike E. coli DNA ligase, T4 DNA ligase cannot utilize NAD and it has an absolute requirement for ATP as a cofactor. Some engineering has been done to improve the in vitro activity of T4 DNA ligase; one successful approach, for example, tested T4 DNA ligase fused to several alternative DNA binding proteins and found that the constructs with either p50 or NF-kB as fusion partners were over 160% more active in blunt-end ligations for cloning purposes than wild type T4 DNA ligase. A typical reaction for inserting a fragment into a plasmid vector would use about 0.01 (sticky ends) to 1 (blunt ends) units of ligase. The optimal incubation temperature for T4 DNA ligase is 16 °C. 
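As a toy illustration of the sequence bookkeeping behind joining compatible ("sticky") ends, rather than of the enzymatic chemistry described above, the sketch below checks whether two fragments carry complementary 5' overhangs before concatenating them; the sequences and overhangs are made-up examples.

# Toy model of sticky-end ligation: join two fragments only if their 5' overhangs
# can anneal. Sequences are written 5'->3' on the top strand; this is bookkeeping
# only, not a simulation of the adenylylation/phosphodiester-bond mechanism.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def can_anneal(overhang_a, overhang_b):
    """Two single-stranded overhangs can anneal if one is the reverse complement of the other."""
    return overhang_b == reverse_complement(overhang_a)

def ligate(upstream, overhang_a, overhang_b, downstream):
    """Join two fragments across compatible overhangs; return None if the ends do not match."""
    if can_anneal(overhang_a, overhang_b):
        return upstream + overhang_a + downstream
    return None

print(ligate("GCTAGC", "AATT", "AATT", "ATGGCTAAG"))  # EcoRI-style AATT ends anneal -> joined
print(ligate("GCTAGC", "AATT", "GATC", "ATGGCTAAG"))  # incompatible ends -> None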
Bacteriophage T4 ligase mutants have increased sensitivity to both UV irradiation and the alkylating agent methyl methanesulfonate, indicating that DNA ligase is employed in the repair of the DNA damage caused by these agents. Mammalian In mammals, there are four specific types of ligase. DNA ligase 1: ligates the nascent DNA of the lagging strand after ribonuclease H has removed the RNA primer from the Okazaki fragments. DNA ligase 3: complexes with DNA repair protein XRCC1 to aid in sealing DNA during the process of nucleotide excision repair and the joining of recombinant fragments. Of all the known mammalian DNA ligases, only ligase 3 has been found to be present in mitochondria. DNA ligase 4: complexes with XRCC4. It catalyzes the final step in the non-homologous end joining DNA double-strand break repair pathway. It is also required for V(D)J recombination, the process that generates diversity in immunoglobulin and T-cell receptor loci during immune system development. DNA ligase 2: a purification artifact resulting from proteolytic degradation of DNA ligase 3. Initially, it was recognized as another DNA ligase, which is the reason for the unusual nomenclature of DNA ligases. DNA ligase from eukaryotes and some microbes uses adenosine triphosphate (ATP) rather than NAD. Thermostable Derived from a thermophilic bacterium, thermostable DNA ligase is stable and active at much higher temperatures than conventional DNA ligases. Its half-life is 48 hours at 65 °C and greater than 1 hour at 95 °C. Ampligase DNA Ligase has been shown to be active for at least 500 thermal cycles (94 °C/80 °C) or 16 hours of cycling. This exceptional thermostability permits extremely high hybridization stringency and ligation specificity. Measurement of activity There are at least three different units used to measure the activity of DNA ligase: Weiss unit - the amount of ligase that catalyzes the exchange of 1 nmole of 32P from inorganic pyrophosphate to ATP in 20 minutes at 37 °C. This is the one most commonly used. Modrich-Lehman unit - this is rarely used, and one unit is defined as the amount of enzyme required to convert 100 nmoles of d(A-T)n to an exonuclease-III resistant form in 30 minutes under standard conditions. Many commercial suppliers of ligases use an arbitrary unit based on the ability of ligase to ligate cohesive ends. These units are often more subjective than quantitative and lack precision. Research applications DNA ligases have become indispensable tools in modern molecular biology research for generating recombinant DNA sequences. For example, DNA ligases are used with restriction enzymes to insert DNA fragments, often genes, into plasmids. Controlling the optimal temperature is a vital aspect of performing efficient recombination experiments involving the ligation of cohesive-ended fragments. Most experiments use T4 DNA ligase (isolated from bacteriophage T4), which is most active at 37 °C. However, for optimal ligation efficiency with cohesive-ended fragments ("sticky ends"), the optimal enzyme temperature needs to be balanced with the melting temperature Tm of the sticky ends being ligated; at temperatures above the Tm, the homologous pairing of the sticky ends will not be stable, because the heat disrupts hydrogen bonding. A ligation reaction is most efficient when the sticky ends are already stably annealed, and disruption of the annealing ends would therefore result in low ligation efficiency. The shorter the overhang, the lower the Tm.
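A rough feel for why short overhangs anneal poorly at 37 °C comes from the Wallace rule of thumb for very short duplexes, Tm ≈ 2 °C per A/T pair plus 4 °C per G/C pair; the sketch below applies it to a few example four-base overhangs. The rule is only a crude guide, and real ligation protocols rely on empirical conditions.

def wallace_tm(overhang):
    """Rough melting temperature (deg C) of a very short duplex by the Wallace rule:
    2 degrees per A/T pair plus 4 degrees per G/C pair. A rule of thumb only."""
    at = sum(1 for base in overhang.upper() if base in "AT")
    gc = sum(1 for base in overhang.upper() if base in "GC")
    return 2 * at + 4 * gc

for overhang in ("AATT", "GATC", "TGCA", "CCGG"):
    print(f"{overhang}: Tm ~ {wallace_tm(overhang)} deg C")

# Four-base overhangs melt well below 37 deg C, which is why sticky-end ligations are
# often run cooler than the enzyme's nominal optimum to keep the ends annealed.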
Since blunt-ended DNA fragments have no cohesive ends to anneal, the melting temperature is not a factor to consider within the normal temperature range of the ligation reaction. The limiting factor in blunt end ligation is not the activity of the ligase but rather the number of alignments between DNA fragment ends that occur. The most efficient ligation temperature for blunt-ended DNA would therefore be the temperature at which the greatest number of alignments can occur. The majority of blunt-ended ligations are carried out at 14–25 °C overnight. The absence of stably annealed ends also means that the ligation efficiency is lowered, requiring a higher ligase concentration to be used. A novel use of DNA ligase can be seen in the field of nanochemistry, specifically in DNA origami. DNA-based self-assembly principles have proven useful for organizing nanoscale objects, such as biomolecules, nanomachines, and nanoelectronic and photonic components. Assembly of such nanostructures requires the creation of an intricate mesh of DNA molecules. Although DNA self-assembly is possible without any outside help using different substrates, such as the provision of a cationic surface on aluminium foil, DNA ligase can provide the enzymatic assistance that is required to make DNA lattice structures from DNA overhangs. History The first DNA ligase was purified and characterized in 1967 by the Gellert, Lehman, Richardson, and Hurwitz laboratories. It was first purified and characterized by Weiss and Richardson using a six-step chromatographic-fractionation process beginning with elimination of cell debris and addition of streptomycin, followed by several diethylaminoethyl (DEAE)-cellulose column washes and a final phosphocellulose fractionation. The final extract contained 10% of the activity initially recorded in the E. coli media; along the way it was discovered that ATP and Mg2+ were necessary to optimize the reaction. The common commercially available DNA ligases were originally discovered in bacteriophage T4, E. coli and other bacteria. Disorders Genetic deficiencies in human DNA ligases have been associated with clinical syndromes marked by immunodeficiency, radiation sensitivity, and developmental abnormalities. LIG4 syndrome (ligase IV syndrome) is a rare disease associated with mutations in DNA ligase 4 that interferes with dsDNA break-repair mechanisms. Ligase IV syndrome causes immunodeficiency in individuals and is commonly associated with microcephaly and marrow hypoplasia. A list of prevalent diseases caused by lack of or malfunctioning of DNA ligase is as follows. Xeroderma pigmentosum Xeroderma pigmentosum, which is commonly known as XP, is an inherited condition characterized by an extreme sensitivity to ultraviolet (UV) rays from sunlight. This condition mostly affects the eyes and areas of skin exposed to the sun. Some affected individuals also have problems involving the nervous system. Ataxia-telangiectasia Mutations in the ATM gene cause ataxia–telangiectasia. The ATM gene provides instructions for making a protein that helps control cell division and is involved in DNA repair. This protein plays an important role in the normal development and activity of several body systems, including the nervous system and immune system. The ATM protein assists cells in recognizing damaged or broken DNA strands and coordinates DNA repair by activating enzymes that fix the broken strands. Efficient repair of damaged DNA strands helps maintain the stability of the cell's genetic information.
Affected children typically develop difficulty walking, problems with balance and hand coordination, involuntary jerking movements (chorea), muscle twitches (myoclonus), and disturbances in nerve function (neuropathy). The movement problems typically cause people to require wheelchair assistance by adolescence. People with this disorder also have slurred speech and trouble moving their eyes to look side-to-side (oculomotor apraxia). Fanconi Anemia Fanconi anemia (FA) is a rare, inherited blood disorder that leads to bone marrow failure. FA prevents bone marrow from making enough new blood cells for the body to work normally. FA also can cause the bone marrow to make many faulty blood cells. This can lead to serious health problems, such as leukemia. Bloom syndrome Bloom syndrome results in skin that is sensitive to sun exposure, and usually the development of a butterfly-shaped patch of reddened skin across the nose and cheeks. A skin rash can also appear on other areas that are typically exposed to the sun, such as the back of the hands and the forearms. Small clusters of enlarged blood vessels (telangiectases) often appear in the rash; telangiectases can also occur in the eyes. Other skin features include patches of skin that are lighter or darker than the surrounding areas (hypopigmentation or hyperpigmentation respectively). These patches appear on areas of the skin that are not exposed to the sun, and their development is not related to the rashes. As a drug target In recent studies, human DNA ligase I was used in Computer-aided drug design to identify DNA ligase inhibitors as possible therapeutic agents to treat cancer. Since excessive cell growth is a hallmark of cancer development, targeted chemotherapy that disrupts the functioning of DNA ligase can impede adjuvant cancer forms. Furthermore, it has been shown that DNA ligases can be broadly divided into two categories, namely, ATP- and NAD+-dependent. Previous research has shown that although NAD+-dependent DNA ligases have been discovered in sporadic cellular or viral niches outside the bacterial domain of life, there is no instance in which a NAD+-dependent ligase is present in a eukaryotic organism. The presence solely in non-eukaryotic organisms, unique substrate specificity, and distinctive domain structure of NAD+ dependent compared with ATP-dependent human DNA ligases together make NAD+-dependent ligases ideal targets for the development of new antibacterial drugs. See also DNA end Lagging strand DNA replication Okazaki fragment DNA polymerase Sequencing by ligation References External links DNA Ligase: PDB molecule of the month Davidson College General Information on Ligase OpenWetWare DNA Ligation Protocol EC 6.5 Biotechnology DNA replication Enzymes Genetics techniques
DNA ligase
[ "Engineering", "Biology" ]
2,918
[ "Genetics techniques", "Genetic engineering", "Biotechnology", "DNA replication", "Molecular genetics", "nan" ]
8,724
https://en.wikipedia.org/wiki/Doppler%20effect
The Doppler effect (also Doppler shift) is the change in the frequency of a wave in relation to an observer who is moving relative to the source of the wave. The Doppler effect is named after the physicist Christian Doppler, who described the phenomenon in 1842. A common example of Doppler shift is the change of pitch heard when a vehicle sounding a horn approaches and recedes from an observer. Compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession. When the source of the sound wave is moving towards the observer, each successive cycle of the wave is emitted from a position closer to the observer than the previous cycle. Hence, from the observer's perspective, the time between cycles is reduced, meaning the frequency is increased. Conversely, if the source of the sound wave is moving away from the observer, each cycle of the wave is emitted from a position farther from the observer than the previous cycle, so the arrival time between successive cycles is increased, thus reducing the frequency. For waves that propagate in a medium, such as sound waves, the velocity of the observer and of the source are relative to the medium in which the waves are transmitted. The total Doppler effect in such cases may therefore result from motion of the source, motion of the observer, motion of the medium, or any combination thereof. For waves propagating in vacuum, as is possible for electromagnetic waves or gravitational waves, only the difference in velocity between the observer and the source needs to be considered. History Doppler first proposed this effect in 1842 in his treatise "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels" (On the coloured light of the binary stars and some other stars of the heavens). The hypothesis was tested for sound waves by Buys Ballot in 1845. He confirmed that the sound's pitch was higher than the emitted frequency when the sound source approached him, and lower than the emitted frequency when the sound source receded from him. Hippolyte Fizeau discovered independently the same phenomenon on electromagnetic waves in 1848 (in France, the effect is sometimes called "effet Doppler-Fizeau" but that name was not adopted by the rest of the world as Fizeau's discovery was six years after Doppler's proposal). In Britain, John Scott Russell made an experimental study of the Doppler effect (1848). General In classical physics, where the speeds of the source and the receiver relative to the medium are lower than the speed of waves in the medium, the relationship between observed frequency \(f\) and emitted frequency \(f_0\) is given by \( f = \left( \frac{c \pm v_\text{r}}{c \mp v_\text{s}} \right) f_0 \), where \(c\) is the propagation speed of waves in the medium and \(v_\text{r}\) is the speed of the receiver relative to the medium; in the formula, \(v_\text{r}\) is added to \(c\) if the receiver is moving towards the source, subtracted if the receiver is moving away from the source. \(v_\text{s}\) is the speed of the source relative to the medium; it is subtracted from \(c\) if the source is moving towards the receiver, added if the source is moving away from the receiver. Note this relationship predicts that the frequency will decrease if either source or receiver is moving away from the other. Equivalently, under the assumption that the source is either directly approaching or receding from the observer, \( f = \frac{c_\text{r}}{\lambda} = \frac{c_\text{r}}{c_\text{s}} f_0 \), where \(c_\text{r} = c \pm v_\text{r}\) is the wave's speed relative to the receiver, \(c_\text{s} = c \mp v_\text{s}\) is the wave's speed relative to the source, and \(\lambda\) is the wavelength.
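A direct transcription of this relationship into code makes the sign conventions explicit; the sketch below is illustrative only, with an approximate speed of sound and arbitrary example speeds.

def observed_frequency(f0, c, v_receiver=0.0, v_source=0.0):
    """Classical Doppler-shifted frequency for propagation in a medium.

    f0          emitted frequency
    c           wave speed in the medium
    v_receiver  positive if the receiver moves towards the source
    v_source    positive if the source moves towards the receiver
    """
    return f0 * (c + v_receiver) / (c - v_source)

C_SOUND = 343.0   # approximate speed of sound in air, m/s

print(observed_frequency(440.0, C_SOUND, v_source=+30.0))    # source approaching: pitch rises
print(observed_frequency(440.0, C_SOUND, v_source=-30.0))    # source receding: pitch falls
print(observed_frequency(440.0, C_SOUND, v_receiver=+30.0))  # receiver approaching: pitch rises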
If the source approaches the observer at an angle (but still with a constant speed), the observed frequency that is first heard is higher than the object's emitted frequency. Thereafter, there is a monotonic decrease in the observed frequency as it gets closer to the observer, through equality when it is coming from a direction perpendicular to the relative motion (and was emitted at the point of closest approach; but when the wave is received, the source and observer will no longer be at their closest), and a continued monotonic decrease as it recedes from the observer. When the observer is very close to the path of the object, the transition from high to low frequency is very abrupt. When the observer is far from the path of the object, the transition from high to low frequency is gradual. If the speeds \(v_\text{s}\) and \(v_\text{r}\) are small compared to the speed of the wave, the relationship between observed frequency \(f\) and emitted frequency \(f_0\) is approximately \( f \approx \left( 1 + \frac{\Delta v}{c} \right) f_0 \), i.e. a frequency change \( \Delta f \approx \frac{\Delta v}{c} f_0 \), where \(\Delta v\) is the opposite of the relative speed of the receiver with respect to the source: it is positive when the source and the receiver are moving towards each other. Consequences Assuming a stationary observer and a wave source moving towards the observer at (or exceeding) the speed of the wave, the Doppler equation predicts an infinite (or negative) frequency from the observer's perspective. Thus, the Doppler equation is inapplicable for such cases. If the wave is a sound wave and the sound source is moving faster than the speed of sound, the resulting shock wave creates a sonic boom. Lord Rayleigh predicted the following effect in his classic book on sound: if the observer were moving from the (stationary) source at twice the speed of sound, a musical piece previously emitted by that source would be heard in correct tempo and pitch, but as if played backwards. Applications Sirens A siren on a passing emergency vehicle will start out higher than its stationary pitch, slide down as it passes, and continue lower than its stationary pitch as it recedes from the observer. Astronomer John Dobson explained the effect thus: In other words, if the siren approached the observer directly, the pitch would remain constant, at a higher than stationary pitch, until the vehicle hit him, and then immediately jump to a new lower pitch. Because the vehicle passes by the observer, the radial speed does not remain constant, but instead varies as a function of the angle between his line of sight and the siren's velocity: \( v_\text{radial} = v_\text{s} \cos\theta \), where \(\theta\) is the angle between the object's forward velocity and the line of sight from the object to the observer. Astronomy The Doppler effect for electromagnetic waves such as light is of widespread use in astronomy to measure the speed at which stars and galaxies are approaching or receding from us, resulting in so-called blueshift or redshift, respectively. This may be used to detect if an apparently single star is, in reality, a close binary, to measure the rotational speed of stars and galaxies, or to detect exoplanets. This effect typically happens on a very small scale; there would not be a noticeable difference in visible light to the unaided eye. The use of the Doppler effect in astronomy depends on knowledge of precise frequencies of discrete lines in the spectra of stars. Among the nearby stars, the largest radial velocities with respect to the Sun are +308 km/s (BD-15°4041, also known as LHS 52, 81.7 light-years away) and −260 km/s (Woolley 9722, also known as Wolf 1106 and LHS 64, 78.2 light-years away).
Positive radial speed means the star is receding from the Sun, negative that it is approaching. The relationship between the expansion of the universe and the Doppler effect is not a simple matter of the source moving away from the observer. In cosmology, the redshift of expansion is considered separate from redshifts due to gravity or Doppler motion. Distant galaxies also exhibit peculiar motion distinct from their cosmological recession speeds. If redshifts are used to determine distances in accordance with Hubble's law, then these peculiar motions give rise to redshift-space distortions. Radar The Doppler effect is used in some types of radar, to measure the velocity of detected objects. A radar beam is fired at a moving target – e.g. a motor car, as police use radar to detect speeding motorists – as it approaches or recedes from the radar source. Each successive radar wave has to travel farther to reach the car, before being reflected and re-detected near the source. As each wave has to move farther, the gap between each wave increases, increasing the wavelength. In some situations, the radar beam is fired at the moving car as it approaches, in which case each successive wave travels a lesser distance, decreasing the wavelength. In either situation, calculations from the Doppler effect accurately determine the car's speed. Moreover, the proximity fuze, developed during World War II, relies upon Doppler radar to detonate explosives at the correct time, height, distance, etc. Because the Doppler shift affects the wave incident upon the target as well as the wave reflected back to the radar, the change in frequency observed by a radar due to a target moving at relative speed \(\Delta v\) is twice that from the same target emitting a wave: \( \Delta f = \frac{2\,\Delta v}{c} f_0 \), where \(f_0\) is the transmitted frequency and \(c\) is the speed of the wave. Medical An echocardiogram can, within certain limits, produce an accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. One of the limitations is that the ultrasound beam should be as parallel to the blood flow as possible. Velocity measurements allow assessment of cardiac valve areas and function, abnormal communications between the left and right side of the heart, leaking of blood through the valves (valvular regurgitation), and calculation of the cardiac output. Contrast-enhanced ultrasound using gas-filled microbubble contrast media can be used to improve velocity or other flow-related medical measurements. Although "Doppler" has become synonymous with "velocity measurement" in medical imaging, in many cases it is not the frequency shift (Doppler shift) of the received signal that is measured, but the phase shift (when the received signal arrives). Velocity measurements of blood flow are also used in other fields of medical ultrasonography, such as obstetric ultrasonography and neurology. Velocity measurement of blood flow in arteries and veins based on the Doppler effect is an effective tool for diagnosis of vascular problems like stenosis. Flow measurement Instruments such as the laser Doppler velocimeter (LDV) and acoustic Doppler velocimeter (ADV) have been developed to measure velocities in a fluid flow. The LDV emits a light beam and the ADV emits an ultrasonic acoustic burst, and both measure the Doppler shift in wavelengths of reflections from particles moving with the flow. The actual flow is computed as a function of the water velocity and phase. This technique allows non-intrusive flow measurements, at high precision and high frequency.
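The same round-trip factor of two applies to any measurement that relies on reflections, from radar speed guns to the ultrasound and laser instruments just described. A small sketch (with placeholder numbers for a microwave speed gun) inverts the relation to recover the target speed from a measured shift.

C_LIGHT = 2.998e8   # speed of light, m/s

def target_speed_from_shift(delta_f_hz, f0_hz):
    """Relative target speed (m/s) from a measured reflection Doppler shift, using
    delta_f = 2 * v * f0 / c (valid for speeds much less than c)."""
    return C_LIGHT * delta_f_hz / (2.0 * f0_hz)

f0 = 24.15e9        # placeholder K-band radar frequency, Hz
delta_f = 4500.0    # placeholder measured shift, Hz
v = target_speed_from_shift(delta_f, f0)
print(f"Target speed ~ {v:.1f} m/s ({v * 3.6:.1f} km/h)")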
Velocity profile measurement Developed originally for velocity measurements in medical applications (blood flow), Ultrasonic Doppler Velocimetry (UDV) can measure in real time a complete velocity profile in almost any liquid containing particles in suspension, such as dust, gas bubbles or emulsions. Flows can be pulsating, oscillating, laminar or turbulent, stationary or transient. This technique is fully non-invasive. Satellites Satellite navigation The Doppler shift can be exploited for satellite navigation such as in Transit and DORIS. Satellite communication Doppler also needs to be compensated in satellite communication. Fast moving satellites can have a Doppler shift of dozens of kilohertz relative to a ground station. The speed, and thus the magnitude of the Doppler effect, changes due to Earth's curvature. Dynamic Doppler compensation, where the frequency of a signal is changed progressively during transmission, is used so the satellite receives a constant frequency signal. After realizing that the Doppler shift had not been considered before launch of the Huygens probe of the 2005 Cassini–Huygens mission, the probe trajectory was altered to approach Titan in such a way that its transmissions traveled perpendicular to its direction of motion relative to Cassini, greatly reducing the Doppler shift. The Doppler shift of the direct path can be estimated as \( f_\text{D,dir} = \frac{v_\text{mob}}{\lambda_\text{c}} \cos\theta \cos\phi \), where \(v_\text{mob}\) is the speed of the mobile station, \(\lambda_\text{c}\) is the wavelength of the carrier, \(\theta\) is the elevation angle of the satellite and \(\phi\) is the driving direction with respect to the satellite. The additional Doppler shift due to the satellite moving can be described as \( f_\text{D,sat} = \frac{v_\text{rel,sat}}{\lambda_\text{c}} \), where \(v_\text{rel,sat}\) is the relative speed of the satellite. Audio The Leslie speaker, most commonly associated with and predominantly used with the famous Hammond organ, takes advantage of the Doppler effect by using an electric motor to rotate an acoustic horn around a loudspeaker, sending its sound in a circle. This results at the listener's ear in rapidly fluctuating frequencies of a keyboard note. Vibration measurement A laser Doppler vibrometer (LDV) is a non-contact instrument for measuring vibration. The laser beam from the LDV is directed at the surface of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface. Robotics Dynamic real-time path planning in robotics to aid the movement of robots in a sophisticated environment with moving obstacles often takes advantage of the Doppler effect. Such applications are especially used in competitive robotics where the environment is constantly changing, such as robosoccer. Inverse Doppler effect Since 1968 scientists such as Victor Veselago have speculated about the possibility of an inverse Doppler effect. The size of the Doppler shift depends on the refractive index of the medium a wave is traveling through. Some materials are capable of negative refraction, which should lead to a Doppler shift that works in a direction opposite that of a conventional Doppler shift. The first experiment that detected this effect was conducted by Nigel Seddon and Trevor Bearpark in Bristol, United Kingdom in 2003. Later, the inverse Doppler effect was observed in some inhomogeneous materials, and predicted inside a Vavilov–Cherenkov cone.
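Returning to the satellite-communication figures quoted above, the "dozens of kilohertz" scale follows directly from the low-speed approximation Δf ≈ (v/c) f; the sketch below uses placeholder values for a low-Earth-orbit pass, not parameters of any specific system.

C_LIGHT = 2.998e8      # speed of light, m/s

def doppler_shift_hz(radial_speed_ms, carrier_hz):
    """Approximate Doppler shift for a radial speed much smaller than c."""
    return radial_speed_ms / C_LIGHT * carrier_hz

v_leo = 7.6e3          # placeholder orbital speed of a low-Earth-orbit satellite, m/s
for carrier in (137e6, 2.2e9):   # placeholder VHF and S-band carrier frequencies, Hz
    shift = doppler_shift_hz(v_leo, carrier)
    print(f"carrier {carrier / 1e6:7.1f} MHz: worst-case shift ~ +/- {shift / 1e3:.1f} kHz")

# At S-band the worst-case shift is tens of kilohertz, which is why ground stations
# and receivers apply dynamic Doppler compensation over a pass.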
See also Bistatic Doppler shift Differential Doppler effect Doppler cooling Dopplergraph Fading Fizeau experiment Photoacoustic Doppler effect Range rate Rayleigh fading Redshift Laser Doppler imaging Relativistic Doppler effect Primary sources References Further reading Doppler, C. (1842). Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels (About the coloured light of the binary stars and some other stars of the heavens). Publisher: Abhandlungen der Königl. Böhm. Gesellschaft der Wissenschaften (V. Folge, Bd. 2, S. 465–482) [Proceedings of the Royal Bohemian Society of Sciences (Part V, Vol 2)]; Prague: 1842 (Reissued 1903). Some sources mention 1843 as year of publication because in that year the article was published in the Proceedings of the Bohemian Society of Sciences. Doppler himself referred to the publication as "Prag 1842 bei Borrosch und André", because in 1842 he had a preliminary edition printed that he distributed independently. "Doppler and the Doppler effect", E. N. da C. Andrade, Endeavour Vol. XVIII No. 69, January 1959 (published by ICI London). Historical account of Doppler's original paper and subsequent developments. David Nolte (2020). The fall and rise of the Doppler effect. Physics Today, v. 73, pp. 31–35. DOI: 10.1063/PT.3.4429 External links The Doppler effect – The Feynman Lectures on Physics Doppler Effect, ScienceWorld Wave mechanics Radio frequency propagation Radar signal processing Sound Acoustics
Doppler effect
[ "Physics" ]
3,201
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Classical mechanics", "Acoustics", "Astrophysics", "Waves", "Wave mechanics", "Doppler effects" ]
11,913,417
https://en.wikipedia.org/wiki/Representative%20elementary%20volume
In the theory of composite materials, the representative elementary volume (REV) (also called the representative volume element (RVE) or the unit cell) is the smallest volume over which a measurement can be made that will yield a value representative of the whole. In the case of periodic materials, one simply chooses a periodic unit cell (which, however, may be non-unique), but in random media, the situation is much more complicated. For volumes smaller than the RVE, a representative property cannot be defined and the continuum description of the material involves Statistical Volume Element (SVE) and random fields. The property of interest can include mechanical properties such as elastic moduli, hydrogeological properties, electromagnetic properties, thermal properties, and other averaged quantities that are used to describe physical systems. Definition Rodney Hill defined the RVE as a sample of a heterogeneous material that: "is entirely typical of the whole mixture on average”, and "contains a sufficient number of inclusions for the apparent properties to be independent of the surface values of traction and displacement, so long as these values are macroscopically uniform.” In essence, statement (1) is about the material's statistics (i.e. spatially homogeneous and ergodic), while statement (2) is a pronouncement on the independence of effective constitutive response with respect to the applied boundary conditions. Both of these are issues of mesoscale (L) of the domain of random microstructure over which smoothing (or homogenization) is being done relative to the microscale (d). As L/d goes to infinity, the RVE is obtained, while any finite mesoscale involves statistical scatter and, therefore, describes an SVE. With these considerations one obtains bounds on effective (macroscopic) response of elastic (non)linear and inelastic random microstructures. In general, the stronger the mismatch in material properties, or the stronger the departure from elastic behavior, the larger is the RVE. The finite-size scaling of elastic material properties from SVE to RVE can be grasped in compact forms with the help of scaling functions universally based on stretched exponentials. Considering that the SVE may be placed anywhere in the material domain, one arrives at a technique for characterization of continuum random fields. Another definition of the RVE was proposed by Drugan and Willis: "It is the smallest material volume element of the composite for which the usual spatially constant (overall modulus) macroscopic constitutive representation is a sufficiently accurate model to represent mean constitutive response." The choice of RVE can be quite a complicated process. The existence of a RVE assumes that it is possible to replace a heterogeneous material with an equivalent homogeneous material. This assumption implies that the volume should be large enough to represent the microstructure without introducing non-existing macroscopic properties (such as anisotropy in a macroscopically isotropic material). On the other hand, the sample should be small enough to be analyzed analytically or numerically. Examples RVEs for mechanical properties In continuum mechanics generally for a heterogeneous material, RVE can be considered as a volume V that represents a composite statistically, i.e., volume that effectively includes a sampling of all microstructural heterogeneities (grains, inclusions, voids, fibers, etc.) that occur in the composite. 
It must, however, remain small enough to be considered a volume element of continuum mechanics. Several types of boundary conditions can be prescribed on V to impose a given mean strain or mean stress on the material element. One of the tools available to calculate the elastic properties of an RVE is the use of the open-source EasyPBC ABAQUS plugin tool. Analytical or numerical micromechanical analysis of fiber reinforced composites involves the study of a representative volume element (RVE). Although fibers are distributed randomly in real composites, many micromechanical models assume a periodic arrangement of fibers from which an RVE can be isolated in a straightforward manner. The RVE has the same elastic constants and fiber volume fraction as the composite. In general, an RVE can be considered the same as a differential element containing a large number of crystals. RVEs for porous media Establishing a given porous medium's properties requires measuring samples of the porous medium. If the sample is too small, the readings tend to oscillate. With increasing sample size, the oscillations begin to dampen out. Eventually the sample size will become large enough that readings are consistent. This sample size is referred to as the representative elementary volume. If the sample size is increased further, the measurement will remain stable until the sample size gets large enough that it begins to include other hydrostratigraphic layers. This is referred to as the maximum elementary volume (MEV). The groundwater flow equation has to be defined in an REV. RVEs for electromagnetic media While RVEs for electromagnetic media can have the same form as those for elastic or porous media, the fact that mechanical strength and stability are not concerns allows for a wide range of RVEs. One example of such an RVE consists of a split-ring resonator and its surrounding backing material. Alternatives for RVE There does not exist a single RVE size; depending on the mechanical properties studied, the RVE size can vary significantly. The concepts of the statistical volume element (SVE) and the uncorrelated volume element (UVE) have been introduced as alternatives to the RVE. Statistical Volume Element (SVE) The statistical volume element (SVE), which is also referred to as the stochastic volume element in finite element analysis, takes into account the variability in the microstructure. Unlike the RVE, in which an average value is assumed for all realizations, the SVE can have a different value from one realization to another. SVE models have been developed to study polycrystalline microstructures. Grain features, including orientation, misorientation, grain size, grain shape, and grain aspect ratio, are considered in the SVE model. The SVE model has been applied to material characterization and damage prediction at the microscale. Compared with the RVE, the SVE can provide a comprehensive representation of the microstructure of materials. Uncorrelated Volume Element (UVE) The uncorrelated volume element (UVE) is an extension of the SVE which also considers the covariance of the adjacent microstructure to present an accurate length scale for stochastic modelling. References Bibliography Volume Hydrogeology Continuum mechanics
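To illustrate the idea of a measurement stabilizing as the sampling volume grows (the porous-media REV described above), the following is a minimal Python sketch; the synthetic random medium, its target porosity and the window sizes are hypothetical choices made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D porous medium: 1 = pore, 0 = solid, with porosity ≈ 0.3.
medium = (rng.random((1024, 1024)) < 0.3).astype(float)

def measured_porosity(window):
    # Average porosity over a centred square sample of side `window`.
    c = medium.shape[0] // 2
    half = window // 2
    sample = medium[c - half:c + half, c - half:c + half]
    return sample.mean()

# Small windows oscillate strongly; large ones settle near the true value.
# The size at which readings become stable plays the role of the REV.
for window in (4, 16, 64, 256, 1024):
    print(window, round(measured_porosity(window), 3))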
Representative elementary volume
[ "Physics", "Mathematics", "Environmental_science" ]
1,362
[ "Scalar physical quantities", "Hydrology", "Physical quantities", "Continuum mechanics", "Quantity", "Classical mechanics", "Size", "Extensive quantities", "Volume", "Wikipedia categories named after physical quantities", "Hydrogeology" ]
14,596,042
https://en.wikipedia.org/wiki/HD%20117207
HD 117207 is a star in the southern constellation Centaurus. With an apparent visual magnitude of 7.24, it is too dim to be visible to the naked eye but can be seen with a small telescope. Based upon parallax measurements, it is located at a distance of from the Sun. The star is drifting closer with a radial velocity of −17.4 km/s. It has an absolute magnitude of 4.67. This object has a stellar classification of G7IV-V, showing blended spectral traits of a G-type main-sequence star and an older, evolving subgiant star. It is around four billion years old with 5% greater mass than the Sun and a 7% larger radius. The star is radiating 1.16 times the luminosity of the Sun from its photosphere at an effective temperature of 5,644 K. In 2005, a planet was found orbiting the star using the radial velocity method, and was designated HD 117207 b. The orbital elements of this planet were refined in 2018, showing an orbital period of , a semimajor axis of , and an eccentricity of 0.16. The minimum mass of this object is nearly double that of Jupiter. If an inner planet is orbiting the star, it must have an orbital period no greater than to satisfy Hill's criteria for dynamic stability. In 2023, the inclination and true mass of HD 117207 b were determined via astrometry. See also HD 117618 List of extrasolar planets References G-type main-sequence stars Planetary systems with one confirmed planet Centaurus CD-34 08913 117207 065808
HD 117207
[ "Astronomy" ]
342
[ "Centaurus", "Constellations" ]
14,596,160
https://en.wikipedia.org/wiki/Continuous%20cooling%20transformation
A continuous cooling transformation (CCT) phase diagram is often used when heat treating steel. These diagrams are used to represent which types of phase changes will occur in a material as it is cooled at different rates. These diagrams are often more useful than time-temperature-transformation diagrams because it is more convenient to cool materials at a certain rate (temperature-variable cooling) than to cool quickly and hold at a certain temperature (isothermal cooling). Types of continuous cooling diagrams There are two types of continuous cooling diagrams drawn for practical purposes. Type 1: This is the plot beginning with the transformation start point, cooling with a specific transformation fraction and ending with a transformation finish temperature for all products, against transformation time for each cooling curve. Type 2: This is the plot beginning with the transformation start point, cooling with a specific transformation fraction and ending with a transformation finish temperature for all products, against cooling rate or bar diameter of the specimen for each type of cooling medium. See also Isothermal transformation Phase diagram References Diagrams Phase transitions Metallurgy
Continuous cooling transformation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
208
[ "Physical phenomena", "Phase transitions", "Mechanical engineering stubs", "Metallurgy", "Phases of matter", "Critical phenomena", "Materials science", "nan", "Mechanical engineering", "Statistical mechanics", "Matter" ]
14,598,730
https://en.wikipedia.org/wiki/Toeplitz%20operator
In operator theory, a Toeplitz operator is the compression of a multiplication operator on the circle to the Hardy space. Details Let S¹ be the unit circle in the complex plane, with the standard Lebesgue measure, and L²(S¹) be the Hilbert space of complex-valued square-integrable functions. A bounded measurable complex-valued function g on S¹ defines a multiplication operator M_g on L²(S¹). Let P be the projection from L²(S¹) onto the Hardy space H². The Toeplitz operator with symbol g is defined by T_g = P M_g |_{H²}, where " | " means restriction. A bounded operator on H² is Toeplitz if and only if its matrix representation, in the basis {zⁿ : n ≥ 0}, has constant diagonals. Theorems Theorem: If g is continuous, then T_g is Fredholm if and only if 0 is not in the set g(S¹). If it is Fredholm, its index is minus the winding number of the curve traced out by g with respect to the origin. For a proof, see . He attributes the theorem to Mark Krein, Harold Widom, and Allen Devinatz. This can be thought of as an important special case of the Atiyah-Singer index theorem. Axler-Chang-Sarason Theorem: The operator T_f T_g − T_{fg} is compact if and only if H^∞[f̄] ∩ H^∞[g] ⊆ H^∞ + C(S¹). Here, H^∞ denotes the closed subalgebra of L^∞(S¹) of analytic functions (functions with vanishing negative Fourier coefficients), H^∞[f] is the closed subalgebra of L^∞(S¹) generated by H^∞ and f, and C(S¹) is the space (as an algebraic set) of continuous functions on the circle. See . See also References . . . . Reprinted by Dover Publications, 1997, . Operator theory Hardy spaces Linear operators
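As a numerical illustration of the "constant diagonals" characterization and of the index statement above, the following is a small Python sketch; the particular symbol g(z) = z and the truncation size are arbitrary choices for illustration, and a finite truncation only approximates the infinite-dimensional operator.

import numpy as np

def toeplitz_truncation(fourier_coeffs, n):
    # Build the n x n matrix [a_{j-k}] from the symbol's Fourier coefficients
    # a_m = fourier_coeffs(m); entries are constant along each diagonal.
    return np.array([[fourier_coeffs(j - k) for k in range(n)] for j in range(n)])

# Symbol g(e^{it}) = e^{it}, i.e. a_1 = 1 and all other Fourier coefficients 0.
coeffs = lambda m: 1.0 if m == 1 else 0.0
T = toeplitz_truncation(coeffs, 5)
print(T)  # the forward-shift matrix: ones on the first subdiagonal

# Winding number of the curve traced by g around the origin, from the total
# change of argument; for g(z) = z it is 1, so the Fredholm index of T_g is -1.
t = np.linspace(0.0, 2.0 * np.pi, 2001)
g = np.exp(1j * t)
phase = np.unwrap(np.angle(g))
print(round((phase[-1] - phase[0]) / (2.0 * np.pi)))  # 1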
Toeplitz operator
[ "Mathematics" ]
312
[ "Mathematical analysis", "Functions and mappings", "Mathematical analysis stubs", "Mathematical objects", "Linear operators", "Mathematical relations" ]
14,598,828
https://en.wikipedia.org/wiki/Perceptual%20robotics
Perceptual robotics is an interdisciplinary science linking Robotics and Neuroscience. It investigates biologically motivated robot control strategies, concentrating on perceptual rather than cognitive processes and thereby siding with J. J. Gibson's view against the Poverty of the stimulus theory. As a working definition, the following quote from Chapter 64 by H. Bülthoff, C. Wallraven and M. Giese from The Springer Handbook of Robotics, edited by Bruno Siciliano and Oussama Khatib, published by Springer in 2007, could be used: In the following we will apply the term Perceptual Robotics to signify the design of robots based on principles that are derived from human perception on all three levels in the sense of Marr. This includes a realization in terms of specific neural circuits as well as the transfer of more abstract biologically-inspired strategies for the solution of relevant computational problems. See also David Marr (neuroscientist) (including a short description of the three levels of perception) PERCRO Perceptual Robotics Laboratory, Scuola Superiore Sant'Anna, Pisa, Italy Robotics
Perceptual robotics
[ "Engineering" ]
232
[ "Robotics", "Automation" ]
14,598,839
https://en.wikipedia.org/wiki/Arsenic%20pentachloride
Arsenic pentachloride is a chemical compound of arsenic and chlorine. This compound was first prepared in 1976 through the UV irradiation of arsenic trichloride, AsCl3, in liquid chlorine at −105 °C. AsCl5 decomposes at around −50 °C. The structure of the solid was finally determined in 2001. AsCl5 is similar to phosphorus pentachloride, PCl5, in having a trigonal bipyramidal structure where the equatorial bonds are shorter than the axial bonds (As-Cleq = 210.6 pm, 211.9 pm; As-Clax = 220.7 pm). The pentachlorides of the elements above and below arsenic in group 15, phosphorus pentachloride and antimony pentachloride, are much more stable, and the instability of AsCl5 appears anomalous. The cause is believed to be incomplete shielding of the nucleus in the 4p elements following the first transition series (i.e. gallium, germanium, arsenic, selenium, bromine, and krypton), which leads to stabilisation of their 4s electrons, making them less available for bonding. This effect has been termed the d-block contraction and is similar to the f-block contraction normally termed the lanthanide contraction. References Arsenic(V) compounds Chlorides Arsenic halides Substances discovered in the 1970s
Arsenic pentachloride
[ "Chemistry" ]
297
[ "Chlorides", "Inorganic compounds", "Salts" ]
14,601,018
https://en.wikipedia.org/wiki/Ogden%20hyperelastic%20model
The Ogden material model is a hyperelastic material model used to describe the non-linear stress–strain behaviour of complex materials such as rubbers, polymers, and biological tissue. The model was developed by Raymond Ogden in 1972. The Ogden model, like other hyperelastic material models, assumes that the material behaviour can be described by means of a strain energy density function, from which the stress–strain relationships can be derived. Ogden material model In the Ogden material model, the strain energy density is expressed in terms of the principal stretches λ1, λ2, λ3 as: W(λ1, λ2, λ3) = Σ_{p=1}^{N} (μ_p/α_p)(λ1^{α_p} + λ2^{α_p} + λ3^{α_p} − 3), where N, μ_p and α_p are material constants. Under the assumption of incompressibility (λ1 λ2 λ3 = 1) one can rewrite as W(λ1, λ2) = Σ_{p=1}^{N} (μ_p/α_p)(λ1^{α_p} + λ2^{α_p} + (λ1 λ2)^{−α_p} − 3). In general the shear modulus results from 2μ = Σ_{p=1}^{N} μ_p α_p. With N = 3 and by fitting the material parameters, the material behaviour of rubbers can be described very accurately. For particular values of material constants the Ogden model will reduce to either the Neo-Hookean solid (N = 1, α1 = 2) or the Mooney-Rivlin material (N = 2, α1 = 2, α2 = −2, with the constraint condition that the moduli correspond to the Mooney–Rivlin constants, C1 = μ1/2 and C2 = −μ2/2). Using the Ogden material model, the three principal values of the Cauchy stresses can now be computed as σ_j = λ_j ∂W/∂λ_j − q, where q is an undetermined hydrostatic pressure enforcing incompressibility. Uniaxial tension We now consider an incompressible material under uniaxial tension, with the stretch ratio given as λ = l/l0, where l is the stretched length and l0 is the original unstretched length, so that λ1 = λ and λ2 = λ3 = λ^{−1/2}. The pressure q is determined from incompressibility and the boundary condition σ2 = σ3 = 0, yielding: σ1 = Σ_{p=1}^{N} μ_p (λ^{α_p} − λ^{−α_p/2}). Equi-biaxial tension Considering an incompressible material under equi-biaxial tension, with λ1 = λ2 = λ and λ3 = λ^{−2}. The pressure is determined from incompressibility and the boundary condition σ3 = 0, giving: σ1 = σ2 = Σ_{p=1}^{N} μ_p (λ^{α_p} − λ^{−2α_p}). Other hyperelastic models For rubber and biological materials, more sophisticated models are necessary. Such materials may exhibit a non-linear stress–strain behaviour at modest strains, or are elastic up to huge strains. These complex non-linear stress–strain behaviours need to be accommodated by specifically tailored strain-energy density functions. The simplest of these hyperelastic models is the Neo-Hookean solid: W = (μ/2)(I1 − 3), where μ is the shear modulus, which can be determined by experiments. From experiments it is known that for rubbery materials under moderate straining up to 30–70%, the Neo-Hookean model usually fits the material behaviour with sufficient accuracy. To model rubber at high strains, the one-parametric Neo-Hookean model is replaced by more general models, such as the Mooney-Rivlin solid, where the strain energy is a linear combination of two invariants: W = C1 (I1 − 3) + C2 (I2 − 3). The Mooney-Rivlin material was originally also developed for rubber, but is today often applied to model (incompressible) biological tissue. For modeling rubbery and biological materials at even higher strains, the more sophisticated Ogden material model has been developed. References F. Cirak: Lecture Notes for 5R14: Non-linear solid mechanics, University of Cambridge. R.W. Ogden: Non-Linear Elastic Deformations, Continuum mechanics Solid mechanics
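A small numerical illustration of the uniaxial Ogden relation above, as a Python sketch; the three-term parameter set is a hypothetical illustrative choice, not a fitted material.

import numpy as np

# Hypothetical Ogden parameters (N = 3), purely for illustration.
mu    = np.array([0.63e6, 0.0012e6, -0.01e6])   # Pa
alpha = np.array([1.3, 5.0, -2.0])

def uniaxial_stress(stretch):
    # sigma_1 = sum_p mu_p * (lambda^alpha_p - lambda^(-alpha_p/2))
    # for an incompressible Ogden material under uniaxial tension.
    lam = np.asarray(stretch, dtype=float)[..., None]
    return np.sum(mu * (lam**alpha - lam**(-alpha / 2.0)), axis=-1)

stretches = np.linspace(1.0, 7.0, 7)
for lam, sig in zip(stretches, uniaxial_stress(stretches)):
    print(f"lambda = {lam:.1f}   sigma = {sig / 1e6:.3f} MPa")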
Ogden hyperelastic model
[ "Physics" ]
595
[ "Solid mechanics", "Mechanics", "Classical mechanics", "Continuum mechanics" ]
14,601,271
https://en.wikipedia.org/wiki/Autoacceleration
In polymer chemistry, autoacceleration (gel effect) is a dangerous reaction behavior that can occur in free-radical polymerization systems. It is due to the localized increases in viscosity of the polymerizing system that slow termination reactions. The removal of reaction obstacles therefore causes a rapid increase in the overall rate of reaction, leading to possible reaction runaway and altering the characteristics of the polymers produced. It is also known as the Trommsdorff–Norrish effect after German chemist Johann Trommsdorff and British chemist Ronald G.W. Norrish. Background Autoacceleration of the overall rate of a free-radical polymerization system has been noted in many bulk polymerization systems. The polymerization of methyl methacrylate, for example, deviates strongly from classical mechanism behavior around 20% conversion; in this region the conversion and molecular mass of the polymer produced increase rapidly. This increase in the rate of polymerization is usually accompanied by a large rise in temperature if heat dissipation is not adequate. Without proper precautions, autoacceleration of polymerization systems could cause metallurgic failure of the reaction vessel or, worse, explosion. To avoid the occurrence of thermal runaway due to autoacceleration, suspension polymerization techniques are employed to make polymers such as polystyrene. The droplets dispersed in the water are small reaction vessels, but the heat capacity of the water lowers the temperature rise, thus moderating the reaction. Causes Norrish and Smith, Trommsdorff, and later, Schultz and Harborth, concluded that autoacceleration must be caused by a totally different polymerization mechanism. They rationalized through experiment that a decrease in the termination rate was the basis of the phenomenon. This decrease in the termination rate constant, kt, is caused by the raised viscosity of the polymerization region when the concentration of previously formed polymer molecules increases. Before autoacceleration, chain termination by combination of two free-radical chains is a very rapid reaction that occurs at very high frequency (about one in 10^4 collisions). However, when the growing polymer molecules – with active free-radical ends – are surrounded by the highly viscous mixture consisting of a growing concentration of "dead" polymer, the rate of termination becomes limited by diffusion. The Brownian motion of the larger molecules in the polymer "soup" is restricted, therefore limiting the frequency of their effective (termination) collisions. Results With termination collisions restricted, the concentration of active polymerizing chains and, simultaneously, the consumption of monomer rise rapidly. Assuming abundant unreacted monomer, viscosity changes affect the macromolecules but do not prove high enough to prevent smaller molecules – such as the monomer – from moving relatively freely. Therefore, the propagation reaction of the free-radical polymerization process is relatively insensitive to changes in viscosity.
This also implies that at the onset of autoacceleration the overall rate of reaction increases relative to the rate of un-autoaccelerated reaction given by the overall rate equation for free-radical polymerization: R_p = k_p [M] (f k_d [I] / k_t)^{1/2}, where R_p is the rate of polymerization, [M] is the concentration of monomer, [I] is the concentration of initiator, k_d is the rate constant for initiator dissociation, k_p is the rate constant for propagation, k_t is the rate constant for termination, and f is the fraction of initiators which initiate chain growth. Approximately, as the termination rate constant decreases by a factor of 4, the overall rate of reaction will double. The decrease of termination reactions also allows radical chains to add monomer for longer time periods, raising the mass-average molecular mass dramatically. However, the number-average molecular mass only increases slightly, leading to broadening of the molecular mass distribution (high dispersity, a very polydisperse product). References Bibliography Dvornic, Petar R., and Jacovic S. Milhailo. "The Viscosity Effect on Autoacceleration of the Rate of Free Radical Polymerization". Wiley InterScience. 6 December 2007. Polymer chemistry Reaction mechanisms
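A minimal Python sketch of the rate law above, showing the square-root dependence on the termination rate constant; all rate constants and concentrations are hypothetical round numbers chosen for illustration, not measured values.

import math

def polymerization_rate(kp, kd, kt, f, monomer, initiator):
    # R_p = k_p [M] * sqrt(f * k_d * [I] / k_t)
    return kp * monomer * math.sqrt(f * kd * initiator / kt)

# Hypothetical values purely for illustration.
base = polymerization_rate(kp=1e3, kd=1e-5, kt=1e7, f=0.5, monomer=5.0, initiator=0.01)
gel  = polymerization_rate(kp=1e3, kd=1e-5, kt=1e7 / 4, f=0.5, monomer=5.0, initiator=0.01)
print(gel / base)  # 2.0 — a four-fold drop in k_t doubles the overall rate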
Autoacceleration
[ "Chemistry", "Materials_science", "Engineering" ]
810
[ "Reaction mechanisms", "Materials science", "Polymer chemistry", "Physical organic chemistry", "Chemical kinetics" ]
14,602,439
https://en.wikipedia.org/wiki/Melanoblast
A melanoblast is a precursor cell of a melanocyte. These cells migrate from the trunk neural crest cells (in terms of axial level from neck to posterior end) dorsolaterally between the ectoderm and dorsal surface of the somites. See also Biological pigment List of human cell types derived from the germ layers References Pigments Biomolecules Pigmentation
Melanoblast
[ "Chemistry", "Biology" ]
83
[ "Natural products", "Organic compounds", "Biomolecules", "Structural biology", "Biochemistry", "Pigmentation", "Molecular biology" ]
14,603,289
https://en.wikipedia.org/wiki/Fazia
FAZIA stands for the Four Pi A and Z Identification Array. It is a project which aims at building a new 4π detector for charged particles. It will operate in the domain of heavy-ion-induced reactions around the Fermi energy. It groups together more than 10 institutions worldwide in nuclear physics. It is planned to be operational in 2013–2014, coinciding with the advent of new high-intensity particle accelerators for radioactive nuclear beams. A large research and development effort is currently under way, especially on digital electronics and pulse-shape analysis, in order to improve the detection capabilities of such particle detectors in several areas, such as charge and mass identification, lower energy thresholds, and improved energy and angular resolution. References G. Poggi (INFN Firenze, Italy), Isospin effects : toward a new generation array, Proceedings to the XVth GANIL Colloque, Giens, June 2006 O. Lopez (LPC Caen, France), FAZIA for EURISOL: Physics cases, EURISOL Town Meeting, Task 10 (Physics and Instrumentation), CERN, November 2006 L. Bardelli (INFN Firenze, Italy), FAZIA for EURISOL : Instrumentation, EURISOL Town Meeting, Task 10 (Physics and Instrumentation), CERN, November 2006 G. Verde (GANIL, France), presentation for the SPIRAL2 meeting, GANIL, October 2006 External links FAZIA collaboration official website Physics organizations Nuclear physics
Fazia
[ "Physics" ]
306
[ "Nuclear physics" ]
14,603,715
https://en.wikipedia.org/wiki/Darboux%27s%20formula
In mathematical analysis, Darboux's formula is a formula introduced by Gaston Darboux for summing infinite series by using integrals or evaluating integrals using infinite series. It is a generalization to the complex plane of the Euler–Maclaurin summation formula, which is used for similar purposes and derived in a similar manner (by repeated integration by parts of a particular choice of integrand). Darboux's formula can also be used to derive the Taylor series from calculus. Statement If φ(t) is a polynomial of degree n and f an analytic function then φ^(n)(0)[f(z) − f(a)] = Σ_{m=1}^{n} (−1)^{m+1} (z − a)^m [φ^{(n−m)}(1) f^{(m)}(z) − φ^{(n−m)}(0) f^{(m)}(a)] + (−1)^n (z − a)^{n+1} ∫_0^1 φ(t) f^{(n+1)}(a + t(z − a)) dt. The formula can be proved by repeated integration by parts. Special cases Taking φ to be a Bernoulli polynomial in Darboux's formula gives the Euler–Maclaurin summation formula. Taking φ to be (t − 1)^n gives the formula for a Taylor series. References Whittaker, E. T. and Watson, G. N. "A Formula Due to Darboux." §7.1 in A Course in Modern Analysis, 4th ed. Cambridge, England: Cambridge University Press, p. 125, 1990. External links Darboux's formula at MathWorld Mathematical analysis Summability methods
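A small numerical check of the identity above for one concrete choice, as a Python sketch; the polynomial φ(t) = t², the function f = exp and the points a, z are arbitrary test values chosen only for illustration.

import math

# phi(t) = t^2, a polynomial of degree n = 2; its derivatives are 2t and 2.
phi_derivs = [lambda t: t * t, lambda t: 2.0 * t, lambda t: 2.0]  # phi, phi', phi''
f = math.exp            # f = exp, so every derivative f^{(m)} equals exp as well
n, a, z = 2, 0.3, 1.1

lhs = phi_derivs[n](0.0) * (f(z) - f(a))

rhs = 0.0
for m in range(1, n + 1):
    rhs += (-1) ** (m + 1) * (z - a) ** m * (
        phi_derivs[n - m](1.0) * f(z) - phi_derivs[n - m](0.0) * f(a))

# Remainder: (-1)^n (z-a)^(n+1) * integral_0^1 phi(t) f^{(n+1)}(a + t(z-a)) dt,
# approximated here with a simple midpoint rule.
steps = 200000
integral = sum(
    phi_derivs[0]((i + 0.5) / steps) * f(a + (i + 0.5) / steps * (z - a))
    for i in range(steps)) / steps
rhs += (-1) ** n * (z - a) ** (n + 1) * integral

print(lhs, rhs)   # both ≈ 2*(e^1.1 - e^0.3); they agree to quadrature accuracy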
Darboux's formula
[ "Mathematics" ]
250
[ "Sequences and series", "Mathematical analysis", "Summability methods", "Mathematical structures" ]
14,604,961
https://en.wikipedia.org/wiki/Outer%20membrane%20phospholipase%20A1
Outer membrane phospholipase A1 (OMPLA) is an acyl hydrolase (EC 3.1.1.32) with a broad substrate specificity from the bacterial outer membrane. It has been proposed that Ser164 is the active site of the protein (UniProt ). This integral membrane phospholipase has been found in many Gram-negative bacteria and has a broad substrate specificity. The role of OMPLA has been most thoroughly studied in Escherichia coli, where it participates in the secretion of bacteriocins. Bacteriocin release is triggered by a lysis protein (bacteriocin release protein or BRP), followed by a phospholipase-dependent accumulation of lysophospholipids and free fatty acids in the outer membrane. The reaction products enhance the permeability of the outer membrane, which allows the semispecific secretion of bacteriocins. One speculative function of OMPLA is related to organic solvent tolerance in bacteria. Structurally, it consists of a 12-stranded antiparallel beta-barrel with a convex and a flat side. The active site residues are exposed on the exterior of the flat face of the beta-barrel. The activity of the enzyme is regulated by reversible dimerisation. Dimer interactions occur exclusively in the membrane-embedded parts of the flat side of the beta-barrel, with polar residues embedded in an apolar environment forming the key interactions. The active site His and Ser residues are located at the exterior of the beta-barrel, at the outer leaflet side of the membrane. This location indicates that under normal conditions the substrate and the active site are physically separated, since in E. coli phospholipids are exclusively located in the inner leaflet of the outer membrane. References Protein domains Outer membrane proteins
Outer membrane phospholipase A1
[ "Biology" ]
389
[ "Protein domains", "Protein classification" ]
14,605,198
https://en.wikipedia.org/wiki/Outer%20membrane%20efflux%20protein
The outer membrane efflux protein is a protein family member that forms trimeric (three-piece) channels allowing the export of a variety of substrates in gram-negative bacteria. Each efflux protein is composed of two repeats. The trimeric channel is composed of a 12-stranded beta-barrel that spans the outer membrane, and a long tail helical barrel that spans the periplasm. Examples include the Escherichia coli TolC outer membrane protein, which is required for proper expression of outer membrane protein genes; the Rhizobium nodulation protein; and the Pseudomonas FusA protein, which is involved in resistance to fusaric acid. References Protein domains Protein families Outer membrane proteins
Outer membrane efflux protein
[ "Biology" ]
149
[ "Protein families", "Protein domains", "Protein classification" ]
14,605,490
https://en.wikipedia.org/wiki/FadL%20outer%20membrane%20protein%20transport%20family
Outer membrane transport proteins (OMPP1/FadL/TodX) family includes several proteins that are involved in toluene catabolism and degradation of aromatic hydrocarbons. This family also includes protein FadL involved in translocation of long-chain fatty acids across the outer membrane. It is also a receptor for the bacteriophage T2. Notes References Protein families Outer membrane proteins
FadL outer membrane protein transport family
[ "Biology" ]
85
[ "Protein families", "Protein classification" ]
9,441,061
https://en.wikipedia.org/wiki/GISAID
GISAID (), the Global Initiative on Sharing All Influenza Data, previously the Global Initiative on Sharing Avian Influenza Data, is a global science initiative established in 2008 to provide access to genomic data of influenza viruses. The database was expanded to include the coronavirus responsible for the COVID-19 pandemic, as well as other pathogens. The database has been described as "the world's largest repository of COVID-19 sequences". GISAID facilitates genomic epidemiology and real-time surveillance to monitor the emergence of new COVID-19 viral strains across the planet. Since its establishment as an alternative to sharing avian influenza data via conventional public-domain archives, GISAID has facilitated the exchange of outbreak genome data during the H1N1 pandemic in 2009, the H7N9 epidemic in 2013, the COVID-19 pandemic and the 2022–2023 mpox outbreak. History Origin Since 1952, influenza strains had been collected by National Influenza Centers (NICs) and distributed through the WHO's Global Influenza Surveillance and Response System (GISRS). Countries provided samples to the WHO, but the data was then shared for free with pharmaceutical companies, who could patent vaccines produced from the samples. Beginning in January 2006, Italian researcher Ilaria Capua refused to upload her data to a closed database and called for genomic data on H5N1 avian influenza to be in the public domain. At a conference of the OIE/FAO Network of Expertise on Animal Influenza, Capua persuaded participants each to agree to sequence and release data on 20 strains of influenza. Some scientists had concerns about sharing their data in case others published scientific papers using the data before them, but Capua dismissed this, telling Science: "What is more important? Another paper for Ilaria Capua's team or addressing a major health threat? Let's get our priorities straight." Peter Bogner, a German in his 40s based in the US who previously had no experience in public health, read an article about Capua's call and helped to found and fund GISAID. Bogner met Nancy Cox, who was then leading the US Centers for Disease Control's influenza division, at a conference, and Cox went on to chair GISAID's Scientific Advisory Council. The acronym GISAID was coined in a correspondence letter published in the journal Nature in August 2006, putting forward an initial aspiration of creating a consortium for a new Global Initiative on Sharing Avian Influenza Data (later, "All" would replace "Avian"), whereby its members would release data in publicly available databases up to six months after analysis and validation. Initially the organisation collaborated with the Australian non-profit organization Cambia and the Creative Commons project Science Commons. Although no essential ground rules for sharing were established, the correspondence letter was signed by over 70 leading scientists, including seven Nobel laureates, because access to the most current genetic data for the highly pathogenic H5N1 zoonotic virus was often restricted, in part due to the hesitancy of World Health Organization member states to share their virus genomes and put ownership rights at risk. Towards the end of 2006, Indonesia announced it would not share samples of avian flu with the WHO, which led to a global health crisis amid an ongoing epidemic. By October 2006, Indonesia had agreed to share their data with GISAID, which their health minister considered to have a "fair and transparent" mechanism for sharing data.
It was one of the first countries to do so. In February 2007, GISAID and the Swiss Institute of Bioinformatics (SIB) announced a cooperation agreement, with the SIB building and administering the EpiFlu database on behalf of GISAID. Ultimately, GISAID was launched in May 2008 in Geneva on the occasion of the 61st World Health Assembly, as a registration-based database rather than a consortium. 2009 onwards In 2009 SIB disconnected the database from the GISAID portal over a contract dispute, resulting in litigation. In April 2010 the Federal Republic of Germany announced during the 7th International Ministerial Conference on Avian and Pandemic Influenza in Hanoi, Vietnam, that GISAID had entered into a cooperation agreement with the German government, making Germany the long-term host of the GISAID platform. Under the agreement, Germany's Federal Ministry of Food, Agriculture and Consumer Protection was to ensure the sustainability of the initiative by providing technical hosting facilities, and the Federal Institute for Animal Health, the Friedrich Loeffler Institute, was to ensure the plausibility and curation of scientific data in GISAID. By 2021, the ministry was no longer involved with either database hosting or curation. In 2013 GISAID dissolved a nonprofit organisation based in Washington DC, and the organisation began to be operated by a German association called Freunde von GISAID (Friends of GISAID). Some of the earliest SARS-CoV-2 genetic sequences were released by the Chinese Center for Disease Control and Prevention and shared through GISAID in mid-January 2020. Since 2020, millions of SARS-CoV-2 genome sequences have been uploaded to the GISAID database. In 2022, GISAID added Mpox virus and Respiratory syncytial virus (RSV) to the list of pathogens supported by its database. Indonesia's Ministry of Health announced in November 2023 the establishment of GISAID Academy in Bali, to focus on bioinformatics education, advance pathogen genomic surveillance, and increase regional response capacity. The GISAID model of incentivizing and recognizing those who deposit data has been recommended as a model for future initiatives. Because of this work, the entity has been described as "a critical shield for humankind". Database for SARS-CoV-2 genomes GISAID maintains what has been described as "the world's largest repository of COVID-19 sequences", and "by far the world's largest database of SARS-CoV-2 sequences". By mid-April 2021, GISAID's SARS-CoV-2 database reached over 1,200,000 submissions, a testament to the hard work of researchers in over 170 different countries. Only three months later, the number of uploaded SARS-CoV-2 sequences had doubled again, to over 2.4 million. By late 2021, the database contained over 5 million genome sequences; as of December 2021, over 6 million sequences had been submitted; by April 2022, there were 10 million sequences accumulated; and in January 2023 the number had reached 14.4 million. In January 2020, the SARS-CoV-2 genetic sequence data was shared through GISAID. Throughout the first year of the COVID-19 pandemic, most of the SARS-CoV-2 whole-genome sequences that were generated and shared globally were submitted through GISAID. When the SARS-CoV-2 Omicron variant was detected in South Africa, by quickly uploading the sequence to GISAID, the National Institute for Communicable Diseases there was able to learn that Botswana and Hong Kong had also reported cases possessing the same gene sequence.
In March 2023, GISAID temporarily suspended database access for some scientists, removing raw data relevant to investigations of the origins of SARS-CoV-2. GISAID stated that they do not delete records from their database, but data may become temporarily invisible during updates or corrections. Availability of the data was restored, with an additional restriction that any analysis based thereon would not be shared with the public. Governance The board of Friends of GISAID consists of Peter Bogner and two German lawyers who are not involved in the day-to-day operations of the organisation. Scientific advice to the organization is provided by its Scientific Advisory Council, including directors of leading public health laboratories, such as WHO Collaborating Centres for Influenza. In 2023, GISAID's lack of transparency was criticized by some GISAID funders, including the European Commission and the Rockefeller Foundation, with long-term funding being denied from International Federation of Pharmaceutical Manufacturers and Associations (IFPMA). In June 2023, it was reported in Vanity Fair that Bogner had said that "GISAID will soon launch an independent compliance board 'responsible for addressing a wide range of governance matters'". The Telegraph similarly reported that GISAID's in-house counsel was developing new governance processes intended to be transparent and allow for the resolution of scientific disputes without the involvement of Bogner. Access and intellectual property The creation of the GISAID database was motivated in part by concerns raised by researchers from developing countries, with Scientific American noting in 2009 that "a previous data-sharing system run by WHO forced them to give up intellectual property rights to their virus samples when they sent them to WHO. The virus samples would then be used by private pharmaceutical companies to make vaccines that are awarded patents and sold at a profit at prices many poor nations cannot afford". In a 2022 piece in The Lancet, it was further noted that scientists in North America and Europe sought unrestricted access, with "scientists from Africa requiring sufficient protections for those who generate and share data as per the GISAID terms and conditions". Unlike public-domain databases such as GenBank and EMBL, users of GISAID must have their identity confirmed and agree to a Database Access Agreement that governs the way GISAID data can be used. These Terms of Use are "weighted in favour of the data provider and gives them enduring control over the genetic data they upload". They prevent users from sharing any data with other users who have not agreed to them, and require that users of the data must credit the data generators in published work, and also make a reasonable attempt to collaborate with data generators and involve them in research and analysis that uses their data. A difficulty that GISAID's Data Access Agreement attempts to address is that many researchers fear sharing of influenza sequence data could facilitate its misappropriation through intellectual property claims by the vaccine industry and others, hindering access to vaccines and other items in developing countries, either through high costs or by preventing technology transfer. 
While most public interest experts agree with GISAID that influenza sequence data should be made public, and this is the subject of agreement by many researchers, some provide the information only after filing patent claims while others have said that access to it should be only on the condition that no patents or other intellectual property claims are filed, as was controversial with the Human Genome Project. GISAID's Data Access Agreement addresses this directly to promote sharing data. GISAID's procedures additionally suggest that those who access the EpiFlu database consult the countries of origin of genetic sequences and the researchers who discovered the sequences. As a result, the GISAID license has been important in rapid pandemic preparedness. However, these restrictions evidence common criticisms to an open data model. GISAID describes itself as "open access", which is naturally replicated by the media and in journal publications. This description indeed aligns with the original announcement of the consortium, which also mentioned depositing the data to the databases participating in the INSDC. As of March 2023, this is not the case, as "GISAID does not offer a mechanism to release data to any other database". A few academic papers have compared GISAID's licensing model to unrestricted, open databases, highlighting the differences while other researchers have signed an open letter calling for the use of any of the INSDC's unrestricted databases. In 2017, GISAID's editorial board stated that "re3data.org and DataCite, the world's leading provider of digital object identifiers (DOI) for research data, affirmed the designation of access to GISAID's database and data as Open Access". However, after several researchers had their accounts suspended in March 2023 as reported by the journal Science and other news outlets, its open access status was revoked by the Registry of Research Data Repositories (re3data), which now classifies it as a "restricted access repository". In 2020 the World Health Organization chief scientist Soumya Swaminathan called the initiative "a game changer", while the co-director of the European Bioinformatics Institute (EBI) Rolf Apweiler has argued that because it does not allow sequences to be reshared publicly, it hampers efforts to understand the coronavirus and the rapid rise of new variants. GISAID's restrictions on access have led to conflict with "labs and institutions whose priorities are academic rather than driven by the immediate priorities of public health protection". In January 2021, GISAID's restricted access led a group of scientists to write an open letter asking for SARS-CoV-2 sequences to be deposited in open databases, which was replicated in the journals Nature and Science. Furthermore, the article from Science points out that the lack of transparency in access to the database also prevents many scientists from even criticising the platform. A paper from 2017 describing the success of GISAID mentions that revoking researchers' credentials was rare, but it did happen. The same publication described a "perceived merit in GISAID's formula for balancing the need for control and openness". In April 2023, Science and The Economist reported these issues continue as well as the lack of transparency of its governance. 
An investigation by The Telegraph into claims made by Science noted the incentives of various potential competitors in the field, for whom GISAID is an obstacle to consolidation of control over the field, and also noted that GISAID's position inevitably places it at the center of disputes between groups of scientists, which will tend to result in the losing side blaming GISAID for that outcome. See also References Further reading External links Avian influenza Influenza Mpox COVID-19 pandemic Genome databases Influenza A virus subtype H5N1 Organisations based in Munich Public health organizations International scientific organizations Bioinformatics Virology Non-profit organisations based in Germany
GISAID
[ "Engineering", "Biology" ]
2,906
[ "Bioinformatics", "Biological engineering" ]
9,442,947
https://en.wikipedia.org/wiki/Carleman%27s%20condition
In mathematics, particularly in analysis, Carleman's condition gives a sufficient condition for the determinacy of the moment problem. That is, if a measure μ satisfies Carleman's condition, there is no other measure having the same moments as μ. The condition was discovered by Torsten Carleman in 1922. Hamburger moment problem For the Hamburger moment problem (the moment problem on the whole real line), the theorem states the following: Let μ be a measure on ℝ such that all the moments m_n = ∫ x^n dμ(x), n = 0, 1, 2, ..., are finite. If Σ_{n=1}^{∞} m_{2n}^{−1/(2n)} = +∞, then the moment problem for (m_n) is determinate; that is, μ is the only measure on ℝ with (m_n) as its sequence of moments. Stieltjes moment problem For the Stieltjes moment problem, the sufficient condition for determinacy is Σ_{n=1}^{∞} m_n^{−1/(2n)} = +∞. Generalized Carleman's condition Nasiraee et al. showed that, despite previous assumptions, when the integrand is an arbitrary function, Carleman's condition is not sufficient, as demonstrated by a counter-example. In fact, the example violates the bijection, i.e. determinacy, property in the probability sum theorem. When the integrand is an arbitrary function, they further establish a sufficient condition for the determinacy of the moment problem, referred to as the generalized Carleman's condition. Notes References Chapter 3.3, Durrett, Richard. Probability: Theory and Examples. 5th ed. Cambridge Series in Statistical and Probabilistic Mathematics 49. Cambridge; New York, NY: Cambridge University Press, 2019. Mathematical analysis Moment (mathematics) Probability theory Theorems in approximation theory
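As a quick numerical illustration of the Hamburger condition above, a Python sketch checking that the even moments of the standard normal distribution, m_{2n} = (2n − 1)!!, give a Carleman sum whose partial sums keep growing; the truncation at 2,000 terms is an arbitrary choice made only for display.

import math

partial = 0.0
log_m2n = 0.0        # log of m_{2n} = (2n - 1)!!, built up incrementally
for n in range(1, 2001):
    log_m2n += math.log(2 * n - 1)
    partial += math.exp(-log_m2n / (2 * n))   # term m_{2n}^(-1/(2n)) ~ 1/sqrt(n)
    if n in (10, 100, 1000, 2000):
        print(n, round(partial, 2))           # partial sums keep growing, consistent
                                              # with divergence and hence determinacy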
Carleman's condition
[ "Physics", "Mathematics" ]
326
[ "Theorems in mathematical analysis", "Mathematical analysis", "Moments (mathematics)", "Physical quantities", "Theorems in approximation theory", "Moment (physics)" ]
9,443,168
https://en.wikipedia.org/wiki/Marmorino
Marmorino Veneziano is a type of plaster or stucco. It is based on calcium oxide and used for interior and exterior wall decorations. Marmorino plaster can be finished via multiple techniques for a variety of matte, satin, and glossy final effects. It was used as far back as Roman times, but was made popular once more during the Renaissance 500 years ago in Venice. Marmorino is made from crushed marble and lime putty, which can be tinted to give a wide range of colours. This can then be applied to make many textures, from polished marble to natural stone effects. Widely used in Italy, its appeal has spread especially through North America, and is now worldwide. Because of the hours of workmanship, the pricing places it in the high-end market. However, many examples can be seen in public buildings, bars, restaurants, etc. Its waterproofing and antibacterial qualities as well as visual effects have also made it very desirable for luxury bathrooms, honeymoon bedrooms and other wet areas. Not confined to interior use, it can be seen on the exterior of many buildings to great effect. History Marmorino is well known as a classic Venetian plaster; however, its origins are much older, dating to ancient Roman times. We can see evidence of it today in the villas of Pompeii and in various ancient Roman structures. In addition, it was also written about in Vitruvius's De architectura, a 1st-century BC Roman treatise on architecture. Marmorino was rediscovered centuries later after the discovery of Vitruvius's ancient treatise in the 15th century. This 'new' plaster conformed well to the aesthetic requirements dictated by the classical ideal that in the 15th century had recently become fashionable in the Venetian lagoon area. The first record of work being done with marmorino is a building contract with the nuns of Santa Chiara of Murano in 1473. In this document, it is written that before the marmorino could be applied, the wall had to be prepared with a mortar made of lime and "coccio pesto" (ground terra cotta). This "coccio pesto" was then excavated from tailings of bricks or recycled from old roof tiles. At this point, to better understand the popularity of marmorino in Venetian life, two facts need to be considered. The first is that in a city that extends over water, the transport of sand for making plaster and the disposal of tailings was, and still is, a huge problem. So the use of marmorino was successful not only because the substrate was prepared using terra cotta scraps, but also because the finish, marmorino, was made with leftover stone and marble, which were in great abundance at that time. These ground discards were mixed with lime to create marmorino. Besides, marmorino and substrates made of "coccio pesto" resisted the ambient dampness of the lagoon better than almost any other plaster. The first because it is extremely breathable by virtue of the kind of lime used (only lime which sets on exposure to air after losing excess water), and the second because it contains terracotta, which, when added to lime, makes the mixture hydraulic, that is, effective even in very damp conditions (because it contains silica and aluminium, the bases of modern cement and hydraulic lime preparations). The second consideration is that an aesthetically pleasing result could be achieved in an era dominated by the return of a classical Greco-Roman style, while allowing less weight to be transmitted to the foundation compared with the practice of covering facades with slabs of stone.
Usually, marmorino was white to imitate Istrian stone, which was most often used in Venetian construction, but was occasionally decorated with frescoes to imitate the marble, which Venetian merchants brought home from their voyages to the Orient. (In this period of the Republic of Venice, merchants felt obliged to return home bearing precious, exotic marble as a tribute to the beauty of their own city.) Marmorino maintained its prestige for centuries until the end of the 1800s when interest in it faded and it was considered only an economical solution to the use of marble. Only at the end of the 1970s, thanks in part to architect Carlo Scarpa's use of marmorino, did this finishing technique return to the interest of the best modern architects. For about 10 years, industries were also interested in marmorino which was only produced by artisans. Today, however, ready-to-use marmorino can be found, often with glue added to allow it to be applied on non-traditional surfaces such as drywall or wood panelling. See also Scagliola Stucco References Giovanni Polistena, History of Marmorino, Stucco Italiano, 2012. External links Why Lime? History & Benefits Wall & Furniture Films Building materials Craft materials Wallcoverings Plastering
Marmorino
[ "Physics", "Chemistry", "Engineering" ]
1,003
[ "Building engineering", "Coatings", "Architecture", "Construction", "Materials", "Plastering", "Matter", "Building materials" ]
52,033
https://en.wikipedia.org/wiki/Mathematical%20optimization
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. Optimization problems Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete: An optimization problem with discrete variables is known as a discrete optimization, in which an object such as an integer, permutation or graph must be found from a countable set. A problem with continuous variables is known as a continuous optimization, in which optimal arguments from a continuous set must be found. They can include constrained problems and multimodal problems. An optimization problem can be represented in the following way: Given: a function f : A → ℝ from some set A to the real numbers. Sought: an element x₀ ∈ A such that f(x₀) ≤ f(x) for all x ∈ A ("minimization") or such that f(x₀) ≥ f(x) for all x ∈ A ("maximization"). Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework. Since the following is valid: f(x₀) ≥ f(x) ⇔ −f(x₀) ≤ −f(x), it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too. Problems formulated using this technique in the fields of physics may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled. In machine learning, it is always necessary to continuously evaluate the quality of a data model by using a cost function where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error. Typically, A is some subset of the Euclidean space ℝⁿ, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions. The function f is variously called an objective function, criterion function, loss function, cost function (minimization), utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. A local minimum x* is defined as an element for which there exists some δ > 0 such that the expression f(x*) ≤ f(x) holds for all x ∈ A with ‖x − x*‖ ≤ δ; that is to say, on some region around x* all of the function values are greater than or equal to the value at that element. Local maxima are defined similarly.
While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element. Generally, unless the objective function is convex in a minimization problem, there may be several local minima. In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima. A large number of algorithms proposed for solving the nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem. Notation Optimization problems are often expressed with special notation. Here are some examples: Minimum and maximum value of a function Consider the following notation: min_{x ∈ ℝ} (x² + 1). This denotes the minimum value of the objective function x² + 1, when choosing x from the set of real numbers ℝ. The minimum value in this case is 1, occurring at x = 0. Similarly, the notation max_{x ∈ ℝ} 2x asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined". Optimal input arguments Consider the following notation: argmin_{x ∈ (−∞, −1]} (x² + 1), or equivalently argmin (x² + 1), subject to x ∈ (−∞, −1]. This represents the value (or values) of the argument x in the interval (−∞, −1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = −1, since x = 0 is infeasible, that is, it does not belong to the feasible set. Similarly, argmax_{x ∈ [−5, 5], y ∈ ℝ} x cos y, or equivalently argmax x cos y, subject to x ∈ [−5, 5], y ∈ ℝ, represents the pair (or pairs) (x, y) that maximizes (or maximize) the value of the objective function x cos y, with the added constraint that x lie in the interval [−5, 5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form (5, 2kπ) and (−5, (2k + 1)π), where k ranges over all integers. Operators arg min and arg max are sometimes also written as argmin and argmax, and stand for argument of the minimum and argument of the maximum. History Fermat and Lagrange found calculus-based formulae for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum. The term "linear programming" for certain optimization cases was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. (Programming in this context does not refer to computer programming, but comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947, and also John von Neumann and other researchers worked on the theoretical aspects of linear programming (like the theory of duality) around the same time. Other notable researchers in mathematical optimization include the following: Richard Bellman Dimitri Bertsekas Michel Bierlaire Stephen P. Boyd Roger Fletcher Martin Grötschel Ronald A.
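A small Python sketch mirroring the notation examples above, finding the minimum value and the minimizing argument numerically; scipy's minimize_scalar is used here as one convenient choice of local solver, and the large finite lower bound standing in for −∞ is an arbitrary device for illustration.

from scipy.optimize import minimize_scalar

# min over all real x of x**2 + 1: minimum value 1, attained at x = 0.
free = minimize_scalar(lambda x: x**2 + 1)
print(free.x, free.fun)            # ≈ 0.0, 1.0

# argmin of x**2 + 1 subject to x <= -1: the feasible minimizer is x = -1.
constrained = minimize_scalar(lambda x: x**2 + 1, bounds=(-1e6, -1.0), method="bounded")
print(constrained.x)               # ≈ -1.0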
Howard Fritz John Narendra Karmarkar William Karush Leonid Khachiyan Bernard Koopman Harold Kuhn László Lovász David Luenberger Arkadi Nemirovski Yurii Nesterov Lev Pontryagin R. Tyrrell Rockafellar Naum Z. Shor Albert Tucker Major subfields Convex programming studies the case when the objective function is convex (minimization) or concave (maximization) and the constraint set is convex. This can be viewed as a particular case of nonlinear programming or as generalization of linear or convex quadratic programming. Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the constraints are specified using only linear equalities and inequalities. Such a constraint set is called a polyhedron or a polytope if it is bounded. Second-order cone programming (SOCP) is a convex program, and includes certain types of quadratic programs. Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming. Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone. Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints as monomials can be transformed into a convex program. Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming. Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming. Fractional programming studies optimization of ratios of two nonlinear functions. The special class of concave fractional programs can be transformed to a convex optimization problem. Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it. Stochastic programming studies the case in which some of the constraints or parameters depend on random variables. Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. Robust optimization aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set. Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one. Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process. Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions. Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that any optimal solution need be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems. 
Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning). Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints. Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling. Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy exploiting a suitable physically meaningful coarse or surrogate model. In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time): Calculus of variations is concerned with finding the best way to achieve some goal, such as finding a surface whose boundary is a specific curve, but with the least possible area. Optimal control theory is a generalization of the calculus of variations which introduces control policies. Dynamic programming is an approach for solving optimization problems, including stochastic problems with randomness and unknown model parameters, by splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation. Mathematical programming with equilibrium constraints is where the constraints include variational inequalities or complementarities. Multi-objective optimization Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created plotting weight against stiffness of the best designs is known as the Pareto frontier. A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: If it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal. The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker. Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering. Multi-modal or global optimization Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer.
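As a concrete illustration of the Pareto dominance relation described above, the following sketch (an editorial example with invented candidate designs, assuming two objectives that are both to be minimized, such as weight and compliance) filters a finite set of designs down to its non-dominated, Pareto-optimal members.

```python
# Illustrative sketch only: keep the non-dominated (Pareto-optimal) designs
# from a finite candidate set; both objectives are to be minimized.
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one of them.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

designs = [(2.0, 9.0), (3.0, 6.0), (4.0, 6.5), (5.0, 4.0), (6.0, 4.5)]
pareto_set = [d for d in designs if not any(dominates(other, d) for other in designs)]
print(pareto_set)   # [(2.0, 9.0), (3.0, 6.0), (5.0, 4.0)]
```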
Classical optimization techniques, because of their iterative approach, do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization and simulated annealing. Classification of critical points and extrema Feasibility problem The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal. Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative. Existence The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum. Necessary conditions for optimality One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions. Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'. Sufficient conditions for optimality While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality. Sensitivity and continuity of optima The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics. The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters.
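The first-order and second-order conditions above can be checked mechanically for small unconstrained problems. The following sketch (an editorial example, assuming the SymPy library and an invented quadratic objective) solves the first-order condition and then classifies the resulting critical point with the Hessian.

```python
# Illustrative sketch only: first-order condition (zero gradient) and
# second-order condition (Hessian definiteness) for an invented objective.
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = x**2 + x*y + y**2 - 3*x

gradient = [sp.diff(f, v) for v in (x, y)]
critical_points = sp.solve(gradient, [x, y], dict=True)   # first-order condition

H = sp.hessian(f, (x, y))                                 # second-order condition
for point in critical_points:
    # All eigenvalues positive -> positive definite Hessian -> local minimum.
    print(point, H.subs(point).eigenvals())
```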
Calculus of optimization For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, such as those that arise when minimizing the loss functions of neural networks. Positive-negative momentum estimation can help the search escape local minima and move toward the global minimum of the objective function. Further, critical points can be classified using the definiteness of the Hessian matrix: If the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point. Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems. When the objective function is a convex function, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods. Global convergence More generally, if the objective function is not a quadratic function, then many optimization methods use other methods to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points. Computational optimization techniques To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge). Optimization algorithms Simplex algorithm of George Dantzig, designed for linear programming Extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming Variants of the simplex algorithm that are especially suited for network optimization Combinatorial algorithms Quantum optimization algorithms Iterative methods The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high. One major criterion for optimizers is just the number of required function evaluations, as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables.
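Because the cost of an optimizer is often dominated by function evaluations, derivatives are frequently approximated from function values alone. The sketch below (an editorial example with an invented objective) shows a forward-difference gradient approximation of this kind, costing one baseline evaluation plus one evaluation per variable.

```python
# Illustrative sketch only: forward-difference approximation of the gradient,
# using N+1 function evaluations for N variables.
import numpy as np

def approx_gradient(f, x, h=1e-6):
    f0 = f(x)                       # one baseline evaluation
    g = np.zeros_like(x)
    for i in range(x.size):         # plus one evaluation per coordinate
        x_step = x.copy()
        x_step[i] += h
        g[i] = (f(x_step) - f0) / h
    return g

f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
print(approx_gradient(f, np.array([0.0, 0.0])))   # close to (-2.0, 40.0)
```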
The derivatives provide detailed information for such optimizers, but are even harder to calculate, e.g. approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration, the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself. Methods that evaluate Hessians (or approximate Hessians, using finite differences): Newton's method Sequential quadratic programming: A Newton-based method for small-medium scale constrained problems. Some versions can handle large-dimensional problems. Interior point methods: This is a large class of methods for constrained optimization, some of which use only (sub)gradient information and others of which require the evaluation of Hessians. Methods that evaluate gradients, or approximate gradients in some way (or even subgradients): Coordinate descent methods: Algorithms which update a single coordinate in each iteration Conjugate gradient methods: Iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite-precision computers.) Gradient descent (alternatively, "steepest descent" or "steepest ascent"): A (slow) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems. Subgradient methods: An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient–projection methods are similar to conjugate–gradient methods. Bundle method of descent: An iterative method for small–medium-sized problems with locally Lipschitz functions, particularly for convex minimization problems (similar to conjugate gradient methods). Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial time complexity of some combinatorial optimization problems. It has similarities with Quasi-Newton methods. Conditional gradient method (Frank–Wolfe) for approximate minimization of specially structured problems with linear constraints, especially with traffic networks. For general unconstrained problems, this method reduces to the gradient method, which is regarded as obsolete (for almost all problems). Quasi-Newton methods: Iterative methods for medium-large problems (e.g. N<1000). Simultaneous perturbation stochastic approximation (SPSA) method for stochastic optimization; uses random (efficient) gradient approximation. Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used. Interpolation methods Pattern search methods, which have better convergence properties than the Nelder–Mead heuristic (with simplices), which is listed below. Mirror descent Heuristics Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics.
A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics: Differential evolution Dynamic relaxation Evolutionary algorithms Genetic algorithms Hill climbing with random restart Memetic algorithm Nelder–Mead simplicial heuristic: A popular heuristic for approximate minimization (without calling gradients) Particle swarm optimization Simulated annealing Stochastic tunneling Tabu search Applications Mechanics Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be solved by formulating it as a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem. Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems. This approach may be applied in cosmology and astrophysics. Economics and finance Economics is closely enough linked to the optimization of agents that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means with alternative uses". Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63. In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. International trade theory also uses optimization to explain trade patterns between nations. The optimization of portfolios is an example of multi-objective optimization in economics. Since the 1970s, economists have modeled dynamic decisions over time using control theory. For example, dynamic search models are used to study labor-market behavior. A crucial distinction is between deterministic and stochastic models. Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.
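As a concrete illustration of the utility maximization problem mentioned above, the following sketch (an editorial example, assuming the SciPy library; the prices, income and preference parameter are invented) maximizes a Cobb-Douglas utility subject to a linear budget constraint by minimizing the negated utility.

```python
# Illustrative sketch only: consumer utility maximization under a budget
# constraint. All numbers (prices, income, preference alpha) are invented.
import numpy as np
from scipy.optimize import minimize

alpha, prices, income = 0.3, np.array([2.0, 5.0]), 100.0

def negative_utility(q):
    return -(q[0] ** alpha) * (q[1] ** (1.0 - alpha))     # minimize the negative

budget = {"type": "ineq", "fun": lambda q: income - prices @ q}   # spend at most the income
result = minimize(negative_utility, x0=[1.0, 1.0], method="SLSQP",
                  bounds=[(1e-9, None), (1e-9, None)], constraints=[budget])
print(result.x)   # close to the analytic Cobb-Douglas demands (15.0, 14.0)
```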
Electrical engineering Some common applications of optimization techniques in electrical engineering include active filter design, stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures, handset antennas, electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empirical surrogate model and space mapping methodologies since the discovery of space mapping in 1993. Optimization techniques are also used in power-flow analysis. Civil engineering Optimization has been widely used in civil engineering. Construction management and transportation engineering are among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures, resource leveling, water resource allocation, traffic management and schedule optimization. Operations research Another field that uses optimization techniques extensively is operations research. Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods. Control engineering Mathematical optimization is used in much modern controller design. High-level controllers such as model predictive control (MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled. Geophysics Optimization techniques are regularly used in geophysical parameter estimation problems. Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids. The majority of problems in geophysics are nonlinear with both deterministic and stochastic methods being widely used. Molecular modeling Nonlinear optimization methods are widely used in conformational analysis. Computational systems biology Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology. Linear programming has been applied to calculate the maximal possible yields of fermentation products, and to infer gene regulatory networks from multiple microarray datasets as well as transcriptional regulatory networks from high-throughput data. Nonlinear programming has been used to analyze energy metabolism and has been applied to metabolic engineering and parameter estimation in biochemical pathways. Machine learning Solvers See also Brachistochrone curve Curve fitting Deterministic global optimization Goal programming Important publications in optimization Least squares Mathematical Optimization Society (formerly Mathematical Programming Society) Mathematical optimization algorithms Mathematical optimization software Process optimization Simulation-based optimization Test functions for optimization Vehicle routing problem Notes Further reading G.L. Nemhauser, A.H.G. 
Rinnooy Kan and M.J. Todd (eds.): Optimization, Elsevier, (1989). Stanislav Walukiewicz: Integer Programming, Springer, ISBN 978-9048140688, (1990). R. Fletcher: Practical Methods of Optimization, 2nd Ed., Wiley, (2000). Panos M. Pardalos: Approximation and Complexity in Numerical Optimization: Continuous and Discrete Problems, Springer, ISBN 978-1-44194829-8, (2000). Xiaoqi Yang, K. L. Teo, Lou Caccetta (Eds.): Optimization Methods and Applications, Springer, ISBN 978-0-79236866-3, (2001). Panos M. Pardalos and Mauricio G. C. Resende (Eds.): Handbook of Applied Optimization, Oxford Univ Pr on Demand, ISBN 978-0-19512594-8, (2002). Wil Michiels, Emile Aarts, and Jan Korst: Theoretical Aspects of Local Search, Springer, ISBN 978-3-64207148-5, (2006). Der-San Chen, Robert G. Batson, and Yu Dang: Applied Integer Programming: Modeling and Solution, Wiley, ISBN 978-0-47037306-4, (2010). Mykel J. Kochenderfer and Tim A. Wheeler: Algorithms for Optimization, The MIT Press, ISBN 978-0-26203942-0, (2019). Vladislav Bukshtynov: Optimization: Success in Practice, CRC Press (Taylor & Francis), ISBN 978-1-03222947-8, (2023). Rosario Toscano: Solving Optimization Problems with the Heuristic Kalman Algorithm: New Stochastic Methods, Springer, ISBN 978-3-031-52458-5, (2024). Immanuel M. Bomze, Tibor Csendes, Reiner Horst and Panos M. Pardalos: Developments in Global Optimization, Kluwer Academic, ISBN 978-1-4419-4768-0, (2010). External links Links to optimization source codes Operations research Optimization
Mathematical optimization
[ "Mathematics" ]
6,307
[ "Mathematical optimization", "Applied mathematics", "Mathematical analysis", "Operations research" ]
52,081
https://en.wikipedia.org/wiki/Antihydrogen
Antihydrogen (H̄) is the antimatter counterpart of hydrogen. Whereas the common hydrogen atom is composed of an electron and proton, the antihydrogen atom is made up of a positron and antiproton. Scientists hope that studying antihydrogen may shed light on the question of why there is more matter than antimatter in the observable universe, known as the baryon asymmetry problem. Antihydrogen is produced artificially in particle accelerators. Experimental history Accelerators first detected hot antihydrogen in the 1990s. ATHENA studied cold antihydrogen in 2002. It was first trapped by the Antihydrogen Laser Physics Apparatus (ALPHA) team at CERN in 2010, who then measured the structure and other important properties. ALPHA, AEgIS, and GBAR plan to further cool and study antihydrogen atoms. 1s–2s transition measurement In 2016, the ALPHA experiment measured the atomic electron transition between the two lowest energy levels of antihydrogen, 1s–2s. The results, which are identical to those of hydrogen within the experimental resolution, support the idea of matter–antimatter symmetry and CPT symmetry. In the presence of a magnetic field the 1s–2s transition splits into two hyperfine transitions with slightly different frequencies. The team calculated the two transition frequencies, fdd and fcc, for normal hydrogen under the magnetic field in the confinement volume. A single-photon transition between s states is prohibited by quantum selection rules, so to elevate ground state positrons to the 2s level, the confinement space was illuminated by a laser tuned to half the calculated transition frequencies, stimulating the allowed two-photon absorption. Antihydrogen atoms excited to the 2s state can then evolve in one of several ways: they can emit two photons and return directly to the ground state as they were; they can absorb another photon, which ionizes the atom; or they can emit a single photon and return to the ground state via the 2p state—in this case the positron spin can flip or remain the same. Both the ionization and spin-flip outcomes cause the atom to escape confinement. The team calculated that, assuming antihydrogen behaves like normal hydrogen, roughly half the antihydrogen atoms would be lost during the resonant frequency exposure, as compared to the no-laser case. With the laser source tuned 200 kHz below half the transition frequencies, the calculated loss was essentially the same as for the no-laser case. The ALPHA team made batches of antihydrogen, held them for 600 seconds and then tapered down the confinement field over 1.5 seconds while counting how many antihydrogen atoms were annihilated. They did this under three different experimental conditions: Resonance: exposing the confined antihydrogen atoms to a laser source tuned to exactly half the transition frequency for 300 seconds for each of the two transitions, Off-resonance: exposing the confined antihydrogen atoms to a laser source tuned 200 kilohertz below the two resonance frequencies for 300 seconds each, No-laser: confining the antihydrogen atoms without any laser illumination. The two controls, off-resonance and no-laser, were needed to ensure that the laser illumination itself was not causing annihilations, perhaps by liberating normal atoms from the confinement vessel surface that could then combine with the antihydrogen. The team conducted 11 runs of the three cases and found no significant difference between the off-resonance and no-laser runs, but a 58% drop in the number of events detected after the resonance runs.
They were also able to count annihilation events during the runs and found a higher level during the resonance runs, again with no significant difference between the off-resonance and no-laser runs. The results were in good agreement with predictions based on normal hydrogen and can be "interpreted as a test of CPT symmetry at a precision of 200 ppt." Characteristics The CPT theorem of particle physics predicts that antihydrogen atoms have many of the characteristics regular hydrogen has; i.e. the same mass, magnetic moment, and atomic state transition frequencies (see atomic spectroscopy). For example, excited antihydrogen atoms are expected to glow the same color as regular hydrogen. Antihydrogen atoms should be attracted to other matter or antimatter gravitationally with a force of the same magnitude that ordinary hydrogen atoms experience. This would not be true if antimatter had negative gravitational mass, which is considered highly unlikely, though not yet empirically disproven (see gravitational interaction of antimatter). A theoretical framework for negative mass and repulsive gravity (antigravity) between matter and antimatter has recently been developed, and it is compatible with the CPT theorem. When antihydrogen comes into contact with ordinary matter, its constituents quickly annihilate. The positron annihilates with an electron to produce gamma rays. The antiproton, on the other hand, is made up of antiquarks that combine with quarks in either neutrons or protons, resulting in high-energy pions that quickly decay into muons, neutrinos, positrons, and electrons. If antihydrogen atoms were suspended in a perfect vacuum, they should survive indefinitely. As an anti-element, antihydrogen is expected to have exactly the same properties as hydrogen. For example, antihydrogen would be a gas under standard conditions and combine with antioxygen to form antiwater, the antimatter counterpart of H2O. Production The first antihydrogen was produced in 1995 by a team led by Walter Oelert at CERN using a method first proposed by Charles Munger Jr, Stanley Brodsky and Ivan Schmidt Andrade. In the Low Energy Antiproton Ring (LEAR), antiprotons from an accelerator were shot at xenon clusters, producing electron-positron pairs. Antiprotons can capture positrons with only a very small probability, so this method is not suited for substantial production, as calculated. Fermilab measured a somewhat different cross section, in agreement with predictions of quantum electrodynamics. Both resulted in highly energetic, or hot, anti-atoms, unsuitable for detailed study. Subsequently, CERN built the Antiproton Decelerator (AD) to support efforts towards low-energy antihydrogen, for tests of fundamental symmetries. The AD supplies several CERN groups. CERN expects its facilities will be capable of producing 10 million antiprotons per minute. Low-energy antihydrogen Experiments by the ATRAP and ATHENA collaborations at CERN brought together positrons and antiprotons in Penning traps, resulting in synthesis at a typical rate of 100 antihydrogen atoms per second. Antihydrogen was first produced by ATHENA in 2002, and then by ATRAP; by 2004, millions of antihydrogen atoms were made. The atoms synthesized had a relatively high temperature (a few thousand kelvins), and would hit the walls of the experimental apparatus as a consequence and annihilate. Most precision tests require long observation times. ALPHA, a successor of the ATHENA collaboration, was formed to stably trap antihydrogen.
Although antihydrogen is electrically neutral, its spin magnetic moments interact with an inhomogeneous magnetic field; some atoms will be attracted to a magnetic minimum, created by a combination of mirror and multipole fields. In November 2010, the ALPHA collaboration announced that they had trapped 38 antihydrogen atoms for a sixth of a second, the first confinement of neutral antimatter. In June 2011, they trapped 309 antihydrogen atoms, up to 3 simultaneously, for up to 1,000 seconds. They then studied its hyperfine structure, gravity effects, and charge. ALPHA will continue measurements along with the experiments ATRAP, AEgIS and GBAR. In 2018, AEgIS produced a novel pulsed source of antihydrogen atoms with a production time spread of merely 250 nanoseconds. The pulsed source is generated by the charge exchange reaction between Rydberg positronium atoms (produced via the injection of a pulsed positron beam into a nanochanneled Si target, and excited by laser pulses) and antiprotons, trapped, cooled and manipulated in electromagnetic traps. The pulsed production enables the control of the antihydrogen temperature, the formation of an antihydrogen beam, and in the next phase a precision measurement of the gravitational behaviour using an atomic interferometer, the so-called moiré deflectometer. Larger antimatter atoms Larger antimatter atoms such as antideuterium, antitritium, and antihelium are much more difficult to produce. Antideuterium, antihelium-3 and antihelium-4 nuclei have been produced with such high velocities that synthesis of their corresponding atoms poses several technical hurdles. See also Gravitational interaction of antimatter References External links Antimatter Hydrogen Hydrogen physics Gases
Antihydrogen
[ "Physics", "Chemistry" ]
1,854
[ "Antimatter", "Matter", "Phases of matter", "Statistical mechanics", "Gases" ]
52,085
https://en.wikipedia.org/wiki/Protein%20folding
Protein folding is the physical process by which a protein, after synthesis by a ribosome as a linear chain of amino acids, changes from an unstable random coil into a more ordered three-dimensional structure. This structure permits the protein to become biologically functional. The folding of many proteins begins even during the translation of the polypeptide chain. The amino acids interact with each other to produce a well-defined three-dimensional structure, known as the protein's native state. This structure is determined by the amino-acid sequence or primary structure. The correct three-dimensional structure is essential to function, although some parts of functional proteins may remain unfolded, indicating that protein dynamics are important. Failure to fold into a native structure generally produces inactive proteins, but in some instances, misfolded proteins have modified or toxic functionality. Several neurodegenerative and other diseases are believed to result from the accumulation of amyloid fibrils formed by misfolded proteins, the infectious varieties of which are known as prions. Many allergies are caused by the incorrect folding of some proteins because the immune system does not produce the antibodies for certain protein structures. Denaturation of proteins is a process of transition from a folded to an unfolded state. It happens in cooking, burns, proteinopathies, and other contexts. Residual structure present, if any, in the supposedly unfolded state may form a folding initiation site and guide the subsequent folding reactions. The duration of the folding process varies dramatically depending on the protein of interest. When studied outside the cell, the slowest folding proteins require many minutes or hours to fold, primarily due to proline isomerization, and must pass through a number of intermediate states, like checkpoints, before the process is complete. On the other hand, very small single-domain proteins with lengths of up to a hundred amino acids typically fold in a single step. Time scales of milliseconds are the norm, and the fastest known protein folding reactions are complete within a few microseconds. The folding time scale of a protein depends on its size, contact order, and circuit topology. Understanding and simulating the protein folding process has been an important challenge for computational biology since the late 1960s. Process of protein folding Primary structure The primary structure of a protein, its linear amino-acid sequence, determines its native conformation. The specific amino acid residues and their position in the polypeptide chain are the determining factors for which portions of the protein fold closely together and form its three-dimensional conformation. The amino acid composition is not as important as the sequence. The essential fact of folding, however, remains that the amino acid sequence of each protein contains the information that specifies both the native structure and the pathway to attain that state. This is not to say that nearly identical amino acid sequences always fold similarly. Conformations differ based on environmental factors as well; similar proteins fold differently based on where they are found. Secondary structure Formation of a secondary structure is the first step in the folding process that a protein takes to assume its native structure. 
Characteristic of secondary structure are the structures known as alpha helices and beta sheets that fold rapidly because they are stabilized by intramolecular hydrogen bonds, as was first characterized by Linus Pauling. Formation of intramolecular hydrogen bonds provides another important contribution to protein stability. α-helices are formed by hydrogen bonding of the backbone to form a spiral shape (refer to figure on the right). The β pleated sheet is a structure that forms with the backbone bending over itself to form the hydrogen bonds (as displayed in the figure to the left). The hydrogen bonds are between the amide hydrogen and carbonyl oxygen of the peptide bond. There exists anti-parallel β pleated sheets and parallel β pleated sheets where the stability of the hydrogen bonds is stronger in the anti-parallel β sheet as it hydrogen bonds with the ideal 180 degree angle compared to the slanted hydrogen bonds formed by parallel sheets. Tertiary structure The α-Helices and β-Sheets are commonly amphipathic, meaning they have a hydrophilic and a hydrophobic portion. This ability helps in forming tertiary structure of a protein in which folding occurs so that the hydrophilic sides are facing the aqueous environment surrounding the protein and the hydrophobic sides are facing the hydrophobic core of the protein. Secondary structure hierarchically gives way to tertiary structure formation. Once the protein's tertiary structure is formed and stabilized by the hydrophobic interactions, there may also be covalent bonding in the form of disulfide bridges formed between two cysteine residues. These non-covalent and covalent contacts take a specific topological arrangement in a native structure of a protein. Tertiary structure of a protein involves a single polypeptide chain; however, additional interactions of folded polypeptide chains give rise to quaternary structure formation. Quaternary structure Tertiary structure may give way to the formation of quaternary structure in some proteins, which usually involves the "assembly" or "coassembly" of subunits that have already folded; in other words, multiple polypeptide chains could interact to form a fully functional quaternary protein. Driving forces of protein folding Folding is a spontaneous process that is mainly guided by hydrophobic interactions, formation of intramolecular hydrogen bonds, van der Waals forces, and it is opposed by conformational entropy. The folding time scale of an isolated protein depends on its size, contact order, and circuit topology. Inside cells, the process of folding often begins co-translationally, so that the N-terminus of the protein begins to fold while the C-terminal portion of the protein is still being synthesized by the ribosome; however, a protein molecule may fold spontaneously during or after biosynthesis. While these macromolecules may be regarded as "folding themselves", the process also depends on the solvent (water or lipid bilayer), the concentration of salts, the pH, the temperature, the possible presence of cofactors and of molecular chaperones. Proteins will have limitations on their folding abilities by the restricted bending angles or conformations that are possible. These allowable angles of protein folding are described with a two-dimensional plot known as the Ramachandran plot, depicted with psi and phi angles of allowable rotation. Hydrophobic effect Protein folding must be thermodynamically favorable within a cell in order for it to be a spontaneous reaction. 
Since it is known that protein folding is a spontaneous reaction, then it must assume a negative Gibbs free energy value. Gibbs free energy in protein folding is directly related to enthalpy and entropy. For a negative delta G to arise and for protein folding to become thermodynamically favorable, then either enthalpy, entropy, or both terms must be favorable. Minimizing the number of hydrophobic side-chains exposed to water is an important driving force behind the folding process. The hydrophobic effect is the phenomenon in which the hydrophobic chains of a protein collapse into the core of the protein (away from the hydrophilic environment). In an aqueous environment, the water molecules tend to aggregate around the hydrophobic regions or side chains of the protein, creating water shells of ordered water molecules. An ordering of water molecules around a hydrophobic region increases order in a system and therefore contributes a negative change in entropy (less entropy in the system). The water molecules are fixed in these water cages which drives the hydrophobic collapse, or the inward folding of the hydrophobic groups. The hydrophobic collapse introduces entropy back to the system via the breaking of the water cages which frees the ordered water molecules. The multitude of hydrophobic groups interacting within the core of the globular folded protein contributes a significant amount to protein stability after folding, because of the vastly accumulated van der Waals forces (specifically London Dispersion forces). The hydrophobic effect exists as a driving force in thermodynamics only if there is the presence of an aqueous medium with an amphiphilic molecule containing a large hydrophobic region. The strength of hydrogen bonds depends on their environment; thus, H-bonds enveloped in a hydrophobic core contribute more than H-bonds exposed to the aqueous environment to the stability of the native state. In proteins with globular folds, hydrophobic amino acids tend to be interspersed along the primary sequence, rather than randomly distributed or clustered together. However, proteins that have recently been born de novo, which tend to be intrinsically disordered, show the opposite pattern of hydrophobic amino acid clustering along the primary sequence. Chaperones Molecular chaperones are a class of proteins that aid in the correct folding of other proteins in vivo. Chaperones exist in all cellular compartments and interact with the polypeptide chain in order to allow the native three-dimensional conformation of the protein to form; however, chaperones themselves are not included in the final structure of the protein they are assisting in. Chaperones may assist in folding even when the nascent polypeptide is being synthesized by the ribosome. Molecular chaperones operate by binding to stabilize an otherwise unstable structure of a protein in its folding pathway, but chaperones do not contain the necessary information to know the correct native structure of the protein they are aiding; rather, chaperones work by preventing incorrect folding conformations. In this way, chaperones do not actually increase the rate of individual steps involved in the folding pathway toward the native structure; instead, they work by reducing possible unwanted aggregations of the polypeptide chain that might otherwise slow down the search for the proper intermediate and they provide a more efficient pathway for the polypeptide chain to assume the correct conformations. 
Chaperones are not to be confused with folding catalyst proteins, which catalyze chemical reactions responsible for slow steps in folding pathways. Examples of folding catalysts are protein disulfide isomerases and peptidyl-prolyl isomerases that may be involved in formation of disulfide bonds or interconversion between cis and trans stereoisomers of peptide group. Chaperones are shown to be critical in the process of protein folding in vivo because they provide the protein with the aid needed to assume its proper alignments and conformations efficiently enough to become "biologically relevant". This means that the polypeptide chain could theoretically fold into its native structure without the aid of chaperones, as demonstrated by protein folding experiments conducted in vitro; however, this process proves to be too inefficient or too slow to exist in biological systems; therefore, chaperones are necessary for protein folding in vivo. Along with its role in aiding native structure formation, chaperones are shown to be involved in various roles such as protein transport, degradation, and even allow denatured proteins exposed to certain external denaturant factors an opportunity to refold into their correct native structures. A fully denatured protein lacks both tertiary and secondary structure, and exists as a so-called random coil. Under certain conditions some proteins can refold; however, in many cases, denaturation is irreversible. Cells sometimes protect their proteins against the denaturing influence of heat with enzymes known as heat shock proteins (a type of chaperone), which assist other proteins both in folding and in remaining folded. Heat shock proteins have been found in all species examined, from bacteria to humans, suggesting that they evolved very early and have an important function. Some proteins never fold in cells at all except with the assistance of chaperones which either isolate individual proteins so that their folding is not interrupted by interactions with other proteins or help to unfold misfolded proteins, allowing them to refold into the correct native structure. This function is crucial to prevent the risk of precipitation into insoluble amorphous aggregates. The external factors involved in protein denaturation or disruption of the native state include temperature, external fields (electric, magnetic), molecular crowding, and even the limitation of space (i.e. confinement), which can have a big influence on the folding of proteins. High concentrations of solutes, extremes of pH, mechanical forces, and the presence of chemical denaturants can contribute to protein denaturation, as well. These individual factors are categorized together as stresses. Chaperones are shown to exist in increasing concentrations during times of cellular stress and help the proper folding of emerging proteins as well as denatured or misfolded ones. Under some conditions proteins will not fold into their biochemically functional forms. Temperatures above or below the range that cells tend to live in will cause thermally unstable proteins to unfold or denature (this is why boiling makes an egg white turn opaque). Protein thermal stability is far from constant, however; for example, hyperthermophilic bacteria have been found that grow at temperatures as high as 122 °C, which of course requires that their full complement of vital proteins and protein assemblies be stable at that temperature or above. The bacterium E. 
coli is the host for bacteriophage T4, and the phage encoded gp31 protein () appears to be structurally and functionally homologous to E. coli chaperone protein GroES and able to substitute for it in the assembly of bacteriophage T4 virus particles during infection. Like GroES, gp31 forms a stable complex with GroEL chaperonin that is absolutely necessary for the folding and assembly in vivo of the bacteriophage T4 major capsid protein gp23. Fold switching Some proteins have multiple native structures, and change their fold based on some external factors. For example, the KaiB protein switches fold throughout the day, acting as a clock for cyanobacteria. It has been estimated that around 0.5–4% of PDB (Protein Data Bank) proteins switch folds. Protein misfolding and neurodegenerative disease A protein is considered to be misfolded if it cannot achieve its normal native state. This can be due to mutations in the amino acid sequence or a disruption of the normal folding process by external factors. The misfolded protein typically contains β-sheets that are organized in a supramolecular arrangement known as a cross-β structure. These β-sheet-rich assemblies are very stable, very insoluble, and generally resistant to proteolysis. The structural stability of these fibrillar assemblies is caused by extensive interactions between the protein monomers, formed by backbone hydrogen bonds between their β-strands. The misfolding of proteins can trigger the further misfolding and accumulation of other proteins into aggregates or oligomers. The increased levels of aggregated proteins in the cell leads to formation of amyloid-like structures which can cause degenerative disorders and cell death. The amyloids are fibrillary structures that contain intermolecular hydrogen bonds which are highly insoluble and made from converted protein aggregates. Therefore, the proteasome pathway may not be efficient enough to degrade the misfolded proteins prior to aggregation. Misfolded proteins can interact with one another and form structured aggregates and gain toxicity through intermolecular interactions. Aggregated proteins are associated with prion-related illnesses such as Creutzfeldt–Jakob disease, bovine spongiform encephalopathy (mad cow disease), amyloid-related illnesses such as Alzheimer's disease and familial amyloid cardiomyopathy or polyneuropathy, as well as intracellular aggregation diseases such as Huntington's and Parkinson's disease. These age onset degenerative diseases are associated with the aggregation of misfolded proteins into insoluble, extracellular aggregates and/or intracellular inclusions including cross-β amyloid fibrils. It is not completely clear whether the aggregates are the cause or merely a reflection of the loss of protein homeostasis, the balance between synthesis, folding, aggregation and protein turnover. Recently the European Medicines Agency approved the use of Tafamidis or Vyndaqel (a kinetic stabilizer of tetrameric transthyretin) for the treatment of transthyretin amyloid diseases. This suggests that the process of amyloid fibril formation (and not the fibrils themselves) causes the degeneration of post-mitotic tissue in human amyloid diseases. Misfolding and excessive degradation instead of folding and function leads to a number of proteopathy diseases such as antitrypsin-associated emphysema, cystic fibrosis and the lysosomal storage diseases, where loss of function is the origin of the disorder. 
While protein replacement therapy has historically been used to correct the latter disorders, an emerging approach is to use pharmaceutical chaperones to fold mutated proteins to render them functional. Experimental techniques for studying protein folding While inferences about protein folding can be made through mutation studies, typically, experimental techniques for studying protein folding rely on the gradual unfolding or folding of proteins and observing conformational changes using standard non-crystallographic techniques. X-ray crystallography X-ray crystallography is one of the more efficient and important methods for attempting to decipher the three dimensional configuration of a folded protein. To be able to conduct X-ray crystallography, the protein under investigation must be located inside a crystal lattice. To place a protein inside a crystal lattice, one must have a suitable solvent for crystallization, obtain a pure protein at supersaturated levels in solution, and precipitate the crystals in solution. Once a protein is crystallized, X-ray beams can be concentrated through the crystal lattice which would diffract the beams or shoot them outwards in various directions. These exiting beams are correlated to the specific three-dimensional configuration of the protein enclosed within. The X-rays specifically interact with the electron clouds surrounding the individual atoms within the protein crystal lattice and produce a discernible diffraction pattern. Only by relating the electron density clouds with the amplitude of the X-rays can this pattern be read and lead to assumptions of the phases or phase angles involved that complicate this method. Without the relation established through a mathematical basis known as Fourier transform, the "phase problem" would render predicting the diffraction patterns very difficult. Emerging methods like multiple isomorphous replacement use the presence of a heavy metal ion to diffract the X-rays into a more predictable manner, reducing the number of variables involved and resolving the phase problem. Fluorescence spectroscopy Fluorescence spectroscopy is a highly sensitive method for studying the folding state of proteins. Three amino acids, phenylalanine (Phe), tyrosine (Tyr) and tryptophan (Trp), have intrinsic fluorescence properties, but only Tyr and Trp are used experimentally because their quantum yields are high enough to give good fluorescence signals. Both Trp and Tyr are excited by a wavelength of 280 nm, whereas only Trp is excited by a wavelength of 295 nm. Because of their aromatic character, Trp and Tyr residues are often found fully or partially buried in the hydrophobic core of proteins, at the interface between two protein domains, or at the interface between subunits of oligomeric proteins. In this apolar environment, they have high quantum yields and therefore high fluorescence intensities. Upon disruption of the protein's tertiary or quaternary structure, these side chains become more exposed to the hydrophilic environment of the solvent, and their quantum yields decrease, leading to low fluorescence intensities. For Trp residues, the wavelength of their maximal fluorescence emission also depend on their environment. Fluorescence spectroscopy can be used to characterize the equilibrium unfolding of proteins by measuring the variation in the intensity of fluorescence emission or in the wavelength of maximal emission as functions of a denaturant value. 
The denaturant can be a chemical molecule (urea, guanidinium hydrochloride), temperature, pH, pressure, etc. The equilibrium between the different but discrete protein states, i.e. native state, intermediate states, unfolded state, depends on the denaturant value; therefore, the global fluorescence signal of their equilibrium mixture also depends on this value. One thus obtains a profile relating the global protein signal to the denaturant value. The profile of equilibrium unfolding may enable one to detect and identify intermediates of unfolding. General equations have been developed by Hugues Bedouelle to obtain the thermodynamic parameters that characterize the unfolding equilibria for homomeric or heteromeric proteins, up to trimers and potentially tetramers, from such profiles. Fluorescence spectroscopy can be combined with fast-mixing devices such as stopped flow to measure protein folding kinetics, generate a chevron plot and derive a Phi value analysis. Circular dichroism Circular dichroism is one of the most general and basic tools to study protein folding. Circular dichroism spectroscopy measures the absorption of circularly polarized light. In proteins, structures such as alpha helices and beta sheets are chiral, and thus absorb such light. The absorption of this light acts as a marker of the degree of foldedness of the protein ensemble. This technique has been used to measure equilibrium unfolding of the protein by measuring the change in this absorption as a function of denaturant concentration or temperature. A denaturant melt measures the free energy of unfolding as well as the protein's m value, or denaturant dependence. A temperature melt measures the denaturation temperature (Tm) of the protein. As for fluorescence spectroscopy, circular-dichroism spectroscopy can be combined with fast-mixing devices such as stopped flow to measure protein folding kinetics and to generate chevron plots. Vibrational circular dichroism of proteins The more recent developments of vibrational circular dichroism (VCD) techniques for proteins, currently involving Fourier transform (FT) instruments, provide powerful means for determining protein conformations in solution even for very large protein molecules. Such VCD studies of proteins can be combined with X-ray diffraction data for protein crystals, FT-IR data for protein solutions in heavy water (D2O), or quantum computations. Protein nuclear magnetic resonance spectroscopy Protein nuclear magnetic resonance (NMR) is able to collect protein structural data by applying a magnetic field to samples of concentrated protein. In NMR, depending on the chemical environment, certain nuclei will absorb specific radio-frequencies. Because protein structural changes operate on a time scale from ns to ms, NMR is especially equipped to study intermediate structures in timescales of ps to s. Some of the main techniques for studying protein structure and non-folding protein structural changes include COSY, TOCSY, HSQC, time relaxation (T1 & T2), and NOE. NOE is especially useful because magnetization transfers can be observed between spatially proximal hydrogens. Different NMR experiments have varying degrees of timescale sensitivity that are appropriate for different protein structural changes. NOE can pick up bond vibrations or side-chain rotations; however, NOE is too sensitive to pick up protein folding because folding occurs at a larger timescale.
Because protein folding takes place at rates of about 50 to 3000 s−1, CPMG relaxation dispersion and chemical exchange saturation transfer have become some of the primary techniques for NMR analysis of folding. In addition, both techniques are used to uncover excited intermediate states in the protein folding landscape. To do this, CPMG relaxation dispersion takes advantage of the spin echo phenomenon. This technique exposes the target nuclei to a 90° pulse followed by one or more 180° pulses. As the nuclei refocus, a broad distribution indicates the target nuclei are involved in an intermediate excited state. By looking at relaxation dispersion plots, one can extract information on the thermodynamics and kinetics of exchange between the excited and ground states. Saturation transfer measures changes in signal from the ground state as excited states become perturbed. It uses weak radio frequency irradiation to saturate the excited state of particular nuclei, which transfers its saturation to the ground state. This signal is amplified by decreasing the magnetization (and the signal) of the ground state. The main limitations of NMR are that its resolution decreases for proteins larger than 25 kDa and that it is not as detailed as X-ray crystallography. Additionally, protein NMR analysis is quite difficult and can propose multiple solutions from the same NMR spectrum. In a study focused on the folding of SOD1, a protein involved in amyotrophic lateral sclerosis, excited intermediates were studied with relaxation dispersion and saturation transfer. SOD1 had previously been tied to many disease-causing mutants which were assumed to be involved in protein aggregation; however, the mechanism was still unknown. By using relaxation dispersion and saturation transfer experiments, many excited intermediate states were found to misfold in the SOD1 mutants. Dual-polarization interferometry Dual polarisation interferometry is a surface-based technique for measuring the optical properties of molecular layers. When used to characterize protein folding, it measures the conformation by determining the overall size of a monolayer of the protein and its density in real time at sub-Angstrom resolution, although real-time measurement of the kinetics of protein folding is limited to processes that occur slower than ~10 Hz. Similar to circular dichroism, the stimulus for folding can be a denaturant or temperature. Studies of folding with high time resolution The study of protein folding has been greatly advanced in recent years by the development of fast, time-resolved techniques. Experimenters rapidly trigger the folding of a sample of unfolded protein and observe the resulting dynamics. Fast techniques in use include neutron scattering, ultrafast mixing of solutions, photochemical methods, and laser temperature jump spectroscopy. Among the many scientists who have contributed to the development of these techniques are Jeremy Cook, Heinrich Roder, Terry Oas, Harry Gray, Martin Gruebele, Brian Dyer, William Eaton, Sheena Radford, Chris Dobson, Alan Fersht, Bengt Nölting and Lars Konermann. Proteolysis Proteolysis is routinely used to probe the fraction unfolded under a wide range of solution conditions (e.g. fast parallel proteolysis, FASTpp). Single-molecule force spectroscopy Single molecule techniques such as optical tweezers and AFM have been used to understand protein folding mechanisms of isolated proteins as well as proteins with chaperones.
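To give a concrete sense of how a relaxation dispersion experiment reports on an excited state, the sketch below evaluates a commonly used fast-exchange (Luz–Meiboom) approximation for the effective transverse relaxation rate as a function of CPMG pulsing frequency. The populations, chemical-shift difference and exchange rate are hypothetical, and quantitative work frequently requires the more general Carver–Richards treatment.

import numpy as np

def r2_eff(nu_cpmg, r2_0, p_a, p_b, delta_omega, k_ex):
    """Fast-exchange (Luz-Meiboom) approximation for CPMG relaxation dispersion.
    nu_cpmg: CPMG pulsing frequency (Hz); delta_omega: shift difference (rad/s);
    k_ex: exchange rate between ground and excited state (1/s)."""
    phi = p_a * p_b * delta_omega**2
    return r2_0 + (phi / k_ex) * (1.0 - (4.0 * nu_cpmg / k_ex)
                                  * np.tanh(k_ex / (4.0 * nu_cpmg)))

# Hypothetical parameters: 5% excited-state population, 600 rad/s shift difference,
# exchange rate of 3000 1/s, intrinsic R2 of 10 1/s.
nu = np.linspace(50, 1000, 6)   # CPMG frequencies in Hz
for v, r in zip(nu, r2_eff(nu, 10.0, 0.95, 0.05, 600.0, 3000.0)):
    print(f"nu_CPMG = {v:6.1f} Hz   R2,eff = {r:6.2f} 1/s")
# The decrease of R2,eff with increasing pulsing frequency is the "dispersion"
# whose size and shape encode the population, shift difference and exchange rate.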
Optical tweezers have been used to stretch single protein molecules from their C- and N-termini and unfold them to allow study of the subsequent refolding. The technique allows one to measure folding rates at the single-molecule level; for example, optical tweezers have recently been applied to study folding and unfolding of proteins involved in blood coagulation. von Willebrand factor (vWF) is a protein with an essential role in the blood clot formation process. It was discovered – using single-molecule optical tweezers measurements – that calcium-bound vWF acts as a shear force sensor in the blood. Shear force leads to unfolding of the A2 domain of vWF, whose refolding rate is dramatically enhanced in the presence of calcium. Recently, it was also shown that the simple src SH3 domain accesses multiple unfolding pathways under force. Biotin painting Biotin painting enables condition-specific cellular snapshots of (un)folded proteins. Biotin 'painting' shows a bias towards predicted intrinsically disordered proteins. Computational studies of protein folding Computational studies of protein folding include three main aspects related to the prediction of protein stability, kinetics, and structure. A 2013 review summarizes the available computational methods for protein folding. Levinthal's paradox In 1969, Cyrus Levinthal noted that, because of the very large number of degrees of freedom in an unfolded polypeptide chain, the molecule has an astronomical number of possible conformations. An estimate of 3^300 or 10^143 was made in one of his papers. Levinthal's paradox is a thought experiment based on the observation that if a protein were folded by sequential sampling of all possible conformations, it would take an astronomical amount of time to do so, even if the conformations were sampled at a rapid rate (on the nanosecond or picosecond scale). Based upon the observation that proteins fold much faster than this, Levinthal then proposed that a random conformational search does not occur, and the protein must, therefore, fold through a series of meta-stable intermediate states. Energy landscape of protein folding The configuration space of a protein during folding can be visualized as an energy landscape. According to Joseph Bryngelson and Peter Wolynes, proteins follow the principle of minimal frustration, meaning that naturally evolved proteins have optimized their folding energy landscapes, and that nature has chosen amino acid sequences so that the folded state of the protein is sufficiently stable. In addition, the acquisition of the folded state had to become a sufficiently fast process. Even though nature has reduced the level of frustration in proteins, some degree of it remains up to now, as can be observed in the presence of local minima in the energy landscape of proteins. A consequence of these evolutionarily selected sequences is that proteins are generally thought to have globally "funneled energy landscapes" (a term coined by José Onuchic) that are largely directed toward the native state. This "folding funnel" landscape allows the protein to fold to the native state through any of a large number of pathways and intermediates, rather than being restricted to a single mechanism. The theory is supported by both computational simulations of model proteins and experimental studies, and it has been used to improve methods for protein structure prediction and design. The description of protein folding by the leveling free-energy landscape is also consistent with the 2nd law of thermodynamics.
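The arithmetic behind Levinthal's estimate is easy to reproduce: assuming roughly three accessible states per backbone bond and on the order of 300 bonds, exhaustive sampling is hopeless even at picosecond rates. The numbers below simply restate that estimate.

# Levinthal-style back-of-the-envelope estimate (illustrative numbers).
states_per_bond = 3
bonds = 300
sampling_time_s = 1e-12            # one conformation tried per picosecond

total_conformations = states_per_bond ** bonds
seconds_needed = total_conformations * sampling_time_s
years_needed = seconds_needed / (3600 * 24 * 365)
age_of_universe_years = 1.4e10

print(f"3^{bonds} = {total_conformations:.2e} conformations")
print(f"Exhaustive search would take about {years_needed:.2e} years,")
print(f"roughly {years_needed / age_of_universe_years:.1e} times the age of the universe.")
# Real proteins fold in microseconds to seconds, which is the core of the paradox.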
Physically, thinking of landscapes in terms of visualizable potential or total energy surfaces simply with maxima, saddle points, minima, and funnels, rather like geographic landscapes, is perhaps a little misleading. The relevant description is really a high-dimensional phase space in which manifolds might take a variety of more complicated topological forms. The unfolded polypeptide chain begins at the top of the funnel where it may assume the largest number of unfolded variations and is in its highest energy state. Energy landscapes such as these indicate that there are a large number of initial possibilities, but only a single native state is possible; however, it does not reveal the numerous folding pathways that are possible. A different molecule of the same exact protein may be able to follow marginally different folding pathways, seeking different lower energy intermediates, as long as the same native structure is reached. Different pathways may have different frequencies of utilization depending on the thermodynamic favorability of each pathway. This means that if one pathway is found to be more thermodynamically favorable than another, it is likely to be used more frequently in the pursuit of the native structure. As the protein begins to fold and assume its various conformations, it always seeks a more thermodynamically favorable structure than before and thus continues through the energy funnel. Formation of secondary structures is a strong indication of increased stability within the protein, and only one combination of secondary structures assumed by the polypeptide backbone will have the lowest energy and therefore be present in the native state of the protein. Among the first structures to form once the polypeptide begins to fold are alpha helices and beta turns, where alpha helices can form in as little as 100 nanoseconds and beta turns in 1 microsecond. There exists a saddle point in the energy funnel landscape where the transition state for a particular protein is found. The transition state in the energy funnel diagram is the conformation that must be assumed by every molecule of that protein if the protein wishes to finally assume the native structure. No protein may assume the native structure without first passing through the transition state. The transition state can be referred to as a variant or premature form of the native state rather than just another intermediary step. The folding of the transition state is shown to be rate-determining, and even though it exists in a higher energy state than the native fold, it greatly resembles the native structure. Within the transition state, there exists a nucleus around which the protein is able to fold, formed by a process referred to as "nucleation condensation" where the structure begins to collapse onto the nucleus. Modeling of protein folding De novo or ab initio techniques for computational protein structure prediction can be used for simulating various aspects of protein folding. Molecular dynamics (MD) was used in simulations of protein folding and dynamics in silico. First equilibrium folding simulations were done using implicit solvent model and umbrella sampling. Because of computational cost, ab initio MD folding simulations with explicit water are limited to peptides and small proteins. MD simulations of larger proteins remain restricted to dynamics of the experimental structure or its high-temperature unfolding. 
Long-time folding processes (beyond about 1 millisecond), like folding of larger proteins (>150 residues) can be accessed using coarse-grained models. Several large-scale computational projects, such as Rosetta@home, Folding@home and Foldit, target protein folding. Long continuous-trajectory simulations have been performed on Anton, a massively parallel supercomputer designed and built around custom ASICs and interconnects by D. E. Shaw Research. The longest published result of a simulation performed using Anton as of 2011 was a 2.936 millisecond simulation of NTL9 at 355 K. Such simulations are currently able to unfold and refold small proteins (<150 amino acids residues) in equilibrium and predict how mutations affect folding kinetics and stability. In 2020 a team of researchers that used AlphaFold, an artificial intelligence (AI) protein structure prediction program developed by DeepMind placed first in CASP, a long-standing structure prediction contest. The team achieved a level of accuracy much higher than any other group. It scored above 90% for around two-thirds of the proteins in CASP's global distance test (GDT), a test that measures the degree of similarity between the structure predicted by a computational program, and the empirical structure determined experimentally in a lab. A score of 100 is considered a complete match, within the distance cutoff used for calculating GDT. AlphaFold's protein structure prediction results at CASP were described as "transformational" and "astounding". Some researchers noted that the accuracy is not high enough for a third of its predictions, and that it does not reveal the physical mechanism of protein folding for the protein folding problem to be considered solved. Nevertheless, it is considered a significant achievement in computational biology and great progress towards a decades-old grand challenge of biology, predicting the structure of proteins. See also Anfinsen's dogma Chevron plot Denaturation midpoint Downhill folding Folding (chemistry) Phi value analysis Potential energy of protein Protein dynamics Protein misfolding cyclic amplification Protein structure prediction software Proteopathy Time-resolved mass spectrometry References External links Human Proteome Folding Project Biochemical reactions Protein structure
Protein folding
[ "Chemistry", "Biology" ]
7,285
[ "Biochemistry", "Protein structure", "Structural biology", "Biochemical reactions" ]
52,206
https://en.wikipedia.org/wiki/Nanowire
A nanowire is a nanostructure in the form of a wire with a diameter of the order of a nanometre (10−9 m). More generally, nanowires can be defined as structures that have a thickness or diameter constrained to tens of nanometers or less and an unconstrained length. At these scales, quantum mechanical effects are important, which is why such structures are also called "quantum wires". Many different types of nanowires exist, including superconducting (e.g. YBCO), metallic (e.g. Ni, Pt, Au, Ag), semiconducting (e.g. silicon nanowires (SiNWs), InP, GaN) and insulating (e.g. SiO2, TiO2). Molecular nanowires are composed of repeating molecular units, either organic (e.g. DNA) or inorganic (e.g. Mo6S9−xIx). Characteristics Typical nanowires exhibit aspect ratios (length-to-width ratio) of 1000 or more. As such, they are often referred to as one-dimensional (1-D) materials. Nanowires have many interesting properties that are not seen in bulk or 3-D (three-dimensional) materials. This is because electrons in nanowires are quantum confined laterally and thus occupy energy levels that are different from the traditional continuum of energy levels or bands found in bulk materials. A consequence of this quantum confinement in nanowires is that they exhibit discrete values of the electrical conductance. Such discrete values arise from a quantum mechanical constraint on the number of electronic transport channels at the nanometer scale, and they are often approximately equal to integer multiples of the quantum of conductance G0 = 2e^2/h. This conductance is twice the reciprocal of the resistance unit called the von Klitzing constant, defined as RK = h/e^2 ≈ 25,812.8 Ω and named for Klaus von Klitzing, the discoverer of the integer quantum Hall effect. Examples of nanowires include inorganic molecular nanowires (Mo6S9−xIx, Li2Mo6Se6), which can have a diameter of 0.9 nm and be hundreds of micrometers long. Other important examples are based on semiconductors such as InP, Si, GaN, etc., dielectrics (e.g. SiO2, TiO2), or metals (e.g. Ni, Pt). There are many applications where nanowires may become important in electronic, opto-electronic and nanoelectromechanical devices, as additives in advanced composites, for metallic interconnects in nanoscale quantum devices, as field-emitters and as leads for biomolecular nanosensors. Synthesis There are two basic approaches to synthesizing nanowires: top-down and bottom-up. A top-down approach reduces a large piece of material to small pieces, by various means such as lithography, milling or thermal oxidation. A bottom-up approach synthesizes the nanowire by combining constituent adatoms. Most synthesis techniques use a bottom-up approach. Initial synthesis via either method may often be followed by a nanowire thermal treatment step, often involving a form of self-limiting oxidation, to fine tune the size and aspect ratio of the structures. After the bottom-up synthesis, nanowires can be integrated using pick-and-place techniques. Nanowire production uses several common laboratory techniques, including suspension, electrochemical deposition, vapor deposition, and VLS growth. Ion track technology enables growing homogeneous and segmented nanowires down to 8 nm diameter. As the nanowire oxidation rate is controlled by diameter, thermal oxidation steps are often applied to tune their morphology. Suspension A suspended nanowire is a wire produced in a high-vacuum chamber and held at its longitudinal extremities.
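The quantum of conductance and the von Klitzing constant mentioned above are fixed combinations of fundamental constants, so their values can be checked directly; the short sketch below does so with scipy.constants.

from scipy import constants

e = constants.e        # elementary charge (C)
h = constants.h        # Planck constant (J*s)

G0 = 2 * e**2 / h      # conductance quantum
RK = h / e**2          # von Klitzing constant

print(f"G0 = 2e^2/h = {G0:.6e} S  (about 77.5 microsiemens)")
print(f"RK = h/e^2  = {RK:.3f} ohm (about 25.8 kilo-ohm)")
print(f"Check: G0 * RK = {G0 * RK:.1f}  (equals 2, since G0 is twice 1/RK)")
# A ballistic nanowire carrying n open transport channels has conductance of roughly n*G0.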
Suspended nanowires can be produced by: The chemical etching of a larger wire The bombardment of a larger wire, typically with highly energetic ions Indenting the tip of a STM in the surface of a metal near its melting point, and then retracting it VLS growth A common technique for creating a nanowire is vapor–liquid–solid method (VLS), which was first reported by Wagner and Ellis in 1964 for silicon whiskers with diameters ranging from hundreds of nm to hundreds of μm. This process can produce high-quality crystalline nanowires of many semiconductor materials, for example, VLS–grown single crystalline silicon nanowires (SiNWs) with smooth surfaces could have excellent properties, such as ultra-large elasticity. This method uses a source material from either laser ablated particles or a feed gas such as silane. VLS synthesis requires a catalyst. For nanowires, the best catalysts are liquid metal (such as gold) nanoclusters, which can either be self-assembled from a thin film by dewetting, or purchased in colloidal form and deposited on a substrate. The source enters these nanoclusters and begins to saturate them. On reaching supersaturation, the source solidifies and grows outward from the nanocluster. Simply turning off the source can adjust the final length of the nanowire. Switching sources while still in the growth phase can create compound nanowires with super-lattices of alternating materials. For example, a method termed ENGRAVE (Encoded Nanowire GRowth and Appearance through VLS and Etching) developed by the Cahoon Lab at UNC-Chapel Hill allows for nanometer-scale morphological control via rapid in situ dopant modulation. A single-step vapour phase reaction at elevated temperature synthesises inorganic nanowires such as Mo6S9−xIx. From another point of view, such nanowires are cluster polymers. Similar to VLS synthesis, VSS (vapor-solid-solid) synthesis of nanowires (NWs) proceeds through thermolytic decomposition of a silicon precursor (typically phenylsilane). Unlike VLS, the catalytic seed remains in solid state when subjected to high temperature annealing of the substrate. This such type of synthesis is widely used to synthesise metal silicide/germanide nanowires through VSS alloying between a copper substrate and a silicon/germanium precursor. Solution-phase synthesis Solution-phase synthesis refers to techniques that grow nanowires in solution. They can produce nanowires of many types of materials. Solution-phase synthesis has the advantage that it can produce very large quantities, compared to other methods. In one technique, the polyol synthesis, ethylene glycol is both solvent and reducing agent. This technique is particularly versatile at producing nanowires of gold, lead, platinum, and silver. The supercritical fluid-liquid-solid growth method can be used to synthesize semiconductor nanowires, e.g., Si and Ge. By using metal nanocrystals as seeds, Si and Ge organometallic precursors are fed into a reactor filled with a supercritical organic solvent, such as toluene. Thermolysis results in degradation of the precursor, allowing release of Si or Ge, and dissolution into the metal nanocrystals. As more of the semiconductor solute is added from the supercritical phase (due to a concentration gradient), a solid crystallite precipitates, and a nanowire grows uniaxially from the nanocrystal seed. Liquid Bridge Induced Self-assembly Protein nanowires in spider silk have been formed by rolling a droplet of spider silk solution over a superhydrophobic pillar structure. 
Non-catalytic growth The vast majority of nanowire-formation mechanisms are explained through the use of catalytic nanoparticles, which drive the nanowire growth and are either added intentionally or generated during the growth. However, nanowires can also be grown without the help of catalysts, which gives the advantage of pure nanowires and minimizes the number of technological steps. The mechanisms for catalyst-free growth of nanowires (or whiskers) have been known since the 1950s. The simplest methods to obtain metal oxide nanowires use ordinary heating of the metals, e.g. a metal wire heated with a battery; Joule heating in air can easily be done at home. Spontaneous nanowire formation by non-catalytic methods was explained by dislocations present in specific directions or by the growth anisotropy of various crystal faces. More recently, after microscopy advancement, nanowire growth driven by screw dislocations or twin boundaries was demonstrated. Single atomic layer growth on the tip of a CuO nanowire has been observed by in situ TEM microscopy during the non-catalytic synthesis of nanowires. Atomic-scale nanowires can also form completely self-organised, without the need for defects. For example, rare-earth silicide (RESi2) nanowires of a few nm width and height and several 100 nm length form on silicon(001) substrates which are covered with a sub-monolayer of a rare earth metal and subsequently annealed. The lateral dimensions of the nanowires confine the electrons in such a way that the system resembles a (quasi-)one-dimensional metal. Metallic RESi2 nanowires form on silicon(hhk) as well. This system permits tuning the dimensionality between two-dimensional and one-dimensional by the coverage and the tilt angle of the substrate. DNA-templated metallic nanowire synthesis An emerging field is to use DNA strands as scaffolds for metallic nanowire synthesis. This method is investigated both for the synthesis of metallic nanowires in electronic components and for biosensing applications, in which they allow the transduction of a DNA strand into a metallic nanowire that can be electrically detected. Typically, ssDNA strands are stretched, whereafter they are decorated with metallic nanoparticles that have been functionalised with short complementary ssDNA strands. Crack-Defined Shadow Mask Lithography A simple method to produce nanowires with defined geometries has been recently reported using conventional optical lithography. In this approach, optical lithography is used to generate nanogaps using controlled crack formation. These nanogaps are then used as a shadow mask for generating individual nanowires with precise lengths and widths. This technique allows individual nanowires below 20 nm in width to be produced in a scalable way from several metallic and metal oxide materials. Physics Conductivity Several physical reasons predict that the conductivity of a nanowire will be much less than that of the corresponding bulk material. First, there is scattering from the wire boundaries, whose effect will be very significant whenever the wire width is below the free electron mean free path of the bulk material. In copper, for example, the mean free path is 40 nm. Copper nanowires less than 40 nm wide will shorten the mean free path to the wire width. Silver nanowires have very different electrical and thermal conductivity from bulk silver. Nanowires also show other peculiar electrical properties due to their size.
Unlike single wall carbon nanotubes, whose motion of electrons can fall under the regime of ballistic transport (meaning the electrons can travel freely from one electrode to the other), nanowire conductivity is strongly influenced by edge effects. The edge effects come from atoms that lay at the nanowire surface and are not fully bonded to neighboring atoms like the atoms within the bulk of the nanowire. The unbonded atoms are often a source of defects within the nanowire, and may cause the nanowire to conduct electricity more poorly than the bulk material. As a nanowire shrinks in size, the surface atoms become more numerous compared to the atoms within the nanowire, and edge effects become more important. The conductance in a nanowire is described as the sum of the transport by separate channels, each having a different electronic wavefunction normal to the wire. The thinner the wire is, the smaller the number of channels available to the transport of electrons. As a result, wires that are only one or a few atoms wide exhibit quantization of the conductance: i.e. the conductance can assume only discrete values that are multiples of the conductance quantum (where e is the elementary charge and h is the Planck constant) (see also Quantum Hall effect). This quantization has been observed by measuring the conductance of a nanowire suspended between two electrodes while pulling it progressively longer: as its diameter reduces, its conductivity decreases in a stepwise fashion and the plateaus correspond approximately to multiples of G0. The quantization of conductivity is more pronounced in semiconductors like Si or GaAs than in metals, because of their lower electron density and lower effective mass. It can be observed in 25 nm wide silicon fins, and results in increased threshold voltage. In practical terms, this means that a MOSFET with such nanoscale silicon fins, when used in digital applications, will need a higher gate (control) voltage to switch the transistor on. Welding To incorporate nanowire technology into industrial applications, researchers in 2008 developed a method of welding nanowires together: a sacrificial metal nanowire is placed adjacent to the ends of the pieces to be joined (using the manipulators of a scanning electron microscope); then an electric current is applied, which fuses the wire ends. The technique fuses wires as small as 10 nm. For nanowires with diameters less than 10 nm, existing welding techniques, which require precise control of the heating mechanism and which may introduce the possibility of damage, will not be practical. Recently scientists discovered that single-crystalline ultrathin gold nanowires with diameters ≈ 3–10 nm can be "cold-welded" together within seconds by mechanical contact alone, and under remarkably low applied pressures (unlike macro- and micro-scale cold welding process). High-resolution transmission electron microscopy and in situ measurements reveal that the welds are nearly perfect, with the same crystal orientation, strength and electrical conductivity as the rest of the nanowire. The high quality of the welds is attributed to the nanoscale sample dimensions, oriented-attachment mechanisms and mechanically assisted fast surface diffusion. Nanowire welds were also demonstrated between gold and silver, and silver nanowires (with diameters ≈ 5–15 nm) at near room temperature, indicating that this technique may be generally applicable for ultrathin metallic nanowires. 
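A minimal way to picture the stepwise conductance described above is to model a junction whose number of open transport channels drops one at a time as it is stretched; the conductance then falls in plateaus at integer multiples of G0. The channel-versus-elongation rule used here is purely illustrative and is not a physical model of any particular wire.

import numpy as np
from scipy import constants

G0 = 2 * constants.e**2 / constants.h      # conductance quantum (S)

def open_channels(elongation_nm):
    """Toy rule: start with 5 open channels and lose one for every 0.3 nm of pull.
    Real traces depend on the atomic rearrangements of the particular junction."""
    return max(0, 5 - int(elongation_nm / 0.3))

for x in np.arange(0.0, 1.6, 0.1):
    n = open_channels(x)
    print(f"elongation = {x:4.1f} nm   channels = {n}   G = {n * G0:.3e} S  (= {n} * G0)")
# Experimentally, histograms of many such pulling traces show peaks near integer
# multiples of G0, the signature of conductance quantization.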
Combined with other nano- and microfabrication technologies, cold welding is anticipated to have potential applications in the future bottom-up assembly of metallic one-dimensional nanostructures. Mechanical properties The study of nanowire mechanics has boomed since the advent of the atomic force microscope (AFM) and associated technologies, which have enabled direct study of the response of the nanowire to an applied load. Specifically, a nanowire can be clamped from one end, and the free end displaced by an AFM tip. In this cantilever geometry, the height of the AFM is precisely known, and the force applied is precisely known. This allows for construction of a force vs. displacement curve, which can be converted to a stress vs. strain curve if the nanowire dimensions are known. From the stress-strain curve, the elastic constant known as the Young's modulus can be derived, as well as the toughness and degree of strain-hardening. Young's modulus The elastic component of the stress-strain curve, described by the Young's modulus, has been reported for nanowires; however, the modulus depends very strongly on the microstructure. Thus a complete description of the modulus dependence on diameter is lacking. Analytically, continuum mechanics has been applied to estimate the dependence of the modulus on diameter in tension; the resulting expression involves the bulk modulus, the thickness of a shell layer in which the modulus is surface dependent and varies from the bulk, the surface modulus, and the diameter. This equation implies that the modulus increases as the diameter decreases. However, various computational methods such as molecular dynamics have predicted that modulus should decrease as diameter decreases. Experimentally, gold nanowires have been shown to have a Young's modulus which is effectively diameter independent. Similarly, nano-indentation was applied to study the modulus of silver nanowires, and again the modulus was found to be 88 GPa, very similar to the modulus of bulk silver (85 GPa). These works demonstrated that the analytically determined modulus dependence seems to be suppressed in nanowire samples where the crystalline structure highly resembles that of the bulk system. In contrast, Si solid nanowires have been studied, and shown to have a decreasing modulus with diameter. The authors of that work report a Si modulus which is half that of the bulk value, and they suggest that the density of point defects and/or loss of chemical stoichiometry may account for this difference. Yield strength The plastic component of the stress-strain curve (or more accurately the onset of plasticity) is described by the yield strength. The strength of a material is increased by decreasing the number of defects in the solid, which occurs naturally in nanomaterials where the volume of the solid is reduced. As a nanowire is shrunk to a single line of atoms, the strength should theoretically increase all the way to the molecular tensile strength. Gold nanowires have been described as 'ultrahigh strength' due to the extreme increase in yield strength, approaching the theoretical value of E/10. This huge increase in yield is determined to be due to the lack of dislocations in the solid. Without dislocation motion, a 'dislocation-starvation' mechanism is in operation. The material can accordingly experience huge stresses before dislocation motion is possible, and then begins to strain-harden.
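To make the AFM-based modulus measurement concrete, the sketch below extracts a Young's modulus from a hypothetical force-deflection data set for a nanowire clamped at one end and loaded at its free end, using Euler-Bernoulli beam theory for a circular cross-section (deflection = F L^3 / (3 E I), with I = pi d^4 / 64). The dimensions and "measured" deflections are invented for illustration; real analyses must also account for clamp compliance and large-deflection effects.

import numpy as np

# Hypothetical single-clamped (cantilever) nanowire bending experiment.
length = 1.0e-6          # m  (1 micrometre between clamp and AFM tip)
diameter = 100.0e-9      # m  (100 nm)
force = np.array([2, 4, 6, 8, 10]) * 1e-9                        # applied loads (N)
deflection = np.array([1.36, 2.72, 4.08, 5.44, 6.80]) * 1e-9     # "measured" (m), illustrative

I = np.pi * diameter**4 / 64.0                 # second moment of area, circular section
slope = np.polyfit(deflection, force, 1)[0]    # bending stiffness k = F / delta (N/m)
E = slope * length**3 / (3.0 * I)              # from delta = F L^3 / (3 E I)

print(f"Bending stiffness k = {slope:.3f} N/m")
print(f"Young's modulus  E = {E / 1e9:.0f} GPa")   # about 100 GPa for these toy numbers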
For these reasons, nanowires (historically described as 'whiskers') have been used extensively in composites for increasing the overall strength of a material. Moreover, nanowires continue to be actively studied, with research aiming to translate enhanced mechanical properties to novel devices in the fields of MEMS or NEMS. Possible applications Electronic devices Nanowires have been proposed for use as MOSFETs (MOS field-effect transistors). MOS transistors are used widely as fundamental building elements in today's electronic circuits. As predicted by Moore's law, the dimensions of MOS transistors are shrinking into the nanoscale. One of the key challenges of building future nanoscale MOS transistors is ensuring good gate control over the channel. In general, having a wider gate relative to the total transistor length affords greater gate control. Therefore, the high aspect ratio of nanowires potentially allows for good gate control. Due to their one-dimensional structure with unusual optical properties, nanowires are of interest for photovoltaic devices. Compared with their bulk counterparts, nanowire solar cells are less sensitive to impurities that cause bulk recombination, and thus silicon wafers with lower purity can be used to achieve acceptable efficiency, leading to a reduction in material consumption. After p-n junctions were built with nanowires, the next logical step was to build logic gates. By connecting several p-n junctions together, researchers have been able to create the basis of all logic circuits: the AND, OR, and NOT gates have all been built from semiconductor nanowire crossings. In August 2012, researchers reported constructing the first NAND gate from undoped silicon nanowires. This avoids the problem of how to achieve precision doping of complementary nanocircuits, which is unsolved. They were able to control the Schottky barrier to achieve low-resistance contacts by placing a silicide layer at the metal-silicon interface. It is possible that semiconductor nanowire crossings will be important to the future of digital computing. Though there are other uses for nanowires beyond these, the only ones that actually take advantage of physics in the nanometer regime are electronic. In addition, nanowires are also being studied for use as photon ballistic waveguides as interconnects in quantum dot/quantum effect well photon logic arrays. Photons travel inside the tube, while electrons travel on the outside shell. When two nanowires acting as photon waveguides cross each other, the juncture acts as a quantum dot. Conducting nanowires offer the possibility of connecting molecular-scale entities in a molecular computer. Dispersions of conducting nanowires in different polymers are being investigated for use as transparent electrodes for flexible flat-screen displays. Because of their high Young's moduli, their use in mechanically enhancing composites is being investigated. Because nanowires appear in bundles, they may be used as tribological additives to improve friction characteristics and reliability of electronic transducers and actuators. Because of their high aspect ratio, nanowires are also suited to dielectrophoretic manipulation, which offers a low-cost, bottom-up approach to integrating suspended dielectric metal oxide nanowires in electronic devices such as UV, water vapor, and ethanol sensors. Due to their large surface-to-volume ratio, physico-chemical reactions are facilitated on the surface of nanowires.
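The surface-to-volume argument in the last sentence above is easy to quantify for an idealized cylindrical wire: the ratio of lateral surface area to volume is 4/d, so halving the diameter doubles the relative surface area available for reactions. The diameters below are arbitrary examples.

# Surface-to-volume ratio of an idealized cylindrical nanowire (end faces neglected).
# Lateral area = pi*d*L, volume = pi*d^2*L/4, so area/volume = 4/d.
for d_nm in (1000.0, 100.0, 10.0, 1.0):
    d_m = d_nm * 1e-9
    ratio = 4.0 / d_m                     # in 1/m
    print(f"d = {d_nm:7.1f} nm   surface/volume = {ratio:.2e} m^-1")
# Going from a 1 um wire to a 10 nm wire increases the relative surface area
# a hundredfold, which is why surface reactions and sensing benefit so strongly.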
Single nanowire devices for gas and chemical sensing The high aspect ratio of nanowires makes these nanostructures suitable for electrochemical sensing with the potential for ultimate sensitivity. One of the challenges for the use of nanowires in commercial products is related to the isolation, handling, and integration of nanowires in an electrical circuit when using the conventional and manual pick-and-place approach, leading to a very limited throughput. Recent developments in nanowire synthesis methods now allow for parallel production of single nanowire devices with useful applications in electrochemistry, photonics, and gas- and biosensing. Nanowire lasers Nanowire lasers are nano-scaled lasers with potential as optical interconnects and optical data communication on chip. Nanowire lasers are built from III–V semiconductor heterostructures; the high refractive index allows for low optical loss in the nanowire core. Nanowire lasers are subwavelength lasers of only a few hundred nanometers. Nanowire lasers are Fabry–Perot resonator cavities defined by the end facets of the wire with high reflectivity; recent developments have demonstrated repetition rates greater than 200 GHz offering possibilities for optical chip level communications. Sensing of proteins and chemicals using semiconductor nanowires In an analogous way to FET devices in which the modulation of conductance (flow of electrons/holes) in the semiconductor, between the input (source) and the output (drain) terminals, is controlled by electrostatic potential variation (gate-electrode) of the charge carriers in the device conduction channel, the methodology of a Bio/Chem-FET is based on the detection of the local change in charge density, or so-called "field effect", that characterizes the recognition event between a target molecule and the surface receptor. This change in the surface potential influences the Chem-FET device exactly as a 'gate' voltage does, leading to a detectable and measurable change in the device conduction. When these devices are fabricated using semiconductor nanowires as the transistor element, the binding of a chemical or biological species to the surface of the sensor can lead to the depletion or accumulation of charge carriers in the "bulk" of the nanometer-diameter nanowire (i.e. the small cross section available for conduction channels). Moreover, the wire, which serves as a tunable conducting channel, is in close contact with the sensing environment of the target, leading to a short response time, along with orders of magnitude increase in the sensitivity of the device as a result of the huge S/V ratio of the nanowires. While several inorganic semiconducting materials such as Si, Ge, and metal oxides (e.g. In2O3, SnO2, ZnO, etc.) have been used for the preparation of nanowires, Si is usually the material of choice when fabricating nanowire FET-based chemo/biosensors. Several examples of the use of silicon nanowire (SiNW) sensing devices include the ultra-sensitive, real-time sensing of biomarker proteins for cancer, detection of single virus particles, and the detection of nitro-aromatic explosive materials such as 2,4,6-tri-nitrotoluene (TNT) with sensitivities superior to those of canines. Silicon nanowires could also be used in their twisted form, as electromechanical devices, to measure intermolecular forces with great precision.
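A rough sense of why the small cross-section matters for Bio/Chem-FET sensitivity can be obtained from simple charge bookkeeping: for a uniformly doped cylindrical wire, the fraction of carriers depleted or accumulated by a given bound surface charge scales as 4*sigma/(q*Nd*d), i.e. inversely with diameter. The doping level and surface charge density below are hypothetical, and the estimate ignores electrostatic details such as the actual depletion profile and screening by the electrolyte.

q = 1.602e-19                 # elementary charge (C)
N_d = 1e24                    # hypothetical doping density, 1e18 cm^-3 expressed in m^-3
sigma = 1.6e-4                # hypothetical bound surface charge, C/m^2 (~1e15 charges/m^2)

# Carriers per unit length: N_d * pi * d^2 / 4; surface charge per unit length: sigma * pi * d.
# Their ratio, a crude estimate of the relative conductance change, is 4*sigma / (q * N_d * d).
for d_nm in (200.0, 100.0, 50.0, 20.0):
    d = d_nm * 1e-9
    rel_change = 4.0 * sigma / (q * N_d * d)
    print(f"d = {d_nm:5.0f} nm   |dG/G| ~ {rel_change:.2%}")
# Thinner wires convert the same bound charge into a much larger relative signal,
# which is the origin of the sensitivity advantage of nanowire Bio/Chem-FETs.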
Limitations of sensing with silicon nanowire FET devices Generally, the charges on dissolved molecules and macromolecules are screened by dissolved counterions, since in most cases molecules bound to the devices are separated from the sensor surface by approximately 2–12 nm (the size of the receptor proteins or DNA linkers bound to the sensor surface). As a result of the screening, the electrostatic potential that arises from charges on the analyte molecule decays exponentially toward zero with distance. Thus, for optimal sensing, the Debye length must be carefully selected for nanowire FET measurements. One approach to overcoming this limitation employs fragmentation of the antibody-capturing units and control over surface receptor density, allowing more intimate binding to the nanowire of the target protein. This approach proved useful for dramatically enhancing the sensitivity of cardiac biomarker (e.g. troponin) detection directly from serum for the diagnosis of acute myocardial infarction. Nanowire assisted transfer of sensitive TEM samples For a minimal introduction of stress and bending to transmission electron microscopy (TEM) samples (lamellae, thin films, and other mechanically and beam-sensitive samples), when transferring inside a focused ion beam (FIB), flexible metallic nanowires can be attached to a typically rigid micromanipulator. The main advantages of this method include a significant reduction of sample preparation time (quick welding and cutting of nanowire at low beam current), and minimization of stress-induced bending, Pt contamination, and ion beam damage. This technique is particularly suitable for in situ electron microscopy sample preparation. Corn-like nanowires A corn-like nanowire is a one-dimensional nanowire with interconnected nanoparticles on the surface, providing a large percentage of reactive facets. TiO2 corn-like nanowires were first prepared by a surface modification concept using a surface tension stress mechanism through two consecutive hydrothermal operations, and showed an increase of 12% in dye-sensitized solar cell efficiency when used as the light-scattering layer. CdSe corn-like nanowires grown by chemical bath deposition and corn-like γ-Fe2O3@SiO2@TiO2 photocatalysts induced by magnetic dipole interactions have also been reported previously. See also Bacterial nanowires Molecular wire Nanoantenna Nanorod Nanowire battery Non-carbon nanotube Silicon nanowire Solar cell References External links Nanohedron.com | Nano Image Gallery several images of nanowires are included in the galleries. Stanford's nanowire battery holds 10 times the charge of existing ones Original article on the Quantum Hall Effect: K. v. Klitzing, G. Dorda, and M. Pepper; Phys. Rev. Lett. 45, 494–497 (1980). Strongest theoretical nanowire produced at Australia's University of Melbourne. Penn Engineers Design Electronic Computer Memory in Nanoscale Form That Retrieves Data 1,000 Times Faster. One atom thick, hundreds of nanometers long Pt-nanowires are one of the best examples of self-assembly. (University of Twente) Nanoelectronics Electrical connectors Mesoscopic physics
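Because optimal sensing requires matching the receptor geometry to the Debye screening length discussed in the limitations passage above, it is useful to compute that length explicitly. The sketch below uses the standard expression for a symmetric 1:1 electrolyte at room temperature; the ionic strengths roughly correspond to full-strength, 0.1x and 0.01x phosphate-buffered saline and are meant only as illustrative values.

import numpy as np
from scipy import constants

def debye_length_m(ionic_strength_molar, temperature=298.15, eps_r=78.5):
    """Debye screening length for a symmetric 1:1 electrolyte.
    ionic_strength_molar: ionic strength in mol/L (converted to mol/m^3 below)."""
    I = ionic_strength_molar * 1e3            # mol/m^3
    num = eps_r * constants.epsilon_0 * constants.k * temperature
    den = 2.0 * constants.N_A * constants.e**2 * I
    return np.sqrt(num / den)

for label, I in (("~1x PBS", 0.15), ("~0.1x PBS", 0.015), ("~0.01x PBS", 0.0015)):
    print(f"{label:10s}  I = {I:7.4f} M   Debye length = {debye_length_m(I) * 1e9:.2f} nm")
# At physiological ionic strength the screening length (below 1 nm) is shorter than most
# antibody receptors, which is why diluted buffers or small capture fragments are used.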
Nanowire
[ "Physics", "Materials_science" ]
5,768
[ "Quantum mechanics", "Nanoelectronics", "Condensed matter physics", "Nanotechnology", "Mesoscopic physics" ]
52,247
https://en.wikipedia.org/wiki/Fourier%20transform
In mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches. Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced sine and cosine transforms (which correspond to the imaginary and real components of the modern Fourier transform) in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation. The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory. For example, many relatively simple applications use the Dirac delta function, which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint. The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a function of 'position space' to a function of momentum (or a function of space and time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly vector-valued. Still further generalization is possible to functions on groups, which, besides the original Fourier transform on or , notably includes the discrete-time Fourier transform (DTFT, group = ), the discrete Fourier transform (DFT, group = ) and the Fourier series or circular Fourier transform (group = , the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the DFT. Definition The Fourier transform of a complex-valued (Lebesgue) integrable function on the real line, is the complex valued function , defined by the integral Evaluating the Fourier transform for all values of produces the frequency-domain function, and it converges at all frequencies to a continuous function tending to zero at infinity. If decays with all derivatives, i.e., then converges for all frequencies and, by the Riemann–Lebesgue lemma, also decays with all derivatives. 
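As a numerical check of the definition (with the e^(-i 2 pi xi x) kernel used in this article) and of the statement that the Fourier transform of a Gaussian is again a Gaussian, the sketch below approximates the integral by a Riemann sum on a truncated interval. The grid spacing and truncation length are arbitrary numerical choices.

import numpy as np

def fourier_transform(f_vals, x, xi):
    """Riemann-sum approximation of F(xi) = integral of f(x) exp(-i 2 pi xi x) dx."""
    dx = x[1] - x[0]
    return np.array([np.sum(f_vals * np.exp(-2j * np.pi * s * x)) * dx for s in xi])

x = np.linspace(-10, 10, 4001)            # truncated real line
f = np.exp(-np.pi * x**2)                 # Gaussian normalized so that F(xi) = exp(-pi xi^2)
xi = np.array([0.0, 0.5, 1.0, 1.5])

numeric = fourier_transform(f, x, xi)
exact = np.exp(-np.pi * xi**2)
for s, n, e in zip(xi, numeric.real, exact):
    print(f"xi = {s:3.1f}   numeric = {n:.6f}   exact exp(-pi xi^2) = {e:.6f}")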
First introduced in Fourier's Analytical Theory of Heat., the corresponding inversion formula for "sufficiently nice" functions is given by the Fourier inversion theorem, i.e., The functions and are referred to as a Fourier transform pair.  A common notation for designating transform pairs is:   for example   By analogy, the Fourier series can be regarded as abstract Fourier transform on the group of integers. That is, the synthesis of a sequence of complex numbers is defined by the Fourier transform such that are given by the inversion formula, i.e., the analysis for some complex-valued, -periodic function defined on a bounded interval . When the constituent frequencies are a continuum: and . In other words, on the finite interval the function has a discrete decomposition in the periodic functions . On the infinite interval the function has a continuous decomposition in periodic functions . Lebesgue integrable functions A measurable function is called (Lebesgue) integrable if the Lebesgue integral of its absolute value is finite: If is Lebesgue integrable then the Fourier transform, given by , is well-defined for all . Furthermore, is bounded, uniformly continuous and (by the Riemann–Lebesgue lemma) zero at infinity. The space is the space of measurable functions for which the norm is finite, modulo the equivalence relation of equality almost everywhere. The Fourier transform is one-to-one on . However, there is no easy characterization of the image, and thus no easy characterization of the inverse transform. In particular, is no longer valid, as it was stated only under the hypothesis that decayed with all derivatives. While defines the Fourier transform for (complex-valued) functions in , it is not well-defined for other integrability classes, most importantly the space of square-integrable functions . For example, the function is in but not and therefore the Lebesgue integral does not exist. However, the Fourier transform on the dense subspace admits a unique continuous extension to a unitary operator on . This extension is important in part because, unlike the case of , the Fourier transform is an automorphism of the space . In such cases, the Fourier transform can be obtained explicitly by regularizing the integral, and then passing to a limit. In practice, the integral is often regarded as an improper integral instead of a proper Lebesgue integral, but sometimes for convergence one needs to use weak limit or principal value instead of the (pointwise) limits implicit in an improper integral. and each gives three rigorous ways of extending the Fourier transform to square integrable functions using this procedure. A general principle in working with the Fourier transform is that Gaussians are dense in , and the various features of the Fourier transform, such as its unitarity, are easily inferred for Gaussians. Many of the properties of the Fourier transform, can then be proven from two facts about Gaussians: that is its own Fourier transform; and that the Gaussian integral A feature of the Fourier transform is that it is a homomorphism of Banach algebras from equipped with the convolution operation to the Banach algebra of continuous functions under the (supremum) norm. The conventions chosen in this article are those of harmonic analysis, and are characterized as the unique conventions such that the Fourier transform is both unitary on and an algebra homomorphism from to , without renormalizing the Lebesgue measure. 
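To illustrate the analysis/synthesis pair for a periodic function, the short sketch below computes Fourier series coefficients of a period-1 square wave by numerical integration and compares them with the closed form c_n = -2i/(pi n) for odd n (and 0 for even n) that holds when the wave takes the values +1 and -1. The sampling density is an arbitrary numerical choice.

import numpy as np

def fourier_coefficient(f, n, period=1.0, samples=20000):
    """Numerical Fourier series analysis: c_n = (1/P) * integral over one period
    of f(x) exp(-i 2 pi n x / P) dx, approximated by a Riemann sum."""
    x = np.linspace(0.0, period, samples, endpoint=False)
    dx = period / samples
    return np.sum(f(x) * np.exp(-2j * np.pi * n * x / period)) * dx / period

square = lambda x: np.where((x % 1.0) < 0.5, 1.0, -1.0)   # period-1 square wave, values +/-1

for n in (0, 1, 2, 3, 5):
    c = fourier_coefficient(square, n)
    exact = 0.0 if n % 2 == 0 else -2j / (np.pi * n)
    print(f"c_{n}: numeric = {np.round(c, 4)}   exact = {np.round(exact, 4)}")
# Synthesis: summing c_n * exp(i 2 pi n x) over n = -N..N reconstructs the square
# wave up to the Gibbs overshoot near the jumps.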
Angular frequency (ω) When the independent variable () represents time (often denoted by ), the transform variable () represents frequency (often denoted by ). For example, if time is measured in seconds, then frequency is in hertz. The Fourier transform can also be written in terms of angular frequency, whose units are radians per second. The substitution into produces this convention, where function is relabeled Unlike the definition, the Fourier transform is no longer a unitary transformation, and there is less symmetry between the formulas for the transform and its inverse. Those properties are restored by splitting the factor evenly between the transform and its inverse, which leads to another convention: Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites. Background History In 1822, Fourier claimed (see ) that any function, whether continuous or discontinuous, can be expanded into a series of sines. That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since. Complex sinusoids In general, the coefficients are complex numbers, which have two equivalent forms (see Euler's formula): The product with () has these forms: which conveys both amplitude and phase of frequency Likewise, the intuitive interpretation of is that multiplying by has the effect of subtracting from every frequency component of function Only the component that was at frequency can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero. (see ) It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula. Negative frequency Euler's formula introduces the possibility of negative   And is defined Only certain complex-valued have transforms (See Analytic signal. A simple example is )  But negative frequency is necessary to characterize all other complex-valued found in signal processing, partial differential equations, radar, nonlinear optics, quantum mechanics, and others. For a real-valued has the symmetry property (see below). This redundancy enables to distinguish from   But of course it cannot tell us the actual sign of because and are indistinguishable on just the real numbers line. Fourier transform for periodic functions The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for integral in to be defined the function must be absolutely integrable. Instead it is common to use Fourier series. It is possible to extend the definition to include periodic functions by viewing them as tempered distributions. This makes it possible to see a connection between the Fourier series and the Fourier transform for periodic functions that have a convergent Fourier series. If is a periodic function, with period , that has a convergent Fourier series, then: where are the Fourier series coefficients of , and is the Dirac delta function. In other words, the Fourier transform is a Dirac comb function whose teeth are multiplied by the Fourier series coefficients. 
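The symmetry property for real-valued functions mentioned above, namely that the component at -f is the complex conjugate of the component at +f, is easy to verify with a discrete transform; the sketch below uses numpy's FFT on an arbitrary real test vector purely as a stand-in for the continuous transform.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=16)            # arbitrary real-valued signal
X = np.fft.fft(x)

# For a real input, X[N - k] should equal the complex conjugate of X[k].
N = len(x)
for k in range(1, 4):
    print(f"k = {k}:  X[N-k] = {np.round(X[N - k], 4)}   conj(X[k]) = {np.round(np.conj(X[k]), 4)}")
# This redundancy is why the "negative frequency" half of the spectrum of a real
# signal carries no independent information (numpy exposes this via np.fft.rfft).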
Sampling the Fourier transform The Fourier transform of an integrable function can be sampled at regular intervals of arbitrary length These samples can be deduced from one cycle of a periodic function which has Fourier series coefficients proportional to those samples by the Poisson summation formula: The integrability of ensures the periodic summation converges. Therefore, the samples can be determined by Fourier series analysis: When has compact support, has a finite number of terms within the interval of integration. When does not have compact support, numerical evaluation of requires an approximation, such as tapering or truncating the number of terms. Units The frequency variable must have inverse units to the units of the original function's domain (typically named or ). For example, if is measured in seconds, should be in cycles per second or hertz. If the scale of time is in units of seconds, then another Greek letter is typically used instead to represent angular frequency (where ) in units of radians per second. If using for units of length, then must be in inverse length, e.g., wavenumbers. That is to say, there are two versions of the real line: one which is the range of and measured in units of and the other which is the range of and measured in inverse units to the units of These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition. In general, must always be taken to be a linear form on the space of its domain, which is to say that the second real line is the dual space of the first real line. See the article on linear algebra for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general symmetry groups, including the case of Fourier series. That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform—fixing the units on one line does not force the scale of the units on the other line—is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants. In other conventions, the Fourier transform has in the exponent instead of , and vice versa for the inversion formula. This convention is common in modern physics and is the default for Wolfram Alpha, and does not mean that the frequency has become negative, since there is no canonical definition of positivity for frequency of a complex wave. It simply means that is the amplitude of the wave    instead of the wave   (the former, with its minus sign, is often seen in the time dependence for Sinusoidal plane-wave solutions of the electromagnetic wave equation, or in the time dependence for quantum wave functions). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involve have it replaced by . In Electrical engineering the letter is typically used for the imaginary unit instead of because is used for current. When using dimensionless units, the constant factors might not even be written in the transform definition. 
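The Poisson summation formula invoked above can be checked numerically for a function whose transform is known in closed form. The sketch below uses a stretched Gaussian f(x) = exp(-pi (x/a)^2), whose transform under this article's convention is a exp(-pi (a xi)^2); the width a and the truncation of the sums are arbitrary choices.

import numpy as np

a = 1.5
f     = lambda x: np.exp(-np.pi * (x / a)**2)         # stretched Gaussian
f_hat = lambda xi: a * np.exp(-np.pi * (a * xi)**2)   # its Fourier transform (closed form)

n = np.arange(-50, 51)          # truncation of both sums; the terms decay extremely fast
lhs = np.sum(f(n))              # sum of samples of f at the integers
rhs = np.sum(f_hat(n))          # sum of samples of its transform at the integers

print(f"sum_n f(n)     = {lhs:.10f}")
print(f"sum_k f_hat(k) = {rhs:.10f}")   # equal, as the Poisson summation formula asserts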
For instance, in probability theory, the characteristic function of the probability density function of a random variable of continuous type is defined without a negative sign in the exponential, and since the units of are ignored, there is no 2 either: (In probability theory, and in mathematical statistics, the use of the Fourier—Stieltjes transform is preferred, because so many random variables are not of continuous type, and do not possess a density function, and one must treat not functions but distributions, i.e., measures which possess "atoms".) From the higher point of view of group characters, which is much more abstract, all these arbitrary choices disappear, as will be explained in the later section of this article, which treats the notion of the Fourier transform of a function on a locally compact Abelian group. Properties Let and represent integrable functions Lebesgue-measurable on the real line satisfying: We denote the Fourier transforms of these functions as and respectively. Basic properties The Fourier transform has the following basic properties: Linearity Time shifting Frequency shifting Time scaling The case leads to the time-reversal property: Symmetry When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform: From this, various relationships are apparent, for example: The transform of a real-valued function is the conjugate symmetric function Conversely, a conjugate symmetric transform implies a real-valued time-domain. The transform of an imaginary-valued function is the conjugate antisymmetric function and the converse is true. The transform of a conjugate symmetric function is the real-valued function and the converse is true. The transform of a conjugate antisymmetric function is the imaginary-valued function and the converse is true. Conjugation (Note: the ∗ denotes complex conjugation.) In particular, if is real, then is even symmetric (aka Hermitian function): And if is purely imaginary, then is odd symmetric: Real and imaginary parts Zero frequency component Substituting in the definition, we obtain: The integral of over its domain is known as the average value or DC bias of the function. Uniform continuity and the Riemann–Lebesgue lemma The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties. The Fourier transform of any integrable function is uniformly continuous and By the Riemann–Lebesgue lemma, However, need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, in converging to a sum without being absolutely convergent. It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both and are integrable, the inverse equality holds for almost every . As a result, the Fourier transform is injective on . Plancherel theorem and Parseval's theorem Let and be integrable, and let and be their Fourier transforms. If and are also square-integrable, then the Parseval formula follows: where the bar denotes complex conjugation. 
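The Parseval/Plancherel relation has an exact discrete counterpart that makes for a quick numerical check: with numpy's unnormalized FFT convention, the sum of |x[n]|^2 equals the sum of |X[k]|^2 divided by N. The random test vector below is arbitrary.

import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1024) + 1j * rng.normal(size=1024)   # arbitrary complex signal
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x)**2)
energy_freq = np.sum(np.abs(X)**2) / len(x)   # 1/N accounts for numpy's FFT normalization

print(f"time-domain energy      = {energy_time:.6f}")
print(f"frequency-domain energy = {energy_freq:.6f}")
# The two agree to floating-point precision: the transform preserves the "energy"
# of the signal, the discrete analogue of the Plancherel theorem.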
The Plancherel theorem, which follows from the above, states that Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on . On , this extension agrees with original Fourier transform defined on , thus enlarging the domain of the Fourier transform to (and consequently to for ). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. The terminology of these formulas is not quite standardised. Parseval's theorem was proved only for Fourier series, and was first proved by Lyapunov. But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem. See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups. Convolution theorem The Fourier transform translates between convolution and multiplication of functions. If and are integrable functions with Fourier transforms and respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms and (under other conventions for the definition of the Fourier transform a constant factor may appear). This means that if: where denotes the convolution operation, then: In linear time invariant (LTI) system theory, it is common to interpret as the impulse response of an LTI system with input and output , since substituting the unit impulse for yields . In this case, represents the frequency response of the system. Conversely, if can be decomposed as the product of two square integrable functions and , then the Fourier transform of is given by the convolution of the respective Fourier transforms and . Cross-correlation theorem In an analogous manner, it can be shown that if is the cross-correlation of and : then the Fourier transform of is: As a special case, the autocorrelation of function is: for which Differentiation Suppose is an absolutely continuous differentiable function, and both and its derivative are integrable. Then the Fourier transform of the derivative is given by More generally, the Fourier transformation of the th derivative is given by Analogously, , so By applying the Fourier transform and using these formulas, some ordinary differential equations can be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb " is smooth if and only if quickly falls to 0 for ." By using the analogous rules for the inverse Fourier transform, one can also say " quickly falls to 0 for if and only if is smooth." Eigenfunctions The Fourier transform is a linear transform which has eigenfunctions obeying with A set of eigenfunctions is found by noting that the homogeneous differential equation leads to eigenfunctions of the Fourier transform as long as the form of the equation remains invariant under Fourier transform. In other words, every solution and its Fourier transform obey the same equation. Assuming uniqueness of the solutions, every solution must therefore be an eigenfunction of the Fourier transform. 
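Returning to the convolution theorem stated above, the following sketch convolves two Gaussians on a grid and compares the transform of the result with the product of the individual transforms (NumPy; the test functions, grid, and evaluation frequency are arbitrary, and the ordinary-frequency convention is assumed):

    import numpy as np

    x, dx = np.linspace(-20, 20, 8001, retstep=True)
    f = np.exp(-np.pi * x**2)
    g = np.exp(-np.pi * (x - 1.0)**2)

    # Riemann-sum approximation of the continuous convolution (f*g) on the same grid.
    conv = np.convolve(f, g, mode="same") * dx

    def ft(values, s):
        return np.sum(values * np.exp(-2j * np.pi * x * s)) * dx

    s = 0.6                                # arbitrary evaluation frequency
    print(ft(conv, s))                     # transform of the convolution
    print(ft(f, s) * ft(g, s))             # product of the transforms: the same value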
The form of the equation remains unchanged under Fourier transform if can be expanded in a power series in which for all terms the same factor of either one of arises from the factors introduced by the differentiation rules upon Fourier transforming the homogeneous differential equation because this factor may then be cancelled. The simplest allowable leads to the standard normal distribution. More generally, a set of eigenfunctions is also found by noting that the differentiation rules imply that the ordinary differential equation with constant and being a non-constant even function remains invariant in form when applying the Fourier transform to both sides of the equation. The simplest example is provided by which is equivalent to considering the Schrödinger equation for the quantum harmonic oscillator. The corresponding solutions provide an important choice of an orthonormal basis for and are given by the "physicist's" Hermite functions. Equivalently one may use where are the "probabilist's" Hermite polynomials, defined as Under this convention for the Fourier transform, we have that In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on . However, this choice of eigenfunctions is not unique. Because of there are only four different eigenvalues of the Fourier transform (the fourth roots of unity ±1 and ±) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose as a direct sum of four spaces , , , and where the Fourier transform acts on simply by multiplication by . Since the complete set of Hermite functions provides a resolution of the identity they diagonalize the Fourier operator, i.e. the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed: This approach to define the Fourier transform was first proposed by Norbert Wiener. Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time–frequency analysis. In physics, this transform was introduced by Edward Condon. This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the right conventions. Consequently, under the proper conditions it may be expected to result from a self-adjoint generator via The operator is the number operator of the quantum harmonic oscillator written as It can be interpreted as the generator of fractional Fourier transforms for arbitrary values of , and of the conventional continuous Fourier transform for the particular value with the Mehler kernel implementing the corresponding active transform. The eigenfunctions of are the Hermite functions which are therefore also eigenfunctions of Upon extending the Fourier transform to distributions the Dirac comb is also an eigenfunction of the Fourier transform. Inversion and periodicity Under suitable conditions on the function , it can be recovered from its Fourier transform . Indeed, denoting the Fourier transform operator by , so , then for suitable functions, applying the Fourier transform twice simply flips the function: , which can be interpreted as "reversing time". 
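This flip is easy to observe numerically. In the sketch below (NumPy, ordinary-frequency convention, with an off-centre Gaussian as an arbitrary test function) the forward transform is applied twice by quadrature and compared against the time-reversed original:

    import numpy as np

    grid, dg = np.linspace(-25, 25, 4001, retstep=True)    # shared x / frequency grid

    def ft(values, out_var):
        return np.sum(values * np.exp(-2j * np.pi * grid * out_var)) * dg

    f = np.exp(-np.pi * (grid - 1.0)**2)           # Gaussian centred at x = 1

    F1 = np.array([ft(f, s) for s in grid])        # first transform, sampled on the grid
    x0 = 0.4
    print(ft(F1, x0).real)                         # (FFf)(x0), the transform applied twice
    print(np.exp(-np.pi * (-x0 - 1.0)**2))         # f(-x0): the two agree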
Since reversing time is two-periodic, applying this twice yields , so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: . In particular the Fourier transform is invertible (under suitable conditions). More precisely, defining the parity operator such that , we have: These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem. This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the -axis and frequency as the -axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. This approach is particularly studied in signal processing, under time–frequency analysis. Connection with the Heisenberg group The Heisenberg group is a certain group of unitary operators on the Hilbert space of square integrable complex valued functions on the real line, generated by the translations and multiplication by , . These operators do not commute, as their (group) commutator is which is multiplication by the constant (independent of ) (the circle group of unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensional Lie group of triples , with the group law Denote the Heisenberg group by . The above procedure describes not only the group structure, but also a standard unitary representation of on a Hilbert space, which we denote by . Define the linear automorphism of by so that . This can be extended to a unique automorphism of : According to the Stone–von Neumann theorem, the unitary representations and are unitarily equivalent, so there is a unique intertwiner such that This operator is the Fourier transform. Many of the standard properties of the Fourier transform are immediate consequences of this more general framework. For example, the square of the Fourier transform, , is an intertwiner associated with , and so we have is the reflection of the original function . Complex domain The integral for the Fourier transform can be studied for complex values of its argument . Depending on the properties of , this might not converge off the real axis at all, or it might converge to a complex analytic function for all values of , or something in between. 
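The eigenfunction property of the Hermite functions described above can also be verified numerically. The sketch below uses SciPy's physicist's Hermite polynomials and, for convenience, the unitary angular-frequency form of the transform, under which the eigenvalues are (-i)^n; the grid and test frequency are arbitrary choices:

    import numpy as np
    from scipy.special import hermite, factorial

    x, dx = np.linspace(-20, 20, 40001, retstep=True)

    def hermite_function(n, t):
        # physicist's Hermite function: normalised H_n(t) exp(-t^2/2)
        norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
        return norm * hermite(n)(t) * np.exp(-t**2 / 2)

    omega = 1.7                                    # arbitrary test frequency
    for n in range(4):
        psi = hermite_function(n, x)
        transform = np.sum(psi * np.exp(-1j * omega * x)) * dx / np.sqrt(2 * np.pi)
        print(n, transform, (-1j)**n * hermite_function(n, omega))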
The Paley–Wiener theorem says that is smooth (i.e., -times differentiable for all positive integers ) and compactly supported if and only if is a holomorphic function for which there exists a constant such that for any integer , for some constant . (In this case, is supported on .) This can be expressed by saying that is an entire function which is rapidly decreasing in (for fixed ) and of exponential growth in (uniformly in ). (If is not smooth, but only , the statement still holds provided .) The space of such functions of a complex variable is called the Paley—Wiener space. This theorem has been generalised to semisimple Lie groups. If is supported on the half-line , then is said to be "causal" because the impulse response function of a physically realisable filter must have this property, as no effect can precede its cause. Paley and Wiener showed that then extends to a holomorphic function on the complex lower half-plane which tends to zero as goes to infinity. The converse is false and it is not known how to characterise the Fourier transform of a causal function. Laplace transform The Fourier transform is related to the Laplace transform , which is also used for the solution of differential equations and the analysis of filters. It may happen that a function for which the Fourier integral does not converge on the real axis at all, nevertheless has a complex Fourier transform defined in some region of the complex plane. For example, if is of exponential growth, i.e., for some constants , then convergent for all , is the two-sided Laplace transform of . The more usual version ("one-sided") of the Laplace transform is If is also causal, and analytical, then: Thus, extending the Fourier transform to the complex domain means it includes the Laplace transform as a special case in the case of causal functions—but with the change of variable . From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such it can converge for at most exponentially divergent series and integrals, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. Such designs are common in audio processing, where highly nonlinear phase response is sought for, as in reverb. Furthermore, when extended pulselike impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response. There, only the delay circuit in-between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel. In modern mathematics the Laplace transform is conventionally subsumed under the aegis Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea of harmonic analysis. 
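The relationship is easy to see for a simple causal function: for f(t) = exp(-at) on t >= 0, the Fourier transform in the ordinary-frequency convention coincides with the one-sided Laplace transform evaluated at s = 2 pi i xi. A small NumPy check, with the decay rate and evaluation frequency chosen arbitrarily:

    import numpy as np

    a = 2.0
    t, dt = np.linspace(0.0, 40.0, 400001, retstep=True)   # support of the causal function
    f = np.exp(-a * t)

    xi = 0.8
    fourier = np.sum(f * np.exp(-2j * np.pi * t * xi)) * dt
    laplace = 1.0 / (a + 2j * np.pi * xi)          # closed-form one-sided Laplace transform at s = 2*pi*i*xi
    print(fourier, laplace)                        # agree to several digits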
Inversion Still with , if is complex analytic for , then by Cauchy's integral theorem. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis. Theorem: If for , and for some constants , then for any . This theorem implies the Mellin inversion formula for the Laplace transformation, for any , where is the Laplace transform of . The hypotheses can be weakened, as in the results of Carleson and Hunt, to being , provided that be of bounded variation in a closed neighborhood of (cf. Dini test), the value of at be taken to be the arithmetic mean of the left and right limits, and that the integrals be taken in the sense of Cauchy principal values. versions of these inversion formulas are also available. Fourier transform on Euclidean space The Fourier transform can be defined in any arbitrary number of dimensions . As with the one-dimensional case, there are many conventions. For an integrable function , this article takes the definition: where and are -dimensional vectors, and is the dot product of the vectors. Alternatively, can be viewed as belonging to the dual vector space , in which case the dot product becomes the contraction of and , usually written as . All of the basic properties listed above hold for the -dimensional Fourier transform, as do Plancherel's and Parseval's theorem. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds. Uncertainty principle Generally speaking, the more concentrated is, the more spread out its Fourier transform must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function in , its Fourier transform stretches out in . It is not possible to arbitrarily concentrate both a function and its Fourier transform. The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form. Suppose is an integrable and square-integrable function. Without loss of generality, assume that is normalized: It follows from the Plancherel theorem that is also normalized. The spread around may be measured by the dispersion about zero defined by In probability terms, this is the second moment of about zero. The uncertainty principle states that, if is absolutely continuous and the functions and are square integrable, then The equality is attained only in the case where is arbitrary and so that is -normalized. In other words, where is a (normalized) Gaussian function with variance , centered at zero, and its Fourier transform is a Gaussian function with variance . In fact, this inequality implies that: for any , . In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, up to a factor of the Planck constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle. A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as: where is the differential entropy of the probability density function : where the logarithms may be in any base that is consistent. 
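For the normalised Gaussian the dispersion bound stated earlier in this section is attained, which is simple to confirm numerically; the sketch assumes the ordinary-frequency convention, under which the bound is 1/(16 pi^2), and exploits the fact that this Gaussian is its own transform:

    import numpy as np

    x, dx = np.linspace(-20, 20, 40001, retstep=True)
    f = 2**0.25 * np.exp(-np.pi * x**2)       # L2-normalised; its Fourier transform is the same function

    dispersion = np.sum(x**2 * np.abs(f)**2) * dx
    print(dispersion * dispersion)            # product of the two dispersions, since f is self-dual here
    print(1.0 / (16 * np.pi**2))              # the lower bound; equality holds for the Gaussian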
The equality is attained for a Gaussian, as in the previous case. Sine and cosine transforms Fourier's original formulation of the transform did not use complex numbers, but rather sines and cosines. Statisticians and others still use this form. An absolutely integrable function for which Fourier inversion holds can be expanded in terms of genuine frequencies (avoiding negative frequencies, which are sometimes considered hard to interpret physically) by This is called an expansion as a trigonometric integral, or a Fourier integral expansion. The coefficient functions and can be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised): and Older literature refers to the two transform functions, the Fourier cosine transform, , and the Fourier sine transform, . The function can be recovered from the sine and cosine transform using together with trigonometric identities. This is referred to as Fourier's integral formula. Spherical harmonics Let the set of homogeneous harmonic polynomials of degree on be denoted by . The set consists of the solid spherical harmonics of degree . The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if for some in , then . Let the set be the closure in of linear combinations of functions of the form where is in . The space is then a direct sum of the spaces and the Fourier transform maps each space to itself and is possible to characterize the action of the Fourier transform on each space . Let (with in ), then where Here denotes the Bessel function of the first kind with order . When this gives a useful formula for the Fourier transform of a radial function. This is essentially the Hankel transform. Moreover, there is a simple recursion relating the cases and allowing to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one. Restriction problems In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform could be a general class of square integrable functions. As such, the restriction of the Fourier transform of an function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in for . It is possible in some cases to define the restriction of a Fourier transform to a set , provided has non-zero curvature. The case when is the unit sphere in is of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere in is a bounded operator on provided . One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable sets indexed by : such as balls of radius centered at the origin, or cubes of side . For a given integrable function , consider the function defined by: Suppose in addition that . For and , if one takes , then converges to in as tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for . In the case that is taken to be a cube with side length , then convergence still holds. Another natural candidate is the Euclidean ball . 
In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in . For it is a celebrated theorem of Charles Fefferman that the multiplier for the unit ball is never bounded unless . In fact, when , this shows that not only may fail to converge to in , but for some functions , is not even an element of . Fourier transform on function spaces The definition of the Fourier transform naturally extends from to . That is, if then the Fourier transform is given by This operator is bounded as which shows that its operator norm is bounded by . The Riemann–Lebesgue lemma shows that if then its Fourier transform actually belongs to the space of continuous functions which vanish at infinity, i.e., . Furthermore, the image of under is a strict subset of . Similarly to the case of one variable, the Fourier transform can be defined on . The Fourier transform in is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, i.e., where the limit is taken in the sense. Furthermore, is a unitary operator. For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for any we have In particular, the image of is itself under the Fourier transform. On other Lp For , the Fourier transform can be defined on by Marcinkiewicz interpolation, which amounts to decomposing such functions into a fat tail part in plus a fat body part in . In each of these spaces, the Fourier transform of a function in is in , where is the Hölder conjugate of (by the Hausdorff–Young inequality). However, except for , the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in for the range requires the study of distributions. In fact, it can be shown that there are functions in with so that the Fourier transform is not defined as a function. Tempered distributions One might consider enlarging the domain of the Fourier transform from by considering generalized functions, or distributions. A distribution on is a continuous linear functional on the space of compactly supported smooth functions (i.e. bump functions), equipped with a suitable topology. Since is dense in , the Plancherel theorem allows one to extend the definition of the Fourier transform to general functions in by continuity arguments. The strategy is then to consider the action of the Fourier transform on and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not map to . In fact the Fourier transform of an element in can not vanish on an open set; see the above discussion on the uncertainty principle. The Fourier transform can also be defined for tempered distributions , dual to the space of Schwartz functions . A Schwartz function is a smooth function that decays at infinity, along with all of its derivatives, hence and: The Fourier transform is an automorphism of the Schwartz space and, by duality, also an automorphism of the space of tempered distributions. The tempered distributions include well-behaved functions of polynomial growth, distributions of compact support as well as all the integrable functions mentioned above. For the definition of the Fourier transform of a tempered distribution, let and be integrable functions, and let and be their Fourier transforms respectively. 
Then the Fourier transform obeys the following multiplication formula, Every integrable function defines (induces) a distribution by the relation So it makes sense to define the Fourier transform of a tempered distribution by the duality: Extending this to all tempered distributions gives the general definition of the Fourier transform. Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions. Generalizations Fourier–Stieltjes transform on measurable spaces The Fourier transform of a finite Borel measure on is given by the continuous function: and called the Fourier-Stieltjes transform due to its connection with the Riemann-Stieltjes integral representation of (Radon) measures. If is the probability distribution of a random variable then its Fourier–Stieltjes transform is, by definition, a characteristic function. If, in addition, the probability distribution has a probability density function, this definition is subject to the usual Fourier transform. Stated more generally, when is absolutely continuous with respect to the Lebesgue measure, i.e., then and the Fourier-Stieltjes transform reduces to the usual definition of the Fourier transform. That is, the notable difference with the Fourier transform of integrable functions is that the Fourier-Stieltjes transform need not vanish at infinity, i.e., the Riemann–Lebesgue lemma fails for measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle. One example of a finite Borel measure that is not a function is the Dirac measure. Its Fourier transform is a constant function (whose value depends on the form of the Fourier transform used). Locally compact abelian groups The Fourier transform may be generalized to any locally compact abelian group, i.e., an abelian group that is also a locally compact Hausdorff space such that the group operation is continuous. If is a locally compact abelian group, it has a translation invariant measure , called Haar measure. For a locally compact abelian group , the set of irreducible, i.e. one-dimensional, unitary representations are called its characters. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from to the circle group), the set of characters is itself a locally compact abelian group, called the Pontryagin dual of . For a function in , its Fourier transform is defined by The Riemann–Lebesgue lemma holds in this case; is a function vanishing at infinity on . The Fourier transform on is an example; here is a locally compact abelian group, and the Haar measure on can be thought of as the Lebesgue measure on [0,1). Consider the representation of on the complex plane that is a 1-dimensional complex vector space. There are a group of representations (which are irreducible since is 1-dim) where for . The character of such representation, that is the trace of for each and , is itself. In the case of representation of finite group, the character table of the group are rows of vectors such that each row is the character of one irreducible representation of , and these vectors form an orthonormal basis of the space of class functions that map from to by Schur's lemma. Now the group is no longer finite but still compact, and it preserves the orthonormality of character table. 
Each row of the table is the function of and the inner product between two class functions (all functions being class functions since is abelian) is defined as with the normalizing factor . The sequence is an orthonormal basis of the space of class functions . For any representation of a finite group , can be expressed as the span ( are the irreps of ), such that . Similarly for and , . The Pontriagin dual is and for , is its Fourier transform for . Gelfand transform The Fourier transform is also a special case of Gelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above. Given an abelian locally compact Hausdorff topological group , as before we consider space , defined using a Haar measure. With convolution as multiplication, is an abelian Banach algebra. It also has an involution * given by Taking the completion with respect to the largest possibly -norm gives its enveloping -algebra, called the group -algebra of . (Any -norm on is bounded by the norm, therefore their supremum exists.) Given any abelian -algebra , the Gelfand transform gives an isomorphism between and , where is the multiplicative linear functionals, i.e. one-dimensional representations, on with the weak-* topology. The map is simply given by It turns out that the multiplicative linear functionals of , after suitable identification, are exactly the characters of , and the Gelfand transform, when restricted to the dense subset is the Fourier–Pontryagin transform. Compact non-abelian groups The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Removing the assumption that the underlying group is abelian, irreducible unitary representations need not always be one-dimensional. This means the Fourier transform on a non-abelian group takes values as Hilbert space operators. The Fourier transform on compact groups is a major tool in representation theory and non-commutative harmonic analysis. Let be a compact Hausdorff topological group. Let denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation on the Hilbert space of finite dimension for each . If is a finite Borel measure on , then the Fourier–Stieltjes transform of is the operator on defined by where is the complex-conjugate representation of acting on . If is absolutely continuous with respect to the left-invariant probability measure on , represented as for some , one identifies the Fourier transform of with the Fourier–Stieltjes transform of . The mapping defines an isomorphism between the Banach space of finite Borel measures (see rca space) and a closed subspace of the Banach space consisting of all sequences indexed by of (bounded) linear operators for which the norm is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of C*-algebras into a subspace of . Multiplication on is given by convolution of measures and the involution * defined by and has a natural -algebra structure as Hilbert space operators. The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if , then where the summation is understood as convergent in the sense. The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry. 
In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions. Alternatives In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (argument of the Fourier transform at a point), and standing waves are not localized in time – a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent. As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, fractional Fourier transform, Synchrosqueezing Fourier transform, or other functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform. Example The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the function which is a 3 Hz cosine wave (the first term) shaped by a Gaussian envelope function (the second term) that smoothly turns the wave on and off. The next 2 images show the product which must be integrated to calculate the Fourier transform at +3 Hz. The real part of the integrand has a non-negative average value, because the alternating signs of and oscillate at the same rate and in phase, whereas and oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at -3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1. However, when you try to measure a frequency that is not present, both the real and imaginary component of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a function To re-enforce an earlier point, the reason for the response at   Hz  is because    and    are indistinguishable. The transform of    would have just one response, whose amplitude is the integral of the smooth envelope:   whereas   is Applications Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. 
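The numbers quoted in the worked example above are straightforward to reproduce. The sketch below takes a 3 Hz cosine under the Gaussian envelope exp(-pi t^2) (an illustrative choice for the envelope shown in the figures) and evaluates the transform integral by quadrature:

    import numpy as np

    t, dt = np.linspace(-10, 10, 40001, retstep=True)
    f = np.cos(2 * np.pi * 3 * t) * np.exp(-np.pi * t**2)

    def ft_at(freq):
        return np.sum(f * np.exp(-2j * np.pi * freq * t)) * dt

    print(abs(ft_at(3.0)))                         # ~ 0.5: the 3 Hz component is present
    print(abs(ft_at(5.0)))                         # ~ 0:   almost no 5 Hz component
    print(abs(ft_at(3.0)) + abs(ft_at(-3.0)))      # ~ 1:   total amplitude of the 3 Hz cosine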
The operation of differentiation in the time domain corresponds to multiplication by the frequency, so some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain (see Convolution theorem). After performing the desired operations, transformation of the result can be made back to the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics. Analysis of differential equations Perhaps the most important use of the Fourier transformation is to solve partial differential equations. Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is The example we will give, a slightly more difficult one, is the wave equation in one dimension, As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions" Here, and are given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutions which satisfy the first boundary condition. But when one imposes both conditions, there is only one possible solution. It is easier to find the Fourier transform of the solution than to find the solution directly. This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. After is determined, we can apply the inverse Fourier transformation to find . Fourier's method is as follows. First, note that any function of the forms satisfies the wave equation. These are called the elementary solutions. Second, note that therefore any integral satisfies the wave equation for arbitrary . This integral may be interpreted as a continuous linear combination of solutions for the linear equation. Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform of and in the variable . The third step is to examine how to find the specific unknown coefficient functions and that will lead to satisfying the boundary conditions. We are interested in the values of these solutions at . So we will set . Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variable ) of both sides and obtain and Similarly, taking the derivative of with respect to and then applying the Fourier sine and cosine transformations yields and These are four linear equations for the four unknowns and , in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found. In summary, we chose a set of elementary solutions, parametrized by , of which the general solution would be a (continuous) linear combination in the form of an integral over the parameter . But this integral was in the form of a Fourier integral. 
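How differentiation turns into multiplication is easiest to see numerically for the heat equation mentioned at the start of this subsection: each Fourier mode of the initial data decays at its own rate. The sketch below does this with a periodic DFT standing in for the continuous transform and compares against the known closed-form solution; the domain, grid, and time are arbitrary choices:

    import numpy as np

    N, L = 1024, 40.0
    x  = np.linspace(-L / 2, L / 2, N, endpoint=False)
    xi = np.fft.fftfreq(N, d=L / N)                    # ordinary frequencies of the DFT modes

    u0 = np.exp(-x**2)                                 # initial temperature profile
    t  = 2.0

    # u_t = u_xx becomes d(u_hat)/dt = -(2 pi xi)^2 u_hat, solved exactly mode by mode.
    u_hat = np.fft.fft(u0) * np.exp(-(2 * np.pi * xi)**2 * t)
    u = np.fft.ifft(u_hat).real

    exact = np.exp(-x**2 / (1 + 4 * t)) / np.sqrt(1 + 4 * t)   # closed-form solution
    print(np.max(np.abs(u - exact)))                   # tiny: the spectral solution matches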
The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functions and . But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative. The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functions and in terms of the given boundary conditions and . From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in both and rather than operate as Fourier did, who only transformed in the spatial variables. Note that must be considered in the sense of a distribution since is not going to be : as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation in to multiplication by and differentiation with respect to to multiplication by where is the frequency. Then the wave equation becomes an algebraic equation in : This is equivalent to requiring unless . Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviously will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conic . We may as well consider the distributions supported on the conic that are given by distributions of one variable on the line plus distributions on the line as follows: if is any test function, where , and , are distributions of one variable. Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (put , which is clearly of polynomial growth): and Now, as before, applying the one-variable Fourier transformation in the variable to these functions of yields two equations in the two unknown distributions (which can be taken to be ordinary functions if the boundary conditions are or ). From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used. The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well. Fourier-transform spectroscopy The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. 
The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry. Quantum mechanics The Fourier transform is useful in quantum mechanics in at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs of complementary variables, connected by the Heisenberg uncertainty principle. For example, in one dimension, the spatial variable of, say, a particle, can only be measured by the quantum mechanical "position operator" at the cost of losing information about the momentum of the particle. Therefore, the physical state of the particle can either be described by a function, called "the wave function", of or by a function of but not by a function of both variables. The variable is called the conjugate variable to . In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to both and simultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with a -axis and a -axis called the phase space. In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, the -axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing the -axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such that or, equivalently, Physically realisable states are , and so by the Plancherel theorem, their Fourier transforms are also . (Note that since is in units of distance and is in units of momentum, the presence of the Planck constant in the exponent makes the exponent dimensionless, as it should be.) Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason of the Heisenberg uncertainty principle. The other use of the Fourier transform in both quantum mechanics and quantum field theory is to solve the applicable wave equation. In non-relativistic quantum mechanics, Schrödinger's equation for a time-varying wave function in one-dimension, not subject to external forces, is This is the same as the heat equation except for the presence of the imaginary unit . Fourier methods can be used to solve this equation. In the presence of a potential, given by the potential energy function , the equation becomes The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution of given its values for . Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function is not of much practical interest: it is the stationary states that are most important. 
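A minimal numerical version of this change of representation, from a position-space wave packet to its momentum-space counterpart (NumPy; the packet width sigma, mean momentum p0, and the choice of hbar = 1 are illustrative):

    import numpy as np

    hbar, sigma, p0 = 1.0, 0.5, 3.0
    x, dx = np.linspace(-8, 8, 32001, retstep=True)
    psi = (np.pi * sigma**2)**-0.25 * np.exp(-x**2 / (2 * sigma**2) + 1j * p0 * x / hbar)

    p, dp = np.linspace(p0 - 10, p0 + 10, 2001, retstep=True)
    phi = np.array([np.sum(psi * np.exp(-1j * pv * x / hbar)) * dx for pv in p])
    phi /= np.sqrt(2 * np.pi * hbar)                   # momentum-space wave function

    print(np.sum(np.abs(psi)**2) * dx)                 # 1: normalised in position space
    print(np.sum(np.abs(phi)**2) * dp)                 # 1: the transform preserves the norm
    print(p[np.argmax(np.abs(phi))])                   # ~ p0: the packet's mean momentum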
In relativistic quantum mechanics, Schrödinger's equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units, This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions. Finally, the number operator of the quantum harmonic oscillator can be interpreted, for example via the Mehler kernel, as the generator of the Fourier transform . Signal processing The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function. The autocorrelation function of a function is defined by This function is a function of the time-lag elapsing between the values of to be correlated. For most functions that occur in practice, is a bounded even function of the time-lag and for typical noisy signals it turns out to be uniformly continuous with a maximum at . The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values of separated by a time lag. This is a way of searching for the correlation of with its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, if represents the temperature at time , one expects a strong correlation with the temperature at a time lag of 24 hours. It possesses a Fourier transform, This Fourier transform is called the power spectral density function of . (Unless all periodic components are first filtered out from , this integral will diverge, but it is easy to filter out such periodicities.) The power spectrum, as indicated by this density function , measures the amount of variance contributed to the data by the frequency . In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series (ANOVA). Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data. 
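A rough numerical sketch of this procedure, recovering a 3 Hz sinusoid buried in heavy noise by Fourier-transforming an autocovariance estimate (NumPy; the sample rate, record length, noise level, and the crude biased estimator are all choices made for illustration rather than a proper spectral estimator):

    import numpy as np

    rng = np.random.default_rng(0)
    fs, T, f0 = 100.0, 200.0, 3.0                  # sample rate, duration, hidden frequency
    t = np.arange(0, T, 1 / fs)
    signal = np.sin(2 * np.pi * f0 * t) + rng.normal(0, 2.0, t.size)

    # Biased autocovariance estimate for non-negative lags.
    sig = signal - signal.mean()
    acov = np.correlate(sig, sig, mode="full")[sig.size - 1:] / sig.size

    psd = np.abs(np.fft.rfft(acov)) / fs           # crude one-sided spectral estimate
    freqs = np.fft.rfftfreq(acov.size, d=1 / fs)
    print(freqs[np.argmax(psd)])                   # ~ 3.0 Hz despite the noise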
The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out. Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool. Other notations Other common notations for include: In the sciences and engineering it is also common to make substitutions like these: So the transform pair can become A disadvantage of the capital letter notation is when expressing a transform such as or which become the more awkward and In some contexts such as particle physics, the same symbol may be used for both for a function as well as it Fourier transform, with the two only distinguished by their argument I.e. would refer to the Fourier transform because of the momentum argument, while would refer to the original function because of the positional argument. Although tildes may be used as in to indicate Fourier transforms, tildes may also be used to indicate a modification of a quantity with a more Lorentz invariant form, such as , so care must be taken. Similarly, often denotes the Hilbert transform of . The interpretation of the complex function may be aided by expressing it in polar coordinate form in terms of the two real functions and where: is the amplitude and is the phase (see arg function). Then the inverse transform can be written: which is a recombination of all the frequency components of . Each component is a complex sinusoid of the form whose amplitude is and whose initial phase angle (at ) is . The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted and is used to denote the Fourier transform of the function . This mapping is linear, which means that can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function ) can be used to write instead of . Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value for its variable, and this is denoted either as or as . Notice that in the former case, it is implicitly understood that is applied first to and then the resulting function is evaluated at , not the other way around. In mathematics and various applied sciences, it is often necessary to distinguish between a function and the value of when its variable equals , denoted . This means that a notation like formally can be interpreted as the Fourier transform of the values of at . Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example, is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or is used to express the shift property of the Fourier transform. Notice, that the last example is only correct under the assumption that the transformed function is a function of , not of . As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. 
Typically characteristic function is defined As in the case of the "non-unitary angular frequency" convention above, the factor of 2 appears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent. Computation methods The appropriate computation method largely depends how the original mathematical function is represented and the desired form of the output function. In this section we consider both functions of a continuous variable, and functions of a discrete variable (i.e. ordered pairs of and values). For discrete-valued the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency ( or ). When the sinusoids are harmonically related (i.e. when the -values are spaced at integer multiples of an interval), the transform is called discrete-time Fourier transform (DTFT). Discrete Fourier transforms and fast Fourier transforms Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described at . The discrete Fourier transform (DFT), used there, is usually computed by a fast Fourier transform (FFT) algorithm. Analytic integration of closed-form functions Tables of closed-form Fourier transforms, such as and , are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency ( or ). When mathematically possible, this provides a transform for a continuum of frequency values. Many computer algebra systems such as Matlab and Mathematica that are capable of symbolic integration are capable of computing Fourier transforms analytically. For example, to compute the Fourier transform of one might enter the command into Wolfram Alpha. Numerical integration of closed-form continuous functions Discrete sampling of the Fourier transform can also be done by numerical integration of the definition at each value of frequency for which transform is desired. The numerical integration approach works on a much broader class of functions than the analytic approach. Numerical integration of a series of ordered pairs If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs. The DTFT is a common subcase of this more general situation. Tables of important Fourier transforms The following tables record some closed-form Fourier transforms. For functions and denote their Fourier transforms by and . Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse. Functional relationships, one-dimensional The Fourier transforms in this table may be found in or . Square-integrable functions, one-dimensional The Fourier transforms in this table may be found in , , or . Distributions, one-dimensional The Fourier transforms in this table may be found in or . 
Two-dimensional functions Formulas for general -dimensional functions See also Analog signal processing Beevers–Lipson strip Constant-Q transform Discrete Fourier transform DFT matrix Fast Fourier transform Fourier integral operator Fourier inversion theorem Fourier multiplier Fourier series Fourier sine transform Fourier–Deligne transform Fourier–Mukai transform Fractional Fourier transform Indirect Fourier transform Integral transform Hankel transform Hartley transform Laplace transform Least-squares spectral analysis Linear canonical transform List of Fourier-related transforms Mellin transform Multidimensional transform NGC 4622, especially the image NGC 4622 Fourier transform Nonlocal operator Quantum Fourier transform Quadratic Fourier transform Short-time Fourier transform Spectral density Spectral density estimation Symbolic integration Time stretch dispersive Fourier transform Transform (mathematics) External links Encyclopedia of Mathematics Fourier Transform in Crystallography Fourier analysis Integral transforms Unitary operators Joseph Fourier Mathematical physics
Fourier transform
[ "Physics", "Mathematics" ]
14,466
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
52,381
https://en.wikipedia.org/wiki/Thermite
Thermite () is a pyrotechnic composition of metal powder and metal oxide. When ignited by heat or chemical reaction, thermite undergoes an exothermic reduction-oxidation (redox) reaction. Most varieties are not explosive, but can create brief bursts of heat and high temperature in a small area. Its form of action is similar to that of other fuel-oxidizer mixtures, such as black powder. Thermites have diverse compositions. Fuels include aluminium, magnesium, titanium, zinc, silicon, and boron. Aluminium is common because of its high boiling point and low cost. Oxidizers include bismuth(III) oxide, boron(III) oxide, silicon(IV) oxide, chromium(III) oxide, manganese(IV) oxide, iron(III) oxide, iron(II,III) oxide, copper(II) oxide, and lead(II,IV) oxide. In a thermochemical survey comprising twenty-five metals and thirty-two metal oxides, 288 out of 800 binary combinations were characterized by adiabatic temperatures greater than 2000 K. Combinations like these, which possess the thermodynamic potential to produce very high temperatures, are either already known to be reactive or are plausible thermitic systems. The first thermite reaction was discovered in 1893 by the German chemist Hans Goldschmidt, who obtained a patent for his process. Today, thermite is used mainly for thermite welding, particularly for welding together railway tracks. Thermites have also been used in metal refining, disabling munitions, and in incendiary weapons. Some thermite-like mixtures are used as pyrotechnic initiators in fireworks. Chemical reactions In the following example, elemental aluminium reduces the oxide of another metal, in this common example iron oxide, because aluminium forms stronger and more stable bonds with oxygen than iron: Fe2O3 + 2 Al → 2 Fe + Al2O3 The products are aluminium oxide, elemental iron, and a large amount of heat. The reactants are commonly powdered and mixed with a binder to keep the material solid and prevent separation. Other metal oxides can be used, such as chromium oxide, to generate the given metal in its elemental form. For example, a copper thermite reaction using copper oxide and elemental aluminium can be used for creating electric joints in a process called cadwelding, that produces elemental copper (it may react violently): 3 CuO + 2 Al → 3 Cu + Al2O3 Thermites with nanosized particles are described by a variety of terms, such as metastable intermolecular composites, super-thermite, nano-thermite, and nanocomposite energetic materials. History The thermite () reaction was discovered in 1893 and patented in 1895 by German chemist Hans Goldschmidt. Consequently, the reaction is sometimes called the "Goldschmidt reaction" or "Goldschmidt process". Goldschmidt was originally interested in producing very pure metals by avoiding the use of carbon in smelting, but he soon discovered the value of thermite in welding. The first commercial application of thermite was the welding of tram tracks in Essen in 1899. Types Red iron(III) oxide (Fe2O3, commonly known as rust) is the most common iron oxide used in thermite. Black iron(II,III) oxide (Fe3O4, magnetite) also works. Other oxides are occasionally used, such as MnO2 in manganese thermite, Cr2O3 in chromium thermite, SiO2 (quartz) in silicon thermite, or copper(II) oxide in copper thermite, but only for specialized purposes. All of these examples use aluminium as the reactive metal. Fluoropolymers can be used in special formulations, Teflon with magnesium or aluminium being a relatively common example. 
Magnesium/Teflon/Viton is another pyrolant of this type. Combinations of dry ice (frozen carbon dioxide) and reducing agents such as magnesium, aluminium and boron follow the same chemical reaction as with traditional thermite mixtures, producing metal oxides and carbon. Despite the very low temperature of a dry ice thermite mixture, such a system is capable of being ignited with a flame. When the ingredients are finely divided, confined in a pipe and armed like a traditional explosive, this cryo-thermite is detonatable and a portion of the carbon liberated in the reaction emerges in the form of diamond. In principle, any reactive metal could be used instead of aluminium. This is rarely done, because the properties of aluminium are nearly ideal for this reaction: It forms a passivation layer making it safer to handle than many other reactive metals. Its relatively low melting point (660 °C) means that it is easy to melt the metal, so that the reaction can occur mainly in the liquid phase, thus it proceeds fairly quickly. Its high boiling point (2519 °C) enables the reaction to reach very high temperatures, since several processes tend to limit the maximum temperature to just below the boiling point. Such a high boiling point is common among transition metals (e.g., iron and copper boil at 2887 and 2582 °C, respectively), but is especially unusual among the highly reactive metals (cf. magnesium and sodium, which boil at 1090 and 883 °C, respectively). Further, the low density of the aluminium oxide formed as a result of the reaction tends to leave it floating on the resultant pure metal. This is particularly important for reducing contamination in a weld. Although the reactants are stable at room temperature, they burn with an extremely intense exothermic reaction when they are heated to ignition temperature. The products emerge as liquids due to the high temperatures reached (up to 2500 °C (4532°F) with iron(III) oxide)—although the actual temperature reached depends on how quickly heat can escape to the surrounding environment. Thermite contains its own supply of oxygen and does not require any external source of air. Consequently, it cannot be smothered, and may ignite in any environment given sufficient initial heat. It burns well while wet, and cannot be easily extinguished with water—though enough water to remove sufficient heat may stop the reaction. Small amounts of water boil before reaching the reaction. Even so, thermite is used for welding under water. The thermites are characterized by almost complete absence of gas production during burning, high reaction temperature, and production of molten slag. The fuel should have high heat of combustion and produce oxides with low melting point and high boiling point. The oxidizer should contain at least 25% oxygen, have high density, low heat of formation, and produce metal with low melting and high boiling points (so the energy released is not consumed in evaporation of reaction products). Organic binders can be added to the composition to improve its mechanical properties, but they tend to produce endothermic decomposition products, causing some loss of reaction heat and production of gases. The temperature achieved during the reaction determines the outcome. In an ideal case, the reaction produces a well-separated melt of metal and slag. For this, the temperature must be high enough to melt both reaction products, the resulting metal and the fuel oxide. 
Too low a temperature produces a mixture of sintered metal and slag; too high a temperature (above the boiling point of any reactant or product) leads to rapid production of gas, dispersing the burning reaction mixture, sometimes with effects similar to a low-yield explosion. In compositions intended for production of metal by aluminothermic reaction, these effects can be counteracted. Too low a reaction temperature (e.g., when producing silicon from sand) can be boosted with addition of a suitable oxidizer (e.g., sulfur in aluminium-sulfur-sand compositions); too high a temperature can be reduced by using a suitable coolant or slag flux. The flux often used in amateur compositions is calcium fluoride, as it reacts only minimally, has relatively low melting point, low melt viscosity at high temperatures (therefore increasing fluidity of the slag) and forms a eutectic with alumina. Too much flux, however, dilutes the reactants to the point of not being able to sustain combustion. The type of metal oxide also has dramatic influence to the amount of energy produced; the higher the oxide, the higher the amount of energy produced. A good example is the difference between manganese(IV) oxide and manganese(II) oxide, where the former produces too high temperature and the latter is barely able to sustain combustion; to achieve good results, a mixture with proper ratio of both oxides can be used. The reaction rate can be also tuned with particle sizes; coarser particles burn slower than finer particles. The effect is more pronounced with the particles requiring heating to higher temperature to start reacting. This effect is pushed to the extreme with nano-thermites. The temperature achieved in the reaction in adiabatic conditions, when no heat is lost to the environment, can be estimated using Hess’s law – by calculating the energy produced by the reaction itself (subtracting the enthalpy of the reactants from the enthalpy of the products) and subtracting the energy consumed by heating the products (from their specific heat, when the materials only change their temperature, and their enthalpy of fusion and eventually enthalpy of vaporization, when the materials melt or boil). In real conditions, the reaction loses heat to the environment, the achieved temperature is therefore somewhat lower. The heat transfer rate is finite, so the faster the reaction is, the closer to adiabatic condition it runs and the higher is the achieved temperature. Iron thermite The most common composition is iron thermite. The oxidizer used is usually either iron(III) oxide or iron(II,III) oxide. The former produces more heat. The latter is easier to ignite, likely due to the crystal structure of the oxide. Addition of copper or manganese oxides can significantly improve the ease of ignition. The density of prepared thermite is often as low as 0.7 g/cm3. This, in turn, results in relatively poor energy density (about 3 kJ/cm3), rapid burn times, and spray of molten iron due to the expansion of trapped air. Thermite can be pressed to densities as high as 4.9 g/cm3 (almost 16 kJ/cm3) with slow burning speeds (about 1 cm/s). Pressed thermite has higher melting power, i.e. it can melt a steel cup where a low-density thermite would fail. Iron thermite with or without additives can be pressed into cutting devices that have heat-resistant casing and a nozzle. 
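The energy-balance estimate described above can be made concrete with a short calculation. The sketch below is a simplified model rather than a rigorous thermochemical computation: the enthalpies of formation are approximate handbook values, the heat capacities of the products are assumed constants averaged over a wide temperature range, and the products are assumed to be heated together up to the boiling point of iron (3135 K, the cap quoted below).

```python
# Rough energy budget for stoichiometric iron thermite: Fe2O3 + 2 Al -> 2 Fe + Al2O3.
# Property values are approximate handbook figures and the constant heat capacities
# are a deliberate simplification, so this is an order-of-magnitude sketch only.

# Standard enthalpies of formation at 298 K (kJ/mol)
dHf_Fe2O3 = -824.2
dHf_Al2O3 = -1675.7
dH_rxn = dHf_Al2O3 - dHf_Fe2O3              # about -851.5 kJ per mol of Fe2O3 reacted

# Heat released per gram of mixture (Fe2O3 ~ 159.7 g/mol, Al ~ 27.0 g/mol)
m_mix_g = 159.7 + 2 * 27.0                   # ~213.7 g per formula unit
q_J_per_g = -dH_rxn * 1e3 / m_mix_g          # ~3.99 kJ/g, i.e. roughly 950 cal/g
print(f"heat release ~ {q_J_per_g:.0f} J/g ({q_J_per_g / 4.184:.0f} cal/g)")

# Crude adiabatic estimate: spend the reaction heat on warming and melting the
# products (2 mol Fe = 111.7 g, 1 mol Al2O3 = 102.0 g) up to the boiling point of iron.
q_budget = -dH_rxn * 1e3                     # J available per formula unit
cp_products = 111.7 * 0.75 + 102.0 * 1.2     # J/K, assumed average specific heats in J/(g.K)
dH_fusion = 2 * 13.8e3 + 111e3               # melting 2 mol Fe and 1 mol Al2O3 (J)
sensible = cp_products * (3135 - 298)        # heating the products from 298 K to 3135 K
leftover = q_budget - sensible - dH_fusion
print(f"energy left at the iron boiling point (3135 K): {leftover / 1e3:.0f} kJ, "
      "which goes into vaporizing part of the iron")
```

With these rough inputs the heat release comes out near 950 cal/g and the temperature is pinned at the boiling point of iron, with the surplus energy vaporizing a small fraction of the metal, which is broadly consistent with the figures quoted for the oxygen-balanced mixture below.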
Oxygen-balanced iron thermite 2Al + Fe2O3 has theoretical maximum density of 4.175 g/cm3 an adiabatic burn temperature of 3135 K or 2862 °C or 5183 °F (with phase transitions included, limited by iron, which boils at 3135 K), the aluminium oxide is (briefly) molten and the produced iron is mostly liquid with part of it being in gaseous form - 78.4 g of iron vapor per kg of thermite are produced. The energy content is 945.4 cal/g (3 956 J/g). The energy density is 16,516 J/cm3. The original mixture, as invented, used iron oxide in the form of mill scale. The composition was very difficult to ignite. Copper thermite Copper thermite can be prepared using either copper(I) oxide (Cu2O, red) or copper(II) oxide (CuO, black). The burn rate tends to be very fast and the melting point of copper is relatively low, so the reaction produces a significant amount of molten copper in a very short time. Copper(II) thermite reactions can be so fast that it can be considered a type of flash powder. An explosion can occur, which sends a spray of copper drops to considerable distances. Oxygen-balanced mixture has theoretical maximum density of 5.109 g/cm3, adiabatic flame temperature 2843 K (phase transitions included) with the aluminium oxide being molten and copper in both liquid and gaseous form; 343 g of copper vapor per kg of this thermite are produced. The energy content is 974 cal/g. Copper(I) thermite has industrial uses in e.g., welding of thick copper conductors (cadwelding). This kind of welding is being evaluated also for cable splicing on the US Navy fleet, for use in high-current systems, e.g., electric propulsion. Oxygen-balanced mixture has theoretical maximum density of 5.280 g/cm3, adiabatic flame temperature 2843 K (phase transitions included) with the aluminium oxide being molten and copper in both liquid and gaseous form; 77.6 g of copper vapor per kg of this thermite are produced. The energy content is 575.5 cal/g. Thermates Thermate composition is a thermite enriched with a salt-based oxidizer (usually nitrates, e.g., barium nitrate, or peroxides). In contrast with thermites, thermates burn with evolution of flame and gases. The presence of the oxidizer makes the mixture easier to ignite and improves penetration of target by the burning composition, as the evolved gas is projecting the molten slag and providing mechanical agitation. This mechanism makes thermate more suitable than thermite for incendiary purposes and for emergency destruction of sensitive equipment (e.g., cryptographic devices), as thermite's effect is more localized. Ignition Metals, under the right conditions, burn in a process similar to the combustion of wood or gasoline. In fact, rust is the result of oxidation of steel or iron at very slow rates. A thermite reaction results when the correct mixtures of metallic fuels combine and ignite. Ignition itself requires extremely high temperatures. Ignition of a thermite reaction normally requires a sparkler or easily obtainable magnesium ribbon, but may require persistent efforts, as ignition can be unreliable and unpredictable. These temperatures cannot be reached with conventional black powder fuses, nitrocellulose rods, detonators, pyrotechnic initiators, or other common igniting substances. Even when the thermite is hot enough to glow bright red, it does not ignite, as it has a very high ignition temperature. Starting the reaction is possible using a propane torch if done correctly. Often, strips of magnesium metal are used as fuses. 
Because metals burn without releasing cooling gases, they can potentially burn at extremely high temperatures. Reactive metals such as magnesium can easily reach temperatures sufficiently high for thermite ignition. Magnesium ignition remains popular among amateur thermite users, mainly because it can be easily obtained, but a piece of the burning strip can fall off into the mixture, resulting in premature ignition. The reaction between potassium permanganate and glycerol or ethylene glycol is used as an alternative to the magnesium method. When these two substances mix, a spontaneous reaction begins, slowly increasing the temperature of the mixture until it produces flames. The heat released by the oxidation of glycerine is sufficient to initiate a thermite reaction. Apart from magnesium ignition, some amateurs also choose to use sparklers to ignite the thermite mixture. These reach the necessary temperatures and provide enough time before the burning point reaches the sample. This can be a dangerous method, as the iron sparks, like the magnesium strips, burn at thousands of degrees and can ignite the thermite, though the sparkler itself is not in contact with it. This is especially dangerous with finely powdered thermite. Match heads burn hot enough to ignite thermite. Use of match heads enveloped with aluminium foil and a sufficiently long viscofuse/electric match leading to the match heads is possible. Similarly, finely powdered thermite can be ignited by a flint spark lighter, as the sparks are burning metal (in this case, the highly reactive rare-earth metals lanthanum and cerium). Therefore, it is unsafe to strike a lighter close to thermite. Civilian uses Thermite reactions have many uses. It is not an explosive; instead, it operates by exposing a very small area to extremely high temperatures. Intense heat focused on a small spot can be used to cut through metal or weld metal components together both by melting metal from the components, and by injecting molten metal from the thermite reaction itself. Thermite may be used for repair by the welding in-place of thick steel sections such as locomotive axle-frames where the repair can take place without removing the part from its installed location. Thermite can be used for quickly cutting or welding steel such as rail tracks, without requiring complex or heavy equipment. However, defects such as slag inclusions and voids (holes) are often present in such welded junctions, so great care is needed to operate the process successfully. The numerical analysis of thermite welding of rails has been approached similar to casting cooling analysis. Both this finite element analysis and experimental analysis of thermite rail welds has shown that weld gap is the most influential parameter affecting defect formation. Increasing weld gap has been shown to reduce shrinkage cavity formation and cold lap welding defects, and increasing preheat and thermite temperature further reduces these defects. However, reducing these defects promotes a second form of defect: microporosity. Care must also be taken to ensure that the rails remain straight, without resulting in dipped joints, which can cause wear on high speed and heavy axle load lines. Studies to make the hardness of thermite welds to repair tracks have made improvements to the hardness to compare more to the original tracks while keeping its portable nature. 
As the reaction of thermite is oxidation-reduction and environmentally friendly, it has started to be adapted into use for sealing oil wells instead of using concrete. Though thermite is usually in a powder-state, a diluted mixture can reduce damage to the surroundings during the process, though too much alumina can risk hurting the integrity of the seal. A higher concentration of mixture was needed to melt the plastic of a model tube, making it a favorable mixture. Other experiments have been done to simulate the heat flux of the well sealing to predict the temperature on the surface of the seal over time. A thermite reaction, when used to purify the ores of some metals, is called the , or aluminothermic reaction. An adaptation of the reaction, used to obtain pure uranium, was developed as part of the Manhattan Project at Ames Laboratory under the direction of Frank Spedding. It is sometimes called the Ames process. Copper thermite is used for welding together thick copper wires for the purpose of electrical connections. It is used extensively by the electrical utilities and telecommunications industries (exothermic welded connections). Military uses Thermite hand grenades and charges are typically used by armed forces in both an anti-materiel role and in the partial destruction of equipment, the latter being common when time is not available for safer or more thorough methods. For example, thermite can be used for the emergency destruction of cryptographic equipment when there is a danger that it might be captured by enemy troops. Because standard iron-thermite is difficult to ignite, burns with practically no flame and has a small radius of action, standard thermite is rarely used on its own as an incendiary composition. In general, an increase in the volume of gaseous reaction products of a thermite blend increases the heat transfer rate (and therefore damage) of that particular thermite blend. It is usually used with other ingredients that increase its incendiary effects. Thermate-TH3 is a mixture of thermite and pyrotechnic additives that have been found superior to standard thermite for incendiary purposes. Its composition by weight is generally about 68.7% thermite, 29.0% barium nitrate, 2.0% sulfur, and 0.3% of a binder (such as PBAN). The addition of barium nitrate to thermite increases its thermal effect, produces a larger flame, and significantly reduces the ignition temperature. Although the primary purpose of Thermate-TH3 by the armed forces is as an incendiary anti-materiel weapon, it also has uses in welding together metal components. A classic military use for thermite is disabling artillery pieces, and it has been used for this purpose since World War II, such as at Pointe du Hoc, Normandy. Because it permanently disables artillery pieces without the use of explosive charges, thermite can be used when silence is necessary to an operation. This can be accomplished by inserting one or more armed thermite grenades into the breech, then quickly closing it; this welds the breech shut and makes loading the weapon impossible. During World War II, both German and Allied incendiary bombs used thermite mixtures. Incendiary bombs usually consisted of dozens of thin, thermite-filled canisters (bomblets) ignited by a magnesium fuse. Incendiary bombs created massive damage in numerous cities due to the fires started by the thermite. Cities that primarily consisted of wooden buildings were especially susceptible. These incendiary bombs were used primarily during nighttime air raids. 
Bombsights could not be used at night, creating the need for munitions that could destroy targets without requiring precision placement. So called Dragon drones equipped with thermite munitions were used by the Ukrainian army during the Russian invasion of Ukraine against Russian positions. Hazards Thermite usage is hazardous due to the extremely high temperatures produced and the extreme difficulty in smothering a reaction once initiated. Small streams of molten iron released in the reaction can travel considerable distances and may melt through metal containers, igniting their contents. Additionally, flammable metals with relatively low boiling points such as zinc (with a boiling point of 907 °C, which is about 1,370 °C below the temperature at which thermite burns) could potentially spray superheated boiling metal violently into the air if near a thermite reaction. If, for some reason, thermite is contaminated with organics, hydrated oxides and other compounds able to produce gases upon heating or reaction with thermite components, the reaction products may be sprayed. Moreover, if the thermite mixture contains enough empty spaces with air and burns fast enough, the super-heated air also may cause the mixture to spray. For this reason it is preferable to use relatively crude powders, so the reaction rate is moderate and hot gases could escape the reaction zone. Preheating of thermite before ignition can easily be done accidentally, for example by pouring a new pile of thermite over a hot, recently ignited pile of thermite slag. When ignited, preheated thermite can burn almost instantaneously, releasing light and heat energy at a much higher rate than normal and causing burns and eye damage at what would normally be a reasonably safe distance. The thermite reaction can take place accidentally in industrial locations where workers use abrasive grinding and cutting wheels with ferrous metals. Using aluminium in this situation produces a mixture of oxides that can explode violently. Mixing water with thermite or pouring water onto burning thermite can cause a steam explosion, spraying hot fragments in all directions. Thermite's main ingredients were also utilized for their individual qualities, specifically reflectivity and heat insulation, in a paint coating or dope for the German zeppelin Hindenburg, possibly contributing to its fiery destruction. This was a theory put forward by the former NASA scientist Addison Bain, and later tested in small scale by the scientific reality-TV show MythBusters with semi-inconclusive results (it was proven not to be the fault of the thermite reaction alone, but instead conjectured to be a combination of that and the burning of hydrogen gas that filled the body of the Hindenburg). The MythBusters program also tested the veracity of a video found on the Internet, whereby a quantity of thermite in a metal bucket was ignited while sitting on top of several blocks of ice, causing a sudden explosion. They were able to confirm the results, finding huge chunks of ice as far as 50 m from the point of explosion. Co-host Jamie Hyneman conjectured that this was due to the thermite mixture aerosolizing, perhaps in a cloud of steam, causing it to burn even faster. Hyneman also voiced skepticism about another theory explaining the phenomenon: that the reaction somehow separated the hydrogen and oxygen in the ice and then ignited them. This explanation claims that the explosion is due to the reaction of high temperature molten aluminium with water. 
Aluminium reacts violently with water or steam at high temperatures, releasing hydrogen and oxidizing in the process. The speed of that reaction and the ignition of the resulting hydrogen can easily account for the explosion verified. This process is akin to the explosive reaction caused by dropping metallic potassium into water. In popular culture In the episode "A No-Rough-Stuff-Type Deal" of the crime drama television series Breaking Bad, Walter White uses thermite to burn through a security lock in order to steal a methylamine drum from a chemical plant. See also References Further reading External links Thermite Pictures & Videos (Including Exotic Thermite) Video – steel casting with thermite Welding Inorganic reactions Incendiary weapons Pyrotechnic compositions Powders Aluminium
Thermite
[ "Physics", "Chemistry", "Engineering" ]
5,377
[ "Pyrotechnic compositions", "Welding", "Inorganic reactions", "Materials", "Powders", "Mechanical engineering", "Matter" ]
52,432
https://en.wikipedia.org/wiki/Xanthine
Xanthine (from Ancient Greek for its yellowish-white appearance; archaically xanthic acid; systematic name 3,7-dihydropurine-2,6-dione) is a purine base found in most human body tissues and fluids, as well as in other organisms. Several stimulants are derived from xanthine, including caffeine, theophylline, and theobromine. Xanthine is a product on the pathway of purine degradation. It is created from guanine by guanine deaminase. It is created from hypoxanthine by xanthine oxidoreductase. It is also created from xanthosine by purine nucleoside phosphorylase. Xanthine is subsequently converted to uric acid by the action of the xanthine oxidase enzyme. Use and production Xanthine is used as a drug precursor for human and animal medications, and is produced as a pesticide ingredient. Clinical significance Derivatives of xanthine (known collectively as xanthines) are a group of alkaloids commonly used for their effects as mild stimulants and as bronchodilators, notably in the treatment of asthma or influenza symptoms. In contrast to other, more potent stimulants like sympathomimetic amines, xanthines mainly act to oppose the actions of adenosine, and increase alertness in the central nervous system. Toxicity Methylxanthines (methylated xanthines), which include caffeine, aminophylline, IBMX, paraxanthine, pentoxifylline, theobromine, theophylline, and 7-methylxanthine (heteroxanthine), among others, affect the airways, increase heart rate and force of contraction, and at high concentrations can cause cardiac arrhythmias. In high doses, they can lead to convulsions that are resistant to anticonvulsants. Methylxanthines induce gastric acid and pepsin secretions in the gastrointestinal tract. Methylxanthines are metabolized by cytochrome P450 in the liver. If swallowed, inhaled, or exposed to the eyes in high amounts, xanthines can be harmful, and they may cause an allergic reaction if applied topically. Pharmacology In in vitro pharmacological studies, xanthines act as both competitive nonselective phosphodiesterase inhibitors and nonselective adenosine receptor antagonists. Phosphodiesterase inhibitors raise intracellular cAMP, activate PKA, inhibit TNF-α and leukotriene synthesis, and reduce inflammation and innate immunity. Adenosine receptor antagonists inhibit sleepiness-inducing adenosine. However, different analogues show varying potency at the numerous subtypes, and a wide range of synthetic xanthines (some nonmethylated) have been developed in the search for compounds with greater selectivity for phosphodiesterase enzyme or adenosine receptor subtypes. Pathology People with rare genetic disorders, specifically xanthinuria and Lesch–Nyhan syndrome, lack sufficient xanthine oxidase and cannot convert xanthine to uric acid. Possible formation in absence of life Studies reported in 2008, based on 12C/13C isotopic ratios of organic compounds found in the Murchison meteorite, suggested that xanthine and related chemicals, including the RNA component uracil, have been formed extraterrestrially. In August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting xanthine and related organic molecules, including the DNA and RNA components adenine and guanine, may have been formed in outer space. See also DMPX Murchison meteorite Theobromine poisoning Xanthene Xanthone Xanthydrol Kidney stone disease References Enones
Xanthine
[ "Chemistry" ]
835
[ "Alkaloids by chemical classification", "Xanthines" ]
52,564
https://en.wikipedia.org/wiki/Partial%20differential%20equation
In mathematics, a partial differential equation (PDE) is an equation which involves a multivariable function and one or more of its partial derivatives. The function is often thought of as an "unknown" that solves the equation, similar to how x is thought of as an unknown number solving, e.g., an algebraic equation like x² − 3x + 2 = 0. However, it is usually impossible to write down explicit formulae for solutions of partial differential equations. There is correspondingly a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers. Partial differential equations also occupy a large sector of pure mathematical research, in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations, such as existence, uniqueness, regularity and stability. Among the many open questions are the existence and smoothness of solutions to the Navier–Stokes equations, named as one of the Millennium Prize Problems in 2000. Partial differential equations are ubiquitous in mathematically oriented scientific fields, such as physics and engineering. For instance, they are foundational in the modern scientific understanding of sound, heat, diffusion, electrostatics, electrodynamics, thermodynamics, fluid dynamics, elasticity, general relativity, and quantum mechanics (Schrödinger equation, Pauli equation etc.). They also arise from many purely mathematical considerations, such as differential geometry and the calculus of variations; among other notable applications, they are the fundamental tool in the proof of the Poincaré conjecture from geometric topology. Partly due to this variety of sources, there is a wide spectrum of different types of partial differential equations, where the meaning of a solution depends on the context of the problem, and methods have been developed for dealing with many of the individual equations which arise. As such, it is usually acknowledged that there is no "universal theory" of partial differential equations, with specialist knowledge being somewhat divided between several essentially distinct subfields. Ordinary differential equations can be viewed as a subclass of partial differential equations, corresponding to functions of a single variable. Stochastic partial differential equations and nonlocal equations are, as of 2020, particularly widely studied extensions of the "PDE" notion. More classical topics, on which there is still much active research, include elliptic and parabolic partial differential equations, fluid mechanics, Boltzmann equations, and dispersive partial differential equations. Introduction A function u(x, y, z) of three variables is "harmonic" or "a solution of the Laplace equation" if it satisfies the condition ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = 0. Such functions were widely studied in the 19th century due to their relevance for classical mechanics, for example the equilibrium temperature distribution of a homogeneous solid is a harmonic function. If explicitly given a function, it is usually a matter of straightforward computation to check whether or not it is harmonic. For instance u(x, y, z) = 1/√(x² − 2x + y² + z² + 1) and u(x, y, z) = 2x² − y² − z² are both harmonic while u(x, y, z) = sin(xy) + z is not. It may be surprising that the two examples of harmonic functions are of such strikingly different form. This is a reflection of the fact that they are not, in any immediate way, special cases of a "general solution formula" of the Laplace equation.
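As the text notes, checking whether an explicitly given function is harmonic is a matter of direct computation. The short sketch below does that check symbolically; it assumes the sympy library is available, and the candidate functions are illustrative choices (two harmonic, one not) rather than anything mandated by the article.

```python
# Symbolic check of harmonicity: a function of (x, y, z) is harmonic exactly when
# its Laplacian u_xx + u_yy + u_zz simplifies to zero. Requires sympy.
import sympy as sp

x, y, z = sp.symbols('x y z')

def laplacian(u):
    """Return u_xx + u_yy + u_zz."""
    return sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)

candidates = [
    2*x**2 - y**2 - z**2,                 # harmonic: 4 - 2 - 2 = 0
    1 / sp.sqrt(x**2 + y**2 + z**2),      # harmonic away from the origin (Newtonian potential)
    x**2 + y**2 + z**2,                   # not harmonic: the Laplacian is 6
]

for u in candidates:
    print(u, "->", sp.simplify(laplacian(u)))
```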
This is in striking contrast to the case of ordinary differential equations (ODEs) roughly similar to the Laplace equation, with the aim of many introductory textbooks being to find algorithms leading to general solution formulas. For the Laplace equation, as for a large number of partial differential equations, such solution formulas fail to exist. The nature of this failure can be seen more concretely in the case of the following PDE: for a function of two variables, consider the equation It can be directly checked that any function of the form , for any single-variable functions and whatsoever, will satisfy this condition. This is far beyond the choices available in ODE solution formulas, which typically allow the free choice of some numbers. In the study of PDEs, one generally has the free choice of functions. The nature of this choice varies from PDE to PDE. To understand it for any given equation, existence and uniqueness theorems are usually important organizational principles. In many introductory textbooks, the role of existence and uniqueness theorems for ODE can be somewhat opaque; the existence half is usually unnecessary, since one can directly check any proposed solution formula, while the uniqueness half is often only present in the background in order to ensure that a proposed solution formula is as general as possible. By contrast, for PDE, existence and uniqueness theorems are often the only means by which one can navigate through the plethora of different solutions at hand. For this reason, they are also fundamental when carrying out a purely numerical simulation, as one must have an understanding of what data is to be prescribed by the user and what is to be left to the computer to calculate. To discuss such existence and uniqueness theorems, it is necessary to be precise about the domain of the "unknown function". Otherwise, speaking only in terms such as "a function of two variables", it is impossible to meaningfully formulate the results. That is, the domain of the unknown function must be regarded as part of the structure of the PDE itself. The following provides two classic examples of such existence and uniqueness theorems. Even though the two PDE in question are so similar, there is a striking difference in behavior: for the first PDE, one has the free prescription of a single function, while for the second PDE, one has the free prescription of two functions. Let denote the unit-radius disk around the origin in the plane. For any continuous function on the unit circle, there is exactly one function on such that and whose restriction to the unit circle is given by . For any functions and on the real line , there is exactly one function on such that and with and for all values of . Even more phenomena are possible. For instance, the following PDE, arising naturally in the field of differential geometry, illustrates an example where there is a simple and completely explicit solution formula, but with the free choice of only three numbers and not even one function. If is a function on with then there are numbers , , and with . In contrast to the earlier examples, this PDE is nonlinear, owing to the square roots and the squares. A linear PDE is one such that, if it is homogeneous, the sum of any two solutions is also a solution, and any constant multiple of any solution is also a solution. Definition A partial differential equation is an equation that involves an unknown function of variables and (some of) its partial derivatives. 
That is, for the unknown function of variables belonging to the open subset of , the -order partial differential equation is defined as where and is the partial derivative operator. Notation When writing PDEs, it is common to denote partial derivatives using subscripts. For example: In the general situation that is a function of variables, then denotes the first partial derivative relative to the -th input, denotes the second partial derivative relative to the -th and -th inputs, and so on. The Greek letter denotes the Laplace operator; if is a function of variables, then In the physics literature, the Laplace operator is often denoted by ; in the mathematics literature, may also denote the Hessian matrix of . Classification Linear and nonlinear equations A PDE is called linear if it is linear in the unknown and its derivatives. For example, for a function of and , a second order linear PDE is of the form where and are functions of the independent variables and only. (Often the mixed-partial derivatives and will be equated, but this is not required for the discussion of linearity.) If the are constants (independent of and ) then the PDE is called linear with constant coefficients. If is zero everywhere then the linear PDE is homogeneous, otherwise it is inhomogeneous. (This is separate from asymptotic homogenization, which studies the effects of high-frequency oscillations in the coefficients upon solutions to PDEs.) Nearest to linear PDEs are semi-linear PDEs, where only the highest order derivatives appear as linear terms, with coefficients that are functions of the independent variables. The lower order derivatives and the unknown function may appear arbitrarily. For example, a general second order semi-linear PDE in two variables is In a quasilinear PDE the highest order derivatives likewise appear only as linear terms, but with coefficients possibly functions of the unknown and lower-order derivatives: Many of the fundamental PDEs in physics are quasilinear, such as the Einstein equations of general relativity and the Navier–Stokes equations describing fluid motion. A PDE without any linearity properties is called fully nonlinear, and possesses nonlinearities on one or more of the highest-order derivatives. An example is the Monge–Ampère equation, which arises in differential geometry. Second order equations The elliptic/parabolic/hyperbolic classification provides a guide to appropriate initial- and boundary conditions and to the smoothness of the solutions. Assuming , the general linear second-order PDE in two independent variables has the form where the coefficients , , ... may depend upon and . If over a region of the -plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section: More precisely, replacing by , and likewise for other variables (formally this is done by a Fourier transform), converts a constant-coefficient PDE into a polynomial of the same degree, with the terms of the highest degree (a homogeneous polynomial, here a quadratic form) being most significant for the classification. Just as one classifies conic sections and quadratic forms into parabolic, hyperbolic, and elliptic based on the discriminant , the same can be done for a second-order PDE at a given point. However, the discriminant in a PDE is given by due to the convention of the term being rather than ; formally, the discriminant (of the associated quadratic form) is , with the factor of 4 dropped for simplicity. 
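To make the discriminant test concrete, the three model equations of mathematical physics can be classified by hand. The small worked example below uses the convention described in the paragraph above, writing the second-order part as A u_xx + 2B u_xy + C u_yy so that the relevant quantity is B² − AC; the equations are the standard textbook forms with unit coefficients, and the time variable t plays the role of the second independent variable.

```latex
% Discriminant test B^2 - AC for the principal part  A u_{xx} + 2B u_{xy} + C u_{yy}.
\begin{align*}
u_{xx} + u_{yy} &= 0: & A = 1,\ B = 0,\ C = 1 &\;\Rightarrow\; B^2 - AC = -1 < 0 \ \text{(elliptic)} \\
u_t - u_{xx} &= 0:    & A = -1,\ B = 0,\ C = 0 &\;\Rightarrow\; B^2 - AC = 0 \ \text{(parabolic)} \\
u_{tt} - u_{xx} &= 0: & A = -1,\ B = 0,\ C = 1 &\;\Rightarrow\; B^2 - AC = 1 > 0 \ \text{(hyperbolic)}
\end{align*}
```

These are, respectively, the Laplace, heat, and wave equations, matching the three cases listed next.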
(elliptic partial differential equation): Solutions of elliptic PDEs are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth. The motion of a fluid at subsonic speeds can be approximated with elliptic PDEs, and the Euler–Tricomi equation is elliptic where . By change of variables, the equation can always be expressed in the form: where x and y correspond to changed variables. This justifies Laplace equation as an example of this type. (parabolic partial differential equation): Equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the transformed time variable increases. The Euler–Tricomi equation has parabolic type on the line where . By change of variables, the equation can always be expressed in the form: where x correspond to changed variables. This justifies heat equation, which are of form , as an example of this type. (hyperbolic partial differential equation): hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs, and the Euler–Tricomi equation is hyperbolic where . By change of variables, the equation can always be expressed in the form: where x and y correspond to changed variables. This justifies wave equation as an example of this type. If there are independent variables , a general linear partial differential equation of second order has the form The classification depends upon the signature of the eigenvalues of the coefficient matrix . Elliptic: the eigenvalues are all positive or all negative. Parabolic: the eigenvalues are all positive or all negative, except one that is zero. Hyperbolic: there is only one negative eigenvalue and all the rest are positive, or there is only one positive eigenvalue and all the rest are negative. Ultrahyperbolic: there is more than one positive eigenvalue and more than one negative eigenvalue, and there are no zero eigenvalues. The theory of elliptic, parabolic, and hyperbolic equations have been studied for centuries, largely centered around or based upon the standard examples of the Laplace equation, the heat equation, and the wave equation. However, the classification only depends on linearity of the second-order terms and is therefore applicable to semi- and quasilinear PDEs as well. The basic types also extend to hybrids such as the Euler–Tricomi equation; varying from elliptic to hyperbolic for different regions of the domain, as well as higher-order PDEs, but such knowledge is more specialized. Systems of first-order equations and characteristic surfaces The classification of partial differential equations can be extended to systems of first-order equations, where the unknown is now a vector with components, and the coefficient matrices are by matrices for . The partial differential equation takes the form where the coefficient matrices and the vector may depend upon and . 
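The eigenvalue-signature rule for several variables just listed is straightforward to automate. The following sketch, a minimal illustration that assumes numpy and a symmetric coefficient matrix, counts positive, negative, and zero eigenvalues and applies those rules; the tolerance used to treat an eigenvalue as zero is an arbitrary choice.

```python
# Classify a second-order linear PDE in n variables by the signature of its
# symmetric coefficient matrix, following the rules listed above. Minimal sketch;
# the zero tolerance is arbitrary.
import numpy as np

def classify(A, tol=1e-10):
    w = np.linalg.eigvalsh(np.asarray(A, dtype=float))  # eigenvalues of a symmetric matrix
    pos = int(np.sum(w > tol))
    neg = int(np.sum(w < -tol))
    zero = int(np.sum(np.abs(w) <= tol))
    n = len(w)
    if zero == 0 and (pos == n or neg == n):
        return "elliptic"
    if zero == 1 and (pos == n - 1 or neg == n - 1):
        return "parabolic"
    if zero == 0 and min(pos, neg) == 1:
        return "hyperbolic"
    if zero == 0 and pos > 1 and neg > 1:
        return "ultrahyperbolic"
    return "degenerate (not covered by the rules above)"

print(classify(np.diag([1, 1, 1])))         # Laplacian in three variables -> elliptic
print(classify(np.diag([1, 1, 0])))         # heat-equation principal part  -> parabolic
print(classify(np.diag([1, -1, -1, -1])))   # wave operator                 -> hyperbolic
print(classify(np.diag([1, 1, -1, -1])))    # ultrahyperbolic example
```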
If a hypersurface is given in the implicit form where has a non-zero gradient, then is a characteristic surface for the operator at a given point if the characteristic form vanishes: The geometric interpretation of this condition is as follows: if data for are prescribed on the surface , then it may be possible to determine the normal derivative of on from the differential equation. If the data on and the differential equation determine the normal derivative of on , then is non-characteristic. If the data on and the differential equation do not determine the normal derivative of on , then the surface is characteristic, and the differential equation restricts the data on : the differential equation is internal to . A first-order system is elliptic if no surface is characteristic for : the values of on and the differential equation always determine the normal derivative of on . A first-order system is hyperbolic at a point if there is a spacelike surface with normal at that point. This means that, given any non-trivial vector orthogonal to , and a scalar multiplier , the equation has real roots . The system is strictly hyperbolic if these roots are always distinct. The geometrical interpretation of this condition is as follows: the characteristic form defines a cone (the normal cone) with homogeneous coordinates ζ. In the hyperbolic case, this cone has sheets, and the axis runs inside these sheets: it does not intersect any of them. But when displaced from the origin by η, this axis intersects every sheet. In the elliptic case, the normal cone has no real sheets. Analytical solutions Separation of variables Linear PDEs can be reduced to systems of ordinary differential equations by the important technique of separation of variables. This technique rests on a feature of solutions to differential equations: if one can find any solution that solves the equation and satisfies the boundary conditions, then it is the solution (this also applies to ODEs). We assume as an ansatz that the dependence of a solution on the parameters space and time can be written as a product of terms that each depend on a single parameter, and then see if this can be made to solve the problem. In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ordinary differential equation if in one variable – these are in turn easier to solve. This is possible for simple PDEs, which are called separable partial differential equations, and the domain is generally a rectangle (a product of intervals). Separable PDEs correspond to diagonal matrices – thinking of "the value for fixed " as a coordinate, each coordinate can be understood separately. This generalizes to the method of characteristics, and is also used in integral transforms. Method of characteristics The characteristic surface in dimensional space is called a characteristic curve. In special cases, one can find characteristic curves on which the first-order PDE reduces to an ODE – changing coordinates in the domain to straighten these curves allows separation of variables, and is called the method of characteristics. More generally, applying the method to first-order PDEs in higher dimensions, one may find characteristic surfaces. Integral transform An integral transform may transform the PDE to a simpler one, in particular, a separable PDE. This corresponds to diagonalizing an operator. An important example of this is Fourier analysis, which diagonalizes the heat equation using the eigenbasis of sinusoidal waves. 
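To make the separation-of-variables idea and the sinusoidal eigenbasis explicit, here is the standard worked example for the one-dimensional heat equation on an interval with zero boundary values. This is the usual textbook calculation rather than anything specific to this article, and the diffusion constant is normalised to 1.

```latex
% Separation of variables for  u_t = u_{xx}  on  0 < x < L  with  u(0,t) = u(L,t) = 0.
% The ansatz u(x,t) = X(x) T(t) splits the PDE into two ODEs sharing a constant -lambda.
\begin{align*}
\frac{T'(t)}{T(t)} = \frac{X''(x)}{X(x)} = -\lambda
  &\;\Longrightarrow\; X'' + \lambda X = 0, \qquad T' + \lambda T = 0, \\
X(0) = X(L) = 0
  &\;\Longrightarrow\; \lambda_n = \Bigl(\frac{n\pi}{L}\Bigr)^{2}, \qquad X_n(x) = \sin\frac{n\pi x}{L}, \\
u(x,t) &= \sum_{n=1}^{\infty} b_n \, e^{-(n\pi/L)^{2} t} \sin\frac{n\pi x}{L},
\end{align*}
```

with the coefficients b_n fixed by expanding the initial data u(x, 0) in a Fourier sine series, which is exactly the diagonalisation in the sinusoidal eigenbasis mentioned above.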
If the domain is finite or periodic, an infinite sum of solutions such as a Fourier series is appropriate, but an integral of solutions such as a Fourier integral is generally required for infinite domains. The solution for a point source for the heat equation given above is an example of the use of a Fourier integral. Change of variables Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables. For example, the Black–Scholes equation is reducible to the heat equation by the change of variables Fundamental solution Inhomogeneous equations can often be solved (for constant coefficient PDEs, always be solved) by finding the fundamental solution (the solution for a point source ), then taking the convolution with the boundary conditions to get the solution. This is analogous in signal processing to understanding a filter by its impulse response. Superposition principle The superposition principle applies to any linear system, including linear systems of PDEs. A common visualization of this concept is the interaction of two waves in phase being combined to result in a greater amplitude, for example . The same principle can be observed in PDEs where the solutions may be real or complex and additive. If and are solutions of linear PDE in some function space , then with any constants and are also a solution of that PDE in the same function space. Methods for non-linear equations There are no generally applicable analytical methods to solve nonlinear PDEs. Still, existence and uniqueness results (such as the Cauchy–Kowalevski theorem) are often possible, as are proofs of important qualitative and quantitative properties of solutions (getting these results is a major part of analysis). Nevertheless, some techniques can be used for several types of equations. The -principle is the most powerful method to solve underdetermined equations. The Riquier–Janet theory is an effective method for obtaining information about many analytic overdetermined systems. The method of characteristics can be used in some very special cases to solve nonlinear partial differential equations. In some cases, a PDE can be solved via perturbation analysis in which the solution is considered to be a correction to an equation with a known solution. Alternatives are numerical analysis techniques from simple finite difference schemes to the more mature multigrid and finite element methods. Many interesting problems in science and engineering are solved in this way using computers, sometimes high performance supercomputers. Lie group method From 1870 Sophus Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred, to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact. A general approach to solving PDEs uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras and differential geometry are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, to find its Lax pairs, recursion operators, Bäcklund transform and finally finding exact analytic solutions to the PDE. 
Symmetry methods have been recognized to study differential equations arising in mathematics, physics, engineering, and many other disciplines. Semi-analytical methods The Adomian decomposition method, the Lyapunov artificial small parameter method, and his homotopy perturbation method are all special cases of the more general homotopy analysis method. These are series expansion methods, and except for the Lyapunov method, are independent of small physical parameters as compared to the well known perturbation theory, thus giving these methods greater flexibility and solution generality. Numerical solutions The three most widely used numerical methods to solve PDEs are the finite element method (FEM), finite volume methods (FVM) and finite difference methods (FDM), as well other kind of methods called meshfree methods, which were made to solve problems where the aforementioned methods are limited. The FEM has a prominent position among these methods and especially its exceptionally efficient higher-order version hp-FEM. Other hybrid versions of FEM and Meshfree methods include the generalized finite element method (GFEM), extended finite element method (XFEM), spectral finite element method (SFEM), meshfree finite element method, discontinuous Galerkin finite element method (DGFEM), element-free Galerkin method (EFGM), interpolating element-free Galerkin method (IEFGM), etc. Finite element method The finite element method (FEM) (its practical application often known as finite element analysis (FEA)) is a numerical technique for finding approximate solutions of partial differential equations (PDE) as well as of integral equations. The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler's method, Runge–Kutta, etc. Finite difference method Finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives. Finite volume method Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, surface integrals in a partial differential equation that contain a divergence term are converted to volume integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods conserve mass by design. Neural networks Weak solutions Weak solutions are functions that satisfy the PDE, yet in other meanings than regular sense. The meaning for this term may differ with context, and one of the most commonly used definitions is based on the notion of distributions. An example for the definition of a weak solution is as follows: Consider the boundary-value problem given by: where denotes a second-order partial differential operator in divergence form. We say a is a weak solution if for every , which can be derived by a formal integral by parts. 
An example of a weak solution is as follows: is a weak solution satisfying in distributional sense, as formally. Theoretical studies As a branch of pure mathematics, the theoretical study of PDEs focuses on the criteria for a solution to exist and on the properties of solutions; finding an explicit formula is often secondary. Well-posedness Well-posedness refers to a common schematic package of information about a PDE. To say that a PDE is well-posed, one must have: an existence and uniqueness theorem, asserting that by the prescription of some freely chosen functions, one can single out one specific solution of the PDE; and that by continuously changing the free choices, one continuously changes the corresponding solution. This is, by the necessity of being applicable to several different PDE, somewhat vague. The requirement of "continuity", in particular, is ambiguous, since there are usually many inequivalent means by which it can be rigorously defined. It is, however, somewhat unusual to study a PDE without specifying a way in which it is well-posed. Regularity Regularity refers to the integrability and differentiability of weak solutions, which can often be represented by Sobolev spaces. This problem arises due to the difficulty in searching for classical solutions. Researchers often tend to find weak solutions first and then find out whether they are smooth enough to qualify as classical solutions. Results from functional analysis are often used in this field of study. See also Some common PDEs Acoustic wave equation Burgers' equation Continuity equation Heat equation Helmholtz equation Klein–Gordon equation Jacobi equation Lagrange equation Laplace's equation Maxwell's equations Navier–Stokes equation Poisson's equation Reaction–diffusion system Schrödinger equation Wave equation Types of boundary conditions Dirichlet boundary condition Neumann boundary condition Robin boundary condition Cauchy problem Various topics Jet bundle Laplace transform applied to differential equations List of dynamical systems and differential equations topics Matrix differential equation Numerical partial differential equations Partial differential algebraic equation Recurrence relation Stochastic processes and boundary value problems Notes References Further reading Nirenberg, Louis (1994). "Partial differential equations in the first half of the century." Development of mathematics 1900–1950 (Luxembourg, 1992), 479–515, Birkhäuser, Basel. External links Partial Differential Equations: Exact Solutions at EqWorld: The World of Mathematical Equations. Partial Differential Equations: Index at EqWorld: The World of Mathematical Equations. Partial Differential Equations: Methods at EqWorld: The World of Mathematical Equations. Example problems with solutions at exampleproblems.com Partial Differential Equations at mathworld.wolfram.com Partial Differential Equations with Mathematica Partial Differential Equations in Cleve Moler: Numerical Computing with MATLAB Partial Differential Equations at nag.com Multivariable calculus Mathematical physics Differential equations
Partial differential equation
[ "Physics", "Mathematics" ]
5,308
[ "Calculus", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Differential equations", "Equations", "Multivariable calculus", "Mathematical physics" ]
52,636
https://en.wikipedia.org/wiki/Boiling
Boiling or ebullition is the rapid phase transition from liquid to gas or vapour; the reverse of boiling is condensation. Boiling occurs when a liquid is heated to its boiling point, so that the vapour pressure of the liquid is equal to the pressure exerted on the liquid by the surrounding atmosphere. Boiling and evaporation are the two main forms of liquid vapourization. There are two main types of boiling: nucleate boiling where small bubbles of vapour form at discrete points, and critical heat flux boiling where the boiling surface is heated above a certain critical temperature and a film of vapour forms on the surface. Transition boiling is an intermediate, unstable form of boiling with elements of both types. The boiling point of water is 100 °C or 212 °F but is lower with the decreased atmospheric pressure found at higher altitudes. Boiling water is used as a method of making it potable by killing microbes and viruses that may be present. The sensitivity of different micro-organisms to heat varies, but if water is held at for one minute, most micro-organisms and viruses are inactivated. Ten minutes at a temperature of 70 °C (158 °F) is also sufficient to inactivate most bacteria. Boiling water is also used in several cooking methods including boiling, steaming, and poaching. Types Free convection The lowest heat flux seen in boiling is only sufficient to cause [natural convection], where the warmer fluid rises due to its slightly lower density. This condition occurs only when the superheat is very low, meaning that the hot surface near the fluid is nearly the same temperature as the boiling point. Nucleate Nucleate boiling is characterised by the growth of bubbles or pops on a heated surface (heterogeneous nucleation), which rises from discrete points on a surface, whose temperature is only slightly above the temperature of the liquid. In general, the number of nucleation sites is increased by an increasing surface temperature. An irregular surface of the boiling vessel (i.e., increased surface roughness) or additives to the fluid (i.e., surfactants and/or nanoparticles) facilitate nucleate boiling over a broader temperature range, while an exceptionally smooth surface, such as plastic, lends itself to superheating. Under these conditions, a heated liquid may show boiling delay and the temperature may go somewhat above the boiling point without boiling. Homogeneous nucleation, where the bubbles form from the surrounding liquid instead of on a surface, can occur if the liquid is warmer in its center, and cooler at the surfaces of the container. This can be done, for instance, in a microwave oven, which heats the water and not the container. Critical heat flux Critical heat flux (CHF) describes the thermal limit of a phenomenon where a phase change occurs during heating (such as bubbles forming on a metal surface used to heat water), which suddenly decreases the efficiency of heat transfer, thus causing localised overheating of the heating surface. As the boiling surface is heated above a critical temperature, a film of vapour forms on the surface. Since this vapour film is much less capable of carrying heat away from the surface, the temperature rises very rapidly beyond this point into the transition boiling regime. The point at which this occurs is dependent on the characteristics of boiling fluid and the heating surface in question. 
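For a sense of the magnitudes involved, the critical heat flux in pool boiling is often estimated with Zuber's hydrodynamic correlation. The correlation is not mentioned in the text above; it is a standard result in heat-transfer references, and the property values used below are approximate figures for saturated water at atmospheric pressure.

```python
# Zuber-type estimate of the pool-boiling critical heat flux (CHF):
#   q_CHF = C * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25
# Property values are approximate for saturated water at 1 atm (100 degrees C).
C = 0.131          # Zuber's constant (dimensionless)
h_fg = 2.257e6     # latent heat of vaporisation, J/kg
rho_l = 958.0      # liquid density, kg/m^3
rho_v = 0.60       # vapour density, kg/m^3
sigma = 0.059      # surface tension, N/m
g = 9.81           # gravitational acceleration, m/s^2

q_chf = C * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25
print(f"estimated CHF for water at 1 atm: {q_chf / 1e6:.2f} MW/m^2")  # roughly 1.1 MW/m^2
```

Values of this order, around a megawatt per square metre, are why exceeding the critical heat flux produces such a sudden temperature excursion on the heating surface.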
Transition Transition boiling may be defined as the unstable boiling, which occurs at surface temperatures between the maximum attainable in nucleate and the minimum attainable in film boiling. The formation of bubbles in a heated liquid is a complex physical process which often involves cavitation and acoustic effects, such as the broad-spectrum hiss one hears in a kettle not yet heated to the point where bubbles boil to the surface. Film If a surface heating the liquid is significantly hotter than the liquid then film boiling will occur, where a thin layer of vapour, which has low thermal conductivity, insulates the surface. This condition of a vapour film insulating the surface from the liquid characterises film boiling. Influence of geometry Pool boiling "Pool boiling" refers to boiling where there is no forced convective flow. Instead, the flow occurs due to density gradients. It can experience any of the regimes mentioned above. Flow boiling "Flow boiling" occurs when the boiling fluid circulates, typically through pipes. Its movement can be powered by pumps, such as in power plants, or by density gradients, such as in a thermosiphon or a heat pipe. Flows in flow boiling are often characterised by a void fraction parameter, which indicates the fraction of the volume in the system that is vapor. One can use this fraction and the densities to calculate the vapor quality, which refers to the mass fraction that is in the gas phase. Flow boiling can be very complex, with heavy influences of density, flow rates, and heat flux, as well as surface tension. The same system may have regions that are liquid, gas, and two-phase flow. Such two phase regimes can lead to some of the best heat transfer coefficients of any system. Confined boiling Confined boiling refers to boiling in confined geometries, typically characterized by a Bond number that compares the gap spacing to the capillary length. Confined boiling regimes begin to play a major role when Bo < 0.5. This boiling regime is dominated by "vapour stem bubbles" left behind after vapour departs. These bubbles act as seeds for vapor growth. Confined boiling typically has higher heat transfer coefficient but a lower CHF than pool boiling. CHF occurs when the vapor momentum force at the two-phase interface balances the combined surface tension and hydrostatic forces, leading to irreversible growth of the dry spot. Confined boiling is particularly promising for electronics cooling. Physics The boiling point of an element at a given pressure is a characteristic attribute of the element. This is also true for many simple compounds including water and simple alcohols. Once boiling has started and provided that boiling remains stable and the pressure is constant, the temperature of the boiling liquid remains constant. This attribute led to the adoption of boiling points as the definition of 100 °C. Distillation Mixtures of volatile liquids have a boiling point specific to that mixture producing vapour with a constant mix of components - the constant boiling mixture. This attribute allows mixtures of liquids to be separated or partly separated by boiling and is best known as a means of separating ethanol from water. Uses Refrigeration and air conditioning Most types of refrigeration and some type of air-conditioning work by compressing a gas so that it becomes liquid and then allowing it to boil. This adsorbs heat from the surroundings cooling the fridge or freezer or cooling the air entering a building. 
Typical liquids include propane, ammonia, carbon dioxide or nitrogen. For making water potable As a method of disinfecting water, bringing it to its boiling point at , is the oldest and most effective way since it does not affect the taste, it is effective despite contaminants or particles present in it, and is a single step process which eliminates most microbes responsible for causing intestine related diseases. The boiling point of water is at sea level and at normal barometric pressure. In places having a proper water purification system, it is recommended only as an emergency treatment method or for obtaining potable water in the wilderness or in rural areas, as it cannot remove chemical toxins or impurities. The elimination of micro-organisms by boiling follows first-order kinetics—at high temperatures, it is achieved in less time and at lower temperatures, in more time. The heat sensitivity of micro-organisms varies, at , Giardia species (which cause giardiasis) can take ten minutes for complete inactivation, most intestine affecting microbes and E. coli (gastroenteritis) take less than a minute; at boiling point, Vibrio cholerae (cholera) takes ten seconds and hepatitis A virus (causes the symptom of jaundice), one minute. Boiling does not ensure the elimination of all micro-organisms; the bacterial spores Clostridium can survive at but are not water-borne or intestine affecting. Thus for human health, complete sterilization of water is not required. The traditional advice of boiling water for ten minutes is mainly for additional safety, since microbes start getting eliminated at temperatures greater than and bringing it to its boiling point is also a useful indication that can be seen without the help of a thermometer, and by this time, the water is disinfected. Though the boiling point decreases with increasing altitude, it is not enough to affect the disinfecting process. In cooking Boiling is the method of cooking food in boiling water or other water-based liquids such as stock or milk. Simmering is gentle boiling, while in poaching the cooking liquid moves but scarcely bubbles. The boiling point of water is typically considered to be , especially at sea level. Pressure and a change in the composition of the liquid may alter the boiling point of the liquid. High elevation cooking generally takes longer since boiling point is a function of atmospheric pressure. At an elevation of about , water boils at approximately . Depending on the type of food and the elevation, the boiling water may not be hot enough to cook the food properly. Similarly, increasing the pressure as in a pressure cooker raises the temperature of the contents above the open air boiling point. Boil-in-the-bag Also known as "boil-in-bag", this involves heating or cooking ready-made foods sealed in a thick plastic bag. The bag containing the food, often frozen, is submerged in boiling water for a prescribed time. The resulting dishes can be prepared with greater convenience as no pots or pans are dirtied in the process. Such meals are available for camping as well as home dining. Contrast with evaporation At any given temperature, the molecules in a liquid have varying kinetic energies. Some high energy particles on the liquid surface may have enough energy to escape the intermolecular forces of attraction of the liquid and become a gas. This is called evaporation. Evaporation only happens on the surface while boiling happens throughout the liquid. 
When a liquid reaches its boiling point, bubbles of gas form within it, rise to the surface and burst into the air. This process is called boiling. If the boiling liquid is heated more strongly the temperature does not rise but the liquid boils more quickly. This distinction is exclusive to the liquid-to-gas transition; any transition directly from solid to gas is always referred to as sublimation, regardless of whether it is at its boiling point or not. See also Phase transition Phase diagram Enthalpy of vaporization Explosive boiling Recovery time (culinary) References Cooking techniques Phase transitions Heat transfer Gases
Boiling
[ "Physics", "Chemistry" ]
2,208
[ "Transport phenomena", "Matter", "Phase transitions", "Physical phenomena", "Heat transfer", "Phases of matter", "Critical phenomena", "Thermodynamics", "Statistical mechanics", "Gases" ]
53,031
https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann%20law
The Stefan–Boltzmann law, also known as Stefan's law, describes the intensity of the thermal radiation emitted by matter in terms of that matter's temperature. It is named for Josef Stefan, who empirically derived the relationship, and Ludwig Boltzmann, who derived the law theoretically. For an ideal absorber/emitter or black body, the Stefan–Boltzmann law states that the total energy radiated per unit surface area per unit time (also known as the radiant exitance), M°, is directly proportional to the fourth power of the black body's temperature, T: M° = σT⁴. The constant of proportionality, σ, is called the Stefan–Boltzmann constant. It has the value σ ≈ 5.670374419×10⁻⁸ W⋅m⁻²⋅K⁻⁴. In the general case, the Stefan–Boltzmann law for radiant exitance takes the form M = εσT⁴, where ε is the emissivity of the surface emitting the radiation. The emissivity is generally between zero and one. An emissivity of one corresponds to a black body. Detailed explanation The radiant exitance (previously called radiant emittance), M, has dimensions of energy flux (energy per unit time per unit area), and the SI units of measure are joules per second per square metre (J⋅s⁻¹⋅m⁻²), or equivalently, watts per square metre (W⋅m⁻²). The SI unit for absolute temperature, T, is the kelvin (K). To find the total power, P, radiated from an object, multiply the radiant exitance by the object's surface area, A: P = A·M. Matter that does not absorb all incident radiation emits less total energy than a black body. Emissions are reduced by a factor ε, where the emissivity, ε, is a material property which, for most matter, satisfies 0 ≤ ε ≤ 1. Emissivity can in general depend on wavelength, direction, and polarization. However, the emissivity which appears in the non-directional form of the Stefan–Boltzmann law is the hemispherical total emissivity, which reflects emissions as totaled over all wavelengths, directions, and polarizations. The form of the Stefan–Boltzmann law that includes emissivity is applicable to all matter, provided that matter is in a state of local thermodynamic equilibrium (LTE) so that its temperature is well-defined. (This is a trivial conclusion, since the emissivity, ε, is defined to be the quantity that makes this equation valid. What is non-trivial is the proposition that ε ≤ 1, which is a consequence of Kirchhoff's law of thermal radiation.) A so-called grey body is a body for which the spectral emissivity is independent of wavelength, so that the total emissivity, ε, is a constant. In the more general (and realistic) case, the spectral emissivity depends on wavelength. The total emissivity, as applicable to the Stefan–Boltzmann law, may be calculated as a weighted average of the spectral emissivity, with the blackbody emission spectrum serving as the weighting function. It follows that if the spectral emissivity depends on wavelength then the total emissivity depends on the temperature, i.e., ε = ε(T). However, if the dependence on wavelength is small, then the dependence on temperature will be small as well. Wavelength- and subwavelength-scale particles, metamaterials, and other nanostructures are not subject to ray-optical limits and may be designed to have an emissivity greater than 1. In national and international standards documents, the symbol M is recommended to denote radiant exitance; a superscript circle (°) indicates a term related to a black body. (A subscript "e" is added when it is important to distinguish the energetic (radiometric) quantity radiant exitance, Mₑ, from the analogous human vision (photometric) quantity, luminous exitance, denoted Mᵥ.) 
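As a quick numerical illustration of the relations just given (a minimal sketch in Python; the temperature, surface area and emissivity below are made-up example inputs, not values from the article):

```python
# Radiant exitance and total radiated power from the Stefan-Boltzmann law.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(T, emissivity=1.0):
    """M = epsilon * sigma * T^4 in W/m^2; emissivity = 1 is a black body."""
    return emissivity * SIGMA * T**4

def radiated_power(T, area, emissivity=1.0):
    """Total power P = A * M for a surface of the given area in m^2."""
    return area * radiant_exitance(T, emissivity)

# Example: a 1 cm^2 grey surface at 1000 K with an assumed emissivity of 0.8.
print(radiant_exitance(1000.0))           # ~5.67e4 W/m^2 for a black body
print(radiated_power(1000.0, 1e-4, 0.8))  # ~4.5 W
```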
In common usage, the symbol used for radiant exitance (often called radiant emittance) varies among different texts and in different fields. The Stefan–Boltzmann law may be expressed as a formula for radiance as a function of temperature. Radiance is measured in watts per square metre per steradian (W⋅m⁻²⋅sr⁻¹). The Stefan–Boltzmann law for the radiance of a black body is L° = M°/π = (σ/π)T⁴. The Stefan–Boltzmann law expressed as a formula for radiation energy density is u = 4σT⁴/c, where c is the speed of light. History In 1864, John Tyndall presented measurements of the infrared emission by a platinum filament and the corresponding color of the filament. The proportionality to the fourth power of the absolute temperature was deduced by Josef Stefan (1835–1893) in 1877 on the basis of Tyndall's experimental measurements, in the article Über die Beziehung zwischen der Wärmestrahlung und der Temperatur (On the relationship between thermal radiation and temperature) in the Bulletins from the sessions of the Vienna Academy of Sciences. A derivation of the law from theoretical considerations was presented by Ludwig Boltzmann (1844–1906) in 1884, drawing upon the work of Adolfo Bartoli. Bartoli in 1876 had derived the existence of radiation pressure from the principles of thermodynamics. Following Bartoli, Boltzmann considered an ideal heat engine using electromagnetic radiation instead of an ideal gas as working matter. The law was almost immediately experimentally verified. Heinrich Weber in 1888 pointed out deviations at higher temperatures, but perfect accuracy within measurement uncertainties was confirmed up to temperatures of 1535 K by 1897. The law, including the theoretical prediction of the Stefan–Boltzmann constant as a function of the speed of light, the Boltzmann constant and the Planck constant, is a direct consequence of Planck's law as formulated in 1900. Stefan–Boltzmann constant The Stefan–Boltzmann constant, σ, is derived from other known physical constants: σ = 2π⁵k⁴/(15h³c²), where k is the Boltzmann constant, h is the Planck constant, and c is the speed of light in vacuum. As of the 2019 revision of the SI, which establishes exact fixed values for k, h, and c, the Stefan–Boltzmann constant is exact, with numerical value approximately 5.670374419×10⁻⁸ W⋅m⁻²⋅K⁻⁴. Prior to this, the value of σ was calculated from the measured value of the gas constant. The numerical value of the Stefan–Boltzmann constant is different in other systems of units. Examples Temperature of the Sun With his law, Stefan also determined the temperature of the Sun's surface. He inferred from the data of Jacques-Louis Soret (1827–1890) that the energy flux density from the Sun is 29 times greater than the energy flux density of a certain warmed metal lamella (a thin plate). A round lamella was placed at such a distance from the measuring device that it would be seen at the same angular diameter as the Sun. Soret estimated the temperature of the lamella to be approximately 1900 °C to 2000 °C. Stefan surmised that 1/3 of the energy flux from the Sun is absorbed by the Earth's atmosphere, so he took for the correct Sun's energy flux a value 3/2 times greater than Soret's value, namely 29 × 3/2 = 43.5. Precise measurements of atmospheric absorption were not made until 1888 and 1904. The temperature Stefan obtained was a median value of previous ones, 1950 °C, and the absolute thermodynamic one 2200 K. As 2.57⁴ ≈ 43.5, it follows from the law that the temperature of the Sun is 2.57 times greater than the temperature of the lamella, so Stefan got a value of 5430 °C or 5700 K. 
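Both the expression for σ and Stefan's estimate of the Sun's temperature can be reproduced in a few lines (a sketch in Python; it assumes the exact SI values of k, h and c and uses the 2200 K lamella temperature quoted in the text):

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K (exact since 2019)
h = 6.62607015e-34  # Planck constant, J s (exact since 2019)
c = 299792458.0     # speed of light, m/s (exact)

# sigma = 2 pi^5 k^4 / (15 h^3 c^2)
sigma = 2 * math.pi**5 * k**4 / (15 * h**3 * c**2)
print(sigma)  # ~5.670374419e-08 W m^-2 K^-4

# Stefan's estimate: a flux ratio of 43.5 implies T_sun / T_lamella = 43.5**0.25 ~ 2.57
T_lamella = 2200.0            # K, the absolute lamella temperature from the text
print(43.5 ** 0.25)           # ~2.57
print(43.5 ** 0.25 * T_lamella)  # ~5650 K, i.e. roughly the 5700 K quoted above
```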
This was the first sensible value for the temperature of the Sun. Before this, claimed values ranged from as low as 1800 °C to figures many times higher. The lower value of 1800 °C was determined by Claude Pouillet (1790–1868) in 1838 using the Dulong–Petit law. Pouillet also took just half the value of the Sun's correct energy flux. Temperature of stars The temperature of stars other than the Sun can be approximated using a similar means by treating the emitted energy as black body radiation. So: L = 4πR²σT⁴, where L is the luminosity, σ is the Stefan–Boltzmann constant, R is the stellar radius and T is the effective temperature. This formula can then be rearranged to calculate the temperature, T = (L/(4πσR²))^(1/4), or alternatively the radius, R = (L/(4πσT⁴))^(1/2). The same formulae can also be simplified to compute the parameters relative to the Sun: L/L⊙ = (R/R⊙)²(T/T⊙)⁴, where R⊙ is the solar radius, and so forth. They can also be rewritten in terms of the surface area A and radiant exitance M°: L = A·M°, where A = 4πR² and M° = σT⁴. With the Stefan–Boltzmann law, astronomers can easily infer the radii of stars. The law is also met in the thermodynamics of black holes in so-called Hawking radiation. Effective temperature of the Earth Similarly we can calculate the effective temperature of the Earth T⊕ by equating the energy received from the Sun and the energy radiated by the Earth, under the black-body approximation (Earth's own production of energy being small enough to be negligible). The luminosity of the Sun, L⊙, is given by L⊙ = 4πR⊙²σT⊙⁴. At Earth, this energy is passing through a sphere with a radius of a0, the distance between the Earth and the Sun, and the irradiance (received power per unit area) is given by E⊕ = L⊙/(4πa0²). The Earth has a radius of R⊕, and therefore has a cross-section of πR⊕². The radiant flux (i.e. solar power) absorbed by the Earth is thus given by Φ = πR⊕²E⊕. Because the Stefan–Boltzmann law uses a fourth power, it has a stabilizing effect on the exchange and the flux emitted by Earth tends to be equal to the flux absorbed, close to the steady state where 4πR⊕²σT⊕⁴ = πR⊕²E⊕. T⊕ can then be found: T⊕ = T⊙√(R⊙/(2a0)), where T⊙ is the temperature of the Sun, R⊙ the radius of the Sun, and a0 is the distance between the Earth and the Sun. This gives an effective temperature of 6 °C on the surface of the Earth, assuming that it perfectly absorbs all emission falling on it and has no atmosphere. The Earth has an albedo of 0.3, meaning that 30% of the solar radiation that hits the planet gets scattered back into space without absorption. The effect of albedo on temperature can be approximated by assuming that the energy absorbed is multiplied by 0.7, but that the planet still radiates as a black body (the latter by definition of effective temperature, which is what we are calculating). This approximation reduces the temperature by a factor of 0.7^(1/4), giving about 255 K (−18 °C). The above temperature is Earth's as seen from space, not ground temperature but an average over all emitting bodies of Earth from surface to high altitude. Because of the greenhouse effect, the Earth's actual average surface temperature is about 288 K (15 °C), which is higher than the effective temperature, and even higher than the temperature that a black body would have. In the above discussion, we have assumed that the whole surface of the earth is at one temperature. Another interesting question is to ask what the temperature of a blackbody surface on the earth would be assuming that it reaches equilibrium with the sunlight falling on it. This of course depends on the angle of the sun on the surface and on how much air the sunlight has gone through. 
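The effective-temperature estimate for the Earth can be checked numerically (a sketch; the solar temperature, solar radius and Earth–Sun distance used below are standard round values assumed for illustration):

```python
T_sun = 5778.0   # K, assumed effective temperature of the Sun
R_sun = 6.957e8  # m, assumed solar radius
a0 = 1.496e11    # m, assumed Earth-Sun distance
albedo = 0.3

# T_earth = T_sun * sqrt(R_sun / (2 * a0)), from equating absorbed and emitted flux
T_no_albedo = T_sun * (R_sun / (2 * a0)) ** 0.5
T_with_albedo = (1 - albedo) ** 0.25 * T_no_albedo

print(T_no_albedo)    # ~279 K, i.e. about 6 degC
print(T_with_albedo)  # ~255 K, i.e. about -18 degC
```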
When the sun is at the zenith and the surface is horizontal, the irradiance can be as high as 1120 W/m². The Stefan–Boltzmann law then gives a temperature of about 375 K (102 °C). (Above the atmosphere, the result is even higher: about 394 K.) We can think of the earth's surface as "trying" to reach equilibrium temperature during the day, but being cooled by the atmosphere, and "trying" to reach equilibrium with starlight and possibly moonlight at night, but being warmed by the atmosphere. Origination Thermodynamic derivation of the energy density The fact that the energy density of the box containing radiation is proportional to T⁴ can be derived using thermodynamics. This derivation uses the relation between the radiation pressure p and the internal energy density u, a relation that can be shown using the form of the electromagnetic stress–energy tensor. This relation is p = u/3. Now, from the fundamental thermodynamic relation we obtain the following expression, after dividing by dV and fixing T: The last equality comes from the following Maxwell relation: From the definition of energy density it follows that where the energy density of radiation only depends on the temperature, therefore Now, the equality is after substitution of Meanwhile, the pressure is the rate of momentum change per unit area. Since the momentum of a photon is the same as the energy divided by the speed of light, where the factor 1/3 comes from the projection of the momentum transfer onto the normal to the wall of the container. Since the partial derivative can be expressed as a relationship between only u and T (if one isolates it on one side of the equality), the partial derivative can be replaced by the ordinary derivative. After separating the differentials the equality becomes which leads immediately to u = aT⁴, with a as some constant of integration. Derivation from Planck's law The law can be derived by considering a small flat black body surface radiating out into a half-sphere. This derivation uses spherical coordinates, with θ as the zenith angle and φ as the azimuthal angle; and the small flat blackbody surface lies on the xy-plane, where θ = π/2. The intensity of the light emitted from the blackbody surface is given by Planck's law, I(ν, T) = (2hν³/c²)·1/(e^(hν/kT) − 1), where I(ν, T) is the amount of power per unit surface area per unit solid angle per unit frequency emitted at a frequency ν by a black body at temperature T; h is the Planck constant, c is the speed of light, and k is the Boltzmann constant. The quantity I(ν, T) A cos θ dν dΩ is the power radiated by a surface of area A through a solid angle dΩ in the frequency range between ν and ν + dν. The Stefan–Boltzmann law gives the power emitted per unit area of the emitting body. Note that the cosine appears because black bodies are Lambertian (i.e. they obey Lambert's cosine law), meaning that the intensity observed along the sphere will be the actual intensity times the cosine of the zenith angle. To derive the Stefan–Boltzmann law, we must integrate dΩ over the half-sphere and integrate ν from 0 to ∞. Then we plug in for I: To evaluate this integral, do a substitution u = hν/(kT), which gives: The integral on the right is standard and goes by many names: it is a particular case of a Bose–Einstein integral, the polylogarithm, or the Riemann zeta function ζ(s). The value of the integral is Γ(4)ζ(4) = π⁴/15 (where Γ is the Gamma function), giving the result that, for a perfect blackbody surface, M° = σT⁴ with σ = 2π⁵k⁴/(15c²h³). Finally, this proof started out only considering a small flat surface. However, any differentiable surface can be approximated by a collection of small flat surfaces. 
So long as the geometry of the surface does not cause the blackbody to reabsorb its own radiation, the total energy radiated is just the sum of the energies radiated by each surface; and the total surface area is just the sum of the areas of each surface—so this law holds for all convex blackbodies, too, so long as the surface has the same temperature throughout. The law extends to radiation from non-convex bodies by using the fact that the convex hull of a black body radiates as though it were itself a black body. Energy density The total energy density U can be similarly calculated, except the integration is over the whole sphere and there is no cosine, and the energy flux (U c) should be divided by the velocity c to give the energy density U: Thus is replaced by , giving an extra factor of 4. Thus, in total: The product is sometimes known as the radiation constant or radiation density constant. Decomposition in terms of photons The Stephan–Boltzmann law can be expressed as where the flux of photons, , is given by and the average energy per photon,, is given by Marr and Wilkin (2012) recommend that students be taught about instead of being taught Wien's displacement law, and that the above decomposition be taught when the Stefan–Boltzmann law is taught. See also Black-body radiation Rayleigh–Jeans law Sakuma–Hattori equation Notes References Laws of thermodynamics Power laws Heat transfer Ludwig Boltzmann
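The key integral appearing in the derivation from Planck's law, and the radiation-constant form of the energy density, can both be verified numerically (a sketch; SciPy's quadrature stands in for the closed-form evaluation, and the 300 K temperature is an arbitrary example value):

```python
import math
from scipy.integrate import quad

# Bose-Einstein integral from the derivation: int_0^inf x^3 / (e^x - 1) dx = pi^4 / 15
value, _ = quad(lambda x: x**3 / math.expm1(x), 1e-9, 80)
print(value, math.pi**4 / 15)  # both ~6.4939

# Radiation energy density U = a T^4 with the "radiation constant" a = 4 sigma / c
SIGMA, c = 5.670374419e-8, 299792458.0
a = 4 * SIGMA / c
print(a * 300.0**4)  # ~6.1e-6 J/m^3 at T = 300 K
```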
Stefan–Boltzmann law
[ "Physics", "Chemistry" ]
3,358
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics", "Laws of thermodynamics" ]
53,268
https://en.wikipedia.org/wiki/Convolution%20theorem
In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms. Functions of a continuous variable Consider two functions and with Fourier transforms and : where denotes the Fourier transform operator. The transform may be normalized in other ways, in which case constant scaling factors (typically or ) will appear in the convolution theorem below. The convolution of and is defined by: In this context the asterisk denotes convolution, instead of standard multiplication. The tensor product symbol is sometimes used instead. The convolution theorem states that: Applying the inverse Fourier transform produces the corollary: The theorem also generally applies to multi-dimensional functions. Consider functions in Lp-space with Fourier transforms : where indicates the inner product of :     and   The convolution of and is defined by: Also: Hence by Fubini's theorem we have that so its Fourier transform is defined by the integral formula: Note that   Hence by the argument above we may apply Fubini's theorem again (i.e. interchange the order of integration): This theorem also holds for the Laplace transform, the two-sided Laplace transform and, when suitably modified, for the Mellin transform and Hartley transform (see Mellin inversion theorem). It can be extended to the Fourier transform of abstract harmonic analysis defined over locally compact abelian groups. Periodic convolution (Fourier series coefficients) Consider -periodic functions   and   which can be expressed as periodic summations:   and   In practice the non-zero portion of components and are often limited to duration but nothing in the theorem requires that. The Fourier series coefficients are: where denotes the Fourier series integral. The product: is also -periodic, and its Fourier series coefficients are given by the discrete convolution of the and sequences: The convolution: is also -periodic, and is called a periodic convolution. The corresponding convolution theorem is: Functions of a discrete variable (sequences) By a derivation similar to Eq.1, there is an analogous theorem for sequences, such as samples of two continuous functions, where now denotes the discrete-time Fourier transform (DTFT) operator. Consider two sequences and with transforms and : The of and is defined by: The convolution theorem for discrete sequences is: Periodic convolution and as defined above, are periodic, with a period of 1. Consider -periodic sequences and :   and   These functions occur as the result of sampling and at intervals of and performing an inverse discrete Fourier transform (DFT) on samples (see ). The discrete convolution: is also -periodic, and is called a periodic convolution. Redefining the operator as the -length DFT, the corresponding theorem is: And therefore: Under the right conditions, it is possible for this -length sequence to contain a distortion-free segment of a convolution. But when the non-zero portion of the or sequence is equal or longer than some distortion is inevitable.  Such is the case when the sequence is obtained by directly sampling the DTFT of the infinitely long impulse response. 
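For the discrete, periodic form of the theorem, the statement can be checked directly with NumPy (a sketch; the two sequences are arbitrary example data):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 0.0, -1.0, 4.0])
y = np.array([0.5, -1.0, 2.0, 1.5, 0.0, 3.0])
N = len(x)

# Circular (periodic) convolution computed directly from the definition...
circ = np.array([sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)])

# ...and via the convolution theorem: IDFT( DFT(x) * DFT(y) )
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

print(np.allclose(circ, via_dft))  # True
```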
For and sequences whose non-zero duration is less than or equal to a final simplification is: This form is often used to efficiently implement numerical convolution by computer. As a partial reciprocal, it has been shown that any linear transform that turns convolution into a product is the DFT (up to a permutation of coefficients). A time-domain derivation proceeds as follows: A frequency-domain derivation follows from , which indicates that the DTFTs can be written as: The product with is thereby reduced to a discrete-frequency function: where the equivalence of and follows from . Therefore, the equivalence of (5a) and (5b) requires: We can also verify the inverse DTFT of (5b): Convolution theorem for inverse Fourier transform There is also a convolution theorem for the inverse Fourier transform: Here, "" represents the Hadamard product, and "" represents a convolution between the two matrices. so that Convolution theorem for tempered distributions The convolution theorem extends to tempered distributions. Here, is an arbitrary tempered distribution: But must be "rapidly decreasing" towards and in order to guarantee the existence of both, convolution and multiplication product. Equivalently, if is a smooth "slowly growing" ordinary function, it guarantees the existence of both, multiplication and convolution product. In particular, every compactly supported tempered distribution, such as the Dirac delta, is "rapidly decreasing". Equivalently, bandlimited functions, such as the function that is constantly are smooth "slowly growing" ordinary functions. If, for example, is the Dirac comb both equations yield the Poisson summation formula and if, furthermore, is the Dirac delta then is constantly one and these equations yield the Dirac comb identity. See also Moment-generating function of a random variable Notes References Further reading Additional resources For a visual representation of the use of the convolution theorem in signal processing, see: Johns Hopkins University's Java-aided simulation: http://www.jhu.edu/signals/convolve/index.html Theorems in Fourier analysis Articles containing proofs
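The zero-padding simplification mentioned above, in which a DFT of length at least the combined support returns the ordinary (linear) convolution, is the basis of fast convolution by computer; a minimal NumPy sketch with arbitrary example sequences:

```python
import numpy as np

def fft_convolve(x, y):
    """Linear convolution via zero-padded FFTs (length >= len(x) + len(y) - 1)."""
    n = len(x) + len(y) - 1
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    return np.fft.irfft(X * Y, n)

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.5, 2.0])
print(np.allclose(fft_convolve(x, y), np.convolve(x, y)))  # True
```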
Convolution theorem
[ "Mathematics" ]
1,208
[ "Articles containing proofs" ]
53,299
https://en.wikipedia.org/wiki/Brookhaven%20National%20Laboratory
Brookhaven National Laboratory (BNL) is a United States Department of Energy national laboratory located in Upton, New York, a hamlet of the Town of Brookhaven. It was formally established in 1947 at the site of Camp Upton, a former U.S. Army base on Long Island. Located approximately 60 miles east of New York City, it is managed by Stony Brook University and Battelle Memorial Institute. Research at BNL includes nuclear and high energy physics, energy science and technology, environmental and bioscience, nanoscience, and national security. The 5,300-acre campus contains several large research facilities, including the Relativistic Heavy Ion Collider and National Synchrotron Light Source II. Seven Nobel Prizes have been awarded for work conducted at Brookhaven Lab. Overview BNL operations are overseen by a Department of Energy site office; the laboratory is staffed by approximately 2,750 scientists, engineers, technicians, and support personnel, and hosts 4,000 guest investigators every year. The laboratory is guarded by a Department of Energy Protective Force, has a full service fire department, and has its own ZIP code (11973). In total, the lab spans a 5,300-acre area that is mostly coterminous with the hamlet of Upton, New York. BNL is served by a rail spur operated as-needed by the New York and Atlantic Railway. Co-located with the laboratory is the New York, NY, weather forecast office of the National Weather Service. Major programs Although originally conceived as a nuclear research facility, Brookhaven Lab's mission has greatly expanded. Its foci are now: Nuclear and high-energy physics Physics and chemistry of materials Environmental and climate research Nanomaterials Energy research Nonproliferation Structural biology Accelerator physics Operation Brookhaven National Lab was originally owned by the Atomic Energy Commission and is now owned by that agency's successor, the United States Department of Energy (DOE). DOE subcontracts the research and operation to universities and research organizations. It is currently operated by Brookhaven Science Associates LLC, which is an equal partnership of Stony Brook University and Battelle Memorial Institute. From 1947 to 1998, it was operated by Associated Universities, Inc. (AUI), but AUI lost its contract in the wake of two incidents: a 1994 fire at the facility's high-flux beam reactor that exposed several workers to radiation and reports in 1997 of a tritium leak into the groundwater of the Long Island Central Pine Barrens on which the facility sits. History Foundations Following World War II, the US Atomic Energy Commission was created to support government-sponsored peacetime research on atomic energy. The effort to build a nuclear reactor in the American northeast was fostered largely by physicists Isidor Isaac Rabi and Norman Foster Ramsey Jr., who during the war witnessed many of their colleagues at Columbia University leave for new remote research sites following the departure of the Manhattan Project from its campus. Their effort to house this reactor near New York City was rivalled by a similar effort at the Massachusetts Institute of Technology to have a facility near Boston. Involvement was quickly solicited from representatives of northeastern universities to the south and west of New York City such that this city would be at their geographic center. 
In March 1946 a nonprofit corporation was established that consisted of representatives from nine major research universities — Columbia, Cornell, Harvard, Johns Hopkins, MIT, Princeton, University of Pennsylvania, University of Rochester, and Yale University. Out of 17 considered sites in the Boston-Washington corridor, Camp Upton on Long Island was eventually chosen as the most suitable in consideration of space, transportation, and availability. The camp had been a training center for the US Army during both World War I and World War II, and a Japanese internment camp during the latter. Following the war, Camp Upton was no longer needed, and a plan was conceived to convert the military camp into a research facility. On March 21, 1947, the Camp Upton site was officially transferred from the U.S. War Department to the new U.S. Atomic Energy Commission (AEC), predecessor to the U.S. Department of Energy (DOE). Research and facilities Reactor history In 1947 construction began on the first nuclear reactor at Brookhaven, the Brookhaven Graphite Research Reactor. This reactor, which opened in 1950, was the first reactor to be constructed in the United States after World War II. The High Flux Beam Reactor operated from 1965 to 1999. In 1959 Brookhaven built the first US reactor specifically tailored to medical research, the Brookhaven Medical Research Reactor, which operated until 2000. Accelerator history In 1952 Brookhaven began using its first particle accelerator, the Cosmotron. At the time the Cosmotron was the world's highest energy accelerator, being the first to impart more than 1 GeV of energy to a particle. The Cosmotron was retired in 1966, after it was superseded in 1960 by the new Alternating Gradient Synchrotron (AGS). The AGS was used in research that resulted in three Nobel Prizes, including the discovery of the muon neutrino, the charm quark, and CP violation. In 1970 BNL started the ISABELLE project to develop and build two proton intersecting storage rings. The groundbreaking for the project was in October 1978. In 1981, with the tunnel for the accelerator already excavated, problems with the superconducting magnets needed for the ISABELLE accelerator brought the project to a halt, and the project was eventually cancelled in 1983. The National Synchrotron Light Source operated from 1982 to 2014 and was involved with two Nobel Prize-winning discoveries. It has since been replaced by the National Synchrotron Light Source II. After ISABELLE's cancellation, physicists at BNL proposed that the excavated tunnel and parts of the magnet assembly be used in another accelerator. In 1984 the first proposal for the accelerator now known as the Relativistic Heavy Ion Collider (RHIC) was put forward. Construction was funded in 1991 and RHIC has been operational since 2000. One of the world's only two operating heavy-ion colliders, RHIC is as of 2010 the second-highest-energy collider after the Large Hadron Collider. RHIC is housed in a tunnel 2.4 miles (3.9 km) long and is visible from space. On January 9, 2020, it was announced by Paul Dabbar, undersecretary of the US Department of Energy Office of Science, that the BNL eRHIC design had been selected over the conceptual design put forward by Thomas Jefferson National Accelerator Facility as the future Electron–ion collider (EIC) in the United States. In addition to the site selection, it was announced that the BNL EIC had acquired CD-0 (mission need) from the Department of Energy. 
BNL's eRHIC design proposes upgrading the existing Relativistic Heavy Ion Collider, which collides beams light to heavy ions including polarized protons, with a polarized electron facility, to be housed in the same tunnel. Other discoveries In 1958, Brookhaven scientists created one of the world's first video games, Tennis for Two. In 1967, Brookhaven scientists patented Maglev, a transportation technology that utilizes magnetic levitation. In 2024, Brookhaven National Laboratories scientists discovered a new kind of antimatter nucleus. Major facilities Relativistic Heavy Ion Collider (RHIC), which was designed to research quark–gluon plasma and the sources of proton spin. Until 2009 it was the world's most powerful heavy ion collider. It is the only collider of spin-polarized protons. Center for Functional Nanomaterials (CFN), used for the study of nanoscale materials. National Synchrotron Light Source II (NSLS-II), Brookhaven's newest user facility, opened in 2015 to replace the National Synchrotron Light Source (NSLS), which had operated for 30 years. NSLS was involved in the work that won the 2003 and 2009 Nobel Prize in Chemistry. Alternating Gradient Synchrotron, a particle accelerator that was used in three of the lab's Nobel prizes. Accelerator Test Facility, generates, accelerates and monitors particle beams. Tandem Van de Graaff, once the world's largest electrostatic accelerator. Computational Science resources, including access to a massively parallel Blue Gene series supercomputer that is among the fastest in the world for scientific research, run jointly by Brookhaven National Laboratory and Stony Brook University. Interdisciplinary Science Building, with unique laboratories for studying high-temperature superconductors and other materials important for addressing energy challenges. NASA Space Radiation Laboratory, where scientists use beams of ions to simulate cosmic rays and assess the risks of space radiation to human space travelers and equipment. Off-site contributions It is a contributing partner to ATLAS experiment, one of the four detectors located at the Large Hadron Collider (LHC). It is currently operating at CERN near Geneva, Switzerland. Brookhaven was also responsible for the design of the SNS accumulator ring in partnership with Spallation Neutron Source in Oak Ridge, Tennessee. Brookhaven plays a role in a range of neutrino research projects around the world, including the Daya Bay Reactor Neutrino Experiment in China and the Deep Underground Neutrino Experiment at Fermi National Accelerator Laboratory. Public access For other than approved Public Events, the Laboratory is closed to the general public. The lab is open to the public on several Sundays during the summer for tours and special programs. The public access program is referred to as 'Summer Sundays' and takes place in July, and features a science show and a tour of the lab's major facilities. The laboratory also hosts science fairs, science bowls, and robotics competitions for local schools, and lectures, concerts, and scientific talks for the local community. The Lab estimates that each year it enhances the science education of roughly 35,000 K-12 students on Long Island, more than 200 undergraduates, and 550 teachers from across the United States. Environmental cleanup In January 1997, ground water samples taken by BNL staff revealed concentrations of tritium that were twice the allowable federal drinking water standards—some samples taken later were 32 times the standard. 
The tritium was found to be leaking from the laboratory's High Flux Beam Reactor's spent-fuel pool into the aquifer that provides drinking water for nearby Suffolk County residents. DOE's and BNL's investigation of this incident concluded that the tritium had been leaking for as long as 12 years without DOE's or BNL's knowledge. Installing wells that could have detected the leak was first discussed by BNL engineers in 1993, but the wells were not completed until 1996. The resulting controversy about both BNL's handling of the tritium leak and perceived lapses in DOE's oversight led to the termination of AUI as the BNL contractor in May 1997. The responsibility for failing to discover Brookhaven's tritium leak has been acknowledged by laboratory managers, and DOE admits it failed to properly oversee the laboratory's operations. Brookhaven officials repeatedly treated the need for installing monitoring wells that would have detected the tritium leak as a low priority despite public concern and the laboratory's agreement to follow local environmental regulations. DOE's on-site oversight office, the Brookhaven Group, was directly responsible for Brookhaven's performance, but it failed to hold the laboratory accountable for meeting all of its regulatory commitments, especially its agreement to install monitoring wells. Senior DOE leadership also shared responsibility because they failed to put in place an effective system that encourages all parts of DOE to work together to ensure that contractors meet their responsibilities on environmental, safety and health issues. Unclear responsibilities for environment, safety and health matters has been a recurring problem for DOE management. Since 1993, DOE has spent more than US$580 million on remediating soil and groundwater contamination at the lab site and completed several high-profile projects. These include the decommissioning and decontamination of the Brookhaven Graphite Research Reactor, removal of mercury-contaminated sediment from the Peconic River, and installation and operation of 16 on- and off-site groundwater treatment systems that have cleaned more than 25 billion gallons of groundwater since 1996. Shortly after winning the contract to operate the lab in 1997, BSA formed a Community Advisory Council (CAC) to advise the laboratory director on cleanup projects and other items of interest to the community. The CAC represents a diverse range of interests and values of individuals and groups who are interested in or affected by the actions of the Laboratory. It consists of representatives from 26 local business, civic, education, environment, employee, government, and health organizations. The CAC sets its own agenda, brings forth issues important to the community, and works to provide consensus recommendations to Laboratory management. Nobel Prizes Nobel Prize in Physics 1957 – Chen Ning Yang and Tsung-Dao Lee – parity laws 1976 – Samuel C. C. Ting – J/Psi particle 1980 – James Cronin and Val Logsdon Fitch – CP-violation 1988 – Leon M. Lederman, Melvin Schwartz, Jack Steinberger – Muon neutrino 2002 – Raymond Davis, Jr. – Solar neutrino Nobel Prize in Chemistry 2003 – Roderick MacKinnon – Ion channel 2009 – Venkatraman Ramakrishnan and Thomas A. Steitz – Ribosome See also Center for the Advancement of Science in Space—operates the US National Laboratory on the ISS. Goldhaber fellows References "Dr. 
Strangelet or: How I Learned to Stop Worrying and Love the Big Bang" External links Brookhaven National Lab official website Physics Today: DOE Shuts Brookhaven Lab's HFBR in a Triumph of Politics Over Science Summer Sundays at Brookhaven National Laboratory Annotated bibliography for Brookhaven Laboratory from the Alsos Digital Library for Nuclear Issues Headlines Digitized Brookhaven National Laboratory reports from the TRAIL project, hosted at University of North Texas Libraries and TRAIL Stony Brook University United States Department of Energy national laboratories Federally Funded Research and Development Centers Nuclear research institutes Particle physics facilities Brookhaven, New York Tourist attractions in Suffolk County, New York Battelle Memorial Institute Superfund sites in New York (state) 1947 establishments in New York (state) Physics research institutes Theoretical physics institutes Institutes associated with CERN Energy infrastructure on Long Island, New York Research institutes in New York (state)
Brookhaven National Laboratory
[ "Physics", "Engineering" ]
2,952
[ "Nuclear research institutes", "Theoretical physics", "Nuclear organizations", "Theoretical physics institutes" ]
53,300
https://en.wikipedia.org/wiki/SLAC%20National%20Accelerator%20Laboratory
SLAC National Accelerator Laboratory, originally named the Stanford Linear Accelerator Center, is a federally funded research and development center in Menlo Park, California, United States. Founded in 1962, the laboratory is now sponsored by the United States Department of Energy and administrated by Stanford University. It is the site of the Stanford Linear Accelerator, a 3.2 kilometer (2-mile) linear accelerator constructed in 1966 that could accelerate electrons to energies of 50 GeV. Today SLAC research centers on a broad program in atomic and solid-state physics, chemistry, biology, and medicine using X-rays from synchrotron radiation and a free-electron laser as well as experimental and theoretical research in elementary particle physics, accelerator physics, astroparticle physics, and cosmology. The laboratory is under the programmatic direction of the United States Department of Energy Office of Science. History Founded in 1962 as the Stanford Linear Accelerator Center, the facility is located on of Stanford University-owned land on Sand Hill Road in Menlo Park, California, just west of the university's main campus. The main accelerator is long, making it the longest linear accelerator in the world, and has been operational since 1966. Research at SLAC has produced three Nobel Prizes in Physics: 1976: The charm quark; see J/ψ meson 1990: Quark structure inside protons and neutrons 1995: The tau lepton SLAC's meeting facilities also provided a venue for the Homebrew Computer Club and other pioneers of the home computer revolution of the late 1970s and early 1980s. In 1984, the laboratory was named an ASME National Historic Engineering Landmark and an IEEE Milestone. SLAC developed and, in December 1991, began hosting the first World Wide Web server outside of Europe. In the early-to-mid 1990s, the Stanford Linear Collider (SLC) investigated the properties of the Z boson using the Stanford Large Detector. As of 2005, SLAC employed over 1,000 people, some 150 of whom were physicists with doctorate degrees, and served over 3,000 visiting researchers yearly, operating particle accelerators for high-energy physics and the Stanford Synchrotron Radiation Laboratory (SSRL) for synchrotron light radiation research, which was "indispensable" in the research leading to the 2006 Nobel Prize in Chemistry awarded to Stanford Professor Roger D. Kornberg. In October 2008, the Department of Energy announced that the center's name would be changed to SLAC National Accelerator Laboratory. The reasons given include a better representation of the new direction of the lab and the ability to trademark the laboratory's name. Stanford University had legally opposed the Department of Energy's attempt to trademark "Stanford Linear Accelerator Center". In March 2009, it was announced that the SLAC National Accelerator Laboratory was to receive $68.3 million in Recovery Act Funding to be disbursed by Department of Energy's Office of Science. In October 2016, Bits and Watts launched as a collaboration between SLAC and Stanford University to design "better, greener electric grids". SLAC later pulled out over concerns about an industry partner, the state-owned Chinese electric utility. In April of 2024, SLAC completed two decades of work constructing the world's largest digital camera for the Legacy Survey of Space and Time (LSST) project at the Vera C. Rubin Observatory in Chile. The camera is expected to become operational in 2025. 
Components Accelerator The main accelerator was an RF linear accelerator that accelerated electrons and positrons up to 50 GeV. At long, the accelerator was the longest linear accelerator in the world, and was claimed to be "the world's most straight object." until 2017 when the European x-ray free electron laser opened. The main accelerator is buried below ground and passes underneath Interstate Highway 280. The above-ground klystron gallery atop the beamline, was the longest building in the United States until the LIGO project's twin interferometers were completed in 1999. It is easily distinguishable from the air and is marked as a visual waypoint on aeronautical charts. A portion of the original linear accelerator is now part of the Linac Coherent Light Source. Stanford Linear Collider The Stanford Linear Collider was a linear accelerator that collided electrons and positrons at SLAC. The center of mass energy was about 90 GeV, equal to the mass of the Z boson, which the accelerator was designed to study. Grad student Barrett D. Milliken discovered the first Z event on 12 April 1989 while poring over the previous day's computer data from the Mark II detector. The bulk of the data was collected by the SLAC Large Detector, which came online in 1991. Although largely overshadowed by the Large Electron–Positron Collider at CERN, which began running in 1989, the highly polarized electron beam at SLC (close to 80%) made certain unique measurements possible, such as parity violation in Z Boson-b quark coupling. Presently no beam enters the south and north arcs in the machine, which leads to the Final Focus, therefore this section is mothballed to run beam into the PEP2 section from the beam switchyard. SLAC Large Detector The SLAC Large Detector (SLD) was the main detector for the Stanford Linear Collider. It was designed primarily to detect Z bosons produced by the accelerator's electron-positron collisions. Built in 1991, the SLD operated from 1992 to 1998. PEP PEP (Positron-Electron Project) began operation in 1980, with center-of-mass energies up to 29 GeV. At its apex, PEP had five large particle detectors in operation, as well as a sixth smaller detector. About 300 researchers made used of PEP. PEP stopped operating in 1990, and PEP-II began construction in 1994. PEP-II From 1999 to 2008, the main purpose of the linear accelerator was to inject electrons and positrons into the PEP-II accelerator, an electron-positron collider with a pair of storage rings in circumference. PEP-II was host to the BaBar experiment, one of the so-called B-Factory experiments studying charge-parity symmetry. Stanford Synchrotron Radiation Lightsource The Stanford Synchrotron Radiation Lightsource (SSRL) is a synchrotron light user facility located on the SLAC campus. Originally built for particle physics, it was used in experiments where the J/ψ meson was discovered. It is now used exclusively for materials science and biology experiments which take advantage of the high-intensity synchrotron radiation emitted by the stored electron beam to study the structure of molecules. In the early 1990s, an independent electron injector was built for this storage ring, allowing it to operate independently of the main linear accelerator. Fermi Gamma-ray Space Telescope SLAC plays a primary role in the mission and operation of the Fermi Gamma-ray Space Telescope, launched in August 2008. 
The principal scientific objectives of this mission are: To understand the mechanisms of particle acceleration in AGNs, pulsars, and SNRs. To resolve the gamma-ray sky: unidentified sources and diffuse emission. To determine the high-energy behavior of gamma-ray bursts and transients. To probe dark matter and fundamental physics. KIPAC The Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) is partially housed on the grounds of SLAC, in addition to its presence on the main Stanford campus. PULSE The Stanford PULSE Institute (PULSE) is a Stanford Independent Laboratory located in the Central Laboratory at SLAC. PULSE was created by Stanford in 2005 to help Stanford faculty and SLAC scientists develop ultrafast x-ray research at LCLS. LCLS The Linac Coherent Light Source (LCLS) is a free electron laser facility located at SLAC. The LCLS is partially a reconstruction of the last 1/3 of the original linear accelerator at SLAC, and can deliver extremely intense x-ray radiation for research in a number of areas. It achieved first lasing in April 2009. The laser produces hard X-rays, 10⁹ times the relative brightness of traditional synchrotron sources, and is the most powerful x-ray source in the world. LCLS enables a variety of new experiments and provides enhancements for existing experimental methods. Often, x-rays are used to take "snapshots" of objects at the atomic level before obliterating samples. The laser's wavelength, ranging from 6.2 to 0.13 nm (200 to 9500 electron volts (eV)), is similar to the width of an atom, providing extremely detailed information that was previously unattainable. Additionally, the laser is capable of capturing images with a "shutter speed" measured in femtoseconds, or million-billionths of a second, necessary because the intensity of the beam is often high enough so that the sample explodes on the femtosecond timescale. LCLS-II The LCLS-II project will provide a major upgrade to LCLS by adding two new X-ray laser beams. The new system will utilize the existing tunnel to add a new superconducting accelerator at 4 GeV and two new sets of undulators that will increase the available energy range of LCLS. Advances from discoveries made using these new capabilities may include new drugs, next-generation computers, and new materials. FACET In 2012, the first two-thirds (~2 km) of the original SLAC LINAC were recommissioned for a new user facility, the Facility for Advanced Accelerator Experimental Tests (FACET). This facility was capable of delivering 20 GeV, 3 nC electron (and positron) beams with short bunch lengths and small spot sizes, ideal for beam-driven plasma acceleration studies. The facility ended operations in 2016 for the construction of LCLS-II, which will occupy the first third of the SLAC LINAC. The FACET-II project will re-establish electron and positron beams in the middle third of the LINAC for the continuation of beam-driven plasma acceleration studies in 2019. NLCTA The Next Linear Collider Test Accelerator (NLCTA) is a 60-120 MeV high-brightness electron beam linear accelerator used for experiments on advanced beam manipulation and acceleration techniques. It is located at SLAC's end station B. Theoretical Physics SLAC also performs theoretical research in elementary particle physics, including in areas of quantum field theory, collider physics, astroparticle physics, and particle phenomenology. 
Other discoveries SLAC has also been instrumental in the development of the klystron, a high-power microwave amplification tube. There is active research on plasma acceleration with recent successes such as the doubling of the energy of 42 GeV electrons in a meter-scale accelerator. There was a Paleoparadoxia found at the SLAC site, and its skeleton can be seen at a small museum there in the Breezeway. The SSRL facility was used to reveal hidden text in the Archimedes Palimpsest. X-rays from the synchrotron radiation lightsource caused the iron in the original ink to glow, allowing the researchers to photograph the original document that a Christian monk had scrubbed off. See also Accelerator physics Cyclotron Dipole magnet Electromagnetism List of particles List of United States college laboratories conducting basic defense research Particle beam Quadrupole magnet Spallation Neutron Source Wolfgang Panofsky (1961–84, SLAC Director; Professor, Stanford University) References External links SLAC Today , SLAC's online newspaper, published weekdays symmetry magazine, SLAC's monthly particle physics magazine, with Fermilab Particle physics facilities Stanford University Laboratories in California United States Department of Energy national laboratories Federally Funded Research and Development Centers Buildings and structures in San Mateo County, California Experimental particle physics Menlo Park, California University and college laboratories in the United States Research institutes established in 1962 1962 establishments in California Theoretical physics institutes Research institutes in the San Francisco Bay Area
SLAC National Accelerator Laboratory
[ "Physics" ]
2,503
[ "Theoretical physics", "Theoretical physics institutes", "Experimental physics", "Particle physics", "Experimental particle physics" ]
53,452
https://en.wikipedia.org/wiki/Euler%27s%20totient%20function
In number theory, Euler's totient function counts the positive integers up to a given integer that are relatively prime to . It is written using the Greek letter phi as or , and may also be called Euler's phi function. In other words, it is the number of integers in the range for which the greatest common divisor is equal to 1. The integers of this form are sometimes referred to as totatives of . For example, the totatives of are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively prime to 9, but the other three numbers in this range, 3, 6, and 9 are not, since and . Therefore, . As another example, since for the only integer in the range from 1 to is 1 itself, and . Euler's totient function is a multiplicative function, meaning that if two numbers and are relatively prime, then . This function gives the order of the multiplicative group of integers modulo (the group of units of the ring ). It is also used for defining the RSA encryption system. History, terminology, and notation Leonhard Euler introduced the function in 1763. However, he did not at that time choose any specific symbol to denote it. In a 1784 publication, Euler studied the function further, choosing the Greek letter to denote it: he wrote for "the multitude of numbers less than , and which have no common divisor with it". This definition varies from the current definition for the totient function at but is otherwise the same. The now-standard notation comes from Gauss's 1801 treatise Disquisitiones Arithmeticae, although Gauss did not use parentheses around the argument and wrote . Thus, it is often called Euler's phi function or simply the phi function. In 1879, J. J. Sylvester coined the term totient for this function, so it is also referred to as Euler's totient function, the Euler totient, or Euler's totient. Jordan's totient is a generalization of Euler's. The cototient of is defined as . It counts the number of positive integers less than or equal to that have at least one prime factor in common with . Computing Euler's totient function There are several formulae for computing . Euler's product formula It states where the product is over the distinct prime numbers dividing . (For notation, see Arithmetical function.) An equivalent formulation is where is the prime factorization of (that is, are distinct prime numbers). The proof of these formulae depends on two important facts. Phi is a multiplicative function This means that if , then . Proof outline: Let , , be the sets of positive integers which are coprime to and less than , , , respectively, so that , etc. Then there is a bijection between and by the Chinese remainder theorem. Value of phi for a prime power argument If is prime and , then Proof: Since is a prime number, the only possible values of are , and the only way to have is if is a multiple of , that is, , and there are such multiples not greater than . Therefore, the other numbers are all relatively prime to . Proof of Euler's product formula The fundamental theorem of arithmetic states that if there is a unique expression where are prime numbers and each . (The case corresponds to the empty product.) Repeatedly using the multiplicative property of and the formula for gives This gives both versions of Euler's product formula. An alternative proof that does not require the multiplicative property instead uses the inclusion-exclusion principle applied to the set , excluding the sets of integers divisible by the prime divisors. 
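Euler's product formula translates directly into a short routine (a sketch in Python; trial-division factorization is used purely for simplicity):

```python
def phi(n):
    """Euler's totient via the product formula: n * prod(1 - 1/p) over distinct primes p dividing n."""
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p  # multiply by (1 - 1/p), kept exact in integers
        p += 1
    if n > 1:                      # a remaining prime factor larger than sqrt(original n)
        result -= result // n
    return result

print([phi(k) for k in range(1, 13)])  # [1, 1, 2, 2, 4, 2, 6, 4, 6, 4, 10, 4]
print(phi(9), phi(20))                 # 6, 8
```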
Example In words: the distinct prime factors of 20 are 2 and 5; half of the twenty integers from 1 to 20 are divisible by 2, leaving ten; a fifth of those are divisible by 5, leaving eight numbers coprime to 20; these are: 1, 3, 7, 9, 11, 13, 17, 19. The alternative formula uses only integers: Fourier transform The totient is the discrete Fourier transform of the gcd, evaluated at 1. Let where for . Then The real part of this formula is For example, using and :Unlike the Euler product and the divisor sum formula, this one does not require knowing the factors of . However, it does involve the calculation of the greatest common divisor of and every positive integer less than , which suffices to provide the factorization anyway. Divisor sum The property established by Gauss, that where the sum is over all positive divisors of , can be proven in several ways. (See Arithmetical function for notational conventions.) One proof is to note that is also equal to the number of possible generators of the cyclic group ; specifically, if with , then is a generator for every coprime to . Since every element of generates a cyclic subgroup, and each subgroup is generated by precisely elements of , the formula follows. Equivalently, the formula can be derived by the same argument applied to the multiplicative group of the th roots of unity and the primitive th roots of unity. The formula can also be derived from elementary arithmetic. For example, let and consider the positive fractions up to 1 with denominator 20: Put them into lowest terms: These twenty fractions are all the positive ≤ 1 whose denominators are the divisors . The fractions with 20 as denominator are those with numerators relatively prime to 20, namely , , , , , , , ; by definition this is fractions. Similarly, there are fractions with denominator 10, and fractions with denominator 5, etc. Thus the set of twenty fractions is split into subsets of size for each dividing 20. A similar argument applies for any n. Möbius inversion applied to the divisor sum formula gives where is the Möbius function, the multiplicative function defined by and for each prime and . This formula may also be derived from the product formula by multiplying out to get An example: Some values The first 100 values are shown in the table and graph below: {| class="wikitable" style="text-align: right" |+ for ! + ! 1 || 2 || 3 || 4 || 5 || 6 || 7 || 8 || 9 || 10 |- ! 0 | 1 || 1 || 2 || 2 || 4 || 2 || 6 || 4 || 6 || 4 |- ! 10 | 10 || 4 || 12 || 6 || 8 || 8 || 16 || 6 || 18 || 8 |- ! 20 | 12 || 10 || 22 || 8 || 20 || 12 || 18 || 12 || 28 || 8 |- ! 30 | 30 || 16 || 20 || 16 || 24 || 12 || 36 || 18 || 24 || 16 |- ! 40 | 40 || 12 || 42 || 20 || 24 || 22 || 46 || 16 || 42 || 20 |- ! 50 | 32 || 24 || 52 || 18 || 40 || 24 || 36 || 28 || 58 || 16 |- ! 60 | 60 || 30 || 36 || 32 || 48 || 20 || 66 || 32 || 44 || 24 |- ! 70 | 70 || 24 || 72 || 36 || 40 || 36 || 60 || 24 || 78 || 32 |- ! 80 | 54 || 40 || 82 || 24 || 64 || 42 || 56 || 40 || 88 || 24 |- ! 90 | 72 || 44 || 60 || 46 || 72 || 32 || 96 || 42 || 60 || 40 |} In the graph at right the top line is an upper bound valid for all other than one, and attained if and only if is a prime number. A simple lower bound is , which is rather loose: in fact, the lower limit of the graph is proportional to . Euler's theorem This states that if and are relatively prime then The special case where is prime is known as Fermat's little theorem. 
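Gauss's divisor-sum property and Euler's theorem stated above are easy to verify numerically (a sketch; the brute-force totient below recomputes φ from the gcd definition, and n = 20 and a = 7 are arbitrary example values):

```python
from math import gcd

def phi(n):
    """Brute-force totient: count integers in 1..n that are coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

n = 20
# Divisor sum: the totients of the divisors of n add up to n itself.
print(sum(phi(d) for d in range(1, n + 1) if n % d == 0))  # 20

# Euler's theorem: a**phi(n) is congruent to 1 modulo n whenever gcd(a, n) = 1.
a = 7
print(pow(a, phi(n), n))  # 1, since gcd(7, 20) = 1
```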
Euler's theorem
This states that if a and n are relatively prime then
a^φ(n) ≡ 1 (mod n).
The special case where n is prime is known as Fermat's little theorem. Euler's theorem follows from Lagrange's theorem and the fact that φ(n) is the order of the multiplicative group of integers modulo n.

The RSA cryptosystem is based on this theorem: it implies that the inverse of the function a ↦ a^e mod n, where e is the (public) encryption exponent, is the function b ↦ b^d mod n, where d, the (private) decryption exponent, is the multiplicative inverse of e modulo φ(n). The difficulty of computing φ(n) without knowing the factorization of n is thus the difficulty of computing d: this is known as the RSA problem, which can be solved by factoring n. The owner of the private key knows the factorization, since an RSA private key is constructed by choosing n as the product of two (randomly chosen) large primes p and q. Only n is publicly disclosed, and given the difficulty of factoring large numbers we have the guarantee that no one else knows the factorization.

Other formulae
a | b implies φ(a) | φ(b).
n | φ(a^n − 1) for any a, n > 1.
φ(mn) = φ(m) φ(n) · d/φ(d), where d = gcd(m, n). In particular:
 φ(2m) = 2φ(m) if m is even, and φ(2m) = φ(m) if m is odd.
 φ(n^m) = n^(m−1) φ(n).
φ(lcm(m, n)) · φ(gcd(m, n)) = φ(m) · φ(n). Compare this to the formula lcm(m, n) · gcd(m, n) = m · n (see least common multiple).
φ(n) is even for n ≥ 3. Moreover, if n has r distinct odd prime factors, then 2^r | φ(n).
For any a > 1 and n > 6 such that 4 ∤ n there exists an l ≥ 2n such that l | φ(a^n − 1).
φ(n)/n = φ(rad(n))/rad(n), where rad(n) is the radical of n (the product of all distinct primes dividing n).
Σ_{1 ≤ k ≤ n, gcd(k, n) = 1} k = n φ(n)/2 for n > 1.

Menon's identity
In 1965 P. Kesava Menon proved
Σ_{1 ≤ k ≤ n, gcd(k, n) = 1} gcd(k − 1, n) = φ(n) d(n),
where d(n) is the number of divisors of n.

Divisibility by any fixed positive integer
The following property, which is part of the « folklore » (i.e., apparently unpublished as a specific result: see the introduction of this article in which it is stated as having « long been known ») has important consequences. For instance it rules out uniform distribution of the values of φ(n) in the arithmetic progressions modulo q for any integer q > 1.
For every fixed positive integer q, the relation q | φ(n) holds for almost all n, meaning for all but o(x) values of n ≤ x as x → ∞.
This is an elementary consequence of the fact that the sum of the reciprocals of the primes congruent to 1 modulo q diverges, which itself is a corollary of the proof of Dirichlet's theorem on arithmetic progressions.

Generating functions
The Dirichlet series for φ(n) may be written in terms of the Riemann zeta function as:
Σ_{n=1}^{∞} φ(n)/n^s = ζ(s − 1)/ζ(s),
where the left-hand side converges for Re(s) > 2.
The Lambert series generating function is
Σ_{n=1}^{∞} φ(n) q^n/(1 − q^n) = q/(1 − q)²,
which converges for |q| < 1.
Both of these are proved by elementary series manipulations and the formulae for φ(n).

Growth rate
In the words of Hardy & Wright, the order of φ(n) is "always 'nearly n'."
First,
lim sup_{n→∞} φ(n)/n = 1,
but as n goes to infinity, for all δ > 0,
φ(n)/n^(1−δ) → ∞.
These two formulae can be proved by using little more than the formulae for φ(n) and the divisor sum function σ(n). In fact, during the proof of the second formula, the inequality
6/π² < φ(n) σ(n)/n² < 1,
true for n > 1, is proved.
We also have
lim inf_{n→∞} φ(n) · log log n / n = e^(−γ).
Here γ is Euler's constant, γ = 0.577215665..., so e^γ = 1.7810724... and e^(−γ) = 0.56145948....
Proving this does not quite require the prime number theorem. Since log log n goes to infinity, this formula shows that
lim inf_{n→∞} φ(n)/n = 0.
In fact, more is true:
φ(n) > n / (e^γ log log n + 3/(log log n)) for n > 2, and
φ(n) < n / (e^γ log log n) for infinitely many n.
The second inequality was shown by Jean-Louis Nicolas. Ribenboim says "The method of proof is interesting, in that the inequality is shown first under the assumption that the Riemann hypothesis is true, secondly under the contrary assumption."
For the average order, we have
φ(1) + φ(2) + ⋯ + φ(n) = 3n²/π² + O(n (log n)^(2/3) (log log n)^(4/3)),
due to Arnold Walfisz, its proof exploiting estimates on exponential sums due to I. M. Vinogradov and N. M. Korobov. By a combination of van der Corput's and Vinogradov's methods, H.-Q. Liu (On Euler's function, Proc. Roy. Soc. Edinburgh Sect. A 146 (2016), no. 4, 769–775) improved the error term to O(n (log n)^(2/3) (log log n)^(1/3)) (this is currently the best known estimate of this type). The "Big O" stands for a quantity that is bounded by a constant times the function of n inside the parentheses (which is small compared to n²).
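The growth-rate statements above are easy to probe numerically. The sketch below (the cut-offs 50,000 and 20,000 are arbitrary choices for illustration) compares the partial sums of φ with 3n²/π² and tests the stated lower bound in terms of e^γ log log n for small n:

```python
from math import pi, log, exp

GAMMA = 0.5772156649015329   # numeric value of Euler's constant, as quoted in the text

def euler_phi(n: int) -> int:
    """phi(n) by trial-division factorization and the product formula."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result = result // p * (p - 1)
        p += 1
    return result // m * (m - 1) if m > 1 else result

# Average order: (phi(1) + ... + phi(N)) / N^2 should be close to 3/pi^2 ~ 0.3040
N = 50_000
total = sum(euler_phi(k) for k in range(1, N + 1))
print(total / N**2, 3 / pi**2)

# Lower bound quoted above: phi(n) > n / (e^gamma * log log n + 3 / log log n) for n > 2
for n in range(3, 20_000):
    ll = log(log(n))
    assert euler_phi(n) > n / (exp(GAMMA) * ll + 3 / ll)
```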
This asymptotic formula for the average order can be used to prove that the probability of two randomly chosen numbers being relatively prime is 6/π².

Ratio of consecutive values
In 1950 Somayajulu proved
lim inf_{n→∞} φ(n + 1)/φ(n) = 0 and lim sup_{n→∞} φ(n + 1)/φ(n) = ∞.
In 1954 Schinzel and Sierpiński strengthened this, proving that the set
{φ(n + 1)/φ(n) : n = 1, 2, 3, ...}
is dense in the positive real numbers. They also proved that the set
{φ(n)/n : n = 1, 2, 3, ...}
is dense in the interval (0, 1).

Totient numbers
A totient number is a value of Euler's totient function: that is, an m for which there is at least one n for which φ(n) = m. The valency or multiplicity of a totient number m is the number of solutions to this equation. A nontotient is a natural number which is not a totient number. Every odd integer exceeding 1 is trivially a nontotient. There are also infinitely many even nontotients, and indeed every positive integer has a multiple which is an even nontotient.

The number of totient numbers up to a given limit x is
(x/log x) · e^((C + o(1)) (log log log x)²)
for a constant C ≈ 0.8178.

If counted according to multiplicity, the number of totient numbers up to a given limit x is
|{n : φ(n) ≤ x}| = (ζ(2)ζ(3)/ζ(6)) · x + R(x),
where the error term R is of order at most x/(log x)^k for any positive k.

It is known that the multiplicity of m exceeds m^δ infinitely often for every sufficiently small positive δ.

Ford proved that for every integer k ≥ 2 there is a totient number m of multiplicity k: that is, for which the equation φ(n) = m has exactly k solutions; this result had previously been conjectured by Wacław Sierpiński, and it had been obtained as a consequence of Schinzel's hypothesis H. Indeed, each multiplicity that occurs, does so infinitely often. However, no number m is known with multiplicity k = 1. Carmichael's totient function conjecture is the statement that there is no such m.

Perfect totient numbers
A perfect totient number is an integer that is equal to the sum of its iterated totients. That is, we apply the totient function to a number n, apply it again to the resulting totient, and so on, until the number 1 is reached, and add together the resulting sequence of numbers; if the sum equals n, then n is a perfect totient number. For example, 9 is a perfect totient number: φ(9) = 6, φ(6) = 2 and φ(2) = 1, and 6 + 2 + 1 = 9.

Applications
Cyclotomy
In the last section of the Disquisitiones Gauss proves that a regular n-gon can be constructed with straightedge and compass if φ(n) is a power of 2. If n is a power of an odd prime number p, the formula for the totient says its totient can be a power of two only if n is a first power and p − 1 is a power of 2.
The primes that are one more than a power of 2 are called Fermat primes, and only five are known: 3, 5, 17, 257, and 65537. Fermat and Gauss knew of these. Nobody has been able to prove whether there are any more.
Thus, a regular n-gon has a straightedge-and-compass construction if n is a product of distinct Fermat primes and any power of 2. The first few such n are 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, ... .

Prime number theorem for arithmetic progressions
The totient gives the number of reduced residue classes modulo n. By the prime number theorem for arithmetic progressions (the quantitative form of Dirichlet's theorem), the primes are asymptotically equidistributed among these φ(n) classes: for gcd(a, n) = 1, the number of primes up to x that are congruent to a modulo n is asymptotically π(x)/φ(n).

The RSA cryptosystem
Setting up an RSA system involves choosing large prime numbers p and q, computing n = pq and k = φ(n), and finding two numbers e and d such that ed ≡ 1 (mod k). The numbers n and e (the "encryption key") are released to the public, and d (the "decryption key") is kept private.
A message, represented by an integer m, where 0 < m < n, is encrypted by computing S = m^e (mod n).
It is decrypted by computing t = S^d (mod n). Euler's theorem can be used to show that if 0 < t < n, then t = m.
The security of an RSA system would be compromised if the number n could be efficiently factored or if φ(n) could be efficiently computed without factoring n.
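The RSA setup just described can be walked through end to end with deliberately tiny numbers. In the Python sketch below, the primes 61 and 53 and the exponent 17 are illustrative textbook-style values chosen here (far too small to be secure), not parameters from the article; the modular inverse uses Python 3.8+'s three-argument pow:

```python
from math import gcd

def euler_phi(n: int) -> int:
    """phi(n) by trial-division factorization and the product formula."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result = result // p * (p - 1)
        p += 1
    return result // m * (m - 1) if m > 1 else result

# Euler's theorem: a^phi(n) == 1 (mod n) whenever gcd(a, n) = 1
for n in range(2, 200):
    phi_n = euler_phi(n)
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, phi_n, n) == 1

# Toy RSA round trip (insecure, illustration only)
p, q = 61, 53
n = p * q                      # 3233, released publicly together with e
phi_n = (p - 1) * (q - 1)      # phi(pq) = (p - 1)(q - 1) = 3120
e = 17                         # public exponent, coprime to phi_n
d = pow(e, -1, phi_n)          # private exponent: inverse of e modulo phi_n
message = 1234                 # any 0 < m < n
cipher = pow(message, e, n)    # encryption: m^e mod n
assert pow(cipher, d, n) == message   # decryption: c^d mod n recovers m
print(n, e, d, cipher)
```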
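The notions of totient number, multiplicity, nontotient and perfect totient number, as well as the constructibility criterion from the Cyclotomy subsection, all lend themselves to direct computation. The sketch below (the sieve and helper names are choices made here for illustration; the search bound relies on φ(n) ≥ √(n/2) from the Some values subsection) reproduces several of the facts quoted above:

```python
from collections import Counter

def phi_sieve(limit: int):
    """phi(n) for all n <= limit: start with phi[n] = n and, for each prime p,
    multiply every multiple of p by (1 - 1/p) using exact integer steps."""
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                       # p untouched so far => p is prime
            for m in range(p, limit + 1, p):
                phi[m] -= phi[m] // p
    return phi

PHI_MAX = 250
# phi(n) >= sqrt(n/2), so every n with phi(n) <= PHI_MAX satisfies n <= 2 * PHI_MAX^2
phi = phi_sieve(2 * PHI_MAX * PHI_MAX)
multiplicity = Counter(v for v in phi[1:] if v <= PHI_MAX)

# multiplicities of the totient numbers 1, 2, 4, 8
print(multiplicity[1], multiplicity[2], multiplicity[4], multiplicity[8])  # 2 3 4 5

# even nontotients up to 50 (every odd number above 1 is a nontotient anyway)
print([m for m in range(2, 51, 2) if multiplicity[m] == 0])  # [14, 26, 34, 38, 50]

def is_perfect_totient(n: int) -> bool:
    """n equals the sum of its iterated totients phi(n) + phi(phi(n)) + ... + 1."""
    total, m = 0, n
    while m > 1:
        m = phi[m]
        total += m
    return total == n

print([n for n in range(2, 1000) if is_perfect_totient(n)])
# 3, 9, 15, 27, 39, 81, 111, 183, 243, 255, 327, 363, 471, 729

def constructible_ngon(n: int) -> bool:
    """Gauss–Wantzel criterion: a regular n-gon (n >= 3) is straightedge-and-compass
    constructible exactly when phi(n) is a power of 2."""
    k = phi[n]
    return k & (k - 1) == 0

print([n for n in range(3, 41) if constructible_ngon(n)])
# 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40
```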
Unsolved problems
Lehmer's conjecture
If p is prime, then φ(p) = p − 1. In 1932 D. H. Lehmer asked whether there are any composite numbers n such that φ(n) divides n − 1. None are known. In 1933 he proved that if any such n exists, it must be odd, square-free, and divisible by at least seven distinct primes (i.e. ω(n) ≥ 7). In 1980 Cohen and Hagis proved that n > 10^20 and that ω(n) ≥ 14. Further, Hagis showed that if 3 divides n then n > 10^1937042 and ω(n) ≥ 298848.

Carmichael's conjecture
This states that there is no number n with the property that for all other numbers m, m ≠ n, φ(m) ≠ φ(n). See Ford's theorem above.
As stated in the main article, if there is a single counterexample to this conjecture, there must be infinitely many counterexamples, and the smallest one has at least ten billion digits in base 10.

Riemann hypothesis
The Riemann hypothesis is true if and only if the inequality
n/φ(n) < e^γ log log n + c/√(log n)
(with an explicit constant c) is true for all n greater than or equal to a certain primorial, the product of the first k primes for an explicit k; here γ is Euler's constant.

See also
Carmichael function (λ)
Dedekind psi function (𝜓)
Divisor function (σ)
Duffin–Schaeffer conjecture
Generalizations of Fermat's little theorem
Highly composite number
Multiplicative group of integers modulo n
Ramanujan sum
Totient summatory function (𝛷)

Notes

References
The Disquisitiones Arithmeticae has been translated from Latin into English and German. The German edition includes all of Gauss's papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes. References to the Disquisitiones are of the form Gauss, DA, art. nnn.
Dickson, Leonard Eugene, History of the Theory of Numbers, vol. 1, chapter 5, "Euler's Function, Generalizations; Farey Series", Chelsea Publishing, 1952.

External links
Euler's Phi Function and the Chinese Remainder Theorem — proof that φ(n) is multiplicative
Euler's totient function calculator in JavaScript — up to 20 digits
Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions
Plytage, Loomis, Polhill, Summing Up The Euler Phi Function

Modular arithmetic
Multiplicative functions
Articles containing proofs
Algebra
Number theory
Leonhard Euler
Euler's totient function
[ "Mathematics" ]
3,898
[ "Discrete mathematics", "Algebra", "Multiplicative functions", "Arithmetic", "Articles containing proofs", "Modular arithmetic", "Number theory" ]