Dataset columns:
id: int64 (39 to 79M)
url: string (lengths 32 to 168)
text: string (lengths 7 to 145k)
source: string (lengths 2 to 105)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (lengths 0 to 27)
4,595,202
https://en.wikipedia.org/wiki/Sodium%20phenylbutyrate
Sodium phenylbutyrate, sold under the brand name Buphenyl among others, is a salt of an aromatic fatty acid, 4-phenylbutyrate (4-PBA) or 4-phenylbutyric acid. The compound is used to treat urea cycle disorders, because its metabolites offer an alternative pathway to the urea cycle to allow excretion of excess nitrogen. Sodium phenylbutyrate is also a histone deacetylase inhibitor and chemical chaperone, leading respectively to research into its use as an anti-cancer agent and in protein misfolding diseases such as cystic fibrosis. Structure and properties Sodium phenylbutyrate is a sodium salt of an aromatic fatty acid, made up of an aromatic ring and butyric acid. The chemical name for sodium phenylbutyrate is 4-phenylbutyric acid, sodium salt. It forms water-soluble off-white crystals. Uses Medical uses Sodium phenylbutyrate is taken orally or by nasogastric intubation as a tablet or powder, and tastes very salty and bitter. It treats urea cycle disorders, genetic diseases in which nitrogen waste builds up in the blood plasma as ammonia and glutamine (a state called hyperammonemia) due to deficiencies in the enzymes carbamoyl phosphate synthetase I, ornithine transcarbamylase, or argininosuccinic acid synthetase. Uncontrolled, this causes intellectual impairment and early death. Sodium phenylbutyrate metabolites allow the kidneys to excrete excess nitrogen in place of urea, and coupled with dialysis, amino acid supplements and a protein-restricted diet, children born with urea cycle disorders can usually survive beyond 12 months. Patients may need treatment throughout their lives. The treatment was introduced by researchers in the 1990s, and approved by the U.S. Food and Drug Administration (FDA) in April 1996. Adverse effects Some women may experience the adverse effects of amenorrhea or menstrual dysfunction. Appetite loss is seen in 4% of patients. Body odor due to metabolism of phenylbutyrate affects 3% of patients, and 3% experience unpleasant tastes. Gastrointestinal symptoms and mostly mild indications of neurotoxicity are also seen in less than 2% of patients, among several other reported adverse effects. Administration during pregnancy is not recommended because sodium phenylbutyrate treatment could mimic maternal phenylketonuria due to the production of phenylalanine, potentially causing fetal brain damage. Research Urea cycle disorders Sodium phenylbutyrate administration was discovered to lead to an alternative nitrogen disposal pathway by Dr. Saul Brusilow, Mark Batshaw and colleagues at the Johns Hopkins School of Medicine in the early 1980s, following some serendipitous discoveries. They had studied ketoacid therapy for another inborn error of metabolism, citrullinemia, in the late 1970s, and noticed that arginine treatment led to an increase in nitrogen in the urine and a drop in ammonia in the blood. The researchers spoke to Norman Radin about this finding, and he remembered a 1914 article on using sodium benzoate to reduce urea excretion. Another 1919 article had used sodium phenylacetate, and so the researchers treated five patients with hyperammonemia with benzoate and phenylacetate and published a report in Science. In 1982 and 1984, the researchers published on using benzoate and arginine for urea cycle disorders in the NEJM. Use of sodium phenylbutyrate was introduced in the early 1990s, as it lacks the odor of phenylacetate. 
Chemical chaperone In cystic fibrosis, a point mutation in the Cystic Fibrosis Transmembrane Conductance Regulator protein, ΔF508-CFTR, causes it to be unstable and misfold, leaving it trapped in the endoplasmic reticulum and unable to reach the cell membrane. This lack of CFTR in the cell membrane leads to disrupted chloride transport and the symptoms of cystic fibrosis. Sodium phenylbutyrate can act as a chemical chaperone, stabilising the mutant CFTR in the endoplasmic reticulum and allowing it to reach the cell surface. Histone deacetylase inhibitor Owing to its activity as a histone deacetylase inhibitor, sodium phenylbutyrate is under investigation as a potential differentiation-inducing agent in malignant glioma and acute myeloid leukaemia, and also for the treatment of some sickle-cell disorders as an alternative to hydroxycarbamide, because it induces expression of fetal hemoglobin to replace missing adult hemoglobin. While small-scale investigation is proceeding, there are to date no published data to support the use of the compound in the clinical treatment of cancer, and it remains under limited investigation. Sodium phenylbutyrate is also being studied as a therapeutic option for the treatment of Huntington's disease. Other Phenylbutyrate has been associated with longer lifespans in Drosophila. University of Colorado researchers Curt Freed and Wenbo Zhou demonstrated that phenylbutyrate stops the progression of Parkinson's disease in mice by turning on a gene called DJ-1 that can protect dopaminergic neurons in the midbrain from dying. They plan to test phenylbutyrate for the treatment of Parkinson's disease in humans. Pharmacology Phenylbutyrate is a prodrug. In the human body it is first converted to phenylbutyryl-CoA and then metabolized by mitochondrial beta-oxidation, mainly in the liver and kidneys, to the active form, phenylacetate. Phenylacetate conjugates with glutamine to form phenylacetylglutamine, which is eliminated in the urine. Phenylacetylglutamine contains the same amount of nitrogen as urea, which makes it an alternative to urea for excreting nitrogen. Sodium phenylbutyrate taken by mouth can be detected in the blood within fifteen minutes, and reaches peak concentration in the bloodstream within an hour. It is metabolized into phenylacetate within half an hour. References CYP2D6 inhibitors Orphan drugs Prodrugs Organic sodium salts Nitrogen cycle Histone deacetylase inhibitors Phenyl alkanoic acids
Sodium phenylbutyrate
[ "Chemistry" ]
1,377
[ "Prodrugs", "Salts", "Organic sodium salts", "Nitrogen cycle", "Chemicals in medicine", "Metabolism" ]
4,595,738
https://en.wikipedia.org/wiki/Pipe%20bursting
Pipe bursting is a trenchless method of replacing buried pipelines (such as sewer, water, or natural gas pipes) without the need for a traditional construction trench. "Launching and receiving pits" replace the trench needed by conventional pipe-laying. Equipment There are five key pieces of equipment used in a pipe bursting operation: the expander head, pulling rods, a pulling machine, a retaining device, and a hydraulic power pack. Today's expander heads have a leading end much smaller in diameter than the trailing (bursting) end, small enough to fit through the pipe that will be replaced. The smaller leading end is designed to guide the expander head through the existing pipe; earlier models did not have this feature and lost course at times, resulting in incomplete pipe bursts and project failures. The transition from the leading end to the trailing end can include "fins" that make first contact with the existing pipe. Using these fins as the primary breaking point is a very effective way to ensure that the pipe is broken along the entire circumference. A machine is set in the receiving pit to pull the expander head and new pipe into the line. The head is pulled by heavy, interlocking links that form a chain. Each link weighs several hundred pounds. All of the equipment used in a pipe bursting operation is powered by one or more hydraulic power generators. Other applications Pipe bursting may also be used to expand pipeline carrying capacity by replacing smaller pipes with larger ones, or "upsizing". Extensive proving work by the gas and water industries has demonstrated the feasibility of upsizing gas mains, water mains and sewers. Upsizing from 100 mm to 225 mm diameter is now well established, and pipes of 36 inch diameter and greater have been replaced. References Trenchless technology Piping
Pipe bursting
[ "Chemistry", "Engineering" ]
367
[ "Piping", "Chemical engineering", "Mechanical engineering", "Building engineering" ]
7,931,146
https://en.wikipedia.org/wiki/Electroencephalography%20functional%20magnetic%20resonance%20imaging
EEG-fMRI (short for EEG-correlated fMRI or electroencephalography-correlated functional magnetic resonance imaging) is a multimodal neuroimaging technique whereby EEG and fMRI data are recorded synchronously to study electrical brain activity in correlation with the haemodynamic changes in the brain that accompany it, whether in normal function or associated with disorders. Principle Scalp EEG reflects the brain's electrical activity, and in particular post-synaptic potentials (see Inhibitory postsynaptic current and Excitatory postsynaptic potential) in the cerebral cortex, whereas fMRI is capable of detecting haemodynamic changes throughout the brain through the BOLD effect. EEG-fMRI therefore allows the measurement of both neuronal and haemodynamic activity, which comprise two important components of the neurovascular coupling mechanism. Methodology The simultaneous acquisition of EEG and fMRI data of sufficient quality requires solutions to problems linked to potential health risks (due to currents induced by the MR image forming process in the circuits created by the subject and EEG recording system) and to EEG and fMRI data quality. There are two degrees of integration of the data acquisition, reflecting technical limitations associated with the interference between the EEG and MR instruments. These are: interleaved acquisitions, in which each acquisition modality is interrupted in turn (periodically) to allow data of adequate quality to be recorded by the other modality; and continuous acquisitions, in which both modalities are able to record data of adequate quality continuously. The latter can be achieved using real-time or post-processing EEG artifact reduction software. EEG was first recorded in an MR environment around 1993. Several groups have found independent means to solve the problems of mutual contamination of the EEG and fMRI signals. The first continuous EEG-fMRI experiment was performed in 1999 using a numerical filtering approach. A predominantly software-based method was implemented shortly thereafter. An addition to the EEG-fMRI setup is simultaneous and synchronized video recording, which does not affect EEG and fMRI data quality. For the most part, the acquisition of concurrent EEG-fMRI data is now treated as a solved problem, and commercial devices are available from major manufacturers (e.g., Electrical Geodesics, Inc.; NeuroScan/Compumedics, Inc.; Brain Products; Advanced Neuro Technology), but issues remain. For example, there are significant residual artifacts in the EEG that occur with each heartbeat. The traces in the EEG that record this are often referred to as a "ballistocardiogram" (BCG), because of their presumed origin in the motion of the EEG leads in the magnetic field that occurs with each heartbeat. A number of methods have been developed to remove the BCG artifact from concurrent EEG-fMRI signals. The majority of early methods were based on manual identification of noise components using independent component analysis. However, more recent methods use low-rank sparse decomposition (LRSD), which automatically identifies noise components and results in a more thorough "scrubbing" of the BCG noise. Applications In principle, the technique combines the EEG’s well documented ability to characterise certain brain states with high temporal resolution and to reveal pathological patterns, with fMRI’s (more recently discovered and less well understood) ability to image blood dynamics through the entire brain with high spatial resolution. 
Up to now, EEG-fMRI has been mainly seen as an fMRI technique in which the synchronously acquired EEG is used to characterise brain activity (‘brain state’) across time, allowing the associated haemodynamic changes to be mapped (through statistical parametric mapping, for example). The initial motivation for EEG-fMRI was in the field of research into epilepsy, and in particular the study of interictal epileptiform discharges (IED, or interictal spikes) and their generators, and of seizures. IED are unpredictable and sub-clinical events in patients with epilepsy that can only be observed using EEG (or MEG). Therefore, recording EEG during fMRI acquisition allows the study of their haemodynamic correlates. The method can reveal haemodynamic changes linked to IED and seizures, and has proven a powerful scientific tool. Simultaneous and synchronized video recording identifies clinical seizure activity along with electrophysiological activity on EEG, which helps to investigate correlated haemodynamic changes in the brain during seizures. The clinical value of these findings is the subject of ongoing investigations, but recent research suggests acceptable reliability for EEG-fMRI studies and better sensitivity with higher-field scanners. Outside the field of epilepsy, EEG-fMRI has been used to study event-related (triggered by external stimuli) brain responses and has provided important new insights into baseline brain activity during resting wakefulness and sleep. References Further reading Neuroimaging Medical tests Medical physics
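To make the mapping step described above concrete, here is a minimal Python/NumPy sketch of the general idea: an EEG-derived event regressor (e.g., IED onset times) is convolved with a simplified canonical haemodynamic response function and fitted voxel-wise with a general linear model. All function names, the HRF parameters, and the data layout are illustrative assumptions, not the method of any specific EEG-fMRI package:

import numpy as np
from math import gamma as gamma_fn

def canonical_hrf(tr, duration=32.0):
    # Simplified double-gamma HRF with commonly used default parameters (assumed shape).
    t = np.arange(0.0, duration, tr)
    peak = t**5 * np.exp(-t) / gamma_fn(6)
    undershoot = t**15 * np.exp(-t) / gamma_fn(16)
    h = peak - undershoot / 6.0
    return h / h.sum()

def ied_regressor(event_times, n_scans, tr):
    # Stick function marking scans containing a discharge, convolved with the HRF.
    sticks = np.zeros(n_scans)
    for t_ev in event_times:                     # event times in seconds, taken from the EEG
        idx = int(round(t_ev / tr))
        if 0 <= idx < n_scans:
            sticks[idx] += 1.0
    return np.convolve(sticks, canonical_hrf(tr))[:n_scans]

def glm_t_map(bold, regressor):
    # bold: (n_scans, n_voxels) array; fit y = b0 + b1 * regressor per voxel
    # and return a t-like statistic for b1.
    X = np.column_stack([np.ones_like(regressor), regressor])
    beta, res, _, _ = np.linalg.lstsq(X, bold, rcond=None)
    dof = bold.shape[0] - X.shape[1]
    sigma2 = res / dof                           # residual variance per voxel
    var_b1 = sigma2 * np.linalg.inv(X.T @ X)[1, 1]
    return beta[1] / np.sqrt(var_b1)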
Electroencephalography functional magnetic resonance imaging
[ "Physics" ]
1,029
[ "Applied and interdisciplinary physics", "Medical physics" ]
7,931,806
https://en.wikipedia.org/wiki/Essential%20matrix
In computer vision, the essential matrix is a 3×3 matrix, $\mathbf{E}$, that relates corresponding points in stereo images assuming that the cameras satisfy the pinhole camera model. Function More specifically, if $\mathbf{y}$ and $\mathbf{y}'$ are homogeneous normalized image coordinates in image 1 and 2, respectively, then $(\mathbf{y}')^{\top}\,\mathbf{E}\,\mathbf{y} = 0$ if $\mathbf{y}$ and $\mathbf{y}'$ correspond to the same 3D point in the scene (not an "if and only if" due to the fact that points that lie on the same epipolar line in the first image will get mapped to the same epipolar line in the second image). The above relation which defines the essential matrix was published in 1981 by H. Christopher Longuet-Higgins, introducing the concept to the computer vision community. Richard Hartley and Andrew Zisserman's book reports that an analogous matrix appeared in photogrammetry long before that. Longuet-Higgins' paper includes an algorithm for estimating $\mathbf{E}$ from a set of corresponding normalized image coordinates as well as an algorithm for determining the relative position and orientation of the two cameras given that $\mathbf{E}$ is known. Finally, it shows how the 3D coordinates of the image points can be determined with the aid of the essential matrix. Use The essential matrix can be seen as a precursor to the fundamental matrix, $\mathbf{F}$. Both matrices can be used for establishing constraints between matching image points, but the essential matrix can only be used in relation to calibrated cameras, since the inner camera parameters (matrices $\mathbf{K}$ and $\mathbf{K}'$) must be known in order to achieve the normalization. If, however, the cameras are calibrated, the essential matrix can be useful for determining both the relative position and orientation between the cameras and the 3D position of corresponding image points. The essential matrix is related to the fundamental matrix by $\mathbf{E} = (\mathbf{K}')^{\top}\,\mathbf{F}\,\mathbf{K}$. Derivation and definition This derivation follows the paper by Longuet-Higgins. Two normalized cameras project the 3D world onto their respective image planes. Let the 3D coordinates of a point P be $(x_1, x_2, x_3)$ and $(x'_1, x'_2, x'_3)$ relative to each camera's coordinate system. Since the cameras are normalized, the corresponding image coordinates are $(y_1, y_2) = \left(\tfrac{x_1}{x_3}, \tfrac{x_2}{x_3}\right)$   and   $(y'_1, y'_2) = \left(\tfrac{x'_1}{x'_3}, \tfrac{x'_2}{x'_3}\right)$. A homogeneous representation of the two image coordinates is then given by $\mathbf{y} = (y_1, y_2, 1)^{\top}$   and   $\mathbf{y}' = (y'_1, y'_2, 1)^{\top}$, which also can be written more compactly as $\mathbf{y} = \tfrac{1}{x_3}\,\mathbf{x}$   and   $\mathbf{y}' = \tfrac{1}{x'_3}\,\mathbf{x}'$, where $\mathbf{y}$ and $\mathbf{y}'$ are homogeneous representations of the 2D image coordinates and $\mathbf{x}$ and $\mathbf{x}'$ are proper 3D coordinates but in two different coordinate systems. Another consequence of the normalized cameras is that their respective coordinate systems are related by means of a translation and rotation. This implies that the two sets of 3D coordinates are related as $\mathbf{x}' = \mathbf{R}\,(\mathbf{x} - \mathbf{t})$, where $\mathbf{R}$ is a $3 \times 3$ rotation matrix and $\mathbf{t}$ is a 3-dimensional translation vector. The essential matrix is then defined as: $\mathbf{E} = \mathbf{R}\,[\mathbf{t}]_{\times}$, where $[\mathbf{t}]_{\times}$ is the matrix representation of the cross product with $\mathbf{t}$. Note: Here, the transformation will transform points in the 2nd view to the 1st view. For the definition of $\mathbf{E}$ we are only interested in the orientations of the normalized image coordinates (See also: Triple product). As such we don't need the translational component when substituting image coordinates into the essential equation. To see that this definition of $\mathbf{E}$ describes a constraint on corresponding image coordinates, multiply $\mathbf{E}$ from left and right with the 3D coordinates of point P in the two different coordinate systems: $(\mathbf{x}')^{\top}\,\mathbf{E}\,\mathbf{x}$. Insert the above relations between $\mathbf{x}'$ and $\mathbf{x}$ and the definition of $\mathbf{E}$ in terms of $\mathbf{R}$ and $\mathbf{t}$: $(\mathbf{x}')^{\top}\,\mathbf{E}\,\mathbf{x} = (\mathbf{x} - \mathbf{t})^{\top}\,\mathbf{R}^{\top}\,\mathbf{R}\,[\mathbf{t}]_{\times}\,\mathbf{x} = (\mathbf{x} - \mathbf{t})^{\top}\,[\mathbf{t}]_{\times}\,\mathbf{x} = 0$, since $\mathbf{R}$ is a rotation matrix, and by the properties of the matrix representation of the cross product ($\mathbf{t}^{\top}[\mathbf{t}]_{\times} = \mathbf{0}$ and $\mathbf{x}^{\top}[\mathbf{t}]_{\times}\,\mathbf{x} = 0$). 
Finally, it can be assumed that both and are > 0, otherwise they are not visible in both cameras. This gives which is the constraint that the essential matrix defines between corresponding image points. Properties Not every arbitrary matrix can be an essential matrix for some stereo cameras. To see this notice that it is defined as the matrix product of one rotation matrix and one skew-symmetric matrix, both . The skew-symmetric matrix must have two singular values which are equal and another which is zero. The multiplication of the rotation matrix does not change the singular values which means that also the essential matrix has two singular values which are equal and one which is zero. The properties described here are sometimes referred to as internal constraints of the essential matrix. If the essential matrix is multiplied by a non-zero scalar, the result is again an essential matrix which defines exactly the same constraint as does. This means that can be seen as an element of a projective space, that is, two such matrices are considered equivalent if one is a non-zero scalar multiplication of the other. This is a relevant position, for example, if is estimated from image data. However, it is also possible to take the position that is defined as where , and then has a well-defined "scaling". It depends on the application which position is the more relevant. The constraints can also be expressed as and Here, the last equation is a matrix constraint, which can be seen as 9 constraints, one for each matrix element. These constraints are often used for determining the essential matrix from five corresponding point pairs. The essential matrix has five or six degrees of freedom, depending on whether or not it is seen as a projective element. The rotation matrix and the translation vector have three degrees of freedom each, in total six. If the essential matrix is considered as a projective element, however, one degree of freedom related to scalar multiplication must be subtracted leaving five degrees of freedom in total. Estimation Given a set of corresponding image points it is possible to estimate an essential matrix which satisfies the defining epipolar constraint for all the points in the set. However, if the image points are subject to noise, which is the common case in any practical situation, it is not possible to find an essential matrix which satisfies all constraints exactly. Depending on how the error related to each constraint is measured, it is possible to determine or estimate an essential matrix which optimally satisfies the constraints for a given set of corresponding image points. The most straightforward approach is to set up a total least squares problem, commonly known as the eight-point algorithm. Extracting rotation and translation Given that the essential matrix has been determined for a stereo camera pair -- for example, using the estimation method above -- this information can be used for determining also the rotation and translation (up to a scaling) between the two camera's coordinate systems. In these derivations is seen as a projective element rather than having a well-determined scaling. Finding one solution The following method for determining and is based on performing a SVD of , see Hartley & Zisserman's book. It is also possible to determine and without an SVD, for example, following Longuet-Higgins' paper. 
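For readability, the internal constraints and the factorization used in the next passage can be restated in explicit symbols. This is a reconstruction in standard notation consistent with the definition $\mathbf{E} = \mathbf{R}\,[\mathbf{t}]_{\times}$ above, not a verbatim restoration of the original formulas:

\[
\sigma_1(\mathbf{E}) = \sigma_2(\mathbf{E}) = s, \quad \sigma_3(\mathbf{E}) = 0, \qquad
\det \mathbf{E} = 0, \qquad
2\,\mathbf{E}\mathbf{E}^{\top}\mathbf{E} - \operatorname{tr}\!\bigl(\mathbf{E}\mathbf{E}^{\top}\bigr)\,\mathbf{E} = \mathbf{0},
\]
\[
\mathbf{E} = \mathbf{U}\,\operatorname{diag}(s, s, 0)\,\mathbf{V}^{\top}, \qquad
\mathbf{W} = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
[\mathbf{t}]_{\times} = \mathbf{V}\,\mathbf{W}\,\operatorname{diag}(s, s, 0)\,\mathbf{V}^{\top}, \qquad
\mathbf{R} = \mathbf{U}\,\mathbf{W}^{-1}\,\mathbf{V}^{\top}.
\]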
An SVD of gives where and are orthogonal matrices and is a diagonal matrix with The diagonal entries of are the singular values of which, according to the internal constraints of the essential matrix, must consist of two identical and one zero value. Define   with   and make the following ansatz Since may not completely fulfill the constraints when dealing with real world data (f.e. camera images), the alternative   with   may help. Proof First, these expressions for and do satisfy the defining equation for the essential matrix Second, it must be shown that this is a matrix representation of the cross product for some . Since it is the case that is skew-symmetric, i.e., . This is also the case for our , since According to the general properties of the matrix representation of the cross product it then follows that must be the cross product operator of exactly one vector . Third, it must also need to be shown that the above expression for is a rotation matrix. It is the product of three matrices which all are orthogonal which means that , too, is orthogonal or . To be a proper rotation matrix it must also satisfy . Since, in this case, is seen as a projective element this can be accomplished by reversing the sign of if necessary. Finding all solutions So far one possible solution for and has been established given . It is, however, not the only possible solution and it may not even be a valid solution from a practical point of view. To begin with, since the scaling of is undefined, the scaling of is also undefined. It must lie in the null space of since For the subsequent analysis of the solutions, however, the exact scaling of is not so important as its "sign", i.e., in which direction it points. Let be normalized vector in the null space of . It is then the case that both and are valid translation vectors relative . It is also possible to change into in the derivations of and above. For the translation vector this only causes a change of sign, which has already been described as a possibility. For the rotation, on the other hand, this will produce a different transformation, at least in the general case. To summarize, given there are two opposite directions which are possible for and two different rotations which are compatible with this essential matrix. In total this gives four classes of solutions for the rotation and translation between the two camera coordinate systems. On top of that, there is also an unknown scaling for the chosen translation direction. It turns out, however, that only one of the four classes of solutions can be realized in practice. Given a pair of corresponding image coordinates, three of the solutions will always produce a 3D point which lies behind at least one of the two cameras and therefore cannot be seen. Only one of the four classes will consistently produce 3D points which are in front of both cameras. This must then be the correct solution. Still, however, it has an undetermined positive scaling related to the translation component. The above determination of and assumes that satisfy the internal constraints of the essential matrix. If this is not the case which, for example, typically is the case if has been estimated from real (and noisy) image data, it has to be assumed that it approximately satisfy the internal constraints. The vector is then chosen as right singular vector of corresponding to the smallest singular value. 
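As a concrete illustration of the SVD-based recovery of the four rotation/translation candidates described above, here is a short, self-contained Python/NumPy sketch. Function and variable names are mine, and the cheirality test (keeping the one candidate that places triangulated points in front of both cameras) is only indicated in the comments, not implemented:

import numpy as np

def skew(t):
    # Matrix representation [t]_x of the cross product with t.
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    # Return the four (R, t) candidates for an essential matrix E = R [t]_x.
    # Standard SVD recipe with W = [[0,-1,0],[1,0,0],[0,0,1]]; t is recovered
    # (up to scale and sign) as the right singular vector for the zero singular
    # value, matching the convention of the derivation above.  Selecting the
    # single physically valid candidate (cheirality test) is left out here.
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:      # enforce proper orthogonal factors
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R_a = U @ W @ Vt
    R_b = U @ W.T @ Vt
    t = Vt[2, :]                  # null vector of E, i.e. the translation direction
    return [(R_a, t), (R_a, -t), (R_b, t), (R_b, -t)]

# Tiny self-check: build E from a known pose and confirm that E t = 0.
R_true = np.eye(3)
t_true = np.array([0.3, -0.1, 1.0])
E = R_true @ skew(t_true)
assert np.allclose(E @ t_true, 0.0)
candidates = decompose_essential(E)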
3D points from corresponding image points Many methods exist for computing given corresponding normalized image coordinates and , if the essential matrix is known and the corresponding rotation and translation transformations have been determined. See also Bundle adjustment Epipolar geometry Fundamental matrix Geometric camera calibration Triangulation (computer vision) Trifocal tensor Toolboxes Essential Matrix Estimation in MATLAB (Manolis Lourakis). External links An Investigation of the Essential Matrix by R.I. Hartley References Geometry in computer vision
Essential matrix
[ "Mathematics" ]
2,103
[ "Geometry in computer vision", "Geometry" ]
7,931,875
https://en.wikipedia.org/wiki/Neumann%27s%20law
Neumann's law states that the molecular heat in compounds of analogous constitution is always the same. It is named after German mineralogist and physicist Franz Ernst Neumann, who extended the law of the heat of elements by stating that the molecular heat is equal to the sum of the heat of each constituent atom. References Laws of thermodynamics
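In modern notation (molar heat capacity rather than "molecular heat"), the rule Neumann stated, usually cited as the Neumann–Kopp rule, can be written as follows; this is a restatement in standard symbols, not a quotation of the article above:

\[
C_p\!\left(\mathrm{A}_x\mathrm{B}_y\right) \;\approx\; x\,C_p(\mathrm{A}) + y\,C_p(\mathrm{B}),
\]

i.e., the molar heat capacity of a compound is approximately the stoichiometric sum of the molar heat capacities of its constituent elements.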
Neumann's law
[ "Physics", "Chemistry" ]
70
[ "Thermodynamics stubs", "Physical chemistry stubs", "Thermodynamics", "Laws of thermodynamics" ]
7,932,644
https://en.wikipedia.org/wiki/Unit%20tangent%20bundle
In Riemannian geometry, the unit tangent bundle of a Riemannian manifold (M, g), denoted by T1M, UT(M), UTM, or SM is the unit sphere bundle for the tangent bundle T(M). It is a fiber bundle over M whose fiber at each point is the unit sphere in the tangent space: where Tx(M) denotes the tangent space to M at x. Thus, elements of UT(M) are pairs (x, v), where x is some point of the manifold and v is some tangent direction (of unit length) to the manifold at x. The unit tangent bundle is equipped with a natural projection which takes each point of the bundle to its base point. The fiber π−1(x) over each point x ∈ M is an (n−1)-sphere Sn−1, where n is the dimension of M. The unit tangent bundle is therefore a sphere bundle over M with fiber Sn−1. The definition of unit sphere bundle can easily accommodate Finsler manifolds as well. Specifically, if M is a manifold equipped with a Finsler metric F : TM → R, then the unit sphere bundle is the subbundle of the tangent bundle whose fiber at x is the indicatrix of F: If M is an infinite-dimensional manifold (for example, a Banach, Fréchet or Hilbert manifold), then UT(M) can still be thought of as the unit sphere bundle for the tangent bundle T(M), but the fiber π−1(x) over x is then the infinite-dimensional unit sphere in the tangent space. Structures The unit tangent bundle carries a variety of differential geometric structures. The metric on M induces a contact structure on UTM. This is given in terms of a tautological one-form, defined at a point u of UTM (a unit tangent vector of M) by where is the pushforward along π of the vector v ∈ TuUTM. Geometrically, this contact structure can be regarded as the distribution of (2n−2)-planes which, at the unit vector u, is the pullback of the orthogonal complement of u in the tangent space of M. This is a contact structure, for the fiber of UTM is obviously an integral manifold (the vertical bundle is everywhere in the kernel of θ), and the remaining tangent directions are filled out by moving up the fiber of UTM. Thus the maximal integral manifold of θ is (an open set of) M itself. On a Finsler manifold, the contact form is defined by the analogous formula where gu is the fundamental tensor (the hessian of the Finsler metric). Geometrically, the associated distribution of hyperplanes at the point u ∈ UTxM is the inverse image under π* of the tangent hyperplane to the unit sphere in TxM at u. The volume form θ∧dθn−1 defines a measure on M, known as the kinematic measure, or Liouville measure, that is invariant under the geodesic flow of M. As a Radon measure, the kinematic measure μ is defined on compactly supported continuous functions ƒ on UTM by where dV is the volume element on M, and μp is the standard rotationally-invariant Borel measure on the Euclidean sphere UTpM. The Levi-Civita connection of M gives rise to a splitting of the tangent bundle into a vertical space V = kerπ* and horizontal space H on which π* is a linear isomorphism at each point of UTM. This splitting induces a metric on UTM by declaring that this splitting be an orthogonal direct sum, and defining the metric on H by the pullback: and defining the metric on V as the induced metric from the embedding of the fiber UTxM into the Euclidean space TxM. Equipped with this metric and contact form, UTM becomes a Sasakian manifold. Bibliography Jeffrey M. Lee: Manifolds and Differential Geometry. Graduate Studies in Mathematics Vol. 107, American Mathematical Society, Providence (2009). 
Jürgen Jost: Riemannian Geometry and Geometric Analysis, (2002) Springer-Verlag, Berlin. Ralph Abraham and Jerrold E. Marsden: Foundations of Mechanics, (1978) Benjamin-Cummings, London. Differential topology Ergodic theory Fiber bundles Riemannian geometry
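In standard symbols, the fiber-wise definition and the contact form described in the article above can be written as follows (a restatement in common notation, with $\pi_*$ the pushforward of the bundle projection; not the article's original typesetting):

\[
\mathrm{UT}(M) \;=\; \bigl\{(x, v) \in T(M) \;:\; g_x(v, v) = 1 \bigr\},
\qquad
\mathrm{UT}_x(M) \;=\; \bigl\{v \in T_x(M) \;:\; g_x(v, v) = 1 \bigr\},
\]
\[
\theta_u(v) \;=\; g\bigl(u,\; \pi_*(v)\bigr), \qquad v \in T_u\,\mathrm{UT}(M),
\]

with the kinematic (Liouville) measure induced by the volume form $\theta \wedge (d\theta)^{\,n-1}$ on $\mathrm{UT}(M)$.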
Unit tangent bundle
[ "Mathematics" ]
893
[ "Topology", "Differential topology", "Ergodic theory", "Dynamical systems" ]
7,932,827
https://en.wikipedia.org/wiki/Spatial%20multiplexing
Spatial multiplexing or space-division multiplexing (SM, SDM or SMX) is a multiplexing technique in MIMO wireless communication, fiber-optic communication and other communications technologies used to transmit independent channels separated in space. Fiber-optic communication In fiber-optic communication SDM refers to the usage of the transverse dimension of the fiber to separate the channels. Techniques Multi-core fiber (MCF) Multi-core fibers are designed with more than a single core. Different types of MCFs exist, of which “Uncoupled MCF” is the most common, in which each core is treated as an independent optical path. The main limitation of these systems is the presence of inter-core crosstalk. In recent times, different splicing techniques and coupling methods have been proposed and demonstrated, and despite many of the component technologies still being in the development stage, MCF systems already present the capability for huge transmission capacity. Recently, several component technologies for multicore optical fiber have been demonstrated, such as three-dimensional Y-splitters between different multicore fibers, a universal interconnection among the same fiber cores, and a device for fast swapping and interchange of wavelength-division multiplexed data among cores of multicore optical fiber. Multi-mode fibers (MMF) and Few-mode fibers (FMF) Multi-mode fibers have a larger core that allows the propagation of multiple cylindrical transverse modes (also referred to as linearly polarized modes), in contrast to a single mode fiber (SMF) that only supports the fundamental mode. The transverse modes are spatially orthogonal, and each allows propagation in both orthogonal polarizations. Typical MMFs are currently not viable for SDM, as the high mode count results in unmanageable levels of modal coupling and dispersion. The utilization of few-mode fibers, which are MMFs with a core size designed specially to allow a low count of spatial modes, is currently under consideration. Due to physical imperfections, the modes exchange power and experience different effective refractive indices as they propagate through the fiber. The power exchange results in modal coupling, and this effect is known to reduce the achievable capacity of the fiber if the modes experience unequal gain or attenuation. Therefore, if not compensated, the capacity increase is not linear in the mode count. The difference in effective refractive indices results in delay spread and hence inter-symbol interference. Mode multiplexers include photonic lanterns, multi-plane light conversion devices, and others. Fiber bundles Bundled fibers are also considered a form of SDM. Wireless communications If the transmitter is equipped with $N_t$ antennas and the receiver has $N_r$ antennas, the maximum spatial multiplexing order (the number of streams) is $N_s = \min(N_t, N_r)$ if a linear receiver is used. This means that $N_s$ streams can be transmitted in parallel, ideally leading to an $N_s$-fold increase of the spectral efficiency (the number of bits per second per Hz that can be transmitted over the wireless channel). The practical multiplexing gain can be limited by spatial correlation, which means that some of the parallel streams may have very weak channel gains. Encoding Open-loop approach In an open-loop MIMO system with $N_t$ transmitter antennas and $N_r$ receiver antennas, the input-output relationship can be described as $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$, where $\mathbf{x}$ is the $N_t \times 1$ vector of transmitted symbols, $\mathbf{y}$ and $\mathbf{n}$ are the $N_r \times 1$ vectors of received symbols and noise respectively, and $\mathbf{H}$ is the $N_r \times N_t$ matrix of channel coefficients. 
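To make the open-loop model concrete, here is a small Python/NumPy sketch that transmits independent QPSK streams over a random flat-fading channel and separates them with a zero-forcing (pseudo-inverse) linear receiver. The symbol alphabet, channel statistics, and SNR handling are simplifying assumptions for illustration, not a standardized transceiver:

import numpy as np

rng = np.random.default_rng(0)

def spatial_multiplex_zf(n_t=2, n_r=4, n_symbols=1000, snr_db=20.0):
    # Simulate y = H x + n and recover the N_t parallel streams with a
    # zero-forcing receiver (the pseudo-inverse of H).
    bits = rng.integers(0, 2, size=(2, n_t, n_symbols))          # one stream per TX antenna
    x = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)  # QPSK symbols

    # Rayleigh flat-fading channel, constant over the block.
    H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)

    noise_var = 10 ** (-snr_db / 10)
    n = np.sqrt(noise_var / 2) * (rng.standard_normal((n_r, n_symbols))
                                  + 1j * rng.standard_normal((n_r, n_symbols)))
    y = H @ x + n

    # Zero-forcing equalization, then hard QPSK decisions per stream.
    x_hat = np.linalg.pinv(H) @ y
    bits_hat = np.stack([(x_hat.real > 0).astype(int), (x_hat.imag > 0).astype(int)])
    return np.mean(bits_hat != bits)                              # bit error rate

print(spatial_multiplex_zf())   # BER of the multiplexed streams at 20 dB SNR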
An often encountered problem in open-loop spatial multiplexing is to guard against instances of high channel correlation and strong power imbalances between the multiple streams. One such extension which is being considered for DVB-NGH systems is the so-called enhanced Spatial Multiplexing (eSM) scheme. Closed-loop approach A closed-loop MIMO system utilizes Channel State Information (CSI) at the transmitter. In most cases, only partial CSI is available at the transmitter because of the limitations of the feedback channel. In a closed-loop MIMO system the input-output relationship can be described as $\mathbf{y} = \mathbf{H}\mathbf{W}\mathbf{s} + \mathbf{n}$, where $\mathbf{s}$ is the vector of transmitted symbols, $\mathbf{y}$ and $\mathbf{n}$ are the vectors of received symbols and noise respectively, $\mathbf{H}$ is the matrix of channel coefficients and $\mathbf{W}$ is the linear precoding matrix. The precoding matrix $\mathbf{W}$ is used to precode the symbols in the vector $\mathbf{s}$ to enhance the performance. The column dimension $N_s$ of $\mathbf{W}$ can be selected smaller than $N_t$, which is useful if the system requires only $N_s$ ($< N_t$) streams, for one of several reasons. Examples of the reasons are as follows: either the rank of the MIMO channel or the number of receiver antennas is smaller than the number of transmit antennas. See also 3G MIMO Space–time code Space–time trellis code WiMAX MIMO Fiber-optic communication Multiplexing References IEEE 802 Information theory Radio resource management
Spatial multiplexing
[ "Mathematics", "Technology", "Engineering" ]
957
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
7,933,916
https://en.wikipedia.org/wiki/Heteroreceptor
A heteroreceptor is a receptor located in the cell membrane of a neuron that regulates the synthesis and/or the release of mediators other than its own ligand. Heteroreceptors play a crucial role in modulating neurotransmitter systems and are often targets for therapeutic drugs. By influencing the activity of other neurotransmitters, these receptors contribute to the complex regulation of neural communication and have been implicated in various physiological and pathological processes. Heteroreceptors may be located in any part of the neuron, including the dendrites, the cell body, the axon, or the axon terminals. Heteroreceptors respond to neurotransmitters, neuromodulators, or neurohormones released from adjacent neurons or cells; in this they contrast with autoreceptors, which are sensitive only to neurotransmitters or hormones released by the cell in whose wall they are embedded. Examples Norepinephrine can influence the release of acetylcholine from parasympathetic neurons by acting on α2 adrenergic (α2A, α2B, and α2C) heteroreceptors. These effects are related to analgesia, sedation, and hypothermia. Acetylcholine can influence the release of norepinephrine from sympathetic neurons by acting on muscarinic-2 and muscarinic-4 heteroreceptors. CB1 negatively modulates the release of GABA and glutamate, playing a crucial role in maintaining homeostasis between excitatory and inhibitory transmission. Glutamate released from an excitatory neuron escapes from the synaptic cleft and preferentially affects mGluR III receptors on the presynaptic terminals of interneurons. Glutamate spillover leads to inhibition of GABA release, modulating GABAergic transmission. See also Autoreceptor References Receptors Cell signaling
Heteroreceptor
[ "Chemistry" ]
429
[ "Biochemistry stubs", "Receptors", "Molecular and cellular biology stubs", "Signal transduction" ]
7,934,659
https://en.wikipedia.org/wiki/Void%20ratio
The void ratio ($e$) of a mixture of solids and fluids (gases and liquids), or of a porous composite material such as concrete, is the ratio of the volume of the voids ($V_V$) filled by the fluids to the volume of all the solids ($V_S$). It is a dimensionless quantity in materials science and in soil science, and is closely related to the porosity (often noted as $\phi$, or $n$, depending on the convention), the ratio of the volume of voids ($V_V$) to the total (or bulk) volume ($V_T$), as follows: $e = \frac{V_V}{V_S} = \frac{\phi}{1 - \phi}$ and $\phi = \frac{V_V}{V_T} = \frac{e}{1 + e}$, in which, for idealized porous media with a rigid and undeformable skeleton structure (i.e., without variation of total volume ($V_T$) when the water content of the sample changes (no expansion or swelling with the wetting of the sample), nor contraction or shrinking effect after drying of the sample), the total (or bulk) volume ($V_T$) of an ideal porous material is the sum of the volume of the solids ($V_S$) and the volume of voids ($V_V$): $V_T = V_S + V_V$ (in a rock, or in a soil, this also assumes that the solid grains and the pore fluid are clearly separated, so swelling clay minerals such as smectite, montmorillonite, or bentonite containing bound water in their interlayer space are not considered here), and where $e$ is the void ratio, $\phi$ is the porosity, $V_V$ is the volume of void-space (gases and liquids), $V_S$ is the volume of solids, and $V_T$ is the total (or bulk) volume. This figure is relevant in composites, in mining (particularly with regard to the properties of tailings), and in soil science. In geotechnical engineering, it is considered one of the state variables of soils and represented by the symbol $e$. Note that in geotechnical engineering, the symbol $\phi$ usually represents the angle of shearing resistance, a shear strength (soil) parameter. Because of this, in soil science and geotechnics, these two equations are usually presented using $n$ for porosity: $e = \frac{n}{1 - n}$ and $n = \frac{e}{1 + e}$, where $e$ is the void ratio, $n$ is the porosity, $V_V$ is the volume of void-space (air and water), $V_S$ is the volume of solids, and $V_T$ is the total (or bulk) volume. Applications in soil sciences and geomechanics Control of the volume change tendency. Suppose the void ratio is high (loose soils). Under loading, voids in the soil skeleton tend to decrease (shrinkage), increasing the contact between adjacent particles and modifying the soil effective stress. The opposite situation, i.e. when the void ratio is relatively small (dense soils), indicates that the volume of the soil is vulnerable to increase (swelling) under unloading – the smectite (montmorillonite, bentonite) partially dry clay particles present in an unsaturated soil can swell due to their hydration after contact with water (when the saturated/unsaturated conditions fluctuate in a soil). Control of the fluid hydraulic conductivity (ability of water movement through the soil). Loose soils show a high hydraulic conductivity, while dense soils are less permeable. Particle movement. Small, unbound particles can move relatively quickly through the larger open voids in loose soil. In contrast, in dense soil, finer particles cannot freely pass through the smaller pores, which leads to clogging of the pore space. See also Pore space in soil Void (composites) References Further reading External links Materials science Soil mechanics Earth sciences Soil science Mining terminology Physical quantities
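As a small supplement to the relations above, the two conversions between void ratio and porosity can be written as a short Python helper (function names are illustrative, not from any standard library):

def void_ratio_from_porosity(phi):
    # e = phi / (1 - phi); phi must satisfy 0 <= phi < 1.
    if not 0.0 <= phi < 1.0:
        raise ValueError("porosity must satisfy 0 <= phi < 1")
    return phi / (1.0 - phi)

def porosity_from_void_ratio(e):
    # phi = e / (1 + e); e must be non-negative.
    if e < 0.0:
        raise ValueError("void ratio must be non-negative")
    return e / (1.0 + e)

# Example: a soil with 40% porosity has a void ratio of about 0.67.
print(void_ratio_from_porosity(0.40))    # 0.666...
print(porosity_from_void_ratio(0.6667))  # ~0.40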
Void ratio
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
729
[ "Physical phenomena", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Soil mechanics", "Materials science", "nan", "Physical properties" ]
7,934,734
https://en.wikipedia.org/wiki/Photometric%20redshift
A photometric redshift is an estimate for the recession velocity of an astronomical object such as a galaxy or quasar, made without measuring its spectrum. The technique uses photometry (that is, the brightness of the object viewed through various standard filters, each of which lets through a relatively broad passband of colours, such as red light, green light, or blue light) to determine the redshift, and hence, through Hubble's law, the distance, of the observed object. The technique was developed in the 1960s, but was largely replaced in the 1970s and 1980s by spectroscopic redshifts, using spectroscopy to observe the frequency (or wavelength) of characteristic spectral lines, and measure the shift of these lines from their laboratory positions. The photometric redshift technique has come back into mainstream use since 2000, as a result of large sky surveys conducted in the late 1990s and 2000s which have detected a large number of faint high-redshift objects, and telescope time limitations mean that only a small fraction of these can be observed by spectroscopy. Photometric redshifts were originally determined by calculating the expected observed data from a known emission spectrum at a range of redshifts. The technique relies upon the spectrum of radiation being emitted by the object having strong features that can be detected by the relatively crude filters. As photometric filters are sensitive to a range of wavelengths, and the technique relies on making many assumptions about the nature of the spectrum at the light-source, errors for these sorts of measurements can range up to δz = 0.5, and are much less reliable than spectroscopic determinations. In the absence of sufficient telescope time to determine a spectroscopic redshift for each object, the technique of photometric redshifts provides a method to determine an at least qualitative characterization of a redshift. For example, if a Sun-like spectrum had a redshift of z = 1, it would be brightest in the infrared rather than at the yellow-green color associated with the peak of its blackbody spectrum, and the light intensity will be reduced in the filter by a factor of two (i.e. 1+z) (see K correction for more details on the photometric consequences of redshift). Other means of estimating the redshift based on alternative observed quantities have been developed, like morphological redshifts of galaxy clusters derived from geometric measurements. In recent years, Bayesian statistical methods and artificial neural networks have been used to estimate redshifts from photometric data. References External links What are photometric redshifts? Doppler effects
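The template-fitting approach described above can be sketched in a few lines of Python: redshift a rest-frame template spectrum over a grid of trial redshifts, synthesize broadband fluxes through each filter response curve, and keep the redshift with the lowest chi-squared. The arrays, filter curves, and function names here are placeholders the caller must supply; this illustrates the principle only and ignores refinements such as K-corrections, priors, and multiple templates:

import numpy as np

def synthetic_flux(wave, flux, filt_wave, filt_resp):
    # Broadband flux of a spectrum (wave, flux) through one filter response curve.
    resp = np.interp(wave, filt_wave, filt_resp, left=0.0, right=0.0)
    return np.trapz(flux * resp * wave, wave) / np.trapz(resp * wave, wave)

def photometric_redshift(obs_fluxes, obs_errors, rest_wave, rest_flux, filters,
                         z_grid=np.linspace(0.0, 3.0, 301)):
    # Return the trial redshift minimizing chi^2 between the observed photometry
    # and a single redshifted template, with the template amplitude fitted analytically.
    chi2 = np.empty_like(z_grid)
    for i, z in enumerate(z_grid):
        model = np.array([synthetic_flux(rest_wave * (1.0 + z), rest_flux, fw, fr)
                          for fw, fr in filters])
        a = np.sum(model * obs_fluxes / obs_errors**2) / np.sum(model**2 / obs_errors**2)
        chi2[i] = np.sum(((obs_fluxes - a * model) / obs_errors) ** 2)
    return z_grid[np.argmin(chi2)]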
Photometric redshift
[ "Physics" ]
543
[ "Doppler effects", "Physical phenomena", "Astrophysics" ]
15,381,860
https://en.wikipedia.org/wiki/Chemical%20Propulsion%20Information%20Analysis%20Center
The Chemical Propulsion Information Analysis Center (CPIAC) is one of several United States Department of Defense (DoD) sponsored Information Analysis Centers (IACs), administered by the Defense Technical Information Center (DTIC). CPIAC is the oldest IAC, having been in continuous operation since 1946 when it was founded as the Rocket Propellant Information Agency as part of the Johns Hopkins University's Applied Physics Laboratory. Currently CPIAC is operated by The Johns Hopkins University, Whiting School of Engineering. IACs are part of the DoD’s Scientific and Technical Information Program (STIP) prescribed by DoD Directive 3200.12 and are chartered under DoD Instruction 3200.14-E5. CPIAC is the U.S. national clearinghouse and technical resource center for data, reports, and analyses related to system and component level technologies for chemical, electrical, and nuclear propulsion for rockets, missiles, and space and gun propulsion systems. CPIAC also provides technical and administrative support to the Joint Army-Navy-NASA-Air Force (JANNAF) Interagency Propulsion Committee, the primary technical information exchange platform for the U.S. propulsion industry. In addition to maintaining the most comprehensive propulsion-related scientific and technical reports collection in the world, CPIAC maintains a number of industry handbooks, manuals, databases, and its signature Propulsion Information Retrieval System (PIRS). This extensive information collection represents the documented national knowledge base in chemical rocket propulsion and is available for dissemination to eligible individuals and organizations. As a knowledgeable and objective participant in supporting industry research and development, CPIAC assists sponsors in maximizing increasingly limited research and development funding by focusing on key propulsion system technology needs through workshops, symposia, technical assessments, and surveys. CPIAC also performs research in support of its publication of a series of recurrent technology reviews and state-of-the-art reports in selected technical areas. History of CPIAC The rapid technological advances of the U.S. rocket industry during World War II, accomplished primarily through the wartime Office of Scientific Research and Development (OSRD) and its cadre of leading scientists, produced a substantial foundation of technical reports and data on solid rockets, propellants, and ballistics. Following deactivation of the OSRD in 1945, several of these early scientists accepted positions at the fledgling Johns Hopkins University Applied Physics Laboratory (APL) and were subsequently appointed by Commander (later Admiral) Levering Smith to serve on the post-war Navy Bureau of Ordnance (BuOrd) Propellant and Ignition Advisory Group. In April 1946, at the suggestion of Dr. Ralph E. Gibson (later to become the second director of APL), the group recommended the establishment of “a rocket intelligence agency with one main responsibility—that of promoting rapid circulation of technical information to all activities concerned.” Armed with $20,000 in BuOrd funding, APL established the initial Rocket Propellant Information Agency (RPIA) on 3 December 1946 to consolidate, organize, and catalog the inventory of wartime reports. In 1948, the addition of Army sponsorship and accompanying expansion of agency scope into gun propellants prompted a name change to the Solid Propellant Information Agency (SPIA). 
SPIA subsequently assumed responsibility for organizing and publishing the proceedings of the Joint Army-Navy and Interagency Solid Propellant Group Meetings, which later evolved into the JANNAF Propulsion Meeting. The previously sporadic industry technical exchange meetings were now formalized and conducted on a regular basis. The Air Force and NASA joined in sponsorship of SPIA in 1951 and 1959, respectively. With the establishment of NASA and increased activity in liquid-fueled rockets, missiles, and space vehicles, the Navy established the companion Liquid Propellant Information Agency (LPIA) at APL in 1958. On December 1, 1962, the SPIA and LPIA combined operations to form the Chemical Propulsion Information Agency. At the same time, CPIAC’s scope was expanded to include airbreathing, electrical, nuclear, and gun propulsion. The Interagency Chemical Rocket Propulsion Group (ICRPG), predecessor of the current JANNAF Interagency Propulsion Committee, was also chartered that year. In 1964, CPIAC became a DoD Information Analysis Center under the Naval Sea Systems Command. In 1980, the Defense Technical Information Center (DTIC) assumed administrative oversight of CPIAC, and in 1990, the operation of CPIAC was transferred from APL to The Johns Hopkins University Whiting School of Engineering. While CPIAC’s core functions have expanded significantly over the years, its founding mission of report collection activities has continued uninterrupted to date, making CPIAC the custodian of the most comprehensive chemical propulsion scientific and technical reports collection in the world. References Defense Technical Information Center Rocketry 1946 establishments in Maryland Johns Hopkins University
Chemical Propulsion Information Analysis Center
[ "Engineering" ]
977
[ "Rocketry", "Aerospace engineering" ]
15,383,036
https://en.wikipedia.org/wiki/Bionic%20contact%20lens
A bionic contact lens is a proposed device that could provide a virtual display that could have a variety of uses from assisting the visually impaired to video gaming, as claimed by the manufacturers and developers. The device will have the form of a conventional contact lens with added bionics technology in the form of a head-up display, with functional electronic circuits and infrared lights to create a virtual display allowing the viewer to see a computer-generated display superimposed on the world outside. Proposed components An antenna on the lens could pick up a radio frequency. In 2016, work on Interscatter from the University of Washington has shown the first Wi-Fi enabled contact lens prototype that can communicate directly with mobile devices such as smartphones at data rates between 2–11 Mbit/s. Development Development of the first contact lens display began in the 1990s. Experimental versions of these devices have been demonstrated, such as one developed by Sandia National Laboratories. The lens is expected to have more electronics and capabilities on the areas where the eye does not see. Radio frequency power transmission and solar cells are expected in future developments. Recent work augmented the contact lens with Wi-Fi connectivity. In 2011, a functioning prototype with a wireless antenna and a single-pixel display was developed. Previous prototypes proved that it is possible to create a biologically safe electronic lens that does not obstruct a person’s view. Engineers have tested the finished lenses on rabbits for up to 20 minutes and the animals showed no problems. See also Augmented reality Google Contact Lens Heads-up display Optical head-mounted display Smartglasses Visual prosthesis References Contact lenses Artificial organs American inventions Augmented reality Bionics
Bionic contact lens
[ "Engineering", "Biology" ]
338
[ "Bionics", "Artificial organs" ]
15,383,423
https://en.wikipedia.org/wiki/Non-bonding%20orbital
A non-bonding orbital, also known as non-bonding molecular orbital (NBMO), is a molecular orbital whose occupation by electrons neither increases nor decreases the bond order between the involved atoms. Non-bonding orbitals are often designated by the letter n in molecular orbital diagrams and electron transition notations. Non-bonding orbitals are the equivalent in molecular orbital theory of the lone pairs in Lewis structures. The energy level of a non-bonding orbital is typically in between the lower energy of a valence shell bonding orbital and the higher energy of a corresponding antibonding orbital. As such, a non-bonding orbital with electrons would commonly be a HOMO (highest occupied molecular orbital). According to molecular orbital theory, molecular orbitals are often modeled by the linear combination of atomic orbitals. In a simple diatomic molecule such as hydrogen fluoride (chemical formula: HF), one atom may have many more electrons than the other. A sigma bonding orbital is created between the atomic orbitals with like symmetry. Some orbitals (e.g. px and py orbitals from the fluorine in HF) may not have any other orbitals to combine with and become non-bonding molecular orbitals. In the HF example, the px and py orbitals remain px and py orbitals in shape but when viewed as molecular orbitals are thought of as non-bonding. The energy of the orbital does not depend on the length of any bond within the molecule. Its occupation neither increases nor decreases the stability of the molecule, relative to the atoms, since its energy is the same in the molecule as in one of the atoms. For example, there are two rigorously non-bonding orbitals that are occupied in the ground state of the hydrogen fluoride diatomic molecule; these molecular orbitals are localized on the fluorine atom and are composed of p-type atomic orbitals whose orientation is perpendicular to the internuclear axis. They are therefore unable to overlap and interact with the s-type valence orbital on the hydrogen atom. Although non-bonding orbitals are often similar to the atomic orbitals of their constituent atom, they do not need to be similar. An example of a non-similar one is the non-bonding orbital of the allyl anion, whose electron density is concentrated on the first and third carbon atoms. In fully delocalized canonical molecular orbital theory, it is often the case that none of the molecular orbitals of a molecule are strictly non-bonding in nature. However, in the context of localized molecular orbitals, the concept of a filled, non-bonding orbital tends to correspond to electrons described in Lewis structure terms as "lone pairs." There are several symbols used to represent unoccupied non-bonding orbitals. Occasionally, n* is used, in analogy to σ* and π*, but this usage is rare. Often, the atomic orbital symbol is used, most often p for p orbital; others have used the letter a for a generic atomic orbital. (By Bent's rule, unoccupied orbitals for a main-group element are almost always of p character, since s character is stabilizing and will be used for bonding orbitals. As an exception, the LUMO of phenyl cation is an spx (x ≈ 2) atomic orbital, due to the geometric constraint of the benzene ring.) Finally, Woodward and Hoffmann used the letter ω for non-bonding orbitals (occupied or unoccupied) in their monograph Conservation of Orbital Symmetry. Electron transitions Electrons in molecular non-bonding orbitals can undergo electron transitions such as n→σ* or n→π* transitions. 
For example, n→π* transitions can be seen in ultraviolet-visible spectroscopy of compounds with carbonyl groups, although absorbance is fairly weak. See also Molecular orbital theory Bonding orbital Antibonding orbital LCAO References Chemical bonding
Non-bonding orbital
[ "Physics", "Chemistry", "Materials_science" ]
809
[ "Chemical bonding", "Condensed matter physics", "nan" ]
15,384,297
https://en.wikipedia.org/wiki/Sacrificial%20part
A sacrificial part is a part of a machine or product that is intentionally engineered to fail under excess mechanical stress, electrical stress, or other unexpected and dangerous situations. The sacrificial part is engineered to fail first, thus breaking the serial connection and protecting other parts of the system downstream. Examples Examples of sacrificial parts include: Electrical fuses Over-pressure burst disks Mechanical shear pins Galvanic anodes Pyrotechnic fastener Fusible plug Some leader lines used in angling See also Mechanical engineering Safety equipment References
Sacrificial part
[ "Physics", "Engineering" ]
113
[ "Mechanical engineering stubs", "Applied and interdisciplinary physics", "Mechanical engineering" ]
15,386,743
https://en.wikipedia.org/wiki/Vector%20%28molecular%20biology%29
In molecular cloning, a vector is any particle (e.g., plasmids, cosmids, Lambda phages) used as a vehicle to artificially carry a foreign nucleic acid sequence – usually DNA – into another cell, where it can be replicated and/or expressed. A vector containing foreign DNA is termed recombinant DNA. The four major types of vectors are plasmids, viral vectors, cosmids, and artificial chromosomes. Of these, the most commonly used vectors are plasmids. Common to all engineered vectors are an origin of replication, a multicloning site, and a selectable marker. The vector itself generally carries a DNA sequence that consists of an insert (in this case the transgene) and a larger sequence that serves as the "backbone" of the vector. The purpose of a vector which transfers genetic information to another cell is typically to isolate, multiply, or express the insert in the target cell. All vectors may be used for cloning and are therefore cloning vectors, but there are also vectors designed specifically for cloning, while others may be designed specifically for other purposes, such as transcription and protein expression. Vectors designed specifically for the expression of the transgene in the target cell are called expression vectors, and generally have a promoter sequence that drives expression of the transgene. Simpler vectors called transcription vectors are only capable of being transcribed but not translated: they can be replicated in a target cell but not expressed, unlike expression vectors. Transcription vectors are used to amplify their insert. The manipulation of DNA is normally conducted on E. coli vectors, which contain elements necessary for their maintenance in E. coli. However, vectors may also have elements that allow them to be maintained in another organism such as yeast, plant or mammalian cells, and these vectors are called shuttle vectors. Such vectors have bacterial or viral elements which may be transferred to the non-bacterial host organism; however, other vectors termed intragenic vectors have also been developed to avoid the transfer of any genetic material from an alien species. Insertion of a vector into the target cell is usually called transformation for bacterial cells and transfection for eukaryotic cells, although insertion of a viral vector is often called transduction. Characteristics Plasmids Plasmids are double-stranded, extrachromosomal and generally circular DNA sequences that are capable of replication using the host cell's replication machinery. Plasmid vectors minimally consist of an origin of replication that allows for semi-independent replication of the plasmid in the host. Plasmids are found widely in many bacteria, for example in Escherichia coli, but may also be found in a few eukaryotes, for example in yeast such as Saccharomyces cerevisiae. Bacterial plasmids may be conjugative/transmissible or non-conjugative: conjugative - mediate DNA transfer through conjugation and therefore spread rapidly among the bacterial cells of a population; e.g., F plasmid, many R and some col plasmids. nonconjugative - do not mediate DNA transfer through conjugation, e.g., many R and col plasmids. Plasmids with specially-constructed features are commonly used in the laboratory for cloning purposes. These plasmids are generally non-conjugative but may have many more features, notably a "multiple cloning site" where multiple restriction enzyme cleavage sites allow for the insertion of a transgene insert. 
The bacteria containing the plasmids can generate millions of copies of the vector within the bacteria in hours, and the amplified vectors can be extracted from the bacteria for further manipulation. Plasmids may be used specifically as transcription vectors, and such plasmids may lack crucial sequences for protein expression. Plasmids used for protein expression, called expression vectors, would include elements for translation of protein, such as a ribosome binding site and start and stop codons. Viral vectors Viral vectors are genetically engineered viruses carrying modified viral DNA or RNA that has been rendered noninfectious, but still contain viral promoters and the transgene, thus allowing for translation of the transgene through a viral promoter. However, because viral vectors frequently lack infectious sequences, they require helper viruses or packaging lines for large-scale transfection. Viral vectors are often designed to permanently incorporate the insert into the host genome, and thus leave distinct genetic markers in the host genome after incorporating the transgene. For example, retroviruses leave a characteristic retroviral integration pattern after insertion that is detectable and indicates that the viral vector has incorporated into the host genome. Artificial chromosomes Artificial chromosomes are manufactured chromosomes in the context of yeast artificial chromosomes (YACs), bacterial artificial chromosomes (BACs), or human artificial chromosomes (HACs). An artificial chromosome can carry a much larger DNA fragment than other vectors. YACs and BACs can carry a DNA fragment up to 300,000 nucleotides long. Three structural necessities of an artificial chromosome include an origin of replication, a centromere, and telomeric end sequences. Transcription Transcription of the cloned gene is a necessary component of the vector when expression of the gene is required: one gene may be amplified through transcription to generate multiple copies of mRNA, the template on which protein may be produced through translation. A larger number of mRNAs would express a greater amount of protein, and how many copies of mRNA are generated depends on the promoter used in the vector. The expression may be constitutive, meaning that the protein is produced constantly in the background, or it may be inducible, whereby the protein is expressed only under certain conditions, for example when a chemical inducer is added. These two different types of expression depend on the types of promoter and operator used. Viral promoters are often used for constitutive expression in plasmids and in viral vectors because they reliably force constant transcription in many cell lines and types. Inducible expression depends on promoters that respond to the induction conditions: for example, the murine mammary tumor virus promoter initiates transcription only after dexamethasone application, and the Drosophila heat shock promoter initiates only after high temperatures. Some vectors are designed for transcription only, for example for in vitro mRNA production. These vectors are called transcription vectors. They may lack the sequences necessary for polyadenylation and termination and therefore may not be used for protein production. Expression Expression vectors produce proteins through the transcription of the vector's insert followed by translation of the mRNA produced; they therefore require more components than the simpler transcription-only vectors. 
Expression in different host organisms would require different elements, although they share similar requirements, for example a promoter for initiation of transcription, a ribosomal binding site for translation initiation, and termination signals. Prokaryotic expression vectors Promoter - commonly used inducible promoters are promoters derived from the lac operon and the T7 promoter. Other strong promoters used include the Trp promoter and the Tac promoter, which is a hybrid of the Trp and lac operon promoters. Ribosome binding site (RBS) - follows the promoter, and promotes efficient translation of the protein of interest. Translation initiation site - the Shine-Dalgarno sequence enclosed in the RBS, 8 base pairs upstream of the AUG start codon. Eukaryotic expression vectors Eukaryotic expression vectors require sequences that encode for: Polyadenylation tail: Creates a polyadenylation tail at the end of the transcribed pre-mRNA that protects the mRNA from exonucleases and ensures transcriptional and translational termination: stabilizes mRNA production. Minimal UTR length: UTRs contain specific characteristics that may impede transcription or translation, and thus the shortest UTRs or none at all are encoded for in optimal expression vectors. Kozak sequence: Vectors should encode for a Kozak sequence in the mRNA, which assembles the ribosome for translation of the mRNA. Features Modern artificially constructed vectors contain essential components found in all vectors, and may contain other additional features found only in some vectors: Origin of replication: Necessary for the replication and maintenance of the vector in the host cell. Promoter: Promoters are used to drive the transcription of the vector's transgene as well as the other genes in the vector, such as the antibiotic resistance gene. Some cloning vectors need not have a promoter for the cloned insert, but it is an essential component of expression vectors so that the cloned product may be expressed. Cloning site: This may be a multiple cloning site or other features that allow for the insertion of foreign DNA into the vector through ligation. Genetic markers: Genetic markers for viral vectors allow for confirmation that the vector has integrated with the host genomic DNA. Antibiotic resistance: Vectors with antibiotic-resistance open reading frames allow for survival of cells that have taken up the vector in growth media containing antibiotics, through antibiotic selection. Epitope: Some vectors may contain a sequence for a specific epitope that can be incorporated into the expressed protein. It allows for antibody identification of cells expressing the target protein. Reporter genes: Some vectors may contain a reporter gene that allows for identification of plasmids that contain an inserted DNA sequence. An example is lacZ-α, which codes for the N-terminal fragment of β-galactosidase, an enzyme that hydrolyzes β-galactosides such as lactose. A multiple cloning site is located within lacZ-α, and an insert successfully ligated into the vector will disrupt the gene sequence, resulting in an inactive β-galactosidase. Cells containing a vector with an insert may be identified using blue/white selection by growing cells in media containing a chromogenic galactoside analogue (X-gal). Cells expressing β-galactosidase (and therefore not containing an insert) appear as blue colonies. White colonies would be selected as those that may contain an insert. Other commonly used reporters include green fluorescent protein and luciferase. 
Targeting sequence: Expression vectors may encode a targeting sequence in the finished protein that directs the expressed protein to a specific organelle in the cell or a specific location, such as the periplasmic space of bacteria. Protein purification tags: Some expression vectors include protein or peptide sequences that allow for easier purification of the expressed protein. Examples include the polyhistidine tag, glutathione S-transferase, and maltose binding protein. Some of these tags may also increase the solubility of the target protein. The target protein is fused to the protein tag, but a protease cleavage site positioned in the polypeptide linker region between the protein and the tag allows the tag to be removed later. (These common vector features are sketched schematically in the example below.) See also Plasmid Viral vector Cloning vector Expression vector Hybrid vector Minicircle Recombinant DNA Naked DNA Vector (epidemiology), an organism that transmits disease Human artificial chromosomes Yeast artificial chromosomes Bacterial artificial chromosomes DNA vaccination References Further reading External links Waksman Scholars introduction to vectors A comparison of vectors in use for clinical gene transfer Gene Transport Unit Molecular biology Gene delivery
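The list of vector features above reads like the specification of a small data structure, so, purely as an illustration, here is a minimal Python sketch of how such an annotated vector might be represented in software. Every name in it (Feature, ExpressionVector, the feature kinds, the coordinates) is hypothetical and chosen only for this example; it is not part of the article or of any particular library.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Feature:
    # One annotated element on the vector backbone.
    name: str    # e.g. "ori", "T7 promoter", "MCS", "ampR", "lacZ-alpha"
    kind: str    # e.g. "origin", "promoter", "cloning_site", "selection", "reporter"
    start: int   # 0-based position on the (circular) sequence
    end: int

@dataclass
class ExpressionVector:
    # Toy representation of an engineered plasmid expression vector.
    name: str
    sequence: str                                   # backbone plus insert, as a plain string
    features: List[Feature] = field(default_factory=list)
    insert: Optional[str] = None                    # the cloned transgene, if any

    def has(self, kind: str) -> bool:
        return any(f.kind == kind for f in self.features)

    def missing_essentials(self) -> List[str]:
        # Origin of replication, cloning site and selectable marker are described above as
        # common to engineered vectors; a promoter is additionally needed for expression.
        required = ["origin", "cloning_site", "selection", "promoter"]
        return [kind for kind in required if not self.has(kind)]

# Example usage with made-up coordinates:
vec = ExpressionVector(
    name="toy_expression_vector",
    sequence="ATGC",  # placeholder, not a real sequence
    features=[
        Feature("ori", "origin", 10, 600),
        Feature("T7 promoter", "promoter", 700, 720),
        Feature("MCS", "cloning_site", 730, 790),
        Feature("ampR", "selection", 900, 1760),
    ],
)
print(vec.missing_essentials())  # -> [] when all essential feature kinds are present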
Vector (molecular biology)
[ "Chemistry", "Biology" ]
2,296
[ "Genetics techniques", "Molecular biology techniques", "Molecular biology", "Biochemistry", "Gene delivery" ]
20,487,941
https://en.wikipedia.org/wiki/Relation%20between%20Schr%C3%B6dinger%27s%20equation%20and%20the%20path%20integral%20formulation%20of%20quantum%20mechanics
This article relates the Schrödinger equation with the path integral formulation of quantum mechanics using a simple nonrelativistic one-dimensional single-particle Hamiltonian composed of kinetic and potential energy. Background Schrödinger's equation Schrödinger's equation, in bra–ket notation, is where is the Hamiltonian operator. The Hamiltonian operator can be written where is the potential energy, m is the mass and we have assumed for simplicity that there is only one spatial dimension . The formal solution of the equation is where we have assumed the initial state is a free-particle spatial state . The transition probability amplitude for a transition from an initial state to a final free-particle spatial state at time is Path integral formulation The path integral formulation states that the transition amplitude is simply the integral of the quantity over all possible paths from the initial state to the final state. Here is the classical action. The reformulation of this transition amplitude, originally due to Dirac and conceptualized by Feynman, forms the basis of the path integral formulation. From Schrödinger's equation to the path integral formulation The following derivation makes use of the Trotter product formula, which states that for self-adjoint operators and (satisfying certain technical conditions), we have even if and do not commute. We can divide the time interval into segments of length The transition amplitude can then be written Although the kinetic energy and potential energy operators do not commute, the Trotter product formula, cited above, says that over each small time-interval, we can ignore this noncommutativity and write The equality of the above can be verified to hold up to first order in by expanding the exponential as power series. For notational simplicity, we delay making this substitution for the moment. We can insert the identity matrix times between the exponentials to yield We now implement the substitution associated to the Trotter product formula, so that we have, effectively We can insert the identity into the amplitude to yield where we have used the fact that the free particle wave function is The integral over can be performed (see Common integrals in quantum field theory) to obtain The transition amplitude for the entire time period is If we take the limit of large the transition amplitude reduces to where is the classical action given by and is the classical Lagrangian given by Any possible path of the particle, going from the initial state to the final state, is approximated as a broken line and included in the measure of the integral This expression actually defines the manner in which the path integrals are to be taken. The coefficient in front is needed to ensure that the expression has the correct dimensions, but it has no actual relevance in any physical application. This recovers the path integral formulation from Schrödinger's equation. From path integral formulation to Schrödinger's equation The path integral reproduces the Schrödinger equation for the initial and final state even when a potential is present. This is easiest to see by taking a path-integral over infinitesimally separated times. Since the time separation is infinitesimal and the cancelling oscillations become severe for large values of , the path integral has most weight for close to . In this case, to lowest order the potential energy is constant, and only the kinetic energy contribution is nontrivial. 
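Because the displayed equations in this article were lost during extraction, the following is a hedged reconstruction, in one standard convention, of the formulas the preceding derivation refers to (normalizations and factors of \hbar vary between textbooks):

i\hbar \frac{d}{dt}\,|\psi(t)\rangle = \hat H\,|\psi(t)\rangle ,
\qquad
\hat H = \frac{\hat p^{2}}{2m} + V(\hat q),
\qquad
|\psi(t)\rangle = e^{-i\hat H t/\hbar}\,|\psi(0)\rangle .

The transition amplitude and its path-integral form are

\langle q_F |\, e^{-i\hat H T/\hbar} \,| q_I \rangle
= \int \mathcal{D}q(t)\; e^{iS[q]/\hbar},
\qquad
S[q] = \int_{0}^{T} dt\;\Big[ \tfrac{1}{2} m \dot q^{2} - V(q) \Big],

with the measure understood as the N \to \infty limit over broken-line paths,

\int \mathcal{D}q(t) \;=\; \lim_{N\to\infty} \Big( \frac{m}{2\pi i\hbar\varepsilon} \Big)^{N/2} \int \prod_{j=1}^{N-1} dq_j ,
\qquad \varepsilon = \frac{T}{N}.

The Trotter product formula used in the derivation reads

e^{-i\hat H T/\hbar} \;=\; \lim_{N\to\infty} \Big( e^{-iV(\hat q)\,\varepsilon/\hbar}\; e^{-i\hat p^{2}\varepsilon/2m\hbar} \Big)^{N},

and the free-particle states satisfy \langle q | p \rangle = e^{ipq/\hbar}/\sqrt{2\pi\hbar}. Over a single step of length \varepsilon one therefore factorizes the exponential into a potential-energy phase times a free-particle (kinetic) propagator.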
(This separation of the kinetic and potential energy terms in the exponent is essentially the Trotter product formula.) The exponential of the action is The first term rotates the phase of locally by an amount proportional to the potential energy. The second term is the free particle propagator, corresponding to times a diffusion process. To lowest order in they are additive; in any case one has with : As mentioned, the spread in is diffusive from the free particle propagation, with an extra infinitesimal rotation in phase which slowly varies from point to point from the potential: and this is the Schrödinger equation. Note that the normalization of the path integral needs to be fixed in exactly the same way as in the free particle case. An arbitrary continuous potential does not affect the normalization, although singular potentials require careful treatment. See also Normalized solutions (nonlinear Schrödinger equation) References Statistical mechanics Quantum field theory Schrödinger equation
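For completeness, the infinitesimal-time propagation described in the last section can be sketched as follows (again a hedged reconstruction in standard notation, with the normalization constant A fixed exactly as in the free-particle case, A = \sqrt{2\pi i\hbar\varepsilon/m}):

\psi(x;\,t+\varepsilon)
= \frac{1}{A} \int_{-\infty}^{\infty}
\exp\!\Big[ \frac{i}{\hbar} \Big( \frac{m (x-y)^{2}}{2\varepsilon} - \varepsilon V(x) \Big) \Big]\,
\psi(y;\,t)\; dy .

Expanding both sides to first order in \varepsilon (the oscillating Gaussian factor restricts x - y to order \sqrt{\varepsilon}) yields

i\hbar\, \frac{\partial \psi}{\partial t}
= -\frac{\hbar^{2}}{2m}\, \frac{\partial^{2}\psi}{\partial x^{2}} + V(x)\,\psi ,

which is the Schrödinger equation referred to above.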
Relation between Schrödinger's equation and the path integral formulation of quantum mechanics
[ "Physics" ]
879
[ "Quantum field theory", "Equations of physics", "Eponymous equations of physics", "Quantum mechanics", "Schrödinger equation", "Statistical mechanics" ]
20,488,086
https://en.wikipedia.org/wiki/Common%20integrals%20in%20quantum%20field%20theory
Common integrals in quantum field theory are all variations and generalizations of Gaussian integrals to the complex plane and to multiple dimensions. Other integrals can be approximated by versions of the Gaussian integral. Fourier integrals are also considered. Variations on a simple Gaussian integral Gaussian integral The first integral, with broad application outside of quantum field theory, is the Gaussian integral. In physics the factor of 1/2 in the argument of the exponential is common. Note: Thus we obtain Slight generalization of the Gaussian integral where we have scaled Integrals of exponents and even powers of x and In general Note that the integrals of exponents and odd powers of x are 0, due to odd symmetry. Integrals with a linear term in the argument of the exponent This integral can be performed by completing the square: Therefore: Integrals with an imaginary linear term in the argument of the exponent The integral is proportional to the Fourier transform of the Gaussian where is the conjugate variable of . By again completing the square we see that the Fourier transform of a Gaussian is also a Gaussian, but in the conjugate variable. The larger is, the narrower the Gaussian in and the wider the Gaussian in . This is a demonstration of the uncertainty principle. This integral is also known as the Hubbard–Stratonovich transformation used in field theory. Integrals with a complex argument of the exponent The integral of interest is (for an example of an application see Relation between Schrödinger's equation and the path integral formulation of quantum mechanics) We now assume that and may be complex. Completing the square By analogy with the previous integrals This result is valid as an integration in the complex plane as long as is non-zero and has a semi-positive imaginary part. See Fresnel integral. Gaussian integrals in higher dimensions The one-dimensional integrals can be generalized to multiple dimensions. Here is a real positive definite symmetric matrix. This integral is performed by diagonalization of with an orthogonal transformation where is a diagonal matrix and is an orthogonal matrix. This decouples the variables and allows the integration to be performed as one-dimensional integrations. This is best illustrated with a two-dimensional example. Example: Simple Gaussian integration in two dimensions The Gaussian integral in two dimensions is where is a two-dimensional symmetric matrix with components specified as and we have used the Einstein summation convention. Diagonalize the matrix The first step is to diagonalize the matrix. Note that where, since is a real symmetric matrix, we can choose to be orthogonal, and hence also a unitary matrix. can be obtained from the eigenvectors of . We choose such that: is diagonal. Eigenvalues of A To find the eigenvectors of one first finds the eigenvalues of given by The eigenvalues are solutions of the characteristic polynomial which are found using the quadratic equation: Eigenvectors of A Substitution of the eigenvalues back into the eigenvector equation yields From the characteristic equation we know Also note The eigenvectors can be written as: for the two eigenvectors. Here is a normalizing factor given by, It is easily verified that the two eigenvectors are orthogonal to each other. Construction of the orthogonal matrix The orthogonal matrix is constructed by assigning the normalized eigenvectors as columns in the orthogonal matrix Note that . 
If we define then the orthogonal matrix can be written which is simply a rotation of the eigenvectors with the inverse: Diagonal matrix The diagonal matrix becomes with eigenvectors Numerical example The eigenvalues are The eigenvectors are where Then The diagonal matrix becomes with eigenvectors Rescale the variables and integrate With the diagonalization the integral can be written where Since the coordinate transformation is simply a rotation of coordinates the Jacobian determinant of the transformation is one yielding The integrations can now be performed: which is the advertised solution. Integrals with complex and linear terms in multiple dimensions With the two-dimensional example it is now easy to see the generalization to the complex plane and to multiple dimensions. Integrals with a linear term in the argument Integrals with an imaginary linear term Integrals with a complex quadratic term Integrals with differential operators in the argument As an example consider the integral where is a differential operator with and functions of spacetime, and indicates integration over all possible paths. In analogy with the matrix version of this integral the solution is where and , called the propagator, is the inverse of , and is the Dirac delta function. Similar arguments yield and See Path-integral formulation of virtual-particle exchange for an application of this integral. Integrals that can be approximated by the method of steepest descent In quantum field theory n-dimensional integrals of the form appear often. Here is the reduced Planck constant and f is a function with a positive minimum at . These integrals can be approximated by the method of steepest descent. For small values of the Planck constant, f can be expanded about its minimum Here is the n by n matrix of second derivatives evaluated at the minimum of the function. If we neglect higher order terms this integral can be integrated explicitly. Integrals that can be approximated by the method of stationary phase A common integral is a path integral of the form where is the classical action and the integral is over all possible paths that a particle may take. In the limit of small the integral can be evaluated in the stationary phase approximation. In this approximation the integral is over the path in which the action is a minimum. Therefore, this approximation recovers the classical limit of mechanics. Fourier integrals Dirac delta distribution The Dirac delta distribution in spacetime can be written as a Fourier transform In general, for any dimension Fourier integrals of forms of the Coulomb potential Laplacian of 1/r While not an integral, the identity in three-dimensional Euclidean space whereis a consequence of Gauss's theorem and can be used to derive integral identities. For an example see Longitudinal and transverse vector fields. This identity implies that the Fourier integral representation of 1/r is Yukawa potential: the Coulomb potential with mass The Yukawa potential in three dimensions can be represented as an integral over a Fourier transform where See Static forces and virtual-particle exchange for an application of this integral. In the small m limit the integral reduces to . To derive this result note: Modified Coulomb potential with mass where the hat indicates a unit vector in three dimensional space. The derivation of this result is as follows: Note that in the small limit the integral goes to the result for the Coulomb potential since the term in the brackets goes to . 
Longitudinal potential with mass where the hat indicates a unit vector in three dimensional space. The derivation for this result is as follows: Note that in the small limit the integral reduces to Transverse potential with mass In the small mr limit the integral goes to For large distance, the integral falls off as the inverse cube of r For applications of this integral see Darwin Lagrangian and Darwin interaction in a vacuum. Angular integration in cylindrical coordinates There are two important integrals. The angular integration of an exponential in cylindrical coordinates can be written in terms of Bessel functions of the first kind and For applications of these integrals see Magnetic interaction between current loops in a simple plasma or electron gas. Bessel functions Integration of the cylindrical propagator with mass First power of a Bessel function See Abramowitz and Stegun. For , we have For an application of this integral see Two line charges embedded in a plasma or electron gas. Squares of Bessel functions The integration of the propagator in cylindrical coordinates is For small mr the integral becomes For large mr the integral becomes For applications of this integral see Magnetic interaction between current loops in a simple plasma or electron gas. In general, Integration over a magnetic wave function The two-dimensional integral over a magnetic wave function is Here, M is a confluent hypergeometric function. For an application of this integral see Charge density spread over a wave function. See also Relation between Schrödinger's equation and the path integral formulation of quantum mechanics References Mathematical physics
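Because the displayed integrals in this article did not survive extraction, here is a hedged recap, in one common convention, of the standard results the sections above describe (factors of 2\pi and sign conventions differ between references):

\int_{-\infty}^{\infty} e^{-\frac{1}{2} a x^{2}}\,dx = \sqrt{\frac{2\pi}{a}},
\qquad
\int_{-\infty}^{\infty} x^{2n}\, e^{-\frac{1}{2} a x^{2}}\,dx = \frac{(2n-1)!!}{a^{n}} \sqrt{\frac{2\pi}{a}},

\int_{-\infty}^{\infty} e^{-\frac{1}{2} a x^{2} + Jx}\,dx = \sqrt{\frac{2\pi}{a}}\; e^{\frac{J^{2}}{2a}},
\qquad
\int_{-\infty}^{\infty} e^{-\frac{1}{2} a x^{2} + iJx}\,dx = \sqrt{\frac{2\pi}{a}}\; e^{-\frac{J^{2}}{2a}},

\int_{-\infty}^{\infty} e^{\frac{i}{2} a x^{2} + iJx}\,dx = \sqrt{\frac{2\pi i}{a}}\; e^{-\frac{i J^{2}}{2a}}
\quad (\operatorname{Im} a \ge 0,\ a \neq 0),

and in n dimensions, for A real, symmetric and positive definite,

\int d^{n}x\; e^{-\frac{1}{2} x^{\mathsf T} A x + J^{\mathsf T} x}
= \sqrt{\frac{(2\pi)^{n}}{\det A}}\; e^{\frac{1}{2} J^{\mathsf T} A^{-1} J}.

The steepest-descent estimate about a minimum x_{0} of f is

\int d^{n}x\; e^{-f(x)/\hbar}
\approx e^{-f(x_{0})/\hbar}\, \sqrt{\frac{(2\pi\hbar)^{n}}{\det f''(x_{0})}} .

For the Fourier-type integrals,

\delta^{4}(x) = \int \frac{d^{4}k}{(2\pi)^{4}}\, e^{ik\cdot x},
\qquad
\nabla^{2}\frac{1}{r} = -4\pi\,\delta^{3}(\mathbf r),

\frac{1}{r} = \int \frac{d^{3}k}{(2\pi)^{3}}\, \frac{4\pi}{k^{2}}\, e^{i\mathbf k\cdot\mathbf r},
\qquad
\frac{e^{-mr}}{r} = \int \frac{d^{3}k}{(2\pi)^{3}}\, \frac{4\pi}{k^{2}+m^{2}}\, e^{i\mathbf k\cdot\mathbf r},

and for the cylindrical/Bessel examples,

\frac{1}{2\pi}\int_{0}^{2\pi} e^{i p \cos\varphi}\, d\varphi = J_{0}(p),
\qquad
\int_{0}^{\infty} \frac{k\, J_{0}(kr)}{k^{2}+m^{2}}\, dk = K_{0}(mr).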
Common integrals in quantum field theory
[ "Physics", "Mathematics" ]
1,694
[ "Quantum field theory", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Mathematical physics" ]
429,789
https://en.wikipedia.org/wiki/Coupled%20cluster
Coupled cluster (CC) is a numerical technique used for describing many-body systems. Its most common use is as one of several post-Hartree–Fock ab initio quantum chemistry methods in the field of computational chemistry, but it is also used in nuclear physics. Coupled cluster essentially takes the basic Hartree–Fock molecular orbital method and constructs multi-electron wavefunctions using the exponential cluster operator to account for electron correlation. Some of the most accurate calculations for small to medium-sized molecules use this method. The method was initially developed by Fritz Coester and Hermann Kümmel in the 1950s for studying nuclear-physics phenomena, but became more frequently used when in 1966 Jiří Čížek (and later together with Josef Paldus) reformulated the method for electron correlation in atoms and molecules. It is now one of the most prevalent methods in quantum chemistry that includes electronic correlation. CC theory is simply the perturbative variant of the many-electron theory (MET) of Oktay Sinanoğlu, which is the exact (and variational) solution of the many-electron problem, so it was also called "coupled-pair MET (CPMET)". J. Čížek used the correlation function of MET and used Goldstone-type perturbation theory to get the energy expression, while original MET was completely variational. Čížek first developed the linear CPMET and then generalized it to full CPMET in the same work in 1966. He then also performed an application of it on the benzene molecule with Sinanoğlu in the same year. Because MET is somewhat difficult to perform computationally, CC is simpler and thus, in today's computational chemistry, CC is the best variant of MET and gives highly accurate results in comparison to experiments. Wavefunction ansatz Coupled-cluster theory provides the exact solution to the time-independent Schrödinger equation where is the Hamiltonian of the system, is the exact wavefunction, and E is the exact energy of the ground state. Coupled-cluster theory can also be used to obtain solutions for excited states using, for example, linear-response, equation-of-motion, state-universal multi-reference, or valence-universal multi-reference coupled cluster approaches. The wavefunction of the coupled-cluster theory is written as an exponential ansatz: where is the reference wave function, which is typically a Slater determinant constructed from Hartree–Fock molecular orbitals, though other wave functions such as configuration interaction, multi-configurational self-consistent field, or Brueckner orbitals can also be used. is the cluster operator, which, when acting on , produces a linear combination of excited determinants from the reference wave function (see section below for greater detail). The choice of the exponential ansatz is opportune because (unlike other ansatzes, for example, configuration interaction) it guarantees the size extensivity of the solution. Size consistency in CC theory, also unlike other theories, does not depend on the size consistency of the reference wave function. This is easily seen, for example, in the single bond breaking of F2 when using a restricted Hartree–Fock (RHF) reference, which is not size-consistent, at the CCSDT (coupled cluster single-double-triple) level of theory, which provides an almost exact, full-CI-quality, potential-energy surface and does not dissociate the molecule into F− and F+ ions, like the RHF wave function, but rather into two neutral F atoms. 
If one were to use, for example, the CCSD, or CCSD(T) levels of theory, they would not provide reasonable results for the bond breaking of F2, with the latter one approaches unphysical potential energy surfaces, though this is for reasons other than just size consistency. A criticism of the method is that the conventional implementation employing the similarity-transformed Hamiltonian (see below) is not variational, though there are bi-variational and quasi-variational approaches that have been developed since the first implementations of the theory. While the above ansatz for the wave function itself has no natural truncation, however, for other properties, such as energy, there is a natural truncation when examining expectation values, which has its basis in the linked- and connected-cluster theorems, and thus does not suffer from issues such as lack of size extensivity, like the variational configuration-interaction approach. Cluster operator The cluster operator is written in the form where is the operator of all single excitations, is the operator of all double excitations, and so forth. In the formalism of second quantization these excitation operators are expressed as and for the general n-fold cluster operator In the above formulae and denote the creation and annihilation operators respectively, while i, j stand for occupied (hole) and a, b for unoccupied (particle) orbitals (states). The creation and annihilation operators in the coupled-cluster terms above are written in canonical form, where each term is in the normal order form, with respect to the Fermi vacuum . Being the one-particle cluster operator and the two-particle cluster operator, and convert the reference function into a linear combination of the singly and doubly excited Slater determinants respectively, if applied without the exponential (such as in CI, where a linear excitation operator is applied to the wave function). Applying the exponential cluster operator to the wave function, one can then generate more than doubly excited determinants due to the various powers of and that appear in the resulting expressions (see below). Solving for the unknown coefficients and is necessary for finding the approximate solution . The exponential operator may be expanded as a Taylor series, and if we consider only the and cluster operators of , we can write Though in practice this series is finite because the number of occupied molecular orbitals is finite, as is the number of excitations, it is still very large, to the extent that even modern-day massively parallel computers are inadequate, except for problems of a dozen or so electrons and very small basis sets, when considering all contributions to the cluster operator and not just and . Often, as was done above, the cluster operator includes only singles and doubles (see CCSD below) as this offers a computationally affordable method that performs better than MP2 and CISD, but is not very accurate usually. For accurate results some form of triples (approximate or full) are needed, even near the equilibrium geometry (in the Franck–Condon region), and especially when breaking single bonds or describing diradical species (these latter examples are often what is referred to as multi-reference problems, since more than one determinant has a significant contribution to the resulting wave function). 
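Since the displayed formulas in this article were stripped during extraction, the standard expressions behind the ansatz and cluster-operator discussion above (and the amplitude equations discussed below) can be summarized as follows; this is a hedged sketch in the usual spin-orbital notation, and index and sign conventions vary between texts:

|\Psi\rangle = e^{\hat T}\,|\Phi_{0}\rangle,
\qquad
\hat T = \hat T_{1} + \hat T_{2} + \cdots + \hat T_{n},

\hat T_{1} = \sum_{i,a} t_{i}^{a}\, \hat a_{a}^{\dagger} \hat a_{i},
\qquad
\hat T_{2} = \frac{1}{4} \sum_{i,j,a,b} t_{ij}^{ab}\, \hat a_{a}^{\dagger} \hat a_{b}^{\dagger} \hat a_{j} \hat a_{i},

e^{\hat T_{1}+\hat T_{2}} = 1 + \hat T_{1} + \hat T_{2} + \tfrac{1}{2}\hat T_{1}^{2} + \hat T_{1}\hat T_{2} + \tfrac{1}{2}\hat T_{2}^{2} + \cdots .

Projection of the similarity-transformed Schrödinger equation gives the energy and t-amplitude equations

E = \langle \Phi_{0} |\, e^{-\hat T} \hat H e^{\hat T} \,| \Phi_{0} \rangle,
\qquad
0 = \langle \Phi_{ij\ldots}^{ab\ldots} |\, e^{-\hat T} \hat H e^{\hat T} \,| \Phi_{0} \rangle,

with the similarity-transformed Hamiltonian expanded via the Hadamard (BCH-type) lemma,

e^{-\hat T} \hat H e^{\hat T} = \hat H + [\hat H, \hat T] + \frac{1}{2!} [[\hat H, \hat T], \hat T] + \cdots ,

a series that terminates after the fourfold nested commutator when \hat H contains at most two-body interactions. Truncating \hat T at \hat T_{1} + \hat T_{2} defines CCSD.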
For double-bond breaking and more complicated problems in chemistry, quadruple excitations often become important as well, though usually they have small contributions for most problems, and as such, the contribution of , etc. to the operator is typically small. Furthermore, if the highest excitation level in the operator is n, then Slater determinants for an N-electron system excited more than () times may still contribute to the coupled-cluster wave function because of the non-linear nature of the exponential ansatz, and therefore, coupled cluster terminated at usually recovers more correlation energy than CI with maximum n excitations. Coupled-cluster equations The Schrödinger equation can be written, using the coupled-cluster wave function, as where there are a total of q coefficients (t-amplitudes) to solve for. To obtain the q equations, first, we multiply the above Schrödinger equation on the left by and then project onto the entire set of up to excited determinants, where m is the highest-order excitation included in that can be constructed from the reference wave function , denoted by . Individually, are singly excited determinants where the electron in orbital i has been excited to orbital a; are doubly excited determinants where the electron in orbital i has been excited to orbital a and the electron in orbital j has been excited to orbital b, etc. In this way we generate a set of coupled energy-independent non-linear algebraic equations needed to determine the t-amplitudes: the latter being the equations to be solved, and the former the equation for the evaluation of the energy. (Note that we have made use of , the identity operator, and also assume that orbitals are orthogonal, though this does not necessarily have to be true, e.g., valence bond orbitals can be used, and in such cases the last set of equations are not necessarily equal to zero.) Considering the basic CCSD method: in which the similarity-transformed Hamiltonian can be explicitly written down using Hadamard's formula in Lie algebra, also called Hadamard's lemma (see also Baker–Campbell–Hausdorff formula (BCH formula), though note that they are different, in that Hadamard's formula is a lemma of the BCH formula): The subscript C designates the connected part of the corresponding operator expression. The resulting similarity-transformed Hamiltonian is non-Hermitian, resulting in different left and right vectors (wave functions) for the same state of interest (this is what is often referred to in coupled-cluster theory as the biorthogonality of the solution, or wave function, though it also applies to other non-Hermitian theories as well). The resulting equations are a set of non-linear equations, which are solved in an iterative manner. Standard quantum-chemistry packages (GAMESS (US), NWChem, ACES II, etc.) solve the coupled-cluster equations using the Jacobi method and direct inversion of the iterative subspace (DIIS) extrapolation of the t-amplitudes to accelerate convergence. Types of coupled-cluster methods The classification of traditional coupled-cluster methods rests on the highest number of excitations allowed in the definition of . The abbreviations for coupled-cluster methods usually begin with the letters "CC" (for "coupled cluster") followed by S – for single excitations (shortened to singles in coupled-cluster terminology), D – for double excitations (doubles), T – for triple excitations (triples), Q – for quadruple excitations (quadruples). 
Thus, the operator in CCSDT has the form Terms in round brackets indicate that these terms are calculated based on perturbation theory. For example, the CCSD(T) method means: Coupled cluster with a full treatment singles and doubles. An estimate to the connected triples contribution is calculated non-iteratively using many-body perturbation theory arguments. General description of the theory The complexity of equations and the corresponding computer codes, as well as the cost of the computation, increases sharply with the highest level of excitation. For many applications CCSD, while relatively inexpensive, does not provide sufficient accuracy except for the smallest systems (approximately 2 to 4 electrons), and often an approximate treatment of triples is needed. The most well known coupled-cluster method that provides an estimate of connected triples is CCSD(T), which provides a good description of closed-shell molecules near the equilibrium geometry, but breaks down in more complicated situations such as bond breaking and diradicals. Another popular method that makes up for the failings of the standard CCSD(T) approach is -CC(2,3), where the triples contribution to the energy is computed from the difference between the exact solution and the CCSD energy and is not based on perturbation-theory arguments. More complicated coupled-cluster methods such as CCSDT and CCSDTQ are used only for high-accuracy calculations of small molecules. The inclusion of all n levels of excitation for the n-electron system gives the exact solution of the Schrödinger equation within the given basis set, within the Born–Oppenheimer approximation (although schemes have also been drawn up to work without the BO approximation). One possible improvement to the standard coupled-cluster approach is to add terms linear in the interelectronic distances through methods such as CCSD-R12. This improves the treatment of dynamical electron correlation by satisfying the Kato cusp condition and accelerates convergence with respect to the orbital basis set. Unfortunately, R12 methods invoke the resolution of the identity, which requires a relatively large basis set in order to be a good approximation. The coupled-cluster method described above is also known as the single-reference (SR) coupled-cluster method because the exponential ansatz involves only one reference function . The standard generalizations of the SR-CC method are the multi-reference (MR) approaches: state-universal coupled cluster (also known as Hilbert space coupled cluster), valence-universal coupled cluster (or Fock space coupled cluster) and state-selective coupled cluster (or state-specific coupled cluster). Historical accounts Kümmel comments: Considering the fact that the CC method was well understood around the late fifties[,] it looks strange that nothing happened with it until 1966, as Jiří Čížek published his first paper on a quantum chemistry problem. He had looked into the 1957 and 1960 papers published in Nuclear Physics by Fritz and myself. I always found it quite remarkable that a quantum chemist would open an issue of a nuclear physics journal. I myself at the time had almost given up the CC method as not tractable and, of course, I never looked into the quantum chemistry journals. The result was that I learnt about Jiří's work as late as in the early seventies, when he sent me a big parcel with reprints of the many papers he and Joe Paldus had written until then. 
Josef Paldus also wrote his first-hand account of the origins of coupled-cluster theory, its implementation, and exploitation in electronic wave-function determination; his account is primarily about the making of coupled-cluster theory rather than about the theory itself. Relation to other theories Configuration interaction The Cj excitation operators defining the CI expansion of an N-electron system for the wave function , are related to the cluster operators , since in the limit of including up to in the cluster operator the CC theory must be equal to full CI, we obtain the following relationships etc. For general relationships see J. Paldus, in Methods in Computational Molecular Physics, Vol. 293 of Nato Advanced Study Institute Series B: Physics, edited by S. Wilson and G. H. F. Diercksen (Plenum, New York, 1992), pp. 99–194. Symmetry-adapted cluster The symmetry-adapted cluster (SAC) approach determines the (spin- and) symmetry-adapted cluster operator by solving the following system of energy-dependent equations: where are the excited determinants relative to (usually, in practical implementations, they are the spin- and symmetry-adapted configuration state functions), and is the highest order of excitation included in the SAC operator. If all of the nonlinear terms in are included, then the SAC equations become equivalent to the standard coupled-cluster equations of Jiří Čížek. This is due to the cancellation of the energy-dependent terms with the disconnected terms contributing to the product of , resulting in the same set of nonlinear energy-independent equations. Typically, all nonlinear terms, except are dropped, as higher-order nonlinear terms are usually small. Use in nuclear physics In nuclear physics, coupled cluster saw significantly less use than in quantum chemistry during the 1980s and 1990s. More powerful computers, as well as advances in theory (such as the inclusion of three-nucleon interactions), have spawned renewed interest in the method since then, and it has been successfully applied to neutron-rich and medium-mass nuclei. Coupled cluster is one of several ab initio methods in nuclear physics and is specifically suitable for nuclei having closed or nearly closed shells. See also Quantum chemistry computer programs References Quantum chemistry Electronic structure methods Post-Hartree–Fock methods
Coupled cluster
[ "Physics", "Chemistry" ]
3,338
[ "Quantum chemistry", "Quantum mechanics", "Computational physics", "Theoretical chemistry", "Electronic structure methods", "Computational chemistry", " molecular", "Atomic", " and optical physics" ]
430,014
https://en.wikipedia.org/wiki/Fused%20quartz
Fused quartz, fused silica or quartz glass is a glass consisting of almost pure silica (silicon dioxide, SiO2) in amorphous (non-crystalline) form. This differs from all other commercial glasses, such as soda-lime glass, lead glass, or borosilicate glass, in which other ingredients are added which change the glasses' optical and physical properties, such as lowering the melt temperature, the spectral transmission range, or the mechanical strength. Fused quartz, therefore, has high working and melting temperatures, making it difficult to form and less desirable for most common applications, but is much stronger, more chemically resistant, and exhibits lower thermal expansion, making it more suitable for many specialized uses such as lighting and scientific applications. The terms fused quartz and fused silica are used interchangeably but can refer to different manufacturing techniques, resulting in different trace impurities. However fused quartz, being in the glassy state, has quite different physical properties compared to crystalline quartz despite being made of the same substance. Due to its physical properties it finds specialty uses in semiconductor fabrication and laboratory equipment, for instance. Compared to other common glasses, the optical transmission of pure silica extends well into the ultraviolet and infrared wavelengths, so is used to make lenses and other optics for these wavelengths. Depending on manufacturing processes, impurities will restrict the optical transmission, resulting in commercial grades of fused quartz optimized for use in the infrared, or in the ultraviolet. The low coefficient of thermal expansion of fused quartz makes it a useful material for precision mirror substrates or optical flats. Manufacture Fused quartz is produced by fusing (melting) high-purity silica sand, which consists of quartz crystals. There are four basic types of commercial silica glass: Type I is produced by induction melting natural quartz in a vacuum or an inert atmosphere. Type II is produced by fusing quartz crystal powder in a high-temperature flame. Type III is produced by burning SiCl4 in a hydrogen-oxygen flame. Type IV is produced by burning SiCl4 in a water vapor-free plasma flame. Quartz contains only silicon and oxygen, although commercial quartz glass often contains impurities. Two dominant impurities are aluminium and titanium which affect the optical transmission at ultraviolet wavelengths. If water is present in the manufacturing process, hydroxyl (OH) groups may become embedded which reduces transmission in the infrared. Fusion Melting is effected at approximately 2200 °C (4000 °F) using either an electrically heated furnace (electrically fused) or a gas/oxygen-fuelled furnace (flame-fused). Fused silica can be made from almost any silicon-rich chemical precursor, usually using a continuous process which involves flame oxidation of volatile silicon compounds to silicon dioxide, and thermal fusion of the resulting dust (although alternative processes are used). This results in a transparent glass with an ultra-high purity and improved optical transmission in the deep ultraviolet. One common method involves adding silicon tetrachloride to a hydrogen–oxygen flame. Product quality Fused quartz is normally transparent. The material can, however, become translucent if small air bubbles are allowed to be trapped within. The water content (and therefore infrared transmission) of fused quartz is determined by the manufacturing process. 
Flame-fused material always has a higher water content due to the combination of the hydrocarbons and oxygen fueling the furnace, forming hydroxyl [OH] groups within the material. An IR grade material typically has an [OH] content below 10 ppm. Applications Many optical applications of fused quartz exploit its wide transparency range, which can extend well into the ultraviolet and into the near-mid infrared. Fused quartz is the key starting material for optical fiber, used for telecommunications. Because of its strength and high melting point (compared to ordinary glass), fused quartz is used as an envelope for halogen lamps and high-intensity discharge lamps, which must operate at a high envelope temperature to achieve their combination of high brightness and long life. Some high-power vacuum tubes used silica envelopes whose good transmission at infrared wavelengths facilitated radiation cooling of their incandescent anodes. Because of its physical strength, fused quartz was used in deep diving vessels such as the bathysphere and benthoscope and in the windows of crewed spacecraft, including the Space Shuttle and International Space Station. Fused quartz was used also in composite armour development. In the semiconductor industry, its combination of strength, thermal stability, and UV transparency makes it an excellent substrate for projection masks for photolithography. Its UV transparency also finds use as windows on EPROMs (erasable programmable read only memory), a type of non-volatile memory chip which is erased by exposure to strong ultraviolet light. EPROMs are recognizable by the transparent fused quartz (although some later models use UV-transparent resin) window which sits on top of the package, through which the silicon chip is visible, and which transmits UV light for erasing. Due to the thermal stability and composition, it is used in 5D optical data storage and in semiconductor fabrication furnaces. Fused quartz has nearly ideal properties for fabricating first surface mirrors such as those used in telescopes. The material behaves in a predictable way and allows the optical fabricator to put a very smooth polish onto the surface and produce the desired figure with fewer testing iterations. In some instances, a high-purity UV grade of fused quartz has been used to make several of the individual uncoated lens elements of special-purpose lenses including the Zeiss 105 mm f/4.3 UV Sonnar, a lens formerly made for the Hasselblad camera, and the Nikon UV-Nikkor 105 mm f/4.5 (presently sold as the Nikon PF10545MF-UV) lens. These lenses are used for UV photography, as the quartz glass can be transparent at much shorter wavelengths than lenses made with more common flint or crown glass formulas. Fused quartz can be metallised and etched for use as a substrate for high-precision microwave circuits, the thermal stability making it a good choice for narrowband filters and similar demanding applications. The lower dielectric constant than alumina allows higher impedance tracks or thinner substrates. Refractory material applications Fused quartz as an industrial raw material is used to make various refractory shapes such as crucibles, trays, shrouds, and rollers for many high-temperature thermal processes including steelmaking, investment casting, and glass manufacture. 
Refractory shapes made from fused quartz have excellent thermal shock resistance and are chemically inert to most elements and compounds, including virtually all acids, regardless of concentration, except hydrofluoric acid, which is very reactive even in fairly low concentrations. Translucent fused-quartz tubes are commonly used to sheathe electric elements in room heaters, industrial furnaces, and other similar applications. Owing to its low mechanical damping at ordinary temperatures, it is used for high-Q resonators, in particular, for wine-glass resonator of hemispherical resonator gyro. For the same reason fused quartz is also the material used for modern glass instruments such as the glass harp and the verrophone, and is also used for new builds of the historical glass harmonica, giving these instruments a greater dynamic range and a clearer sound than with the historically used lead crystal. Quartz glassware is occasionally used in chemistry laboratories when standard borosilicate glass cannot withstand high temperatures or when high UV transmission is required. The cost of production is significantly higher, limiting its use; it is usually found as a single basic element, such as a tube in a furnace, or as a flask, the elements in direct exposure to the heat. Properties of fused quartz The extremely low coefficient of thermal expansion, about (20–320 °C), accounts for its remarkable ability to undergo large, rapid temperature changes without cracking (see thermal shock). Fused quartz is prone to phosphorescence and "solarisation" (purplish discoloration) under intense UV illumination, as is often seen in flashtubes. "UV grade" synthetic fused silica (sold under various tradenames including "HPFS", "Spectrosil", and "Suprasil") has a very low metallic impurity content making it transparent deeper into the ultraviolet. An optic with a thickness of 1 cm has a transmittance around 50% at a wavelength of 170 nm, which drops to only a few percent at 160 nm. However, its infrared transmission is limited by strong water absorptions at 2.2 μm and 2.7 μm. "Infrared grade" fused quartz (tradenames "Infrasil", "Vitreosil IR", and others), which is electrically fused, has a greater presence of metallic impurities, limiting its UV transmittance wavelength to around 250 nm, but a much lower water content, leading to excellent infrared transmission up to 3.6 μm wavelength. All grades of transparent fused quartz/fused silica have nearly identical mechanical properties. Refractive index The optical dispersion of fused quartz can be approximated by the following Sellmeier equation: where the wavelength is measured in micrometers. This equation is valid between 0.21 and 3.71 μm and at 20 °C. Its validity was confirmed for wavelengths up to 6.7 μm. Experimental data for the real (refractive index) and imaginary (absorption index) parts of the complex refractive index of fused quartz reported in the literature over the spectral range from 30 nm to 1000 μm have been reviewed by Kitamura et al. and are available online. Its quite high Abbe Number of 67.8 makes it among the lowest dispersion glasses at visible wavelengths, as well as having an exceptionally low refractive index in the visible (nd = 1.4585). Note that fused quartz has a very different and lower refractive index compared to crystalline quartz which is birefringent with refractive indices no = 1.5443 and ne = 1.5534 at the same wavelength. 
Although these forms have the same chemical formula, their differing structures result in different optical and other physical properties. List of physical properties Density: 2.203 g/cm3 Hardness: 5.3–6.5 (Mohs scale), 8.8 GPa Tensile strength: 48.3 MPa Compressive strength: > 1.1 GPa Bulk modulus: ~37 GPa Rigidity modulus: 31 GPa Young's modulus: 71.7 GPa Poisson's ratio: 0.17 Lamé elastic constants: λ = 15.87 GPa, μ = 31.26 GPa Coefficient of thermal expansion: 5.5 × 10−7/K (average 20–320 °C) Thermal conductivity: 1.3 W/(m·K) Specific heat capacity: 45.3 J/(mol·K) Softening point: ≈ 1665 °C Annealing point: ≈ 1140 °C Strain point: 1070 °C Electrical resistivity: > 1018 Ω·m Dielectric constant: 3.75 at 20 °C 1 MHz Dielectric loss factor: less than 0.0004 at 20 °C 1 MHz typically 6 × 10−5 at 10 GHz Dielectric strength: 250–400 kV/cm at 20 °C Magnetic susceptibility: −11.28 × 10−6 (SI, 22 °C) Hamaker constant: A = 6.5 × 10−20 J. Surface tension: 0.300 N/m at 1800–2400 °C Index of refraction: nd = 1.4585 (at 587.6 nm) Change of refractive index with temperature: 1.28 × 10−5/K (20–30 °C) Transmission range: Cutoff – 160 to 5000 nm, with a deep absorption band at 2730 nm. Best transmittance – 180 to 2700 nm. Stress-optic coefficients: p11 = 0.113, p12 = 0.252. Abbe number: Vd = 67.82 See also Quartz fiber Structure of liquids and glasses Vycor References External links "Frozen Eye to Bring New Worlds into View" Popular Mechanics, June 1931 General Electrics, West Lynn Massachusetts Labs work on large fuzed quartz blocks Glass types Low-expansion glass Chemical engineering Glass compositions Silicon dioxide Transparent materials Quartz
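The Sellmeier fit mentioned in the "Refractive index" section above is commonly quoted, following Malitson's room-temperature measurements on fused silica, in the form below; the specific coefficient values should be taken as a literature convention rather than as part of this article:

n^{2}(\lambda) - 1 =
\frac{0.6961663\,\lambda^{2}}{\lambda^{2} - 0.0684043^{2}}
+ \frac{0.4079426\,\lambda^{2}}{\lambda^{2} - 0.1162414^{2}}
+ \frac{0.8974794\,\lambda^{2}}{\lambda^{2} - 9.896161^{2}},
\qquad \lambda\ \text{in}\ \mu\text{m}.

Evaluated at the helium d-line, \lambda = 0.5876 \mu m, this fit gives n ≈ 1.4585, consistent with the value of nd listed above.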
Fused quartz
[ "Physics", "Chemistry", "Engineering" ]
2,531
[ "Physical phenomena", "Glass chemistry", "Glass compositions", "Chemical engineering", "Optical phenomena", "Materials", "Transparent materials", "nan", "Matter" ]
430,680
https://en.wikipedia.org/wiki/Motive%20%28algebraic%20geometry%29
In algebraic geometry, motives (or sometimes motifs, following French usage) is a theory proposed by Alexander Grothendieck in the 1960s to unify the vast array of similarly behaved cohomology theories such as singular cohomology, de Rham cohomology, etale cohomology, and crystalline cohomology. Philosophically, a "motif" is the "cohomology essence" of a variety. In the formulation of Grothendieck for smooth projective varieties, a motive is a triple , where is a smooth projective variety, is an idempotent correspondence, and m an integer; however, such a triple contains almost no information outside the context of Grothendieck's category of pure motives, where a morphism from to is given by a correspondence of degree . A more object-focused approach is taken by Pierre Deligne in Le Groupe Fondamental de la Droite Projective Moins Trois Points. In that article, a motive is a "system of realisations" – that is, a tuple consisting of modules over the rings respectively, various comparison isomorphisms between the obvious base changes of these modules, filtrations , a action of the absolute Galois group on and a "Frobenius" automorphism of . This data is modeled on the cohomologies of a smooth projective -variety and the structures and compatibilities they admit, and gives an idea about what kind of information is contained in a motive. Introduction The theory of motives was originally conjectured as an attempt to unify a rapidly multiplying array of cohomology theories, including Betti cohomology, de Rham cohomology, l-adic cohomology, and crystalline cohomology. The general hope is that equations like [projective line] = [line] + [point] [projective plane] = [plane] + [line] + [point] can be put on increasingly solid mathematical footing with a deep meaning. Of course, the above equations are already known to be true in many senses, such as in the sense of CW-complex where "+" corresponds to attaching cells, and in the sense of various cohomology theories, where "+" corresponds to the direct sum. From another viewpoint, motives continue the sequence of generalizations from rational functions on varieties to divisors on varieties to Chow groups of varieties. The generalization happens in more than one direction, since motives can be considered with respect to more types of equivalence than rational equivalence. The admissible equivalences are given by the definition of an adequate equivalence relation. Definition of pure motives The category of pure motives often proceeds in three steps. Below we describe the case of Chow motives , where k is any field. First step: category of (degree 0) correspondences, Corr(k) The objects of are simply smooth projective varieties over k. The morphisms are correspondences. They generalize morphisms of varieties , which can be associated with their graphs in , to fixed dimensional Chow cycles on . It will be useful to describe correspondences of arbitrary degree, although morphisms in are correspondences of degree 0. In detail, let X and Y be smooth projective varieties and consider a decomposition of X into connected components: If , then the correspondences of degree r from X to Y are where denotes the Chow-cycles of codimension k. Correspondences are often denoted using the "⊢"-notation, e.g., . For any and their composition is defined by where the dot denotes the product in the Chow ring (i.e., intersection). Returning to constructing the category notice that the composition of degree 0 correspondences is degree 0. 
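The composition whose displayed formula was dropped in extraction is the standard one: for correspondences f ∈ Corr^r(X, Y) and g ∈ Corr^s(Y, Z),

g \circ f \;=\; (\pi_{XZ})_{*}\big( \pi_{XY}^{*}(f) \cdot \pi_{YZ}^{*}(g) \big) \;\in\; \mathrm{Corr}^{\,r+s}(X, Z),

where \pi_{XY}, \pi_{YZ}, \pi_{XZ} denote the projections of X \times Y \times Z onto the indicated factors and the dot is the intersection product in the Chow ring of X \times Y \times Z. In particular, composing two degree-0 correspondences again yields a degree-0 correspondence.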
Hence we define morphisms of to be degree 0 correspondences. The following association is a functor (here denotes the graph of ): Just like the category has direct sums () and tensor products (). It is a preadditive category. The sum of morphisms is defined by Second step: category of pure effective Chow motives, Choweff(k) The transition to motives is made by taking the pseudo-abelian envelope of : . In other words, effective Chow motives are pairs of smooth projective varieties X and idempotent correspondences α: X ⊢ X, and morphisms are of a certain type of correspondence: Composition is the above defined composition of correspondences, and the identity morphism of (X, α) is defined to be α : X ⊢ X. The association, , where ΔX := [idX] denotes the diagonal of X × X, is a functor. The motive [X] is often called the motive associated to the variety X. As intended, Choweff(k) is a pseudo-abelian category. The direct sum of effective motives is given by The tensor product of effective motives is defined by where The tensor product of morphisms may also be defined. Let f1 : (X1, α1) → (Y1, β1) and f2 : (X2, α2) → (Y2, β2) be morphisms of motives. Then let γ1 ∈ A(X1 × Y1) and γ2 ∈ A(X2 × Y2) be representatives of f1 and f2. Then , where πi : X1 × X2 × Y1 × Y2 → Xi × Yi are the projections. Third step: category of pure Chow motives, Chow(k) To proceed to motives, we adjoin to Choweff(k) a formal inverse (with respect to the tensor product) of a motive called the Lefschetz motive. The effect is that motives become triples instead of pairs. The Lefschetz motive L is . If we define the motive 1, called the trivial Tate motive, by 1 := h(Spec(k)), then the elegant equation holds, since The tensor inverse of the Lefschetz motive is known as the Tate motive, T := L−1. Then we define the category of pure Chow motives by . A motive is then a triple such that morphisms are given by correspondences and the composition of morphisms comes from composition of correspondences. As intended, is a rigid pseudo-abelian category. Other types of motives In order to define an intersection product, cycles must be "movable" so we can intersect them in general position. Choosing a suitable equivalence relation on cycles will guarantee that every pair of cycles has an equivalent pair in general position that we can intersect. The Chow groups are defined using rational equivalence, but other equivalences are possible, and each defines a different sort of motive. Examples of equivalences, from strongest to weakest, are Rational equivalence Algebraic equivalence Smash-nilpotence equivalence (sometimes called Voevodsky equivalence) Homological equivalence (in the sense of Weil cohomology) Numerical equivalence The literature occasionally calls every type of pure motive a Chow motive, in which case a motive with respect to algebraic equivalence would be called a Chow motive modulo algebraic equivalence. Mixed motives For a fixed base field k, the category of mixed motives is a conjectural abelian tensor category , together with a contravariant functor taking values on all varieties (not just smooth projective ones as it was the case with pure motives). This should be such that motivic cohomology defined by coincides with the one predicted by algebraic K-theory, and contains the category of Chow motives in a suitable sense (and other properties). The existence of such a category was conjectured by Alexander Beilinson. 
Instead of constructing such a category, it was proposed by Deligne to first construct a category DM having the properties one expects for the derived category . Getting MM back from DM would then be accomplished by a (conjectural) motivic t-structure. The current state of the theory is that we do have a suitable category DM. Already this category is useful in applications. Vladimir Voevodsky's Fields Medal-winning proof of the Milnor conjecture uses these motives as a key ingredient. There are different definitions due to Hanamura, Levine and Voevodsky. They are known to be equivalent in most cases and we will give Voevodsky's definition below. The category contains Chow motives as a full subcategory and gives the "right" motivic cohomology. However, Voevodsky also shows that (with integral coefficients) it does not admit a motivic t-structure. Geometric mixed motives Notation Here we will fix a field of characteristic and let be our coefficient ring. Set as the category of quasi-projective varieties over are separated schemes of finite type. We will also let be the subcategory of smooth varieties. Smooth varieties with correspondences Given a smooth variety and a variety call an integral closed subscheme which is finite over and surjective over a component of a prime correspondence from to . Then, we can take the set of prime correspondences from to and construct a free -module . Its elements are called finite correspondences. Then, we can form an additive category whose objects are smooth varieties and morphisms are given by smooth correspondences. The only non-trivial part of this "definition" is the fact that we need to describe compositions. These are given by a push-pull formula from the theory of Chow rings. Examples of correspondences Typical examples of prime correspondences come from the graph of a morphism of varieties . Localizing the homotopy category From here we can form the homotopy category of bounded complexes of smooth correspondences. Here smooth varieties will be denoted . If we localize this category with respect to the smallest thick subcategory (meaning it is closed under extensions) containing morphisms and then we can form the triangulated category of effective geometric motives Note that the first class of morphisms are localizing -homotopies of varieties while the second will give the category of geometric mixed motives the Mayer–Vietoris sequence. Also, note that this category has a tensor structure given by the product of varieties, so . Inverting the Tate motive Using the triangulated structure we can construct a triangle from the canonical map . We will set and call it the Tate motive. Taking the iterative tensor product lets us construct . If we have an effective geometric motive we let denote Moreover, this behaves functorially and forms a triangulated functor. Finally, we can define the category of geometric mixed motives as the category of pairs for an effective geometric mixed motive and an integer representing the twist by the Tate motive. The hom-groups are then the colimit Examples of motives Tate motives There are several elementary examples of motives which are readily accessible. One of them being the Tate motives, denoted , , or , depending on the coefficients used in the construction of the category of motives. These are fundamental building blocks in the category of motives because they form the "other part" besides Abelian varieties. 
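For reference, the constructions sketched above can be written out compactly. The display below is a summary under the standard conventions of the references (the symbols Corr^r, A^k, h, 1, L, T and H^{n,m} follow common usage and are not quoted verbatim from this text; conventions on covariance and on indexing by dimension versus codimension vary between authors):
\operatorname{Corr}^{r}(X,Y) \;=\; \bigoplus_i A^{\dim X_i + r}(X_i \times Y), \qquad X = \coprod_i X_i \ \text{(connected components)},
g \circ f \;=\; \pi_{XZ,*}\bigl(\pi_{XY}^{*}(f)\cdot \pi_{YZ}^{*}(g)\bigr) \qquad \text{for } f \in \operatorname{Corr}(X,Y),\ g \in \operatorname{Corr}(Y,Z),
\operatorname{Hom}_{\operatorname{Chow}(k)}\bigl((X,p,m),(Y,q,n)\bigr) \;=\; q \circ \operatorname{Corr}^{\,n-m}(X,Y) \circ p, \qquad p = p \circ p,\ q = q \circ q,
h(\mathbb{P}^1) \;\cong\; \mathbf{1} \oplus \mathbb{L}, \qquad \mathbf{1} := h(\operatorname{Spec} k), \qquad \mathbb{T} := \mathbb{L}^{-1},
H^{n,m}_{\mathrm{mot}}(X) \;=\; \operatorname{Hom}_{\mathrm{DM}}\bigl(M(X), \mathbb{Z}(m)[n]\bigr).
Here A^{k} denotes the Chow group of cycles of codimension k (with whichever coefficient ring the ambient discussion fixes), the \pi's are the projections from X \times Y \times Z, and \mathbb{Z}(m) is the m-th Tate twist.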
Motives of curves The motive of a curve can be explicitly understood with relative ease: their Chow ring is justfor any smooth projective curve , hence Jacobians embed into the category of motives. Explanation for non-specialists A commonly applied technique in mathematics is to study objects carrying a particular structure by introducing a category whose morphisms preserve this structure. Then one may ask when two given objects are isomorphic, and ask for a "particularly nice" representative in each isomorphism class. The classification of algebraic varieties, i.e. application of this idea in the case of algebraic varieties, is very difficult due to the highly non-linear structure of the objects. The relaxed question of studying varieties up to birational isomorphism has led to the field of birational geometry. Another way to handle the question is to attach to a given variety X an object of more linear nature, i.e. an object amenable to the techniques of linear algebra, for example a vector space. This "linearization" goes usually under the name of cohomology. There are several important cohomology theories, which reflect different structural aspects of varieties. The (partly conjectural) theory of motives is an attempt to find a universal way to linearize algebraic varieties, i.e. motives are supposed to provide a cohomology theory that embodies all these particular cohomologies. For example, the genus of a smooth projective curve C which is an interesting invariant of the curve, is an integer, which can be read off the dimension of the first Betti cohomology group of C. So, the motive of the curve should contain the genus information. Of course, the genus is a rather coarse invariant, so the motive of C is more than just this number. The search for a universal cohomology Each algebraic variety X has a corresponding motive [X], so the simplest examples of motives are: [point] [projective line] = [point] + [line] [projective plane] = [plane] + [line] + [point] These 'equations' hold in many situations, namely for de Rham cohomology and Betti cohomology, l-adic cohomology, the number of points over any finite field, and in multiplicative notation for local zeta-functions. The general idea is that one motive has the same structure in any reasonable cohomology theory with good formal properties; in particular, any Weil cohomology theory will have such properties. There are different Weil cohomology theories, they apply in different situations and have values in different categories, and reflect different structural aspects of the variety in question: Betti cohomology is defined for varieties over (subfields of) the complex numbers, it has the advantage of being defined over the integers and is a topological invariant de Rham cohomology (for varieties over ) comes with a mixed Hodge structure, it is a differential-geometric invariant l-adic cohomology (over any field of characteristic ≠ l) has a canonical Galois group action, i.e. has values in representations of the (absolute) Galois group crystalline cohomology All these cohomology theories share common properties, e.g. existence of Mayer-Vietoris sequences, homotopy invariance the product of X with the affine line) and others. Moreover, they are linked by comparison isomorphisms, for example Betti cohomology of a smooth variety X over with finite coefficients is isomorphic to l-adic cohomology with finite coefficients. 
The theory of motives is an attempt to find a universal theory which embodies all these particular cohomologies and their structures and provides a framework for "equations" like [projective line] = [line]+[point]. In particular, calculating the motive of any variety X directly gives all the information about the several Weil cohomology theories HBetti(X), HDR(X) etc. Beginning with Grothendieck, people have tried to precisely define this theory for many years. Motivic cohomology Motivic cohomology itself had been invented before the creation of mixed motives by means of algebraic K-theory. The above category provides a neat way to (re)define it by where n and m are integers and is the m-th tensor power of the Tate object which in Voevodsky's setting is the complex shifted by –2, and [n] means the usual shift in the triangulated category. Conjectures related to motives The standard conjectures were first formulated in terms of the interplay of algebraic cycles and Weil cohomology theories. The category of pure motives provides a categorical framework for these conjectures. The standard conjectures are commonly considered to be very hard and are open in the general case. Grothendieck, with Bombieri, showed the depth of the motivic approach by producing a conditional (very short and elegant) proof of the Weil conjectures (which are proven by different means by Deligne), assuming the standard conjectures to hold. For example, the Künneth standard conjecture, which states the existence of algebraic cycles πi ⊂ X × X inducing the canonical projectors H(X) → Hi(X) ↣ H(X) (for any Weil cohomology H) implies that every pure motive M decomposes in graded pieces of weight n: M = ⨁GrnM. The terminology weights comes from a similar decomposition of, say, de-Rham cohomology of smooth projective varieties, see Hodge theory. Conjecture D, stating the concordance of numerical and homological equivalence, implies the equivalence of pure motives with respect to homological and numerical equivalence. (In particular the former category of motives would not depend on the choice of the Weil cohomology theory). Jannsen (1992) proved the following unconditional result: the category of (pure) motives over a field is abelian and semisimple if and only if the chosen equivalence relation is numerical equivalence. The Hodge conjecture, may be neatly reformulated using motives: it holds iff the Hodge realization mapping any pure motive with rational coefficients (over a subfield of ) to its Hodge structure is a full functor (rational Hodge structures). Here pure motive means pure motive with respect to homological equivalence. Similarly, the Tate conjecture is equivalent to: the so-called Tate realization, i.e. ℓ-adic cohomology, is a full functor (pure motives up to homological equivalence, continuous representations of the absolute Galois group of the base field k), which takes values in semi-simple representations. (The latter part is automatic in the case of the Hodge analogue). Tannakian formalism and motivic Galois group To motivate the (conjectural) motivic Galois group, fix a field k and consider the functor finite separable extensions K of k → non-empty finite sets with a (continuous) transitive action of the absolute Galois group of k which maps K to the (finite) set of embeddings of K into an algebraic closure of k. In Galois theory this functor is shown to be an equivalence of categories. Notice that fields are 0-dimensional. Motives of this kind are called Artin motives. 
By -linearizing the above objects, another way of expressing the above is to say that Artin motives are equivalent to finite -vector spaces together with an action of the Galois group. The objective of the motivic Galois group is to extend the above equivalence to higher-dimensional varieties. In order to do this, the technical machinery of Tannakian category theory (going back to Tannaka–Krein duality, but a purely algebraic theory) is used. Its purpose is to shed light on both the Hodge conjecture and the Tate conjecture, the outstanding questions in algebraic cycle theory. Fix a Weil cohomology theory H. It gives a functor from Mnum (pure motives using numerical equivalence) to finite-dimensional -vector spaces. It can be shown that the former category is a Tannakian category. Assuming the equivalence of homological and numerical equivalence, i.e. the above standard conjecture D, the functor H is an exact faithful tensor-functor. Applying the Tannakian formalism, one concludes that Mnum is equivalent to the category of representations of an algebraic group G, known as the motivic Galois group. The motivic Galois group is to the theory of motives what the Mumford–Tate group is to Hodge theory. Again speaking in rough terms, the Hodge and Tate conjectures are types of invariant theory (the spaces that are morally the algebraic cycles are picked out by invariance under a group, if one sets up the correct definitions). The motivic Galois group has the surrounding representation theory. (What it is not, is a Galois group; however in terms of the Tate conjecture and Galois representations on étale cohomology, it predicts the image of the Galois group, or, more accurately, its Lie algebra.) See also Ring of periods Motivic cohomology Presheaf with transfers Mixed Hodge module L-functions of motives References Survey Articles (technical introduction with comparatively short proofs) Motives over Finite Fields - J.S. Milne (motives-for-dummies text). (high-level introduction to motives in French). Books L. Breen: Tannakian categories. S. Kleiman: The standard conjectures. A. Scholl: Classical motives. (detailed exposition of Chow motives) Reference Literature (adequate equivalence relations on cycles). Milne, James S. Motives — Grothendieck’s Dream (Voevodsky's definition of mixed motives. Highly technical). Future directions Musings on : Arithmetic spin structures on elliptic curves What are "Fractional Motives"? External links Algebraic geometry Topological methods of algebraic geometry Homological algebra
Motive (algebraic geometry)
[ "Mathematics" ]
4,336
[ "Mathematical structures", "Fields of abstract algebra", "Category theory", "Algebraic geometry", "Homological algebra" ]
430,790
https://en.wikipedia.org/wiki/Gauge%20boson
In particle physics, a gauge boson is a bosonic elementary particle that acts as the force carrier for elementary fermions. Elementary particles whose interactions are described by a gauge theory interact with each other by the exchange of gauge bosons, usually as virtual particles. Photons, W and Z bosons, and gluons are gauge bosons. All known gauge bosons have a spin of 1 and therefore are vector bosons. For comparison, the Higgs boson has spin zero and the hypothetical graviton has a spin of 2. Gauge bosons are different from the other kinds of bosons: first, fundamental scalar bosons (the Higgs boson); second, mesons, which are composite bosons, made of quarks; third, larger composite, non-force-carrying bosons, such as certain atoms. Gauge bosons in the Standard Model The Standard Model of particle physics recognizes four kinds of gauge bosons: photons, which carry the electromagnetic interaction; W and Z bosons, which carry the weak interaction; and gluons, which carry the strong interaction. Isolated gluons do not occur because they are colour-charged and subject to colour confinement. Multiplicity of gauge bosons In a quantized gauge theory, gauge bosons are quanta of the gauge fields. Consequently, there are as many gauge bosons as there are generators of the gauge field. In quantum electrodynamics, the gauge group is U(1); in this simple case, there is only one gauge boson, the photon. In quantum chromodynamics, the more complicated group SU(3) has eight generators, corresponding to the eight gluons. The three W and Z bosons correspond (roughly) to the three generators of SU(2) in electroweak theory. Massive gauge bosons Gauge invariance requires that gauge bosons are described mathematically by field equations for massless particles. Otherwise, the mass terms add non-zero additional terms to the Lagrangian under gauge transformations, violating gauge symmetry. Therefore, at a naïve theoretical level, all gauge bosons are required to be massless, and the forces that they describe are required to be long-ranged. The conflict between this idea and experimental evidence that the weak and strong interactions have a very short range requires further theoretical insight. According to the Standard Model, the W and Z bosons gain mass via the Higgs mechanism. In the Higgs mechanism, the four gauge bosons (of SU(2)×U(1) symmetry) of the unified electroweak interaction couple to a Higgs field. This field undergoes spontaneous symmetry breaking due to the shape of its interaction potential. As a result, the universe is permeated by a non-zero Higgs vacuum expectation value (VEV). This VEV couples to three of the electroweak gauge bosons (W, W and Z), giving them mass; the remaining gauge boson remains massless (the photon). This theory also predicts the existence of a scalar Higgs boson, which has been observed in experiments at the LHC. Beyond the Standard Model Grand unification theories The Georgi–Glashow model predicts additional gauge bosons named X and Y bosons. The hypothetical X and Y bosons mediate interactions between quarks and leptons, hence violating conservation of baryon number and causing proton decay. Such bosons would be even more massive than W and Z bosons due to symmetry breaking. Analysis of data collected from such sources as the Super-Kamiokande neutrino detector has yielded no evidence of X and Y bosons. Gravitons The fourth fundamental interaction, gravity, may also be carried by a boson, called the graviton. 
In the absence of experimental evidence and a mathematically coherent theory of quantum gravity, it is unknown whether this would be a gauge boson or not. The role of gauge invariance in general relativity is played by a similar symmetry: diffeomorphism invariance. W′ and Z′ bosons W′ and Z′ bosons refer to hypothetical new gauge bosons (named in analogy with the Standard Model W and Z bosons). See also 1964 PRL symmetry breaking papers Boson Glueball Quantum chromodynamics Quantum electrodynamics References External links Explanation of gauge boson and gauge fields by Christopher T. Hill Bosons Particle physics
Gauge boson
[ "Physics" ]
916
[ "Bosons", "Subatomic particles", "Particle physics", "Matter" ]
431,310
https://en.wikipedia.org/wiki/Chemical%20engineer
A chemical engineer is a professional equipped with the knowledge of chemistry and other basic sciences who works principally in the chemical industry to convert basic raw materials into a variety of products and deals with the design and operation of plants and equipment.<ref>MobyDick Dictionary of Engineering", McGraw-Hill, 2nd Ed.</ref> This person applies the principles of chemical engineering in any of its various practical applications, such as Design, manufacture, and operation of plants and machinery in industrial chemical and related processes ("chemical process engineers"); Development of new or adapted substances for products ranging from foods and beverages to cosmetics to cleaners to pharmaceutical ingredients, among many other products ("chemical product engineers"); Development of new technologies such as fuel cells, hydrogen power and nanotechnology, as well as working in fields wholly or partially derived from chemical engineering such as materials science, polymer engineering, and biomedical engineering. This can include working of geophysical projects such as rivers, stones, and signs. History The president of the Institution of Chemical Engineers said in his presidential address "I believe most of us would be willing to regard Edward Charles Howard (1774–1816) as the first chemical engineer of any eminence". Others have suggested Johann Rudolf Glauber (1604–1670) for his development of processes for the manufacture of the major industrial acids. The term appeared in print in 1839, though from the context it suggests a person with mechanical engineering knowledge working in the chemical industry. In 1880, George E. Davis wrote in a letter to Chemical News "A Chemical Engineer is a person who possesses chemical and mechanical knowledge, and who applies that knowledge to the utilisation, on a manufacturing scale, of chemical action." He proposed the name Society of Chemical Engineers, for what was in fact constituted as the Society of Chemical Industry. At the first General Meeting of the Society in 1882, some 15 of the 300 members described themselves as chemical engineers, but the Society's formation of a Chemical Engineering Group in 1918 attracted about 400 members. In 1905 a publication called The Chemical Engineer was founded in the US, and in 1908 the American Institute of Chemical Engineers was established. In 1924 the Institution of Chemical Engineers adopted the following definition: "A chemical engineer is a professional man experienced in the design, construction and operation of plant and works in which matter undergoes a change of state and composition." As can be seen from the later definition, the occupation is not limited to the chemical industry, but more generally the process industries, or other situations in which complex physical and/or chemical processes are to be managed. The UK journal The Chemical Engineer'' (began 1956) has a series of biographies available online entitled “Chemical Engineers who Changed the World”, Overview Historically, the chemical engineer has been primarily concerned with process engineering, which can generally be divided into two complementary areas: chemical reaction engineering and separation processes. The modern discipline of chemical engineering, however, encompasses much more than just process engineering. Chemical engineers are now engaged in the development and production of a diverse range of products, as well as in commodity and specialty chemicals. 
These products include high-performance materials needed for aerospace, automotive, biomedical, electronic, environmental and military applications. Examples include ultra-strong fibers, fabrics, adhesives and composites for vehicles, bio-compatible materials for implants and prosthetics, gels for medical applications, pharmaceuticals, and films with special dielectric, optical or spectroscopic properties for opto-electronic devices. Additionally, chemical engineering is often intertwined with biology and biomedical engineering. Many chemical engineers work on biological projects such as understanding biopolymers (proteins) and mapping the human genome. Employments and salaries According to a 2015 salary survey by the American Institute of Chemical Engineers, the median annual salary for a chemical engineer was approximately $127,000. The survey was repeated in 2017 and the median annual salary dropped slightly to $124,000. The decrease in median salary was unexpected. A factor contributing to the decline may be that 2017’s survey was conducted by a different research and analysis firm. Median salaries ranged from $70,450 for chemical engineers with fewer than three years of experience to $156,000 for those with more than 40 years in the workforce. In the UK, the IChemE 2016 Salary Survey reported a median salary of approximately £57,000, with a starting salary for a graduate averaging £28,350. Chemical engineering in the USA is one of the engineering disciplines with the highest participation of women, with 35% of students compared with 20% in engineering. In the UK in 2014, students starting degrees were 25% female, compared with 15% in engineering. US graduates who responded to a 2015 salary survey were 18.8% female. According to the latest 2023 figures, Bayes Business School graduates get an average of £51,921 within 5 years of graduation, which is the most among UK universities. This was followed by the University of Oxford at £49,086 and the University of Warwick at £47,446. See also American Institute of Chemical Engineers Distillation Fluid dynamics Heat transfer History of chemical engineering Institution of Chemical Engineers (IChemE) List of chemical engineering societies List of chemical engineers Mass transfer Process control Process design (chemical engineering) Process engineering Process miniaturization Unit operations Chemfluence References External links American Institute of Chemical Engineers (USA) Institution of Chemical Engineers (UK) Canadian Society for Chemical Engineers Engineers Australia (AUS) Engineering occupations
Chemical engineer
[ "Chemistry", "Engineering" ]
1,117
[ "Chemical engineering", "Chemical engineers" ]
431,369
https://en.wikipedia.org/wiki/Sunyaev%E2%80%93Zeldovich%20effect
The Sunyaev–Zeldovich effect (named after Rashid Sunyaev and Yakov B. Zeldovich and often abbreviated as the SZ effect) is the spectral distortion of the cosmic microwave background (CMB) through inverse Compton scattering by high-energy electrons in galaxy clusters, in which the low-energy CMB photons receive an average energy boost during collision with the high-energy cluster electrons. Observed distortions of the cosmic microwave background spectrum are used to detect density perturbations in the universe. Using the Sunyaev–Zeldovich effect, dense clusters of galaxies have been observed. Overview The Sunyaev–Zeldovich effect was predicted by Rashid Sunyaev and Yakov Zeldovich to describe anisotropies in the CMB. The effect is caused by the CMB interacting with high-energy electrons. These high-energy electrons cause inverse Compton scattering of CMB photons, which distorts the radiation spectrum of the CMB. The Sunyaev–Zeldovich effect is most apparent when observing galaxy clusters. Analysis of CMB data at higher angular resolution (high multipole ℓ-values) requires taking the Sunyaev–Zeldovich effect into account. The Sunyaev–Zeldovich effect can be divided into different types: thermal effects, where the CMB photons interact with electrons that have high energies due to their temperature; kinematic effects, a second-order effect where the CMB photons interact with electrons that have high energies due to their bulk motion (also called the Ostriker–Vishniac effect, after Jeremiah P. Ostriker and Ethan Vishniac); and polarization effects. The Sunyaev–Zeldovich effect is of major astrophysical and cosmological interest. It can help determine the value of the Hubble constant, locate new galaxy clusters, and aid the study of cluster structure and mass. Since the Sunyaev–Zeldovich effect is a scattering effect, its magnitude is independent of redshift, which means that clusters at high redshift can be detected just as easily as those at low redshift. Thermal effects The distortion of the CMB resulting from a large number of high-energy electrons is known as the thermal Sunyaev–Zeldovich effect. The thermal Sunyaev–Zeldovich effect is most commonly studied in galaxy clusters. By comparing Sunyaev–Zeldovich and X-ray emission data, the thermal structure of the cluster can be studied, and if the temperature profile is known, Sunyaev–Zeldovich data can be used to determine the baryonic mass of the cluster along the line of sight. Comparing Sunyaev–Zeldovich and X-ray data can also be used to determine the Hubble constant using the angular diameter distance of the cluster. These thermal distortions can also be measured in superclusters and in gases in the Local Group, although they are less significant and more difficult to detect. In superclusters, the effect is not strong (< 8 μK), but with precise enough equipment, measuring this distortion can give a glimpse into large-scale structure formation. Gases in the Local Group may also cause anisotropies in the CMB due to the thermal Sunyaev–Zeldovich effect, which must be taken into account when measuring the CMB on certain angular scales. Kinematic effects The kinematic Sunyaev–Zeldovich effect is caused when a galaxy cluster moves relative to the Hubble flow. The kinematic Sunyaev–Zeldovich effect gives a method for calculating the peculiar velocity from the induced temperature shift, \frac{\Delta T}{T_{\mathrm{CMB}}} \approx -\tau \, \frac{v_{\mathrm{pec}}}{c}, where v_{\mathrm{pec}} is the peculiar velocity of the cluster along the line of sight, c is the speed of light, and \tau is the optical depth to Thomson scattering through the cluster. In order to use this equation, the thermal and kinematic effects need to be separated. 
The effect is relatively weak for most galaxy clusters. Using gravitational lensing, the peculiar velocity can be used to determine other velocity components for a specific galaxy cluster. These kinematic effects can be used to determine the Hubble constant and the behavior of clusters. Research Current research is focused on modelling how the effect is generated by the intracluster plasma in galaxy clusters, and on using the effect to estimate the Hubble constant and to separate different components in the angular average statistics of fluctuations in the background. Hydrodynamic structure formation simulations are being studied to gain data on thermal and kinetic effects in the theory. Observations are difficult due to the small amplitude of the effect and to confusion with experimental error and other sources of CMB temperature fluctuations. To distinguish the SZ effect due to galaxy clusters from ordinary density perturbations, both the spectral dependence and the spatial dependence of fluctuations in the cosmic microwave background are used. A factor which facilitates high redshift cluster detection is the angular scale versus redshift relation: it changes little between redshifts of 0.3 and 2, meaning that clusters between these redshifts have similar sizes on the sky. The use of surveys of clusters detected by their Sunyaev–Zeldovich effect for the determination of cosmological parameters has been demonstrated by Barbosa et al. (1996). This might help in understanding the dynamics of dark energy in surveys (South Pole Telescope, Atacama Cosmology Telescope, Planck). Observations In 1984, researchers from the Cambridge Radio Astronomy Group and the Owens Valley Radio Observatory first detected the Sunyaev–Zeldovich effect from clusters of galaxies. Ten years later, the Ryle Telescope was used to image a cluster of galaxies in the Sunyaev–Zeldovich effect for the first time. In 1987 the Cosmic Background Explorer (COBE) satellite observed the CMB and gave more accurate data for anisotropies in the CMB, allowing for more accurate analysis of the Sunyaev–Zeldovich effect. Instruments built specifically to study the effect include the Sunyaev–Zeldovich camera on the Atacama Pathfinder Experiment, and the Sunyaev–Zeldovich Array, which both saw first light in 2005. In 2012, the Atacama Cosmology Telescope (ACT) performed the first statistical detection of the kinematic SZ effect. In 2012 the kinematic SZ effect was detected in an individual object for the first time in MACS J0717.5+3745. As of 2015, the South Pole Telescope (SPT) had used the Sunyaev–Zeldovich effect to discover 415 galaxy clusters. The Sunyaev–Zeldovich effect has been and will continue to be an important tool in discovering hundreds of galaxy clusters. Recent experiments such as the OLIMPO balloon-borne telescope try to collect data in specific frequency bands and specific regions of the sky in order to pinpoint the Sunyaev–Zeldovich effect and give a more accurate map of certain regions of the sky. See also Sachs–Wolfe effect Cosmic microwave background spectral distortions Kompaneyets equation References Further reading Royal Astronomical Society, Corrupted echoes from the Big Bang? RAS Press Notice PN 04/01 External links Corrupted echoes from the Big Bang? innovations-report.com. Sunyaev–Zel'dovich effect on arxiv.org Physical cosmological concepts Radio astronomy
Sunyaev–Zeldovich effect
[ "Physics", "Astronomy" ]
1,481
[ "Physical cosmological concepts", "Radio astronomy", "Concepts in astrophysics", "Astronomical sub-disciplines" ]
431,529
https://en.wikipedia.org/wiki/Supersolid
In condensed matter physics, a supersolid is a spatially ordered (i.e. solid) material with superfluid properties. In the case of helium-4, it has been conjectured since the 1960s that it might be possible to create a supersolid. Starting from 2017, a definitive proof for the existence of this state was provided by several experiments using atomic Bose–Einstein condensates. The general conditions required for supersolidity to emerge in a certain substance are a topic of ongoing research. Background A supersolid is a special quantum state of matter where particles form a rigid, spatially ordered structure, but also flow with zero viscosity. This is in contradiction to the intuition that flow, and in particular superfluid flow with zero viscosity, is a property exclusive to the fluid state, e.g., superconducting electron and neutron fluids, gases with Bose–Einstein condensates, or unconventional liquids such as helium-4 or helium-3 at sufficiently low temperature. For more than 50 years it was thus unclear whether the supersolid state can exist. Experiments using helium While several experiments yielded negative results, in the 1980s, John Goodkind discovered the first anomaly in a solid by using ultrasound. Inspired by his observation, in 2004 Eun-Seong Kim and Moses Chan at Pennsylvania State University saw phenomena which were interpreted as supersolid behavior. Specifically, they observed a non-classical rotational moment of inertia of a torsional oscillator. This observation could not be explained by classical models but was consistent with superfluid-like behavior of a small percentage of the helium atoms contained within the oscillator. This observation triggered a large number of follow-up studies to reveal the role played by crystal defects or helium-3 impurities. Further experimentation has cast some doubt on the existence of a true supersolid in helium. Most importantly, it was shown that the observed phenomena could be largely explained due to changes in the elastic properties of the helium. In 2012, Chan repeated his original experiments with a new apparatus that was designed to eliminate any such contributions. In this experiment, Chan and his coauthors found no evidence of supersolidity. Experiments using ultracold quantum gases In 2017, two research groups from ETH Zurich and from MIT reported on the creation of an ultracold quantum gas with supersolid properties. The Zurich group placed a Bose–Einstein condensate inside two optical resonators, which enhanced the atomic interactions until they started to spontaneously crystallize and form a solid that maintains the inherent superfluidity of Bose–Einstein condensates. This setting realises a special form of a supersolid, the so-called lattice supersolid, where atoms are pinned to the sites of an externally imposed lattice structure. The MIT group exposed a Bose–Einstein condensate in a double-well potential to light beams that created an effective spin–orbit coupling. The interference between the atoms on the two spin–orbit coupled lattice sites gave rise to a characteristic density modulation. In 2019, three groups from Stuttgart, Florence, and Innsbruck observed supersolid properties in dipolar Bose–Einstein condensates formed from lanthanide atoms. In these systems, supersolidity emerges directly from the atomic interactions, without the need for an external optical lattice. This facilitated also the direct observation of superfluid flow and hence the definitive proof for the existence of the supersolid state of matter. 
In 2021, confocal cavity quantum electrodynamics with a Bose–Einstein condensate was used to create a supersolid that possesses a key property of solids, vibration. That is, a supersolid was created that possesses lattice phonons with a Goldstone mode dispersion exhibiting a 16 cm/s speed of sound. In 2021, dysprosium was used to create a 2-dimensional supersolid quantum gas, in 2022, the same team created a supersolid disk in a round trap and in 2024 they reported the observation of quantum vortices in the supersolid phase Theory In most theories of this state, it is supposed that vacancies – empty sites normally occupied by particles in an ideal crystal – lead to supersolidity. These vacancies are caused by zero-point energy, which also causes them to move from site to site as waves. Because vacancies are bosons, if such clouds of vacancies can exist at very low temperatures, then a Bose–Einstein condensation of vacancies could occur at temperatures less than a few tenths of a Kelvin. A coherent flow of vacancies is equivalent to a "superflow" (frictionless flow) of particles in the opposite direction. Despite the presence of the gas of vacancies, the ordered structure of a crystal is maintained, although with less than one particle on each lattice site on average. Alternatively, a supersolid can also emerge from a superfluid. In this situation, which is realised in the experiments with atomic Bose–Einstein condensates, the spatially ordered structure is a modulation on top of the superfluid density distribution. See also Superfluid film Superglass References External links Nature story on a supersolid experiment APS Physics Magazine on a vibrating supersolid experiment Penn State: What is a Supersolid? Condensed matter physics Phases of matter Liquid helium
Supersolid
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,110
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
432,019
https://en.wikipedia.org/wiki/Fresno%20scraper
The Fresno scraper is a machine pulled by horses used for constructing canals and ditches in sandy soil. The design of the Fresno scraper forms the basis of most modern earthmoving scrapers, having the ability to scrape and move a quantity of soil, and also to discharge it at a controlled depth, thus quadrupling the volume which could be handled manually. History The Fresno scraper was invented in 1883 by James Porteous. Working with farmers in Fresno, California, he had recognised the dependence of the Central San Joaquin Valley on irrigation, and the need for a more efficient means of constructing canals and ditches in the sandy soil. In perfecting the design of his machine, Porteous made several revisions on his own and also traded ideas with William Deidrick, Frank Dusy, and Abijah McCall, who invented and held patents on similar scrapers. Porteous bought the patents held by Deidrick, Dusy, and McCall, gaining sole rights to the Fresno Scraper. Prior scrapers pushed the soil ahead of them, while the Fresno scraper lifted it into a C-shaped bowl where it could be dragged along with much less friction. By lifting the handle, the operator could cause the scraper to bite deeper. Once soil was gathered, the handle could be lowered to raise the blade off the ground so it could be dragged to a low spot, and dumped by raising the handle very high. Impact This design was so revolutionary and economical that it influenced the design of modern bulldozer blades and earth movers. Between 1884 and 1910, thousands of Fresno scrapers were produced at the Fresno Agricultural Works, which had been formed by Porteous and used in agriculture, land leveling, road and railroad grading, and the construction industry. They played a vital role in the construction of the Panama Canal and later served the US Army in World War I. It was one of the most important agricultural and civil engineering machines ever made. In 1991, the Fresno Scraper was designated as an International Historic Engineering Landmark by the American Society of Mechanical Engineers. It is currently displayed at the San Joaquin County Historical Society & Museum. See also Wheel tractor-scraper External links Designation of the Fresno Scraper as an Engineering Landmark 19th-century inventions American inventions Engineering vehicles Historic Mechanical Engineering Landmarks Scottish inventions Soil
Fresno scraper
[ "Engineering" ]
465
[ "Engineering vehicles" ]
432,181
https://en.wikipedia.org/wiki/Lewis%20structure
Lewis structuresalso called Lewis dot formulas, Lewis dot structures, electron dot structures, or Lewis electron dot structures (LEDs)are diagrams that show the bonding between atoms of a molecule, as well as the lone pairs of electrons that may exist in the molecule. Introduced by Gilbert N. Lewis in his 1916 article The Atom and the Molecule, a Lewis structure can be drawn for any covalently bonded molecule, as well as coordination compounds. Lewis structures extend the concept of the electron dot diagram by adding lines between atoms to represent shared pairs in a chemical bond. Lewis structures show each atom and its position in the structure of the molecule using its chemical symbol. Lines are drawn between atoms that are bonded to one another (pairs of dots can be used instead of lines). Excess electrons that form lone pairs are represented as pairs of dots, and are placed next to the atoms. Although main group elements of the second period and beyond usually react by gaining, losing, or sharing electrons until they have achieved a valence shell electron configuration with a full octet of (8) electrons, hydrogen (H) can only form bonds which share just two electrons. Construction and electron counting For a neutral molecule, the total number of electrons represented in a Lewis structure is equal to the sum of the numbers of valence electrons on each individual atom. Non-valence electrons are not represented in Lewis structures. Once the total number of valence electrons has been determined, they are placed into the structure according to these steps: Initially, one line (representing a single bond) is drawn between each pair of connected atoms. Each bond consists of a pair of electrons, so if t is the total number of electrons to be placed and n is the number of single bonds just drawn, t−2n electrons remain to be placed. These are temporarily drawn as dots, one per electron, to a maximum of eight per atom (two in the case of hydrogen), minus two for each bond. Electrons are distributed first to the outer atoms and then to the others, until there are no more to be placed. Finally, each atom (other than hydrogen) that is surrounded by fewer than eight electrons (counting each bond as two) is processed as follows: For every two electrons needed, two dots are deleted from a neighboring atom and an additional line is drawn between the two atoms. This represents the conversion of a lone pair of electrons into a bonding pair, which adds two electrons to the former atom's valence shell while leaving the latter's electron count unchanged. In the preceding steps, if there are not enough electrons to fill the valence shells of all atoms, preference is given to those atoms whose electronegativity is higher. Lewis structures for polyatomic ions may be drawn by the same method. However when counting electrons, negative ions should have extra electrons placed in their Lewis structures; positive ions should have fewer electrons than an uncharged molecule. When the Lewis structure of an ion is written, the entire structure is placed in brackets, and the charge is written as a superscript on the upper right, outside the brackets. 
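The bookkeeping in the steps above is mechanical enough to sketch in code. The short Python fragment below is purely illustrative (the VALENCE table and the nitrite example are assumptions chosen to match the worked example later in this article); it only performs the counting steps, not the placement of bonds and lone pairs:
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7, "S": 6, "Cl": 7}  # valence electrons per neutral atom

def electrons_to_place(atoms, charge=0):
    """Total valence electrons a Lewis structure must show.
    atoms:  mapping of element symbol -> count, e.g. {"N": 1, "O": 2}
    charge: overall charge; an anion (charge -1) gains an electron, a cation loses one."""
    return sum(VALENCE[el] * n for el, n in atoms.items()) - charge

def electrons_left_after_single_bonds(total, n_bonds):
    """Electrons still to be drawn as dots once each connection gets a single bond (t - 2n)."""
    return total - 2 * n_bonds

# Nitrite ion, NO2 with charge -1: 5 + 2*6 + 1 = 18 electrons; two N-O single bonds use 4 of them.
t = electrons_to_place({"N": 1, "O": 2}, charge=-1)
print(t)                                        # 18
print(electrons_left_after_single_bonds(t, 2))  # 14, i.e. seven lone pairs
Running it reproduces the electron counts used in the nitrite walkthrough below.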
Miburo method A simpler method has been proposed for constructing Lewis structures, eliminating the need for electron counting: the atoms are drawn showing the valence electrons; bonds are then formed by pairing up valence electrons of the atoms involved in the bond-making process, and anions and cations are formed by adding or removing electrons to/from the appropriate atoms. A trick is to count up the valence electrons, then count up the number of electrons needed to complete the octet rule (or, for hydrogen, just 2 electrons), then take the difference of these two numbers. The answer is the number of electrons that make up the bonds. The rest of the electrons simply fill the other atoms' octets. Lever method Another simple and general procedure for writing Lewis structures and resonance forms has been proposed. This system works in nearly all cases; however, there are three exceptional cases in which it does not work. Formal charge In terms of Lewis structures, formal charge is used in the description, comparison, and assessment of likely topological and resonance structures by determining the apparent electronic charge of each atom within, based upon its electron dot structure, assuming exclusive covalency or non-polar bonding. It is useful for assessing possible electron re-configurations when considering reaction mechanisms, and it often has the same sign as the partial charge of the atom, with exceptions. In general, the formal charge of an atom can be calculated using the formula FC = V - N - B/2, where FC is the formal charge, V is the number of valence electrons in a free atom of the element, N is the number of unshared electrons on the atom, and B is the total number of electrons in the bonds the atom has with other atoms. The formal charge of an atom is thus the difference between the number of valence electrons that the neutral atom would have and the number of electrons that belong to it in the Lewis structure, with electrons in covalent bonds split equally between the atoms involved in the bond. The total of the formal charges on an ion should equal the charge on the ion, and the total of the formal charges on a neutral molecule should equal zero. Resonance For some molecules and ions, it is difficult to determine which lone pairs should be moved to form double or triple bonds, and two or more different resonance structures may be written for the same molecule or ion. In such cases it is usual to write all of them with two-way arrows in between. This is sometimes the case when multiple atoms of the same type surround the central atom, and is especially common for polyatomic ions. When this situation occurs, the molecule's Lewis structure is said to be a resonance structure, and the molecule exists as a resonance hybrid. Each of the different possibilities is superimposed on the others, and the molecule is considered to have a Lewis structure equivalent to some combination of these states. The nitrate ion (NO3−), for instance, must form a double bond between nitrogen and one of the oxygens to satisfy the octet rule for nitrogen. However, because the molecule is symmetrical, it does not matter which of the oxygens forms the double bond. In this case, there are three possible resonance structures. 
Expressing resonance when drawing Lewis structures may be done either by drawing each of the possible resonance forms and placing double-headed arrows between them or by using dashed lines to represent the partial bonds (although the latter is a good representation of the resonance hybrid which is not, formally speaking, a Lewis structure). When comparing resonance structures for the same molecule, usually those with the fewest formal charges contribute more to the overall resonance hybrid. When formal charges are necessary, resonance structures that have negative charges on the more electronegative elements and positive charges on the less electronegative elements are favored. Single bonds can also be moved in the same way to create resonance structures for hypervalent molecules such as sulfur hexafluoride, which is the correct description according to quantum chemical calculations instead of the common expanded octet model. The resonance structure should not be interpreted to indicate that the molecule switches between forms, but that the molecule acts as the average of multiple forms. Example The formula of the nitrite ion is . Nitrogen is the least electronegative atom of the two, so it is the central atom by multiple criteria. Count valence electrons. Nitrogen has 5 valence electrons; each oxygen has 6, for a total of (6 × 2) + 5 = 17. The ion has a charge of −1, which indicates an extra electron, so the total number of electrons is 18. Connect the atoms by single bonds. Each oxygen must be bonded to the nitrogen, which uses four electrons—two in each bond. Place lone pairs. The 14 remaining electrons should initially be placed as 7 lone pairs. Each oxygen may take a maximum of 3 lone pairs, giving each oxygen 8 electrons including the bonding pair. The seventh lone pair must be placed on the nitrogen atom. Satisfy the octet rule. Both oxygen atoms currently have 8 electrons assigned to them. The nitrogen atom has only 6 electrons assigned to it. One of the lone pairs on an oxygen atom must form a double bond, but either atom will work equally well. Therefore, there is a resonance structure. Tie up loose ends. Two Lewis structures must be drawn: Each structure has one of the two oxygen atoms double-bonded to the nitrogen atom. The second oxygen atom in each structure will be single-bonded to the nitrogen atom. Place brackets around each structure, and add the charge (−) to the upper right outside the brackets. Draw a double-headed arrow between the two resonance forms. Alternative formations Chemical structures may be written in more compact forms, particularly when showing organic molecules. In condensed structural formulas, many or even all of the covalent bonds may be left out, with subscripts indicating the number of identical groups attached to a particular atom. Another shorthand structural diagram is the skeletal formula (also known as a bond-line formula or carbon skeleton diagram). In a skeletal formula, carbon atoms are not signified by the symbol C but by the vertices of the lines. Hydrogen atoms bonded to carbon are not shown—they can be inferred by counting the number of bonds to a particular carbon atom—each carbon is assumed to have four bonds in total, so any bonds not shown are, by implication, to hydrogen atoms. Other diagrams may be more complex than Lewis structures, showing bonds in 3D using various forms such as space-filling diagrams. 
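As a worked check of the formal-charge formula against the nitrite example above, the following Python sketch (illustrative only; the per-atom electron counts are assumptions read off one of the two resonance structures) confirms that the formal charges sum to the charge of the ion:
def formal_charge(v, n, b):
    """FC = V - N - B/2; b counts both electrons of every bond the atom participates in."""
    return v - n - b // 2

# One resonance structure of NO2-: N carries one lone pair and three bonds (one single, one double);
# the double-bonded O carries two lone pairs, the single-bonded O carries three.
charges = {
    "N": formal_charge(5, 2, 6),
    "O (double bond)": formal_charge(6, 4, 4),
    "O (single bond)": formal_charge(6, 6, 2),
}
print(charges)                # {'N': 0, 'O (double bond)': 0, 'O (single bond)': -1}
print(sum(charges.values()))  # -1, matching the charge of the ion
The structure placing the -1 on the more electronegative oxygen, with all other formal charges zero, is exactly the kind of low-formal-charge arrangement favored in the comparison rules above.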
Usage and limitations Despite their simplicity and development in the early twentieth century, when understanding of chemical bonding was still rudimentary, Lewis structures capture many of the key features of the electronic structure of a range of molecular systems, including those of relevance to chemical reactivity. Thus, they continue to enjoy widespread use by chemists and chemistry educators. This is especially true in the field of organic chemistry, where the traditional valence-bond model of bonding still dominates, and mechanisms are often understood in terms of curve-arrow notation superimposed upon skeletal formulae, which are shorthand versions of Lewis structures. Due to the greater variety of bonding schemes encountered in inorganic and organometallic chemistry, many of the molecules encountered require the use of fully delocalized molecular orbitals to adequately describe their bonding, making Lewis structures comparatively less important (although they are still common). There are simple and archetypal molecular systems for which a Lewis description, at least in unmodified form, is misleading or inaccurate. Notably, the naive drawing of Lewis structures for molecules known experimentally to contain unpaired electrons (e.g., O2, NO, and ClO2) leads to incorrect inferences of bond orders, bond lengths, and/or magnetic properties. A simple Lewis model also does not account for the phenomenon of aromaticity. For instance, Lewis structures do not offer an explanation for why cyclic C6H6 (benzene) experiences special stabilization beyond normal delocalization effects, while C4H4 (cyclobutadiene) actually experiences a special destabilization. Molecular orbital theory provides the most straightforward explanation for these phenomena. See also Valence shell electron pair repulsion theory Molecular geometry Structural formula Natural bond orbital References External links Lewis Dot Diagrams of Selected Elements Lewis structures for all compounds 1916 introductions Chemical formulas Chemical bonding
Lewis structure
[ "Physics", "Chemistry", "Materials_science" ]
2,327
[ "Condensed matter physics", "nan", "Chemical structures", "Chemical formulas", "Chemical bonding" ]
432,276
https://en.wikipedia.org/wiki/Brownian%20ratchet
In the philosophy of thermal and statistical physics, the Brownian ratchet or Feynman–Smoluchowski ratchet is an apparent perpetual motion machine of the second kind (converting thermal energy into mechanical work), first analysed in 1912 as a thought experiment by Polish physicist Marian Smoluchowski. It was popularised by American Nobel laureate physicist Richard Feynman in a physics lecture at the California Institute of Technology on May 11, 1962, during his Messenger Lectures series The Character of Physical Law in Cornell University in 1964 and in his text The Feynman Lectures on Physics as an illustration of the laws of thermodynamics. The simple machine, consisting of a tiny paddle wheel and a ratchet, appears to be an example of a Maxwell's demon, able to extract mechanical work from random fluctuations (heat) in a system at thermal equilibrium, in violation of the second law of thermodynamics. Detailed analysis by Feynman and others showed why it cannot actually do this. The machine The device consists of a gear known as a ratchet that rotates freely in one direction but is prevented from rotating in the opposite direction by a pawl. The ratchet is connected by an axle to a paddle wheel that is immersed in a fluid of molecules at temperature . The molecules constitute a heat bath in that they undergo random Brownian motion with a mean kinetic energy that is determined by the temperature. The device is imagined as being small enough that the impulse from a single molecular collision can turn the paddle. Although such collisions would tend to turn the rod in either direction with equal probability, the pawl allows the ratchet to rotate in one direction only. The net effect of many such random collisions would seem to be that the ratchet rotates continuously in that direction. The ratchet's motion then can be used to do work on other systems, for example lifting a weight (m) against gravity. The energy necessary to do this work apparently would come from the heat bath, without any heat gradient (i.e. the motion leeches energy from the temperature of the air). Were such a machine to work successfully, its operation would violate the second law of thermodynamics, one form of which states: "It is impossible for any device that operates on a cycle to receive heat from a single reservoir and produce a net amount of work." Why it fails Although at first sight the Brownian ratchet seems to extract useful work from Brownian motion, Feynman demonstrated that if the entire device is at the same temperature, the ratchet will not rotate continuously in one direction but will move randomly back and forth, and therefore will not produce any useful work. The reason is that since the pawl is at the same temperature as the paddle, it will also undergo Brownian motion, "bouncing" up and down. It therefore will intermittently fail by allowing a ratchet tooth to slip backward under the pawl while it is up. Another issue is that when the pawl rests on the sloping face of the tooth, the spring which returns the pawl exerts a sideways force on the tooth which tends to rotate the ratchet in a backwards direction. Feynman demonstrated that if the temperature of the ratchet and pawl is the same as the temperature of the paddle, then the failure rate must equal the rate at which the ratchet ratchets forward, so that no net motion results over long enough periods or in an ensemble averaged sense. A simple but rigorous proof that no net motion occurs no matter what shape the teeth are was given by Magnasco. 
If, on the other hand, the temperature of the ratchet and pawl, T2, is less than the temperature of the paddle, T1, the ratchet will indeed move forward and produce useful work. In this case, though, the energy is extracted from the temperature gradient between the two thermal reservoirs, and some waste heat is exhausted into the lower temperature reservoir by the pawl. In other words, the device functions as a miniature heat engine, in compliance with the second law of thermodynamics. Conversely, if T2 is greater than T1, the device will rotate in the opposite direction. The Feynman ratchet model led to the similar concept of Brownian motors, nanomachines which can extract useful work not from thermal noise but from chemical potentials and other microscopic nonequilibrium sources, in compliance with the laws of thermodynamics. Diodes are an electrical analog of the ratchet and pawl, and for the same reason cannot produce useful work by rectifying Johnson noise in a circuit at uniform temperature. Millonas, as well as Mahato, extended the same notion to correlation ratchets driven by mean-zero (unbiased) nonequilibrium noise with a nonvanishing correlation function of odd order greater than one. History The ratchet and pawl was first discussed as a Second Law-violating device by Gabriel Lippmann in 1900. In 1912, Polish physicist Marian Smoluchowski gave the first correct qualitative explanation of why the device fails: thermal motion of the pawl allows the ratchet's teeth to slip backwards. Feynman did the first quantitative analysis of the device in 1962 using the Maxwell–Boltzmann distribution, showing that if the temperature of the paddle T1 was greater than the temperature of the ratchet T2, it would function as a heat engine, but if T1 = T2 there would be no net motion of the paddle. In 1996, Juan Parrondo and Pep Español used a variation of the above device in which no ratchet is present, only two paddles, to show that the axle connecting the two paddles conducts heat between the reservoirs; they argued that although Feynman's conclusion was correct, his analysis was flawed because of his erroneous use of the quasistatic approximation, resulting in incorrect equations for efficiency. Magnasco and Stolovitzky (1998) extended this analysis to consider the full ratchet device, and showed that the efficiency of the device is far smaller than the Carnot efficiency claimed by Feynman. A paper in 2000 by Derek Abbott, Bruce R. Davis and Juan Parrondo reanalyzed the problem and extended it to the case of multiple ratchets, showing a link with Parrondo's paradox. Léon Brillouin in 1950 discussed an electrical circuit analogue that uses a rectifier (such as a diode) instead of a ratchet. The idea was that the diode would rectify the Johnson noise thermal current fluctuations produced by the resistor, generating a direct current which could be used to perform work. In the detailed analysis it was shown that the thermal fluctuations within the diode generate an electromotive force that cancels the voltage from rectified current fluctuations. Therefore, just as with the ratchet, the circuit will produce no useful energy if all the components are at thermal equilibrium (at the same temperature); a DC current will be produced only when the diode is at a lower temperature than the resistor. 
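The equilibrium argument can also be illustrated numerically. The Python sketch below is not Feynman's two-temperature setup but a closely related toy model under assumed, arbitrary parameters: an overdamped Brownian particle in an asymmetric sawtooth potential. With the potential held fixed (thermal equilibrium at a single temperature) the mean displacement stays near zero, while periodically switching the potential off and on (a "flashing ratchet", one of the Brownian-motor variants mentioned above) produces a systematic drift:
import numpy as np

L = 1.0      # spatial period of the sawtooth potential
a = 0.2      # position of the barrier peak inside each period (a < L/2 gives the asymmetry)
U0 = 5.0     # barrier height
kT = 1.0     # thermal energy
gamma = 1.0  # friction coefficient
dt = 1e-4    # time step

def force(x, potential_on):
    """Minus the slope of the piecewise-linear sawtooth; zero when the potential is switched off."""
    if not potential_on:
        return np.zeros_like(x)
    xp = np.mod(x, L)
    return np.where(xp < a, -U0 / a, U0 / (L - a))

def mean_drift(flashing, steps=100_000, n_particles=400, half_period=500, seed=1):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    for i in range(steps):
        on = True if not flashing else (i // half_period) % 2 == 0
        kick = rng.standard_normal(n_particles) * np.sqrt(2.0 * kT * dt / gamma)
        x += force(x, on) * dt / gamma + kick
    return x.mean()

print("potential always on (equilibrium):", mean_drift(flashing=False))  # ~0, no directed motion
print("potential flashed on/off:         ", mean_drift(flashing=True))   # systematic positive drift
Only the driven, out-of-equilibrium run shows directed motion; at a single uniform temperature the asymmetry of the teeth alone is not enough, in line with the analysis above.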
Granular gas Researchers from the University of Twente, the University of Patras in Greece, and the Foundation for Fundamental Research on Matter have constructed a Feynman–Smoluchowski engine which, when not in thermal equilibrium, converts pseudo-Brownian motion into work by means of a granular gas, which is a conglomeration of solid particles vibrated with such vigour that the system assumes a gas-like state. The constructed engine consisted of four vanes which were allowed to rotate freely in a vibrofluidized granular gas. Because the ratchet's gear and pawl mechanism, as described above, permitted the axle to rotate only in one direction, random collisions with the moving beads caused the vane to rotate. This seems to contradict Feynman's hypothesis. However, this system is not in perfect thermal equilibrium: energy is constantly being supplied to maintain the fluid motion of the beads. Vigorous vibrations on top of a shaking device mimic the nature of a molecular gas. Unlike an ideal gas, though, in which tiny particles move constantly, stopping the shaking would simply cause the beads to drop. In the experiment, this necessary out-of-equilibrium environment was thus maintained. Work was not immediately being done, though; the ratchet effect only commenced beyond a critical shaking strength. For very strong shaking, the vanes of the paddle wheel interacted with the gas, forming a convection roll, sustaining their rotation. See also Quantum stirring, ratchets, and pumping Notes External links The Feynman Lectures on Physics Vol. I Ch. 46: Ratchet and pawl Feynman's Messenger Lectures Coupled Brownian Motors - Can we get work out of unbiased fluctuation? Experiment finally proves 100-year-old thought experiment is possible (w/ Video) Articles Lukasz Machura: Performance of Brownian Motors. University of Augsburg, 2006 (PDF) Qiu C, Punke M, Tian Y, Han Y, Wang S, Su Y, Salvalaglio M, Pan X, Srolovitz D J, Han J (2024). Grain boundaries are Brownian ratchets. Science 385 (6712): 980:985. doi:10.1126/science.adp1516 Thought experiments in physics Richard Feynman Philosophy of thermal and statistical physics Nanotechnology Perpetual motion
Brownian ratchet
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,920
[ "Philosophy of thermal and statistical physics", "Materials science", "Thermodynamics", "Nanotechnology", "Statistical mechanics" ]
432,630
https://en.wikipedia.org/wiki/Superspace
Superspace is the coordinate space of a theory exhibiting supersymmetry. In such a formulation, along with ordinary space dimensions x, y, z, ..., there are also "anticommuting" dimensions whose coordinates are labeled in Grassmann numbers rather than real numbers. The ordinary space dimensions correspond to bosonic degrees of freedom, the anticommuting dimensions to fermionic degrees of freedom. The word "superspace" was first used by John Wheeler in an unrelated sense to describe the configuration space of general relativity; for example, this usage may be seen in his 1973 textbook Gravitation. Informal discussion There are several similar, but not equivalent, definitions of superspace that have been used, and continue to be used in the mathematical and physics literature. One such usage is as a synonym for super Minkowski space. In this case, one takes ordinary Minkowski space, and extends it with anti-commuting fermionic degrees of freedom, taken to be anti-commuting Weyl spinors from the Clifford algebra associated to the Lorentz group. Equivalently, the super Minkowski space can be understood as the quotient of the super Poincaré algebra modulo the algebra of the Lorentz group. A typical notation for the coordinates on such a space is with the overline being the give-away that super Minkowski space is the intended space. Superspace is also commonly used as a synonym for the super vector space. This is taken to be an ordinary vector space, together with additional coordinates taken from the Grassmann algebra, i.e. coordinate directions that are Grassmann numbers. There are several conventions for constructing a super vector space in use; two of these are described by Rogers and DeWitt. A third usage of the term "superspace" is as a synonym for a supermanifold: a supersymmetric generalization of a manifold. Note that both super Minkowski spaces and super vector spaces can be taken as special cases of supermanifolds. A fourth, and completely unrelated meaning saw a brief usage in general relativity; this is discussed in greater detail at the bottom. Examples Several examples are given below. The first few assume a definition of superspace as a super vector space. This is denoted as Rm|n, the Z2-graded vector space with Rm as the even subspace and Rn as the odd subspace. The same definition applies to Cm|n. The four-dimensional examples take superspace to be super Minkowski space. Although similar to a vector space, this has many important differences: First of all, it is an affine space, having no special point denoting the origin. Next, the fermionic coordinates are taken to be anti-commuting Weyl spinors from the Clifford algebra, rather than being Grassmann numbers. The difference here is that the Clifford algebra has a considerably richer and more subtle structure than the Grassmann numbers. So, the Grassmann numbers are elements of the exterior algebra, and the Clifford algebra has an isomorphism to the exterior algebra, but its relation to the orthogonal group and the spin group, used to construct the spin representations, give it a deep geometric significance. (For example, the spin groups form a normal part of the study of Riemannian geometry, quite outside the ordinary bounds and concerns of physics.) Trivial examples The smallest superspace is a point which contains neither bosonic nor fermionic directions. Other trivial examples include the n-dimensional real plane Rn, which is a vector space extending in n real, bosonic directions and no fermionic directions. 
The vector space R0|n, which is the n-dimensional real Grassmann algebra. The space R1|1 of one even and one odd direction is known as the space of dual numbers, introduced by William Clifford in 1873. The superspace of supersymmetric quantum mechanics Supersymmetric quantum mechanics with N supercharges is often formulated in the superspace R1|2N, which contains one real direction t identified with time and N complex Grassmann directions which are spanned by Θi and Θ*i, where i runs from 1 to N. Consider the special case N = 1. The superspace R1|2 is a 3-dimensional vector space. A given coordinate therefore may be written as a triple (t, Θ, Θ*). The coordinates form a Lie superalgebra, in which the gradation degree of t is even and that of Θ and Θ* is odd. This means that a bracket may be defined between any two elements of this vector space, and that this bracket reduces to the commutator on two even coordinates and on one even and one odd coordinate while it is an anticommutator on two odd coordinates. This superspace is an abelian Lie superalgebra, which means that all of the aforementioned brackets vanish where is the commutator of a and b and is the anticommutator of a and b. One may define functions from this vector space to itself, which are called superfields. The above algebraic relations imply that, if we expand our superfield as a power series in Θ and Θ*, then we will only find terms at the zeroeth and first orders, because Θ2 = Θ*2 = 0. Therefore, superfields may be written as arbitrary functions of t multiplied by the zeroeth and first order terms in the two Grassmann coordinates Superfields, which are representations of the supersymmetry of superspace, generalize the notion of tensors, which are representations of the rotation group of a bosonic space. One may then define derivatives in the Grassmann directions, which take the first order term in the expansion of a superfield to the zeroeth order term and annihilate the zeroeth order term. One can choose sign conventions such that the derivatives satisfy the anticommutation relations These derivatives may be assembled into supercharges whose anticommutators identify them as the fermionic generators of a supersymmetry algebra where i times the time derivative is the Hamiltonian operator in quantum mechanics. Both Q and its adjoint anticommute with themselves. The supersymmetry variation with supersymmetry parameter ε of a superfield Φ is defined to be We can evaluate this variation using the action of Q on the superfields Similarly one may define covariant derivatives on superspace which anticommute with the supercharges and satisfy a wrong sign supersymmetry algebra . The fact that the covariant derivatives anticommute with the supercharges means the supersymmetry transformation of a covariant derivative of a superfield is equal to the covariant derivative of the same supersymmetry transformation of the same superfield. Thus, generalizing the covariant derivative in bosonic geometry which constructs tensors from tensors, the superspace covariant derivative constructs superfields from superfields. Supersymmetric extensions of Minkowski space N = 1 super Minkowski space Perhaps the most studied concrete superspace in physics is super Minkowski space or sometimes written , which is the direct sum of four real bosonic dimensions and four real Grassmann dimensions (also known as fermionic dimensions or spin dimensions). 
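Because the explicit expressions above depend on sign conventions, the following is only a schematic summary of the N = 1 quantum-mechanical superspace just described, written in one common (but not unique) convention; the component names f, ψ, ψ̄ and F are labels introduced here for illustration.

```latex
% Schematic N = 1 supersymmetric quantum mechanics on R^{1|2}, in one possible convention.
% Coordinates (t, \Theta, \Theta^*) with \Theta^2 = (\Theta^*)^2 = 0.
\begin{align}
  \Phi(t,\Theta,\Theta^*) &= f(t) + \Theta\,\psi(t) + \Theta^*\,\bar\psi(t)
                              + \Theta\Theta^*\,F(t)
      && \text{(general superfield: the expansion stops at first order in each Grassmann coordinate)}\\
  Q = \partial_{\Theta} + i\,\Theta^{*}\partial_t, \qquad
  Q^{\dagger} &= \partial_{\Theta^{*}} + i\,\Theta\,\partial_t,
      && \{Q, Q^{\dagger}\} = 2i\,\partial_t = 2H \\
  D = \partial_{\Theta} - i\,\Theta^{*}\partial_t, \qquad
  D^{\dagger} &= \partial_{\Theta^{*}} - i\,\Theta\,\partial_t,
      && \{D, D^{\dagger}\} = -2i\,\partial_t
\end{align}
% The D's anticommute with the Q's and close into the "wrong sign" algebra above,
% so the covariant derivative of a superfield is again a superfield.
```

In this convention the supersymmetry variation of a superfield takes the schematic form δΦ = (εQ + ε*Q†)Φ, up to convention-dependent signs, matching the structure described in the text.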
In supersymmetric quantum field theories one is interested in superspaces which furnish representations of a Lie superalgebra called a supersymmetry algebra. The bosonic part of the supersymmetry algebra is the Poincaré algebra, while the fermionic part is constructed using spinors with Grassmann number valued components. For this reason, in physical applications one considers an action of the supersymmetry algebra on the four fermionic directions of such that they transform as a spinor under the Poincaré subalgebra. In four dimensions there are three distinct irreducible 4-component spinors. There is the Majorana spinor, the left-handed Weyl spinor and the right-handed Weyl spinor. The CPT theorem implies that in a unitary, Poincaré invariant theory, which is a theory in which the S-matrix is a unitary matrix and the same Poincaré generators act on the asymptotic in-states as on the asymptotic out-states, the supersymmetry algebra must contain an equal number of left-handed and right-handed Weyl spinors. However, since each Weyl spinor has four components, this means that if one includes any Weyl spinors one must have 8 fermionic directions. Such a theory is said to have extended supersymmetry, and such models have received a lot of attention. For example, supersymmetric gauge theories with eight supercharges and fundamental matter have been solved by Nathan Seiberg and Edward Witten, see Seiberg–Witten gauge theory. However, in this subsection we are considering the superspace with four fermionic components and so no Weyl spinors are consistent with the CPT theorem. Note: There are many sign conventions in use and this is only one of them. Therefore, the four fermionic directions transform as a Majorana spinor . We can also form a conjugate spinor where is the charge conjugation matrix, which is defined by the property that when it conjugates a gamma matrix, the gamma matrix is negated and transposed. The first equality is the definition of while the second is a consequence of the Majorana spinor condition . The conjugate spinor plays a role similar to that of in the superspace , except that the Majorana condition, as manifested in the above equation, imposes that and are not independent. In particular we may construct the supercharges which satisfy the supersymmetry algebra where is the 4-momentum operator. Again the covariant derivative is defined like the supercharge but with the second term negated and it anticommutes with the supercharges. Thus the covariant derivative of a supermultiplet is another supermultiplet. Extended supersymmetry It is possible to have sets of supercharges with , although this is not possible for all values of . These supercharges generate translations in a total of spin dimensions, hence forming the superspace . In general relativity The word "superspace" is also used in a completely different and unrelated sense, in the book Gravitation by Misner, Thorne and Wheeler. There, it refers to the configuration space of general relativity, and, in particular, the view of gravitation as geometrodynamics, an interpretation of general relativity as a form of dynamical geometry. In modern terms, this particular idea of "superspace" is captured in one of several different formalisms used in solving the Einstein equations in a variety of settings, both theoretical and practical, such as in numerical simulations. This includes primarily the ADM formalism, as well as ideas surrounding the Hamilton–Jacobi–Einstein equation and the Wheeler–DeWitt equation. 
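For orientation, the structure sketched in the super Minkowski subsections above can be written compactly in two-component (Weyl) notation. Conventions for normalizations, metric signature and sigma matrices differ between references, so the following is one common choice rather than the convention of any particular source cited here.

```latex
% Schematic 4D N = 1 super-Poincare structure in two-component (Weyl) notation.
% Normalization and sigma-matrix conventions are an assumption; many variants exist.
\begin{align}
  \text{superspace coordinates:}\quad & (x^{\mu},\ \theta^{\alpha},\ \bar\theta^{\dot\alpha}),
      \qquad \mu = 0,\dots,3,\quad \alpha,\dot\alpha = 1,2 \\
  \{Q_{\alpha}, \bar Q_{\dot\beta}\} &= 2\,(\sigma^{\mu})_{\alpha\dot\beta}\,P_{\mu},
      \qquad \{Q_{\alpha}, Q_{\beta}\} = \{\bar Q_{\dot\alpha}, \bar Q_{\dot\beta}\} = 0
\end{align}
% P_\mu is the 4-momentum operator; a Majorana spinor packages \theta and \bar\theta
% into a single four-component object, which is the form used in the text above.
```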
See also Chiral superspace Harmonic superspace Projective superspace Super Minkowski space Supergroup Lie superalgebra Notes References (Second printing) Geometry Supersymmetry General relativity hu:Szupertér
Superspace
[ "Physics", "Mathematics" ]
2,324
[ "Unsolved problems in physics", "General relativity", "Physics beyond the Standard Model", "Geometry", "Theory of relativity", "Supersymmetry", "Symmetry" ]
432,632
https://en.wikipedia.org/wiki/Supergravity
In theoretical physics, supergravity (supergravity theory; SUGRA for short) is a modern field theory that combines the principles of supersymmetry and general relativity; this is in contrast to non-gravitational supersymmetric theories such as the Minimal Supersymmetric Standard Model. Supergravity is the gauge theory of local supersymmetry. Since the supersymmetry (SUSY) generators form together with the Poincaré algebra a superalgebra, called the super-Poincaré algebra, supersymmetry as a gauge theory makes gravity arise in a natural way. Gravitons Like all covariant approaches to quantum gravity, supergravity contains a spin-2 field whose quantum is the graviton. Supersymmetry requires the graviton field to have a superpartner. This field has spin 3/2 and its quantum is the gravitino. The number of gravitino fields is equal to the number of supersymmetries. History Gauge supersymmetry The first theory of local supersymmetry was proposed by Dick Arnowitt and Pran Nath in 1975 and was called gauge supersymmetry. Supergravity The first model of 4-dimensional supergravity (without this denotation) was formulated by Dmitri Vasilievich Volkov and Vyacheslav A. Soroka in 1973, emphasizing the importance of spontaneous supersymmetry breaking for the possibility of a realistic model. The minimal version of 4-dimensional supergravity (with unbroken local supersymmetry) was constructed in detail in 1976 by Dan Freedman, Sergio Ferrara and Peter van Nieuwenhuizen. In 2019 the three were awarded a special Breakthrough Prize in Fundamental Physics for the discovery. The key issue of whether or not the spin 3/2 field is consistently coupled was resolved in the nearly simultaneous paper, by Deser and Zumino, which independently proposed the minimal 4-dimensional model. It was quickly generalized to many different theories in various numbers of dimensions and involving additional (N) supersymmetries. Supergravity theories with N>1 are usually referred to as extended supergravity (SUEGRA). Some supergravity theories were shown to be related to certain higher-dimensional supergravity theories via dimensional reduction (e.g. N=1, 11-dimensional supergravity is dimensionally reduced on T7 to 4-dimensional, ungauged, N = 8 supergravity). The resulting theories were sometimes referred to as Kaluza–Klein theories as Kaluza and Klein constructed in 1919 a 5-dimensional gravitational theory, that when dimensionally reduced on a circle, its 4-dimensional non-massive modes describe electromagnetism coupled to gravity. mSUGRA mSUGRA means minimal SUper GRAvity. The construction of a realistic model of particle interactions within the N = 1 supergravity framework where supersymmetry (SUSY) breaks by a super Higgs mechanism carried out by Ali Chamseddine, Richard Arnowitt and Pran Nath in 1982. Collectively now known as minimal supergravity Grand Unification Theories (mSUGRA GUT), gravity mediates the breaking of SUSY through the existence of a hidden sector. mSUGRA naturally generates the Soft SUSY breaking terms which are a consequence of the Super Higgs effect. Radiative breaking of electroweak symmetry through Renormalization Group Equations (RGEs) follows as an immediate consequence. 
Due to its predictive power, requiring only four input parameters and a sign to determine the low energy phenomenology from the scale of Grand Unification, its interest is a widely investigated model of particle physics 11D: the maximal SUGRA One of these supergravities, the 11-dimensional theory, generated considerable excitement as the first potential candidate for the theory of everything. This excitement was built on four pillars, two of which have now been largely discredited: Werner Nahm showed 11 dimensions as the largest number of dimensions consistent with a single graviton, and more dimensions will show particles with spins greater than 2. However, if two of these dimensions are time-like, these problems are avoided in 12 dimensions. Itzhak Bars gives this emphasis. In 1981 Ed Witten showed 11 as the smallest number of dimensions big enough to contain the gauge groups of the Standard Model, namely SU(3) for the strong interactions and SU(2) times U(1) for the electroweak interactions. Many techniques exist to embed the standard model gauge group in supergravity in any number of dimensions like the obligatory gauge symmetry in type I and heterotic string theories, and obtained in type II string theory by compactification on certain Calabi–Yau manifolds. The D-branes engineer gauge symmetries too. In 1978 Eugène Cremmer, Bernard Julia and Joël Scherk (CJS) found the classical action for an 11-dimensional supergravity theory. This remains today the only known classical 11-dimensional theory with local supersymmetry and no fields of spin higher than two. Other 11-dimensional theories known and quantum-mechanically inequivalent reduce to the CJS theory when one imposes the classical equations of motion. However, in the mid-1980s Bernard de Wit and Hermann Nicolai found an alternate theory in D=11 Supergravity with Local SU(8) Invariance. While not manifestly Lorentz-invariant, it is in many ways superior, because it dimensionally-reduces to the 4-dimensional theory without recourse to the classical equations of motion. In 1980 Peter Freund and M. A. Rubin showed that compactification from 11 dimensions preserving all the SUSY generators could occur in two ways, leaving only 4 or 7 macroscopic dimensions, the others compact. The noncompact dimensions have to form an anti-de Sitter space. There are many possible compactifications, but the Freund-Rubin compactification's invariance under all of the supersymmetry transformations preserves the action. Finally, the first two results each appeared to establish 11 dimensions, the third result appeared to specify the theory, and the last result explained why the observed universe appears to be four-dimensional. Many of the details of the theory were fleshed out by Peter van Nieuwenhuizen, Sergio Ferrara and Daniel Z. Freedman. The end of the SUGRA era The initial excitement over 11-dimensional supergravity soon waned, as various failings were discovered, and attempts to repair the model failed as well. Problems included: The compact manifolds which were known at the time and which contained the standard model were not compatible with supersymmetry, and could not hold quarks or leptons. One suggestion was to replace the compact dimensions with the 7-sphere, with the symmetry group SO(8), or the squashed 7-sphere, with symmetry group SO(5) times SU(2). Until recently, the physical neutrinos seen in experiments were believed to be massless, and appeared to be left-handed, a phenomenon referred to as the chirality of the Standard Model. 
It was very difficult to construct a chiral fermion from a compactification — the compactified manifold needed to have singularities, but physics near singularities did not begin to be understood until the advent of orbifold conformal field theories in the late 1980s. Supergravity models generically result in an unrealistically large cosmological constant in four dimensions, and that constant is difficult to remove, and so require fine-tuning. This is still a problem today. Quantization of the theory led to quantum field theory gauge anomalies rendering the theory inconsistent. In the intervening years physicists have learned how to cancel these anomalies. Some of these difficulties could be avoided by moving to a 10-dimensional theory involving superstrings. However, by moving to 10 dimensions one loses the sense of uniqueness of the 11-dimensional theory. The core breakthrough for the 10-dimensional theory, known as the first superstring revolution, was a demonstration by Michael B. Green, John H. Schwarz and David Gross that there are only three supergravity models in 10 dimensions which have gauge symmetries and in which all of the gauge and gravitational anomalies cancel. These were theories built on the groups SO(32) and , the direct product of two copies of E8. Today we know that, using D-branes for example, gauge symmetries can be introduced in other 10-dimensional theories as well. The second superstring revolution Initial excitement about the 10-dimensional theories, and the string theories that provide their quantum completion, died by the end of the 1980s. There were too many Calabi–Yaus to compactify on, many more than Yau had estimated, as he admitted in December 2005 at the 23rd International Solvay Conference in Physics. None quite gave the standard model, but it seemed as though one could get close with enough effort in many distinct ways. Plus no one understood the theory beyond the regime of applicability of string perturbation theory. There was a comparatively quiet period at the beginning of the 1990s; however, several important tools were developed. For example, it became apparent that the various superstring theories were related by "string dualities", some of which relate weak string-coupling - perturbative - physics in one model with strong string-coupling - non-perturbative - in another. Then the second superstring revolution occurred. Joseph Polchinski realized that obscure string theory objects, called D-branes, which he discovered six years earlier, equate to stringy versions of the p-branes known in supergravity theories. String theory perturbation didn't restrict these p-branes. Thanks to supersymmetry, p-branes in supergravity gained understanding well beyond the limits of string theory. Armed with this new nonperturbative tool, Edward Witten and many others could show all of the perturbative string theories as descriptions of different states in a single theory that Witten named M-theory. Furthermore, he argued that M-theory's long wavelength limit, i.e. when the quantum wavelength associated to objects in the theory appear much larger than the size of the 11th dimension, needs 11-dimensional supergravity descriptors that fell out of favor with the first superstring revolution 10 years earlier, accompanied by the 2- and 5-branes. Therefore, supergravity comes full circle and uses a common framework in understanding features of string theories, M-theory, and their compactifications to lower spacetime dimensions. 
Relation to superstrings The term "low energy limits" labels some 10-dimensional supergravity theories. These arise as the massless, tree-level approximation of string theories. True effective field theories of string theories, rather than truncations, are rarely available. Due to string dualities, the conjectured 11-dimensional M-theory is required to have 11-dimensional supergravity as a "low energy limit". However, this doesn't necessarily mean that string theory/M-theory is the only possible UV completion of supergravity; supergravity research is useful independent of those relations. 4D N = 1 SUGRA Before we move on to SUGRA proper, let's recapitulate some important details about general relativity. We have a 4D differentiable manifold M with a Spin(3,1) principal bundle over it. This principal bundle represents the local Lorentz symmetry. In addition, we have a vector bundle T over the manifold with the fiber having four real dimensions and transforming as a vector under Spin(3,1). We have an invertible linear map from the tangent bundle TM to T. This map is the vierbein. The local Lorentz symmetry has a gauge connection associated with it, the spin connection. The following discussion will be in superspace notation, as opposed to the component notation, which isn't manifestly covariant under SUSY. There are actually many different versions of SUGRA out there which are inequivalent in the sense that their actions and constraints upon the torsion tensor are different, but ultimately equivalent in that we can always perform a field redefinition of the supervierbeins and spin connection to get from one version to another. In 4D N=1 SUGRA, we have a 4|4 real differentiable supermanifold M, i.e. we have 4 real bosonic dimensions and 4 real fermionic dimensions. As in the nonsupersymmetric case, we have a Spin(3,1) principal bundle over M. We have an R4|4 vector bundle T over M. The fiber of T transforms under the local Lorentz group as follows; the four real bosonic dimensions transform as a vector and the four real fermionic dimensions transform as a Majorana spinor. This Majorana spinor can be reexpressed as a complex left-handed Weyl spinor and its complex conjugate right-handed Weyl spinor (they're not independent of each other). We also have a spin connection as before. We will use the following conventions; the spatial (both bosonic and fermionic) indices will be indicated by M, N, ... . The bosonic spatial indices will be indicated by μ, ν, ..., the left-handed Weyl spatial indices by α, β,..., and the right-handed Weyl spatial indices by , , ... . The indices for the fiber of T will follow a similar notation, except that they will be hatted like this: . See van der Waerden notation for more details. . The supervierbein is denoted by , and the spin connection by . The inverse supervierbein is denoted by . The supervierbein and spin connection are real in the sense that they satisfy the reality conditions where , , and and . The covariant derivative is defined as . The covariant exterior derivative as defined over supermanifolds needs to be super graded. This means that every time we interchange two fermionic indices, we pick up a +1 sign factor, instead of -1. The presence or absence of R symmetries is optional, but if R-symmetry exists, the integrand over the full superspace has to have an R-charge of 0 and the integrand over chiral superspace has to have an R-charge of 2. A chiral superfield X is a superfield which satisfies . 
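A minimal sketch of the chirality condition and of the curved-superspace covariant derivative follows, in one commonly used set of conventions; the index placements and normalizations below are assumptions for illustration rather than the specific conventions of this version of SUGRA.

```latex
% Chiral superfield condition and covariant derivative, schematically.
% E_A{}^M is the inverse supervierbein and \Omega_M the spin-connection term;
% normalizations and index conventions are assumed for illustration only.
\begin{align}
  \bar D_{\dot\alpha}\, X &= 0
      && \text{(flat-superspace form of the chiral constraint)} \\
  \mathcal{D}_{A} &= E_{A}{}^{M}\!\left( \partial_{M} + \Omega_{M} \right)
      && \text{(curved-superspace covariant derivative built from } E \text{ and } \Omega\text{)}
\end{align}
% In the curved case the constraint becomes \bar{\mathcal{D}}_{\dot\alpha} X = 0,
% whose consistency conditions are exactly what is discussed next.
```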
In order for this constraint to be consistent, we require the integrability conditions that for some coefficients c. Unlike nonSUSY GR, the torsion has to be nonzero, at least with respect to the fermionic directions. Already, even in flat superspace, . In one version of SUGRA (but certainly not the only one), we have the following constraints upon the torsion tensor: Here, is a shorthand notation to mean the index runs over either the left or right Weyl spinors. The superdeterminant of the supervierbein, , gives us the volume factor for M. Equivalently, we have the volume 4|4-superform. If we complexify the superdiffeomorphisms, there is a gauge where , and . The resulting chiral superspace has the coordinates x and Θ. R is a scalar valued chiral superfield derivable from the supervielbeins and spin connection. If f is any superfield, is always a chiral superfield. The action for a SUGRA theory with chiral superfields X, is given by where K is the Kähler potential and W is the superpotential, and is the chiral volume factor. Unlike the case for flat superspace, adding a constant to either the Kähler or superpotential is now physical. A constant shift to the Kähler potential changes the effective Planck constant, while a constant shift to the superpotential changes the effective cosmological constant. As the effective Planck constant now depends upon the value of the chiral superfield X, we need to rescale the supervierbeins (a field redefinition) to get a constant Planck constant. This is called the Einstein frame. N = 8 supergravity in 4 dimensions N = 8 supergravity is the most symmetric quantum field theory which involves gravity and a finite number of fields. It can be found from a dimensional reduction of 11D supergravity by making the size of 7 of the dimensions go to zero. It has 8 supersymmetries which is the most any gravitational theory can have since there are 8 half-steps between spin 2 and spin −2. (A graviton has the highest spin in this theory which is a spin 2 particle.) More supersymmetries would mean the particles would have superpartners with spins higher than 2. The only theories with spins higher than 2 which are consistent involve an infinite number of particles (such as string theory and higher-spin theories). Stephen Hawking in his A Brief History of Time speculated that this theory could be the Theory of Everything. However, in later years this was abandoned in favour of string theory. There has been renewed interest in the 21st century with the possibility that this theory may be finite. Higher-dimensional SUGRA Higher-dimensional SUGRA is the higher-dimensional, supersymmetric generalization of general relativity. Supergravity can be formulated in any number of dimensions up to eleven. Higher-dimensional SUGRA focuses upon supergravity in greater than four dimensions. The number of supercharges in a spinor depends on the dimension and the signature of spacetime. The supercharges occur in spinors. Thus the limit on the number of supercharges cannot be satisfied in a spacetime of arbitrary dimension. Some theoretical examples in which this is satisfied are: 12-dimensional two-time theory 11-dimensional maximal supergravity 10-dimensional supergravity theories Type IIA supergravity: N = (1, 1) Type IIB supergravity: N = (2, 0) Type I supergravity: N = (1, 0) 9d supergravity theories Maximal 9d supergravity from 10d T-duality N = 1 Gauged supergravity The supergravity theories that have attracted the most interest contain no spins higher than two. 
This means, in particular, that they do not contain any fields that transform as symmetric tensors of rank higher than two under Lorentz transformations. The consistency of interacting higher spin field theories is, however, presently a field of very active interest. See also General relativity Grand Unified Theory M-theory N = 8 supergravity Quantum mechanics String theory Supermanifold Super-Poincaré algebra Supersymmetry Supermetric References Bibliography Historical General Further reading Dall'Agata, G., Zagermann, M., Supergravity: From First Principles to Modern Applications, Springer, (2021). Freedman, D. Z., Van Proeyen, A., Supergravity, Cambridge University Press, Cambridge, (2012). Lauria, E., Van Proeyen, A., N = 2 Supergravity in D = 4, 5, 6 Dimensions, Springer, (2020). Năstase, H., Introduction to Supergravity and Its Applications, (2024). Nath, P., Supersymmetry, Supergravity, and Unification, Cambridge University Press, Cambridge, (2016) Tanii, Y., Introduction to Supergravity, Springer, (2014). Rausch de Traubenberg, M., Valenzuela, M., A Supergravity Primer, World Scientific Press, Singapore, (2019). Wess, P., Introduction To Supersymmetry And Supergravity, World Scientific Press, Singapore, (1990). Wess, P., Bagger, J., Supersymmetry and Supergravity, Princeton University Press, Princeton, (1992). External links Theories of gravity Supersymmetry Physics beyond the Standard Model
Supergravity
[ "Physics" ]
4,211
[ "Theoretical physics", "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model", "Theories of gravity", "Supersymmetry", "Symmetry" ]
14,251,545
https://en.wikipedia.org/wiki/Coenzyme-B%20sulfoethylthiotransferase
In enzymology, coenzyme-B sulfoethylthiotransferase, also known as methyl-coenzyme M reductase (MCR) or most systematically as 2-(methylthio)ethanesulfonate:N-(7-thioheptanoyl)-3-O-phosphothreonine S-(2-sulfoethyl)thiotransferase, is an enzyme that catalyzes the final step in the formation of methane. It does so by combining the hydrogen donor coenzyme B and the methyl donor coenzyme M. Via this enzyme, most of the natural gas on earth was produced. Ruminants (e.g. cows) produce methane because their rumens contain methanogenic prokaryotes (Archaea) that encode and express the set of genes of this enzymatic complex. The enzyme has two active sites, each occupied by the nickel-containing F430 cofactor. The overall reaction is: methyl-CoM + CoB → CoM-S-S-CoB + methane The two substrates of this enzyme are 2-(methylthio)ethanesulfonate and N-(7-mercaptoheptanoyl)threonine 3-O-phosphate; its two products are CoM-S-S-CoB and methane. 3-Nitrooxypropanol inhibits the enzyme. In some species, the enzyme reacts in reverse (a process called reverse methanogenesis), catalysing the anaerobic oxidation of methane, therefore removing it from the environment. Such organisms are methanotrophs. This enzyme belongs to the family of transferases, specifically those transferring alkylthio groups. Structure Coenzyme-B sulfoethylthiotransferase is a multiprotein complex made up of a pair of identical halves. Each half is made up of three subunits: α, β and γ, also called McrA, McrB and McrG, respectively. References Further reading EC 2.8.4 Enzymes of unknown structure Enzymes Transferases Anaerobic digestion
Coenzyme-B sulfoethylthiotransferase
[ "Chemistry", "Engineering" ]
442
[ "Water technology", "Anaerobic digestion", "Environmental engineering" ]
14,256,181
https://en.wikipedia.org/wiki/Beijing%20Weather%20Modification%20Office
The Beijing Weather Modification Office is a unit of the Beijing Meteorological Bureau tasked with weather control in Beijing and its surrounding areas, including parts of Hebei and Inner Mongolia. The Beijing Weather Modification Office forms part of China's nationwide weather control effort, believed to be the world's largest; it employs 37,000 people nationwide, who seed clouds by firing rockets and shells loaded with silver iodide into them. According to Zhang Qiang, head of the Office, cloud seeding increased precipitation in Beijing by about one-eighth in 2004; nationwide, similar efforts added substantial amounts of rain between 1995 and 2003. The work of the Office is largely aimed at hailstorm prevention or making rain to end droughts; it has also induced precipitation for purposes of firefighting or to counteract the effects of severe dust storms, as it did in the aftermath of one storm in April 2006 which dropped 300,000 tonnes of dust and sand on the city and was believed to have been the largest in five years. Its technology was also used to create snow on New Year's Day in 1997. Other proposed future uses for induced precipitation include lowering temperatures in summer, in hopes of reducing electricity consumption. More prominently, the Office was enlisted by the Chinese government to ensure that the 2008 Summer Olympics were free of rain, by breaking up clouds headed towards the capital and forcing them to drop rain on outlying areas instead. The Office created a snowstorm in November 2009. References Government agencies with year of establishment missing Organizations based in Beijing China Meteorological Administration Weather modification
Beijing Weather Modification Office
[ "Engineering" ]
308
[ "Planetary engineering", "Weather modification" ]
14,257,055
https://en.wikipedia.org/wiki/Conservation%20grazing
Conservation grazing or targeted grazing is the use of semi-feral or domesticated grazing livestock to maintain and increase the biodiversity of natural or semi-natural grasslands, heathlands, wood pasture, wetlands and many other habitats. Conservation grazing is generally less intensive than practices such as prescribed burning, but still needs to be managed to ensure that overgrazing does not occur. The practice has proven to be beneficial in moderation in restoring and maintaining grassland and heathland ecosystems. Conservation or monitored grazing has been implemented into regenerative agriculture programs to restore soil and overall ecosystem health of current working landscapes. The optimal level of grazing and the choice of grazing animal will depend on the goal of conservation. Different levels of grazing, alongside other conservation practices, can be used to induce desired results. History Historically, grazing animals (herbivores) were a crucial part of grassland ecosystems. When grazers are removed, previously grazed lands may show a decline in both the density and the diversity of vegetation, a loss of biodiversity, and an increase in wildfires. The history of the land may help ecologists and conservationists determine the best approach to a conservation project. Historic threats to grasslands began with land conversion to crop fields and working landscapes. As of 2017, approximately 20% of native grazing lands worldwide have been transformed into crops, resulting in a 60% loss of soil carbon. This shift allowed for improper land management techniques and, more recently, has contributed to the spread of woody plants due to a lack of management and to climate change. Overgrazing and trampling of soil and grasslands by human-introduced livestock have led to reduced vegetation cover, increased soil erosion from overexposure, and, in more arid climates, desertification that is intensified by drought. Grazing lands are now the most degraded land use worldwide. Conservation Grazing in Practice Intensive grazing maintains an area as a habitat dominated by grasses and small shrubs, largely preventing ecological succession to forest. Extensive grazing also treats habitats dominated by grasses and small shrubs but does not prevent succession to forest; it only slows it down. Conservation grazing is usually done with extensive grazing because of the ecological disadvantages of intensive grazing. Conservation grazing needs to be monitored closely. Overgrazing may cause erosion, habitat destruction, soil compaction, or reduced biodiversity (species richness). Rambo and Faeth found that the use of vertebrates for grazing of an area increased the species richness of plants by decreasing the abundance of dominant species and increasing the richness of rarer species. This may lead to a more open forest canopy and more room for other plant species to emerge. Regenerative Agriculture and Monitored Grazing Regenerative grazing management aims to return to the natural, historic grazing dynamics between the grazing animals, the land, and the other ecological processes contributing to the targeted ecosystem. By managing the level of grazing, livestock ranchers can take soil health into account, manage erosion, reduce fire risk, contribute to an overall healthier ecosystem and allow grasses to regrow. To lessen the effects of climate change within the agricultural system and encourage resilient farming, soil carbon sequestration, nutrient recycling, and the promotion of biodiversity are crucial. 
This is done by rotating livestock herds through multiple paddocks after a certain amount of time. Monitored grazing plans must be flexible to account for: changes to shape and size of paddock, livestock density, duration, intensity of plant loss, frequency of grazing, and time of year. It is unfeasible for all land to be returned to its historic, natural land use through complete removal of agriculture. Therefore, regenerative agriculture is a technique to restore overgrazed land while continuing to farm. Variability in grazing species The outcome of restoration is dependent on the grazing species. For example, wapiti and horses have a similar grazing frequency to cattle but tend to graze a larger surface area – producing a smaller effect on the land as opposed to cattle. Cattle have been found to be more useful in the restoration of pastures with low species richness, whereas sheep were found useful for the re-establishment of neglected fields. The targeted restoration area will determine the species of grazer ideal for conservation grazing. Dumont and colleagues found in the use of varied breeds of steers that "traditional breeds appeared slightly less selective than commercial breeds", but did not make a significant difference in biodiversity. In this particular study biodiversity was maintained by the same amount by both breed types. Effects on Ecosystem Effects on native and non-native plant species Conservation grazing is a tool used for conserving biodiversity. However, one danger in grazing is the potential for increased invasive species alongside the native biodiversity. A study by Loeser et al. showed that areas of high intensity grazing and grazer removal increased the biomass of nonnative introduced species. Both showed that an intermediate approach is the best method. The nonnatives did demonstrate that they were not as well adapted to the disturbances, such as drought. This indicated that implementing controlled grazing methods would decrease the abundance of nonnatives in those plots that had not been properly managed. Effects of grazing can also depend on the individual plant species and its response to grazing. Plants that are adapted to extensive grazing (such as that done by cattle) will respond quicker and more effectively to grazing than native species that have not had to cope with intense grazing pressure in the past. An experiment done by Kimball and Schiffman showed that grazing increased the cover of some native species but did not decrease the cover of nonnative species. The species diversity of the native plants was able to respond to the grazing and increase diversity. The community would become denser than originally with the increased biodiversity. (However, this may have been simply variance in plots due to the fact that the native and nonnative compositions were of different species between the grazed and ungrazed plots.) Effects on animals Insects and butterflies Degree of grazing has a significant effect on the species richness and abundance of insects in grasslands. Land management in the form of grazing tends to decrease diversity with increased intensity. Kruess and Tscharntke attribute this difference to the increased height of grasses in the ungrazed areas. The study showed that the abundance and diversity of insects (such as butterfly adults, trap-nesting bees and wasps) were increased by increased grass height. However, other insects such as grasshoppers responded better to heterogeneity of the vegetation. 
Vertebrates Grazing can have varied effects on vertebrates. Kuhnert et al. observed that different bird species react in different ways to changes in grazing intensity. Grazing has also been thought to decrease the abundance of vertebrates, such as the prairie dog and the desert tortoise. However, Kazmaier et al. found that moderate grazing by cattle had no effect on the Texas tortoise. Rabbits have been widely discussed due to their influences on land composition. Bell and Watson found that rabbits show grazing preference for different plant species. This preference can alter the composition of a plant community. In some cases, if the preference is for a non-native, invasive plant, rabbit grazing may benefit the community by reducing non-native abundance and creating room for the native plant species to fill. When rabbits graze in moderation they can create a more complex ecosystem, by creating more variable environments that will allow for more predator-competitor relationships between the various organisms. However, besides the effect on wild vegetation, rabbits destroy crops, compete with other herbivores, and can result in extreme ecological damage. Competition can be direct or indirect. The rabbits may specifically eat the competitions target food or it may inhibit the growth of grasses that other species eat. For example, rabbit grazing in the Netherlands inhibits tall grasses from becoming dominant. This in turn enhances the suitability of the pasture for brent goose. However, they may benefit predators that do better in open areas, because the rabbits reduce the amount of vegetation making it easier for those predators to spot their prey. Finally, grazing has demonstrated use in clearing dry brush to reduce the fire hazard of drought-stricken areas. Effect on Ephemeral Wetlands Ephemeral wetlands degradation and loss of biodiversity had, at one point in time, been blamed on mismanaged grazing of both native and non-native ungulates and other grazers. A study done by Jaymee Marty of The Nature Conservancy examined the effects on the vernal pools formed in California when grazers were removed. The results of the short study showed that areas where grazers were removed had a lower diversity of native grasses, invertebrates and vertebrates in the pools, with an increase in non-native grass abundance and distribution in the area. The study also demonstrated reduced reproduction success of individual species in the area, such as the western spadefoot toad and California tiger salamander. Marty argues that this decrease is due to ecosystems adapting to historical changes in grazers and the effects they have. In other words, the historic ecosystem, theoretically, would have responded positively to the removal of cattle grazing, however, the system has adapted to the European introduced species and now may require them for maintained diversity. In another study performed by Pyke and Marty, measurements showed that on average, vernal ponds on grazed land pooled longer than ungrazed areas and soil was more resistant to water absorption in the grazed areas. Targeted grazing A recent synonym or near-synonym for conservation grazing is "targeted grazing", a term introduced in a 2006 handbook in distinction to prescribed grazing, which the USDA National Resource Conservation Service was using to describe all managed grazing. Targeted grazing is often used in combination with other techniques such as burning, herbicide applications or land clearing. 
Targeted grazing can rival traditional herbicide and mechanical control methods for invasive plants from invasive forb to juniper trees, and has been used to reduce fine fuels in fire prone areas. Principles The most important skill for developing a targeted grazing program is patience and commitment. However, understanding livestock and plant responses to grazing are critical in developing a targeted grazing program. The program should have a clear statement of the kind of animal, timing and rate of grazing necessary to suppress troublesome plants and maintain a healthy landscape. The grazing application should 1) cause significant damage to the target plants 2) limit damage to desired vegetation and 3) be integrated with other control strategies. First, to cause significant damage to targeted plants requires understanding when the target plant is most susceptible to grazing damage and when they are most palatable to livestock. Target plant palatability depends on the grazing animals inherited and developed plant preferences (i.e. the shape of sheep and goat's mouths make them well suited for eating broad leaf weeds). Goats are also well designed for eating shrubs. Second, target plants often exist in a plant community with many desirable plants. The challenge is to select the correct animal, grazing time and grazing intensity to maximize the impact on the target plant while reducing it on the associated plant community. Finally, management objectives, target plant species, weather, topography, plant physiology, and associated plant communities are among the many variables that can determine treatment type and duration. Well-developed targeted grazing objectives and an adaptive management plan that takes into account other control strategies need to be in place. See also Epping Forest New Forest Oostvaardersplassen Wood-pasture hypothesis Milovice Nature Reserve Holistic management (agriculture) References External links Targeted Grazing YouTube channel Society for Range Management Targeted Grazing Committee Permaculture concepts Habitats Habitat management equipment and methods Ecological restoration Grasslands Livestock Grazing
Conservation grazing
[ "Chemistry", "Engineering", "Biology" ]
2,323
[ "Grasslands", "Ecosystems", "Ecological restoration", "Environmental engineering" ]
14,257,881
https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20ACE%20inhibitors
The discovery of an orally inactive peptide from snake venom established the important role of angiotensin converting enzyme (ACE) inhibitors in regulating blood pressure. This led to the development of captopril, the first ACE inhibitor. When the adverse effects of captopril became apparent new derivates were designed. Then after the discovery of two active sites of ACE: N-domain and C-domain, the development of domain-specific ACE inhibitors began. Development of first generation ACE inhibitors The development of the nonapeptide teprotide (Glu-Trp-Pro-Arg-Pro-Gln-Ile-Pro-Pro), which was originally isolated from the venom of the Brazilian pit viper Bothrops jararaca, greatly clarified the importance of ACE in hypertension. However, its lack of oral activity limited its therapeutic utility. L-benzylsuccinic acid (2(R)-benzyl-3-carboxypropionic acid) was described to be the most potent inhibitor of carboxypeptidase A in the early 1980s. The authors referred to it as a by-product analog and it was proposed to bind to the active site of carboxypeptidase A via succinyl carboxyl group and a carbonyl group. Their findings established that L-benzylsuccinic acid is bound at a single locus at the active site of carboxypeptidase A. The authors discussed but dismissed the suggestion that the carboxylate function might bind to the catalytically functional zinc ion present at the active site. Later however this was found to be the case. Drug design of captopril (sulfhydrils) Over 2000 compounds were tested randomly in a guinea pig ileum test and succinyl-L-proline was found to have the properties of a specific ACE inhibitor. It showed inhibitory effect of angiotensin I and bradykinin without having any effects on angiotensin II. Then researchers started to search for a model that would explain inhibition on the basis of specific drug interactions of compounds with the active site of ACE. Previous studies with substrates and inhibitors of ACE suggested that it was a zinc-containing metalloprotein and a carboxypeptidase similar to pancreatic carboxypeptidase A. However ACE releases dipeptides rather than single amino acids from the C-terminus of the peptide substrates. And it was assumed that both their mechanism of action and their active site might be similar. A positively charged Arg145 at the active site was thought to bind with the negatively charged C-terminal carboxyl group of the peptide substrate. It was also proposed that ACE binds by hydrogen bonding to the terminal, non scissile, peptide bond of the substrate. But since ACE is a dipeptide carboxypeptidase, unlike carboxypeptidase A, the distance between the cationic carboxyl-binding site and the zinc atom should be greater, by approximately the length of one amino acid residue. Proline was chosen as the amino acid moiety because of its presence as the carboxy terminal amino acid residue in teprotide and other ACE inhibitors found in snake venoms. Eleven other amino acids were tested but none of them were more inhibitory. So it was proposed that succinyl amino acid derivative should be an ACE inhibitor and succinyl-L-proline was found to be such an inhibitor. It was also known that the nature of penultimate amino acid residue of a peptide substrate for ACE influences binding to the enzyme. The acyl group of the carboxyalkanoyl amino acid binds the zinc ion of the enzyme and occupies the same position at the active site of ACE as the penultimate. Therefore, the substituent of the acyl group might also influence binding to the enzyme. 
A 2-methyl substituent with D configuration was found to enhance the inhibitory potency by about 15 fold of succinyl-L-proline. Then the search for a better zinc-binding group started. Replacement of the succinyl carboxyl group by nitrogen-containing functionalities (amine, amide or guanidine) did not enhance inhibitory activity. However a potency breakthrough was achieved by the replacement of the carboxyl group with a sulfhydryl function (SH), a group with greater affinity for the enzyme bound zinc ion. This yielded a potent inhibitor that was 1000 times more potent than succinyl-L-proline. The optimal acyl chain length for mercaptoalkanoyl derivates of proline was found to be 3-mercaptopropanoyl-L-proline, 5 times greater than that of 2-mercaptoalkanoyl derivates and 50 times greater than that of 4-mercaptoalkanoyl derivates. So the D-3-mercapto-2-methylpropanoyl-L-proline or Captopril was the most potent inhibitor. Later, the researchers compared a few mercaptoacyl amino acid inhibitors and concluded that the binding of the inhibitor to the enzyme involved a hydrogen bond between a donor site on the enzyme and the oxygen of the amide carbonyl, much like predicted for the substrates. Drug design of other first generation ACE inhibitors The most common adverse effects of Captopril, skin rash and loss of taste, are the same as caused by mercapto-containing penicillamine. Therefore, a group of researchers aimed at finding potent, selective ACE inhibitors that would not contain a mercapto (SH) function and would have a weaker chelating function. They returned to work with carboxyl compounds and started working with substituted N-carboxymethyl-dipeptides as a general structure (R-CHCOOH-A1-A2). According to previous research they assumed that cyclic imino acids would result in good potency if substituted on the carboxyl terminus of the dipeptide. Therefore, substituting A2 with proline gave good results. They also noted that according to the enzyme's specificity imino acids in the position next to the carboxyl terminus would not give a potent compound. By substituting R and A1 groups with hydrophobic and basic residues would give a potent compound. By substituting –NH in the general structure resulted in loss of potency which is consistent to the enzyme's need for a –NH in corresponding position on the substrates. The results were 2 active inhibitors: Enalaprilat and Lisinopril. These compounds both have phenylalanine in R position which occupies the S1 groove in the enzyme. The result was thus these two new, potent tripeptide analogues with zinc-coordinating carboxyl group: Enalaprilat and Lisinopril. Discovery of 2 active sites: C-domain and N-domain Most of the ACE inhibitors on the market today are non-selective towards the two active sites of ACE because their binding to the enzyme is based mostly on the strong fundamental interaction between the zinc atom in the enzyme and the strong chelating group on the inhibitor. The resolution of the 3D structure of germinal ACE, which has only one active site that corresponds with C-domain of the somatic ACE, offers a structural framework for structure-based design approach. Although N- and C-domain have comparable rates in vitro of ACE hydrolyzing, it seems like that in vivo the C-domain is mainly responsible for regulating blood pressure. This indicates that C-domain selective inhibitors could have similar profile to that of a current non-selective inhibitors. 
Angiotensin I is mainly hydrolyzed by the C-domain in vivo but bradykinin is hydrolyzed by both active sites. Thus, by developing a C-domain selective inhibitor would permit some degradation of bradykinin by the N-domain and this degradation could be enough to prevent accumulation of excess bradykinin which has been observed during attacks of angioedema. C-domain selective inhibition could possibly result in specialized control of blood pressure with less vasodilator-related adverse effects. N-domain selective inhibitors on the other hand give the possibility of opening up novel therapeutic areas. Apparently, the N-domain does not have a big role in controlling blood pressure but it seems to be the principal metabolizing enzyme for AcSDKP, a natural haemoregulatory hormone. Drug design of Keto-ACE and its ketomethylene derivatives It was found that other carbonyl-containing groups such as ketones could substitute for the amide bond that links Phe and Gly in ACE inhibitors. Keto-ACE, first described in 1980, has emerged as a potential lead compound for C-domain specific ACE inhibitors. Keto-ACE, a tripeptide analogue of Phe-Gly-Pro, contains a bulky P1 and P2 benzyl ring and was shown to inhibit the hydrolysis of angiotensin I and bradykinin via the C-domain. The synthesis of keto-ACE analogues with Trp or Phe at the P2’ position led to a marked increase in C-domain selectivity, but the introduction of an aliphatic P2 group conferred N-domain selectivity. Inhibitory potency may further be enhanced by the incorporation of hydrophobic substituent, such as phenyl group at the P1’ position. P1’ substituents with S-stereochemistry have also been shown to possess greater inhibitory potency than their R-counterparts. Keto-ACE was used as the basis for the design of ketomethylene derivates. Its analogues contain a ketomethylene isostere replacement at the scissile bond that is believed to mimic the tetrahedron transition state of the proteolytic reaction at the active site. The focus was on a simple tripeptide Phe-Ala-Pro, which in earlier enzyme assays has shown inhibition activity. Replacement of alanine with glycin gave a tripeptide with 1/14th of the inhibition activity of Phe-Ala-Pro. The benzoylated derivative of Phe-Gly-Pro, Bz-Phe-Gly-Pro, was twice as active. To reduce the peptidic nature of ketomethylene inhibitors the P1’ and P2’ substituent may be cyclized to form a lactam, where there is a correlation between the inhibitory potency and the ring size. In 2001 it was postulated that a substitution α to nitrogen and making of 3-methyl-substituted analog of A58365A, a pyridone acid isolated from the fermentation broth of the bacterium Streptomyces chromofuscus with ACE inhibitory activity, might influence the level of biological activity by steric or hydrophobic effect, and/or by preventing reactions at C3. It was also noticed during the synthetic work on A58365A that potential precursors were sensitive to oxidation of the five-membered ring and so the 3-methyl analogue might be more stable in this respect. Drug design of silanediol The fact that carbon and silicone have similar, but also dissimilar, characteristics triggered the interest in substituting carbon with silanediol as a central, zinc chelating group. Silicone forms a dialkylsilanediol compound that is sufficiently hindered so the formation of a siloxane polymer does not occur. Silanediols are more stable than carbon diols so they are expected to have longer half-life. 
Silanediols are also neutral (not ionized) at physiological pH. Four stereoisomers of a Phe-Ala silanediol were compared to ketone-based inhibitors, and the silanediols were found to be fourfold less potent than the ketone analogue, because silanediols are weaker zinc chelators than ketones. Replacement of the silanediol with a methylsilano group gave little enzyme inhibition, confirming that the silanediol group interacts with ACE as a transition-state analogue, in a manner similar to that of the ketone. Replacing the benzyl group of the silanediol with an i-butyl group gives a weaker ACE inhibitor, while introduction of a hydrophobic methylphenyl group gives slightly more potency than an analogue with a tert-butyl group at P1, suggesting that the methylphenyl group gives better S1 recognition than a tert-butyl group. Phosphinic peptides Phosphinic peptides are pseudo-peptides in which a phosphinic acid bond (PO2-CH-) has replaced a peptide bond in the peptide analogue sequence. To some extent the chemical structure of phosphinic peptides resembles that of the intermediates produced during hydrolysis of peptides by proteolytic enzymes. The hypothesis has been made that these pseudo-peptides mimic the structure of the enzyme substrates in their transition state, and crystallography of zinc proteases in complex with phosphinic peptides supports that hypothesis. Drug design of RXP 407 RXP 407 is the first N-domain selective phosphinic peptide and was discovered by screening phosphinic peptide libraries. Before the discovery of RXP 407 it had long been claimed that a free C-terminal carboxylate group in the P2’ position was essential to the potency of ACE inhibitors, which may have delayed the discovery of N-domain selective ACE inhibitors. In the work that led to RXP 407, researchers examined phosphinic peptides of three different general formulae, each containing two variable amino acids; only one of these general formulae showed potent inhibition (Ac-Yaa-Pheψ(PO2-CH2)Ala-Yaa’-NH2). Peptide mixtures were made, substituting Yaa and Yaa’ with different amino acids, to establish whether a potent inhibitor could be found that inhibited either the N-domain or the C-domain of the enzyme. The result was that the compound Ac-Asp(L)-Pheψ(PO2-CH2)(L)Ala-Ala-NH2 actively inhibited the N-domain and was given the name RXP 407. Structure-function studies showed that the C-terminal carboxamide group played a crucial role in the selectivity for the N-domain of ACE. Additionally, the N-acetyl group and the aspartic side chain in the P2 position aid in the N-domain selectivity of the inhibitor. These features make the inhibitor inaccessible to the C-domain but give good potency for the N-domain, leading to a difference in inhibitory potency between the two active sites of three orders of magnitude. These results also indicate that the N-domain possesses a broader selectivity than the C-domain. Another difference between the older ACE inhibitors and RXP 407 is the molecular size of the compound: the older ACE inhibitors interact mostly with the S1’, S2’ and S1 subsites, whereas RXP 407 interacts in addition with the S2 subsite. This is also important for the selectivity of the inhibitor, since the aspartic side chain and N-acetyl group are located in the P2 position. Drug design of RXPA 380 RXPA380 was the first inhibitor highly selective for the C-domain of ACE; it has the formula Phe-Phe-Pro-Trp. 
The development of this compound was built on research showing that some bradykinin-potentiating peptides were selective for the C-domain and that all of them had several prolines in their structure. These observations led the researchers to synthesize phosphinic peptides containing a proline residue in the P1’ position, and evaluation of these compounds led to the discovery of RXPA380. To study the roles of the residues of RXPA380, the researchers made seven analogues of it. All of the compounds were obtained as mixtures of either two or four diastereoisomers, but all were easily resolved and in each case only one diastereoisomer was potent. This is consistent with the initial modeling studies of RXPA380, which showed that only one diastereomer could be accommodated in the active site of germinal ACE. Analogues in which the pseudo-proline or tryptophan residue had been substituted showed less selectivity than RXPA380, probably because these two analogues have more potency toward the N-domain than RXPA380 does. Substituting both of these residues gives high potency but no selectivity. This shows that the pseudo-proline and tryptophan residues are well accommodated in the C-domain but not in the N-domain. Two further analogues retaining both pseudo-proline and tryptophan but lacking the pseudo-phenylalanine residue in the P1 position showed low potency for the N-domain, similar to RXPA380, supporting the significant role of these two residues in C-domain selectivity. These two analogues also have less potency for the C-domain, which shows that the C-domain prefers a pseudo-phenylalanine group in the P1 position. Modeling of the RXPA380-ACE complex showed that the pseudo-proline residue of the inhibitor was surrounded by amino acids similar to those of the N-domain, so interactions with the S2’ subsite might not be responsible for the selectivity of RXPA380. Seven of the 12 amino acids surrounding the tryptophan are the same in the C- and N-domains; the biggest difference is that two bulky, hydrophobic amino acids in the C-domain are replaced by two smaller, polar amino acids in the N-domain. This indicates that the low potency of RXPA380 for the N-domain is not because the S2’ cavity cannot accommodate the tryptophan side chain, but rather because interactions that the side chain makes with amino acids of the C-domain are missing in the N-domain. Based on the proximity between the tryptophan side chain and Asp1029, there may also be a hydrogen bond between the carboxylate of Asp1029 and the indole NH in the C-domain, an interaction that is much weaker in the N-domain. References ACE inhibitors Drug discovery
Discovery and development of ACE inhibitors
[ "Chemistry", "Biology" ]
3,873
[ "Drug discovery", "Life sciences industry", "Medicinal chemistry" ]
14,260,512
https://en.wikipedia.org/wiki/An%20Exceptionally%20Simple%20Theory%20of%20Everything
"An Exceptionally Simple Theory of Everything" is a physics preprint proposing a basis for a unified field theory, often referred to as "E8 Theory", which attempts to describe all known fundamental interactions in physics and to stand as a possible theory of everything. The paper was posted to the physics arXiv by Antony Garrett Lisi on November 6, 2007, and was not submitted to a peer-reviewed scientific journal. The title is a pun on the algebra used, the Lie algebra of the largest "simple", "exceptional" Lie group, E8. The paper's goal is to describe how the combined structure and dynamics of all gravitational and Standard Model particle fields are part of the E8 Lie algebra. The theory is presented as an extension of the grand unified theory program, incorporating gravity and fermions. The theory received a flurry of media coverage, but was also met with widespread skepticism. Scientific American reported in March 2008 that the theory was being "largely but not entirely ignored" by the mainstream physics community, with a few physicists picking up the work to develop it further. In July 2009, Jacques Distler and Skip Garibaldi published a critical paper in Communications in Mathematical Physics called "There is no 'Theory of Everything' inside E8", arguing that Lisi's theory, and a large class of related models, cannot work. Distler and Garibaldi offer a direct proof that it is impossible to embed all three generations of fermions in E8, or to obtain even one generation of the Standard Model without the presence of additional particles that do not exist in the physical world. Overview The goal of E8 Theory is to describe all elementary particles and their interactions, including gravitation, as quantum excitations of a single Lie group geometry—specifically, excitations of the noncompact quaternionic real form of the largest simple exceptional Lie group, E8. A Lie group, such as a one-dimensional circle, may be understood as a smooth manifold with a fixed, highly symmetric geometry. Larger Lie groups, as higher-dimensional manifolds, may be imagined as smooth surfaces composed of many circles (and hyperbolas) twisting around one another. At each point in a N-dimensional Lie group there can be N different orthogonal circles, tangent to N different orthogonal directions in the Lie group, spanning the N-dimensional Lie algebra of the Lie group. For a Lie group of rank R, one can choose at most R orthogonal circles that do not twist around each other, and so form a maximal torus within the Lie group, corresponding to a collection of R mutually-commuting Lie algebra generators, spanning a Cartan subalgebra. Each elementary particle state can be thought of as a different orthogonal direction, having an integral number of twists around each of the R directions of a chosen maximal torus. These R twist numbers (each multiplied by a scaling factor) are the R different kinds of elementary charge that each particle has. Mathematically, these charges are eigenvalues of the Cartan subalgebra generators, and are called roots or weights of a representation. In the Standard Model of particle physics, each different kind of elementary particle has four different charges, corresponding to twists along directions of a four-dimensional maximal torus in the twelve-dimensional Standard Model Lie group, SU(3)×SU(2)×U(1). 
In grand unified theories (GUTs), the Standard Model Lie group is considered as a subgroup of a higher-dimensional Lie group, such as of 24-dimensional SU(5) in the Georgi–Glashow model or of 45-dimensional Spin(10) in the SO(10) model. Since there is a different elementary particle for each dimension of the Lie group, these theories contain additional particles beyond the content of the Standard Model. In E8 Theory's current state, it is not possible to calculate masses for the existing or predicted particles. Lisi states the theory is young and incomplete, requiring a better understanding of the three fermion generations and their masses, and places a low confidence in its predictions. However, the discovery of new particles that do not fit in Lisi's classification, such as superpartners or new fermions, would fall outside the model and falsify the theory. As of 2021, none of the particles predicted by any version of E8 Theory have been detected. History Before writing his 2007 paper, Lisi discussed his work on a Foundational Questions Institute (FQXi) forum, at an FQXi conference, and for an FQXi article. Lisi gave his first talk on E8 Theory at the Loops '07 conference in Morelia, Mexico, soon followed by a talk at the Perimeter Institute. John Baez commented on Lisi's work in his column This Week's Finds in Mathematical Physics, finding the idea intriguing but ending on the cautionary note that it might not be "mathematically natural to use this method to combine bosons and fermions". Lisi's arXiv preprint, "An Exceptionally Simple Theory of Everything", appeared on November 6, 2007, and immediately attracted attention. Lisi made a further presentation for the International Loop Quantum Gravity Seminar on November 13, 2007, and responded to press inquiries on an FQXi forum. He presented his work at the TED Conference on February 28, 2008. Numerous news sites reported on the new theory in 2007 and 2008, noting Lisi's personal history and the controversy in the physics community. The first mainstream and scientific press coverage began with articles in The Daily Telegraph and New Scientist, with articles soon following in many other newspapers and magazines. Lisi's paper spawned a variety of reactions and debates across various physics blogs and online discussion groups. The first to comment was Sabine Hossenfelder, summarizing the paper and noting the lack of a dynamical symmetry-breaking mechanism. Peter Woit commented, "I'm glad to see someone pursuing these ideas, even if they haven't come up with solutions to the underlying problems". The group blog The n-Category Café hosted some of the more technical discussions. Mathematician Bertram Kostant discussed the background of Lisi's work in a colloquium presentation at UC Riverside. On his blog, Musings, Jacques Distler offered one of the strongest criticisms of Lisi's approach, claiming to demonstrate that, unlike in the Standard Model, Lisi's model is nonchiral — consisting of a generation and an anti-generation — and to prove that any alternative embedding in E8 must be similarly nonchiral. These arguments were distilled in a paper written jointly with Skip Garibaldi, "There is no 'Theory of Everything' inside E8", published in Communications in Mathematical Physics. In this paper, Distler and Garibaldi offer a proof that it is impossible to embed all three generations of fermions in E8, or to obtain even the one-generation Standard Model. 
In response, Lisi argued that Distler and Garibaldi made unnecessary assumptions about how the embedding needs to happen. Addressing the one generation case, in June 2010 Lisi posted a new paper on E8 Theory, "An Explicit Embedding of Gravity and the Standard Model in E8", eventually published in a conference proceedings, describing how the algebra of gravity and the Standard Model with one generation of fermions embeds in the E8 Lie algebra explicitly using matrix representations. When this embedding is done, Lisi agrees that there is an antigeneration of fermions (also known as "mirror fermions") remaining in E8; but while Distler and Garibaldi state that these mirror fermions make the theory nonchiral, Lisi states that these mirror fermions might have high masses, making the theory chiral, or that they might be related to the other generations. "The explanation for the existence of three generations of fermions, all with the same apparent algebraic structure, remains largely a mystery," Lisi wrote. Some follow-ups to Lisi's original preprint have been published in peer-reviewed journals. Lee Smolin's "The Plebanski action extended to a unification of gravity and Yang–Mills theory" proposes a symmetry-breaking mechanism to go from an E8 symmetric action to Lisi's action for the Standard Model and gravity. Roberto Percacci's "Mixing internal and spacetime transformations: some examples and counterexamples" addresses a general loophole in the Coleman–Mandula theorem also thought to work in E8 Theory. Percacci and Fabrizio Nesti's "Chirality in unified theories of gravity" confirms the embedding of the algebra of gravitational and Standard Model forces acting on a generation of fermions in spin(3,11) + 64+, mentioning that Lisi's "ambitious attempt to unify all known fields into a single representation of E8 stumbled into chirality issues". In a joint paper with Lee Smolin and Simone Speziale, published in Journal of Physics A, Lisi proposed a new action and symmetry-breaking mechanism. In 2008, FQXi awarded Lisi a grant for further development of E8 Theory. In September 2010, Scientific American reported on a conference inspired by Lisi's work. Shortly thereafter, they published a feature article on E8 Theory, "A Geometric Theory of Everything", written by Lisi and James Owen Weatherall. In December 2011, in a paper for a special issue of the journal Foundations of Physics, Michael Duff argued against Lisi's theory and the attention it has received in the popular press. Duff states that Lisi's paper was incorrect, citing Distler and Garibaldi's proof, and criticizes the press for giving Lisi uncritical attention simply because of his "outsider" image. References Theories of gravity Loop quantum gravity Standard Model Lie groups Hypothetical elementary particles Working papers fr:Antony Garrett Lisi#Une théorie du tout exceptionnellement simple
An Exceptionally Simple Theory of Everything
[ "Physics", "Mathematics" ]
2,097
[ "Standard Model", "Lie groups", "Mathematical structures", "Theoretical physics", "Unsolved problems in physics", "Algebraic structures", "Particle physics", "Theories of gravity", "Hypothetical elementary particles", "Physics beyond the Standard Model" ]
12,577,185
https://en.wikipedia.org/wiki/PILATUS%20%28detector%29
PILATUS is the name of a series of x-ray detectors originally developed by the Paul Scherrer Institute at the Swiss Light Source and further developed and commercialized by DECTRIS. The PILATUS detectors are based on hybrid photon counting (HPC) technology, by which X-rays are converted to electrical signals by the photoelectric effect in a semiconductor sensor layer—either silicon or cadmium telluride—which is subject to a substantial bias voltage. The electric signals are counted directly by a series of cells in an ASIC bonded to the sensor. Each cell—or pixel—is a complete detector in itself, equipped with an amplifier, discriminator and counter circuit. This is possible thanks to contemporary CMOS integrated circuit technology. The direct detection of single photons and the accurate determination of scattering and diffraction intensities over a wide dynamic range have resulted in PILATUS detectors becoming a standard at most synchrotron beamlines and being used for a large variety of X-ray applications, including: small-angle scattering, coherent scattering, X-ray powder diffraction and spectroscopy. History The first large-area PILATUS detector was developed at PSI in 2003 as a project stemming from the development of pixel detectors for the CMS experiment at CERN. It became the first HPC detector to be widely used at synchrotron beamlines around the world. The second generation PILATUS2 systems represented a major technological improvement, featuring a pixel size of 172×172μm, a counter depth of 20 bits and a radiation-tolerant design, necessary for operation with the intense X-ray beams at synchrotrons. In 2006, PILATUS2 was commercialized by DECTRIS. The field of protein crystallography rapidly benefited from the short readout time and noise free signal acquisition of the detector, since it substantially reduced the time required to collect data. The third generation PILATUS3, introduced in 2012, features instant-retrigger technology, which allows for even higher photon counting rates than its predecessors. References Detectors X-ray instrumentation X-ray equipment manufacturers Ionising radiation detectors X-ray crystallography
PILATUS (detector)
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
446
[ "Radioactive contamination", "X-ray instrumentation", "Measuring instruments", "Ionising radiation detectors", "Crystallography", "X-ray crystallography" ]
12,577,562
https://en.wikipedia.org/wiki/Bioadhesive
Bioadhesives are natural polymeric materials that act as adhesives. The term is sometimes used more loosely to describe a glue formed synthetically from biological monomers such as sugars, or to mean a synthetic material designed to adhere to biological tissue. Bioadhesives may consist of a variety of substances, but proteins and carbohydrates feature prominently. Proteins such as gelatin and carbohydrates such as starch have been used as general-purpose glues for many years, but typically their performance shortcomings have seen them replaced by synthetic alternatives. Highly effective adhesives found in the natural world are currently under investigation. For example, bioadhesives secreted by microbes and by marine molluscs and crustaceans are being researched with a view to biomimicry. Furthermore, thiolation of proteins and carbohydrates enables these polymers (thiomers) to adhere covalently, especially to cysteine-rich subdomains of proteins such as keratins or mucus glycoproteins, via disulfide bond formation. Thiolated chitosan and thiolated hyaluronic acid are used as bioadhesives in various medicinal products. Bioadhesives in nature Organisms may secrete bioadhesives for use in attachment, construction and obstruction, as well as in predation and defense. Examples include their use for: Colonization of surfaces (e.g. bacteria, algae, fungi, mussels, barnacles, rotifers) Mussels' byssal threads Tube building by polychaete worms, which live in underwater mounds Insect egg, larval or pupal attachment to surfaces (vegetation, rocks), and insect mating plugs Host attachment by blood-feeding ticks Nest-building by some insects, and also by some fish (e.g. the three-spined stickleback) Defense by Notaden frogs and by sea cucumbers Prey capture in spider webs and by velvet worms Some bioadhesives are very strong. For example, adult barnacles achieve pull-off forces as high as 2 MPa (2 N/mm2). A similarly strong, rapidly adhering glue - which contains 171 different proteins and can adhere to wet, moist and impure surfaces - is produced by the limpet species Patella vulgata; this adhesive material is of particular research interest for the development of surgical adhesives and several other applications. Silk dope can also be used as a glue by arachnids and insects. Polyphenolic proteins The small family of proteins sometimes referred to as polyphenolic proteins is produced by some marine invertebrates such as the blue mussel, Mytilus edulis, by some algae, and by the polychaete Phragmatopoma californica. These proteins contain a high level of a post-translationally modified—oxidized—form of tyrosine, L-3,4-dihydroxyphenylalanine (levodopa, L-DOPA), as well as the disulfide (oxidized) form of cysteine (cystine). In the zebra mussel (Dreissena polymorpha), two such proteins, Dpfp-1 and Dpfp-2, localize in the juncture between the byssus threads and the adhesive plaque. The presence of these proteins appears, generally, to contribute to stiffening of the materials functioning as bioadhesives. The dihydroxyphenylalanine moiety arises from the action of a tyrosine hydroxylase-type enzyme; in vitro, it has been shown that the proteins can be cross-linked (polymerized) using a mushroom tyrosinase. Temporary adhesion Organisms such as limpets and sea stars use suction and mucus-like slimes to create Stefan adhesion, which makes pull-off much harder than lateral drag; this allows both attachment and mobility. 
Spores, embryos and juvenile forms may use temporary adhesives (often glycoproteins) to secure their initial attachment to surfaces favorable for colonization. Tacky and elastic secretions that act as pressure-sensitive adhesives, forming immediate attachments on contact, are preferable in the context of self-defense and predation. Molecular mechanisms include non-covalent interactions and polymer chain entanglement. Many biopolymers – proteins, carbohydrates, glycoproteins, and mucopolysaccharides – may be used to form hydrogels that contribute to temporary adhesion. Permanent adhesion Many permanent bioadhesives (e.g., the oothecal foam of the mantis) are generated by a "mix to activate" process that involves hardening via covalent cross-linking. On non-polar surfaces the adhesive mechanisms may include van der Waals forces, whereas on polar surfaces mechanisms such as hydrogen bonding and binding to (or forming bridges via) metal cations may allow higher sticking forces to be achieved. Microorganisms use acidic polysaccharides (molecular mass around 100 000 Da). Marine bacteria use carbohydrate exopolymers to achieve bond strengths to glass of up to 500 000 N/m2. Marine invertebrates commonly employ protein-based glues for irreversible attachment: some mussels achieve 800 000 N/m2 on polar surfaces and 30 000 N/m2 on non-polar surfaces. These numbers depend on the environment; mussels in high-predation environments show increased attachment to substrates, and predators may need 140% more force to dislodge them. Some algae and marine invertebrates use lecproteins that contain L-DOPA to effect adhesion. Proteins in the oothecal foam of the mantis are cross-linked covalently by small molecules related to L-DOPA via a tanning reaction that is catalysed by catechol oxidase or polyphenol oxidase enzymes. L-DOPA is a tyrosine residue that bears an additional hydroxyl group. The twin hydroxyl groups in each side-chain compete well with water for binding to surfaces, form polar attachments via hydrogen bonds, and chelate the metals in mineral surfaces. The Fe(L-DOPA3) complex can itself account for much cross-linking and cohesion in mussel plaque, but in addition the iron catalyses oxidation of the L-DOPA to reactive quinone free radicals, which go on to form covalent bonds. Applications Bioadhesives are of commercial interest because they tend to be biocompatible, i.e. useful for biomedical applications involving skin or other body tissue. Some work in wet environments and under water, while others can stick to low-surface-energy (non-polar) surfaces like plastic. In recent years, the synthetic adhesives industry has been affected by environmental concerns and health and safety issues relating to hazardous ingredients, volatile organic compound emissions, and difficulties in recycling or remediating adhesives derived from petrochemical feedstocks. Rising oil prices may also stimulate commercial interest in biological alternatives to synthetic adhesives. Shellac is an early example of a bioadhesive put to practical use. 
Additional examples now exist, with others in development: Commodity wood adhesive based on a bacterial exopolysaccharide USB PRF/Soy 2000, a commodity wood adhesive that is 50% soy hydrolysate and excels at finger-jointing green lumber Mussel adhesive proteins can assist in attaching cells to plastic surfaces in laboratory cell and tissue culture experiments (see External Links) The Notaden frog glue is under development for biomedical uses, e.g. as a surgical glue for orthopedic applications or as a hemostat Mucosal drug delivery applications. For example, films of mussel adhesive protein give comparable mucoadhesion to polycarbophil, a synthetic hydrogel used to achieve effective drug delivery at low drug doses. An increased residence time through adhesion to the mucosal surface, such as in the eye or the nose can lead to an improved absorption of the drug. Long-duration continuous imaging of diverse organs (via a wearable bioadhesive stretchable high-resolution ultrasound imaging patch, potentially enabling novel diagnostic and monitoring tools) Several commercial methods of production are being researched: Direct chemical synthesis, e.g. incorporation of L-DOPA groups in synthetic polymers Fermentation of transgenic bacteria or yeasts that express bioadhesive protein genes Farming of natural organisms (small and large) that secrete bioadhesive materials Mucoadhesion A more specific term than bioadhesion is mucoadhesion. Most mucosal surfaces such as in the gut or nose are covered by a layer of mucus. Adhesion of a matter to this layer is hence called mucoadhesion. Mucoadhesive agents are usually polymers containing hydrogen bonding groups that can be used in wet formulations or in dry powders for drug delivery purposes. The mechanisms behind mucoadhesion have not yet been fully elucidated, but a generally accepted theory is that close contact must first be established between the mucoadhesive agent and the mucus, followed by interpenetration of the mucoadhesive polymer and the mucin and finishing with the formation of entanglements and chemical bonds between the macromolecules. In the case of a dry polymer powder, the initial adhesion is most likely achieved by water movement from the mucosa into the formulation, which has also been shown to lead to dehydration and strengthening of the mucus layer. The subsequent formation of van der Waals, hydrogen and, in the case of a positively charged polymer, electrostatic bonds between the mucins and the hydrated polymer promotes prolonged adhesion. See also Mucilage References External links "Mussels inspire new surgical glue possibilities". ScienceDaily article, Dec 2007. Frog glue story on ABC TV science program Catalyst. "Marine algae hold key to better biomedical adhesives", Biomaterials for healthcare: a decade of EU-funded research, p. 23 Thesis on mucoadhesive gels "Marie Curie Project on bioadhesion using the Cnidarian Hydra as model organisms Adhesives Biomolecules Animal proteins
Bioadhesive
[ "Chemistry", "Biology" ]
2,175
[ "Natural products", "Organic compounds", "Structural biology", "Biomolecules", "Biochemistry", "Molecular biology" ]
12,578,506
https://en.wikipedia.org/wiki/Dissipative%20particle%20dynamics
Dissipative particle dynamics (DPD) is an off-lattice mesoscopic simulation technique which involves a set of particles moving in continuous space and discrete time. Particles represent whole molecules or fluid regions, rather than single atoms, and atomistic details are not considered relevant to the processes addressed. The particles' internal degrees of freedom are integrated out and replaced by simplified pairwise dissipative and random forces, so as to conserve momentum locally and ensure correct hydrodynamic behaviour. The main advantage of this method is that it gives access to longer time and length scales than are possible using conventional MD simulations. Simulations of polymeric fluids in volumes up to 100 nm in linear dimension for tens of microseconds are now common. DPD was initially devised by Hoogerbrugge and Koelman to avoid the lattice artifacts of the so-called lattice gas automata and to tackle hydrodynamic time and space scales beyond those available with molecular dynamics (MD). It was subsequently reformulated and slightly modified by P. Español to ensure the proper thermal equilibrium state. A series of DPD algorithms with reduced computational complexity and better control of transport properties has since been developed; some of these reduce the computational cost by randomly choosing a single particle pair at each step for applying the DPD thermostat. Equations The total non-bonded force acting on a DPD particle i is given by a sum, over all particles j that lie within a fixed cut-off distance, of three pairwise-additive forces: F_i = Σ_{j≠i} (F_ij^C + F_ij^D + F_ij^R), where the first term is a conservative force, the second a dissipative force and the third a random force. The conservative force acts to give beads a chemical identity, while the dissipative and random forces together form a thermostat that keeps the mean temperature of the system constant. A key property of all of the non-bonded forces is that they conserve momentum locally, so that hydrodynamic modes of the fluid emerge even for small particle numbers. Local momentum conservation requires that the random force between two interacting beads be antisymmetric, so each pair of interacting particles requires only a single random force calculation. This distinguishes DPD from Brownian dynamics, in which each particle experiences a random force independently of all other particles. Beads can be connected into ‘molecules’ by tying them together with soft (often Hookean) springs. The most common applications of DPD keep the particle number, volume and temperature constant, and so take place in the NVT ensemble. Alternatively, the pressure instead of the volume is held constant, so that the simulation is in the NPT ensemble. A schematic implementation of the three pairwise force terms is sketched below. Parallelization In principle, simulations of very large systems, approaching a cubic micron for milliseconds, are possible using a parallel implementation of DPD running on multiple processors in a Beowulf-style cluster. Because the non-bonded forces are short-ranged in DPD, it is possible to parallelize a DPD code very efficiently using a spatial domain decomposition technique. In this scheme, the total simulation space is divided into a number of cuboidal regions, each of which is assigned to a distinct processor in the cluster. Each processor is responsible for integrating the equations of motion of all beads whose centres of mass lie within its region of space. Only beads lying near the boundaries of each processor's space require communication between processors. 
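As noted above, the following minimal sketch illustrates the three pairwise force terms for a single bead pair. It uses the linear weight function w(r) = 1 - r/r_c and the fluctuation-dissipation relation sigma^2 = 2*gamma*kT that are commonly quoted in the DPD literature; the function name and the parameter values (a, gamma, kT, r_c, dt) are illustrative placeholders, not the settings of any particular DPD package.

import numpy as np

def dpd_pair_force(r_i, r_j, v_i, v_j, a=25.0, gamma=4.5, kT=1.0, r_c=1.0, dt=0.01,
                   rng=np.random.default_rng()):
    # Non-bonded force on bead i due to bead j: conservative + dissipative + random.
    # Positions r_i, r_j and velocities v_i, v_j are 3-component vectors.
    r_ij = np.asarray(r_i, dtype=float) - np.asarray(r_j, dtype=float)
    r = np.linalg.norm(r_ij)
    if r >= r_c:
        return np.zeros(3)                      # all three terms vanish beyond the cut-off
    e_ij = r_ij / r                             # unit vector pointing from j to i
    w = 1.0 - r / r_c                           # linear weight function w(r)
    v_ij = np.asarray(v_i, dtype=float) - np.asarray(v_j, dtype=float)
    sigma = np.sqrt(2.0 * gamma * kT)           # fluctuation-dissipation relation
    f_c = a * w * e_ij                                            # conservative: soft repulsion
    f_d = -gamma * (w ** 2) * np.dot(e_ij, v_ij) * e_ij           # dissipative: pairwise friction
    f_r = sigma * w * rng.standard_normal() * e_ij / np.sqrt(dt)  # random: pairwise noise
    return f_c + f_d + f_r

Because each of the three terms acts along the line of centres and the force on bead j is equal and opposite, momentum is conserved pair by pair, which is the property singled out above as distinguishing DPD from Brownian dynamics.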
In order to ensure that the simulation is efficient, the crucial requirement is that the number of particle-particle interactions that require inter-processor communication be much smaller than the number of particle-particle interactions within the bulk of each processor's region of space. Roughly speaking, this means that the volume of space assigned to each processor should be sufficiently large that its surface area (multiplied by a distance comparable to the force cut-off distance) is much less than its volume. Applications A wide variety of complex hydrodynamic phenomena have been simulated using DPD, the list here is necessarily incomplete. The goal of these simulations often is to relate the macroscopic non-Newtonian flow properties of the fluid to its microscopic structure. Such DPD applications range from modeling the rheological properties of concrete to simulating liposome formation in biophysics to other recent three-phase phenomena such as dynamic wetting. The DPD method has also found popularity in modeling heterogeneous multi-phase flows containing deformable objects such as blood cells and polymer micelles. Further reading The full trace of the developments of various important aspects of the DPD methodology since it was first proposed in the early 1990s can be found in "Dissipative Particle Dynamics: Introduction, Methodology and Complex Fluid Applications – A Review". The state-of-the-art in DPD was captured in a CECAM workshop in 2008. Innovations to the technique presented there include DPD with energy conservation; non-central frictional forces that allow the fluid viscosity to be tuned; an algorithm for preventing bond crossing between polymers; and the automated calibration of DPD interaction parameters from atomistic molecular dynamics. Recently, examples of automated calibration and parameterization have been shown against experimental observables. Additionally, datasets for the purpose of interaction potential calibration and parameterisation have been explored. Swope et al, have provided a detailed analysis of literature data and an experimental dataset based on Critical micelle concentration (CMC) and micellar mean aggregation number (Nagg). Examples of micellar simulations using DPD have been well documented previously. References Available packages Some available simulation packages that can (also) perform DPD simulations are: CULGI: The Chemistry Unified Language Interface, Culgi B.V., The Netherlands DL_MESO: Open-source mesoscale simulation software. DPDmacs ESPResSo: Extensible Simulation Package for the Research on Soft Matter Systems - Open-source Fluidix: The Fluidix simulation suite available from OneZero Software. GPIUTMD: Graphical processors for Many-Particle Dynamics Gromacs-DPD: A modified version of Gromacs including DPD. HOOMD-blue : Highly Optimized Object-oriented Many-particle Dynamics—Blue Edition LAMMPS Materials Studio: Materials Studio - Modeling and simulation for studying chemicals and materials, Accelrys Software Inc. OSPREY-DPD: Open Source Polymer Research Engine-DPD SYMPLER: A freeware SYMbolic ParticLE simulatoR from the University of Freiburg. SunlightDPD: Open-source (GPL) DPD software. External links DPD simulation technique by MatDL (Materials Digital Library Pathway) (MatDL) Condensed matter physics Soft matter Computational fluid dynamics Non-Newtonian fluids
Dissipative particle dynamics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,362
[ "Computational fluid dynamics", "Soft matter", "Phases of matter", "Materials science", "Computational physics", "Condensed matter physics", "Matter", "Fluid dynamics" ]
12,579,099
https://en.wikipedia.org/wiki/Emissions%20%26%20Generation%20Resource%20Integrated%20Database
The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. eGRID is issued by the U.S. Environmental Protection Agency (EPA). As of January 2024, the available editions of eGRID contain data for years 2022, 2021, 2020, 2019, 2018, 2016, 2014, 2012, 2010, 2009, 2007, 2005, 2004, and 1996 through 2000. eGRID is unique in that it links air emissions data with electric generation data for United States power plants. History eGRID2022 was released by EPA on January 30, 2024. It contains year 2022 data. eGRID2021 was released by EPA on January 30, 2023. It contains year 2021 data. eGRID2020 was released by EPA on January 27, 2022. It contains year 2020 data. eGRID2019 was released by EPA on February 23, 2021. It contains year 2019 data. eGRID2018 was released by EPA on January 28, 2020 and eGRID2018v2 was released on March 9, 2020. It contains year 2018 data. eGRID2016 was released by EPA on February 15, 2018. It contains year 2016 data. eGRID2014 was released by EPA on January 13, 2017. It contains year 2014 data. eGRID2012 was released by EPA on October 8, 2015. It is the 10th edition and contains year 2012 data. eGRID2010 Version 1.0 with year 2010 data was released on February 24, 2014. eGRID2009 Version 1.0, with year 2009 data was released on May 10, 2012. eGRID2007 Version 1.0 was released on February 23, 2011 and Version 1.1 was released May 20, 2011. eGRID2005 Version 1.0 was released in October 2008 and Version 1.1 was released in January 2009. eGRID2004 Version 1.0 was released in December 2006; Version 2.0 was released in early April 2007; and Version 2.1, was released in late April 2007 and updated for typos in May 2007. eGRID2000 Version 1.0 was released in December 2002; Version 2.0 was released in April 2003; and Version 2.01 was released in May 2003. (eGRID2000 replaced eGRID versions 1996 through 1998). eGRID1998 was released in March and September 2001. eGRID1997 was released in December 1999. eGRID1996 was first released in December 1998. Data summary eGRID data include emissions, different types of emission rates, electricity generation, resource mix, and heat input. eGRID data also include plant identification, location, and structural information. The emissions information in eGRID include carbon dioxide (CO2), nitrogen oxides (NOx), sulfur dioxide (SO2), mercury (Hg), methane (CH4), nitrous oxide (N2O), and carbon dioxide equivalent (CO2e). CO2, CH4, and N2O are greenhouse gases (GHG) that contribute to global warming or climate change. NOx and SO2 contribute to unhealthy air quality and acid rain in many parts of the country. eGRID's resource mix information includes the following fossil fuel resources: coal, oil, gas, other fossil; nuclear resources; and the following renewable resources: hydroelectric (water), biomass (including biogas, landfill gas and digester gas), wind, solar, and geothermal. eGRID data is presented as an Excel workbook with data worksheets and a table of contents. The eGRID workbook contains data at the unit, generator, and plant levels and aggregated data by state, power control area, eGRID subregion, NERC region, and U.S. The workbook also includes a worksheet that displays the grid gross loss (%). Additional documentation is also provided with each eGRID release such as, a Technical Guide (PDF), Summary Tables, eGRID subregion map (JPG), NERC region Map (JPG), and release notes (TXT). 
These files are available as separate downloadable files or all of them are contained in a ZIP file. Similar files can be downloaded for a given year's eGRID release from EPA's eGRID website. The primary data sources used for eGRID include data reported by electric generators to EPA’s Clean Air Markets Division (pursuant to 40 CFR Part 75) and to the U.S. Energy Information Administration (EIA). Data use eGRID data are used for carbon footprinting; emission reduction calculations; calculating indirect greenhouse gas emissions for The Climate Registry, the California Climate Action Registry, California's Mandatory GHG emissions reporting program (Global Warming Solutions Act of 2006, AB 32), and other GHG protocols; were used as the starting point for the new international carbon emissions database, CARMA. EPA tools and programs such as Power Profiler, Portfolio Manager, the WasteWise Office Carbon Footprint Tool, the Green Power Equivalency Calculator, the Personal Greenhouse Gas Emissions Calculator, and the Greenhouse Gas Equivalencies Calculator use eGRID. Other tools such as labeling/environmental disclosure, Renewable Portfolio Standards (RPS) and Renewable Energy Credits (RECs) attributes are supported by eGRID data. States also rely on eGRID data for electricity labeling (environmental disclosure programs), emissions inventories, and for policy decisions such as output based standards. eGRID is additionally used by nongovernmental organizations for tools and analysis by the International Council for Local Environmental Initiatives (ICLEI), the Northeast States for Coordinated Air Use Management (NESCAUM), the Rocky Mountain Institute, the National Resource Defense Council (NRDC), the Ozone Transport Commission (OTC), Powerscorecard.org, and the Greenhouse Gas Protocol Initiative. In 2010, Executive Order 13514 was issued, requiring Federal agencies to “measure, report, and reduce their greenhouse gas emissions from direct and indirect activities.” The Federal GHG Accounting and Reporting Guidance accompanied this order and recommended using eGRID non-baseload emission rates to estimate the Scope 2 (indirect) emission reductions from renewable energy. See also Air pollution Combined Heat and Power (CHP) Combined cycle Electric power Electric utility Electrical power industry Electricity generation External combustion engine Gas turbine Power station Renewable energy Steam turbine References External links EIA’s Electricity Database Files EPA’s Clean Air Markets - Data and Maps EPA’s Clean Energy Homepage EPA’s Climate Change Homepage EPA's eGRID paper “How to use eGRID for Carbon Footprinting Electricity Purchases in Greenhouse Gas Emission Inventories” EPA’s eGRID website EPA's Power Profiler EPA’s Energy Star Portfolio Manager EPA's Acid Rain Program EPA's Combined Heat and Power Partnership Homepage Executive Order 13514 Federal GHG Accounting and Reporting Guidance Greenhouse Gas Equivalencies Calculator Northeast States for Coordinated Air Use Management (NESCAUM) Ozone Transport Commission (OTC) Personal Greenhouse Gas Emissions Calculator Powerscorecard.org World Business Council for Sustainable Development World Resources Institute Homepage Government databases in the United States Electric power Electric power companies of the United States Air pollution
Emissions & Generation Resource Integrated Database
[ "Physics", "Engineering" ]
1,547
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
12,581,344
https://en.wikipedia.org/wiki/Statistical%20energy%20analysis
Statistical energy analysis (SEA) is a method for predicting the transmission of sound and vibration through complex structural acoustic systems. The method is particularly well suited for quick system level response predictions at the early design stage of a product, and for predicting responses at higher frequencies. In SEA a system is represented in terms of a number of coupled subsystems and a set of linear equations are derived that describe the input, storage, transmission and dissipation of energy within each subsystem. The parameters in the SEA equations are typically obtained by making certain statistical assumptions about the local dynamic properties of each subsystem (similar to assumptions made in room acoustics and statistical mechanics). These assumptions significantly simplify the analysis and make it possible to analyze the response of systems that are often too complex to analyze using other methods (such as finite element and boundary element methods). History The initial derivation of SEA arose from independent calculations made in 1959 by Richard Lyon and Preston Smith as part of work concerned with the development of methods for analyzing the response of large complex aerospace structures subjected to spatially distributed random loading. Lyon's calculation showed that under certain conditions, the flow of energy between two coupled oscillators is proportional to the difference in the oscillator energies (suggesting a thermal analogy exists in structural-acoustic systems). Smith's calculation showed that a structural mode and a diffuse reverberant sound field attain a state of 'equipartition of energy' as the damping of the mode is reduced (suggesting a state of thermal equilibrium can exist in structural-acoustic systems). The extension of the two oscillator results to more general systems is often referred to as the modal approach to SEA. While the modal approach provides physical insights into the mechanisms that govern energy flow it involves assumptions that have been the subject of considerable debate over many decades. The theory that combines deterministic finite element methods (FEM) and SEA was developed by Phil Shorter and Robin Langley and is called hybrid FEM/SEA theory. In recent years, alternative derivations of the SEA equations based on wave approaches have become available. Such derivations form the theoretical foundation behind a number of modern commercial SEA codes and provide a general framework for calculating the parameters in an SEA model. A number of methods also exist for post-processing FE models to obtain estimates of SEA parameters. Lyon mentioned the use of such methods in his initial SEA text book in 1975 but a number of alternative derivations have been presented over the years Method To solve a noise and vibration problem with SEA, the system is partitioned into a number of components (such as plates, shells, beams and acoustic cavities) that are coupled together at various junctions. Each component can support a number of different propagating wavetypes (for example, the bending, longitudinal and shear wavefields in a thin isotropic plate). From an SEA point of view, the reverberant field of each wavefield represents an orthogonal store of energy and so is represented as a separate energy degree of freedom in the SEA equations. 
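To make the form of these coupled energy-balance equations concrete, the following minimal sketch assembles and solves the steady-state power balance for two coupled subsystems, written in terms of the damping and coupling loss factors described in the Method discussion below. All parameter values are hypothetical placeholders, and the sketch shows the general structure of an SEA model rather than the formulation of any particular SEA code.

import numpy as np

# Hypothetical band-averaged parameters for two coupled subsystems
omega = 2.0 * np.pi * 1000.0    # band centre frequency, rad/s
eta_1, eta_2 = 0.02, 0.01       # damping loss factors of subsystems 1 and 2
eta_12, eta_21 = 0.005, 0.002   # coupling loss factors (1 -> 2 and 2 -> 1)
P_in = np.array([1.0, 0.0])     # external input powers, W (only subsystem 1 is driven)

# Steady state: input power = power dissipated by damping + net power transmitted through the coupling
A = omega * np.array([[eta_1 + eta_12, -eta_21],
                      [-eta_12,        eta_2 + eta_21]])
E = np.linalg.solve(A, P_in)    # total energies stored in the two subsystems, J
print("Subsystem energies (J):", E)

The off-diagonal terms carry the energy exchanged between the subsystems, and with the reciprocity relation between the two coupling loss factors this exchange is proportional to the difference in modal energies, in line with the thermal analogy described above; adding more subsystems simply enlarges the matrix.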
The energy storage capacity of each reverberant field is described by a parameter termed the 'modal density', which depends on the average speed with which waves propagate energy through the subsystem (the average group velocity), and the overall dimension of the subsystem. The transmission of energy between different wavefields at a given type of junction is described by parameters termed 'coupling loss factors'. Each coupling loss factor describes the input power to the direct field of a given receiving subsystem per unit energy in the reverberant field of a particular source subsystem. The coupling loss factors are typically calculated by considering the way in which waves are scattered at different types of junctions (for example, point, line and area junctions). Strictly, SEA predicts the average response of a population or ensemble of systems and so the coupling loss factors and modal densities represent ensemble average quantities. To simplify the calculation of the coupling loss factors it is often assumed that there is significant scattering within each subsystem (when viewed across an ensemble) so that direct field transmission between multiple connections to the same subsystem is negligible and reverberant transmission dominates. In practical terms, this means that SEA is often best suited for problems in which each subsystem is large compared with a wavelength (or from a modal point of view, each subsystem contains several modes in a given frequency band of interest). The SEA equations contain a relatively small number of degrees of freedom and so can be easily inverted to find the reverberant energy in each subsystem due to a given set of external input powers. The (ensemble average) sound pressure levels and vibration velocities within each subsystem can then be obtained by superimposing the direct and reverberant fields within each subsystem. Applications Over the past half century, SEA has found applications in virtually every industry for which noise and vibration are of concern. Typical applications include: Interior noise prediction and sound package design in automotive, aircraft, rotorcraft and train applications Interior and exterior radiated noise in marine applications Prediction of dynamic environments in launch vehicles and spacecraft Prediction of noise from consumer goods such as dishwashers, washing machines and refrigerators Prediction of noise from generators and industrial chillers Prediction of air-borne and structure-borne noise through buildings Design of enclosures etc. Additional examples can be found in the proceedings of conferences such as INTERNOISE, NOISECON, EURONOISE, ICSV, NOVEM, SAE N&V. Software implementations Several commercial solutions for Statistical Energy Analysis are available: Actran SEA Module from Free Field Technologies, MSC Software, VA One SEA Module (previously AutoSEA) from ESI Group, France SEAM, SEAM 3D from Cambridge Collaborative Inc. USA, since April 2019 under Altair Hyperworks. wave6 from Dassault Systèmes SIMULIA GSSEA-Light from Gothenburg Sound AB, Sweden SEA+ from InterAC, France distributed by LMS International Free solutions: Statistical Energy Analysis Freeware, SEAlab - open code in Matlab/Octave from Applied Acoustics, Chalmers, Sweden (open source) pyva - python toolbox for vibroacoustic simulation, Germany (open source) Other implementations: NOVASEA, Université de Sherbrooke, Canada References Statistical mechanics Mechanical vibrations Acoustics
Statistical energy analysis
[ "Physics", "Engineering" ]
1,318
[ "Structural engineering", "Classical mechanics", "Acoustics", "Mechanics", "Mechanical vibrations", "Statistical mechanics" ]
12,583,563
https://en.wikipedia.org/wiki/Shock%20and%20Vibration%20Information%20Analysis%20Center
The Shock and Vibration Information Analysis Center (SAVIAC) is a U.S. Government organization established by the U.S. Navy Office of Naval Research on 20 December 1946. SAVIAC's purpose is to promulgate information on the transient and vibratory response of structures and materials. This broad field includes such technical areas as the testing, analysis and design of structural or mechanical systems subjected to dynamic conditions and loading such as vibration, blast, impact, and shock for various agencies in the U.S. Government including NASA, the Department of Energy (DOE), and the Department of Defense (DOD). The organization sponsored the professional journal Shock and Vibration Journal and currently sponsors and publishes the professional journal Journal of Critical Technology in Shock and Vibration. SAVIAC also sponsored and published a series of monographs addressing different aspects of shock and vibration. In 2012 SAVIAC became inactive as a result of new Department of Defense cost-cutting regulations limiting DoD sponsorship of, and participation in, conferences and workshops. SAVIAC has been succeeded by an industrially funded and managed "Shock and Vibration Exchange" (SAVE). SAVIAC assembled and promoted a yearly symposium. The annual Shock and Vibration Symposium was the leading forum for the structural dynamics and vibration community to present and discuss new developments and ongoing research. The Symposium, established in 1947, included both classified and unclassified sessions. The classified sessions allowed critical technology and classified (up to secret level) research to be presented in closed forums of cleared U.S. government and government contractor researchers. Topics covered at the symposium included ship shock testing, water shock, weapons effects (air blast, ground shock, cratering, penetration), shock physics, earthquake engineering, structural dynamics, and shock and vibration instrumentation and experimental techniques. Over 200 technical papers were typically presented. Panel discussions addressed topics such as new software developments or accelerometer isolation problems. Tutorials provided up-to-date technology overviews by leading specialists. Since 2012 the Shock and Vibration Symposium has continued under the management and sponsorship of SAVE. External links SAVIAC Official website SAVE Official website References: Henry C. Pusey (Editor). "50 Years of Shock and Vibration Technology." SAVIAC Monograph SVM-15, Shock and Vibration Information Analysis Center, 1996. Civil engineering organizations Earthquake engineering Government agencies in the United States
Shock and Vibration Information Analysis Center
[ "Engineering" ]
479
[ "Earthquake engineering", "Civil engineering", "Structural engineering", "Civil engineering organizations" ]
12,585,208
https://en.wikipedia.org/wiki/Water%20cluster
In chemistry, a water cluster is a discrete hydrogen-bonded assembly or cluster of molecules of water. Many such clusters have been predicted by theoretical models (in silico), and some have been detected experimentally in various contexts such as ice, bulk liquid water, the gas phase, dilute mixtures with non-polar solvents, and water of hydration in crystal lattices. The simplest example is the water dimer (H2O)2. Water clusters have been proposed as an explanation for some anomalous properties of liquid water, such as its unusual variation of density with temperature. Water clusters are also implicated in the stabilization of certain supramolecular structures. They are also expected to play a role in the hydration of molecules and ions dissolved in water. Theoretical predictions Detailed water models predict the occurrence of water clusters, as configurations of water molecules whose total energy is a local minimum. Of particular interest are the cyclic clusters (H2O)n; these have been predicted to exist for n = 3 to 60. At low temperatures, nearly 50% of water molecules are included in clusters. With increasing cluster size the oxygen-to-oxygen distance is found to decrease, which is attributed to so-called cooperative many-body interactions: due to a change in charge distribution, the H-acceptor molecule becomes a better H-donor molecule with each expansion of the water assembly. Many isomeric forms seem to exist for the hexamer (H2O)6: ring, book, bag, cage and prism shapes with nearly identical energies. Two cage-like isomers exist for heptamers (H2O)7, and octamers (H2O)8 are found either cyclic or in the shape of a cube. Other theoretical studies predict clusters with more complex three-dimensional structures. Examples include the fullerene-like cluster (H2O)28, named the water buckyball, and the 280-water-molecule monster icosahedral network (with each water molecule coordinated to 4 others). The latter, which is 3 nm in diameter, consists of nested icosahedral shells with 280 and 100 molecules. There is also an augmented version with another shell of 320 molecules, and stability increases with the addition of each shell. There are theoretical models of water clusters of more than 700 water molecules, but they have not been observed experimentally. One line of research uses graph invariants for generating hydrogen bond topologies and predicting physical properties of water clusters and ice. The utility of graph invariants was shown in a study considering the (H2O)6 cage and the (H2O)20 dodecahedron, which are associated with roughly the same oxygen atom arrangements as in the solid and liquid phases of water. Experimental observations Experimental study of any supramolecular structures in bulk water is difficult because of their short lifetime: the hydrogen bonds are continually breaking and reforming at timescales faster than 200 femtoseconds. Nevertheless, water clusters have been observed in the gas phase and in dilute mixtures of water and non-polar solvents like benzene and liquid helium. The experimental detection and characterization of the clusters has been achieved with methods including far-infrared (FIR) spectroscopy, vibration-rotation-tunneling (VRT) spectroscopy, 1H-NMR, and neutron diffraction. The hexamer is found to have a planar geometry in liquid helium, a chair conformation in organic solvents, and a cage structure in the gas phase. 
Experiments combining IR spectroscopy with mass spectrometry reveal cubic configurations for clusters in the range n = 8–10. When the water is part of a crystal structure, as in a hydrate, X-ray diffraction can be used; the conformation of a water heptamer (cyclic, twisted and nonplanar) was determined using this method. Further, multi-layered water clusters with the formula (H2O)100, trapped inside cavities of several polyoxometalate clusters, were also reported by Mueller et al. Cluster models of bulk liquid water Several models attempt to account for the bulk properties of water by assuming that they are dominated by cluster formation within the liquid. According to the quantum cluster equilibrium (QCE) theory of liquids, n=8 clusters dominate the liquid water bulk phase, followed by n=5 and n=6 clusters. Near the triple point, the presence of an n=24 cluster is invoked. In another model, bulk water is built up from a mixture of hexamer and pentamer rings containing cavities capable of enclosing small solutes. In yet another model, an equilibrium exists between a cubic water octamer and two cyclic tetramers. However, none of these models has yet reproduced the experimentally observed density maximum of water as a function of temperature. See also Hydrogen bond Mpemba effect Properties of water Richard J. Saykally References External links Water clusters at London South Bank University Link The Cambridge Cluster Database - Includes water clusters calculated with various water models and the water clusters explored with ab initio methods. Cluster chemistry Water chemistry
Water cluster
[ "Chemistry" ]
1,060
[ "Cluster chemistry", "Organometallic chemistry", "nan" ]
12,585,474
https://en.wikipedia.org/wiki/Prenyltransferase
Prenyltransferases (PTs) are a class of enzymes that transfer allylic prenyl groups to acceptor molecules. The term prenyltransferase commonly refers to the isoprenyl diphosphate synthases (IPPSs). Prenyltransferases are a functional category and include several enzyme groups that are evolutionarily independent. Prenyltransferases are commonly divided into two classes, cis (or Z) and trans (or E), depending upon the stereochemistry of the resulting products. Examples of trans-prenyltransferases include dimethylallyltranstransferase and geranylgeranyl pyrophosphate synthase. Cis-prenyltransferases include dehydrodolichol diphosphate synthase (involved in the production of a precursor to dolichol). Trans- and cis-prenyltransferases are evolutionarily unrelated to each other and show no sequence or structural similarity. The beta subunit of the farnesyltransferases is responsible for peptide binding. Squalene-hopene cyclase is a bacterial enzyme that catalyzes the cyclization of squalene into hopene, a key step in hopanoid (triterpenoid) metabolism. Lanosterol synthase (oxidosqualene-lanosterol cyclase) catalyzes the cyclization of (S)-2,3-epoxysqualene to lanosterol, the initial precursor of cholesterol, steroid hormones and vitamin D in vertebrates and of ergosterol in fungi. Cycloartenol synthase (2,3-epoxysqualene-cycloartenol cyclase) is a plant enzyme that catalyzes the cyclization of (S)-2,3-epoxysqualene to cycloartenol. Human proteins containing this domain FNTB; LSS; PGGT1B; RABGGTB See also Cis–trans isomerism E–Z notation References External links Protein prenyltransferases alpha subunit repeat in PROSITE Peripheral membrane proteins Protein domains
Prenyltransferase
[ "Biology" ]
475
[ "Protein domains", "Protein classification" ]
232,249
https://en.wikipedia.org/wiki/Crystal%20radio
A crystal radio receiver, also called a crystal set, is a simple radio receiver, popular in the early days of radio. It uses only the power of the received radio signal to produce sound, needing no external power. It is named for its most important component, a crystal detector, originally made from a piece of crystalline mineral such as galena. This component is now called a diode. Crystal radios are the simplest type of radio receiver and can be made with a few inexpensive parts, such as a wire for an antenna, a coil of wire, a capacitor, a crystal detector, and earphones (because a crystal set has insufficient power for a loudspeaker). However they are passive receivers, while other radios use an amplifier powered by current from a battery or wall outlet to make the radio signal louder. Thus, crystal sets produce rather weak sound and must be listened to with sensitive earphones, and can receive stations only within a limited range of the transmitter. The rectifying property of a contact between a mineral and a metal was discovered in 1874 by Karl Ferdinand Braun. Crystals were first used as a detector of radio waves in 1894 by Jagadish Chandra Bose, in his microwave optics experiments. They were first used as a demodulator for radio communication reception in 1902 by G. W. Pickard. Crystal radios were the first widely used type of radio receiver, and the main type used during the wireless telegraphy era. Sold and homemade by the millions, the inexpensive and reliable crystal radio was a major driving force in the introduction of radio to the public, contributing to the development of radio as an entertainment medium with the beginning of radio broadcasting around 1920. Around 1920, crystal sets were superseded by the first amplifying receivers, which used vacuum tubes. With this technological advance, crystal sets became obsolete for commercial use but continued to be built by hobbyists, youth groups, and the Boy Scouts mainly as a way of learning about the technology of radio. They are still sold as educational devices, and there are groups of enthusiasts devoted to their construction. Crystal radios receive amplitude modulated (AM) signals, although FM designs have been built. They can be designed to receive almost any radio frequency band, but most receive the AM broadcast band. A few receive shortwave bands, but strong signals are required. The first crystal sets received wireless telegraphy signals broadcast by spark-gap transmitters at frequencies as low as 20 kHz. History Crystal radio was invented by a long, partly obscure chain of discoveries in the late 19th century that gradually evolved into more and more practical radio receivers in the early 20th century. The earliest practical use of crystal radio was to receive Morse code radio signals transmitted from spark-gap transmitters by early amateur radio experimenters. As electronics evolved, the ability to send voice signals by radio caused a technological explosion around 1920 that evolved into today's radio broadcasting industry. Early years Early radio telegraphy used spark gap and arc transmitters as well as high-frequency alternators running at radio frequencies. The coherer was the first means of detecting a radio signal. These, however, lacked the sensitivity to detect weak signals. In the early 20th century, various researchers discovered that certain metallic minerals, such as galena, could be used to detect radio signals. 
Indian physicist Jagadish Chandra Bose was first to use a crystal as a radio wave detector, using galena detectors to receive microwaves starting around 1894. In 1901, Bose filed for a U.S. patent for "A Device for Detecting Electrical Disturbances" that mentioned the use of a galena crystal; this was granted in 1904, #755840. On August 30, 1906, Greenleaf Whittier Pickard filed a patent for a silicon crystal detector, which was granted on November 20, 1906. A crystal detector includes a crystal, usually a thin wire or metal probe that contacts the crystal, and the stand or enclosure that holds those components in place. The most common crystal used is a small piece of galena; pyrite was also often used, as it was a more easily adjusted and stable mineral, and quite sufficient for urban signal strengths. Several other minerals also performed well as detectors. Another benefit of crystals was that they could demodulate amplitude modulated signals. This device brought radiotelephones and voice broadcast to a public audience. Crystal sets represented an inexpensive and technologically simple method of receiving these signals at a time when the embryonic radio broadcasting industry was beginning to grow. 1920s and 1930s In 1922 the (then named) United States Bureau of Standards released a publication entitled Construction and Operation of a Simple Homemade Radio Receiving Outfit. This article showed how almost any family having a member who was handy with simple tools could make a radio and tune into weather, crop prices, time, news and the opera. This design was significant in bringing radio to the general public. NBS followed that with a more selective two-circuit version, Construction and Operation of a Two-Circuit Radio Receiving Equipment With Crystal Detector, which was published the same year and is still frequently built by enthusiasts today. In the beginning of the 20th century, radio had little commercial use, and radio experimentation was a hobby for many people. Some historians consider the autumn of 1920 to be the beginning of commercial radio broadcasting for entertainment purposes. Pittsburgh station KDKA, owned by Westinghouse, received its license from the United States Department of Commerce just in time to broadcast the Harding-Cox presidential election returns. In addition to reporting on special events, broadcasts to farmers of crop price reports were an important public service in the early days of radio. In 1921, factory-made radios were very expensive. Since less-affluent families could not afford to own one, newspapers and magazines carried articles on how to build a crystal radio with common household items. To minimize the cost, many of the plans suggested winding the tuning coil on empty pasteboard containers such as oatmeal boxes, which became a common foundation for homemade radios. Crystodyne In early 1920s Russia, Oleg Losev was experimenting with applying voltage biases to various kinds of crystals for the manufacturing of radio detectors. The result was astonishing: with a zincite (zinc oxide) crystal he gained amplification. This was a negative resistance phenomenon, decades before the development of the tunnel diode. After the first experiments, Losev built regenerative and superheterodyne receivers, and even transmitters. A crystodyne could be produced under primitive conditions; it could be made in a rural forge, unlike vacuum tubes and modern semiconductor devices. 
However, this discovery was not supported by the authorities and was soon forgotten; no device was produced in mass quantity beyond a few examples for research. "Foxhole radios" In addition to mineral crystals, the oxide coatings of many metal surfaces act as semiconductors (detectors) capable of rectification. Crystal radios have been improvised using detectors made from rusty nails, corroded pennies, and many other common objects. When Allied troops were halted near Anzio, Italy during the spring of 1944, powered personal radio receivers were strictly prohibited as the Germans had equipment that could detect the local oscillator signal of superheterodyne receivers. Crystal sets lack power driven local oscillators, hence they could not be detected. Some resourceful soldiers constructed "crystal" sets from discarded materials to listen to news and music. One type used a blue steel razor blade and a pencil lead for a detector. The lead point touching the semiconducting oxide coating (magnetite) on the blade formed a crude point-contact diode. By carefully adjusting the pencil lead on the surface of the blade, they could find spots capable of rectification. The sets were dubbed "foxhole radios" by the popular press, and they became part of the folklore of World War II. In some German-occupied countries during WW2 there were widespread confiscations of radio sets from the civilian population. This led determined listeners to build their own clandestine receivers which often amounted to little more than a basic crystal set. Anyone doing so risked imprisonment or even death if caught, and in most of Europe the signals from the BBC (or other allied stations) were not strong enough to be received on such a set. "Rocket Radio" In the late 1950s, the compact "rocket radio", shaped like a rocket, typically imported from Japan, was introduced, and gained moderate popularity. It used a piezoelectric crystal earpiece (described later in this article), a ferrite core to reduce the size of the tuning coil (also described later), and a small germanium fixed diode, which did not require adjustment. To tune in stations, the user moved the rocket nosepiece, which, in turn, moved a ferrite core inside a coil, changing the inductance in a tuned circuit. Earlier crystal radios suffered from severely reduced Q, and resulting selectivity, from the electrical load of the earphone or earpiece. Furthermore, with its efficient earpiece, the "rocket radio" did not require a large antenna to gather enough signal. With much higher Q, it could typically tune in several strong local stations, while an earlier radio might only receive one station, possibly with other stations heard in the background. For listening in areas where an electric outlet was not available, the "rocket radio" served as an alternative to the vacuum tube portable radios of the day, which required expensive and heavy batteries. Children could hide "rocket radios" under the covers, to listen to radio when their parents thought they were sleeping. Children could take the radios to public swimming pools and listen to radio when they got out of the water, clipping the ground wire to a chain link fence surrounding the pool. The rocket radio was also used as an emergency radio, because it did not require batteries or an AC outlet. The rocket radio was available in several rocket styles, as well as other styles that featured the same basic circuit. Transistor radios had become available at the time, but were expensive. 
Once those radios dropped in price, the rocket radio declined in popularity. Later years While it never regained the popularity and general use that it enjoyed at its beginnings, the crystal radio circuit is still used. The Boy Scouts have kept the construction of a radio set in their program since the 1920s. A large number of prefabricated novelty items and simple kits could be found through the 1950s and 1960s, and many children with an interest in electronics built one. Building crystal radios was a craze in the 1920s, and again in the 1950s. Recently, hobbyists have started designing and building examples of the early instruments. Much effort goes into the visual appearance of these sets as well as their performance. Annual crystal radio 'DX' contests (long distance reception) and building contests allow these set owners to compete with each other and form a community of interest in the subject. Basic principles A crystal radio can be thought of as a radio receiver reduced to its essentials. It consists of at least these components: An antenna in which electric currents are induced by electromagnetic radiation. A resonant circuit (tuned circuit) which selects the frequency of the desired radio station from all the radio signals received by the antenna. The tuned circuit consists of a coil of wire (called an inductor) and a capacitor connected together. The circuit has a resonant frequency, and allows radio waves at that frequency to pass through to the detector while largely blocking waves at other frequencies. One or both of the coil or capacitor is adjustable, allowing the circuit to be tuned to different frequencies. In some circuits a capacitor is not used and the antenna serves this function, as an antenna that is shorter than a quarter-wavelength of the radio waves it is meant to receive is capacitive. A semiconductor crystal detector that demodulates the radio signal to extract the audio signal (modulation). The crystal detector functions as a square law detector, demodulating the radio frequency alternating current to its audio frequency modulation. The detector's audio frequency output is converted to sound by the earphone. Early sets used a "cat whisker detector" consisting of a small piece of crystalline mineral such as galena with a fine wire touching its surface. The crystal detector was the component that gave crystal radios their name. Modern sets use modern semiconductor diodes, although some hobbyists still experiment with crystal or other detectors. An earphone to convert the audio signal to sound waves so they can be heard. The low power produced by a crystal receiver is insufficient to power a loudspeaker, hence earphones are used. As a crystal radio has no power supply, the sound power produced by the earphone comes solely from the transmitter of the radio station being received, via the radio waves captured by the antenna. The power available to a receiving antenna decreases with the square of its distance from the radio transmitter. Even for a powerful commercial broadcasting station, if it is more than a few miles from the receiver the power received by the antenna is very small, typically measured in microwatts or nanowatts. In modern crystal sets, signals as weak as 50 picowatts at the antenna can be heard. Crystal radios can receive such weak signals without using amplification only due to the great sensitivity of human hearing, which can detect sounds with an intensity of only 10−16 W/cm2. 
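To get a feel for the power budget just described, the back-of-the-envelope sketch below compares the roughly 50 picowatts cited above with the acoustic power corresponding to the 10⁻¹⁶ W/cm² threshold of hearing over an eardrum-sized area. The eardrum area and the overall RF-to-sound conversion efficiency are illustrative assumptions, not figures from this article.

# Rough power-budget sketch for a crystal set (illustrative values only).

ANTENNA_POWER_W = 50e-12                 # ~50 pW at the antenna, as cited above
HEARING_THRESHOLD_W_PER_CM2 = 1e-16      # threshold intensity of human hearing
EARDRUM_AREA_CM2 = 0.5                   # assumed effective eardrum area (illustrative)
CONVERSION_EFFICIENCY = 0.01             # assumed overall RF-to-acoustic efficiency (illustrative)

# Acoustic power that just reaches the threshold of hearing at the eardrum
threshold_power_w = HEARING_THRESHOLD_W_PER_CM2 * EARDRUM_AREA_CM2

# Acoustic power a very lossy crystal set might deliver from 50 pW of RF
delivered_power_w = ANTENNA_POWER_W * CONVERSION_EFFICIENCY

print(f"Threshold power at eardrum: {threshold_power_w:.1e} W")
print(f"Delivered acoustic power:   {delivered_power_w:.1e} W")
print(f"Margin above threshold:     {delivered_power_w / threshold_power_w:.0f}x")

Even with an assumed efficiency of only 1%, the delivered acoustic power sits several orders of magnitude above the hearing threshold, which is why a carefully built set can be audible despite its tiny input power.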
Therefore, crystal receivers have to be designed to convert the energy from the radio waves into sound waves as efficiently as possible. Even so, they are usually only able to receive stations within distances of about 25 miles for AM broadcast stations, although the radiotelegraphy signals used during the wireless telegraphy era could be received at hundreds of miles, and crystal receivers were even used for transoceanic communication during that period. Design Commercial passive receiver development was abandoned with the advent of reliable vacuum tubes around 1920, and subsequent crystal radio research was primarily done by radio amateurs and hobbyists. Many different circuits have been used. The following sections discuss the parts of a crystal radio in greater detail. Antenna The antenna converts the energy in the electromagnetic radio waves to an alternating electric current in the antenna, which is connected to the tuning coil. Since, in a crystal radio, all the power comes from the antenna, it is important that the antenna collect as much power from the radio wave as possible. The larger an antenna, the more power it can intercept. Antennas of the type commonly used with crystal sets are most effective when their length is close to a multiple of a quarter-wavelength of the radio waves they are receiving. Since the length of the waves used with crystal radios is very long (AM broadcast band waves are long) the antenna is made as long as possible, from a long wire, in contrast to the whip antennas or ferrite loopstick antennas used in modern radios. Serious crystal radio hobbyists use "inverted L" and "T" type antennas, consisting of hundreds of feet of wire suspended as high as possible between buildings or trees, with a feed wire attached in the center or at one end leading down to the receiver. However, more often, random lengths of wire dangling out windows are used. A popular practice in early days (particularly among apartment dwellers) was to use existing large metal objects, such as bedsprings, fire escapes, and barbed wire fences as antennas. Ground The wire antennas used with crystal receivers are monopole antennas which develop their output voltage with respect to ground. The receiver thus requires a connection to ground (the earth) as a return circuit for the current. The ground wire was attached to a radiator, water pipe, or a metal stake driven into the ground. In early days if an adequate ground connection could not be made a counterpoise was sometimes used. A good ground is more important for crystal sets than it is for powered receivers, as crystal sets are designed to have a low input impedance needed to transfer power efficiently from the antenna. A low resistance ground connection (preferably below 25 Ω) is necessary because any resistance in the ground reduces available power from the antenna. In contrast, modern receivers are voltage-driven devices, with high input impedance, hence little current flows in the antenna/ground circuit. Also, mains powered receivers are grounded adequately through their power cords, which are in turn attached to the earth through the building wiring. Tuned circuit The tuned circuit, consisting of a coil and a capacitor connected together, acts as a resonator, similar to a tuning fork. Electric charge, induced in the antenna by the radio waves, flows rapidly back and forth between the plates of the capacitor through the coil. 
The circuit has a high impedance at the desired radio signal's frequency, but a low impedance at all other frequencies. Hence, signals at undesired frequencies pass through the tuned circuit to ground, while the signal at the desired frequency is instead passed on to the detector (diode), stimulates the earpiece, and is heard. The frequency of the station received is the resonant frequency f of the tuned circuit, determined by the capacitance C of the capacitor and the inductance L of the coil: f = 1/(2π√(LC)). The circuit can be adjusted to different frequencies by varying the inductance (L), the capacitance (C), or both, "tuning" the circuit to the frequencies of different radio stations. In the lowest-cost sets, the inductor was made variable via a spring contact pressing against the windings that could slide along the coil, thereby introducing a larger or smaller number of turns of the coil into the circuit, varying the inductance. Alternatively, a variable capacitor is used to tune the circuit. Some modern crystal sets use a ferrite core tuning coil, in which a ferrite magnetic core is moved into and out of the coil, thereby varying the inductance by changing the magnetic permeability (this eliminated the less reliable mechanical contact). The antenna is an integral part of the tuned circuit and its reactance contributes to determining the circuit's resonant frequency. Antennas usually act as a capacitance, as antennas shorter than a quarter-wavelength have capacitive reactance. Many early crystal sets did not have a tuning capacitor, and relied instead on the capacitance inherent in the wire antenna (in addition to significant parasitic capacitance in the coil) to form the tuned circuit with the coil. The earliest crystal receivers did not have a tuned circuit at all, and just consisted of a crystal detector connected between the antenna and ground, with an earphone across it. Since this circuit lacked any frequency-selective elements besides the broad resonance of the antenna, it had little ability to reject unwanted stations, so all stations within a wide band of frequencies were heard in the earphone (in practice the most powerful usually drowns out the others). It was used in the earliest days of radio, when only one or two stations were within a crystal set's limited range. Impedance matching An important principle used in crystal radio design to transfer maximum power to the earphone is impedance matching. Maximum power is transferred from one part of a circuit to another when the impedance of one circuit is the complex conjugate of that of the other; for the largely resistive circuits involved here, this implies that the two should have equal resistance. However, in crystal sets, the impedance of the antenna-ground system (around 10–200 ohms) is usually lower than the impedance of the receiver's tuned circuit (thousands of ohms at resonance), and also varies depending on the quality of the ground attachment, the length of the antenna, and the frequency to which the receiver is tuned. Therefore, in improved receiver circuits, in order to match the antenna impedance to the receiver's impedance, the antenna was connected across only a portion of the tuning coil's turns. This made the tuning coil act as an impedance matching transformer (in an autotransformer connection) in addition to providing the tuning function. 
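Before going further into impedance matching, the resonance relation f = 1/(2π√(LC)) given above can be checked numerically. The sketch below assumes a 240 µH coil and a 40–365 pF variable capacitor; these component values are illustrative choices typical of broadcast-band sets, not values specified in this article.

import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC tuned circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative component values (assumed): a 240 uH coil and a 40-365 pF variable capacitor.
L_COIL = 240e-6
C_MIN, C_MAX = 40e-12, 365e-12

f_high = resonant_frequency_hz(L_COIL, C_MIN)   # smallest C -> highest frequency
f_low = resonant_frequency_hz(L_COIL, C_MAX)    # largest C -> lowest frequency

print(f"Tuning range: {f_low/1e3:.0f} kHz to {f_high/1e3:.0f} kHz")
# Roughly 538 kHz to 1,624 kHz, spanning the AM broadcast band.

Swinging the capacitor over its range sweeps the resonant frequency across the band, which is exactly what the tuning knob on such a set does.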
The antenna's low resistance was increased (transformed) by a factor equal to the square of the turns ratio (the ratio of the number of turns the antenna was connected to, to the total number of turns of the coil), to match the resistance across the tuned circuit. In the "two-slider" circuit, popular during the wireless era, both the antenna and the detector circuit were attached to the coil with sliding contacts, allowing (interactive) adjustment of both the resonant frequency and the turns ratio. Alternatively a multiposition switch was used to select taps on the coil. These controls were adjusted until the station sounded loudest in the earphone. Problem of selectivity One of the drawbacks of crystal sets is that they are vulnerable to interference from stations near in frequency to the desired station. Often two or more stations are heard simultaneously. This is because the simple tuned circuit does not reject nearby signals well; it allows a wide band of frequencies to pass through, that is, it has a large bandwidth (low Q factor) compared to modern receivers, giving the receiver low selectivity. The crystal detector worsened the problem, because it has relatively low resistance, thus it "loaded" the tuned circuit, drawing significant current and thus damping the oscillations, reducing its Q factor so it allowed through a broader band of frequencies. In many circuits, the selectivity was improved by connecting the detector and earphone circuit to a tap across only a fraction of the coil's turns. This reduced the impedance loading of the tuned circuit, as well as improving the impedance match with the detector. Inductive coupling In more sophisticated crystal receivers, the tuning coil is replaced with an adjustable air core antenna coupling transformer which improves the selectivity by a technique called loose coupling. This consists of two magnetically coupled coils of wire, one (the primary) attached to the antenna and ground and the other (the secondary) attached to the rest of the circuit. The current from the antenna creates an alternating magnetic field in the primary coil, which induced a current in the secondary coil which was then rectified and powered the earphone. Each of the coils functions as a tuned circuit; the primary coil resonated with the capacitance of the antenna (or sometimes another capacitor), and the secondary coil resonated with the tuning capacitor. Both the primary and secondary were tuned to the frequency of the station. The two circuits interacted to form a resonant transformer. Reducing the coupling between the coils, by physically separating them so that less of the magnetic field of one intersects the other, reduces the mutual inductance, narrows the bandwidth, and results in much sharper, more selective tuning than that produced by a single tuned circuit. However, the looser coupling also reduced the power of the signal passed to the second circuit. The transformer was made with adjustable coupling, to allow the listener to experiment with various settings to gain the best reception. One design common in early days, called a "loose coupler", consisted of a smaller secondary coil inside a larger primary coil. The smaller coil was mounted on a rack so it could be slid linearly in or out of the larger coil. If radio interference was encountered, the smaller coil would be slid further out of the larger, loosening the coupling, narrowing the bandwidth, and thereby rejecting the interfering signal. 
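The tap arrangement described above can also be sanity-checked numerically. In the sketch below, an assumed 25 Ω antenna–ground resistance is stepped up through an autotransformer tap; the step-up factor is written as (total turns / tapped turns)² so that the antenna resistance comes out increased, as described in the text. The coil turn counts are illustrative assumptions.

def reflected_resistance_ohms(r_antenna: float, tap_turns: int, total_turns: int) -> float:
    """Antenna resistance as seen across the full coil.

    Connecting the antenna across tap_turns of a total_turns coil steps its
    resistance up by the square of the turns ratio (total_turns / tap_turns).
    """
    return r_antenna * (total_turns / tap_turns) ** 2

R_ANTENNA = 25.0   # ohms, assumed antenna + ground resistance
TOTAL_TURNS = 90   # illustrative coil size
for tap in (90, 45, 15, 9):
    r = reflected_resistance_ohms(R_ANTENNA, tap, TOTAL_TURNS)
    print(f"antenna tapped at {tap:>2} of {TOTAL_TURNS} turns -> "
          f"{r:,.0f} ohms across the tuned circuit")

Tapping the antenna farther down the coil raises the resistance it presents across the tuned circuit, which is how the sliders or tap switches bring the low-impedance antenna closer to a match with the kilohm-level resonant impedance.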
The antenna coupling transformer also functioned as an impedance matching transformer, that allowed a better match of the antenna impedance to the rest of the circuit. One or both of the coils usually had several taps which could be selected with a switch, allowing adjustment of the number of turns of that transformer and hence the "turns ratio". Coupling transformers were difficult to adjust, because the three adjustments, the tuning of the primary circuit, the tuning of the secondary circuit, and the coupling of the coils, were all interactive, and changing one affected the others. Crystal detector The crystal detector demodulates the radio frequency signal, extracting the modulation (the audio signal which represents the sound waves) from the radio frequency carrier wave. In early receivers, a type of crystal detector often used was a "cat whisker detector". The point of contact between the wire and the crystal acted as a semiconductor diode. The cat whisker detector constituted a crude Schottky diode that allowed current to flow better in one direction than in the opposite direction. Modern crystal sets use modern semiconductor diodes. The crystal functions as an envelope detector, rectifying the alternating current radio signal to a pulsing direct current, the peaks of which trace out the audio signal, so it can be converted to sound by the earphone, which is connected to the detector. The rectified current from the detector has radio frequency pulses from the carrier frequency in it, which are blocked by the high inductive reactance and do not pass well through the coils of early date earphones. Hence, a small capacitor called a bypass capacitor is often placed across the earphone terminals; its low reactance at radio frequency bypasses these pulses around the earphone to ground. In some sets the earphone cord had enough capacitance that this component could be omitted. Only certain sites on the crystal surface functioned as rectifying junctions, and the device was very sensitive to the pressure of the crystal-wire contact, which could be disrupted by the slightest vibration. Therefore, a usable contact point had to be found by trial and error before each use. The operator dragged the wire across the crystal surface until a radio station or "static" sounds were heard in the earphones. Alternatively, some radios (circuit, right) used a battery-powered buzzer attached to the input circuit to adjust the detector. The spark at the buzzer's electrical contacts served as a weak source of static, so when the detector began working, the buzzing could be heard in the earphones. The buzzer was then turned off, and the radio tuned to the desired station. Galena (lead sulfide) was the most common crystal used, but various other types of crystals were also used, the most common being iron pyrite (fool's gold, FeS2), silicon, molybdenite (MoS2), silicon carbide (carborundum, SiC), and a zincite-bornite (ZnO-Cu5FeS4) crystal-to-crystal junction trade-named Perikon. Crystal radios have also been improvised from a variety of common objects, such as blue steel razor blades and lead pencils, rusty needles, and pennies In these, a semiconducting layer of oxide or sulfide on the metal surface is usually responsible for the rectifying action. In modern sets, a semiconductor diode is used for the detector, which is much more reliable than a crystal detector and requires no adjustments. 
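The role of the bypass capacitor mentioned above comes down to how its reactance changes with frequency. The short check below assumes a 1 nF capacitor and a 2 kΩ magnetic earphone; both values are illustrative, chosen to be typical of the ranges quoted elsewhere in this article rather than prescribed ones.

import math

def capacitive_reactance_ohms(capacitance_f: float, frequency_hz: float) -> float:
    """Magnitude of a capacitor's reactance: Xc = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * frequency_hz * capacitance_f)

C_BYPASS = 1e-9          # 1 nF bypass capacitor (assumed typical value)
R_EARPHONE = 2000.0      # ohms, assumed high-impedance magnetic earphone

for label, f in (("RF carrier, 1 MHz", 1e6), ("audio, 1 kHz", 1e3)):
    xc = capacitive_reactance_ohms(C_BYPASS, f)
    path = "capacitor (bypassed to ground)" if xc < R_EARPHONE else "earphone"
    print(f"{label:>18}: Xc = {xc:>9,.0f} ohms -> current mostly takes the {path}")

At radio frequencies the capacitor presents a far lower impedance than the earphone, so the leftover carrier-frequency pulses are shunted around it, while at audio frequencies the capacitor is effectively open and the audio current drives the earphone.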
Germanium diodes (or sometimes Schottky diodes) are used instead of silicon diodes, because their lower forward voltage drop (roughly 0.3 V compared to 0.6 V) makes them more sensitive. All semiconductor detectors function rather inefficiently in crystal receivers, because the low voltage input to the detector is too low to result in much difference between forward better conduction direction, and the reverse weaker conduction. To improve the sensitivity of some of the early crystal detectors, such as silicon carbide, a small forward bias voltage was applied across the detector by a battery and potentiometer. The bias moves the diode's operating point higher on the detection curve producing more signal voltage at the expense of less signal current (higher impedance). There is a limit to the benefit that this produces, depending on the other impedances of the radio. This improved sensitivity was caused by moving the DC operating point to a more desirable voltage-current operating point (impedance) on the junction's I-V curve. The battery did not power the radio, but only provided the biasing voltage which required little power. Earphones The requirements for earphones used in crystal sets are different from earphones used with modern audio equipment. They have to be efficient at converting the electrical signal energy to sound waves, while most modern earphones sacrifice efficiency in order to gain high fidelity reproduction of the sound. In early homebuilt sets, the earphones were the most costly component. The early earphones used with wireless-era crystal sets had moving iron drivers that worked in a way similar to the horn loudspeakers of the period. Each earpiece contained a permanent magnet about which was a coil of wire which formed a second electromagnet. Both magnetic poles were close to a steel diaphragm of the speaker. When the audio signal from the radio was passed through the electromagnet's windings, current was caused to flow in the coil which created a varying magnetic field that augmented or diminished that due to the permanent magnet. This varied the force of attraction on the diaphragm, causing it to vibrate. The vibrations of the diaphragm push and pull on the air in front of it, creating sound waves. Standard headphones used in telephone work had a low impedance, often 75 Ω, and required more current than a crystal radio could supply. Therefore, the type used with crystal set radios (and other sensitive equipment) was wound with more turns of finer wire giving it a high impedance of 2000–8000 Ω. Modern crystal sets use piezoelectric crystal earpieces, which are much more sensitive and also smaller. They consist of a piezoelectric crystal with electrodes attached to each side, glued to a light diaphragm. When the audio signal from the radio set is applied to the electrodes, it causes the crystal to vibrate, vibrating the diaphragm. Crystal earphones are designed as ear buds that plug directly into the ear canal of the wearer, coupling the sound more efficiently to the eardrum. Their resistance is much higher (typically megohms) so they do not greatly "load" the tuned circuit, allowing increased selectivity of the receiver. The piezoelectric earphone's higher resistance, in parallel with its capacitance of around 9 pF, creates a filter that allows the passage of low frequencies, but blocks the higher frequencies. 
In that case a bypass capacitor is not needed (although in practice a small one of around 0.68 to 1 nF is often used to help improve quality), but instead a 10–100 kΩ resistor must be added in parallel with the earphone's input. Although the low power produced by crystal radios is typically insufficient to drive a loudspeaker, some homemade 1960s sets have used one, with an audio transformer to match the low impedance of the speaker to the circuit. Similarly, modern low-impedance (8 Ω) earphones cannot be used unmodified in crystal sets because the receiver does not produce enough current to drive them. They are sometimes used by adding an audio transformer to match their impedance with the higher impedance of the driving antenna circuit. Use as a power source A crystal radio tuned to a strong local transmitter can be used as a power source for a second amplified receiver of a distant station that cannot be heard without amplification. There is a long history of unsuccessful attempts and unverified claims to recover the power in the carrier of the received signal itself. Conventional crystal sets use half-wave rectifiers. As AM signals have a modulation factor of only 30% by voltage at peaks, no more than 9% of the received signal power is actual audio information, and 91% is just rectified DC voltage. The 30% figure is the standard used for radio testing, and is based on the average modulation factor for speech. Properly designed and managed AM transmitters can be run to 100% modulation on peaks without causing distortion or "splatter" (excess sideband energy that radiates outside of the intended signal bandwidth). Given that the audio signal is unlikely to be at peak all the time, the ratio of energies is, in practice, even greater. Considerable effort has been made to convert this DC voltage into sound energy; earlier attempts include a one-transistor amplifier in 1966. Efforts to recover this power are sometimes confused with other efforts to produce more efficient detection. This history continues today with designs as elaborate as the "inverted two-wave switching power unit". Gallery During the wireless telegraphy era before 1920, crystal receivers were "state of the art", and sophisticated models were produced. After 1920, crystal sets became the cheap alternative to vacuum tube radios, used in emergencies and by youth and the poor. See also Batteryless radio Coherer Demodulator Detector (radio) Electrolytic detector History of radio References Further reading Ellery W. Stone (1919). Elements of Radiotelegraphy. D. Van Nostrand Company. 267 pages. Elmer Eustice Bucher (1920). The Wireless Experimenter's Manual: Incorporating How to Conduct a Radio Club. Milton Blake Sleeper (1922). Radio Hook-ups: A Reference and Record Book of Circuits Used for Connecting Wireless Instruments. The Norman W. Henley Publishing Co.; 67 pages. J. L. Preston and H. A. Wheeler (1922). "Construction and Operation of a Simple Homemade Radio Receiving Outfit", Bureau of Standards, C-120: Apr. 24, 1922. P. A. Kinzie (1996). Crystal Radio: History, Fundamentals, and Design. Xtal Set Society. Thomas H. Lee (2004). The Design of CMOS Radio-Frequency Integrated Circuits. Derek K. Shaeffer and Thomas H. Lee (1999). The Design and Implementation of Low-Power CMOS Radio Receivers. Ian L. Sanders. Tickling the Crystal – Domestic British Crystal Sets of the 1920s; Volumes 1–5. BVWS Books (2000–2010). 
External links A website with lots of information on early radio and crystal sets Hobbydyne Crystal Radios History and Technical Information on Crystal Radios Ben Tongue's Technical Talk Section 1 links to "Crystal Radio Set Systems: Design, Measurements and Improvement". "Semiconductor archeology or tribute to unknown precursors ". earthlink.net/~lenyr. Nyle Steiner K7NS, Zinc Negative Resistance RF Amplifier for Crystal Sets and Regenerative Receivers Uses No Tubes or Transistors. November 20, 2002. Crystal Set DX? Roger Lapthorn G3XBM Details of crystals used in crystal sets http://www.crystal-radio.eu/endiodes.htm Diodes http://www.crystal-radio.eu/engev.htm How to build a sensitive crystal receiver? http://uv201.com/Radio_Pages/Pre-1921/crystal_detectors.htm Crystal Detectors http://www.sparkmuseum.com/DETECTOR.HTM Radio Detectors The Crystal Set Perfected History of radio technology Radio electronics Types of radios Receiver (radio)
Crystal radio
[ "Engineering" ]
7,191
[ "Radio electronics", "Receiver (radio)" ]
232,315
https://en.wikipedia.org/wiki/Numerically%20controlled%20oscillator
A numerically controlled oscillator (NCO) is a digital signal generator which creates a synchronous (i.e., clocked), discrete-time, discrete-valued representation of a waveform, usually sinusoidal. NCOs are often used in conjunction with a digital-to-analog converter (DAC) at the output to create a direct digital synthesizer (DDS). Numerically controlled oscillators offer several advantages over other types of oscillators in terms of agility, accuracy, stability and reliability. NCOs are used in many communications systems including digital up/down converters used in 3G wireless and software radio systems, digital phase-locked loops, radar systems, drivers for optical or acoustic transmissions, and multilevel FSK/PSK modulators/demodulators. Operation An NCO generally consists of two parts: A phase accumulator (PA), which adds to the value held at its output a frequency control value at each clock sample. A phase-to-amplitude converter (PAC), which uses the phase accumulator output word (phase word), usually as an index into a waveform look-up table (LUT), to provide a corresponding amplitude sample. Sometimes interpolation is used with the look-up table to provide better accuracy and reduce phase error noise. Other methods of converting phase to amplitude, including mathematical algorithms such as power series, can be used, particularly in a software NCO. When clocked, the phase accumulator (PA) creates a modulo-2^N sawtooth waveform which is then converted by the phase-to-amplitude converter (PAC) to a sampled sinusoid, where N is the number of bits carried in the phase accumulator. N sets the NCO frequency resolution and is normally much larger than the number of bits defining the memory space of the PAC look-up table. If the PAC capacity is 2^M samples, the PA output word must be truncated to M bits as shown in Figure 1. However, the truncated bits can be used for interpolation. The truncation of the phase output word does not affect the frequency accuracy but produces a time-varying periodic phase error which is a primary source of spurious products. Another spurious product generation mechanism is finite word length effects of the PAC output (amplitude) word. The frequency accuracy relative to the clock frequency is limited only by the precision of the arithmetic used to compute the phase. NCOs are phase- and frequency-agile, and can be trivially modified to produce a phase-modulated or frequency-modulated output by summation at the appropriate node, or to provide quadrature outputs as shown in the figure. Phase accumulator A binary phase accumulator consists of an N-bit binary adder and a register configured as shown in Figure 1. Each clock cycle produces a new N-bit output consisting of the previous output obtained from the register summed with the frequency control word (FCW), which is constant for a given output frequency. The resulting output waveform is a staircase with step size ΔF, the integer value of the FCW. In some configurations, the phase output is taken from the output of the register, which introduces a one clock cycle latency but allows the adder to operate at a higher clock rate. The adder is designed to overflow when the sum of its operands exceeds its capacity (2^N − 1). The overflow bit is discarded, so the output word width is always equal to its input word width. The remainder, called the residual, is stored in the register and the cycle repeats, this time starting from the residual (see Figure 2). 
Since a phase accumulator is a finite-state machine, eventually the residual at some sample K must return to its initial value. The interval K is referred to as the grand repetition rate (GRR), given by GRR = 2^N / GCD(ΔF, 2^N), where GCD is the greatest common divisor function. The GRR represents the true periodicity for a given ΔF, which for a high-resolution NCO can be very long. Usually we are more interested in the operating frequency determined by the average overflow rate, given by F_out = (ΔF / 2^N) × F_clock      (1) The frequency resolution, defined as the smallest possible incremental change in frequency, is given by F_res = F_clock / 2^N      (2) Equation (1) shows that the phase accumulator can be thought of as a programmable non-integer frequency divider of divide ratio 2^N / ΔF. Phase-to-amplitude converter The phase-to-amplitude converter creates the sample-domain waveform from the truncated phase output word received from the PA. The PAC can be a simple read-only memory containing 2^M contiguous samples of the desired output waveform, which typically is a sinusoid. Often, though, various tricks are employed to reduce the amount of memory required. These include various trigonometric expansions, trigonometric approximations, and methods which take advantage of the quadrature symmetry exhibited by sinusoids. Alternatively, the PAC may consist of random access memory which can be filled as desired to create an arbitrary waveform generator. Spurious products Spurious products are the result of harmonic or non-harmonic distortion in the creation of the output waveform due to non-linear numerical effects in the signal processing chain. Only numerical errors are covered here. For other distortion mechanisms created in the digital-to-analog converter see the corresponding section in the direct-digital synthesizer article. Phase truncation spurs The number of phase accumulator bits of an NCO (N) is usually between 16 and 64. If the PA output word were used directly to index the PAC look-up table, an untenably high storage capacity in the ROM would be required. As such, the PA output word must be truncated to span a reasonable memory space. Truncation of the phase word causes phase modulation of the output sinusoid which introduces non-harmonic distortion in proportion to the number of bits truncated. The number of spurious products created by this distortion increases with W, the number of bits truncated. In calculating the spurious-free dynamic range, we are interested in the spurious product with the largest amplitude relative to the carrier output level; this depends on P, the size of the phase-to-amplitude converter's lookup table in bits (i.e., M in Figure 1), and, for W > 4, on the number of truncated bits. Another related spurious generation mechanism is the slight modulation due to the GRR outlined above. The amplitude of these spurs is low for large N and their frequency is generally too low to be detectable, but they may cause issues for some applications. One way to reduce the truncation in the address lookup is to have several smaller lookup tables in parallel and use the upper bits to index into the tables and the lower bits to weight them for linear or quadratic interpolation: for example, a 24-bit phase accumulator can address two LUTs of 2^16 entries each, one indexed by the truncated 16 MSBs and the other by that address plus 1, with the outputs linearly interpolated using the 8 LSBs as weights. (One could instead use three LUTs and interpolate quadratically.) This can result in decreased distortion for the same amount of memory at the cost of some multipliers. 
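A software model makes the data path of Figure 1 concrete. The sketch below is a minimal NCO with a 32-bit phase accumulator and a 10-bit (1,024-entry) sine look-up table; the clock rate, output frequency, and table size are illustrative assumptions, and no dithering or interpolation is included.

import math

F_CLOCK = 1_000_000        # Hz, assumed sample clock
N_BITS = 32                # phase accumulator width (N)
M_BITS = 10                # LUT address width (M), 1024 entries

# Phase-to-amplitude converter: one full cycle of a sinusoid stored in a LUT.
SINE_LUT = [math.sin(2 * math.pi * i / 2**M_BITS) for i in range(2**M_BITS)]

def make_fcw(f_out_hz: float) -> int:
    """Frequency control word for a desired output frequency (equation 1 rearranged)."""
    return round(f_out_hz * 2**N_BITS / F_CLOCK)

def nco_samples(fcw: int, n_samples: int):
    """Accumulate phase modulo 2^N, truncate to M bits, and look up the amplitude."""
    phase = 0
    for _ in range(n_samples):
        yield SINE_LUT[phase >> (N_BITS - M_BITS)]   # phase truncation to LUT index
        phase = (phase + fcw) & (2**N_BITS - 1)      # modulo-2^N accumulation

fcw = make_fcw(12_345.6789)                  # resolution is F_CLOCK / 2^N, well below 1 mHz here
actual_f = fcw * F_CLOCK / 2**N_BITS         # equation (1)
print(f"FCW = {fcw}, actual output frequency = {actual_f:.6f} Hz")
print(list(nco_samples(fcw, 8)))             # first few output samples

The same structure maps directly to hardware: the accumulator corresponds to the adder-plus-register of Figure 1, and replacing the plain LUT read with a larger table or with interpolation trades memory and multipliers against lower phase-truncation spurs, as discussed above.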
Amplitude truncation spurs Another source of spurious products is the amplitude quantization of the sampled waveform contained in the PAC look up table(s). If the number of DAC bits is P, the AM spur level is approximately equal to −6.02 P − 1.76 dBc. Mitigation techniques Phase truncation spurs can be reduced substantially by the introduction of white gaussian noise prior to truncation. The so-called dither noise is summed into the lower W+1 bits of the PA output word to linearize the truncation operation. Often the improvement can be achieved without penalty because the DAC noise floor tends to dominate system performance. Amplitude truncation spurs can not be mitigated in this fashion. Introduction of noise into the static values held in the PAC ROMs would not eliminate the cyclicality of the truncation error terms and thus would not achieve the desired effect. See also Direct digital synthesis (DDS) Digital-to-analog converter (DAC) Digitally controlled oscillator (DCO) References Digital signal processing Synthesizers Electronic oscillators Digital electronics
Numerically controlled oscillator
[ "Engineering" ]
1,730
[ "Electronic engineering", "Digital electronics" ]
232,333
https://en.wikipedia.org/wiki/Surface-mount%20technology
Surface-mount technology (SMT), originally called planar mounting, is a method in which the electrical components are mounted directly onto the surface of a printed circuit board (PCB). An electrical component mounted in this manner is referred to as a surface-mount device (SMD). In industry, this approach has largely replaced through-hole technology construction method of fitting components, in large part because SMT allows for increased manufacturing automation which reduces cost and improves quality. It also allows for more components to fit on a given area of substrate. Both technologies can be used on the same board, with the through-hole technology often used for components not suitable for surface mounting such as large transformers and heat-sinked power semiconductors. An SMT component is usually smaller than its through-hole counterpart because it has either smaller leads or no leads at all. It may have short pins or leads of various styles, flat contacts, a matrix of solder balls (BGAs), or terminations on the body of the component. History Surface-mount technology was developed in the 1960s. By 1986, surface-mounted components accounted for 10% of the market at most but were rapidly gaining popularity. By the late 1990s, the great majority of high-tech electronic printed circuit assemblies were dominated by surface mount devices. Much of the pioneering work in this technology was done by IBM. The design approach first demonstrated by IBM in 1960 in a small-scale computer was later applied in the Launch Vehicle Digital Computer used in the Instrument Unit that guided all Saturn IB and Saturn V vehicles. Components were mechanically redesigned to have small metal tabs or end caps that could be directly soldered to the surface of the PCB. Components became much smaller, and component placement on both sides of a board became far more common with surface mounting than through-hole mounting, allowing much higher circuit densities and smaller circuit boards and, in turn, machines or subassemblies containing the boards. Often, the surface tension of the solder is enough to hold the parts to the board; in rare cases, parts on the bottom or "second" side of the board may be secured with adhesive to keep components from dropping off inside reflow ovens. Adhesive is sometimes used to hold SMT components on the bottom side of a board if a wave soldering process is used to solder both SMT and through-hole components simultaneously. Alternatively, SMT and through-hole components can be soldered on the same side of a board without adhesive if the SMT parts are first reflow-soldered, then a selective solder mask is used to prevent the solder holding those parts in place from reflowing and the parts floating away during wave soldering. Surface mounting lends itself well to a high degree of automation, reducing labor cost and greatly increasing production rates. Conversely, SMT does not lend itself well to manual or low-automation fabrication, which is more economical and faster for one-off prototyping and small-scale production; this is one reason why many through-hole components are still manufactured. Some SMDs can be soldered with a temperature-controlled manual soldering iron, but those that are very small or have too fine a lead pitch are often almost impossible to manually solder without expensive equipment. Common abbreviations Different terms describe the components, technique, and machines used in manufacturing. 
These terms are listed in the following table: Assembly techniques Where components are to be placed, the printed circuit board normally has flat, usually tin-lead, silver, or gold plated copper pads without holes, called solder pads. Solder paste, a sticky mixture of flux and tiny solder particles, is first applied to all the solder pads with a stainless steel or nickel stencil using a screen printing process. It can also be applied by a jet-printing mechanism, similar to an inkjet printer. After pasting, the boards proceed to the pick-and-place machines, where they are placed on a conveyor belt. The components to be placed on the boards are usually delivered to the production line in either paper/plastic tapes wound on reels or plastic tubes. Some large integrated circuits are delivered in static-free trays. Numerical control pick-and-place machines remove the parts from the tapes, tubes or trays and place them on the PCB. The boards are then conveyed into the reflow soldering oven. They first enter a pre-heat zone, where the temperature of the board and all the components is gradually, uniformly raised to prevent thermal shock. The boards then enter a zone where the temperature is high enough to melt the solder particles in the solder paste, bonding the component leads to the pads on the circuit board. The surface tension of the molten solder helps keep the components in place. If the solder pad geometries are correctly designed, surface tension automatically aligns the components on their pads. There are a number of techniques for reflowing solder. One is to use infrared lamps; this is called infrared reflow. Another is to use a hot gas convection. Another technology that is becoming popular again is special fluorocarbon liquids with high boiling points which use a method called vapor phase reflow. Due to environmental concerns, this method was falling out of favor until lead-free legislation was introduced which requires tighter controls on soldering. At the end of 2008, convection soldering was the most popular reflow technology using either standard air or nitrogen gas. Each method has its advantages and disadvantages. With infrared reflow, the board designer must lay the board out so that short components do not fall into the shadows of tall components. Component location is less restricted if the designer knows that vapor phase reflow or convection soldering will be used in production. Following reflow soldering, certain irregular or heat-sensitive components may be installed and soldered by hand, or in large-scale automation, by focused infrared beam (FIB) or localized convection equipment. If the circuit board is double-sided then this printing, placement, reflow process may be repeated using either solder paste or glue to hold the components in place. If a wave soldering process is used, then the parts must be glued to the board before processing to prevent them from floating off when the solder paste holding them in place is melted. After soldering, the boards may be washed to remove flux residues and any stray solder balls that could short out closely spaced component leads. Rosin flux is removed with fluorocarbon solvents, high flash point hydrocarbon solvents, or low flash solvents e.g. limonene (derived from orange peels) which require extra rinsing or drying cycles. Water-soluble fluxes are removed with deionized water and detergent, followed by an air blast to quickly remove residual water. 
However, most electronic assemblies are made using a "No-Clean" process where the flux residues are designed to be left on the circuit board, since they are considered harmless. This saves the cost of cleaning, speeds up the manufacturing process, and reduces waste. However, it is generally suggested to wash the assembly, even when a "No-Clean" process is used, when the application uses very high frequency clock signals (in excess of 1 GHz). Another reason to remove no-clean residues is to improve adhesion of conformal coatings and underfill materials. Regardless of whether cleaning or not those PCBs, the current industry trend suggests carefully reviewing a PCB assembly process where "No-Clean" is applied, since flux residues trapped under components and RF shields may affect surface insulation resistance (SIR), especially on high component density boards. Certain manufacturing standards, such as those written by the IPC – Association Connecting Electronics Industries require cleaning regardless of the solder flux type used to ensure a thoroughly clean board. Proper cleaning removes all traces of solder flux, as well as dirt and other contaminants that may be invisible to the naked eye. No-Clean or other soldering processes may leave "white residues" that, according to IPC, are acceptable "provided that these residues have been qualified and documented as benign". However, while shops conforming to IPC standards are expected to adhere to the Association's rules on board condition, not all manufacturing facilities apply IPC standards, nor are they required to do so. Additionally, in some applications, such as low-end electronics, such stringent manufacturing methods are excessive both in expense and time required. Finally, the boards are visually inspected for missing or misaligned components and solder bridging. If needed, they are sent to a rework station where a human operator repairs any errors. They are then usually sent to the testing stations (in-circuit testing and/or functional testing) to verify that they operate correctly. Automated optical inspection (AOI) systems are commonly used in PCB manufacturing. This technology has proven highly efficient for process improvements and quality achievements. Advantages The main advantages of SMT over the through-hole technique are: Faster-automated assembly. Some placement machines are capable of placing more than 136,000 components per hour. Much higher component density (components per unit area) and many more connections per component. Components can be placed on both sides of the circuit board. Higher density of connections because holes do not block routing space on inner layers, nor on back-side layers if components are mounted on only one side of the PCB. Small errors in component placement are corrected automatically as the surface tension of molten solder pulls components into alignment with solder pads. (On the other hand, through-hole components cannot be slightly misaligned because once the leads are through the holes, the components are fully aligned and cannot move laterally out of alignment.) Better mechanical performance under shock and vibration conditions (partly due to lower mass and partly due to less cantilevering) Lower resistance and inductance at the connection; consequently, fewer unwanted RF signal effects and better and more predictable high-frequency performance. 
Better EMC performance (lower radiated emissions) due to the smaller radiation loop area (because of the smaller package) and the lesser lead inductance. Fewer holes need to be drilled. (Drilling PCBs is time-consuming and expensive.) Lower initial cost and time of setting up for mass production using automated equipment. Many SMT parts cost less than equivalent through-hole parts. Smaller components. Disadvantages SMT may be unsuitable as the sole attachment method for components subject to frequent mechanical stress, such as connectors used to interface with external devices that are frequently attached and detached. SMDs' solder connections may be damaged by potting compounds going through thermal cycling. Manual prototype assembly or component-level repair is more difficult and requires skilled operators and more expensive tools, due to the small sizes and lead spacings of many SMDs. Handling of small SMT components can be difficult, requiring tweezers, unlike nearly all through-hole components. Whereas through-hole components will stay in place (under gravitational force) once inserted and can be mechanically secured prior to soldering by bending out two leads on the solder side of the board, SMDs are easily moved out of place by a touch of a soldering iron. Without developed skill, when manually soldering or desoldering a component, it is easy to accidentally reflow the solder of an adjacent SMT component and unintentionally displace it, something that is almost impossible to do with through-hole components. Many types of SMT component packages cannot be installed in sockets, which provide for easy installation or exchange of components to modify a circuit and easy replacement of failed components. (Virtually all through-hole components can be socketed.) SMDs cannot be used directly with plug-in breadboards (a quick snap-and-play prototyping tool), requiring either a custom PCB for every prototype or the mounting of the SMD upon a pin-leaded carrier. For prototyping around a specific SMD component, a less-expensive breakout board may be used. Additionally, stripboard style protoboards can be used, some of which include pads for standard-sized SMD components. For prototyping, "dead bug" breadboarding can be used. Solder joint dimensions in SMT quickly become much smaller as advances are made toward ultra-fine pitch technology. The reliability of solder joints becomes more of a concern as less and less solder is allowed for each joint. Voiding is a fault commonly associated with solder joints, especially when reflowing a solder paste in the SMT application. The presence of voids can deteriorate the joint strength and eventually lead to joint failure. SMDs, usually being smaller than equivalent through-hole components, have less surface area for marking, requiring marked part ID codes or component values to be more cryptic and smaller, often requiring magnification to be read, whereas a larger through-hole component could be read and identified by the unaided eye. This is a disadvantage for prototyping, repair, rework, reverse engineering, and possibly for production set-up. Rework Defective surface-mount components can be repaired by using soldering irons (for some connections) or a non-contact rework system. In most cases, a rework system is the better choice because SMD work with a soldering iron requires considerable skill and is not always feasible. 
Reworking usually corrects some type of error, either human- or machine-generated, and includes the following steps: Melt solder and remove component(s) Remove residual solder (may not be required for some components) Print solder paste on PCB, directly or by dispensing or dipping Place new component and reflow. Sometimes hundreds or thousands of the same part need to be repaired. If the errors arise from the assembly process, they are often caught during that process. However, a whole new level of rework arises when component failure is discovered too late, and perhaps unnoticed until the end user of the device being manufactured experiences it. Rework can also be used if products of sufficient value to justify it require revision or re-engineering, perhaps to change a single firmware-based component. Reworking in large volumes requires an operation designed for that purpose. There are essentially two non-contact soldering/desoldering methods: infrared soldering and soldering with hot gas. Infrared With infrared soldering, the energy for heating up the solder joint is transmitted by long-, medium- or short-wave infrared electromagnetic radiation. Advantages: Easy setup No compressed air required for the heating process (some systems use compressed air for cooling) No requirement for different nozzles for many component shapes and sizes, reducing cost and the need to change nozzles Very uniform heating possible, assuming high-quality IR heating systems Gentle reflow process with low surface temperatures, assuming correct profile settings Fast reaction of infrared source (depends on the system used) Closed loop temperature control directly on the component is possible by applying a thermocouple or pyrometric measurement. This allows compensation for varying environmental influences and temperature losses. Enables use of the same temperature profile on slightly different assemblies, as the heating process adapts itself automatically. Enables (re)entry into the profile even on hot assemblies Direct setting of target profile temperatures and gradients possible through direct control of component temperature in each individual soldering process. No increased oxidation from strong blowing of hot air on the solder joints, which reduces flux wear and prevents flux from being blown away Documentation of the temperature profile experienced by the component is possible for each individual rework process Disadvantages: Temperature-sensitive nearby components must be shielded from heat to prevent damage, which requires additional time for every board On short wavelength IR only: Surface temperature depends on the component's albedo: dark surfaces will be heated more than lighter surfaces Convective loss of energy at the component possible No reflow atmosphere possible (but also not required) Hot gas During hot gas soldering, the energy for heating up the solder joint is transmitted by a hot gas. This can be air or inert gas (nitrogen). 
Advantages: Some systems allow switching between hot air and nitrogen Standard and component-specific nozzles allow high reliability and faster processing Allow reproducible soldering profiles (depending on the system used) Efficient heating, large amounts of heat can be transferred Even heating of the affected board area (depends on system/nozzle quality used) Temperature of the component will never exceed the set gas temperature Rapid cooling after reflow, resulting in small-grained solder joints (depending on the system used) Disadvantages: Thermal capacity of the heat generator results in a slow reaction whereby thermal profiles can be distorted (depending on the system used) Precise, sometimes very complex, component-specific hot gas nozzles are needed to direct the hot gas to the target component. These can be very expensive. Today, crowding by neighboring components often prevents the nozzle from being seated directly on the PCB, which means there is no longer a closed process chamber and adjacent components can be blown on strongly from the side. This can displace adjacent components and even cause thermal damage. In this case, adjacent components must be protected from airflow, e.g. by covering them with polyimide tape. Local turbulence of the hot gas can create hot and cold spots on the heated surfaces, resulting in uneven heating. Well-designed, high-quality nozzles are therefore essential. Swirls at component edges, especially at bases and connectors, can heat these edges significantly more than other surfaces. Overheating can occur (burns, melting of plastics) Losses due to environmental influences are not compensated for since the component temperature is not measured in the production process Creation of a suitable reflow profile requires an adjustment and test phase, in some cases involving several stages Direct temperature control of the component is not possible because measuring the actual component temperature is difficult due to the high gas velocity, which distorts the measurement Hybrid technology Hybrid rework systems combine medium-wave infrared radiation with hot air. Advantages: Easy setup The low flow velocity hot air supporting the IR radiation improves heat transfer but cannot blow away components Heat transfer does not depend entirely on the flow velocity of hot gas at the component/assembly surface (see hot gas) No requirement for different nozzles for many component shapes and sizes, reducing cost and the need to change nozzles Adjustment of the heating surface is possible through various attachments if required Heating even very large/long and exotically shaped components possible, depending on the type of top heater Very uniform heating possible, assuming high-quality hybrid heating systems Gentle reflow process with low surface temperatures, assuming correct profile settings No compressed air is required for the heating process (some systems use compressed air for cooling) Closed loop temperature control directly on the component is possible by applying a thermocouple or pyrometric measurement. This allows compensation for varying environmental influences and temperature losses. Enables use of the same temperature profile on slightly different assemblies, as the heating process adapts itself automatically. Enables (re)entry into the profile even on hot assemblies Direct setting of target profile temperatures and gradients possible through direct control of component temperature in each individual soldering process. 
No increased oxidation from strong blowing of hot air on the solder joints, which reduces flux wear and prevents flux from being blown away Documentation of the temperature profile experienced by the component is possible for each individual rework process Disadvantages Temperature-sensitive nearby components must be shielded from heat to prevent damage, which requires additional time for every board. The shield must also protect against the gas flow Convective loss of energy at the component possible Packages Surface-mount components are usually smaller than their counterparts with leads, and are designed to be handled by machines rather than by humans. The electronics industry has standardized package shapes and sizes (the leading standardisation body is JEDEC). The smallest case sizes available after 0201 are 01005, 008005, 008004, 008003 and 006003. Identification Resistors SMD resistors with 5% tolerance are usually marked with their resistance values using three digits: two significant digits and a multiplier digit. These are quite often white lettering on a black background, but other colored backgrounds and lettering can be used. For 1% tolerance SMD resistors, the EIA-96 code is used, as three digits would not otherwise convey enough information. This code consists of two digits and a letter: the digits denote the value's position in the E96 Series of values, while the letter indicates the multiplier. Capacitors Non-electrolytic capacitors are usually unmarked and the only reliable method of determining their value is removal from the circuit and subsequent measurement with a capacitance meter or impedance bridge. The materials used to fabricate the capacitors, such as nickel tantalate, possess different colours and these can give an approximate idea of the capacitance of the component. Generally physical size is proportional to capacitance and (squared) voltage for the same dielectric. For example, a 100 nF, 50 V capacitor may come in the same package as a 10 nF, 150 V device. SMD (non-electrolytic) capacitors, which are usually monolithic ceramic capacitors, exhibit the same body color on all four faces not covered by the end caps. SMD electrolytic capacitors, usually tantalum capacitors, and film capacitors are marked like resistors, with two significant figures and a multiplier in units of picofarads (pF, 10⁻¹² farad). Inductors Smaller inductors with moderately high current ratings are usually of the ferrite bead type. They are simply a metal conductor looped through a ferrite bead and almost the same as their through-hole versions but possess SMD end caps rather than leads. They appear dark grey and are magnetic, unlike capacitors with a similar dark grey appearance. These ferrite bead types are limited to small values in the nanohenry (nH) range and are often used as power supply rail decouplers or in high frequency parts of a circuit. Larger inductors and transformers may of course be through-hole mounted on the same board. SMT inductors with larger inductance values often have turns of wire or flat strap around the body or embedded in clear epoxy, allowing the wire or strap to be seen. Sometimes a ferrite core is present also. These higher inductance types are often limited to small current ratings, although some of the flat strap types can handle a few amps. As with capacitors, component values and identifiers for smaller inductors are not usually marked on the component itself; if not documented or printed on the PCB, measurement, usually after removal from the circuit, is the only way of determining them. 
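To make the three-digit marking scheme concrete, the sketch below decodes such codes into ohms; the same two-significant-digits-plus-multiplier convention is also used for the larger inductor markings discussed next, read in microhenries. This is an illustrative Python sketch, not part of any standard or library; the function name and the handling of "R" as a decimal point for sub-10-ohm parts are assumptions for the example.

```python
def decode_three_digit(code: str) -> float:
    """Decode a three-character SMD resistor marking into ohms.

    '473' -> 47 * 10**3 = 47000 ohms. Markings for values below 10 ohms
    commonly use 'R' as a decimal point, e.g. '4R7' -> 4.7 ohms.
    """
    if "R" in code:                         # decimal-point form for small values
        return float(code.replace("R", "."))
    significant, multiplier = int(code[:2]), int(code[2:])
    return significant * 10 ** multiplier

# Examples: '473' -> 47 kOhm, '100' -> 10 Ohm, '4R7' -> 4.7 Ohm
print(decode_three_digit("473"), decode_three_digit("100"), decode_three_digit("4R7"))
```

Read as an inductor marking in microhenries, the same logic gives "330" as 33 µH, matching the example below.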
Larger inductors, especially wire-wound types in larger footprints, usually have the value printed on the top. For example, "330", which equates to a value of 33 μH. Discrete semiconductors Discrete semiconductors, such as diodes and transistors, are often marked with a two- or three-symbol code. The same code marked on different packages or on devices from different manufacturers can translate to different devices. Many of these codes, used because the devices are too small to be marked with more traditional numbers used on larger packages, correlate to more familiar traditional part numbers when a correlation list is consulted. GM4PMK in the United Kingdom has prepared a correlation list, and a similar .pdf list is also available, although these lists are not complete. Integrated circuits Generally, integrated circuit packages are large enough to be imprinted with the complete part number which includes the manufacturer's specific prefix, or a significant segment of the part number and the manufacturer's name or logo. See also Board-to-board connectors Chip carrier Electronics Electronics manufacturing services List of electronics package dimensions List of electronic component packaging types Plastic leaded chip carrier Point-to-point construction Printed circuit board RoHS SMT placement equipment Through-hole technology Wire wrap RKM code References Chip carriers Electronic design Electronics manufacturing
Surface-mount technology
[ "Engineering" ]
4,970
[ "Electronic design", "Electronics manufacturing", "Electronic engineering", "Design" ]
232,386
https://en.wikipedia.org/wiki/Motor%20skill
A motor skill is a function that involves specific movements of the body's muscles to perform a certain task. These tasks could include walking, running, or riding a bike. In order to perform this skill, the body's nervous system, muscles, and brain all have to work together. The goal of a motor skill is to optimize the ability to perform the skill with a high rate of success and precision, while reducing the energy consumption required for performance. Performance is an act of executing a motor skill or task. Continuous practice of a specific motor skill will result in a greatly improved performance, which leads to motor learning. Motor learning is a relatively permanent change in the ability to perform a skill as a result of continuous practice or experience. A fundamental movement skill is a developed ability to move the body in coordinated ways to achieve consistent performance at demanding physical tasks, such as found in sports, combat or personal locomotion, especially those unique to humans, such as ice skating, skateboarding, kayaking, or horseback riding. Movement skills generally emphasize stability, balance, and a coordinated muscular progression from prime movers (legs, hips, lower back) to secondary movers (shoulders, elbow, wrist) when conducting explosive movements, such as throwing a baseball. In most physical training, development of core musculature is a central focus. In the athletic context, fundamental movement skills draw upon human physiology and sport psychology. Types of motor skills Motor skills are movements and actions of the muscles. There are two major groups of motor skills: Gross motor skills – require the use of large muscle groups in our legs, torso, and arms to perform tasks such as walking, balancing, and crawling. The skill required is not extensive, so these skills are usually associated with continuous tasks. Much of the development of these skills occurs during early childhood. We use our gross motor skills on a daily basis without putting much thought or effort into them. The performance level of gross motor skill remains unchanged after periods of non-use. Gross motor skills can be further divided into two subgroups: Locomotor skills, such as running, jumping, sliding, and swimming; and object-control skills such as throwing, catching, dribbling, and kicking. Fine motor skills – require the use of smaller muscle groups to perform smaller movements. These muscles include those found in our wrists, hands, fingers, feet and toes. These tasks are precise in nature, like playing the piano, tying shoelaces, brushing your teeth, and flossing. Some fine motor skills may be susceptible to retention loss over a period of time if not in use. The phrase "if you don't use it, you lose it" is a perfect way to describe these skills; they need to be used continuously. Discrete tasks such as switching gears in an automobile, grasping an object, or striking a match usually require more fine motor skill than gross motor skill. Both gross and fine motor skills can become weakened or damaged. These impairments can be caused by injury, illness, stroke, congenital deformities (an abnormal change in the size or shape of a body part at birth), cerebral palsy, and developmental disabilities. Problems with the brain, spinal cord, peripheral nerves, muscles, or joints can also have an effect on these motor skills, and decrease control over them. 
Development Motor skills develop in different parts of a body along three principles: Cephalocaudal – the principle that development occurs from head to tail. For example, infants first learn to lift their heads on their own, followed by sitting up with assistance, then sitting up by themselves, followed by scooting, crawling, pulling up, and then walking. Proximodistal – the principle that movement of limbs that are closer to the body develops before the parts that are further away. For example, a baby learns to control their upper arm before their hands and fingers. Fine movements of the fingers are the last to develop in the body. Gross to specific – a pattern in which larger muscle movements develop before finer movements. For example, a child will go from only being able to pick up large objects, to then being able to pick up an object that is small, between the thumb and fingers. The earlier movements involve larger groups of muscles, but as the child grows, finer movements become possible and specific tasks can be achieved. An example of this would be a young child learning to grasp a pencil. In children, a critical period for the development of motor skills is preschool years (ages 3–5), as fundamental neuroanatomic structure shows significant development, elaboration, and myelination over the course of this period. Many factors contribute to the rate at which children develop their motor skills. Unless afflicted with a severe disability, children are expected to develop a wide range of basic movement abilities and motor skills around a certain age. Motor development progresses in seven stages throughout an individual's life: reflexive, rudimentary, fundamental, sports skill, growth and refinement, peak performance, and regression. Development is age-related but is not age dependent. In regard to age, typically developing children are expected to attain the gross motor skills used for postural control and vertical mobility by 5 years of age. There are six aspects of development: Qualitative – changes in the movement process result in changes in the movement outcome. Sequential – certain motor patterns precede others. Cumulative – current movements are built on previous ones. Directional – cephalocaudal or proximodistal Multifactorial – numerous factors have an impact Individual – dependent on each person In the childhood stages of development, gender differences can greatly influence motor skills. In the article "An Investigation of Age and Gender Differences in Preschool Children's Specific Motor Skills", girls scored significantly higher than boys on visual motor and graphomotor tasks. The results from this study suggest that girls attain manual dexterity earlier than boys. Variability of results in the tests can be attributed to the multiplicity of different assessment tools used. Furthermore, gender differences in motor skills are seen to be affected by environmental factors. In essence, "parents and teachers often encourage girls to engage in [quiet] activities requiring fine motor skills, while they promote boys' participation in dynamic movement actions". In the journal article "Gender Differences in Motor Skill Proficiency From Childhood to Adolescence" by Lisa Barrett, the evidence for gender-based motor skills is apparent. In general, boys are more skillful in object control and object manipulation skills. These tasks include throwing, kicking, and catching skills. Testing of these skills concluded that boys perform better at these tasks. 
There was no evidence of a difference in locomotor skill between the genders, but both improved with physical activity interventions. Overall, the predominance of development was on balance skills (gross motor) in boys and manual skills (fine motor) in girls. Components of development Growth – increase in the size of the body or its parts as the individual progresses toward maturity (quantitative structural changes) Maturation – refers to qualitative changes that enable one to progress to higher levels of functioning; it is primarily innate Experience or learning – refers to factors within the environment that may alter or modify the appearance of various developmental characteristics through the process of learning Adaptation – refers to the complex interplay or interaction between forces within the individual (nature) and the environment (nurture) Influences on development Stress and arousal – stress and anxiety are the result of an imbalance between the demand of a task and the capacity of the individual. In this context, arousal defines the amount of interest in the skill. The optimal performance level is moderate stress or arousal. Fatigue – the deterioration of performance when a stressful task is continued for a long time, similar to the muscular fatigue experienced when exercising rapidly or over a long period. Fatigue is caused by over-arousal. Fatigue impacts an individual in many ways: perceptual changes in which visual acuity or awareness drops, slowing of performance (reaction times or movement speed), irregularity of timing, and disorganization of performance. A study conducted by Meret Branscheidt concluded that fatigue interferes with the learning of new motor skills. In the experiment, participants were split into two different groups. One group worked the muscles in their hands until they were physically fatigued and then had to learn a new motor task, while the second group learned the task without being fatigued. Those that were fatigued had a harder time learning these new motor skills compared to those who were not. Even in the days following, after the fatigue had subsided, they still had difficulty learning those same tasks. Vigilance – the ability to maintain attention over time and respond appropriately to relevant stimuli. When vigilance is lost, it can result in slower responses or the failure to respond to stimuli altogether. Some tasks include actions that require little work and high attention. Gender – gender plays an important role in the development of the child. Girls are more likely to be seen performing fine stationary visual motor-skills, whereas boys predominantly exercise object-manipulation skills. In research on motor development in preschool-aged children, girls were more likely to be seen performing skills such as skipping, hopping, or skills using only the hands. Boys were seen to perform gross skills such as kicking or throwing a ball or swinging a bat. There are gender-specific differences in qualitative throwing performance, but not necessarily in quantitative throwing performance. Male and female athletes demonstrated similar movement patterns in humerus and forearm actions but differed in trunk, stepping, and backswing actions. Stages of motor learning Motor learning is a change resulting from practice. It often involves improving the accuracy of movements both simple and complex as one's environment changes. 
Motor learning is a relatively permanent skill as the capability to respond appropriately is acquired and retained. The stages of motor learning are the cognitive phase, the associative phase, and the autonomous phase. Cognitive phase – When a learner is new to a specific task, the primary thought process starts with, "What needs to be done?" Considerable cognitive activity is required so that the learner can determine appropriate strategies to adequately reflect the desired goal. Good strategies are retained and inefficient strategies are discarded. The performance is greatly improved in a short amount of time. Associative phase – The learner has determined the most effective way to do the task and starts to make subtle adjustments in performance. Improvements are more gradual and movements become more consistent. This phase can last for a long time. The skills in this phase are fluent, efficient, and aesthetically pleasing. Autonomous phase – This phase may take several months to years to reach. The phase is dubbed "autonomous" because the performer can now "automatically" complete the task without having to pay any attention to performing it. Examples include walking and talking or sight reading while doing simple arithmetic. Law of effect Motor-skill acquisition has long been defined in the scientific community as an energy-intensive form of stimulus-response (S-R) learning that results in robust neuronal modifications. In 1898, Edward Thorndike proposed the law of effect, which states that the association between some action (R) and some environmental condition (S) is enhanced when the action is followed by a satisfying outcome (O). For instance, if an infant moves his right hand and left leg in just the right way, he can perform a crawling motion, thereby producing the satisfying outcome of increasing his mobility. Because of the satisfying outcome, the association between being on all fours and these particular arm and leg motions is enhanced. Further, a dissatisfying outcome weakens the S-R association. For instance, when a toddler contracts certain muscles, resulting in a painful fall, the child will decrease the association between these muscle contractions and the environmental condition of standing on two feet. Feedback During the learning process of a motor skill, feedback is the positive or negative response that tells the learner how well the task was completed. Inherent feedback: after completing the skill, inherent feedback is the sensory information that tells the learner how well the task was completed. A basketball player will note that he or she made a mistake when the ball misses the hoop. Another example is a diver knowing that a mistake was made when the entry into the water is painful and undesirable. Augmented feedback: in contrast to inherent feedback, augmented feedback is information that supplements or "augments" the inherent feedback. For example, consider a person who is driving over the speed limit and is pulled over by the police. Although the car has done no harm, the police officer gives augmented feedback to the driver so that he will drive more safely. Another example is a private tutor for a new student in a field of study. Augmented feedback decreases the amount of time to master the motor skill and increases the performance level of the learner. Transfer of motor skills: the gain or loss in the capability for performance in one task as a result of practice and experience on some other task. 
An example would be the comparison of the initial skill of a tennis player and a non-tennis player when playing table tennis for the first time. An example of negative transfer is an experienced typist taking longer than a new typist to adjust to a keyboard with randomly reassigned letters. Retention: the performance level of a particular skill after a period of no use. The type of task can have an effect on how well the motor skill is retained after a period of non-use: Continuous tasks – activities like swimming, bicycling, or running; the performance level retains proficiency even after years of non-use. Discrete tasks – an instrument, video game, or a sport; the performance level drops significantly but will remain better than that of a new learner. The relationship between the two tasks is that continuous tasks usually use gross motor skills and discrete tasks use fine motor skills. Brain structures The regions of the frontal lobe responsible for motor skill include the primary motor cortex, the supplemental motor area, and the premotor cortex. The primary motor cortex is located in the precentral gyrus and is often visualized as the motor homunculus. By stimulating certain areas of the motor strip and observing where the stimulation had an effect, Penfield and Rasmussen were able to map out the motor homunculus. Areas on the body that have complex movements, such as the hands, have a bigger representation on the motor homunculus. The supplemental motor area, which is just anterior to the primary motor cortex, is involved with postural stability and adjustment as well as coordinating sequences of movement. The premotor cortex, which is just below the supplemental motor area, integrates sensory information from the posterior parietal cortex and is involved with the sensory-guided planning of movement and begins the programming of movement. The basal ganglia are an area of the brain where gender differences in brain physiology are evident. The basal ganglia are a group of nuclei in the brain that are responsible for a variety of functions, some of which include movement. The globus pallidus and putamen are two nuclei of the basal ganglia which are both involved in motor skills. The globus pallidus is involved with voluntary motor movement, while the putamen is involved with motor learning. Even after controlling for the naturally larger volume of the male brain, it was found that males have a larger volume of both the globus pallidus and putamen. The cerebellum is an additional area of the brain important for motor skills. The cerebellum controls fine motor skills as well as balance and coordination. Although women tend to have better fine motor skills, the cerebellum has a larger volume in males than in females, even after correcting for the fact that males naturally have a larger brain volume. Hormones are an additional factor that contributes to gender differences in motor skill. For instance, women perform better on manual dexterity tasks during times of high estradiol and progesterone levels, as opposed to when these hormones are low such as during menstruation. An evolutionary perspective is sometimes drawn upon to explain how gender differences in motor skills may have developed, although this approach is controversial. For instance, it has been suggested that men were the hunters and provided food for the family, while women stayed at home taking care of the children and doing domestic work. 
Some theories of human development suggest that men's tasks involved gross motor skills such as chasing after prey, throwing spears, and fighting. Women, on the other hand, used their fine motor skills the most in order to handle domestic tools and accomplish other tasks that required fine motor control. See also Muscle memory Motor control Motor skill consolidation Motor system Sensorimotor stage References External links Section about motor learning and control in the Wikibook "Stuttering" What's the difference between fine motor and gross motor skills? Motor control Skills
Motor skill
[ "Biology" ]
3,460
[ "Behavior", "Motor skills", "Motor control" ]
233,195
https://en.wikipedia.org/wiki/Air%20compressor
An air compressor is a machine that takes ambient air from the surroundings and discharges it at a higher pressure. It is an application of a gas compressor and a pneumatic device that converts mechanical power (from an electric motor, diesel or gasoline engine, etc.) into potential energy stored in compressed air, which has many uses. A common application is to compress air into a storage tank, for immediate or later use. When the delivery pressure reaches its set upper limit, the compressor is shut off, or the excess air is released through an overpressure valve. The compressed air is stored in the tank until it is needed. The pressure energy of the compressed air can be used for a variety of applications, such as powering pneumatic tools, as it is released. When tank pressure reaches its lower limit, the air compressor turns on again and re-pressurizes the tank. A compressor is different from a pump because it works on a gas, while pumps work on a liquid. Classification Power source Internal combustion engine: Petrol, petrol without oil, diesel Electric: AC, DC Drive type Direct drive Belt drive Compressors may be classified according to the pressure delivered: Low-pressure air compressors, which have a discharge pressure of about 150 psi (10 bar) or less Medium-pressure compressors, which have a discharge pressure of about 151 to 1,000 psi (10.4 to 69 bar) High-pressure air compressors, which have a discharge pressure above about 1,000 psi (69 bar) There are numerous methods of air compression, divided into either positive-displacement or roto-dynamic types. Single-stage reciprocating compressor Multi-stage reciprocating compressor Single stage rotary-screw compressor Two-stage rotary screw compressor Rotary vane pump Scroll compressor Centrifugal (roto-dynamic or turbo) compressor Axial compressor, often used in jet engines. Another way of classifying compressors is by lubrication type: oil-lubricated and oil-free. Oil-free systems are more technically advanced and do not require oil for lubrication. Oil-free air compressors are also lighter and more portable than oil-lubricated models but require more maintenance. Oil-lubricated air compressors, on the other hand, are the more traditional type of air compressor. They require oil to lubricate the motor, which helps prolong the compressor's life. One of the benefits of oil-lubricated compressors is that they tend to be more durable and require less maintenance than oil-free compressors. Positive displacement compressors Positive-displacement compressors work by forcing air through a chamber whose volume is decreased to compress the air. Once the pressure is greater than the pressure outside the discharge valve, a port or valve opens and air is discharged into the outlet system from the compression chamber. Common types of positive displacement compressors are Piston-type air compressors, which compress air by pumping it through cylinders by reciprocating pistons. They use one-way valves to admit air into the cylinder on the induction stroke and prevent it from leaving by the same route; on the compression stroke, the air passes out of the cylinder through the exhaust valve to the high-pressure side, again via a non-return valve that prevents it from leaking back on the next induction stroke. Piston compressors can be single or multi-stage, and may also have one or more sets of cylinders in parallel (at the same pressure). Multi-stage compressors provide greater efficiency than their single-stage counterparts for high compression ratios, and generally use interstage cooling to improve efficiency. 
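As a rough numerical illustration of why multi-stage compression with interstage cooling improves efficiency, the following sketch compares the ideal (isentropic) specific work of single-stage and two-stage compression of air over the same overall pressure ratio. It treats air as a perfect gas with constant properties and assumes ideal intercooling back to the intake temperature; the chosen pressure ratio and intake temperature are illustrative only, not values for any particular compressor.

```python
# Ideal (isentropic) compression of air treated as a perfect gas.
# Illustrative sketch only; real compressors have additional losses.
GAMMA = 1.4                      # ratio of specific heats for air
R = 287.0                        # specific gas constant for air, J/(kg*K)
CP = GAMMA * R / (GAMMA - 1)     # ~1004.5 J/(kg*K)

def isentropic_outlet_temp(t_in_k: float, pressure_ratio: float) -> float:
    """Outlet temperature after isentropic compression over the given pressure ratio."""
    return t_in_k * pressure_ratio ** ((GAMMA - 1) / GAMMA)

def specific_work(t_in_k: float, pressure_ratio: float) -> float:
    """Ideal specific compression work in J/kg (= cp * temperature rise)."""
    return CP * (isentropic_outlet_temp(t_in_k, pressure_ratio) - t_in_k)

t_intake = 293.0          # 20 degrees C intake
overall_ratio = 9.0       # e.g. roughly 1 bar absolute up to 9 bar absolute

# Single stage: one compression over the full ratio.
w_single = specific_work(t_intake, overall_ratio)

# Two stages with ideal intercooling back to intake temperature; equal stage
# ratios (the square root of the overall ratio) minimise work in this model.
stage_ratio = overall_ratio ** 0.5
w_two_stage = 2 * specific_work(t_intake, stage_ratio)

print(f"single stage: {w_single / 1000:.0f} kJ/kg, "
      f"outlet {isentropic_outlet_temp(t_intake, overall_ratio):.0f} K")
print(f"two stages:   {w_two_stage / 1000:.0f} kJ/kg with intercooling")
```

Under these assumptions the two-stage arrangement needs roughly 15% less work for the same delivery pressure, and each stage runs much cooler, which is the motivation for interstage cooling.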
The capacities for both single-stage and two-stage compressors are generally specified in standard cubic feet per minute (SCFM) or litres per minute, and pounds per square inch (PSI) or bar. To a lesser extent, some compressors are rated in actual cubic feet per minute (ACFM). Still others are rated in cubic feet per minute (CFM). Using CFM alone to rate a compressor is ambiguous because it represents a flow rate without a pressure reference, for example 20 CFM at 60 PSI. Single-stage compressors usually fall into the fractional through 5 horsepower range. Two-stage compressors normally fall into the 5 through 30 horsepower range. Rotary screw compressors provide positive-displacement compression by matching two helical screws that, when turned, guide air into a chamber, whose volume is decreased as the screws turn. Rotary screw compressors can be single-stage or two-stage. Vane compressors use a slotted rotor with varied blade placement to guide air into a chamber and compress the volume. This type of compressor delivers a fixed volume of air at high pressures. Roto-dynamic or turbo compressors Roto-dynamic air compressors include centrifugal compressors, where rotating vanes impart kinetic energy to a gas and stationary passages convert velocity into a rise in pressure, and axial compressors, where rotor blades impart the kinetic energy and stator blades convert it to a rise in pressure. Cooling Due to adiabatic heating, air compressors require some method of disposing of waste heat. Generally this is some form of air- or water-cooling, although some (particularly rotary type) compressors may be cooled by oil (that is then in turn air- or water-cooled). Atmospheric conditions are also considered in the cooling of compressors. The type of cooling is determined by considering factors such as inlet temperature, ambient temperature, power of the compressor and area of application. No single type of compressor suits every application. Applications Air compressors have many uses, such as supplying clean high-pressure air to fill gas cylinders, supplying clean moderate-pressure air to a submerged surface-supplied air diver, supplying moderate-pressure clean air for driving some office and school building pneumatic HVAC control system valves, supplying a large amount of moderate-pressure air to power pneumatic tools, such as jackhammers, filling high pressure air tanks (HPA, air tank), for filling tires, and to produce large volumes of moderate-pressure air for large-scale industrial processes (such as oxidation for petroleum coking or cement plant bag house purge systems). Air compressors are also widely used in oil and gas, mining and drilling applications as the flushing medium, aerating muds in underbalanced drilling and in air pigging of pipelines. Most air compressors are either reciprocating piston, rotary vane, or rotary screw types. Centrifugal compressors are common in very large applications, while rotary screw, scroll, and reciprocating air compressors are favored for small and medium-sized applications. Power source Air compressors are designed to utilize a variety of power sources. While direct-drive gasoline or diesel engines and electric motors are among the most popular, air compressors that utilize vehicle engines, power take-off, or hydraulic ports are also commonly used in mobile applications. The power of a compressor is measured in HP (horsepower) and CFM (cubic feet per minute of intake air). 
The volume of the pressure vessel and the stored pressure indicate the volume of compressed air (in reserve) available. Gasoline and diesel-powered compressors are widely used in remote areas with problematic access to electricity. They are noisy and require ventilation for exhaust gases, particularly if the compressed air is to be used for a breathing air supply. Electric-powered compressors are widely used in production, workshops and garages with permanent access to electricity. Common workshop/garage compressors are 110–120 volt or 230–240 volt. Compressor tank shapes include "pancake", "twin tank", "horizontal", and "vertical". Depending on size and purpose, compressors can be stationary or portable. Maintenance To ensure all compressor types run efficiently with no leaks, it is necessary to perform routine maintenance. The cost of maintenance only accounts for 8% of the life cycle cost of owning an air compressor. Air compressor isentropic efficiency According to CAGI air compressor performance verification data sheets, the higher the isentropic efficiency, the greater the energy saving. The best air compressors have reached an isentropic efficiency of about 95%. Approximately 70–80% of an air compressor's total lifetime cost is energy consumption, so using a high-efficiency air compressor is one of the main energy-saving measures. See also Vacuum pump Free-piston engine Gas compressor Pneumatics Gas cylinder "The Blue Air Compressor" References Gas compressors Gases Diving support equipment Gas technologies Industrial gases
Air compressor
[ "Physics", "Chemistry" ]
1,728
[ "Matter", "Turbomachinery", "Gas compressors", "Phases of matter", "Industrial gases", "Chemical process engineering", "Statistical mechanics", "Gases" ]
233,281
https://en.wikipedia.org/wiki/Cementite
Cementite (or iron carbide) is a compound of iron and carbon, more precisely an intermediate transition metal carbide with the formula Fe3C. By weight, it is 6.67% carbon and 93.3% iron. It has an orthorhombic crystal structure. It is a hard, brittle material, normally classified as a ceramic in its pure form, and is a frequently found and important constituent in ferrous metallurgy. While cementite is present in most steels and cast irons, it is produced as a raw material in the iron carbide process, which belongs to the family of alternative ironmaking technologies. The name cementite originated from the theory of Floris Osmond and J. Werth, in which the structure of solidified steel consists of a kind of cellular tissue, with ferrite as the nucleus and Fe3C the envelope of the cells. The carbide therefore cemented the iron. Metallurgy In the iron–carbon system (i.e. plain-carbon steels and cast irons) it is a common constituent because ferrite can contain at most 0.02 wt% of uncombined carbon. Therefore, in carbon steels and cast irons that are slowly cooled, a portion of the carbon is in the form of cementite. Cementite forms directly from the melt in the case of white cast iron. In carbon steel, cementite precipitates from austenite as austenite transforms to ferrite on slow cooling, or from martensite during tempering. An intimate mixture with ferrite, the other product of austenite, forms a lamellar structure called pearlite. While cementite is thermodynamically unstable, eventually being converted to austenite (low carbon level) and graphite (high carbon level) at higher temperatures, it does not decompose on heating at temperatures below the eutectoid temperature (723 °C) on the metastable iron-carbon phase diagram. Mechanical properties are as follows: room-temperature microhardness 760–1350 HV; bending strength 4.6–8 GPa; Young's modulus 160–180 GPa; indentation fracture toughness 1.5–2.7 MPa√m. The morphology of cementite plays a critical role in the kinetics of phase transformations in steel. The coiling temperature and cooling rate significantly affect cementite formation. At lower coiling temperatures, cementite forms fine pearlitic colonies, whereas at higher temperatures, it precipitates as coarse particles at grain boundaries. This morphological difference influences the rate of austenite formation and decomposition, with fine cementite promoting faster transformations due to its increased surface area and the proximity of the carbide-ferrite interface. Furthermore, the dissolution kinetics of cementite during annealing are slower for coarse carbides, impacting the microstructural evolution during heat treatments. Pure form Cementite changes from ferromagnetic to paramagnetic upon heating through its Curie temperature. A natural iron carbide (containing minor amounts of nickel and cobalt) occurs in iron meteorites and is called cohenite after the German mineralogist Emil Cohen, who first described it. Other iron carbides There are other forms of metastable iron carbides that have been identified in tempered steel and in the industrial Fischer–Tropsch process. These include epsilon (ε) carbide, hexagonal close-packed Fe2–3C, which precipitates in plain-carbon steels of carbon content > 0.2%, tempered at 100–200 °C. Non-stoichiometric ε-carbide dissolves above ~200 °C, where Hägg carbides and cementite begin to form. Hägg carbide, monoclinic Fe5C2, precipitates in hardened tool steels tempered at 200–300 °C. 
It has also been found naturally as the mineral edscottite in the Wedderburn meteorite. References Bibliography External links Crystal structure of cementite at NRL Iron compounds Carbides Metallurgy Iron
Cementite
[ "Chemistry", "Materials_science", "Engineering" ]
856
[ "Metallurgy", "Materials science", "nan" ]
233,487
https://en.wikipedia.org/wiki/Tachometer
A tachometer (revolution-counter, tach, rev-counter, RPM gauge) is an instrument measuring the rotation speed of a shaft or disk, as in a motor or other machine. The device usually displays the revolutions per minute (RPM) on a calibrated analogue dial, but digital displays are increasingly common. The word comes from the Greek τάχος (táchos, "speed") and μέτρον (métron, "measure"). Essentially the words tachometer and speedometer have identical meaning: a device that measures speed. It is by arbitrary convention that in the automotive world one is used for engine revolutions and the other for vehicle speed. In formal engineering nomenclature, more precise terms are used to distinguish the two. History The first tachometer was described by Bryan Donkin in a paper to the Royal Society of Arts in 1810 for which he was awarded the Gold medal of the society. This consisted of a bowl of mercury constructed in such a way that centrifugal force caused the level in a central tube to fall when it rotated and brought down the level in a narrower tube above filled with coloured spirit. The bowl was connected to the machinery to be measured by pulleys. The first mechanical tachometers were based on measuring the centrifugal force, similar to the operation of a centrifugal governor. The inventor is assumed to be the German engineer Dietrich Uhlhorn; he used it for measuring the speed of machines in 1817. Since 1840, it has been used to measure the speed of locomotives. In automobiles, trucks, tractors and aircraft Tachometers or revolution counters on cars, aircraft, and other vehicles show the rate of rotation of the engine's crankshaft, and typically have markings indicating a safe range of rotation speeds. This can assist the driver in selecting appropriate throttle and gear settings for the driving conditions. Prolonged use at high speeds may cause inadequate lubrication, overheating (exceeding capability of the cooling system), or exceeding the speed capability of sub-parts of the engine (for example spring-retracted valves), thus causing excessive wear or permanent damage or failure of engines. On analogue tachometers, speeds above maximum safe operating speed are typically indicated by an area of the gauge marked in red, giving rise to the expression of "redlining" an engine — revving the engine up to the maximum safe limit. Most modern cars typically have a revolution limiter which electronically limits engine speed to prevent damage. Diesel engines with traditional mechanical injector systems have an integral governor which prevents over-speeding the engine, so the tachometers in vehicles and machinery fitted with such engines sometimes lack a redline. In vehicles such as tractors and trucks, the tachometer often has other markings, usually a green arc showing the speed range in which the engine produces maximum torque, which is of prime interest to operators of such vehicles. Tractors fitted with a power take-off (PTO) system have tachometers showing the engine speed needed to rotate the PTO at the standardized speed required by most PTO-driven implements. In many countries, tractors are required to have a speedometer for use on a road. To save fitting a second dial, the vehicle's tachometer is often marked with a second scale in units of speed. This scale is only accurate in a certain gear, but since many tractors only have one gear that is practical for use on-road, this is sufficient. Tractors with multiple 'road gears' often have tachometers with more than one speed scale. Aircraft tachometers have a green arc showing the engine's designed cruising speed range. 
In older vehicles, the tachometer is driven by the RMS voltage waves from the low tension (LT contact breaker) side of the ignition coil, while on others (and nearly all diesel engines, which have no ignition system) engine speed is determined by the frequency from the alternator tachometer output. This is from a special connection called an "AC tap", which is a connection to one of the stator's coil outputs, before the rectifier. Tachometers driven by a rotating cable from a drive unit fitted to the engine (usually on the camshaft) exist, usually on simple diesel-engined machinery with basic or no electrical systems. On recent EMS found on modern vehicles, the signal for the tachometer is usually generated from an ECU which derives the information from either the crankshaft or camshaft speed sensor. Traffic engineering Tachometers are used to estimate traffic speed and volume (flow). A vehicle is equipped with the sensor and conducts "tach runs" which record the traffic data. These data are a substitute or complement to loop detector data. To get statistically significant results requires a high number of runs, and bias is introduced by the time of day, day of week, and the season. However, because of the expense, spacing (a lower density of loop detectors diminishes data accuracy), and relatively low reliability of loop detectors (often 30% or more are out of service at any given time), tach runs remain a common practice. In trains and light rail vehicles Speed sensing devices, termed variously "wheel impulse generators" (WIG), pulse generators, speed probes, or tachometers are used extensively in rail vehicles. Common types include opto-isolator slotted disk sensors and Hall effect sensors. Hall effect sensors typically use a rotating target attached to a wheel, gearbox or motor. This target may contain magnets, or it may be a toothed wheel. The teeth on the wheel vary the flux density of a magnet inside the sensor head. The probe is mounted with its head a precise distance from the target wheel and detects the teeth or magnets passing its face. One problem with this system is that the necessary air gap between the target wheel and the sensor allows ferrous dust from the vehicle's underframe to build up on the probe or target, inhibiting its function. Opto-isolator sensors are completely encased to prevent ingress from the outside environment. The only exposed parts are a sealed plug connector and a drive fork, which is attached to a slotted disk internally through a bearing and seal. The slotted disk is typically sandwiched between two circuit boards containing a photo-diode, photo-transistor, amplifier, and filtering circuits which produce a square wave pulse train output customized to the customer's voltage and pulses-per-revolution requirements. These types of sensors typically provide 2 to 8 independent channels of output that can be sampled by other systems in the vehicle such as automatic train control systems and propulsion/braking controllers. The sensors mounted around the circumference of the disk provide quadrature encoded outputs and thus allow the vehicle's computer to determine the direction of rotation of the wheel. This is a legal requirement in Switzerland to prevent rollback when starting from standstill. Strictly, such devices are not tachometers since they do not provide a direct reading of the rotational speed of the disk. The speed has to be derived externally by counting the number of pulses in a time period. 
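As a minimal sketch of how speed can be derived externally from such a pulse train, the following example converts a pulse count over a sampling interval into linear speed, given the sensor's pulses per revolution and the wheel diameter; it also sketches the wheel-diameter calibration against a manually measured master wheel described in the following paragraph. The function names, the pulses-per-revolution figure, and the sample numbers are illustrative assumptions, not values from any particular rail system.

```python
import math

def speed_from_pulses(pulse_count: int, interval_s: float,
                      pulses_per_rev: int, wheel_diameter_m: float) -> float:
    """Estimate linear speed in m/s from pulses counted over one sampling interval."""
    revolutions = pulse_count / pulses_per_rev
    distance_m = revolutions * math.pi * wheel_diameter_m
    return distance_m / interval_s

def calibrate_diameter(master_diameter_m: float, master_rotations: float,
                       wheel_rotations: float) -> float:
    """Infer a wheel's diameter from rotation counts over the same travelled distance.

    Because every wheel covers the same distance, diameter scales inversely
    with the number of rotations relative to the manually measured master wheel.
    """
    return master_diameter_m * master_rotations / wheel_rotations

# Example: 100 pulses per revolution, 0.92 m wheel, 70 pulses counted in 0.1 s
v = speed_from_pulses(70, 0.1, 100, 0.92)
print(f"{v:.1f} m/s ({v * 3.6:.0f} km/h)")        # ~20.2 m/s (~73 km/h)

# Example: a wheel turning 1% more often than the master is about 1% smaller
print(f"{calibrate_diameter(0.920, 1000.0, 1010.0):.3f} m")   # ~0.911 m
```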
It is difficult to prove conclusively that the vehicle is stationary, other than by waiting a certain time to ensure that no further pulses occur. This is one reason why there is often a time delay between the train stopping, as perceived by a passenger, and the doors being released. Slotted-disk devices are typical sensors used in odometer systems for rail vehicles, such as are required for train protection systems — notably the European Train Control System. As well as speed sensing, these probes are often used to calculate distance travelled by multiplying wheel rotations by wheel circumference. They can be used to automatically calibrate wheel diameter by comparing the number of rotations of each axle against a master wheel that has been measured manually. Since all wheels travel the same distance, the diameter of each wheel is inversely proportional to its number of rotations relative to the master wheel. This calibration must be done while coasting at a fixed speed to eliminate the possibility of wheel slip/slide introducing errors into the calculation. Automatic calibration of this type is used to generate more accurate traction and braking signals, and to improve wheel slip detection. A weakness of systems that rely on wheel rotation for tachometry and odometry is that the train wheels and the rails are very smooth and the friction between them is low, leading to high error rates if the wheels slip or slide. To compensate for this, secondary odometry inputs employ Doppler radar units beneath the train to measure speed independently. In analogue audio recording In analogue audio recording, a tachometer is a device that measures the speed of audiotape as it passes across the head. On most audio tape recorders the tachometer (or simply "tach") is a relatively large spindle near the ERP head stack, isolated from the feed and take-up spindles by tension idlers. On many recorders the tachometer spindle is connected by an axle to a rotating magnet that induces a changing magnetic field upon a Hall effect transistor. Other systems connect the spindle to a stroboscope, which alternates light and dark upon a photodiode. The tape recorder's drive electronics use signals from the tachometer to ensure that the tape is played at the proper speed. The signal is compared to a reference signal (either a quartz crystal or alternating current from the mains). The comparison of the two frequencies drives the speed of the tape transport. When the tach signal and the reference signal match, the tape transport is said to be "at speed." (To this day on film sets, the director calls "Roll sound!" and the sound man replies "Sound speed!" This is a vestige of the days when recording devices required several seconds to reach a regulated speed.) 
Tachometer signals can be used to synchronize several tape machines together, but only if in addition to the tach signal, a directional signal is transmitted, to tell slave machines in which direction the master is moving. See also List of auto parts List of vehicle instruments Redline Tachograph References Avionics Aircraft instruments Automotive technologies Vehicle parts Speed sensors Measuring instruments
Tachometer
[ "Technology", "Engineering" ]
2,209
[ "Vehicle parts", "Avionics", "Measuring instruments", "Aircraft instruments", "Speed sensors", "Components" ]
233,500
https://en.wikipedia.org/wiki/Scramjet
A scramjet (supersonic combustion ramjet) is a variant of a ramjet airbreathing jet engine in which combustion takes place in supersonic airflow. As in ramjets, a scramjet relies on high vehicle speed to compress the incoming air forcefully before combustion (hence ramjet), but whereas a ramjet decelerates the air to subsonic velocities before combustion using shock cones, a scramjet has no shock cone and slows the airflow using shockwaves produced by its ignition source in place of a shock cone. This allows the scramjet to operate efficiently at extremely high speeds. Although scramjet engines have been used in a handful of operational military vehicles, scramjets have so far mostly been demonstrated in research test articles and experimental vehicles. History Before 2000 The Bell X-1 attained supersonic flight in 1947 and, by the early 1960s, rapid progress toward faster aircraft suggested that operational aircraft would be flying at "hypersonic" speeds within a few years. Except for specialized rocket research vehicles like the North American X-15 and other rocket-powered spacecraft, aircraft top speeds have remained level, generally in the range of Mach1 to Mach3. During the US aerospaceplane program, between the 1950s and the mid 1960s, Alexander Kartveli and Antonio Ferri were proponents of the scramjet approach. In the 1950s and 1960s, a variety of experimental scramjet engines were built and ground tested in the US and the UK. Antonio Ferri successfully demonstrated a scramjet producing net thrust in November 1964, eventually producing 517 pounds-force (2.30 kN), about 80% of his goal. In 1958, an analytical paper discussed the merits and disadvantages of supersonic combustion ramjets. In 1964, Frederick S. Billig and Gordon L. Dugger submitted a patent application for a supersonic combustion ramjet based on Billig's PhD thesis. This patent was issued in 1981 following the removal of an order of secrecy. In 1981, tests were made in Australia under the guidance of Professor Ray Stalker in the T3 ground test facility at ANU. The first successful flight test of a scramjet was performed as a joint effort with NASA, over the Soviet Union in 1991. It was an axisymmetric hydrogen-fueled dual-mode scramjet developed by Central Institute of Aviation Motors (CIAM), Moscow in the late 1970s, but modernized with a FeCrAl alloy on a converted SM-6 missile to achieve initial flight parameters of Mach 6.8, before the scramjet flew at Mach 5.5. The scramjet flight was flown captive-carry atop the SA-5 surface-to-air missile that included an experimental flight support unit known as the "Hypersonic Flying Laboratory" (HFL), "Kholod". Then, from 1992 to 1998, an additional six flight tests of the axisymmetric high-speed scramjet-demonstrator were conducted by CIAM together with France and then with NASA. Maximum flight speed greater than Mach6.4 was achieved and scramjet operation during 77 seconds was demonstrated. These flight test series also provided insight into autonomous hypersonic flight controls. 2000s In the 2000s, significant progress was made in the development of hypersonic technology, particularly in the field of scramjet engines. The HyShot project demonstrated scramjet combustion on 30 July 2002. The scramjet engine worked effectively and demonstrated supersonic combustion in action. However, the engine was not designed to provide thrust to propel a craft. It was designed more or less as a technology demonstrator. 
A joint British and Australian team from UK defense company QinetiQ and the University of Queensland were the first group to demonstrate a scramjet working in an atmospheric test. Hyper-X claimed the first flight of a thrust-producing scramjet-powered vehicle with full aerodynamic maneuvering surfaces in 2004 with the X-43A. The last of the three X-43A scramjet tests achieved Mach9.6 for a brief time. On 15 June 2007, the US Defense Advanced Research Project Agency (DARPA), in cooperation with the Australian Defence Science and Technology Organisation (DSTO), announced a successful scramjet flight at Mach10 using rocket engines to boost the test vehicle to hypersonic speeds. A series of scramjet ground tests was completed at NASA Langley Arc-Heated Scramjet Test Facility (AHSTF) at simulated Mach8 flight conditions. These experiments were used to support HIFiRE flight 2. On 22 May 2009, Woomera hosted the first successful test flight of a hypersonic aircraft in HIFiRE (Hypersonic International Flight Research Experimentation). The launch was one of ten planned test flights. The series of flights is part of a joint research program between the Defence Science and Technology Organisation and the US Air Force, designated as the HIFiRE. HIFiRE is investigating hypersonics technology and its application to advanced scramjet-powered space launch vehicles; the objective is to support the new Boeing X-51 scramjet demonstrator while also building a strong base of flight test data for quick-reaction space launch development and hypersonic "quick-strike" weapons. 2010s On 22 and 23 March 2010, Australian and American defense scientists successfully tested a (HIFiRE) hypersonic rocket. It reached an atmospheric speed of "more than 5,000 kilometres per hour" (Mach4) after taking off from the Woomera Test Range in outback South Australia. On 27 May 2010, NASA and the United States Air Force successfully flew the X-51A Waverider for approximately 200 seconds at Mach5, setting a new world record for flight duration at hypersonic airspeed. The Waverider flew autonomously before losing acceleration for an unknown reason and destroying itself as planned. The test was declared a success. The X-51A was carried aboard a B-52, accelerated to Mach4.5 via a solid rocket booster, and then ignited the Pratt & Whitney Rocketdyne scramjet engine to reach Mach5. However, a second flight on 13 June 2011 was ended prematurely when the engine lit briefly on ethylene but failed to transition to its primary JP-7 fuel, failing to reach full power. On 16 November 2010, Australian scientists from the University of New South Wales at the Australian Defence Force Academy successfully demonstrated that the high-speed flow in a naturally non-burning scramjet engine can be ignited using a pulsed laser source. A further X-51A Waverider test failed on 15 August 2012. The attempt to fly the scramjet for a prolonged period at Mach6 was cut short when, only 15 seconds into the flight, the X-51A craft lost control and broke apart, falling into the Pacific Ocean north-west of Los Angeles. The cause of the failure was blamed on a faulty control fin. In May 2013, an X-51A Waverider reached 4828 km/h (Mach3.9) during a three-minute flight under scramjet power. The WaveRider was dropped from a B-52 bomber, and then accelerated to Mach4.8 by a solid rocket booster which then separated before the WaveRider's scramjet engine came into effect. 
On 28 August 2016, the Indian space agency ISRO conducted a successful test of a scramjet engine on a two-stage, solid-fueled rocket. Twin scramjet engines were mounted on the back of the second stage of a two-stage, solid-fueled sounding rocket called Advanced Technology Vehicle (ATV), which is ISRO's advanced sounding rocket. The twin scramjet engines were ignited during the second stage of the rocket when the ATV achieved a speed of 7350 km/h (Mach6) at an altitude of 20 km. The scramjet engines were fired for a duration of about 5 seconds. On 12 June 2019, India successfully conducted the maiden flight test of its indigenously developed uncrewed scramjet demonstration aircraft for hypersonic speed flight from a base from Abdul Kalam Island in the Bay of Bengal at about 11:25 am. The aircraft is called the Hypersonic Technology Demonstrator Vehicle. The trial was carried out by the Defence Research and Development Organisation. The aircraft forms an important component of the country's programme for development of a hypersonic cruise missile system. 2020s On 27 September 2021, DARPA announced successful flight of its Hypersonic Air-breathing Weapon Concept scramjet cruise missile. Another successful test was carried out in mid-March 2022 amid the Russian invasion of Ukraine. Details were kept secret to avoid escalating tension with Russia, only to be revealed by an unnamed Pentagon official in early April. Design principles Scramjet engines are a type of jet engine, and rely on the combustion of fuel and an oxidizer to produce thrust. Similar to conventional jet engines, scramjet-powered aircraft carry the fuel on board, and obtain the oxidizer by the ingestion of atmospheric oxygen (as compared to rockets, which carry both fuel and an oxidizing agent). This requirement limits scramjets to suborbital atmospheric propulsion, where the oxygen content of the air is sufficient to maintain combustion. The scramjet is composed of three basic components: a converging inlet, where incoming air is compressed; a combustor, where gaseous fuel is burned with atmospheric oxygen to produce heat; and a diverging nozzle, where the heated air is accelerated to produce thrust. Unlike a typical jet engine, such as a turbojet or turbofan engine, a scramjet does not use rotating, fan-like components to compress the air; rather, the achievable speed of the aircraft moving through the atmosphere causes the air to compress within the inlet. As such, no moving parts are needed in a scramjet. In comparison, typical turbojet engines require multiple stages of rotating compressor rotors, and multiple rotating turbine stages, all of which add weight, complexity, and a greater number of failure points to the engine. Due to the nature of their design, scramjet operation is limited to near-hypersonic velocities. As they lack mechanical compressors, scramjets require the high kinetic energy of a hypersonic flow to compress the incoming air to operational conditions. Thus, a scramjet-powered vehicle must be accelerated to the required velocity (usually about Mach4) by some other means of propulsion, such as turbojet, or rocket engines. In the flight of the experimental scramjet-powered Boeing X-51A, the test craft was lifted to flight altitude by a Boeing B-52 Stratofortress before being released and accelerated by a detachable rocket to near Mach4.5. In May 2013, another flight achieved an increased speed of Mach5.1. 
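Because a scramjet has no mechanical compressor, all of its compression must come from the vehicle's own speed. A rough feel for how much compression that provides can be had from the ideal isentropic stagnation relations for a calorically perfect gas (γ = 1.4). This is a first-order sketch, not a description of any particular inlet, and real inlets recover only a fraction of the ideal pressure rise; the Mach numbers chosen below are illustrative.

```python
# Minimal sketch: ideal ram compression available from flight speed alone,
# using isentropic stagnation relations for a calorically perfect gas
# (gamma = 1.4). Real inlets recover only part of this pressure, but the
# trend shows why no mechanical compressor is needed at high Mach numbers.

GAMMA = 1.4

def stagnation_ratios(mach):
    """Return (T0/T, p0/p) for a given flight Mach number."""
    t_ratio = 1.0 + 0.5 * (GAMMA - 1.0) * mach ** 2
    p_ratio = t_ratio ** (GAMMA / (GAMMA - 1.0))
    return t_ratio, p_ratio

for mach in (0.8, 2.0, 4.5, 6.0, 8.0):
    t_ratio, p_ratio = stagnation_ratios(mach)
    print(f"Mach {mach:>3}: T0/T = {t_ratio:6.2f}, p0/p = {p_ratio:10.1f}")
```

At the roughly Mach 4.5 hand-over speed mentioned above, the ideal ram pressure ratio is already in the hundreds, which is why no turbomachinery is needed; the accompanying temperature rise hints at the thermal problems discussed below.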
While scramjets are conceptually simple, actual implementation is limited by extreme technical challenges. Hypersonic flight within the atmosphere generates immense drag, and temperatures found on the aircraft and within the engine can be much greater than that of the surrounding air. Maintaining combustion in the supersonic flow presents additional challenges, as the fuel must be injected, mixed, ignited, and burned within milliseconds. While scramjet technology has been under development since the 1950s, only very recently have scramjets successfully achieved powered flight. Scramjets are designed to operate in the hypersonic flight regime, beyond the reach of turbojet engines, and, along with ramjets, fill the gap between the high efficiency of turbojets and the high speed of rocket engines. Turbomachinery-based engines, while highly efficient at subsonic speeds, become increasingly inefficient at transonic speeds, as the compressor rotors found in turbojet engines require subsonic speeds to operate. While the flow from transonic to low supersonic speeds can be decelerated to these conditions, doing so at supersonic speeds results in a tremendous increase in temperature and a loss in the total pressure of the flow. Around Mach3–4, turbomachinery is no longer useful, and ram-style compression becomes the preferred method. Ramjets use high-speed characteristics of air to literally 'ram' air through an inlet diffuser into the combustor. At transonic and supersonic flight speeds, the air upstream of the inlet is not able to move out of the way quickly enough, and is compressed within the diffuser before being diffused into the combustor. Combustion in a ramjet takes place at subsonic velocities, similar to turbojets but the combustion products are then accelerated through a convergent-divergent nozzle to supersonic speeds. As they have no mechanical means of compression, ramjets cannot start from a standstill, and generally do not achieve sufficient compression until supersonic flight. The lack of intricate turbomachinery allows ramjets to deal with the temperature rise associated with decelerating a supersonic flow to subsonic speeds. However, as speed rises, the internal energy of the flow after diffusor grows rapidly, so the relative addition of energy due to fuel combustion becomes lower, leading to decrease in efficiency of the engine. This leads to decrease in thrust generated by ramjets at higher speeds. Thus, to generate thrust at very high velocities, the rise of the pressure and temperature of the incoming air flow must be tightly controlled. In particular, this means that deceleration of the airflow to subsonic speed cannot be allowed. Mixing the fuel and air in this situation presents a considerable engineering challenge, compounded by the need to closely manage the speed of combustion while maximizing the relative increase of internal energy within the combustion chamber. Consequently, current scramjet technology requires the use of high-energy fuels and active cooling schemes to maintain sustained operation, often using hydrogen and regenerative cooling techniques. Theory All scramjet engines have an intake which compresses the incoming air, fuel injectors, a combustion chamber, and a divergent thrust nozzle. Sometimes engines also include a region which acts as a flame holder, although the high stagnation temperatures mean that an area of focused waves may be used, rather than a discrete engine part as seen in turbine engines. 
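The high stagnation temperatures just mentioned are also the reason ramjet-style subsonic combustion stops paying off at high flight Mach numbers. The sketch below estimates the combustor-entry temperature if the captured air were slowed to low subsonic speed; the ambient temperature of 220 K, the combustor-entry Mach number of 0.3 and the perfect-gas assumption (γ = 1.4) are illustrative choices made here, not figures from the article, and real-gas effects such as dissociation would lower the high-Mach values somewhat.

```python
# Minimal sketch: adiabatic combustor-entry temperature if the captured
# airflow is decelerated to a low subsonic Mach number (ramjet-style).
# Assumes ambient static temperature 220 K and a perfect gas, gamma = 1.4
# (both assumptions for illustration only).

GAMMA = 1.4
T_AMBIENT = 220.0        # K, typical high-altitude value (assumed)
M_COMBUSTOR = 0.3        # subsonic combustor-entry Mach number (assumed)

def static_temperature_after_deceleration(flight_mach):
    t0 = T_AMBIENT * (1.0 + 0.5 * (GAMMA - 1.0) * flight_mach ** 2)
    return t0 / (1.0 + 0.5 * (GAMMA - 1.0) * M_COMBUSTOR ** 2)

for mach in (3, 5, 6, 8, 10):
    t = static_temperature_after_deceleration(mach)
    print(f"Flight Mach {mach:>2}: combustor entry ~ {t:6.0f} K")
```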
Other engines use pyrophoric fuel additives, such as silane, to avoid flameout. An isolator between the inlet and combustion chamber is often included to improve the homogeneity of the flow in the combustor and to extend the operating range of the engine. Schlieren imaging by the University of Maryland determined that the fuel mixture controls compression by creating backpressure and shockwaves that slow and compress the air before ignition, much like the shock cone of a ramjet. The imaging showed that the higher the fuel flow and combustion, the more shockwaves formed ahead of the combustor, which slowed and compressed the air before ignition. A scramjet is reminiscent of a ramjet. In a typical ramjet, the supersonic inflow of the engine is decelerated at the inlet to subsonic speeds and then reaccelerated through a nozzle to supersonic speeds to produce thrust. This deceleration, which is produced by a normal shock, creates a total pressure loss which limits the upper operating point of a ramjet engine. For a scramjet, the kinetic energy of the freestream air entering the scramjet engine is largely comparable to the energy released by the reaction of the oxygen content of the air with a fuel (e.g. hydrogen). Thus the heat released from combustion at Mach 2.5 is around 10% of the total enthalpy of the working fluid. Depending on the fuel, the kinetic energy of the air and the potential combustion heat release will be equal at around Mach 8. Thus the design of a scramjet engine is as much about minimizing drag as maximizing thrust. This high speed makes the control of the flow within the combustion chamber more difficult. Since the flow is supersonic, no downstream influence propagates within the freestream of the combustion chamber. Throttling of the entrance to the thrust nozzle is not a usable control technique. In effect, a block of gas entering the combustion chamber must mix with fuel and have sufficient time for initiation and reaction, all the while traveling supersonically through the combustion chamber, before the burned gas is expanded through the thrust nozzle. This places stringent requirements on the pressure and temperature of the flow, and requires that the fuel injection and mixing be extremely efficient. Usable dynamic pressures lie in a limited range, where q = ½ρv², with q the dynamic pressure of the gas, ρ (rho) the density of the gas, and v the velocity of the gas. To keep the combustion rate of the fuel constant, the pressure and temperature in the engine must also be constant. This is problematic because the airflow control systems that would facilitate this are not physically possible in a scramjet launch vehicle due to the large speed and altitude range involved, meaning that it must travel at an altitude specific to its speed. Because air density reduces at higher altitudes, a scramjet must climb at a specific rate as it accelerates to maintain a constant air pressure at the intake. This optimal climb/descent profile is called a "constant dynamic pressure path". It is thought that scramjets might be operable up to an altitude of 75 km. Fuel injection and management are also potentially complex. One possibility would be that the fuel be pressurized to 100 bar by a turbopump, heated by the fuselage, sent through the turbine and accelerated to higher speeds than the air by a nozzle. The air and fuel streams are crossed in a comb-like structure, which generates a large interface. Turbulence due to the higher speed of the fuel leads to additional mixing.
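The "constant dynamic pressure path" described above can be illustrated with a short calculation: for each flight Mach number, find the altitude at which q = ½ρv² equals a chosen target. The exponential atmosphere (sea-level density 1.225 kg/m3, scale height 7.64 km), the constant 300 m/s speed of sound and the 50 kPa target are all assumptions made for this sketch; the article itself does not supply these values.

```python
# Minimal sketch of a "constant dynamic pressure path": for each flight
# Mach number, find the altitude where q = 0.5 * rho * v^2 equals a target
# value, using an isothermal exponential atmosphere. All numbers here are
# illustrative assumptions, not values from the article.

import math

RHO_0 = 1.225           # kg/m^3, sea-level density (assumed)
SCALE_HEIGHT = 7640.0   # m, atmospheric scale height (assumed)
SPEED_OF_SOUND = 300.0  # m/s, treated as constant for simplicity (assumed)
Q_TARGET = 50e3         # Pa, illustrative target dynamic pressure (assumed)

def altitude_for_constant_q(mach):
    v = mach * SPEED_OF_SOUND
    rho_needed = 2.0 * Q_TARGET / v ** 2       # density giving q = Q_TARGET
    return SCALE_HEIGHT * math.log(RHO_0 / rho_needed)

for mach in (5, 6, 8, 10, 12):
    h = altitude_for_constant_q(mach)
    print(f"Mach {mach:>2}: fly near {h/1000:5.1f} km to hold q = {Q_TARGET/1e3:.0f} kPa")
```

The output shows the vehicle climbing steadily as it accelerates, which is the behaviour the text describes.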
Complex fuels like kerosene need a long engine to complete combustion. The minimum Mach number at which a scramjet can operate is limited by the fact that the compressed flow must be hot enough to burn the fuel, and have pressure high enough that the reaction be finished before the air moves out the back of the engine. Additionally, to be called a scramjet, the compressed flow must still be supersonic after combustion. Here two limits must be observed: First, since when a supersonic flow is compressed it slows down, the level of compression must be low enough (or the initial speed high enough) not to slow the gas below Mach1. If the gas within a scramjet goes below Mach1 the engine will "choke", transitioning to subsonic flow in the combustion chamber. This effect is well known amongst experimenters on scramjets since the waves caused by choking are easily observable. Additionally, the sudden increase in pressure and temperature in the engine can lead to an acceleration of the combustion, leading to the combustion chamber exploding. Second, the heating of the gas by combustion causes the speed of sound in the gas to increase (and the Mach number to decrease) even though the gas is still travelling at the same speed. Forcing the speed of air flow in the combustion chamber under Mach1 in this way is called "thermal choking". It is clear that a pure scramjet can operate at Mach numbers of 6–8, but in the lower limit, it depends on the definition of a scramjet. There are engine designs where a ramjet transforms into a scramjet over the Mach3–6 range, known as dual-mode scramjets. In this range however, the engine is still receiving significant thrust from subsonic combustion of the ramjet type. The high cost of flight testing and the unavailability of ground facilities have hindered scramjet development. A large amount of the experimental work on scramjets has been undertaken in cryogenic facilities, direct-connect tests, or burners, each of which simulates one aspect of the engine operation. Further, vitiated facilities (with the ability to control air impurities), storage heated facilities, arc facilities and the various types of shock tunnels each have limitations which have prevented perfect simulation of scramjet operation. The HyShot flight test showed the relevance of the 1:1 simulation of conditions in the T4 and HEG shock tunnels, despite having cold models and a short test time. The NASA-CIAM tests provided similar verification for CIAM's C-16 V/K facility and the Hyper-X project is expected to provide similar verification for the Langley AHSTF, CHSTF, and HTT. Computational fluid dynamics has only recently reached a position to make reasonable computations in solving scramjet operation problems. Boundary layer modeling, turbulent mixing, two-phase flow, flow separation, and real-gas aerothermodynamics continue to be problems on the cutting edge of CFD. Additionally, the modeling of kinetic-limited combustion with very fast-reacting species such as hydrogen makes severe demands on computing resources. Reaction schemes are numerically stiff requiring reduced reaction schemes. Much of scramjet experimentation remains classified. Several groups, including the US Navy with the SCRAM engine between 1968 and 1974, and the Hyper-X program with the X-43A, have claimed successful demonstrations of scramjet technology. Since these results have not been published openly, they remain unverified and a final design method of scramjet engines still does not exist. 
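The thermal choking described above can be quantified with the classical Rayleigh-flow relation for frictionless heat addition in a constant-area duct. The sketch below is a textbook idealization (perfect gas, γ = 1.4, constant area, no friction), not a model of any particular engine, but it shows how little stagnation-temperature rise a supersonic combustor can absorb before the flow is driven to Mach 1.

```python
# Minimal sketch: Rayleigh flow (frictionless heat addition at constant
# area, perfect gas, gamma = 1.4). T0/T0* is the ratio of the local
# stagnation temperature to its choking value; its inverse is the maximum
# factor by which combustion may raise T0 before thermal choking occurs.

GAMMA = 1.4

def t0_over_t0_star(mach):
    num = (GAMMA + 1.0) * mach**2 * (2.0 + (GAMMA - 1.0) * mach**2)
    den = (1.0 + GAMMA * mach**2) ** 2
    return num / den

for mach in (1.5, 2.0, 2.5, 3.0, 4.0):
    margin = 1.0 / t0_over_t0_star(mach)
    print(f"Combustor entry Mach {mach}: T0 may rise by a factor of {margin:4.2f} "
          "before the flow chokes")
```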
The final application of a scramjet engine is likely to be in conjunction with engines which can operate outside the scramjet's operating range. Dual-mode scramjets combine subsonic combustion with supersonic combustion for operation at lower speeds, and rocket-based combined cycle (RBCC) engines supplement a traditional rocket's propulsion with a scramjet, allowing for additional oxidizer to be added to the scramjet flow. RBCCs offer a way to extend a scramjet's operating range to higher speeds or lower intake dynamic pressures than would otherwise be possible. Characteristics Aircraft Compared with other propulsion systems, a scramjet-powered aircraft does not have to carry oxygen; the absence of rotating parts makes it easier to manufacture than a turbojet; it has a higher specific impulse (change in momentum per unit of propellant) than a rocket engine, potentially providing between 1000 and 4000 seconds where a rocket typically provides around 450 seconds or less; and its higher speed could mean cheaper access to outer space in the future. Against these advantages stand difficult and expensive testing and development, and very high initial propulsion requirements. Unlike a rocket that quickly passes mostly vertically through the atmosphere or a turbojet or ramjet that flies at much lower speeds, a hypersonic airbreathing vehicle optimally flies a "depressed trajectory", staying within the atmosphere at hypersonic speeds. Because scramjets have only mediocre thrust-to-weight ratios, acceleration would be limited. Therefore, time in the atmosphere at supersonic speed would be considerable, possibly 15–30 minutes. Similar to a reentering space vehicle, heat insulation would be a formidable task, with protection required for a duration longer than that of a typical space capsule, although less than the Space Shuttle. New materials offer good insulation at high temperature, but they often sacrifice themselves in the process. Therefore, studies often plan on "active cooling", where coolant circulating throughout the vehicle skin prevents it from disintegrating. Often the coolant is the fuel itself, in much the same way that modern rockets use their own fuel and oxidizer as coolant for their engines. All cooling systems add weight and complexity to a launch system. The cooling of scramjets in this way may result in greater efficiency, as heat is added to the fuel prior to entry into the engine, but results in increased complexity and weight which ultimately could outweigh any performance gains. The performance of a launch system is complex and depends greatly on its weight. Normally craft are designed to maximise range, orbital radius or payload mass fraction for a given engine and fuel. This results in tradeoffs between the efficiency of the engine (takeoff fuel weight) and the complexity of the engine (takeoff dry weight), which can be expressed by requiring the empty mass fraction, the fuel mass fraction and the payload mass fraction to sum to one. Here the empty mass fraction represents the weight of the superstructure, tankage and engine; the fuel mass fraction represents the weight of fuel, oxidiser and any other materials which are consumed during the launch; and the initial mass ratio is the inverse of the payload mass fraction, which represents how much payload the vehicle can deliver to a destination. A scramjet increases the mass of the motor over a rocket, and decreases the mass of the fuel. It can be difficult to decide whether this will result in an increased payload mass fraction (which would be an increased payload delivered to a destination for a constant vehicle takeoff weight).
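The bookkeeping in the previous paragraph (empty, fuel and payload mass fractions summing to one) can be made concrete with a few invented numbers. The values below are illustrative assumptions only, chosen to show how easily the comparison can swing either way, as the next paragraph notes.

```python
# Minimal sketch of the mass-fraction bookkeeping described above:
# empty fraction + fuel fraction + payload fraction = 1. The numbers are
# purely illustrative assumptions (not from the article) and show how
# modest changes in the assumed engine weight flip the conclusion.

def payload_fraction(empty_fraction, fuel_fraction):
    payload = 1.0 - empty_fraction - fuel_fraction
    if payload <= 0.0:
        raise ValueError("vehicle closes with no payload")
    return payload

cases = {
    "all-rocket baseline (assumed)":    (0.10, 0.85),
    "scramjet, light engine (assumed)": (0.18, 0.72),
    "scramjet, heavy engine (assumed)": (0.27, 0.72),
}

for name, (empty, fuel) in cases.items():
    print(f"{name:34s} payload fraction = {payload_fraction(empty, fuel):.2f}")
```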
The logic behind efforts driving a scramjet is (for example) that the reduction in fuel decreases the total mass by 30%, while the increased engine weight adds 10% to the vehicle's total mass. Unfortunately the uncertainty in the calculation of any mass or efficiency changes in a vehicle is so great that slightly different assumptions for engine efficiency or mass can provide equally good arguments for or against scramjet-powered vehicles. Additionally, the drag of the new configuration must be considered. The drag of the total configuration can be considered as the sum of the vehicle drag and the engine installation drag. The installation drag traditionally results from the pylons and the coupled flow due to the engine jet, and is a function of the throttle setting; it is therefore often written as the product of a loss coefficient and the thrust of the engine. For an engine strongly integrated into the aerodynamic body, it may be more convenient to think of the installation drag as the difference in drag from a known base configuration. The overall engine efficiency can be represented as a value between 0 and 1, expressed in terms of the specific impulse of the engine together with the acceleration due to gravity at ground level, the vehicle speed and the fuel's heat of reaction. Specific impulse is often used as the unit of efficiency for rockets, since in the case of the rocket there is a direct relation between specific impulse, specific fuel consumption and exhaust velocity. This direct relation is not generally present for airbreathing engines, and so specific impulse is less used in the literature. Note that for an airbreathing engine, both the overall efficiency and the specific impulse are functions of velocity. The specific impulse of a rocket engine is independent of velocity, and common values are between 200 and 600 seconds (450 s for the Space Shuttle main engines). The specific impulse of a scramjet varies with velocity, reducing at higher speeds, starting at about 1200 s, although values in the literature vary. For the simple case of a single-stage vehicle, the fuel mass fraction can be related to the overall engine efficiency, whether for single-stage transfer to orbit or for level atmospheric flight from air launch (missile flight); in the latter case the relation takes the form of the Breguet range formula, in which the range depends on the lift coefficient and the drag coefficient. This extremely simple formulation, used here for the purposes of discussion, assumes a single-stage vehicle and no aerodynamic lift for the transatmospheric lifter; the relations are, however, true generally for all engines. A scramjet cannot produce efficient thrust unless boosted to high speed, around Mach 5, although depending on the design it could act as a ramjet at low speeds. A horizontal take-off aircraft would need conventional turbofan, turbojet, or rocket engines to take off, sufficiently large to move a heavy craft. Also needed would be fuel for those engines, plus all engine-associated mounting structure and control systems. Turbofan and turbojet engines are heavy and cannot easily exceed about Mach 2–3, so another propulsion method would be needed to reach scramjet operating speed. That could be ramjets or rockets. Those would also need their own separate fuel supply, structure, and systems. Many proposals instead call for a first stage of droppable solid rocket boosters, which greatly simplifies the design.
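The Breguet range formula mentioned above can be illustrated in its standard textbook form for an airbreathing cruiser, R = V · Isp · (L/D) · ln(Wi/Wf). The article's own expression was lost in transcription and may differ in detail, so this should be read as the conventional form rather than the source's; all input numbers below are illustrative assumptions.

```python
# Minimal sketch of the standard (textbook) Breguet range formula for an
# airbreathing cruiser: R = V * Isp * (L/D) * ln(Wi / Wf). The input
# numbers below are illustrative assumptions, not values from the article.

import math

def breguet_range_m(velocity_mps, isp_s, lift_to_drag, weight_ratio):
    """Cruise range in metres for constant V, Isp and L/D."""
    return velocity_mps * isp_s * lift_to_drag * math.log(weight_ratio)

v   = 1800.0   # m/s, roughly Mach 6 at altitude (assumed)
isp = 1000.0   # s, low end of the scramjet range quoted above (assumed here)
l_d = 4.0      # lift-to-drag ratio typical of slender hypersonic shapes (assumed)
w_i_over_w_f = 1.30   # start/end weight ratio, i.e. about 23% fuel burned (assumed)

print(f"Estimated cruise range: {breguet_range_m(v, isp, l_d, w_i_over_w_f)/1000:.0f} km")
```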
Unlike jet or rocket propulsion systems facilities which can be tested on the ground, testing scramjet designs uses extremely expensive hypersonic test chambers or expensive launch vehicles, both of which lead to high instrumentation costs. Tests using launched test vehicles very typically end with destruction of the test item and instrumentation. Orbital vehicles An advantage of a hypersonic airbreathing (typically scramjet) vehicle like the X-30 is avoiding or at least reducing the need for carrying oxidizer. For example, the Space Shuttle external tank held 616,432.2 kg of liquid oxygen (LOX) and 103,000 kg of liquid hydrogen (LH) while having an empty weight of 30,000 kg. The orbiter gross weight was 109,000 kg with a maximum payload of about 25,000 kg and to get the assembly off the launch pad the shuttle used two very powerful solid rocket boosters with a weight of 590,000 kg each. If the oxygen could be eliminated, the vehicle could be lighter at liftoff and possibly carry more payload. On the other hand, scramjets spend more time in the atmosphere and require more hydrogen fuel to deal with aerodynamic drag. Whereas liquid oxygen is quite a dense fluid (1141 kg/m3), liquid hydrogen has much lower density (70.85 kg/m3) and takes up more volume. This means that the vehicle using this fuel becomes much bigger and gives more drag. Other fuels have more comparable density, such as RP-1 (810 kg/m3) JP-7 (density at 15 °C 779–806 kg/m3) and unsymmetrical dimethylhydrazine (UDMH) (793.00 kg/m3). One issue is that scramjet engines are predicted to have exceptionally poor thrust-to-weight ratio of around 2, when installed in a launch vehicle. A rocket has the advantage that its engines have very high thrust-weight ratios (~100:1), while the tank to hold the liquid oxygen approaches a volume ratio of ~100:1 also. Thus a rocket can achieve a very high mass fraction, which improves performance. By way of contrast the projected thrust/weight ratio of scramjet engines of about 2 mean a much larger percentage of the takeoff mass is engine (ignoring that this fraction increases anyway by a factor of about four due to the lack of onboard oxidiser). In addition the vehicle's lower thrust does not necessarily avoid the need for the expensive, bulky, and failure-prone high performance turbopumps found in conventional liquid-fuelled rocket engines, since most scramjet designs seem to be incapable of orbital speeds in airbreathing mode, and hence extra rocket engines are needed. Scramjets might be able to accelerate from approximately Mach5–7 to around somewhere between half of orbital speed and orbital speed (X-30 research suggested that Mach17 might be the limit compared to an orbital speed of Mach25, and other studies put the upper speed limit for a pure scramjet engine between Mach10 and 25, depending on the assumptions made). Generally, another propulsion system (very typically, a rocket is proposed) is expected to be needed for the final acceleration into orbit. Since the delta-V is moderate and the payload fraction of scramjets high, lower performance rockets such as solids, hypergolics, or simple liquid fueled boosters might be acceptable. Theoretical projections place the top speed of a scramjet between and . For comparison, the orbital speed at low Earth orbit is . The scramjet's heat-resistant underside potentially doubles as its reentry system if a single-stage-to-orbit vehicle using non-ablative, non-active cooling is visualised. 
If an ablative shielding is used on the engine it will probably not be usable after ascent to orbit. If active cooling is used with the fuel as coolant, the loss of all fuel during the burn to orbit will also mean the loss of all cooling for the thermal protection system. Reducing the amount of fuel and oxidizer does not necessarily improve costs as rocket propellants are comparatively very cheap. Indeed, the unit cost of the vehicle can be expected to end up far higher, since aerospace hardware cost is about two orders of magnitude higher than liquid oxygen, fuel and tankage, and scramjet hardware seems to be much heavier than rockets for any given payload. Still, if scramjets enable reusable vehicles, this could theoretically be a cost benefit. Whether equipment subject to the extreme conditions of a scramjet can be reused sufficiently many times is unclear; all flown scramjet tests only survive for short periods and have never been designed to survive a flight to date. The eventual cost of such a vehicle is the subject of intense debate since even the best estimates disagree whether a scramjet vehicle would be advantageous. It is likely that a scramjet vehicle would need to lift more load than a rocket of equal takeoff weight to be equally as cost efficient (if the scramjet is a non-reusable vehicle). Space launch vehicles may or may not benefit from having a scramjet stage. A scramjet stage of a launch vehicle theoretically provides a specific impulse of 1000 to 4000s whereas a rocket provides less than 450s while in the atmosphere. A scramjet's specific impulse decreases rapidly with speed, however, and the vehicle would suffer from a relatively low lift to drag ratio. The installed thrust to weight ratio of scramjets compares very unfavorably with the 50–100 of a typical rocket engine. This is compensated for in scramjets partly because the weight of the vehicle would be carried by aerodynamic lift rather than pure rocket power (giving reduced 'gravity losses'), but scramjets would take much longer to get to orbit due to lower thrust which greatly offsets the advantage. The takeoff weight of a scramjet vehicle is significantly reduced over that of a rocket, due to the lack of onboard oxidiser, but increased by the structural requirements of the larger and heavier engines. Whether this vehicle could be reusable or not is still a subject of debate and research. Proposed applications An aircraft using this type of jet engine could dramatically reduce the time it takes to travel from one place to another, potentially putting any place on Earth within a 90-minute flight. However, there are questions about whether such a vehicle could carry enough fuel to make useful length trips. In addition, some countries ban or penalize airliners and other civil aircraft that create sonic booms. (For example, in the United States, FAA regulations prohibit supersonic flights over land, by civil aircraft. ) Scramjet vehicle has been proposed for a single stage to tether vehicle, where a Mach12 spinning orbital tether would pick up a payload from a vehicle at around 100 km and carry it to orbit. See also Avangard (hypersonic glide vehicle) Precooled jet engine Ram accelerator Shcramjet SABRE (rocket engine) References Citations Bibliography Aerospaceplane – 1961. Aerospace Projects Review, Volume 2, No 5. Aspects of the Aerospace Plane. Flight International, 2 January 1964, pages 36–37. 
External links Aircraft engines Jet engines Spacecraft propulsion Single-stage-to-orbit Space access Non-rocket spacelaunch Australian inventions de:Staustrahltriebwerk#Überschallverbrennung im Scramjet
Scramjet
[ "Technology" ]
7,281
[ "Jet engines", "Engines", "Aircraft engines" ]
233,631
https://en.wikipedia.org/wiki/Cryoelectronics
In electronics, cryoelectronics or cryolectronics is the study of superconductivity under cryogenic conditions and its applications. It is also described as the operation of power electronic devices at cryogenic temperatures. The practical applications of this field are quite broad, although it is particularly useful in areas where a cryogenic environment already exists, such as superconducting technologies and spacecraft design. It has also become a special branch of cryophysics and cryotechnics and plays a role in operations that require high-resolution and precision measurements. Cryoelectronic devices include SQUIDs (superconducting quantum interference devices), which are among the most sensitive magnetic sensors available. They serve as the backbone of applications ranging from materials evaluation to geological and environmental prospecting and medical diagnostics, among others. Marketable Uses A key factor in the production of new technologies is whether they are cost-effective and useful. Devices that make use of cryoelectronics and the applications of superconductivity, such as computers, information transmission lines, and magnetocardiography, have potential commercial value beyond a few specialized, single-purpose devices. At the same time, other devices with highly specialized functions can be marketed competitively without having to rely on a large market. Devices and activities that are derived from this and have marketable functions include: Magnetometry: this includes magnetocardiography, communications, geomagnetism, and submarine detection, covering both specialized functions and some broader functions that can be derived from cryoelectronics. Computers: the ability to mass-produce cheap, compact tunneling cryotrons provides a diverse base of uses and markets. Electrical metrology: more precise readings and measurements of current, voltage, power, and attenuation ratio allow tighter control over maintaining legally defined levels, which provides a specific need and use for the technology. Galvanometers: a range of measurement devices useful to the scientific field through more precise measurements in specialized disciplines. References Superconductivity
Cryoelectronics
[ "Physics", "Materials_science", "Engineering" ]
430
[ "Materials science stubs", "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electromagnetism stubs", "Electrical resistance and conductance" ]
233,654
https://en.wikipedia.org/wiki/World%20Geodetic%20System
The World Geodetic System (WGS) is a standard used in cartography, geodesy, and satellite navigation including GPS. The current version, WGS 84, defines an Earth-centered, Earth-fixed coordinate system and a geodetic datum, and also describes the associated Earth Gravitational Model (EGM) and World Magnetic Model (WMM). The standard is published and maintained by the United States National Geospatial-Intelligence Agency. History Efforts to supplement the various national surveying systems began in the 19th century with F.R. Helmert's book (Mathematical and Physical Theories of Physical Geodesy). Austria and Germany founded the (Central Bureau of International Geodesy), and a series of global ellipsoids of the Earth were derived (e.g., Helmert 1906, Hayford 1910 and 1924). A unified geodetic system for the whole world became essential in the 1950s for several reasons: International space science and the beginning of astronautics. The lack of inter-continental geodetic information. The inability of the large geodetic systems, such as European Datum (ED50), North American Datum (NAD), and Tokyo Datum (TD), to provide a worldwide geo-data basis Need for global maps for navigation, aviation, and geography. Western Cold War preparedness necessitated a standardised, NATO-wide geospatial reference system, in accordance with the NATO Standardisation Agreement WGS 60 In the late 1950s, the United States Department of Defense, together with scientists of other institutions and countries, began to develop the needed world system to which geodetic data could be referred and compatibility established between the coordinates of widely separated sites of interest. Efforts of the U.S. Army, Navy and Air Force were combined leading to the DoD World Geodetic System 1960 (WGS 60). The term datum as used here refers to a smooth surface somewhat arbitrarily defined as zero elevation, consistent with a set of surveyor's measures of distances between various stations, and differences in elevation, all reduced to a grid of latitudes, longitudes, and elevations. Heritage surveying methods found elevation differences from a local horizontal determined by the spirit level, plumb line, or an equivalent device that depends on the local gravity field (see physical geodesy). As a result, the elevations in the data are referenced to the geoid, a surface that is not readily found using satellite geodesy. The latter observational method is more suitable for global mapping. Therefore, a motivation, and a substantial problem in the WGS and similar work is to patch together data that were not only made separately, for different regions, but to re-reference the elevations to an ellipsoid model rather than to the geoid. In accomplishing WGS 60, a combination of available surface gravity data, astro-geodetic data and results from HIRAN and Canadian SHORAN surveys were used to define a best-fitting ellipsoid and an earth-centered orientation for each initially selected datum. (Every datum is relatively oriented with respect to different portions of the geoid by the astro-geodetic methods already described.) The sole contribution of satellite data to the development of WGS 60 was a value for the ellipsoid flattening which was obtained from the nodal motion of a satellite. Prior to WGS 60, the U.S. Army and U.S. Air Force had each developed a world system by using different approaches to the gravimetric datum orientation method. 
To determine their gravimetric orientation parameters, the Air Force used the mean of the differences between the gravimetric and astro-geodetic deflections and geoid heights (undulations) at specifically selected stations in the areas of the major datums. The Army performed an adjustment to minimize the difference between astro-geodetic and gravimetric geoids. By matching the relative astro-geodetic geoids of the selected datums with an earth-centered gravimetric geoid, the selected datums were reduced to an earth-centered orientation. Since the Army and Air Force systems agreed remarkably well for the NAD, ED and TD areas, they were consolidated and became WGS 60. WGS 66 Improvements to the global system included the Astrogeoid of Irene Fischer and the astronautic Mercury datum. In January 1966, a World Geodetic System Committee composed of representatives from the United States Army, Navy and Air Force was charged with developing an improved WGS, needed to satisfy mapping, charting and geodetic requirements. Additional surface gravity observations, results from the extension of triangulation and trilateration networks, and large amounts of Doppler and optical satellite data had become available since the development of WGS 60. Using the additional data and improved techniques, WGS 66 was produced which served DoD needs for about five years after its implementation in 1967. The defining parameters of the WGS 66 Ellipsoid were the flattening ( determined from satellite data) and the semimajor axis ( determined from a combination of Doppler satellite and astro-geodetic data). A worldwide 5° × 5° mean free air gravity anomaly field provided the basic data for producing the WGS 66 gravimetric geoid. Also, a geoid referenced to the WGS 66 Ellipsoid was derived from available astrogeodetic data to provide a detailed representation of limited land areas. WGS 72 After an extensive effort over a period of approximately three years, the Department of Defense World Geodetic System 1972 was completed. Selected satellite, surface gravity and astrogeodetic data available through 1972 from both DoD and non-DoD sources were used in a Unified WGS Solution (a large scale least squares adjustment). The results of the adjustment consisted of corrections to initial station coordinates and coefficients of the gravitational field. The largest collection of data ever used for WGS purposes was assembled, processed and applied in the development of WGS 72. Both optical and electronic satellite data were used. The electronic satellite data consisted, in part, of Doppler data provided by the U.S. Navy and cooperating non-DoD satellite tracking stations established in support of the Navy's Navigational Satellite System (NNSS). Doppler data was also available from the numerous sites established by GEOCEIVERS during 1971 and 1972. Doppler data was the primary data source for WGS 72 (see image). Additional electronic satellite data was provided by the SECOR (Sequential Collation of Range) Equatorial Network completed by the U.S. Army in 1970. Optical satellite data from the Worldwide Geometric Satellite Triangulation Program was provided by the BC-4 camera system (see image). Data from the Smithsonian Astrophysical Observatory was also used which included camera (Baker–Nunn) and some laser ranging. The surface gravity field used in the Unified WGS Solution consisted of a set of 410 10° × 10° equal area mean free air gravity anomalies determined solely from terrestrial data. 
This gravity field includes mean anomaly values compiled directly from observed gravity data wherever the latter was available in sufficient quantity. The value for areas of sparse or no observational data were developed from geophysically compatible gravity approximations using gravity-geophysical correlation techniques. Approximately 45 percent of the 410 mean free air gravity anomaly values were determined directly from observed gravity data. The astrogeodetic data in its basic form consists of deflection of the vertical components referred to the various national geodetic datums. These deflection values were integrated into astrogeodetic geoid charts referred to these national datums. The geoid heights contributed to the Unified WGS Solution by providing additional and more detailed data for land areas. Conventional ground survey data was included in the solution to enforce a consistent adjustment of the coordinates of neighboring observation sites of the BC-4, SECOR, Doppler and Baker–Nunn systems. Also, eight geodimeter long line precise traverses were included for the purpose of controlling the scale of the solution. The Unified WGS Solution, as stated above, was a solution for geodetic positions and associated parameters of the gravitational field based on an optimum combination of available data. The WGS 72 ellipsoid parameters, datum shifts and other associated constants were derived separately. For the unified solution, a normal equation matrix was formed based on each of the mentioned data sets. Then, the individual normal equation matrices were combined and the resultant matrix solved to obtain the positions and the parameters. The value for the semimajor axis () of the WGS 72 Ellipsoid is . The adoption of an -value 10 meters smaller than that for the WGS 66 Ellipsoid was based on several calculations and indicators including a combination of satellite and surface gravity data for position and gravitational field determinations. Sets of satellite derived station coordinates and gravimetric deflection of the vertical and geoid height data were used to determine local-to-geocentric datum shifts, datum rotation parameters, a datum scale parameter and a value for the semimajor axis of the WGS Ellipsoid. Eight solutions were made with the various sets of input data, both from an investigative point of view and also because of the limited number of unknowns which could be solved for in any individual solution due to computer limitations. Selected Doppler satellite tracking and astro-geodetic datum orientation stations were included in the various solutions. Based on these results and other related studies accomplished by the committee, an -value of and a flattening of 1/298.26 were adopted. In the development of local-to WGS 72 datum shifts, results from different geodetic disciplines were investigated, analyzed and compared. Those shifts adopted were based primarily on a large number of Doppler TRANET and GEOCEIVER station coordinates which were available worldwide. These coordinates had been determined using the Doppler point positioning method. WGS 84 In the early 1980s, the need for a new world geodetic system was generally recognized by the geodetic community as well as within the US Department of Defense. WGS 72 no longer provided sufficient data, information, geographic coverage, or product accuracy for all then-current and anticipated applications. 
The means for producing a new WGS were available in the form of improved data, increased data coverage, new data types and improved techniques. Observations from Doppler, satellite laser ranging and very-long-baseline interferometry (VLBI) constituted significant new information. An outstanding new source of data had become available from satellite radar altimetry. Also available was an advanced least squares method called collocation that allowed for a consistent combination solution from different types of measurements all relative to the Earth's gravity field, measurements such as the geoid, gravity anomalies, deflections, and dynamic Doppler. The new world geodetic system was called WGS 84. It is the reference system used by the Global Positioning System. It is geocentric and globally consistent within . Current geodetic realizations of the geocentric reference system family International Terrestrial Reference System (ITRS) maintained by the IERS are geocentric, and internally consistent, at the few-cm level, while still being metre-level consistent with WGS 84. The WGS 84 reference ellipsoid was based on GRS 80, but it contains a very slight variation in the inverse flattening, as it was derived independently and the result was rounded to a different number of significant digits. This resulted in a tiny difference of in the semi-minor axis. The following table compares the primary ellipsoid parameters. Definition The coordinate origin of WGS 84 is meant to be located at the Earth's center of mass; the uncertainty is believed to be less than . The WGS 84 meridian of zero longitude is the IERS Reference Meridian, 5.3 arc seconds or east of the Greenwich meridian at the latitude of the Royal Observatory. (This is related to the fact that the local gravity field at Greenwich does not point exactly through the Earth's center of mass, but rather "misses west" of the center of mass by about 102 meters.) The longitude positions on WGS 84 agree with those on the older North American Datum 1927 at roughly 85° longitude west, in the east-central United States. The WGS 84 datum surface is an oblate spheroid with equatorial radius = at the equator and flattening = . The refined value of the WGS 84 gravitational constant (mass of Earth's atmosphere included) is = . The angular velocity of the Earth is defined to be = . This leads to several computed parameters such as the polar semi-minor axis which equals = , and the first eccentricity squared, = . Updates and new standards The original standardization document for WGS 84 was Technical Report 8350.2, published in September 1987 by the Defense Mapping Agency (which later became the National Imagery and Mapping Agency). New editions were published in September 1991 and July 1997; the latter edition was amended twice, in January 2000 and June 2004. The standardization document was revised again and published in July 2014 by the National Geospatial-Intelligence Agency as NGA.STND.0036. These updates provide refined descriptions of the Earth and realizations of the system for higher precision. The original WGS84 model had an absolute accuracy of 1–2 meters. WGS84 (G730) first incorporated GPS observations, taking the accuracy down to 10 cm/component rms. All following revisions including WGS84 (G873) and WGS84 (G1150) also used GPS. WGS 84 (G1762) is the sixth update to the WGS reference frame. 
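The two defining geometric parameters described above fix all the other constants of the ellipsoid, and the Earth-centered, Earth-fixed character of the system is easiest to see in a coordinate conversion. The sketch below uses the commonly published WGS 84 defining values for the semi-major axis and reciprocal flattening and an arbitrary test point near Greenwich; it is a brief illustration, not the authoritative NGA implementation.

```python
# Sketch: derive the WGS 84 ellipsoid's secondary constants from its two
# defining geometric parameters and convert a geodetic coordinate to
# Earth-centered, Earth-fixed (ECEF) Cartesian coordinates. The constants
# are the commonly published WGS 84 values; the test point is arbitrary.

import math

A = 6378137.0             # semi-major axis, metres (published WGS 84 value)
INV_F = 298.257223563     # reciprocal flattening (published WGS 84 value)

f  = 1.0 / INV_F
b  = A * (1.0 - f)        # semi-minor (polar) axis
e2 = 2.0 * f - f * f      # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, height_m):
    """Convert geodetic latitude/longitude/ellipsoidal height to ECEF X, Y, Z."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = A / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + height_m) * math.cos(lat) * math.cos(lon)
    y = (n + height_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - e2) + height_m) * math.sin(lat)
    return x, y, z

print(f"b   = {b:.4f} m")
print(f"e^2 = {e2:.12f}")
print("ECEF of (51.4778 N, 0.0 E, 0 m):",
      tuple(round(c, 1) for c in geodetic_to_ecef(51.4778, 0.0, 0.0)))
```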
WGS 84 has most recently been updated to use the reference frame G2296, which was released on 7 January 2024 as an update to G2139, now aligned to both the ITRF2020, the most recent ITRF realization, and the IGS20, the frame used by the International GNSS Service (IGS). G2139 was aligned with the IGb14 realization of the International Terrestrial Reference Frame (ITRF) 2014 and uses the new IGS Antex standard. Updates to the original geoid for WGS 84 are now published as a separate Earth Gravitational Model (EGM), with improved resolution and accuracy. Likewise, the World Magnetic Model (WMM) is updated separately. The current version of WGS 84 uses EGM2008 and WMM2020. Solution for Earth orientation parameters consistent with ITRF2014 is also needed (IERS EOP 14C04). Identifiers Components of WGS 84 are identified by codes in the EPSG Geodetic Parameter Dataset: EPSG:4326 – 2D coordinate reference system (CRS) EPSG:4979 – 3D CRS EPSG:4978 – geocentric 3D CRS EPSG:7030 – reference ellipsoid EPSG:6326 – horizontal datum See also Degree Confluence Project Earth Gravitational Model European Terrestrial Reference System 1989 Geo (microformat) – for marking up WGS 84 coordinates in (X)HTML geo URI scheme Geographic information system Geotagging GIS file formats North American Datum Point of interest TRANSIT system References External links NGA Standardization Document Department of Defense World Geodetic System 1984, Its Definition and Relationships With Local Geodetic Systems (2014-07-08) DMA Technical Report 8350.2 Department of Defense World Geodetic System 1984, Its Definition and Relationships With Local Geodetic Systems (1991-09-01). This edition documents the original Earth Gravitational Model. NGA webpage for WGS 84 Geodesy for the Layman, Chapter VIII, "The World Geodetic System" Spatial reference for EPSG:4326 ANTEX (.atx) files that define IGS20 Coordinate systems Geodesy Global Positioning System Military globalization Navigation
World Geodetic System
[ "Mathematics", "Technology", "Engineering" ]
3,311
[ "Wireless locating", "Applied mathematics", "Aerospace engineering", "Aircraft instruments", "Coordinate systems", "Global Positioning System", "Geodesy" ]
233,668
https://en.wikipedia.org/wiki/Figure%20of%20the%20Earth
In geodesy, the figure of the Earth is the size and shape used to model planet Earth. The kind of figure depends on application, including the precision needed for the model. A spherical Earth is a well-known historical approximation that is satisfactory for geography, astronomy and many other purposes. Several models with greater accuracy (including ellipsoid) have been developed so that coordinate systems can serve the precise needs of navigation, surveying, cadastre, land use, and various other concerns. Motivation Earth's topographic surface is apparent with its variety of land forms and water areas. This topographic surface is generally the concern of topographers, hydrographers, and geophysicists. While it is the surface on which Earth measurements are made, mathematically modeling it while taking the irregularities into account would be extremely complicated. The Pythagorean concept of a spherical Earth offers a simple surface that is easy to deal with mathematically. Many astronomical and navigational computations use a sphere to model the Earth as a close approximation. However, a more accurate figure is needed for measuring distances and areas on the scale beyond the purely local. Better approximations can be made by modeling the entire surface as an oblate spheroid, using spherical harmonics to approximate the geoid, or modeling a region with a best-fit reference ellipsoid. For surveys of small areas, a planar (flat) model of Earth's surface suffices because the local topography overwhelms the curvature. Plane-table surveys are made for relatively small areas without considering the size and shape of the entire Earth. A survey of a city, for example, might be conducted this way. By the late 1600s, serious effort was devoted to modeling the Earth as an ellipsoid, beginning with French astronomer Jean Picard's measurement of a degree of arc along the Paris meridian. Improved maps and better measurement of distances and areas of national territories motivated these early attempts. Surveying instrumentation and techniques improved over the ensuing centuries. Models for the figure of the Earth improved in step. In the mid- to late 20th century, research across the geosciences contributed to drastic improvements in the accuracy of the figure of the Earth. The primary utility of this improved accuracy was to provide geographical and gravitational data for the inertial guidance systems of ballistic missiles. This funding also drove the expansion of geoscientific disciplines, fostering the creation and growth of various geoscience departments at many universities. These developments benefited many civilian pursuits as well, such as weather and communication satellite control and GPS location-finding, which would be impossible without highly accurate models for the figure of the Earth. Models The models for the figure of the Earth vary in the way they are used, in their complexity, and in the accuracy with which they represent the size and shape of the Earth. Sphere The simplest model for the shape of the entire Earth is a sphere. The Earth's radius is the distance from Earth's center to its surface, about . While "radius" normally is a characteristic of perfect spheres, the Earth deviates from spherical by only a third of a percent, sufficiently close to treat it as a sphere in many contexts and justifying the term "the radius of the Earth". 
The concept of a spherical Earth dates back to around the 6th century BC, but remained a matter of philosophical speculation until the 3rd century BC. The first scientific estimation of the radius of the Earth was given by Eratosthenes about 240 BC, with estimates of the accuracy of Eratosthenes's measurement ranging from −1% to 15%. The Earth is only approximately spherical, so no single value serves as its natural radius. Distances from points on the surface to the center range from to . Several different ways of modeling the Earth as a sphere each yield a mean radius of . Regardless of the model, any radius falls between the polar minimum of about and the equatorial maximum of about . The difference correspond to the polar radius being approximately 0.3% shorter than the equatorial radius. Ellipsoid of revolution As theorized by Isaac Newton and Christiaan Huygens, the Earth is flattened at the poles and bulged at the equator. Thus, geodesy represents the figure of the Earth as an oblate spheroid. The oblate spheroid, or oblate ellipsoid, is an ellipsoid of revolution obtained by rotating an ellipse about its shorter axis. It is the regular geometric shape that most nearly approximates the shape of the Earth. A spheroid describing the figure of the Earth or other celestial body is called a reference ellipsoid. The reference ellipsoid for Earth is called an Earth ellipsoid. An ellipsoid of revolution is uniquely defined by two quantities. Several conventions for expressing the two quantities are used in geodesy, but they are all equivalent to and convertible with each other: Equatorial radius (called semimajor axis), and polar radius (called semiminor axis); and eccentricity ; and flattening . Eccentricity and flattening are different ways of expressing how squashed the ellipsoid is. When flattening appears as one of the defining quantities in geodesy, generally it is expressed by its reciprocal. For example, in the WGS 84 spheroid used by today's GPS systems, the reciprocal of the flattening is set to be exactly . The difference between a sphere and a reference ellipsoid for Earth is small, only about one part in 300. Historically, flattening was computed from grade measurements. Nowadays, geodetic networks and satellite geodesy are used. In practice, many reference ellipsoids have been developed over the centuries from different surveys. The flattening value varies slightly from one reference ellipsoid to another, reflecting local conditions and whether the reference ellipsoid is intended to model the entire Earth or only some portion of it. A sphere has a single radius of curvature, which is simply the radius of the sphere. More complex surfaces have radii of curvature that vary over the surface. The radius of curvature describes the radius of the sphere that best approximates the surface at that point. Oblate ellipsoids have a constant radius of curvature east to west along parallels, if a graticule is drawn on the surface, but varying curvature in any other direction. For an oblate ellipsoid, the polar radius of curvature is larger than the equatorial because the pole is flattened: the flatter the surface, the larger the sphere must be to approximate it. Conversely, the ellipsoid's north–south radius of curvature at the equator is smaller than the polar where is the distance from the center of the ellipsoid to the equator (semi-major axis), and is the distance from the center to the pole. 
(semi-minor axis) Non-spheroidal deviations Triaxiality (equatorial eccentricity) The possibility that the Earth's equator is better characterized as an ellipse rather than a circle and therefore that the ellipsoid is triaxial has been a matter of scientific inquiry for many years. Modern technological developments have furnished new and rapid methods for data collection and, since the launch of Sputnik 1, orbital data have been used to investigate the theory of ellipticity. More recent results indicate a 70 m difference between the two equatorial major and minor axes of inertia, with the larger semidiameter pointing to 15° W longitude (and also 180-degree away). Egg or pear shape Following work by Picard, Italian polymath Giovanni Domenico Cassini found that the length of a degree was apparently shorter north of Paris than to the south, implying the Earth to be egg-shaped. In 1498, Christopher Columbus dubiously suggested that the Earth was pear-shaped based on his disparate mobile readings of the angle of the North Star, which he incorrectly interpreted as having varying diurnal motion. The theory of a slightly pear-shaped Earth arose when data was received from the U.S.'s artificial satellite Vanguard 1 in 1958. It was found to vary in its long periodic orbit, with the Southern Hemisphere exhibiting higher gravitational attraction than the Northern Hemisphere. This indicated a flattening at the South Pole and a bulge of the same degree at the North Pole, with the sea level increased about at the latter. This theory implies the northern middle latitudes to be slightly flattened and the southern middle latitudes correspondingly bulged. Potential factors involved in this aberration include tides and subcrustal motion (e.g. plate tectonics). John A. O'Keefe and co-authors are credited with the discovery that the Earth had a significant third degree zonal spherical harmonic in its gravitational field using Vanguard 1 satellite data. Based on further satellite geodesy data, Desmond King-Hele refined the estimate to a difference between north and south polar radii, owing to a "stem" rising in the North Pole and a depression in the South Pole. The polar asymmetry is about a thousand times smaller than the Earth's flattening and even smaller than its geoidal undulation in some regions. Geoid Modern geodesy tends to retain the ellipsoid of revolution as a reference ellipsoid and treat triaxiality and pear shape as a part of the geoid figure: they are represented by the spherical harmonic coefficients and , respectively, corresponding to degree and order numbers 2.2 for the triaxiality and 3.0 for the pear shape. It was stated earlier that measurements are made on the apparent or topographic surface of the Earth and it has just been explained that computations are performed on an ellipsoid. One other surface is involved in geodetic measurement: the geoid. In geodetic surveying, the computation of the geodetic coordinates of points is commonly performed on a reference ellipsoid closely approximating the size and shape of the Earth in the area of the survey. The actual measurements made on the surface of the Earth with certain instruments are however referred to the geoid. The ellipsoid is a mathematically defined regular surface with specific dimensions. 
The geoid, on the other hand, coincides with that surface to which the oceans would conform over the entire Earth if free to adjust to the combined effect of the Earth's mass attraction (gravitation) and the centrifugal force of the Earth's rotation. As a result of the uneven distribution of the Earth's mass, the geoidal surface is irregular and, since the ellipsoid is a regular surface, the separations between the two, referred to as geoid undulations, geoid heights, or geoid separations, will be irregular as well. The geoid is a surface along which the gravity potential is equal everywhere and to which the direction of gravity is always perpendicular. The latter is particularly important because optical instruments containing gravity-reference leveling devices are commonly used to make geodetic measurements. When properly adjusted, the vertical axis of the instrument coincides with the direction of gravity and is, therefore, perpendicular to the geoid. The angle between the plumb line which is perpendicular to the geoid (sometimes called "the vertical") and the perpendicular to the ellipsoid (sometimes called "the ellipsoidal normal") is defined as the deflection of the vertical. It has two components: an east–west and a north–south component. Local approximations Simpler local approximations are possible. Local tangent plane The local tangent plane is appropriate for analysis across small distances. Osculating sphere The best local spherical approximation to the ellipsoid in the vicinity of a given point is the Earth's osculating sphere. Its radius equals Earth's Gaussian radius of curvature, and its radial direction coincides with the geodetic normal direction. The center of the osculating sphere is offset from the center of the ellipsoid, but is at the center of curvature for the given point on the ellipsoid surface. This concept aids the interpretation of terrestrial and planetary radio occultation refraction measurements and in some navigation and surveillance applications. Earth rotation and Earth's interior Determining the exact figure of the Earth is not only a geometric task of geodesy, but also has geophysical considerations. According to theoretical arguments by Newton, Leonhard Euler, and others, a body having a uniform density of 5,515 kg/m³ that rotates like the Earth should have a flattening of 1:229. This can be concluded without any information about the composition of Earth's interior. However, the measured flattening is 1:298.25, which is closer to a sphere and is a strong argument that Earth's core is extremely compact. Therefore, the density must be a function of the depth, ranging from 2,600 kg/m³ at the surface (rock density of granite, etc.), up to 13,000 kg/m³ within the inner core. Global and regional gravity field Also with implications for the physical exploration of the Earth's interior is the gravitational field, which is the net effect of gravitation (due to mass attraction) and centrifugal force (due to rotation). It can be measured very accurately at the surface and remotely by satellites. True vertical generally does not correspond to theoretical vertical (deflection ranges up to 50") because topography and all geological masses disturb the gravitational field. Therefore, the gross structure of the Earth's crust and mantle can be determined by geodetic-geophysical models of the subsurface.
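The relations among the defining and derived ellipsoid quantities discussed above can be checked numerically. The following sketch, written in C and purely illustrative rather than a geodetic library, starts from the two WGS 84 defining values quoted earlier, the semi-major axis a = 6378137 m and reciprocal flattening 1/f = 298.257223563, and evaluates the standard formulas for the semi-minor axis, eccentricity, mean radius, and the polar and equatorial radii of curvature:

#include <stdio.h>
#include <math.h>

/* Derived WGS 84 ellipsoid quantities from the two defining constants:
   semi-major axis a and reciprocal flattening 1/f.                     */
int main(void)
{
    double a     = 6378137.0;            /* equatorial radius, m (WGS 84) */
    double inv_f = 298.257223563;        /* reciprocal flattening (WGS 84) */
    double f  = 1.0 / inv_f;
    double b  = a * (1.0 - f);           /* polar semi-minor axis */
    double e2 = f * (2.0 - f);           /* first eccentricity squared */
    double r_mean = (2.0 * a + b) / 3.0; /* IUGG mean radius R1 */
    double rc_polar      = a * a / b;    /* polar radius of curvature */
    double rc_equatorial = b * b / a;    /* meridional radius of curvature at the equator */
    printf("b  = %.3f m\n", b);
    printf("e  = %.9f\n", sqrt(e2));
    printf("R1 = %.3f m\n", r_mean);
    printf("polar / equatorial meridional radius of curvature = %.3f / %.3f m\n",
           rc_polar, rc_equatorial);
    return 0;
}

Compiled and run, this reproduces the familiar values b of roughly 6,356,752 m and a mean radius of about 6,371 km, and shows the polar radius of curvature (about 6,400 km) exceeding the equatorial meridional one (about 6,335 km), as described above.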
See also Clairaut's theorem EGM96 Gravity formula Meridian arc History Pierre Bouguer Earth's circumference#History Earth's radius#History Flat Earth Friedrich Robert Helmert History of geodesy History of the metre Meridian arc#History Seconds pendulum References Attribution Further reading Guy Bomford, Geodesy, Oxford 1952 and 1980. Guy Bomford, Determination of the European geoid by means of vertical deflections. Rpt of Comm. 14, IUGG 10th Gen. Ass., Rome 1954. Karl Ledersteger and Gottfried Gerstbach, Die horizontale Isostasie / Das isostatische Geoid 31. Ordnung. Geowissenschaftliche Mitteilungen Band 5, TU Wien 1975. Helmut Moritz and Bernhard Hofmann, Physical Geodesy. Springer, Wien & New York 2005. Geodesy for the Layman, Defense Mapping Agency, St. Louis, 1983. External links Reference Ellipsoids (PCI Geomatics) Reference Ellipsoids (ScanEx) Changes in Earth shape due to climate changes Jos Leys "The shape of Planet Earth" Earth Geodesy Geophysics
Figure of the Earth
[ "Physics", "Mathematics" ]
3,072
[ "Applied mathematics", "Applied and interdisciplinary physics", "Geodesy", "Geophysics" ]
233,740
https://en.wikipedia.org/wiki/Heat%20shield
In engineering, a heat shield is a component designed to protect an object or a human operator from being burnt or overheated by dissipating, reflecting, and/or absorbing heat. The term is most often used in reference to exhaust heat management and to systems for dissipating frictional heat. Heat shields are used most commonly in the automotive and aerospace industries. Principles of operation Heat shields protect structures from extreme temperatures and thermal gradients by two primary mechanisms. Thermal insulation and radiative cooling respectively isolate the underlying structure from high external surface temperatures and emit heat outwards through thermal radiation. To achieve good functionality, the three attributes required of a heat shield are low thermal conductivity (high thermal resistance), high emissivity, and good thermal stability (refractoriness). Porous ceramics with high emissivity coatings (HECs) are often employed to address these three characteristics, owing to the good thermal stability of ceramics, the thermal insulation of porous materials and the good radiative cooling effects offered by HECs. Uses Automotive Due to the large amounts of heat given off by internal combustion engines, heat shields are used on most engines to protect components and bodywork from heat damage. As well as protection, effective heat shields can give a performance benefit by reducing engine bay temperatures, therefore reducing the temperature of the air entering the engine. Heat shields vary widely in price, but most are easy to fit, usually by stainless steel clips, high temperature tape or specially designed metal cable ties. There are three main types of automotive heat shield: Rigid heat shields have until recently commonly been made from solid steel, but are now often made from aluminum. Some high-end rigid heat shields are made out of aluminum, gold or composite materials, with most examples including a ceramic coating to provide a thermal barrier, which improves heat insulation. Flexible heat shields are normally made from thin aluminum or gold sheeting, most commonly sold either flat or in a roll. These heat shields are often bent by hand by the installer. High performance flexible heat shields sometimes include extras, such as ceramic insulation applied via plasma spraying. Another common tactic in flexible heat shields is using exotic composite materials to improve thermal insulation and shave weight. These latest products are commonplace in top-end motorsports such as Formula 1. Textile heat shields (also known as heat wraps) are used to insulate various exhaust components by trapping the heat emitted by the exhaust inside the exhaust pipe, rather than allowing the immense heat from these components to radiate within the engine bay. These wraps are most common in motorcycle exhaust pipes. Heat shields are often fitted by both amateur and professional personnel during the optimization phase of engine tuning. Heat shields are also used to cool engine mount vents. When a vehicle is at higher speed there is enough ram air to cool the engine compartment under the hood, but when the vehicle is moving at lower speeds or climbing a gradient, nearby parts such as the engine mounts need to be insulated from engine heat. With the help of proper thermal analysis and the use of heat shields, the engine mount vents can be optimized for the best performance.
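As a rough illustration of the radiative-cooling balance described under Principles of operation above, the sketch below estimates the steady-state surface temperature of a shield face that rejects an absorbed heat flux purely by thermal radiation, using the Stefan–Boltzmann law. The flux and emissivity values are assumed for illustration only and do not describe any particular product:

#include <stdio.h>
#include <math.h>

/* Radiative-equilibrium surface temperature of a heat-shield face that
   re-emits an absorbed heat flux q [W/m^2] with emissivity eps,
   ignoring conduction losses: q = eps * sigma * T^4. Illustrative only. */
int main(void)
{
    const double sigma = 5.670374419e-8;   /* Stefan-Boltzmann constant, W m^-2 K^-4 */
    double q   = 5.0e5;                    /* assumed absorbed flux, 500 kW/m^2 */
    double eps = 0.85;                     /* assumed surface emissivity */
    double T = pow(q / (eps * sigma), 0.25);
    printf("equilibrium surface temperature ~ %.0f K\n", T);
    return 0;
}

With the assumed 500 kW/m² and an emissivity of 0.85 this gives roughly 1,800 K; the higher the emissivity, the lower the equilibrium surface temperature for a given heat load, which is why high emissivity is one of the three attributes listed above.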
Aircraft Some high-speed aircraft, such as the Concorde and the SR-71 Blackbird, must be designed to withstand overheating similar to, though less severe than, that which occurs in spacecraft. In the case of the Concorde the aluminum nose can reach a maximum operating temperature of 127 °C (which is about 180 °C higher than the ambient air outside, which is below zero); the metallurgical consequences associated with the peak temperature were a significant factor in determining the maximum aircraft speed. Recently, new materials have been developed that could be superior to reinforced carbon–carbon (RCC). The prototype SHARP (Slender Hypervelocity Aerothermodynamic Research Probe) is based on ultra-high temperature ceramics such as zirconium diboride (ZrB2) and hafnium diboride (HfB2). The thermal protection system based on these materials would allow speeds of Mach 7 at sea level and Mach 11 at 35,000 meters, and significant improvements for vehicles designed for hypersonic speed. The materials used have thermal protection characteristics in a temperature range from 0 °C to +2000 °C, with melting points above 3500 °C. They are also structurally more resistant than RCC, so they do not require additional reinforcements, and are very efficient in re-irradiating the absorbed heat. NASA funded (and subsequently discontinued) a research and development program in 2001 for testing this protection system through the University of Montana. The European Commission funded a research project, C3HARME, under the NMP-19-2015 call of Framework Programmes for Research and Technological Development in 2016 (still ongoing) for the design, development, production and testing of a new class of ultra-refractory ceramic matrix composites reinforced with silicon carbide fibers and carbon fibers suitable for applications in severe aerospace environments. Spacecraft Spacecraft that land on a planet with an atmosphere, such as Earth, Mars, and Venus, currently do so by entering the atmosphere at high speeds, depending on air resistance rather than rocket power to slow them down. A side effect of this method of atmospheric re-entry is aerodynamic heating, which can be highly destructive to the structure of an unprotected or faulty spacecraft. An aerodynamic heat shield consists of a protective layer of special materials to dissipate the heat. Two basic types of aerodynamic heat shield have been used: An ablative heat shield consists of a layer of plastic resin, the outer surface of which is heated to a gas, which then carries the heat away by convection. Such shields were used on the Vostok, Voskhod, Mercury, Gemini, and Apollo spacecraft, and are currently used by the SpaceX Dragon 2, Orion, and Soyuz spacecraft. The Soviet Vostok 1, the first crewed spacecraft, used ablative heat shielding made from asbestos fabric in resin. The succeeding Mercury and Gemini missions both used fiberglass in the resin, while the Apollo spacecraft used a quartz-fiber-reinforced resin. The first use of a super-light ablator (SLA) for spacecraft purposes was for the Viking Landers in 1976. SLA would also be utilized for the Pathfinder mission. Phenolic impregnated carbon ablator (PICA) was used for the Stardust mission launched in 1999. A thermal soak heat shield uses an insulating material to absorb and radiate the heat away from the spacecraft structure. This type was used on the Space Shuttle, with the intent for the shield to be reused with minimal refurbishment in between launches.
The heat shield on the space shuttle consisted of ceramic or composite tiles over most of the vehicle surface, with reinforced carbon-carbon material on the highest heat load points (the nose and wing leading edges). This protected the orbiter against temperatures that reached 1,648 degrees Celsius during reentry. The Soviet spaceplane Buran also used thermal protection system (TPS) tiles similar to those of the American Shuttle, with ceramic tiles on the bottom of the orbiter and carbon-carbon on the nose cone. Many problems arose with the tiles used on the Space Shuttle, and minor damage to the heat shield was somewhat commonplace. Major damage to the heat shield almost caused the destruction of Space Shuttle Atlantis in 1988 and did cause the loss of Columbia in 2003. With inflatable heat shields under development by the US (Low-Earth Orbit Flight Test of an Inflatable Decelerator – LOFTID) and China, retrofitting single-use rockets such as the Space Launch System with such heat shields to salvage the expensive engines is being considered, possibly reducing the costs of launches significantly. On November 10, 2022, LOFTID was launched using an Atlas V rocket and then detached in order to reenter the atmosphere. The outer layer of the heat shield consisted of a silicon carbide ceramic. The recovered LOFTID had minimal damage. Passive cooling Passively cooled protectors are used to protect spacecraft during atmospheric entry; they absorb heat peaks and subsequently radiate the heat to the atmosphere. Early versions included a substantial amount of metals such as titanium, beryllium and copper. This greatly increased the mass of the vehicle. Heat absorption and ablative systems became preferable. In modern vehicles, passive cooling can be found as reinforced carbon–carbon material instead of metal. This material constitutes the thermal protection system of the nose and the wing leading edges of the Space Shuttle and was proposed for the vehicle X-33. Carbon is the most refractory material known with a sublimation temperature (for graphite) of 3825 °C. These characteristics make it a material particularly suitable for passive cooling, but with the disadvantage of being very expensive and fragile. Some spacecraft also use a heat shield (in the conventional automotive sense) to protect fuel tanks and equipment from the heat produced by a large rocket engine. Such shields were used on the Apollo Service Module and Lunar Module descent stage. The Parker Solar Probe, designed to enter the corona of the Sun, experiences a surface temperature of about 2,500 °F (1,370 °C). To withstand this temperature without damage to its body or instruments, the spacecraft's heat shield consists of carbon-carbon composite sheets with a layer of carbon foam in between. The probe was launched into space on August 18, 2018. Military Heat shields are often affixed to semi-automatic or automatic rifles and shotguns as barrel shrouds in order to protect the user's hands from the heat caused by firing shots in rapid succession. They have also often been affixed to pump-action combat shotguns, allowing the soldier to grasp the barrel while using a bayonet. Industry Heat shields are used in the metallurgical industry to protect the structural steel of buildings and other equipment from the high temperature of nearby liquid metal. See also Aeroshell Atmospheric reentry AVCOAT Intumescent Starlite Sunshield (JWST) References Spacecraft components Atmospheric entry Auto parts
Heat shield
[ "Engineering" ]
2,045
[ "Atmospheric entry", "Aerospace engineering" ]
233,944
https://en.wikipedia.org/wiki/External%20combustion%20engine
An external combustion engine (EC engine) is a reciprocating heat engine where a working fluid, contained internally, is heated by combustion in an external source, through the engine wall or a heat exchanger. The fluid then, by expanding and acting on the mechanism of the engine, produces motion and usable work. The fluid is then dumped (open cycle), or cooled, compressed and reused (closed cycle). In these types of engines, the combustion is primarily used as a heat source, and the engine can work equally well with other types of heat sources. Combustion "Combustion" refers to burning fuel with an oxidizer, to supply the heat. Engines of similar (or even identical) configuration and operation may use a supply of heat from other sources such as nuclear, solar, geothermal or exothermic reactions not involving combustion; they are not then strictly classed as external combustion engines, but as external thermal engines. Working fluid The working fluid can be of any composition and the system may be single-phase (liquid only or gas only) or dual-phase (liquid/gas). Single phase Gas is used in a Stirling engine. Single-phase liquid may sometimes be used. Dual phase Dual-phase external combustion engines use a phase transition to convert temperature to usable work, for example from liquid to (generally much larger) gas. This type of engine follows variants of the Rankine cycle. Steam engines are a common example of dual-phase engines. Another example is engines that use the Organic Rankine cycle. See also Organic Rankine cycle Steam engines Stirling engines Trochilic engine Internal combustion engine (ICE) Nuclear power Solar thermal rocket (an externally heated rocket) Naptha engine, a variant of the steam engine, using a petroleum liquid as both fuel and working fluid. References Engines
External combustion engine
[ "Physics", "Technology" ]
368
[ "Physical systems", "External combustion engines", "Machines", "Engines" ]
234,018
https://en.wikipedia.org/wiki/Assertion%20%28software%20development%29
In computer programming, specifically when using the imperative programming paradigm, an assertion is a predicate (a Boolean-valued function over the state space, usually expressed as a logical proposition using the variables of a program) connected to a point in the program, that always should evaluate to true at that point in code execution. Assertions can help a programmer read the code, help a compiler compile it, or help the program detect its own defects. For the latter, some programs check assertions by actually evaluating the predicate as they run. Then, if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception. Details The following code contains two assertions, x > 0 and x > 1, and they are indeed true at the indicated points during execution: x = 1; assert x > 0; x++; assert x > 1; Programmers can use assertions to help specify programs and to reason about program correctness. For example, a precondition—an assertion placed at the beginning of a section of code—determines the set of states under which the programmer expects the code to execute. A postcondition—placed at the end—describes the expected state at the end of execution. For example: x > 0 { x++ } x > 1. The example above uses the notation for including assertions used by C. A. R. Hoare in his 1969 article. That notation cannot be used in existing mainstream programming languages. However, programmers can include unchecked assertions using the comment feature of their programming language. For example, in C++: x = 5; x = x + 1; // {x > 1} The braces included in the comment help distinguish this use of a comment from other uses. Libraries may provide assertion features as well. For example, in C using glibc with C99 support: #include <assert.h> int f(void) { int x = 5; x = x + 1; assert(x > 1); } Several modern programming languages include checked assertions – statements that are checked at runtime or sometimes statically. If an assertion evaluates to false at runtime, an assertion failure results, which typically causes execution to abort. This draws attention to the location at which the logical inconsistency is detected and can be preferable to the behaviour that would otherwise result. The use of assertions helps the programmer design, develop, and reason about a program. Usage In languages such as Eiffel, assertions form part of the design process; other languages, such as C and Java, use them only to check assumptions at runtime. In both cases, they can be checked for validity at runtime but can usually also be suppressed. Assertions in design by contract Assertions can function as a form of documentation: they can describe the state the code expects to find before it runs (its preconditions), and the state the code expects to result in when it is finished running (postconditions); they can also specify invariants of a class. Eiffel integrates such assertions into the language and automatically extracts them to document the class. This forms an important part of the method of design by contract. This approach is also useful in languages that do not explicitly support it: the advantage of using assertion statements rather than assertions in comments is that the program can check the assertions every time it runs; if the assertion no longer holds, an error can be reported. This prevents the code from getting out of sync with the assertions. 
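In a language without built-in contract support, the same precondition/postcondition idea can be approximated with ordinary runtime assertions. The following C sketch is only illustrative; the function and variable names are hypothetical:

#include <assert.h>
#include <stdio.h>

/* Illustrative function whose contract is written as executable
   assertions rather than comments (names are hypothetical).       */
int increment_positive(int x)
{
    assert(x > 0);        /* precondition: caller must pass a positive value */
    int result = x + 1;
    assert(result > 1);   /* postcondition: the result exceeds 1 */
    return result;
}

int main(void)
{
    printf("%d\n", increment_positive(5));   /* prints 6; both assertions hold */
    return 0;
}

Unlike Eiffel-style contracts, nothing here is extracted into documentation automatically, but the checks run every time the function is called in a build with assertions enabled, so the contract and the code cannot silently drift apart.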
Assertions for run-time checking An assertion may be used to verify that an assumption made by the programmer during the implementation of the program remains valid when the program is executed. For example, consider the following Java code: int total = countNumberOfUsers(); if (total % 2 == 0) { // total is even } else { // total is odd and non-negative assert total % 2 == 1; } In Java, % is the remainder operator (modulo), and in Java, if its first operand is negative, the result can also be negative (unlike the modulo used in mathematics). Here, the programmer has assumed that total is non-negative, so that the remainder of a division with 2 will always be 0 or 1. The assertion makes this assumption explicit: if countNumberOfUsers does return a negative value, the program may have a bug. A major advantage of this technique is that when an error does occur it is detected immediately and directly, rather than later through often obscure effects. Since an assertion failure usually reports the code location, one can often pin-point the error without further debugging. Assertions are also sometimes placed at points the execution is not supposed to reach. For example, assertions could be placed at the default clause of the switch statement in languages such as C, C++, and Java. Any case which the programmer does not handle intentionally will raise an error and the program will abort rather than silently continuing in an erroneous state. In D such an assertion is added automatically when a switch statement doesn't contain a default clause. In Java, assertions have been a part of the language since version 1.4. Assertion failures result in raising an AssertionError when the program is run with the appropriate flags, without which the assert statements are ignored. In C, they are added on by the standard header assert.h defining assert (assertion) as a macro that signals an error in the case of failure, usually terminating the program. In C++, both assert.h and cassert headers provide the assert macro. The danger of assertions is that they may cause side effects either by changing memory data or by changing thread timing. Assertions should be implemented carefully so they cause no side effects on program code. Assertion constructs in a language allow for easy test-driven development (TDD) without the use of a third-party library. Assertions during the development cycle During the development cycle, the programmer will typically run the program with assertions enabled. When an assertion failure occurs, the programmer is immediately notified of the problem. Many assertion implementations will also halt the program's execution: this is useful, since if the program continued to run after an assertion violation occurred, it might corrupt its state and make the cause of the problem more difficult to locate. Using the information provided by the assertion failure (such as the location of the failure and perhaps a stack trace, or even the full program state if the environment supports core dumps or if the program is running in a debugger), the programmer can usually fix the problem. Thus assertions provide a very powerful tool in debugging. Assertions in production environment When a program is deployed to production, assertions are typically turned off, to avoid any overhead or side effects they may have. In some cases assertions are completely absent from deployed code, such as in C/C++ assertions via macros. 
In other cases, such as Java, assertions are present in the deployed code, and can be turned on in the field for debugging. Assertions may also be used to promise the compiler that a given edge condition is not actually reachable, thereby permitting certain optimizations that would not otherwise be possible. In this case, disabling the assertions could actually reduce performance. Static assertions Assertions that are checked at compile time are called static assertions. Static assertions are particularly useful in compile time template metaprogramming, but can also be used in low-level languages like C by introducing illegal code if (and only if) the assertion fails. C11 and C++11 support static assertions directly through static_assert. In earlier C versions, a static assertion can be implemented, for example, like this: #define SASSERT(pred) switch(0){case 0:case pred:;} SASSERT( BOOLEAN CONDITION ); If the (BOOLEAN CONDITION) part evaluates to false then the above code will not compile because the compiler will not allow two case labels with the same constant. The boolean expression must be a compile-time constant value, for example (sizeof(int)==4) would be a valid expression in that context. This construct does not work at file scope (i.e. not inside a function), and so it must be wrapped inside a function. Another popular way of implementing assertions in C is: static char const static_assertion[ (BOOLEAN CONDITION) ? 1 : -1 ] = {'!'}; If the (BOOLEAN CONDITION) part evaluates to false then the above code will not compile because arrays may not have a negative length. If in fact the compiler allows a negative length then the initialization byte (the '!' part) should cause even such over-lenient compilers to complain. The boolean expression must be a compile-time constant value, for example (sizeof(int) == 4) would be a valid expression in that context. Both of these methods require a method of constructing unique names. Modern compilers support a preprocessor define that facilitates the construction of unique names, by returning monotonically increasing numbers for each compilation unit. D provides static assertions through the use of static assert. Disabling assertions Most languages allow assertions to be enabled or disabled globally, and sometimes independently. Assertions are often enabled during development and disabled during final testing and on release to the customer. Not checking assertions avoids the cost of evaluating the assertions while (assuming the assertions are free of side effects) still producing the same result under normal conditions. Under abnormal conditions, disabling assertion checking can mean that a program that would have aborted will continue to run. This is sometimes preferable. Some languages, including C, YASS and C++, can completely remove assertions at compile time using the preprocessor. Similarly, launching the Python interpreter with "-O" (for "optimize") as an argument will cause the Python code generator to not emit any bytecode for asserts. Java requires an option to be passed to the run-time engine in order to enable assertions. Absent the option, assertions are bypassed, but they always remain in the code unless optimised away by a JIT compiler at run-time or excluded at compile time via the programmer manually placing each assertion behind an if (false) clause. Programmers can build checks into their code that are always active by bypassing or manipulating the language's normal assertion-checking mechanisms. 
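As a concrete illustration of the unique-name construction mentioned above for pre-C11 static assertions, the following sketch uses token pasting together with a counter macro. It assumes a compiler that provides __COUNTER__, which is widely supported (GCC, Clang, MSVC) but not part of the C standard; __LINE__ is a common portable fallback as long as no two assertions share a source line:

#define SASSERT_PASTE2(a, b) a##b
#define SASSERT_PASTE(a, b)  SASSERT_PASTE2(a, b)
/* Pre-C11 static assertion: the typedef has a negative array size,
   and therefore fails to compile, exactly when cond is false.      */
#define STATIC_ASSERT(cond) \
    typedef char SASSERT_PASTE(static_assertion_, __COUNTER__)[(cond) ? 1 : -1]

STATIC_ASSERT(sizeof(int) >= 2);              /* compiles: condition holds */
STATIC_ASSERT(sizeof(long) >= sizeof(int));   /* gets its own unique typedef name */

int main(void) { return 0; }

Each use of STATIC_ASSERT expands __COUNTER__ to a fresh integer, so multiple assertions can appear in the same scope, including at file scope, without name collisions.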
Comparison with error handling Assertions are distinct from routine error-handling. Assertions document logically impossible situations and discover programming errors: if the impossible occurs, then something fundamental is clearly wrong with the program. This is distinct from error handling: most error conditions are possible, although some may be extremely unlikely to occur in practice. Using assertions as a general-purpose error handling mechanism is unwise: assertions do not allow for recovery from errors; an assertion failure will normally halt the program's execution abruptly; and assertions are often disabled in production code. Assertions also do not display a user-friendly error message. Consider the following example of using an assertion to handle an error: int *ptr = malloc(sizeof(int) * 10); assert(ptr); // use ptr ... Here, the programmer is aware that malloc will return a NULL pointer if memory is not allocated. This is possible: the operating system does not guarantee that every call to malloc will succeed. If an out-of-memory error occurs, the program will immediately abort. Without the assertion, the program would continue running until ptr was dereferenced, and possibly longer, depending on the specific hardware being used. So long as assertions are not disabled, an immediate exit is assured. But if a graceful failure is desired, the program has to handle the failure. For example, a server may have multiple clients, or may hold resources that will not be released cleanly, or it may have uncommitted changes to write to a datastore. In such cases it is better to fail a single transaction than to abort abruptly. Another error is to rely on side effects of expressions used as arguments of an assertion. One should always keep in mind that assertions might not be executed at all, since their sole purpose is to verify that a condition which should always be true does in fact hold true. Consequently, if the program is considered to be error-free and released, assertions may be disabled and will no longer be evaluated. Consider another version of the previous example: int *ptr; // Statement below fails if malloc() returns NULL, // but is not executed at all when compiled with NDEBUG defined (e.g. via -DNDEBUG)! assert(ptr = malloc(sizeof(int) * 10)); // use ptr: ptr isn't initialised when compiled with NDEBUG defined! ... This might look like a smart way to assign the return value of malloc to ptr and check if it is NULL in one step, but the malloc call and the assignment to ptr are a side effect of evaluating the expression that forms the assert condition. When the NDEBUG macro is defined (typically by passing -DNDEBUG to the compiler), as when the program is considered to be error-free and released, the assert() statement is removed, so malloc() isn't called, rendering ptr uninitialised. This could potentially result in a segmentation fault or similar null pointer error much further down the line in program execution, causing bugs that may be sporadic and/or difficult to track down. Programmers sometimes use a similar VERIFY(X) define to alleviate this problem. Modern compilers may issue a warning when encountering the above code.
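One common shape for such a VERIFY(X) macro, shown here as a sketch rather than a quotation of any particular codebase, keeps the side effect in every build and only drops the check itself when NDEBUG is defined:

#include <assert.h>
#include <stdlib.h>

/* VERIFY(expr): always evaluates expr (so side effects survive release
   builds); only the check itself disappears when NDEBUG is defined.    */
#ifdef NDEBUG
#define VERIFY(expr) ((void)(expr))
#else
#define VERIFY(expr) assert(expr)
#endif

int main(void)
{
    int *ptr;
    VERIFY((ptr = malloc(sizeof(int) * 10)) != NULL);  /* malloc runs in all builds */
    free(ptr);
    return 0;
}

In a debug build the failed allocation aborts immediately; in a release build the allocation and assignment still happen, avoiding the uninitialised-pointer problem described above, although the failure itself then goes unchecked.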
History In 1947 reports by von Neumann and Goldstine on their design for the IAS machine, they described algorithms using an early version of flow charts, in which they included assertions: "It may be true, that whenever C actually reaches a certain point in the flow diagram, one or more bound variables will necessarily possess certain specified values, or possess certain properties, or satisfy certain properties with each other. Furthermore, we may, at such a point, indicate the validity of these limitations. For this reason we will denote each area in which the validity of such limitations is being asserted, by a special box, which we call an assertion box." The assertional method for proving correctness of programs was advocated by Alan Turing. In a talk "Checking a Large Routine" at Cambridge, June 24, 1949 Turing suggested: "How can one check a large routine in the sense of making sure that it's right? In order that the man who checks may not have too difficult a task, the programmer should make a number of definite assertions which can be checked individually, and from which the correctness of the whole program easily follows". See also Assertion definition language Design by contract Exception handling Hoare logic Static code analysis Java Modeling Language Invariant (computer science) References External links A historical perspective on runtime assertion checking in software development by Lori A. Clarke, David S. Rosenblum in: ACM SIGSOFT Software Engineering Notes 31(3):25-37, 2006 Assertions: a personal perspective by C.A.R. Hoare in: IEEE Annals of the History of Computing, Volume: 25, Issue: 2 (2003), Page(s): 14 - 25 My Compiler Does Not Understand Me by Poul-Henning Kamp in: ACM Queue 10(5), May 2012 Use of Assertions by John Regehr Formal methods Logic in computer science Conditional constructs Debugging
Assertion (software development)
[ "Mathematics", "Engineering" ]
3,290
[ "Software engineering", "Mathematical logic", "Logic in computer science", "Formal methods" ]
234,088
https://en.wikipedia.org/wiki/Crystal%20filter
A crystal filter allows some frequencies to pass through an electrical circuit while attenuating undesired frequencies. An electronic filter can use quartz crystals as resonator components of a filter circuit. Quartz crystals are piezoelectric, so their mechanical characteristics can affect electronic circuits (see mechanical filter). In particular, quartz crystals can exhibit mechanical resonances with a very high Q factor (from 10,000 to 100,000 and greater – far higher than conventional resonators built from inductors and capacitors). The crystal's stability and its high Q factor allow crystal filters to have precise center frequencies and steep band-pass characteristics. Typical crystal filter attenuation in the band-pass is approximately 2–3 dB. Crystal filters are commonly used in communication devices such as radio receivers. Crystal filters are used in the intermediate frequency (IF) stages of high-quality radio receivers. They are preferred because they are very stable mechanically and thus have little change in resonant frequency with changes in operating temperature. For the highest available stability applications, crystals are placed in ovens with controlled temperature, making the operating temperature independent of the ambient temperature. Cheaper sets may use ceramic filters built from ceramic resonators (which also exploit the piezoelectric effect) or tuned LC circuits. Very high quality "crystal ladder" filters can be constructed of serial arrays of crystals. The most common use of crystal filters is at frequencies of 9 MHz or 10.7 MHz to provide selectivity in communications receivers, or at higher frequencies as a roofing filter in receivers using up-conversion. The vibrating frequencies of the crystal are determined by its "cut" (physical shape), such as the common AT cut used for crystal filters designed for radio communications. The cut also determines some temperature characteristics, which affect the stability of the resonant frequency. However, quartz has inherently high temperature stability, so its shape does not change much at the temperatures found in typical radios. By contrast, less expensive ceramic-based filters are commonly used with a frequency of 10.7 MHz to provide filtering of unwanted frequencies in consumer FM receivers. Additionally, a lower frequency (typically 455 kHz or nearby) can be used as the second intermediate frequency and filtered with a piezoelectric-based filter. Ceramic filters at 455 kHz can achieve similar narrow bandwidths to crystal filters at 10.7 MHz. The design concept for using quartz crystals as a filtering component was first established by W.G. Cady in 1922, but it was largely W.P. Mason's work in the late 1920s and early 1930s that devised methods for incorporating crystals into LC lattice filter networks, which set the groundwork for much of the progress in telephone communications. Crystal filter designs from the 1960s allowed for true Chebyshev, Butterworth, and other typical filter types. Crystal filter design continued to improve in the 1970s and 1980s with the development of multi-pole monolithic filters, widely used today to provide IF selectivity in communication receivers. Crystal filters can be found today in radio communications, telecommunications, signal generation, and GPS devices. See also Bandpass filter Crystal oscillator References Linear filters Wireless tuning and filtering Signal processing filter Radio technology
Crystal filter
[ "Chemistry", "Technology", "Engineering" ]
657
[ "Information and communications technology", "Radio electronics", "Wireless tuning and filtering", "Telecommunications engineering", "Filters", "Radio technology", "Signal processing filter" ]
234,129
https://en.wikipedia.org/wiki/Zinc%20finger
A zinc finger is a small protein structural motif that is characterized by the coordination of one or more zinc ions (Zn2+) which stabilizes the fold. It was originally coined to describe the finger-like appearance of a hypothesized structure from the African clawed frog (Xenopus laevis) transcription factor IIIA. However, it has been found to encompass a wide variety of differing protein structures in eukaryotic cells. Xenopus laevis TFIIIA was originally demonstrated to contain zinc and require the metal for function in 1983, the first such reported zinc requirement for a gene regulatory protein followed soon thereafter by the Krüppel factor in Drosophila. It often appears as a metal-binding domain in multi-domain proteins. Proteins that contain zinc fingers (zinc finger proteins) are classified into several different structural families. Unlike many other clearly defined supersecondary structures such as Greek keys or β hairpins, there are a number of types of zinc fingers, each with a unique three-dimensional architecture. A particular zinc finger protein's class is determined by its three-dimensional structure, but it can also be recognized based on the primary structure of the protein or the identity of the ligands coordinating the zinc ion. In spite of the large variety of these proteins, however, the vast majority typically function as interaction modules that bind DNA, RNA, proteins, or other small, useful molecules, and variations in structure serve primarily to alter the binding specificity of a particular protein. Since their original discovery and the elucidation of their structure, these interaction modules have proven ubiquitous in the biological world and may be found in 3% of the genes of the human genome. In addition, zinc fingers have become extremely useful in various therapeutic and research capacities. Engineering zinc fingers to have an affinity for a specific sequence is an area of active research, and zinc finger nucleases and zinc finger transcription factors are two of the most important applications of this to be realized to date. History Zinc fingers were first identified in a study of transcription in the African clawed frog, Xenopus laevis in the laboratory of Aaron Klug. A study of the transcription of a particular RNA sequence revealed that the binding strength of a small transcription factor (transcription factor IIIA; TFIIIA) was due to the presence of zinc-coordinating finger-like structures. Amino acid sequencing of TFIIIA revealed nine tandem sequences of 30 amino acids, including two invariant pairs of cysteine and histidine residues. Extended x-ray absorption fine structure confirmed the identity of the zinc ligands: two cysteines and two histidines. The DNA-binding loop formed by the coordination of these ligands by zinc were thought to resemble fingers, hence the name. This was followed soon thereafter by the discovery of the Krüppel factor in Drosophila by the Schuh team in 1986. More recent work in the characterization of proteins in various organisms has revealed the importance of zinc ions in polypeptide stabilization. The crystal structures of zinc finger-DNA complexes solved in 1991 and 1993 revealed the canonical pattern of interactions of zinc fingers with DNA. The binding of zinc finger is found to be distinct from many other DNA-binding proteins that bind DNA through the 2-fold symmetry of the double helix, instead zinc fingers are linked linearly in tandem to bind nucleic acid sequences of varying lengths. 
Zinc fingers often bind to a sequence of DNA known as the GC box. The modular nature of the zinc finger motif allows for a large number of combinations of DNA and RNA sequences to be bound with high degree of affinity and specificity, and is therefore ideally suited for engineering protein that can be targeted to and bind specific DNA sequences. In 1994, it was shown that an artificially-constructed three-finger protein can block the expression of an oncogene in a mouse cell line. Zinc fingers fused to various other effector domains, some with therapeutic significance, have since been constructed. Such was its importance that "the zinc-finger motif" was cited in the Scientific Background to the 2024 Nobel Prize in Chemistry (awarded to David Baker, Demis Hassabis, and John M. Jumper for computational protein design and protein structure prediction). Domain Zinc finger (Znf) domains are relatively small protein motifs that contain multiple finger-like protrusions that make tandem contacts with their target molecule. Some of these domains bind zinc, but many do not, instead binding other metals such as iron, or no metal at all. For example, some family members form salt bridges to stabilise the finger-like folds. They were first identified as a DNA-binding motif in transcription factor TFIIIA from Xenopus laevis (African clawed frog), however they are now recognised to bind DNA, RNA, protein, and/or lipid substrates. Their binding properties depend on the amino acid sequence of the finger domains and on the linker between fingers, as well as on the higher-order structures and the number of fingers. Znf domains are often found in clusters, where fingers can have different binding specificities. Znf motifs occur in several unrelated protein superfamilies, varying in both sequence and structure. They display considerable versatility in binding modes, even between members of the same class (e.g., some bind DNA, others protein), suggesting that Znf motifs are stable scaffolds that have evolved specialised functions. For example, Znf-containing proteins function in gene transcription, translation, mRNA trafficking, cytoskeleton organization, epithelial development, cell adhesion, protein folding, chromatin remodeling, and zinc sensing, to name but a few. Zinc-binding motifs are stable structures, and they rarely undergo conformational changes upon binding their target. Classes Initially, the term zinc finger was used solely to describe DNA-binding motif found in Xenopus laevis; however, it is now used to refer to any number of structures related by their coordination of a zinc ion. In general, zinc fingers coordinate zinc ions with a combination of cysteine and histidine residues. Originally, the number and order of these residues was used to classify different types of zinc fingers ( e.g., Cys2His2, Cys4, and Cys6). More recently, a more systematic method has been used to classify zinc finger proteins instead. This method classifies zinc finger proteins into "fold groups" based on the overall shape of the protein backbone in the folded domain. The most common "fold groups" of zinc fingers are the Cys2His2-like (the "classic zinc finger"), treble clef, and zinc ribbon. The following table shows the different structures and their key features: Cys2His2 The Cys2His2-like fold group (C2H2) is by far the best-characterized class of zinc fingers, and is common in mammalian transcription factors. 
Such domains adopt a simple ββα fold and have the amino acid sequence motif: X2-Cys-X2,4-Cys-X12-His-X3,4,5-His This class of zinc fingers can have a variety of functions such as binding RNA and mediating protein-protein interactions, but is best known for its role in sequence-specific DNA-binding proteins such as Zif268 (Egr1). In such proteins, individual zinc finger domains typically occur as tandem repeats with two, three, or more fingers comprising the DNA-binding domain of the protein. These tandem arrays can bind in the major groove of DNA and are typically spaced at 3-bp intervals. The α-helix of each domain (often called the "recognition helix") can make sequence-specific contacts to DNA bases; residues from a single recognition helix can contact four or more bases to yield an overlapping pattern of contacts with adjacent zinc fingers. Gag-knuckle This fold group is defined by two short β-strands connected by a turn (zinc knuckle) followed by a short helix or loop and resembles the classical Cys2His2 motif with a large portion of the helix and β-hairpin truncated. The retroviral nucleocapsid (NC) proteins from HIV and other related retroviruses are examples of proteins possessing these motifs. The gag-knuckle zinc finger in the HIV NC protein is the target of a class of drugs known as zinc finger inhibitors. Treble-clef The treble-clef motif consists of a β-hairpin at the N-terminus and an α-helix at the C-terminus that each contribute two ligands for zinc binding, although a loop and a second β-hairpin of varying length and conformation can be present between the N-terminal β-hairpin and the C-terminal α-helix. These fingers are present in a diverse group of proteins that frequently do not share sequence or functional similarity with each other. The best-characterized proteins containing treble-clef zinc fingers are the nuclear hormone receptors. Zinc ribbon The zinc ribbon fold is characterised by two beta-hairpins forming two structurally similar zinc-binding sub-sites. Zn2/Cys6 The canonical members of this class contain a binuclear zinc cluster in which two zinc ions are bound by six cysteine residues. These zinc fingers can be found in several transcription factors including the yeast Gal4 protein. Miscellaneous The zinc finger antiviral protein (ZAP) binds to the CpG site. It is used in mammals for antiviral defense. Applications Various protein engineering techniques can be used to alter the DNA-binding specificity of zinc fingers, and tandem repeats of such engineered zinc fingers can be used to target desired genomic DNA sequences. Fusing a second protein domain such as a transcriptional activator or repressor to an array of engineered zinc fingers that bind near the promoter of a given gene can be used to alter the transcription of that gene. Fusions between engineered zinc finger arrays and protein domains that cleave or otherwise modify DNA can also be used to target those activities to desired genomic loci. The most common applications for engineered zinc finger arrays include zinc finger transcription factors and zinc finger nucleases, but other applications have also been described. Typical engineered zinc finger arrays have between 3 and 6 individual zinc finger motifs and bind target sites ranging from 9 basepairs to 18 basepairs in length. Arrays with 6 zinc finger motifs are particularly attractive because they bind a target site that is long enough to have a good chance of being unique in a mammalian genome.
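A back-of-envelope calculation illustrates why longer target sites are attractive. Assuming a random-sequence model and an approximate haploid genome size, the expected number of chance occurrences of a fixed n-basepair site is roughly G/4^n; the short C sketch below (illustrative only, with assumed values) evaluates this for 9 bp to 18 bp sites:

#include <stdio.h>
#include <math.h>

/* Expected number of occurrences of a fixed n-bp site in a genome of
   size G under a crude random-sequence assumption: roughly G / 4^n.   */
int main(void)
{
    double G = 3.2e9;               /* approximate haploid human genome size, bp */
    for (int n = 9; n <= 18; n += 3)
        printf("%2d bp site: ~%.2g expected matches\n", n, G / pow(4, n));
    return 0;
}

Under these assumptions a 9 bp site is expected to occur thousands of times by chance, whereas an 18 bp site is expected to occur less than once, consistent with the statement above about 6-finger arrays.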
Zinc finger nucleases Engineered zinc finger arrays are often fused to a DNA cleavage domain (usually the cleavage domain of FokI) to generate zinc finger nucleases. Such zinc finger-FokI fusions have become useful reagents for manipulating genomes of many higher organisms including Drosophila melanogaster, Caenorhabditis elegans, tobacco, corn, zebrafish, various types of mammalian cells, and rats. Targeting a double-strand break to a desired genomic locus can be used to introduce frame-shift mutations into the coding sequence of a gene due to the error-prone nature of the non-homologous DNA repair pathway. If a homologous DNA "donor sequence" is also used then the genomic locus can be converted to a defined sequence via the homology directed repair pathway. An ongoing clinical trial is evaluating Zinc finger nucleases that disrupt the CCR5 gene in CD4+ human T-cells as a potential treatment for HIV/AIDS. Methods of engineering zinc finger arrays The majority of engineered zinc finger arrays are based on the zinc finger domain of the murine transcription factor Zif268, although some groups have used zinc finger arrays based on the human transcription factor SP1. Zif268 has three individual zinc finger motifs that collectively bind a 9 bp sequence with high affinity. The structure of this protein bound to DNA was solved in 1991 and stimulated a great deal of research into engineered zinc finger arrays. In 1994 and 1995, a number of groups used phage display to alter the specificity of a single zinc finger of Zif268. There are two main methods currently used to generate engineered zinc finger arrays, modular assembly, and a bacterial selection system, and there is some debate about which method is best suited for most applications. The most straightforward method to generate new zinc finger arrays is to combine smaller zinc finger "modules" of known specificity. The structure of the zinc finger protein Zif268 bound to DNA described by Pavletich and Pabo in their 1991 publication has been key to much of this work and describes the concept of obtaining fingers for each of the 64 possible base pair triplets and then mixing and matching these fingers to design proteins with any desired sequence specificity. The most common modular assembly process involves combining separate zinc fingers that can each recognize a 3-basepair DNA sequence to generate 3-finger, 4-, 5-, or 6-finger arrays that recognize target sites ranging from 9 basepairs to 18 basepairs in length. Another method uses 2-finger modules to generate zinc finger arrays with up to six individual zinc fingers. The Barbas Laboratory of The Scripps Research Institute used phage display to develop and characterize zinc finger domains that recognize most DNA triplet sequences while another group isolated and characterized individual fingers from the human genome. A potential drawback with modular assembly in general is that specificities of individual zinc finger can overlap and can depend on the context of the surrounding zinc fingers and DNA. A recent study demonstrated that a high proportion of 3-finger zinc finger arrays generated by modular assembly fail to bind their intended target with sufficient affinity in a bacterial two-hybrid assay and fail to function as zinc finger nucleases, but the success rate was somewhat higher when sites of the form GNNGNNGNN were targeted. 
A subsequent study used modular assembly to generate zinc finger nucleases with both 3-finger arrays and 4-finger arrays and observed a much higher success rate with 4-finger arrays. A variant of modular assembly that takes the context of neighboring fingers into account has also been reported and this method tends to yield proteins with improved performance relative to standard modular assembly. Numerous selection methods have been used to generate zinc finger arrays capable of targeting desired sequences. Initial selection efforts utilized phage display to select proteins that bound a given DNA target from a large pool of partially randomized zinc finger arrays. This technique is difficult to use on more than a single zinc finger at a time, so a multi-step process that generated a completely optimized 3-finger array by adding and optimizing a single zinc finger at a time was developed. More recent efforts have utilized yeast one-hybrid systems, bacterial one-hybrid and two-hybrid systems, and mammalian cells. A promising new method to select novel 3-finger zinc finger arrays utilizes a bacterial two-hybrid system and has been dubbed "OPEN" by its creators. This system combines pre-selected pools of individual zinc fingers that were each selected to bind a given triplet and then utilizes a second round of selection to obtain 3-finger arrays capable of binding a desired 9-bp sequence. This system was developed by the Zinc Finger Consortium as an alternative to commercial sources of engineered zinc finger arrays. It is somewhat difficult to directly compare the binding properties of proteins generated with this method to proteins generated by modular assembly as the specificity profiles of proteins generated by the OPEN method have never been reported. Examples This entry represents the CysCysHisCys (C2HC) type zinc finger domain found in eukaryotes. Proteins containing these domains include: MYST family histone acetyltransferases Myelin transcription factor Myt1 Suppressor of tumourigenicity protein 18 (ST18) See also B-box zinc finger DNA-binding protein FPG IleRS zinc finger Krüppel associated box RING finger domain Sequence motif Steroid hormone receptor Structural motif TAL effector Transcription Activator-Like Effector Nuclease Zinc finger inhibitor Zinc finger nuclease Zinc Finger Transcription Factor References External links C2H2 family at PlantTFDB: Plant Transcription Factor Database The double helix between the zinc finger Zinc Finger Tools design and information site Human KZNF Gene Catalog Zinc finger C2H2-type domain in PROSITE Entry for zinc finger class C2H2 in the SMART database The Zinc Finger Consortium ZiFiT- Zinc Finger Design Tool Zinc Finger Consortium Materials from Addgene Predicting DNA-binding Specificities for C2H2 Zinc Finger Proteins Protein domains Protein structural motifs Protein folds DNA-binding substances Zinc finger proteins Thiolates Protein superfamilies
Zinc finger
[ "Chemistry", "Biology" ]
3,438
[ "Genetics techniques", "Functional groups", "Protein classification", "Protein structural motifs", "Thiolates", "DNA-binding substances", "Protein domains", "Protein superfamilies" ]
234,132
https://en.wikipedia.org/wiki/Klystron
A klystron is a specialized linear-beam vacuum tube, invented in 1937 by American electrical engineers Russell and Sigurd Varian, which is used as an amplifier for high radio frequencies, from UHF up into the microwave range. Low-power klystrons are used as oscillators in terrestrial microwave relay communications links, while high-power klystrons are used as output tubes in UHF television transmitters, satellite communication, radar transmitters, and to generate the drive power for modern particle accelerators. In a klystron, an electron beam interacts with radio waves as it passes through resonant cavities, metal boxes along the length of a tube. The electron beam first passes through a cavity to which the input signal is applied. The energy of the electron beam amplifies the signal, and the amplified signal is taken from a cavity at the other end of the tube. The output signal can be coupled back into the input cavity to make an electronic oscillator to generate radio waves. The power gain of klystrons can be high, up to 60 dB (an increase in signal power of a factor of one million), with output power up to tens of megawatts, but the bandwidth is narrow, usually a few percent although it can be up to 10% in some devices. A reflex klystron is an obsolete type in which the electron beam was reflected back along its path by a high potential electrode, used as an oscillator. Etymology The name klystron comes from the Greek verb κλύζω (klyzo) referring to the action of waves breaking against a shore, and the suffix -τρον ("tron") meaning the place where the action happens. The name "klystron" was suggested by Hermann Fränkel, a professor in the classics department at Stanford University when the klystron was under development. History The klystron was the first significantly powerful source of radio waves in the microwave range; before its invention the only sources were the Barkhausen–Kurz tube and split-anode magnetron, which were limited to very low power. It was invented by the brothers Russell and Sigurd Varian at Stanford University. Their prototype was completed and demonstrated successfully on August 30, 1937. Upon publication in 1939, news of the klystron immediately influenced the work of US and UK researchers working on radar equipment. The Varians went on to found Varian Associates to commercialize the technology (for example, to make small linear accelerators to generate photons for external beam radiation therapy). Their work was preceded by the description of velocity modulation by A. Arsenjewa-Heil and Oskar Heil (wife and husband) in 1935, though the Varians were probably unaware of the Heils' work. The work of physicist W. W. Hansen was instrumental in the development of the klystron and was cited by the Varian brothers in their 1939 paper. His resonator analysis, which dealt with the problem of accelerating electrons toward a target, could be used just as well to decelerate electrons (i.e., transfer their kinetic energy to RF energy in a resonator). During the Second World War, Hansen lectured at the MIT Radiation labs two days a week, commuting to Boston from Sperry Gyroscope Company on Long Island. His resonator was called a "rhumbatron" by the Varian brothers. Hansen died of beryllium disease in 1949 as a result of exposure to beryllium oxide (BeO). 
During the Second World War, the Axis powers relied mostly on (then low-powered and long wavelength) klystron technology for their radar system microwave generation, while the Allies used the far more powerful but frequency-drifting technology of the cavity magnetron for much shorter-wavelength centimetric microwave generation. Klystron tube technologies for very high-power applications, such as synchrotrons and radar systems, have since been developed. Right after the war, AT&T used 4-watt klystrons in its brand new network of microwave relay links that covered the contiguous United States. The network provided long-distance telephone service and also carried television signals for the major TV networks. Western Union Telegraph Company also built point-to-point microwave communication links using intermediate repeater stations at about 40 mile intervals at that time, using 2K25 reflex klystrons in both the transmitters and receivers. In some applications, klystrons have been replaced by solid-state transistors. High-efficiency klystrons have been developed that have 10% higher efficiency than conventional klystrons. Operation Klystrons amplify RF signals by converting the kinetic energy in a DC electron beam into radio frequency power. In a vacuum, a beam of electrons is emitted by an electron gun or thermionic cathode and accelerated by high-voltage electrodes (typically in the tens of kilovolts). This beam passes through an input cavity resonator. RF energy is fed into the input cavity at, or near, its resonant frequency, creating standing waves, which produce an oscillating voltage, which acts on the electron beam. The electric field causes the electrons to "bunch": electrons that pass through when the electric field opposes their motion are slowed, while electrons which pass through when the electric field is in the same direction are accelerated, causing the previously continuous electron beam to form bunches at the input frequency. To reinforce the bunching, a klystron may contain additional "buncher" cavities. The beam then passes through a "drift" tube, in which the faster electrons catch up to the slower ones, creating the "bunches", then through a "catcher" cavity. In the output "catcher" cavity, each bunch enters the cavity at the time in the cycle when the electric field opposes the electrons' motion, decelerating them. Thus the kinetic energy of the electrons is converted to potential energy of the field, increasing the amplitude of the oscillations. The oscillations excited in the catcher cavity are coupled out through a coaxial cable or waveguide. The spent electron beam, with reduced energy, is captured by a collector electrode. To make an oscillator, the output cavity can be coupled to the input cavity(s) with a coaxial cable or waveguide. Positive feedback excites spontaneous oscillations at the resonant frequency of the cavities. Two-cavity klystron The simplest klystron tube is the two-cavity klystron. In this tube there are two microwave cavity resonators, the "catcher" and the "buncher". When used as an amplifier, the weak microwave signal to be amplified is applied to the buncher cavity through a coaxial cable or waveguide, and the amplified signal is extracted from the catcher cavity. At one end of the tube is the hot cathode which produces electrons when heated by a filament. The electrons are attracted to and pass through an anode cylinder at a high positive potential; the cathode and anode act as an electron gun to produce a high velocity stream of electrons.
An external electromagnet winding creates a longitudinal magnetic field along the beam axis which prevents the beam from spreading. The beam first passes through the "buncher" cavity resonator, through grids attached to each side. The buncher grids have an oscillating AC potential across them, produced by standing wave oscillations within the cavity, excited by the input signal at the cavity's resonant frequency applied by a coaxial cable or waveguide. The direction of the field between the grids changes twice per cycle of the input signal. Electrons entering when the entrance grid is negative and the exit grid is positive encounter an electric field in the same direction as their motion, and are accelerated by the field. Electrons entering a half-cycle later, when the polarity is opposite, encounter an electric field which opposes their motion, and are decelerated. Beyond the buncher grids is a space called the drift space. This space is long enough so that the accelerated electrons catch up with electrons that were decelerated at an earlier time, forming "bunches" longitudinally along the beam axis. Its length is chosen to allow maximum bunching at the resonant frequency, and may be several feet long. The electrons then pass through a second cavity, called the "catcher", through a similar pair of grids on each side of the cavity. The function of the catcher grids is to absorb energy from the electron beam. The bunches of electrons passing through excite standing waves in the cavity, which has the same resonant frequency as the buncher cavity. Each bunch of electrons passes between the grids at a point in the cycle when the exit grid is negative with respect to the entrance grid, so the electric field in the cavity between the grids opposes the electrons' motion. The electrons thus do work on the electric field and are decelerated; their kinetic energy is converted to electric potential energy, increasing the amplitude of the oscillating electric field in the cavity. Thus the oscillating field in the catcher cavity is an amplified copy of the signal applied to the buncher cavity. The amplified signal is extracted from the catcher cavity through a coaxial cable or waveguide. After passing through the catcher and giving up its energy, the lower energy electron beam is absorbed by a "collector" electrode, a second anode which is kept at a small positive voltage. Klystron oscillator An electronic oscillator can be made from a klystron tube, by providing a feedback path from output to input by connecting the "catcher" and "buncher" cavities with a coaxial cable or waveguide. When the device is turned on, electronic noise in the cavity is amplified by the tube and fed back from the output catcher to the buncher cavity to be amplified again. Because of the high Q of the cavities, the signal quickly becomes a sine wave at the resonant frequency of the cavities. Multicavity klystron In all modern klystrons, the number of cavities exceeds two. Additional "buncher" cavities added between the first "buncher" and the "catcher" may be used to increase the gain of the klystron or to increase the bandwidth. The residual kinetic energy in the electron beam when it hits the collector electrode represents wasted energy, which is dissipated as heat, which must be removed by a cooling system. Some modern klystrons include depressed collectors, which recover energy from the beam before collecting the electrons, increasing efficiency.
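To make the velocity-modulation and bunching mechanism described above concrete, here is a toy numerical sketch (all numbers are illustrative assumptions, not parameters of any real tube): electrons leave the buncher grids with a small sinusoidal velocity modulation, and after a drift space their arrival times cluster into bunches at the modulation frequency.

```python
import numpy as np

# Illustrative (made-up) parameters: a 3 GHz modulation on a 5e7 m/s beam.
f = 3e9                 # modulation frequency, Hz
v0 = 5e7                # unmodulated electron velocity, m/s
dv = 0.02 * v0          # depth of the velocity modulation at the buncher grids
L = 0.10                # drift-space length, m

# Electrons enter the drift space uniformly in time over a few RF cycles.
t_in = np.linspace(0, 3 / f, 20000)
v = v0 + dv * np.sin(2 * np.pi * f * t_in)   # velocity modulation

# Arrival time at the catcher: entry time plus transit time through the drift space.
# Faster electrons catch up with slower ones that entered earlier.
t_out = t_in + L / v

# Histogram of arrival times: the peaks are the electron "bunches".
counts, edges = np.histogram(t_out, bins=200)
print("electrons per time bin, max/min:", counts.max(), counts.min())
```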
Multistage depressed collectors enhance the energy recovery by "sorting" the electrons in energy bins. Reflex klystron The reflex klystron (also known as a Sutton tube after one of its inventors, Robert Sutton) was a low power klystron tube with a single cavity, which functioned as an oscillator. It was used as a local oscillator in some radar receivers and a modulator in microwave transmitters in the 1950s and 1960s, but is now obsolete, replaced by semiconductor microwave devices. In the reflex klystron the electron beam passes through a single resonant cavity. The electrons are fired into one end of the tube by an electron gun. After passing through the resonant cavity they are reflected by a negatively charged reflector electrode for another pass through the cavity, where they are then collected. The electron beam is velocity modulated when it first passes through the cavity. The formation of electron bunches takes place in the drift space between the reflector and the cavity. The voltage on the reflector must be adjusted so that the bunching is at a maximum as the electron beam re-enters the resonant cavity, thus ensuring a maximum of energy is transferred from the electron beam to the RF oscillations in the cavity. The reflector voltage may be varied slightly from the optimum value, which results in some loss of output power, but also in a variation in frequency. This effect is used to good advantage for automatic frequency control in receivers, and in frequency modulation for transmitters. The level of modulation applied for transmission is small enough that the power output essentially remains constant. At regions far from the optimum voltage, no oscillations are obtained at all. There are often several regions of reflector voltage where the reflex klystron will oscillate; these are referred to as modes. The electronic tuning range of the reflex klystron is usually referred to as the variation in frequency between half power points—the points in the oscillating mode where the power output is half the maximum output in the mode. Modern semiconductor technology has effectively replaced the reflex klystron in most applications. Gyroklystron The gyroklystron is a microwave amplifier with operation dependent on the cyclotron resonance condition. Similarly to the klystron, its operation depends on the modulation of the electron beam, but instead of axial bunching the modulation forces alter the cyclotron frequency and hence the azimuthal component of motion, resulting in phase bunches. In the output cavity, electrons which arrive at the correct decelerating phase transfer their energy to the cavity field and the amplified signal can be coupled out. The gyroklystron has cylindrical or coaxial cavities and operates with transverse electric field modes. Since the interaction depends on the resonance condition, larger cavity dimensions than a conventional klystron can be used. This allows the gyroklystron to deliver high power at very high frequencies which is challenging using conventional klystrons. Tuning Some klystrons have cavities that are tunable. By adjusting the frequency of individual cavities, the technician can change the operating frequency, gain, output power, or bandwidth of the amplifier. No two klystrons are exactly identical (even when comparing like part/model number klystrons). Each unit has manufacturer-supplied calibration values for its specific performance characteristics. 
Without this information the klystron would not be properly tunable, and hence not perform well, if at all. Tuning a klystron is delicate work which, if not done properly, can cause damage to equipment or injury to the technician due to the very high voltages that could be produced. The technician must be careful not to exceed the limits of the graduations, or damage to the klystron can result. Other precautions taken when tuning a klystron include using nonferrous tools. Some klystrons employ permanent magnets. If a technician uses ferrous tools (which are ferromagnetic) and comes too close to the intense magnetic fields that contain the electron beam, such a tool can be pulled into the unit by the intense magnetic force, smashing fingers, injuring the technician, or damaging the unit. Special lightweight nonmagnetic (or rather very weakly diamagnetic) tools made of beryllium alloy have been used for tuning U.S. Air Force klystrons. Precautions are routinely taken when transporting klystron devices in aircraft, as the intense magnetic field can interfere with magnetic navigation equipment. Special overpacks are designed to help limit this field "in the field," and thus allow such devices to be transported safely. Optical klystron The technique of amplification used in the klystron is also being applied experimentally at optical frequencies in a type of laser called the free-electron laser (FEL); these devices are called optical klystrons. Instead of microwave cavities, these use devices called undulators. The electron beam passes through an undulator, in which a laser light beam causes bunching of the electrons. Then the beam passes through a second undulator, in which the electron bunches cause oscillation to create a second, more powerful light beam. Floating drift tube klystron The floating drift tube klystron has a single cylindrical chamber containing an electrically isolated central tube. Electrically, this is similar to the two cavity oscillator klystron with considerable feedback between the two cavities. Electrons exiting the source cavity are velocity modulated by the electric field as they travel through the drift tube and emerge at the destination chamber in bunches, delivering power to the oscillation in the cavity. This type of oscillator klystron has an advantage over the two-cavity klystron on which it is based, in that it needs only one tuning element to effect changes in frequency. The drift tube is electrically insulated from the cavity walls, and DC bias is applied separately. The DC bias on the drift tube may be adjusted to alter the transit time through it, thus allowing some electronic tuning of the oscillating frequency. The amount of tuning in this manner is not large and is normally used for frequency modulation when transmitting. Applications Klystrons can produce far higher microwave power outputs than solid state microwave devices such as Gunn diodes. In modern systems, they are used from UHF (hundreds of megahertz) up to hundreds of gigahertz (as in the Extended Interaction Klystrons in the CloudSat satellite). Klystrons can be found at work in radar, satellite and wideband high-power communication (very common in television broadcasting and EHF satellite terminals), medicine (radiation oncology), and high-energy physics (particle accelerators and experimental reactors). At SLAC, for example, klystrons are routinely employed which have outputs in the range of 50 MW (pulse) and 50 kW (time-averaged) at 2856 MHz. 
The Arecibo Planetary Radar used two klystrons that provided a total power output of 1 MW (continuous) at 2380 MHz. Popular Science's "Best of What's New 2007" described a company, Global Resource Corporation, currently defunct, using a klystron to convert the hydrocarbons in everyday materials, automotive waste, coal, oil shale, and oil sands into natural gas and diesel fuel. See also Crossed-field amplifier Electromagnetic radiation Free-electron laser Gyrotron Inductive output tube Linear accelerator Magnetron Backward-wave oscillator Particle accelerator Traveling-wave tube Waveguide Extended interaction oscillator References External links The Klystron (YouTube-video, describes in detail the electron gun and various klystron designs.) (Two cavity klystron) (Multicavity klystron) (Reflex klystron) (High power for linear accelerator) History of the Klystron from Varian Stanford Linear Accelerator Center (249) Klystron Gallery Pictures Klystron collection in the Virtual Valve Museum Klystron Amplifier "Microwave Gun" klystron developed at the SLAC Electron beam Microwave technology Accelerator physics Vacuum tubes American inventions
Klystron
[ "Physics", "Chemistry" ]
3,963
[ "Electron", "Applied and interdisciplinary physics", "Electron beam", "Vacuum tubes", "Vacuum", "Experimental physics", "Accelerator physics", "Matter" ]
31,211
https://en.wikipedia.org/wiki/Bolzano%E2%80%93Weierstrass%20theorem
In mathematics, specifically in real analysis, the Bolzano–Weierstrass theorem, named after Bernard Bolzano and Karl Weierstrass, is a fundamental result about convergence in a finite-dimensional Euclidean space $\mathbb{R}^n$. The theorem states that each infinite bounded sequence in $\mathbb{R}^n$ has a convergent subsequence. An equivalent formulation is that a subset of $\mathbb{R}^n$ is sequentially compact if and only if it is closed and bounded. The theorem is sometimes called the sequential compactness theorem. History and significance The Bolzano–Weierstrass theorem is named after mathematicians Bernard Bolzano and Karl Weierstrass. It was actually first proved by Bolzano in 1817 as a lemma in the proof of the intermediate value theorem. Some fifty years later the result was identified as significant in its own right, and proved again by Weierstrass. It has since become an essential theorem of analysis. Proof First we prove the theorem for $\mathbb{R}$ (the set of all real numbers), in which case the ordering on $\mathbb{R}$ can be put to good use. Indeed, we have the following result: Lemma: Every infinite sequence $(x_n)$ in $\mathbb{R}$ has an infinite monotone subsequence (a subsequence that is either non-decreasing or non-increasing). Proof: Let us call a positive integer-valued index $n$ of a sequence a "peak" of the sequence when $x_m \le x_n$ for every $m > n$. Suppose first that the sequence has infinitely many peaks, which means there is a subsequence with the following indices $n_1 < n_2 < n_3 < \dots$ and the following terms $x_{n_1} \ge x_{n_2} \ge x_{n_3} \ge \dots$. So, the infinite sequence $(x_n)$ in $\mathbb{R}$ has a monotone (non-increasing) subsequence, which is $(x_{n_j})$. But suppose now that there are only finitely many peaks, let $N$ be the final peak if one exists (let $N = 0$ otherwise) and let the first index of a new subsequence be set to $n_1 = N + 1$. Then $n_1$ is not a peak, since $n_1$ comes after the final peak, which implies the existence of $n_2$ with $n_2 > n_1$ and $x_{n_2} \ge x_{n_1}$. Again, $n_2$ comes after the final peak, hence there is an $n_3$ where $n_3 > n_2$ with $x_{n_3} \ge x_{n_2}$. Repeating this process leads to an infinite non-decreasing subsequence $x_{n_1} \le x_{n_2} \le x_{n_3} \le \dots$, thereby proving that every infinite sequence in $\mathbb{R}$ has a monotone subsequence. Now suppose one has a bounded sequence in $\mathbb{R}$; by the lemma proven above there exists a monotone subsequence, likewise also bounded. It follows from the monotone convergence theorem that this subsequence converges. The general case ($\mathbb{R}^n$) can be reduced to the case of $\mathbb{R}$. Firstly, we will acknowledge that a sequence (in $\mathbb{R}^n$ or $\mathbb{R}$) has a convergent subsequence if and only if there exists a countable set $T \subseteq N$, where $N$ is the index set of the sequence, such that $(x_t)_{t \in T}$ converges. Let $(x_m)$ be any bounded sequence in $\mathbb{R}^n$ and denote its index set by $N$. The sequence $(x_m)$ may be expressed as an n-tuple of sequences in $\mathbb{R}$, $x_m = (x_m^1, x_m^2, \dots, x_m^n)$, where $(x_m^j)$ is a sequence in $\mathbb{R}$ for $j = 1, \dots, n$. Since $(x_m)$ is bounded, $(x_m^j)$ is also bounded for $j = 1, \dots, n$. It follows then by the lemma that $(x_m^1)$ has a convergent subsequence and hence there exists a countable set $T_1 \subseteq N$ such that $(x_t^1)_{t \in T_1}$ converges. For the sequence $(x_t^2)_{t \in T_1}$, by applying the lemma once again there exists a countable set $T_2 \subseteq T_1$ such that $(x_t^2)_{t \in T_2}$ converges and hence $(x_t^2)$ has a convergent subsequence. This reasoning may be applied until we obtain a countable set $T_n \subseteq T_{n-1} \subseteq \dots \subseteq T_1$ for which $(x_t^j)_{t \in T_n}$ converges for $j = 1, \dots, n$. Hence, $(x_t)_{t \in T_n}$ converges and therefore, since $(x_m)$ was arbitrary, any bounded sequence in $\mathbb{R}^n$ has a convergent subsequence. Alternative proof There is also an alternative proof of the Bolzano–Weierstrass theorem using nested intervals. We start with a bounded sequence $(x_n)$: since the sequence is bounded, all its terms lie in some closed interval $I_1$; bisecting $I_1$, at least one of the two halves contains infinitely many terms of the sequence, and we take such a half (as a closed interval) to be $I_2$; repeating this bisection produces a nested sequence of closed intervals $I_1 \supseteq I_2 \supseteq I_3 \supseteq \dots$, each containing infinitely many terms of the sequence. Because we halve the length of an interval at each step, the limit of the interval's length is zero. Also, by the nested intervals theorem, which states that if each $I_n$ is a closed and bounded interval, say $I_n = [a_n, b_n]$ with $a_n \le b_n$, then under the assumption of nesting, the intersection of the $I_n$ is not empty.
Thus there is a number $x$ that is in each interval $I_n$. Now we show that $x$ is an accumulation point of $(x_n)$. Take a neighbourhood $U$ of $x$. Because the length of the intervals converges to zero, there is an interval $I_N$ that is a subset of $U$. Because $I_N$ contains by construction infinitely many members of $(x_n)$ and $I_N \subseteq U$, $U$ also contains infinitely many members of $(x_n)$. This proves that $x$ is an accumulation point of $(x_n)$. Thus, there is a subsequence of $(x_n)$ that converges to $x$. Sequential compactness in Euclidean spaces Definition: A set $A \subseteq \mathbb{R}^n$ is sequentially compact if every sequence $(x_n)$ in $A$ has a convergent subsequence converging to an element of $A$. Theorem: $A \subseteq \mathbb{R}^n$ is sequentially compact if and only if $A$ is closed and bounded. Proof: (sequential compactness implies closed and bounded) Suppose $A$ is a subset of $\mathbb{R}^n$ with the property that every sequence in $A$ has a subsequence converging to an element of $A$. Then $A$ must be bounded, since otherwise the following unbounded sequence can be constructed. For every $m \in \mathbb{N}$, define $x_m$ to be any arbitrary point of $A$ such that $\|x_m\| \ge m$. Then, every subsequence of $(x_m)$ is unbounded and therefore not convergent. Moreover, $A$ must be closed, since any limit point of $A$, which has a sequence of points in $A$ converging to itself, must also lie in $A$. Proof: (closed and bounded implies sequential compactness) Since $A$ is bounded, any sequence $(x_n)$ in $A$ is also bounded. From the Bolzano–Weierstrass theorem, $(x_n)$ contains a subsequence converging to some point $x$. Since $x$ is a limit point of $A$ and $A$ is a closed set, $x$ must be an element of $A$. Thus the subsets $A$ of $\mathbb{R}^n$ for which every sequence in $A$ has a subsequence converging to an element of $A$ – i.e., the subsets that are sequentially compact in the subspace topology – are precisely the closed and bounded subsets. This form of the theorem makes especially clear the analogy to the Heine–Borel theorem, which asserts that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. In fact, general topology tells us that a metrizable space is compact if and only if it is sequentially compact, so that the Bolzano–Weierstrass and Heine–Borel theorems are essentially the same. Application to economics There are different important equilibrium concepts in economics, the proofs of the existence of which often require variations of the Bolzano–Weierstrass theorem. One example is the existence of a Pareto efficient allocation. An allocation is a matrix of consumption bundles for agents in an economy, and an allocation is Pareto efficient if no change can be made to it that makes no agent worse off and at least one agent better off (here rows of the allocation matrix must be rankable by a preference relation). The Bolzano–Weierstrass theorem allows one to prove that if the set of allocations is compact and non-empty, then the system has a Pareto-efficient allocation. See also Sequentially compact space Heine–Borel theorem Completeness of the real numbers Ekeland's variational principle Notes References External links A proof of the Bolzano–Weierstrass theorem PlanetMath: proof of Bolzano–Weierstrass Theorem The Bolzano-Weierstrass Rap Theorems about real number sequences Compactness theorems
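As a numerical companion to the bisection (nested intervals) argument above (an illustration added here, not part of the article): the sketch below uses a long finite prefix of the bounded sequence $x_n = \sin(n)$ as a stand-in for the infinite sequence and repeatedly keeps the half-interval containing more of its terms, where the real proof keeps a half containing infinitely many terms.

```python
import math

# Bounded sequence in [-1, 1]: a long finite prefix of x_n = sin(n).
xs = [math.sin(n) for n in range(1, 100001)]

lo, hi = -1.0, 1.0
for _ in range(12):                     # halve the interval 12 times
    mid = (lo + hi) / 2
    in_left = sum(lo <= x <= mid for x in xs)
    in_right = sum(mid < x <= hi for x in xs)
    # Keep a half that still contains many terms (in the proof: infinitely many).
    if in_left >= in_right:
        hi = mid
    else:
        lo = mid

remaining = sum(lo <= x <= hi for x in xs)
print(f"nested interval [{lo:.6f}, {hi:.6f}] still contains {remaining} terms of the prefix")
```

Any point of the final tiny interval approximates an accumulation point of the prefix, mirroring how the nested intervals in the proof shrink onto an accumulation point of the full sequence.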
Bolzano–Weierstrass theorem
[ "Mathematics" ]
1,456
[ "Sequences and series", "Compactness theorems", "Mathematical structures", "Theorems about real number sequences", "Theorems in topology" ]
31,248
https://en.wikipedia.org/wiki/Travelling%20salesman%20problem
In the theory of computational complexity, the travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research. The travelling purchaser problem, the vehicle routing problem and the ring star problem are three generalizations of TSP. The decision version of the TSP (where given a length L, the task is to decide whether the graph has a tour whose length is at most L) belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities. The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. Even though the problem is computationally difficult, many heuristics and exact algorithms are known, so that some instances with tens of thousands of cities can be solved completely, and even problems with millions of cities can be approximated within a small fraction of 1%. The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. The TSP also appears in astronomy, as astronomers observing many sources want to minimize the time spent moving the telescope between the sources; in such problems, the TSP can be embedded inside an optimal control problem. In many applications, additional constraints such as limited resources or time windows may be imposed. History The origins of the travelling salesman problem are unclear. A handbook for travelling salesmen from 1832 mentions the problem and includes example tours through Germany and Switzerland, but contains no mathematical treatment. The TSP was mathematically formulated in the 19th century by the Irish mathematician William Rowan Hamilton and by the British mathematician Thomas Kirkman. Hamilton's icosian game was a recreational puzzle based on finding a Hamiltonian cycle. The general form of the TSP appears to have been first studied by mathematicians during the 1930s in Vienna and at Harvard, notably by Karl Menger, who defines the problem, considers the obvious brute-force algorithm, and observes the non-optimality of the nearest neighbour heuristic: It was first considered mathematically in the 1930s by Merrill M. Flood who was looking to solve a school bus routing problem. Hassler Whitney at Princeton University generated interest in the problem, which he called the "48 states problem". The earliest publication using the phrase "travelling [or traveling] salesman problem" was the 1949 RAND Corporation report by Julia Robinson, "On the Hamiltonian game (a traveling salesman problem)." In the 1950s and 1960s, the problem became increasingly popular in scientific circles in Europe and the United States after the RAND Corporation in Santa Monica offered prizes for steps in solving the problem. 
Notable contributions were made by George Dantzig, Delbert Ray Fulkerson, and Selmer M. Johnson from the RAND Corporation, who expressed the problem as an integer linear program and developed the cutting plane method for its solution. They wrote what is considered the seminal paper on the subject in which, with these new methods, they solved an instance with 49 cities to optimality by constructing a tour and proving that no other tour could be shorter. Dantzig, Fulkerson, and Johnson, however, speculated that, given a near-optimal solution, one may be able to find optimality or prove optimality by adding a small number of extra inequalities (cuts). They used this idea to solve their initial 49-city problem using a string model. They found they only needed 26 cuts to come to a solution for their 49 city problem. While this paper did not give an algorithmic approach to TSP problems, the ideas that lay within it were indispensable to later creating exact solution methods for the TSP, though it would take 15 years to find an algorithmic approach in creating these cuts. As well as cutting plane methods, Dantzig, Fulkerson, and Johnson used branch-and-bound algorithms perhaps for the first time. In 1959, Jillian Beardwood, J.H. Halton, and John Hammersley published an article entitled "The Shortest Path Through Many Points" in the journal of the Cambridge Philosophical Society. The Beardwood–Halton–Hammersley theorem provides a practical solution to the travelling salesman problem. The authors derived an asymptotic formula to determine the length of the shortest route for a salesman who starts at a home or office and visits a fixed number of locations before returning to the start. In the following decades, the problem was studied by many researchers from mathematics, computer science, chemistry, physics, and other sciences. In the 1960s, however, a new approach was created that, instead of seeking optimal solutions, would produce a solution whose length is provably bounded by a multiple of the optimal length, and in doing so would create lower bounds for the problem; these lower bounds would then be used with branch-and-bound approaches. One method of doing this was to create a minimum spanning tree of the graph and then double all its edges, which produces the bound that the length of an optimal tour is at most twice the weight of a minimum spanning tree. In 1976, Christofides and Serdyukov (independently of each other) made a big advance in this direction: the Christofides-Serdyukov algorithm yields a solution that, in the worst case, is at most 1.5 times longer than the optimal solution. As the algorithm was simple and quick, many hoped it would give way to a near-optimal solution method. However, this hope for improvement did not immediately materialize, and Christofides-Serdyukov remained the method with the best worst-case scenario until 2011, when a (very) slightly improved approximation algorithm was developed for the subset of "graphical" TSPs. In 2020 this tiny improvement was extended to the full (metric) TSP. Richard M. Karp showed in 1972 that the Hamiltonian cycle problem was NP-complete, which implies the NP-hardness of TSP. This supplied a mathematical explanation for the apparent computational difficulty of finding optimal tours. Great progress was made in the late 1970s and 1980, when Grötschel, Padberg, Rinaldi and others managed to exactly solve instances with up to 2,392 cities, using cutting planes and branch-and-bound. 
In the 1990s, Applegate, Bixby, Chvátal, and Cook developed the program Concorde that has been used in many recent record solutions. Gerhard Reinelt published the TSPLIB in 1991, a collection of benchmark instances of varying difficulty, which has been used by many research groups for comparing results. In 2006, Cook and others computed an optimal tour through an 85,900-city instance given by a microchip layout problem, currently the largest solved TSPLIB instance. For many other instances with millions of cities, solutions can be found that are guaranteed to be within 2–3% of an optimal tour. Description As a graph problem TSP can be modeled as an undirected weighted graph, such that cities are the graph's vertices, paths are the graph's edges, and a path's distance is the edge's weight. It is a minimization problem starting and finishing at a specified vertex after having visited each other vertex exactly once. Often, the model is a complete graph (i.e., each pair of vertices is connected by an edge). If no path exists between two cities, then adding a sufficiently long edge will complete the graph without affecting the optimal tour. Asymmetric and symmetric In the symmetric TSP, the distance between two cities is the same in each opposite direction, forming an undirected graph. This symmetry halves the number of possible solutions. In the asymmetric TSP, paths may not exist in both directions or the distances might be different, forming a directed graph. Traffic congestion, one-way streets, and airfares for cities with different departure and arrival fees are real-world considerations that could yield a TSP problem in asymmetric form. Related problems An equivalent formulation in terms of graph theory is: Given a complete weighted graph (where the vertices would represent the cities, the edges would represent the roads, and the weights would be the cost or distance of that road), find a Hamiltonian cycle with the least weight. This is more general than the Hamiltonian path problem, which only asks if a Hamiltonian path (or cycle) exists in a non-complete unweighted graph. The requirement of returning to the starting city does not change the computational complexity of the problem; see Hamiltonian path problem. Another related problem is the bottleneck travelling salesman problem: Find a Hamiltonian cycle in a weighted graph with the minimal weight of the weightiest edge. A real-world example is avoiding narrow streets with big buses. The problem is of considerable practical importance, apart from evident transportation and logistics areas. A classic example is in printed circuit manufacturing: scheduling of a route of the drill machine to drill holes in a PCB. In robotic machining or drilling applications, the "cities" are parts to machine or holes (of different sizes) to drill, and the "cost of travel" includes time for retooling the robot (single-machine job sequencing problem). The generalized travelling salesman problem, also known as the "travelling politician problem", deals with "states" that have (one or more) "cities", and the salesman must visit exactly one city from each state. One application is encountered in ordering a solution to the cutting stock problem in order to minimize knife changes. Another is concerned with drilling in semiconductor manufacturing; see e.g., . Noon and Bean demonstrated that the generalized travelling salesman problem can be transformed into a standard TSP with the same number of cities, but a modified distance matrix. 
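Returning to the graph model described above: an instance is just a weighted distance matrix over the cities, and a tour is a cyclic ordering of them. The following minimal sketch (an illustration of the model and of the brute-force approach discussed later, not a scalable method; the instance values are made up) enumerates every tour of a tiny symmetric instance.

```python
from itertools import permutations

# A tiny symmetric TSP instance: dist[i][j] is the cost of the edge between cities i and j.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)

def tour_length(tour):
    """Length of a closed tour given as an ordering of all cities."""
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

# Fix city 0 as the start and try every ordering of the remaining cities.
best = min(((0,) + p for p in permutations(range(1, n))), key=tour_length)
print(best, tour_length(best))   # optimal tour and its length (18 here)
```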
The sequential ordering problem deals with the problem of visiting a set of cities, where precedence relations between the cities exist. A common interview question at Google is how to route data among data processing nodes; routes vary by time to transfer the data, but nodes also differ by their computing power and storage, compounding the problem of where to send data. The travelling purchaser problem deals with a purchaser who is charged with purchasing a set of products. He can purchase these products in several cities, but at different prices, and not all cities offer the same products. The objective is to find a route between a subset of the cities that minimizes total cost (travel cost + purchasing cost) and enables the purchase of all required products. Integer linear programming formulations The TSP can be formulated as an integer linear program. Several formulations are known. Two notable formulations are the Miller–Tucker–Zemlin (MTZ) formulation and the Dantzig–Fulkerson–Johnson (DFJ) formulation. The DFJ formulation is stronger, though the MTZ formulation is still useful in certain settings. Common to both these formulations is that one labels the cities with the numbers $1, \dots, n$ and takes $c_{ij} > 0$ to be the cost (distance) from city $i$ to city $j$. The main variables in the formulations are the $x_{ij}$, where $x_{ij} = 1$ if the tour goes directly from city $i$ to city $j$ and $x_{ij} = 0$ otherwise. It is because these are 0/1 variables that the formulations become integer programs; all other constraints are purely linear. In particular, the objective in the program is to minimize the tour length $\sum_{i=1}^{n} \sum_{j \ne i} c_{ij} x_{ij}$. Without further constraints, the $x_{ij}$ will effectively range over all subsets of the set of edges, which is very far from the sets of edges in a tour, and allows for a trivial minimum where all $x_{ij} = 0$. Therefore, both formulations also have the constraints that, at each vertex, there is exactly one incoming edge and one outgoing edge, which may be expressed as the linear equations $\sum_{i=1, i \ne j}^{n} x_{ij} = 1$ for $j = 1, \dots, n$ and $\sum_{j=1, j \ne i}^{n} x_{ij} = 1$ for $i = 1, \dots, n$. These ensure that the chosen set of edges locally looks like that of a tour, but still allow for solutions violating the global requirement that there is one tour which visits all vertices, as the edges chosen could make up several tours, each visiting only a subset of the vertices; arguably, it is this global requirement that makes TSP a hard problem. The MTZ and DFJ formulations differ in how they express this final requirement as linear constraints. Miller–Tucker–Zemlin formulation In addition to the $x_{ij}$ variables as above, there is for each $i = 2, \dots, n$ a dummy variable $u_i$ that keeps track of the order in which the cities are visited, counting from city $1$; the interpretation is that $u_i < u_j$ implies city $i$ is visited before city $j$. For a given tour (as encoded into values of the $x_{ij}$ variables), one may find satisfying values for the $u_i$ variables by making $u_i$ equal to the number of edges along that tour, when going from city $1$ to city $i$. Because linear programming favors non-strict inequalities ($\ge$) over strict ($>$), we would like to impose constraints to the effect that $u_j \ge u_i + 1$ if $x_{ij} = 1$. Merely requiring $u_j \ge u_i + x_{ij}$ would not achieve that, because this also requires $u_j \ge u_i$ when $x_{ij} = 0$, which is not correct. Instead MTZ use the linear constraints $u_i - u_j + 1 \le (n - 1)(1 - x_{ij})$ for all distinct $i, j \in \{2, \dots, n\}$, where the constant term $n - 1$ provides sufficient slack that $x_{ij} = 0$ does not impose a relation between $u_j$ and $u_i$. The way that the $u_i$ variables then enforce that a single tour visits all cities is that they increase by at least $1$ for each step along a tour, with a decrease only allowed where the tour passes through city $1$. That constraint would be violated by every tour which does not pass through city $1$, so the only way to satisfy it is that the tour passing city $1$ also passes through all other cities.
The MTZ formulation of TSP is thus the following integer linear programming problem: minimize $\sum_{i=1}^{n} \sum_{j \ne i} c_{ij} x_{ij}$ subject to $x_{ij} \in \{0, 1\}$ for $i, j = 1, \dots, n$; $u_i \in \mathbb{Z}$ with $2 \le u_i \le n$ for $i = 2, \dots, n$; $\sum_{i=1, i \ne j}^{n} x_{ij} = 1$ for $j = 1, \dots, n$; $\sum_{j=1, j \ne i}^{n} x_{ij} = 1$ for $i = 1, \dots, n$; and $u_i - u_j + 1 \le (n - 1)(1 - x_{ij})$ for all distinct $i, j \in \{2, \dots, n\}$. The first set of equalities requires that each city is arrived at from exactly one other city, and the second set of equalities requires that from each city there is a departure to exactly one other city. The last constraint enforces that there is only a single tour covering all cities, and not two or more disjointed tours that only collectively cover all cities. Dantzig–Fulkerson–Johnson formulation Label the cities with the numbers 1, ..., n and define: $x_{ij} = 1$ if the tour visits city $j$ immediately after city $i$, and $x_{ij} = 0$ otherwise. Take $c_{ij}$ to be the distance from city i to city j. Then TSP can be written as the following integer linear programming problem: minimize $\sum_{i=1}^{n} \sum_{j \ne i} c_{ij} x_{ij}$ subject to $x_{ij} \in \{0, 1\}$; $\sum_{i=1, i \ne j}^{n} x_{ij} = 1$ for $j = 1, \dots, n$; $\sum_{j=1, j \ne i}^{n} x_{ij} = 1$ for $i = 1, \dots, n$; and $\sum_{i \in Q} \sum_{j \in Q, j \ne i} x_{ij} \le |Q| - 1$ for every proper subset $Q \subsetneq \{1, \dots, n\}$ with $|Q| \ge 2$. The last constraint of the DFJ formulation—called a subtour elimination constraint—ensures that no proper subset Q can form a sub-tour, so the solution returned is a single tour and not the union of smaller tours. Intuitively, for each proper subset Q of the cities, the constraint requires that there be fewer edges than cities in Q: if there were to be as many edges in Q as cities in Q, that would represent a subtour of the cities of Q. Because this leads to an exponential number of possible constraints, in practice it is solved with row generation. Computing a solution The traditional lines of attack for the NP-hard problems are the following: Devising exact algorithms, which work reasonably fast only for small problem sizes. Devising "suboptimal" or heuristic algorithms, i.e., algorithms that deliver approximated solutions in a reasonable time. Finding special cases for the problem ("subproblems") for which either better or exact heuristics are possible. Exact algorithms The most direct solution would be to try all permutations (ordered combinations) and see which one is cheapest (using brute-force search). The running time for this approach lies within a polynomial factor of $O(n!)$, the factorial of the number of cities, so this solution becomes impractical even for only 20 cities. One of the earliest applications of dynamic programming is the Held–Karp algorithm, which solves the problem in time $O(n^2 2^n)$. This bound has also been reached by Exclusion-Inclusion in an attempt preceding the dynamic programming approach. Improving these time bounds seems to be difficult. For example, it has not been determined whether a classical exact algorithm for TSP that runs in time $O(c^n)$ for some constant $c < 2$ exists. The currently best quantum exact algorithm for TSP due to Ambainis et al. runs in time $O(1.728^n)$. Other approaches include: Various branch-and-bound algorithms, which can be used to process TSPs containing thousands of cities. Progressive improvement algorithms, which use techniques reminiscent of linear programming. This works well for up to 200 cities. Implementations of branch-and-bound and problem-specific cut generation (branch-and-cut); this is the method of choice for solving large instances. This approach holds the current record, solving an instance with 85,900 cities, see . An exact solution for 15,112 German towns from TSPLIB was found in 2001 using the cutting-plane method proposed by George Dantzig, Ray Fulkerson, and Selmer M. Johnson in 1954, based on linear programming. The computations were performed on a network of 110 processors located at Rice University and Princeton University. The total computation time was equivalent to 22.6 years on a single 500 MHz Alpha processor.
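Before continuing with the record computations, here is a compact sketch of the Held–Karp dynamic programme mentioned above (an illustration under the assumption of a small symmetric distance matrix; it runs in O(n² 2ⁿ) time and is therefore only practical for small n).

```python
from functools import lru_cache

def held_karp(dist):
    """Exact TSP tour length by Held–Karp dynamic programming, O(n^2 * 2^n)."""
    n = len(dist)

    @lru_cache(maxsize=None)
    def cost(visited, last):
        # Cheapest path that starts at city 0, visits exactly the cities in the
        # bitmask `visited` (which always contains city 0), and ends at `last`.
        if visited == (1 << last) | 1:
            return dist[0][last]
        prev_mask = visited & ~(1 << last)
        return min(cost(prev_mask, k) + dist[k][last]
                   for k in range(1, n) if prev_mask & (1 << k))

    full = (1 << n) - 1
    return min(cost(full, last) + dist[last][0] for last in range(1, n))

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(held_karp(dist))   # 18 for this tiny instance
```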
In May 2004, the travelling salesman problem of visiting all 24,978 towns in Sweden was solved: a tour of length approximately 72,500 kilometres was found, and it was proven that no shorter tour exists. In March 2005, the travelling salesman problem of visiting all 33,810 points in a circuit board was solved using Concorde TSP Solver: a tour of length 66,048,945 units was found, and it was proven that no shorter tour exists. The computation took approximately 15.7 CPU-years (Cook et al. 2006). In April 2006 an instance with 85,900 points was solved using Concorde TSP Solver, taking over 136 CPU-years; see . Heuristic and approximation algorithms Various heuristics and approximation algorithms, which quickly yield good solutions, have been devised. These include the multi-fragment algorithm. Modern methods can find solutions for extremely large problems (millions of cities) within a reasonable time which are, with a high probability, just 2–3% away from the optimal solution. Several categories of heuristics are recognized. Constructive heuristics The nearest neighbour (NN) algorithm (a greedy algorithm) lets the salesman choose the nearest unvisited city as his next move. This algorithm quickly yields an effectively short route. For N cities randomly distributed on a plane, the algorithm on average yields a path 25% longer than the shortest possible path; however, there exist many specially-arranged city distributions which make the NN algorithm give the worst route. This is true for both asymmetric and symmetric TSPs. Rosenkrantz et al. showed that the NN algorithm has the approximation factor for instances satisfying the triangle inequality. A variation of the NN algorithm, called nearest fragment (NF) operator, which connects a group (fragment) of nearest unvisited cities, can find shorter routes with successive iterations. The NF operator can also be applied on an initial solution obtained by the NN algorithm for further improvement in an elitist model, where only better solutions are accepted. The bitonic tour of a set of points is the minimum-perimeter monotone polygon that has the points as its vertices; it can be computed efficiently with dynamic programming. Another constructive heuristic, Match Twice and Stitch (MTS), performs two sequential matchings, where the second matching is executed after deleting all the edges of the first matching, to yield a set of cycles. The cycles are then stitched to produce the final tour. The Algorithm of Christofides and Serdyukov The algorithm of Christofides and Serdyukov follows a similar outline but combines the minimum spanning tree with a solution of another problem, minimum-weight perfect matching. This gives a TSP tour which is at most 1.5 times the optimal. It was one of the first approximation algorithms, and was in part responsible for drawing attention to approximation algorithms as a practical approach to intractable problems. As a matter of fact, the term "algorithm" was not commonly extended to approximation algorithms until later; the Christofides algorithm was initially referred to as the Christofides heuristic. This algorithm looks at things differently by using a result from graph theory which helps improve on the lower bound of the TSP which originated from doubling the cost of the minimum spanning tree. 
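Before turning to the details of the Christofides–Serdyukov algorithm, here is a minimal sketch of the nearest-neighbour heuristic described above (illustrative only; as noted, its tours can be far from optimal on adversarial instances).

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy nearest-neighbour heuristic: always move to the closest unvisited city."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        tour.append(min(unvisited, key=lambda j: dist[last][j]))
        unvisited.remove(tour[-1])
    return tour

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
tour = nearest_neighbour_tour(dist)
length = sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))
print(tour, length)   # [0, 1, 3, 2], 18 — it happens to be optimal on this tiny instance
```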
Given an Eulerian graph, we can find an Eulerian tour in time, so if we had an Eulerian graph with cities from a TSP as vertices, then we can easily see that we could use such a method for finding an Eulerian tour to find a TSP solution. By the triangle inequality, we know that the TSP tour can be no longer than the Eulerian tour, and we therefore have a lower bound for the TSP. Such a method is described below. Find a minimum spanning tree for the problem. Create duplicates for every edge to create an Eulerian graph. Find an Eulerian tour for this graph. Convert to TSP: if a city is visited twice, then create a shortcut from the city before this in the tour to the one after this. To improve the lower bound, a better way of creating an Eulerian graph is needed. By the triangle inequality, the best Eulerian graph must have the same cost as the best travelling salesman tour; hence, finding optimal Eulerian graphs is at least as hard as TSP. One way of doing this is by minimum weight matching using algorithms with a complexity of . Making a graph into an Eulerian graph starts with the minimum spanning tree; all the vertices of odd order must then be made even, so a matching for the odd-degree vertices must be added, which increases the order of every odd-degree vertex by 1. This leaves us with a graph where every vertex is of even order, which is thus Eulerian. Adapting the above method gives the algorithm of Christofides and Serdyukov: Find a minimum spanning tree for the problem. Create a matching for the problem with the set of cities of odd order. Find an Eulerian tour for this graph. Convert to TSP using shortcuts. Pairwise exchange The pairwise exchange or 2-opt technique involves iteratively removing two edges and replacing them with two different edges that reconnect the fragments created by edge removal into a new and shorter tour. Similarly, the 3-opt technique removes 3 edges and reconnects them to form a shorter tour. These are special cases of the k-opt method. The label Lin–Kernighan is an often heard misnomer for 2-opt; Lin–Kernighan is actually the more general k-opt method. For Euclidean instances, 2-opt heuristics give on average solutions that are about 5% better than those yielded by Christofides' algorithm. If we start with an initial solution made with a greedy algorithm, then the average number of moves greatly decreases again and is ; however, for random starts, the average number of moves is . While this is a small increase in size, the initial number of moves for small problems is 10 times as big for a random start compared to one made from a greedy heuristic. This is because such 2-opt heuristics exploit 'bad' parts of a solution such as crossings. These types of heuristics are often used within vehicle routing problem heuristics to re-optimize route solutions. k-opt heuristic, or Lin–Kernighan heuristics The Lin–Kernighan heuristic is a special case of the V-opt or variable-opt technique. It involves the following steps: Given a tour, delete k mutually disjoint edges. Reassemble the remaining fragments into a tour, leaving no disjoint subtours (that is, do not connect a fragment's endpoints together). This in effect simplifies the TSP under consideration into a much simpler problem. Each fragment endpoint can be connected to other possibilities: of 2k total fragment endpoints available, the two endpoints of the fragment under consideration are disallowed. 
Such a constrained 2k-city TSP can then be solved with brute-force methods to find the least-cost recombination of the original fragments. The most popular of the k-opt methods are 3-opt, as introduced by Shen Lin of Bell Labs in 1965. A special case of 3-opt is where the edges are not disjoint (two of the edges are adjacent to one another). In practice, it is often possible to achieve substantial improvement over 2-opt without the combinatorial cost of the general 3-opt by restricting the 3-changes to this special subset where two of the removed edges are adjacent. This so-called two-and-a-half-opt typically falls roughly midway between 2-opt and 3-opt, both in terms of the quality of tours achieved and the time required to achieve those tours. V-opt heuristic The variable-opt method is related to, and a generalization of, the k-opt method. Whereas the k-opt methods remove a fixed number (k) of edges from the original tour, the variable-opt methods do not fix the size of the edge set to remove. Instead, they grow the set as the search process continues. The best-known method in this family is the Lin–Kernighan method (mentioned above as a misnomer for 2-opt). Shen Lin and Brian Kernighan first published their method in 1972, and it was the most reliable heuristic for solving travelling salesman problems for nearly two decades. More advanced variable-opt methods were developed at Bell Labs in the late 1980s by David Johnson and his research team. These methods (sometimes called Lin–Kernighan–Johnson) build on the Lin–Kernighan method, adding ideas from tabu search and evolutionary computing. The basic Lin–Kernighan technique gives results that are guaranteed to be at least 3-opt. The Lin–Kernighan–Johnson methods compute a Lin–Kernighan tour, and then perturb the tour by what has been described as a mutation that removes at least four edges and reconnects the tour in a different way, then V-opting the new tour. The mutation is often enough to move the tour from the local minimum identified by Lin–Kernighan. V-opt methods are widely considered the most powerful heuristics for the problem, and are able to address special cases, such as the Hamilton Cycle Problem and other non-metric TSPs that other heuristics fail on. For many years, Lin–Kernighan–Johnson had identified optimal solutions for all TSPs where an optimal solution was known and had identified the best-known solutions for all other TSPs on which the method had been tried. Randomized improvement Optimized Markov chain algorithms which use local searching heuristic sub-algorithms can find a route extremely close to the optimal route for 700 to 800 cities. TSP is a touchstone for many general heuristics devised for combinatorial optimization such as genetic algorithms, simulated annealing, tabu search, ant colony optimization, river formation dynamics (see swarm intelligence), and the cross entropy method. Constricting Insertion Heuristic This starts with a sub-tour such as the convex hull and then inserts other vertices. Ant colony optimization Artificial intelligence researcher Marco Dorigo described in 1993 a method of heuristically generating "good solutions" to the TSP using a simulation of an ant colony called ACS (ant colony system). It models behavior observed in real ants to find short paths between food sources and their nest, an emergent behavior resulting from each ant's preference to follow trail pheromones deposited by other ants. 
ACS sends out a large number of virtual ant agents to explore many possible routes on the map. Each ant probabilistically chooses the next city to visit based on a heuristic combining the distance to the city and the amount of virtual pheromone deposited on the edge to the city. The ants explore, depositing pheromone on each edge that they cross, until they have all completed a tour. At this point the ant which completed the shortest tour deposits virtual pheromone along its complete tour route (global trail updating). The amount of pheromone deposited is inversely proportional to the tour length: the shorter the tour, the more it deposits. Special cases Metric In the metric TSP, also known as delta-TSP or Δ-TSP, the intercity distances satisfy the triangle inequality. A very natural restriction of the TSP is to require that the distances between cities form a metric to satisfy the triangle inequality; that is, the direct connection from A to B is never farther than the route via intermediate C: . The edges then build a metric on the set of vertices. When the cities are viewed as points in the plane, many natural distance functions are metrics, and so many natural instances of TSP satisfy this constraint. The following are some examples of metric TSPs for various metrics. In the Euclidean TSP (see below), the distance between two cities is the Euclidean distance between the corresponding points. In the rectilinear TSP, the distance between two cities is the sum of the absolute values of the differences of their x- and y-coordinates. This metric is often called the Manhattan distance or city-block metric. In the maximum metric, the distance between two points is the maximum of the absolute values of differences of their x- and y-coordinates. The last two metrics appear, for example, in routing a machine that drills a given set of holes in a printed circuit board. The Manhattan metric corresponds to a machine that adjusts first one coordinate, and then the other, so the time to move to a new point is the sum of both movements. The maximum metric corresponds to a machine that adjusts both coordinates simultaneously, so the time to move to a new point is the slower of the two movements. In its definition, the TSP does not allow cities to be visited twice, but many applications do not need this constraint. In such cases, a symmetric, non-metric instance can be reduced to a metric one. This replaces the original graph with a complete graph in which the inter-city distance is replaced by the shortest path length between A and B in the original graph. Euclidean For points in the Euclidean plane, the optimal solution to the travelling salesman problem forms a simple polygon through all of the points, a polygonalization of the points. Any non-optimal solution with crossings can be made into a shorter solution without crossings by local optimizations. The Euclidean distance obeys the triangle inequality, so the Euclidean TSP forms a special case of metric TSP. However, even when the input points have integer coordinates, their distances generally take the form of square roots, and the length of a tour is a sum of radicals, making it difficult to perform the symbolic computation needed to perform exact comparisons of the lengths of different tours. Like the general TSP, the exact Euclidean TSP is NP-hard, but the issue with sums of radicals is an obstacle to proving that its decision version is in NP, and therefore NP-complete. 
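The three metrics described above (Euclidean, rectilinear/Manhattan, and the maximum metric) can be compared directly; the sketch below is an illustration of how the corresponding distance functions might be written when building a distance matrix for, say, a drilling problem (the sample points are arbitrary).

```python
import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan(p, q):            # rectilinear / city-block metric
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chebyshev(p, q):            # maximum metric
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

a, b = (0.0, 0.0), (3.0, 4.0)
print(euclidean(a, b), manhattan(a, b), chebyshev(a, b))   # 5.0 7.0 4.0
```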
A discretized version of the problem with distances rounded to integers is NP-complete. With rational coordinates and the actual Euclidean metric, Euclidean TSP is known to be in the Counting Hierarchy, a subclass of PSPACE. With arbitrary real coordinates, Euclidean TSP cannot be in such classes, since there are uncountably many possible inputs. Despite these complications, Euclidean TSP is much easier than the general metric case for approximation. For example, the minimum spanning tree of the graph associated with an instance of the Euclidean TSP is a Euclidean minimum spanning tree, and so can be computed in expected O(n log n) time for n points (considerably less than the number of edges). This enables the simple 2-approximation algorithm for TSP with triangle inequality above to operate more quickly. In general, for any c > 0, where d is the number of dimensions in the Euclidean space, there is a polynomial-time algorithm that finds a tour of length at most (1 + 1/c) times the optimal for geometric instances of TSP in time; this is called a polynomial-time approximation scheme (PTAS). Sanjeev Arora and Joseph S. B. Mitchell were awarded the Gödel Prize in 2010 for their concurrent discovery of a PTAS for the Euclidean TSP. In practice, simpler heuristics with weaker guarantees continue to be used. Asymmetric In most cases, the distance between two nodes in the TSP network is the same in both directions. The case where the distance from A to B is not equal to the distance from B to A is called asymmetric TSP. A practical application of an asymmetric TSP is route optimization using street-level routing (which is made asymmetric by one-way streets, slip-roads, motorways, etc.). The stacker crane problem can be viewed as a special case of the asymmetric TSP. In this problem, the input consists of ordered pairs of points in a metric space, which must be visited consecutively in order by the tour. These pairs of points can be viewed as the nodes of an asymmetric TSP, with asymmetric distances reflecting the combined cost of traveling from the first point of a pair to its second and then from the second point of a pair to the first point of the next pair. Conversion to symmetric Solving an asymmetric TSP graph can be somewhat complex. The following is a 3×3 matrix containing all possible path weights between the nodes A, B and C. One option is to turn an asymmetric matrix of size N into a symmetric matrix of size 2N. {| class="wikitable" |- style="text-align:center;" |+ Asymmetric path weights ! !! A !! B !! C |- style="text-align:center;" ! A | || 1 || 2 |- style="text-align:center;" ! B | 6 || || 3 |- style="text-align:center;" ! C | 5 || 4 || |} To double the size, each of the nodes in the graph is duplicated, creating a second ghost node, linked to the original node with a "ghost" edge of very low (possibly negative) weight, here denoted −w. (Alternatively, the ghost edges have weight 0, and weight w is added to all other edges.) The original 3×3 matrix shown above is visible in the bottom left and the transpose of the original in the top-right. Both copies of the matrix have had their diagonals replaced by the low-cost hop paths, represented by −w. In the new graph, no edge directly links original nodes and no edge directly links ghost nodes. {| class="wikitable" |- style="text-align:center;" class="wikitable" |+ Symmetric path weights ! !! A !! B !! C !! A′ !! B′ !! C′ |- style="text-align:center;" ! A | || || || −w || 6 || 5 |- style="text-align:center;" ! 
B | || || || 1 || −w || 4 |- style="text-align:center;" ! C | || || || 2 || 3 || −w |- style="text-align:center;" ! A′ | −w || 1 || 2 || || || |- style="text-align:center;" ! B′ | 6 || −w || 3 || || || |- style="text-align:center;" ! C′ | 5 || 4 || −w || || || |} The weight −w of the "ghost" edges linking the ghost nodes to the corresponding original nodes must be low enough to ensure that all ghost edges must belong to any optimal symmetric TSP solution on the new graph (w = 0 is not always low enough). As a consequence, in the optimal symmetric tour, each original node appears next to its ghost node (e.g. a possible path is ), and by merging the original and ghost nodes again we get an (optimal) solution of the original asymmetric problem (in our example, ). Analyst's problem There is an analogous problem in geometric measure theory which asks the following: under what conditions may a subset E of Euclidean space be contained in a rectifiable curve (that is, when is there a curve with finite length that visits every point in E)? This problem is known as the analyst's travelling salesman problem. Path length for random sets of points in a square Suppose are independent random variables with uniform distribution in the square , and let be the shortest path length (i.e. TSP solution) for this set of points, according to the usual Euclidean distance. It is known that, almost surely, where is a positive constant that is not known explicitly. Since (see below), it follows from bounded convergence theorem that , hence lower and upper bounds on follow from bounds on . The almost-sure limit as may not exist if the independent locations are replaced with observations from a stationary ergodic process with uniform marginals. Upper bound One has , and therefore , by using a naïve path which visits monotonically the points inside each of slices of width in the square. Few proved , hence , later improved by Karloff (1987): . Fietcher empirically suggested an upper bound of . Lower bound By observing that is greater than times the distance between and the closest point , one gets (after a short computation) A better lower bound is obtained by observing that is greater than times the sum of the distances between and the closest and second closest points , which gives The currently-best lower bound is Held and Karp gave a polynomial-time algorithm that provides numerical lower bounds for , and thus for , which seem to be good up to more or less 1%. In particular, David S. Johnson obtained a lower bound by computer experiment: where 0.522 comes from the points near the square boundary which have fewer neighbours, and Christine L. Valenzuela and Antonia J. Jones obtained the following other numerical lower bound: . Computational complexity The problem has been shown to be NP-hard (more precisely, it is complete for the complexity class FPNP; see function problem), and the decision problem version ("given the costs and a number x, decide whether there is a round-trip route cheaper than x") is NP-complete. The bottleneck travelling salesman problem is also NP-hard. The problem remains NP-hard even for the case when the cities are in the plane with Euclidean distances, as well as in a number of other restrictive cases. Removing the condition of visiting each city "only once" does not remove the NP-hardness, since in the planar case there is an optimal tour that visits each city only once (otherwise, by the triangle inequality, a shortcut that skips a repeated visit would not increase the tour length). 
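To make the ghost-node construction described above concrete, the following sketch (illustrative code, with an arbitrarily chosen penalty value rather than a carefully tuned w) builds the symmetric 2N×2N matrix from an N×N asymmetric cost matrix; links between two original nodes, or between two ghost nodes, are forbidden with infinite weights, and a solver that cannot handle infinities would need a large finite constant instead.

<syntaxhighlight lang="python">
import math

def asymmetric_to_symmetric(cost, w):
    """Turn an N x N asymmetric cost matrix into a 2N x 2N symmetric one.

    Node i is paired with a ghost node i + N; the ghost edge gets weight -w,
    and links between two original (or two ghost) nodes are forbidden (infinity).
    """
    n = len(cost)
    size = 2 * n
    sym = [[math.inf] * size for _ in range(size)]
    for i in range(n):
        sym[i][i + n] = sym[i + n][i] = -w      # ghost edge between i and its ghost
        for j in range(n):
            if i != j:
                sym[i + n][j] = cost[i][j]      # original weights (bottom-left block)
                sym[j][i + n] = cost[i][j]      # transpose of the original (top-right block)
    return sym

# The 3-city example from the tables above (diagonal left at 0 as a placeholder).
cost = [[0, 1, 2],
        [6, 0, 3],
        [5, 4, 0]]
sym = asymmetric_to_symmetric(cost, w=100)
</syntaxhighlight>

Provided w is chosen large enough that every ghost edge must appear in the optimal symmetric tour, merging each node with its ghost in that tour recovers a tour of the original asymmetric instance.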
Complexity of approximation In the general case, finding a shortest travelling salesman tour is NPO-complete. If the distance measure is a metric (and thus symmetric), the problem becomes APX-complete, and the algorithm of Christofides and Serdyukov approximates it within 1.5. If the distances are restricted to 1 and 2 (but still are a metric), then the approximation ratio becomes 8/7. In the asymmetric case with triangle inequality, in 2018, a constant factor approximation was developed by Svensson, Tarnawski, and Végh. An algorithm by Vera Traub and achieves a performance ratio of . The best known inapproximability bound is 75/74. The corresponding maximization problem of finding the longest travelling salesman tour is approximable within 63/38. If the distance function is symmetric, then the longest tour can be approximated within 4/3 by a deterministic algorithm and within by a randomized algorithm. Human and animal performance The TSP, in particular the Euclidean variant of the problem, has attracted the attention of researchers in cognitive psychology. It has been observed that humans are able to produce near-optimal solutions quickly, in a close-to-linear fashion, with performance that ranges from 1% less efficient, for graphs with 10–20 nodes, to 11% less efficient for graphs with 120 nodes. The apparent ease with which humans accurately generate near-optimal solutions to the problem has led researchers to hypothesize that humans use one or more heuristics, with the two most popular theories arguably being the convex-hull hypothesis and the crossing-avoidance heuristic. However, additional evidence suggests that human performance is quite varied, and individual differences as well as graph geometry appear to affect performance in the task. Nevertheless, results suggest that computer performance on the TSP may be improved by understanding and emulating the methods used by humans for these problems, and have also led to new insights into the mechanisms of human thought. The first issue of the Journal of Problem Solving was devoted to the topic of human performance on TSP, and a 2011 review listed dozens of papers on the subject. A 2011 study in animal cognition titled "Let the Pigeon Drive the Bus," named after the children's book Don't Let the Pigeon Drive the Bus!, examined spatial cognition in pigeons by studying their flight patterns between multiple feeders in a laboratory in relation to the travelling salesman problem. In the first experiment, pigeons were placed in the corner of a lab room and allowed to fly to nearby feeders containing peas. The researchers found that pigeons largely used proximity to determine which feeder they would select next. In the second experiment, the feeders were arranged in such a way that flying to the nearest feeder at every opportunity would be largely inefficient if the pigeons needed to visit every feeder. The results of the second experiment indicate that pigeons, while still favoring proximity-based solutions, "can plan several steps ahead along the route when the differences in travel costs between efficient and less efficient routes based on proximity become larger." These results are consistent with other experiments done with non-primates, which have proven that some non-primates were able to plan complex travel routes. This suggests non-primates may possess a relatively sophisticated spatial cognitive ability. 
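For comparison with the human heuristics discussed in this section, the sketch below implements the simple tree-doubling 2-approximation for metric TSP mentioned earlier; it is not the Christofides–Serdyukov algorithm (the matching step is omitted), and the coordinates are illustrative only.

<syntaxhighlight lang="python">
from math import hypot

def two_approx_tour(points):
    """Tree-doubling 2-approximation for metric TSP on points in the plane.

    Build a minimum spanning tree with Prim's algorithm, then visit the points
    in preorder; by the triangle inequality the shortcut tour is at most twice
    the length of the optimal tour.
    """
    n = len(points)
    dist = lambda i, j: hypot(points[i][0] - points[j][0], points[i][1] - points[j][1])

    # Prim's algorithm on the complete graph.
    parent = [0] * n
    best = [float("inf")] * n
    best[0] = 0.0
    in_tree = [False] * n
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u

    # Preorder walk of the tree; repeated vertices are shortcut automatically.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour  # the closing edge back to tour[0] completes the cycle

print(two_approx_tour([(0, 0), (2, 3), (5, 1), (6, 4), (1, 5)]))
</syntaxhighlight>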
Natural computation When presented with a spatial configuration of food sources, the amoeboid Physarum polycephalum adapts its morphology to create an efficient path between the food sources, which can also be viewed as an approximate solution to TSP. Benchmarks For benchmarking of TSP algorithms, TSPLIB is a library of sample instances of the TSP and related problems is maintained; see the TSPLIB external reference. Many of them are lists of actual cities and layouts of actual printed circuits. Popular culture Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the U.S. government to solve the most elusive problem in computer-science history: P vs. NP. Solutions to the problem are used by mathematician Robert A. Bosch in a subgenre called TSP art. See also Canadian traveller problem Exact algorithm Route inspection problem (also known as "Chinese postman problem") Set TSP problem Seven Bridges of Königsberg Steiner travelling salesman problem Subway Challenge Tube Challenge Vehicle routing problem Graph exploration Mixed Chinese postman problem Arc routing Snow plow routing problem Monge array Ring star problem Notes References . . . . . . . . . . . . . . . . . . . Further reading External links at University of Waterloo TSPLIB, Sample instances for the TSP at the University of Heidelberg Traveling Salesman Problem by Jon McLoone at the Wolfram Demonstrations Project TSP visualization tool NP-complete problems NP-hard problems Combinatorial optimization Graph algorithms Computational problems in graph theory Hamiltonian paths and cycles Metaphors referring to people
Travelling salesman problem
[ "Mathematics" ]
9,182
[ "Computational problems in graph theory", "NP-hard problems", "Computational mathematics", "Graph theory", "Computational problems", "Mathematical relations", "Mathematical problems", "NP-complete problems" ]
31,306
https://en.wikipedia.org/wiki/Tritium
Tritium () or hydrogen-3 (symbol T or ³H) is a rare and radioactive isotope of hydrogen with a half-life of ~12.3 years. The tritium nucleus (t, sometimes called a triton) contains one proton and two neutrons, whereas the nucleus of the common isotope hydrogen-1 (protium) contains one proton and no neutrons, and that of non-radioactive hydrogen-2 (deuterium) contains one proton and one neutron. Tritium is the heaviest particle-bound isotope of hydrogen. It is one of the few nuclides with a distinct name. The use of the name hydrogen-3, though more systematic, is much less common. Naturally occurring tritium is extremely rare on Earth. The atmosphere has only trace amounts, formed by the interaction of its gases with cosmic rays. It can be produced artificially by irradiation of lithium or lithium-bearing ceramic pebbles in a nuclear reactor and is a low-abundance byproduct in normal operations of nuclear reactors. Tritium is used as the energy source in radioluminescent lights for watches, night sights for firearms, numerous instruments and tools, and novelty items such as self-illuminating key chains. It is used in a medical and scientific setting as a radioactive tracer. Tritium is also used as a nuclear fusion fuel, along with more abundant deuterium, in tokamak reactors and in hydrogen bombs. Tritium has also been used commercially in betavoltaic devices such as NanoTritium batteries. History Tritium was first detected in 1934 by Ernest Rutherford, Mark Oliphant and Paul Harteck after bombarding deuterium with deuterons (deuterium nuclei). Deuterium is another isotope of hydrogen, which occurs naturally with an abundance of 0.015%. Their experiment could not isolate tritium, which was first accomplished in 1939 by Luis Alvarez and Robert Cornog, who also realized tritium's radioactivity. Willard Libby recognized in 1954 that tritium could be used for radiometric dating of water and wine. Decay The half-life of tritium is listed by the National Institute of Standards and Technology as () – an annualized rate of approximately 5.5% per year. Tritium decays into helium-3 by beta-minus decay as shown in this nuclear equation: {| border="0" |- style="height:2em;" | ³H ||→ || ³He ||+ || e⁻ ||+ || ν̄e |} releasing 18.6 keV of energy in the process. The electron's kinetic energy varies, with an average of 5.7 keV, while the remaining energy is carried off by the nearly undetectable electron antineutrino. Beta particles from tritium can penetrate only about of air, and they are incapable of passing through the dead outermost layer of human skin. Because of their low energy compared to other beta particles, the amount of bremsstrahlung generated is also lower. The unusually low energy released in the tritium beta decay makes the decay (along with that of rhenium-187) useful for absolute neutrino mass measurements in the laboratory. The low energy of tritium's radiation makes it difficult to detect tritium-labeled compounds except by using liquid scintillation counting. Production Lithium Tritium is most often produced in nuclear reactors by neutron activation of lithium-6. The release and diffusion of tritium and helium produced by the fission of lithium can take place within ceramics known as breeder ceramics. Production of tritium from lithium-6 in such breeder ceramics is possible with neutrons of any energy, though the cross section is higher when the incident neutrons have lower energy, reaching more than 900 barns for thermal neutrons. This is an exothermic reaction, yielding 4.8 MeV.
In comparison, fusion of deuterium with tritium releases about 17.6 MeV. For applications in proposed fusion energy reactors, such as ITER, pebbles consisting of lithium-bearing ceramics, including Li₂TiO₃ and Li₄SiO₄, are being developed for tritium breeding within a helium-cooled pebble bed, also known as a breeder blanket. ⁶Li + n → ⁴He (2.05 MeV) + ³H (2.75 MeV) High-energy neutrons can also produce tritium from lithium-7 in an endothermic reaction, consuming 2.466 MeV. This was discovered when the 1954 Castle Bravo nuclear test produced an unexpectedly high yield. Prior to this test, it was incorrectly assumed that ⁷Li would absorb a neutron to become ⁸Li, which would beta-decay to ⁸Be, which in turn would decay to two ⁴He nuclei on a total timeframe much longer than the duration of the explosion. ⁷Li + n → ⁴He + ³H + n Boron High-energy neutrons irradiating boron-10 also occasionally produce tritium: ¹⁰B + n → 2 ⁴He + ³H A more common result of boron-10 neutron capture is ⁷Li and a single alpha particle. Especially in pressurized water reactors, which only partially thermalize neutrons, the interaction between relatively fast neutrons and the boric acid added as a chemical shim produces small but non-negligible quantities of tritium. Deuterium Tritium is also produced in heavy water-moderated reactors whenever a deuterium nucleus captures a neutron. This reaction has a small absorption cross section, making heavy water a good neutron moderator, and relatively little tritium is produced. Even so, cleaning tritium from the moderator may be desirable after several years to reduce the risk of its escaping to the environment. Ontario Power Generation's "Tritium Removal Facility" is capable of processing up to of heavy water a year, and it separates out about of tritium, making it available for other uses. CANDU reactors typically produce of tritium per year, which is recovered at the Darlington Tritium Recovery Facility (DTRF) attached to the 3,512 MW Darlington Nuclear Generating Station in Ontario. The total production at DTRF between 1989 and 2011 was – with an activity of : an average of about per year. Deuterium's absorption cross section for thermal neutrons is about 0.52 millibarn, whereas that of oxygen-16 (¹⁶O) is about 0.19 millibarn and that of oxygen-17 (¹⁷O) is about 240 millibarns. While ¹⁶O is by far the most common isotope of oxygen in both natural oxygen and heavy water, depending on the method of isotope separation, heavy water may be slightly richer in ¹⁷O and ¹⁸O. Due to both neutron capture and (n,α) reactions (the latter of which produce ¹⁴C, an undesirable long-lived beta emitter, from ¹⁷O) they are net "neutron consumers" and are thus undesirable in a moderator of a natural uranium reactor, which needs to keep neutron absorption outside the fuel as low as feasible. Some facilities that remove tritium also remove (or at least reduce the content of) ¹⁷O and ¹⁸O, which can – at least in principle – be used for isotope labeling. India, which also has a large fleet of pressurized heavy water reactors (initially CANDU technology but since indigenized and further developed into IPHWR technology), also removes at least some of the tritium produced in the moderator/coolant of its reactors, but due to the dual-use nature of tritium and the Indian nuclear bomb program, less information about this is publicly available than for Canada. Fission Tritium is an uncommon product of the nuclear fission of uranium-235, plutonium-239, and uranium-233, with a production of about one atom per 10,000 fissions.
The main pathways of tritium production include ternary fission. The release or recovery of tritium needs to be considered in the operation of nuclear reactors, especially in the reprocessing of nuclear fuel and storage of spent nuclear fuel. The production of tritium is not a goal, but a side-effect. It is discharged to the atmosphere in small quantities by some nuclear power plants. Voloxidation is an optional additional step in nuclear reprocessing that removes volatile fission products (such as all isotopes of hydrogen) before an aqueous process begins. This would in principle enable economic recovery of the produced tritium; even if the tritium is only disposed of and not used, removing it reduces tritium contamination of the process water, and thus the radioactivity released when that water is discharged, since tritiated water cannot be removed from "ordinary" water except by isotope separation. Given the specific activity of tritium at , one TBq is equivalent to roughly . Fukushima Daiichi In June 2016 the Tritiated Water Task Force released a report on the status of tritium in tritiated water at Fukushima Daiichi nuclear plant, as part of considering options for final disposal of the stored contaminated cooling water. This identified that the March 2016 holding of tritium on-site was 760 TBq (equivalent to 2.1 g of tritium or 14 mL of pure tritiated water) in a total of 860,000 m³ of stored water. This report also identified the reducing concentration of tritium in the water extracted from the buildings etc. for storage, seeing a factor of ten decrease over the five years considered (2011–2016), from 3.3 MBq/L to 0.3 MBq/L (after correction for the 5% annual decay of tritium). According to a report by an expert panel considering the best approach to dealing with this issue, "Tritium could be separated theoretically, but there is no practical separation technology on an industrial scale. Accordingly, a controlled environmental release is said to be the best way to treat low-tritium-concentration water." After a public information campaign sponsored by the Japanese government, the gradual release into the sea of the tritiated water began on 24 August 2023 and is the first of four releases through March 2024. The entire process will take "decades" to complete. China reacted with protest. The IAEA has endorsed the plan. The water released is diluted to reduce the tritium concentration to less than 1500 Bq/L, far below the limit recommended in drinking water by the WHO. Helium-3 Tritium's decay product helium-3 has a very large cross section (5330 barns) for reacting with thermal neutrons, expelling a proton; hence, it is rapidly converted back to tritium in nuclear reactors. ³He + n → ³H + ¹H Cosmic rays Tritium occurs naturally due to cosmic rays interacting with atmospheric gases. In the most important reaction for natural production, a fast neutron (which must have energy greater than 4.0 MeV) interacts with atmospheric nitrogen: ¹⁴N + n → ¹²C + ³H Worldwide, the production of tritium from natural sources is 148 petabecquerels per year. The global equilibrium inventory of tritium created by natural sources remains approximately constant at 2,590 petabecquerels. This is due to a fixed production rate, and losses proportional to the inventory. Production history USA Tritium for American nuclear weapons was produced in special heavy water reactors at the Savannah River Site until their closures in 1988.
With the Strategic Arms Reduction Treaty (START) after the end of the Cold War, the existing supplies were sufficient for the new, smaller number of nuclear weapons for some time. of tritium was produced in the United States from 1955 to 1996. Since it continually decays into helium-3, the total amount remaining was about at the time of the report, and about as of 2023. Tritium production was resumed with irradiation of rods containing lithium (replacing the usual control rods containing boron, cadmium, or hafnium), at the reactors of the commercial Watts Bar Nuclear Plant from 2003 to 2005 followed by extraction of tritium from the rods at the Tritium Extraction Facility at the Savannah River Site beginning in November 2006. Tritium leakage from the rods during reactor operations limits the number that can be used in any reactor without exceeding the maximum allowed tritium levels in the coolant. Properties Tritium has an atomic mass of . Diatomic tritium ( or ) is a gas at standard temperature and pressure. Combined with oxygen, it forms tritiated water (). Compared to hydrogen in its natural composition on Earth, tritium has a higher melting point (20.62 K vs. 13.99 K), a higher boiling point (25.04 K vs. 20.27 K), a higher critical temperature (40.59 K vs. 32.94 K) and a higher critical pressure (1.8317 MPa vs. 1.2858 MPa). Tritium's specific activity is . Tritium figures prominently in studies of nuclear fusion due to its favorable reaction cross section and the large amount of energy (17.6 MeV) produced through its reaction with deuterium: + → + n All atomic nuclei contain protons as their only charged particles. They therefore repel one another because like charges repel (Coulomb's law). However, if the atoms have a high enough temperature and pressure (for example, in the core of the Sun), then their random motions can overcome such repulsion, and they can come close enough for the strong nuclear force to take effect, fusing them into heavier atoms. A tritium nucleus (triton), containing one proton and two neutrons, has the same charge as any hydrogen nucleus, and it experiences the same electrostatic repulsion when close to another nucleus. However, the neutrons in the triton increase the attractive strong nuclear force when close enough to another nucleus. As a result, tritium can fuse more easily with other light atoms, than ordinary hydrogen can. The same is true, albeit to a lesser extent, of deuterium. This is why brown dwarfs ("failed" stars) cannot fuse normal hydrogen, but they do fuse a small minority of deuterium nuclei. Like the other isotopes of hydrogen, tritium is difficult to confine. Rubber, plastic, and some kinds of steel are all somewhat permeable. This has raised concerns that if tritium were used in large quantities, in particular for fusion reactors, it might contribute to radioactive contamination, though its short half-life should prevent significant long-term accumulation in the atmosphere. The high levels of atmospheric nuclear weapons testing that took place prior to the enactment of the Partial Nuclear Test Ban Treaty proved to be unexpectedly useful to oceanographers. The high levels of tritium oxide introduced into upper layers of the oceans have been used in the years since then to measure the rate of mixing of the upper layers of the oceans with their lower levels. 
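As a quick check on the decay arithmetic used in this section (a half-life of about 12.3 years corresponds to losing roughly 5.5% of an inventory each year), the following sketch uses illustrative numbers only, not sourced stockpile figures, to compute how much of an initial amount of tritium remains after a given time with no new production.

<syntaxhighlight lang="python">
from math import exp, log

HALF_LIFE_YEARS = 12.32                  # approximate half-life of tritium
DECAY_CONST = log(2) / HALF_LIFE_YEARS   # decay constant per year

def remaining(initial_grams, years):
    """Grams of tritium left after `years` of decay, with no replenishment."""
    return initial_grams * exp(-DECAY_CONST * years)

print(remaining(1000.0, 12.32))   # ~500 g: half of a 1 kg inventory after one half-life
print(1 - exp(-DECAY_CONST))      # ~0.055: the ~5.5% annual loss quoted above
</syntaxhighlight>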
Health risks Since tritium is a low-energy beta (β) emitter, it is not dangerous externally (its β particles cannot penetrate the skin), but it can be a radiation hazard if inhaled, ingested via food or water, or absorbed through the skin. Organisms can take up tritiated water (HTO) as they would ordinary water (H₂O). Plants convert HTO into organically bound tritium (OBT), and are then consumed by animals. HTO is retained in humans for around 12 days, with a small portion of it remaining in the body. Tritium can be passed along the food chain as one organism feeds on another, though the metabolism of OBT is less understood than that of HTO. Tritium can be incorporated into RNA and DNA molecules within organisms, which can lead to somatic and genetic effects. These can emerge in later generations. HTO has a short biological half-life in the human body of 7 to 14 days, which both reduces the total effects of single-incident ingestion and precludes long-term bioaccumulation of HTO from the environment. The biological half-life of tritiated water in the human body, which is a measure of body water turn-over, varies with the season. Studies on the biological half-life of occupational radiation workers for free water tritium in a coastal region of Karnataka, India, show that the biological half-life in winter is twice that of the summer. If tritium exposure is suspected or known, drinking uncontaminated water will help replace the tritium from the body. Increasing sweating, urination or breathing can help the body expel water and thereby the tritium contained in it. However, care should be taken that neither dehydration nor a depletion of the body's electrolytes results, as the health consequences of those things (particularly in the short term) can be more severe than those of tritium exposure. Environmental contamination Tritium has leaked from 48 of 65 nuclear sites in the US. In one case, leaking water contained of tritium per liter, which is 375 times the current EPA limit for drinking water, and 28 times the World Health Organization's recommended limit. This is equivalent to or roughly 0.8 parts per trillion. The US Nuclear Regulatory Commission states that in normal operation in 2003, 56 pressurized water reactors released of tritium (maximum: ; minimum: ; average: ) and 24 boiling water reactors released (maximum: ; minimum: 0 Ci; average: ), in liquid effluents. of tritium weigh about . Regulatory limits The legal limits for tritium in drinking water vary widely from country to country. Some figures are given below: {| class="wikitable" |+ Tritium drinking water limits by country !valign="bottom"| Country !valign="bottom"| Tritium limit (Bq/L) !valign="bottom"| Equivalent dose (μSv/year) |- | Australia |align="right"| 76,103 |1,000 |- | Japan |align="right"| 60,000 | |- | Finland |align="right"| 30,000 | |- | World Health Organization |align="right"| 10,000 | |- | Switzerland |align="right"| 10,000 | |- | Russia |align="right"| 7,700 | |- | Canada (Ontario) |align="right"| 7,000 | |- | United States |align="right"| 740 | |- | Norway |align="right"| 100 | |- |} The American limit results in a dose of 4.0 millirems (or 40 microsieverts in SI units) per year per EPA regulation 40CFR141, and is based on outdated dose calculation standards of National Bureau of Standards Handbook 69 circa 1963. Four millirem per year is about 1.3% of the natural background radiation (~3 mSv). For comparison, the banana equivalent dose (BED) is set at 0.1 μSv, so the statutory limit in the US is set at 400 BED.
Updated dose calculation standards based on International Commission on Radiological Protection Report 30 and used in the NRC Regulation 10CFR20 results in a dose of 0.9 millirem (9 μSv) per year at 740 Bq/L (20 nCi/L). Use Radiometric assays in biology and medicine Tritiation of drug candidates allows detailed analysis of their absorption and metabolism. Tritium has also been used for biological radiometric assays, in a process akin to radiocarbon dating. For example, [3H] retinyl acetate was traced through the bodies of rats. Self-powered lighting The beta particles from small amounts of tritium cause chemicals called phosphors to glow. This radioluminescence is used in self-powered lighting devices called betalights, which are used for night illumination of firearm sights, watches, exit signs, map lights, navigational compasses (such as current-use M-1950 U.S. military compasses), knives and a variety of other devices. , commercial demand for tritium is per year and the cost is or more. Nuclear weapons Tritium is an important component in nuclear weapons; it is used to enhance the efficiency and yield of fission bombs and the fission stages of hydrogen bombs in a process known as "boosting" as well as in external neutron initiators for such weapons. Neutron initiator These are devices incorporated in nuclear weapons which produce a pulse of neutrons when the bomb is detonated to initiate the fission reaction in the fissionable core (pit) of the bomb, after it is compressed to a critical mass by explosives. Actuated by an ultrafast switch like a krytron, a small particle accelerator drives ions of tritium and deuterium to energies above the 15 keV or so needed for deuterium-tritium fusion and directs them into a metal target where the tritium and deuterium are adsorbed as hydrides. High-energy fusion neutrons from the resulting fusion radiate in all directions. Some of these strike plutonium or uranium nuclei in the primary's pit, initiating a nuclear chain reaction. The quantity of neutrons produced is large in absolute numbers, allowing the pit to quickly achieve neutron levels that would otherwise need many more generations of chain reaction, though still small compared to the total number of nuclei in the pit. Boosting Before detonation, a few grams of tritium–deuterium gas are injected into the hollow "pit" of fissile material. The early stages of the fission chain reaction supply enough heat and compression to start deuterium–tritium fusion; then both fission and fusion proceed in parallel, the fission assisting the fusion by continuing heating and compression, and the fusion assisting the fission with highly energetic (14.1-MeV) neutrons. As the fission fuel depletes and also explodes outward, it falls below the density needed to stay critical by itself, but the fusion neutrons make the fission process progress faster and continue longer than it would without boosting. Increased yield comes overwhelmingly from the increased fission. The energy from the fusion itself is much smaller because the amount of fusion fuel is much smaller. 
Effects of boosting include: increased yield (for the same amount of fission fuel, compared to unboosted) the possibility of variable yield by varying the amount of fusion fuel allowing the bomb to require a smaller amount of the very expensive fissile material eliminating the risk of predetonation by nearby nuclear explosions not so stringent requirements on the implosion setup, allowing for a smaller and lighter amount of high explosives to be used The tritium in a warhead is continually undergoing radioactive decay, becoming unavailable for fusion. Also, its decay product, helium-3, absorbs neutrons. This can offset or reverse the intended effect of the tritium, which was to generate many free neutrons, if too much helium-3 has accumulated. Therefore, boosted bombs need fresh tritium periodically. The estimated quantity needed is per warhead. To maintain constant levels of tritium, about per warhead per year must be supplied to the bomb. One mole of deuterium-tritium gas contains about of tritium and of deuterium. In comparison, the 20 moles of plutonium in a nuclear bomb consists of about of plutonium-239. Tritium in hydrogen bomb secondaries Since tritium undergoes radioactive decay, and is also difficult to confine physically, the much larger secondary charge of heavy hydrogen isotopes needed in a true hydrogen bomb uses solid lithium deuteride as its source of deuterium and tritium, producing the tritium in situ during secondary ignition. During the detonation of the primary fission bomb stage in a thermonuclear weapon (Teller–Ulam staging), the sparkplug, a cylinder of U/Pu at the center of the fusion stage(s), begins to fission in a chain reaction, from excess neutrons channeled from the primary. The neutrons released from the fission of the sparkplug split lithium-6 into tritium and helium-4, while lithium-7 is split into helium-4, tritium, and one neutron. As these reactions occur, the fusion stage is compressed by photons from the primary and fission of the U or U/U jacket surrounding the fusion stage. Therefore, the fusion stage breeds its own tritium as the device detonates. In the extreme heat and pressure of the explosion, some of the tritium is then forced into fusion with deuterium, and that reaction releases even more neutrons. Since this fusion process requires an extremely high temperature for ignition, and it produces fewer and less energetic neutrons (only fission, deuterium-tritium fusion, and splitting are net neutron producers), lithium deuteride is not used in boosted bombs, but rather for multi-stage hydrogen bombs. Controlled nuclear fusion Tritium is an important fuel for controlled nuclear fusion in both magnetic confinement and inertial confinement fusion reactor designs. The National Ignition Facility (NIF) uses deuterium–tritium fuel, and the experimental fusion reactor ITER will also do so. The deuterium–tritium reaction is favorable since it has the largest fusion cross section (about 5.0 barns) and it reaches this maximum cross section at the lowest energy (about 65 keV center-of-mass) of any potential fusion fuel. As tritium is very rare on earth, concepts for fusion reactors often include the breeding of tritium. During the operation of envisioned breeder fusion reactors, Breeding blankets, often containing lithium as part of ceramic pebbles, are subjected to neutron fluxes to generate tritium to complete the fuel cycle. 
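To put the 17.6 MeV per deuterium–tritium reaction quoted above into perspective, the rough estimate below computes the mass of tritium consumed per gigawatt-year of fusion (thermal) energy; it is an order-of-magnitude sketch based only on that figure, not a design value for any particular reactor.

<syntaxhighlight lang="python">
# Order-of-magnitude estimate of tritium burned per gigawatt-year of D-T fusion energy.
MEV_TO_JOULE = 1.602e-13
ENERGY_PER_REACTION = 17.6 * MEV_TO_JOULE   # joules released per D-T fusion
TRITON_MASS_KG = 5.01e-27                   # mass of a single tritium nucleus

gw_year_joules = 1e9 * 365.25 * 24 * 3600   # one gigawatt of thermal power for one year
reactions = gw_year_joules / ENERGY_PER_REACTION
tritium_kg = reactions * TRITON_MASS_KG
print(round(tritium_kg, 1))                 # roughly 56 kg of tritium per gigawatt-year
</syntaxhighlight>

Figures of this size help explain why the breeding blankets described above are considered necessary to close the fuel cycle, given how rare tritium is on Earth.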
The Tritium Systems Test Assembly (TSTA) was a facility at the Los Alamos National Laboratory dedicated to the development and demonstration of technologies required for fusion-relevant deuterium–tritium processing. Electrical power source Tritium can be used in a betavoltaic device to create an atomic battery to generate electricity. Use as an oceanic transient tracer Aside from chlorofluorocarbons, tritium can act as a transient tracer and can "outline" the biological, chemical, and physical paths throughout the world's oceans because of its evolving distribution. Tritium has thus been used as a tool to examine ocean circulation and ventilation and, for such purposes, is usually measured in tritium units, where 1 TU is defined as 1 tritium atom per 10¹⁸ hydrogen atoms, equal to about 0.118 Bq/liter. As noted earlier, nuclear tests, mainly in the Northern Hemisphere at high latitudes, throughout the late 1950s and early 1960s introduced lots of tritium into the atmosphere, especially the stratosphere. Before these nuclear tests, there were only about 3-4 kg of tritium on the Earth's surface, but these amounts rose by 2-3 orders of magnitude during the post-test period. Some sources reported natural background levels were exceeded by about 1,000 TU in 1963 and 1964, and the isotope is used in the northern hemisphere to estimate the age of groundwater and construct hydrogeologic simulation models. These sources estimated atmospheric levels at the height of weapons testing to have approached 1,000 TU, and pre-fallout levels of rainwater to have been between 5 and 10 TU. In 1963, Valentia Island, Ireland, recorded 2,000 TU in precipitation. North Atlantic Ocean While in the stratosphere (post-test period), the tritium was oxidized into water molecules and was present in much of the rapidly produced rainfall, making tritium a prognostic tool for studying the evolution and structure of the water cycle as well as the ventilation and formation of water masses in the North Atlantic. Bomb-tritium data from the Transient Tracers in the Ocean (TTO) program were used to quantify the replenishment and overturning rates for deep water located in the North Atlantic. Bomb-tritium also enters the deep ocean around the Antarctic. Most of the bomb tritiated water (HTO) throughout the atmosphere can enter the ocean through the following processes: precipitation, vapor exchange, and river runoff. These processes make HTO a useful tracer for time scales of up to a few decades. Using the data from these processes for 1981, the 1-TU isosurface lies between 500 and 1,000 meters deep in the subtropical regions and then extends to 1,500–2,000 meters south of the Gulf Stream due to recirculation and ventilation in the upper portion of the Atlantic Ocean. To the north, the isosurface deepens and reaches the floor of the abyssal plain, which is directly related to the ventilation of the ocean floor over 10–20 year time-scales. Also evident in the Atlantic Ocean is the tritium profile near Bermuda between the late 1960s and late 1980s. There is a downward propagation of the tritium maximum from the surface (1960s) to 400 meters (1980s), which corresponds to a deepening rate of about 18 meters per year. There are also tritium increases at 1,500 m depth in the late 1970s and 2,500 m in the middle of the 1980s, both of which correspond to cooling events in the deep water and associated deep water ventilation.
From a study in 1991, the tritium profile was used as a tool for studying the mixing and spreading of newly formed North Atlantic Deep Water (NADW), corresponding to tritium increases to 4 TU. This NADW tends to spill over sills that divide the Norwegian Sea from the North Atlantic Ocean and then flows to the west and equatorward in deep boundary currents. This process was explained via the large-scale tritium distribution in the deep North Atlantic between 1981 and 1983. The sub-polar gyre tends to be freshened (ventilated) by the NADW and is directly related to the high tritium values (>1.5 TU). Also evident was the decrease in tritium in the deep western boundary current by a factor of 10 from the Labrador Sea to the Tropics, which is indicative of loss to ocean interior due to turbulent mixing and recirculation. Pacific and Indian oceans In a 1998 study, tritium concentrations in surface seawater and atmospheric water vapor (10 meters above the surface) were sampled at the following locations: the Sulu Sea, Fremantle Bay, the Bay of Bengal, Penang Bay, and the Strait of Malacca. Results indicated that the tritium concentration in surface seawater was highest at the Fremantle Bay (about 0.40 Bq/liter), which could be accredited to the mixing of runoff of freshwater from nearby lands due to large amounts found in coastal waters. Typically, lower concentrations were found between 35 and 45° south, and near the equator. Results also indicated that (in general) tritium has decreased over the years (up to 1997) due to the physical decay of bomb tritium in the Indian Ocean. As for water vapor, the tritium concentration was about one order of magnitude greater than surface seawater concentrations (ranging from 0.46 to 1.15 Bq/L). Therefore, the water vapor tritium is not affected by the surface seawater concentration; thus, the high tritium concentrations in the vapor were concluded to be a direct consequence of the downward movement of natural tritium from the stratosphere to the troposphere (therefore, the ocean air showed a dependence on latitudinal change). In the North Pacific Ocean, the tritium (introduced as bomb tritium in the Northern Hemisphere) spread in three dimensions. There were subsurface maxima in the middle and low latitude regions, which is indicative of lateral mixing (advection) and diffusion processes along lines of constant potential density (isopycnals) in the upper ocean. Some of these maxima even correlate well with salinity extrema. In order to obtain the structure for ocean circulation, the tritium concentrations were mapped on 3 surfaces of constant potential density (23.90, 26.02, and 26.81). Results indicated that the tritium was well-mixed (at 6 to 7 TU) on the 26.81 isopycnal in the subarctic cyclonic gyre and there appeared to be a slow exchange of tritium (relative to shallower isopycnals) between this gyre and the anticyclonic gyre to the south; also, the tritium on the 23.90 and 26.02 surfaces appeared to be exchanged at a slower rate between the central gyre of the North Pacific and the equatorial regions. The depth penetration of bomb tritium can be separated into three distinct layers: Layer 1 Layer 1 is the shallowest layer and includes the deepest, ventilated layer in winter; it has received tritium via radioactive fallout and lost some due to advection and/or vertical diffusion and contains about 28% of the total amount of tritium. Layer 2 Layer 2 is below the first layer but above the 26.81 isopycnal and is no longer part of the mixed layer. 
Its two sources are diffusion downward from the mixed layer and lateral expansions outcropping strata (poleward); it contains about 58% of the total tritium. Layer 3 Layer 3 is representative of waters that are deeper than the outcrop isopycnal and can only receive tritium via vertical diffusion; it contains the remaining 14% of the total tritium. Mississippi River system Trace amounts of radioactive materials from atomic weapons testing settled throughout the Mississippi River System. Tritium concentrations have been used to understand the residence times of continental hydrologic systems such as lakes, streams, and rivers. In a 2004 study, several rivers were taken into account during the examination of tritium concentrations (starting in the 1960s) throughout the Mississippi River Basin: Ohio River (largest input to the Mississippi River flow), Missouri River, and Arkansas River. The highest tritium concentrations were found in 1963 across locations throughout these rivers. The peak correlates with implementation of the US & Soviet atmospheric test ban treaty in 1962. The overall highest concentrations occurred in the Missouri River (1963) and were greater than 1,200 TU while the lowest concentrations were found in the Arkansas River (never greater than 850 TU and less than 10 TU in the mid-1980s). As for the mass flux of tritium through the main stem of the Mississippi River into the Gulf of Mexico, data indicated that approximately 780 grams of tritium has flowed out of the River and into the Gulf between 1961 and 1997, an average of 21.7 grams/yr and 7.7 PBq/yr. Current fluxes through the Mississippi River are 1 to 2 grams, per year as opposed to the pre-bomb period fluxes of roughly 0.4 grams per year. See also Hypertriton List of elements facing shortage Footnotes References External links Isotopes of hydrogen Environmental isotopes Radiochemistry Radioisotope fuels Nuclear fusion fuels Radionuclides used in radiometric dating
Tritium
[ "Chemistry" ]
7,141
[ "Isotopes of hydrogen", "Environmental isotopes", "Radionuclides used in radiometric dating", "Isotopes", "Radiochemistry", "Radioactivity" ]
31,474
https://en.wikipedia.org/wiki/Transcription%20factor
In molecular biology, a transcription factor (TF) (or sequence-specific DNA-binding factor) is a protein that controls the rate of transcription of genetic information from DNA to messenger RNA, by binding to a specific DNA sequence. The function of TFs is to regulate—turn on and off—genes in order to make sure that they are expressed in the desired cells at the right time and in the right amount throughout the life of the cell and the organism. Groups of TFs function in a coordinated fashion to direct cell division, cell growth, and cell death throughout life; cell migration and organization (body plan) during embryonic development; and intermittently in response to signals from outside the cell, such as a hormone. There are approximately 1600 TFs in the human genome. Transcription factors are members of the proteome as well as regulome. TFs work alone or with other proteins in a complex, by promoting (as an activator), or blocking (as a repressor) the recruitment of RNA polymerase (the enzyme that performs the transcription of genetic information from DNA to RNA) to specific genes. A defining feature of TFs is that they contain at least one DNA-binding domain (DBD), which attaches to a specific sequence of DNA adjacent to the genes that they regulate. TFs are grouped into classes based on their DBDs. Other proteins such as coactivators, chromatin remodelers, histone acetyltransferases, histone deacetylases, kinases, and methylases are also essential to gene regulation, but lack DNA-binding domains, and therefore are not TFs. TFs are of interest in medicine because TF mutations can cause specific diseases, and medications can be potentially targeted toward them. Number Transcription factors are essential for the regulation of gene expression and are, as a consequence, found in all living organisms. The number of transcription factors found within an organism increases with genome size, and larger genomes tend to have more transcription factors per gene. There are approximately 2800 proteins in the human genome that contain DNA-binding domains, and 1600 of these are presumed to function as transcription factors, though other studies indicate it to be a smaller number. Therefore, approximately 10% of genes in the genome code for transcription factors, which makes this family the single largest family of human proteins. Furthermore, genes are often flanked by several binding sites for distinct transcription factors, and efficient expression of each of these genes requires the cooperative action of several different transcription factors (see, for example, hepatocyte nuclear factors). Hence, the combinatorial use of a subset of the approximately 2000 human transcription factors easily accounts for the unique regulation of each gene in the human genome during development. Mechanism Transcription factors bind to either enhancer or promoter regions of DNA adjacent to the genes that they regulate based on recognizing specific DNA motifs. Depending on the transcription factor, the transcription of the adjacent gene is either up- or down-regulated. Transcription factors use a variety of mechanisms for the regulation of gene expression. These mechanisms include: stabilize or block the binding of RNA polymerase to DNA catalyze the acetylation or deacetylation of histone proteins. The transcription factor can either do this directly or recruit other proteins with this catalytic activity. 
Many transcription factors use one or the other of two opposing mechanisms to regulate transcription: histone acetyltransferase (HAT) activity – acetylates histone proteins, which weakens the association of DNA with histones, which make the DNA more accessible to transcription, thereby up-regulating transcription histone deacetylase (HDAC) activity – deacetylates histone proteins, which strengthens the association of DNA with histones, which make the DNA less accessible to transcription, thereby down-regulating transcription recruit coactivator or corepressor proteins to the transcription factor DNA complex Function Transcription factors are one of the groups of proteins that read and interpret the genetic "blueprint" in the DNA. They bind to the DNA and help initiate a program of increased or decreased gene transcription. As such, they are vital for many important cellular processes. Below are some of the important functions and biological roles transcription factors are involved in: Basal transcriptional regulation In eukaryotes, an important class of transcription factors called general transcription factors (GTFs) are necessary for transcription to occur. Many of these GTFs do not actually bind DNA, but rather are part of the large transcription preinitiation complex that interacts with RNA polymerase directly. The most common GTFs are TFIIA, TFIIB, TFIID (see also TATA binding protein), TFIIE, TFIIF, and TFIIH. The preinitiation complex binds to promoter regions of DNA upstream to the gene that they regulate. Differential enhancement of transcription Other transcription factors differentially regulate the expression of various genes by binding to enhancer regions of DNA adjacent to regulated genes. These transcription factors are critical to making sure that genes are expressed in the right cell at the right time and in the right amount, depending on the changing requirements of the organism. Development Many transcription factors in multicellular organisms are involved in development. Responding to stimuli, these transcription factors turn on/off the transcription of the appropriate genes, which, in turn, allows for changes in cell morphology or activities needed for cell fate determination and cellular differentiation. The Hox transcription factor family, for example, is important for proper body pattern formation in organisms as diverse as fruit flies to humans. Another example is the transcription factor encoded by the sex-determining region Y (SRY) gene, which plays a major role in determining sex in humans. Response to intercellular signals Cells can communicate with each other by releasing molecules that produce signaling cascades within another receptive cell. If the signal requires upregulation or downregulation of genes in the recipient cell, often transcription factors will be downstream in the signaling cascade. Estrogen signaling is an example of a fairly short signaling cascade that involves the estrogen receptor transcription factor: Estrogen is secreted by tissues such as the ovaries and placenta, crosses the cell membrane of the recipient cell, and is bound by the estrogen receptor in the cell's cytoplasm. The estrogen receptor then goes to the cell's nucleus and binds to its DNA-binding sites, changing the transcriptional regulation of the associated genes. Response to environment Not only do transcription factors act downstream of signaling cascades related to biological stimuli but they can also be downstream of signaling cascades involved in environmental stimuli. 
Examples include heat shock factor (HSF), which upregulates genes necessary for survival at higher temperatures, hypoxia inducible factor (HIF), which upregulates genes necessary for cell survival in low-oxygen environments, and sterol regulatory element binding protein (SREBP), which helps maintain proper lipid levels in the cell. Cell cycle control Many transcription factors, especially some that are proto-oncogenes or tumor suppressors, help regulate the cell cycle and as such determine how large a cell will get and when it can divide into two daughter cells. One example is the Myc oncogene, which has important roles in cell growth and apoptosis. Pathogenesis Transcription factors can also be used to alter gene expression in a host cell to promote pathogenesis. A well studied example of this are the transcription-activator like effectors (TAL effectors) secreted by Xanthomonas bacteria. When injected into plants, these proteins can enter the nucleus of the plant cell, bind plant promoter sequences, and activate transcription of plant genes that aid in bacterial infection. TAL effectors contain a central repeat region in which there is a simple relationship between the identity of two critical residues in sequential repeats and sequential DNA bases in the TAL effector's target site. This property likely makes it easier for these proteins to evolve in order to better compete with the defense mechanisms of the host cell. Regulation It is common in biology for important processes to have multiple layers of regulation and control. This is also true with transcription factors: Not only do transcription factors control the rates of transcription to regulate the amounts of gene products (RNA and protein) available to the cell but transcription factors themselves are regulated (often by other transcription factors). Below is a brief synopsis of some of the ways that the activity of transcription factors can be regulated: Synthesis Transcription factors (like all proteins) are transcribed from a gene on a chromosome into RNA, and then the RNA is translated into protein. Any of these steps can be regulated to affect the production (and thus activity) of a transcription factor. An implication of this is that transcription factors can regulate themselves. For example, in a negative feedback loop, the transcription factor acts as its own repressor: If the transcription factor protein binds the DNA of its own gene, it down-regulates the production of more of itself. This is one mechanism to maintain low levels of a transcription factor in a cell. Nuclear localization In eukaryotes, transcription factors (like most proteins) are transcribed in the nucleus but are then translated in the cell's cytoplasm. Many proteins that are active in the nucleus contain nuclear localization signals that direct them to the nucleus. But, for many transcription factors, this is a key point in their regulation. Important classes of transcription factors such as some nuclear receptors must first bind a ligand while in the cytoplasm before they can relocate to the nucleus. Activation Transcription factors may be activated (or deactivated) through their signal-sensing domain by a number of mechanisms including: ligand binding – Not only is ligand binding able to influence where a transcription factor is located within a cell but ligand binding can also affect whether the transcription factor is in an active state and capable of binding DNA or other cofactors (see, for example, nuclear receptors). 
phosphorylation – Many transcription factors such as STAT proteins must be phosphorylated before they can bind DNA. interaction with other transcription factors (e.g., homo- or hetero-dimerization) or coregulatory proteins Accessibility of DNA-binding site In eukaryotes, DNA is organized with the help of histones into compact particles called nucleosomes, where sequences of about 147 DNA base pairs make ~1.65 turns around histone protein octamers. DNA within nucleosomes is inaccessible to many transcription factors. Some transcription factors, so-called pioneer factors are still able to bind their DNA binding sites on the nucleosomal DNA. For most other transcription factors, the nucleosome should be actively unwound by molecular motors such as chromatin remodelers. Alternatively, the nucleosome can be partially unwrapped by thermal fluctuations, allowing temporary access to the transcription factor binding site. In many cases, a transcription factor needs to compete for binding to its DNA binding site with other transcription factors and histones or non-histone chromatin proteins. Pairs of transcription factors and other proteins can play antagonistic roles (activator versus repressor) in the regulation of the same gene. Availability of other cofactors/transcription factors Most transcription factors do not work alone. Many large TF families form complex homotypic or heterotypic interactions through dimerization. For gene transcription to occur, a number of transcription factors must bind to DNA regulatory sequences. This collection of transcription factors, in turn, recruit intermediary proteins such as cofactors that allow efficient recruitment of the preinitiation complex and RNA polymerase. Thus, for a single transcription factor to initiate transcription, all of these other proteins must also be present, and the transcription factor must be in a state where it can bind to them if necessary. Cofactors are proteins that modulate the effects of transcription factors. Cofactors are interchangeable between specific gene promoters; the protein complex that occupies the promoter DNA and the amino acid sequence of the cofactor determine its spatial conformation. For example, certain steroid receptors can exchange cofactors with NF-κB, which is a switch between inflammation and cellular differentiation; thereby steroids can affect the inflammatory response and function of certain tissues. Interaction with methylated cytosine Transcription factors and methylated cytosines in DNA both have major roles in regulating gene expression. (Methylation of cytosine in DNA primarily occurs where cytosine is followed by guanine in the 5' to 3' DNA sequence, a CpG site.) Methylation of CpG sites in a promoter region of a gene usually represses gene transcription, while methylation of CpGs in the body of a gene increases expression. TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene. The DNA binding sites of 519 transcription factors were evaluated. Of these, 169 transcription factors (33%) did not have CpG dinucleotides in their binding sites, and 33 transcription factors (6%) could bind to a CpG-containing motif but did not display a preference for a binding site with either a methylated or unmethylated CpG. 
There were 117 transcription factors (23%) that were inhibited from binding to their binding sequence if it contained a methylated CpG site, 175 transcription factors (34%) that had enhanced binding if their binding sequence had a methylated CpG site, and 25 transcription factors (5%) were either inhibited or had enhanced binding depending on where in the binding sequence the methylated CpG was located. TET enzymes do not specifically bind to methylcytosine except when recruited (see DNA demethylation). Multiple transcription factors important in cell differentiation and lineage specification, including NANOG, SALL4A, WT1, EBF1, PU.1, and E2A, have been shown to recruit TET enzymes to specific genomic loci (primarily enhancers) to act on methylcytosine (mC) and convert it to hydroxymethylcytosine hmC (and in most cases marking them for subsequent complete demethylation to cytosine). TET-mediated conversion of mC to hmC appears to disrupt the binding of 5mC-binding proteins including MECP2 and MBD (Methyl-CpG-binding domain) proteins, facilitating nucleosome remodeling and the binding of transcription factors, thereby activating transcription of those genes. EGR1 is an important transcription factor in memory formation. It has an essential role in brain neuron epigenetic reprogramming. The transcription factor EGR1 recruits the TET1 protein that initiates a pathway of DNA demethylation. EGR1, together with TET1, is employed in programming the distribution of methylation sites on brain DNA during brain development and in learning (see Epigenetics in learning and memory). Structure Transcription factors are modular in structure and contain the following domains: DNA-binding domain (DBD), which attaches to specific sequences of DNA (enhancer or promoter. Necessary component for all vectors. Used to drive transcription of the vector's transgene promoter sequences) adjacent to regulated genes. DNA sequences that bind transcription factors are often referred to as response elements. Activation domain (AD), which contains binding sites for other proteins such as transcription coregulators. These binding sites are frequently referred to as activation functions (AFs), Transactivation domain (TAD) or Trans-activating domain TAD, not to be confused with topologically associating domain (TAD). An optional signal-sensing domain (SSD) (e.g., a ligand-binding domain), which senses external signals and, in response, transmits these signals to the rest of the transcription complex, resulting in up- or down-regulation of gene expression. Also, the DBD and signal-sensing domains may reside on separate proteins that associate within the transcription complex to regulate gene expression. DNA-binding domain The portion (domain) of the transcription factor that binds DNA is called its DNA-binding domain. Below is a partial list of some of the major families of DNA-binding domains/transcription factors: Response elements The DNA sequence that a transcription factor binds to is called a transcription factor-binding site or response element. Transcription factors interact with their binding sites using a combination of electrostatic (of which hydrogen bonds are a special case) and Van der Waals forces. Due to the nature of these chemical interactions, most transcription factors bind DNA in a sequence specific manner. However, not all bases in the transcription factor-binding site may actually interact with the transcription factor. In addition, some of these interactions may be weaker than others. 
Thus, transcription factors do not bind just one sequence but are capable of binding a subset of closely related sequences, each with a different strength of interaction. For example, although the consensus binding site for the TATA-binding protein (TBP) is TATAAAA, the TBP transcription factor can also bind similar sequences such as TATATAT or TATATAA. Because transcription factors can bind a set of related sequences and these sequences tend to be short, potential transcription factor binding sites can occur by chance if the DNA sequence is long enough. It is unlikely, however, that a transcription factor will bind all compatible sequences in the genome of the cell. Other constraints, such as DNA accessibility in the cell or availability of cofactors, may also help dictate where a transcription factor will actually bind. Thus, given the genome sequence, it is still difficult to predict where a transcription factor will actually bind in a living cell. Additional recognition specificity, however, may be obtained through the use of more than one DNA-binding domain (for example, tandem DBDs in the same transcription factor, or dimerization of two transcription factors) that bind to two or more adjacent sequences of DNA.

Clinical significance
Transcription factors are of clinical significance for at least two reasons: (1) mutations can be associated with specific diseases, and (2) they can be targets of medications.

Disorders
Due to their important roles in development, intercellular signaling, and the cell cycle, some human diseases have been associated with mutations in transcription factors. Many transcription factors are either tumor suppressors or oncogenes, and, thus, mutations or aberrant regulation of them is associated with cancer. Three groups of transcription factors are known to be important in human cancer: (1) the NF-kappaB and AP-1 families, (2) the STAT family, and (3) the steroid receptors. Below are a few of the better-studied examples:

Potential drug targets
Approximately 10% of currently prescribed drugs directly target the nuclear receptor class of transcription factors. Examples include tamoxifen and bicalutamide for the treatment of breast and prostate cancer, respectively, and various types of anti-inflammatory and anabolic steroids. In addition, transcription factors are often indirectly modulated by drugs through signaling cascades. It might be possible to directly target other less-explored transcription factors such as NF-κB with drugs. Transcription factors outside the nuclear receptor family are thought to be more difficult to target with small-molecule therapeutics, since it is not clear that they are "druggable", but progress has been made on Pax2 and the Notch pathway.

Role in evolution
Gene duplications have played a crucial role in the evolution of species. This applies particularly to transcription factors. Once they occur as duplicates, mutations in the gene encoding one copy can accumulate without negatively affecting the regulation of downstream targets. However, changes in the DNA binding specificities of the single-copy Leafy transcription factor, which occurs in most land plants, have recently been elucidated. In that respect, a single-copy transcription factor can undergo a change of specificity through a promiscuous intermediate without losing function. Similar mechanisms have been proposed in the context of all alternative phylogenetic hypotheses, and for the role of transcription factors in the evolution of all species.
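As an aside to the Response elements discussion above, the short sketch below scans a DNA string for windows within a small Hamming distance of the TBP consensus TATAAAA. The example sequence, the two-mismatch cutoff, and the function names are illustrative assumptions only; real binding preferences are graded (for example via position weight matrices) rather than a simple mismatch count.

```python
# Minimal sketch: find near-matches to the TBP consensus TATAAAA, as a
# stand-in for the graded, degenerate binding described above.
# The sequence and the mismatch cutoff are hypothetical choices.

CONSENSUS = "TATAAAA"

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def find_sites(seq: str, consensus: str = CONSENSUS, max_mismatches: int = 2):
    """Yield (position, window, mismatches) for every window of the sequence
    that is within max_mismatches of the consensus."""
    k = len(consensus)
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        d = hamming(window, consensus)
        if d <= max_mismatches:
            yield i, window, d

if __name__ == "__main__":
    dna = "GGCTATAAAAGGCCTATATATCCGTATATAAC"   # hypothetical sequence
    for pos, site, d in find_sites(dna):
        print(f"position {pos}: {site} ({d} mismatch(es))")
```

Run on the hypothetical sequence above, the sketch reports the exact consensus as well as near-matches such as TATATAA and TATATAT, echoing the point that short, degenerate sites also arise by chance in long sequences.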
Role in biocontrol activity
Transcription factors also have a role in the resistance activity that is important for successful biocontrol activity. In the biocontrol yeast Papiliotrema terrestris strain LS28, resistance to oxidative stress and alkaline pH sensing depend on the transcription factors Yap1 and Rim101; studying these factors with molecular tools has provided an understanding of the genetic mechanisms underlying biocontrol activity, which supports disease management programs based on biological and integrated control.

Analysis
There are different technologies available to analyze transcription factors. On the genomic level, DNA sequencing and database research are commonly used. The protein version of the transcription factor is detectable by using specific antibodies. The sample is detected on a western blot. By using electrophoretic mobility shift assay (EMSA), the activation profile of transcription factors can be detected. A multiplex approach for activation profiling is a TF chip system where several different transcription factors can be detected in parallel. The most commonly used method for identifying transcription factor binding sites is chromatin immunoprecipitation (ChIP). This technique relies on chemical fixation of chromatin with formaldehyde, followed by co-precipitation of DNA and the transcription factor of interest using an antibody that specifically targets that protein. The DNA sequences can then be identified by microarray or high-throughput sequencing (ChIP-seq) to determine transcription factor binding sites. If no antibody is available for the protein of interest, DamID may be a convenient alternative.

Classes
As described in more detail below, transcription factors may be classified by their (1) mechanism of action, (2) regulatory function, or (3) sequence homology (and hence structural similarity) in their DNA-binding domains. They are also classified by the 3D structure of their DBD and the way it contacts DNA.

Mechanistic
There are two mechanistic classes of transcription factors:
General transcription factors are involved in the formation of a preinitiation complex. The most common are abbreviated as TFIIA, TFIIB, TFIID, TFIIE, TFIIF, and TFIIH. They are ubiquitous and interact with the core promoter region surrounding the transcription start site(s) of all class II genes.
Upstream transcription factors are proteins that bind somewhere upstream of the initiation site to stimulate or repress transcription. These are roughly synonymous with specific transcription factors, because they vary considerably depending on what recognition sequences are present in the proximity of the gene.

Functional
Transcription factors have been classified according to their regulatory function:
I. Constitutive – present in all cells at all times, constantly active, all being activators. Very likely playing an important facilitating role in the transcription of many chromosomal genes, possibly in genes that seem to be always transcribed (e.g., structural proteins like tubulin and actin, and ubiquitous metabolic enzymes such as glyceraldehyde phosphate dehydrogenase (GAPDH)). E.g.: general transcription factors, Sp1, NF1, CCAAT
II. Regulatory (conditionally active) – require activation.
II.A Developmental (cell-type specific) – expression begins in the fertilized egg. Once expressed, they require no additional activation.
E.g.: GATA, HNF, PIT-1, MyoD, Myf5, Hox, Winged Helix
II.B Signal-dependent – may be either developmentally restricted in their expression or present in most or all cells, but all are inactive (or minimally active) until cells containing such proteins are exposed to the appropriate intra- or extracellular signal.
II.B.1 Extracellular ligand (endocrine or paracrine)-dependent – nuclear receptors.
II.B.2 Intracellular ligand (autocrine)-dependent – activated by small intracellular molecules. E.g.: SREBP, p53, orphan nuclear receptors.
II.B.3 Cell surface receptor–ligand interaction-dependent – activated by second-messenger signaling cascades.
II.B.3.a Constitutive nuclear factors activated by serine phosphorylation – residing within the nucleus. The serine phosphorylation enzymes can be activated by two main routes:
G protein-coupled receptors, upon ligand binding, increase intracellular levels of second messengers (cAMP, IP3, DAG, calcium), which in turn activate protein serine-threonine kinases (such as PKA and PKC).
Receptor tyrosine kinases, upon ligand binding, trigger other pathways that finally terminate in serine phosphorylation of the abundant resident nuclear transcription factors.
Examples include: CREB, AP-1, Mef2
II.B.3.b Latent cytoplasmic factors – residing in the cytoplasm when inactive. A structurally and chemically very diverse group, and so are their activation pathways. E.g.: STAT, R-SMAD, NF-κB, Notch, TUBBY, NFAT

Structural
Transcription factors are often classified based on the sequence similarity, and hence the tertiary structure, of their DNA-binding domains. The following classification is based on the 3D structure of their DBD and the way it contacts DNA. It was first developed for human TFs and later extended to rodents and also to plants.
1 Superclass: Basic Domains
1.1 Class: Leucine zipper factors (bZIP)
1.1.1 Family: AP-1(-like) components; includes (c-Fos/c-Jun)
1.1.2 Family: CREB
1.1.3 Family: C/EBP-like factors
1.1.4 Family: bZIP / PAR
1.1.5 Family: Plant G-box binding factors
1.1.6 Family: ZIP only
1.2 Class: Helix-loop-helix factors (bHLH)
1.2.1 Family: Ubiquitous (class A) factors
1.2.2 Family: Myogenic transcription factors (MyoD)
1.2.3 Family: Achaete-Scute
1.2.4 Family: Tal/Twist/Atonal/Hen
1.3 Class: Helix-loop-helix / leucine zipper factors (bHLH-ZIP)
1.3.1 Family: Ubiquitous bHLH-ZIP factors; includes USF (USF1, USF2); SREBP (SREBP)
1.3.2 Family: Cell-cycle controlling factors; includes c-Myc
1.4 Class: NF-1
1.4.1 Family: NF-1 (A, B, C, X)
1.5 Class: RF-X
1.5.1 Family: RF-X (1, 2, 3, 4, 5, ANK)
1.6 Class: bHSH
2 Superclass: Zinc-coordinating DNA-binding domains
2.1 Class: Cys4 zinc finger of nuclear receptor type
2.1.1 Family: Steroid hormone receptors
2.1.2 Family: Thyroid hormone receptor-like factors
2.2 Class: diverse Cys4 zinc fingers
2.2.1 Family: GATA-Factors
2.3 Class: Cys2His2 zinc finger domain
2.3.1 Family: Ubiquitous factors, includes TFIIIA, Sp1
2.3.2 Family: Developmental / cell cycle regulators; includes Krüppel
2.3.4 Family: Large factors with NF-6B-like binding properties
2.4 Class: Cys6 cysteine-zinc cluster
2.5 Class: Zinc fingers of alternating composition
3 Superclass: Helix-turn-helix
3.1 Class: Homeo domain
3.1.1 Family: Homeo domain only; includes Ubx
3.1.2 Family: POU domain factors; includes Oct
3.1.3 Family: Homeo domain with LIM region
3.1.4 Family: homeo domain plus zinc finger motifs
3.2 Class: Paired box
3.2.1 Family: Paired plus homeo domain
3.2.2 Family: Paired domain only
3.3 Class: Fork head / winged helix
3.3.1 Family: Developmental regulators; includes forkhead
3.3.2 Family: Tissue-specific regulators
3.3.3 Family: Cell-cycle controlling factors
3.3.0 Family: Other regulators
3.4 Class: Heat Shock Factors
3.4.1 Family: HSF
3.5 Class: Tryptophan clusters
3.5.1 Family: Myb
3.5.2 Family: Ets-type
3.5.3 Family: Interferon regulatory factors
3.6 Class: TEA (transcriptional enhancer factor) domain
3.6.1 Family: TEA (TEAD1, TEAD2, TEAD3, TEAD4)
4 Superclass: beta-Scaffold Factors with Minor Groove Contacts
4.1 Class: RHR (Rel homology region)
4.1.1 Family: Rel/ankyrin; NF-kappaB
4.1.2 Family: ankyrin only
4.1.3 Family: NFAT (Nuclear Factor of Activated T-cells) (NFATC1, NFATC2, NFATC3)
4.2 Class: STAT
4.2.1 Family: STAT
4.3 Class: p53
4.3.1 Family: p53
4.4 Class: MADS box
4.4.1 Family: Regulators of differentiation; includes (Mef2)
4.4.2 Family: Responders to external signals, SRF (serum response factor)
4.4.3 Family: Metabolic regulators (ARG80)
4.5 Class: beta-Barrel alpha-helix transcription factors
4.6 Class: TATA binding proteins
4.6.1 Family: TBP
4.7 Class: HMG-box
4.7.1 Family: SOX genes, SRY
4.7.2 Family: TCF-1 (TCF1)
4.7.3 Family: HMG2-related, SSRP1
4.7.4 Family: UBF
4.7.5 Family: MATA
4.8 Class: Heteromeric CCAAT factors
4.8.1 Family: Heteromeric CCAAT factors
4.9 Class: Grainyhead
4.9.1 Family: Grainyhead
4.10 Class: Cold-shock domain factors
4.10.1 Family: csd
4.11 Class: Runt
4.11.1 Family: Runt
0 Superclass: Other Transcription Factors
0.1 Class: Copper fist proteins
0.2 Class: HMGI(Y) (HMGA1)
0.2.1 Family: HMGI(Y)
0.3 Class: Pocket domain
0.4 Class: E1A-like factors
0.5 Class: AP2/EREBP-related factors
0.5.1 Family: AP2
0.5.2 Family: EREBP
0.5.3 Superfamily: AP2/B3
0.5.3.1 Family: ARF
0.5.3.2 Family: ABI
0.5.3.3 Family: RAV

Transcription factor databases
There are numerous databases cataloging information about transcription factors, but their scope and utility vary dramatically. Some may contain only information about the actual proteins, some about their binding sites, or about their target genes. Examples include the following:
footprintDB – a metadatabase of multiple databases, including JASPAR and others
JASPAR: database of transcription factor binding sites for eukaryotes
PlantTFD: plant transcription factor database
TcoF-DB: database of transcription co-factors and transcription factor interactions
TFcheckpoint: database of human, mouse and rat TF candidates
transcriptionfactor.org (now commercial, selling reagents)
MethMotif.org: an integrative cell-specific database of transcription factor binding motifs coupled with DNA methylation profiles

See also
Cdx protein family
DNA-binding protein
Inhibitor of DNA-binding protein
Mapper(2)
Nuclear receptor, a class of ligand-activated transcription factors
Open Regulatory Annotation Database
Phylogenetic footprinting
TRANSFAC database
YeTFaSCo

References

Further reading
Carretero-Paulet L, Galstyan A, Roig-Villanova I, Martínez-García JF, Bilbao-Castro JR (2010). "Genome-Wide Classification and Evolutionary Analysis of the bHLH Family of Transcription Factors in Arabidopsis, Poplar, Rice, Moss, and Algae". Plant Physiology 153(3): 1398–1412. doi:10.1104/pp.110.153593.

External links
Transcription factor database

Gene expression
Protein families
DNA
Biophysics
Evolutionary developmental biology
Transcription factor
[ "Physics", "Chemistry", "Biology" ]
6,809
[ "Applied and interdisciplinary physics", "Gene expression", "Protein classification", "Signal transduction", "Molecular genetics", "Biophysics", "Induced stem cells", "Cellular processes", "Molecular biology", "Biochemistry", "Protein families", "Transcription factors" ]
31,591
https://en.wikipedia.org/wiki/Time%20travel
Time travel is the hypothetical activity of traveling into the past or future. Time travel is a concept in philosophy and fiction, particularly science fiction. In fiction, time travel is typically achieved through the use of a device known as a time machine. The idea of a time machine was popularized by H. G. Wells's 1895 novel The Time Machine. It is uncertain whether time travel to the past would be physically possible. Such travel, if at all feasible, may give rise to questions of causality. Forward time travel, outside the usual sense of the perception of time, is an extensively observed phenomenon and is well understood within the framework of special relativity and general relativity. However, making one body advance or delay more than a few milliseconds compared to another body is not feasible with current technology. As for backward time travel, it is possible to find solutions in general relativity that allow for it, such as a rotating black hole. Traveling to an arbitrary point in spacetime has very limited support in theoretical physics, and is usually connected only with quantum mechanics or wormholes.

History of the concept

Mythical time travel
Some ancient myths depict a character skipping forward in time. In Hindu mythology, the Vishnu Purana mentions the story of King Raivata Kakudmi, who travels to heaven to meet the creator Brahma and is surprised to learn when he returns to Earth that many ages have passed. The Buddhist Pāli Canon mentions the relativity of time. The Payasi Sutta tells of one of the Buddha's chief disciples, Kumara Kassapa, who explains to the skeptic Payasi that time in the Heavens passes differently than on Earth. The Japanese tale of "Urashima Tarō", first described in the Manyoshu, tells of a young fisherman named Urashima-no-ko who visits an undersea palace. After three days, he returns home to his village and finds himself 300 years in the future, where he has been forgotten, his house is in ruins, and his family has died.

Abrahamic religions
One story in Judaism concerns Honi HaMe'agel, a miracle-working sage of the 1st century BC, who was a historical character to whom various myths were attached. While traveling one day, Honi saw a man planting a carob tree and asked him about it. The man explained that the tree would take 70 years to bear fruit, and that he was planting it not for himself but for the generations to follow him. Later that day, Honi sat down to rest but fell asleep for 70 years; when he awoke, he saw a man picking fruit from a fully mature carob tree. Asked whether he had planted it, the man replied that he had not, but that his grandfather had planted it for him. In Christian tradition, there is a similar story, "the Seven Sleepers of Ephesus", which recounts a group of early Christians who hid in a cave circa 250 AD to escape the persecution of Christians during the reign of the Roman emperor Decius. They fell into a sleep and woke some 200 years later during the reign of Theodosius II, to discover that the Empire had become Christian. This Christian story is also recounted in the Islamic tradition and appears in a sura of the Quran, Sura Al-Kahf. The Quranic version recounts a group of young monotheists escaping from persecution within a cave and emerging hundreds of years later. This narrative describes divine protection and time suspension.
Another similar story in the Islamic tradition is that of Uzair (usually identified with the Biblical Ezra), whose grief at the destruction of Jerusalem by the Babylonians was so great that God took his soul and brought him back to life after Jerusalem was reconstructed. He rode on his revived donkey and entered his native place, but the people did not recognize him, nor did his household, except the maid, who was now an old blind woman. He prayed to God to cure her blindness and she could see again. He met his son, who recognized him by a mole between his shoulders and was now older than he was.

Science fiction
Time travel themes in science fiction and the media can be grouped into three categories: immutable timeline; mutable timeline; and alternate histories, as in the interacting-many-worlds interpretation. The non-scientific term 'timeline' is often used to refer to all physical events in history, so that where events are changed, the time traveler is described as creating a new timeline. Early science fiction stories feature characters who sleep for years and awaken in a changed society, or are transported to the past through supernatural means. Among them are L'An 2440, rêve s'il en fût jamais (The Year 2440: A Dream If Ever There Was One, 1770) by Louis-Sébastien Mercier, Rip Van Winkle (1819) by Washington Irving, Looking Backward (1888) by Edward Bellamy, and When the Sleeper Awakes (1899) by H. G. Wells. Prolonged sleep is used as a means of time travel in these stories. The date of the earliest work about backwards time travel is uncertain. The Chinese novel A Supplement to the Journey to the West by Dong Yue features magical mirrors and jade gateways that connect various points in time. The protagonist Sun Wukong travels back in time to the "World of the Ancients" (Qin dynasty) to retrieve a magical bell and then travels forward to the "World of the Future" (Song dynasty) to find an emperor who has been exiled in time. However, the time travel takes place inside an illusory dream world created by the villain to distract and entrap him. Samuel Madden's Memoirs of the Twentieth Century (1733) is a series of letters from British ambassadors in 1997 and 1998 to diplomats in the past, conveying the political and religious conditions of the future. Because the narrator receives these letters from his guardian angel, Paul Alkon suggests in his book Origins of Futuristic Fiction that "the first time-traveler in English literature is a guardian angel". Madden does not explain how the angel obtains these documents, but Alkon asserts that Madden "deserves recognition as the first to toy with the rich idea of time-travel in the form of an artifact sent backward from the future to be discovered in the present". In the science fiction anthology Far Boundaries (1951), editor August Derleth claims that an early short story about time travel is An Anachronism; or, Missing One's Coach, written for the Dublin Literary Magazine by an anonymous author in the June 1838 issue. While the narrator waits under a tree for a coach to take him out of Newcastle upon Tyne, he is transported back in time over a thousand years. He encounters the Venerable Bede in a monastery and explains to him the developments of the coming centuries. However, the story never makes it clear whether these events are real or a dream. Another early work about time travel is The Forebears of Kalimeros: Alexander, son of Philip of Macedon by Alexander Veltman, published in 1836.
Charles Dickens's A Christmas Carol (1843) has early depictions of mystical time travel in both directions, as the protagonist, Ebenezer Scrooge, is transported to Christmases past and future. Other stories employ the same template, where a character naturally goes to sleep, and upon waking up finds themself in a different time. A clearer example of backward time travel is found in the 1861 book Paris avant les hommes (Paris before Men) by the French botanist and geologist Pierre Boitard, published posthumously. In this story, the protagonist is transported to the prehistoric past by the magic of a "lame demon" (a French pun on Boitard's name), where he encounters a Plesiosaur and an apelike ancestor and is able to interact with ancient creatures. Edward Everett Hale's "Hands Off" (1881) tells the story of an unnamed being, possibly the soul of a person who has recently died, who interferes with ancient Egyptian history by preventing Joseph's enslavement. This may have been the first story to feature an alternate history created as a result of time travel. Early time machines One of the first stories to feature time travel by means of a machine is "The Clock that Went Backward" by Edward Page Mitchell, which appeared in the New York Sun in 1881. However, the mechanism borders on fantasy. An unusual clock, when wound, runs backwards and transports people nearby back in time. The author does not explain the origin or properties of the clock. Enrique Gaspar y Rimbau's El Anacronópete (1887) may have been the first story to feature a vessel engineered to travel through time. Andrew Sawyer has commented that the story "does seem to be the first literary description of a time machine noted so far", adding that "Edward Page Mitchell's story The Clock That Went Backward (1881) is usually described as the first time-machine story, but I'm not sure that a clock quite counts". H. G. Wells' The Time Machine (1895) popularized the concept of time travel by mechanical means. Time travel in physics Some solutions to Einstein's equations for general relativity suggest that suitable geometries of spacetime or specific types of motion in space might allow time travel into the past and future if these geometries or motions were possible. In technical papers, physicists discuss the possibility of closed timelike curves, which are world lines that form closed loops in spacetime, allowing objects to return to their own past. There are known to be solutions to the equations of general relativity that describe spacetimes which contain closed timelike curves, such as Gödel spacetime, but the physical plausibility of these solutions is uncertain. Any theory that would allow backward time travel would introduce potential problems of causality. The classic example of a problem involving causality is the "grandfather paradox," which postulates travelling to the past and intervening in the conception of one's ancestors (causing the death of an ancestor before conception being frequently cited). Some physicists, such as Novikov and Deutsch, suggested that these sorts of temporal paradoxes can be avoided through the Novikov self-consistency principle or a variation of the many-worlds interpretation with interacting worlds. General relativity Time travel to the past is theoretically possible in certain general relativity spacetime geometries that permit traveling faster than the speed of light, such as cosmic strings, traversable wormholes, and Alcubierre drives. 
The theory of general relativity does suggest a scientific basis for the possibility of backward time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed. These semiclassical arguments led Stephen Hawking to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel, but physicists cannot come to a definitive judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory. Different spacetime geometries The theory of general relativity describes the universe under a system of field equations that determine the metric, or distance function, of spacetime. There exist exact solutions to these equations that include closed time-like curves, which are world lines that intersect themselves; some point in the causal future of the world line is also in its causal past, a situation that can be described as time travel. Such a solution was first proposed by Kurt Gödel, a solution known as the Gödel metric, but his (and others') solution requires the universe to have physical characteristics that it does not appear to have, such as rotation and lack of Hubble expansion. Whether general relativity forbids closed time-like curves for all realistic conditions is still being researched. Wormholes Wormholes are a hypothetical warped spacetime permitted by the Einstein field equations of general relativity. A proposed time-travel machine using a traversable wormhole would hypothetically work in the following way: One end of the wormhole is accelerated to some significant fraction of the speed of light, perhaps with some advanced propulsion system, and then brought back to the point of origin. Alternatively, another way is to take one entrance of the wormhole and move it to within the gravitational field of an object that has higher gravity than the other entrance, and then return it to a position near the other entrance. For both these methods, time dilation causes the end of the wormhole that has been moved to have aged less, or become "younger", than the stationary end as seen by an external observer; however, time connects differently through the wormhole than outside it, so that synchronized clocks at either end of the wormhole will always remain synchronized as seen by an observer passing through the wormhole, no matter how the two ends move around. This means that an observer entering the "younger" end would exit the "older" end at a time when it was the same age as the "younger" end, effectively going back in time as seen by an observer from the outside. One significant limitation of such a time machine is that it is only possible to go as far back in time as the initial creation of the machine; in essence, it is more of a path through time than it is a device that itself moves through time, and it would not allow the technology itself to be moved backward in time. According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as "exotic matter". More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions, such as the null energy condition along with the weak, strong, and dominant energy conditions. 
However, it is known that quantum effects can lead to small measurable violations of the null energy condition, and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics. Although early calculations suggested that a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small. In 1993, Matt Visser argued that the two mouths of a wormhole with such an induced clock difference could not be brought together without inducing quantum field and gravitational effects that would either make the wormhole collapse or the two mouths repel each other. Because of this, the two mouths could not be brought close enough for causality violation to take place. However, in a 1997 paper, Visser hypothesized that a complex "Roman ring" (named after Tom Roman) configuration of an N number of wormholes arranged in a symmetric polygon could still act as a time machine, although he concludes that this is more likely a flaw in classical quantum gravity theory rather than proof that causality violation is possible. Other approaches based on general relativity Another approach involves a dense spinning cylinder usually referred to as a Tipler cylinder, a GR solution discovered by Willem Jacob van Stockum in 1936 and Kornel Lanczos in 1924, but not recognized as allowing closed timelike curves until an analysis by Frank Tipler in 1974. If a cylinder is infinitely long and spins fast enough about its long axis, then a spaceship flying around the cylinder on a spiral path could travel back in time (or forward, depending on the direction of its spiral). However, the density and speed required is so great that ordinary matter is not strong enough to construct it. Physicist Ronald Mallett is attempting to recreate the conditions of a rotating black hole with ring lasers, in order to bend spacetime and allow for time travel. A more fundamental objection to time travel schemes based on rotating cylinders or cosmic strings has been put forward by Stephen Hawking, who proved a theorem showing that according to general relativity it is impossible to build a time machine of a special type (a "time machine with the compactly generated Cauchy horizon") in a region where the weak energy condition is satisfied, meaning that the region contains no matter with negative energy density (exotic matter). Solutions such as Tipler's assume cylinders of infinite length, which are easier to analyze mathematically, and although Tipler suggested that a finite cylinder might produce closed timelike curves if the rotation rate were fast enough, he did not prove this. But Hawking points out that because of his theorem, "it can't be done with positive energy density everywhere! I can prove that to build a finite time machine, you need negative energy." This result comes from Hawking's 1992 paper on the chronology protection conjecture, which Hawking states as "The laws of physics do not allow the appearance of closed timelike curves." Quantum physics No-communication theorem When a signal is sent from one location and received at another location, then as long as the signal is moving at the speed of light or slower, the mathematics of simultaneity in the theory of relativity show that all reference frames agree that the transmission-event happened before the reception-event. When the signal travels faster than light, it is received before it is sent, in all reference frames. 
The signal could be said to have moved backward in time. This hypothetical scenario is sometimes referred to as a tachyonic antitelephone. Quantum-mechanical phenomena such as quantum teleportation, the EPR paradox, or quantum entanglement might appear to create a mechanism that allows for faster-than-light (FTL) communication or time travel, and in fact some interpretations of quantum mechanics such as the Bohm interpretation presume that some information is being exchanged between particles instantaneously in order to maintain correlations between particles. This effect was referred to as "spooky action at a distance" by Einstein. Nevertheless, the fact that causality is preserved in quantum mechanics is a rigorous result in modern quantum field theories, and therefore modern theories do not allow for time travel or FTL communication. In any specific instance where FTL has been claimed, more detailed analysis has proven that to get a signal, some form of classical communication must also be used. The no-communication theorem also gives a general proof that quantum entanglement cannot be used to transmit information faster than classical signals. Interacting many-worlds interpretation A variation of Hugh Everett's many-worlds interpretation (MWI) of quantum mechanics provides a resolution to the grandfather paradox that involves the time traveler arriving in a different universe than the one they came from; it's been argued that since the traveler arrives in a different universe's history and not their own history, this is not "genuine" time travel. The accepted many-worlds interpretation suggests that all possible quantum events can occur in mutually exclusive histories. However, some variations allow different universes to interact. This concept is most often used in science-fiction, but some physicists such as David Deutsch have suggested that a time traveler should end up in a different history than the one he started from. On the other hand, Stephen Hawking has argued that even if the MWI is correct, we should expect each time traveler to experience a single self-consistent history, so that time travelers remain within their own world rather than traveling to a different one. The physicist Allen Everett argued that Deutsch's approach "involves modifying fundamental principles of quantum mechanics; it certainly goes beyond simply adopting the MWI". Everett also argues that even if Deutsch's approach is correct, it would imply that any macroscopic object composed of multiple particles would be split apart when traveling back in time through a wormhole, with different particles emerging in different worlds. Experimental results Certain experiments carried out give the impression of reversed causality, but fail to show it under closer examination. The delayed-choice quantum eraser experiment performed by Marlan Scully involves pairs of entangled photons that are divided into "signal photons" and "idler photons", with the signal photons emerging from one of two locations and their position later measured as in the double-slit experiment. Depending on how the idler photon is measured, the experimenter can either learn which of the two locations the signal photon emerged from or "erase" that information. Even though the signal photons can be measured before the choice has been made about the idler photons, the choice seems to retroactively determine whether or not an interference pattern is observed when one correlates measurements of idler photons to the corresponding signal photons. 
However, since interference can be observed only after the idler photons are measured and they are correlated with the signal photons, there is no way for experimenters to tell what choice will be made in advance just by looking at the signal photons, only by gathering classical information from the entire system; thus causality is preserved. The experiment of Lijun Wang might also show causality violation since it made it possible to send packages of waves through a bulb of caesium gas in such a way that the package appeared to exit the bulb 62 nanoseconds before its entry, but a wave package is not a single well-defined object but rather a sum of multiple waves of different frequencies (see Fourier analysis), and the package can appear to move faster than light or even backward in time even if none of the pure waves in the sum do so. This effect cannot be used to send any matter, energy, or information faster than light, so this experiment is understood not to violate causality either. The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Koblenz, claim to have violated Einstein's theory of relativity by transmitting photons faster than the speed of light. They say they have conducted an experiment in which microwave photons traveled "instantaneously" between a pair of prisms that had been moved up to apart, using a phenomenon known as quantum tunneling. Nimtz told New Scientist magazine: "For the time being, this is the only violation of special relativity that I know of." However, other physicists say that this phenomenon does not allow information to be transmitted faster than light. Aephraim M. Steinberg, a quantum optics expert at the University of Toronto, Canada, uses the analogy of a train traveling from Chicago to New York, but dropping off train cars at each station along the way, so that the center of the train moves forward at each stop; in this way, the speed of the center of the train exceeds the speed of any of the individual cars. Shengwang Du claims in a peer-reviewed journal to have observed single photons' precursors, saying that they travel no faster than c in a vacuum. His experiment involved slow light as well as passing light through a vacuum. He generated two single photons, passing one through rubidium atoms that had been cooled with a laser (thus slowing the light) and passing one through a vacuum. Both times, apparently, the precursors preceded the photons' main bodies, and the precursor traveled at c in a vacuum. According to Du, this implies that there is no possibility of light traveling faster than c and, thus, no possibility of violating causality. Absence of time travelers from the future Many have argued that the absence of time travelers from the future demonstrates that such technology will never be developed, suggesting that it is impossible. This is analogous to the Fermi paradox related to the absence of evidence of extraterrestrial life. As the absence of extraterrestrial visitors does not categorically prove they do not exist, so the absence of time travelers fails to prove time travel is physically impossible; it might be that time travel is physically possible but is never developed or is cautiously used. Carl Sagan once suggested the possibility that time travelers could be here but are disguising their existence or are not recognized as time travelers. 
Some versions of general relativity suggest that time travel might only be possible in a region of spacetime that is warped a certain way, and hence time travelers would not be able to travel back to earlier regions in spacetime, before this region existed. Stephen Hawking stated that this would explain why the world has not already been overrun by "tourists from the future". Several experiments have been carried out to try to entice future humans, who might invent time travel technology, to come back and demonstrate it to people of the present time. Events such as Perth's Destination Day, MIT's Time Traveler Convention and Stephen Hawking's Reception For Time Travellers heavily publicized permanent "advertisements" of a meeting time and place for future time travelers to meet. In 1982, a group in Baltimore, Maryland, identifying itself as the Krononauts, hosted an event of this type welcoming visitors from the future. These experiments only stood the possibility of generating a positive result demonstrating the existence of time travel, but have failed so far—no time travelers are known to have attended either event. Some versions of the many-worlds interpretation can be used to suggest that future humans have traveled back in time, but have traveled back to the meeting time and place in a parallel universe. Time dilation There is a great deal of observable evidence for time dilation in special relativity and gravitational time dilation in general relativity, for example in the famous and easy-to-replicate observation of atmospheric muon decay. The theory of relativity states that the speed of light is invariant for all observers in any frame of reference; that is, it is always the same. Time dilation is a direct consequence of the invariance of the speed of light. Time dilation may be regarded in a limited sense as "time travel into the future": a person may use time dilation so that a small amount of proper time passes for them, while a large amount of proper time passes elsewhere. This can be achieved by traveling at relativistic speeds or through the effects of gravity. For two identical clocks moving relative to each other without accelerating, each clock measures the other to be ticking slower. This is possible due to the relativity of simultaneity. However, the symmetry is broken if one clock accelerates, allowing for less proper time to pass for one clock than the other. The twin paradox describes this: one twin remains on Earth, while the other undergoes acceleration to relativistic speed as they travel into space, turn around, and travel back to Earth; the traveling twin ages less than the twin who stayed on Earth, because of the time dilation experienced during their acceleration. General relativity treats the effects of acceleration and the effects of gravity as equivalent, and shows that time dilation also occurs in gravity wells, with a clock deeper in the well ticking more slowly; this effect is taken into account when calibrating the clocks on the satellites of the Global Positioning System, and it could lead to significant differences in rates of aging for observers at different distances from a large gravity well such as a black hole. A time machine that utilizes this principle might be, for instance, a spherical shell with a diameter of five meters and the mass of Jupiter. A person at its center will travel forward in time at a rate four times slower than that of distant observers. 
Squeezing the mass of a large planet into such a small structure is not expected to be within humanity's technological capabilities in the near future. With current technologies, it is only possible to cause a human traveler to age less than companions on Earth by a few milliseconds after a few hundred days of space travel. Philosophy Philosophers have discussed the philosophy of space and time since at least the time of ancient Greece; for example, Parmenides presented the view that time is an illusion. Centuries later, Isaac Newton supported the idea of absolute time, while his contemporary Gottfried Wilhelm Leibniz maintained that time is only a relation between events and it cannot be expressed independently. The latter approach eventually gave rise to the spacetime of relativity. Presentism vs. eternalism Many philosophers have argued that relativity implies eternalism, the idea that the past and future exist in a real sense, not only as changes that occurred or will occur to the present. Philosopher of science Dean Rickles disagrees with some qualifications, but notes that "the consensus among philosophers seems to be that special and general relativity are incompatible with presentism". Some philosophers view time as a dimension equal to spatial dimensions, that future events are "already there" in the same sense different places exist, and that there is no objective flow of time; however, this view is disputed. Presentism is a school of philosophy that holds that the future and the past exist only as changes that occurred or will occur to the present, and they have no real existence of their own. In this view, time travel is impossible because there is no future or past to travel to. Keller and Nelson have argued that even if past and future objects do not exist, there can still be definite truths about past and future events, and thus it is possible that a future truth about a time traveler deciding to travel back to the present date could explain the time traveler's actual appearance in the present; these views are contested by some authors. The grandfather paradox A common objection to the idea of traveling back in time is put forth in the grandfather paradox or the argument of auto-infanticide. If one were able to go back in time, inconsistencies and contradictions would ensue if the time traveler were to change anything; there is a contradiction if the past becomes different from the way it is. The paradox is commonly described with a person who travels to the past and kills their own grandfather, prevents the existence of their father or mother, and therefore their own existence. Philosophers question whether these paradoxes prove time travel impossible. Some philosophers answer these paradoxes by arguing that it might be the case that backward time travel could be possible but that it would be impossible to actually change the past in any way, an idea similar to the proposed Novikov self-consistency principle in physics. Ontological paradox Compossibility According to the philosophical theory of compossibility, what can happen, for example in the context of time travel, must be weighed against the context of everything relating to the situation. If the past is a certain way, it's not possible for it to be any other way. What can happen when a time traveler visits the past is limited to what did happen, in order to prevent logical contradictions. 
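As a numerical footnote to the time dilation discussion above, the "few milliseconds after a few hundred days of space travel" figure can be reproduced with a minimal special-relativity sketch. The cruise speed (about 7.7 km/s, roughly low-Earth orbital speed) and the 300-day duration are illustrative assumptions, and gravitational time dilation is ignored.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def proper_time_deficit(speed_m_s: float, coordinate_time_s: float) -> float:
    """How much less proper time (seconds) elapses for a traveler moving at
    the given speed than for an observer at rest, over the given coordinate
    time (special relativity only, constant speed assumed)."""
    gamma = 1.0 / math.sqrt(1.0 - (speed_m_s / C) ** 2)
    return coordinate_time_s * (1.0 - 1.0 / gamma)

if __name__ == "__main__":
    speed = 7_700.0                      # m/s, roughly orbital speed (assumed)
    days = 300
    deficit = proper_time_deficit(speed, days * 86_400.0)
    print(f"Over {days} days at {speed / 1000:.1f} km/s the traveler ages "
          f"about {deficit * 1000:.1f} ms less")   # on the order of milliseconds
```

At such small speeds the deficit is essentially v²t/2c², which is why even years of spaceflight shift a traveler's clock by only milliseconds relative to Earth.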
Self-consistency principle The Novikov self-consistency principle, named after Igor Dmitrievich Novikov, states that any actions taken by a time traveler or by an object that travels back in time were part of history all along, and therefore it is impossible for the time traveler to "change" history in any way. The time traveler's actions may be the cause of events in their own past though, which leads to the potential for circular causation, sometimes called a predestination paradox, ontological paradox, or bootstrap paradox. The term bootstrap paradox was popularized by Robert A. Heinlein's story "By His Bootstraps". The Novikov self-consistency principle proposes that the local laws of physics in a region of spacetime containing time travelers cannot be any different from the local laws of physics in any other region of spacetime. The philosopher Kelley L. Ross argues in "Time Travel Paradoxes" that in a scenario involving a physical object whose world-line or history forms a closed loop in time there can be a violation of the second law of thermodynamics. Ross uses the film Somewhere in Time as an example of such an ontological paradox, where a watch is given to a person, and 60 years later the same watch is brought back in time and given to the same character. Ross states that entropy of the watch will increase, and the watch carried back in time will be more worn with each repetition of its history. The second law of thermodynamics is understood by modern physicists to be a statistical law, so decreasing entropy and non-increasing entropy are not impossible, just improbable. Additionally, entropy statistically increases in systems which are isolated, so non-isolated systems, such as an object, that interact with the outside world, can become less worn and decrease in entropy, and it's possible for an object whose world-line forms a closed loop to be always in the same condition in the same point of its history. In 2005, Daniel Greenberger and Karl Svozil proposed that quantum theory gives a model for time travel where the past must be self-consistent. See also Claims of time travel Time travel claims and urban legends Parapsychology Culture Time capsule Fiction Time travel in fiction List of time travel works of fiction Time viewer Meetings Time Traveler Convention Hawking's time traveller party Science Krasnikov tube Retrocausality Ring singularity Temporal paradox Wheeler–Feynman absorber theory Time perception Cryonics Suspended animation Time perception Further reading Time Travel: A History – book by James Gleick References External links (Transcript) Black holes, Wormholes and Time Travel, a Royal Society Lecture Time Travel and Modern Physics at the Stanford Encyclopedia of Philosophy Time Travel at the Internet Encyclopedia of Philosophy Philosophy of physics
Time travel
[ "Physics" ]
6,765
[ "Philosophy of physics", "Applied and interdisciplinary physics", "Physical quantities", "Time", "Time travel", "Spacetime" ]
31,880
https://en.wikipedia.org/wiki/Universe
The universe is all of space and time and their contents. It comprises all of existence, any fundamental interaction, physical process and physical constant, and therefore all forms of matter and energy, and the structures they form, from sub-atomic particles to entire galactic filaments. Since the early 20th century, the field of cosmology has established that space and time emerged together at the Big Bang about 13.8 billion years ago and that the universe has been expanding since then. The portion of the universe that can be seen by humans is approximately 93 billion light-years in diameter at present, but the total size of the universe is not known.

Some of the earliest cosmological models of the universe were developed by ancient Greek and Indian philosophers and were geocentric, placing Earth at the center. Over the centuries, more precise astronomical observations led Nicolaus Copernicus to develop the heliocentric model with the Sun at the center of the Solar System. In developing the law of universal gravitation, Isaac Newton built upon Copernicus's work as well as Johannes Kepler's laws of planetary motion and observations by Tycho Brahe. Further observational improvements led to the realization that the Sun is one of a few hundred billion stars in the Milky Way, which is one of a few hundred billion galaxies in the observable universe. Many of the stars in a galaxy have planets. At the largest scale, galaxies are distributed uniformly and the same in all directions, meaning that the universe has neither an edge nor a center. At smaller scales, galaxies are distributed in clusters and superclusters which form immense filaments and voids in space, creating a vast foam-like structure.

Discoveries in the early 20th century have suggested that the universe had a beginning and has been expanding since then. According to the Big Bang theory, the energy and matter initially present have become less dense as the universe expanded. After an initial accelerated expansion called the inflationary epoch at around 10⁻³² seconds, and the separation of the four known fundamental forces, the universe gradually cooled and continued to expand, allowing the first subatomic particles and simple atoms to form. Giant clouds of hydrogen and helium were gradually drawn to the places where matter was most dense, forming the first galaxies, stars, and everything else seen today. From studying the effects of gravity on both matter and light, it has been discovered that the universe contains much more matter than is accounted for by visible objects: stars, galaxies, nebulas and interstellar gas. This unseen matter is known as dark matter. In the widely accepted ΛCDM cosmological model, dark matter accounts for about 27% of the mass and energy in the universe, while about 68% is dark energy, a mysterious form of energy responsible for the acceleration of the expansion of the universe. Ordinary ('baryonic') matter therefore composes only about 5% of the universe. Stars, planets, and visible gas clouds only form about 6% of this ordinary matter.

There are many competing hypotheses about the ultimate fate of the universe and about what, if anything, preceded the Big Bang, while other physicists and philosophers refuse to speculate, doubting that information about prior states will ever be accessible. Some physicists have suggested various multiverse hypotheses, in which the universe might be one among many.

Definition
The physical universe is defined as all of space and time (collectively referred to as spacetime) and their contents.
Such contents comprise all of energy in its various forms, including electromagnetic radiation and matter, and therefore planets, moons, stars, galaxies, and the contents of intergalactic space. The universe also includes the physical laws that influence energy and matter, such as conservation laws, classical mechanics, and relativity. The universe is often defined as "the totality of existence", or everything that exists, everything that has existed, and everything that will exist. In fact, some philosophers and scientists support the inclusion of ideas and abstract concepts—such as mathematics and logic—in the definition of the universe. The word universe may also refer to concepts such as the cosmos, the world, and nature. Etymology The word universe derives from the Old French word , which in turn derives from the Latin word , meaning 'combined into one'. The Latin word 'universum' was used by Cicero and later Latin authors in many of the same senses as the modern English word is used. Synonyms A term for universe among the ancient Greek philosophers from Pythagoras onwards was () 'the all', defined as all matter and all space, and () 'all things', which did not necessarily include the void. Another synonym was () meaning 'the world, the cosmos'. Synonyms are also found in Latin authors (, , ) and survive in modern languages, e.g., the German words , , and for universe. The same synonyms are found in English, such as everything (as in the theory of everything), the cosmos (as in cosmology), the world (as in the many-worlds interpretation), and nature (as in natural laws or natural philosophy). Chronology and the Big Bang The prevailing model for the evolution of the universe is the Big Bang theory. The Big Bang model states that the earliest state of the universe was an extremely hot and dense one, and that the universe subsequently expanded and cooled. The model is based on general relativity and on simplifying assumptions such as the homogeneity and isotropy of space. A version of the model with a cosmological constant (Lambda) and cold dark matter, known as the Lambda-CDM model, is the simplest model that provides a reasonably good account of various observations about the universe. The initial hot, dense state is called the Planck epoch, a brief period extending from time zero to one Planck time unit of approximately 10−43 seconds. During the Planck epoch, all types of matter and all types of energy were concentrated into a dense state, and gravity—currently the weakest by far of the four known forces—is believed to have been as strong as the other fundamental forces, and all the forces may have been unified. The physics controlling this very early period (including quantum gravity in the Planck epoch) is not understood, so we cannot say what, if anything, happened before time zero. Since the Planck epoch, the universe has been expanding to its present scale, with a very short but intense period of cosmic inflation speculated to have occurred within the first 10−32 seconds. This initial period of inflation would explain why space appears to be very flat. Within the first fraction of a second of the universe's existence, the four fundamental forces had separated. As the universe continued to cool from its inconceivably hot state, various types of subatomic particles were able to form in short periods of time known as the quark epoch, the hadron epoch, and the lepton epoch. Together, these epochs encompassed less than 10 seconds of time following the Big Bang. 
These elementary particles associated stably into ever larger combinations, including stable protons and neutrons, which then formed more complex atomic nuclei through nuclear fusion. This process, known as Big Bang nucleosynthesis, lasted for about 17 minutes and ended about 20 minutes after the Big Bang, so only the fastest and simplest reactions occurred. About 25% of the protons and all the neutrons in the universe, by mass, were converted to helium, with small amounts of deuterium (a form of hydrogen) and traces of lithium. Other elements were formed only in very tiny quantities. The other 75% of the protons remained unaffected, as hydrogen nuclei. After nucleosynthesis ended, the universe entered a period known as the photon epoch. During this period, the universe was still far too hot for matter to form neutral atoms, so it contained a hot, dense, foggy plasma of negatively charged electrons, neutral neutrinos and positive nuclei. After about 377,000 years, the universe had cooled enough that electrons and nuclei could form the first stable atoms. This is known as recombination for historical reasons, even though electrons and nuclei were combining for the first time. Unlike plasma, neutral atoms are transparent to many wavelengths of light, so for the first time the universe also became transparent. The photons released ("decoupled") when these atoms formed can still be seen today; they form the cosmic microwave background (CMB). As the universe expands, the energy density of electromagnetic radiation decreases more quickly than does that of matter because the energy of each photon decreases as it is cosmologically redshifted. At around 47,000 years, the energy density of matter became larger than that of photons and neutrinos, and began to dominate the large-scale behavior of the universe. This marked the end of the radiation-dominated era and the start of the matter-dominated era. In the earliest stages of the universe, tiny fluctuations within the universe's density led to concentrations of dark matter gradually forming. Ordinary matter, attracted to these by gravity, formed large gas clouds and eventually stars and galaxies where the dark matter was most dense, and voids where it was least dense. After around 100–300 million years, the first stars formed, known as Population III stars. These were probably very massive, luminous, non-metallic and short-lived. They were responsible for the gradual reionization of the universe between about 200–500 million years and 1 billion years, and also for seeding the universe with elements heavier than helium, through stellar nucleosynthesis. The universe also contains a mysterious energy—possibly a scalar field—called dark energy, the density of which does not change over time. After about 9.8 billion years, the universe had expanded sufficiently so that the density of matter was less than the density of dark energy, marking the beginning of the present dark-energy-dominated era. In this era, the expansion of the universe is accelerating due to dark energy.

Physical properties
Of the four fundamental interactions, gravitation is the dominant interaction at astronomical length scales. Gravity's effects are cumulative; by contrast, the effects of positive and negative charges tend to cancel one another, making electromagnetism relatively insignificant on astronomical length scales. The remaining two interactions, the weak and strong nuclear forces, decline very rapidly with distance; their effects are confined mainly to sub-atomic length scales.
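The roughly 25% helium mass fraction quoted in the chronology above follows from simple nucleon counting, under the standard textbook assumptions that the neutron-to-proton ratio had frozen out near 1:7 when nucleosynthesis began and that essentially all surviving neutrons were bound into helium-4 (a back-of-the-envelope estimate, not a full nucleosynthesis calculation):

\[
Y_p \;\approx\; \frac{2\,(n/p)}{1+(n/p)} \;=\; \frac{2 \times \tfrac{1}{7}}{1+\tfrac{1}{7}} \;=\; \frac{2}{8} \;=\; 0.25 .
\]

Each helium-4 nucleus carries two neutrons and two protons, so the helium mass fraction is roughly twice the neutron fraction of all nucleons, giving the one-quarter figure; the remaining protons stay as hydrogen nuclei.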
The universe appears to have much more matter than antimatter, an asymmetry possibly related to CP violation. This imbalance between matter and antimatter is partially responsible for the existence of all the matter present today, since matter and antimatter, if equally produced at the Big Bang, would have completely annihilated each other and left only photons as a result of their interaction. The universe also appears to have neither net momentum nor angular momentum, which follows accepted physical laws if the universe is finite. These laws are Gauss's law and the non-divergence of the stress–energy–momentum pseudotensor. Size and regions Due to the finite speed of light, there is a limit (known as the particle horizon) to how far light can travel over the age of the universe. The spatial region from which we can receive light is called the observable universe. The proper distance (measured at a fixed time) between Earth and the edge of the observable universe is 46 billion light-years (14 billion parsecs), making the diameter of the observable universe about 93 billion light-years (28 billion parsecs). Although the distance traveled by light from the edge of the observable universe is close to the age of the universe times the speed of light, the proper distance is larger because the edge of the observable universe and the Earth have since moved further apart. For comparison, the diameter of a typical galaxy is 30,000 light-years (9,198 parsecs), and the typical distance between two neighboring galaxies is 3 million light-years (919.8 kiloparsecs). As an example, the Milky Way is roughly 100,000–180,000 light-years in diameter, and the nearest sister galaxy to the Milky Way, the Andromeda Galaxy, is located roughly 2.5 million light-years away. Because humans cannot observe space beyond the edge of the observable universe, it is unknown whether the size of the universe in its totality is finite or infinite. Estimates suggest that the whole universe, if finite, must be more than 250 times larger than a Hubble sphere. Some disputed estimates for the total size of the universe, if finite, reach as high as 10^(10^(10^122)) megaparsecs, as implied by a suggested resolution of the No-Boundary Proposal. Age and expansion Assuming that the Lambda-CDM model is correct, the measurements of the parameters using a variety of techniques by numerous experiments yield a best value of the age of the universe at 13.799 ± 0.021 billion years, as of 2015. Over time, the universe and its contents have evolved. For example, the relative population of quasars and galaxies has changed and the universe has expanded. This expansion is inferred from the observation that the light from distant galaxies has been redshifted, which implies that the galaxies are receding from us. Analyses of Type Ia supernovae indicate that the expansion is accelerating. The more matter there is in the universe, the stronger the mutual gravitational pull of the matter. If the universe were too dense then it would re-collapse into a gravitational singularity. However, if the universe contained too little matter then the self-gravity would be too weak for astronomical structures, like galaxies or planets, to form. Since the Big Bang, the universe has expanded monotonically. Perhaps unsurprisingly, our universe has just the right mass–energy density, equivalent to about 5 protons per cubic meter, which has allowed it to expand for the last 13.8 billion years, giving time to form the universe as observed today. There are dynamical forces acting on the particles in the universe which affect the expansion rate.
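Both the roughly 46-billion-light-year proper distance to the edge of the observable universe and the 13.8-billion-year age quoted above follow from integrating the Lambda-CDM expansion history. The sketch below is illustrative only and assumes round, roughly Planck-like parameters (H0 = 67.7 km/s/Mpc, Ωm = 0.31, ΩΛ = 0.69, Ωr ≈ 9 × 10⁻⁵), not any particular published fit:

```python
import numpy as np
from scipy.integrate import quad

# Assumed flat Lambda-CDM parameters (illustrative, roughly Planck-like)
H0 = 67.7 * 1.0e3 / 3.0857e22    # Hubble constant, km/s/Mpc converted to 1/s
Om, Or, OL = 0.31, 9.0e-5, 0.69  # matter, radiation, dark-energy density parameters
c = 2.99792458e8                 # speed of light, m/s

def H(z):
    """Hubble rate at redshift z for a flat Lambda-CDM universe."""
    return H0 * np.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4 + OL)

# Comoving distance to the particle horizon: D = c * integral of dz / H(z) from 0 to infinity
D, _ = quad(lambda z: c / H(z), 0, np.inf)
# Age of the universe: t0 = integral of dz / ((1 + z) H(z)) from 0 to infinity
t0, _ = quad(lambda z: 1.0 / ((1 + z) * H(z)), 0, np.inf)

ly = 9.4607e15   # metres per light-year
yr = 3.1557e7    # seconds per Julian year
print(f"comoving distance to horizon ~ {D / ly / 1e9:.1f} billion light-years")  # ~46
print(f"age of the universe ~ {t0 / yr / 1e9:.1f} billion years")                # ~13.8
```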
Before 1998, it was expected that the expansion rate would be decreasing as time went on due to the influence of gravitational interactions in the universe; thus there is an additional observable quantity in the universe called the deceleration parameter, which most cosmologists expected to be positive and related to the matter density of the universe. In 1998, the deceleration parameter was measured by two different groups to be negative, approximately −0.55, which technically implies that the second derivative of the cosmic scale factor has been positive in the last 5–6 billion years. Spacetime Modern physics regards events as being organized into spacetime. This idea originated with the special theory of relativity, which predicts that if one observer sees two events happening in different places at the same time, a second observer who is moving relative to the first will see those events happening at different times. The two observers will disagree on the time between the events, and they will disagree about the distance separating the events, but they will agree on the speed of light c, and they will measure the same value for the combination c²Δt² − Δx². The square root of the absolute value of this quantity is called the interval between the two events. The interval expresses how widely separated events are, not just in space or in time, but in the combined setting of spacetime. The special theory of relativity cannot account for gravity. Its successor, the general theory of relativity, explains gravity by recognizing that spacetime is not fixed but instead dynamical. In general relativity, gravitational force is reimagined as curvature of spacetime. A curved path like an orbit is not the result of a force deflecting a body from an ideal straight-line path, but rather the body's attempt to fall freely through a background that is itself curved by the presence of other masses. A remark by John Archibald Wheeler that has become proverbial among physicists summarizes the theory: "Spacetime tells matter how to move; matter tells spacetime how to curve", and therefore there is no point in considering one without the other. The Newtonian theory of gravity is a good approximation to the predictions of general relativity when gravitational effects are weak and objects are moving slowly compared to the speed of light. The relation between matter distribution and spacetime curvature is given by the Einstein field equations, which require tensor calculus to express. The universe appears to be a smooth spacetime continuum consisting of three spatial dimensions and one temporal (time) dimension. Therefore, an event in the spacetime of the physical universe can be identified by a set of four coordinates: (x, y, z, t). On average, space is observed to be very nearly flat (with a curvature close to zero), meaning that Euclidean geometry is empirically true with high accuracy throughout most of the universe. Spacetime also appears to have a simply connected topology, in analogy with a sphere, at least on the length scale of the observable universe. However, present observations cannot exclude the possibilities that the universe has more dimensions (which is postulated by theories such as string theory) and that its spacetime may have a multiply connected global topology, in analogy with the cylindrical or toroidal topologies of two-dimensional spaces.
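The invariance of the combination c²Δt² − Δx² described above can be checked numerically for any pair of events. A minimal sketch, with made-up event coordinates and an arbitrarily chosen boost velocity:

```python
import math

c = 2.99792458e8          # speed of light, m/s
v = 0.6 * c               # relative velocity of the second observer (illustrative)
gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)

# Separation of two events in the first observer's frame (illustrative values)
dt, dx = 2.0, 1.0e8       # seconds, metres

# Lorentz boost along x: the same separation as seen by the moving observer
dt2 = gamma * (dt - v * dx / c**2)
dx2 = gamma * (dx - v * dt)

# Both observers obtain the same value of c^2*dt^2 - dx^2 (the squared interval)
print(c**2 * dt**2 - dx**2)
print(c**2 * dt2**2 - dx2**2)   # agrees up to floating-point rounding
```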
Shape General relativity describes how spacetime is curved and bent by mass and energy (gravity). The topology or geometry of the universe includes both local geometry in the observable universe and global geometry. Cosmologists often work with a given space-like slice of spacetime, described in comoving coordinates. The section of spacetime which can be observed is the backward light cone, which delimits the cosmological horizon. The cosmological horizon, also called the particle horizon or the light horizon, is the maximum distance from which particles can have traveled to the observer in the age of the universe. This horizon represents the boundary between the observable and the unobservable regions of the universe. An important parameter determining the future evolution of the universe is the density parameter, Omega (Ω), defined as the average matter density of the universe divided by a critical value of that density. This selects one of three possible geometries depending on whether Ω is equal to, less than, or greater than 1. These are called, respectively, the flat, open and closed universes. Observations, including the Cosmic Background Explorer (COBE), Wilkinson Microwave Anisotropy Probe (WMAP), and Planck maps of the CMB, suggest that the universe is infinite in extent with a finite age, as described by the Friedmann–Lemaître–Robertson–Walker (FLRW) models. These FLRW models thus support inflationary models and the standard model of cosmology, describing a flat, homogeneous universe presently dominated by dark matter and dark energy. Support of life The fine-tuned universe hypothesis is the proposition that the conditions that allow the existence of observable life in the universe can only occur when certain universal fundamental physical constants lie within a very narrow range of values. According to this hypothesis, if any of several fundamental constants were only slightly different, the universe would have been unlikely to be conducive to the establishment and development of matter, astronomical structures, elemental diversity, or life as it is understood. Whether this is true, and whether that question is even logically meaningful to ask, are subjects of much debate. The proposition is discussed among philosophers, scientists, theologians, and proponents of creationism. Composition The universe is composed almost completely of dark energy, dark matter, and ordinary matter. Other contents are electromagnetic radiation (estimated to constitute from 0.005% to close to 0.01% of the total mass–energy of the universe) and antimatter. The proportions of all types of matter and energy have changed over the history of the universe. The total amount of electromagnetic radiation generated within the universe has decreased by 1/2 in the past 2 billion years. Today, ordinary matter, which includes atoms, stars, galaxies, and life, accounts for only 4.9% of the contents of the universe. The present overall density of this type of matter is very low, roughly 4.5 × 10⁻³¹ grams per cubic centimeter, corresponding to a density of the order of only one proton for every four cubic meters of volume. The nature of both dark energy and dark matter is unknown. Dark matter, a mysterious form of matter that has not yet been identified, accounts for 26.8% of the cosmic contents. Dark energy, which is the energy of empty space and is causing the expansion of the universe to accelerate, accounts for the remaining 68.3% of the contents.
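The critical density against which Ω and these percentages are measured follows directly from the Hubble constant, as does the figure of roughly one proton per four cubic meters for ordinary matter. A rough sketch, assuming a Hubble constant near current estimates:

```python
import math

G   = 6.67430e-11                 # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.67262192e-27              # proton mass, kg
H0  = 67.7 * 1.0e3 / 3.0857e22    # Hubble constant, km/s/Mpc converted to 1/s

# Critical density: rho_c = 3 H0^2 / (8 pi G); Omega = rho / rho_c
rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)
print(f"critical density ~ {rho_c:.2e} kg/m^3")          # ~8.6e-27 kg/m^3
print(f"  ~ {rho_c / m_p:.1f} proton masses per m^3")     # ~5 protons per m^3

# Ordinary (baryonic) matter is about 4.9% of the total: roughly 1 proton per 4 m^3
rho_b = 0.049 * rho_c
print(f"baryon density ~ 1 proton per {m_p / rho_b:.1f} m^3")
```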
Matter, dark matter, and dark energy are distributed homogeneously throughout the universe over length scales longer than 300 million light-years (ly) or so. However, over shorter length-scales, matter tends to clump hierarchically; many atoms are condensed into stars, most stars into galaxies, most galaxies into clusters, superclusters and, finally, large-scale galactic filaments. The observable universe contains as many as an estimated 2 trillion galaxies and, overall, as many as an estimated 10²⁴ stars – more stars (and earth-like planets) than all the grains of beach sand on planet Earth; but less than the total number of atoms estimated in the universe as 10⁸²; and the estimated total number of stars in an inflationary universe (observed and unobserved), as 10¹⁰⁰. Typical galaxies range from dwarfs with as few as ten million (10⁷) stars up to giants with one trillion (10¹²) stars. Between the larger structures are voids, which are typically 10–150 Mpc (33 million–490 million ly) in diameter. The Milky Way is in the Local Group of galaxies, which in turn is in the Laniakea Supercluster. This supercluster spans over 500 million light-years, while the Local Group spans over 10 million light-years. The universe also has vast regions of relative emptiness; the largest known void measures 1.8 billion ly (550 Mpc) across. The observable universe is isotropic on scales significantly larger than superclusters, meaning that the statistical properties of the universe are the same in all directions as observed from Earth. The universe is bathed in highly isotropic microwave radiation that corresponds to a thermal equilibrium blackbody spectrum of roughly 2.72548 kelvins. The hypothesis that the large-scale universe is homogeneous and isotropic is known as the cosmological principle. A universe that is both homogeneous and isotropic looks the same from all vantage points and has no center. Dark energy An explanation for why the expansion of the universe is accelerating remains elusive. It is often attributed to the gravitational influence of "dark energy", an unknown form of energy that is hypothesized to permeate space. On a mass–energy equivalence basis, the density of dark energy (~ 7 × 10⁻³⁰ g/cm³) is much less than the density of ordinary matter or dark matter within galaxies. However, in the present dark-energy era, it dominates the mass–energy of the universe because it is uniform across space. Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space while still permeating them enough to cause the observed rate of expansion. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to vacuum energy. Dark matter Dark matter is a hypothetical kind of matter that is invisible to the entire electromagnetic spectrum, but which accounts for most of the matter in the universe. The existence and properties of dark matter are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. Other than neutrinos, a form of hot dark matter, dark matter has not been detected directly, making it one of the greatest mysteries in modern astrophysics. Dark matter neither emits nor absorbs light or any other electromagnetic radiation at any significant level. Dark matter is estimated to constitute 26.8% of the total mass–energy and 84.5% of the total matter in the universe.
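The 2.72548 K blackbody temperature quoted above fixes the basic properties of the microwave background radiation. A short sketch applying Wien's displacement law and the standard blackbody photon number density formula (constants rounded; the figures are illustrative):

```python
import math

T    = 2.72548           # CMB temperature, K
k_B  = 1.380649e-23      # Boltzmann constant, J/K
h    = 6.62607015e-34    # Planck constant, J*s
hbar = h / (2 * math.pi)
c    = 2.99792458e8      # speed of light, m/s
b    = 2.897771955e-3    # Wien displacement constant, m*K

# Peak wavelength of the blackbody spectrum (microwave region)
print(f"peak wavelength ~ {b / T * 1e3:.2f} mm")          # ~1.06 mm

# Photon number density of a blackbody: n = (2*zeta(3)/pi^2) * (k_B*T / (hbar*c))^3
zeta3 = 1.2020569
n_photons = (2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3
print(f"CMB photons ~ {n_photons / 1e6:.0f} per cm^3")     # ~411 per cm^3
```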
Ordinary matter The remaining 4.9% of the mass–energy of the universe is ordinary matter, that is, atoms, ions, electrons and the objects they form. This matter includes stars, which produce nearly all of the light we see from galaxies, as well as interstellar gas in the interstellar and intergalactic media, planets, and all the objects from everyday life that we can bump into, touch or squeeze. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10 percent of the ordinary matter contribution to the mass–energy density of the universe. Ordinary matter commonly exists in four states (or phases): solid, liquid, gas, and plasma. However, advances in experimental techniques have revealed other previously theoretical phases, such as Bose–Einstein condensates and fermionic condensates. Ordinary matter is composed of two types of elementary particles: quarks and leptons. For example, the proton is formed of two up quarks and one down quark; the neutron is formed of two down quarks and one up quark; and the electron is a kind of lepton. An atom consists of an atomic nucleus, made up of protons and neutrons (both of which are baryons), and electrons that orbit the nucleus. Soon after the Big Bang, primordial protons and neutrons formed from the quark–gluon plasma of the early universe as it cooled below two trillion degrees. A few minutes later, in a process known as Big Bang nucleosynthesis, nuclei formed from the primordial protons and neutrons. This nucleosynthesis formed lighter elements, those with small atomic numbers up to lithium and beryllium, but the abundance of heavier elements dropped off sharply with increasing atomic number. Some boron may have been formed at this time, but the next heavier element, carbon, was not formed in significant amounts. Big Bang nucleosynthesis shut down after about 20 minutes due to the rapid drop in temperature and density of the expanding universe. Subsequent formation of heavier elements resulted from stellar nucleosynthesis and supernova nucleosynthesis. Particles Ordinary matter and the forces that act on matter can be described in terms of elementary particles. These particles are sometimes described as being fundamental, since they have an unknown substructure, and it is unknown whether or not they are composed of smaller and even more fundamental particles. In most contemporary models they are thought of as points in space. All elementary particles are currently best explained by quantum mechanics and exhibit wave–particle duality: their behavior has both particle-like and wave-like aspects, with different features dominating under different circumstances. Of central importance is the Standard Model, a theory that is concerned with electromagnetic interactions and the weak and strong nuclear interactions. The Standard Model is supported by the experimental confirmation of the existence of particles that compose matter: quarks and leptons, and their corresponding "antimatter" duals, as well as the force particles that mediate interactions: the photon, the W and Z bosons, and the gluon. The Standard Model predicted the existence of the recently discovered Higgs boson, a particle that is a manifestation of a field within the universe that can endow particles with mass. Because of its success in explaining a wide variety of experimental results, the Standard Model is sometimes regarded as a "theory of almost everything". 
The Standard Model does not, however, accommodate gravity. A true force–particle "theory of everything" has not been attained. Hadrons A hadron is a composite particle made of quarks held together by the strong force. Hadrons are categorized into two families: baryons (such as protons and neutrons) made of three quarks, and mesons (such as pions) made of one quark and one antiquark. Of the hadrons, protons are stable, and neutrons bound within atomic nuclei are stable. Other hadrons are unstable under ordinary conditions and are thus insignificant constituents of the modern universe. From approximately 10−6 seconds after the Big Bang, during a period known as the hadron epoch, the temperature of the universe had fallen sufficiently to allow quarks to bind together into hadrons, and the mass of the universe was dominated by hadrons. Initially, the temperature was high enough to allow the formation of hadron–anti-hadron pairs, which kept matter and antimatter in thermal equilibrium. However, as the temperature of the universe continued to fall, hadron–anti-hadron pairs were no longer produced. Most of the hadrons and anti-hadrons were then eliminated in particle–antiparticle annihilation reactions, leaving a small residual of hadrons by the time the universe was about one second old. Leptons A lepton is an elementary, half-integer spin particle that does not undergo strong interactions but is subject to the Pauli exclusion principle; no two leptons of the same species can be in exactly the same state at the same time. Two main classes of leptons exist: charged leptons (also known as the electron-like leptons), and neutral leptons (better known as neutrinos). Electrons are stable and the most common charged lepton in the universe, whereas muons and taus are unstable particles that quickly decay after being produced in high energy collisions, such as those involving cosmic rays or carried out in particle accelerators. Charged leptons can combine with other particles to form various composite particles such as atoms and positronium. The electron governs nearly all of chemistry, as it is found in atoms and is directly tied to all chemical properties. Neutrinos rarely interact with anything, and are consequently rarely observed. Neutrinos stream throughout the universe but rarely interact with normal matter. The lepton epoch was the period in the evolution of the early universe in which the leptons dominated the mass of the universe. It started roughly 1 second after the Big Bang, after the majority of hadrons and anti-hadrons annihilated each other at the end of the hadron epoch. During the lepton epoch the temperature of the universe was still high enough to create lepton–anti-lepton pairs, so leptons and anti-leptons were in thermal equilibrium. Approximately 10 seconds after the Big Bang, the temperature of the universe had fallen to the point where lepton–anti-lepton pairs were no longer created. Most leptons and anti-leptons were then eliminated in annihilation reactions, leaving a small residue of leptons. The mass of the universe was then dominated by photons as it entered the following photon epoch. Photons A photon is the quantum of light and all other forms of electromagnetic radiation. It is the carrier for the electromagnetic force. The effects of this force are easily observable at the microscopic and at the macroscopic level because the photon has zero rest mass; this allows long distance interactions. 
The photon epoch started after most leptons and anti-leptons were annihilated at the end of the lepton epoch, about 10 seconds after the Big Bang. Atomic nuclei were created in the process of nucleosynthesis which occurred during the first few minutes of the photon epoch. For the remainder of the photon epoch the universe contained a hot dense plasma of nuclei, electrons and photons. About 380,000 years after the Big Bang, the temperature of the universe fell to the point where nuclei could combine with electrons to create neutral atoms. As a result, photons no longer interacted frequently with matter and the universe became transparent. The highly redshifted photons from this period form the cosmic microwave background. Tiny variations in the temperature of the CMB correspond to variations in the density of the universe that were the early "seeds" from which all subsequent structure formation took place. Habitability The prevalence of life in the universe has been a frequent point of investigation in astronomy and astrobiology; it is the subject of the Drake equation, with views ranging from the Fermi paradox (the absence so far of any detected signs of extraterrestrial life) to arguments for a biophysical cosmology, a view in which life is inherent to the physical cosmology of the universe. Cosmological models Model of the universe based on general relativity General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. It is the basis of current cosmological models of the universe. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations. In general relativity, the distribution of matter and energy determines the geometry of spacetime, which in turn describes the acceleration of matter. Therefore, solutions of the Einstein field equations describe the evolution of the universe. Combined with measurements of the amount, type, and distribution of matter in the universe, the equations of general relativity describe the evolution of the universe over time. With the assumption of the cosmological principle that the universe is homogeneous and isotropic everywhere, a specific solution of the field equations that describes the universe is the metric tensor called the Friedmann–Lemaître–Robertson–Walker metric, where (r, θ, φ) correspond to a spherical coordinate system. This metric has only two undetermined parameters. An overall dimensionless length scale factor R describes the size scale of the universe as a function of time (an increase in R is the expansion of the universe), and a curvature index k describes the geometry. The index k is defined so that it can take only one of three values: 0, corresponding to flat Euclidean geometry; 1, corresponding to a space of positive curvature; or −1, corresponding to a space of negative curvature. The value of R as a function of time t depends upon k and the cosmological constant Λ. The cosmological constant represents the energy density of the vacuum of space and could be related to dark energy.
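Written out in the coordinates (r, θ, φ) used above, with R(t) the scale factor and k the curvature index, the Friedmann–Lemaître–Robertson–Walker line element takes the following form in one common convention (a standard textbook expression rather than a quotation from the source):

```latex
ds^2 = -c^2\,dt^2 + R(t)^2\left[\frac{dr^2}{1 - k r^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)\right]
```

Substituting this metric into the Einstein field equations yields the Friedmann equation for R(t) discussed next.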
The equation describing how R varies with time is known as the Friedmann equation after its inventor, Alexander Friedmann. The solutions for R(t) depend on k and Λ, but some qualitative features of such solutions are general. First and most importantly, the length scale R of the universe can remain constant only if the universe is perfectly isotropic with positive curvature (k = 1) and has one precise value of density everywhere, as first noted by Albert Einstein. Second, all solutions suggest that there was a gravitational singularity in the past, when R went to zero and matter and energy were infinitely dense. It may seem that this conclusion is uncertain because it is based on the questionable assumptions of perfect homogeneity and isotropy (the cosmological principle) and that only the gravitational interaction is significant. However, the Penrose–Hawking singularity theorems show that a singularity should exist for very general conditions. Hence, according to Einstein's field equations, R grew rapidly from an unimaginably hot, dense state that existed immediately following this singularity (when R had a small, finite value); this is the essence of the Big Bang model of the universe. Understanding the singularity of the Big Bang likely requires a quantum theory of gravity, which has not yet been formulated. Third, the curvature index k determines the sign of the curvature of constant-time spatial surfaces averaged over sufficiently large length scales (greater than about a billion light-years). If k = 1, the curvature is positive and the universe has a finite volume. A universe with positive curvature is often visualized as a three-dimensional sphere embedded in a four-dimensional space. Conversely, if k is zero or negative, the universe has an infinite volume. It may seem counter-intuitive that an infinite and yet infinitely dense universe could be created in a single instant when R = 0, but exactly that is predicted mathematically when k is nonpositive and the cosmological principle is satisfied. By analogy, an infinite plane has zero curvature but infinite area, whereas an infinite cylinder is finite in one direction and a torus is finite in both. The ultimate fate of the universe is still unknown because it depends critically on the curvature index k and the cosmological constant Λ. If the universe were sufficiently dense, k would equal +1, meaning that its average curvature throughout is positive and the universe will eventually recollapse in a Big Crunch, possibly starting a new universe in a Big Bounce. Conversely, if the universe were insufficiently dense, k would equal 0 or −1 and the universe would expand forever, cooling off and eventually reaching the Big Freeze and the heat death of the universe. Modern data suggests that the expansion of the universe is accelerating; if this acceleration is sufficiently rapid, the universe may eventually reach a Big Rip. Observationally, the universe appears to be flat (k = 0), with an overall density that is very close to the critical value between recollapse and eternal expansion. Multiverse hypotheses Some speculative theories have proposed that our universe is but one of a set of disconnected universes, collectively denoted as the multiverse, challenging or enhancing more limited definitions of the universe. Max Tegmark developed a four-part classification scheme for the different types of multiverses that scientists have suggested in response to various problems in physics. 
An example of such multiverses is the one resulting from the chaotic inflation model of the early universe. Another is the multiverse resulting from the many-worlds interpretation of quantum mechanics. In this interpretation, parallel worlds are generated in a manner similar to quantum superposition and decoherence, with all states of the wave functions being realized in separate worlds. Effectively, in the many-worlds interpretation the multiverse evolves as a universal wavefunction. If the Big Bang that created our multiverse created an ensemble of multiverses, the wave function of the ensemble would be entangled in this sense. Whether scientifically meaningful probabilities can be extracted from this picture has been and continues to be a topic of much debate, and multiple versions of the many-worlds interpretation exist. The subject of the interpretation of quantum mechanics is in general marked by disagreement. The least controversial, but still highly disputed, category of multiverse in Tegmark's scheme is Level I. The multiverses of this level are composed of distant spacetime events "in our own universe". Tegmark and others have argued that, if space is infinite, or sufficiently large and uniform, identical instances of the history of Earth's entire Hubble volume occur every so often, simply by chance. Tegmark calculated that our nearest so-called doppelgänger is 10^(10^115) metres away from us (a double exponential function larger than a googolplex). However, the arguments used are of a speculative nature. It is possible to conceive of disconnected spacetimes, each existing but unable to interact with one another. An easily visualized metaphor of this concept is a group of separate soap bubbles, in which observers living on one soap bubble cannot interact with those on other soap bubbles, even in principle. According to one common terminology, each "soap bubble" of spacetime is denoted as a universe, whereas humans' particular spacetime is denoted as the universe, just as humans call Earth's moon the Moon. The entire collection of these separate spacetimes is denoted as the multiverse. With this terminology, different universes are not causally connected to each other. In principle, the other unconnected universes may have different dimensionalities and topologies of spacetime, different forms of matter and energy, and different physical laws and physical constants, although such possibilities are purely speculative. Others consider each of several bubbles created as part of chaotic inflation to be separate universes, though in this model these universes all share a causal origin. Historical conceptions Historically, there have been many ideas of the cosmos (cosmologies) and its origin (cosmogonies). Theories of an impersonal universe governed by physical laws were first proposed by the Greeks and Indians. Ancient Chinese philosophy encompassed the notion of the universe including both all of space and all of time. Over the centuries, improvements in astronomical observations and theories of motion and gravitation led to ever more accurate descriptions of the universe. The modern era of cosmology began with Albert Einstein's 1915 general theory of relativity, which made it possible to quantitatively predict the origin, evolution, and conclusion of the universe as a whole. Most modern, accepted theories of cosmology are based on general relativity and, more specifically, the predicted Big Bang. Mythologies Many cultures have stories describing the origin of the world and universe.
Cultures generally regard these stories as having some truth. There are, however, many differing beliefs in how these stories apply amongst those believing in a supernatural origin, ranging from a god directly creating the universe as it is now to a god just setting the "wheels in motion" (for example via mechanisms such as the Big Bang and evolution). Ethnologists and anthropologists who study myths have developed various classification schemes for the various themes that appear in creation stories. For example, in one type of story, the world is born from a world egg; such stories include the Finnish epic poem Kalevala, the Chinese story of Pangu or the Indian Brahmanda Purana. In related stories, the universe is created by a single entity emanating or producing something by him- or herself, as in the Tibetan Buddhism concept of Adi-Buddha, the ancient Greek story of Gaia (Mother Earth), the Aztec goddess Coatlicue myth, the ancient Egyptian god Atum story, and the Judeo-Christian Genesis creation narrative in which the Abrahamic God created the universe. In another type of story, the universe is created from the union of male and female deities, as in the Maori story of Rangi and Papa. In other stories, the universe is created by crafting it from pre-existing materials, such as the corpse of a dead god—as from Tiamat in the Babylonian epic Enuma Elish or from the giant Ymir in Norse mythology—or from chaotic materials, as in Izanagi and Izanami in Japanese mythology. In other stories, the universe emanates from fundamental principles, such as Brahman and Prakrti, and the creation myth of the Serers. Philosophical models The pre-Socratic Greek philosophers and Indian philosophers developed some of the earliest philosophical concepts of the universe. The earliest Greek philosophers noted that appearances can be deceiving, and sought to understand the underlying reality behind the appearances. In particular, they noted the ability of matter to change forms (e.g., ice to water to steam) and several philosophers proposed that all the physical materials in the world are different forms of a single primordial material, or arche. The first to do so was Thales, who proposed this material to be water. Thales' student, Anaximander, proposed that everything came from the limitless apeiron. Anaximenes proposed the primordial material to be air on account of its perceived attractive and repulsive qualities that cause the arche to condense or dissociate into different forms. Anaxagoras proposed the principle of Nous (Mind), while Heraclitus proposed fire (and spoke of logos). Empedocles proposed the elements to be earth, water, air and fire. His four-element model became very popular. Like Pythagoras, Plato believed that all things were composed of number, with Empedocles' elements taking the form of the Platonic solids. The atomists Leucippus and his student Democritus proposed that the universe is composed of indivisible atoms moving through a void (vacuum), although Aristotle did not believe that to be feasible because air, like water, offers resistance to motion. Air will immediately rush in to fill a void, and moreover, without resistance, it would do so indefinitely fast. Although Heraclitus argued for eternal change, his contemporary Parmenides emphasized changelessness.
Parmenides' poem On Nature has been read as saying that all change is an illusion, that the true underlying reality is eternally unchanging and of a single nature, or at least that the essential feature of each thing that exists must exist eternally, without origin, change, or end. His student Zeno of Elea challenged everyday ideas about motion with several famous paradoxes. Aristotle responded to these paradoxes by developing the notion of a potential countable infinity, as well as the infinitely divisible continuum. The Indian philosopher Kanada, founder of the Vaisheshika school, developed a notion of atomism and proposed that light and heat were varieties of the same substance. In the 5th century AD, the Buddhist atomist philosopher Dignāga proposed atoms to be point-sized, durationless, and made of energy. They denied the existence of substantial matter and proposed that movement consisted of momentary flashes of a stream of energy. The notion of temporal finitism was inspired by the doctrine of creation shared by the three Abrahamic religions: Judaism, Christianity and Islam. The Christian philosopher, John Philoponus, presented the philosophical arguments against the ancient Greek notion of an infinite past and future. Philoponus' arguments against an infinite past were used by the early Muslim philosopher, Al-Kindi (Alkindus); the Jewish philosopher, Saadia Gaon (Saadia ben Joseph); and the Muslim theologian, Al-Ghazali (Algazel). Pantheism is the philosophical religious belief that the universe itself is identical to divinity and a supreme being or entity. The physical universe is thus understood as an all-encompassing, immanent deity. The term 'pantheist' designates one who holds both that everything constitutes a unity and that this unity is divine, consisting of an all-encompassing, manifested god or goddess. Astronomical concepts The earliest written records of identifiable predecessors to modern astronomy come from Ancient Egypt and Mesopotamia from around 3000 to 1200 BCE. Babylonian astronomers of the 7th century BCE viewed the world as a flat disk surrounded by the ocean. Later Greek philosophers, observing the motions of the heavenly bodies, were concerned with developing models of the universe based more profoundly on empirical evidence. The first coherent model was proposed by Eudoxus of Cnidos, a student of Plato who followed Plato's idea that heavenly motions had to be circular. In order to account for the known complications of the planets' motions, particularly retrograde movement, Eudoxus' model included 27 different celestial spheres: four for each of the planets visible to the naked eye, three each for the Sun and the Moon, and one for the stars. All of these spheres were centered on the Earth, which remained motionless while they rotated eternally. Aristotle elaborated upon this model, increasing the number of spheres to 55 in order to account for further details of planetary motion. For Aristotle, normal matter was entirely contained within the terrestrial sphere, and it obeyed fundamentally different rules from heavenly material. The post-Aristotle treatise De Mundo (of uncertain authorship and date) stated, "Five elements, situated in spheres in five regions, the less being in each case surrounded by the greater—namely, earth surrounded by water, water by air, air by fire, and fire by ether—make up the whole universe". 
This model was also refined by Callippus and after concentric spheres were abandoned, it was brought into nearly perfect agreement with astronomical observations by Ptolemy. The success of such a model is largely due to the mathematical fact that any function (such as the position of a planet) can be decomposed into a set of circular functions (the Fourier modes). Other Greek scientists, such as the Pythagorean philosopher Philolaus, postulated (according to Stobaeus' account) that at the center of the universe was a "central fire" around which the Earth, Sun, Moon and planets revolved in uniform circular motion. The Greek astronomer Aristarchus of Samos was the first known individual to propose a heliocentric model of the universe. Though the original text has been lost, a reference in Archimedes' book The Sand Reckoner describes Aristarchus's heliocentric model. Archimedes wrote: You, King Gelon, are aware the universe is the name given by most astronomers to the sphere the center of which is the center of the Earth, while its radius is equal to the straight line between the center of the Sun and the center of the Earth. This is the common account as you have heard from astronomers. But Aristarchus has brought out a book consisting of certain hypotheses, wherein it appears, as a consequence of the assumptions made, that the universe is many times greater than the universe just mentioned. His hypotheses are that the fixed stars and the Sun remain unmoved, that the Earth revolves about the Sun on the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface. Aristarchus thus believed the stars to be very far away, and saw this as the reason why stellar parallax had not been observed, that is, the stars had not been observed to move relative each other as the Earth moved around the Sun. The stars are in fact much farther away than the distance that was generally assumed in ancient times, which is why stellar parallax is only detectable with precision instruments. The geocentric model, consistent with planetary parallax, was assumed to be the explanation for the unobservability of stellar parallax. The only other astronomer from antiquity known by name who supported Aristarchus's heliocentric model was Seleucus of Seleucia, a Hellenistic astronomer who lived a century after Aristarchus. According to Plutarch, Seleucus was the first to prove the heliocentric system through reasoning, but it is not known what arguments he used. Seleucus' arguments for a heliocentric cosmology were probably related to the phenomenon of tides. According to Strabo (1.1.9), Seleucus was the first to state that the tides are due to the attraction of the Moon, and that the height of the tides depends on the Moon's position relative to the Sun. Alternatively, he may have proved heliocentricity by determining the constants of a geometric model for it, and by developing methods to compute planetary positions using this model, similar to Nicolaus Copernicus in the 16th century. During the Middle Ages, heliocentric models were also proposed by the Persian astronomers Albumasar and Al-Sijzi. 
The Aristotelian model was accepted in the Western world for roughly two millennia, until Copernicus revived Aristarchus's perspective that the astronomical data could be explained more plausibly if the Earth rotated on its axis and if the Sun were placed at the center of the universe. As noted by Copernicus, the notion that the Earth rotates is very old, dating at least to Philolaus, Heraclides Ponticus, and Ecphantus the Pythagorean. Roughly a century before Copernicus, the Christian scholar Nicholas of Cusa also proposed that the Earth rotates on its axis in his book, On Learned Ignorance (1440). Al-Sijzi also proposed that the Earth rotates on its axis. Empirical evidence for the Earth's rotation on its axis, using the phenomenon of comets, was given by Tusi (1201–1274) and Ali Qushji (1403–1474). This cosmology was accepted by Isaac Newton, Christiaan Huygens and later scientists. Newton demonstrated that the same laws of motion and gravity apply to earthly and to celestial matter, making Aristotle's division between the two obsolete. Edmund Halley (1720) and Jean-Philippe de Chéseaux (1744) noted independently that the assumption of an infinite space filled uniformly with stars would lead to the prediction that the nighttime sky would be as bright as the Sun itself; this became known as Olbers' paradox in the 19th century. Newton believed that an infinite space uniformly filled with matter would cause infinite forces and instabilities causing the matter to be crushed inwards under its own gravity. This instability was clarified in 1902 by the Jeans instability criterion. One solution to these paradoxes is the Charlier universe, in which the matter is arranged hierarchically (systems of orbiting bodies that are themselves orbiting in a larger system, ad infinitum) in a fractal way such that the universe has a negligibly small overall density; such a cosmological model had also been proposed earlier in 1761 by Johann Heinrich Lambert. Deep space astronomy During the 18th century, Immanuel Kant speculated that nebulae could be entire galaxies separate from the Milky Way, and in 1850, Alexander von Humboldt called these separate galaxies Weltinseln, or "world islands", a term that later developed into "island universes". In 1919, when the Hooker Telescope was completed, the prevailing view was that the universe consisted entirely of the Milky Way Galaxy. Using the Hooker Telescope, Edwin Hubble identified Cepheid variables in several spiral nebulae and in 1922–1923 proved conclusively that the Andromeda Nebula and Triangulum, among others, were entire galaxies outside our own, thus proving that the universe consists of a multitude of galaxies. With this, Hubble formulated the Hubble constant, which for the first time allowed a calculation of the age of the universe and the size of the observable universe. These estimates, starting at about 2 billion years and 280 million light-years, became increasingly precise with better measurements, until data from the Hubble Space Telescope in 2006 allowed a very accurate calculation of both. The modern era of physical cosmology began in 1917, when Albert Einstein first applied his general theory of relativity to model the structure and dynamics of the universe. The discoveries of this era, and the questions that remain unanswered, are outlined in the sections above.
See also Cosmic Calendar (scaled down timeline) Cosmic latte Detailed logarithmic timeline Earth's location in the universe False vacuum Future of an expanding universe Galaxy And Mass Assembly survey Heat death of the universe History of the center of the Universe Illustris project Non-standard cosmology Nucleocosmochronology Parallel universe (fiction) Rare Earth hypothesis Space and survival Terasecond and longer Timeline of the early universe Timeline of the far future Timeline of the near future Zero-energy universe References Footnotes Citations Bibliography External links NASA/IPAC Extragalactic Database (NED) / (NED-Distances). There are about 10⁸² atoms in the observable universe – LiveScience, July 2021. This is why we will never know everything about our universe – Forbes, May 2019.
Universe
[ "Physics", "Astronomy", "Mathematics" ]
11,526
[ "Astronomical sub-disciplines", "Concepts in astronomy", "Theoretical physics", "Astrophysics", "Astronomical dynamical systems", "Astronomical objects", "Physical cosmology", "Dynamical systems" ]
31,883
https://en.wikipedia.org/wiki/Uncertainty%20principle
The uncertainty principle, also known as Heisenberg's indeterminacy principle, is a fundamental concept in quantum mechanics. It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known. More formally, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the product of the accuracy of certain related pairs of measurements on a quantum system, such as position, x, and momentum, p. Such paired variables are known as complementary variables or canonically conjugate variables. First introduced in 1927 by German physicist Werner Heisenberg, the formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928: σx σp ≥ ħ/2, where ħ is the reduced Planck constant. The quintessentially quantum mechanical uncertainty principle comes in many forms other than position–momentum. The energy–time relationship is widely used to relate quantum state lifetime to measured energy widths, but its formal derivation is fraught with confusing issues about the nature of time. The basic principle has been extended in numerous directions; it must be considered in many kinds of fundamental physical measurements. Position–momentum It is vital to illustrate how the principle applies to relatively intelligible physical situations since it is indiscernible on the macroscopic scales that humans experience. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily. Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized at the same time. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation p = ħk, where k is the wavenumber. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable is performed, then the system is in a particular eigenstate of that observable. However, a particular eigenstate of one observable need not be an eigenstate of another observable: in that case, the state does not have a unique associated measurement value for the second observable, as the system is not in an eigenstate of it.
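The Fourier-transform tradeoff described above can be checked numerically for a concrete wavefunction. The sketch below is illustrative only: it builds a Gaussian wave packet on a finite grid (the grid size, box length and packet width are arbitrary choices), obtains the momentum-space wavefunction with a fast Fourier transform, and compares the product σx σp with ħ/2:

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J*s

# Position grid and a Gaussian wave packet (width 'a' is an arbitrary choice)
N, L = 4096, 2e-7                              # number of points, box size in metres
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
a = 1e-9                                       # packet width, metres
psi = np.exp(-x**2 / (4 * a**2))
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))    # normalize in position space

# Standard deviation of position from |psi(x)|^2
prob_x = np.abs(psi)**2
sigma_x = np.sqrt(np.trapz(x**2 * prob_x, x) - np.trapz(x * prob_x, x)**2)

# Momentum-space wavefunction via FFT; momentum grid p = hbar * k
phi = np.fft.fftshift(np.fft.fft(psi))
p = hbar * 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=x[1] - x[0]))
prob_p = np.abs(phi)**2
prob_p /= np.trapz(prob_p, p)                  # normalize in momentum space
sigma_p = np.sqrt(np.trapz(p**2 * prob_p, p) - np.trapz(p * prob_p, p)**2)

print(sigma_x * sigma_p / hbar)   # ~0.5: a Gaussian packet saturates the Kennard bound
```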
Visualization The uncertainty principle can be visualized using the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension. The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread. Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread. These wavefunctions are Fourier transforms of each other: mathematically, the uncertainty principle expresses the relationship between conjugate variables in the transform. Wave mechanics interpretation According to the de Broglie hypothesis, every object in the universe is associated with a wave. Thus every object, from an elementary particle to atoms, molecules and on up to planets and beyond are subject to the uncertainty principle. The time-independent wave function of a single-moded plane wave of wavenumber k0 or momentum p0 is The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is In the case of the single-mode plane wave, is 1 if and 0 otherwise. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet. On the other hand, consider a wave function that is a sum of many waves, which we may write as where An represents the relative contribution of the mode pn to the overall total. The figures to the right show how with the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes with representing the amplitude of these modes and is called the wave function in momentum space. In mathematical terms, we say that is the Fourier transform of and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta. One way to quantify the precision of the position and momentum is the standard deviation σ. Since is a probability density function for position, we calculate its standard deviation. The precision of the position is improved, i.e. reduced σx, by using many plane waves, thereby weakening the precision of the momentum, i.e. increased σp. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound. Proof of the Kennard inequality using wave mechanics We are interested in the variances of position and momentum, defined as Without loss of generality, we will assume that the means vanish, which just amounts to a shift of the origin of our coordinates. (A more general proof that does not make this assumption is given below.) This gives us the simpler form The function can be interpreted as a vector in a function space. 
We can define an inner product for a pair of functions u(x) and v(x) in this vector space: where the asterisk denotes the complex conjugate. With this inner product defined, we note that the variance for position can be written as We can repeat this for momentum by interpreting the function as a vector, but we can also take advantage of the fact that and are Fourier transforms of each other. We evaluate the inverse Fourier transform through integration by parts: where in the integration by parts, the cancelled term vanishes because the wave function vanishes at both infinities and , and then use the Dirac delta function which is valid because does not depend on p . The term is called the momentum operator in position space. Applying Plancherel's theorem, we see that the variance for momentum can be written as The Cauchy–Schwarz inequality asserts that The modulus squared of any complex number z can be expressed as we let and and substitute these into the equation above to get All that remains is to evaluate these inner products. Plugging this into the above inequalities, we get and taking the square root with equality if and only if p and x are linearly dependent. Note that the only physics involved in this proof was that and are wave functions for position and momentum, which are Fourier transforms of each other. A similar result would hold for any pair of conjugate variables. Matrix mechanics interpretation In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators and , one defines their commutator as In the case of position and momentum, the commutator is the canonical commutation relation The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let be a right eigenstate of position with a constant eigenvalue . By definition, this means that Applying the commutator to yields where is the identity operator. Suppose, for the sake of proof by contradiction, that is also a right eigenstate of momentum, with constant eigenvalue . If this were true, then one could write On the other hand, the above canonical commutation relation requires that This implies that no quantum state can simultaneously be both a position and a momentum eigenstate. When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state amounts to a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations, As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle. Quantum harmonic oscillator stationary states Consider a one-dimensional quantum harmonic oscillator. 
It is possible to express the position and momentum operators in terms of the creation and annihilation operators: Using the standard rules for creation and annihilation operators on the energy eigenstates, the variances may be computed directly, The product of these standard deviations is then In particular, the above Kennard bound is saturated for the ground state , for which the probability density is just the normal distribution. Quantum harmonic oscillators with Gaussian initial condition In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0 as where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the -dependent solution. After many cancelations, the probability densities reduce to where we have used the notation to denote a normal distribution of mean μ and variance σ2. Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as From the relations we can conclude the following (the right most equality holds only when ): Coherent states A coherent state is a right eigenstate of the annihilation operator, which may be represented in terms of Fock states as In the picture where the coherent state is a massive particle in a quantum harmonic oscillator, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances, Therefore, every coherent state saturates the Kennard bound with position and momentum each contributing an amount in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound although the individual contributions of position and momentum need not be balanced in general. Particle in a box Consider a particle in a one-dimensional box of length . The eigenfunctions in position and momentum space are and where and we have used the de Broglie relation . The variances of and can be calculated explicitly: The product of the standard deviations is therefore For all , the quantity is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when , in which case Constant momentum Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p0 according to where we have introduced a reference scale , with describing the width of the distribution—cf. nondimensionalization. If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions are Since and , this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. On the other hand, the standard deviation of the position is such that the uncertainty product can only increase with time as Mathematical formalism Starting with Kennard's derivation of position-momentum uncertainty, Howard Percy Robertson developed a formulation for arbitrary Hermitian operator operators expressed in terms of their standard deviation where the brackets indicate an expectation value of the observable represented by operator . 
For a pair of operators and , define their commutator as and the Robertson uncertainty relation is given by Erwin Schrödinger showed how to allow for correlation between the operators, giving a stronger inequality, known as the Robertson–Schrödinger uncertainty relation, where the anticommutator, is used. Phase space In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function with star product ★ and a function f, the following is generally true: Choosing , we arrive at Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are non-negative. The non-negative eigenvalues then imply a corresponding non-negativity condition on the determinant, or, explicitly, after algebraic manipulation, Examples Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below. Position–linear momentum uncertainty relation: for the position and linear momentum operators, the canonical commutation relation implies the Kennard inequality from above: Angular momentum uncertainty relation: For two orthogonal components of the total angular momentum operator of an object: where i, j, k are distinct, and Ji denotes angular momentum along the xi axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, for , a choice , , in angular momentum multiplets, ψ = |j, m⟩, bounds the Casimir invariant (angular momentum squared, ) from below and thus yields useful constraints such as , and hence j ≥ m, among others. For the number of electrons in a superconductor and the phase of its Ginzburg–Landau order parameter Limitations The derivation of the Robertson inequality for operators and requires and to be defined. There are quantum systems where these conditions are not valid. One example is a quantum particle on a ring, where the wave function depends on an angular variable in the interval . Define "position" and "momentum" operators and by and with periodic boundary conditions on . The definition of depends the range from 0 to . These operators satisfy the usual commutation relations for position and momentum operators, . More precisely, whenever both and are defined, and the space of such is a dense subspace of the quantum Hilbert space. Now let be any of the eigenstates of , which are given by . These states are normalizable, unlike the eigenstates of the momentum operator on the line. Also the operator is bounded, since ranges over a bounded interval. Thus, in the state , the uncertainty of is zero and the uncertainty of is finite, so that The Robertson uncertainty principle does not apply in this case: is not in the domain of the operator , since multiplication by disrupts the periodic boundary conditions imposed on . For the usual position and momentum operators and on the real line, no such counterexamples can occur. As long as and are defined in the state , the Heisenberg uncertainty principle holds, even if fails to be in the domain of or of . 
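For reference, the Robertson–Schrödinger relation discussed above is commonly quoted in the following form (a sketch, with ⟨·⟩ the expectation value in the state considered and {Â, B̂} = ÂB̂ + B̂Â the anticommutator):

\[
\sigma_A^2\,\sigma_B^2 \;\ge\;
\left|\,\tfrac{1}{2}\langle\{\hat{A},\hat{B}\}\rangle-\langle\hat{A}\rangle\langle\hat{B}\rangle\,\right|^2
+\left|\,\tfrac{1}{2i}\langle[\hat{A},\hat{B}]\rangle\,\right|^2 .
\]

Dropping the first (covariance) term on the right-hand side recovers the weaker Robertson inequality.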
Mixed states The Robertson–Schrödinger uncertainty can be improved noting that it must hold for all components in any decomposition of the density matrix given as Here, for the probabilities and hold. Then, using the relation for , it follows that where the function in the bound is defined The above relation very often has a bound larger than that of the original Robertson–Schrödinger uncertainty relation. Thus, we need to calculate the bound of the Robertson–Schrödinger uncertainty for the mixed components of the quantum state rather than for the quantum state, and compute an average of their square roots. The following expression is stronger than the Robertson–Schrödinger uncertainty relation where on the right-hand side there is a concave roof over the decompositions of the density matrix. The improved relation above is saturated by all single-qubit quantum states. With similar arguments, one can derive a relation with a convex roof on the right-hand side where denotes the quantum Fisher information and the density matrix is decomposed to pure states as The derivation takes advantage of the fact that the quantum Fisher information is the convex roof of the variance times four. A simpler inequality follows without a convex roof which is stronger than the Heisenberg uncertainty relation, since for the quantum Fisher information we have while for pure states the equality holds. The Maccone–Pati uncertainty relations The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be eigenstate of one of the observable. The stronger uncertainty relations proved by Lorenzo Maccone and Arun K. Pati give non-trivial bounds on the sum of the variances for two incompatible observables. (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., Ref. due to Yichen Huang.) For two non-commuting observables and the first stronger uncertainty relation is given by where , , is a normalized vector that is orthogonal to the state of the system and one should choose the sign of to make this real quantity a positive number. The second stronger uncertainty relation is given by where is a state orthogonal to . The form of implies that the right-hand side of the new uncertainty relation is nonzero unless is an eigenstate of . One may note that can be an eigenstate of without being an eigenstate of either or . However, when is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial. But the lower bound in the new relation is nonzero unless is an eigenstate of both. Energy–time An energy–time uncertainty relation like has a long, controversial history; the meaning of and varies and different formulations have different arenas of validity. However, one well-known application is both well established and experimentally verified: the connection between the life-time of a resonance state, and its energy width : In particle-physics, widths from experimental fits to the Breit–Wigner energy distribution are used to characterize the lifetime of quasi-stable or decaying states. An informal, heuristic meaning of the principle is the following: A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy. For example, in spectroscopy, excited states have a finite lifetime. 
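The quantitative statement usually quoted for this connection is, in one common convention (a sketch, with τ the mean lifetime of the state and Γ the full width at half maximum of its Breit–Wigner energy distribution),

\[
\Gamma\,\tau \;=\; \hbar , \qquad\text{equivalently}\qquad \Gamma \;=\; \frac{\hbar}{\tau} .
\]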
By the time–energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth. The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width). Time in quantum mechanics The concept of "time" in quantum mechanics offers many challenges. There is no quantum theory of time measurement; relativity is both fundamental to time and difficult to include in quantum mechanics. While position and momentum are associated with a single particle, time is a system property: it has no operator needed for the Robertson–Schrödinger relation. The mathematical treatment of stable and unstable quantum systems differ. These factors combine to make energy–time uncertainty principles controversial. Three notions of "time" can be distinguished: external, intrinsic, and observable. External or laboratory time is seen by the experimenter; intrinsic time is inferred by changes in dynamic variables, like the hands of a clock or the motion of a free particle; observable time concerns time as an observable, the measurement of time-separated events. An external-time energy–time uncertainty principle might say that measuring the energy of a quantum system to an accuracy requires a time interval . However, Yakir Aharonov and David Bohm have shown that, in some quantum systems, energy can be measured accurately within an arbitrarily short time: external-time uncertainty principles are not universal. Intrinsic time is the basis for several formulations of energy–time uncertainty relations, including the Mandelstam–Tamm relation discussed in the next section. A physical system with an intrinsic time closely matching the external laboratory time is called a "clock". Observable time, measuring time between two events, remains a challenge for quantum theories; some progress has been made using positive operator-valued measure concepts. Mandelstam–Tamm In 1945, Leonid Mandelstam and Igor Tamm derived a non-relativistic time–energy uncertainty relation as follows. From Heisenberg mechanics, the generalized Ehrenfest theorem for an observable B without explicit time dependence, represented by a self-adjoint operator relates time dependence of the average value of to the average of its commutator with the Hamiltonian: The value of is then substituted in the Robertson uncertainty relation for the energy operator and : giving (whenever the denominator is nonzero). While this is a universal result, it depends upon the observable chosen and that the deviations and are computed for a particular state. Identifying and the characteristic time gives an energy–time relationship Although has the dimension of time, it is different from the time parameter t that enters the Schrödinger equation. This can be interpreted as time for which the expectation value of the observable, changes by an amount equal to one standard deviation. 
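In symbols, the Mandelstam–Tamm relation takes the form (a sketch in the usual notation, with σ_E the standard deviation of the energy and σ_B that of the observable B)

\[
\sigma_E\;\frac{\sigma_B}{\bigl|\,d\langle\hat{B}\rangle/dt\,\bigr|} \;\ge\; \frac{\hbar}{2},
\qquad\text{i.e.}\qquad
\sigma_E\,\tau_B \;\ge\; \frac{\hbar}{2}
\quad\text{with}\quad
\tau_B \;\equiv\; \frac{\sigma_B}{\bigl|\,d\langle\hat{B}\rangle/dt\,\bigr|} .
\]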
Examples:
The time a free quantum particle passes a point in space is more uncertain as the energy of the state is more precisely controlled: since the time spread is related to the particle position spread and the energy spread is related to the momentum spread, this relation is directly related to position–momentum uncertainty.
A Delta particle, a quasistable composite of quarks related to protons and neutrons, has a lifetime of 10⁻²³ s, so its measured mass–energy equivalent, 1232 MeV/c², varies by ±120 MeV/c²; this variation is intrinsic and not caused by measurement errors.
Two energy states with energies E₁ and E₂ can be superimposed to create a composite state. The probability amplitude of this state has a time-dependent interference term, and the oscillation period varies inversely with the energy difference: T = h/(E₂ − E₁).
Each example has a different meaning for the time uncertainty, according to the observable and state used.
Quantum field theory
Some formulations of quantum field theory use temporary electron–positron pairs, called virtual particles, in their calculations. The mass–energy and lifetime of these particles are related by the energy–time uncertainty relation. The energy of a quantum system is not known with enough precision to limit its behavior to a single, simple history. Thus the influence of all histories must be incorporated into quantum calculations, including those with much greater or much less energy than the mean of the measured/calculated energy distribution. The energy–time uncertainty principle does not temporarily violate conservation of energy; it does not imply that energy can be "borrowed" from the universe as long as it is "returned" within a short amount of time. The energy of the universe is not an exactly known parameter at all times. When events transpire at very short time intervals, there is uncertainty in the energy of these events.
Harmonic analysis
In the context of harmonic analysis the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, for a function f normalized so that ‖f‖₂ = 1, with Fourier transform f̂(ξ) = ∫ f(x) e^(−2πixξ) dx, the following inequality holds: (∫ x²|f(x)|² dx)(∫ ξ²|f̂(ξ)|² dξ) ≥ 1/(16π²). Further mathematical uncertainty inequalities, including the above entropic uncertainty, hold between a function and its Fourier transform.
Signal processing
In the context of time–frequency analysis, uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem", below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded domain); see bandlimited versus timelimited. More accurately, the time-bandwidth or duration-bandwidth product satisfies a lower bound, stated below, where σ_t and σ_f are the standard deviations of the time and frequency energy concentrations respectively. The minimum is attained for a Gaussian-shaped pulse (Gabor wavelet). [For the un-squared Gaussian (i.e. the signal amplitude) and the magnitude of its un-squared Fourier transform, the standard deviations are larger; squaring reduces each by a factor of √2.] Another common measure is the product of the time and frequency full widths at half maximum (of the power/energy), which for the Gaussian equals 2 ln 2/π ≈ 0.44 (see bandwidth-limited pulse). Stated differently, one cannot simultaneously sharply localize a signal in both the time domain and frequency domain.
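In one common convention (a sketch, writing σ_t and σ_f for the standard deviations of the normalized energy densities |f(t)|² and |f̂(ν)|², with ν the ordinary rather than angular frequency), the duration–bandwidth bound referred to above reads

\[
\sigma_t\,\sigma_f \;\ge\; \frac{1}{4\pi} .
\]

Equivalently, in terms of the angular frequency ω = 2πν, σ_t σ_ω ≥ 1/2, with equality attained only by Gaussian pulses.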
When applied to filters, the result implies that one cannot simultaneously achieve a high temporal resolution and high frequency resolution at the same time; a concrete example are the resolution issues of the short-time Fourier transform—if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off. Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other. As a result, in order to analyze signals where the transients are important, the wavelet transform is often used instead of the Fourier. Discrete Fourier transform Let be a sequence of N complex numbers and be its discrete Fourier transform. Denote by the number of non-zero elements in the time sequence and by the number of non-zero elements in the frequency sequence . Then, This inequality is sharp, with equality achieved when x or X is a Dirac mass, or more generally when x is a nonzero multiple of a Dirac comb supported on a subgroup of the integers modulo N (in which case X is also a Dirac comb supported on a complementary subgroup, and vice versa). More generally, if T and W are subsets of the integers modulo N, let denote the time-limiting operator and band-limiting operators, respectively. Then where the norm is the operator norm of operators on the Hilbert space of functions on the integers modulo N. This inequality has implications for signal reconstruction. When N is a prime number, a stronger inequality holds: Discovered by Terence Tao, this inequality is also sharp. Benedicks's theorem Amrein–Berthier and Benedicks's theorem intuitively says that the set of points where is non-zero and the set of points where is non-zero cannot both be small. Specifically, it is impossible for a function in and its Fourier transform to both be supported on sets of finite Lebesgue measure. A more quantitative version is One expects that the factor may be replaced by , which is only known if either or is convex. Hardy's uncertainty principle The mathematician G. H. Hardy formulated the following uncertainty principle: it is not possible for and to both be "very rapidly decreasing". Specifically, if in is such that and ( an integer), then, if , while if , then there is a polynomial of degree such that This was later improved as follows: if is such that then where is a polynomial of degree and is a real positive definite matrix. This result was stated in Beurling's complete works without proof and proved in Hörmander (the case ) and Bonami, Demange, and Jaming for the general case. Note that Hörmander–Beurling's version implies the case in Hardy's Theorem while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref. A full description of the case as well as the following extension to Schwartz class distributions appears in ref. 
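In its simplest form, and assuming the Fourier transform is normalized as f̂(ξ) = ∫ f(x) e^(−2πixξ) dx, Hardy's statement can be sketched as follows: if |f(x)| ≤ C e^(−πax²) and |f̂(ξ)| ≤ C e^(−πbξ²) for some constants C, a, b > 0, then

\[
ab > 1 \;\Longrightarrow\; f \equiv 0,
\qquad
ab = 1 \;\Longrightarrow\; f(x) = c\,e^{-\pi a x^{2}}\ \text{for some constant } c,
\]

while for ab < 1 there are infinitely many linearly independent functions satisfying both bounds.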
Additional uncertainty relations Heisenberg limit In quantum metrology, and especially interferometry, the Heisenberg limit is the optimal rate at which the accuracy of a measurement can scale with the energy used in the measurement. Typically, this is the measurement of a phase (applied to one arm of a beam-splitter) and the energy is given by the number of photons used in an interferometer. Although some claim to have broken the Heisenberg limit, this reflects disagreement on the definition of the scaling resource. Suitably defined, the Heisenberg limit is a consequence of the basic principles of quantum mechanics and cannot be beaten, although the weak Heisenberg limit can be beaten. Systematic and statistical errors The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation . Heisenberg's original version, however, was dealing with the systematic error, a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect. If we let represent the error (i.e., inaccuracy) of a measurement of an observable A and the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Masanao Ozawa − encompassing both systematic and statistical errors - holds: Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years. Also, it must be stressed that the Heisenberg formulation is not taking into account the intrinsic statistical errors and . There is increasing experimental evidence that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all the three terms of the Ozawa inequality. Using the same formalism, it is also possible to introduce the other kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time): The two simultaneous measurements on A and B are necessarily unsharp or weak. It is also possible to derive an uncertainty relation that, as the Ozawa's one, combines both the statistical and systematic error components, but keeps a form very close to the Heisenberg original inequality. By adding Robertson and Ozawa relations we obtain The four terms can be written as: Defining: as the inaccuracy in the measured values of the variable A and as the resulting fluctuation in the conjugate variable B, Kazuo Fujikawa established an uncertainty relation similar to the Heisenberg original one, but valid both for systematic and statistical errors: Quantum entropic uncertainty principle For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle has little physical meaning for fluctuations larger than one period. Other examples include highly bimodal distributions, or unimodal distributions with divergent variance. A solution that overcomes these issues is an uncertainty based on entropic uncertainty instead of the product of variances. 
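For reference, the Ozawa inequality mentioned above is commonly written in the following form (a sketch, with ε_A the root-mean-square error of the A measurement, η_B the root-mean-square disturbance it produces on B, and σ_A, σ_B the standard deviations in the state before measurement):

\[
\varepsilon_A\,\eta_B \;+\; \varepsilon_A\,\sigma_B \;+\; \sigma_A\,\eta_B \;\ge\; \frac{1}{2}\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr| ,
\]

whereas the Heisenberg-type error–disturbance statement keeps only the first term, ε_A η_B ≥ ½|⟨[Â, B̂]⟩|.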
While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic certainty. This conjecture, also studied by I. I. Hirschman and proven in 1975 by W. Beckner and by Iwo Bialynicki-Birula and Jerzy Mycielski is that, for two normalized, dimensionless Fourier transform pairs and where and the Shannon information entropies and are subject to the following constraint, where the logarithms may be in any base. The probability distribution functions associated with the position wave function and the momentum wave function have dimensions of inverse length and momentum respectively, but the entropies may be rendered dimensionless by where and are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function and the momentum wavefunction , the above constraint can be written for the corresponding entropies as where is the Planck constant. Depending on one's choice of the product, the expression may be written in many ways. If is chosen to be , then If, instead, is chosen to be , then If and are chosen to be unity in whatever system of units are being used, then where is interpreted as a dimensionless number equal to the value of the Planck constant in the chosen system of units. Note that these inequalities can be extended to multimode quantum states, or wavefunctions in more than one spatial dimension. The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities (equivalently, from the fact that normal distributions maximize the entropy of all such with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations, because In other words, the Heisenberg uncertainty principle, is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance (cf. here for proof). A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as Under the above definition, the entropic uncertainty relation is Here we note that is a typical infinitesimal phase space volume used in the calculation of a partition function. The inequality is also strict and not saturated. 
Efforts to improve this bound are an active area of research. Uncertainty relation with three angular momentum components For a particle of total angular momentum the following uncertainty relation holds where are angular momentum components. The relation can be derived from and The relation can be strengthened as where is the quantum Fisher information. History In 1925 Heisenberg published the Umdeutung (reinterpretation) paper where he showed that central aspect of quantum theory was the non-commutativity: the theory implied that the relative order of position and momentum measurement was significant. Working with Max Born and Pascual Jordan, he continued to develop matrix mechanics, that would become the first modern quantum mechanics formulation. In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. Writing to Wolfgang Pauli in February 1927, he worked out the basic concepts. In his celebrated 1927 paper "" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement, but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. His paper gave an analysis in terms of a microscope that Bohr showed was incorrect; Heisenberg included an addendum to the publication. In his 1930 Chicago lecture he refined his principle: Later work broadened the concept. Any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote:It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant. Kennard in 1927 first proved the modern inequality: where , and , are the standard deviations of position and momentum. (Heisenberg only proved relation () for the special case of Gaussian states.) In 1929 Robertson generalized the inequality to all observables and in 1930 Schrödinger extended the form to allow non-zero covariance of the operators; this result is referred to as Robertson-Schrödinger inequality. Terminology and translation Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word "Ungenauigkeit", to describe the basic theoretical principle. Only in the endnote did he switch to the word "Unsicherheit". Later on, he always used "Unbestimmtheit". When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, was published in 1930, however, only the English word "uncertainty" was used, and it became the term in the English language. Heisenberg's microscope The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements to violate it were bound always to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by using the observer effect of an imaginary microscope as a measuring device. He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it. 
Problem 1 – If the photon has a short wavelength, and therefore, a large momentum, the position can be measured accurately. But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision does not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely. Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum and hence, the new momentum of the electron resolves poorly. If a small aperture is used, the accuracy of both resolutions is the other way around. The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to the Planck constant. Heisenberg did not care to formulate the uncertainty principle as an exact limit, and preferred to use it instead, as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable. Intrinsic quantum uncertainty Historically, the uncertainty principle has been confused with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the system, that is, without changing something in a system. Heisenberg used such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty. It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology. Critical reactions The Copenhagen interpretation of quantum mechanics and Heisenberg's uncertainty principle were, in fact, initially seen as twin targets by detractors. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be. Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years. Ideal detached observer Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German): Einstein's slit The first of Einstein's thought experiments challenging the uncertainty principle went as follows: Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy , the momentum of the wall must be known to this accuracy before the particle passes through. 
This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to , and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement. A similar analysis with particles diffracting through multiple slits is given by Richard Feynman. Einstein's box Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box. Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to the Planck constant." Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box." "Now, weigh the box again. The change of mass tells the energy of the emitted light. In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle." Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the Earth's surface will result in an uncertainty in the rate of the clock", because of Einstein's own theory of gravity's effect on time. "Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape." EPR paradox for entangled particles In 1935, Einstein, Boris Podolsky and Nathan Rosen published an analysis of spatially separated entangled particles (EPR paradox). According to EPR, one could measure the position of one of the entangled particles and the momentum of the second particle, and from those measurements deduce the position and momentum of both particles to any precision, violating the uncertainty principle. In order to avoid such possibility, the measurement of one particle must modify the probability distribution of the other particle instantaneously, possibly violating the principle of locality. In 1964, John Stewart Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out EPR's basic assumption of local hidden variables. Popper's criticism Science philosopher Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist. He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations". In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory. 
In 1934, Popper published ("Critique of the Uncertainty Relations") in , and in the same year (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics, writing: Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Carl Friedrich von Weizsäcker, Heisenberg, and Einstein; Popper sent his paper to Einstein and it may have influenced the formulation of the EPR paradox. Free will Some scientists, including Arthur Compton and Martin Heisenberg, have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature. Proponents of this theory commonly say that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells. Thermodynamics There is reason to believe that violating the uncertainty principle also strongly implies the violation of the second law of thermodynamics. See Gibbs paradox. Rejection of the principle Uncertainty principles relate quantum particles – electrons for example – to classical concepts – position and momentum. This presumes quantum particles have position and momentum. Edwin C. Kemble pointed out in 1937 that such properties cannot be experimentally verified and assuming they exist gives rise to many contradictions; similarly Rudolf Haag notes that position in quantum mechanics is an attribute of an interaction, say between an electron and a detector, not an intrinsic property. From this point of view the uncertainty principle is not a fundamental quantum property but a concept "carried over from the language of our ancestors", as Kemble says. Applications Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. All forms of spectroscopy, including particle physics use the relationship to relate measured energy line-width to the lifetime of quantum states. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum optics systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers. See also — when an attempt is made to use a statistical measure for purposes of control (directing), its statistical validity breaks down (Heisenberg's recollections) References External links Stanford Encyclopedia of Philosophy entry Quantum mechanics Principles Mathematical physics Inequalities Werner Heisenberg Scientific laws 1927 in science 1927 in Germany
Uncertainty principle
[ "Physics", "Mathematics" ]
10,205
[ "Mathematical theorems", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Quantum mechanics", "Equations", "Scientific laws", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical physics" ]
32,061
https://en.wikipedia.org/wiki/Urea%20cycle
The urea cycle (also known as the ornithine cycle) is a cycle of biochemical reactions that produces urea ((NH2)2CO) from ammonia (NH3). Animals that use this cycle, mainly amphibians and mammals, are called ureotelic. The urea cycle converts highly toxic ammonia to urea for excretion. It was the first metabolic cycle to be discovered (by Hans Krebs and Kurt Henseleit in 1932), five years before the discovery of the TCA cycle. The urea cycle was described in more detail later on by Ratner and Cohen. The urea cycle takes place primarily in the liver and, to a lesser extent, in the kidneys.
Function
Amino acid catabolism results in waste ammonia. All animals need a way to excrete this product. Most aquatic organisms, or ammonotelic organisms, excrete ammonia without converting it. Organisms that cannot easily and safely remove nitrogen as ammonia convert it to a less toxic substance, such as urea, via the urea cycle, which occurs mainly in the liver. Urea produced by the liver is then released into the bloodstream, where it travels to the kidneys and is ultimately excreted in urine. The urea cycle is essential to these organisms, because nitrogen or ammonia that is not eliminated is very detrimental to the organism. In species including birds and most insects, the ammonia is converted into uric acid or its urate salt, which is excreted in solid form. Further, the urea cycle consumes acidic waste carbon dioxide by combining it with the basic ammonia, helping to maintain a neutral pH.
Reactions
The entire process converts two amino groups, one from NH4+ and one from aspartate, and a carbon atom from HCO3−, to the relatively nontoxic excretion product urea. This occurs at the cost of four "high-energy" phosphate bonds (3 ATP hydrolyzed to 2 ADP and one AMP). The conversion from ammonia to urea happens in five main steps. The first is needed for ammonia to enter the cycle, and the following four are all part of the cycle itself. To enter the cycle, ammonia is converted to carbamoyl phosphate. The urea cycle consists of four enzymatic reactions: one mitochondrial and three cytosolic. Six enzymes are used in total.
The reactions of the urea cycle (key to the reaction scheme): 1, L-ornithine; 2, carbamoyl phosphate; 3, L-citrulline; 4, argininosuccinate; 5, fumarate; 6, L-arginine; 7, urea. Abbreviations: L-Asp, L-aspartate; CPS-1, carbamoyl phosphate synthetase I; OTC, ornithine transcarbamoylase; ASS, argininosuccinate synthetase; ASL, argininosuccinate lyase; ARG1, arginase 1.
First reaction: entering the urea cycle
Before the urea cycle begins, ammonia is converted to carbamoyl phosphate. The reaction is catalyzed by carbamoyl phosphate synthetase I and requires the use of two ATP molecules. The carbamoyl phosphate then enters the urea cycle.
Steps of the urea cycle
1. Carbamoyl phosphate is converted to citrulline. With catalysis by ornithine transcarbamylase, the carbamoyl phosphate group is donated to ornithine and a phosphate group is released.
2. A condensation reaction occurs between the amino group of aspartate and the carbonyl group of citrulline to form argininosuccinate. This reaction is ATP dependent and is catalyzed by argininosuccinate synthetase.
3. Argininosuccinate undergoes cleavage by argininosuccinase to form arginine and fumarate.
4. Arginine is cleaved by arginase to form urea and ornithine. The ornithine is then transported back to the mitochondria to begin the urea cycle again.
Overall reaction equation
In the first reaction, NH4+ + HCO3− is equivalent to NH3 + CO2 + H2O.
Thus, the overall equation of the urea cycle is: NH3 + CO2 + aspartate + 3 ATP + 3 H2O → urea + fumarate + 2 ADP + 2 Pi + AMP + PPi + H2O Since fumarate is obtained by removing NH3 from aspartate (by means of reactions 3 and 4), and PPi + H2O → 2 Pi, the equation can be simplified as follows: 2 NH3 + CO2 + 3 ATP + 3 H2O → urea + 2 ADP + 4 Pi + AMP Note that reactions related to the urea cycle also cause the production of 2 NADH, so the overall reaction releases slightly more energy than it consumes. The NADH is produced in two ways: One NADH molecule is produced by the enzyme glutamate dehydrogenase in the conversion of glutamate to ammonium and α-ketoglutarate. Glutamate is the non-toxic carrier of amine groups. This provides the ammonium ion used in the initial synthesis of carbamoyl phosphate. The fumarate released in the cytosol is hydrated to malate by cytosolic fumarase. This malate is then oxidized to oxaloacetate by cytosolic malate dehydrogenase, generating a reduced NADH in the cytosol. Oxaloacetate is one of the keto acids preferred by transaminases, and so will be recycled to aspartate, maintaining the flow of nitrogen into the urea cycle. We can summarize this by combining the reactions: CO2 + glutamate + aspartate + 3 ATP + 2 NAD++ 3 H2O → urea + α-ketoglutarate + oxaloacetate + 2 ADP + 2 Pi + AMP + PPi + 2 NADH The two NADH produced can provide energy for the formation of 5 ATP (cytosolic NADH provides 2.5 ATP with the malate-aspartate shuttle in human liver cell), a net production of two high-energy phosphate bond for the urea cycle. However, if gluconeogenesis is underway in the cytosol, the latter reducing equivalent is used to drive the reversal of the GAPDH step instead of generating ATP. The fate of oxaloacetate is either to produce aspartate via transamination or to be converted to phosphoenolpyruvate, which is a substrate for gluconeogenesis. Products of the urea cycle As stated above many vertebrates use the urea cycle to create urea out of ammonium so that the ammonium does not damage the body. Though this is helpful, there are other effects of the urea cycle. For example: consumption of two ATP, production of urea, generation of H+, the combining of and to forms where it can be regenerated, and finally the consumption of . Regulation N-Acetylglutamic acid The synthesis of carbamoyl phosphate and the urea cycle are dependent on the presence of N-acetylglutamic acid (NAcGlu), which allosterically activates CPS1. NAcGlu is an obligate activator of carbamoyl phosphate synthetase. Synthesis of NAcGlu by N-acetylglutamate synthase (NAGS) is stimulated by both Arg, allosteric stimulator of NAGS, and Glu, a product in the transamination reactions and one of NAGS's substrates, both of which are elevated when free amino acids are elevated. So Glu not only is a substrate for NAGS but also serves as an activator for the urea cycle. Substrate concentrations The remaining enzymes of the cycle are controlled by the concentrations of their substrates. Thus, inherited deficiencies in cycle enzymes other than ARG1 do not result in significant decreases in urea production (if any cycle enzyme is entirely missing, death occurs shortly after birth). Rather, the deficient enzyme's substrate builds up, increasing the rate of the deficient reaction to normal. The anomalous substrate buildup is not without cost, however. The substrate concentrations become elevated all the way back up the cycle to , resulting in hyperammonemia (elevated []P). 
Although the root cause of toxicity is not completely understood, a high [] puts an enormous strain on the -clearing system, especially in the brain (symptoms of urea cycle enzyme deficiencies include intellectual disability and lethargy). This clearing system involves GLUD1 and GLUL, which decrease the 2-oxoglutarate (2OG) and Glu pools. The brain is most sensitive to the depletion of these pools. Depletion of 2OG decreases the rate of TCAC, whereas Glu is both a neurotransmitter and a precursor to GABA, another neurotransmitter. (p.734) Link with the citric acid cycle The urea cycle and the citric acid cycle are independent cycles but are linked. One of the nitrogen atoms in the urea cycle is obtained from the transamination of oxaloacetate to aspartate. The fumarate that is produced in step three is also an intermediate in the citric acid cycle and is returned to that cycle. Urea cycle disorders Urea cycle disorders are rare and affect about one in 35,000 people in the United States. Genetic defects in the enzymes involved in the cycle can occur, which usually manifest within a few days after birth. The recently born child will typically experience varying bouts of vomiting and periods of lethargy. Ultimately, the infant may go into a coma and develop brain damage. New-borns with UCD are at a much higher risk of complications or death due to untimely screening tests and misdiagnosed cases. The most common misdiagnosis is neonatal sepsis. Signs of UCD can be present within the first 2 to 3 days of life, but the present method to get confirmation by test results can take too long. This can potentially cause complications such as coma or death. Urea cycle disorders may also be diagnosed in adults, and symptoms may include delirium episodes, lethargy, and symptoms similar to that of a stroke. On top of these symptoms, if the urea cycle begins to malfunction in the liver, the patient may develop cirrhosis. This can also lead to sarcopenia (the loss of muscle mass). Mutations lead to deficiencies of the various enzymes and transporters involved in the urea cycle, and cause urea cycle disorders. If individuals with a defect in any of the six enzymes used in the cycle ingest amino acids beyond what is necessary for the minimum daily requirements, then the ammonia that is produced will not be able to be converted to urea. These individuals can experience hyperammonemia, or the build-up of a cycle intermediate. Individual disorders N-Acetylglutamate synthase (NAGS) deficiency Carbamoyl phosphate synthetase (CPS) deficiency Ornithine transcarbamoylase (OTC) deficiency Citrullinemia type I (Deficiency of argininosuccinic acid synthase) Argininosuccinic aciduria (Deficiency of argininosuccinic acid lyase) Argininemia (Deficiency of arginase) Hyperornithinemia, hyperammonemia, homocitrullinuria (HHH) syndrome (Deficiency of the mitochondrial ornithine transporter) All urea cycle defects, except OTC deficiency, are inherited in an autosomal recessive manner. OTC deficiency is inherited as an X-linked recessive disorder, although some females can show symptoms. Most urea cycle disorders are associated with hyperammonemia, however argininemia and some forms of argininosuccinic aciduria do not present with elevated ammonia. Additional images References External links The chemical logic behind the urea cycle Basic Neurochemistry - amino acid disorders Biochemical reactions Nitrogen cycle
Urea cycle
[ "Chemistry", "Biology" ]
2,558
[ "Biochemistry", "Nitrogen cycle", "Metabolism", "Biochemical reactions" ]
32,167
https://en.wikipedia.org/wiki/Ubiquitin
Ubiquitin is a small (8.6 kDa) regulatory protein found in most tissues of eukaryotic organisms, i.e., it is found ubiquitously. It was discovered in 1975 by Gideon Goldstein and further characterized throughout the late 1970s and 1980s. Four genes in the human genome code for ubiquitin: UBB, UBC, UBA52 and RPS27A. The addition of ubiquitin to a substrate protein is called ubiquitylation (or ubiquitination or ubiquitinylation). Ubiquitylation affects proteins in many ways: it can mark them for degradation via the proteasome, alter their cellular location, affect their activity, and promote or prevent protein interactions. Ubiquitylation involves three main steps: activation, conjugation, and ligation, performed by ubiquitin-activating enzymes (E1s), ubiquitin-conjugating enzymes (E2s), and ubiquitin ligases (E3s), respectively. The result of this sequential cascade is to bind ubiquitin to lysine residues on the protein substrate via an isopeptide bond, cysteine residues through a thioester bond; serine, threonine, and tyrosine residues through an ester bond; or the amino group of the protein's N-terminus via a peptide bond. The protein modifications can be either a single ubiquitin protein (monoubiquitylation) or a chain of ubiquitin (polyubiquitylation). Secondary ubiquitin molecules are always linked to one of the seven lysine residues or the N-terminal methionine of the previous ubiquitin molecule. These 'linking' residues are represented by a "K" or "M" (the one-letter amino acid notation of lysine and methionine, respectively) and a number, referring to its position in the ubiquitin molecule as in K48, K29 or M1. The first ubiquitin molecule is covalently bound through its C-terminal carboxylate group to a particular lysine, cysteine, serine, threonine or N-terminus of the target protein. Polyubiquitylation occurs when the C-terminus of another ubiquitin is linked to one of the seven lysine residues or the first methionine on the previously added ubiquitin molecule, creating a chain. This process repeats several times, leading to the addition of several ubiquitins. Only polyubiquitylation on defined lysines, mostly on K48 and K29, is related to degradation by the proteasome (referred to as the "molecular kiss of death"), while other polyubiquitylations (e.g. on K63, K11, K6 and M1) and monoubiquitylations may regulate processes such as endocytic trafficking, inflammation, translation and DNA repair. The discovery that ubiquitin chains target proteins to the proteasome, which degrades and recycles proteins, was honored with the Nobel Prize in Chemistry in 2004. Identification Ubiquitin (originally, ubiquitous immunopoietic polypeptide) was first identified in 1975 as an 8.6 kDa protein expressed in all eukaryotic cells. The basic functions of ubiquitin and the components of the ubiquitylation pathway were elucidated in the early 1980s at the Technion by Aaron Ciechanover, Avram Hershko, and Irwin Rose for which the Nobel Prize in Chemistry was awarded in 2004. The ubiquitylation system was initially characterised as an ATP-dependent proteolytic system present in cellular extracts. A heat-stable polypeptide present in these extracts, ATP-dependent proteolysis factor 1 (APF-1), was found to become covalently attached to the model protein substrate lysozyme in an ATP- and Mg2+-dependent process. Multiple APF-1 molecules were linked to a single substrate molecule by an isopeptide linkage, and conjugates were found to be rapidly degraded with the release of free APF-1. 
Soon after APF-1-protein conjugation was characterised, APF-1 was identified as ubiquitin. The carboxyl group of the C-terminal glycine residue of ubiquitin (Gly76) was identified as the moiety conjugated to substrate lysine residues. The protein Ubiquitin is a small protein that exists in all eukaryotic cells. It performs its myriad functions through conjugation to a large range of target proteins. A variety of different modifications can occur. The ubiquitin protein itself consists of 76 amino acids and has a molecular mass of about 8.6 kDa. Key features include its C-terminal tail and the 7 lysine residues. It is highly conserved throughout eukaryote evolution; human and yeast ubiquitin share 96% sequence identity. Genes Ubiquitin is encoded in mammals by four different genes. UBA52 and RPS27A genes code for a single copy of ubiquitin fused to the ribosomal proteins L40 and S27a, respectively. The UBB and UBC genes code for polyubiquitin precursor proteins. Ubiquitylation Ubiquitylation (also known as ubiquitination or ubiquitinylation) is an enzymatic post-translational modification in which an ubiquitin protein is attached to a substrate protein. This process most commonly binds the last amino acid of ubiquitin (glycine 76) to a lysine residue on the substrate. An isopeptide bond is formed between the carboxyl group (COO−) of the ubiquitin's glycine and the epsilon-amino group (ε-) of the substrate's lysine. Trypsin cleavage of a ubiquitin-conjugated substrate leaves a di-glycine "remnant" that is used to identify the site of ubiquitylation. Ubiquitin can also be bound to other sites in a protein which are electron-rich nucleophiles, termed "non-canonical ubiquitylation". This was first observed with the amine group of a protein's N-terminus being used for ubiquitylation, rather than a lysine residue, in the protein MyoD and has been observed since in 22 other proteins in multiple species, including ubiquitin itself. There is also increasing evidence for nonlysine residues as ubiquitylation targets using non-amine groups, such as the sulfhydryl group on cysteine, and the hydroxyl group on threonine and serine. The end result of this process is the addition of one ubiquitin molecule (monoubiquitylation) or a chain of ubiquitin molecules (polyubiquitination) to the substrate protein. Ubiquitination requires three types of enzyme: ubiquitin-activating enzymes, ubiquitin-conjugating enzymes, and ubiquitin ligases, known as E1s, E2s, and E3s, respectively. The process consists of three main steps: Activation: Ubiquitin is activated in a two-step reaction by an E1 ubiquitin-activating enzyme, which is dependent on ATP. The initial step involves production of a ubiquitin-adenylate intermediate. The E1 binds both ATP and ubiquitin and catalyses the acyl-adenylation of the C-terminus of the ubiquitin molecule. The second step transfers ubiquitin to an active site cysteine residue, with release of AMP. This step results in a thioester linkage between the C-terminal carboxyl group of ubiquitin and the E1 cysteine sulfhydryl group. The human genome contains two genes that produce enzymes capable of activating ubiquitin: UBA1 and UBA6. Conjugation: E2 ubiquitin-conjugating enzymes catalyse the transfer of ubiquitin from E1 to the active site cysteine of the E2 via a trans(thio)esterification reaction. In order to perform this reaction, the E2 binds to both activated ubiquitin and the E1 enzyme. Humans possess 35 different E2 enzymes, whereas other eukaryotic organisms have between 16 and 35. 
They are characterised by their highly conserved structure, known as the ubiquitin-conjugating catalytic (UBC) fold. Ligation: E3 ubiquitin ligases catalyse the final step of the ubiquitylation cascade. Most commonly, they create an isopeptide bond between a lysine of the target protein and the C-terminal glycine of ubiquitin. In general, this step requires the activity of one of the hundreds of E3s. E3 enzymes function as the substrate recognition modules of the system and are capable of interaction with both E2 and substrate. Some E3 enzymes also activate the E2 enzymes. E3 enzymes possess one of two domains: the homologous to the E6-AP carboxyl terminus (HECT) domain and the really interesting new gene (RING) domain (or the closely related U-box domain). HECT domain E3s transiently bind ubiquitin in this process (an obligate thioester intermediate is formed with the active-site cysteine of the E3), whereas RING domain E3s catalyse the direct transfer from the E2 enzyme to the substrate. The anaphase-promoting complex (APC) and the SCF complex (for Skp1-Cullin-F-box protein complex) are two examples of multi-subunit E3s involved in recognition and ubiquitylation of specific target proteins for degradation by the proteasome. In the ubiquitylation cascade, E1 can bind with many E2s, which can bind with hundreds of E3s in a hierarchical way. Having levels within the cascade allows tight regulation of the ubiquitylation machinery. Other ubiquitin-like proteins (UBLs) are also modified via the E1–E2–E3 cascade, although variations in these systems do exist. E4 enzymes, or ubiquitin-chain elongation factors, are capable of adding pre-formed polyubiquitin chains to substrate proteins. For example, multiple monoubiquitylation of the tumor suppressor p53 by Mdm2 can be followed by addition of a polyubiquitin chain using p300 and CBP. Types Ubiquitylation affects cellular process by regulating the degradation of proteins (via the proteasome and lysosome), coordinating the cellular localization of proteins, activating and inactivating proteins, and modulating protein–protein interactions. These effects are mediated by different types of substrate ubiquitylation, for example the addition of a single ubiquitin molecule (monoubiquitylation) or different types of ubiquitin chains (polyubiquitylation). Monoubiquitylation Monoubiquitylation is the addition of one ubiquitin molecule to one substrate protein residue. Multi-monoubiquitylation is the addition of one ubiquitin molecule to multiple substrate residues. The monoubiquitylation of a protein can have different effects to the polyubiquitylation of the same protein. The addition of a single ubiquitin molecule is thought to be required prior to the formation of polyubiquitin chains. Monoubiquitylation affects cellular processes such as membrane trafficking, endocytosis and viral budding. Polyubiquitin chains Polyubiquitylation is the formation of a ubiquitin chain on a single lysine residue on the substrate protein. Following addition of a single ubiquitin moiety to a protein substrate, further ubiquitin molecules can be added to the first, yielding a polyubiquitin chain. These chains are made by linking the glycine residue of a ubiquitin molecule to a lysine of ubiquitin bound to a substrate. Ubiquitin has seven lysine residues and an N-terminus that serves as points of ubiquitination; they are K6, K11, K27, K29, K33, K48, K63 and M1, respectively. 
Lysine 48-linked chains were the first identified and are the best-characterised type of ubiquitin chain. K63 chains have also been well-characterised, whereas the function of other lysine chains, mixed chains, branched chains, M1-linked linear chains, and heterologous chains (mixtures of ubiquitin and other ubiquitin-like proteins) remains more unclear. Lysine 48-linked polyubiquitin chains target proteins for destruction, by a process known as proteolysis. Multi-ubiquitin chains at least four ubiquitin molecules long must be attached to a lysine residue on the condemned protein in order for it to be recognised by the 26S proteasome. This is a barrel-shape structure comprising a central proteolytic core made of four ring structures, flanked by two cylinders that selectively allow entry of ubiquitylated proteins. Once inside, the proteins are rapidly degraded into small peptides (usually 3–25 amino acid residues in length). Ubiquitin molecules are cleaved off the protein immediately prior to destruction and are recycled for further use. Although the majority of protein substrates are ubiquitylated, there are examples of non-ubiquitylated proteins targeted to the proteasome. The polyubiquitin chains are recognised by a subunit of the proteasome: S5a/Rpn10. This is achieved by a ubiquitin-interacting motif (UIM) found in a hydrophobic patch in the C-terminal region of the S5a/Rpn10 unit. Lysine 63-linked chains are not associated with proteasomal degradation of the substrate protein. Instead, they allow the coordination of other processes such as endocytic trafficking, inflammation, translation, and DNA repair. In cells, lysine 63-linked chains are bound by the ESCRT-0 complex, which prevents their binding to the proteasome. This complex contains two proteins, Hrs and STAM1, that contain a UIM, which allows it to bind to lysine 63-linked chains. Methionine 1-linked (or linear) polyubiquitin chains are another type of non-degradative ubiquitin chains. In this case, ubiquitin is linked in a head-to-tail manner, meaning that the C-terminus of the last ubiquitin molecule binds directly to the N-terminus of the next one. Although initially believed to target proteins for proteasomal degradation, linear ubiquitin later proved to be indispensable for NF-kB signaling. Currently, there is only one known E3 ubiquitin ligase generating M1-linked polyubiquitin chains - linear ubiquitin chain assembly complex (LUBAC). Less is understood about atypical (non-lysine 48-linked) ubiquitin chains but research is starting to suggest roles for these chains. There is evidence that atypical chains linked by lysine 6, 11, 27, 29 and methionine 1 can induce proteasomal degradation. Branched ubiquitin chains containing multiple linkage types can be formed. The function of these chains is unknown. Structure Differently linked chains have specific effects on the protein to which they are attached, caused by differences in the conformations of the protein chains. K29-, K33-, K63- and M1-linked chains have a fairly linear conformation; they are known as open-conformation chains. K6-, K11-, and K48-linked chains form closed conformations. The ubiquitin molecules in open-conformation chains do not interact with each other, except for the covalent isopeptide bonds linking them together. In contrast, the closed conformation chains have interfaces with interacting residues. 
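As a rough mnemonic for the linkage-specific outcomes described in this section, the sketch below maps a chain's linkage type and length to a likely fate. The thresholds and labels are simplifications taken from the text (for example, at least four K48-linked ubiquitins for efficient proteasomal recognition); the function name is hypothetical.

# Hypothetical helper summarizing the linkage-specific outcomes discussed above.
def chain_fate(linkage: str, length: int) -> str:
    """Very simplified mapping of chain linkage/length to the outcomes described in the text."""
    if length == 1:
        return "monoubiquitylation: membrane trafficking, endocytosis, viral budding"
    if linkage == "K48":
        if length >= 4:
            return "targeted to the 26S proteasome for degradation"
        return "K48-linked but below the ~4-ubiquitin threshold for efficient recognition"
    if linkage == "K63":
        return "non-degradative signaling: endocytic trafficking, inflammation, translation, DNA repair"
    if linkage == "M1":
        return "linear chain: NF-kB signaling (assembled by LUBAC)"
    return "atypical linkage: roles still being characterized"

for case in [("K48", 4), ("K63", 3), ("M1", 2), ("K11", 5)]:
    print(case, "->", chain_fate(*case))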
Altering the chain conformations exposes and conceals different parts of the ubiquitin protein, and the different linkages are recognized by proteins that are specific for the unique topologies that are intrinsic to the linkage. Proteins can specifically bind to ubiquitin via ubiquitin-binding domains (UBDs). The distances between individual ubiquitin units in chains differ between lysine 63- and 48-linked chains. The UBDs exploit this by having small spacers between ubiquitin-interacting motifs that bind lysine 48-linked chains (compact ubiquitin chains) and larger spacers for lysine 63-linked chains. The machinery involved in recognising polyubiquitin chains can also differentiate between K63-linked chains and M1-linked chains, demonstrated by the fact that the latter can induce proteasomal degradation of the substrate. Function The ubiquitylation system functions in a wide variety of cellular processes, including: Antigen processing Apoptosis Biogenesis of organelles Cell cycle and division DNA transcription and repair Differentiation and development Immune response and inflammation Neural and muscular degeneration Maintenance of pluripotency Morphogenesis of neural networks Modulation of cell surface receptors, ion channels and the secretory pathway Response to stress and extracellular modulators Ribosome biogenesis Viral infection Membrane proteins Multi-monoubiquitylation can mark transmembrane proteins (for example, receptors) for removal from membranes (internalisation) and fulfil several signalling roles within the cell. When cell-surface transmembrane molecules are tagged with ubiquitin, the subcellular localization of the protein is altered, often targeting the protein for destruction in lysosomes. This serves as a negative feedback mechanism, because often the stimulation of receptors by ligands increases their rate of ubiquitylation and internalisation. Like monoubiquitylation, lysine 63-linked polyubiquitin chains also has a role in the trafficking some membrane proteins. Genomic maintenance Proliferating cell nuclear antigen (PCNA) is a protein involved in DNA synthesis. Under normal physiological conditions PCNA is sumoylated (a similar post-translational modification to ubiquitylation). When DNA is damaged by ultra-violet radiation or chemicals, the SUMO molecule that is attached to a lysine residue is replaced by ubiquitin. Monoubiquitylated PCNA recruits polymerases that can carry out DNA synthesis with damaged DNA; but this is very error-prone, possibly resulting in the synthesis of mutated DNA. Lysine 63-linked polyubiquitylation of PCNA allows it to perform a less error-prone mutation bypass known by the template switching pathway. Ubiquitylation of histone H2AX is involved in DNA damage recognition of DNA double-strand breaks. Lysine 63-linked polyubiquitin chains are formed on H2AX histone by the E2/E3 ligase pair, Ubc13-Mms2/RNF168. This K63 chain appears to recruit RAP80, which contains a UIM, and RAP80 then helps localize BRCA1. This pathway will eventually recruit the necessary proteins for homologous recombination repair. Transcriptional regulation Histones can be ubiquitinated, usually in the form of monoubiquitylation, although polyubiquitylated forms do occur. Histone ubiquitylation alters chromatin structure and allows the access of enzymes involved in transcription. Ubiquitin on histones also acts as a binding site for proteins that either activate or inhibit transcription and also can induce further post-translational modifications of the protein. 
These effects can all modulate the transcription of genes. Deubiquitination Deubiquitinating enzymes (deubiquitinases; DUBs) oppose the role of ubiquitylation by removing ubiquitin from substrate proteins. They are cysteine proteases that cleave the amide bond between the two proteins. They are highly specific, as are the E3 ligases that attach the ubiquitin, with only a few substrates per enzyme. They can cleave both isopeptide (between ubiquitin and lysine) and peptide bonds (between ubiquitin and the N-terminus). In addition to removing ubiquitin from substrate proteins, DUBs have many other roles within the cell. Ubiquitin is either expressed as multiple copies joined in a chain (polyubiquitin) or attached to ribosomal subunits. DUBs cleave these proteins to produce active ubiquitin. They also recycle ubiquitin that has been bound to small nucleophilic molecules during the ubiquitylation process. Monoubiquitin is formed by DUBs that cleave ubiquitin from free polyubiquitin chains that have been previously removed from proteins. Ubiquitin-binding domains Ubiquitin-binding domains (UBDs) are modular protein domains that non-covalently bind to ubiquitin, these motifs control various cellular events. Detailed molecular structures are known for a number of UBDs, binding specificity determines their mechanism of action and regulation, and how it regulates cellular proteins and processes. Disease associations Pathogenesis The ubiquitin pathway has been implicated in the pathogenesis of a wide range of diseases and disorders, including: Neurodegeneration Infection and immunity Genetic disorders Cancer Neurodegeneration Ubiquitin is implicated in neurodegenerative diseases associated with proteostasis dysfunction, including Alzheimer's disease, motor neuron disease, Huntington's disease and Parkinson's disease. Transcript variants encoding different isoforms of ubiquilin-1 are found in lesions associated with Alzheimer's and Parkinson's disease. Higher levels of ubiquilin in the brain have been shown to decrease malformation of amyloid precursor protein (APP), which plays a key role in triggering Alzheimer's disease. Conversely, lower levels of ubiquilin-1 in the brain have been associated with increased malformation of APP. A frameshift mutation in ubiquitin B can result in a truncated peptide missing the C-terminal glycine. This abnormal peptide, known as UBB+1, has been shown to accumulate selectively in Alzheimer's disease and other tauopathies. Infection and immunity Ubiquitin and ubiquitin-like molecules extensively regulate immune signal transduction pathways at virtually all stages, including steady-state repression, activation during infection, and attenuation upon clearance. Without this regulation, immune activation against pathogens may be defective, resulting in chronic disease or death. Alternatively, the immune system may become hyperactivated and organs and tissues may be subjected to autoimmune damage. On the other hand, viruses must block or redirect host cell processes including immunity to effectively replicate, yet many viruses relevant to disease have informationally limited genomes. Because of its very large number of roles in the cell, manipulating the ubiquitin system represents an efficient way for such viruses to block, subvert or redirect critical host cell processes to support their own replication. The retinoic acid-inducible gene I (RIG-I) protein is a primary immune system sensor for viral and other invasive RNA in human cells. 
The RIG-I-like receptor (RLR) immune signaling pathway is one of the most extensively studied in terms of the role of ubiquitin in immune regulation. Genetic disorders Angelman syndrome is caused by a disruption of UBE3A, which encodes a ubiquitin ligase (E3) enzyme termed E6-AP. Von Hippel–Lindau syndrome involves disruption of a ubiquitin E3 ligase termed the VHL tumor suppressor, or VHL gene. Fanconi anemia: Eight of the thirteen identified genes whose disruption can cause this disease encode proteins that form a large ubiquitin ligase (E3) complex. 3-M syndrome is an autosomal-recessive growth retardation disorder associated with mutations of the Cullin7 E3 ubiquitin ligase. Diagnostic use Immunohistochemistry using antibodies to ubiquitin can identify abnormal accumulations of this protein inside cells, indicating a disease process. These protein accumulations are referred to as inclusion bodies (which is a general term for any microscopically visible collection of abnormal material in a cell). Examples include: Neurofibrillary tangles in Alzheimer's disease Lewy body in Parkinson's disease Pick bodies in Pick's disease Inclusions in motor neuron disease and Huntington's disease Mallory bodies in alcoholic liver disease Rosenthal fibers in astrocytes Link to cancer Post-translational modification of proteins is a generally used mechanism in eukaryotic cell signaling. Ubiquitylation, ubiquitin conjugation to proteins, is a crucial process for cell cycle progression and cell proliferation and development. Although ubiquitylation usually serves as a signal for protein degradation through the 26S proteasome, it could also serve for other fundamental cellular processes, in endocytosis, enzymatic activation and DNA repair. Moreover, since ubiquitylation functions to tightly regulate the cellular level of cyclins, its misregulation is expected to have severe impacts. First evidence of the importance of the ubiquitin/proteasome pathway in oncogenic processes was observed due to the high antitumor activity of proteasome inhibitors. Various studies have shown that defects or alterations in ubiquitylation processes are commonly associated with or present in human carcinoma. Malignancies could be developed through loss of function mutation directly at the tumor suppressor gene, increased activity of ubiquitylation, and/or indirect attenuation of ubiquitylation due to mutation in related proteins. Direct loss of function mutation of E3 ubiquitin ligase Renal cell carcinoma The VHL (Von Hippel–Lindau) gene encodes a component of an E3 ubiquitin ligase. VHL complex targets a member of the hypoxia-inducible transcription factor family (HIF) for degradation by interacting with the oxygen-dependent destruction domain under normoxic conditions. HIF activates downstream targets such as the vascular endothelial growth factor (VEGF), promoting angiogenesis. Mutations in VHL prevent degradation of HIF and thus lead to the formation of hypervascular lesions and renal tumors. Breast cancer The BRCA1 gene is another tumor suppressor gene in humans which encodes the BRCA1 protein that is involved in response to DNA damage. The protein contains a RING motif with E3 Ubiquitin Ligase activity. BRCA1 could form dimer with other molecules, such as BARD1 and BAP1, for its ubiquitylation activity. Mutations that affect the ligase function are often found and associated with various cancers. 
Cyclin E As processes in cell cycle progression are the most fundamental processes for cellular growth and differentiation, and are the most commonly altered in human carcinomas, cell cycle-regulatory proteins are expected to be under tight regulation. The level of cyclins, as the name suggests, is high only at certain points during the cell cycle. This is achieved by continuous control of cyclin and CDK levels through ubiquitylation and degradation. When cyclin E is partnered with CDK2 and gets phosphorylated, an SCF-associated F-box protein, Fbw7, recognizes the complex and targets it for degradation. Mutations in Fbw7 have been found in more than 30% of human tumors, characterizing it as a tumor suppressor protein. Increased ubiquitination activity Cervical cancer Oncogenic types of the human papillomavirus (HPV) are known to hijack the cellular ubiquitin-proteasome pathway for viral infection and replication. The E6 proteins of HPV bind to the N-terminus of the cellular E6-AP E3 ubiquitin ligase, redirecting the complex to bind p53, a well-known tumor suppressor gene whose inactivation is found in many types of cancer. Thus, p53 undergoes ubiquitylation and proteasome-mediated degradation. Meanwhile, E7, another of the early-expressed HPV genes, binds to Rb, also a tumor suppressor gene, mediating its degradation. The loss of p53 and Rb in cells allows limitless cell proliferation to occur. p53 regulation Gene amplification often occurs in tumors, as in the case of MDM2, a gene that encodes a RING E3 ubiquitin ligase responsible for downregulation of p53 activity. MDM2 targets p53 for ubiquitylation and proteasomal degradation, thus keeping its level appropriate for normal cellular conditions. Overexpression of MDM2 causes loss of p53 activity and thereby allows cells to acquire a limitless replicative potential. p27 Another gene that is a target of gene amplification is SKP2. SKP2 is an F-box protein with a role in substrate recognition for ubiquitylation and degradation. SKP2 targets p27Kip-1, an inhibitor of cyclin-dependent kinases (CDKs). CDK2 and CDK4 partner with cyclins E and D, respectively, forming a family of cell cycle regulators which control cell cycle progression through the G1 phase. Low levels of p27Kip-1 protein are often found in various cancers, owing to overactivation of ubiquitin-mediated proteolysis through overexpression of SKP2. Efp Efp, or estrogen-inducible RING-finger protein, is an E3 ubiquitin ligase whose overexpression has been shown to be the major cause of estrogen-independent breast cancer. Efp's substrate is the 14-3-3 protein, which negatively regulates the cell cycle. Evasion of ubiquitination Colorectal cancer The gene associated with colorectal cancer is adenomatous polyposis coli (APC), a classic tumor suppressor gene. The APC gene product targets beta-catenin for degradation via ubiquitylation at the N-terminus, thus regulating its cellular level. Most colorectal cancer cases carry mutations in the APC gene. However, in cases where the APC gene is not mutated, mutations are found in the N-terminus of beta-catenin which prevent its ubiquitylation and thus increase its activity. Glioblastoma In glioblastoma, the most aggressive cancer originating in the brain, mutations found in patients are related to the deletion of a part of the extracellular domain of the epidermal growth factor receptor (EGFR). 
This deletion leaves the CBL E3 ligase unable to bind the receptor and target it for recycling and degradation via the ubiquitin-lysosomal pathway. Thus, EGFR is constitutively active in the cell membrane and activates its downstream effectors that are involved in cell proliferation and migration. Phosphorylation-dependent ubiquitylation The interplay between ubiquitylation and phosphorylation has been an ongoing research interest, since phosphorylation often serves as a marker for the ubiquitylation that leads to degradation. Moreover, ubiquitylation can also act to turn the kinase activity of a protein on or off. The critical role of phosphorylation is largely underscored in the activation and removal of autoinhibition in the Cbl protein. Cbl is an E3 ubiquitin ligase with a RING finger domain that interacts with its tyrosine kinase binding (TKB) domain, preventing interaction of the RING domain with an E2 ubiquitin-conjugating enzyme. This intramolecular interaction is an autoinhibitory mechanism that prevents Cbl from acting as a negative regulator of various growth factor and tyrosine kinase signaling pathways and of T-cell activation. Phosphorylation of Y363 relieves the autoinhibition and enhances binding to the E2. Mutations that render the Cbl protein dysfunctional through loss of its ligase/tumor suppressor function while maintaining its positive signaling/oncogenic function have been shown to cause the development of cancer. As a drug target Screening for ubiquitin ligase substrates Deregulation of E3-substrate interactions is a key cause of many human disorders; therefore, identifying E3 ligase substrates is crucial. In 2008, 'Global Protein Stability (GPS) Profiling' was developed to discover E3 ubiquitin ligase substrates. This high-throughput system made use of reporter proteins fused independently to thousands of potential substrates. When ligase activity was inhibited (by making Cul1 dominant-negative, so that ubiquitylation cannot occur), increased reporter activity showed that the identified substrates were accumulating. This approach added a large number of new substrates to the list of E3 ligase substrates. Possible therapeutic applications Blocking of specific substrate recognition by the E3 ligases, e.g. bortezomib. Challenge Finding a specific molecule that selectively inhibits the activity of a certain E3 ligase and/or the protein–protein interactions implicated in a disease remains an important and expanding research area. Moreover, as ubiquitination is a multi-step process with various players and intermediate forms, the complex interactions between its components need to be considered carefully when designing small-molecule inhibitors. Similar proteins Ubiquitin is the best-understood post-translational modifier; however, several families of ubiquitin-like proteins (UBLs) can modify cellular targets in a parallel but distinct route. Known UBLs include: small ubiquitin-like modifier (SUMO), ubiquitin cross-reactive protein (UCRP, also known as interferon-stimulated gene-15, ISG15), ubiquitin-related modifier-1 (URM1), neuronal-precursor-cell-expressed developmentally downregulated protein-8 (NEDD8, also called Rub1 in S. cerevisiae), human leukocyte antigen F-associated (FAT10), autophagy-8 (ATG8) and -12 (ATG12), Few ubiquitin-like protein (FUB1), MUB (membrane-anchored UBL), ubiquitin fold-modifier-1 (UFM1) and ubiquitin-like protein-5 (UBL5, which is known as homologous to ubiquitin-1 [Hub1] in S. pombe). 
Although these proteins share only modest primary sequence identity with ubiquitin, they are closely related three-dimensionally. For example, SUMO shares only 18% sequence identity with ubiquitin, but the two contain the same structural fold. This fold is called the "ubiquitin fold"; FAT10 and UCRP contain two such folds. This compact globular beta-grasp fold is found in ubiquitin, UBLs, and proteins that comprise a ubiquitin-like domain, e.g. the S. cerevisiae spindle pole body duplication protein Dsk2 and the NER protein Rad23, which both contain N-terminal ubiquitin domains. These related molecules have novel functions and influence diverse biological processes. There is also cross-regulation between the various conjugation pathways, since some proteins can become modified by more than one UBL, and sometimes even at the same lysine residue. For instance, SUMO modification often acts antagonistically to ubiquitination and serves to stabilize protein substrates. Proteins conjugated to UBLs are typically not targeted for degradation by the proteasome but rather function in diverse regulatory activities. Attachment of UBLs might alter substrate conformation, affect the affinity for ligands or other interacting molecules, alter substrate localization, and influence protein stability. UBLs are structurally similar to ubiquitin and are processed, activated, conjugated, and released from conjugates by enzymatic steps that are similar to the corresponding mechanisms for ubiquitin. UBLs are also translated with C-terminal extensions that are processed to expose the invariant C-terminal LRGG. These modifiers have their own specific E1 (activating), E2 (conjugating) and E3 (ligating) enzymes that conjugate the UBLs to intracellular targets. These conjugates can be reversed by UBL-specific isopeptidases that have mechanisms similar to those of the deubiquitinating enzymes. In some species, a mechanism involving ubiquitin is responsible for the recognition and destruction of sperm mitochondria after fertilization. Prokaryotic origins Ubiquitin is believed to have descended from bacterial proteins similar to ThiS or MoaD. These prokaryotic proteins, despite having little sequence identity (ThiS has 14% identity to ubiquitin), share the same protein fold. These proteins also share sulfur chemistry with ubiquitin. MoaD, which is involved in molybdopterin biosynthesis, interacts with MoeB, which acts like an E1 ubiquitin-activating enzyme for MoaD, strengthening the link between these prokaryotic proteins and the ubiquitin system. A similar system exists for ThiS, with its E1-like enzyme ThiF. It is also believed that the Saccharomyces cerevisiae protein Urm1, a ubiquitin-related modifier, is a "molecular fossil" that connects prokaryotic ubiquitin-like molecules and ubiquitin evolutionarily. Archaea have a functionally closer homolog of the ubiquitin modification system, in which "sampylation" with SAMPs (small archaeal modifier proteins) is performed. The sampylation system uses only an E1 to guide proteins to the proteasome. Proteoarchaeota, which are related to the ancestor of eukaryotes, possess all of the E1, E2, and E3 enzymes plus a regulated Rpn11 system. Unlike SAMPs, which are more similar to ThiS or MoaD, Proteoarchaeota ubiquitin is most similar to its eukaryotic homologs. 
Prokaryotic ubiquitin-like protein (Pup) and ubiquitin bacterial (UBact) Prokaryotic ubiquitin-like protein (Pup) is a functional analog of ubiquitin which has been found in the gram-positive bacterial phylum Actinomycetota. It serves the same function (targeting proteins for degradation), although the enzymology of ubiquitylation and pupylation is different and the two families share no homology. In contrast to the three-step reaction of ubiquitylation, pupylation requires two steps; therefore, only two enzymes are involved in pupylation. In 2017, homologs of Pup were reported in five phyla of gram-negative bacteria, in seven candidate bacterial phyla and in one archaeon. The sequences of the Pup homologs are very different from the sequences of Pup in gram-positive bacteria and were termed Ubiquitin bacterial (UBact), although the distinction has not yet been shown to be phylogenetically supported by a separate evolutionary origin, and it is without experimental evidence. The finding of the Pup/UBact-proteasome system in both gram-positive and gram-negative bacteria suggests either that the Pup/UBact-proteasome system evolved in bacteria prior to the split into gram-positive and gram-negative clades over 3000 million years ago, or that these systems were acquired by different bacterial lineages through horizontal gene transfer(s) from a third, yet unknown, organism. In support of the second possibility, two UBact loci were found in the genome of an uncultured anaerobic methanotrophic archaeon (ANME-1; locus CBH38808.1 and locus CBH39258.1). Human proteins containing the ubiquitin domain These include ubiquitin-like proteins. ANUBL1; BAG1; BAT3/BAG6; C1orf131; DDI1; DDI2; FAU; HERPUD1; HERPUD2; HOPS; IKBKB; ISG15; LOC391257; MIDN; NEDD8; OASL; PARK2; RAD23A; RAD23B; RPS27A; SACS; SF3A1; SUMO1; SUMO2; SUMO3; SUMO4; TMUB1; TMUB2; UBA52; UBB; UBC; UBD; UBFD1; UBL4A; UBL4B; UBL7; UBLCP1; UBQLN1; UBQLN2; UBQLN3; UBQLN4; UBQLNL; UBTD1; UBTD2; UHRF1; UHRF2; Related proteins Ubiquitin-associated protein domain Prediction of ubiquitination Currently available prediction programs are: UbiPred is an SVM-based prediction server using 31 physicochemical properties for predicting ubiquitylation sites. UbPred is a random forest-based predictor of potential ubiquitination sites in proteins. It was trained on a combined set of 266 non-redundant experimentally verified ubiquitination sites available from its developers' experiments and from two large-scale proteomics studies. CKSAAP_UbSite is an SVM-based predictor that employs the composition of k-spaced amino acid pairs surrounding a query site (i.e. any lysine in a query sequence) as input; it uses the same dataset as UbPred. See also Autophagy Autophagin Endoplasmic-reticulum-associated protein degradation JUNQ and IPOD Prokaryotic ubiquitin-like protein SUMO enzymes References External links GeneReviews/NCBI/NIH/UW entry on Angelman syndrome OMIM entries on Angelman syndrome UniProt entry for ubiquitin Notes from MIT course. Proteins Post-translational modification Protein structure
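As a concrete supplement to the prediction tools listed above, the sketch below computes a CKSAAP-style encoding (composition of k-spaced amino acid pairs) in a window around a candidate lysine. It is a schematic re-implementation of the idea rather than the published CKSAAP_UbSite code: the window size, the range of spacings k, and the omission of the downstream SVM are all simplifying assumptions, and the example peptide is simply the N-terminal stretch of ubiquitin itself.

# Schematic CKSAAP-style encoding around a lysine (K) site; not the published tool.
from collections import Counter
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def cksaap_features(sequence, site, window=10, max_k=3):
    """Frequencies of k-spaced residue pairs in a +/-window region centred on `site` (0-based index of a K)."""
    assert sequence[site] == "K", "site must point at a lysine"
    region = sequence[max(0, site - window): site + window + 1]
    features = {}
    for k in range(max_k + 1):
        pairs = Counter((region[i], region[i + k + 1]) for i in range(len(region) - k - 1))
        total = max(sum(pairs.values()), 1)
        for a, b in product(AMINO_ACIDS, repeat=2):
            features[f"{a}{'.' * k}{b}"] = pairs.get((a, b), 0) / total
    return features

peptide = "MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEG"   # N-terminal residues of ubiquitin
feats = cksaap_features(peptide, site=5)          # K6 of ubiquitin (0-based index 5)
print(len(feats))              # 1600 = 400 residue pairs x 4 spacings (k = 0..3)
print(round(feats["KT"], 3))   # frequency of adjacent K-T pairs in the window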
Ubiquitin
[ "Chemistry" ]
9,175
[ "Biomolecules by chemical classification", "Gene expression", "Biochemical reactions", "Post-translational modification", "Structural biology", "Molecular biology", "Proteins", "Protein structure" ]
32,245
https://en.wikipedia.org/wiki/Universal%20property
In mathematics, more specifically in category theory, a universal property is a property that characterizes up to an isomorphism the result of some constructions. Thus, universal properties can be used for defining some objects independently from the method chosen for constructing them. For example, the definitions of the integers from the natural numbers, of the rational numbers from the integers, of the real numbers from the rational numbers, and of polynomial rings from the field of their coefficients can all be done in terms of universal properties. In particular, the concept of universal property allows a simple proof that all constructions of real numbers are equivalent: it suffices to prove that they satisfy the same universal property. Technically, a universal property is defined in terms of categories and functors by means of a universal morphism (see , below). Universal morphisms can also be thought more abstractly as initial or terminal objects of a comma category (see , below). Universal properties occur almost everywhere in mathematics, and the use of the concept allows the use of general properties of universal properties for easily proving some properties that would need boring verifications otherwise. For example, given a commutative ring , the field of fractions of the quotient ring of by a prime ideal can be identified with the residue field of the localization of at ; that is (all these constructions can be defined by universal properties). Other objects that can be defined by universal properties include: all free objects, direct products and direct sums, free groups, free lattices, Grothendieck group, completion of a metric space, completion of a ring, Dedekind–MacNeille completion, product topologies, Stone–Čech compactification, tensor products, inverse limit and direct limit, kernels and cokernels, quotient groups, quotient vector spaces, and other quotient spaces. Motivation Before giving a formal definition of universal properties, we offer some motivation for studying such constructions. The concrete details of a given construction may be messy, but if the construction satisfies a universal property, one can forget all those details: all there is to know about the construction is already contained in the universal property. Proofs often become short and elegant if the universal property is used rather than the concrete details. For example, the tensor algebra of a vector space is slightly complicated to construct, but much easier to deal with by its universal property. Universal properties define objects uniquely up to a unique isomorphism. Therefore, one strategy to prove that two objects are isomorphic is to show that they satisfy the same universal property. Universal constructions are functorial in nature: if one can carry out the construction for every object in a category C then one obtains a functor on C. Furthermore, this functor is a right or left adjoint to the functor U used in the definition of the universal property. Universal properties occur everywhere in mathematics. By understanding their abstract properties, one obtains information about all these constructions and can avoid repeating the same analysis for each individual instance. Formal definition To understand the definition of a universal construction, it is important to look at examples. Universal constructions were not defined out of thin air, but were rather defined after mathematicians began noticing a pattern in many mathematical constructions (see Examples below). 
Hence, the definition may not make sense to one at first, but will become clear when one reconciles it with concrete examples. Let be a functor between categories and . In what follows, let be an object of , and be objects of , and be a morphism in . Then, the functor maps , and in to , and in . A universal morphism from to is a unique pair in which has the following property, commonly referred to as a universal property: For any morphism of the form in , there exists a unique morphism in such that the following diagram commutes: We can dualize this categorical concept. A universal morphism from to is a unique pair that satisfies the following universal property: For any morphism of the form in , there exists a unique morphism in such that the following diagram commutes: Note that in each definition, the arrows are reversed. Both definitions are necessary to describe universal constructions which appear in mathematics; but they also arise due to the inherent duality present in category theory. In either case, we say that the pair which behaves as above satisfies a universal property. Connection with comma categories Universal morphisms can be described more concisely as initial and terminal objects in a comma category (i.e. one where morphisms are seen as objects in their own right). Let be a functor and an object of . Then recall that the comma category is the category where Objects are pairs of the form , where is an object in A morphism from to is given by a morphism in such that the diagram commutes: Now suppose that the object in is initial. Then for every object , there exists a unique morphism such that the following diagram commutes. Note that the equality here simply means the diagrams are the same. Also note that the diagram on the right side of the equality is the exact same as the one offered in defining a universal morphism from to . Therefore, we see that a universal morphism from to is equivalent to an initial object in the comma category . Conversely, recall that the comma category is the category where Objects are pairs of the form where is an object in A morphism from to is given by a morphism in such that the diagram commutes: Suppose is a terminal object in . Then for every object , there exists a unique morphism such that the following diagrams commute. The diagram on the right side of the equality is the same diagram pictured when defining a universal morphism from to . Hence, a universal morphism from to corresponds with a terminal object in the comma category . Examples Below are a few examples, to highlight the general idea. The reader can construct numerous other examples by consulting the articles mentioned in the introduction. Tensor algebras Let be the category of vector spaces -Vect over a field and let be the category of algebras -Alg over (assumed to be unital and associative). Let : -Alg → -Vect be the forgetful functor which assigns to each algebra its underlying vector space. Given any vector space over we can construct the tensor algebra . The tensor algebra is characterized by the fact: “Any linear map from to an algebra can be uniquely extended to an algebra homomorphism from to .” This statement is an initial property of the tensor algebra since it expresses the fact that the pair , where is the inclusion map, is a universal morphism from the vector space to the functor . Since this construction works for any vector space , we conclude that is a functor from -Vect to -Alg. 
This means that is left adjoint to the forgetful functor (see the section below on relation to adjoint functors). Products A categorical product can be characterized by a universal construction. For concreteness, one may consider the Cartesian product in Set, the direct product in Grp, or the product topology in Top, where products exist. Let and be objects of a category with finite products. The product of and is an object × together with two morphisms : : such that for any other object of and morphisms and there exists a unique morphism such that and . To understand this characterization as a universal property, take the category to be the product category and define the diagonal functor by and . Then is a universal morphism from to the object of : if is any morphism from to , then it must equal a morphism from to followed by . As a commutative diagram: For the example of the Cartesian product in Set, the morphism comprises the two projections and . Given any set and functions the unique map such that the required diagram commutes is given by . Limits and colimits Categorical products are a particular kind of limit in category theory. One can generalize the above example to arbitrary limits and colimits. Let and be categories with a small index category and let be the corresponding functor category. The diagonal functor is the functor that maps each object in to the constant functor (i.e. for each in and for each in ) and each morphism in to the natural transformation in defined as, for every object of , the component at . In other words, the natural transformation is the one defined by having constant component for every object of . Given a functor (thought of as an object in ), the limit of , if it exists, is nothing but a universal morphism from to . Dually, the colimit of is a universal morphism from to . Properties Existence and uniqueness Defining a quantity does not guarantee its existence. Given a functor and an object of , there may or may not exist a universal morphism from to . If, however, a universal morphism does exist, then it is essentially unique. Specifically, it is unique up to a unique isomorphism: if is another pair, then there exists a unique isomorphism such that . This is easily seen by substituting in the definition of a universal morphism. It is the pair which is essentially unique in this fashion. The object itself is only unique up to isomorphism. Indeed, if is a universal morphism and is any isomorphism then the pair , where is also a universal morphism. Equivalent formulations The definition of a universal morphism can be rephrased in a variety of ways. Let be a functor and let be an object of . Then the following statements are equivalent: is a universal morphism from to is an initial object of the comma category is a representation of , where its components are defined by for each object in The dual statements are also equivalent: is a universal morphism from to is a terminal object of the comma category is a representation of , where its components are defined by for each object in Relation to adjoint functors Suppose is a universal morphism from to and is a universal morphism from to . By the universal property of universal morphisms, given any morphism there exists a unique morphism such that the following diagram commutes: If every object of admits a universal morphism to , then the assignment and defines a functor . The maps then define a natural transformation from (the identity functor on ) to . 
The functors are then a pair of adjoint functors, with left-adjoint to and right-adjoint to . Similar statements apply to the dual situation of terminal morphisms from . If such morphisms exist for every in one obtains a functor which is right-adjoint to (so is left-adjoint to ). Indeed, all pairs of adjoint functors arise from universal constructions in this manner. Let and be a pair of adjoint functors with unit and co-unit (see the article on adjoint functors for the definitions). Then we have a universal morphism for each object in and : For each object in , is a universal morphism from to . That is, for all there exists a unique for which the following diagrams commute. For each object in , is a universal morphism from to . That is, for all there exists a unique for which the following diagrams commute. Universal constructions are more general than adjoint functor pairs: a universal construction is like an optimization problem; it gives rise to an adjoint pair if and only if this problem has a solution for every object of (equivalently, every object of ). History Universal properties of various topological constructions were presented by Pierre Samuel in 1948. They were later used extensively by Bourbaki. The closely related concept of adjoint functors was introduced independently by Daniel Kan in 1958. See also Free object Natural transformation Adjoint functor Monad (category theory) Variety of algebras Cartesian closed category Notes References Paul Cohn, Universal Algebra (1981), D.Reidel Publishing, Holland. . Borceux, F. Handbook of Categorical Algebra: vol 1 Basic category theory (1994) Cambridge University Press, (Encyclopedia of Mathematics and its Applications) N. Bourbaki, Livre II : Algèbre (1970), Hermann, . Milies, César Polcino; Sehgal, Sudarshan K.. An introduction to group rings. Algebras and applications, Volume 1. Springer, 2002. Jacobson. Basic Algebra II. Dover. 2009. External links nLab, a wiki project on mathematics, physics and philosophy with emphasis on the n-categorical point of view André Joyal, CatLab, a wiki project dedicated to the exposition of categorical mathematics formal introduction to category theory. J. Adamek, H. Herrlich, G. Stecker, Abstract and Concrete Categories-The Joy of Cats Stanford Encyclopedia of Philosophy: "Category Theory"—by Jean-Pierre Marquis. Extensive bibliography. List of academic conferences on category theory Baez, John, 1996,"The Tale of n-categories." An informal introduction to higher order categories. WildCats is a category theory package for Mathematica. Manipulation and visualization of objects, morphisms, categories, functors, natural transformations, universal properties. The catsters, a YouTube channel about category theory. Video archive of recorded talks relevant to categories, logic and the foundations of physics. Interactive Web page which generates examples of categorical constructions in the category of finite sets. Category theory Property
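Because the inline formulas did not survive in this copy of the text, the definition and the product example can be restated in standard notation; the letters F, C, D, X, A, u below are the conventional choices rather than symbols taken from this text.

Let $F\colon \mathcal{C}\to\mathcal{D}$ be a functor and let $X$ be an object of $\mathcal{D}$. A universal morphism from $X$ to $F$ is a pair $(A,\, u\colon X\to F(A))$ such that for every object $A'$ of $\mathcal{C}$ and every morphism $f\colon X\to F(A')$ in $\mathcal{D}$ there is a unique morphism $h\colon A\to A'$ in $\mathcal{C}$ with
\[ F(h)\circ u = f. \]
Dually, a universal morphism from $F$ to $X$ is a pair $(A,\, u\colon F(A)\to X)$ such that for every $f\colon F(A')\to X$ there is a unique $h\colon A'\to A$ with $u\circ F(h) = f$. For the product example, with the diagonal functor $\Delta\colon\mathcal{C}\to\mathcal{C}\times\mathcal{C}$, $\Delta(N)=(N,N)$, a universal morphism from $\Delta$ to the object $(X,Y)$ is the product $X\times Y$ together with the projections $(\pi_1,\pi_2)$: for any $f\colon N\to X$ and $g\colon N\to Y$ there is a unique $h\colon N\to X\times Y$ with $\pi_1\circ h=f$ and $\pi_2\circ h=g$.

The product case can also be checked mechanically for finite sets. The following Python sketch (with ad hoc names) builds the pairing map h(v) = (f(v), g(v)) for one arbitrary choice of f and g and verifies both the commuting conditions and the uniqueness of h by exhaustive search:

# Universal property of the Cartesian product in the category of finite sets.
from itertools import product

X, Y, V = {0, 1}, {"a", "b"}, {1, 2, 3}
XxY = set(product(X, Y))
pi1 = lambda p: p[0]
pi2 = lambda p: p[1]

# Arbitrary test morphisms f : V -> X and g : V -> Y.
f = {1: 0, 2: 1, 3: 0}
g = {1: "a", 2: "a", 3: "b"}

# The mediating morphism h : V -> X x Y.
h = {v: (f[v], g[v]) for v in V}

# Commutativity: pi1 . h = f and pi2 . h = g.
assert all(pi1(h[v]) == f[v] and pi2(h[v]) == g[v] for v in V)

# Uniqueness: any map k : V -> X x Y satisfying the same conditions equals h.
all_maps = [dict(zip(sorted(V), values)) for values in product(XxY, repeat=len(V))]
candidates = [k for k in all_maps
              if all(pi1(k[v]) == f[v] and pi2(k[v]) == g[v] for v in V)]
assert candidates == [h]
print("universal property verified for this choice of f and g")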
Universal property
[ "Mathematics" ]
2,838
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory", "nan" ]
32,248
https://en.wikipedia.org/wiki/Uncountable%20set
In mathematics, an uncountable set, informally, is an infinite set that contains too many elements to be countable. The uncountability of a set is closely related to its cardinal number: a set is uncountable if its cardinal number is larger than aleph-null, the cardinality of the natural numbers. Characterizations There are many equivalent characterizations of uncountability. A set X is uncountable if and only if any of the following conditions hold: There is no injective function (hence no bijection) from X to the set of natural numbers. X is nonempty and for every ω-sequence of elements of X, there exists at least one element of X not included in it. That is, X is nonempty and there is no surjective function from the natural numbers to X. The cardinality of X is neither finite nor equal to ℵ0 (aleph-null). The set X has cardinality strictly greater than ℵ0. The first three of these characterizations can be proven equivalent in Zermelo–Fraenkel set theory without the axiom of choice, but the equivalence of the third and fourth cannot be proved without additional choice principles. Properties If an uncountable set X is a subset of set Y, then Y is uncountable. Examples The best known example of an uncountable set is the set ℝ of all real numbers; Cantor's diagonal argument shows that this set is uncountable. The diagonalization proof technique can also be used to show that several other sets are uncountable, such as the set of all infinite sequences of natural numbers, and the set of all subsets of the set of natural numbers. The cardinality of ℝ is often called the cardinality of the continuum, and denoted by 𝔠, or 2^ℵ0, or ℶ1 (beth-one). The Cantor set is an uncountable subset of ℝ. The Cantor set is a fractal and has Hausdorff dimension greater than zero but less than one (ℝ has dimension one). This is an example of the following fact: any subset of ℝ of Hausdorff dimension strictly greater than zero must be uncountable. Another example of an uncountable set is the set of all functions from ℝ to ℝ. This set is even "more uncountable" than ℝ in the sense that the cardinality of this set is ℶ2 (beth-two), which is larger than ℶ1. A more abstract example of an uncountable set is the set of all countable ordinal numbers, denoted by Ω or ω1. The cardinality of Ω is denoted ℵ1 (aleph-one). It can be shown, using the axiom of choice, that ℵ1 is the smallest uncountable cardinal number. Thus either ℶ1, the cardinality of the reals, is equal to ℵ1 or it is strictly larger. Georg Cantor was the first to propose the question of whether ℶ1 is equal to ℵ1. In 1900, David Hilbert posed this question as the first of his 23 problems. The statement that ℵ1 = ℶ1 is now called the continuum hypothesis, and is known to be independent of the Zermelo–Fraenkel axioms for set theory (including the axiom of choice). Without the axiom of choice Without the axiom of choice, there might exist cardinalities incomparable to ℵ1 (namely, the cardinalities of Dedekind-finite infinite sets). Sets of these cardinalities satisfy the first three characterizations above, but not the fourth characterization. Since these sets are not larger than the natural numbers in the sense of cardinality, some may not want to call them uncountable. If the axiom of choice holds, the following conditions on a cardinal κ are equivalent: κ ≰ ℵ0, κ > ℵ0, and κ ≥ ℵ1, where ℵ1 = |ω1| and ω1 is the least initial ordinal greater than ω. However, these may all be different if the axiom of choice fails. So it is not obvious which one is the appropriate generalization of "uncountability" when the axiom fails. 
It may be best to avoid using the word in this case and specify which of these one means. See also Aleph number Beth number First uncountable ordinal Injective function References Bibliography Halmos, Paul, Naive Set Theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974. (Springer-Verlag edition). Reprinted by Martino Fine Books, 2011. (Paperback edition). External links Proof that R is uncountable Basic concepts in infinite set theory Infinity Cardinal numbers
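The diagonal argument mentioned in the Examples section can be illustrated mechanically. The sketch below is only a finite illustration (it inspects the first few bits of a hypothetical enumeration, whose definition is invented for this example); the mathematical argument, of course, applies to all infinitely many positions at once.

# Sketch of Cantor's diagonal argument for infinite 0/1 sequences (illustrative only).
def diagonal_counterexample(enumeration, n_check):
    """Given enumeration(i) -> the i-th listed sequence (as a callable j -> bit),
    return the first n_check bits of a sequence that differs from every listed one."""
    return [1 - enumeration(i)(i) for i in range(n_check)]

# A hypothetical "enumeration": the i-th sequence is the binary expansion of i, padded with zeros.
def listed(i):
    bits = list(map(int, bin(i)[2:]))
    return lambda j: bits[j] if j < len(bits) else 0

d = diagonal_counterexample(listed, 8)
# d differs from the i-th listed sequence in position i, so it cannot appear in the list.
for i in range(8):
    assert d[i] != listed(i)(i)
print(d)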
Uncountable set
[ "Mathematics" ]
935
[ "Cardinal numbers", "Basic concepts in infinite set theory", "Mathematical objects", "Infinity", "Basic concepts in set theory", "Numbers" ]
32,313
https://en.wikipedia.org/wiki/Unobtainium
Unobtainium (or unobtanium) is a term used in fiction, engineering, and common situations for a material ideal for a particular application but impractically difficult or impossible to obtain. Unobtainium originally referred to materials that do not exist at all, but can also be used to describe real materials that are unavailable due to extreme rarity or cost. Less commonly, it can mean a device with desirable engineering properties for an application that are exceedingly difficult or impossible to achieve. The properties of any particular example of unobtainium depend on the intended use. For example, a pulley made of unobtainium might be massless and frictionless. But for a nuclear rocket, unobtainium might have the needed qualities of lightness, strength at high temperatures, and resistance to radiation damage; a combination of all three qualities is impossible with today's materials. The concept of unobtainium is often applied hand-wavingly, flippantly, or humorously. The word unobtainium derives humorously from unobtainable, with -ium, a suffix for chemical element names. It predates the similar-sounding systematic element names, such as ununennium, unbinilium, unbiunium, and unbiquadium. An alternative spelling, unobtanium, is sometimes used, by analogy to the names of real elements like titanium and uranium. Engineering origin Since the late 1950s, aerospace engineers have used the term "unobtainium" when referring to unusual or costly materials, or when theoretically considering a material perfect for their needs in all respects, except that it does not exist. By the 1990s, the term was in wide use, even in formal engineering papers such as "Towards unobtainium [new composite materials for space applications]." The term may well have been coined in the aerospace industry to refer to materials capable of withstanding the extreme temperatures expected in re-entry. Aerospace engineers are frequently tempted to design aircraft which require parts with strength or resilience beyond that of currently available materials. Later, unobtainium became an engineering term for practical materials that really exist, but are difficult to get. For example, during the development of the SR-71 Blackbird spy plane, Lockheed engineers at the "Skunk Works" under Clarence "Kelly" Johnson used unobtainium to refer to titanium. Titanium allowed a higher strength-to-weight ratio at the high temperatures the Blackbird would reach, but its availability was restricted because the Soviet Union controlled its supply. This created a problem for the U.S. during the Cold War because the Blackbird required huge amounts of titanium; subsequent U.S. military aircraft such as the B-1 Lancer, F-15 Eagle, F/A-18 Hornet, and F-22 Raptor required relatively large amounts of it as well. Contemporary popularization Unobtainium began to be used among people who are neither science fiction fans nor engineers to denote an object that actually exists, but which is very hard to obtain either because of high price (sometimes referred to as "unaffordium") or limited availability. It usually refers to a very high-end and desirable product. By the 1970s, the term had migrated from the aerospace industry to the Southern California automobile and motorcycle cultures and, began to appear in industry publications such as early advertisements for Oakley motorcycle handgrips. 
Other examples are rear cassettes in the mountain biking community, parts that are no longer available for old-car enthusiasts, parts for reel-to-reel audio-tape recorders, and rare vacuum tubes such as the 1L6 or WD-11 that can now cost more than the equipment in which they were fitted. The eyewear and fashion wear company Oakley, Inc. also frequently denotes the material used for many of their eyeglass nosepieces and earpieces, which has the unusual property of increasing tackiness and thus grip when wet, as unobtanium. By 2010, the term had been used in mainstream news reports to describe the commercially useful rare earth elements (particularly terbium, erbium, dysprosium, yttrium, and neodymium), which are essential to the performance of consumer electronics and green technology, but whose projected demand far outstrips their current supply. There have been repeated attempts to attribute the name to a real material. Space elevator research has long used "unobtainium" to describe a material with the necessary characteristics, but carbon nanotubes might have these characteristics. Science fiction Unobtainium was mentioned briefly in David Brin's 1983 book Startide Rising, as a material that could be used in making weapons and comprising 1% of the core of one of the exomoons of the Kthsemenee system. Unobtainium is briefly mentioned in Wil McCarthy's The Collapsium (2000), where a programmable quantum-technology material called "wellstone" can simulate any conceivable element, including "imaginary substances like unobtainium, impossibilium, and rainbow kryptonite". In the 2003 film The Core, "Unobtainium" is the nickname of a 37-syllable long tungsten-titanium crystal alloy developed by Dr. Edward "Braz" Brazzelton that is able to absorb the extreme pressure and heat of the Earth's molten core and then convert these into usable energy; it's used in building the super resistant outer shell of the ship Virgil. In the 2009 film Avatar, Unobtanium is the common name of a rare-earth mineral found exclusively on the exomoon Pandora, highly prized (and priced) because of its application as a powerful superconductor material. Because of its unusual magnetic properties, entire mountains with high concentrations of unobtanium levitate in the atmosphere of Pandora. Similar terms The term has been used to describe a material which has "eluded" attempts to develop it, with the variant spelling illudium derived from "illusion". This was mentioned in several Looney Tunes cartoons, where Marvin the Martian tried (unsuccessfully) to use his "Eludium Q-36 Explosive Space Modulator" to blow up the Earth. Another largely synonymous term is wishalloy, although the sense is often subtly different in that a wishalloy usually does not exist at all, whereas unobtainium may merely be unavailable. A similar conceptual material in alchemy is the philosopher's stone, a mythical substance with the ability to turn lead into gold, or bestow immortality and youth. While the search to find such a substance was not successful, it did lead to discovery of a new element: phosphorus. In architecture, the term renderite has been used to describe the use of unrealistic materials in concept renders. 
See also List of fictional elements, materials, isotopes and subatomic particles Materials science in science fiction Dysprosium, a real element whose name means "hard to get" Stuck with Hackett, a TV show which uses the term "obtainium" for found materials to be repurposed References External links World Wide Words — Unobtanium TV Tropes - Unobtainium Fictional materials Placeholder names Avatar (franchise)
Unobtainium
[ "Physics" ]
1,510
[ "Materials", "Fictional materials", "Matter" ]
32,344
https://en.wikipedia.org/wiki/Variance
In probability theory and statistics, variance is the expected value of the squared deviation from the mean of a random variable. The standard deviation (SD) is obtained as the square root of the variance. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. It is the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by , , , , or . An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. Another disadvantage is that the variance is not finite for many distributions. There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoretical probability distribution and is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real-world system. If all possible observations of the system are present, then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to calculate an estimate of the population variance, as discussed in the section below. The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Definition The variance of a random variable is the expected value of the squared deviation from the mean of , : This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself: The variance is also equivalent to the second cumulant of a probability distribution that generates . The variance is typically designated as , or sometimes as or , or symbolically as or simply (pronounced "sigma squared"). The expression for the variance can be expanded as follows: In other words, the variance of is equal to the mean of the square of minus the square of the mean of . This equation should not be used for computations using floating point arithmetic, because it suffers from catastrophic cancellation if the two components of the equation are similar in magnitude. For other numerically stable alternatives, see algorithms for calculating variance. 
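The warning about floating-point evaluation can be seen directly. The following sketch (the data values are arbitrary) compares the naive identity E[X²] − (E[X])² with the two-pass mean-squared-deviation computation on data that share a large common offset:

# Demonstration of catastrophic cancellation in the naive variance formula.
data = [1e9 + x for x in (4.0, 7.0, 13.0, 16.0)]   # true population variance is 22.5

n = len(data)
mean = sum(data) / n

# Naive one-pass formula: E[X^2] - (E[X])^2 -- mathematically correct,
# but both terms are about 1e18 and differ only in their last digits.
naive = sum(x * x for x in data) / n - mean * mean

# Two-pass formula: average squared deviation from the mean.
two_pass = sum((x - mean) ** 2 for x in data) / n

print(naive)     # typically far from 22.5 because the leading digits cancel
print(two_pass)  # 22.5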
Discrete random variable If the generator of random variable is discrete with probability mass function , then where is the expected value. That is, (When such a discrete weighted variance is specified by weights whose sum is not 1, then one divides by the sum of the weights.) The variance of a collection of equally likely values can be written as where is the average value. That is, The variance of a set of equally likely values can be equivalently expressed, without directly referring to the mean, in terms of squared deviations of all pairwise squared distances of points from each other: Absolutely continuous random variable If the random variable has a probability density function , and is the corresponding cumulative distribution function, then or equivalently, where is the expected value of given by In these formulas, the integrals with respect to and are Lebesgue and Lebesgue–Stieltjes integrals, respectively. If the function is Riemann-integrable on every finite interval then where the integral is an improper Riemann integral. Examples Exponential distribution The exponential distribution with parameter is a continuous distribution whose probability density function is given by on the interval . Its mean can be shown to be Using integration by parts and making use of the expected value already calculated, we have: Thus, the variance of is given by Fair dice A fair six-sided dice can be modeled as a discrete random variable, , with outcomes 1 through 6, each with equal probability 1/6. The expected value of is Therefore, the variance of is The general formula for the variance of the outcome, , of an die is Commonly used probability distributions The following table lists the variance for some commonly used probability distributions. Properties Basic properties Variance is non-negative because the squares are positive or zero: The variance of a constant is zero. Conversely, if the variance of a random variable is 0, then it is almost surely a constant. That is, it always has the same value: Issues of finiteness If a distribution does not have a finite expected value, as is the case for the Cauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is a Pareto distribution whose index satisfies Decomposition The general formula for variance decomposition or the law of total variance is: If and are two random variables, and the variance of exists, then The conditional expectation of given , and the conditional variance may be understood as follows. Given any particular value y of the random variable Y, there is a conditional expectation given the event Y = y. This quantity depends on the particular value y; it is a function . That same function evaluated at the random variable Y is the conditional expectation In particular, if is a discrete random variable assuming possible values with corresponding probabilities , then in the formula for total variance, the first term on the right-hand side becomes where . Similarly, the second term on the right-hand side becomes where and . Thus the total variance is given by A similar formula is applied in analysis of variance, where the corresponding formula is here refers to the Mean of the Squares. 
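The displayed decomposition referred to at the start of this subsection did not survive in this copy; written out in standard notation (with X the variable whose variance is decomposed and Y the conditioning variable, as in the passage above), the law of total variance reads:
\[
\operatorname{Var}(X) = \operatorname{E}\bigl[\operatorname{Var}(X\mid Y)\bigr] + \operatorname{Var}\bigl(\operatorname{E}[X\mid Y]\bigr),
\]
and, when Y is discrete with values $y_i$ and probabilities $p_i$, the first term is $\sum_i p_i\,\sigma_i^2$ with $\sigma_i^2=\operatorname{Var}(X\mid Y=y_i)$, and the second is $\sum_i p_i(\mu_i-\mu)^2$ with $\mu_i=\operatorname{E}[X\mid Y=y_i]$ and $\mu=\sum_i p_i\mu_i$, so that
\[
\operatorname{Var}(X) = \sum_i p_i\,\sigma_i^2 + \sum_i p_i(\mu_i-\mu)^2 .
\]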
In linear regression analysis the corresponding formula is This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated. Similar decompositions are possible for the sum of squared deviations (sum of squares, ): Calculation from the CDF The population variance for a non-negative random variable can be expressed in terms of the cumulative distribution function F using This expression can be used to calculate the variance in situations where the CDF, but not the density, can be conveniently expressed. Characteristic property The second moment of a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e. . Conversely, if a continuous function satisfies for all random variables X, then it is necessarily of the form , where . This also holds in the multidimensional case. Units of measurement Unlike the expected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via their standard deviation or root mean square deviation is often preferred over using the variance. In the dice example the standard deviation is , slightly larger than the expected absolute deviation of 1.5. The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution. Propagation Addition and multiplication by a constant Variance is invariant with respect to changes in a location parameter. That is, if a constant is added to all values of the variable, the variance is unchanged: If all values are scaled by a constant, the variance is scaled by the square of that constant: The variance of a sum of two random variables is given by where is the covariance. Linear combinations In general, for the sum of random variables , the variance becomes: see also general Bienaymé's identity. These results lead to the variance of a linear combination as: If the random variables are such that then they are said to be uncorrelated. It follows immediately from the expression given earlier that if the random variables are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically: Since independent random variables are always uncorrelated (see ), the equation above holds in particular when the random variables are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances. Matrix notation for the variance of a linear combination Define as a column vector of random variables , and as a column vector of scalars . Therefore, is a linear combination of these random variables, where denotes the transpose of . Also let be the covariance matrix of . 
The variance of is then given by: This implies that the variance of the mean can be written as (with a column vector of ones) Sum of variables Sum of uncorrelated variables One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances: This statement is called the Bienaymé formula and was discovered in 1853. It is often made with the stronger condition that the variables are independent, but being uncorrelated suffices. So if all the variables have the same variance σ2, then, since division by n is a linear transformation, this formula immediately implies that the variance of their mean is That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem. To prove the initial statement, it suffices to show that The general result then follows by induction. Starting with the definition, Using the linearity of the expectation operator and the assumption of independence (or uncorrelatedness) of X and Y, this further simplifies as follows: Sum of correlated variables Sum of correlated variables with fixed sample size In general, the variance of the sum of variables is the sum of their covariances: (Note: The second equality comes from the fact that .) Here, is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory of Cronbach's alpha in classical test theory. So, if the variables have equal variance σ2 and the average correlation of distinct variables is ρ, then the variance of their mean is This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing the uncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to This formula is used in the Spearman–Brown prediction formula of classical test theory. This converges to ρ if n goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables. Sum of uncorrelated variables with random sample size There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size is a random variable whose variation adds to the variation of , such that, which follows from the law of total variance. 
If has a Poisson distribution, then with estimator = . So, the estimator of becomes , giving (see standard error of the sample mean). Weighted sum of variables The scaling property and the Bienaymé formula, along with the property of the covariance jointly imply that This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionally large weight in the variance of the total. For example, if X and Y are uncorrelated and the weight of X is two times the weight of Y, then the weight of the variance of X will be four times the weight of the variance of Y. The expression above can be extended to a weighted sum of multiple variables: Product of variables Product of independent variables If two variables X and Y are independent, the variance of their product is given by Equivalently, using the basic properties of expectation, it is given by Product of statistically dependent variables In general, if two variables are statistically dependent, then the variance of their product is given by: Arbitrary functions The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables: see Taylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by provided that f is twice differentiable and that the mean and variance of X are finite. Population variance and sample variance Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that one estimates the mean and variance from a limited set of observations by using an estimator equation. The estimator is a function of the sample of n observations drawn without observational bias from the whole population of potential observations. In this example, the sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest. The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance – these are consistent estimators (they converge to the value of the whole population as the number of samples increases) but can be improved. Most simply, the sample variance is computed as the sum of squared deviations about the (sample) mean, divided by n as the number of samples. However, using values other than n improves the estimator in various ways. Four common values for the denominator are n, n − 1, n + 1, and n − 1.5: n is the simplest (the variance of the sample), n − 1 eliminates bias, n + 1 minimizes mean squared error for the normal distribution, and n − 1.5 mostly eliminates bias in unbiased estimation of standard deviation for the normal distribution. Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is a biased estimator: it underestimates the variance by a factor of (n − 1) / n; correcting this factor, resulting in the sum of squared deviations about the sample mean divided by n -1 instead of n, is called Bessel's correction. The resulting estimator is unbiased and is called the (corrected) sample variance or unbiased sample variance. 
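The size of this bias, and its removal by Bessel's correction, can be illustrated by simulation. The following sketch (Python with NumPy, an assumed third-party dependency; the sample size and trial count are arbitrary) draws many small samples from a distribution with unit variance and averages the two estimators.

```python
# Averaging the uncorrected sample variance (divide by n) underestimates the true
# variance by roughly the factor (n - 1)/n, while Bessel's correction (divide by
# n - 1) removes that bias on average.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
samples = rng.normal(loc=0.0, scale=1.0, size=(trials, n))   # true variance = 1

biased = samples.var(axis=1, ddof=0).mean()     # divides by n
unbiased = samples.var(axis=1, ddof=1).mean()   # divides by n - 1 (Bessel's correction)
print(biased, unbiased)                          # ≈ 0.8 (= (n-1)/n) and ≈ 1.0
```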
If the mean is determined in some other way than from the same samples used to estimate the variance, then this bias does not arise, and the variance can safely be estimated as that of the samples about the (independently known) mean. Secondly, the sample variance does not generally minimize mean squared error between sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the excess kurtosis of the population (see mean squared error: variance) and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than n − 1) and is a simple example of a shrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by n + 1 (instead of n − 1 or n) minimizes mean squared error. The resulting estimator is biased, however, and is known as the biased sample variation. Population variance In general, the population variance of a finite population of size N with values xi is given bywhere the population mean is and , where is the expectation value operator. The population variance can also be computed using (The right side has duplicate terms in the sum while the middle side has only unique terms to sum.) This is true becauseThe population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations. Sample variance In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population. This is generally referred to as sample variance or empirical variance. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution. We take a sample with replacement of n values Y1, ..., Yn from the population of size , where n < N, and estimate the variance on the basis of this sample. Directly taking the variance of the sample data gives the average of the squared deviations: (See the section Population variance for the derivation of this formula.) Here, denotes the sample mean: Since the Yi are selected randomly, both and are random variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples {Yi} of size n from the population. For this gives: Here derived in the section Population variance and due to independency of and are used. Hence gives an estimate of the population variance that is biased by a factor of as the expectation value of is smaller than the population variance (true variance) by that factor. For this reason, is referred to as the biased sample variance. Correcting for this bias yields the unbiased sample variance, denoted : Either estimator may be simply referred to as the sample variance when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution. The use of the term n − 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). 
The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n − 1.5 yields an almost unbiased estimator. The unbiased sample variance is a U-statistic for the function ƒ(y1, y2) = (y1 − y2)2/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population. Example For a set of numbers {10, 15, 30, 45, 57, 52 63, 72, 81, 93, 102, 105}, if this set is the whole data population for some measurement, then variance is the population variance 932.743 as the sum of the squared deviations about the mean of this set, divided by 12 as the number of the set members. If the set is a sample from the whole population, then the unbiased sample variance can be calculated as 1017.538 that is the sum of the squared deviations about the mean of the sample, divided by 11 instead of 12. A function VAR.S in Microsoft Excel gives the unbiased sample variance while VAR.P is for population variance. Distribution of the sample variance Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that Yi are independent observations from a normal distribution, Cochran's theorem shows that the unbiased sample variance S2 follows a scaled chi-squared distribution (see also: asymptotic properties and an elementary proof): where σ2 is the population variance. As a direct consequence, it follows that and If Yi are independent and identically distributed, but not necessarily normally distributed, then where κ is the kurtosis of the distribution and μ4 is the fourth central moment. If the conditions of the law of large numbers hold for the squared observations, S2 is a consistent estimator of σ2. One can see indeed that the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.). Samuelson's inequality Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated. Values must lie within the limits Relations with the harmonic and arithmetic means It has been shown that for a sample {yi} of positive real numbers, where ymax is the maximum of the sample, A is the arithmetic mean, H is the harmonic mean of the sample and is the (biased) variance of the sample. This bound has been improved, and it is known that variance is bounded by where ymin is the minimum of the sample. Tests of equality of variances The F-test of equality of variances and the chi square tests are adequate when the sample is normally distributed. Non-normality makes testing for the equality of two or more variances more difficult. Several non parametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, the Capon test, Mood test, the Klotz test and the Sukhatme test. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal. 
The Lehmann test is a parametric test of two variances. Of this test there are several variants known. Other tests of the equality of variances include the Box test, the Box–Anderson test and the Moses test. Resampling methods, which include the bootstrap and the jackknife, may be used to test the equality of variances. Moment of inertia The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions. The covariance matrix is related to the moment of inertia tensor for multivariate distributions. The moment of inertia of a cloud of n points with a covariance matrix of is given by This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it. The covariance matrix might look like That is, there is the most variance in the x direction. Physicists would consider this to have a low moment about the x axis so the moment-of-inertia tensor is Semivariance The semivariance is calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation:It is also described as a specific measure in different fields of application. For skewed distributions, the semivariance can provide additional information that a variance does not. For inequalities associated with the semivariance, see . Etymology The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance: The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations and , it is found that the distribution, when both causes act together, has a standard deviation . It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance... Generalizations For complex variables If is a scalar complex-valued random variable, with values in then its variance is where is the complex conjugate of This variance is a real scalar. For vector-valued random variables As a matrix If is a vector-valued random variable, with values in and thought of as a column vector, then a natural generalization of variance is where and is the transpose of and so is a row vector. The result is a positive semi-definite square matrix, commonly referred to as the variance-covariance matrix (or simply as the covariance matrix). If is a vector- and complex-valued random variable, with values in then the covariance matrix is where is the conjugate transpose of This matrix is also positive semi-definite and square. As a scalar Another generalization of variance for vector-valued random variables , which results in a scalar value rather than in a matrix, is the generalized variance , the determinant of the covariance matrix. 
The generalized variance can be shown to be related to the multidimensional scatter of points around their mean. A different generalization is obtained by considering the equation for the scalar variance, , and reinterpreting as the squared Euclidean distance between the random variable and its mean, or, simply as the scalar product of the vector with itself. This results in which is the trace of the covariance matrix. See also Bhatia–Davis inequality Coefficient of variation Homoscedasticity Least-squares spectral analysis for computing a frequency spectrum with spectral magnitudes in % of variance or in dB Modern portfolio theory Popoviciu's inequality on variances Measures for statistical dispersion Variance-stabilizing transformation Types of variance Correlation Distance variance Explained variance Pooled variance Pseudo-variance
Variance
[ "Physics", "Mathematics" ]
5,769
[ "Mathematical analysis", "Moments (mathematics)", "Physical quantities", "Articles containing proofs", "Moment (physics)" ]
32,370
https://en.wikipedia.org/wiki/Vector%20space
In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called vectors, can be added together and multiplied ("scaled") by numbers called scalars. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. Real vector spaces and complex vector spaces are kinds of vector spaces based on different kinds of scalars: real numbers and complex numbers. Scalars can also be, more generally, elements of any field. Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities (such as forces and velocity) that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrices, which allows computing in vector spaces. This provides a concise and synthetic way for manipulating and studying systems of linear equations. Vector spaces are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. This means that, for two vector spaces over a given field and with the same dimension, the properties that depend only on the vector-space structure are exactly the same (technically the vector spaces are isomorphic). A vector space is finite-dimensional if its dimension is a natural number. Otherwise, it is infinite-dimensional, and its dimension is an infinite cardinal. Finite-dimensional vector spaces occur naturally in geometry and related areas. Infinite-dimensional vector spaces occur in many areas of mathematics. For example, polynomial rings are countably infinite-dimensional vector spaces, and many function spaces have the cardinality of the continuum as a dimension. Many vector spaces that are considered in mathematics are also endowed with other structures. This is the case of algebras, which include field extensions, polynomial rings, associative algebras and Lie algebras. This is also the case of topological vector spaces, which include function spaces, inner product spaces, normed spaces, Hilbert spaces and Banach spaces. Definition and basic properties In this article, vectors are represented in boldface to distinguish them from scalars. A vector space over a field is a non-empty set  together with a binary operation and a binary function that satisfy the eight axioms listed below. In this context, the elements of are commonly called vectors, and the elements of  are called scalars. The binary operation, called vector addition or simply addition assigns to any two vectors  and in a third vector in which is commonly written as , and called the sum of these two vectors. The binary function, called scalar multiplication, assigns to any scalar  in and any vector  in another vector in , which is denoted . To have a vector space, the eight following axioms must be satisfied for every , and in , and and in . When the scalar field is the real numbers, the vector space is called a real vector space, and when the scalar field is the complex numbers, the vector space is called a complex vector space. These two cases are the most common ones, but vector spaces with scalars in an arbitrary field are also commonly considered. Such a vector space is called an vector space or a vector space over . 
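As a concrete illustration of the two operations, the short sketch below (Python; the specific vectors and scalar are arbitrary, and all values are chosen to be exactly representable so plain equality checks suffice) performs componentwise addition and scaling in R³ and spot-checks two of the axioms.

```python
# Componentwise vector addition and scalar multiplication in R^3,
# with a spot-check of commutativity and distributivity on concrete values.

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * a for a in v)

u, v = (1.0, -2.0, 0.5), (3.0, 4.0, -1.0)
c = 2.5

print(add(u, v))                                              # (4.0, 2.0, -0.5)
print(add(u, v) == add(v, u))                                 # True: commutativity of addition
print(scale(c, add(u, v)) == add(scale(c, u), scale(c, v)))   # True: distributivity over vector addition
```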
An equivalent definition of a vector space can be given, which is much more concise but less elementary: the first four axioms (related to vector addition) say that a vector space is an abelian group under addition, and the four remaining axioms (related to the scalar multiplication) say that this operation defines a ring homomorphism from the field into the endomorphism ring of this group. Subtraction of two vectors can be defined as Direct consequences of the axioms include that, for every and one has implies or Even more concisely, a vector space is a module over a field. Bases, vector coordinates, and subspaces Linear combination Given a set of elements of a -vector space , a linear combination of elements of is an element of of the form where and The scalars are called the coefficients of the linear combination. Linear independence The elements of a subset of a -vector space are said to be linearly independent if no element of can be written as a linear combination of the other elements of . Equivalently, they are linearly independent if two linear combinations of elements of define the same element of if and only if they have the same coefficients. Also equivalently, they are linearly independent if a linear combination results in the zero vector if and only if all its coefficients are zero. Linear subspace A linear subspace or vector subspace of a vector space is a non-empty subset of that is closed under vector addition and scalar multiplication; that is, the sum of two elements of and the product of an element of by a scalar belong to . This implies that every linear combination of elements of belongs to . A linear subspace is a vector space for the induced addition and scalar multiplication; this means that the closure property implies that the axioms of a vector space are satisfied.The closure property also implies that every intersection of linear subspaces is a linear subspace. Linear span Given a subset of a vector space , the linear span or simply the span of is the smallest linear subspace of that contains , in the sense that it is the intersection of all linear subspaces that contain . The span of is also the set of all linear combinations of elements of . If is the span of , one says that spans or generates , and that is a spanning set or a generating set of . Basis and dimension A subset of a vector space is a basis if its elements are linearly independent and span the vector space. Every vector space has at least one basis, or many in general (see ). Moreover, all bases of a vector space have the same cardinality, which is called the dimension of the vector space (see Dimension theorem for vector spaces). This is a fundamental property of vector spaces, which is detailed in the remainder of the section. Bases are a fundamental tool for the study of vector spaces, especially when the dimension is finite. In the infinite-dimensional case, the existence of infinite bases, often called Hamel bases, depends on the axiom of choice. It follows that, in general, no base can be explicitly described. For example, the real numbers form an infinite-dimensional vector space over the rational numbers, for which no specific basis is known. Consider a basis of a vector space of dimension over a field . The definition of a basis implies that every may be written with in , and that this decomposition is unique. The scalars are called the coordinates of on the basis. They are also said to be the coefficients of the decomposition of on the basis. 
One also says that the -tuple of the coordinates is the coordinate vector of on the basis, since the set of the -tuples of elements of is a vector space for componentwise addition and scalar multiplication, whose dimension is . The one-to-one correspondence between vectors and their coordinate vectors maps vector addition to vector addition and scalar multiplication to scalar multiplication. It is thus a vector space isomorphism, which allows translating reasonings and computations on vectors into reasonings and computations on their coordinates. History Vector spaces stem from affine geometry, via the introduction of coordinates in the plane or three-dimensional space. Around 1636, French mathematicians René Descartes and Pierre de Fermat founded analytic geometry by identifying solutions to an equation of two variables with points on a plane curve. To achieve geometric solutions without using coordinates, Bolzano introduced, in 1804, certain operations on points, lines, and planes, which are predecessors of vectors. introduced the notion of barycentric coordinates. introduced an equivalence relation on directed line segments that share the same length and direction which he called equipollence. A Euclidean vector is then an equivalence class of that relation. Vectors were reconsidered with the presentation of complex numbers by Argand and Hamilton and the inception of quaternions by the latter. They are elements in R2 and R4; treating them using linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations. In 1857, Cayley introduced the matrix notation which allows for harmonization and simplification of linear maps. Around the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations. In his work, the concepts of linear independence and dimension, as well as scalar products are present. Grassmann's 1844 work exceeds the framework of vector spaces as well since his considering multiplication led him to what are today called algebras. Italian mathematician Peano was the first to give the modern definition of vector spaces and linear maps in 1888, although he called them "linear systems". Peano's axiomatization allowed for vector spaces with infinite dimension, but Peano did not develop that theory further. In 1897, Salvatore Pincherle adopted Peano's axioms and made initial inroads into the theory of infinite-dimensional vector spaces. An important development of vector spaces is due to the construction of function spaces by Henri Lebesgue. This was later formalized by Banach and Hilbert, around 1920. At that time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces. Examples Arrows in the plane The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities. Given any two such arrows, and , the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows, and is denoted . In the special case of two arrows on the same line, their sum is the arrow on this line whose length is the sum or the difference of the lengths, depending on whether the arrows have the same direction. 
Another operation that can be done with arrows is scaling: given any positive real number , the arrow that has the same direction as , but is dilated or shrunk by multiplying its length by , is called multiplication of by . It is denoted . When is negative, is defined as the arrow pointing in the opposite direction instead. The following shows a few examples: if , the resulting vector has the same direction as , but is stretched to the double length of (the second image). Equivalently, is the sum . Moreover, has the opposite direction and the same length as (blue vector pointing down in the second image). Ordered pairs of numbers A second key example of a vector space is provided by pairs of real numbers and . The order of the components and is significant, so such a pair is also called an ordered pair. Such a pair is written as . The sum of two such pairs and the multiplication of a pair with a number is defined as follows: The first example above reduces to this example if an arrow is represented by a pair of Cartesian coordinates of its endpoint. Coordinate space The simplest example of a vector space over a field is the field itself with its addition viewed as vector addition and its multiplication viewed as scalar multiplication. More generally, all -tuples (sequences of length ) of elements of form a vector space that is usually denoted and called a coordinate space. The case is the above-mentioned simplest example, in which the field is also regarded as a vector space over itself. The case and (so R2) reduces to the previous example. Complex numbers and other field extensions The set of complex numbers , numbers that can be written in the form for real numbers and where is the imaginary unit, form a vector space over the reals with the usual addition and multiplication: and for real numbers , , , and . The various axioms of a vector space follow from the fact that the same rules hold for complex number arithmetic. The example of complex numbers is essentially the same as (that is, it is isomorphic to) the vector space of ordered pairs of real numbers mentioned above: if we think of the complex number as representing the ordered pair in the complex plane then we see that the rules for addition and scalar multiplication correspond exactly to those in the earlier example. More generally, field extensions provide another class of examples of vector spaces, particularly in algebra and algebraic number theory: a field containing a smaller field is an -vector space, by the given multiplication and addition operations of . For example, the complex numbers are a vector space over , and the field extension is a vector space over . Function spaces Functions from any fixed set to a field also form vector spaces, by performing addition and scalar multiplication pointwise. That is, the sum of two functions and is the function given by and similarly for multiplication. Such function spaces occur in many geometric situations, when is the real line or an interval, or other subsets of . Many notions in topology and analysis, such as continuity, integrability or differentiability are well-behaved with respect to linearity: sums and scalar multiples of functions possessing such a property still have that property. Therefore, the set of such functions are vector spaces, whose study belongs to functional analysis. Linear equations Systems of homogeneous linear equations are closely tied to vector spaces. 
For example, the solutions of are given by triples with arbitrary and They form a vector space: sums and scalar multiples of such triples still satisfy the same ratios of the three variables; thus they are solutions, too. Matrices can be used to condense multiple linear equations as above into one vector equation, namely where is the matrix containing the coefficients of the given equations, is the vector denotes the matrix product, and is the zero vector. In a similar vein, the solutions of homogeneous linear differential equations form vector spaces. For example, yields where and are arbitrary constants, and is the natural exponential function. Linear maps and matrices The relation of two vector spaces can be expressed by linear map or linear transformation. They are functions that reflect the vector space structure, that is, they preserve sums and scalar multiplication: for all and in all in An isomorphism is a linear map such that there exists an inverse map , which is a map such that the two possible compositions and are identity maps. Equivalently, is both one-to-one (injective) and onto (surjective). If there exists an isomorphism between and , the two spaces are said to be isomorphic; they are then essentially identical as vector spaces, since all identities holding in are, via , transported to similar ones in , and vice versa via . For example, the arrows in the plane and the ordered pairs of numbers vector spaces in the introduction above (see ) are isomorphic: a planar arrow departing at the origin of some (fixed) coordinate system can be expressed as an ordered pair by considering the - and -component of the arrow, as shown in the image at the right. Conversely, given a pair , the arrow going by to the right (or to the left, if is negative), and up (down, if is negative) turns back the arrow . Linear maps between two vector spaces form a vector space , also denoted , or . The space of linear maps from to is called the dual vector space, denoted . Via the injective natural map , any vector space can be embedded into its bidual; the map is an isomorphism if and only if the space is finite-dimensional. Once a basis of is chosen, linear maps are completely determined by specifying the images of the basis vectors, because any element of is expressed uniquely as a linear combination of them. If , a 1-to-1 correspondence between fixed bases of and gives rise to a linear map that maps any basis element of to the corresponding basis element of . It is an isomorphism, by its very definition. Therefore, two vector spaces over a given field are isomorphic if their dimensions agree and vice versa. Another way to express this is that any vector space over a given field is completely classified (up to isomorphism) by its dimension, a single number. In particular, any n-dimensional -vector space is isomorphic to . However, there is no "canonical" or preferred isomorphism; an isomorphism is equivalent to the choice of a basis of , by mapping the standard basis of to , via . Matrices Matrices are a useful notion to encode linear maps. They are written as a rectangular array of scalars as in the image at the right. Any -by- matrix gives rise to a linear map from to , by the following where denotes summation, or by using the matrix multiplication of the matrix with the coordinate vector : Moreover, after choosing bases of and , any linear map is uniquely represented by a matrix via this assignment. 
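The correspondence between matrices and linear maps can be made concrete as follows; the sketch below uses Python with NumPy (an assumed dependency) and an arbitrary 3-by-2 matrix standing for a map from R² to R³ with respect to the standard bases.

```python
# Once bases are chosen, a linear map f: R^2 -> R^3 is represented by a 3-by-2
# matrix, and applying the map is the matrix-vector product with the coordinate vector.
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])     # matrix of an arbitrary linear map f

def f(v):
    return A @ v               # f is linear: it preserves sums and scalar multiples

u, w, a = np.array([1.0, 1.0]), np.array([2.0, -1.0]), 5.0
print(np.allclose(f(u + w), f(u) + f(w)))   # True: f(u + w) = f(u) + f(w)
print(np.allclose(f(a * u), a * f(u)))      # True: f(a*u) = a*f(u)
```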
The determinant of a square matrix is a scalar that tells whether the associated map is an isomorphism or not: to be so it is sufficient and necessary that the determinant is nonzero. The linear transformation of corresponding to a real n-by-n matrix is orientation preserving if and only if its determinant is positive. Eigenvalues and eigenvectors Endomorphisms, linear maps , are particularly important since in this case vectors can be compared with their image under , . Any nonzero vector satisfying , where is a scalar, is called an eigenvector of with eigenvalue . Equivalently, is an element of the kernel of the difference (where Id is the identity map . If is finite-dimensional, this can be rephrased using determinants: having eigenvalue is equivalent to By spelling out the definition of the determinant, the expression on the left hand side can be seen to be a polynomial function in , called the characteristic polynomial of . If the field is large enough to contain a zero of this polynomial (which automatically happens for algebraically closed, such as ) any linear map has at least one eigenvector. The vector space may or may not possess an eigenbasis, a basis consisting of eigenvectors. This phenomenon is governed by the Jordan canonical form of the map. The set of all eigenvectors corresponding to a particular eigenvalue of forms a vector space known as the eigenspace corresponding to the eigenvalue (and ) in question. Basic constructions In addition to the above concrete examples, there are a number of standard linear algebraic constructions that yield vector spaces related to given ones. Subspaces and quotient spaces A nonempty subset of a vector space that is closed under addition and scalar multiplication (and therefore contains the -vector of ) is called a linear subspace of , or simply a subspace of , when the ambient space is unambiguously a vector space. Subspaces of are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set of vectors is called its span, and it is the smallest subspace of containing the set . Expressed in terms of elements, the span is the subspace consisting of all the linear combinations of elements of . Linear subspace of dimension 1 and 2 are referred to as a line (also vector line), and a plane respectively. If W is an n-dimensional vector space, any subspace of dimension 1 less, i.e., of dimension is called a hyperplane. The counterpart to subspaces are quotient vector spaces. Given any subspace , the quotient space (" modulo ") is defined as follows: as a set, it consists of where is an arbitrary vector in . The sum of two such elements and is , and scalar multiplication is given by . The key point in this definition is that if and only if the difference of and lies in . This way, the quotient space "forgets" information that is contained in the subspace . The kernel of a linear map consists of vectors that are mapped to in . The kernel and the image are subspaces of and , respectively. An important example is the kernel of a linear map for some fixed matrix . The kernel of this map is the subspace of vectors such that , which is precisely the set of solutions to the system of homogeneous linear equations belonging to . This concept also extends to linear differential equations where the coefficients are functions in too. In the corresponding map the derivatives of the function appear linearly (as opposed to , for example). 
Since differentiation is a linear procedure (that is, and for a constant ) this assignment is linear, called a linear differential operator. In particular, the solutions to the differential equation form a vector space (over or ). The existence of kernels and images is part of the statement that the category of vector spaces (over a fixed field ) is an abelian category, that is, a corpus of mathematical objects and structure-preserving maps between them (a category) that behaves much like the category of abelian groups. Because of this, many statements such as the first isomorphism theorem (also called rank–nullity theorem in matrix-related terms) and the second and third isomorphism theorem can be formulated and proven in a way very similar to the corresponding statements for groups. Direct product and direct sum The direct product of vector spaces and the direct sum of vector spaces are two ways of combining an indexed family of vector spaces into a new vector space. The direct product of a family of vector spaces consists of the set of all tuples , which specify for each index in some index set an element of . Addition and scalar multiplication is performed componentwise. A variant of this construction is the direct sum (also called coproduct and denoted ), where only tuples with finitely many nonzero vectors are allowed. If the index set is finite, the two constructions agree, but in general they are different. Tensor product The tensor product or simply of two vector spaces and is one of the central notions of multilinear algebra which deals with extending notions such as linear maps to several variables. A map from the Cartesian product is called bilinear if is linear in both variables and That is to say, for fixed the map is linear in the sense above and likewise for fixed The tensor product is a particular vector space that is a universal recipient of bilinear maps as follows. It is defined as the vector space consisting of finite (formal) sums of symbols called tensors subject to the rules These rules ensure that the map from the to that maps a tuple to is bilinear. The universality states that given any vector space and any bilinear map there exists a unique map shown in the diagram with a dotted arrow, whose composition with equals This is called the universal property of the tensor product, an instance of the method—much used in advanced abstract algebra—to indirectly define objects by specifying maps from or to this object. Vector spaces with additional structure From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space over a given field is characterized, up to isomorphism, by its dimension. However, vector spaces per se do not offer a framework to deal with the question—crucial to analysis—whether a sequence of functions converges to another function. Likewise, linear algebra is not adapted to deal with infinite series, since the addition operation allows only finitely many terms to be added. Therefore, the needs of functional analysis require considering additional structures. A vector space may be given a partial order under which some vectors can be compared. For example, -dimensional real space can be ordered by comparing its vectors componentwise. Ordered vector spaces, for example Riesz spaces, are fundamental to Lebesgue integration, which relies on the ability to express a function as a difference of two positive functions where denotes the positive part of and the negative part. 
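The splitting of a function into positive and negative parts used above can be written out directly. The short sketch below (Python; the sample values are arbitrary) forms the pointwise positive and negative parts and checks that their difference recovers the original values.

```python
# Decompose a real-valued function (sampled at a few points) as f = f_plus - f_minus,
# where both parts are non-negative.
values = [3.0, -1.5, 0.0, 2.0, -4.0]
f_plus = [max(v, 0.0) for v in values]    # positive part
f_minus = [max(-v, 0.0) for v in values]  # negative part
print(all(p - m == v for p, m, v in zip(f_plus, f_minus, values)))  # True
```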
Normed vector spaces and inner product spaces "Measuring" vectors is done by specifying a norm, a datum which measures lengths of vectors, or by an inner product, which measures angles between vectors. Norms and inner products are denoted and respectively. The datum of an inner product entails that lengths of vectors can be defined too, by defining the associated norm Vector spaces endowed with such data are known as normed vector spaces and inner product spaces, respectively. Coordinate space can be equipped with the standard dot product: In this reflects the common notion of the angle between two vectors and by the law of cosines: Because of this, two vectors satisfying are called orthogonal. An important variant of the standard dot product is used in Minkowski space: endowed with the Lorentz product In contrast to the standard dot product, it is not positive definite: also takes negative values, for example, for Singling out the fourth coordinate—corresponding to time, as opposed to three space-dimensions—makes it useful for the mathematical treatment of special relativity. Note that in other conventions time is often written as the first, or "zeroeth" component so that the Lorentz product is written Topological vector spaces Convergence questions are treated by considering vector spaces carrying a compatible topology, a structure that allows one to talk about elements being close to each other. Compatible here means that addition and scalar multiplication have to be continuous maps. Roughly, if and in , and in vary by a bounded amount, then so do and To make sense of specifying the amount a scalar changes, the field also has to carry a topology in this context; a common choice is the reals or the complex numbers. In such topological vector spaces one can consider series of vectors. The infinite sum denotes the limit of the corresponding finite partial sums of the sequence of elements of For example, the could be (real or complex) functions belonging to some function space in which case the series is a function series. The mode of convergence of the series depends on the topology imposed on the function space. In such cases, pointwise convergence and uniform convergence are two prominent examples. A way to ensure the existence of limits of certain infinite series is to restrict attention to spaces where any Cauchy sequence has a limit; such a vector space is called complete. Roughly, a vector space is complete provided that it contains all necessary limits. For example, the vector space of polynomials on the unit interval equipped with the topology of uniform convergence is not complete because any continuous function on can be uniformly approximated by a sequence of polynomials, by the Weierstrass approximation theorem. In contrast, the space of all continuous functions on with the same topology is complete. A norm gives rise to a topology by defining that a sequence of vectors converges to if and only if Banach and Hilbert spaces are complete topological vector spaces whose topologies are given, respectively, by a norm and an inner product. Their study—a key piece of functional analysis—focuses on infinite-dimensional vector spaces, since all norms on finite-dimensional topological vector spaces give rise to the same notion of convergence. The image at the right shows the equivalence of the -norm and -norm on as the unit "balls" enclose each other, a sequence converges to zero in one norm if and only if it so does in the other norm. 
In the infinite-dimensional case, however, there will generally be inequivalent topologies, which makes the study of topological vector spaces richer than that of vector spaces without additional data. From a conceptual point of view, all notions related to topological vector spaces should match the topology. For example, instead of considering all linear maps (also called functionals) maps between topological vector spaces are required to be continuous. In particular, the (topological) dual space consists of continuous functionals (or to ). The fundamental Hahn–Banach theorem is concerned with separating subspaces of appropriate topological vector spaces by continuous functionals. Banach spaces Banach spaces, introduced by Stefan Banach, are complete normed vector spaces. A first example is the vector space consisting of infinite vectors with real entries whose -norm given by The topologies on the infinite-dimensional space are inequivalent for different For example, the sequence of vectors in which the first components are and the following ones are converges to the zero vector for but does not for but More generally than sequences of real numbers, functions are endowed with a norm that replaces the above sum by the Lebesgue integral The space of integrable functions on a given domain (for example an interval) satisfying and equipped with this norm are called Lebesgue spaces, denoted These spaces are complete. (If one uses the Riemann integral instead, the space is complete, which may be seen as a justification for Lebesgue's integration theory.) Concretely this means that for any sequence of Lebesgue-integrable functions with satisfying the condition there exists a function belonging to the vector space such that Imposing boundedness conditions not only on the function, but also on its derivatives leads to Sobolev spaces. Hilbert spaces Complete inner product spaces are known as Hilbert spaces, in honor of David Hilbert. The Hilbert space with inner product given by where denotes the complex conjugate of is a key case. By definition, in a Hilbert space, any Cauchy sequence converges to a limit. Conversely, finding a sequence of functions with desirable properties that approximate a given limit function is equally crucial. Early analysis, in the guise of the Taylor approximation, established an approximation of differentiable functions by polynomials. By the Stone–Weierstrass theorem, every continuous function on can be approximated as closely as desired by a polynomial. A similar approximation technique by trigonometric functions is commonly called Fourier expansion, and is much applied in engineering. More generally, and more conceptually, the theorem yields a simple description of what "basic functions", or, in abstract Hilbert spaces, what basic vectors suffice to generate a Hilbert space in the sense that the closure of their span (that is, finite linear combinations and limits of those) is the whole space. Such a set of functions is called a basis of its cardinality is known as the Hilbert space dimension. Not only does the theorem exhibit suitable basis functions as sufficient for approximation purposes, but also together with the Gram–Schmidt process, it enables one to construct a basis of orthogonal vectors. Such orthogonal bases are the Hilbert space generalization of the coordinate axes in finite-dimensional Euclidean space. The solutions to various differential equations can be interpreted in terms of Hilbert spaces. 
For example, a great many fields in physics and engineering lead to such equations, and frequently solutions with particular physical properties are used as basis functions, often orthogonal. As an example from physics, the time-dependent Schrödinger equation in quantum mechanics describes the change of physical properties in time by means of a partial differential equation, whose solutions are called wavefunctions. Definite values for physical properties such as energy, or momentum, correspond to eigenvalues of a certain (linear) differential operator and the associated wavefunctions are called eigenstates. The spectral theorem decomposes a linear compact operator acting on functions in terms of these eigenfunctions and their eigenvalues. Algebras over fields General vector spaces do not possess a multiplication between vectors. A vector space equipped with an additional bilinear operator defining the multiplication of two vectors is an algebra over a field (or F-algebra if the field F is specified). For example, the set of all polynomials forms an algebra known as the polynomial ring: using that the sum of two polynomials is a polynomial, they form a vector space; they form an algebra since the product of two polynomials is again a polynomial. Rings of polynomials (in several variables) and their quotients form the basis of algebraic geometry, because they are rings of functions of algebraic geometric objects. Another crucial example are Lie algebras, which are neither commutative nor associative, but the failure to be so is limited by the constraints ( denotes the product of and ): (anticommutativity), and (Jacobi identity). Examples include the vector space of -by- matrices, with the commutator of two matrices, and endowed with the cross product. The tensor algebra is a formal way of adding products to any vector space to obtain an algebra. As a vector space, it is spanned by symbols, called simple tensors where the degree varies. The multiplication is given by concatenating such symbols, imposing the distributive law under addition, and requiring that scalar multiplication commute with the tensor product ⊗, much the same way as with the tensor product of two vector spaces introduced in the above section on tensor products. In general, there are no relations between and Forcing two such elements to be equal leads to the symmetric algebra, whereas forcing yields the exterior algebra. Related structures Vector bundles A vector bundle is a family of vector spaces parametrized continuously by a topological space X. More precisely, a vector bundle over X is a topological space E equipped with a continuous map such that for every x in X, the fiber π−1(x) is a vector space. The case dim is called a line bundle. For any vector space V, the projection makes the product into a "trivial" vector bundle. Vector bundles over X are required to be locally a product of X and some (fixed) vector space V: for every x in X, there is a neighborhood U of x such that the restriction of π to π−1(U) is isomorphic to the trivial bundle . Despite their locally trivial character, vector bundles may (depending on the shape of the underlying space X) be "twisted" in the large (that is, the bundle need not be (globally isomorphic to) the trivial bundle ). For example, the Möbius strip can be seen as a line bundle over the circle S1 (by identifying open intervals with the real line). It is, however, different from the cylinder , because the latter is orientable whereas the former is not. 
Properties of certain vector bundles provide information about the underlying topological space. For example, the tangent bundle consists of the collection of tangent spaces parametrized by the points of a differentiable manifold. The tangent bundle of the circle S1 is globally isomorphic to , since there is a global nonzero vector field on S1. In contrast, by the hairy ball theorem, there is no (tangent) vector field on the 2-sphere S2 which is everywhere nonzero. K-theory studies the isomorphism classes of all vector bundles over some topological space. In addition to deepening topological and geometrical insight, it has purely algebraic consequences, such as the classification of finite-dimensional real division algebras: R, C, the quaternions H and the octonions O. The cotangent bundle of a differentiable manifold consists, at every point of the manifold, of the dual of the tangent space, the cotangent space. Sections of that bundle are known as differential one-forms. Modules Modules are to rings what vector spaces are to fields: the same axioms, applied to a ring R instead of a field F, yield modules. The theory of modules, compared to that of vector spaces, is complicated by the presence of ring elements that do not have multiplicative inverses. For example, modules need not have bases, as the Z-module (that is, abelian group) Z/2Z shows; those modules that do (including all vector spaces) are known as free modules. Nevertheless, a vector space can be compactly defined as a module over a ring which is a field, with the elements being called vectors. Some authors use the term vector space to mean modules over a division ring. The algebro-geometric interpretation of commutative rings via their spectrum allows the development of concepts such as locally free modules, the algebraic counterpart to vector bundles. Affine and projective spaces Roughly, affine spaces are vector spaces whose origins are not specified. More precisely, an affine space is a set with a free transitive vector space action. In particular, a vector space is an affine space over itself, by the map If W is a vector space, then an affine subspace is a subset of W obtained by translating a linear subspace V by a fixed vector ; this space is denoted by (it is a coset of V in W) and consists of all vectors of the form for An important example is the space of solutions of a system of inhomogeneous linear equations generalizing the homogeneous case discussed in the above section on linear equations, which can be found by setting in this equation. The space of solutions is the affine subspace where x is a particular solution of the equation, and V is the space of solutions of the homogeneous equation (the nullspace of A). The set of one-dimensional subspaces of a fixed finite-dimensional vector space V is known as projective space; it may be used to formalize the idea of parallel lines intersecting at infinity. Grassmannians and flag manifolds generalize this by parametrizing linear subspaces of fixed dimension k and flags of subspaces, respectively.
Vector space
[ "Physics", "Mathematics" ]
7,646
[ "Mathematical structures", "Vector spaces", "Mathematical objects", "Space (mathematics)", "Group theory", "Fields of abstract algebra", "nan" ]
32,410
https://en.wikipedia.org/wiki/Vehicle
A vehicle () is a machine designed for self-propulsion, usually to transport people, cargo, or both. The term "vehicle" typically refers to land vehicles such as human-powered vehicles (e.g. bicycles, tricycles, velomobiles), animal-powered transports (e.g. horse-drawn carriages/wagons, ox carts, dog sleds), motor vehicles (e.g. motorcycles, cars, trucks, buses, mobility scooters) and railed vehicles (trains, trams and monorails), but more broadly also includes cable transport (cable cars and elevators), watercraft (ships, boats and underwater vehicles), amphibious vehicles (e.g. screw-propelled vehicles, hovercraft, seaplanes), aircraft (airplanes, helicopters, gliders and aerostats) and space vehicles (spacecraft, spaceplanes and launch vehicles). This article primarily concerns the more ubiquitous land vehicles, which can be broadly classified by the type of contact interface with the ground: wheels, tracks, rails or skis, as well as the non-contact technologies such as maglev. ISO 3833-1977 is the international standard for road vehicle types, terms and definitions. History It is estimated by historians that boats have been used since prehistory; rock paintings depicting boats, dated from around 50,000 to 15,000 BC, were found in Australia. The oldest boats found by archaeological excavation are logboats, with the oldest logboat found, the Pesse canoe found in a bog in the Netherlands, being carbon dated to 8040–7510 BC, making it 9,500–10,000 years old, A 7,000 year-old seagoing boat made from reeds and tar has been found in Kuwait. Boats were used between 4000 -3000 BC in Sumer, ancient Egypt and in the Indian Ocean. There is evidence of camel pulled wheeled vehicles about 4000–3000 BC. The earliest evidence of a wagonway, a predecessor of the railway, found so far was the long Diolkos wagonway, which transported boats across the Isthmus of Corinth in Greece since around 600 BC. Wheeled vehicles pulled by men and animals ran in grooves in limestone, which provided the track element, preventing the wagons from leaving the intended route. In 200 CE, Ma Jun built a south-pointing chariot, a vehicle with an early form of guidance system. The stagecoach, a four-wheeled vehicle drawn by horses, originated in 13th century England. Railways began reappearing in Europe after the Dark Ages. The earliest known record of a railway in Europe from this period is a stained-glass window in the Minster of Freiburg im Breisgau dating from around 1350. In 1515, Cardinal Matthäus Lang wrote a description of the Reisszug, a funicular railway at the Hohensalzburg Fortress in Austria. The line originally used wooden rails and a hemp haulage rope and was operated by human or animal power, through a treadwheel. 1769: Nicolas-Joseph Cugnot is often credited with building the first self-propelled mechanical vehicle or automobile in 1769. In Russia, in the 1780s, Ivan Kulibin developed a human-pedalled, three-wheeled carriage with modern features such as a flywheel, brake, gear box and bearings; however, it was not developed further. In 1783, the Montgolfier brothers developed the first balloon vehicle. In 1801, Richard Trevithick built and demonstrated his Puffing Devil road locomotive, which many believe was the first demonstration of a steam-powered road vehicle, though it could not maintain sufficient steam pressure for long periods and was of little practical use. 
In 1817, The Laufmaschine ("running machine"), invented by the German Baron Karl von Drais, became the first human means of transport to make use of the two-wheeler principle. It is regarded as the forerunner of the modern bicycle (and motorcycle). In 1885, Karl Benz built (and subsequently patented) the Benz Patent-Motorwagen, the first automobile, powered by his own four-stroke cycle gasoline engine. In 1885, Otto Lilienthal began experimental gliding and achieved the first sustained, controlled, reproducible flights. In 1903, the Wright brothers flew the Wright Flyer, the first controlled, powered aircraft, in Kitty Hawk, North Carolina. In 1907, Gyroplane No.I became the first tethered rotorcraft to fly. The same year, the Cornu helicopter became the first rotorcraft to achieve free flight. In 1928, Opel initiated the Opel-RAK program, the first large-scale rocket program. The Opel RAK.1 became the first rocket car; the following year, it also became the first rocket-powered aircraft. In 1961, the Soviet space program's Vostok 1 carried Yuri Gagarin into space. In 1969, NASA's Apollo 11 achieved the first Moon landing. In 2010, the number of motor vehicles in operation worldwide surpassed 1 billion, roughly one for every seven people. Types of vehicles There are over 1 billion bicycles in use worldwide. In 2002 there were an estimated 590 million cars and 205 million motorcycles in service in the world. At least 500 million Chinese Flying Pigeon bicycles have been made, more than any other single model of vehicle. The most-produced model of motor vehicle is the Honda Super Cub motorcycle, having sold 60 million units in 2008. The most-produced car model is the Toyota Corolla, with at least 35 million made by 2010. The most common fixed-wing airplane is the Cessna 172, with about 44,000 having been made as of 2017. The Soviet Mil Mi-8, at 17,000, is the most-produced helicopter. The top commercial jet airliner is the Boeing 737, at about 10,000 in 2018. At around 14,000 for both, the most produced trams are the KTM-5 and Tatra T3. The most common trolleybus is ZiU-9. Locomotion Locomotion consists of a means that allows displacement with little opposition, a power source to provide the required kinetic energy and a means to control the motion, such as a brake and steering system. By far, most vehicles use wheels which employ the principle of rolling to enable displacement with very little rolling friction. Energy source It is essential that a vehicle have a source of energy to drive it. Energy can be extracted from external sources, as in the cases of a sailboat, a solar-powered car, or an electric streetcar that uses overhead lines. Energy can also be stored, provided it can be converted on demand and the storing medium's energy density and power density are sufficient to meet the vehicle's needs. Human power is a simple source of energy that requires nothing more than humans. Despite the fact that humans cannot exceed for meaningful amounts of time, the land speed record for human-powered vehicles (unpaced) is , as of 2009 on a recumbent bicycle. The energy source used to power vehicles is fuel. External combustion engines can use almost anything that burns as fuel, whilst internal combustion engines and rocket engines are designed to burn a specific fuel, typically gasoline, diesel or ethanol. Food is the fuel used to power non-motor vehicles such as cycles, rickshaws and other pedestrian-controlled vehicles. 
Another common medium for storing energy is batteries, which have the advantages of being responsive, useful in a wide range of power levels, environmentally friendly, efficient, simple to install, and easy to maintain. Batteries also facilitate the use of electric motors, which have their own advantages. On the other hand, batteries have low energy densities, short service life, poor performance at extreme temperatures, long charging times, and difficulties with disposal (although they can usually be recycled). Like fuel, batteries store chemical energy and can cause burns and poisoning in event of an accident. Batteries also lose effectiveness with time. The issue of charge time can be resolved by swapping discharged batteries with charged ones; however, this incurs additional hardware costs and may be impractical for larger batteries. Moreover, there must be standard batteries for battery swapping to work at a gas station. Fuel cells are similar to batteries in that they convert from chemical to electrical energy, but have their own advantages and disadvantages. Electrified rails and overhead cables are a common source of electrical energy on subways, railways, trams, and trolleybuses. Solar energy is a more modern development, and several solar vehicles have been successfully built and tested, including Helios, a solar-powered aircraft. Nuclear power is a more exclusive form of energy storage, currently limited to large ships and submarines, mostly military. Nuclear energy can be released by a nuclear reactor, nuclear battery, or repeatedly detonating nuclear bombs. There have been two experiments with nuclear-powered aircraft, the Tupolev Tu-119 and the Convair X-6. Mechanical strain is another method of storing energy, whereby an elastic band or metal spring is deformed and releases energy as it is allowed to return to its ground state. Systems employing elastic materials suffer from hysteresis, and metal springs are too dense to be useful in many cases. Flywheels store energy in a spinning mass. Because a light and fast rotor is energetically favorable, flywheels can pose a significant safety hazard. Moreover, flywheels leak energy fairly quickly and affect a vehicle's steering through the gyroscopic effect. They have been used experimentally in gyrobuses. Wind energy is used by sailboats and land yachts as the primary source of energy. It is very cheap and fairly easy to use, the main issues being dependence on weather and upwind performance. Balloons also rely on the wind to move horizontally. Aircraft flying in the jet stream may get a boost from high altitude winds. Compressed gas is currently an experimental method of storing energy. In this case, compressed gas is simply stored in a tank and released when necessary. Like elastics, they have hysteresis losses when gas heats up during compression. Gravitational potential energy is a form of energy used in gliders, skis, bobsleds and numerous other vehicles that go down hill. Regenerative braking is an example of capturing kinetic energy where the brakes of a vehicle are augmented with a generator or other means of extracting energy. Motors and engines When needed, the energy is taken from the source and consumed by one or more motors or engines. Sometimes there is an intermediate medium, such as the batteries of a diesel submarine. Most motor vehicles have internal combustion engines. They are fairly cheap, easy to maintain, reliable, safe and small. 
Since these engines burn fuel, they have long ranges but pollute the environment. A related engine is the external combustion engine. An example of this is the steam engine. Aside from fuel, steam engines also need water, making them impractical for some purposes. Steam engines also need time to warm up, whereas IC engines can usually run right after being started, although this may not be recommended in cold conditions. Steam engines burning coal release sulfur into the air, causing harmful acid rain. While intermittent internal combustion engines were once the primary means of aircraft propulsion, they have been largely superseded by continuous internal combustion engines, such as gas turbines. Turbine engines are light and, particularly when used on aircraft, efficient. On the other hand, they cost more and require careful maintenance. They can also be damaged by ingesting foreign objects, and they produce a hot exhaust. Trains using turbines are called gas turbine-electric locomotives. Examples of surface vehicles using turbines are M1 Abrams, MTT Turbine SUPERBIKE and the Millennium. Pulse jet engines are similar in many ways to turbojets but have almost no moving parts. For this reason, they were very appealing to vehicle designers in the past; however, their noise, heat, and inefficiency have led to their abandonment. A historical example of the use of a pulse jet was the V-1 flying bomb. Pulse jets are still occasionally used in amateur experiments. With the advent of modern technology, the pulse detonation engine has become practical and was successfully tested on a Rutan VariEze. While the pulse detonation engine is much more efficient than the pulse jet and even turbine engines, it still suffers from extreme noise and vibration levels. Ramjets also have few moving parts, but they only work at high speed, so their use is restricted to tip jet helicopters and high speed aircraft such as the Lockheed SR-71 Blackbird. Rocket engines are primarily used on rockets, rocket sleds and experimental aircraft. Rocket engines are extremely powerful. The heaviest vehicle ever to leave the ground, the Saturn V rocket, was powered by five F-1 rocket engines generating a combined 180 million horsepower (134.2 gigawatt). Rocket engines also have no need to "push off" anything, a fact that the New York Times denied in error. Rocket engines can be particularly simple, sometimes consisting of nothing more than a catalyst, as in the case of a hydrogen peroxide rocket. This makes them an attractive option for vehicles such as jet packs. Despite their simplicity, rocket engines are often dangerous and susceptible to explosions. The fuel they run off may be flammable, poisonous, corrosive or cryogenic. They also suffer from poor efficiency. For these reasons, rocket engines are only used when absolutely necessary. Electric motors are used in electric vehicles such as electric bicycles, electric scooters, small boats, subways, trains, trolleybuses, trams and experimental aircraft. Electric motors can be very efficient: over 90% efficiency is common. Electric motors can also be built to be powerful, reliable, low-maintenance and of any size. Electric motors can deliver a range of speeds and torques without necessarily using a gearbox (although it may be more economical to use one). Electric motors are limited in their use chiefly by the difficulty of supplying electricity. Compressed gas motors have been used on some vehicles experimentally. 
They are simple, efficient, safe, cheap, reliable and operate in a variety of conditions. One of the difficulties met when using gas motors is the cooling effect of expanding gas. These engines are limited by how quickly they absorb heat from their surroundings. The cooling effect can, however, double as air conditioning. Compressed gas motors also lose effectiveness with falling gas pressure. Ion thrusters are used on some satellites and spacecraft. They are only effective in a vacuum, which limits their use to spaceborne vehicles. Ion thrusters run primarily off electricity, but they also need a propellant such as caesium, or, more recently xenon. Ion thrusters can achieve extremely high speeds and use little propellant; however, they are power-hungry. Converting energy to work The mechanical energy that motors and engines produce must be converted to work by wheels, propellers, nozzles, or similar means. Aside from converting mechanical energy into motion, wheels allow a vehicle to roll along a surface and, with the exception of railed vehicles, to be steered. Wheels are ancient technology, with specimens being discovered from over 5000 years ago. Wheels are used in a plethora of vehicles, including motor vehicles, armoured personnel carriers, amphibious vehicles, airplanes, trains, skateboards and wheelbarrows. Nozzles are used in conjunction with almost all reaction engines. Vehicles using nozzles include jet aircraft, rockets, and personal watercraft. While most nozzles take the shape of a cone or bell, some unorthodox designs have been created such as the aerospike. Some nozzles are intangible, such as the electromagnetic field nozzle of a vectored ion thruster. Continuous track is sometimes used instead of wheels to power land vehicles. Continuous track has the advantages of a larger contact area, easy repairs on small damage, and high maneuverability. Examples of vehicles using continuous tracks are tanks, snowmobiles and excavators. Two continuous tracks used together allow for steering. The largest land vehicle in the world, the Bagger 293, is propelled by continuous tracks. Propellers (as well as screws, fans and rotors) are used to move through a fluid. Propellers have been used as toys since ancient times; however, it was Leonardo da Vinci who devised what was one of the earliest propeller driven vehicles, the "aerial-screw". In 1661, Toogood & Hays adopted the screw for use as a ship propeller. Since then, the propeller has been tested on many terrestrial vehicles, including the Schienenzeppelin train and numerous cars. In modern times, propellers are most prevalent on watercraft and aircraft, as well as some amphibious vehicles such as hovercraft and ground-effect vehicles. Intuitively, propellers cannot work in space as there is no working fluid; however, some sources have suggested that since space is never empty, a propeller could be made to work in space. Similarly to propeller vehicles, some vehicles use wings for propulsion. Sailboats and sailplanes are propelled by the forward component of lift generated by their sails/wings. Ornithopters also produce thrust aerodynamically. Ornithopters with large rounded leading edges produce lift by leading-edge suction forces. Research at the University of Toronto Institute for Aerospace Studies lead to a flight with an actual ornithopter on July 31, 2010. Paddle wheels are used on some older watercraft and their reconstructions. These ships were known as paddle steamers. 
Because paddle wheels simply push against the water, their design and construction are very simple. The oldest such ship in scheduled service is the Skibladner. Many pedalo boats also use paddle wheels for propulsion. Screw-propelled vehicles are propelled by auger-like cylinders fitted with helical flanges. Because they can produce thrust on both land and water, they are commonly used on all-terrain vehicles. The ZiL-2906 was a Soviet-designed screw-propelled vehicle designed to retrieve cosmonauts from the Siberian wilderness. Friction All or almost all of the useful energy produced by the engine is usually dissipated as friction, so minimizing frictional losses is very important in many vehicles. The main sources of friction are rolling friction and fluid drag (air drag or water drag). Wheels have low bearing friction, and pneumatic tires give low rolling friction. Steel wheels on steel tracks are lower still. Aerodynamic drag can be reduced by streamlined design features. Friction is desirable and important in supplying traction to facilitate motion on land. Most land vehicles rely on friction for accelerating, decelerating and changing direction. Sudden reductions in traction can cause loss of control and accidents. Control Steering Most vehicles, with the notable exception of railed vehicles, have at least one steering mechanism. Wheeled vehicles steer by angling their front or rear wheels. The B-52 Stratofortress has a special arrangement in which all four main wheels can be angled. Skids can also be used to steer by angling them, as in the case of a snowmobile. Ships, boats, submarines, dirigibles and aeroplanes usually have a rudder for steering. On an airplane, ailerons are used to bank the airplane for directional control, sometimes assisted by the rudder. Stopping With no power applied, most vehicles come to a stop due to friction. But it is often required to stop a vehicle faster than by friction alone, so almost all vehicles are equipped with a braking system. Wheeled vehicles are typically equipped with friction brakes, which use the friction between brake pads (stators) and brake rotors to slow the vehicle. Many airplanes have high-performance versions of the same system in their landing gear for use on the ground. A Boeing 757 brake, for example, has 3 stators and 4 rotors. The Space Shuttle also uses frictional brakes on its wheels. As well as frictional brakes, hybrid and electric cars, trolleybuses and electric bicycles can also use regenerative brakes to recycle some of the vehicle's kinetic energy. High-speed trains sometimes use frictionless eddy current brakes; however, widespread application of the technology has been limited by overheating and interference issues. Aside from landing gear brakes, most large aircraft have other ways of decelerating. In aircraft, air brakes are aerodynamic surfaces that provide braking force by increasing the frontal cross section, thus increasing the aerodynamic drag of the aircraft. These are usually implemented as flaps that oppose air flow when extended and are flush with the aircraft when retracted. Reverse thrust is also used in many aeroplane engines. Propeller aircraft achieve reverse thrust by reversing the pitch of the propellers, while jet aircraft do so by redirecting their engine exhausts forward. On aircraft carriers, arresting gears are used to stop an aircraft. Pilots may even apply full forward throttle on touchdown, in case the arresting gear does not catch and a go-around is needed. 
Parachutes are used to slow down vehicles travelling very fast. Parachutes have been used in land, air and space vehicles such as the ThrustSSC, Eurofighter Typhoon and Apollo Command Module. Some older Soviet passenger jets had braking parachutes for emergency landings. Boats use similar devices called sea anchors to maintain stability in rough seas. To further increase the rate of deceleration or where the brakes have failed, several mechanisms can be used to stop a vehicle. Cars and rolling stock usually have hand brakes that, while designed to secure an already parked vehicle, can provide limited braking should the primary brakes fail. A secondary procedure called forward-slip is sometimes used to slow airplanes by flying at an angle, causing more drag. Legislation Motor vehicle and trailer categories are defined according to the following international classification: Category M: passenger vehicles. Category N: motor vehicles for the carriage of goods. Category O: trailers and semi-trailers. European Union In the European Union the classifications for vehicle types are defined by: Commission Directive 2001/116/EC of 20 December 2001, adapting to technical progress Council Directive 70/156/EEC on the approximation of the laws of the Member States relating to the type-approval of motor vehicles and their trailers Directive 2002/24/EC of the European Parliament and of the Council of 18 March 2002 relating to the type-approval of two or three wheeled motor vehicles and repealing Council Directive 92/61/EEC. Vehicle type approval in the European Community is based on the Community's WVTA (whole vehicle type-approval) system. Under this system, manufacturers can obtain certification for a vehicle type in one Member State if it meets the EC technical requirements and then market it EU-wide with no need for further tests. Total technical harmonization already has been achieved in three vehicle categories (passenger cars, motorcycles, and tractors) and soon will extend to other vehicle categories (coaches and utility vehicles). It is essential that European car manufacturers be ensured access to as large a market as possible. While the Community type-approval system allows manufacturers to benefit fully from internal market opportunities, worldwide technical harmonization in the context of the United Nations Economic Commission for Europe (UNECE) offers a market beyond European borders. Licensing In many cases, it is unlawful to operate a vehicle without a license or certification. The least strict form of regulation usually limits what passengers the driver may carry or prohibits them completely (e.g., a Canadian ultralight license without endorsements). The next level of licensing may allow passengers, but without any form of compensation or payment. A private driver's license usually has these conditions. Commercial licenses that allow the transport of passengers and cargo are more tightly regulated. The strictest form of licensing is generally reserved for school buses, hazardous materials transports and emergency vehicles. The driver of a motor vehicle is typically required to hold a valid driver's license while driving on public lands, whereas the pilot of an aircraft must have a license at all times, regardless of where in the jurisdiction the aircraft is flying. Registration Vehicles are often required to be registered. Registration may be for purely legal reasons, for insurance reasons, or to help law enforcement recover stolen vehicles. 
The Toronto Police Service, for example, offers free and optional bicycle registration online. On motor vehicles, registration often takes the form of a vehicle registration plate, which makes it easy to identify a vehicle. In Russia, trucks and buses have their licence plate numbers repeated in large black letters on the back. On aircraft, a similar system is used, where a tail number is painted on various surfaces. Like motor vehicles and aircraft, watercraft also have registration numbers in most jurisdictions; however, the vessel name is still the primary means of identification as has been the case since ancient times. For this reason, duplicate registration names are generally rejected. In Canada, boats with an engine power of or greater require registration, leading to the ubiquitous "" engine. Registration may be conditional on the vehicle being approved for use on public highways, as in the case of the UK and Ontario. Many U.S. states also have requirements for vehicles operating on public highways. Aircraft have more stringent requirements, as they pose a high risk of damage to people and property in the event of an accident. In the U.S., the FAA requires aircraft to have an airworthiness certificate. Because U.S. aircraft must be flown for some time before they are certified, there is a provision for an experimental airworthiness certificate. FAA experimental aircraft are restricted in operation, including no overflights of populated areas, in busy airspace, or with unessential passengers. Materials and parts used in FAA certified aircraft must meet the criteria set forth by the technical standard orders. Mandatory safety equipment In many jurisdictions, the operator of a vehicle is legally obligated to carry safety equipment with or on them. Common examples include seat belts in cars, helmets on motorcycles and bicycles, fire extinguishers on boats, buses and airplanes, and life jackets on boats and commercial aircraft. Passenger aircraft carry a great deal of safety equipment, including inflatable slides, rafts, oxygen masks, oxygen tanks, life jackets, satellite beacons and first aid kits. Some equipment, such as life jackets has led to debate regarding their usefulness. In the case of Ethiopian Airlines Flight 961, the life jackets saved many people but also led to many deaths when passengers inflated their vests prematurely. Right-of-way There are specific real-estate arrangements made to allow vehicles to travel from one place to another. The most common arrangements are public highways, where appropriately licensed vehicles can navigate without hindrance. These highways are on public land and are maintained by the government. Similarly, toll routes are open to the public after paying a toll. These routes and the land they rest on may be government-owned, privately owned or a combination of both. Some routes are privately owned but grant access to the public. These routes often have a warning sign stating that the government does not maintain them. An example of this are byways in England and Wales. In Scotland, land is open to unmotorized vehicles if it meets certain criteria. Public land is sometimes open to use by off-road vehicles. On U.S. public land, the Bureau of Land Management (BLM) decides where vehicles may be used. Railways often pass over land not owned by the railway company. The right to this land is granted to the railway company through mechanisms such as easement. 
Watercraft are generally allowed to navigate public waters without restriction as long as they do not cause a disturbance. Passing through a lock, however, may require paying a toll. Despite the common law tradition Cuius est solum, eius est usque ad coelum et ad inferos of owning all the air above one's property, the U.S. Supreme Court ruled that aircraft in the U.S. have the right to use air above someone else's property without their consent. While the same rule generally applies in all jurisdictions, some countries, such as Cuba and Russia, have taken advantage of air rights on a national level to earn money. There are some areas that aircraft are barred from overflying. This is called prohibited airspace. Prohibited airspace is usually strictly enforced due to potential damage from espionage or attack. In the case of Korean Air Lines Flight 007, the airliner entered prohibited airspace over Soviet territory and was shot down as it was leaving. Safety Several different metrics are used to compare and evaluate the safety of different vehicles. The main three are deaths per billion passenger-journeys, deaths per billion passenger-hours and deaths per billion passenger-kilometers (a small worked example of these three normalizations is sketched below). See also Automotive acronyms and abbreviations ISIRI 6924 Narrow-track vehicle Outline of vehicles Personal transporter Propulsion Single-track vehicle Vehicular dynamics Vehicular metrics References Transport Manufactured goods
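The three safety metrics named above normalize the same fatality count by different measures of exposure, so the ranking of transport modes can change depending on which one is used. The following Python sketch uses invented figures purely to show the arithmetic; none of the numbers come from the article.

# Invented example data: one transport mode over one year.
deaths = 120
passenger_journeys = 3.0e9     # total journeys taken
passenger_hours = 1.5e9        # total hours travelled
passenger_km = 9.0e10          # total kilometers travelled

per_billion_journeys = deaths / (passenger_journeys / 1e9)   # 40.0
per_billion_hours = deaths / (passenger_hours / 1e9)         # 80.0
per_billion_km = deaths / (passenger_km / 1e9)               # ~1.3

print(per_billion_journeys, per_billion_hours, per_billion_km)

A mode with long, fast journeys, such as air travel, tends to look safest per kilometer, while the same data can appear less favorable per journey; this is why all three metrics are usually quoted together.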
Vehicle
[ "Physics" ]
5,865
[ "Vehicles", "Transport", "Physical systems" ]
32,431
https://en.wikipedia.org/wiki/Vanadium
Vanadium is a chemical element; it has symbol V and atomic number 23. It is a hard, silvery-grey, malleable transition metal. The elemental metal is rarely found in nature, but once isolated artificially, the formation of an oxide layer (passivation) somewhat stabilizes the free metal against further oxidation. Spanish-Mexican scientist Andrés Manuel del Río discovered compounds of vanadium in 1801 by analyzing a new lead-bearing mineral he called "brown lead". Though he initially presumed its qualities were due to the presence of a new element, he was later erroneously convinced by French chemist Hippolyte Victor Collet-Descotils that the element was just chromium. Then in 1830, Nils Gabriel Sefström generated chlorides of vanadium, thus proving there was a new element, and named it "vanadium" after the Scandinavian goddess of beauty and fertility, Vanadís (Freyja). The name was based on the wide range of colors found in vanadium compounds. Del Río's lead mineral was ultimately named vanadinite for its vanadium content. In 1867, Henry Enfield Roscoe obtained the pure element. Vanadium occurs naturally in about 65 minerals and fossil fuel deposits. It is produced in China and Russia from steel smelter slag. Other countries produce it either from magnetite directly, flue dust of heavy oil, or as a byproduct of uranium mining. It is mainly used to produce specialty steel alloys such as high-speed tool steels, and some aluminium alloys. The most important industrial vanadium compound, vanadium pentoxide, is used as a catalyst for the production of sulfuric acid. The vanadium redox battery for energy storage may be an important application in the future. Large amounts of vanadium ions are found in a few organisms, possibly as a toxin. The oxide and some other salts of vanadium have moderate toxicity. Particularly in the ocean, vanadium is used by some life forms as an active center of enzymes, such as the vanadium bromoperoxidase of some ocean algae. History Vanadium was discovered in Mexico in 1801 by the Spanish mineralogist Andrés Manuel del Río. Del Río extracted the element from a sample of Mexican "brown lead" ore, later named vanadinite. He found that its salts exhibit a wide variety of colors, and as a result, he named the element panchromium (Greek: παγχρώμιο "all colors"). Later, del Río renamed the element erythronium (Greek: ερυθρός "red") because most of the salts turned red upon heating. In 1805, French chemist Hippolyte Victor Collet-Descotils, backed by del Río's friend Baron Alexander von Humboldt, incorrectly declared that del Río's new element was an impure sample of chromium. Del Río accepted Collet-Descotils' statement and retracted his claim. In 1831 Swedish chemist Nils Gabriel Sefström rediscovered the element in a new oxide he found while working with iron ores. Later that year, Friedrich Wöhler confirmed that this element was identical to that found by del Río and hence confirmed del Río's earlier work. Sefström chose a name beginning with V, which had not yet been assigned to any element. He called the element vanadium after Old Norse Vanadís (another name for the Norse Vanir goddess Freyja, whose attributes include beauty and fertility), because of the many beautifully colored chemical compounds it produces. On learning of Wöhler's findings, del Río began to passionately argue that his old claim be recognized, but the element kept the name vanadium. 
In 1831, the geologist George William Featherstonhaugh suggested that vanadium should be renamed "rionium" after del Río, but this suggestion was not followed. As vanadium is usually found combined with other elements, the isolation of vanadium metal was difficult. In 1831, Berzelius reported the production of the metal, but Henry Enfield Roscoe showed that Berzelius had produced the nitride, vanadium nitride (VN). Roscoe eventually produced the metal in 1867 by reduction of vanadium(II) chloride, VCl2, with hydrogen. In 1927, pure vanadium was produced by reducing vanadium pentoxide with calcium. The first large-scale industrial use of vanadium was in the steel alloy chassis of the Ford Model T, inspired by French race cars. Vanadium steel allowed reduced weight while increasing tensile strength. For the first decade of the 20th century, most vanadium ore was mined by the American Vanadium Company from the Minas Ragra in Peru. Later, the demand for uranium rose, leading to increased mining of that metal's ores. One major uranium ore was carnotite, which also contains vanadium. Thus, vanadium became available as a by-product of uranium production. Eventually, uranium mining began to supply a large share of the demand for vanadium. In 1911, German chemist Martin Henze discovered vanadium in the hemovanadin proteins found in blood cells (or coelomic cells) of Ascidiacea (sea squirts). Characteristics Vanadium is an average-hard, ductile, steel-blue metal. Vanadium is usually described as "soft", because it is ductile, malleable, and not brittle. Vanadium is harder than most metals and steels (see Hardnesses of the elements (data page) and iron). It has good resistance to corrosion and it is stable against alkalis and sulfuric and hydrochloric acids. It is oxidized in air at about 933 K (660 °C, 1220 °F), although an oxide passivation layer forms even at room temperature. It also reacts with hydrogen peroxide. Isotopes Naturally occurring vanadium is composed of one stable isotope, 51V, and one radioactive isotope, 50V. The latter has a half-life of 2.71×10^17 years and a natural abundance of 0.25%. 51V has a nuclear spin of 7/2, which is useful for NMR spectroscopy. Twenty-four artificial radioisotopes have been characterized, ranging in mass number from 40 to 65. The most stable of these isotopes are 49V with a half-life of 330 days, and 48V with a half-life of 16.0 days. The remaining radioactive isotopes have half-lives shorter than an hour, most below 10 seconds. At least four isotopes have metastable excited states. Electron capture is the main decay mode for isotopes lighter than 51V. For the heavier ones, the most common mode is beta decay. The electron capture reactions lead to the formation of element 22 (titanium) isotopes, while beta decay leads to element 24 (chromium) isotopes. Compounds The chemistry of vanadium is noteworthy for the accessibility of the four adjacent oxidation states 2–5. In an aqueous solution, vanadium forms metal aquo complexes of which the colors are lilac [V(H2O)6]2+, green [V(H2O)6]3+, blue [VO(H2O)5]2+, yellow-orange oxides [VO(H2O)5]3+, the formula for which depends on pH. Vanadium(II) compounds are reducing agents, and vanadium(V) compounds are oxidizing agents. Vanadium(IV) compounds often exist as vanadyl derivatives, which contain the VO2+ center. Ammonium vanadate(V) (NH4VO3) can be successively reduced with elemental zinc to obtain the different colors of vanadium in these four oxidation states. 
Lower oxidation states occur in compounds such as V(CO)6, and substituted derivatives. Vanadium pentoxide is a commercially important catalyst for the production of sulfuric acid, a reaction that exploits the ability of vanadium oxides to undergo redox reactions. The vanadium redox battery utilizes all four oxidation states: one electrode uses the +5/+4 couple and the other uses the +3/+2 couple. Conversion of these oxidation states is illustrated by the reduction of a strongly acidic solution of a vanadium(V) compound with zinc dust or amalgam. The initial yellow color characteristic of the pervanadyl ion [VO2(H2O)4]+ is replaced by the blue color of [VO(H2O)5]2+, followed by the green color of [V(H2O)6]3+ and then the violet color of [V(H2O)6]2+. Another potential vanadium battery based on VB2 uses multiple oxidation state to allow for 11 electrons to be released per VB2, giving it higher energy capacity by order of compared to Li-ion and gasoline per unit volume. VB2 batteries can be further enhanced as air batteries, allowing for even higher energy density and lower weight than lithium battery or gasoline, even though recharging remains a challenge. Oxyanions In an aqueous solution, vanadium(V) forms an extensive family of oxyanions as established by 51V NMR spectroscopy. The interrelationships in this family are described by the predominance diagram, which shows at least 11 species, depending on pH and concentration. The tetrahedral orthovanadate ion, , is the principal species present at pH 12–14. Similar in size and charge to phosphorus(V), vanadium(V) also parallels its chemistry and crystallography. Orthovanadate V is used in protein crystallography to study the biochemistry of phosphate. Besides that, this anion also has been shown to interact with the activity of some specific enzymes. The tetrathiovanadate [VS4]3− is analogous to the orthovanadate ion. At lower pH values, the monomer [HVO4]2− and dimer [V2O7]4− are formed, with the monomer predominant at a vanadium concentration of less than c. 10−2M (pV > 2, where pV is equal to the minus value of the logarithm of the total vanadium concentration/M). The formation of the divanadate ion is analogous to the formation of the dichromate ion. As the pH is reduced, further protonation and condensation to polyvanadates occur: at pH 4–6 [H2VO4]− is predominant at pV greater than ca. 4, while at higher concentrations trimers and tetramers are formed. Between pH 2–4 decavanadate predominates, its formation from orthovanadate is represented by this condensation reaction: 10 [VO4]3− + 24 H+ → [V10O28]6− + 12 H2O In decavanadate, each V(V) center is surrounded by six oxide ligands. Vanadic acid, H3VO4, exists only at very low concentrations because protonation of the tetrahedral species [H2VO4]− results in the preferential formation of the octahedral [VO2(H2O)4]+ species. In strongly acidic solutions, pH < 2, [VO2(H2O)4]+ is the predominant species, while the oxide V2O5 precipitates from solution at high concentrations. The oxide is formally the acid anhydride of vanadic acid. The structures of many vanadate compounds have been determined by X-ray crystallography. Vanadium(V) forms various peroxo complexes, most notably in the active site of the vanadium-containing bromoperoxidase enzymes. The species VO(O2)(H2O)4+ is stable in acidic solutions. 
In alkaline solutions, species with 2, 3 and 4 peroxide groups are known; the last forms violet salts with the formula M3V(O2)4·nH2O (M = Li, Na, etc.), in which the vanadium has an 8-coordinate dodecahedral structure. Halide derivatives Twelve binary halides, compounds with the formula VXn (n=2..5), are known. VI4, VCl5, VBr5, and VI5 do not exist or are extremely unstable. In combination with other reagents, VCl4 is used as a catalyst for the polymerization of dienes. Like all binary halides, those of vanadium are Lewis acidic, especially those of V(IV) and V(V). Many of the halides form octahedral complexes with the formula VXnL6−n (X = halide; L = other ligand). Many vanadium oxyhalides (formula VOmXn) are known. The oxytrichloride and oxytrifluoride (VOCl3 and VOF3) are the most widely studied. Akin to POCl3, they are volatile, adopt tetrahedral structures in the gas phase, and are Lewis acidic. Coordination compounds Complexes of vanadium(II) and (III) are reducing, while those of V(IV) and V(V) are oxidants. The vanadium ion is rather large and some complexes achieve coordination numbers greater than 6, as is the case in [V(CN)7]4−. Oxovanadium(V) also forms 7-coordinate coordination complexes with tetradentate ligands and peroxides, and these complexes are used for oxidative brominations and thioether oxidations. The coordination chemistry of V4+ is dominated by the vanadyl center, VO2+, which binds four other ligands strongly and one weakly (the one trans to the vanadyl center). An example is vanadyl acetylacetonate (V(O)(O2C5H7)2). In this complex, the vanadium is 5-coordinate, distorted square pyramidal, meaning that a sixth ligand, such as pyridine, may be attached, though the association constant of this process is small. Many 5-coordinate vanadyl complexes have a trigonal bipyramidal geometry, such as VOCl2(NMe3)2. The coordination chemistry of V5+ is dominated by the relatively stable dioxovanadium coordination complexes, which are often formed by aerial oxidation of the vanadium(IV) precursors, indicating the stability of the +5 oxidation state and ease of interconversion between the +4 and +5 states. Organometallic compounds The organometallic chemistry of vanadium is well developed. Vanadocene dichloride is a versatile starting reagent and has applications in organic chemistry. Vanadium carbonyl, V(CO)6, is a rare example of a paramagnetic metal carbonyl. Reduction yields [V(CO)6]− (isoelectronic with Cr(CO)6), which may be further reduced with sodium in liquid ammonia to yield [V(CO)5]3− (isoelectronic with Fe(CO)5). Occurrence Metallic vanadium is rare in nature (known as native vanadium), having been found among fumaroles of the Colima Volcano, but vanadium compounds occur naturally in about 65 different minerals. Vanadium began to be used in the manufacture of special steels in 1896. At that time, very few deposits of vanadium ores were known. Between 1899 and 1906, the main deposits exploited were the mines of Santa Marta de los Barros (Badajoz), Spain. Vanadinite was extracted from these mines. At the beginning of the 20th century, a large deposit of vanadium ore was discovered near Junín, Cerro de Pasco, Peru (now the Minas Ragra vanadium mine). For several years this patrónite (VS4) deposit was an economically significant source for vanadium ore. In 1920 roughly two-thirds of the worldwide production was supplied by the mine in Peru. With the production of uranium in the 1910s and 1920s from carnotite, vanadium became available as a side product of uranium production. 
Vanadinite and other vanadium-bearing minerals are only mined in exceptional cases. With the rising demand, much of the world's vanadium production is now sourced from vanadium-bearing magnetite found in ultramafic gabbro bodies. If this titanomagnetite is used to produce iron, most of the vanadium goes to the slag and is extracted from it. Vanadium is mined mostly in China, South Africa and eastern Russia. In 2022 these three countries mined more than 96% of the 100,000 tons of produced vanadium, with China providing 70%. The fumaroles of Colima are known to be vanadium-rich, depositing other vanadium minerals that include shcherbinaite (V2O5) and colimaite (K3VS4). Vanadium is also present in bauxite and deposits of crude oil, coal, oil shale, and tar sands. In crude oil, concentrations up to 1200 ppm have been reported. When such oil products are burned, traces of vanadium may cause corrosion in engines and boilers. An estimated 110,000 tons of vanadium per year are released into the atmosphere by burning fossil fuels. Black shales are also a potential source of vanadium. During WWII some vanadium was extracted from alum shales in the south of Sweden. In the universe, the cosmic abundance of vanadium is 0.0001%, making the element nearly as common as copper or zinc. Vanadium is the 19th most abundant element in the crust. It is detected spectroscopically in light from the Sun and sometimes in the light from other stars. The vanadyl ion is also abundant in seawater, having an average concentration of 30 nM (1.5 mg/m3). Some mineral water springs also contain the ion in high concentrations. For example, springs near Mount Fuji contain as much as 54 μg per liter. Production Vanadium metal is obtained by a multistep process that begins with roasting crushed ore with NaCl or Na2CO3 at about 850 °C to give sodium metavanadate (NaVO3). An aqueous extract of this solid is acidified to produce "red cake", a polyvanadate salt, which is reduced with calcium metal. As an alternative for small-scale production, vanadium pentoxide is reduced with hydrogen or magnesium. Many other methods are also used, in all of which vanadium is produced as a byproduct of other processes. Purification of vanadium is possible by the crystal bar process developed by Anton Eduard van Arkel and Jan Hendrik de Boer in 1925. It involves the formation of the metal iodide, in this example vanadium(III) iodide, and the subsequent decomposition to yield pure metal: 2 V + 3 I2 ⇌ 2 VI3 Most vanadium is used as a steel alloy called ferrovanadium. Ferrovanadium is produced directly by reducing a mixture of vanadium oxide, iron oxides and iron in an electric furnace. The vanadium ends up in pig iron produced from vanadium-bearing magnetite. Depending on the ore used, the slag contains up to 25% of vanadium. Applications Alloys Approximately 85% of the vanadium produced is used as ferrovanadium or as a steel additive. The considerable increase of strength in steel containing small amounts of vanadium was discovered in the early 20th century. Vanadium forms stable nitrides and carbides, resulting in a significant increase in the strength of steel. From that time on, vanadium steel was used for applications in axles, bicycle frames, crankshafts, gears, and other critical components. There are two groups of vanadium steel alloys. Vanadium high-carbon steel alloys contain 0.15–0.25% vanadium, and high-speed tool steels (HSS) have a vanadium content of 1–5%. For high-speed tool steels, a hardness above HRC 60 can be achieved. 
HSS steel is used in surgical instruments and tools. Powder-metallurgic alloys contain up to 18% vanadium. The high content of vanadium carbides in those alloys increases wear resistance significantly. One application for those alloys is tools and knives. Vanadium stabilizes the beta form of titanium and increases the strength and temperature stability of titanium. Mixed with aluminium in titanium alloys, it is used in jet engines, high-speed airframes and dental implants. The most common alloy for seamless tubing is Titanium 3/2.5 containing 2.5% vanadium, the titanium alloy of choice in the aerospace, defense, and bicycle industries. Another common alloy, primarily produced in sheets, is Titanium 6AL-4V, a titanium alloy with 6% aluminium and 4% vanadium. Several vanadium alloys show superconducting behavior. The first A15 phase superconductor was a vanadium compound, V3Si, which was discovered in 1952. Vanadium-gallium tape is used in superconducting magnets (17.5 teslas or 175,000 gauss). The structure of the superconducting A15 phase of V3Ga is similar to that of the more common Nb3Sn and Nb3Ti. It has been found that a small amount, 40 to 270 ppm, of vanadium in Wootz steel significantly improved the strength of the product, and gave it the distinctive patterning. The source of the vanadium in the original Wootz steel ingots remains unknown. Vanadium can be used as a substitute for molybdenum in armor steel, though the alloy produced is far more brittle and prone to spalling on non-penetrating impacts. The Third Reich was one of the most prominent users of such alloys, in armored vehicles like Tiger II or Jagdtiger. Catalysts Vanadium compounds are used extensively as catalysts. Vanadium pentoxide, V2O5, is used as a catalyst in manufacturing sulfuric acid by the contact process. In this process sulfur dioxide (SO2) is oxidized to the trioxide (SO3): 2 SO2 + O2 → 2 SO3. In this redox reaction, sulfur is oxidized from +4 to +6, and vanadium is reduced from +5 to +4: V2O5 + SO2 → 2 VO2 + SO3 The catalyst is regenerated by oxidation with air: 4 VO2 + O2 → 2 V2O5 Similar oxidations are used in the production of maleic anhydride: C4H10 + 3.5 O2 → C4H2O3 + 4 H2O Phthalic anhydride and several other bulk organic compounds are produced similarly. These green chemistry processes convert inexpensive feedstocks to highly functionalized, versatile intermediates. Vanadium is an important component of mixed metal oxide catalysts used in the oxidation of propane and propylene to acrolein, acrylic acid or the ammoxidation of propylene to acrylonitrile. Other uses The vanadium redox battery, a type of flow battery, is an electrochemical cell consisting of aqueous vanadium ions in different oxidation states. Batteries of this type were first proposed in the 1930s and developed commercially from the 1980s onwards. Cells use +5 and +2 formal oxidation state ions. Vanadium redox batteries are used commercially for grid energy storage. Vanadate can be used for protecting steel against rust and corrosion by conversion coating. Vanadium foil is used in cladding titanium to steel because it is compatible with both iron and titanium. The moderate thermal neutron-capture cross-section and the short half-life of the isotopes produced by neutron capture make vanadium a suitable material for the inner structure of a fusion reactor. Vanadium can be added in small quantities (< 5%) to LFP battery cathodes to increase ionic conductivity. 
Proposed Lithium vanadium oxide has been proposed for use as a high energy density anode for lithium-ion batteries, at 745 Wh/L when paired with a lithium cobalt oxide cathode. Vanadium phosphates have been proposed as the cathode in the lithium vanadium phosphate battery, another type of lithium-ion battery. Biological role Vanadium has a more significant role in marine environments than terrestrial ones. Vanadoenzymes Several species of marine algae produce vanadium bromoperoxidase as well as the closely related chloroperoxidase (which may use a heme or vanadium cofactor) and iodoperoxidases. The bromoperoxidase produces an estimated 1–2 million tons of bromoform and 56,000 tons of bromomethane annually. Most naturally occurring organobromine compounds are produced by this enzyme, catalyzing the following reaction (R-H is a hydrocarbon substrate): R-H + Br− + H2O2 → R-Br + H2O + OH− A vanadium nitrogenase is used by some nitrogen-fixing micro-organisms, such as Azotobacter. In this role, vanadium serves in place of the more common molybdenum or iron, and gives the nitrogenase slightly different properties. Vanadium accumulation in tunicates Vanadium is essential to tunicates, where it is stored in the highly acidified vacuoles of certain blood cell types, designated vanadocytes. Vanabins (vanadium-binding proteins) have been identified in the cytoplasm of such cells. The concentration of vanadium in the blood of ascidian tunicates is as much as ten million times higher than the surrounding seawater, which normally contains 1 to 2 μg/L. The function of this vanadium concentration system and these vanadium-bearing proteins is still unknown, but the vanadocytes are later deposited just under the outer surface of the tunic, where they may deter predation. Fungi Amanita muscaria and related species of macrofungi accumulate vanadium (up to 500 mg/kg in dry weight). Vanadium is present in the coordination complex amavadin in fungal fruit-bodies. The biological importance of the accumulation is unknown. Toxic or peroxidase enzyme functions have been suggested. Mammals Deficiencies in vanadium result in reduced growth in rats. The U.S. Institute of Medicine has not confirmed that vanadium is an essential nutrient for humans, so neither a Recommended Dietary Intake nor an Adequate Intake has been established. Dietary intake is estimated at 6 to 18 μg/day, with less than 5% absorbed. The Tolerable Upper Intake Level (UL) of dietary vanadium, beyond which adverse effects may occur, is set at 1.8 mg/day. Research Vanadyl sulfate as a dietary supplement has been researched as a means of increasing insulin sensitivity or otherwise improving glycemic control in people who are diabetic. Some of the trials had significant treatment effects but were deemed to be of poor study quality. The amounts of vanadium used in these trials (30 to 150 mg) far exceeded the safe upper limit. The conclusion of the systematic review was "There is no rigorous evidence that oral vanadium supplementation improves glycaemic control in type 2 diabetes. The routine use of vanadium for this purpose cannot be recommended." In astrobiology, it has been suggested that discrete vanadium accumulations on Mars could be a potential microbial biosignature when used in conjunction with Raman spectroscopy and morphology. Safety All vanadium compounds should be considered toxic. Tetravalent VOSO4 has been reported to be at least 5 times more toxic than trivalent V2O3. 
The US Occupational Safety and Health Administration (OSHA) has set an exposure limit of 0.05 mg/m3 for vanadium pentoxide dust and 0.1 mg/m3 for vanadium pentoxide fumes in workplace air for an 8-hour workday, 40-hour work week. The US National Institute for Occupational Safety and Health (NIOSH) has recommended that 35 mg/m3 of vanadium be considered immediately dangerous to life and health, that is, likely to cause permanent health problems or death. Vanadium compounds are poorly absorbed through the gastrointestinal system. Inhalation of vanadium and vanadium compounds results primarily in adverse effects on the respiratory system. Quantitative data are, however, insufficient to derive a subchronic or chronic inhalation reference dose. Other effects have been reported after oral or inhalation exposures on blood parameters, liver, neurological development, and other organs in rats. There is little evidence that vanadium or vanadium compounds are reproductive toxins or teratogens. Vanadium pentoxide was reported to be carcinogenic in male rats and in male and female mice by inhalation in an NTP study, although the interpretation of the results has been disputed a few years after the report. The carcinogenicity of vanadium has not been determined by the United States Environmental Protection Agency. Vanadium traces in diesel fuels are the main fuel component in high temperature corrosion. During combustion, vanadium oxidizes and reacts with sodium and sulfur, yielding vanadate compounds with melting points as low as , which attack the passivation layer on steel and render it susceptible to corrosion. The solid vanadium compounds also abrade engine components. See also References Further reading External links Vanadium at The Periodic Table of Videos (University of Nottingham) Chemical elements Transition metals Dietary minerals Restorative dentistry Chemical elements with body-centered cubic structure Native element minerals
Vanadium
[ "Physics" ]
6,299
[ "Chemical elements", "Atoms", "Matter" ]
32,436
https://en.wikipedia.org/wiki/Vinyl%20group
In organic chemistry, a vinyl group (abbr. Vi; IUPAC name: ethenyl group) is a functional group with the formula −CH=CH2. It is the ethylene (IUPAC name: ethene) molecule (CH2=CH2) with one fewer hydrogen atom. The name is also used for any compound containing that group, namely R−CH=CH2, where R is any other group of atoms. An industrially important example is vinyl chloride, precursor to PVC, a plastic commonly known as vinyl. Vinyl is one of the alkenyl functional groups. On a carbon skeleton, sp2-hybridized carbons or positions are often called vinylic. Allyls, acrylates and styrenics contain vinyl groups. (A styrenic crosslinker with two vinyl groups is called divinyl benzene.) Vinyl polymers Vinyl groups can polymerize with the aid of a radical initiator or a catalyst, forming vinyl polymers. Vinyl polymers contain no vinyl groups. Instead they are saturated. Examples of vinyl polymers include polyethylene, polystyrene, poly(vinyl chloride) and poly(vinyl acetate). Synthesis and reactivity Vinyl derivatives are alkenes. If activated by an adjacent group, the increased polarization of the bond gives rise to characteristic reactivity, which is termed vinylogous: In allyl compounds, where the next carbon is saturated but substituted once, allylic rearrangement and related reactions are observed. Allyl Grignard reagents (organomagnesiums) can attack with the vinyl end first. If next to an electron-withdrawing group, conjugate addition (Michael addition) can occur. Vinyl organometallics, e.g. vinyllithium and vinyl tributyltin, participate in vinylations including coupling reactions such as in Negishi coupling. History and etymology The radical was first reported by Henri Victor Regnault in 1835 and initially named aldehydène. Because the atomic mass of carbon had been measured incorrectly, its composition was misassigned at the time. Then in 1839 it was renamed by Justus von Liebig to "acetyl", because he believed it to be the radical of acetic acid. The modern term was coined by German chemist Hermann Kolbe in 1851, who rebutted Liebig's hypothesis. However even in 1860 Marcellin Berthelot still based the name he coined for acetylene on Liebig's nomenclature and not on Kolbe's. The etymology of "vinyl" is the Latin vinum = "wine", and the Greek word "hylos" 'υλος (matter or material), because of its relationship with ethyl alcohol. See also Acetylenic Allylic/Homoallylic Alpha-olefin Benzylic Propargylic/Homopropargylic Vinylogous References Alkenyl groups Monomers Functional groups
Vinyl group
[ "Chemistry", "Materials_science" ]
583
[ "Substituents", "Functional groups", "Alkenyl groups", "Polymer chemistry", "Monomers" ]
32,500
https://en.wikipedia.org/wiki/Vacuum%20pump
A vacuum pump is a type of pump device that draws gas particles from a sealed volume in order to leave behind a partial vacuum. The first vacuum pump was invented in 1650 by Otto von Guericke, and was preceded by the suction pump, which dates to antiquity. History Early pumps The predecessor to the vacuum pump was the suction pump. Dual-action suction pumps were found in the city of Pompeii. Arabic engineer Al-Jazari later described dual-action suction pumps as part of water-raising machines in the 13th century. He also said that a suction pump was used in siphons to discharge Greek fire. The suction pump later appeared in medieval Europe from the 15th century. By the 17th century, water pump designs had improved to the point that they produced measurable vacuums, but this was not immediately understood. What was known was that suction pumps could not pull water beyond a certain height: 18 Florentine yards according to a measurement taken around 1635, or about . This limit was a concern in irrigation projects, mine drainage, and decorative water fountains planned by the Duke of Tuscany, so the duke commissioned Galileo Galilei to investigate the problem. Galileo suggested, incorrectly, in his Two New Sciences (1638) that the column of a water pump will break of its own weight when the water has been lifted to 34 feet. Other scientists took up the challenge, including Gasparo Berti, who replicated it by building the first water barometer in Rome in 1639. Berti's barometer produced a vacuum above the water column, but he could not explain it. A breakthrough was made by Galileo's student Evangelista Torricelli in 1643. Building upon Galileo's notes, he built the first mercury barometer and wrote a convincing argument that the space at the top was a vacuum. The height of the column was then limited to the maximum weight that atmospheric pressure could support; this is the limiting height of a suction pump. In 1650, Otto von Guericke invented the first vacuum pump. Four years later, he conducted his famous Magdeburg hemispheres experiment, showing that teams of horses could not separate two hemispheres from which the air had been evacuated. Robert Boyle improved Guericke's design and conducted experiments on the properties of vacuum. Robert Hooke also helped Boyle produce an air pump that helped to produce the vacuum. By 1709, Francis Hauksbee improved on the design further with his two-cylinder pump, where two pistons worked via a rack-and-pinion design that reportedly "gave a vacuum within about one inch of mercury of perfect." This design remained popular and only slightly changed until well into the nineteenth century. 19th century Heinrich Geissler invented the mercury displacement pump in 1855 and achieved a record vacuum of about 10 Pa (0.1 Torr). A number of electrical properties become observable at this vacuum level, and this renewed interest in vacuum. This, in turn, led to the development of the vacuum tube. The Sprengel pump was a widely used vacuum producer of this time. 20th century The early 20th century saw the invention of many types of vacuum pump, including the molecular drag pump, the diffusion pump, and the turbomolecular pump. Types Pumps can be broadly categorized according to three techniques: positive displacement, momentum transfer, and entrapment. Positive displacement pumps use a mechanism to repeatedly expand a cavity, allow gases to flow in from the chamber, seal off the cavity, and exhaust it to the atmosphere. 
Momentum transfer pumps, also called molecular pumps, use high-speed jets of dense fluid or high-speed rotating blades to knock gas molecules out of the chamber. Entrapment pumps capture gases in a solid or adsorbed state; this includes cryopumps, getters, and ion pumps. Positive displacement pumps are the most effective for low vacuums. Momentum transfer pumps, in conjunction with one or two positive displacement pumps, are the most common configuration used to achieve high vacuums. In this configuration the positive displacement pump serves two purposes. First it obtains a rough vacuum in the vessel being evacuated before the momentum transfer pump can be used to obtain the high vacuum, as momentum transfer pumps cannot start pumping at atmospheric pressures. Second the positive displacement pump backs up the momentum transfer pump by evacuating to low vacuum the accumulation of displaced molecules in the high vacuum pump. Entrapment pumps can be added to reach ultrahigh vacuums, but they require periodic regeneration of the surfaces that trap air molecules or ions. Due to this requirement their available operational time can be unacceptably short in low and high vacuums, thus limiting their use to ultrahigh vacuums. Pumps also differ in details like manufacturing tolerances, sealing material, pressure, flow, admission or no admission of oil vapor, service intervals, reliability, tolerance to dust, tolerance to chemicals, tolerance to liquids and vibration. Positive displacement pump A partial vacuum may be generated by increasing the volume of a container. To continue evacuating a chamber indefinitely without requiring infinite growth, a compartment of the vacuum can be repeatedly closed off, exhausted, and expanded again. This is the principle behind a positive displacement pump, for example the manual water pump. Inside the pump, a mechanism expands a small sealed cavity to reduce its pressure below that of the atmosphere. Because of the pressure differential, some fluid from the chamber (or the well, in our example) is pushed into the pump's small cavity. The pump's cavity is then sealed from the chamber, opened to the atmosphere, and squeezed back to a minute size. More sophisticated systems are used for most industrial applications, but the basic principle of cyclic volume removal is the same: Rotary vane pump, the most common Diaphragm pump, zero oil contamination Liquid ring high resistance to dust Piston pump, fluctuating vacuum Scroll pump, highest speed dry pump Screw pump (10 Pa) Wankel pump External vane pump Roots blower, also called a booster pump, has highest pumping speeds but low compression ratio Multistage Roots pump that combine several stages providing high pumping speed with better compression ratio Toepler pump Lobe pump The base pressure of a rubber- and plastic-sealed piston pump system is typically 1 to 50 kPa, while a scroll pump might reach 10 Pa (when new) and a rotary vane oil pump with a clean and empty metallic chamber can easily achieve 0.1 Pa. A positive displacement vacuum pump moves the same volume of gas with each cycle, so its pumping speed is constant unless it is overcome by backstreaming. Momentum transfer pump In a momentum transfer pump (or kinetic pump), gas molecules are accelerated from the vacuum side to the exhaust side (which is usually maintained at a reduced pressure by a positive displacement pump). Momentum transfer pumping is only possible below pressures of about 0.1 kPa. 
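That 0.1 kPa threshold reflects the changeover from viscous to molecular flow, which is usually judged with the Knudsen number, the ratio of the gas mean free path to a characteristic chamber dimension. A rough classification sketch (standard kinetic-theory formula; the molecular diameter, regime thresholds and chamber size are typical illustrative values, not taken from this article):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(pressure_pa: float, temp_k: float = 293.0,
                   molecule_diameter_m: float = 3.7e-10) -> float:
    """Kinetic-theory mean free path: lambda = k_B T / (sqrt(2) * pi * d^2 * P)."""
    return K_B * temp_k / (math.sqrt(2) * math.pi * molecule_diameter_m ** 2 * pressure_pa)

def flow_regime(pressure_pa: float, chamber_size_m: float) -> str:
    """Classify the flow from the Knudsen number Kn = lambda / L."""
    kn = mean_free_path(pressure_pa) / chamber_size_m
    if kn < 0.01:
        return "viscous (continuum) flow"
    if kn < 0.5:
        return "transitional flow"
    return "molecular flow"

for p in (101_325.0, 100.0, 0.1, 1e-4):  # pressures in Pa
    print(f"{p:>10.4g} Pa -> {flow_regime(p, chamber_size_m=0.1)}")
```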
Matter flows differently at different pressures based on the laws of fluid dynamics. At atmospheric pressure and mild vacuums, molecules interact with each other and push on their neighboring molecules in what is known as viscous flow. When the distance between the molecules increases, the molecules interact with the walls of the chamber more often than with the other molecules, and molecular pumping becomes more effective than positive displacement pumping. This regime is generally called high vacuum. Molecular pumps sweep out a larger area than mechanical pumps, and do so more frequently, making them capable of much higher pumping speeds. They do this at the expense of the seal between the vacuum and their exhaust. Since there is no seal, a small pressure at the exhaust can easily cause backstreaming through the pump; this is called stall. In high vacuum, however, pressure gradients have little effect on fluid flows, and molecular pumps can attain their full potential. The two main types of molecular pumps are the diffusion pump and the turbomolecular pump. Both types of pumps blow out gas molecules that diffuse into the pump by imparting momentum to the gas molecules. Diffusion pumps blow out gas molecules with jets of an oil or mercury vapor, while turbomolecular pumps use high speed fans to push the gas. Both of these pumps will stall and fail to pump if exhausted directly to atmospheric pressure, so they must be exhausted to a lower grade vacuum created by a mechanical pump, in this case called a backing pump. As with positive displacement pumps, the base pressure will be reached when leakage, outgassing, and backstreaming equal the pump speed, but now minimizing leakage and outgassing to a level comparable to backstreaming becomes much more difficult. Entrapment pump An entrapment pump may be a cryopump, which uses cold temperatures to condense gases to a solid or adsorbed state, a chemical pump, which reacts with gases to produce a solid residue, or an ion pump, which uses strong electrical fields to ionize gases and propel the ions into a solid substrate. A cryomodule uses cryopumping. Other types are the sorption pump, non-evaporative getter pump, and titanium sublimation pump (a type of evaporative getter that can be used repeatedly). Other types Regenerative pump Regenerative pumps utilize vortex behavior of the fluid (air). The construction is based on a hybrid concept combining the centrifugal pump and the turbopump. It usually consists of several sets of perpendicular teeth on the rotor that circulate air molecules inside stationary hollow grooves, like a multistage centrifugal pump. Such pumps can reach about 1×10−5 mbar (0.001 Pa) when combined with a Holweck pump, and can exhaust directly to atmospheric pressure. Examples of such pumps are the Edwards EPX and the Pfeiffer OnTool™ Booster 150. This design is sometimes referred to as a side channel pump. Because of its high pumping rate from atmosphere to high vacuum, and the reduced contamination possible since the bearing can be installed on the exhaust side, this type of pump is used in load locks in semiconductor manufacturing processes. This type of pump suffers from high power consumption (~1 kW) at low pressure compared to a turbomolecular pump (<100 W), since most of the power is consumed in pushing the exhaust against atmospheric pressure. This can be reduced nearly tenfold by backing it with a small pump.
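Stepping back from the individual designs: two quantitative statements made earlier in this section, that a constant-speed pump gives a roughly exponential pressure drop and that the base pressure is reached when leakage and outgassing balance the pump, can be written as one-line formulas. A simplified sketch with invented numbers (single pump, no conductance losses):

```python
import math

def pumpdown_time(volume_m3: float, speed_m3_s: float,
                  p_start_pa: float, p_end_pa: float) -> float:
    """Idealized pump-down time t = (V / S) * ln(p_start / p_end)
    for a constant-speed pump with no leaks or outgassing."""
    return (volume_m3 / speed_m3_s) * math.log(p_start_pa / p_end_pa)

def base_pressure(leak_load_pa_m3_s: float, speed_m3_s: float) -> float:
    """Ultimate pressure where the pump throughput S * p just balances
    the constant leak / outgassing load."""
    return leak_load_pa_m3_s / speed_m3_s

S = 0.005   # m^3/s (5 L/s), hypothetical rotary vane pump speed
V = 0.1     # m^3, hypothetical chamber volume
print(f"atmosphere to 10 Pa: {pumpdown_time(V, S, 101_325.0, 10.0):.0f} s")
print(f"base pressure with a 1e-4 Pa*m^3/s leak load: {base_pressure(1e-4, S):.2e} Pa")
```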
More examples Additional types of pump include the: Venturi vacuum pump (aspirator) (10 to 30 kPa) Steam ejector (vacuum depends on the number of stages, but can be very low) Performance measures Pumping speed refers to the volume flow rate of a pump at its inlet, often measured in volume per unit of time. Momentum transfer and entrapment pumps are more effective on some gases than others, so the pumping rate can be different for each of the gases being pumped, and the average volume flow rate of the pump will vary depending on the chemical composition of the gases remaining in the chamber. Throughput refers to the pumping speed multiplied by the gas pressure at the inlet, and is measured in units of pressure·volume/unit time. At a constant temperature, throughput is proportional to the number of molecules being pumped per unit time, and therefore to the mass flow rate of the pump. When discussing a leak in the system or backstreaming through the pump, throughput refers to the volume leak rate multiplied by the pressure at the vacuum side of the leak, so the leak throughput can be compared to the pump throughput. Positive displacement and momentum transfer pumps have a constant volume flow rate (pumping speed), but as the chamber's pressure drops, this volume contains less and less mass. So although the pumping speed remains constant, the throughput and mass flow rate drop exponentially. Meanwhile, the leakage, evaporation, sublimation and backstreaming rates continue to produce a constant throughput into the system. Techniques Vacuum pumps are combined with chambers and operational procedures into a wide variety of vacuum systems. Sometimes more than one pump will be used (in series or in parallel) in a single application. A partial vacuum, or rough vacuum, can be created using a positive displacement pump that transports a gas load from an inlet port to an outlet (exhaust) port. Because of their mechanical limitations, such pumps can only achieve a low vacuum. To achieve a higher vacuum, other techniques must then be used, typically in series (usually following an initial fast pump down with a positive displacement pump). Some examples might be use of an oil sealed rotary vane pump (the most common positive displacement pump) backing a diffusion pump, or a dry scroll pump backing a turbomolecular pump. There are other combinations depending on the level of vacuum being sought. Achieving high vacuum is difficult because all of the materials exposed to the vacuum must be carefully evaluated for their outgassing and vapor pressure properties. For example, oils, greases, and rubber or plastic gaskets used as seals for the vacuum chamber must not boil off when exposed to the vacuum, or the gases they produce would prevent the creation of the desired degree of vacuum. Often, all of the surfaces exposed to the vacuum must be baked at high temperature to drive off adsorbed gases. Outgassing can also be reduced simply by desiccation prior to vacuum pumping. High-vacuum systems generally require metal chambers with metal gasket seals such as Klein flanges or ISO flanges, rather than the rubber gaskets more common in low vacuum chamber seals. The system must be clean and free of organic matter to minimize outgassing. All materials, solid or liquid, have a small vapour pressure, and their outgassing becomes important when the vacuum pressure falls below this vapour pressure. 
As a result, many materials that work well in low vacuums, such as epoxy, will become a source of outgassing at higher vacuums. With these standard precautions, vacuums of 1 mPa are easily achieved with an assortment of molecular pumps. With careful design and operation, 1 μPa is possible. Several types of pumps may be used in sequence or in parallel. In a typical pumpdown sequence, a positive displacement pump would be used to remove most of the gas from a chamber, starting from atmosphere (760 Torr, 101 kPa) to 25 Torr (3 kPa). Then a sorption pump would be used to bring the pressure down to 10−4 Torr (10 mPa). A cryopump or turbomolecular pump would be used to bring the pressure further down to 10−8 Torr (1 μPa). An additional ion pump can be started below 10−6 Torr to remove gases which are not adequately handled by a cryopump or turbo pump, such as helium or hydrogen. Ultra-high vacuum generally requires custom-built equipment, strict operational procedures, and a fair amount of trial-and-error. Ultra-high vacuum systems are usually made of stainless steel with metal-gasketed vacuum flanges. The system is usually baked, preferably under vacuum, to temporarily raise the vapour pressure of all outgassing materials in the system and boil them off. If necessary, this outgassing of the system can also be performed at room temperature, but this takes much more time. Once the bulk of the outgassing materials are boiled off and evacuated, the system may be cooled to lower vapour pressures to minimize residual outgassing during actual operation. Some systems are cooled well below room temperature by liquid nitrogen to shut down residual outgassing and simultaneously cryopump the system. In ultra-high vacuum systems, some very odd leakage paths and outgassing sources must be considered. The water absorption of aluminium and palladium becomes an unacceptable source of outgassing, and even the absorptivity of hard metals such as stainless steel or titanium must be considered. Some oils and greases will boil off in extreme vacuums. The porosity of the metallic vacuum chamber walls may have to be considered, and the grain direction of the metallic flanges should be parallel to the flange face. The impact of molecular size must be considered. Smaller molecules can leak in more easily and are more easily absorbed by certain materials, and molecular pumps are less effective at pumping gases with lower molecular weights. A system may be able to evacuate nitrogen (the main component of air) to the desired vacuum, but the chamber could still be full of residual atmospheric hydrogen and helium. Vessels lined with a highly gas-permeable material such as palladium (which is a high-capacity hydrogen sponge) create special outgassing problems. 
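The typical pump-down sequence described above amounts to a rough mapping from target pressure to the pump stage usually brought in at that point. A toy selector along those lines (the cut-offs loosely follow the sequence quoted above and are approximate, not a specification):

```python
def suggest_stage(target_torr: float) -> str:
    """Very rough mapping from target pressure to the pump stage typically used."""
    if target_torr > 25:
        return "positive displacement (roughing) pump, starting from atmosphere"
    if target_torr > 1e-4:
        return "sorption pump (or similar), after the initial roughing stage"
    if target_torr > 1e-8:
        return "cryopump or turbomolecular pump, backed by a roughing pump"
    return "ion pump / getters on a baked, metal-gasketed ultra-high vacuum system"

for p in (760.0, 1.0, 1e-6, 1e-10):
    print(f"{p:>8.0e} Torr -> {suggest_stage(p)}")
```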
Applications Vacuum pumps are used in many industrial and scientific processes, including: Vacuum deaerator Composite plastic moulding processes; Production of most types of electric lamps, vacuum tubes, and CRTs where the device is either left evacuated or re-filled with a specific gas or gas mixture; Semiconductor processing, notably ion implantation, dry etch and PVD, ALD, PECVD and CVD deposition and so on in photolithography; Electron microscopy; Medical processes that require suction; Uranium enrichment; Medical applications such as radiotherapy, radiosurgery and radiopharmacy; Analytical instrumentation to analyse gas, liquid, solid, surface and bio materials; Mass spectrometers to create a high vacuum between the ion source and the detector; vacuum coating on glass, metal and plastics for decoration, for durability and for energy saving, such as low-emissivity glass, hard coating for engine components (as in Formula One), ophthalmic coating, milking machines and other equipment in dairy sheds; Vacuum impregnation of porous products such as wood or electric motor windings; Air conditioning service (removing all contaminants from the system before charging with refrigerant); Trash compactor; Vacuum engineering; Sewage systems (see EN1091:1997 standards); Freeze drying; and Fusion research. In the field of oil regeneration and re-refining, vacuum pumps create a low vacuum for oil dehydration and a high vacuum for oil purification. A vacuum may be used to power, or provide assistance to mechanical devices. In hybrid and diesel engine motor vehicles, a pump fitted on the engine (usually on the camshaft) is used to produce a vacuum. In petrol engines, instead, the vacuum is typically obtained as a side-effect of the operation of the engine and the flow restriction created by the throttle plate but may be also supplemented by an electrically operated vacuum pump to boost braking assistance or improve fuel consumption. This vacuum may then be used to power the following motor vehicle components: vacuum servo booster for the hydraulic brakes, motors that move dampers in the ventilation system, throttle driver in the cruise control servomechanism, door locks or trunk releases. In an aircraft, the vacuum source is often used to power gyroscopes in the various flight instruments. To prevent the complete loss of instrumentation in the event of an electrical failure, the instrument panel is deliberately designed with certain instruments powered by electricity and other instruments powered by the vacuum source. Depending on the application, some vacuum pumps may either be electrically driven (using electric current) or pneumatically-driven (using air pressure), or powered and actuated by other means. Hazards Old vacuum-pump oils that were produced before circa 1980 often contain a mixture of several different dangerous polychlorinated biphenyls (PCBs), which are highly toxic, carcinogenic, persistent organic pollutants. See also An Experiment on a Bird in the Air Pump Vacuum sewerage References Bibliography External links Pumps Pumps Pumps 1640s introductions 1642 beginnings Gas technologies German inventions 17th-century inventions
Vacuum pump
[ "Physics", "Chemistry", "Engineering" ]
4,016
[ "Pumps", "Turbomachinery", "Vacuum pumps", "Vacuum", "Physical systems", "Hydraulics", "Vacuum systems", "Matter" ]
32,502
https://en.wikipedia.org/wiki/Vacuum
A vacuum (: vacuums or vacua) is space devoid of matter. The word is derived from the Latin adjective (neuter ) meaning "vacant" or "void". An approximation to such vacuum is a region with a gaseous pressure much less than atmospheric pressure. Physicists often discuss ideal test results that would occur in a perfect vacuum, which they sometimes simply call "vacuum" or free space, and use the term partial vacuum to refer to an actual imperfect vacuum as one might have in a laboratory or in space. In engineering and applied physics on the other hand, vacuum refers to any space in which the pressure is considerably lower than atmospheric pressure. The Latin term in vacuo is used to describe an object that is surrounded by a vacuum. The quality of a partial vacuum refers to how closely it approaches a perfect vacuum. Other things equal, lower gas pressure means higher-quality vacuum. For example, a typical vacuum cleaner produces enough suction to reduce air pressure by around 20%. But higher-quality vacuums are possible. Ultra-high vacuum chambers, common in chemistry, physics, and engineering, operate below one trillionth (10−12) of atmospheric pressure (100 nPa), and can reach around 100 particles/cm3. Outer space is an even higher-quality vacuum, with the equivalent of just a few hydrogen atoms per cubic meter on average in intergalactic space. Vacuum has been a frequent topic of philosophical debate since ancient Greek times, but was not studied empirically until the 17th century. Clemens Timpler (1605) philosophized about the experimental possibility of producing a vacuum in small tubes. Evangelista Torricelli produced the first laboratory vacuum in 1643, and other experimental techniques were developed as a result of his theories of atmospheric pressure. A Torricellian vacuum is created by filling with mercury a tall glass container closed at one end, and then inverting it in a bowl to contain the mercury (see below). Vacuum became a valuable industrial tool in the 20th century with the introduction of incandescent light bulbs and vacuum tubes, and a wide array of vacuum technologies has since become available. The development of human spaceflight has raised interest in the impact of vacuum on human health, and on life forms in general. Etymology The word vacuum comes , noun use of neuter of vacuus, meaning "empty", related to vacare, meaning "to be empty". Vacuum is one of the few words in the English language that contains two consecutive instances of the vowel u. Historical understanding Historically, there has been much dispute over whether such a thing as a vacuum can exist. Ancient Greek philosophers debated the existence of a vacuum, or void, in the context of atomism, which posited void and atom as the fundamental explanatory elements of physics. Lucretius argued for the existence of vacuum in the first century BC and Hero of Alexandria tried unsuccessfully to create an artificial vacuum in the first century AD. Following Plato, however, even the abstract concept of a featureless void faced considerable skepticism: it could not be apprehended by the senses, it could not, itself, provide additional explanatory power beyond the physical volume with which it was commensurate and, by definition, it was quite literally nothing at all, which cannot rightly be said to exist. Aristotle believed that no void could occur naturally, because the denser surrounding material continuum would immediately fill any incipient rarity that might give rise to a void. 
In his Physics, book IV, Aristotle offered numerous arguments against the void: for example, that motion through a medium which offered no impediment could continue ad infinitum, there being no reason that something would come to rest anywhere in particular. In the medieval Muslim world, the physicist and Islamic scholar Al-Farabi wrote a treatise rejecting the existence of the vacuum in the 10th century. He concluded that air's volume can expand to fill available space, and therefore the concept of a perfect vacuum was incoherent. According to Ahmad Dallal, Abū Rayhān al-Bīrūnī states that "there is no observable evidence that rules out the possibility of vacuum". The suction pump was described by Arab engineer Al-Jazari in the 13th century, and later appeared in Europe from the 15th century. European scholars such as Roger Bacon, Blasius of Parma and Walter Burley in the 13th and 14th centuries focused considerable attention on issues concerning the concept of a vacuum. The commonly held view that nature abhorred a vacuum was called horror vacui. There was even speculation that not even God could create a vacuum if he wanted to, and the 1277 Paris condemnations of Bishop Étienne Tempier, which required there to be no restrictions on the powers of God, led to the conclusion that God could create a vacuum if he so wished. From the 14th century onward, scholars increasingly departed from the Aristotelian perspective, and by the 17th century it was widely acknowledged that a supernatural void could exist beyond the confines of the cosmos itself. This idea, influenced by Stoic physics, helped to segregate natural and theological concerns. Almost two thousand years after Plato, René Descartes also proposed a geometrically based alternative theory of atomism, without the problematic nothing–everything dichotomy of void and atom. Although Descartes agreed with the contemporary position, that a vacuum does not occur in nature, the success of his namesake coordinate system and, more implicitly, the spatial–corporeal component of his metaphysics would come to define the philosophically modern notion of empty space as a quantified extension of volume. By the ancient definition, however, directional information and magnitude were conceptually distinct. Medieval thought experiments into the idea of a vacuum considered whether a vacuum was present, if only for an instant, between two flat plates when they were rapidly separated. There was much discussion of whether the air moved in quickly enough as the plates were separated, or, as Walter Burley postulated, whether a 'celestial agent' prevented the vacuum arising. Jean Buridan reported in the 14th century that teams of ten horses could not pull open bellows when the port was sealed. The 17th century saw the first attempts to quantify measurements of partial vacuum. Evangelista Torricelli's mercury barometer of 1643 and Blaise Pascal's experiments both demonstrated a partial vacuum. In 1654, Otto von Guericke invented the first vacuum pump and conducted his famous Magdeburg hemispheres experiment, showing that, owing to atmospheric pressure outside the hemispheres, teams of horses could not separate two hemispheres from which the air had been partially evacuated. Robert Boyle improved Guericke's design and, with the help of Robert Hooke, further developed vacuum pump technology.
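Two of the 17th-century results mentioned above are easy to reproduce numerically from hydrostatics alone: the height of Torricelli's barometer column, set by the balance between atmospheric pressure and the weight of the liquid, and the force holding von Guericke's evacuated hemispheres together. A small sketch (the hemisphere diameter is an illustrative assumption, not a historical figure):

```python
import math

P_ATM = 101_325.0  # Pa, standard atmosphere
G = 9.81           # m/s^2

def column_height(density_kg_m3: float) -> float:
    """Barometer column height h = P_atm / (rho * g)."""
    return P_ATM / (density_kg_m3 * G)

def hemisphere_force(diameter_m: float, internal_pressure_pa: float = 0.0) -> float:
    """Net force pressing two evacuated hemispheres together: (P_atm - P_in) * pi * r^2."""
    return (P_ATM - internal_pressure_pa) * math.pi * (diameter_m / 2) ** 2

print(f"mercury column : {column_height(13_595.0):.2f} m")  # ~0.76 m, Torricelli's barometer
print(f"water column   : {column_height(1_000.0):.1f} m")   # ~10.3 m, the suction-pump limit
print(f"force on 0.5 m hemispheres: {hemisphere_force(0.5) / 1000:.1f} kN")  # ~20 kN
```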
Thereafter, research into the partial vacuum lapsed until 1850 when August Toepler invented the Toepler pump and in 1855 when Heinrich Geissler invented the mercury displacement pump, achieving a partial vacuum of about 10 Pa (0.1 Torr). A number of electrical properties become observable at this vacuum level, which renewed interest in further research. While outer space provides the most rarefied example of a naturally occurring partial vacuum, the heavens were originally thought to be seamlessly filled by a rigid indestructible material called aether. Borrowing somewhat from the pneuma of Stoic physics, aether came to be regarded as the rarefied air from which it took its name (see Aether (mythology)). Early theories of light posited a ubiquitous terrestrial and celestial medium through which light propagated. Additionally, the concept informed Isaac Newton's explanations of both refraction and radiant heat. 19th century experiments into this luminiferous aether attempted to detect a minute drag on the Earth's orbit. While the Earth does, in fact, move through a relatively dense medium in comparison to that of interstellar space, the drag is so minuscule that it could not be detected. In 1912, astronomer Henry Pickering commented: "While the interstellar absorbing medium may be simply the ether, [it] is characteristic of a gas, and free gaseous molecules are certainly there". Thereafter, however, luminiferous aether was discarded. Later, in 1930, Paul Dirac proposed a model of the vacuum as an infinite sea of particles possessing negative energy, called the Dirac sea. This theory helped refine the predictions of his earlier formulated Dirac equation, and successfully predicted the existence of the positron, confirmed two years later. Werner Heisenberg's uncertainty principle, formulated in 1927, predicted a fundamental limit on the precision with which instantaneous position and momentum, or energy and time, can be measured. This has far-reaching consequences and called into question whether the "emptiness" of space between particles exists. Classical field theories The strictest criterion to define a vacuum is a region of space and time where all the components of the stress–energy tensor are zero. This means that this region is devoid of energy and momentum, and by consequence, it must be empty of particles and other physical fields (such as electromagnetism) that contain energy and momentum. Gravity In general relativity, a vanishing stress–energy tensor implies, through Einstein field equations, the vanishing of all the components of the Ricci tensor. Vacuum does not mean that the curvature of space-time is necessarily flat: the gravitational field can still produce curvature in a vacuum in the form of tidal forces and gravitational waves (technically, these phenomena are the components of the Weyl tensor). The black hole (with zero electric charge) is an elegant example of a region completely "filled" with vacuum, but still showing a strong curvature. Electromagnetism In classical electromagnetism, the vacuum of free space, or sometimes just free space or perfect vacuum, is a standard reference medium for electromagnetic effects. Some authors refer to this reference medium as classical vacuum, a terminology intended to separate this concept from QED vacuum or QCD vacuum, where vacuum fluctuations can produce transient virtual particle densities and a relative permittivity and relative permeability that are not identically unity.
In the theory of classical electromagnetism, free space has the following properties: Electromagnetic radiation travels, when unobstructed, at the speed of light, the defined value 299,792,458 m/s in SI units. The superposition principle is always exactly true. For example, the electric potential generated by two charges is the simple addition of the potentials generated by each charge in isolation. The value of the electric field at any point around these two charges is found by calculating the vector sum of the two electric fields from each of the charges acting alone. The permittivity and permeability are exactly the electric constant ε0 and magnetic constant μ0, respectively (in SI units), or exactly 1 (in Gaussian units). The characteristic impedance equals the impedance of free space, approximately 376.73 Ω. The vacuum of classical electromagnetism can be viewed as an idealized electromagnetic medium with the constitutive relations, in SI units, D(r, t) = ε0 E(r, t) and H(r, t) = B(r, t)/μ0, relating the electric displacement field D to the electric field E, and the magnetic field or H-field H to the magnetic induction or B-field B. Here r is a spatial location and t is time. Quantum mechanics In quantum mechanics and quantum field theory, the vacuum is defined as the state (that is, the solution to the equations of the theory) with the lowest possible energy (the ground state of the Hilbert space). In quantum electrodynamics this vacuum is referred to as 'QED vacuum' to distinguish it from the vacuum of quantum chromodynamics, denoted as QCD vacuum. QED vacuum is a state with no matter particles (hence the name), and no photons. As described above, this state is impossible to achieve experimentally. (Even if every matter particle could somehow be removed from a volume, it would be impossible to eliminate all the blackbody photons.) Nonetheless, it provides a good model for realizable vacuum, and agrees with a number of experimental observations as described next. QED vacuum has interesting and complex properties. In QED vacuum, the electric and magnetic fields have zero average values, but their variances are not zero. As a result, QED vacuum contains vacuum fluctuations (virtual particles that hop into and out of existence), and a finite energy called vacuum energy. Vacuum fluctuations are an essential and ubiquitous part of quantum field theory. Some experimentally verified effects of vacuum fluctuations include spontaneous emission and the Lamb shift. Coulomb's law and the electric potential in vacuum near an electric charge are modified. Theoretically, in QCD multiple vacuum states can coexist. The starting and ending of cosmological inflation is thought to have arisen from transitions between different vacuum states. For theories obtained by quantization of a classical theory, each stationary point of the energy in the configuration space gives rise to a single vacuum. String theory is believed to have a huge number of vacua – the so-called string theory landscape. Outer space Outer space has very low density and pressure, and is the closest physical approximation of a perfect vacuum. But no vacuum is truly perfect, not even in interstellar space, where there are still a few hydrogen atoms per cubic meter. Stars, planets, and moons keep their atmospheres by gravitational attraction, and as such, atmospheres have no clearly delineated boundary: the density of atmospheric gas simply decreases with distance from the object.
The Earth's atmospheric pressure drops to a tiny fraction of its sea-level value at the Kármán line, at 100 km of altitude, which is a common definition of the boundary with outer space. Beyond this line, isotropic gas pressure rapidly becomes insignificant when compared to radiation pressure from the Sun and the dynamic pressure of the solar winds, so the definition of pressure becomes difficult to interpret. The thermosphere in this range has large gradients of pressure, temperature and composition, and varies greatly due to space weather. Astrophysicists prefer to use number density to describe these environments, in units of particles per cubic centimetre. But although it meets the definition of outer space, the atmospheric density within the first few hundred kilometers above the Kármán line is still sufficient to produce significant drag on satellites. Most artificial satellites operate in this region, called low Earth orbit, and must fire their engines every couple of weeks or a few times a year (depending on solar activity). The drag here is low enough that it could theoretically be overcome by radiation pressure on solar sails, a proposed propulsion system for interplanetary travel. All of the observable universe is filled with large numbers of photons, the so-called cosmic background radiation, and quite likely a correspondingly large number of neutrinos. The current temperature of this radiation is about 3 K. Measurement The quality of a vacuum is indicated by the amount of matter remaining in the system, so that a high quality vacuum is one with very little matter left in it. Vacuum is primarily measured by its absolute pressure, but a complete characterization requires further parameters, such as temperature and chemical composition. One of the most important parameters is the mean free path (MFP) of residual gases, which indicates the average distance that molecules will travel between collisions with each other. As the gas density decreases, the MFP increases, and when the MFP is longer than the chamber, pump, spacecraft, or other objects present, the continuum assumptions of fluid mechanics do not apply. This vacuum state is called high vacuum, and the study of fluid flows in this regime is called particle gas dynamics. The MFP of air at atmospheric pressure is very short, 70 nm, but at 100 mPa the MFP of room temperature air is roughly 100 mm, which is on the order of everyday objects such as vacuum tubes. The Crookes radiometer turns when the MFP is larger than the size of the vanes. Vacuum quality is subdivided into ranges according to the technology required to achieve it or measure it. These ranges were defined in ISO 3529-1:2019 as shown in the following table (100 Pa corresponds to 0.75 Torr; Torr is a non-SI unit): Atmospheric pressure is variable, but standardized values are commonly used as reference pressures. Deep space is generally much more empty than any artificial vacuum. It may or may not meet the definition of high vacuum above, depending on what region of space and astronomical bodies are being considered. For example, the MFP of interplanetary space is smaller than the size of the Solar System, but larger than small planets and moons. As a result, solar winds exhibit continuum flow on the scale of the Solar System, but must be considered a bombardment of particles with respect to the Earth and Moon. Perfect vacuum is an ideal state of no particles at all. It cannot be achieved in a laboratory, although there may be small volumes which, for a brief moment, happen to have no particles of matter in them.
Even if all particles of matter were removed, there would still be photons, as well as dark energy, virtual particles, and other aspects of the quantum vacuum. Relative versus absolute measurement Vacuum is measured in units of pressure, typically as a subtraction relative to ambient atmospheric pressure on Earth. But the amount of relative measurable vacuum varies with local conditions. On the surface of Venus, where ground-level atmospheric pressure is much higher than on Earth, much higher relative vacuum readings would be possible. On the surface of the Moon with almost no atmosphere, it would be extremely difficult to create a measurable vacuum relative to the local environment. Similarly, much higher than normal relative vacuum readings are possible deep in the Earth's ocean. A submarine maintaining an internal pressure of 1 atmosphere submerged to a depth of 10 atmospheres (98 metres; a 9.8-metre column of seawater has the equivalent weight of 1 atm) is effectively a vacuum chamber keeping out the crushing exterior water pressures, though the 1 atm inside the submarine would not normally be considered a vacuum. Therefore, to properly understand the following discussions of vacuum measurement, it is important that the reader assumes the relative measurements are being done on Earth at sea level, at exactly 1 atmosphere of ambient atmospheric pressure. Measurements relative to 1 atm The SI unit of pressure is the pascal (symbol Pa), but vacuum is often measured in torrs, named for the Italian physicist Evangelista Torricelli (1608–1647). A torr is equal to the displacement of a millimeter of mercury (mmHg) in a manometer, with 1 torr equaling 133.3223684 pascals above absolute zero pressure. Vacuum is often also measured on the barometric scale or as a percentage of atmospheric pressure in bars or atmospheres. Low vacuum is often measured in millimeters of mercury (mmHg) or pascals (Pa) below standard atmospheric pressure. "Below atmospheric" means that the absolute pressure is equal to the current atmospheric pressure minus the vacuum reading on the gauge. In other words, a low vacuum gauge that reads, for example, 50.79 Torr of vacuum at an ambient pressure of 760 Torr indicates an absolute pressure of 709.21 Torr. Many inexpensive low vacuum gauges have a margin of error and may report a vacuum of 0 Torr but in practice this generally requires a two-stage rotary vane or other medium type of vacuum pump to go much beyond (lower than) 1 torr. Measuring instruments Many devices are used to measure the pressure in a vacuum, depending on what range of vacuum is needed. Hydrostatic gauges (such as the mercury column manometer) consist of a vertical column of liquid in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight is in equilibrium with the pressure differential between the two ends of the tube. The simplest design is a closed-end U-shaped tube, one side of which is connected to the region of interest. Any fluid can be used, but mercury is preferred for its high density and low vapour pressure. Simple hydrostatic gauges can measure pressures ranging from 1 torr (about 133 Pa) to above atmospheric. An important variation is the McLeod gauge which isolates a known volume of vacuum and compresses it to multiply the height variation of the liquid column. The McLeod gauge can measure vacuums as high as 10−6 torr (0.1 mPa), which is the lowest direct measurement of pressure that is possible with current technology. Other vacuum gauges can measure lower pressures, but only indirectly by measurement of other pressure-controlled properties.
These indirect measurements must be calibrated via a direct measurement, most commonly a McLeod gauge. The kenotometer is a particular type of hydrostatic gauge, typically used in power plants using steam turbines. The kenotometer measures the vacuum in the steam space of the condenser, that is, the exhaust of the last stage of the turbine. Mechanical or elastic gauges depend on a Bourdon tube, diaphragm, or capsule, usually made of metal, which will change shape in response to the pressure of the region in question. A variation on this idea is the capacitance manometer, in which the diaphragm makes up a part of a capacitor. A change in pressure leads to the flexure of the diaphragm, which results in a change in capacitance. These gauges are effective from 10³ torr to 10−4 torr, and beyond. Thermal conductivity gauges rely on the fact that the ability of a gas to conduct heat decreases with pressure. In this type of gauge, a wire filament is heated by running current through it. A thermocouple or Resistance Temperature Detector (RTD) can then be used to measure the temperature of the filament. This temperature is dependent on the rate at which the filament loses heat to the surrounding gas, and therefore on the thermal conductivity. A common variant is the Pirani gauge which uses a single platinum filament as both the heated element and RTD. These gauges are accurate from 10 torr to 10−3 torr, but they are sensitive to the chemical composition of the gases being measured. Ionization gauges are used in ultrahigh vacuum. They come in two types: hot cathode and cold cathode. In the hot cathode version an electrically heated filament produces an electron beam. The electrons travel through the gauge and ionize gas molecules around them. The resulting ions are collected at a negative electrode. The current depends on the number of ions, which depends on the pressure in the gauge. Hot cathode gauges are accurate from 10−3 torr to 10−10 torr. The principle behind the cold cathode version is the same, except that electrons are produced by a high voltage electrical discharge. Cold cathode gauges are accurate from 10−2 torr to 10−9 torr. Ionization gauge calibration is very sensitive to construction geometry, chemical composition of gases being measured, corrosion and surface deposits. Their calibration can be invalidated by activation at atmospheric pressure or low vacuum. The composition of gases at high vacuums will usually be unpredictable, so a mass spectrometer must be used in conjunction with the ionization gauge for accurate measurement. Uses Vacuum is useful in a variety of processes and devices. Its first widespread use was in the incandescent light bulb to protect the filament from chemical degradation. The chemical inertness produced by a vacuum is also useful for electron beam welding, cold welding, vacuum packing and vacuum frying. Ultra-high vacuum is used in the study of atomically clean substrates, as only a very good vacuum preserves atomic-scale clean surfaces for a reasonably long time (on the order of minutes to days). High to ultra-high vacuum removes the obstruction of air, allowing particle beams to deposit or remove materials without contamination. This is the principle behind chemical vapor deposition, physical vapor deposition, and dry etching, which are essential to the fabrication of semiconductors and optical coatings, and to surface science. The reduction of convection provides the thermal insulation of thermos bottles.
Deep vacuum lowers the boiling point of liquids and promotes low temperature outgassing which is used in freeze drying, adhesive preparation, distillation, metallurgy, and process purging. The electrical properties of vacuum make electron microscopes and vacuum tubes possible, including cathode-ray tubes. Vacuum interrupters are used in electrical switchgear. Vacuum arc processes are industrially important for production of certain grades of steel or high purity materials. The elimination of air friction is useful for flywheel energy storage and ultracentrifuges. Vacuum-driven machines Vacuums are commonly used to produce suction, which has an even wider variety of applications. The Newcomen steam engine used vacuum instead of pressure to drive a piston. In the 19th century, vacuum was used for traction on Isambard Kingdom Brunel's experimental atmospheric railway. Vacuum brakes were once widely used on trains in the UK but, except on heritage railways, they have been replaced by air brakes. Manifold vacuum can be used to drive accessories on automobiles. The best known application is the vacuum servo, used to provide power assistance for the brakes. Obsolete applications include vacuum-driven windscreen wipers and Autovac fuel pumps. Some aircraft instruments (Attitude Indicator (AI) and the Heading Indicator (HI)) are typically vacuum-powered, as protection against loss of all (electrically powered) instruments, since early aircraft often did not have electrical systems, and since there are two readily available sources of vacuum on a moving aircraft, the engine and an external venturi. Vacuum induction melting uses electromagnetic induction within a vacuum. Maintaining a vacuum in the condenser is an important aspect of the efficient operation of steam turbines. A steam jet ejector or liquid ring vacuum pump is used for this purpose. The typical vacuum maintained in the condenser steam space at the exhaust of the turbine (also called condenser backpressure) is in the range 5 to 15 kPa (absolute), depending on the type of condenser and the ambient conditions. Outgassing Evaporation and sublimation into a vacuum is called outgassing. All materials, solid or liquid, have a small vapour pressure, and their outgassing becomes important when the vacuum pressure falls below this vapour pressure. Outgassing has the same effect as a leak and will limit the achievable vacuum. Outgassing products may condense on nearby colder surfaces, which can be troublesome if they obscure optical instruments or react with other materials. This is of great concern to space missions, where an obscured telescope or solar cell can ruin an expensive mission. The most prevalent outgassing product in vacuum systems is water absorbed by chamber materials. It can be reduced by desiccating or baking the chamber, and removing absorbent materials. Outgassed water can condense in the oil of rotary vane pumps and reduce their net speed drastically if gas ballasting is not used. High vacuum systems must be clean and free of organic matter to minimize outgassing. Ultra-high vacuum systems are usually baked, preferably under vacuum, to temporarily raise the vapour pressure of all outgassing materials and boil them off. Once the bulk of the outgassing materials are boiled off and evacuated, the system may be cooled to lower vapour pressures and minimize residual outgassing during actual operation. 
Some systems are cooled well below room temperature by liquid nitrogen to shut down residual outgassing and simultaneously cryopump the system. Pumping and ambient air pressure Fluids cannot generally be pulled, so a vacuum cannot be created by suction. Suction can spread and dilute a vacuum by letting a higher pressure push fluids into it, but the vacuum has to be created first before suction can occur. The easiest way to create an artificial vacuum is to expand the volume of a container. For example, the diaphragm muscle expands the chest cavity, which causes the volume of the lungs to increase. This expansion reduces the pressure and creates a partial vacuum, which is soon filled by air pushed in by atmospheric pressure. To continue evacuating a chamber indefinitely without requiring infinite growth, a compartment of the vacuum can be repeatedly closed off, exhausted, and expanded again. This is the principle behind positive displacement pumps, like the manual water pump for example. Inside the pump, a mechanism expands a small sealed cavity to create a vacuum. Because of the pressure differential, some fluid from the chamber (or the well, in our example) is pushed into the pump's small cavity. The pump's cavity is then sealed from the chamber, opened to the atmosphere, and squeezed back to a minute size. The above explanation is merely a simple introduction to vacuum pumping, and is not representative of the entire range of pumps in use. Many variations of the positive displacement pump have been developed, and many other pump designs rely on fundamentally different principles. Momentum transfer pumps, which bear some similarities to dynamic pumps used at higher pressures, can achieve much higher quality vacuums than positive displacement pumps. Entrapment pumps can capture gases in a solid or absorbed state, often with no moving parts, no seals and no vibration. None of these pumps are universal; each type has important performance limitations. They all share a difficulty in pumping low molecular weight gases, especially hydrogen, helium, and neon. The lowest pressure that can be attained in a system is also dependent on many things other than the nature of the pumps. Multiple pumps may be connected in series, called stages, to achieve higher vacuums. The choice of seals, chamber geometry, materials, and pump-down procedures will all have an impact. Collectively, these are called vacuum technique. And sometimes, the final pressure is not the only relevant characteristic. Pumping systems differ in oil contamination, vibration, preferential pumping of certain gases, pump-down speeds, intermittent duty cycle, reliability, or tolerance to high leakage rates. In ultra high vacuum systems, some very "odd" leakage paths and outgassing sources must be considered. The water absorption of aluminium and palladium becomes an unacceptable source of outgassing, and even the adsorptivity of hard metals such as stainless steel or titanium must be considered. Some oils and greases will boil off in extreme vacuums. The permeability of the metallic chamber walls may have to be considered, and the grain direction of the metallic flanges should be parallel to the flange face. The lowest pressures currently achievable in laboratory are about . However, pressures as low as have been indirectly measured in a cryogenic vacuum system. This corresponds to ≈100 particles/cm3. 
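The figure of ≈100 particles/cm3 quoted above is just the ideal gas law read as a number density, n = P / (k_B T). A quick sketch of that conversion (temperatures chosen for illustration):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def particles_per_cm3(pressure_pa: float, temp_k: float = 295.0) -> float:
    """Ideal-gas number density n = P / (k_B T), expressed per cm^3."""
    return pressure_pa / (K_B * temp_k) / 1e6

def pressure_for_density(n_per_cm3: float, temp_k: float = 295.0) -> float:
    """Inverse relation: P = n * k_B * T."""
    return n_per_cm3 * 1e6 * K_B * temp_k

print(f"1 atm at room temperature      : {particles_per_cm3(101_325.0):.2e} particles/cm^3")
print(f"100 particles/cm^3 at 295 K    : {pressure_for_density(100.0):.1e} Pa")
print(f"100 particles/cm^3 at 4 K      : {pressure_for_density(100.0, temp_k=4.0):.1e} Pa (cryogenic system)")
```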
Effects on humans and animals Humans and animals exposed to vacuum will lose consciousness after a few seconds and die of hypoxia within minutes, but the symptoms are not nearly as graphic as commonly depicted in media and popular culture. The reduction in pressure lowers the temperature at which blood and other body fluids boil, but the elastic pressure of blood vessels ensures that this boiling point remains above the internal body temperature of 37 °C. Although the blood will not boil, the formation of gas bubbles in bodily fluids at reduced pressures, known as ebullism, is still a concern. The gas may bloat the body to twice its normal size and slow circulation, but tissues are elastic and porous enough to prevent rupture. Swelling and ebullism can be restrained by containment in a flight suit. Shuttle astronauts wore a fitted elastic garment called the Crew Altitude Protection Suit (CAPS) which prevents ebullism at pressures as low as 2 kPa (15 Torr). Rapid boiling will cool the skin and create frost, particularly in the mouth, but this is not a significant hazard. Animal experiments show that rapid and complete recovery is normal for exposures shorter than 90 seconds, while longer full-body exposures are fatal and resuscitation has never been successful. A study by NASA on eight chimpanzees found all of them survived two-and-a-half-minute exposures to vacuum. There is only a limited amount of data available from human accidents, but it is consistent with animal data. Limbs may be exposed for much longer if breathing is not impaired. Robert Boyle was the first to show in 1660 that vacuum is lethal to small animals. An experiment indicates that plants are able to survive in a low pressure environment (1.5 kPa) for about 30 minutes. Cold or oxygen-rich atmospheres can sustain life at pressures much lower than atmospheric, as long as the density of oxygen is similar to that of standard sea-level atmosphere. The colder air temperatures found at altitudes of up to 3 km generally compensate for the lower pressures there. Above this altitude, oxygen enrichment is necessary to prevent altitude sickness in humans who have not undergone prior acclimatization, and spacesuits are necessary to prevent ebullism above 19 km. Most spacesuits use only 20 kPa (150 Torr) of pure oxygen. This pressure is high enough to prevent ebullism, but decompression sickness and gas embolisms can still occur if decompression rates are not managed. Rapid decompression can be much more dangerous than vacuum exposure itself. Even if the victim does not hold his or her breath, venting through the windpipe may be too slow to prevent the fatal rupture of the delicate alveoli of the lungs. Eardrums and sinuses may be ruptured by rapid decompression, soft tissues may bruise and seep blood, and the stress of shock will accelerate oxygen consumption leading to hypoxia. Injuries caused by rapid decompression are called barotrauma. A pressure drop of 13 kPa (100 Torr), which produces no symptoms if it is gradual, may be fatal if it occurs suddenly. Some extremophile microorganisms, such as tardigrades, can survive vacuum conditions for periods of days or weeks.
Examples See also Decay of the vacuum (Pair production) Engine vacuum False vacuum Helium mass spectrometer – technical instrumentation to detect a vacuum leak Vacuum brazing Pneumatic tube – transport system using vacuum or pressure to move containers in tubes Rarefaction – reduction of a medium's density Suction – creation of a partial vacuum Theta vacuum – vacuum state of semi-classical pure-Yang Mills theories Vactrain Vacuum cementing – natural process of solidifying homogeneous "dust" in vacuum Vacuum column – controlling loose magnetic tape in early computer data recording tape drives Vacuum deposition – process of depositing atoms and molecules in a sub-atmospheric pressure environment Vacuum engineering Vacuum flange – joining of vacuum systems References External links Leybold – Fundamentals of Vacuum Technology (PDF) VIDEO on the nature of vacuum by Canadian astrophysicist Doctor P The Foundations of Vacuum Coating Technology American Vacuum Society Journal of Vacuum Science and Technology A Journal of Vacuum Science and Technology B FAQ on explosive decompression and vacuum exposure. Discussion of the effects on humans of exposure to hard vacuum. Vacuum, Production of Space "Much Ado About Nothing" by Professor John D. Barrow, Gresham College Free pdf copy of The Structured Vacuum – thinking about nothing by Johann Rafelski and Berndt Muller (1985) . Physical phenomena Industrial processes Gases Articles containing video clips Latin words and phrases
Vacuum
[ "Physics", "Chemistry" ]
7,205
[ "Physical phenomena", "Matter", "Phases of matter", "Vacuum", "Statistical mechanics", "Gases" ]
32,640
https://en.wikipedia.org/wiki/Vector%20calculus
Vector calculus or vector analysis is a branch of mathematics concerned with the differentiation and integration of vector fields, primarily in three-dimensional Euclidean space, The term vector calculus is sometimes used as a synonym for the broader subject of multivariable calculus, which spans vector calculus as well as partial differentiation and multiple integration. Vector calculus plays an important role in differential geometry and in the study of partial differential equations. It is used extensively in physics and engineering, especially in the description of electromagnetic fields, gravitational fields, and fluid flow. Vector calculus was developed from the theory of quaternions by J. Willard Gibbs and Oliver Heaviside near the end of the 19th century, and most of the notation and terminology was established by Gibbs and Edwin Bidwell Wilson in their 1901 book, Vector Analysis. In its standard form using the cross product, vector calculus does not generalize to higher dimensions, but the alternative approach of geometric algebra, which uses the exterior product, does (see below for more). Basic objects Scalar fields A scalar field associates a scalar value to every point in a space. The scalar is a mathematical number representing a physical quantity. Examples of scalar fields in applications include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields (known as scalar bosons), such as the Higgs field. These fields are the subject of scalar field theory. Vector fields A vector field is an assignment of a vector to each point in a space. A vector field in the plane, for instance, can be visualized as a collection of arrows with a given magnitude and direction each attached to a point in the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from point to point. This can be used, for example, to calculate work done over a line. Vectors and pseudovectors In more advanced treatments, one further distinguishes pseudovector fields and pseudoscalar fields, which are identical to vector fields and scalar fields, except that they change sign under an orientation-reversing map: for example, the curl of a vector field is a pseudovector field, and if one reflects a vector field, the curl points in the opposite direction. This distinction is clarified and elaborated in geometric algebra, as described below. Vector algebra The algebraic (non-differential) operations in vector calculus are referred to as vector algebra, being defined for a vector space and then applied pointwise to a vector field. The basic algebraic operations consist of: Also commonly used are the two triple products: Operators and theorems Differential operators Vector calculus studies various differential operators defined on scalar or vector fields, which are typically expressed in terms of the del operator (), also known as "nabla". The three basic vector operators are: Also commonly used are the two Laplace operators: A quantity called the Jacobian matrix is useful for studying functions when both the domain and range of the function are multivariable, such as a change of variables during integration. 
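The operator table from the original layout did not survive the plain-text extraction; as a reference sketch (standard Cartesian conventions, not a reproduction of the original table), the gradient, divergence, curl and scalar Laplacian are:

```latex
\nabla f = \frac{\partial f}{\partial x}\,\mathbf{i}
         + \frac{\partial f}{\partial y}\,\mathbf{j}
         + \frac{\partial f}{\partial z}\,\mathbf{k},
\qquad
\nabla \cdot \mathbf{F} = \frac{\partial F_x}{\partial x}
                        + \frac{\partial F_y}{\partial y}
                        + \frac{\partial F_z}{\partial z},
\qquad
\nabla \times \mathbf{F} =
  \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right)\mathbf{i}
+ \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right)\mathbf{j}
+ \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)\mathbf{k},
\qquad
\Delta f = \nabla \cdot (\nabla f)
         = \frac{\partial^2 f}{\partial x^2}
         + \frac{\partial^2 f}{\partial y^2}
         + \frac{\partial^2 f}{\partial z^2}.
```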
Integral theorems The three basic vector operators have corresponding theorems which generalize the fundamental theorem of calculus to higher dimensions: In two dimensions, the divergence and curl theorems reduce to the Green's theorem: Applications Linear approximations Linear approximations are used to replace complicated functions with linear functions that are almost the same. Given a differentiable function with real values, one can approximate for close to by the formula The right-hand side is the equation of the plane tangent to the graph of at Optimization For a continuously differentiable function of several real variables, a point (that is, a set of values for the input variables, which is viewed as a point in ) is critical if all of the partial derivatives of the function are zero at , or, equivalently, if its gradient is zero. The critical values are the values of the function at the critical points. If the function is smooth, or, at least twice continuously differentiable, a critical point may be either a local maximum, a local minimum or a saddle point. The different cases may be distinguished by considering the eigenvalues of the Hessian matrix of second derivatives. By Fermat's theorem, all local maxima and minima of a differentiable function occur at critical points. Therefore, to find the local maxima and minima, it suffices, theoretically, to compute the zeros of the gradient and the eigenvalues of the Hessian matrix at these zeros. Generalizations Vector calculus can also be generalized to other 3-manifolds and higher-dimensional spaces. Different 3-manifolds Vector calculus is initially defined for Euclidean 3-space, which has additional structure beyond simply being a 3-dimensional real vector space, namely: a norm (giving a notion of length) defined via an inner product (the dot product), which in turn gives a notion of angle, and an orientation, which gives a notion of left-handed and right-handed. These structures give rise to a volume form, and also the cross product, which is used pervasively in vector calculus. The gradient and divergence require only the inner product, while the curl and the cross product also requires the handedness of the coordinate system to be taken into account (see for more detail). Vector calculus can be defined on other 3-dimensional real vector spaces if they have an inner product (or more generally a symmetric nondegenerate form) and an orientation; this is less data than an isomorphism to Euclidean space, as it does not require a set of coordinates (a frame of reference), which reflects the fact that vector calculus is invariant under rotations (the special orthogonal group ). More generally, vector calculus can be defined on any 3-dimensional oriented Riemannian manifold, or more generally pseudo-Riemannian manifold. This structure simply means that the tangent space at each point has an inner product (more generally, a symmetric nondegenerate form) and an orientation, or more globally that there is a symmetric nondegenerate metric tensor and an orientation, and works because vector calculus is defined in terms of tangent vectors at each point. Other dimensions Most of the analytic results are easily understood, in a more general form, using the machinery of differential geometry, of which vector calculus forms a subset. Grad and div generalize immediately to other dimensions, as do the gradient theorem, divergence theorem, and Laplacian (yielding harmonic analysis), while curl and cross product do not generalize as directly. 
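Returning to the optimization recipe described above (compute the zeros of the gradient, then classify each critical point by the eigenvalues of the Hessian), the SymPy sketch below applies it to a two-variable function chosen purely for illustration.

```python
# Sketch: locate and classify the critical points of an arbitrary example function.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - 3*x + y**2                                  # example function (assumed for the demo)

grad = [sp.diff(f, v) for v in (x, y)]
critical_points = sp.solve(grad, (x, y), dict=True)    # zeros of the gradient
H = sp.hessian(f, (x, y))                              # matrix of second derivatives

for pt in critical_points:
    eigs = list(H.subs(pt).eigenvals())
    if all(ev > 0 for ev in eigs):
        kind = "local minimum"
    elif all(ev < 0 for ev in eigs):
        kind = "local maximum"
    else:
        kind = "saddle point (or degenerate)"
    print(pt, kind)

# Expected output: {x: 1, y: 0} is a local minimum and {x: -1, y: 0} is a saddle point.
```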
From a general point of view, the various fields in (3-dimensional) vector calculus are uniformly seen as being -vector fields: scalar fields are 0-vector fields, vector fields are 1-vector fields, pseudovector fields are 2-vector fields, and pseudoscalar fields are 3-vector fields. In higher dimensions there are additional types of fields (scalar, vector, pseudovector or pseudoscalar corresponding to , , or dimensions, which is exhaustive in dimension 3), so one cannot only work with (pseudo)scalars and (pseudo)vectors. In any dimension, assuming a nondegenerate form, grad of a scalar function is a vector field, and div of a vector field is a scalar function, but only in dimension 3 or 7 (and, trivially, in dimension 0 or 1) is the curl of a vector field a vector field, and only in 3 or 7 dimensions can a cross product be defined (generalizations in other dimensionalities either require vectors to yield 1 vector, or are alternative Lie algebras, which are more general antisymmetric bilinear products). The generalization of grad and div, and how curl may be generalized is elaborated at Curl § Generalizations; in brief, the curl of a vector field is a bivector field, which may be interpreted as the special orthogonal Lie algebra of infinitesimal rotations; however, this cannot be identified with a vector field because the dimensions differ – there are 3 dimensions of rotations in 3 dimensions, but 6 dimensions of rotations in 4 dimensions (and more generally dimensions of rotations in dimensions). There are two important alternative generalizations of vector calculus. The first, geometric algebra, uses -vector fields instead of vector fields (in 3 or fewer dimensions, every -vector field can be identified with a scalar function or vector field, but this is not true in higher dimensions). This replaces the cross product, which is specific to 3 dimensions, taking in two vector fields and giving as output a vector field, with the exterior product, which exists in all dimensions and takes in two vector fields, giving as output a bivector (2-vector) field. This product yields Clifford algebras as the algebraic structure on vector spaces (with an orientation and nondegenerate form). Geometric algebra is mostly used in generalizations of physics and other applied fields to higher dimensions. The second generalization uses differential forms (-covector fields) instead of vector fields or -vector fields, and is widely used in mathematics, particularly in differential geometry, geometric topology, and harmonic analysis, in particular yielding Hodge theory on oriented pseudo-Riemannian manifolds. From this point of view, grad, curl, and div correspond to the exterior derivative of 0-forms, 1-forms, and 2-forms, respectively, and the key theorems of vector calculus are all special cases of the general form of Stokes' theorem. From the point of view of both of these generalizations, vector calculus implicitly identifies mathematically distinct objects, which makes the presentation simpler but the underlying mathematical structure and generalizations less clear. From the point of view of geometric algebra, vector calculus implicitly identifies -vector fields with vector fields or scalar functions: 0-vectors and 3-vectors with scalars, 1-vectors and 2-vectors with vectors. From the point of view of differential forms, vector calculus implicitly identifies -forms with scalar fields or vector fields: 0-forms and 3-forms with scalar fields, 1-forms and 2-forms with vector fields. 
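The dimension-counting point above can be made concrete numerically. In the sketch below the "generalized curl" of a vector field is taken to be the antisymmetrized Jacobian (one common convention; signs and normalization vary), an n × n antisymmetric matrix with n(n − 1)/2 independent entries, which matches the n components of a vector field only when n = 3. The example field and evaluation point are arbitrary assumptions.

```python
# Sketch: the antisymmetric part of the Jacobian as a bivector-valued "curl".
import numpy as np

def jacobian(F, x, h=1e-6):
    """Central-difference Jacobian J[i, j] = dF_i/dx_j at the point x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = h
        J[:, j] = (F(x + dx) - F(x - dx)) / (2 * h)
    return J

def generalized_curl(F, x):
    """Antisymmetrized derivative, with components dF_i/dx_j - dF_j/dx_i."""
    J = jacobian(F, x)
    return J - J.T

# An arbitrary 4-dimensional example: the result is a 4 x 4 antisymmetric matrix
# with 6 independent entries, so it cannot be repackaged as a 4-component vector.
F4 = lambda x: np.array([x[1] * x[2], -x[0]**2, x[3], x[0] * x[1]])
C = generalized_curl(F4, np.array([1.0, 2.0, 0.5, -1.0]))
print(C.shape, np.allclose(C, -C.T))        # (4, 4) True

for n in (2, 3, 4, 7):
    print(f"n = {n}: vector components = {n}, rotation (bivector) components = {n*(n-1)//2}")
```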
Thus for example the curl naturally takes as input a vector field or 1-form, but naturally has as output a 2-vector field or 2-form (hence pseudovector field), which is then interpreted as a vector field, rather than directly taking a vector field to a vector field; this is reflected in the curl of a vector field in higher dimensions not having as output a vector field. See also Vector calculus identities Vector algebra relations Directional derivative Conservative vector field Solenoidal vector field Laplacian vector field Helmholtz decomposition Tensor Geometric calculus References Citations Sources Sandro Caparrini (2002) "The discovery of the vector representation of moments and angular velocity", Archive for History of Exact Sciences 56:151–81. Barry Spain (1965) Vector Analysis, 2nd edition, link from Internet Archive. Chen-To Tai (1995). A historical study of vector analysis. Technical Report RL 915, Radiation Laboratory, University of Michigan. External links The Feynman Lectures on Physics Vol. II Ch. 2: Differential Calculus of Vector Fields A survey of the improper use of ∇ in vector analysis (1994) Tai, Chen-To Vector Analysis: A Text-book for the Use of Students of Mathematics and Physics, (based upon the lectures of Willard Gibbs) by Edwin Bidwell Wilson, published 1902. Mathematical physics
Vector calculus
[ "Physics", "Mathematics" ]
2,365
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
32,664
https://en.wikipedia.org/wiki/Virial%20theorem
In mechanics, the virial theorem provides a general equation that relates the average over time of the total kinetic energy of a stable system of discrete particles, bound by a conservative force (where the work done is independent of path), with that of the total potential energy of the system. Mathematically, the theorem states that where is the total kinetic energy of the particles, represents the force on the th particle, which is located at position , and angle brackets represent the average over time of the enclosed quantity. The word virial for the right-hand side of the equation derives from , the Latin word for "force" or "energy", and was given its technical definition by Rudolf Clausius in 1870. The significance of the virial theorem is that it allows the average total kinetic energy to be calculated even for very complicated systems that defy an exact solution, such as those considered in statistical mechanics; this average total kinetic energy is related to the temperature of the system by the equipartition theorem. However, the virial theorem does not depend on the notion of temperature and holds even for systems that are not in thermal equilibrium. The virial theorem has been generalized in various ways, most notably to a tensor form. If the force between any two particles of the system results from a potential energy that is proportional to some power of the interparticle distance , the virial theorem takes the simple form Thus, twice the average total kinetic energy equals times the average total potential energy . Whereas represents the potential energy between two particles of distance , represents the total potential energy of the system, i.e., the sum of the potential energy over all pairs of particles in the system. A common example of such a system is a star held together by its own gravity, where History In 1870, Rudolf Clausius delivered the lecture "On a Mechanical Theorem Applicable to Heat" to the Association for Natural and Medical Sciences of the Lower Rhine, following a 20-year study of thermodynamics. The lecture stated that the mean vis viva of the system is equal to its virial, or that the average kinetic energy is one half of the average potential energy. The virial theorem can be obtained directly from Lagrange's identity as applied in classical gravitational dynamics, the original form of which was included in Lagrange's "Essay on the Problem of Three Bodies" published in 1772. Carl Jacobi's generalization of the identity to N bodies and to the present form of Laplace's identity closely resembles the classical virial theorem. However, the interpretations leading to the development of the equations were very different, since at the time of development, statistical dynamics had not yet unified the separate studies of thermodynamics and classical dynamics. The theorem was later utilized, popularized, generalized and further developed by James Clerk Maxwell, Lord Rayleigh, Henri Poincaré, Subrahmanyan Chandrasekhar, Enrico Fermi, Paul Ledoux, Richard Bader and Eugene Parker. Fritz Zwicky was the first to use the virial theorem to deduce the existence of unseen matter, which is now called dark matter. Richard Bader showed that the charge distribution of a total system can be partitioned into its kinetic and potential energies that obey the virial theorem. As another example of its many applications, the virial theorem has been used to derive the Chandrasekhar limit for the stability of white dwarf stars. 
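For reference, the relations described in this section can be written in their standard textbook form; the LaTeX below is a reconstruction in that standard notation rather than the article's own typesetting.

```latex
% General statement: time-averaged kinetic energy versus the virial of the forces
\langle T \rangle \;=\; -\frac{1}{2} \sum_{k=1}^{N} \bigl\langle \mathbf{F}_k \cdot \mathbf{r}_k \bigr\rangle,
\qquad
T \;=\; \frac{1}{2} \sum_{k=1}^{N} m_k v_k^{2}.

% Pair potential proportional to a power of the separation, V(r) = \alpha r^{n}:
2\,\langle T \rangle \;=\; n\,\langle V_\mathrm{TOT} \rangle.

% Self-gravitating system (n = -1), e.g. a star bound by its own gravity:
2\,\langle T \rangle \;=\; -\,\langle V_\mathrm{TOT} \rangle.
```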
Illustrative special case Consider particles with equal mass , acted upon by mutually attractive forces. Suppose the particles are at diametrically opposite points of a circular orbit with radius . The velocities are and , which are normal to forces and . The respective magnitudes are fixed at and . The average kinetic energy of the system in an interval of time from to is Taking center of mass as the origin, the particles have positions and with fixed magnitude . The attractive forces act in opposite directions as positions, so . Applying the centripetal force formula results in as required. Note: If the origin is displaced, then we'd obtain the same result. This is because the dot product of the displacement with equal and opposite forces , results in net cancellation. Statement and derivation Although the virial theorem depends on averaging the total kinetic and potential energies, the presentation here postpones the averaging to the last step. For a collection of point particles, the scalar moment of inertia about the origin is where and represent the mass and position of the th particle. is the position vector magnitude. Consider the scalar where is the momentum vector of the th particle. Assuming that the masses are constant, is one-half the time derivative of this moment of inertia: In turn, the time derivative of is where is the mass of the th particle, is the net force on that particle, and is the total kinetic energy of the system according to the velocity of each particle, Connection with the potential energy between particles The total force on particle is the sum of all the forces from the other particles in the system: where is the force applied by particle on particle . Hence, the virial can be written as Since no particle acts on itself (i.e., for ), we split the sum in terms below and above this diagonal and add them together in pairs: where we have used Newton's third law of motion, i.e., (equal and opposite reaction). It often happens that the forces can be derived from a potential energy that is a function only of the distance between the point particles and . Since the force is the negative gradient of the potential energy, we have in this case which is equal and opposite to , the force applied by particle on particle , as may be confirmed by explicit calculation. Hence, Thus Special case of power-law forces In a common special case, the potential energy between two particles is proportional to a power of their distance : where the coefficient and the exponent are constants. In such cases, the virial is where is the total potential energy of the system. Thus For gravitating systems the exponent equals −1, giving Lagrange's identity which was derived by Joseph-Louis Lagrange and extended by Carl Jacobi. Time averaging The average of this derivative over a duration is defined as from which we obtain the exact equation The virial theorem states that if , then There are many reasons why the average of the time derivative might vanish. One often-cited reason applies to stably bound systems, that is, to systems that hang together forever and whose parameters are finite. In this case, velocities and coordinates of the particles of the system have upper and lower limits, so that is bounded between two extremes, and , and the average goes to zero in the limit of infinite : Even if the average of the time derivative of is only approximately zero, the virial theorem holds to the same degree of approximation. 
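The time-averaging statement above lends itself to a direct numerical check. The sketch below integrates two equal gravitating point masses on a bound orbit (units, masses and initial conditions are arbitrary assumptions) and compares 2⟨T⟩ with −⟨V⟩; it illustrates, rather than proves, the gravitational (n = −1) case.

```python
# Sketch: leapfrog integration of a two-body orbit and a check that 2<T> ~ -<V>.
import numpy as np

G, m = 1.0, 1.0
r1 = np.array([1.0, 0.0]); r2 = -r1             # start at diametrically opposite points
v1 = np.array([0.0, 0.4]); v2 = -v1             # bound, non-circular (elliptical) orbit

def accel(r1, r2):
    d = r2 - r1
    a1 = G * m * d / np.linalg.norm(d)**3       # acceleration of body 1 toward body 2
    return a1, -a1

dt, steps = 1e-3, 200_000                       # roughly 25 orbital periods for these values
T_sum = V_sum = 0.0
a1, a2 = accel(r1, r2)
for _ in range(steps):                          # kick-drift-kick leapfrog
    v1 += 0.5 * dt * a1; v2 += 0.5 * dt * a2
    r1 += dt * v1;       r2 += dt * v2
    a1, a2 = accel(r1, r2)
    v1 += 0.5 * dt * a1; v2 += 0.5 * dt * a2
    T_sum += 0.5 * m * (v1 @ v1 + v2 @ v2)      # total kinetic energy at this step
    V_sum += -G * m * m / np.linalg.norm(r2 - r1)

T_avg, V_avg = T_sum / steps, V_sum / steps
print(2 * T_avg, -V_avg)   # should agree to within a few percent; closer for longer runs
```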
For power-law forces with an exponent , the general equation holds: For gravitational attraction, , and the average kinetic energy equals half of the average negative potential energy: This general result is useful for complex gravitating systems such as planetary systems or galaxies. A simple application of the virial theorem concerns galaxy clusters. If a region of space is unusually full of galaxies, it is safe to assume that they have been together for a long time, and the virial theorem can be applied. Doppler effect measurements give lower bounds for their relative velocities, and the virial theorem gives a lower bound for the total mass of the cluster, including any dark matter. If the ergodic hypothesis holds for the system under consideration, the averaging need not be taken over time; an ensemble average can also be taken, with equivalent results. In quantum mechanics Although originally derived for classical mechanics, the virial theorem also holds for quantum mechanics, as first shown by Fock using the Ehrenfest theorem. Evaluate the commutator of the Hamiltonian with the position operator and the momentum operator of particle , Summing over all particles, one finds that for the commutator is where is the kinetic energy. The left-hand side of this equation is just , according to the Heisenberg equation of motion. The expectation value of this time derivative vanishes in a stationary state, leading to the quantum virial theorem: Pokhozhaev's identity In the field of quantum mechanics, there exists another form of the virial theorem, applicable to localized solutions to the stationary nonlinear Schrödinger equation or Klein–Gordon equation, is Pokhozhaev's identity, also known as Derrick's theorem. Let be continuous and real-valued, with . Denote . Let be a solution to the equation in the sense of distributions. Then satisfies the relation In special relativity For a single particle in special relativity, it is not the case that . Instead, it is true that , where is the Lorentz factor and . We have The last expression can be simplified to Thus, under the conditions described in earlier sections (including Newton's third law of motion, , despite relativity), the time average for particles with a power law potential is In particular, the ratio of kinetic energy to potential energy is no longer fixed, but necessarily falls into an interval: where the more relativistic systems exhibit the larger ratios. Examples The virial theorem has a particularly simple form for periodic motion. It can be used to perform perturbative calculation for nonlinear oscillators. It can also be used to study motion in a central potential. If the central potential is of the form , the virial theorem simplifies to . In particular, for gravitational or electrostatic (Coulomb) attraction, . Driven damped harmonic oscillator Analysis based on Sivardiere, 1986. For a one-dimensional oscillator with mass , position , driving force , spring constant , and damping coefficient , the equation of motion is When the oscillator has reached a steady state, it performs a stable oscillation , where is the amplitude, and is the phase angle. Applying the virial theorem, we have , which simplifies to , where is the natural frequency of the oscillator. To solve the two unknowns, we need another equation. In steady state, the power lost per cycle is equal to the power gained per cycle: which simplifies to . 
Now we have two equations that yield the solution Ideal-gas law Consider a container filled with an ideal gas consisting of point masses. The force applied to the point masses is the negative of the forces applied to the wall of the container, which is of the form , where is the unit normal vector pointing outwards. Then the virial theorem states that By the divergence theorem, . And since the average total kinetic energy , we have . Dark matter In 1933, Fritz Zwicky applied the virial theorem to estimate the mass of Coma Cluster, and discovered a discrepancy of mass of about 450, which he explained as due to "dark matter". He refined the analysis in 1937, finding a discrepancy of about 500. Theoretical analysis He approximated the Coma cluster as a spherical "gas" of stars of roughly equal mass , which gives . The total gravitational potential energy of the cluster is , giving . Assuming the motion of the stars are all the same over a long enough time (ergodicity), . Zwicky estimated as the gravitational potential of a uniform ball of constant density, giving . So by the virial theorem, the total mass of the cluster is Data Zwicky estimated that there are galaxies in the cluster, each having observed stellar mass (suggested by Hubble), and the cluster has radius . He also measured the radial velocities of the galaxies by doppler shifts in galactic spectra to be . Assuming equipartition of kinetic energy, . By the virial theorem, the total mass of the cluster should be . However, the observed mass is , meaning the total mass is 450 times that of observed mass. Generalizations Lord Rayleigh published a generalization of the virial theorem in 1900, which was partially reprinted in 1903. Henri Poincaré proved and applied a form of the virial theorem in 1911 to the problem of formation of the Solar System from a proto-stellar cloud (then known as cosmogony). A variational form of the virial theorem was developed in 1945 by Ledoux. A tensor form of the virial theorem was developed by Parker, Chandrasekhar and Fermi. The following generalization of the virial theorem has been established by Pollard in 1964 for the case of the inverse square law: A boundary term otherwise must be added. Inclusion of electromagnetic fields The virial theorem can be extended to include electric and magnetic fields. The result is where is the moment of inertia, is the momentum density of the electromagnetic field, is the kinetic energy of the "fluid", is the random "thermal" energy of the particles, and are the electric and magnetic energy content of the volume considered. Finally, is the fluid-pressure tensor expressed in the local moving coordinate system and is the electromagnetic stress tensor, A plasmoid is a finite configuration of magnetic fields and plasma. With the virial theorem it is easy to see that any such configuration will expand if not contained by external forces. In a finite configuration without pressure-bearing walls or magnetic coils, the surface integral will vanish. Since all the other terms on the right hand side are positive, the acceleration of the moment of inertia will also be positive. It is also easy to estimate the expansion time . If a total mass is confined within a radius , then the moment of inertia is roughly , and the left hand side of the virial theorem is . The terms on the right hand side add up to about , where is the larger of the plasma pressure or the magnetic pressure. 
Equating these two terms and solving for , we find where is the speed of the ion acoustic wave (or the Alfvén wave, if the magnetic pressure is higher than the plasma pressure). Thus the lifetime of a plasmoid is expected to be on the order of the acoustic (or Alfvén) transit time. Relativistic uniform system In case when in the physical system the pressure field, the electromagnetic and gravitational fields are taken into account, as well as the field of particles’ acceleration, the virial theorem is written in the relativistic form as follows: where the value exceeds the kinetic energy of the particles by a factor equal to the Lorentz factor of the particles at the center of the system. Under normal conditions we can assume that , then we can see that in the virial theorem the kinetic energy is related to the potential energy not by the coefficient , but rather by the coefficient close to 0.6. The difference from the classical case arises due to considering the pressure field and the field of particles’ acceleration inside the system, while the derivative of the scalar is not equal to zero and should be considered as the material derivative. An analysis of the integral theorem of generalized virial makes it possible to find, on the basis of field theory, a formula for the root-mean-square speed of typical particles of a system without using the notion of temperature: where is the speed of light, is the acceleration field constant, is the mass density of particles, is the current radius. Unlike the virial theorem for particles, for the electromagnetic field the virial theorem is written as follows: where the energy considered as the kinetic field energy associated with four-current , and sets the potential field energy found through the components of the electromagnetic tensor. In astrophysics The virial theorem is frequently applied in astrophysics, especially relating the gravitational potential energy of a system to its kinetic or thermal energy. Some common virial relations are for a mass , radius , velocity , and temperature . The constants are Newton's constant , the Boltzmann constant , and proton mass . Note that these relations are only approximate, and often the leading numerical factors (e.g. or ) are neglected entirely. Galaxies and cosmology (virial mass and radius) In astronomy, the mass and size of a galaxy (or general overdensity) is often defined in terms of the "virial mass" and "virial radius" respectively. Because galaxies and overdensities in continuous fluids can be highly extended (even to infinity in some models, such as an isothermal sphere), it can be hard to define specific, finite measures of their mass and size. The virial theorem, and related concepts, provide an often convenient means by which to quantify these properties. In galaxy dynamics, the mass of a galaxy is often inferred by measuring the rotation velocity of its gas and stars, assuming circular Keplerian orbits. Using the virial theorem, the velocity dispersion can be used in a similar way. Taking the kinetic energy (per particle) of the system as , and the potential energy (per particle) as we can write Here is the radius at which the velocity dispersion is being measured, and is the mass within that radius. The virial mass and radius are generally defined for the radius at which the velocity dispersion is a maximum, i.e. 
As numerous approximations have been made, in addition to the approximate nature of these definitions, order-unity proportionality constants are often omitted (as in the above equations). These relations are thus only accurate in an order of magnitude sense, or when used self-consistently. An alternate definition of the virial mass and radius is often used in cosmology where it is used to refer to the radius of a sphere, centered on a galaxy or a galaxy cluster, within which virial equilibrium holds. Since this radius is difficult to determine observationally, it is often approximated as the radius within which the average density is greater, by a specified factor, than the critical density where is the Hubble parameter and is the gravitational constant. A common choice for the factor is 200, which corresponds roughly to the typical over-density in spherical top-hat collapse (see Virial mass), in which case the virial radius is approximated as The virial mass is then defined relative to this radius as Stars The virial theorem is applicable to the cores of stars, by establishing a relation between gravitational potential energy and thermal kinetic energy (i.e. temperature). As stars on the main sequence convert hydrogen into helium in their cores, the mean molecular weight of the core increases and it must contract to maintain enough pressure to support its own weight. This contraction decreases its potential energy and, the virial theorem states, increases its thermal energy. The core temperature increases even as energy is lost, effectively a negative specific heat. This continues beyond the main sequence, unless the core becomes degenerate since that causes the pressure to become independent of temperature and the virial relation with equals −1 no longer holds. See also Virial coefficient Virial stress Virial mass Chandrasekhar tensor Chandrasekhar virial equations Derrick's theorem Equipartition theorem Ehrenfest theorem Pokhozhaev's identity Statistical mechanics References Further reading External links The Virial Theorem at MathPages Gravitational Contraction and Star Formation, Georgia State University Physics theorems Dynamics (mechanics) Solid mechanics Concepts in physics Equations of astronomy
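As a rough numerical illustration of the definitions above, the sketch below turns an assumed velocity dispersion and radius into a virial mass, and then computes the corresponding critical density and r_200; every input (Hubble parameter, dispersion, radius, the factor 200) is an illustrative placeholder, and order-unity factors are dropped as discussed in the text.

```python
# Order-of-magnitude sketch only; all numbers are assumed, not taken from observations.
import math

G     = 6.674e-11           # m^3 kg^-1 s^-2
M_sun = 1.989e30            # kg
Mpc   = 3.086e22            # m
H0    = 70 * 1000 / Mpc     # assumed Hubble parameter: 70 km/s/Mpc, in 1/s

# Virial mass from a velocity dispersion measured within radius R:  M ~ sigma^2 R / G
sigma = 1.0e6               # assumed dispersion: 1000 km/s (cluster-like)
R     = 1.0 * Mpc           # assumed radius
M_vir = sigma**2 * R / G
print(f"M_vir ~ {M_vir / M_sun:.1e} M_sun")          # of order 10^14 solar masses

# Cosmological definition: radius inside which the mean density is 200 * rho_crit
rho_crit = 3 * H0**2 / (8 * math.pi * G)             # critical density, ~9e-27 kg/m^3 here
r_200 = (3 * M_vir / (4 * math.pi * 200 * rho_crit)) ** (1 / 3)
print(f"r_200 ~ {r_200 / Mpc:.2f} Mpc")              # roughly a megaparsec for these inputs
```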
Virial theorem
[ "Physics", "Astronomy" ]
3,959
[ "Solid mechanics", "Physical phenomena", "Equations of physics", "Concepts in astronomy", "Classical mechanics", "Motion (physics)", "Mechanics", "Dynamics (mechanics)", "nan", "Equations of astronomy", "Physics theorems" ]
32,754
https://en.wikipedia.org/wiki/Valve
A valve is a device or natural object that regulates, directs or controls the flow of a fluid (gases, liquids, fluidized solids, or slurries) by opening, closing, or partially obstructing various passageways. Valves are technically fittings, but are usually discussed as a separate category. In an open valve, fluid flows in a direction from higher pressure to lower pressure. The word is derived from the Latin valva, the moving part of a door, in turn from volvere, to turn, roll. The simplest, and very ancient, valve is simply a freely hinged flap which swings down to obstruct fluid (gas or liquid) flow in one direction, but is pushed up by the flow itself when the flow is moving in the opposite direction. This is called a check valve, as it prevents or "checks" the flow in one direction. Modern control valves may regulate pressure or flow downstream and operate on sophisticated automation systems. Valves have many uses, including controlling water for irrigation, industrial uses for controlling processes, residential uses such as on/off and pressure control to dish and clothes washers and taps in the home. Valves are also used in the military and transport sectors. In HVAC ductwork and other near-atmospheric air flows, valves are instead called dampers. In compressed air systems, however, valves are used with the most common type being ball valves. Applications Valves are found in virtually every industrial process, including water and sewage processing, mining, power generation, processing of oil, gas and petroleum, food manufacturing, chemical and plastic manufacturing and many other fields. People in developed nations use valves in their daily lives, including plumbing valves, such as taps for tap water, gas control valves on cookers, small valves fitted to washing machines and dishwashers, safety devices fitted to hot water systems, and poppet valves in car engines. In nature, there are valves, for example one-way valves in veins controlling the blood circulation, and heart valves controlling the flow of blood in the chambers of the heart and maintaining the correct pumping action. Valves may be operated manually, either by a handle or grip, lever, pedal or wheel. Valves may also be automatic, driven by changes in pressure, temperature, or flow. These changes may act upon a diaphragm or a piston which in turn activates the valve, examples of this type of valve found commonly are safety valves fitted to hot water systems or boilers. More complex control systems using valves requiring automatic control based on an external input (i.e., regulating flow through a pipe to a changing set point) require an actuator. An actuator will stroke the valve depending on its input and set-up, allowing the valve to be positioned accurately, and allowing control over a variety of requirements. Variation Valves vary widely in form and application. Sizes typically range from 0.1 mm to 60 cm. Special valves can have a diameter exceeding 5 meters. Valve costs range from simple inexpensive disposable valves to specialized valves which cost thousands of dollars (US) per inch of the diameter of the valve. Disposable valves may be found in common household items including mini-pump dispensers and aerosol cans. A common use of the term valve refers to the poppet valves found in the vast majority of modern internal combustion engines such as those in most fossil fuel powered vehicles which are used to control the intake of the fuel-air mixture and allow exhaust gas venting. 
Types Valves are quite diverse and may be classified into a number of basic types. Valves may also be classified by how they are actuated: Hydraulic Pneumatic Manual Solenoid valve Motor Components The main parts of the most usual type of valve are the body and the bonnet. These two parts form the casing that holds the fluid going through the valve. Body The valve's body is the outer casing of most or all of the valve that contains the internal parts or trim. The bonnet is the part of the encasing through which the stem (see below) passes and that forms a guide and seal for the stem. The bonnet typically screws into or is bolted to the valve body. Valve bodies are usually metallic or plastic. Brass, bronze, gunmetal, cast iron, steel, alloy steels and stainless steels are very common. Seawater applications, like desalination plants, often use duplex valves, as well as super duplex valves, due to their corrosion resistant properties, particularly against warm seawater. Alloy 20 valves are typically used in sulphuric acid plants, whilst monel valves are used in hydrofluoric acid (HF Acid) plants. Hastelloy valves are often used in high temperature applications, such as nuclear plants, whilst inconel valves are often used in hydrogen applications. Plastic bodies are used for relatively low pressures and temperatures. PVC, PP, PVDF and glass-reinforced nylon are common plastics used for valve bodies. Bonnet A bonnet acts as a cover on the valve body. It is commonly semi-permanently screwed into the valve body or bolted onto it. During manufacture of the valve, the internal parts are put into the body and then the bonnet is attached to hold everything together inside. To access internal parts of a valve, a user would take off the bonnet, usually for maintenance. Many valves do not have bonnets; for example, plug valves usually do not have bonnets. Many ball valves do not have bonnets since the valve body is put together in a different style, such as being screwed together at the middle of the valve body. Ports Ports are passages that allow fluid to pass through the valve. Ports are obstructed by the valve member or disc to control flow. Valves most commonly have 2 ports, but may have as many as 20. The valve is almost always connected at its ports to pipes or other components. Connection methods include threadings, compression fittings, glue, cement, flanges, or welding. Handle or actuator A handle is used to manually control a valve from outside the valve body. Automatically controlled valves often do not have handles, but some may have a handle (or something similar) anyway to manually override automatic control, such as a stop-check valve. An actuator is a mechanism or device to automatically or remotely control a valve from outside the body. Some valves have neither handle nor actuator because they automatically control themselves from inside; for example, check valves and relief valves may have neither. Disc A disc, also known as a valve member, is a movable obstruction inside the stationary body that adjustably restricts flow through the valve. Although traditionally disc-shaped, discs come in various shapes. Depending on the type of valve, a disc can move linearly inside a valve, or rotate on the stem (as in a butterfly valve), or rotate on a hinge or trunnion (as in a check valve). A ball is a round valve member with one or more paths between ports passing through it. By rotating the ball, flow can be directed between different ports. 
Ball valves use spherical rotors with a cylindrical hole drilled as a fluid passage. Plug valves use cylindrical or conically tapered rotors called plugs. Other round shapes for rotors are possible as well in rotor valves, as long as the rotor can be turned inside the valve body. However, not all round or spherical discs are rotors; for example, a ball check valve uses the ball to block reverse flow, but is not a rotor because operating the valve does not involve rotation of the ball. Seat The "seat" is the interior surface of the body which contacts the disc to form a leak-tight seal. In discs that move linearly or swing on a hinge or trunnion, the disc comes into contact with the seat only when the valve is shut. In disks that rotate, the seat is always in contact with the disk, but the area of contact changes as the disc is turned. The seat always remains stationary relative to the body. Seats are classified by whether they are cut directly into the body, or if they are made of a different material: Hard seats are integral to the valve body. Nearly all hard seated metal valves have a small amount of leakage. Soft seats are fitted to the valve body and made of softer materials such as PTFE or various elastomers such as NBR, EPDM, or FKM depending on the maximum operating temperature. A closed soft seated valve is much less liable to leak when shut while hard seated valves are more durable. Gate, globe, and check valves are usually hard seated while butterfly, ball, plug, and diaphragm valves are usually soft seated. Stem The stem transmits motion from the handle or controlling device to the disc. The stem typically passes through the bonnet when present. In some cases, the stem and the disc can be combined in one piece, or the stem and the handle are combined in one piece. The motion transmitted by the stem may be a linear force, a rotational torque, or some combination of these (Angle valve using torque reactor pin and Hub Assembly). The valve and stem can be threaded such that the stem can be screwed into or out of the valve by turning it in one direction or the other, thus moving the disc back or forth inside the body. Packing is often used between the stem and the bonnet to maintain a seal. Some valves have no external control and do not need a stem as in most check valves. Valves whose disc is between the seat and the stem and where the stem moves in a direction into the valve to shut it are normally-seated or front seated. Valves whose seat is between the disc and the stem and where the stem moves in a direction out of the valve to shut it are reverse-seated or back seated. These terms don't apply to valves with no stem or valves using rotors. Gaskets Gaskets are the mechanical seals, or packings, used to prevent the leakage of a gas or fluids from valves. Valve balls A valve ball is also used for severe duty, high-pressure, high-tolerance applications. They are typically made of stainless steel, titanium, Stellite, Hastelloy, brass, or nickel. They can also be made of different types of plastic, such as ABS, PVC, PP or PVDF. Spring Many valves have a spring for spring-loading, to normally shift the disc into some position by default but allow control to reposition the disc. Relief valves commonly use a spring to keep the valve shut, but allow excessive pressure to force the valve open against the spring-loading. Coil springs are normally used. Typical spring materials include zinc plated steel, stainless steel, and for high temperature applications Inconel X750. 
Trim The internal elements of a valve are collectively referred to as a valve's trim. According to API Standards 600, "Steel Gate Valve-Flanged and Butt-welding Ends, Bolted Bonnets", the trim consists of stem, seating surface in the body, gate seating surface, bushing or a deposited weld for the backseat and stem hole guide, and small internal parts that normally contact the service fluid, excluding the pin that is used to make a stem-to-gate connection (this pin shall be made of an austenitic stainless steel material). Valve operating positions Valve positions are operating conditions determined by the position of the disc or rotor in the valve. Some valves are made to be operated in a gradual change between two or more positions. Return valves and non-return valves allow fluid to move in 2 or 1 directions respectively. Two-port valves Operating positions for 2-port valves can be either shut (closed) so that no flow at all goes through, fully open for maximum flow, or sometimes partially open to any degree in between. Many valves are not designed to precisely control intermediate degree of flow; such valves are considered to be either open or shut. Some valves are specially designed to regulate varying amounts of flow. Such valves have been called by various names such as regulating, throttling, metering, or needle valves. For example, needle valves have elongated conically tapered discs and matching seats for fine flow control. For some valves, there may be a mechanism to indicate by how much the valve is open, but in many cases other indications of flow rate are used, such as separate flow meters. In plants with remote-controlled process operation, such as oil refineries and petrochemical plants, some 2-way valves can be designated as normally closed (NC) or normally open (NO) during regular operation. Examples of normally-closed valves are sampling valves, which are only opened while a sample is taken. Other examples of normally-closed valves are emergency shutdown valves, which are kept open when the system is in operation and will automatically shut by taking away the power supply. This happens when there is a problem with a unit or a section of a fluid system such as a leak in order to isolate the problem from the rest of the system. Examples of normally-open valves are purge-gas supply valves or emergency-relief valves. When there is a problem these valves open (by switching them 'off') causing the unit to be flushed and emptied. Although many 2-way valves are made in which the flow can go in either direction between the two ports, when a valve is placed into a certain application, flow is often expected to go from one certain port on the upstream side of the valve, to the other port on the downstream side. Pressure regulators are variations of valves in which flow is controlled to produce a certain downstream pressure, if possible. They are often used to control flow of gas from a gas cylinder. A back-pressure regulator is a variation of a valve in which flow is controlled to maintain a certain upstream pressure, if possible. Three-port valves Valves with three ports serve many different functions. A few of the possibilities are listed here. Three-way ball valves come with T- or L-shaped fluid passageways inside the rotor. The T valve might be used to permit connection of one inlet to either or both outlets or connection of the two outlets. The L valve could be used to permit disconnection of both or connection of either but not both of two inlets to one outlet. 
Shuttle valves automatically connect the higher pressure inlet to the outlet while (in some configurations) preventing flow from one inlet to the other. Single handle mixer valves produce a variable mixture of hot and cold water at a variable flow rate under control of a single handle. Thermostatic mixing valves mix hot and cold water to produce a constant temperature in the presence of variable pressures and temperatures on the two input ports. Four-port valves A 4-port valve is a valve whose body has four ports equally spaced round the body and the disc has two passages to connect adjacent ports. It is operated with two positions. It can be used to isolate and to simultaneously bypass a sampling cylinder installed on a pressurized water line. It is useful to take a fluid sample without affecting the pressure of a hydraulic system and to avoid degassing (no leak, no gas loss or air entry, no external contamination).... Control Many valves are controlled manually with a handle attached to the stem. If the handle is turned ninety degrees between operating positions, the valve is called a quarter-turn valve. Butterfly, ball valves, and plug valves are often quarter-turn valves. If the handle is circular with the stem as the axis of rotation in the center of the circle, then the handle is called a handwheel. Valves can also be controlled by actuators attached to the stem. They can be electromechanical actuators such as an electric motor or solenoid, pneumatic actuators which are controlled by air pressure, or hydraulic actuators which are controlled by the pressure of a liquid such as oil or water. Actuators can be used for the purposes of automatic control such as in washing machine cycles, remote control such as the use of a centralised control room, or because manual control is too difficult such as when the valve is very large. Pneumatic actuators and hydraulic actuators need pressurised air or liquid lines to supply the actuator: an inlet line and an outlet line. Pilot valves are valves which are used to control other valves. Pilot valves in the actuator lines control the supply of air or liquid going to the actuators. The fill valve in a toilet water tank is a liquid level-actuated valve. When a high water level is reached, a mechanism shuts the valve which fills the tank. In some valve designs, the pressure of the flow fluid itself or pressure difference of the flow fluid between the ports automatically controls flow through the valve. Other considerations Valves are typically rated for maximum temperature and pressure by the manufacturer. The wetted materials in a valve are usually identified also. Some valves rated at very high pressures are available. When a designer, engineer, or user decides to use a valve for an application, he/she should ensure the rated maximum temperature and pressure are never exceeded and that the wetted materials are compatible with the fluid the valve interior is exposed to. In Europe, valve design and pressure ratings are subject to statutory regulation under the Pressure Equipment Directive 97/23/EC (PED). Some fluid system designs, especially in chemical or power plants, are schematically represented in piping and instrumentation diagrams. In such diagrams, different types of valves are represented by certain symbols. Valves in good condition should be leak-free. However, valves may eventually wear out from use and develop a leak, either between the inside and outside of the valve or, when the valve is shut to stop flow, between the disc and the seat. 
A particle trapped between the seat and disc could also cause such leakage. Images See also , medical References External links ISO-15926-4 - Nearly 500 valve base classifications and definitions from the ISO 15926 standard. Animations showing Internal Function of Various Types of Valve, tlv.com Flow in known Design Types of Shut-off Valves , home.arcor.de Valves: Piping and Instrumentation Diagram Standard Notation, controls.engin.umich.edu Department of Energy Fundamentals Handbook, Mechanical Science, Module 4 Valves Piping Plumbing Water industry
Valve
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
3,757
[ "Hydrology", "Building engineering", "Chemical engineering", "Plumbing", "Physical systems", "Construction", "Valves", "Hydraulics", "Water industry", "Mechanical engineering", "Piping" ]
33,426
https://en.wikipedia.org/wiki/Wave%E2%80%93particle%20duality
Wave-particle duality is the concept in quantum mechanics that fundamental entities of the universe, like photons and electrons, exhibit particle or wave properties according to the experimental circumstances. It expresses the inability of the classical concepts such as particle or wave to fully describe the behavior of quantum objects. During the 19th and early 20th centuries, light was found to behave as a wave then later discovered to have a particulate behavior, whereas electrons behaved like particles in early experiments then later discovered to have wavelike behavior. The concept of duality arose to name these seeming contradictions. History Wave-particle duality of light In the late 17th century, Sir Isaac Newton had advocated that light was corpuscular (particulate), but Christiaan Huygens took an opposing wave description. While Newton had favored a particle approach, he was the first to attempt to reconcile both wave and particle theories of light, and the only one in his time to consider both, thereby anticipating modern wave-particle duality. Thomas Young's interference experiments in 1801, and François Arago's detection of the Poisson spot in 1819, validated Huygens' wave models. However, the wave model was challenged in 1901 by Planck's law for black-body radiation. Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E, that was proportional to the frequency of its associated electromagnetic wave. In 1905 Einstein interpreted the photoelectric effect also with discrete energies for photons. These both indicate particle behavior. Despite confirmation by various experimental observations, the photon theory (as it came to be called) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. The experimental evidence of particle-like momentum and energy seemingly contradicted the earlier work demonstrating wave-like interference of light. Wave-particle duality of matter The contradictory evidence from electrons arrived in the opposite order. Many experiments by J. J. Thomson, Robert Millikan, and Charles Wilson among others had shown that free electrons had particle properties, for instance, the measurement of their mass by Thomson in 1897. In 1924, Louis de Broglie introduced his theory of electron waves in his PhD thesis Recherches sur la théorie des quanta. He suggested that an electron around a nucleus could be thought of as being a standing wave and that electrons and all matter could be considered as waves. He merged the idea of thinking about them as particles, and of thinking of them as waves. He proposed that particles are bundles of waves (wave packets) that move with a group velocity and have an effective mass. Both of these depend upon the energy, which in turn connects to the wavevector and the relativistic formulation of Albert Einstein a few years before. Following de Broglie's proposal of wave–particle duality of electrons, in 1925 to 1926, Erwin Schrödinger developed the wave equation of motion for electrons. This rapidly became part of what was called by Schrödinger undulatory mechanics, now called the Schrödinger equation and also "wave mechanics". In 1926, Max Born gave a talk in an Oxford meeting about using the electron diffraction experiments to confirm the wave–particle duality of electrons. 
In his talk, Born cited experimental data from Clinton Davisson in 1923. It happened that Davisson also attended that talk. Davisson returned to his lab in the US to switch his experimental focus to test the wave property of electrons. In 1927, the wave nature of electrons was empirically confirmed by two experiments. The Davisson–Germer experiment at Bell Labs measured electrons scattered from Ni metal surfaces. George Paget Thomson and Alexander Reid at Cambridge University scattered electrons through thin metal films and observed concentric diffraction rings. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned. These experiments were rapidly followed by the first non-relativistic diffraction model for electrons by Hans Bethe based upon the Schrödinger equation, which is very close to how electron diffraction is now described. Significantly, Davisson and Germer noticed that their results could not be interpreted using a Bragg's law approach as the positions were systematically different; the approach of Bethe, which includes the refraction due to the average potential, yielded more accurate results. Davisson and Thomson were awarded the Nobel Prize in 1937 for experimental verification of wave property of electrons by diffraction experiments. Similar crystal diffraction experiments were carried out by Otto Stern in the 1930s using beams of helium atoms and hydrogen molecules. These experiments further verified that wave behavior is not limited to electrons and is a general property of matter on a microscopic scale. Classical waves and particles Before proceeding further, it is critical to introduce some definitions of waves and particles both in a classical sense and in quantum mechanics. Waves and particles are two very different models for physical systems, each with an exceptionally large range of application. Classical waves obey the wave equation; they have continuous values at many points in space that vary with time; their spatial extent can vary with time due to diffraction, and they display wave interference. Physical systems exhibiting wave behavior and described by the mathematics of wave equations include water waves, seismic waves, sound waves, radio waves, and more. Classical particles obey classical mechanics; they have some center of mass and extent; they follow trajectories characterized by positions and velocities that vary over time; in the absence of forces their trajectories are straight lines. Stars, planets, spacecraft, tennis balls, bullets, sand grains: particle models work across a huge scale. Unlike waves, particles do not exhibit interference. Some experiments on quantum systems show wave-like interference and diffraction; some experiments show particle-like collisions. Quantum systems obey wave equations that predict particle probability distributions. These particles are associated with discrete values called quanta for properties such as spin, electric charge and magnetic moment. These particles arrive one at time, randomly, but build up a pattern. The probability that experiments will measure particles at a point in space is the square of a complex-number valued wave. Experiments can be designed to exhibit diffraction and interference of the probability amplitude. Thus statistically large numbers of the random particle appearances can display wave-like properties. Similar equations govern collective excitations called quasiparticles. 
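The statement that detection probabilities are the squared magnitude of a complex-valued amplitude can be made concrete with a toy two-slit calculation, previewing the experiment described in the next section. The sketch below uses an idealized far-field model with invented parameters (wavelength, slit separation, angles) and ignores the single-slit diffraction envelope.

```python
# Toy model: adding complex amplitudes from two slits gives interference fringes;
# adding the probabilities (the classical-particle expectation) does not.
import numpy as np

wavelength = 1.0                        # de Broglie wavelength (arbitrary units)
d = 5.0                                 # slit separation (arbitrary units)
k = 2 * np.pi / wavelength
theta = np.linspace(-0.4, 0.4, 9)       # detection angles on a distant screen

psi1 = np.ones_like(theta, dtype=complex)          # amplitude from slit 1
psi2 = np.exp(1j * k * d * np.sin(theta))          # slit 2: extra path-length phase

p_quantum   = np.abs(psi1 + psi2)**2               # oscillates between 0 and 4
p_classical = np.abs(psi1)**2 + np.abs(psi2)**2    # constant 2: no interference

for t, pq, pc in zip(theta, p_quantum, p_classical):
    print(f"theta = {t:+.2f}   |psi1+psi2|^2 = {pq:4.2f}   |psi1|^2+|psi2|^2 = {pc:4.2f}")
```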
Electrons behaving as waves and particles The electron double slit experiment is a textbook demonstration of wave-particle duality. A modern version of the experiment is shown schematically in the figure below. Electrons from the source hit a wall with two thin slits. A mask behind the slits can expose either one or open to expose both slits. The results for high electron intensity are shown on the right, first for each slit individually, then with both slits open. With either slit open there is a smooth intensity variation due to diffraction. When both slits are open the intensity oscillates, characteristic of wave interference. Having observed wave behavior, now change the experiment, lowering the intensity of the electron source until only one or two are detected per second, appearing as individual particles, dots in the video. As shown in the movie clip below, the dots on the detector seem at first to be random. After some time a pattern emerges, eventually forming an alternating sequence of light and dark bands. The experiment shows wave interference revealed a single particle at a time -- quantum mechanical electrons display both wave and particle behavior. Similar results have been shown for atoms and even large molecules. Observing photons as particles While electrons were thought to be particles until their wave properties were discovered; for photons it was the opposite. In 1887, Heinrich Hertz observed that when light with sufficient frequency hits a metallic surface, the surface emits cathode rays, what are now called electrons. In 1902, Philipp Lenard discovered that the maximum possible energy of an ejected electron is unrelated to its intensity. This observation is at odds with classical electromagnetism, which predicts that the electron's energy should be proportional to the intensity of the incident radiation. In 1905, Albert Einstein suggested that the energy of the light must occur a finite number of energy quanta. He postulated that electrons can receive energy from an electromagnetic field only in discrete units (quanta or photons): an amount of energy E that was related to the frequency f of the light by where h is the Planck constant (6.626×10−34 J⋅s). Only photons of a high enough frequency (above a certain threshold value which is the work function) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal he used, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light below the threshold frequency could release an electron. Despite confirmation by various experimental observations, the photon theory (as it came to be called later) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. Both discrete (quantized) energies and also momentum are, classically, particle attributes. There are many other examples where photons display particle-type properties, for instance in solar sails, where sunlight could propel a space vehicle and laser cooling where the momentum is used to slow down (cool) atoms. These are a different aspect of wave-particle duality. Which slit experiments In a "which way" experiment, particle detectors are placed at the slits to determine which slit the electron traveled through. 
When these detectors are inserted, quantum mechanics predicts that the interference pattern disappears because the detected part of the electron wave has changed (loss of coherence). Many similar proposals have been made, and many have been converted into experiments and tried out. Every single one shows the same result: as soon as electron trajectories are detected, interference disappears. A simple example of these "which way" experiments uses a Mach–Zehnder interferometer, a device based on lasers and mirrors sketched below. A laser beam along the input port splits at a half-silvered mirror. Part of the beam continues straight, passes through a glass phase shifter, then reflects downward. The other part of the beam reflects from the first mirror and then turns at another mirror. The two beams meet at a second half-silvered beam splitter. Each output port has a camera to record the results. The two beams show interference characteristic of wave propagation. If the laser intensity is turned sufficiently low, individual dots appear on the cameras, building up the pattern as in the electron example. The first beam-splitter mirror acts like double slits, but in the interferometer case we can remove the second beam splitter. Then the beam heading down ends up in output port 1: any photon particles on this path get counted in that port. The beam going across the top ends up on output port 2. In either case the counts will track the photon trajectories. However, as soon as the second beam splitter is removed, the interference pattern disappears. See also Einstein's thought experiments Interpretations of quantum mechanics Uncertainty principle Matter wave Corpuscular theory of light References External links Articles containing video clips Dichotomies Foundational quantum physics Waves Particles
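The gradual build-up of an interference pattern from individual detection events, as described above, is easy to illustrate numerically. The sketch below (Python/NumPy) samples single-particle detection positions from the squared magnitude of a two-path amplitude; the slit geometry, wavelength and screen distance are arbitrary illustrative values, not taken from any specific experiment.

```python
import numpy as np

# Illustrative far-field two-slit geometry (all numbers are placeholders).
wavelength = 50e-12     # ~50 pm, an electron de Broglie wavelength scale
d = 500e-9              # slit separation
a = 100e-9              # slit width
L = 1.0                 # distance to the detector screen

x = np.linspace(-2e-3, 2e-3, 4001)                     # detector coordinate
beta = np.pi * a * x / (wavelength * L)
envelope = np.sinc(beta / np.pi)                       # single-slit diffraction envelope
psi = envelope * np.cos(np.pi * d * x / (wavelength * L))  # two-path amplitude sum

weights = psi**2
prob = weights / weights.sum()                         # detection probability per grid point

# Detections arrive one at a time; the fringes emerge only statistically.
rng = np.random.default_rng(0)
hits = rng.choice(x, size=20000, p=prob)
counts, _ = np.histogram(hits, bins=200)
print("approximate fringe visibility:",
      (counts.max() - counts.min()) / (counts.max() + counts.min()))
```

Plotting a histogram of `hits` after a few hundred, a few thousand, and all twenty thousand samples reproduces the progression from apparently random dots to light and dark bands described in the text.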
Wave–particle duality
[ "Physics" ]
2,361
[ "Physical phenomena", "Foundational quantum physics", "Quantum mechanics", "Waves", "Motion (physics)", "Physical objects", "Particles", "Matter" ]
18,308,428
https://en.wikipedia.org/wiki/Constant%20problem
In mathematics, the constant problem is the problem of deciding whether a given expression is equal to zero. The problem This problem is also referred to as the identity problem or the method of zero estimates. It has no formal statement as such but refers to a general problem prevalent in transcendental number theory. Often proofs in transcendence theory are proofs by contradiction. Specifically, they use some auxiliary function to create an integer n ≥ 0, which is shown to satisfy n < 1. Clearly, this means that n must have the value zero, and so a contradiction arises if one can show that in fact n is not zero. In many transcendence proofs, proving that n ≠ 0 is very difficult, and hence a lot of work has been done to develop methods that can be used to prove the non-vanishing of certain expressions. The sheer generality of the problem is what makes it difficult to prove general results or come up with general methods for attacking it. The number n that arises may involve integrals, limits, polynomials, other functions, and determinants of matrices. Results In certain cases, algorithms or other methods exist for proving that a given expression is non-zero, or for showing that the problem is undecidable. For example, if x1, ..., xn are real numbers, then there is an algorithm for deciding whether there are integers a1, ..., an such that a1x1 + ... + anxn = 0. If the expression we are interested in contains an oscillating function, such as the sine or cosine function, then it has been shown that the problem is undecidable, a result known as Richardson's theorem. In general, methods specific to the expression being studied are required to prove that it cannot be zero. See also Integer relation algorithm References Analytic number theory Undecidable problems
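For the linear case mentioned above (integers a1, ..., an with a1x1 + ... + anxn = 0), integer-relation algorithms such as PSLQ give a practical numerical tool. The sketch below uses the third-party mpmath library (assumed to be installed); note that PSLQ only finds a candidate relation or reports none up to the working precision and coefficient bound, which is weaker than a true decision procedure.

```python
from mpmath import mp, mpf, sqrt, pi, pslq

mp.dps = 50  # working precision in decimal digits

# x1 = sqrt(2), x2 = sqrt(8), x3 = 1: the relation 2*x1 - x2 + 0*x3 = 0 holds exactly.
xs = [sqrt(2), sqrt(8), mpf(1)]
print("candidate integer relation:", pslq(xs))   # e.g. [2, -1, 0] up to sign

# A set with (almost certainly) no small relation: sqrt(2), pi, 1.
print("relation for [sqrt(2), pi, 1]:", pslq([sqrt(2), pi, mpf(1)]))  # expected: None
```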
Constant problem
[ "Mathematics" ]
375
[ "Analytic number theory", "Computational problems", "Undecidable problems", "Mathematical problems", "Number theory" ]
18,311,607
https://en.wikipedia.org/wiki/Hong%E2%80%93Ou%E2%80%93Mandel%20effect
The Hong–Ou–Mandel effect is a two-photon interference effect in quantum optics that was demonstrated in 1987 by Chung Ki Hong, Zheyu Jeff Ou and Leonard Mandel at the University of Rochester. The effect occurs when two identical single photons enter a 1:1 beam splitter, one in each input port. When the temporal overlap of the photons on the beam splitter is perfect, the two photons will always exit the beam splitter together in the same output mode, meaning that there is zero chance that they will exit separately with one photon in each of the two outputs, giving a coincidence event. The photons have a 50:50 chance of exiting (together) in either output mode. If they become more distinguishable (e.g. because they arrive at different times or with different wavelengths), the probability of them each going to a different detector will increase. In this way, the interferometer coincidence signal can accurately measure bandwidth, path lengths, and timing. Since this effect relies on the existence of photons and on second quantization, it cannot be fully explained by classical optics. The effect provides one of the underlying physical mechanisms for logic gates in linear optical quantum computing (the other mechanism being the action of measurement). Quantum-mechanical description Physical description When a photon enters a beam splitter, there are two possibilities: it will either be reflected or transmitted. The relative probabilities of transmission and reflection are determined by the reflectivity of the beam splitter. Here, we assume a 1:1 beam splitter, in which a photon has equal probability of being reflected and transmitted. Next, consider two photons, one in each input mode of a 1:1 beam splitter. There are four possibilities regarding how the photons will behave: The photon coming in from above is reflected and the photon coming in from below is transmitted. Both photons are transmitted. Both photons are reflected. The photon coming in from above is transmitted and the photon coming in from below is reflected. We assume now that the two photons are identical in their physical properties (i.e., polarization, spatio-temporal mode structure, and frequency). Since the state of the beam splitter does not "record" which of the four possibilities actually happens, the Feynman rules dictate that we have to add all four possibilities at the probability amplitude level. In addition, reflection from the bottom side of the beam splitter introduces a relative phase shift of π, corresponding to a factor of −1 in the associated term in the superposition. This sign is required by the reversibility (or unitarity of the quantum evolution) of the beam splitter. Since the two photons are identical, we cannot distinguish between the output states of possibilities 2 and 3, and their relative minus sign ensures that these two terms cancel. This cancellation can be interpreted as destructive interference of the transmission/transmission and reflection/reflection possibilities. If a detector is set up on each of the outputs then coincidences can never be observed, while both photons can appear together in either one of the two detectors with equal probability. A classical prediction of the intensities of the output beams for the same beam splitter and identical coherent input beams would suggest that all of the light should go to one of the outputs (the one with the positive phase). Mathematical description Consider two optical input modes a and b that carry annihilation and creation operators a, a† and b, b†, respectively. 
Identical photons in different modes can be described by the Fock states, so, for example corresponds to mode a empty (the vacuum state), and inserting one photon into a corresponds to , etc. A photon in each input mode is therefore When the two modes a and b are mixed in a 1:1 beam splitter, they produce output modes c and d. Inserting a photon in a produces a superposition state of the outputs: if the beam splitter is 50:50 then the probabilities of each output are equal, i.e. , and similarly for inserting a photon in b. Therefore The relative minus sign appears because the classical lossless beam splitter produces a unitary transformation. This can be seen most clearly when we write the two-mode beam splitter transformation in matrix form: Similar transformations hold for the creation operators. Unitarity of the transformation implies unitarity of the matrix. Physically, this beam splitter transformation means that reflection from one surface induces a relative phase shift of π, corresponding to a factor of −1, with respect to reflection from the other side of the beam splitter (see the Physical description above). When two photons enter the beam splitter, one on each side, the state of the two modes becomes where we used etc. Since the commutator of the two creation operators and is zero because they operate on different spaces, the product term vanishes. The surviving terms in the superposition are only the and terms. Therefore, when two identical photons enter a 1:1 beam splitter, they will always exit the beam splitter in the same (but random) output mode. The result is non-classical: a classical light wave entering a classical beam splitter with the same transfer matrix would always exit in arm c due to destructive interference in arm d, whereas the quantum result is random. Changing the beam splitter phases can change the classical result to arm d or a mixture of both, but the quantum result is independent of these phases. For a more general treatment of the beam splitter with arbitrary reflection/transmission coefficients, and arbitrary numbers of input photons, see the general quantum mechanical treatment of a beamsplitter for the resulting output Fock state. Experimental signature Customarily the Hong–Ou–Mandel effect is observed using two photodetectors monitoring the output modes of the beam splitter. The coincidence rate of the detectors will drop to zero when the identical input photons overlap perfectly in time. This is called the Hong–Ou–Mandel dip, or HOM dip. The coincidence count reaches a minimum, indicated by the dotted line. The minimum drops to zero when the two photons are perfectly identical in all properties. When the two photons are perfectly distinguishable, the dip completely disappears. The precise shape of the dip is directly related to the power spectrum of the single-photon wave packet and is therefore determined by the physical process of the source. Common shapes of the HOM dip are Gaussian and Lorentzian. A classical analogue to the HOM effect occurs when two coherent states (e.g. laser beams) interfere at the beamsplitter. If the states have a rapidly varying phase difference (i.e. faster than the integration time of the detectors) then a dip will be observed in the coincidence rate equal to one half the average coincidence count at long delays. (Nevertheless, it can be further reduced with a proper discriminating trigger level applied to the signal.) 
Consequently, to prove that destructive interference is two-photon quantum interference rather than a classical effect, the HOM dip must be lower than one half. The Hong–Ou–Mandel effect can be directly observed using single-photon-sensitive intensified cameras. Such cameras have the ability to register single photons as bright spots clearly distinguished from the low-noise background. In the figure above, the pairs of photons are registered in the middle of the Hong–Ou–Mandel dip. In most cases, they appear grouped in pairs either on the left or right side, corresponding to two output ports of a beam splitter. Occasionally a coincidence event occurs, manifesting a residual distinguishability between the photons. Applications and experiments The Hong–Ou–Mandel effect can be used to test the degree of indistinguishability of the two incoming photons. When the HOM dip reaches all the way down to zero coincident counts, the incoming photons are perfectly indistinguishable, whereas if there is no dip, the photons are distinguishable. In 2002, the Hong–Ou–Mandel effect was used to demonstrate the purity of a solid-state single-photon source by feeding two successive photons from the source into a 1:1 beam splitter. The interference visibility V of the dip is related to the states of the two photons and as If , then the visibility is equal to the purity of the photons. In 2006, an experiment was performed in which two atoms independently emitted a single photon each. These photons subsequently produced the Hong–Ou–Mandel effect. Multimode Hong–Ou–Mandel interference was studied in 2003. The Hong–Ou–Mandel effect also underlies the basic entangling mechanism in linear optical quantum computing, and the two-photon quantum state that leads to the HOM dip is the simplest non-trivial state in a class called NOON states. In 2015 the Hong–Ou–Mandel effect for photons was directly observed with spatial resolution using an sCMOS camera with an image intensifier. Also in 2015 the effect was observed with helium-4 atoms. The HOM effect can be used to measure the biphoton wave function from a spontaneous four-wave mixing process. In 2016 a frequency converter for photons demonstrated the Hong–Ou–Mandel effect with different-color photons. In 2018, HOM interference was used to demonstrate high-fidelity quantum interference between topologically protected states on a photonic chip. Topological photonics have intrinsically high-coherence, and unlike other quantum processor approaches, do not require strong magnetic fields and operate at room temperature. Three-photon interference Three-photon interference effect has been identified in experiments. See also Degree of coherence Photon antibunching Photon bunching References External links Lectures on Quantum Computing: Interference (2 of 6) - David Deutsch lecture video, video of related experiment (a single photon in a sharp direction is split, mirrored and rejoined in a second splitter (joiner) output in the sharp direction). Can Two-Photon Interference be Considered the Interference of Two Photons? - Discussion of the interpretation of the HOM interferometer results. YouTube animation showing HOM effect in a semiconductor device. YouTube movie showing experimental results of HOM effect observed on a camera. Hong-Ou-Mandel in the Virtual Lab by Quantum Flytrap, an interactive simulation Quantum optics Interferometry
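The two-photon interference described in this article can be reproduced in a few lines of code. The sketch below (Python/NumPy, purely illustrative) uses the standard result for pure single-photon states at a 1:1 beam splitter, namely that the coincidence probability is (1 − |overlap|²)/2, where "overlap" is the overlap of the two photons' wave packets; the Gaussian dependence of the overlap on delay and the coherence-time value are modelling assumptions, not values from the article.

```python
import numpy as np

def coincidence_probability(overlap):
    """Probability of one photon in each output port of a 1:1 beam splitter,
    for two pure single-photon states with wave-packet overlap `overlap` (0..1).
    Perfect overlap -> 0 (the HOM dip); zero overlap -> 1/2 (distinguishable photons)."""
    return 0.5 * (1.0 - abs(overlap) ** 2)

# Model the overlap of two Gaussian wave packets versus relative delay tau,
# with an illustrative coherence time tau_c.
tau_c = 1.0
taus = np.linspace(-4, 4, 9)
overlaps = np.exp(-taus**2 / (2 * tau_c**2))

for tau, ov in zip(taus, overlaps):
    print(f"delay {tau:+.1f}: P_coincidence = {coincidence_probability(ov):.3f}")
# The printed values trace out the HOM dip: 0.5 far from zero delay, 0 at zero delay.
```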
Hong–Ou–Mandel effect
[ "Physics" ]
2,138
[ "Quantum optics", "Quantum mechanics" ]
18,313,212
https://en.wikipedia.org/wiki/UrQMD
UrQMD (Ultra relativistic Quantum Molecular Dynamics) is a fully integrated Monte Carlo simulation package for Proton+Proton, Proton+nucleus and nucleus+nucleus interactions. UrQMD has many applications in particle physics, high energy experimental physics and engineering, shielding, detector design, cosmic ray studies, and medical physics. Since version 3.3, an option has been incorporated to substitute part of the collision with a hydrodynamic model. UrQMD is available in as open-source Fortran code. UrQMD is developed using the FORTRAN language. Under Linux the gfortran compiler is necessary to build and run the program. The UrQMD model is part of the GEANT4 simulation package and can be used as a low-energy hadronic interaction model within the air shower simulation code CORSIKA. External links Official site of UrQMD collaboration Fortran software Physics software Monte Carlo particle physics software Science software for Linux
UrQMD
[ "Physics" ]
194
[ "Physics software", "Computational physics" ]
18,313,757
https://en.wikipedia.org/wiki/Master/Session
In cryptography, Master/Session is a key management scheme in which a pre-shared Key Encrypting Key (called the "Master" key) is used to encrypt a randomly generated Working Key (called the "Session" key) so that it can be communicated over an insecure channel. The Working Key is then used for encrypting the data to be exchanged. Its advantage is simplicity, but it suffers the disadvantage of having to distribute the pre-shared Key Encrypting Key, which can be difficult to update in the event of compromise. The Master/Session technique was created in the days before asymmetric techniques, such as Diffie-Hellman, were invented. This technique still finds widespread use in the financial industry, and is routinely used between corporate parties such as issuers, acquirers, and switches. Its use in device communications (such as PIN pads), however, is in decline given the advantages of techniques such as DUKPT. References Cryptography
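A minimal illustration of the Master/Session idea in modern terms, using the third-party Python `cryptography` package. AES-GCM is used here purely for clarity; the financial-industry schemes described above historically used DES/3DES variants and different message formats, so this is a sketch of the key hierarchy, not of any deployed protocol.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Pre-shared master key (Key Encrypting Key), distributed out of band.
master_key = AESGCM.generate_key(bit_length=256)

# Sender: generate a random session (working) key and wrap it under the master key.
session_key = AESGCM.generate_key(bit_length=256)
wrap_nonce = os.urandom(12)
wrapped_session_key = AESGCM(master_key).encrypt(wrap_nonce, session_key, b"session-key")

# Sender: encrypt the bulk data under the session key.
data_nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(data_nonce, b"transaction data", None)

# Receiver: unwrap the session key with the shared master key, then decrypt the data.
recovered_key = AESGCM(master_key).decrypt(wrap_nonce, wrapped_session_key, b"session-key")
print(AESGCM(recovered_key).decrypt(data_nonce, ciphertext, None))
```

The wrapped session key and the ciphertext can travel over the insecure channel; only the master key must be pre-shared, which is exactly the update problem noted above.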
Master/Session
[ "Mathematics", "Engineering" ]
197
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
18,315,951
https://en.wikipedia.org/wiki/Visual%20odometry
In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Exploration Rovers. Overview In navigation, odometry is the use of data from the movement of actuators to estimate change in position over time through devices such as rotary encoders to measure wheel rotations. While useful for many wheeled or tracked vehicles, traditional odometry techniques cannot be applied to mobile robots with non-standard locomotion methods, such as legged robots. In addition, odometry universally suffers from precision problems, since wheels tend to slip and slide on the floor creating a non-uniform distance traveled as compared to the wheel rotations. The error is compounded when the vehicle operates on non-smooth surfaces. Odometry readings become increasingly unreliable as these errors accumulate and compound over time. Visual odometry is the process of determining equivalent odometry information using sequential camera images to estimate the distance traveled. Visual odometry allows for enhanced navigational accuracy in robots or vehicles using any type of locomotion on any surface. Types There are various types of VO. Monocular and stereo Depending on the camera setup, VO can be categorized as Monocular VO (single camera) or Stereo VO (two cameras in a stereo setup). Feature-based and direct method Traditional VO's visual information is obtained by the feature-based method, which extracts the image feature points and tracks them in the image sequence. Recent developments in VO research provided an alternative, called the direct method, which uses pixel intensity in the image sequence directly as visual input. There are also hybrid methods. Visual inertial odometry If an inertial measurement unit (IMU) is used within the VO system, it is commonly referred to as Visual Inertial Odometry (VIO). Algorithm Most existing approaches to visual odometry are based on the following stages. Acquire input images: using either single cameras, stereo cameras, or omnidirectional cameras. Image correction: apply image processing techniques for lens distortion removal, etc. Feature detection: define interest operators, and match features across frames and construct an optical flow field. Feature extraction and correlation. Use correlation, not long-term feature tracking, to establish correspondence of two images. Construct optical flow field (Lucas–Kanade method). Check flow field vectors for potential tracking errors and remove outliers. Estimation of the camera motion from the optical flow. Choice 1: Kalman filter for state estimate distribution maintenance. Choice 2: find the geometric and 3D properties of the features that minimize a cost function based on the re-projection error between two adjacent images. This can be done by mathematical minimization or random sampling. Periodic repopulation of trackpoints to maintain coverage across the image. An alternative to feature-based methods is the "direct" or appearance-based visual odometry technique which minimizes an error directly in sensor space and subsequently avoids feature matching and extraction. Another method, coined 'visiodometry', estimates the planar roto-translations between images using Phase correlation instead of extracting features. Egomotion Egomotion is defined as the 3D motion of a camera within an environment. 
In the field of computer vision, egomotion refers to estimating a camera's motion relative to a rigid scene. An example of egomotion estimation would be estimating a car's moving position relative to lines on the road or street signs being observed from the car itself. The estimation of egomotion is important in autonomous robot navigation applications. Overview The goal of estimating the egomotion of a camera is to determine the 3D motion of that camera within the environment using a sequence of images taken by the camera. The process of estimating a camera's motion within an environment involves the use of visual odometry techniques on a sequence of images captured by the moving camera. This is typically done using feature detection to construct an optical flow from two image frames in a sequence generated from either single cameras or stereo cameras. Using stereo image pairs for each frame helps reduce error and provides additional depth and scale information. Features are detected in the first frame, and then matched in the second frame. This information is then used to make the optical flow field for the detected features in those two images. The optical flow field illustrates how features diverge from a single point, the focus of expansion. The focus of expansion can be detected from the optical flow field, indicating the direction of the motion of the camera, and thus providing an estimate of the camera motion. There are other methods of extracting egomotion information from images as well, including a method that avoids feature detection and optical flow fields and directly uses the image intensities. See also Dead reckoning Odometry Optical flow Optical motion capture References Robotic sensing Motion in computer vision Surveying
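A minimal sketch of one feature-based monocular VO step, assuming OpenCV (`cv2`) and NumPy are available; the image file names, intrinsic matrix values and thresholds are placeholders, not taken from the article. It follows the pipeline described above: detect features, track them with Lucas–Kanade optical flow, estimate the essential matrix with outlier rejection, and recover the relative rotation and (scale-free) translation.

```python
import cv2
import numpy as np

K = np.array([[718.8, 0.0, 607.2],   # placeholder camera intrinsics
              [0.0, 718.8, 185.2],
              [0.0, 0.0, 1.0]])

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# 1. Feature detection in the first frame.
pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=2000, qualityLevel=0.01, minDistance=7)

# 2. Track the features into the second frame (Lucas-Kanade optical flow).
pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
good0 = pts0[status.ravel() == 1]
good1 = pts1[status.ravel() == 1]

# 3. Estimate the essential matrix with RANSAC to reject tracking outliers.
E, inliers = cv2.findEssentialMat(good1, good0, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)

# 4. Recover the relative rotation R and unit translation t. The translation scale
#    is unobservable with a single camera; stereo or an IMU is needed to fix it.
_, R, t, _ = cv2.recoverPose(E, good1, good0, K, mask=inliers)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())
```

Chaining such steps over a sequence, with periodic re-detection of features, yields the accumulated trajectory; a Kalman filter or bundle adjustment over the re-projection error refines it, as noted in the algorithm outline.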
Visual odometry
[ "Physics", "Engineering" ]
1,006
[ "Physical phenomena", "Surveying", "Motion (physics)", "Civil engineering", "Motion in computer vision" ]
102,718
https://en.wikipedia.org/wiki/Aluminium%20gallium%20arsenide
Aluminium gallium arsenide (also gallium aluminium arsenide) (AlxGa1−xAs) is a semiconductor material with very nearly the same lattice constant as GaAs, but a larger bandgap. The x in the formula above is a number between 0 and 1 - this indicates an arbitrary alloy between GaAs and AlAs. The chemical formula AlGaAs should be considered an abbreviated form of the above, rather than any particular ratio. The bandgap varies between 1.42 eV (GaAs) and 2.16 eV (AlAs). For x < 0.4, the bandgap is direct. The refractive index is related with the bandgap via the Kramers–Kronig relations and varies between 2.9 (x = 1) and 3.5 (x = 0). This allows the construction of Bragg mirrors used in VCSELs, RCLEDs, and substrate-transferred crystalline coatings. Aluminium gallium arsenide is used as a barrier material in GaAs based heterostructure devices. The AlGaAs layer confines the electrons to a gallium arsenide region. An example of such a device is a quantum well infrared photodetector (QWIP). It is commonly used in GaAs-based red- and near-infra-red-emitting (700–1100 nm) double-hetero-structure laser diodes. Safety and toxicity aspects The toxicology of AlGaAs has not been fully investigated. The dust is an irritant to skin, eyes and lungs. The environment, health and safety aspects of aluminium gallium arsenide sources (such as trimethylgallium and arsine) and industrial hygiene monitoring studies of standard MOVPE sources have been reported recently in a review. References External links Arsenides Aluminium compounds Gallium compounds III-V semiconductors III-V compounds Light-emitting diode materials Zincblende crystal structure
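As a rough numerical illustration of the composition dependence quoted above, the sketch below linearly interpolates between the two endpoint gaps given in the text (1.42 eV for GaAs, 2.16 eV for AlAs). Real AlxGa1−xAs shows a nonlinear ("bowing") dependence and a direct-to-indirect crossover near x ≈ 0.4, so this is only a first-order estimate, not a materials-database formula.

```python
def algaas_bandgap_estimate(x):
    """Crude linear estimate of the Al(x)Ga(1-x)As bandgap in eV.

    Interpolates between the endpoint values cited in the text
    (GaAs: 1.42 eV, AlAs: 2.16 eV). Ignores bowing and the
    direct/indirect crossover near x ~ 0.4; treat as a rough guide only.
    """
    if not 0.0 <= x <= 1.0:
        raise ValueError("Al fraction x must be between 0 and 1")
    return 1.42 + (2.16 - 1.42) * x

for x in (0.0, 0.2, 0.4, 1.0):
    print(f"x = {x:.1f}: Eg ~ {algaas_bandgap_estimate(x):.2f} eV")
```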
Aluminium gallium arsenide
[ "Chemistry" ]
406
[ "Inorganic compounds", "Semiconductor materials", "III-V semiconductors", "Light-emitting diode materials", "III-V compounds" ]
102,847
https://en.wikipedia.org/wiki/Solid-state%20physics
Solid-state physics is the study of rigid matter, or solids, through methods such as solid-state chemistry, quantum mechanics, crystallography, electromagnetism, and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties. Thus, solid-state physics forms a theoretical basis of materials science. Along with solid-state chemistry, it also has direct applications in the technology of transistors and semiconductors. Background Solid materials are formed from densely packed atoms, which interact intensely. These interactions produce the mechanical (e.g. hardness and elasticity), thermal, electrical, magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern (crystalline solids, which include metals and ordinary water ice) or irregularly (an amorphous solid such as common window glass). The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal — its defining characteristic — facilitates mathematical modeling. Likewise, crystalline materials often have electrical, magnetic, optical, or mechanical properties that can be exploited for engineering purposes. The forces between the atoms in a crystal can take a variety of forms. For example, in a crystal of sodium chloride (common salt), the crystal is made up of ionic sodium and chlorine, and held together with ionic bonds. In others, the atoms share electrons and form covalent bonds. In metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding. In solid form, the noble gases are held together with van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding. History The physical properties of solids have been common subjects of scientific inquiry for centuries, but a separate field going by the name of solid-state physics did not emerge until the 1940s, in particular with the establishment of the Division of Solid State Physics (DSSP) within the American Physical Society. The DSSP catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids. By the early 1960s, the DSSP was the largest division of the American Physical Society. Large communities of solid state physicists also emerged in Europe after World War II, in particular in England, Germany, and the Soviet Union. In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity, nuclear magnetic resonance, and diverse other phenomena. During the early Cold War, research in solid state physics was often not restricted to solids, which led some physicists in the 1970s and 1980s to found the field of condensed matter physics, which organized around common techniques used to investigate solids, liquids, plasmas, and other complex matter. Today, solid-state physics is broadly considered to be the subfield of condensed matter physics, often referred to as hard condensed matter, that focuses on the properties of solids with regular crystal lattices. 
Crystal structure and properties Many properties of materials are affected by their crystal structure. This structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction. The sizes of the individual crystals in a crystalline solid material vary depending on the material involved and the conditions when it was formed. Most crystalline materials encountered in everyday life are polycrystalline, with the individual crystals being microscopic in scale, but macroscopic single crystals can be produced either naturally (e.g. diamonds) or artificially. Real crystals feature defects or irregularities in the ideal arrangements, and it is these defects that critically determine many of the electrical and mechanical properties of real materials. Electronic properties Properties of materials such as electrical conduction and heat capacity are investigated by solid state physics. An early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid. By assuming that the material contains immobile positive ions and an "electron gas" of classical, non-interacting electrons, the Drude model was able to explain electrical and thermal conductivity and the Hall effect in metals, although it greatly overestimated the electronic heat capacity. Arnold Sommerfeld combined the classical Drude model with quantum mechanics in the free electron model (or Drude-Sommerfeld model). Here, the electrons are modelled as a Fermi gas, a gas of particles which obey the quantum mechanical Fermi–Dirac statistics. The free electron model gave improved predictions for the heat capacity of metals, however, it was unable to explain the existence of insulators. The nearly free electron model is a modification of the free electron model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. By introducing the idea of electronic bands, the theory explains the existence of conductors, semiconductors and insulators. The nearly free electron model rewrites the Schrödinger equation for the case of a periodic potential. The solutions in this case are known as Bloch states. Since Bloch's theorem applies only to periodic potentials, and since unceasing random movements of atoms in a crystal disrupt periodicity, this use of Bloch's theorem is only an approximation, but it has proven to be a tremendously valuable approximation, without which most solid-state physics analysis would be intractable. Deviations from periodicity are treated by quantum mechanical perturbation theory. Modern research Modern research topics in solid-state physics include: High-temperature superconductivity Quasicrystals Spin glass Strongly correlated materials Two-dimensional materials Nanomaterials See also Condensed matter physics Crystallography Nuclear spectroscopy Solid mechanics References Further reading Neil W. Ashcroft and N. David Mermin, Solid State Physics (Harcourt: Orlando, 1976). Charles Kittel, Introduction to Solid State Physics (Wiley: New York, 2004). H. M. Rosenberg, The Solid State (Oxford University Press: Oxford, 1995). Steven H. Simon, The Oxford Solid State Basics (Oxford University Press: Oxford, 2013). Out of the Crystal Maze. Chapters from the History of Solid State Physics, ed. Lillian Hoddeson, Ernest Braun, Jürgen Teichmann, Spencer Weart (Oxford: Oxford University Press, 1992). M. A. 
Omar, Elementary Solid State Physics (Revised Printing, Addison-Wesley, 1993). Condensed matter physics Metallurgy
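To make the Drude picture from the Electronic properties section above concrete, the sketch below evaluates the Drude DC conductivity sigma = n e^2 tau / m for an electron density and relaxation time typical of a simple metal; the numbers are order-of-magnitude placeholders rather than measured values for any particular material.

```python
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg

# Illustrative values roughly typical of a simple metal such as copper.
n = 8.5e28               # conduction electron density, m^-3
tau = 2.5e-14            # relaxation (mean free) time, s

sigma = n * e**2 * tau / m_e   # Drude DC conductivity, S/m
rho = 1.0 / sigma              # resistivity, ohm*m
print(f"sigma ~ {sigma:.2e} S/m, rho ~ {rho:.2e} ohm*m")
# Of order 1e7 S/m, comparable to the measured conductivity of good metals.
```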
Solid-state physics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,378
[ "Metallurgy", "Phases of matter", "Materials science", "Condensed matter physics", "nan", "Matter" ]
103,194
https://en.wikipedia.org/wiki/Leidenfrost%20effect
The Leidenfrost effect is a physical phenomenon in which a liquid, close to a solid surface of another body that is significantly hotter than the liquid's boiling point, produces an insulating vapor layer that keeps the liquid from boiling rapidly. Because of this repulsive force, a droplet hovers over the surface, rather than making physical contact with it. The effect is named after the German doctor Johann Gottlob Leidenfrost, who described it in A Tract About Some Qualities of Common Water. This is most commonly seen when cooking, when drops of water are sprinkled onto a hot pan. If the pan's temperature is at or above the Leidenfrost point, which is approximately for water, the water skitters across the pan and takes longer to evaporate than it would take if the water droplets had been sprinkled onto a cooler pan. Details The effect can be seen as drops of water are sprinkled onto a pan at various times as it heats up. Initially, as the temperature of the pan is just below , the water flattens out and slowly evaporates, or if the temperature of the pan is well below , the water stays liquid. As the temperature of the pan rises above , the water droplets hiss when touching the pan, and these droplets evaporate quickly. When the temperature exceeds the Leidenfrost point, the Leidenfrost effect appears. On contact with the pan, the water droplets bunch up into small balls of water and skitter around, lasting much longer than when the temperature of the pan was lower. This effect works until a much higher temperature causes any further drops of water to evaporate too quickly to cause this effect. The effect happens because, at temperatures at or above the Leidenfrost point, the bottom part of the water droplet vaporizes immediately on contact with the hot pan. The resulting gas suspends the rest of the water droplet just above it, preventing any further direct contact between the liquid water and the hot pan. As steam has much poorer thermal conductivity than the metal pan, further heat transfer between the pan and the droplet is slowed down dramatically. This also results in the drop being able to skid around the pan on the layer of gas just under it. The temperature at which the Leidenfrost effect appears is difficult to predict. Even if the volume of the drop of liquid stays the same, the Leidenfrost point may be quite different, with a complicated dependence on the properties of the surface, as well as any impurities in the liquid. Some research has been conducted into a theoretical model of the system, but it is quite complicated. The effect was also described by the Victorian steam boiler designer, William Fairbairn, in reference to its effect on massively reducing heat transfer from a hot iron surface to water, such as within a boiler. In a pair of lectures on boiler design, he cited the work of Pierre Hippolyte Boutigny (1798–1884) and Professor Bowman of King's College, London, in studying this. A drop of water that was vaporized almost immediately at persisted for 152 seconds at . Lower temperatures in a boiler firebox might evaporate water more quickly as a result; compare Mpemba effect. An alternative approach was to increase the temperature beyond the Leidenfrost point. Fairbairn considered this, too, and may have been contemplating the flash steam boiler, but considered the technical aspects insurmountable for the time. The Leidenfrost point may also be taken to be the temperature for which the hovering droplet lasts longest. 
It has been demonstrated that it is possible to stabilize the Leidenfrost vapor layer of water by exploiting superhydrophobic surfaces. In this case, once the vapor layer is established, cooling never collapses the layer, and no nucleate boiling occurs; the layer instead slowly relaxes until the surface is cooled. Droplets of different liquids with different boiling temperatures will also exhibit a Leidenfrost effect with respect to each other and repel each other. The Leidenfrost effect has been used for the development of high sensitivity ambient mass spectrometry. Under the influence of the Leidenfrost condition, the levitating droplet does not release molecules, and the molecules are enriched inside the droplet. At the last moment of droplet evaporation, all the enriched molecules release in a short time period and thereby increase the sensitivity. A heat engine based on the Leidenfrost effect has been prototyped; it has the advantage of extremely low friction. The effect also applies when the surface is at room temperature but the liquid is cryogenic, allowing liquid nitrogen droplets to harmlessly roll off exposed skin. Conversely, the inverse Leidenfrost effect lets drops of relatively warm liquid levitate on a bath of liquid nitrogen. Leidenfrost point The Leidenfrost point signifies the onset of stable film boiling. It represents the point on the boiling curve where the heat flux is at the minimum and the surface is completely covered by a vapor blanket. Heat transfer from the surface to the liquid occurs by conduction and radiation through the vapour. In 1756, Leidenfrost observed that water droplets supported by the vapor film slowly evaporate as they move about on the hot surface. As the surface temperature is increased, radiation through the vapor film becomes more significant and the heat flux increases with increasing excess temperature. The minimum heat flux for a large horizontal plate can be derived from Zuber's equation, where the properties are evaluated at saturation temperature. Zuber's constant, , is approximately 0.09 for most fluids at moderate pressures. Heat transfer correlations The heat transfer coefficient may be approximated using Bromley's equation, where is the outside diameter of the tube. The correlation constant C is 0.62 for horizontal cylinders and vertical plates, and 0.67 for spheres. Vapor properties are evaluated at film temperature. For stable film boiling on a horizontal surface, Berenson has modified Bromley's equation to yield, For vertical tubes, Hsu and Westwater have correlated the following equation, where m is the mass flow rate in at the upper end of the tube. At excess temperatures above that at the minimum heat flux, the contribution of radiation becomes appreciable, and it becomes dominant at high excess temperatures. The total heat transfer coefficient is thus a combination of the two. Bromley has suggested the following equations for film boiling from the outer surface of horizontal tubes: If , The effective radiation coefficient, can be expressed as, where is the emissivity of the solid and is the Stefan–Boltzmann constant. Pressure field in a Leidenfrost droplet The equation for the pressure field in the vapor region between the droplet and the solid surface can be solved for using the standard momentum and continuity equations using a Boundary layer model. In this model for the sake of simplicity in solving, a linear temperature profile and a parabolic velocity profile are assumed within the vapor phase. 
The heat transfer within the vapor phase is assumed to be through conduction. With these approximations, the Navier–Stokes equations can be solved to get the pressure field. Leidenfrost temperature and surface tension effects The Leidenfrost temperature is the property of a given set of solid–liquid pair. The temperature of the solid surface beyond which the liquid undergoes the Leidenfrost phenomenon is termed the Leidenfrost temperature. Calculation of the Leidenfrost temperature involves the calculation of the minimum film boiling temperature of a fluid. Berenson obtained a relation for the minimum film boiling temperature from minimum heat flux arguments. While the equation for the minimum film boiling temperature, which can be found in the reference above, is quite complex, the features of it can be understood from a physical perspective. One critical parameter to consider is the surface tension. The proportional relationship between the minimum film boiling temperature and surface tension is to be expected, since fluids with higher surface tension need higher quantities of heat flux for the onset of nucleate boiling. Since film boiling occurs after nucleate boiling, the minimum temperature for film boiling should have a proportional dependence on the surface tension. Henry developed a model for Leidenfrost phenomenon which includes transient wetting and microlayer evaporation. Since the Leidenfrost phenomenon is a special case of film boiling, the Leidenfrost temperature is related to the minimum film boiling temperature via a relation which factors in the properties of the solid being used. While the Leidenfrost temperature is not directly related to the surface tension of the fluid, it is indirectly dependent on it through the film boiling temperature. For fluids with similar thermophysical properties, the one with higher surface tension usually has a higher Leidenfrost temperature. For example, for a saturated water–copper interface, the Leidenfrost temperature is . The Leidenfrost temperatures for glycerol and common alcohols are significantly smaller because of their lower surface tension values (density and viscosity differences are also contributing factors.) Reactive Leidenfrost effect Non-volatile materials were discovered in 2015 to also exhibit a 'reactive Leidenfrost effect', whereby solid particles were observed to float above hot surfaces and skitter around erratically. Detailed characterization of the reactive Leidenfrost effect was completed for small particles of cellulose (~0.5 mm) on high temperature polished surfaces by high speed photography. Cellulose was shown to decompose to short-chain oligomers which melt and wet smooth surfaces with increasing heat transfer associated with increasing surface temperature. Above , cellulose was observed to exhibit transition boiling with violent bubbling and associated reduction in heat transfer. Liftoff of the cellulose droplet (depicted at the right) was observed to occur above about , associated with a dramatic reduction in heat transfer. High speed photography of the reactive Leidenfrost effect of cellulose on porous surfaces (macroporous alumina) was also shown to suppress the reactive Leidenfrost effect and enhance overall heat transfer rates to the particle from the surface. 
The new phenomenon of a 'reactive Leidenfrost (RL) effect' was characterized by a dimensionless quantity, (φRL= τconv/τrxn), which relates the time constant of solid particle heat transfer to the time constant of particle reaction, with the reactive Leidenfrost effect occurring for 10−1< φRL< 10+1. The reactive Leidenfrost effect with cellulose will occur in numerous high temperature applications with carbohydrate polymers, including biomass conversion to biofuels, preparation and cooking of food, and tobacco use. The Leidenfrost effect has also been used as a means to promote chemical change of various organic liquids through their conversion by thermal decomposition into various products. Examples include decomposition of ethanol, diethyl carbonate, and glycerol. In popular culture In Jules Verne's 1876 book Michael Strogoff, the protagonist is saved from being blinded with a hot blade by evaporating tears. In the 2009 season 7 finale of MythBusters, "Mini Myth Mayhem", the team demonstrated that a person can wet their hand and briefly dip it into molten lead without injury, using the Leidenfrost effect as the scientific basis. See also Critical heat flux Region-beta paradox References External links Essay about the effect and demonstrations by Jearl Walker (PDF) Site with high-speed video, pictures and explanation of film-boiling by Heiner Linke at the University of Oregon, USA "Scientists make water run uphill" by BBC News about using the Leidenfrost effect for cooling of computer chips. "Uphill Water" – ABC Catalyst story "Leidenfrost Maze" – University of Bath undergraduate students Carmen Cheng and Matthew Guy "When Water Flows Uphill" – Science Friday with Univ. of Bath professor Kei Takashina Carolyn Embach, ResearchGate: English translation of Johan Gottlob Leidenfrost, De aquae communes nonnullis qualitatibus tractatus, Duisburg on Rhine, 1756. (Carolyn S. E. Wares aka Carolyn Embach, translator, 1964) Physical phenomena Heat transfer Articles containing video clips
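The minimum-heat-flux and film-boiling correlations discussed above can be evaluated numerically. The article's own equations were not preserved here, so the sketch below uses the standard textbook forms of Zuber's minimum heat flux and Bromley's film-boiling coefficient (without the latent-heat correction), with approximate saturated-water property values chosen purely for demonstration; treat both the forms and the numbers as assumptions.

```python
g = 9.81  # m/s^2

def zuber_q_min(rho_l, rho_v, h_fg, sigma, C=0.09):
    """Zuber minimum heat flux for film boiling on a large horizontal plate, W/m^2.
    Standard textbook form; C ~ 0.09 for most fluids at moderate pressure."""
    return C * rho_v * h_fg * (sigma * g * (rho_l - rho_v) / (rho_l + rho_v) ** 2) ** 0.25

def bromley_h(k_v, rho_v, rho_l, h_fg, mu_v, D, dT, C=0.62):
    """Bromley film-boiling convection coefficient for a horizontal cylinder, W/m^2K.
    Standard textbook form; C = 0.62 for horizontal cylinders."""
    return C * (k_v**3 * rho_v * (rho_l - rho_v) * g * h_fg / (mu_v * D * dT)) ** 0.25

# Approximate saturated-water / vapor-film properties (illustrative only).
q_min = zuber_q_min(rho_l=958.0, rho_v=0.60, h_fg=2.257e6, sigma=0.059)
h_film = bromley_h(k_v=0.025, rho_v=0.60, rho_l=958.0, h_fg=2.257e6,
                   mu_v=1.2e-5, D=0.01, dT=400.0)
print(f"Zuber minimum heat flux  ~ {q_min/1000:.0f} kW/m^2")   # roughly 20 kW/m^2
print(f"Bromley film coefficient ~ {h_film:.0f} W/m^2K")       # of order 100 W/m^2K
```

The low film-boiling coefficient compared with nucleate boiling is the quantitative face of the insulating vapor layer that makes Leidenfrost droplets so long-lived.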
Leidenfrost effect
[ "Physics", "Chemistry" ]
2,464
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics" ]
103,437
https://en.wikipedia.org/wiki/Peptide%20nucleic%20acid
Peptide nucleic acid (PNA) is an artificially synthesized polymer similar to DNA or RNA. Synthetic peptide nucleic acid oligomers have been used in recent years in molecular biology procedures, diagnostic assays, and antisense therapies. Due to their higher binding strength, it is not necessary to design long PNA oligomers for use in these roles, which usually require oligonucleotide probes of 20–25 bases. The main concern of the length of the PNA-oligomers is to guarantee the specificity. PNA oligomers also show greater specificity in binding to complementary DNAs, with a PNA/DNA base mismatch being more destabilizing than a similar mismatch in a DNA/DNA duplex. This binding strength and specificity also applies to PNA/RNA duplexes. PNAs are not easily recognized by either nucleases or proteases, making them resistant to degradation by enzymes. PNAs are also stable over a wide pH range. Though an unmodified PNA cannot readily cross the cell membrane to enter the cytosol, covalent coupling of a cell penetrating peptide to a PNA can improve cytosolic delivery. PNA is not known to occur naturally but N-(2-aminoethyl)-glycine (AEG), the backbone of PNA, has been hypothesized to be an early form of genetic molecule for life on Earth and is produced by cyanobacteria and is a neurotoxin. PNA was invented by Peter E. Nielsen (Univ. Copenhagen), Michael Egholm (Univ. Copenhagen), Rolf H. Berg (Risø National Lab), and Ole Buchardt (Univ. Copenhagen) in 1991. Structure DNA and RNA have a deoxyribose and ribose sugar backbone, respectively, whereas PNA's backbone is composed of repeating N-(2-aminoethyl)-glycine units linked by peptide bonds. The various purine and pyrimidine bases are linked to the backbone by a methylene bridge (--) and a carbonyl group (-(C=O)-). PNAs are depicted like peptides, with the N-terminus at the first (left) position and the C-terminus at the last (right) position. Binding Since the backbone of PNA contains no charged phosphate groups, the binding between PNA/DNA strands is stronger than between DNA/DNA strands due to the lack of electrostatic repulsion. Unfortunately, this also causes it to be rather hydrophobic, which makes it difficult to deliver to body cells in solution without being flushed out of the body first. Early experiments with homopyrimidine strands (strands consisting of only one repeated pyrimidine base) have shown that the Tm ("melting" temperature) of a 6-base thymine PNA/adenine DNA double helix was 31 °C in comparison to an equivalent 6-base DNA/DNA duplex that denatures at a temperature less than 10 °C. Mixed base PNA molecules are true mimics of DNA molecules in terms of base-pair recognition. PNA/PNA binding is stronger than PNA/DNA binding. PNA translation from other nucleic acids Several labs have reported sequence-specific polymerization of peptide nucleic acids from DNA or RNA templates. Liu and coworkers used these polymerization methods to evolve functional PNAs with the ability to fold into three-dimensional structures, similar to proteins, aptamers and ribozymes. Delivery In 2015, Jain et al. described a trans-acting DNA-based amphiphatic delivery system for convenient delivery of poly A tailed uncharged nucleic acids (UNA) such as PNAs and morpholinos, so that several UNA's can be easily screened ex vivo. 
PNA world hypothesis It has been hypothesized that the earliest life on Earth may have used PNA as a genetic material due to its extreme robustness, simpler formation, and possible spontaneous polymerization at 100 °C (while water at standard pressure boils at this temperature, water at high pressure—as in deep ocean—boils at higher temperatures). If this is so, life evolved to a DNA/RNA-based system only at a later stage. Evidence for this PNA world hypothesis is, however, far from conclusive. If it existed though, it must have preceded the widely accepted RNA world. Applications Applications include alteration of gene expression - both as inhibitor and promoter in different cases, antigene and antisense therapeutic agent, anticancer agent, antiviral, antibacterial and antiparasitic agent, molecular tools and probes of biosensor, detection of DNA sequences, and nanotechnology. PNAs can be used to improve high-throughput 16S ribosomal RNA gene sequencing of plant and soil samples by blocking amplification of contaminant plastid and mitochondrial sequences. Cellular – Functional Antagonism/Inhibition. In 2001, Strauss and colleagues reported the design of an application for PNA oligomers in living mammalian cells. The Xist chromatin binding region was first elucidated in female mouse fibroblastic cells, and embryonic stem cells though the use of a PNA molecular antagonist. The novel PNA approach directly demonstrated function of a lncRNA. The long non-coding (lncRNA) RNA, Xist directly binds to the inactive X-chromosome. Functional PNA inhibition experiments revealed that specific repeat regions of the Xist RNA were responsible for chromatin binding, and hence could be considered domain regions of the RNA transcript. The PNA molecular antagonist was administered to living cells and functionally inhibited the association of Xist with inactive X-chromosome using the approach for studying noncoding RNA function in living cells called peptide nucleic acid (PNA) interference mapping. In the reported experiments, a single 19-bp antisense cell-permeating PNA targeted against a particular region of Xist RNA caused the disruption of the Xi. The association of the Xi with macro-histone H2A is also disturbed by PNA interference mapping. See also Clicked peptide polymer Glycol nucleic acid Oligonucleotide synthesis Peptide synthesis Threose nucleic acid References Further reading Nucleic acids Origin of life
Peptide nucleic acid
[ "Chemistry", "Biology" ]
1,326
[ "Biological hypotheses", "Biomolecules by chemical classification", "Origin of life", "Nucleic acids" ]
20,490,276
https://en.wikipedia.org/wiki/Glidant
A glidant is a substance that is added to a powder to improve its flowability. A glidant will only work at a certain range of concentrations. Above a certain concentration, the glidant will in fact function to inhibit flowability. In tablet manufacture, glidants are usually added just prior to compression. Examples Examples of glidants include ascorbyl palmitate, calcium palmitate, magnesium stearate, fumed silica (colloidal silicon dioxide), starch and talc. Mechanism of action A glidant's effect is due to the counter-action of factors that cause poor flowability of powders. For instance, correcting surface irregularity, reducing interparticular friction and decreasing surface charge. The result is a decrease in the angle of repose which is an indication of an enhanced powder's flowability. References Chemical engineering Drug manufacturing Granularity of materials Powders
Glidant
[ "Physics", "Chemistry", "Engineering" ]
193
[ "Chemical engineering", "Materials stubs", "Materials", "Powders", "nan", "Particle technology", "Granularity of materials", "Matter" ]
20,494,183
https://en.wikipedia.org/wiki/Avulsion%20%28river%29
In sedimentary geology and fluvial geomorphology, avulsion is the rapid abandonment of a river channel and the formation of a new river channel. Avulsions occur as a result of channel slopes that are much less steep than the slope that the river could travel if it took a new course. Deltaic and net-depositional settings Avulsions are common in river deltas, where sediment deposits as the river enters the ocean and channel gradients are typically very small. This process is also known as delta switching. Deposition from the river results in the formation of an individual deltaic lobe that pushes out into the sea. An example of a deltaic lobe is the bird's-foot delta of the Mississippi River, pictured at right with its sediment plumes. As the deltaic lobe advances, the slope of the river channel becomes lower, as the river channel is longer but has the same change in elevation. As the slope of the river channel decreases, it becomes unstable for two reasons. First, water under the force of gravity will tend to flow in the most direct course downslope. If the river could breach its natural levees (i.e., during a flood), it would spill out onto a new course with a shorter route to the ocean, thereby obtaining a more stable steeper slope. Second, as its slope is reduced, the amount of shear stress on the bed will decrease, resulting in deposition of more sediment within the channel and thus raising of the channel bed relative to the floodplain. This will make it easier for the river to breach its levees and cut a new channel that enters the ocean at a steeper slope. When this avulsion occurs, the new channel carries sediment out to the ocean, building a new deltaic lobe. The abandoned delta eventually subsides. This process is also related to the distributary network of river channels that can be observed within a river delta. When the channel does this, some of its flow can remain in the abandoned channel. When these channel switching events happen repeatedly over time, a mature delta will gain a distributary network. Subsidence of the delta and/or sea-level rise can further cause backwater and deposition in the delta. This deposition fills the channels and leaves a geologic record of channel avulsion in sedimentary basins. On average, an avulsion will occur every time the bed of a river channel aggrades enough that the river channel is superelevated above the floodplain by one channel-depth. In this situation, enough hydraulic head is available that any breach of the natural levees will result in an avulsion. Erosional avulsions Rivers can also avulse due to the erosion of a new channel that creates a straighter path through the landscape. This can happen during large floods in situations in which the slope of the new channel is significantly greater than that of the old channel. Where the new channel's slope is about the same as the old channel's slope, a partial avulsion will occur in which both channels are occupied by flow. An example of an erosional avulsion is the 2006 avulsion of the Suncook River in New Hampshire, in which heavy rains caused flow levels to rise. The river level backed up behind an old mill dam, which produced a shallowly-sloping pool that overtopped a sand and gravel quarry, connected with a downstream section of channel, and cut a new shorter channel at 25–50 meters per hour. 
Sediment mobilised by this erosional avulsion produced a depositionally-forced meander cutoff further downstream by superelevating the bed around the meander bend to nearly the level of the floodplain. Another example is the Cheslatta River, once a small tributary of the Nechako River in British Columbia. In the 1950s the Cheslatta River was made to be the spillway of the then new Nechako Reservoir. The discharge of the spills far exceeds the original flow of the Cheslatta River, which has resulted in major erosion in the upper Cheslatta valley, with the scoured sediment being deposited in the lower valley. Large reservoir spills caused the lower Cheslatta River to avulse in 1961 and again in 1972, carving a new route to the Nechako River and depositing a fan of sediment called the Cheslatta Fan in the Nechako River. After 1972 a cofferdam was built to restore the river to its original course. Meander cutoffs An example of a minor avulsion is known as a meander cutoff, when a pronounced meander (hook) in a river is breached by a flow that connects the two closest parts of the hook to form a new channel. This occurs when the ratio between the channel slope and the potential slope after an avulsion is less than about 1/5. Occurrence Avulsion typically occurs during large floods which carry the power necessary to rapidly change the landscape. Dam removal could also lead to avulsion. Avulsions usually occur as a downstream to upstream process via head cutting erosion. If a bank of a current stream is breached a new trench will be cut into the existing floodplain. It either cuts through floodplain deposits or reoccupies an old channel. Avulsions have been investigated in deltas or coastal plain channels as a result of obstructions such as log-jams and possible tectonic influences. See also References Geomorphology Sedimentology Hydraulic engineering Rivers Coastal geography Water streams Geological processes
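The two triggers discussed above, superelevation of the aggrading channel bed and the slope advantage of a shorter path to the sea, can be checked with simple arithmetic. The sketch below is purely illustrative: the one-channel-depth superelevation criterion is the rule of thumb quoted in the text, while the channel geometry and path lengths are made-up numbers.

```python
def superelevation_ratio(bed_height_above_floodplain, channel_depth):
    """Avulsion setup criterion from the text: a ratio >= 1 means the bed is
    superelevated above the floodplain by at least one channel depth."""
    return bed_height_above_floodplain / channel_depth

def slope_advantage(drop, old_path_length, new_path_length):
    """Ratio of the slope of a potential new (shorter) course to the old course,
    for the same elevation drop to base level."""
    return (drop / new_path_length) / (drop / old_path_length)

# Made-up example: a 3 m deep channel whose bed has aggraded 3.2 m above the
# floodplain, with a possible breach route 60 km long versus a 150 km present course.
print("superelevation ratio:", superelevation_ratio(3.2, 3.0))               # > 1
print("slope advantage of new course:", slope_advantage(20.0, 150e3, 60e3))  # 2.5x steeper
```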
Avulsion (river)
[ "Physics", "Engineering", "Environmental_science" ]
1,133
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
20,496,245
https://en.wikipedia.org/wiki/Pound%E2%80%93Drever%E2%80%93Hall%20technique
The Pound–Drever–Hall (PDH) technique is a widely used and powerful approach for stabilizing the frequency of light emitted by a laser by means of locking to a stable cavity. The PDH technique has a broad range of applications including interferometric gravitational wave detectors, atomic physics, and time measurement standards, many of which also use related techniques such as frequency modulation spectroscopy. Named after R. V. Pound, Ronald Drever, and John L. Hall, the PDH technique was described in 1983 by Drever, Hall and others working at the University of Glasgow and the U. S. National Bureau of Standards. This optical technique has many similarities to an older frequency-modulation technique developed by Pound for microwave cavities. Since a wide range of conditions contribute to determine the linewidth produced by a laser, the PDH technique provides a means to control and decrease the laser's linewidth, provided an optical cavity that is more stable than the laser source. Alternatively, if a stable laser is available, the PDH technique can be used to stabilize and/or measure the instabilities in an optical cavity length. The PDH technique responds to the frequency of laser emission independently of intensity, which is significant because many other methods that control laser frequency, such as a side-of-fringe lock are also affected by intensity instabilities. Laser stabilization In recent years the Pound–Drever–Hall technique has become a mainstay of laser frequency stabilization. Frequency stabilization is needed for high precision because all lasers demonstrate frequency wander at some level. This instability is primarily due to temperature variations, mechanical imperfections, and laser gain dynamics, which change laser cavity lengths, laser driver current and voltage fluctuations, atomic transition widths, and many other factors. PDH locking offers one possible solution to this problem by actively tuning the laser to match the resonance condition of a stable reference cavity. The ultimate linewidth obtained from PDH stabilization depends on a number of factors. From a signal analysis perspective, the noise on the locking signal can not be any lower than that posed by the shot noise limit. However, this constraint dictates how closely the laser can be made to follow the cavity. For tight locking conditions, the linewidth depends on the absolute stability of the cavity, which can reach the limits imposed by thermal noise. Using the PDH technique, optical linewidths below 40 mHz have been demonstrated. Applications Prominently, the field of interferometric gravitational wave detection depends critically on enhanced sensitivity afforded by optical cavities. The PDH technique is also used when narrow spectroscopic probes of individual quantum states are required, such as atomic physics, time measurement standards, and quantum computers. Overview of technique Phase modulated light, consisting of a carrier frequency and two side bands, is directed onto a two-mirror cavity. Light reflected off the cavity is measured using a high speed photodetector; the reflected signal consists of the two unaltered side bands along with a phase-shifted carrier component. The photodetector signal is mixed down with a local oscillator, which is in phase with the light modulation. After phase shifting and filtering, the resulting electronic signal gives a measure of how far the laser carrier is off resonance with the cavity and may be used as feedback for active stabilization. 
The feedback is typically carried out using a PID controller which takes the PDH error signal readout and converts it into a voltage that can be fed back to the laser to keep it locked on resonance with the cavity. The main innovation of the PDH technique is to monitor the derivative of the cavity transmission with respect to detuning, rather than the cavity transmission itself, which is symmetric about the resonant frequency. Unlike a side-of-fringe lock, this allows the sign of the feedback signal to be correctly determined on both sides of resonance. The derivative is measured via rapid modulation of the input signal and subsequent mixing with the drive waveform, much as in electron paramagnetic resonance. PDH readout function The PDH readout function gives a measure of the resonance condition of a cavity. By taking the derivative of the cavity transfer function (which is symmetric and even) with respect to frequency, it is an odd function of frequency and hence indicates not only whether there is a mismatch between the output frequency ω of the laser and the resonant frequency ωres of the cavity, but also whether ω is greater or less than ωres. The zero-crossing of the readout function is sensitive only to intensity fluctuations due to the frequency of light in the cavity and insensitive to intensity fluctuations from the laser itself. Light of frequency can be represented mathematically by its electric field, E0eiωt. If this light is then phase-modulated by βsin(ωmt), where ωm is the modulation frequency and β is the modulation depth, the resulting field Ei is This field may be regarded as the superposition of three frequency components. The first component is an electric field of angular frequency ω, known as the carrier, and the second and third components are fields of angular frequency and , respectively, called the sidebands. In general, the light Er reflected out of a Fabry–Pérot two-mirror cavity is related to the light Ei incident on the cavity by the following transfer function: where , and where r1 and r2 are the reflection coefficients of mirrors 1 and 2 of the cavity, and t1 and t2 are the transmission coefficients of the mirrors. Applying this transfer function to the phase-modulated light Ei gives the reflected light Er: The power Pr of the reflected light is proportional to the square magnitude of the electric field, Er* Er, which after some algebraic manipulation can be shown to be Here P0 ∝ |E0|2 is the power of the light incident on the Fabry–Pérot cavity, and χ is defined by This χ is the ultimate quantity of interest; it is an antisymmetric function of . It can be extracted from Pr by demodulation. First, the reflected beam is directed onto a photodiode, which produces a voltage Vr that is proportional to Pr. Next, this voltage is mixed with a phase-delayed version of the original modulation voltage to produce V′r: Finally, V′r is sent through a low-pass filter to remove any sinusoidally oscillating terms. This combination of mixing and low-pass filtering produces a voltage V that contains only the terms involving χ: In theory, χ can be completely extracted by setting up two demodulation paths, one with and another with . In practice, by judicious choice of ωm it is possible to make χ almost entirely real or almost entirely imaginary, so that only one demodulation path is necessary. V(ω), with appropriately chosen φ, is the PDH readout signal. Notes References Synchronization Optical devices
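As a numerical illustration of the readout function described above, the following sketch (my own, not part of the article) evaluates the standard textbook form of the PDH error signal for a lossless, symmetric two-mirror Fabry–Pérot cavity in the fast-modulation regime, following the widely used tutorial treatment (e.g., E. Black's American Journal of Physics tutorial). The reflectivity, free spectral range, and modulation frequency are arbitrary assumed values.

```python
import numpy as np

def cavity_reflection(delta, r):
    """Reflection coefficient of a lossless, symmetric Fabry-Perot cavity.

    delta : round-trip phase detuning (2*pi*nu/FSR); r : mirror amplitude reflectivity.
    """
    return r * (np.exp(1j * delta) - 1.0) / (1.0 - r**2 * np.exp(1j * delta))

# Assumed illustration parameters (not from the article)
r = 0.99       # mirror amplitude reflectivity
fsr = 1.5e9    # free spectral range, Hz
f_mod = 20e6   # modulation frequency, Hz (side bands well outside the cavity linewidth)

nu = np.linspace(-5e6, 5e6, 4001)   # laser detuning from resonance, Hz
d = 2 * np.pi * nu / fsr            # carrier round-trip phase
dm = 2 * np.pi * f_mod / fsr        # side-band phase offset

F0 = cavity_reflection(d, r)
Fp = cavity_reflection(d + dm, r)
Fm = cavity_reflection(d - dm, r)

# PDH error signal, up to an overall scale set by the carrier and side-band powers:
# antisymmetric in detuning, with a steep zero crossing at resonance.
error = np.imag(F0 * np.conj(Fp) - np.conj(F0) * Fm)

print("error at resonance:", error[len(error) // 2])  # ~0 at nu = 0
```

Plotting `error` against `nu` reproduces the familiar antisymmetric PDH curve; its steep, sign-changing zero crossing at resonance is what makes it usable as a feedback signal for a servo loop.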
Pound–Drever–Hall technique
[ "Materials_science", "Engineering" ]
1,425
[ "Glass engineering and science", "Telecommunications engineering", "Optical devices", "Synchronization" ]
20,497,041
https://en.wikipedia.org/wiki/Sonoporation
Sonoporation, or cellular sonication, is the use of sound in the ultrasonic range for increasing the permeability of the cell plasma membrane. This technique is usually used in molecular biology and non-viral gene therapy in order to allow uptake of large molecules such as DNA into the cell, in a cell disruption process called transfection or transformation. Sonoporation employs the acoustic cavitation of microbubbles to enhance delivery of these large molecules. The exact mechanism of sonoporation-mediated membrane translocation remains unclear, with a few different hypotheses currently being explored. Sonoporation is under active study for the introduction of foreign genes in tissue culture cells, especially mammalian cells. Sonoporation is also being studied for use in targeted Gene therapy in vivo, in a medical treatment scenario whereby a patient is given modified DNA, and an ultrasonic transducer might target this modified DNA into specific regions of the patient's body. The bioactivity of this technique is similar to, and in some cases found superior to, electroporation. Extended exposure to low-frequency (<MHz) ultrasound has been demonstrated to result in complete cellular death (rupturing), thus cellular viability must also be accounted for when employing this technique. Equipment Sonoporation is performed with a dedicated sonoporator. Sonoporation may also be performed with custom-built piezoelectric transducers connected to bench-top function generators and acoustic amplifiers. Standard ultrasound medical devices may also be used in some applications. Measurement of the acoustics used in sonoporation is listed in terms of mechanical index, which quantifies the likelihood that exposure to diagnostic ultrasound will produce an adverse biological effect by a non-thermal action based on pressure. Microbubble contrast agents Microbubble contrast agents are generally used in contrast-enhanced ultrasound applications to enhance the acoustic impact of ultrasound. For sonoporation specifically, microbubbles are used to significantly enhance membrane translocation of molecular therapeutics. General features The microbubbles used today are composed of a gas core and a surrounding shell. The makeup of these elements may vary depending on the preferred physical and chemical properties. Microbubble shells have been formed with lipids, galactose, albumin, or polymers. The gas core can be made up of air or heavy gases like nitrogen or perfluorocarbon. Mechanism of action Microbubble gas cores have high compressibility relative to their liquid environment, making them highly responsive to acoustic application. As a result of ultrasound stimulation, microbubbles undergo expansion and contraction, a phenomenon called stable cavitation. If a microbubble is attached to the cell membrane, the microbubble oscillations produced by ultrasound stimulation may push and pull on the membrane to produce a membrane opening. These rapid oscillations are also responsible for adjacent fluid flow called microstreaming which increases pressure on surrounding cells producing further sonoporation to whole cell populations. The physical mechanisms supposedly involved with microbubble-enhanced sonoporation have been referred to as push, pull, microstreaming, translation, and jetting. Membrane translocation mechanism The mechanism by which molecules cross cellular membrane barriers during sonoporation remains unclear. 
Different theories exist that may potentially explain barrier permeabilization and molecular delivery. The dominant hypotheses include pore formation, endocytosis, and membrane wounds. Pore formation Pore formation following ultrasound application was first reported in 1999 in a study that observed cell membrane craters following ultrasound application at 255 kHz. Later, sonoporation mediated microinjection of dextran molecules showed that membrane permeability mechanisms differ depending on the size of dextran molecules. Microinjection of dextran molecules from 3 to 70 kDa was reported to have crossed the cellular membrane via transient pores. In contrast, dextran molecules of 155 and 500 kDa were predominantly found in vesicle-like structures, likely indicating the mechanism of endocytosis. This variability in membrane behavior has led to other studies investigating membrane rupture and resealing characteristics depending on ultrasound amplitude and duration. Endocytosis Various cellular reactions to ultrasound indicate the mechanism of molecular uptake via endocytosis. These observed reactionary phenomena include ion exchange, hydrogen peroxide, and cell intracellular calcium concentration. Studies have used patch clamping techniques to monitor membrane potential ion exchange for the role of endocytosis in sonoporation. Ultrasound application to cells and adjacent microbubbles was shown to produce marked cell membrane hyperpolarization along with progressive intracellular calcium increase, which is believed to be a consequence of calcium channels opening in response to microbubble oscillations. These findings act as support for ultrasound application inducing calcium-mediated uncoating of clathrin-coated pits seen in traditional endocytosis pathways. Other work reported sonoporation induced the formation of hydrogen peroxide, a cellular reaction that is also known to be involved with endocytosis. Membrane wounds Mechanically created wounds in the plasma membrane have been observed as a result of sonoporation-produced shear forces. The nature of these wounds may vary based on the degree of acoustic cavitation leading to a spectrum of cell behavior, from membrane blebbing to instant cell lysis. Multiple studies examining membrane wounds note observing resealing behavior, a process dependent on recruitment of ATP and intracellular vesicles. Membrane resealing Following sonoporation-mediated membrane permeabilization, cells can automatically repair the membrane openings through a phenomenon called "reparable sonoporation." The membrane resealing process has been shown to be calcium-dependent. This property may suggest that the membrane repair process involves a cell's active repair mechanism in response to the cellular influx of calcium. Preclinical studies In vitro The first study reporting molecular delivery using ultrasound was a 1987 in vitro study attempting to transfer plasmid DNA to cultured mouse fibroblast cells using sonoporation. This successful plasmid DNA transfection conferring G418 antibiotic resistance ultimately led to further in vitro studies that hinted at the potential for sonoporation transfection of plasmid DNA and siRNA in vivo. In vivo In vivo ultrasound mediated drug delivery was first reported in 1991 and many other preclinical studies involving sonoporation have followed. This method is being used to deliver therapeutic drugs or genes to treat a variety of diseases including: Stroke, Cancer, Parkinson's, Alzheimer's... 
The preclinical utility of sonoporation is well illustrated through past tumor radiation treatments, which have reported a more than 10-fold increase in cellular destruction when ionizing radiation is coupled with ultrasound-mediated microbubble vascular disruption. This increase in delivery efficiency could allow for an appropriate reduction in therapeutic dosing. References Biotechnology Molecular biology
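The Equipment section above notes that acoustic exposure is reported as a mechanical index. As a hedged aside, the small helper below uses the standard diagnostic-ultrasound convention (derated peak negative pressure in MPa divided by the square root of the centre frequency in MHz); this definition and the example numbers are not taken from this article.

```python
def mechanical_index(peak_negative_pressure_mpa: float, center_frequency_mhz: float) -> float:
    """Mechanical index as commonly defined for diagnostic ultrasound:
    peak negative (rarefactional) pressure in MPa divided by sqrt(centre frequency in MHz).
    """
    return peak_negative_pressure_mpa / center_frequency_mhz ** 0.5

# Example with assumed values: 1.0 MPa peak negative pressure at a 1.0 MHz centre frequency
print(mechanical_index(1.0, 1.0))  # -> 1.0
```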
Sonoporation
[ "Chemistry", "Biology" ]
1,414
[ "Biochemistry", "nan", "Biotechnology", "Molecular biology" ]
3,399,064
https://en.wikipedia.org/wiki/Total%20air%20temperature
In aviation, stagnation temperature is known as total air temperature and is measured by a temperature probe mounted on the surface of the aircraft. The probe is designed to bring the air to rest relative to the aircraft. As the air is brought to rest, kinetic energy is converted to internal energy. The air is compressed and experiences an adiabatic increase in temperature. Therefore, total air temperature is higher than the static (or ambient) air temperature. Total air temperature is an essential input to an air data computer in order to enable the computation of static air temperature and hence true airspeed. The relationship between static and total air temperatures is given by: where: static air temperature, SAT (kelvins or degrees Rankine) total air temperature, TAT (kelvins or degrees Rankine) Mach number ratio of specific heats, approx 1.400 for dry air In practice, the total air temperature probe will not perfectly recover the energy of the airflow, and the temperature rise may not be entirely due to adiabatic process. In this case, an empirical recovery factor (less than 1) may be introduced to compensate: where e is the recovery factor (also noted Ct) Typical recovery factors Platinum wire ratiometer thermometer ("flush bulb type"): e ≈ 0.75 − 0.9 Double platinum tube ratiometer thermometer ("TAT probe"): e ≈ 1 Other notations Total air temperature (TAT) is also called: indicated air temperature (IAT) or ram air temperature (RAT) Static air temperature (SAT) is also called: outside air temperature (OAT) or true air temperature Ram rise The difference between TAT and SAT is called ram rise (RR) and is caused by compressibility and friction of the air at high velocities. In practice the ram rise is negligible for aircraft flying at (true) airspeeds under Mach 0.2. For airspeeds (TAS) over Mach 0.2, as airspeed increases the temperature exceeds that of still air. This is caused by a combination of kinetic (friction) heating and adiabatic compression. Kinetic heating. As the airspeed increases, more and more molecules of air per second hit the aircraft. This causes a temperature rise in the Direct Reading thermometer probe of the aircraft due to friction. Because the airflow is thought to be compressible and isentropic, which, by definition, is adiabatic and reversible, the equations used in this article do not take account of friction heating. This is why the calculation of static air temperature requires the use of the recovery factor, . Kinetic heating for modern passenger jets is almost negligible. Adiabatic compression. As described above, this is caused by a conversion of energy and not by direct application of heat. At airspeeds over Mach 0.2, in the Remote Reading temperature probe (TAT-probe), the outside airflow, which may be several hundred knots, is brought virtually to rest very rapidly. The energy (Specific Kinetic Energy) of the moving air is then released (converted) in the form of a temperature rise (Specific Enthalpy). Energy cannot be destroyed but only transformed; this means that according to the first law of thermodynamics, the total energy of an isolated system must remain constant. The total of kinetic heating and adiabatic temperature change (caused by adiabatic compression) is the Total Ram Rise. Combining equations () & (), we get: If we use the Mach number equation for dry air: where , we get Which can be simplified to: by using and local speed of sound. adiabatic index (ratio of heat capacities) and is assumed for aviation purposes to be 7/5 = 1.400. 
specific gas constant. The approximate value of for dry air is 286.9 J·kg−1·K−1. heat capacity constant for constant pressure. heat capacity constant for constant volume. static air temperature, SAT, measured in kelvins. true airspeed of the aircraft, TAS. recovery factor, which has an approximate value of 0.98, typical for a modern TAT-probe. By solving (3) for the above values with TAS in knots, a simple accurate formula for ram rise is then: See also Stagnation point Stagnation temperature Outside air temperature Mach number Speed of sound Adiabatic process Isentropic process Specific enthalpy External links In-Flight Temperature Measurements Measurement of Temperature on Aircraft TAT Sensor Operation and Equations TAT Sensor Heater Error Effect High speed flight - Viscous Interaction Atmospheric thermodynamics Aircraft instruments Atmospheric temperature
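The relations above can be collected into a short script. This is a sketch of mine (not from the article): TAT follows from SAT and Mach number as SAT·(1 + e·(γ−1)/2·M²), and the ram rise follows from the true airspeed via TAT − SAT = e·TAS²/(2·cp). The flight conditions in the example are arbitrary; the recovery factor 0.98 is the typical TAT-probe value quoted above.

```python
import math

GAMMA = 1.4          # ratio of specific heats for dry air
R = 286.9            # specific gas constant for dry air, J/(kg*K)
KT_TO_MS = 0.514444  # knots to metres per second

def total_air_temperature(sat_kelvin: float, mach: float, recovery: float = 1.0) -> float:
    """TAT from SAT and Mach number: TAT = SAT * (1 + e*(gamma - 1)/2 * M**2)."""
    return sat_kelvin * (1.0 + recovery * (GAMMA - 1.0) / 2.0 * mach**2)

def ram_rise_from_tas(tas_knots: float, recovery: float = 0.98) -> float:
    """Ram rise in kelvins from true airspeed, using TAT - SAT = e * TAS**2 / (2*cp),
    with cp = gamma*R/(gamma - 1) for dry air (about 1004 J/(kg*K))."""
    cp = GAMMA * R / (GAMMA - 1.0)
    v = tas_knots * KT_TO_MS
    return recovery * v**2 / (2.0 * cp)

# Assumed example: SAT = 250 K at Mach 0.8, measured with a TAT probe (e = 0.98)
sat, mach = 250.0, 0.8
print("TAT:", round(total_air_temperature(sat, mach, recovery=0.98), 2), "K")
print("ram rise at 480 kt TAS:", round(ram_rise_from_tas(480.0), 2), "K")
```

For 480 knots this gives a ram rise of roughly 30 K, consistent with the often-quoted rule of thumb that the ram rise in kelvins is close to (TAS in knots / 87)² for a near-ideal probe.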
Total air temperature
[ "Technology", "Engineering" ]
956
[ "Aircraft instruments", "Measuring instruments" ]
3,400,953
https://en.wikipedia.org/wiki/Parametric%20surface
A parametric surface is a surface in the Euclidean space which is defined by a parametric equation with two parameters Parametric representation is a very general way to specify a surface, as well as implicit representation. Surfaces that occur in two of the main theorems of vector calculus, Stokes' theorem and the divergence theorem, are frequently given in a parametric form. The curvature and arc length of curves on the surface, surface area, differential geometric invariants such as the first and second fundamental forms, Gaussian, mean, and principal curvatures can all be computed from a given parametrization. Examples The simplest type of parametric surfaces is given by the graphs of functions of two variables: A rational surface is a surface that admits parameterizations by a rational function. A rational surface is an algebraic surface. Given an algebraic surface, it is commonly easier to decide if it is rational than to compute its rational parameterization, if it exists. Surfaces of revolution give another important class of surfaces that can be easily parametrized. If the graph , is rotated about the z-axis then the resulting surface has a parametrization It may also be parameterized showing that, if the function is rational, then the surface is rational. The straight circular cylinder of radius R about x-axis has the following parametric representation: Using the spherical coordinates, the unit sphere can be parameterized by This parametrization breaks down at the north and south poles where the azimuth angle θ is not determined uniquely. The sphere is a rational surface. The same surface admits many different parametrizations. For example, the coordinate z-plane can be parametrized as for any constants a, b, c, d such that , i.e. the matrix is invertible. Local differential geometry The local shape of a parametric surface can be analyzed by considering the Taylor expansion of the function that parametrizes it. The arc length of a curve on the surface and the surface area can be found using integration. Notation Let the parametric surface be given by the equation where is a vector-valued function of the parameters (u, v) and the parameters vary within a certain domain D in the parametric uv-plane. The first partial derivatives with respect to the parameters are usually denoted and and similarly for the higher derivatives, In vector calculus, the parameters are frequently denoted (s,t) and the partial derivatives are written out using the ∂-notation: Tangent plane and normal vector The parametrization is regular for the given values of the parameters if the vectors are linearly independent. The tangent plane at a regular point is the affine plane in R3 spanned by these vectors and passing through the point r(u, v) on the surface determined by the parameters. Any tangent vector can be uniquely decomposed into a linear combination of and The cross product of these vectors is a normal vector to the tangent plane. Dividing this vector by its length yields a unit normal vector to the parametrized surface at a regular point: In general, there are two choices of the unit normal vector to a surface at a given point, but for a regular parametrized surface, the preceding formula consistently picks one of them, and thus determines an orientation of the surface. Some of the differential-geometric invariants of a surface in R3 are defined by the surface itself and are independent of the orientation, while others change the sign if the orientation is reversed. 
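A brief symbolic sketch (my own, using sympy; the parameter names u and v are mine) of the regularity condition and the unit normal for the spherical parametrization mentioned above:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)  # u: polar angle, v: azimuthal angle

# Unit sphere parametrization r(u, v)
r = sp.Matrix([sp.sin(u) * sp.cos(v), sp.sin(u) * sp.sin(v), sp.cos(u)])

r_u = r.diff(u)  # first partial derivatives spanning the tangent plane
r_v = r.diff(v)

n = r_u.cross(r_v)                      # normal vector to the tangent plane
unit_normal = sp.simplify(n / n.norm()) # unit normal at regular points

# At a regular point (sin(u) != 0) the unit normal is the outward radial direction,
# i.e. r itself; the parametrization degenerates at the poles u = 0 and u = pi,
# where the cross product vanishes, matching the remark in the text.
print(sp.simplify(unit_normal.subs({u: sp.pi/3, v: sp.pi/4})))
```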
Surface area The surface area can be calculated by integrating the length of the normal vector to the surface over the appropriate region D in the parametric uv plane: Although this formula provides a closed expression for the surface area, for all but very special surfaces this results in a complicated double integral, which is typically evaluated using a computer algebra system or approximated numerically. Fortunately, many common surfaces form exceptions, and their areas are explicitly known. This is true for a circular cylinder, sphere, cone, torus, and a few other surfaces of revolution. This can also be expressed as a surface integral over the scalar field 1: First fundamental form The first fundamental form is a quadratic form on the tangent plane to the surface which is used to calculate distances and angles. For a parametrized surface its coefficients can be computed as follows: Arc length of parametrized curves on the surface S, the angle between curves on S, and the surface area all admit expressions in terms of the first fundamental form. If , represents a parametrized curve on this surface then its arc length can be calculated as the integral: The first fundamental form may be viewed as a family of positive definite symmetric bilinear forms on the tangent plane at each point of the surface depending smoothly on the point. This perspective helps one calculate the angle between two curves on S intersecting at a given point. This angle is equal to the angle between the tangent vectors to the curves. The first fundamental form evaluated on this pair of vectors is their dot product, and the angle can be found from the standard formula expressing the cosine of the angle via the dot product. Surface area can be expressed in terms of the first fundamental form as follows: By Lagrange's identity, the expression under the square root is precisely , and so it is strictly positive at the regular points. Second fundamental form The second fundamental form is a quadratic form on the tangent plane to the surface that, together with the first fundamental form, determines the curvatures of curves on the surface. In the special case when and the tangent plane to the surface at the given point is horizontal, the second fundamental form is essentially the quadratic part of the Taylor expansion of z as a function of x and y. For a general parametric surface, the definition is more complicated, but the second fundamental form depends only on the partial derivatives of order one and two. Its coefficients are defined to be the projections of the second partial derivatives of onto the unit normal vector defined by the parametrization: Like the first fundamental form, the second fundamental form may be viewed as a family of symmetric bilinear forms on the tangent plane at each point of the surface depending smoothly on the point. Curvature The first and second fundamental forms of a surface determine its important differential-geometric invariants: the Gaussian curvature, the mean curvature, and the principal curvatures. The principal curvatures are the invariants of the pair consisting of the second and first fundamental forms. They are the roots κ1, κ2 of the quadratic equation The Gaussian curvature K = κ1κ2 and the mean curvature can be computed as follows: Up to a sign, these quantities are independent of the parametrization used, and hence form important tools for analysing the geometry of the surface. 
More precisely, the principal curvatures and the mean curvature change the sign if the orientation of the surface is reversed, and the Gaussian curvature is entirely independent of the parametrization. The sign of the Gaussian curvature at a point determines the shape of the surface near that point: for K > 0 the surface is locally convex and the point is called elliptic, while for K < 0 the surface is saddle shaped and the point is called hyperbolic. The points at which the Gaussian curvature is zero are called parabolic. In general, parabolic points form a curve on the surface called the parabolic line. The first fundamental form is positive definite, hence its determinant is positive everywhere. Therefore, the sign of K coincides with the sign of the determinant of the second fundamental form. The coefficients of the first fundamental form presented above may be organized in a symmetric matrix: And the same for the coefficients of the second fundamental form, also presented above: Defining now the matrix A as the product of the inverse of the first of these matrices with the second, the principal curvatures κ1 and κ2 are the eigenvalues of A. Now, if is the eigenvector of A corresponding to principal curvature κ1, the unit vector in the direction of is called the principal vector corresponding to the principal curvature κ1. Accordingly, if is the eigenvector of A corresponding to principal curvature κ2, the unit vector in the direction of is called the principal vector corresponding to the principal curvature κ2. See also Spline (mathematics) Surface normal References External links Java applets demonstrate the parametrization of a helix surface m-ART(3d) - iPad/iPhone application to generate and visualize parametric surfaces. Surfaces Equations
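The machinery above — first and second fundamental forms, Gaussian and mean curvature — can be exercised symbolically. The sketch below is mine, for a torus with radii R > a (an example of my choosing, not taken from the article); the final simplified forms may still contain absolute values depending on how far sympy's simplifier gets.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
R, a = sp.symbols('R a', positive=True)  # torus radii, with R > a assumed

# Torus parametrization
r = sp.Matrix([(R + a*sp.cos(u))*sp.cos(v), (R + a*sp.cos(u))*sp.sin(v), a*sp.sin(u)])

r_u, r_v = r.diff(u), r.diff(v)
n = r_u.cross(r_v)
n_hat = n / n.norm()  # unit normal (its sign fixes the orientation)

# First fundamental form coefficients
E = sp.simplify(r_u.dot(r_u))
F = sp.simplify(r_u.dot(r_v))
G = sp.simplify(r_v.dot(r_v))

# Second fundamental form coefficients: projections of second derivatives onto n_hat
L = sp.simplify(r.diff(u, 2).dot(n_hat))
M = sp.simplify(r.diff(u).diff(v).dot(n_hat))
N = sp.simplify(r.diff(v, 2).dot(n_hat))

K = sp.simplify((L*N - M**2) / (E*G - F**2))             # Gaussian curvature
H = sp.simplify((E*N - 2*F*M + G*L) / (2*(E*G - F**2)))  # mean curvature

print("E, F, G =", E, F, G)       # expected a**2, 0, (R + a*cos(u))**2
print("K =", K)                   # expected cos(u)/(a*(R + a*cos(u)))
print("H =", H)                   # sign depends on the chosen orientation of n_hat
```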
Parametric surface
[ "Mathematics" ]
1,706
[ "Mathematical objects", "Equations" ]
3,401,787
https://en.wikipedia.org/wiki/Superprism
A superprism is a photonic crystal in which an entering beam of light undergoes extremely large angular dispersion. The ability of the photonic crystal to send optical beams with different wavelengths to considerably different angles in space has been used to demonstrate wavelength demultiplexing in these structures. The first superprism modified the group velocity rather than the phase velocity in order to achieve the "superprism phenomenon". This effect was interpreted as anisotropic dispersion, in contrast to isotropic dispersion. Furthermore, the two beams of light appear to show negative bending within the crystal. See also Superlens Prism (optics) Metamaterial Perfect mirror References Further reading Photonics
Superprism
[ "Physics", "Materials_science" ]
151
[ "Materials science stubs", "Condensed matter stubs", "Condensed matter physics" ]
3,402,426
https://en.wikipedia.org/wiki/H%C3%BCckel%20method
The Hückel method or Hückel molecular orbital theory, proposed by Erich Hückel in 1930, is a simple method for calculating molecular orbitals as linear combinations of atomic orbitals. The theory predicts the molecular orbitals for π-electrons in π-delocalized molecules, such as ethylene, benzene, butadiene, and pyridine. It provides the theoretical basis for Hückel's rule that cyclic, planar molecules or ions with 4n + 2 π-electrons are aromatic. It was later extended to conjugated molecules such as pyridine, pyrrole and furan that contain atoms other than carbon and hydrogen (heteroatoms). A more dramatic extension of the method to include σ-electrons, known as the extended Hückel method (EHM), was developed by Roald Hoffmann. The extended Hückel method gives some degree of quantitative accuracy for organic molecules in general (not just planar systems) and was used to provide computational justification for the Woodward–Hoffmann rules. To distinguish the original approach from Hoffmann's extension, the Hückel method is also known as the simple Hückel method (SHM). Although undeniably a cornerstone of organic chemistry, Hückel's concepts were undeservedly unrecognized for two decades. Pauling and Wheland characterized his approach as "cumbersome" at the time, and their competing resonance theory was easier to understand for chemists without a fundamental physics background, even if they couldn't grasp the concept of quantum superposition and confused it with tautomerism. His lack of communication skills contributed: when Robert Robinson sent him a friendly request, he responded arrogantly that he was not interested in organic chemistry. In spite of its simplicity, the Hückel method in its original form makes qualitatively accurate and chemically useful predictions for many common molecules and is therefore a powerful and widely taught educational tool. It is described in many introductory quantum chemistry and physical organic chemistry textbooks, and organic chemists in particular still routinely apply Hückel theory to obtain a very approximate, back-of-the-envelope understanding of π-bonding. Hückel characteristics The method has several characteristics: It limits itself to conjugated molecules. Only π electron molecular orbitals are included because these determine much of the chemical and spectral properties of these molecules. The σ electrons are assumed to form the framework of the molecule and σ connectivity is used to determine whether two π orbitals interact. However, the orbitals formed by σ electrons are ignored and assumed not to interact with π electrons. This is referred to as σ-π separability. It is justified by the orthogonality of σ and π orbitals in planar molecules. For this reason, the Hückel method is limited to systems that are planar or nearly so. The method is based on applying the variational method to linear combinations of atomic orbitals and making simplifying assumptions regarding the overlap, resonance and Coulomb integrals of these atomic orbitals. It does not attempt to solve the Schrödinger equation, and neither the functional form of the basis atomic orbitals nor details of the Hamiltonian are involved. For hydrocarbons, the method takes atomic connectivity as the only input; empirical parameters are only needed when heteroatoms are introduced. 
The method predicts how many energy levels exist for a given molecule, which levels are degenerate, and it expresses the molecular orbital energies in terms of two parameters, called α, the energy of an electron in a 2p orbital, and β, the interaction energy between two 2p orbitals (the extent to which an electron is stabilized by allowing it to delocalize between two orbitals). The usual sign convention is to let both α and β be negative numbers. To understand and compare systems in a qualitative or even semi-quantitative sense, explicit numerical values for these parameters are typically not required. In addition, the method also enables calculation of charge density for each atom in the π framework, the fractional bond order between any two atoms, and the overall molecular dipole moment. Hückel results Results for simple molecules and general results for cyclic and linear systems The results for a few simple molecules are tabulated below: The theory predicts two energy levels for ethylene with its two π electrons filling the low-energy HOMO and the high energy LUMO remaining empty. In butadiene the 4 π-electrons occupy 2 low energy molecular orbitals, out of a total of 4, and for benzene 6 energy levels are predicted, two of them degenerate. For linear and cyclic systems (with N atoms), general solutions exist: Linear system (polyene/polyenyl): Ek = α + 2β cos(kπ/(N + 1)), for k = 1, 2, ..., N. Energy levels are all distinct. Cyclic system, Hückel topology (annulene/annulenyl): Ek = α + 2β cos(2kπ/N), for k = 0, 1, ..., N − 1. Energy levels are each doubly degenerate. Cyclic system, Möbius topology (hypothetical for N < 8): Ek = α + 2β cos((2k + 1)π/N), for k = 0, 1, ..., N − 1. Energy levels are each doubly degenerate. The energy levels for cyclic systems can be predicted using the Frost circle mnemonic (named after the American chemist Arthur A. Frost). A circle centered at α with radius 2β is inscribed with a regular N-gon with one vertex pointing down; the y-coordinates of the vertices of the polygon then represent the orbital energies of the [N]annulene/annulenyl system. Related mnemonics exist for linear and Möbius systems. The values of α and β The value of α is the energy of an electron in a 2p orbital, relative to an unbound electron at infinity. This quantity is negative, since the electron is stabilized by being electrostatically bound to the positively charged nucleus. For carbon this value is known to be approximately –11.4 eV. Since Hückel theory is generally only interested in energies relative to a reference localized system, the value of α is often immaterial and can be set to zero without affecting any conclusions. Roughly speaking, β physically represents the energy of stabilization experienced by an electron allowed to delocalize in a π molecular orbital formed from the 2p orbitals of adjacent atoms, compared to being localized in an isolated 2p atomic orbital. As such, it is also a negative number, although it is often spoken of in terms of its absolute value. The value for |β| in Hückel theory is roughly constant for structurally similar compounds, but not surprisingly, structurally dissimilar compounds will give very different values for |β|. For example, using the π bond energy of ethylene (65 kcal/mole) and comparing the energy of a doubly-occupied π orbital (2α + 2β) with the energy of electrons in two isolated p orbitals (2α), a value of |β| = 32.5 kcal/mole can be inferred. On the other hand, using the resonance energy of benzene (36 kcal/mole, derived from heats of hydrogenation) and comparing benzene (6α + 8β) with a hypothetical "non-aromatic 1,3,5-cyclohexatriene" (6α + 6β), a much smaller value of |β| = 18 kcal/mole emerges. 
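These closed-form level patterns can be checked against direct diagonalization of the corresponding connectivity (Hückel) matrix. A small sketch of mine, not part of the article, working in units where α = 0 and β = 1 (recall that both are negative in the usual convention, so the ordering of stability is reversed relative to these numbers):

```python
import numpy as np

def huckel_matrix(n: int, cyclic: bool = False) -> np.ndarray:
    """Hückel Hamiltonian for a chain or ring of n carbons, in units alpha = 0, beta = 1."""
    h = np.zeros((n, n))
    for i in range(n - 1):
        h[i, i + 1] = h[i + 1, i] = 1.0
    if cyclic:
        h[0, n - 1] = h[n - 1, 0] = 1.0
    return h

n = 6  # hexatriene-sized chain / benzene-sized ring

# Closed-form energies, expressed as the coefficient of beta relative to alpha
e_chain = sorted(2 * np.cos(np.pi * k / (n + 1)) for k in range(1, n + 1))
e_ring = sorted(2 * np.cos(2 * np.pi * k / n) for k in range(n))

print("chain, formula :", np.round(e_chain, 3))
print("chain, diag    :", np.round(np.linalg.eigvalsh(huckel_matrix(n)), 3))
print("ring,  formula :", np.round(e_ring, 3))
print("ring,  diag    :", np.round(np.linalg.eigvalsh(huckel_matrix(n, cyclic=True)), 3))
```

The two routes agree; for the six-membered ring the printed values reproduce the familiar benzene pattern α + 2β, α + β (twice), α − β (twice), α − 2β.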
These differences are not surprising, given the substantially shorter bond length of ethylene (1.33 Å) compared to benzene (1.40 Å). The shorter distance between the interacting p orbitals accounts for the greater energy of interaction, which is reflected by a higher value of |β|. Nevertheless, heat of hydrogenation measurements of various polycyclic aromatic hydrocarbons like naphthalene and anthracene all imply values of |β| between 17 and 20 kcal/mol. However, even for the same compound, the correct assignment of |β| can be controversial. For instance, it is argued that the resonance energy measured experimentally via heats of hydrogenation is diminished by the distortions in bond lengths that must take place going from the single and double bonds of "non-aromatic 1,3,5-cyclohexatriene" to the delocalized bonds of benzene. Taking this distortion energy into account, the value of |β| for delocalization without geometric change (called the "vertical resonance energy") for benzene is found to be around 37 kcal/mole. On the other hand, experimental measurements of electronic spectra have given a value of |β| (called the "spectroscopic resonance energy") as high as 3 eV (~70 kcal/mole) for benzene. Given these subtleties, qualifications, and ambiguities, Hückel theory should not be called upon to provide accurate quantitative predictions – only semi-quantitative or qualitative trends and comparisons are reliable and robust. Other successful predictions With this caveat in mind, many predictions of the theory have been experimentally verified: The HOMO–LUMO gap, in terms of the β constant, correlates directly with the respective molecular electronic transitions observed with UV/VIS spectroscopy. For linear polyenes, the energy gap is given as: from which a value for β can be obtained between −60 and −70 kcal/mol (−250 to −290 kJ/mol). The predicted molecular orbital energies as stipulated by Koopmans' theorem correlate with photoelectron spectroscopy. The Hückel delocalization energy correlates with the experimental heat of combustion. This energy is defined as the difference between the total predicted π energy (in benzene 8β) and a hypothetical π energy in which all ethylene units are assumed isolated, each contributing 2β (making benzene 3 × 2β = 6β). Molecules with molecular orbitals paired up such that only the sign differs (for example α ± β) are called alternant hydrocarbons and have in common small molecular dipole moments. This is in contrast to non-alternant hydrocarbons, such as azulene and fulvene that have large dipole moments. The Hückel theory is more accurate for alternant hydrocarbons. For cyclobutadiene the theory predicts that the two high-energy electrons occupy a degenerate pair of molecular orbitals (following from Hund's rules) that are neither stabilized nor destabilized. Hence the square molecule would be a very reactive triplet diradical (the ground state is actually rectangular without degenerate orbitals). In fact, all cyclic conjugated hydrocarbons with a total of 4n π-electrons share this molecular orbital pattern, and this forms the basis of Hückel's rule. Dewar reactivity numbers deriving from the Hückel approach correctly predict the reactivity of aromatic systems with nucleophiles and electrophiles. The benzyl cation and anion serve as simple models for arenes with electron-withdrawing and electron-donating groups, respectively. 
The π-electron population correctly implies the meta- and ortho-/para-selectivity for electrophilic aromatic substitution of π electron-poor and π electron-rich arenes, respectively. Application in optical activity analysis The analysis of the optical activity of a molecule depends to a certain extent on the study of its chiral characteristics. However, for achiral molecules applying pesudoscalars to simplify the calculations of optical activity cannot be achieved due to the lack of spatial average. Instead of traditional chiroptical solution measurements, Hückel theory helps focus on oriented π systems by separating from σ electrons especially in the planar, -symmetric cases. Transition dipole moments derived by multiplying each wavefunction of individual planar molecule one by one, contribute to the directions of the most optical activity, where sit at the bisectors of two orthogonal ones. Despite the zero value for the trace of the tensor, cis-butadiene shows considerable off diagonal component which was computed as the first optical activity evaluation of achiral molecule. Consider 3,5-dimethylene-1-cyclopentene as an example. Transition electric dipole, magnetic dipole and electric quadrupole moments interactions result in optical rotation(OR), which can be described by both tensor components and chemical geometries. The in phase overlap of two molecular orbitals yield negative charge while depleting charge out of phase. The movement can be interpreted quantitatively by corresponding π and π* orbitals coefficients. Delocalization energy, π-bond orders, and π-electron populations The delocalization energy, π-bond orders, and π-electron population are chemically significant parameters that can be gleaned from the orbital energies and coefficients that are the direct outputs of Hückel theory. These are quantities strictly derived from theory, as opposed to measurable physical properties, though they correlate with measurable qualitative and quantitative properties of the chemical species. Delocalization energy is defined as the difference in energy between that of the most stable localized Lewis structure and the energy of the molecule computed from Hückel theory orbital energies and occupancies. Since all energies are relative, we set without loss of generality to simplify discussion. The energy of the localized structure is then set to be 2β for every two-electron localized π-bond. The Hückel energy of the molecule is , where the sum is over all Hückel orbitals, is the occupancy of orbital i, set to be 2 for doubly-occupied orbitals, 1 for singly-occupied orbitals, and 0 for unoccupied orbitals, and is the energy of orbital i. Thus, the delocalization energy, conventionally a positive number, is defined as . In the case of benzene, the occupied orbitals have energies (again setting ) 2β, β, and β. This gives the Hückel energy of benzene as . Each Kekulé structure of benzene has three double bonds, so the localized structure is assigned an energy of . The delocalization energy, measured in units of , is then . The π-bond orders derived from Hückel theory are defined using the orbital coefficients of the Hückel MOs. The π-bond order between atoms j and k is defined as , where is again the orbital occupancy of orbital i and and are the coefficients on atoms j and k, respectively, for orbital i. For benzene, the three occupied MOs, expressed as linear combinations of AOs , are: , []; , []; , []. 
Perhaps surprisingly, the π-bond order formula gives a bond order of for the bond between carbons 1 and 2. The resulting total (σ + π) bond order of is the same between any other pair of adjacent carbon atoms. This is more than the naive π-bond order of (for a total bond order of ) that one might guess when simply considering the Kekulé structures and the usual definition of bond order in valence bond theory. The Hückel definition of bond order attempts to quantify any additional stabilization that the system enjoys resulting from delocalization. In a sense, the Hückel bond order suggests that there are four π-bonds in benzene instead of the three that are implied by the Kekulé-type Lewis structures. The "extra" bond is attributed to the additional stabilization that results from the aromaticity of the benzene molecule. (This is only one of several definitions for non-integral bond orders, and other definitions will lead to different values that fall between 1 and 2.) The π-electron population is calculated in a very similar way to the bond order using the orbital coefficients of the Hückel MOs. The π-electron population on atom j is defined as . The associated Hückel Coulomb charge is defined as , where is the number of π-electrons contributed by a neutral, sp2-hybridized atom j (we always have for carbon). For carbon 1 on benzene, this yields a π-electron population of . Since each carbon atom contributes one π-electron to the molecule, this gives a Coulomb charge of 0 for carbon 1 (and all other carbon atoms), as expected. In the cases of benzyl cation and benzyl anion shown above, and , and . Mathematics behind the Hückel method The mathematics of the Hückel method is based on the Ritz method. In short, given a basis set of n normalized atomic orbitals , an ansatz molecular orbital is written down, with normalization constant N and coefficients which are to be determined. In other words, we are assuming that the molecular orbital (MO) can be written as a linear combination of atomic orbitals, a conceptually intuitive and convenient approximation (the linear combination of atomic orbitals or LCAO approximation). The variational theorem states that given an eigenvalue problem with smallest eigenvalue and corresponding wavefunction , any normalized trial wavefunction (i.e., holds) will satisfy , with equality holding if and only if . Thus, by minimizing with respect to coefficients for normalized trial wavefunctions , we obtain a closer approximation of the true ground-state wavefunction and its energy. To start, we apply the normalization condition to the ansatz and expand to get an expression for N in terms of the . Then, we substitute the ansatz into the expression for E and expand, yielding , where , , and . In the remainder of the derivation, we will assume that the atomic orbitals are real. (For the simple case of the Hückel theory, they will be the 2pz orbitals on carbon.) Thus, , and because the Hamiltonian operator is hermitian, . Setting for to minimize E and collecting terms, we obtain a system of n simultaneous equations . When , and are called the overlap and resonance (or exchange) integrals, respectively, while is called the Coulomb integral, and simply expresses the fact that the are normalized. The n × n matrices and are known as the overlap and Hamiltonian matrices, respectively. By a well-known result from linear algebra, nontrivial solutions to the above system of linear equations can only exist if the coefficient matrix is singular. 
Hence, must have a value such that the determinant of the coefficient matrix vanishes: . (*) This determinant expression is known as the secular determinant and gives rise to a generalized eigenvalue problem. The variational theorem guarantees that the lowest value of that gives rise to a nontrivial (that is, not all zero) solution vector represents the best LCAO approximation of the energy of the most stable π orbital; higher values of with nontrivial solution vectors represent reasonable estimates of the energies of the remaining π orbitals. The Hückel method makes a few further simplifying assumptions concerning the values of the and . In particular, it is first assumed that distinct have zero overlap. Together with the assumption that are normalized, this means that the overlap matrix is the n × n identity matrix: . Solving for E in (*) then reduces to finding the eigenvalues of the Hamiltonian matrix. Second, in the simplest case of a planar, unsaturated hydrocarbon, the Hamiltonian matrix is parameterized in the following way: (**) To summarize, we are assuming that: (1) the energy of an electron in an isolated C(2pz) orbital is ; (2) the energy of interaction between C(2pz) orbitals on adjacent carbons i and j (i.e., i and j are connected by a σ-bond) is ; (3) orbitals on carbons not joined in this way are assumed not to interact, so for nonadjacent i and j; and, as mentioned above, (4) the spatial overlap of electron density between different orbitals, represented by non-diagonal elements of the overlap matrix, is ignored by setting , even when the orbitals are adjacent. This neglect of orbital overlap is an especially severe approximation. In actuality, orbital overlap is a prerequisite for orbital interaction, and it is impossible to have while . For typical bond distances (1.40 Å) as might be found in benzene, for example, the true value of the overlap for C(2pz) orbitals on adjacent atoms i and j is about ; even larger values are found when the bond distance is shorter (e.g., ethylene). A major consequence of having nonzero overlap integrals is the fact that, compared to non-interacting isolated orbitals, bonding orbitals are not energetically stabilized by nearly as much as antibonding orbitals are destabilized. The orbital energies derived from the Hückel treatment do not account for this asymmetry (see Hückel solution for ethylene (below) for details). The eigenvalues of are the Hückel molecular orbital energies , expressed in terms of and , while the eigenvectors are the Hückel MOs , expressed as linear combinations of the atomic orbitals . Using the expression for the normalization constant N and the fact that , we can find the normalized MOs by incorporating the additional condition . The Hückel MOs are thus uniquely determined when eigenvalues are all distinct. When an eigenvalue is degenerate (two or more of the are equal), the eigenspace corresponding to the degenerate energy level has dimension greater than 1, and the normalized MOs at that energy level are then not uniquely determined. When that happens, further assumptions pertaining to the coefficients of the degenerate orbitals (usually ones that make the MOs orthogonal and mathematically convenient) have to be made in order to generate a concrete set of molecular orbital functions. If the substance is a planar, unsaturated hydrocarbon, the coefficients of the MOs can be found without appeal to empirical parameters, while orbital energies are given in terms of only and . 
On the other hand, for systems containing heteroatoms, such as pyridine or formaldehyde, values of correction constants and have to be specified for the atoms and bonds in question, and and in (**) are replaced by and , respectively. Hückel solution for ethylene in detail In the Hückel treatment for ethylene, we write the Hückel MOs as a linear combination of the atomic orbitals (2p orbitals) on each of the carbon atoms: . Applying the result obtained by the Ritz method, we have the system of equations , where: and . (Since 2pz atomic orbital can be expressed as a pure real function, the * representing complex conjugation can be dropped.) The Hückel method assumes that all overlap integrals (including the normalization integrals) equal the Kronecker delta, , all Coulomb integrals are equal, and the resonance integral is nonzero when the atoms i and j are bonded. Using the standard Hückel variable names, we set , , , and . The Hamiltonian matrix is . The matrix equation that needs to be solved is then , or, dividing by , . Setting , we obtain . (***) This homogeneous system of equations has nontrivial solutions for (solutions besides the physically meaningless ) iff the matrix is singular and the determinant is zero: . Solving for , , or . Since , the energy levels are , or . The coefficients can then be found by expanding (***): and . Since the matrix is singular, the two equations are linearly dependent, and the solution set is not uniquely determined until we apply the normalization condition. We can only solve for in terms of : , or . After normalization with , the numerical values of and can be found: and . Finally, the Hückel molecular orbitals are . The constant β in the energy term is negative; therefore, with is the lower energy corresponding to the HOMO energy and with is the LUMO energy. If, contrary to the Hückel treatment, a positive value for were included, the energies would instead be , while the corresponding orbitals would take the form . An important consequence of setting is that the bonding (in-phase) combination is always stabilized to a lesser extent than the antibonding (out-of-phase) combination is destabilized, relative to the energy of the free 2p orbital. Thus, in general, 2-center 4-electron interactions, where both the bonding and antibonding orbitals are occupied, are destabilizing overall. This asymmetry is ignored by Hückel theory. In general, for the orbital energies derived from Hückel theory, the sum of stabilization energies for the bonding orbitals is equal to the sum of destabilization energies for the antibonding orbitals, as in the simplest case of ethylene shown here and the case of butadiene shown below. Hückel solution for 1,3-butadiene The Hückel MO theory treatment of 1,3-butadiene is largely analogous to the treatment of ethylene, shown in detail above, though we must now find the eigenvalues and eigenvectors of a 4 × 4 Hamiltonian matrix. We first write the molecular orbital as a linear combination of the four atomic orbitals (carbon 2p orbitals) with coefficients : . The Hamiltonian matrix is . In the same way, we write the secular equations in matrix form as , which leads to and , or approximately, , where 1.618... and 0.618... are the golden ratios and . The orbitals are given by , , , and . See also Möbius–Hückel concept Möbius aromaticity Tight Binding External links "Hückel method" at chem.swin.edu.au, webpage: mod3-huckel. Rauk, Arvi. SHMO, Simple Hückel Molecular Orbital Theory Calculator. Java Applet (downloadable) . 
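As a numerical companion to the worked ethylene and butadiene solutions, the sketch below (my own illustration, not from the article) diagonalizes the butadiene Hückel matrix, recovers the golden-ratio orbital energies quoted above, and evaluates π-bond orders and π-electron populations from the occupied orbitals:

```python
import numpy as np

# Butadiene Hückel matrix in units alpha = 0, beta = 1 (adjacency matrix of the C4 chain)
H = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])

evals, evecs = np.linalg.eigh(H)  # ascending order of the coefficient of beta
# Since beta < 0, the largest coefficients of beta correspond to the most stable orbitals.
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

print("orbital energies alpha + x*beta, x =", np.round(evals, 4))
# expected x = +1.618, +0.618, -0.618, -1.618 (golden ratios)

occ = [2, 2, 0, 0]  # the 4 pi electrons fill the two bonding MOs

# pi-bond order p_jk = sum_i n_i * c_ij * c_ik (diagonal entries are pi-electron populations)
P = sum(n_i * np.outer(evecs[:, i], evecs[:, i]) for i, n_i in enumerate(occ))
print("p12 =", round(P[0, 1], 3), " p23 =", round(P[1, 2], 3))
# expected roughly 0.894 and 0.447: the terminal bonds have more double-bond character
print("pi-electron populations:", np.round(np.diag(P), 3))  # all 1 for neutral butadiene
```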
Further reading The HMO-Model and its applications: Basis and Manipulation, E. Heilbronner and H. Bock, English translation, 1976, Verlag Chemie. The HMO-Model and its applications: Problems with Solutions, E. Heilbronner and H. Bock, English translation, 1976, Verlag Chemie. The HMO-Model and its applications: Tables of Hückel Molecular Orbitals, E. Heilbronner and H. Bock, English translation, 1976, Verlag Chemie. References Molecular physics Semiempirical quantum chemistry methods
Hückel method
[ "Physics", "Chemistry" ]
5,553
[ "Quantum chemistry", "Molecular physics", "Computational chemistry", " molecular", "nan", "Atomic", "Semiempirical quantum chemistry methods", " and optical physics" ]
3,406,142
https://en.wikipedia.org/wiki/Relativity%20of%20simultaneity
In physics, the relativity of simultaneity is the concept that distant simultaneity – whether two spatially separated events occur at the same time – is not absolute, but depends on the observer's reference frame. This possibility was raised by mathematician Henri Poincaré in 1900, and thereafter became a central idea in the special theory of relativity. Description According to the special theory of relativity introduced by Albert Einstein, it is impossible to say in an absolute sense that two distinct events occur at the same time if those events are separated in space. If one reference frame assigns precisely the same time to two events that are at different points in space, a reference frame that is moving relative to the first will generally assign different times to the two events (the only exception being when motion is exactly perpendicular to the line connecting the locations of both events). For example, a car crash in London and another in New York that appear to happen at the same time to an observer on Earth will appear to have occurred at slightly different times to an observer on an airplane flying between London and New York. Furthermore, if the two events cannot be causally connected, depending on the state of motion, the crash in London may appear to occur first in a given frame, and the New York crash may appear to occur first in another. However, if the events can be causally connected, precedence order is preserved in all frames of reference. History In 1892 and 1895, Hendrik Lorentz used a mathematical method called "local time" t' = t – v x/c2 for explaining the negative aether drift experiments. However, Lorentz gave no physical explanation of this effect. This was done by Henri Poincaré who already emphasized in 1898 the conventional nature of simultaneity and who argued that it is convenient to postulate the constancy of the speed of light in all directions. However, this paper did not contain any discussion of Lorentz's theory or the possible difference in defining simultaneity for observers in different states of motion. This was done in 1900, when Poincaré derived local time by assuming that the speed of light is invariant within the aether. Due to the "principle of relative motion", moving observers within the aether also assume that they are at rest and that the speed of light is constant in all directions (only to first order in v/c). Therefore, if they synchronize their clocks by using light signals, they will only consider the transit time for the signals, but not their motion in respect to the aether. So the moving clocks are not synchronous and do not indicate the "true" time. Poincaré calculated that this synchronization error corresponds to Lorentz's local time. In 1904, Poincaré emphasized the connection between the principle of relativity, "local time", and light speed invariance; however, the reasoning in that paper was presented in a qualitative and conjectural manner. Albert Einstein used a similar method in 1905 to derive the time transformation for all orders in v/c, i.e., the complete Lorentz transformation. Poincaré obtained the full transformation earlier in 1905 but in the papers of that year he did not mention his synchronization procedure. This derivation was completely based on light speed invariance and the relativity principle, so Einstein noted that for the electrodynamics of moving bodies the aether is superfluous. 
Thus, the separation into "true" and "local" times of Lorentz and Poincaré vanishes – all times are equally valid and therefore the relativity of length and time is a natural consequence. In 1908, Hermann Minkowski introduced the concept of a world line of a particle in his model of the cosmos called Minkowski space. In Minkowski's view, the naïve notion of velocity is replaced with rapidity, and the ordinary sense of simultaneity becomes dependent on hyperbolic orthogonality of spatial directions to the worldline associated to the rapidity. Then every inertial frame of reference has a rapidity and a simultaneous hyperplane. In 1990, Robert Goldblatt wrote Orthogonality and Spacetime Geometry, directly addressing the structure Minkowski had put in place for simultaneity. In 2006, Max Jammer, through Project MUSE, published Concepts of Simultaneity: from antiquity to Einstein and beyond. The book culminates in chapter 6, "The transition to the relativistic conception of simultaneity". Jammer indicates that Ernst Mach demythologized the absolute time of Newtonian physics. Naturally the mathematical notions preceded physical interpretation. For instance, conjugate diameters of conjugate hyperbolas are related as space and time. The principle of relativity can be expressed as the arbitrariness of which pair are taken to represent space and time in a plane. Thought experiments Einstein's train Einstein's version of the experiment presumed that one observer was sitting midway inside a speeding traincar and another was standing on a platform as the train moved past. As measured by the standing observer, the train is struck by two bolts of lightning simultaneously, but at different positions along the axis of train movement (back and front of the train car). In the inertial frame of the standing observer, there are three events which are spatially dislocated, but simultaneous: standing observer facing the moving observer (i.e., the center of the train), lightning striking the front of the train car, and lightning striking the back of the car. Since the events are placed along the axis of train movement, their time coordinates become projected to different time coordinates in the moving train's inertial frame. Events which occurred at space coordinates in the direction of train movement happen earlier than events at coordinates opposite to the direction of train movement. In the moving train's inertial frame, this means that lightning will strike the front of the train car before the two observers align (face each other). The train-and-platform A popular picture for understanding this idea is provided by a thought experiment similar to those suggested by Daniel Frost Comstock in 1910 and Einstein in 1917. It also consists of one observer midway inside a speeding traincar and another observer standing on a platform as the train moves past. A flash of light is given off at the center of the traincar just as the two observers pass each other. For the observer on board the train, the front and back of the traincar are at fixed distances from the light source and as such, according to this observer, the light will reach the front and back of the traincar at the same time. For the observer standing on the platform, on the other hand, the rear of the traincar is moving (catching up) toward the point at which the flash was given off, and the front of the traincar is moving away from it. 
Spacetime diagrams
It may be helpful to visualize this situation using spacetime diagrams. For a given observer, the t-axis is defined as the set of points traced out in time by the origin of the spatial coordinate x, and is drawn vertically. The x-axis is defined as the set of all points in space at the time t = 0, and is drawn horizontally. The statement that the speed of light is the same for all observers is represented by drawing a light ray as a 45° line, regardless of the speed of the source relative to the observer.

In the first diagram, the two ends of the train are drawn as grey lines. Because the ends of the train are stationary with respect to the observer on the train, these lines are vertical, showing motion through time but not through space. The flash of light is shown as 45° red lines. The points at which the two light flashes hit the ends of the train lie at the same level in the diagram, meaning that the events are simultaneous.

In the second diagram, the two ends of the train, now moving to the right, are shown by slanted parallel lines. The flash of light is given off at a point exactly halfway between the two ends of the train, and its rays again form two 45° lines, expressing the constancy of the speed of light. In this picture, however, the points at which the light flashes hit the ends of the train are not at the same level; they are not simultaneous.

Lorentz transformation
The relativity of simultaneity can be demonstrated using the Lorentz transformation, which relates the coordinates used by one observer to the coordinates used by another in uniform relative motion with respect to the first. Assume that the first observer uses coordinates labeled t, x, y, and z, while the second observer uses coordinates labeled t′, x′, y′, and z′. Now suppose that the first observer sees the second observer moving in the x-direction at a velocity v, and suppose that the observers' coordinate axes are parallel and have the same origin. Then the Lorentz transformation expresses how the coordinates are related:

t′ = γ(t − vx/c²)
x′ = γ(x − vt)
y′ = y
z′ = z

where γ = 1/√(1 − v²/c²) and c is the speed of light.

If two events happen at the same time in the frame of the first observer, they will have identical values of the t-coordinate. However, if they have different values of the x-coordinate (different positions in the x-direction), they will have different values of the t′ coordinate, so they will happen at different times in that frame. The term that accounts for the failure of absolute simultaneity is vx/c².

The equation t′ = constant defines a "line of simultaneity" in the (x′, t′) coordinate system for the second (moving) observer, just as the equation t = constant defines the "line of simultaneity" for the first (stationary) observer in the (x, t) coordinate system. From the above equations it can be seen that t′ is constant if and only if t − vx/c² is constant. Thus the set of points for which t is constant is different from the set of points for which t′ is constant; that is, the set of events regarded as simultaneous depends on the frame of reference used to make the comparison.
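As a concrete check, the short Python sketch below (written for this discussion; the helper function name and the numerical values are illustrative assumptions, not part of the article) applies the transformation above to two events that are simultaneous for the first observer but occur at different positions along x.

import math

def lorentz(t, x, v, c=1.0):
    # Transform an event (t, x) into the frame moving at velocity +v.
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c ** 2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

v = 0.25                      # relative speed, in units where c = 1
events = {"A": (0.0, 0.0),    # (t, x): at the common origin
          "B": (0.0, 1.0)}    # simultaneous with A, one unit away along x

for name, (t, x) in events.items():
    t_p, x_p = lorentz(t, x, v)
    print(f"event {name}: (t, x) = ({t}, {x})  ->  (t', x') = ({t_p:+.4f}, {x_p:+.4f})")

Both events share t = 0, yet the moving observer assigns them times that differ by γvΔx/c² ≈ 0.258 in these units, with the event farther along the direction of motion occurring earlier; this is precisely the vx/c² effect described above.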
Graphically, this can be represented on a spacetime diagram by the fact that a plot of the set of points regarded as simultaneous generates a line which depends on the observer. In the spacetime diagram, the dashed line represents the set of points considered simultaneous with the origin by an observer moving with a velocity v of one-quarter of the speed of light, and the dotted horizontal line represents the set of points regarded as simultaneous with the origin by a stationary observer. The diagram is drawn using the (x, t) coordinates of the stationary observer and is scaled so that the speed of light is one, i.e., so that a ray of light is represented by a line making a 45° angle with the x-axis. From the previous analysis, given that v = 0.25 and c = 1, the equation of the dashed line of simultaneity is t − 0.25x = 0, and with v = 0 the equation of the dotted line of simultaneity is t = 0.

In general, the second observer traces out a worldline in the spacetime of the first observer described by t = x/v, and the set of simultaneous events for the second observer (at the origin) is described by the line t = vx. Note that the slopes of the worldline and of the line of simultaneous events are multiplicative inverses of each other, in accord with the principle of hyperbolic orthogonality.

Accelerated observers
The Lorentz-transform calculation above uses a definition of extended simultaneity (i.e., of when and where events occur at which you were not present) that might be referred to as the co-moving or "tangent free-float-frame" definition. This definition is naturally extrapolated to events in gravitationally curved spacetimes, and to accelerated observers, through use of a radar-time/distance definition that (unlike the tangent free-float-frame definition for accelerated frames) assigns a unique time and position to every event.

The radar-time definition of extended simultaneity further facilitates visualization of the way that acceleration curves spacetime for travelers in the absence of any gravitating objects. This is illustrated in the figure at right, which shows radar time/position isocontours for events in flat spacetime as experienced by a traveler (red trajectory) taking a constant proper-acceleration roundtrip. One caveat of this approach is that the time and place of remote events are not fully defined until light from such an event is able to reach the traveler.

See also
Andromeda paradox
Causal structure
Einstein's thought experiments
Ehrenfest's paradox
Einstein synchronisation